1.add scripts for ascend-npu, 2.bugfix for z-image in ascend-npu #1048

Merged
helloyongyang merged 1 commit into main from ascend_test on May 6, 2026

Conversation

@Watebear (Collaborator) commented May 2, 2026

No description provided.

@gemini-code-assist (Bot, Contributor) left a comment

Code Review

This pull request adds Ascend NPU support for several models by introducing platform-specific configuration files and inference scripts, along with a scheduler workaround for NPU indexing limitations. The review feedback below covers three points: using the NPU-optimized attention kernel, correcting a script reference to a configuration file missing from this PR, and improving script portability by replacing hardcoded or empty local paths with placeholders.

"aspect_ratio": "16:9",
"prompt_template_encode": "<|im_start|>system\nDescribe the image by detailing the color, shape, size, texture, quantity, text, spatial relationships of the objects and background:<|im_end|>\n<|im_start|>user\n{}<|im_end|>\n<|im_start|>assistant\n",
"prompt_template_encode_start_idx": 34,
"attn_type": "flash_attn3",

Severity: high

The attn_type is set to flash_attn3, which is optimized for NVIDIA Hopper GPUs. On Ascend NPU platforms, use npu_flash_attn instead so the platform-specific optimized kernels are used.

Suggested change
"attn_type": "flash_attn3",
"attn_type": "npu_flash_attn",

--model_cls wan2.2_moe \
--task t2v \
--model_path $model_path \
--config_json ${lightx2v_path}/configs/platforms/ascend_npu/wan_moe_t2v.json \

Severity: high

The script references wan_moe_t2v.json, but that file is not included in this pull request. Based on the files that were added, you may have intended to reference wan_ti2v_t2v.json, or the MoE-specific configuration file may have been left out.

Suggested change
--config_json ${lightx2v_path}/configs/platforms/ascend_npu/wan_moe_t2v.json \
--config_json ${lightx2v_path}/configs/platforms/ascend_npu/wan_ti2v_t2v.json \

Comment on lines +4 to +5
lightx2v_path=
model_path=

Severity: medium

The variables lightx2v_path and model_path are left empty. These are environment-specific, but providing a placeholder or a default relative path (as other scripts in this PR do) would make the script usable out of the box.
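One way to implement the reviewer's suggestion is the standard shell default-expansion pattern: honor an environment override if the user sets one, otherwise fall back to a default, and fail fast if the path does not exist. This is a sketch only; the environment variable names `LIGHTX2V_PATH` and `MODEL_PATH` are hypothetical and not part of this PR.

```shell
#!/bin/bash
# Sketch of a placeholder pattern for the script header (assumed
# variable names LIGHTX2V_PATH / MODEL_PATH; adjust to taste).
# ${VAR:-default} uses $VAR if set and non-empty, else the default.
lightx2v_path=${LIGHTX2V_PATH:-$(pwd)}
model_path=${MODEL_PATH:-/path/to/your/model}

# Fail fast with a clear message instead of a cryptic downstream error.
if [ ! -d "$lightx2v_path" ]; then
    echo "Error: lightx2v_path '$lightx2v_path' not found" >&2
    exit 1
fi

echo "lightx2v_path=$lightx2v_path"
echo "model_path=$model_path"
```

Users can then run the script as-is for a quick smoke test, or point it at their own checkout and model with `LIGHTX2V_PATH=... MODEL_PATH=... bash script.sh`.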

Comment on lines +4 to +5
lightx2v_path=/data/nvme1/yongyang/ddc/yong/LightX2V
model_path=/data/nvme1/models/Qwen/Qwen-Image-2512

Severity: medium

These paths are specific to one local environment (/data/nvme1/yongyang/...). Use generic placeholders or relative paths instead, so the script is portable across systems.

@helloyongyang merged commit 3e69ba2 into main on May 6, 2026 (2 checks passed)
@helloyongyang deleted the ascend_test branch on May 6, 2026 at 03:38