
MoE prefill bf16 perf improvement for qwen-3.5-35B-A3B #18829

Draft
digantdesai wants to merge 5 commits into main from digantdesai/qwen35_moe

Conversation


digantdesai (Contributor) commented on Apr 11, 2026

|  | Baseline | Batched | Speedup |
| --- | --- | --- | --- |
| Prefill (1341 tok) | 588 tok/s | 1807 tok/s | 3.07x |
| Decode (128 tok) | 90 tok/s | 86 tok/s | ~1.0x |

Inductor emits aten::sort.stable for ops like argsort, but AOTI lacks a
native C shim for it. This adds a thrust-based implementation
(aoti_torch_cuda_sort_stable) that handles int64, int32, and float32
dtypes on contiguous innermost-dim tensors, and registers it as a
supported fallback kernel in CudaBackend so AOTI-compiled models can use sort.

This PR was authored with the assistance of Claude.
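
A minimal sketch of the gap this shim closes (illustrative code, not the PR's): MoE routing sorts flattened token-expert assignments so tokens bound for the same expert become contiguous, and `torch.argsort` decomposes to `aten::sort.stable` under Inductor, which previously had no AOTI CUDA fallback.

```python
import torch

# Illustrative sketch, not the PR's code: sort token slots by expert id.
# torch.argsort lowers to aten::sort.stable in Inductor, i.e. the op the
# new aoti_torch_cuda_sort_stable shim implements.
def sort_tokens_by_expert(topk_ids: torch.Tensor):
    flat = topk_ids.reshape(-1)              # [num_tokens * top_k] expert ids
    order = torch.argsort(flat, stable=True)  # stable keeps token order per expert
    return flat[order], order                 # sorted expert ids, gather order
```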

pytorch-bot bot commented Apr 11, 2026

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/18829

Note: Links to docs will display an error until the docs builds have been completed.

❗ 1 Active SEV

There is 1 currently active SEV. If your PR is affected, please view it below.

❌ 11 New Failures, 1 Cancelled Job, 2 Pending, 2 Unrelated Failures

As of commit 63548f5 with merge base 266ff2d.


👉 Rebase onto the `viable/strict` branch to avoid these failures

This comment was automatically generated by Dr. CI and updates every 15 minutes.

meta-cla bot added the CLA Signed label on Apr 11, 2026 (this label is managed by the Facebook bot; authors need to sign the CLA before a PR can be reviewed).
Sweeps prompt lengths [1..4095] with Qwen3.5-35B-A3B shapes (256 experts,
top-8, INT4 W4A16). Validates correctness against loop-based eager reference
at small M, benchmarks vectorized eager, torch.compile, and Triton fused_moe.
Handles OOM gracefully at large M where eager/compile dequantize all experts.

This PR was authored with the assistance of Claude.
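
For context, a loop-based eager reference along these lines can serve as the correctness oracle at small M (a hedged sketch: shapes and gating are simplified relative to the actual harness, and dense weights stand in for the dequantized INT4 path).

```python
import torch
import torch.nn.functional as F

# Hedged sketch of a loop-based eager MoE reference for small M; the real
# harness validates the dequantized INT4 path and Qwen's gate/up MLP.
def moe_reference(x, w1, w2, topk_ids, topk_weights):
    # x: [M, K]; w1: [E, N, K]; w2: [E, K, N]; topk_*: [M, top_k]
    out = torch.zeros_like(x)
    for m in range(x.shape[0]):                  # loop over tokens
        for slot in range(topk_ids.shape[1]):    # loop over selected experts
            e = int(topk_ids[m, slot])
            h = F.silu(w1[e] @ x[m])             # gating simplified here
            out[m] += topk_weights[m, slot] * (w2[e] @ h)
    return out
```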
github-actions bot commented:

This PR needs a release notes: label

If your change should be included in the release notes (i.e. would users of this library care about this change?), please use a label starting with release notes:. This helps us keep track and include your important work in the next release notes.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "release notes: none"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

When the Triton K-tile fits within a single quantization group, load one
scale per output column (N) instead of one per (K, N) element. This
reduces scale memory traffic in both the GEMM1 and GEMM2 vec-mat kernels.

This PR was authored with the assistance of Claude.
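
A hedged Triton sketch of the idea (kernel and argument names are illustrative, not the PR's): when `BLOCK_K <= GROUP_SIZE`, the entire K-tile sits inside one quantization group, so a single `[BLOCK_N]` vector of scales replaces a `[BLOCK_K, BLOCK_N]` scale load in the inner loop.

```python
import triton
import triton.language as tl

# Illustrative tile dequant assuming BLOCK_K <= GROUP_SIZE: one scale per
# output column covers the whole K-tile.
@triton.jit
def dequant_tile(q_ptr, scale_ptr, out_ptr, k0, N,
                 GROUP_SIZE: tl.constexpr,
                 BLOCK_K: tl.constexpr, BLOCK_N: tl.constexpr):
    offs_k = tl.arange(0, BLOCK_K)
    offs_n = tl.arange(0, BLOCK_N)
    q = tl.load(q_ptr + offs_k[:, None] * N + offs_n[None, :])
    g = k0 // GROUP_SIZE                      # one group spans the K-tile
    s = tl.load(scale_ptr + g * N + offs_n)   # [BLOCK_N]: one per column
    out = q.to(tl.float32) * s[None, :]       # broadcast across K
    tl.store(out_ptr + offs_k[:, None] * N + offs_n[None, :], out)
```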
Adds a batched (M>1) Triton fused MoE kernel using tensor-core mma
instructions for prefill workloads. Includes moe_align_block_size for
token-expert sorting and scale broadcast optimization in the batched
GEMM inner loops.

Weight layout: [E, N, K//2] (packed INT4).

This PR was authored with the assistance of Claude.
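
The alignment step can be sketched in eager torch as follows (a simplified, hedged version; the actual moe_align_block_size runs on device and also emits padding sentinels and per-block expert ids).

```python
import torch

# Hedged eager sketch: group token slots by expert and pad each expert's
# run to a BLOCK_M multiple so every batched-GEMM tile reads tokens of
# exactly one expert. Details differ from the on-device kernel.
def align_tokens(topk_ids: torch.Tensor, block_m: int, num_experts: int):
    flat = topk_ids.reshape(-1)                        # expert id per slot
    expert_sorted, slot_order = torch.sort(flat, stable=True)
    counts = torch.bincount(flat, minlength=num_experts)
    padded = (counts + block_m - 1) // block_m * block_m
    return slot_order, expert_sorted, int(padded.sum())
```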
Add use_batched_moe flag on FusedMoEExperts, toggled by _set_batched_moe
in export.py before each method's torch.export call. Decode (T=1) uses
the vec-mat fused_moe kernel; prefill (T>=2) uses fused_moe_batched_gemm.

This PR was authored with the assistance of Claude.
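
A hedged sketch of the toggle (the flag and function names come from the commit message; the bodies, the model handle, and the example inputs are illustrative assumptions):

```python
import torch

# Flip the kernel choice on every FusedMoEExperts module before exporting
# a method: decode (T == 1) keeps the vec-mat kernel, prefill (T >= 2)
# takes the batched tensor-core GEMM path.
def _set_batched_moe(model: torch.nn.Module, enabled: bool) -> None:
    for m in model.modules():
        if type(m).__name__ == "FusedMoEExperts":
            m.use_batched_moe = enabled

# Illustrative export flow; `model`, `decode_tokens`, `prefill_tokens`
# are assumed to exist in the export script.
_set_batched_moe(model, False)                        # decode method
decode_ep = torch.export.export(model, (decode_tokens,))
_set_batched_moe(model, True)                         # prefill method
prefill_ep = torch.export.export(model, (prefill_tokens,))
```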
digantdesai force-pushed the digantdesai/qwen35_moe branch from a0d199a to 63548f5 on April 13, 2026 at 15:15.
digantdesai changed the title from "Add CUDA sort shim for AOTI export (thrust-based sort_stable fallback)" to "[AOTI-CUDA] MoE prefill bf16 perf improvement for qwen-3.5-35B-A3B" on Apr 13, 2026.
digantdesai changed the title from "[AOTI-CUDA] MoE prefill bf16 perf improvement for qwen-3.5-35B-A3B" to "MoE prefill bf16 perf improvement for qwen-3.5-35B-A3B" on Apr 13, 2026.