
MXFP8 training bug fixes for quantized_model_init and Torch FSDP fp8 all gather#587

Open
sudhu2k wants to merge 1 commit into dev from sudhu/mxfp8_bug_fixes

Conversation

@sudhu2k (Contributor) commented May 15, 2026

Description

  • Ensure the keep_fp8_weight_transpose_cache flag is set to True not only in the autocast path but also for quantized_model_init.
  • Fix padding during the fp8 all-gather.
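The first fix above amounts to a condition change: the weight-transpose cache should be kept whenever either entry point enables fp8 weights, not only under autocast. A minimal sketch of that control flow, with a hypothetical helper name (the real flag plumbing lives in the model-init code, not shown here):

```python
def should_keep_fp8_weight_transpose_cache(in_autocast: bool,
                                           in_quantized_model_init: bool) -> bool:
    # Before the fix, only the autocast path enabled the cache; the fix
    # extends it to the quantized_model_init path as well.
    return in_autocast or in_quantized_model_init


# The cache is now retained for quantized_model_init even without autocast.
assert should_keep_fp8_weight_transpose_cache(False, True)
```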

Fixes: #15425, #15420
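For the second fix, the underlying constraint is that MXFP8 quantizes tensors in fixed-size scaling blocks (32 elements in the OCP MX format), so each per-rank weight shard must be padded to a block-size multiple before the fp8 all-gather. A minimal sketch of that padding step, assuming a 32-element block and a hypothetical helper name (the actual fix operates on tensors inside the FSDP all-gather path):

```python
def pad_shard_for_fp8_all_gather(shard, block_size=32, pad_value=0.0):
    """Pad a flattened weight shard to a multiple of the quantization
    block size so every rank contributes whole scaling blocks.

    Returns the padded shard and the number of padding elements, which
    the caller needs in order to trim the gathered result afterwards.
    """
    remainder = len(shard) % block_size
    pad = 0 if remainder == 0 else block_size - remainder
    return shard + [pad_value] * pad, pad


# A 100-element shard is padded up to 128 (the next multiple of 32).
padded, pad = pad_shard_for_fp8_all_gather(list(range(100)))
assert len(padded) == 128 and pad == 28
```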

Type of change

  • Documentation change (change only to the documentation, either a fix or new content)
  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • Infra/Build change
  • Code refactoring

Checklist:

  • I have read and followed the contributing guidelines
  • The functionality is complete
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • My changes generate no new warnings
  • I have added tests that prove my fix is effective or that my feature works
  • New and existing unit tests pass locally with my changes

…el_init case and not just autocast case.

Fix padding during fp8 all-gather
@sudhu2k sudhu2k self-assigned this May 15, 2026
@sudhu2k sudhu2k added the ci-level 3 CI test level 3 label May 15, 2026
