MXFP8 training bug fixes for quantized_model_init and Torch FSDP fp8 all gather #587

Commit 0deec6e: Ensure keep_fp8_weight_transpose_cache is True even for quantized_mod…