
[PyTorch] Fix L3 FA tests #2709

Open
cyanguwa wants to merge 2 commits into NVIDIA:main from cyanguwa:fix_L3_FA

Conversation

@cyanguwa
Collaborator

@cyanguwa cyanguwa commented Feb 26, 2026

Description

This PR fixes the L3 tests for FP8 current scaling in L3_pytorch_FA_versions_test. The fix only changes the backend selection logic in the test, not the backend support itself.

Type of change

  • Documentation change (change only to the documentation, either a fix or new content)
  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • Infra/Build change
  • Code refactoring

Changes

Please list the changes introduced in this PR:

  • L3 CI test fix.

Checklist:

  • I have read and followed the contributing guidelines
  • The functionality is complete
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • My changes generate no new warnings
  • I have added tests that prove my fix is effective or that my feature works
  • New and existing unit tests pass locally with my changes

cyanguwa and others added 2 commits February 26, 2026 10:58
Signed-off-by: Charlene Yang <8636796+cyanguwa@users.noreply.github.com>
@greptile-apps
Contributor

greptile-apps bot commented Feb 26, 2026

Greptile Summary

Fixed the test_dpa_fp8_vs_f16 test to properly handle backend availability checks, preventing undefined variable errors when FusedAttention backends aren't supported.

Key changes:

  • Separated FP8 and F16 backend availability tracking (fused_attn_supported_fp8 vs fused_attn_supported_f16)
  • Wrapped FusedAttention test execution in conditional blocks that check for backend support
  • Updated all comparison blocks to verify both result sets exist before accessing them

Impact:
The fix prevents NameError exceptions when running tests on hardware that doesn't support certain attention backends (particularly relevant for FP8 current scaling mode). Tests now gracefully skip unsupported configurations instead of crashing.
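The guard pattern described above can be sketched as follows. This is a minimal illustration, not the actual code in test_attention.py: the helpers `is_fused_attn_available` and `run_fused_attention` are hypothetical stand-ins, though the separate availability flags mirror the `fused_attn_supported_fp8` / `fused_attn_supported_f16` split the summary mentions.

```python
def is_fused_attn_available(config):
    # Hypothetical availability check; pretend one config lacks support.
    return config != "fp8-unsupported"

def run_fused_attention(config):
    # Hypothetical stand-in for the actual attention call.
    return f"output-for-{config}"

def run_dpa_fp8_vs_f16(fp8_config, f16_config):
    results = {}

    # Track FP8 and F16 backend availability with separate flags, as the
    # fix does with fused_attn_supported_fp8 / fused_attn_supported_f16.
    fused_attn_supported_fp8 = is_fused_attn_available(fp8_config)
    fused_attn_supported_f16 = is_fused_attn_available(f16_config)

    # Run each FusedAttention path only when its backend is supported,
    # so no result variable is ever referenced before assignment.
    if fused_attn_supported_fp8:
        results["fp8"] = run_fused_attention(fp8_config)
    if fused_attn_supported_f16:
        results["f16"] = run_fused_attention(f16_config)

    # Compare only when both result sets exist; otherwise skip gracefully
    # instead of raising a NameError.
    if "fp8" in results and "f16" in results:
        return ("compared", results)
    return ("skipped", results)
```

With both backends available the comparison runs; when one is missing, the test path skips rather than crashing.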

Confidence Score: 5/5

  • This PR is safe to merge with no risk
  • Test-only change that fixes a real bug with proper conditional guards. All variable accesses are correctly guarded by backend availability checks, preventing undefined variable errors. The logic is sound and handles all edge cases properly.
  • No files require special attention

Important Files Changed

Filename Overview
tests/pytorch/attention/test_attention.py Fixed FP8 attention backend availability checking to prevent undefined variable errors when backends aren't supported

Last reviewed commit: 1a40989


@greptile-apps greptile-apps bot left a comment

1 file reviewed, no comments


@cyanguwa
Collaborator Author

/te-ci pytorch L3
