Fix torch_logs tutorial: gate CUDA check properly, allow CPU fallback #3821

ShriyashP wants to merge 1 commit into pytorch:main
Conversation
Fixes #137285
Description
The `torch_logs` tutorial fails on CPU-only machines because `torch.cuda.get_device_capability()` is called unconditionally. This PR adds a `torch.cuda.is_available()` guard so the tutorial gracefully skips on CPU-only environments instead of crashing.

Changes

- Added a `torch.cuda.is_available()` check before `get_device_capability()` (see the sketch below)
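For illustration, a minimal sketch of the guarded pattern this PR describes; the tutorial's actual surrounding code is assumed, not quoted:

```python
import torch

# Minimal sketch of the guard described above; the tutorial's exact
# surrounding code is assumed, not reproduced verbatim.
if torch.cuda.is_available():
    # Only query device properties when a CUDA device is actually present.
    major, minor = torch.cuda.get_device_capability()
    print(f"CUDA device capability: {major}.{minor}")
else:
    # CPU-only environment: skip the GPU-dependent parts instead of crashing.
    print("CUDA is not available; skipping GPU-specific sections of the tutorial.")
```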
Testing

Checklist