
[WIP] Add additional platform page for third-party vendor#2072

Draft
can-gaa-hou wants to merge 7 commits into
pytorch:sitefrom
can-gaa-hou:site

Conversation

@can-gaa-hou

@can-gaa-hou can-gaa-hou commented May 9, 2026

Description

This PR adds a new page for users to search for guidelines for more PyTorch backends through the official website.

The new page will look like this:
[screenshot of the new page]

Here is the page entrance: https://cosdt.github.io/get-started/additional-platform/

How it works

To add a new backend to this page, simply add a JSON file under the _ecosystem_platform/ directory. The gen_ecosystem_platform.py script will automatically read the JSON files and regenerate the JS file needed to render the page. Here is an example JSON file:

{
  "name": "NPU",
  "vendor": "Huawei",
  "documentation": "https://www.hiascend.com/document",
  "stable": {
    "linux": {
      "pip": {
        "python": {
          "CANN 8.0": "pip3 install torch torchvision --index-url https://download.pytorch.org/whl/npu/cann80",
          "CANN 9.0": "pip3 install torch torchvision --index-url https://download.pytorch.org/whl/npu/cann90"
        }
      }
    }
  },
  "preview": {
    "linux": {
      "pip": {
        "python": {
          "CANN 8.0": "pip3 install torch torchvision --pre --index-url https://download.pytorch.org/whl/nightly/npu/cann80",
          "CANN 9.0": "pip3 install torch torchvision --pre --index-url https://download.pytorch.org/whl/nightly/npu/cann90"
        }
      }
    }
  }
}
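For context, the generation step described above could look roughly like the sketch below. This is not the actual gen_ecosystem_platform.py from the PR; the output path and the JS variable name are assumptions for illustration.

```python
import json
from pathlib import Path

def generate_platform_js(src_dir="_ecosystem_platform",
                         out_file="assets/quick-start-additional-platform.js"):
    """Collect every vendor JSON file and emit a single JS data file
    that the selector widget can load. (Sketch; paths and the JS
    variable name are assumed, not taken from the PR.)"""
    platforms = []
    for path in sorted(Path(src_dir).glob("*.json")):
        with open(path, encoding="utf-8") as f:
            platforms.append(json.load(f))
    # Embed the combined data as a JS constant for the page script to read.
    js = "const additionalPlatforms = " + json.dumps(platforms, indent=2) + ";\n"
    out = Path(out_file)
    out.parent.mkdir(parents=True, exist_ok=True)
    out.write_text(js, encoding="utf-8")
    return platforms
```

Under this scheme, adding a backend stays a one-file change: drop the JSON into _ecosystem_platform/ and re-run the generator.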

cc @fffrog

@netlify

netlify Bot commented May 9, 2026

Deploy Preview for pytorch-dot-org-preview ready!

| Name | Link |
| --- | --- |
| 🔨 Latest commit | 14e4a12 |
| 🔍 Latest deploy log | https://app.netlify.com/projects/pytorch-dot-org-preview/deploys/6a06c58fbbf34800088c3a3c |
| 😎 Deploy Preview | https://deploy-preview-2072--pytorch-dot-org-preview.netlify.app |

@fffrog

fffrog commented May 9, 2026

@can-gaa-hou Great job, thank you.

@fffrog

fffrog commented May 9, 2026

Hey @marco-s, sorry to bother you.

The PoC is ready. Could you let us know whether this provides enough detail for the WP integration? The rendered version is here.


@dvrogozh dvrogozh left a comment


I am not sure where the Intel XPU cmdline comes from, as I don't see it being added in this PR, but it's incorrect: it directs users to install the deprecated intel-extension-for-pytorch. Intel XPU is a platform available in upstream pytorch and should be installed with:

pip3 install torch torchvision --index-url https://download.pytorch.org/whl/xpu

For the same reason, I think we need to clearly separate the platforms which are supported directly by the pytorch team (such as CUDA, ROCm, XPU, Vulkan, etc.) from those which are available as pytorch extensions/plugins supported by 3rd parties. I suggest there should be two distinct sections on the page which clearly separate these two groups of platforms.

Also, I am not sure that the current install selector matrix widget is the best approach we can take, as it has potentially limited capacity and quickly becomes bloated with different buttons. A more straightforward and simple approach could be considered: a plain table of contents listing the platforms one by one, with further links to the platform descriptions and instructions. For example:

PyTorch Platforms

  • CPU -> link
  • CUDA -> link
  • ROCm -> link
  • XPU -> link
  • Vulkan -> link
  • ...

PyTorch Extensions

  • NPU (Huawei) -> link
  • ...

@can-gaa-hou
Author

I am not sure where the Intel XPU cmdline comes from, as I don't see it being added in this PR, but it's incorrect: it directs users to install the deprecated intel-extension-for-pytorch.

Hi @dvrogozh, all the content shown on the page is fake. The purpose of this PR is to introduce the secondary page for PyTorch extension installation and to add the installation widget framework. The content in the widget will be added through JSON files (the format is shown in the description) by each vendor in the following PRs.

For the same reason, I think we need to clearly separate the platforms which are supported directly by the pytorch team (such as CUDA, ROCm, XPU, Vulkan, etc.) from those which are available as pytorch extensions/plugins supported by 3rd parties.

The primary page will manage the platforms that are supported directly by the pytorch team, and the secondary page will manage the pytorch extensions/plugins supported by 3rd parties.

Also, I am not sure that current install selector matrix widget is the best approach we can take as it has potentially limited capacity and quickly becomes bloated with different buttons.

The format inherits from the primary page. We can discuss it further in the Slack channel and the Accelerator Working Group.

@can-gaa-hou can-gaa-hou changed the title [WIP] Add ecosystem platform page for third-party vendor [WIP] Add additional platform page for third-party vendor May 12, 2026
@can-gaa-hou
Author

Hi there, I have updated the link: https://cosdt.github.io/get-started/additional-platform/

The name changed from ecosystem-platform to additional-platform.

@albanD
Contributor

albanD commented May 12, 2026

@dvrogozh I don't think I agree on this. There should not be any distinction between in-core vs out-of-core backends from the end user perspective.
The code lives in-core or out-of-core only for technical reasons related to ease of development and maintenance.

We have many workstreams, both done and in flight, working toward ensuring the two match and reach the same level of integration and stability.

Comment thread _get_started/get-started-locally.md Outdated
@@ -0,0 +1,115 @@
<p>Select your compute platform and configuration to get the installation command. These platforms provide alternative hardware acceleration options beyond NVIDIA CUDA.</p>
Contributor


Suggested change
<p>Select your compute platform and configuration to get the installation command. These platforms provide alternative hardware acceleration options beyond NVIDIA CUDA.</p>
<p>In the following selector, you can find compute platform and configuration supported by partners and community members. Select your preferences and run the install command provided.</p>

This is closer to the language of the main page and makes the difference between the two clear.

Contributor


I would also suggest adding a note in the selector for each platform with details on where to provide feedback and report issues (which github repo, which label to use there, etc).



@albanD, can you please suggest where Intel XPU should be placed: on the primary page or on this additional platforms page? I argue that Intel XPU is an in-tree backend, and Intel provides the necessary infrastructure to the Pytorch team to validate, develop and support the XPU backend. Effectively, the Intel Pytorch team is a part of the greater Pytorch team that makes Pytorch releases.



I would also suggest adding a note in the selector for each platform with details on where to provide feedback and report issues (which github repo, which label to use there, etc).

Besides that I think it's reasonable to add information on how to install drivers to make pytorch actually runnable on the specific hardware. This might be just a link to the platform documentation or a link to the pytorch side page with the details such as https://docs.pytorch.org/docs/2.11/notes/get_start_xpu.html.



I would also suggest adding a note in the selector for each platform with details on where to provide feedback and report issues (which github repo, which label to use there, etc).

@albanD Thank you for this valuable suggestion, I will copy that.



Besides that I think it's reasonable to add information on how to install drivers to make pytorch actually runnable on the specific hardware. This might be just a link to the platform documentation or a link to the pytorch side page with the details such as https://docs.pytorch.org/docs/2.11/notes/get_start_xpu.html.

Thank you @dvrogozh, yes, we will reserve a spot to display this info.



we will reserve a spot to display this info.

@can-gaa-hou, can you please add this to the design sketch? I suggest a couple of new lines after "Run this Command", such as:

| Box | Description |
| --- | --- |
| Run this Command | pip install .... |
| Documentation | Link to documentation to install prerequisites (drivers), describing supported features, limitations, etc. |
| Leave feedback | Link to the Github project to file issues, questions, etc. |
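If the schema grows these two boxes, one possible shape is sketched below. The feedback key, the example URL, and the helper function are assumptions for illustration, not part of this PR.

```python
# Hypothetical extension of a vendor JSON entry: the "feedback" key and
# its URL are assumed, not part of the schema defined in this PR.
vendor_entry = {
    "name": "NPU",
    "vendor": "Huawei",
    "documentation": "https://www.hiascend.com/document",
    "feedback": "https://github.com/example-vendor/pytorch-backend/issues",
}

def render_info_rows(entry):
    """Build the extra rows shown under "Run this Command"."""
    rows = []
    if "documentation" in entry:
        rows.append(("Documentation", entry["documentation"]))
    if "feedback" in entry:
        rows.append(("Leave feedback", entry["feedback"]))
    return rows
```

Keeping these as plain per-vendor keys means the generator and widget need no special cases: missing keys simply produce no row.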

Contributor

@EikanWang EikanWang May 14, 2026


@albanD, can you please suggest where Intel XPU should be placed: on the primary page or on this additional platforms page? I argue that Intel XPU is an in-tree backend, and Intel provides the necessary infrastructure to the Pytorch team to validate, develop and support the XPU backend. Effectively, the Intel Pytorch team is a part of the greater Pytorch team that makes Pytorch releases.

@dvrogozh , Let’s focus on defining the criteria first. I think we are aligned that this should be the top priority, and the criteria should not be Intel GPU-specific. In my view, whether a backend is in-tree or out-of-tree should not be part of the criteria. Factors such as quality, end-user adoption, and ecosystem maturity should carry much more weight.

Author


@can-gaa-hou, can you please add this to the design sketch? I suggest a couple of new lines after "Run this Command", such as:

I think we can extend this to allow each accelerator to have its own installation guideline in Markdown. I will add a mechanism that reads this Markdown file, renders it to HTML, and places it under the matrix when the user selects the specific accelerator as the compute platform. The overall look is shown below:

[screenshot of the proposed layout]
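As a rough sketch of that mechanism: the guides directory layout below is an assumption, and a real implementation would likely use a full Markdown library rather than the tiny subset converter shown here.

```python
import html
from pathlib import Path

def md_to_html(text):
    """Tiny Markdown subset for illustration: '#' headings become <h*>
    tags, and blank-line-separated blocks become <p> paragraphs."""
    out = []
    for block in text.strip().split("\n\n"):
        block = block.strip()
        if block.startswith("#"):
            level = len(block) - len(block.lstrip("#"))
            body = html.escape(block.lstrip("#").strip())
            out.append(f"<h{level}>{body}</h{level}>")
        else:
            out.append(f"<p>{html.escape(block)}</p>")
    return "\n".join(out)

def render_guides(guides_dir="_ecosystem_platform/guides"):
    """Map each accelerator's Markdown guideline file to an HTML snippet,
    keyed by file stem, for injection below the selector matrix.
    (The guides directory name is an assumption.)"""
    return {p.stem: md_to_html(p.read_text(encoding="utf-8"))
            for p in Path(guides_dir).glob("*.md")}
```

Rendering at build time keeps the page static: the per-platform HTML can be baked into the same generated JS data file and shown when the matching platform is selected.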



@can-gaa-hou, yes, I think this will do.

Comment thread _includes/quick_start_additional_platform.html
<div class="option-text">Package</div>
</div>
<div class="col-md-12 title-block">
<div class="option-text">Language</div>
Contributor


I would suggest removing Package and Language, to be honest. libtorch + C++ is not really used, and I don't think maintaining these is a good use of time for new backends.



Got it, we will copy that.

Author


Deleted libtorch and C++. Source remains.

@dvrogozh

There should not be any distinction between in-core vs out-of-core backends from the end user perspective. The code lives in-core or out-of-core only for technical reasons related to ease of development and maintenance.

If the distinctions are just for technical reasons, then yes. However, if the code resides in a separate GitHub org, has a distinct workflow process, standalone CI coverage and maintenance, a separate release process, etc., then there is a clear distinction between in-core and such out-of-core backends.

@dvrogozh

The content in the widget will be added through JSON files (the format is shown in the description) by each vendor in the following PRs.

@can-gaa-hou, where is such a JSON file located for XPU? I could not actually find it in the repo. Unfortunately, I don't understand where the current version is pulling the XPU information from.

@fffrog

fffrog commented May 13, 2026

@can-gaa-hou, where is such a JSON file located for XPU? I could not actually find it in the repo. Unfortunately, I don't understand where the current version is pulling the XPU information from.

@dvrogozh, this PR is strictly focused on the infrastructure of the secondary page and does not include any specific accelerator data. The XPU and Ascend entries seen in the demo were generated in our local environment using local files; we intentionally excluded them from this PR to keep the focus on the framework.

@fffrog

fffrog commented May 13, 2026

Hi @albanD, sorry for the ping again. We are drafting comprehensive mockups that include both the primary page updates and the creation of the secondary pages, fully aligned with the TAC consensus.

We will also incorporate some minor adjustments based on the discussions and suggestions from other vendors in the Accelerator WG. Once finalized, I’ll send it over to you for final feedback and confirmation. We expect to complete this in about three days. Thank you for your patience and support!

cc @can-gaa-hou

@can-gaa-hou
Author

can-gaa-hou commented May 13, 2026

@dvrogozh I have changed all the accelerator names to fake names to lessen controversy and keep this PR focused on the format and style. You can find the generated javascript file here: https://github.com/cosdt/cosdt.github.io/blob/pytorch/assets/quick-start-additional-platform.js (generated from JSON).
Update: You can now see the new page: https://cosdt.github.io/get-started/additional-platform/

Since we have a working group meeting to discuss this secondary page, I will leave this PR in draft status until we reach a consensus on the design. But feel free to leave any comments here. Thanks! cc @fffrog @albanD

@dvrogozh

@dvrogozh I have changed all the accelerator names to fake names to lessen controversy and keep this PR focused on the format and style.

@can-gaa-hou Thank you. This helps.

@EikanWang
Contributor

@dvrogozh I don't think I agree on this. There should not be any distinction between in-core vs out-of-core backends from the end user perspective. The code lives in-core or out-of-core only for technical reasons related to ease of development and maintenance.

We have many workstreams, both done and in flight, working toward ensuring the two match and reach the same level of integration and stability.

Agree with @albanD. In-core and out-of-core are implementation details; they do not impact the quality, the adoption, or the ecosystem from the hardware accelerator perspective.

As I mentioned, clear criteria to reflect the accelerator status are much more important. cc @dvrogozh, @fffrog, @zeshengzong

@EikanWang
Contributor

EikanWang commented May 14, 2026

If the distinctions are just for technical reasons, then yes. However, if the code resides in a separate GitHub org, has a distinct workflow process, standalone CI coverage and maintenance, a separate release process, etc., then there is a clear distinction between in-core and such out-of-core backends.

@dvrogozh, "distinct workflow processes set, stand alone ci coverage and maintenance, separate release process, etc." are implementation details. For example, why do we care about the workflow processes if an accelerator can guarantee delivering world-class quality with promising performance? Even the workflows for the different in-core accelerators, like CPU and CUDA, are not exactly the same as each other.

@dvrogozh

dvrogozh commented May 14, 2026

@dvrogozh, "distinct workflow processes set, stand alone ci coverage and maintenance, separate release process, etc." are implementation details. For example, why do we care about the workflow processes if an accelerator can guarantee delivering world-class quality with promising performance? Even the workflows for the different in-core accelerators, like CPU and CUDA, are not exactly the same as each other.

@EikanWang, I am trying to highlight potential issues due to the different ownership of some out-of-tree accelerators. I do not see a problem as long as the pytorch community owns and supports an accelerator, even if it's out-of-tree. But it might become an issue if an accelerator is supported by a party not affiliated with the pytorch community. Consider, for example, that such a party might change its release cadence or validation coverage at any time without consulting the pytorch community, and the community would have no control to change anything about such an accelerator. Thus, accelerators being added to the pytorch documentation must pass some scrutiny to guarantee their quality and support status.

To me, the question of whether to document an accelerator on pytorch.org is tightly related to whether the pytorch community maintains that accelerator or not. And if someone else maintains it, then I personally would be very careful about adding it to the generic documentation (adding it to a dedicated section where 3rd party material is discussed is totally fine, as I see it).

Note that accelerator vendors can very well be part of the community. This depends on the vendor team involvement and used practices.

@fffrog

fffrog commented May 15, 2026

As I mentioned, clear criteria to reflect the accelerator status are much more important. cc @dvrogozh, @fffrog, @zeshengzong

Totally agree. We will kick off the initial draft of the criteria immediately and share it with everyone once it's ready for review.

