NVIDIA Blackwell codegen #12271
base: main
Conversation
👋 Hi! Thank you for contributing to the vLLM project. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can do one of these:
does it have some requirements on the CUDA version?
Yes, CUDA 12.7 for B100/B200 and CUDA 12.8 for RTX 50. In PyTorch, the CUDA version is greater than 12.6.
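The version requirements above can be sketched as a simple check. This is a hypothetical illustration, not vLLM code; the table and function names are invented:

```python
# Minimum CUDA toolkit versions needed to compile for each Blackwell
# variant, per the discussion above (hypothetical mapping, not vLLM code).
MIN_CUDA = {
    "B100/B200": (12, 7),  # data-center Blackwell
    "RTX 50":    (12, 8),  # consumer Blackwell
}

def cuda_version_ok(gpu: str, toolkit: tuple) -> bool:
    """Return True if the given (major, minor) CUDA toolkit can target `gpu`."""
    return toolkit >= MIN_CUDA[gpu]

print(cuda_version_ok("B100/B200", (12, 7)))  # True
print(cuda_version_ok("RTX 50", (12, 6)))     # False: 12.6 predates RTX 50 support
```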
Thanks, I saw the flash attention changes -- good to know about the B100/B200 vs RTX difference there |
I have more information:
@tlrmchlsmth check it
I mean, will it break old nvcc?
Yes, old nvcc can't generate code for Blackwell. I tried on an RTX 5090 using 12.6.3 and it didn't work ("unsupported architecture"). But the new desktop Blackwell has Hopper capabilities like CUDA thread clusters, so it should work with the same 9.0 codegen. This means Flash Attention v3 should be compatible, as should upcoming versions.
Blackwell B100/B200: codegen 10.0
Blackwell RTX 50: codegen 12.0
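The compute capabilities above translate mechanically into the `-gencode` flags nvcc expects. A hypothetical sketch (not vLLM's actual build logic; the helper name is invented):

```python
# Compute capabilities from the discussion above.
ARCHS = {
    "Blackwell B100/B200": "10.0",
    "Blackwell RTX 50": "12.0",
}

def gencode_flag(capability: str) -> str:
    """Build the nvcc -gencode flag for a 'major.minor' compute capability."""
    sm = capability.replace(".", "")
    return f"-gencode arch=compute_{sm},code=sm_{sm}"

for gpu, cap in ARCHS.items():
    print(gpu, "->", gencode_flag(cap))
# e.g. "Blackwell B100/B200 -> -gencode arch=compute_100,code=sm_100"
```

Passing these flags requires a toolkit that knows the architecture, which is why the older nvcc mentioned above fails with "unsupported architecture".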