[Misc] Move find_loaded_library to platform_aware_utils.py #12231

Open · wants to merge 1 commit into main
Conversation

houseroad
Contributor

find_loaded_library is a platform-specific implementation, so this PR creates platform_aware_utils.py. For a different platform (for example, one that uses PAR to pack the binary), we can provide our own version of platform_aware_utils.py to override such platform-aware utility functions.
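For illustration, here is a minimal sketch of what a find_loaded_library in the proposed platform_aware_utils.py could look like on Linux. The module name comes from this PR; the parsing details below are an assumption, not the code under review.

```python
# platform_aware_utils.py -- illustrative sketch only
from typing import Optional


def find_loaded_library(lib_name: str) -> Optional[str]:
    """Return the file path of a shared library already loaded into this
    process, or None if it is not mapped.

    On Linux, /proc/self/maps lists every memory-mapped file, including
    shared objects pulled in by the dynamic linker, so we scan it for an
    entry whose file name contains `lib_name`.
    """
    with open("/proc/self/maps") as f:
        for line in f:
            if lib_name in line:
                # Each line looks like:
                # 7f2c3e000000-7f2c3e200000 r-xp ... /usr/lib/libcudart.so.12
                # The path is the last whitespace-separated field.
                return line.rsplit(maxsplit=1)[-1]
    return None
```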


👋 Hi! Thank you for contributing to the vLLM project.
Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, which covers a small and essential subset of CI tests to quickly catch errors. You can run additional CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can do one of these:

  • Add ready label to the PR
  • Enable auto-merge.

🚀

@houseroad force-pushed the create_platform_aware_utils branch from 515f144 to f0af1d9 on January 20, 2025 at 22:50
@youkaichao
Member

Separating this function out makes sense to me.

In vLLM, "platform" mainly refers to accelerators, e.g. CUDA, ROCm, XPU, etc., so can you pick a more intuitive name?

Maybe you can share your implementation of find_loaded_library, and we can work out the proper abstraction for it.

@houseroad
Contributor Author

How about deployment_aware_utils.py instead? Our internal changes can be found in https://github.com/vllm-project/vllm/pull/12129/files. So far there are two things to change: 1) the libcudart path, and 2) PYTHONPATH, due to the PAR packing method.
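For context, a hedged sketch of the override pattern being discussed: a default deployment_aware_utils.py exposes two hooks, and an internal PAR-packed build would ship its own copy of the module with the same interface. All names below are hypothetical.

```python
# deployment_aware_utils.py -- hypothetical default implementation; a
# PAR-packed deployment would replace this module with its own version
# exposing the same two functions.
import os
from typing import Optional


def get_cudart_so_path() -> Optional[str]:
    # Default: let the existing lookup (e.g. find_loaded_library) decide.
    # A PAR-packed build would return the path bundled inside the archive.
    return None


def get_subprocess_pythonpath() -> Optional[str]:
    # Default: inherit whatever PYTHONPATH the parent process already has.
    # A PAR-packed build would point at its unpacked source tree instead.
    return os.environ.get("PYTHONPATH")
```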

@youkaichao
Member

In this case, I think we can add a VLLM_CUDART_SO_PATH env var, similar to how we handle VLLM_NCCL_SO_PATH; see the "VLLM_NCCL_SO_PATH" entry in vllm/envs.py.

@houseroad
Contributor Author

I see, I can handle VLLM_CUDART_SO_PATH in a similar way. What about a solution for subprocess.run? I was about to add a wrapper around subprocess.run that passes in the PYTHONPATH internally. Any suggestions on that?
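A minimal sketch of the wrapper being described, assuming the deployment-specific PYTHONPATH is visible in the parent's environment; the name subprocess_run and this exact behavior are assumptions.

```python
import os
import subprocess
from typing import Any


def subprocess_run(*args: Any, **kwargs: Any) -> subprocess.CompletedProcess:
    """Drop-in replacement for subprocess.run that forwards the parent's
    (possibly deployment-specific) PYTHONPATH to the child process."""
    env = kwargs.pop("env", None) or os.environ.copy()
    pythonpath = os.environ.get("PYTHONPATH")  # e.g. set by the PAR launcher
    if pythonpath:
        env["PYTHONPATH"] = pythonpath
    return subprocess.run(*args, env=env, **kwargs)
```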

@youkaichao
Member

> How about the solution for subprocess.run?

Commented at #12129 (comment).
