[FEATURE] Enables offline /score for embedding models #12021
Conversation
👋 Hi! Thank you for contributing to the vLLM project. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.
Force-pushed from 96484b7 to d37339e
@maxdebayser @gmarinho2 This looks like it only touches the offline entrypoint, but the PR title mentions /score. It's not 100% clear to me from the linked issue either what was intended: is there more work planned to support the online interface, or are we only aiming for offline?
@joerunde, we're aiming for both. @gmarinho2 started with the offline API first.
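For readers following along, here is a minimal sketch of the two surfaces being discussed. The offline `LLM.score()` call is what this PR enables for embedding models; the `/score` HTTP payload shape is an assumption based on vLLM's existing cross-encoder endpoint, and the model name is illustrative:

```python
# Offline entrypoint (this PR): score text pairs directly through the LLM class.
# The task value and model name are illustrative, not mandated by the PR.
from vllm import LLM

llm = LLM(model="BAAI/bge-base-en-v1.5", task="embed")
outputs = llm.score(
    "What is the capital of France?",
    ["Paris is the capital of France.",
     "Berlin is the capital of Germany."],
)
for out in outputs:
    print(out.outputs.score)

# Online entrypoint (planned follow-up PR): the OpenAI-compatible /score route.
# Payload shape assumed from the existing cross-encoder /score endpoint.
import requests

resp = requests.post("http://localhost:8000/score", json={
    "model": "BAAI/bge-base-en-v1.5",
    "text_1": "What is the capital of France?",
    "text_2": ["Paris is the capital of France.",
               "Berlin is the capital of Germany."],
})
print(resp.json())
```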
This pull request has merge conflicts that must be resolved before it can be merged.
I've left some suggestions, but it looks good to me. I think we can open this as a PR now.
Some initial comments.
I also suggest splitting out the logic for scoring and general embedding models into separate functions.
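As a rough illustration of that suggestion, a minimal sketch of the split; the function names and the model interface here are hypothetical, not from the PR:

```python
from typing import List, Tuple
import torch.nn.functional as F

def _score_cross_encoder(model, pairs: List[Tuple[str, str]]) -> List[float]:
    # Cross-encoders consume both texts in a single forward pass and emit
    # a score directly from their classification head. `model.predict` is
    # a hypothetical interface standing in for the real model call.
    return [model.predict(a, b) for a, b in pairs]

def _score_embedding_model(embed, pairs: List[Tuple[str, str]]) -> List[float]:
    # Plain embedding models are run once per text; the pair score is then
    # derived from the two pooled vectors (cosine similarity here).
    scores = []
    for a, b in pairs:
        va, vb = embed(a), embed(b)
        scores.append(F.cosine_similarity(va, vb, dim=-1).item())
    return scores
```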
Enables LLM.score() for all embedding models. The pair's request_id consists of the request_ids of each embedding in the pair, joined by "_". The prompt_token_ids are the concatenation of all the token ids, in order, separated by the padding token when one is available. This PR is the first of two for completing the issue; the second PR will implement the same feature in the OpenAI API.
Issue: [Feature]: Enable /score endpoint for all embedding models (1/2)
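A small sketch of the bookkeeping described above; the helper names are illustrative, but the joining rules follow the description:

```python
from typing import List, Optional

def make_pair_request_id(id_1: str, id_2: str) -> str:
    # The pair's request_id is the two embedding request_ids joined by "_".
    return f"{id_1}_{id_2}"

def make_pair_token_ids(tokens_1: List[int], tokens_2: List[int],
                        pad_token_id: Optional[int]) -> List[int]:
    # prompt_token_ids is the concatenation of both token lists, in order,
    # separated by the padding token when the tokenizer defines one.
    if pad_token_id is not None:
        return tokens_1 + [pad_token_id] + tokens_2
    return tokens_1 + tokens_2

# Example (illustrative ids and tokens): request ids "42" and "43" with
# pad_token_id=0 yield request_id "42_43" and tokens [101, 2054, 0, 101, 3000].
print(make_pair_request_id("42", "43"))
print(make_pair_token_ids([101, 2054], [101, 3000], pad_token_id=0))
```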