
[Bugfix][Mamba] Fix Multistep on Mamba-like models #10705

Merged (3 commits) on Nov 27, 2024

Conversation

@mzusman (Contributor) commented on Nov 27, 2024

Bugfix for Mamba-like models under the multistep scheduler: finished request ids were not passed into the MambaCacheManager, which therefore ran out of free slots and could no longer accept new incoming requests.
The fix moves the get_and_reset_finished_requests_ids call inside the if clause so that it is only made once the multistep run has finished.

FIX #10693

CC @tlrmchlsmth , @fabianlim
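The failure mode can be sketched with a toy model of the interaction. The classes below (`Scheduler`, `MambaCacheManager`, `run_step`, `demo`) are hypothetical simplifications for illustration, not vLLM's actual implementation; the method name `get_and_reset_finished_requests_ids` is assumed to mirror the real accessor's destructive-read behavior:

```python
# Toy sketch of the bug (hypothetical classes, not vLLM's real code).
# The scheduler's "get and reset" accessor clears its internal list on
# every call, so calling it on an intermediate multistep iteration and
# discarding the result permanently loses the finished request ids.

class Scheduler:
    def __init__(self):
        self._finished = []

    def finish(self, req_id):
        self._finished.append(req_id)

    def get_and_reset_finished_requests_ids(self):
        ids, self._finished = self._finished, []
        return ids


class MambaCacheManager:
    def __init__(self, num_slots):
        self.free = list(range(num_slots))
        self.used = {}

    def allocate(self, req_id):
        if not self.free:
            raise RuntimeError("out of free mamba cache slots")
        self.used[req_id] = self.free.pop()

    def release_finished(self, finished_ids):
        for rid in finished_ids:
            self.free.append(self.used.pop(rid))


def run_step(sched, cache, is_last_step, buggy):
    if buggy:
        # BUG: fetched (and reset!) on every step; on intermediate
        # steps the ids are discarded and never reach the cache manager.
        finished = sched.get_and_reset_finished_requests_ids()
        if is_last_step:
            cache.release_finished(finished)  # list already drained
    else:
        # FIX: fetch the ids only when they will actually be used.
        if is_last_step:
            cache.release_finished(
                sched.get_and_reset_finished_requests_ids())


def demo(buggy):
    sched, cache = Scheduler(), MambaCacheManager(num_slots=1)
    cache.allocate("req-0")
    sched.finish("req-0")                 # request finishes mid-multistep
    run_step(sched, cache, False, buggy)  # intermediate iteration
    run_step(sched, cache, True, buggy)   # final iteration
    return len(cache.free)                # free slots afterwards
```

In the buggy path `demo(True)` returns 0: the slot is never released, so repeated leaks exhaust the pool and new requests hit the out-of-free-slots failure from #10693. With the fix, `demo(False)` returns 1.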

Finished request ids were not passed correctly to the mamba cache
manager, so it did not release mamba cache slots on time.

Signed-off-by: mzusman <[email protected]>

👋 Hi! Thank you for contributing to the vLLM project.
Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, covering a small but essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can do one of these:

  • Add the ready label to the PR
  • Enable auto-merge.

🚀

Signed-off-by: mzusman <[email protected]>
Signed-off-by: mzusman <[email protected]>
@tlrmchlsmth (Collaborator) left a comment:

Thanks for the fix!

@tlrmchlsmth tlrmchlsmth added the ready ONLY add when PR is ready to merge/full CI is needed label Nov 27, 2024
) -> None:
    # This test verifies that multistep works
    # correctly on mamba-like models
    with vllm_runner(model, num_scheduler_steps=8,
A collaborator commented:

It would be nice to add a check that the mamba cache is empty at the end of the test, to catch #10693, but unfortunately I don't see a way to do that without very high effort.

@tlrmchlsmth tlrmchlsmth enabled auto-merge (squash) November 27, 2024 14:56
@tlrmchlsmth tlrmchlsmth merged commit 197b448 into vllm-project:main Nov 27, 2024
61 checks passed
afeldman-nm pushed a commit to neuralmagic/vllm that referenced this pull request Dec 2, 2024
sleepwalker2017 pushed a commit to sleepwalker2017/vllm that referenced this pull request Dec 13, 2024
BKitor pushed a commit to BKitor/vllm that referenced this pull request Dec 30, 2024
Labels
ready ONLY add when PR is ready to merge/full CI is needed

Successfully merging this pull request may close these issues.

[Bug]: MambaCacheManager Can Possibly Run Out of Free Slots
3 participants