[Misc][LoRA] Support Rank Stabilized LoRA (RSLoRA) #6909
Merged · +30 −22

Commits (10):
- e308f37 feat: support rslora (JohnGiorgi)
- 6690e8b fix: lint the codebase (JohnGiorgi)
- e2cf420 fix: default to not using rslora (JohnGiorgi)
- b782763 Merge branch 'main' into support-rslora (JohnGiorgi)
- a3206bb fix: move rslora scaling to peft_helper (JohnGiorgi)
- 42e04aa fix: set scaling factor directly, instead of modifying alpha (JohnGiorgi)
- f73a3db fix: remove error message about RSLoRA not being supported (JohnGiorgi)
- 419a6fb docs: add comments with arxiv links for rsLoRA and DoRA (JohnGiorgi)
- 7ace76c Done (jeejeelee)
- 20c9f14 Merge branch 'main' into support-rslora (JohnGiorgi)
Conversations
JohnGiorgi: @jeejeelee I think this is the cleanest way to support the rsLoRA scaling logic using the new PEFTHelper! My main outstanding worry is that I am not sure how the scaling applied for rsLoRA and the scaling applied when self.context_length is provided should interact. The way this is written, if both self.use_rslora and self.context_length are set, the custom scaling-factor logic for self.context_length will take precedence.
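For reference, here is a minimal sketch of the two scaling paths under discussion. Apart from use_rslora, context_length, and vllm_scaling_factor, which are named in this thread, the class and field names below are illustrative assumptions, not the actual vLLM code:

```python
import math
from dataclasses import dataclass


@dataclass
class ScalingSketch:
    """Illustrative stand-in for the PEFT config helper discussed above."""
    r: int                                # LoRA rank
    lora_alpha: int                       # LoRA alpha
    use_rslora: bool = False              # rsLoRA flag from the adapter config
    context_length: int = 0               # > 0 marks a long-context adapter
    max_position_embeddings: int = 4096   # illustrative base context length

    def __post_init__(self) -> None:
        # rsLoRA (arXiv:2312.03732) scales the LoRA delta by alpha / sqrt(r),
        # which keeps the update magnitude stable as the rank grows; classic
        # LoRA uses alpha / r.
        if self.use_rslora:
            self.scaling = self.lora_alpha / math.sqrt(self.r)
        else:
            self.scaling = self.lora_alpha / self.r

        # Long-context scaling factor: the separate path referred to in this
        # thread. In this sketch it is computed independently of the LoRA
        # weight scaling whenever context_length is set.
        self.vllm_scaling_factor = None
        if self.context_length:
            self.vllm_scaling_factor = float(
                math.ceil(self.context_length / self.max_position_embeddings))
```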
jeejeelee: IMHO, these two scaling factors are distinct and operate independently of each other. We shouldn't use vllm_scaling_factor; instead, we can add a variable called scaling.
JohnGiorgi: Fine by me, but it gets a little confusing, as LoRAModel already defines scaling_factor and uses it for long context support, and that field is populated by PEFTHelper.vllm_scaling_factor. Is there an argument to sync LoRAModel and PEFTHelper so each has a scaling_factor and vllm_scaling_factor argument? (Also, if vllm_scaling_factor is strictly used for long context support, maybe a more explicit name is in order, e.g. long_context_scaling_factor or something.)
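A tiny sketch of how the split proposed here could look. The field names scaling and long_context_scaling_factor follow the suggestion in this thread, but the class itself is hypothetical, not the merged code:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class LoRAScalingInfo:
    """Hypothetical container showing the proposed naming split."""
    # Weight scaling applied to the LoRA delta: alpha / r by default,
    # or alpha / sqrt(r) when rsLoRA is enabled.
    scaling: float
    # Used strictly for long-context support; None for ordinary adapters.
    # The explicit name avoids confusion with the weight scaling above.
    long_context_scaling_factor: Optional[float] = None


# Example: an rsLoRA adapter with alpha=16, r=64 and no long-context scaling.
info = LoRAScalingInfo(scaling=16 / 64 ** 0.5)
```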
jeejeelee: long_context_scaling_factor looks reasonable. Let me verify this tomorrow and confirm.
JohnGiorgi: Sounds good! I'll hang tight until then.
jeejeelee: I have implemented the related logic; please check if it's reasonable.
JohnGiorgi: This looks good to me @jeejeelee! Thanks for adding tests.
jeejeelee: Could you please sync with the main branch?
JohnGiorgi: Done. It is just the DCO check that is failing, and I am unsure how to fix it: it suggests a rebase, but I don't want to do that since there are multiple commit authors on this branch.
jeejeelee: There is a known issue in the current LoRA test. Let's check whether the other LoRA tests can pass first; if they do, we can consider force-merging.