Commit b4fdf7d

Revert "[Bugfix] Gpt-j-6B patch kv_scale to k_scale path (vllm-project#10063)"

This reverts commit ea928f6.
flaviabeo committed Nov 6, 2024
1 parent 2dbfb98 commit b4fdf7d
Showing 1 changed file with 1 addition and 5 deletions.
6 changes: 1 addition & 5 deletions vllm/model_executor/models/gpt_j.py
@@ -35,8 +35,7 @@
 from vllm.model_executor.layers.sampler import Sampler, SamplerOutput
 from vllm.model_executor.layers.vocab_parallel_embedding import (
     ParallelLMHead, VocabParallelEmbedding)
-from vllm.model_executor.model_loader.weight_utils import (
-    default_weight_loader, maybe_remap_kv_scale_name)
+from vllm.model_executor.model_loader.weight_utils import default_weight_loader
 from vllm.model_executor.sampling_metadata import SamplingMetadata
 from vllm.sequence import IntermediateTensors
@@ -308,9 +307,6 @@ def load_weights(self, weights: Iterable[Tuple[str, torch.Tensor]]):
                 weight_loader(param, loaded_weight, shard_id)
                 break
             else:
-                name = maybe_remap_kv_scale_name(name, params_dict)
-                if name is None:
-                    continue
                 # Skip loading extra bias for GPTQ models.
                 if name.endswith(".bias") and name not in params_dict:
                     continue
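For context, the reverted lines routed checkpoint parameter names through a remapper before the fallback loading path, so legacy FP8 `kv_scale` entries could resolve to current parameter names (or be skipped when no match exists). The sketch below illustrates that pattern with a hypothetical, simplified helper; the real `maybe_remap_kv_scale_name` in `vllm.model_executor.model_loader.weight_utils` handles more cases than this.

```python
# Hypothetical sketch of the remapping pattern the revert removes.
# NOT vLLM's actual implementation: a simplified stand-in for
# maybe_remap_kv_scale_name, assuming a plain ".kv_scale" -> ".k_scale" rule.
from typing import Dict, Optional


def remap_kv_scale_name(name: str, params_dict: Dict[str, object]) -> Optional[str]:
    """Map a legacy '.kv_scale' checkpoint name to a '.k_scale' parameter name.

    Returns the remapped name if it exists in the model's parameters,
    the original name unchanged if no remap applies, or None when the
    remapped name is absent (signalling the caller to skip the weight).
    """
    if name.endswith(".kv_scale"):
        remapped = name.replace(".kv_scale", ".k_scale")
        return remapped if remapped in params_dict else None
    return name


# Usage mirroring the reverted else-branch of load_weights:
params_dict = {"transformer.h.0.attn.k_scale": object()}
name = remap_kv_scale_name("transformer.h.0.attn.kv_scale", params_dict)
if name is None:
    pass  # the reverted code executed `continue` here to skip the weight
```

With the revert applied, unmatched names fall straight through to the default loading path, so checkpoints that still use the legacy `kv_scale` naming are no longer remapped by this model.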
