ValueError: Module down_blocks.0.attentions.0.proj_in is not a LoRACompatibleConv or LoRACompatibleLinear module.
This error is raised when `super().load_lora_weights(...)` is called (in `Live2Diff/live2diff/animatediff/pipeline/loader.py`, line 21, in `load_lora_weights`):
```
  File "/c1/username/Live2Diff/live2diff/utils/wrapper.py", line 451, in _load_model
    stream.load_lora(few_step_lora)
  File "/c1/username/Live2Diff/live2diff/pipeline_stream_animation_depth.py", line 140, in load_lora
    self.pipe.load_lora_weights(
  File "/c1/username/Live2Diff/live2diff/animatediff/pipeline/loader.py", line 21, in load_lora_weights
    super().load_lora_weights(pretrained_model_name_or_path_or_dict, adapter_name=adapter_name, strict=False, **kwargs)  # ignore the incompatible layers
  File "/c1/username/anaconda3/envs/live2diff/lib/python3.10/site-packages/diffusers/loaders/lora.py", line 117, in load_lora_weights
    self.load_lora_into_unet(
  File "/c1/username/anaconda3/envs/live2diff/lib/python3.10/site-packages/diffusers/loaders/lora.py", line 479, in load_lora_into_unet
    unet.load_attn_procs(
  File "/c1/username/anaconda3/envs/live2diff/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn
    return fn(*args, **kwargs)
  File "/c1/username/anaconda3/envs/live2diff/lib/python3.10/site-packages/diffusers/loaders/unet.py", line 294, in load_attn_procs
    raise ValueError(f"Module {key} is not a LoRACompatibleConv or LoRACompatibleLinear module.")
```
I printed both the LoRA state_dict keys and the UNet modules, and found that only the Linear layers of the UNet are converted to LoRACompatibleLinear, while the Conv2d layers (`proj_in` and `proj_out` in `BasicTransformerBlock`) remain plain Conv2d.
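For reference, this is roughly how I diagnosed it: a small sketch (not part of Live2Diff; `find_incompatible_modules` is a hypothetical helper) that takes the module mapping from `dict(unet.named_modules())` plus the LoRA state dict, and lists every targeted module whose class is not one of the `LoRACompatible*` variants.

```python
def find_incompatible_modules(named_modules, lora_state_dict):
    """named_modules: mapping of dotted module path -> module object,
    e.g. dict(unet.named_modules()) for a diffusers UNet.
    Returns (name, class_name) pairs for modules targeted by the LoRA
    weights whose class name does not start with 'LoRACompatible'."""
    targets = set()
    for key in lora_state_dict:
        # Strip LoRA-specific suffixes to recover the base module path:
        # 'unet.down_blocks.0.attentions.0.proj_in.lora.down.weight'
        #   -> 'down_blocks.0.attentions.0.proj_in'
        base = key.split(".lora")[0]
        if base.startswith("unet."):
            base = base[len("unet."):]
        targets.add(base)

    bad = []
    for name in sorted(targets):
        mod = named_modules.get(name)
        if mod is not None and not type(mod).__name__.startswith("LoRACompatible"):
            bad.append((name, type(mod).__name__))
    return bad
```

On my setup this flags `proj_in`/`proj_out` (Conv2d) but none of the Linear layers, which matches the ValueError above.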
My environment: