Sourced from diffusers's releases.
v0.32.2
Fixes for Flux Single File loading, LoRA loading for 4bit BnB Flux, Hunyuan Video
This patch release:
- Fixes a regression in loading Comfy UI format single file checkpoints for Flux
- Fixes a regression in loading LoRAs with bitsandbytes 4bit quantized Flux models
- Adds `unload_lora_weights` for Flux Control
- Fixes a bug that prevents Hunyuan Video from running with batch size > 1
- Allows Hunyuan Video to load LoRAs created from the original repository code
All commits
- [Single File] Fix loading Flux Dev finetunes with Comfy Prefix by @DN6 in #10545
- [CI] Update HF Token on Fast GPU Model Tests by @DN6 in #10570
- [CI] Update HF Token in Fast GPU Tests by @DN6 in #10568
- Fix batch > 1 in HunyuanVideo by @hlky in #10548
- Fix HunyuanVideo produces NaN on PyTorch<2.5 by @hlky in #10482
- Fix hunyuan video attention mask dim by @a-r-r-o-w in #10454
- [LoRA] Support original format loras for HunyuanVideo by @a-r-r-o-w in #10376
- [LoRA] feat: support loading loras into 4bit quantized Flux models. by @sayakpaul in #10578
- [LoRA] clean up `load_lora_into_text_encoder()` and `fuse_lora()` copied from by @sayakpaul in #10495
- [LoRA] feat: support `unload_lora_weights()` for Flux Control. by @sayakpaul in #10206
- Fix Flux multiple Lora loading bug by @maxs-kan in #10388
- [LoRA] fix: lora unloading when using expanded Flux LoRAs. by @sayakpaul in #10397
- 560fb5f Release: v0.32.2
- 8ab26ac [Single File] Fix loading Flux Dev finetunes with Comfy Prefix (#10545)
- 9f305e7 [CI] Update HF Token on Fast GPU Model Tests (#10570)
- 2c25bf5 [CI] Update HF Token in Fast GPU Tests (#10568)
- 0e14cac Fix batch > 1 in HunyuanVideo (#10548)
- 13ea83f Fix HunyuanVideo produces NaN on PyTorch<2.5 (#10482)
- 2b432ac Fix hunyuan video attention mask dim (#10454)
- 263b973 [LoRA] feat: support loading loras into 4bit quantized Flux models. (#10578)
- a663a67 [LoRA] clean up `load_lora_into_text_encoder()` and `fuse_lora()` copied from...
- 526858c [LoRA] Support original format loras for HunyuanVideo (#10376)