Bump diffusers from 0.32.1 to 0.32.2 in /samples (#1564)
Bumps [diffusers](https://github.com/huggingface/diffusers) from 0.32.1
to 0.32.2.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/huggingface/diffusers/releases">diffusers's
releases</a>.</em></p>
<blockquote>
<h2>v0.32.2</h2>
<h1>Fixes for Flux Single File loading, LoRA loading for 4bit BnB Flux,
Hunyuan Video</h1>
<p>This patch release:</p>
<ul>
<li>Fixes a regression in loading Comfy UI format single file
checkpoints for Flux</li>
<li>Fixes a regression in loading LoRAs with bitsandbytes 4bit quantized
Flux models (a usage sketch follows these release notes)</li>
<li>Adds <code>unload_lora_weights</code> for Flux Control</li>
<li>Fixes a bug that prevents Hunyuan Video from running with batch size
&gt; 1 (a usage sketch follows the commit list below)</li>
<li>Allows Hunyuan Video to load LoRAs created with the original
repository code</li>
</ul>
<h2>All commits</h2>
<ul>
<li>[Single File] Fix loading Flux Dev finetunes with Comfy Prefix by <a
href="https://github.com/DN6"><code>@​DN6</code></a> in <a
href="https://redirect.github.com/huggingface/diffusers/issues/10545">#10545</a></li>
<li>[CI] Update HF Token on Fast GPU Model Tests by <a
href="https://github.com/DN6"><code>@​DN6</code></a> <a
href="https://redirect.github.com/huggingface/diffusers/issues/10570">#10570</a></li>
<li>[CI] Update HF Token in Fast GPU Tests by <a
href="https://github.com/DN6"><code>@​DN6</code></a> <a
href="https://redirect.github.com/huggingface/diffusers/issues/10568">#10568</a></li>
<li>Fix batch &gt; 1 in HunyuanVideo by <a
href="https://github.com/hlky"><code>@​hlky</code></a> in <a
href="https://redirect.github.com/huggingface/diffusers/issues/10548">#10548</a></li>
<li>Fix HunyuanVideo produces NaN on PyTorch&lt;2.5 by <a
href="https://github.com/hlky"><code>@​hlky</code></a> in <a
href="https://redirect.github.com/huggingface/diffusers/issues/10482">#10482</a></li>
<li>Fix hunyuan video attention mask dim by <a
href="https://github.com/a-r-r-o-w"><code>@​a-r-r-o-w</code></a> in <a
href="https://redirect.github.com/huggingface/diffusers/issues/10454">#10454</a></li>
<li>[LoRA] Support original format loras for HunyuanVideo by <a
href="https://github.com/a-r-r-o-w"><code>@​a-r-r-o-w</code></a> in <a
href="https://redirect.github.com/huggingface/diffusers/issues/10376">#10376</a></li>
<li>[LoRA] feat: support loading loras into 4bit quantized Flux models.
by <a href="https://github.com/sayakpaul"><code>@​sayakpaul</code></a>
in <a
href="https://redirect.github.com/huggingface/diffusers/issues/10578">#10578</a></li>
<li>[LoRA] clean up <code>load_lora_into_text_encoder()</code> and
<code>fuse_lora()</code> copied from by <a
href="https://github.com/sayakpaul"><code>@​sayakpaul</code></a> in <a
href="https://redirect.github.com/huggingface/diffusers/issues/10495">#10495</a></li>
<li>[LoRA] feat: support <code>unload_lora_weights()</code> for Flux
Control. by <a
href="https://github.com/sayakpaul"><code>@​sayakpaul</code></a> in <a
href="https://redirect.github.com/huggingface/diffusers/issues/10206">#10206</a></li>
<li>Fix Flux multiple Lora loading bug by <a
href="https://github.com/maxs-kan"><code>@​maxs-kan</code></a> in <a
href="https://redirect.github.com/huggingface/diffusers/issues/10388">#10388</a></li>
<li>[LoRA] fix: lora unloading when using expanded Flux LoRAs. by <a
href="https://github.com/sayakpaul"><code>@​sayakpaul</code></a> in <a
href="https://redirect.github.com/huggingface/diffusers/issues/10397">#10397</a></li>
</ul>
</blockquote>
</details>
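
For context on the Flux LoRA fixes listed in the notes above, here is a minimal sketch of the affected code path: loading a LoRA into a 4-bit bitsandbytes-quantized Flux model and unloading it again. This assumes diffusers 0.32.2 with `bitsandbytes` installed and access to the gated `black-forest-labs/FLUX.1-dev` checkpoint; the LoRA repo id is a placeholder, not a real checkpoint.

```python
import torch
from diffusers import BitsAndBytesConfig, FluxPipeline, FluxTransformer2DModel

model_id = "black-forest-labs/FLUX.1-dev"

# Quantize the Flux transformer to 4-bit NF4 via bitsandbytes, the
# configuration whose LoRA loading regressed before this patch (#10578).
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
transformer = FluxTransformer2DModel.from_pretrained(
    model_id,
    subfolder="transformer",
    quantization_config=quant_config,
    torch_dtype=torch.bfloat16,
)
pipe = FluxPipeline.from_pretrained(
    model_id, transformer=transformer, torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()

# Loading a LoRA into the 4-bit model works again in 0.32.2;
# the repo id below is a placeholder.
pipe.load_lora_weights("your-org/your-flux-lora")

# unload_lora_weights() restores the base weights; 0.32.2 also wires it
# up for Flux Control pipelines (#10206).
pipe.unload_lora_weights()
```

The Comfy UI single-file regression lives on a separate path, the `from_single_file()` loaders, which 0.32.2 again handles correctly for Flux Dev finetunes saved with the Comfy prefix.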
<details>
<summary>Commits</summary>
<ul>
<li><a
href="https://github.com/huggingface/diffusers/commit/560fb5f4d65b8593c13e4be50a59b1fd9c2d9992"><code>560fb5f</code></a>
Release: v0.32.2</li>
<li><a
href="https://github.com/huggingface/diffusers/commit/8ab26ac9bfb36e6f55a9f638ef48c6d912eb94d4"><code>8ab26ac</code></a>
[Single File] Fix loading Flux Dev finetunes with Comfy Prefix (<a
href="https://redirect.github.com/huggingface/diffusers/issues/10545">#10545</a>)</li>
<li><a
href="https://github.com/huggingface/diffusers/commit/9f305e7ce24ac1f2419f45a3d353cd054ff4dcff"><code>9f305e7</code></a>
[CI] Update HF Token on Fast GPU Model Tests (<a
href="https://redirect.github.com/huggingface/diffusers/issues/10570">#10570</a>)</li>
<li><a
href="https://github.com/huggingface/diffusers/commit/2c25bf5befef693655c893cd9f264b03931db871"><code>2c25bf5</code></a>
[CI] Update HF Token in Fast GPU Tests (<a
href="https://redirect.github.com/huggingface/diffusers/issues/10568">#10568</a>)</li>
<li><a
href="https://github.com/huggingface/diffusers/commit/0e14cacffc24b7926c94d5aa7a56ccc8baf1a800"><code>0e14cac</code></a>
Fix batch &gt; 1 in HunyuanVideo (<a
href="https://redirect.github.com/huggingface/diffusers/issues/10548">#10548</a>)</li>
<li><a
href="https://github.com/huggingface/diffusers/commit/13ea83f0faecf6ef475d58c4137e563c1014fcc5"><code>13ea83f</code></a>
Fix HunyuanVideo produces NaN on PyTorch&lt;2.5 (<a
href="https://redirect.github.com/huggingface/diffusers/issues/10482">#10482</a>)</li>
<li><a
href="https://github.com/huggingface/diffusers/commit/2b432ac5a89d940eb9aac2b4cefaf55f0b30172e"><code>2b432ac</code></a>
Fix hunyuan video attention mask dim (<a
href="https://redirect.github.com/huggingface/diffusers/issues/10454">#10454</a>)</li>
<li><a
href="https://github.com/huggingface/diffusers/commit/263b973466ea2abc88b579fc8f89bf366ce69c0f"><code>263b973</code></a>
[LoRA] feat: support loading loras into 4bit quantized Flux models. (<a
href="https://redirect.github.com/huggingface/diffusers/issues/10578">#10578</a>)</li>
<li><a
href="https://github.com/huggingface/diffusers/commit/a663a67ea208aa59affebada6b7c8ee2fd9a6221"><code>a663a67</code></a>
[LoRA] clean up <code>load_lora_into_text_encoder()</code> and
<code>fuse_lora()</code> copied from...</li>
<li><a
href="https://github.com/huggingface/diffusers/commit/526858c80126e253a38f2735a51aa6f8f32f0206"><code>526858c</code></a>
[LoRA] Support original format loras for HunyuanVideo (<a
href="https://redirect.github.com/huggingface/diffusers/issues/10376">#10376</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/huggingface/diffusers/compare/v0.32.1...v0.32.2">compare
view</a></li>
</ul>
</details>
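
The HunyuanVideo fixes are easiest to see in pipeline code. Below is a minimal sketch of a batched call, assuming the diffusers-format community checkpoint `hunyuanvideo-community/HunyuanVideo` and a GPU with enough memory:

```python
import torch
from diffusers import HunyuanVideoPipeline, HunyuanVideoTransformer3DModel
from diffusers.utils import export_to_video

# Diffusers-format HunyuanVideo weights (repo id assumed here).
model_id = "hunyuanvideo-community/HunyuanVideo"

transformer = HunyuanVideoTransformer3DModel.from_pretrained(
    model_id, subfolder="transformer", torch_dtype=torch.bfloat16
)
pipe = HunyuanVideoPipeline.from_pretrained(
    model_id, transformer=transformer, torch_dtype=torch.float16
)
pipe.vae.enable_tiling()  # keeps VAE memory manageable for video
pipe.to("cuda")

# A two-prompt batch: exactly the batch size > 1 case fixed by #10548.
result = pipe(
    prompt=["a cat walking on grass", "a dog running on a beach"],
    height=320,
    width=512,
    num_frames=61,
    num_inference_steps=30,
)
for i, frames in enumerate(result.frames):
    export_to_video(frames, f"video_{i}.mp4", fps=15)
```

As of this release, `load_lora_weights()` on this pipeline also accepts LoRAs trained against the original HunyuanVideo repository's format (#10376), not only diffusers-format ones.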
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=diffusers&package-manager=pip&previous-version=0.32.1&new-version=0.32.2)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
dependabot[bot] authored Jan 16, 2025
1 parent e6fcca0 commit 7765bc3
Showing 1 changed file with 1 addition and 1 deletion.
samples/export-requirements.txt:

```diff
@@ -6,7 +6,7 @@ optimum-intel @ git+https://github.com/huggingface/optimum-intel.git
 numpy<2.0.0; sys_platform == 'darwin'
 einops==0.8.0 # For Qwen
 transformers_stream_generator==0.0.5 # For Qwen
-diffusers==0.32.1 # For image generation pipelines
+diffusers==0.32.2 # For image generation pipelines
 timm==1.0.13 # For exporting InternVL2
 torchvision # For visual language models
 transformers>=4.43 # For Whisper
```
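
To pick up the new pin locally, reinstall the samples requirements and sanity-check the resolved version. A minimal check, assuming a standard pip environment:

```python
# After running: pip install -r samples/export-requirements.txt
import diffusers

# The updated pin should resolve to the patched release.
assert diffusers.__version__ == "0.32.2", diffusers.__version__
print(f"diffusers {diffusers.__version__} is installed")
```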
