# [Image2Video] Animate a given image with AnimateDiff and ControlNet
## Setup the environment

```bash
conda create --name controlgif python=3.10
conda activate controlgif
# torch 1.13.1 built against CUDA 11.7; other install options are listed at
# https://pytorch.org/get-started/previous-versions/
conda install pytorch==1.13.1 torchvision==0.14.1 torchaudio==0.13.1 pytorch-cuda=11.7 -c pytorch -c nvidia
pip install -r requirements.txt
```
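As a quick sanity check before downloading any checkpoints, you can confirm that the expected torch build sees your GPU (a minimal sketch, not part of the original instructions):

```python
# Verify the torch/CUDA install from the step above.
import torch

print(torch.__version__)          # expected: 1.13.1
print(torch.version.cuda)         # expected: 11.7
print(torch.cuda.is_available())  # expected: True on a CUDA-capable machine
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # e.g. an RTX 3090
```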
## Download the checkpoints

- Clone stable-diffusion-v1-5 from Hugging Face into ./checkpoints (in diffusers format, i.e. a directory of submodules rather than a single .ckpt or .safetensors file); see the sketch after the layout below for one way to fetch it.
- Download personalized base models from civitai into ./checkpoints/base_models (the one I use most frequently is dreamshaper).
- Download the motion model from here into ./checkpoints/unet_temporal.
- Download the controlnet model from here into ./checkpoints/controlnet.
The checkpoints directory should end up looking like this:

```
checkpoints
├── base_models
│   └── dreamshaper_8.safetensors
├── controlnet
│   └── controlnet_checkpoint.ckpt
├── stable-diffusion-v1-5
│   ├── feature_extractor
│   ├── safety_checker
│   ├── scheduler
│   ├── text_encoder
│   ├── tokenizer
│   ├── unet
│   └── vae
└── unet_temporal
    ├── motion_checkpoint_less_motion.ckpt
    └── motion_checkpoint_more_motion.ckpt
```
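The instructions above only say to git clone from Hugging Face; as one hedged alternative, the diffusers-format weights can also be fetched programmatically with huggingface_hub (assuming the standard runwayml/stable-diffusion-v1-5 repo id is the intended source):

```python
# A minimal sketch: fetch stable-diffusion-v1-5 in diffusers format.
# Assumption: runwayml/stable-diffusion-v1-5 on the Hugging Face Hub is the
# intended source; any mirror with the same diffusers layout works as well.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="runwayml/stable-diffusion-v1-5",
    local_dir="./checkpoints/stable-diffusion-v1-5",
)
```

Equivalently, `git clone https://huggingface.co/runwayml/stable-diffusion-v1-5 checkpoints/stable-diffusion-v1-5` works if git-lfs is installed.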
## Run

```bash
conda activate controlgif
python app.py
```
I can run it on my RTX 3090 (24 GB). If you remove the clip_interrogator module, it can run on lower VRAM, e.g. 16 GB or 12 GB.
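Beyond dropping clip_interrogator, a further option is the standard diffusers memory helpers, sketched below under the assumption that app.py builds a diffusers-style pipeline object (called `pipe` here, a hypothetical name):

```python
# Hypothetical VRAM-saving tweaks, assuming `pipe` is a diffusers-style
# pipeline constructed in app.py. All three are standard diffusers methods
# that trade some speed for lower peak memory.
pipe.enable_attention_slicing()   # compute attention in smaller chunks
pipe.enable_vae_slicing()         # decode frames through the VAE one at a time
pipe.enable_model_cpu_offload()   # requires `accelerate`; offloads idle submodules to CPU
```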
This method may produce poor results on some portraits (fixing this is on my TODO list). An SDXL version of the same method is under development, so stay tuned!
If you have any questions, please open an issue or email me at [email protected].

The code in this repository is derived from AnimateDiff and Diffusers.