If you find this project useful, please give it a star ❤️❤️
- python (3.10 recommended)
- pip
- git
- ffmpeg
- Visual Studio 2022 runtimes (Windows only)
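Before installing, you can quickly confirm the prerequisites are available on your PATH (a minimal check; adjust for your shell):

```
python --version    # should report 3.10.x
pip --version
git --version
ffmpeg -version
```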
git clone https://github.com/Mrkomiljon/Deep-Live-Monitor.git
cd Deep-Live-Monitor
Download the two required model files and place them in the "models" folder.
We highly recommend working in a virtual environment (venv) to avoid dependency conflicts.
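For example (assuming Python 3.10; activate the environment before installing the requirements below):

```
python -m venv venv
# Windows
venv\Scripts\activate
# Linux / macOS
source venv/bin/activate
```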
pip install -r requirements.txt
🔥 Done! If you don't have a GPU, you should be able to run the program with the python run.py command. Keep in mind that the first run downloads some models, which can take time depending on your network connection.
If the GFPGAN model doesn't download automatically, download it manually!
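A plain CPU run looks like this (the --max-memory flag from the options list below can cap RAM usage; the value here is only an illustration):

```
python run.py

# optionally limit RAM usage, e.g. to 4 GB
python run.py --max-memory 4
```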
For NVIDIA GPU acceleration (CUDA):
- Install CUDA Toolkit 11.8
- Install dependencies:
pip uninstall onnxruntime onnxruntime-gpu
pip install onnxruntime-gpu==1.16.3
- Usage (once the provider is available):
python run.py --execution-provider cuda
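To verify that onnxruntime actually sees the GPU provider after installation, you can query it directly; this one-liner is just a diagnostic, not part of the project, and the same check works for the other providers below:

```
python -c "import onnxruntime; print(onnxruntime.get_available_providers())"
# the list should include 'CUDAExecutionProvider'
```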
For Apple Silicon Macs (CoreML):
- Install dependencies:
pip uninstall onnxruntime onnxruntime-silicon
pip install onnxruntime-silicon==1.13.1
- Usage (once the provider is available):
python run.py --execution-provider coreml
For older (legacy) Apple hardware (CoreML):
- Install dependencies:
pip uninstall onnxruntime onnxruntime-coreml
pip install onnxruntime-coreml==1.13.1
- Usage (once the provider is available):
python run.py --execution-provider coreml
For Windows GPUs (DirectML):
- Install dependencies:
pip uninstall onnxruntime onnxruntime-directml
pip install onnxruntime-directml==1.15.1
- Usage (once the provider is available):
python run.py --execution-provider directml
For Intel hardware (OpenVINO):
- Install dependencies:
pip uninstall onnxruntime onnxruntime-openvino
pip install onnxruntime-openvino==1.15.0
- Usage (once the provider is available):
python run.py --execution-provider openvino
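Whichever provider you install, it can be combined with the thread count option listed further below (shown here with CUDA; substitute the provider you set up, and treat the value as an example):

```
python run.py --execution-provider cuda --execution-threads 8
```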
🔥 Executing the `python run.py --execution-provider cuda` command will launch this window:
- Choose a face image (the face you want to use).
- Choose the target image or video (where you want to replace the face).
- Click Start.
- Open the file explorer and go to the output directory you selected.
- You’ll see a folder named after the video title where the frames are being processed.
- When it’s done, the output file will be ready in that folder.
- That’s it!
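The same swap can also be driven from the command line using the flags documented below (the file names here are placeholders):

```
python run.py -s my_face.jpg -t input_video.mp4 -o output/ --keep-fps --keep-audio
```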
To try the live mode, simply open any video on your desktop.
🔥 Additional command-line arguments are listed below. To learn what they do, check this guide.
options:
-h, --help show this help message and exit
-s SOURCE_PATH, --source SOURCE_PATH select a source image
-t TARGET_PATH, --target TARGET_PATH select a target image or video
-o OUTPUT_PATH, --output OUTPUT_PATH select output file or directory
--frame-processor FRAME_PROCESSOR [FRAME_PROCESSOR ...] frame processors (choices: face_swapper, face_enhancer, ...)
--keep-fps keep original fps
--keep-audio keep original audio
--keep-frames keep temporary frames
--many-faces process every face
--video-encoder {libx264,libx265,libvpx-vp9} adjust output video encoder
--video-quality [0-51] adjust output video quality
--max-memory MAX_MEMORY maximum amount of RAM in GB
--execution-provider {cpu} [{cpu} ...] available execution provider (choices: cpu, ...)
--execution-threads EXECUTION_THREADS number of execution threads
-v, --version show program's version number and exit
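As an illustration of how these options combine (paths and values are placeholders), a swap with the face enhancer enabled might look like this:

```
python run.py -s my_face.jpg -t target.mp4 -o result.mp4 --frame-processor face_swapper face_enhancer --many-faces --video-encoder libx264 --video-quality 18
```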
I would like to thank the original authors.
(Some demo images/videos above are sourced from image websites/repos. If there is any infringement, I will immediately remove them and apologize.)