
Macbook M1 PRO (extremely slow!) #178

Open
dxcore35 opened this issue Jan 2, 2025 · 13 comments
@dxcore35

dxcore35 commented Jan 2, 2025

As there is only CPU support for non-NVIDIA folks, I tried to create a small audiobook from a text file:

| Category | Details |
|---|---|
| Hardware | MacBook M1 Pro Max |
| Software | Docker |
| Input file | TXT file, 11K words |
| CPU usage | >10% |
| Time | ❗️ Running for 10 minutes, only 12% done |

I don't know, but this is completely useless on macOS.

@ROBERT-MCDOWELL
Collaborator

So sell your Mac and buy a PC with an NVIDIA card... Frankly, this is not an ebook2audiobook issue but an issue with all the A.I. libraries we use for speech (TensorFlow, Torch, Coqui-TTS, etc.), and the M1 is well known to not be supported by any of them...

@dxcore35
Author

dxcore35 commented Jan 2, 2025

I checked, and there is support for Mac:

Native Apple Silicon (M1/M2) Support for TensorFlow, PyTorch, and Coqui-TTS

| Library | Native Support | Installation Commands | GPU Support | Metal API Backend Details | GPU Verification |
|---|---|---|---|---|---|
| TensorFlow | ✅ Yes (2.5.0+) | `pip install tensorflow-macos tensorflow-metal` | Enabled via tensorflow-metal | Optimized with Metal Performance Shaders (MPS) | `import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))` |
| PyTorch | ✅ Yes (1.12.0+) | `pip install torch torchvision torchaudio` | Enabled via MPS backend | Supports Metal via Metal Performance Shaders (MPS) | `import torch; print(torch.backends.mps.is_available())` |
| Coqui-TTS | ✅ Yes | `pip install TTS` (requires TensorFlow or PyTorch) | Depends on TensorFlow/PyTorch GPU | Relies on TensorFlow/PyTorch for Metal integration | Same as TensorFlow/PyTorch verification, based on the underlying library used |

Notes

  • Metal Support: Apple’s Metal API is used to enable GPU acceleration for deep learning libraries, offering improved performance on Apple Silicon.
  • Requirements:
    • macOS 11.0+ (Big Sur or later)
    • Xcode Command Line Tools
  • Performance:
    • TensorFlow offers the most mature Metal support.
    • PyTorch’s MPS backend is functional but still evolving.
    • Coqui-TTS performance depends on the framework it's paired with (TensorFlow or PyTorch).

@ROBERT-MCDOWELL
Collaborator

ROBERT-MCDOWELL commented Jan 2, 2025

Yes, you can use these libraries on an M1, but with your CPU, not the GPU, unless you have an NVIDIA card installed.
Or, if I'm wrong, modify our code and send a PR.

@ROBERT-MCDOWELL
Collaborator

ROBERT-MCDOWELL commented Jan 2, 2025

For now, what we can do is check whether MPS is available; it will be more optimized, but it will still work on CPU.
While MPS can speed up inference, training performance might still lag behind CUDA.
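A minimal device-selection sketch of the check described above (this is my own illustration, not the actual ebook2audiobook patch): prefer MPS when available, then CUDA, and fall back to CPU otherwise. The `getattr` guard keeps it safe on torch builds older than 1.12, which lack the MPS backend entirely.

```python
def pick_device():
    """Return the best available torch device name, falling back to CPU."""
    try:
        import torch
    except ImportError:
        return "cpu"  # torch not installed: CPU-only path
    mps = getattr(torch.backends, "mps", None)
    if mps is not None and mps.is_available():
        return "mps"   # Apple Silicon GPU via Metal
    if torch.cuda.is_available():
        return "cuda"  # NVIDIA GPU
    return "cpu"

print("selected device:", pick_device())
```

Models and tensors would then be moved with `.to(pick_device())`, so the same code path runs on M1, NVIDIA, or plain CPU machines.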

@ROBERT-MCDOWELL
Copy link
Collaborator

ROBERT-MCDOWELL commented Jan 2, 2025

Added "mps" device for the next git update/release, thanks to test if it will work for you.

@ROBERT-MCDOWELL ROBERT-MCDOWELL self-assigned this Jan 2, 2025
@ROBERT-MCDOWELL ROBERT-MCDOWELL added the feature request feature requests for making ebook2audiobookxtts better label Jan 2, 2025
@dxcore35
Author

dxcore35 commented Jan 2, 2025

For tensorflow-macos

TensorFlow on Apple Silicon utilizes the CPU’s multiple cores (using the tensorflow-macos version) without needing NUMA-like handling, as the system’s unified memory allows for more efficient data sharing between cores.
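As a quick way to see what the claim above amounts to in practice, this sketch lists the devices TensorFlow reports (assumption: with tensorflow-macos plus tensorflow-metal on Apple Silicon a GPU entry appears; on other platforms it simply lists whatever CPU/GPU devices exist).

```python
# List the compute devices TensorFlow can see; degrade gracefully
# when TensorFlow is not installed at all.
try:
    import tensorflow as tf
    devices = [d.device_type for d in tf.config.list_physical_devices()]
except ImportError:
    devices = []  # tensorflow not installed
print("devices:", devices)
```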

@dxcore35
Author

dxcore35 commented Jan 2, 2025

For torch

I tried:

import torch

# Check MPS availability
if torch.backends.mps.is_available():
    device = torch.device("mps")  # Use GPU
    print("Using MPS (GPU):", device)

    # Example tensor on GPU
    x = torch.randn(3, 3).to(device)
    print("Tensor on GPU:", x)
else:
    print("MPS backend is not available. Running on CPU.")

I got:

Using MPS (GPU): mps
Tensor on GPU: tensor([[ 0.4005, -0.1923, -0.5197],
        [ 1.2033,  0.5933,  1.6042],

@ROBERT-MCDOWELL
Collaborator

ROBERT-MCDOWELL commented Jan 2, 2025

Torch is one thing; Coqui-TTS is another thing, using only a part of Torch, depending on its version, etc...
Nothing is easy in A.I. development, and a simple test with one library will not solve everything.
As I said (and you must read my comment above), a patch will be done for the next update.
So if you are OK to test once it's updated, I'll leave this issue open; if not, I'll close it.

@dxcore35
Author

dxcore35 commented Jan 2, 2025

Yes, I read it. I know Apple Silicon is for some reason not compatible with multiple AI libraries. I struggle to find any TTS that will run on this hardware...

Yes, I can test it on the same file; just please let me know when the new version is ready. I will test it and post the difference in speed.

@ROBERT-MCDOWELL
Collaborator

I don't know when I will update, as I'm stuck with other issues for now... Follow the repo and you'll receive an email when it's released.

@landerhe

landerhe commented Jan 6, 2025

Don't think about adding MPS support just yet; it will face an issue with the TTS model: MPS does not support convolution operations with output channels greater than 65536. It's definitely not a priority to-do, but it is one that will require a lot of looking into.
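One way to work around the limitation described above would be to route only the oversized layers to CPU. This is a hypothetical guard sketch: the constant value comes from the 65536 cap mentioned in this thread, but the helper name and structure are my own, not part of ebook2audiobook or PyTorch.

```python
# Cap on convolution output channels reported for the MPS backend
# in this thread (assumption: still accurate for current torch builds).
MPS_MAX_CONV_OUT_CHANNELS = 65536

def device_for_conv(out_channels, preferred="mps"):
    """Route conv layers that exceed the MPS channel cap to the CPU."""
    if preferred == "mps" and out_channels > MPS_MAX_CONV_OUT_CHANNELS:
        return "cpu"
    return preferred

print(device_for_conv(512))     # small layer can stay on MPS
print(device_for_conv(70000))   # oversized layer falls back to CPU
```

In practice, PyTorch also offers the `PYTORCH_ENABLE_MPS_FALLBACK=1` environment variable, which falls back to CPU for unsupported MPS operations, though per-layer routing like the sketch above gives more control.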

@DrewThomasson
Owner

Yup, as seen here, XTTS definitely has issues trying to run on MPS:

idiap/coqui-ai-TTS#65

@ROBERT-MCDOWELL
Collaborator

Never mind, it's added already, so maybe the coqui-tts fork guys will fix it in the near future.

@ROBERT-MCDOWELL ROBERT-MCDOWELL added fixed in next update (pending) and removed feature request feature requests for making ebook2audiobookxtts better labels Jan 10, 2025