Replies: 1 comment
-
Not sure if this helps your comparison, but I have a Windows laptop with 16GB of RAM and a GeForce GTX 1080, with tortoise-tts installed under WSL. I use deepspeed and kv_cache but not the half flag, and my generations take about 40 minutes per short sentence. Hopefully someone with a Mac will be able to chime in eventually. Edit: added that I use kv_cache as well.
-
I have a MacBook Pro with 32GB of memory and an M1 chip. I tried generating three sentences totaling 215 characters with the "fast" preset, and it took 23 minutes. Is that the expected duration, or did I make a mistake when using tortoise-tts, such as not enabling mps? It seems very slow to me. I'm not sure whether it utilized all of the available hardware acceleration; I can't find any useful information in the log.
In case it's helpful, here is the code I used, essentially extracted from do_tts.py: