Instructions: Running on AMD 6800XT Radeon using ROCm within a docker container #670
Replies: 6 comments 1 reply
-
Can I just change the requirements.txt file and run the usage instructions?
-
In addition to the change in requirements.txt, you need a version of PyTorch that works with the installed ROCm version. The last section of the initial post lists some combinations I tried; only two of them would generate audio.
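As a hedged illustration of that matching: PyTorch publishes ROCm-specific wheel indexes, so the torch build can be pinned to the host's ROCm release. The `rocm5.7` index path below is an assumption and should be swapped for whatever matches your installed ROCm; the install line is left commented so only the harmless pip check runs as-is.

```shell
# Hedged example: install a ROCm build of PyTorch from the official wheel index.
# The rocm5.7 path is an assumption; match it to the ROCm version on your host.
# python3 -m pip install torch torchaudio --index-url https://download.pytorch.org/whl/rocm5.7

# Confirm pip targets the interpreter you intend to use before installing:
python3 -m pip --version
```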
-
The link https://rocm.docs.amd.com/en/latest/deploy/linux/prerequisites.html has died. There's an archived version at https://web.archive.org/web/20230926163713/https://rocm.docs.amd.com/en/latest/deploy/linux/prerequisites.html. That version is for ROCm 5.7.0, which I suppose is close enough to the 5.7.1 you note.
-
Looks like it got moved to here:
-
@mrllama123 Yes! It works without Docker: starting with Ubuntu 22.04, installing ROCm per AMD's instructions, and then manually running the components of your Dockerfile, I am up and running on my 7900XT.
-
I got it working on my RX 6800 without Docker, but I cannot get DeepSpeed to work. I get this error:
-
I had a hard time getting tortoise running with my Radeon 6800 XT, so I'm sharing these instructions to help whoever else might be struggling. Keep in mind that I am not a Docker expert, and I had no experience with tortoise or ROCm before trying this, so I know there are places that could be simplified. I tried a bunch of different combinations of ROCm/torch versions, and this is the first one that worked after trying all day. The instructions below do work for me, and if you've found them while trying to get tortoise working on an AMD card, I hope they work for you too.
Prepare Host - ROCm
The host must have rocm installed. Use AMD's instructions:
Prerequisites:
https://rocm.docs.amd.com/en/latest/deploy/linux/prerequisites.html
Note: I installed the kernel headers and dev packages, and followed the instructions for group permissions. I expect both are unnecessary for this use case.
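For reference, those two prerequisite steps can be sketched as below. The package and group names are the ones AMD's Ubuntu docs use, but treat them as assumptions for your distro; the sketch only writes the commands to a script and syntax-checks it, so nothing is installed until you run the script yourself with sudo access.

```shell
# Sketch of the prerequisite steps (Ubuntu package/group names from AMD's docs).
# Written to a script for review; run it on the real host when ready.
cat > rocm-prereqs.sh <<'EOF'
# Kernel headers and extra modules matching the running kernel:
sudo apt install "linux-headers-$(uname -r)" "linux-modules-extra-$(uname -r)"
# Group permissions so your user can reach the GPU device nodes:
sudo usermod -a -G render,video "$LOGNAME"
EOF
sh -n rocm-prereqs.sh   # syntax check only; does not install anything
```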
And instructions:
https://rocm.docs.amd.com/en/latest/deploy/linux/os-native/install.html
Note: Because we will be using docker, only steps (1) and (2) are required on the host to get the kernel-mode driver installed.
Follow the steps to verify the Kernel-mode Driver and ROCm Installation.
Note: At the time of writing this, I used ROCm 5.7.1 on the host running Ubuntu 20.04.6 LTS
Prepare Host - Tortoise-TTS
Install git if needed.
Make a directory to work out of and cd into it.
Run
git clone https://github.com/neonbjb/tortoise-tts.git
Change ./tortoise-tts/requirements.txt as follows:
"tokenizers" -> "tokenizers==0.13.3"
Otherwise you will get this error:
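The pin can also be applied non-interactively with sed. A sketch, demonstrated on a throwaway stand-in file (the second line is a placeholder, not a real tortoise dependency); run the same sed against ./tortoise-tts/requirements.txt in your checkout:

```shell
# Stand-in file for demonstration only; the real target is
# ./tortoise-tts/requirements.txt in your checkout.
printf 'tokenizers\nsome-other-dep\n' > /tmp/requirements-demo.txt

# Pin tokenizers to the version that works with tortoise's dependencies:
sed -i 's/^tokenizers$/tokenizers==0.13.3/' /tmp/requirements-demo.txt
cat /tmp/requirements-demo.txt
```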
Docker
Add this "Dockerfile"
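The Dockerfile itself did not survive the copy into this page, so here is a hypothetical minimal sketch of one, not the author's original. The base-image tag, paths, and install steps are all assumptions; `rocm/pytorch:latest` ships ROCm plus a matching torch build, but you should prefer a tag on Docker Hub pinned to the host's ROCm version.

```dockerfile
# Hypothetical sketch, not the thread's original Dockerfile.
# Prefer a rocm/pytorch tag pinned to the host's ROCm version over :latest.
FROM rocm/pytorch:latest

WORKDIR /app
COPY tortoise-tts /app/tortoise-tts

# Install tortoise and its (patched) requirements into the image's python.
RUN cd /app/tortoise-tts \
    && pip install -r requirements.txt \
    && python setup.py install
```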
Build with something like
docker build . --network=host -t tts-rocm
From scratch, the build took more than 10 minutes and generated build artifacts totaling ~32 GB.
Then run it with something like
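The run command also did not survive the copy, so the sketch below is a hedged reconstruction, not the author's exact invocation. The `--device`, `--group-add`, and `--security-opt` flags are the ones AMD documents for giving a container GPU access; the volume mount and `tts-rocm` image name follow the build step above. The function only echoes the command so it can be reviewed before running on the host.

```shell
# Hedged sketch (the thread's exact command wasn't preserved). The device and
# security flags are those AMD documents for GPU access from a container.
# Echoed here for review; run the printed command on the host.
run_cmd() {
  echo 'docker run -it --network=host' \
       '--device=/dev/kfd --device=/dev/dri' \
       '--group-add video --security-opt seccomp=unconfined' \
       '-v "$PWD/results:/results"' \
       'tts-rocm'
}
run_cmd
```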
Test
Test that you get "True" back from this call, with no errors:
python3 -c "import torch; print(torch.cuda.is_available());torch.zeros(1).cuda()"
And then try it out:
python3 tortoise/do_tts.py --output_path /results --preset ultra_fast --voice angie --text "hello world"
I got a 10-20x speedup.
Caveat:
I do get this warning while running. A quick look suggests it's a PyTorch issue; I didn't dig further.
Bonus: here are some of the ROCm/torch version combinations I tried, and the outcome.