train_tacotron.py: Random CUBLAS_STATUS_INTERNAL_ERROR #216
Comments
I'm seeing the same thing. Did you find a fix? Also, are you able to pick up training from where you left off?
I didn't find a fix, but I did find a workaround: automatically restarting after a crash (a rough sketch of what I mean is below).
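Something along these lines works for me. It's only a sketch: it assumes train_tacotron.py exits with a nonzero code when CUDA dies and resumes from the latest saved checkpoint on its own when relaunched; the wrapper file name and the bare command line are placeholders, not anything from the repo.

```python
# keep_training.py -- hypothetical auto-restart wrapper, not part of the repo.
# Re-launches train_tacotron.py whenever it exits with a nonzero code and
# relies on the script's own checkpointing to resume from the latest step.
import subprocess
import sys
import time

CMD = [sys.executable, "train_tacotron.py"]  # append your usual CLI args here

while True:
    result = subprocess.run(CMD)
    if result.returncode == 0:
        break  # training finished normally
    print(f"Training died with exit code {result.returncode}; restarting in 10 s...")
    time.sleep(10)  # brief pause so the GPU has a moment to free its memory
```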
Yep, it always restarts from the latest step for me.
Nice, that's a good solution. And yeah, I found out this morning that it picks up from where it left off very well. How many steps did you leave it to train for? I'm on 100k so far and will probably let it run until close to a million, I guess.
I just followed this guy's steps and fine-tuned the pre-trained model on my own data. I tried going up to 300k, but I found it starts getting worse after ~260k. I don't think I ever tried training it from scratch. 1 million steps? Wow, that would take quite a while on my hardware. Can I ask what GPU you're using and how fast your training goes?
Thanks. I was going to try fine-tuning, but I have a 17.5-hour dataset, so I thought I would just train from scratch as it's not too much smaller than the LJ Speech dataset. I'm on 102k steps and have been training off and on for the last 8 hours. However, there's been a fair bit of downtime messing with batch sizes to try and avoid memory crashes, so really only around 5–6 hours. I'm using a 2080 Super, and with a batch size of 32 I'm getting around 4–5 steps/second.
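In case it helps with comparing numbers, this is roughly how I eyeball the steps/second figure. It's only a sketch: train_step here is a hypothetical stand-in for one iteration of the training loop, not a function from the repo; only the timing logic is real.

```python
# Rough throughput check. `train_step` is a hypothetical stand-in for one
# iteration of the Tacotron training loop at the current batch size.
import time

def measure_steps_per_second(train_step, n_steps=200):
    start = time.perf_counter()
    for _ in range(n_steps):
        train_step()  # one forward/backward pass
    elapsed = time.perf_counter() - start
    return n_steps / elapsed

# Usage: print(f"{measure_steps_per_second(my_train_step):.1f} steps/s")
```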
Occasionally when training Tacotron (train_tacotron.py), CUDA throws an error and kills the training. I don't know why this happens; it seems almost random. Sometimes it happens 12 hours after starting, sometimes 15 minutes after starting.