This talk/repo shows how to use a recurrent neural network (specifically, an LSTM) to generate music from training MIDI files, using TensorFlow's Magenta project and Ruby.
- Install the Python requirements (`pip install -r requirements.txt`). We recommend using virtualenv to manage your Python version(s) and dependencies.
- Install the Ruby requirements (`bundle`).
You can train and use the LSTM neural network as follows:
- Place the training MIDI files in the `midi/` directory.
- Change to the Ruby directory: `cd src/rb`.
- Run the main Ruby file: `ruby main.rb`.
This will convert the MIDI files to a TFRecord file (which contains NoteSequence protocol buffers), create SequenceExamples from the TFRecord file, train the network on the data, and generate music from the resulting checkpoints. You can view the training and evaluation data via TensorBoard:

```
$ tensorboard --logdir=src/py/melody_rnn/checkpoints
```
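As a rough sketch, the four-step pipeline above corresponds to a sequence of Magenta command-line tools. The tool names below come from Magenta's melody_rnn scripts; the exact flags, paths, and helper name (`pipeline_commands`) are illustrative assumptions, and the commands this repo's `main.rb` actually runs may differ:

```ruby
# Hypothetical sketch: the shell commands a Ruby driver might build for each
# stage of the pipeline. Paths and flags are assumptions for illustration.
def pipeline_commands(midi_dir: 'midi', work_dir: 'src/py/melody_rnn')
  tfrecord = "#{work_dir}/notesequences.tfrecord"
  [
    # 1. MIDI files -> NoteSequence protocol buffers in a TFRecord file
    "convert_dir_to_note_sequences --input_dir=#{midi_dir} --output_file=#{tfrecord}",
    # 2. NoteSequences -> SequenceExamples for training and evaluation
    "melody_rnn_create_dataset --input=#{tfrecord} " \
      "--output_dir=#{work_dir}/sequence_examples --eval_ratio=0.10",
    # 3. Train the LSTM, writing checkpoints (viewable in TensorBoard)
    "melody_rnn_train --run_dir=#{work_dir}/checkpoints " \
      "--sequence_example_file=#{work_dir}/sequence_examples/training_melodies.tfrecord",
    # 4. Generate MIDI from the trained checkpoints
    "melody_rnn_generate --run_dir=#{work_dir}/checkpoints " \
      "--output_dir=generated --num_outputs=10"
  ]
end
```

Each command could then be run with `system(cmd)` from Ruby, checking the exit status between stages.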
Generated music will be written to the `generated/` directory in the root of this project. We use timidity to listen to it: `brew install timidity && timidity path_to_your.midi`.
- Generate MIDI files using Magenta and Python.
- Call into the Python code using the rubypython gem (this is currently super minimal).
- Help extend tensorflow.rb to more seamlessly integrate Ruby + Magenta.
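The rubypython bridge mentioned above can be sketched roughly as follows. `RubyPython.start`, `RubyPython.import`, and `RubyPython.stop` are the gem's actual entry points, but the Python module and function names (`generator`, `generate`) and the helper `generate_with_magenta` are hypothetical placeholders for whatever the Python side of this repo exposes:

```ruby
# Hedged sketch of calling Python from Ruby via the rubypython gem.
# The gem may not be installed, so we guard the require.
begin
  require 'rubypython'
  RUBYPYTHON_AVAILABLE = true
rescue LoadError
  RUBYPYTHON_AVAILABLE = false
end

def generate_with_magenta(py_dir, output_dir)
  return :rubypython_missing unless RUBYPYTHON_AVAILABLE

  RubyPython.start                          # boot the embedded Python interpreter
  sys = RubyPython.import('sys')
  sys.path.append(py_dir)                   # make the project's Python code importable
  generator = RubyPython.import('generator') # hypothetical module name
  generator.generate(output_dir)             # hypothetical function
  RubyPython.stop
  :ok
end
```

Usage might look like `generate_with_magenta('src/py/melody_rnn', 'generated')`; wrapping the bridge in one method keeps the interpreter start/stop lifecycle in a single place.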