Add Music Accompaniment Generator Project to ML-Nexus #121
Melodic accompaniment is an indispensable part of music creation, but it has traditionally required substantial professional expertise. This poses a barrier to people without a deep musical background who are nonetheless motivated, whether by interest or other reasons, to write accompaniments. A tool is therefore needed that simplifies melodic accompaniment and enables non-specialists to take part in simple accompaniment activities. At the same time, the Transformer architecture, which has emerged in deep learning in recent years, has demonstrated strong performance and great potential across many fields. Applying Transformer-based deep learning techniques to simplify the task of melodic accompaniment is therefore both feasible and of real research value.
I use Python and the PyTorch framework to design and implement a Transformer-based automatic melody accompaniment model.
In this project, I implement a rule-based melody recognition algorithm and a rule-based alignment algorithm to preprocess the dataset. I then construct the Transformer model, including the embedding layer, the positional embedding layer, the self-attention layers, and the encoder-decoder structure. The model accepts a MIDI file containing a main melody and generates a corresponding MIDI file containing the main melody track together with accompaniment tracks for instruments such as piano, guitar, bass, and drums. The project also provides the training, evaluation, and validation results of the model.
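The PR description lists a positional embedding layer among the model components but does not show its code. As a minimal sketch (assuming the fixed sinusoidal scheme from the original Transformer rather than a learned embedding; the function name and dimensions below are illustrative, not taken from this project), the table can be built in plain Python:

```python
import math

def sinusoidal_positional_encoding(max_len, d_model):
    """Build a max_len x d_model table of sinusoidal positional encodings.

    Even dimensions use sin, odd dimensions use cos, with wavelengths
    forming a geometric progression from 2*pi up to 10000*2*pi.
    """
    table = [[0.0] * d_model for _ in range(max_len)]
    for pos in range(max_len):
        for i in range(0, d_model, 2):
            # Angle shrinks as the dimension index i grows.
            angle = pos / (10000 ** (i / d_model))
            table[pos][i] = math.sin(angle)
            if i + 1 < d_model:
                table[pos][i + 1] = math.cos(angle)
    return table

# A 128-step table for 16-dimensional token embeddings; in the model this
# table would be added to the token embeddings before the encoder.
pe = sinusoidal_positional_encoding(max_len=128, d_model=16)
```

In the actual model these values would typically be registered as a non-trainable buffer and added to the note-token embeddings before the first self-attention layer.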