Translating Melody to Chord: Structured and Flexible Harmonization of Melody With Transformer

Recent deep learning approaches to melody harmonization have achieved remarkable performance by overcoming the uneven chord distributions of music data. However, most of these approaches do not attempt to capture the original melodic structure or to generate structured chord sequences with appropriate rhythms. Hence, we use a Transformer-based architecture that directly maps lower-level melody notes into a semantically higher-level chord sequence. In particular, we encode the binary piano roll of a melody into a note-based representation. Furthermore, we address the flexible generation of various chords with a Transformer extended by a VAE framework.
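To make the note-based encoding concrete, the sketch below converts a binary piano roll into (pitch, onset, duration) tuples. This is a minimal illustration assuming a (time steps x 128 pitches) binary matrix; the paper's exact note vocabulary and tokenization may differ.

```python
import numpy as np

def piano_roll_to_notes(roll: np.ndarray):
    """Convert a binary piano roll (time_steps x 128) into a
    note-based sequence of (pitch, onset, duration) tuples.
    Hypothetical helper for illustration only."""
    notes = []
    num_steps, num_pitches = roll.shape
    for pitch in range(num_pitches):
        onset = None
        for t in range(num_steps):
            active = roll[t, pitch] > 0
            if active and onset is None:
                onset = t                                 # note starts
            elif not active and onset is not None:
                notes.append((pitch, onset, t - onset))   # note ends
                onset = None
        if onset is not None:                             # held until the end
            notes.append((pitch, onset, num_steps - onset))
    # Sort by onset so the sequence reads left to right in time.
    return sorted(notes, key=lambda n: (n[1], n[0]))
```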

We propose three Transformer-based melody harmonization models: 1) a standard Transformer-based model for the neural translation of a melody to chords (STHarm); 2) a variational Transformer-based model for learning the global representation of complete music (VTHarm); and 3) a regularized variational Transformer-based model for the controllable generation of chords (rVTHarm). Experimental results demonstrate that the proposed models generate more structured and diverse chord sequences than LSTM-based models.
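The following is a minimal PyTorch sketch of the variational idea behind VTHarm: a Transformer encoder summarizes the melody into a global latent z, and the decoder generates chords conditioned on both the melody memory and z. The dimensions, vocabulary sizes, and conditioning scheme here are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class VTHarmSketch(nn.Module):
    """Sketch of a variational Transformer harmonizer:
    melody encoder -> global latent z (VAE) -> chord decoder.
    Hyperparameters and conditioning are assumptions for illustration."""

    def __init__(self, melody_dim=128, chord_vocab=96, d_model=256, z_dim=16):
        super().__init__()
        self.melody_proj = nn.Linear(melody_dim, d_model)
        self.chord_emb = nn.Embedding(chord_vocab, d_model)
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=4,
            num_encoder_layers=3, num_decoder_layers=3,
            batch_first=True)
        self.to_mu = nn.Linear(d_model, z_dim)       # latent mean
        self.to_logvar = nn.Linear(d_model, z_dim)   # latent log-variance
        self.z_proj = nn.Linear(z_dim, d_model)      # broadcast z to decoder memory
        self.out = nn.Linear(d_model, chord_vocab)

    def forward(self, melody, chords_in):
        # Encode the melody (batch, time, melody_dim) into a memory sequence.
        memory = self.transformer.encoder(self.melody_proj(melody))
        # Pool a global summary and sample z via the reparameterization trick.
        pooled = memory.mean(dim=1)
        mu, logvar = self.to_mu(pooled), self.to_logvar(pooled)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        # Condition the decoder on z by adding it to every memory position.
        memory = memory + self.z_proj(z).unsqueeze(1)
        tgt = self.chord_emb(chords_in)
        tgt_mask = nn.Transformer.generate_square_subsequent_mask(tgt.size(1))
        dec = self.transformer.decoder(tgt, memory, tgt_mask=tgt_mask)
        # Return chord logits plus (mu, logvar) for the KL term of the VAE loss.
        return self.out(dec), mu, logvar
```

Dropping the latent branch recovers a plain encoder-decoder (the STHarm setting), while adding a regularization term that ties a latent dimension to an interpretable attribute would correspond to the controllable rVTHarm variant.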
