Open Access
A MUSIC GENERATION BY A COMBINING MODEL OF RESNET AND LSTM NETWORKS
Author(s) - Kazuya Ozawa, Hiroyuki Okazaki
Publication year - 2022
Publication title - International Journal of Advanced Research
Language(s) - English
Resource type - Journals
ISSN - 2320-5407
DOI - 10.21474/ijar01/14474
Subject(s) - computer science, piano, long short-term memory, residual neural network, speech recognition, residual, artificial intelligence, deep learning, artificial neural network, recurrent neural network, machine learning, algorithm, art, art history
In this paper, a combined model of Residual Neural Networks (ResNet) and Long Short-Term Memory networks (LSTM) is proposed to automatically generate music for the melody part by deep learning, with training data collected from Chopin's piano pieces. First, to generate music for the melody part of a piano piece, a training dataset for deep learning is prepared. Secondly, music-generation experiments are presented using both an LSTM-only model and a combined model of LSTM and ResNet. Thirdly, the music generated by each model is compared and discussed. In conclusion, the principal results are summarized.
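The paper's full architecture is behind the access wall, but the abstract names its two ingredients: a ResNet feature extractor combined with an LSTM sequence model over melody notes. The following is a minimal sketch of one plausible way to combine them, not the authors' code: it assumes melody pitches are encoded as integer tokens (MIDI-style, 0-127) and that the network predicts the next note at each step. All class names, layer sizes, and the 1-D convolutional form of the residual blocks are illustrative assumptions.

```python
# Hypothetical ResNet + LSTM melody model (not the paper's implementation).
# Assumption: a melody is a sequence of integer pitch tokens in [0, 128).
import torch
import torch.nn as nn


class ResidualBlock(nn.Module):
    """1-D convolutional residual block applied along the time axis."""

    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv1d(channels, channels, kernel_size=3, padding=1)
        self.conv2 = nn.Conv1d(channels, channels, kernel_size=3, padding=1)
        self.relu = nn.ReLU()

    def forward(self, x):                 # x: (batch, channels, time)
        y = self.conv2(self.relu(self.conv1(x)))
        return self.relu(x + y)           # skip connection, as in ResNet


class ResNetLSTM(nn.Module):
    """ResNet blocks extract local note features; an LSTM models the
    long-range sequence; a linear head scores the next note."""

    def __init__(self, vocab_size: int = 128, channels: int = 64,
                 hidden: int = 256, num_blocks: int = 4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, channels)
        self.blocks = nn.Sequential(
            *[ResidualBlock(channels) for _ in range(num_blocks)])
        self.lstm = nn.LSTM(channels, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab_size)

    def forward(self, notes):             # notes: (batch, time) int tokens
        x = self.embed(notes).transpose(1, 2)   # (batch, channels, time)
        x = self.blocks(x).transpose(1, 2)      # (batch, time, channels)
        out, _ = self.lstm(x)
        return self.head(out)             # per-step next-note logits


# Smoke test: next-note logits for two random 32-step melody fragments.
model = ResNetLSTM()
melody = torch.randint(0, 128, (2, 32))
print(model(melody).shape)                # torch.Size([2, 32, 128])
```

Under these assumptions the model is trained as a next-note classifier (cross-entropy against the input shifted by one step) and generates music by sampling from the logits autoregressively; an LSTM-only baseline is the same network with `self.blocks` removed.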
