Open Access
A Discriminative Model to Generate Melodies through Evolving LSTM Recurrent Neural Networks
Author(s) -
Nanda Ashwin,
Uday Kumar Adusumilli,
Lakshmi Kurra,
Prof. Kemparaju N
Publication year - 2021
Publication title -
International Journal of Scientific Research in Science, Engineering and Technology
Language(s) - English
Resource type - Journals
eISSN - 2395-1990
pISSN - 2394-4099
DOI - 10.32628/ijsrset219411
Subject(s) - melody , discriminative model , computer science , recurrent neural network , set (abstract data type) , artificial intelligence , artificial neural network , speech recognition , training set , machine learning , art , musical , visual arts , programming language
The paper describes a method that uses evolving LSTM recurrent neural networks to generate melodic music through a discriminative model. The approach described achieves an accuracy of over 90%, enabling the model to understand and generate music according to the input parameters. The input expected from the user is minimal and can be provided by a layperson. The experiments presented here demonstrate how an LSTM can learn the structure of a musical training set and compose a novel (and pleasing) melody in that style. With appropriately chosen parameters, the LSTM can play melodies with good timing and appropriate structure. The RNN model presented in this paper leverages these strengths of LSTM networks and demonstrates how this can be achieved.
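To make the core idea concrete, the sketch below shows a minimal next-note LSTM melody model in Python (PyTorch). It is an illustration only, not the authors' implementation: the vocabulary size, integer note encoding, network dimensions, and the sampling routine are assumptions, and the paper's discriminative and evolutionary components are not reproduced here.

```python
# Minimal sketch (not the authors' implementation): an LSTM that predicts the
# next note of a melody. Notes are assumed to be integer-encoded (e.g. MIDI
# pitches mapped onto a small vocabulary).
import torch
import torch.nn as nn

class MelodyLSTM(nn.Module):
    def __init__(self, vocab_size=64, embed_dim=32, hidden_dim=128, num_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, num_layers, batch_first=True)
        self.head = nn.Linear(hidden_dim, vocab_size)

    def forward(self, notes, state=None):
        x = self.embed(notes)             # (batch, seq) -> (batch, seq, embed)
        out, state = self.lstm(x, state)  # LSTM carries long-range melodic context
        return self.head(out), state      # logits over the next note at each step

def generate(model, seed, length=32, temperature=1.0):
    """Sample a melody autoregressively from a seed list of note indices."""
    model.eval()
    notes = list(seed)
    state = None
    inp = torch.tensor([seed])
    with torch.no_grad():
        for _ in range(length):
            logits, state = model(inp, state)
            probs = torch.softmax(logits[0, -1] / temperature, dim=-1)
            nxt = torch.multinomial(probs, 1).item()
            notes.append(nxt)
            inp = torch.tensor([[nxt]])
    return notes
```

Training such a model typically minimises cross-entropy between the predicted and actual next notes over the training melodies; generation then samples from the learned distribution, with the temperature controlling how conservative or exploratory the output melody is.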
