Improvised emotion and genre detection for songs through signal processing and genetic algorithm
Author(s) -
Geetha Ramani R.,
Priya K.
Publication year - 2018
Publication title -
Concurrency and Computation: Practice and Experience
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.309
H-Index - 67
eISSN - 1532-0634
pISSN - 1532-0626
DOI - 10.1002/cpe.5065
Subject(s) - tamil , computer science , anger , speech recognition , formant , semitone , lyrics , classical music , musical , algorithm , literature , psychology , art , vowel , psychiatry
Summary Musical tunes are bundles of chords that convey emotion and span diverse genres. Prior work offers a copious body of research on emotion and genre classification, and the field continues to advance rapidly. Music takes various emotional forms such as happy, sad, anger, and fear, and various genre forms such as Classical, Country, Disco, Hip-hop, Jazz, and Rock. These emotions and genres can be distinguished by identifying the frequencies of chord notes (swarams in Tamil music). This paper deals with identifying emotions and genre for classical music, both Western classical and South Indian classical (Carnatic) music. The music was clipped and segmented, and the frequencies of notes were determined using the Short-Time Fourier Transform (STFT). Audio features such as mel frequency, pitch, beat, zero crossing rate, and spectral centroid were derived from the resulting frequency representation. Based on these audio features, emotion and genre were identified for the given data set using a genetic algorithm as the classification technique. The MIREX-Mood classification data set was considered for listing the emotions. Songs from the Million Song Dataset and an emotion classification repository were considered as ground truth for Western classical music, and a group of Illayaraja Tamil film songs was considered to identify Carnatic music emotions. Classification was performed with the genetic algorithm, with mel frequency, pitch, and zero crossing rate considered as individual representations to obtain the best fitness ratio, and it was found to give an accuracy of 99.03%.
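The abstract outlines a pipeline of STFT-based feature extraction followed by genetic-algorithm classification but gives no implementation details. The sketch below is a minimal, hypothetical illustration of that kind of pipeline, assuming librosa for the audio features and scikit-learn for a surrogate fitness measure; the file paths, GA settings, and the use of the genetic algorithm for feature-subset selection with a k-NN fitness are illustrative assumptions, not the authors' actual method.

```python
# Hypothetical sketch of the described pipeline: STFT-based feature
# extraction, then a simple genetic algorithm whose fitness is the
# cross-validated accuracy of a k-NN classifier on the selected features.
# All names and parameters here are illustrative, not from the paper.
import numpy as np
import librosa
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def extract_features(path, sr=22050):
    """Load a clip and derive the audio features named in the abstract."""
    y, sr = librosa.load(path, sr=sr, mono=True)
    # Short-Time Fourier Transform: per-frame frequency content from which
    # note frequencies can be read off.
    S = np.abs(librosa.stft(y))
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)        # mel-frequency coefficients
    zcr = librosa.feature.zero_crossing_rate(y)               # zero crossing rate
    centroid = librosa.feature.spectral_centroid(S=S, sr=sr)  # spectral centroid
    tempo, _ = librosa.beat.beat_track(y=y, sr=sr)            # beat (tempo)
    pitch = librosa.yin(y, fmin=65.0, fmax=2093.0, sr=sr)     # fundamental frequency
    # Summarise each time series by its mean so every clip maps to one vector.
    return np.hstack([mfcc.mean(axis=1), zcr.mean(), centroid.mean(),
                      np.atleast_1d(tempo).mean(), np.nanmean(pitch)])

def ga_feature_selection(X, labels, pop_size=30, generations=40, seed=None):
    """Toy GA: each chromosome is a binary mask over feature columns;
    fitness is 3-fold cross-validated accuracy of a 3-NN classifier."""
    rng = np.random.default_rng(seed)
    n_feat = X.shape[1]
    pop = rng.integers(0, 2, size=(pop_size, n_feat))

    def fitness(mask):
        if mask.sum() == 0:
            return 0.0
        clf = KNeighborsClassifier(n_neighbors=3)
        return cross_val_score(clf, X[:, mask.astype(bool)], labels, cv=3).mean()

    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        order = np.argsort(scores)[::-1]
        parents = pop[order[: pop_size // 2]]          # selection: keep the best half
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n_feat)              # single-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            flip = rng.random(n_feat) < 0.05           # per-gene mutation
            child[flip] ^= 1
            children.append(child)
        pop = np.vstack([parents, children])
    scores = np.array([fitness(ind) for ind in pop])
    return pop[scores.argmax()], scores.max()
```

As a usage example, one would stack `extract_features(p)` for every clip path `p` into a matrix `X`, pair it with emotion or genre labels, and call `ga_feature_selection(X, labels)` to obtain the best-scoring feature mask and its cross-validated accuracy; the paper's own genetic algorithm acts directly as the classifier, so this surrogate only illustrates the evolutionary loop (selection, crossover, mutation, fitness).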