Open Access
Research on Musical Sentiment Classification Model Based on Joint Representation Structure
Author(s) - Zheng Chen, Ning Jia
Publication year - 2019
Publication title - Journal of Physics: Conference Series
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.21
H-Index - 85
eISSN - 1742-6596
pISSN - 1742-6588
DOI - 10.1088/1742-6596/1237/2/022086
Subject(s) - spectrogram , joint representation , convolutional neural network , artificial intelligence , speech recognition , pattern recognition , computer science
Traditional music emotion classification suffers from problems such as low classification accuracy, long training periods, and difficulty in meeting the individualized demand for theme music in people's lives. To address this, a neural network model based on a joint representation structure is designed. The model uses low-level descriptors and spectrograms to construct a joint representation that combines hand-crafted features with features learned by a convolutional recurrent neural network, thereby discriminating among music emotion subclasses. In the experiments, the proposed model is evaluated against a traditional CRNN model used as the baseline. The results show that the joint model improves music emotion classification accuracy compared with the traditional single-representation model.
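The abstract describes the architecture only at a high level. Below is a minimal sketch, in PyTorch, of what such a joint-representation classifier could look like: a convolutional recurrent branch over the spectrogram and a small dense branch over the hand-crafted low-level descriptors (LLDs), concatenated before the final classification layer. All layer sizes, the LLD dimensionality, and the number of emotion classes are illustrative assumptions, not values taken from the paper.

# Sketch of a joint-representation music emotion classifier (assumed sizes).
import torch
import torch.nn as nn


class JointEmotionClassifier(nn.Module):
    def __init__(self, n_mels=128, lld_dim=64, n_classes=4):
        super().__init__()
        # CRNN branch: 2-D convolutions over the mel spectrogram,
        # followed by a GRU that models the temporal structure.
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1),
            nn.BatchNorm2d(32),
            nn.ReLU(),
            nn.MaxPool2d((2, 2)),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.BatchNorm2d(64),
            nn.ReLU(),
            nn.MaxPool2d((2, 2)),
        )
        gru_in = 64 * (n_mels // 4)          # channels x reduced mel bins
        self.gru = nn.GRU(gru_in, 128, batch_first=True)
        # Hand-crafted branch: a small MLP over the low-level descriptors.
        self.lld_mlp = nn.Sequential(
            nn.Linear(lld_dim, 64),
            nn.ReLU(),
        )
        # Joint representation: concatenate both branches, then classify.
        self.classifier = nn.Linear(128 + 64, n_classes)

    def forward(self, spec, lld):
        # spec: (batch, 1, n_mels, time), lld: (batch, lld_dim)
        x = self.conv(spec)                   # (batch, 64, n_mels/4, time/4)
        x = x.permute(0, 3, 1, 2)             # (batch, time/4, 64, n_mels/4)
        x = x.flatten(start_dim=2)            # (batch, time/4, gru_in)
        _, h = self.gru(x)                    # h: (1, batch, 128)
        crnn_feat = h.squeeze(0)              # (batch, 128)
        lld_feat = self.lld_mlp(lld)          # (batch, 64)
        joint = torch.cat([crnn_feat, lld_feat], dim=1)
        return self.classifier(joint)         # emotion-class logits


# Example: a batch of 8 clips, 128 mel bins x 256 frames, 64 LLDs per clip.
model = JointEmotionClassifier()
logits = model(torch.randn(8, 1, 128, 256), torch.randn(8, 64))
print(logits.shape)  # torch.Size([8, 4])

Concatenating the two branches lets the classifier draw on both the learned spectro-temporal features and the hand-crafted descriptors, which is the core of the joint representation idea described in the abstract.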
