Multimodal Music Emotion Recognition Method Based on the Combination of Knowledge Distillation and Transfer Learning
Author(s) -
Guiying Tong
Publication year - 2022
Publication title -
Scientific Programming
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.269
H-Index - 36
eISSN - 1875-919X
pISSN - 1058-9244
DOI - 10.1155/2022/2802573
Subject(s) - lyrics, computer science, generalization, speech recognition, singing, emotion recognition, artificial intelligence, distillation, pattern recognition (psychology), mathematics, art, mathematical analysis, chemistry, literature, management, organic chemistry, economics
The main difficulty in music emotion recognition is the lack of sufficient labeled data, so emotion recognition models are often trained only on labeled data with unbalanced categories. Accurate labeling of emotion categories is not only costly and time-consuming but also requires labelers with extensive musical background. At the same time, the emotion of music is affected by many factors: singing style, music genre, arrangement, lyrics, and other factors all influence how musical emotion is expressed. This paper proposes a multimodal method based on the combination of knowledge distillation and music style transfer learning and verifies its effectiveness on 20,000 songs. Experiments show that, compared with traditional methods such as single-audio, single-lyric, and multimodal audio-plus-lyric methods, the proposed method significantly improves both the accuracy of emotion recognition and the generalization ability.
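The abstract names knowledge distillation as a core component but does not give its formulation. As background, the standard distillation objective (in the style of Hinton et al.) combines a temperature-softened cross-entropy against a teacher model's outputs with an ordinary cross-entropy against the hard emotion label. The sketch below is a generic NumPy illustration of that loss, not the paper's actual implementation; the temperature and weighting values are illustrative assumptions.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax: higher T yields a softer distribution.
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, true_label,
                      temperature=2.0, alpha=0.5):
    """Weighted sum of soft-target cross-entropy (teacher -> student)
    and hard-label cross-entropy. alpha balances the two terms."""
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    # Cross-entropy against the teacher's softened distribution
    soft_loss = -np.sum(p_teacher * np.log(p_student + 1e-12))
    # Standard cross-entropy against the ground-truth emotion label
    hard_probs = softmax(student_logits)
    hard_loss = -np.log(hard_probs[true_label] + 1e-12)
    return alpha * soft_loss + (1 - alpha) * hard_loss
```

In a multimodal setup such as the one described, a teacher trained on one modality (e.g. lyrics) can supervise a student on another (e.g. audio) through this soft-target term, which is one common way distillation transfers knowledge across modalities.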