Improving automatic music classification performance by extracting features from different types of data
Author(s) -
Cory McKay,
Ichiro Fujinaga
Publication year - 2010
Publication title -
CiteSeerX (The Pennsylvania State University)
Language(s) - English
Resource type - Conference proceedings
DOI - 10.1145/1743384.1743430
Subject(s) - computer science, artificial intelligence, musical, statistical classification, feature extraction, pattern recognition (psychology), data mining, art, visual arts
This paper discusses two sets of automatic musical genre classification experiments. The first set was designed to examine the utility of combining features extracted from separate and independent audio, symbolic, and cultural sources of musical information. The results indicate that combining feature types can substantively improve classification accuracy and reduce the seriousness of those misclassifications that do occur. The second set of experiments examined which high-level features were most important in successfully classifying symbolic data; features associated with instrumentation were found to be particularly effective. Promising research directions are then proposed based on these results. The paper also presents the jMIR toolset, which was used to carry out the experiments and which is particularly well suited to combining information extracted from different types of data sources. jMIR is a free and open-source software suite designed for applications related to automatic music classification of various kinds.
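The core idea of the first set of experiments — concatenating independently extracted audio, symbolic, and cultural feature vectors into one combined vector before classification — can be illustrated with a minimal sketch. This is not jMIR's actual API; the feature values, genre labels, and the simple nearest-centroid classifier below are all hypothetical stand-ins for illustration.

```python
# Sketch (not jMIR's API): combine feature vectors extracted from
# different data types into one vector per piece, then classify with
# a simple nearest-centroid rule. All values are illustrative.
import math

def combine_features(audio, symbolic, cultural):
    """Concatenate independently extracted feature vectors."""
    return audio + symbolic + cultural

def nearest_centroid(train, query):
    """train: {genre: [feature vectors]}; return the genre whose
    per-genre centroid is closest to the query vector."""
    best_genre, best_dist = None, float("inf")
    for genre, vectors in train.items():
        dim = len(vectors[0])
        centroid = [sum(v[i] for v in vectors) / len(vectors)
                    for i in range(dim)]
        dist = math.dist(centroid, query)
        if dist < best_dist:
            best_genre, best_dist = genre, dist
    return best_genre

# Hypothetical training data: each piece already has audio, symbolic,
# and cultural features extracted by separate processes.
train = {
    "baroque": [combine_features([0.1, 0.2], [0.9], [0.0]),
                combine_features([0.2, 0.1], [0.8], [0.1])],
    "jazz":    [combine_features([0.8, 0.7], [0.2], [0.9]),
                combine_features([0.9, 0.6], [0.3], [0.8])],
}

query = combine_features([0.15, 0.18], [0.85], [0.05])
print(nearest_centroid(train, query))  # → baroque
```

The point of the sketch is only that, once each extractor emits a fixed-length vector, combining sources reduces to concatenation, after which any standard classifier can be applied unchanged.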
