Open Access
An Associative Memorization Architecture of Extracted Musical Features from Audio Signals by Deep Learning Architecture
Author(s) - Tadaaki Niwa, Keitaro Naruse, Ryosuke Ooe, Masahiro Kinoshita, Tamotsu Mitamura, Takashi Kawakami
Publication year - 2014
Publication title - Procedia Computer Science
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.334
H-Index - 76
ISSN - 1877-0509
DOI - 10.1016/j.procs.2014.09.032
Subject(s) - memorization, computer science, content addressable memory, associative property, process (computing), audio signal, musical composition, architecture, musical, representation (politics), artificial intelligence, pop music automation, music information retrieval, speech recognition, artificial neural network, linguistics, art, mathematics, pure mathematics, visual arts, philosophy, speech coding, politics, political science, law, operating system
In this paper, we develop an associative memorization architecture for musical features extracted from time-sequential data of music audio signals. The architecture is constructed using a deep learning architecture. The challenging goal of our research is to develop a new composition system that automatically creates new music based on existing music. How does a human composer make musical compositions or pieces? Generally speaking, a music piece is generated by a cyclic process of analyzing and re-synthesizing musical features during the creation procedure. This process can be simulated by learning models built on Artificial Neural Network (ANN) architectures. The first and critical problem is how to describe the music data, because in such models the description format has a great influence on learning performance and function. Most related works adopt symbolic representations of music data. However, we believe human composers never treat a music piece as a symbol; therefore, raw music audio signals are input to our system. The constructed associative model memorizes the musical features of music audio signals and regenerates the sequential data of that music. Based on experimental results of memorizing music audio data, we verify the performance and effectiveness of our system.
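The abstract does not give the network details, but the memorize-and-regenerate behaviour it describes can be illustrated with a small autoencoder over raw audio frames. The sketch below is an assumption-laden illustration rather than the authors' system: the window length, layer sizes, synthetic test signal, and the use of PyTorch are all choices made here for demonstration. Training the network to reproduce its own input frames plays the role of "memorization"; feeding the frames back through the trained network and concatenating the outputs plays the role of regenerating the sequential data.

```python
# Minimal sketch (not the authors' implementation): an autoencoder-style
# associative memory over short windows of a raw audio signal.
# Window size, hidden size, and the synthetic signal are assumptions.
import numpy as np
import torch
import torch.nn as nn

WINDOW = 256   # samples per input frame (assumed)
HIDDEN = 32    # size of the learned feature code (assumed)

class AssociativeAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(WINDOW, HIDDEN), nn.Tanh())
        self.decoder = nn.Sequential(nn.Linear(HIDDEN, WINDOW), nn.Tanh())

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Synthetic "audio": a sum of sinusoids standing in for a real music signal.
t = np.arange(WINDOW * 200) / 16000.0
signal = 0.5 * np.sin(2 * np.pi * 440 * t) + 0.3 * np.sin(2 * np.pi * 660 * t)
frames = torch.tensor(signal.reshape(-1, WINDOW), dtype=torch.float32)

model = AssociativeAutoencoder()
optim = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# "Memorize": train the network to reproduce its own input frames.
for epoch in range(200):
    optim.zero_grad()
    recon = model(frames)
    loss = loss_fn(recon, frames)
    loss.backward()
    optim.step()

# "Regenerate": pass the memorized frames through the trained network and
# concatenate the reconstructions back into a sequential signal.
with torch.no_grad():
    regenerated = model(frames).flatten().numpy()
print("reconstruction MSE:", float(loss))
```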
