Open Access
Fast accuracy estimation of deep learning based multi-class musical source separation
Author(s) - Alexandru Mocanu, Benjamin Ricaud, Miloš Cerňak
Publication year - 2022
Publication title - Proceedings of the Northern Lights Deep Learning Workshop
Language(s) - English
Resource type - Journals
ISSN - 2703-6928
DOI - 10.7557/18.6241
Subject(s) - computer science, artificial intelligence, source separation, artificial neural network, oracle, deep learning, pattern recognition (psychology), machine learning, measure (data warehouse), data mining, software engineering
Music source separation is the task of extracting all the instruments from a given song. Recent breakthroughs on this challenge have gravitated around a single dataset, MUSDB, which is limited to four instrument classes. Building larger datasets with more instruments is costly, as is the data collection and the training of deep neural networks (DNNs). In this work, we propose a fast method to evaluate the separability of instruments in any dataset without training and tuning a DNN. This separability measure helps to select appropriate samples for the efficient training of neural networks. Based on the oracle principle with an ideal ratio mask, our approach is an excellent proxy for estimating the separation performance of state-of-the-art deep learning approaches such as TasNet or Open-Unmix. Our results contribute to revealing two essential points for audio source separation: 1) the ideal ratio mask, although light and straightforward, provides an accurate measure of the audio separability performance of recent neural nets, and 2) new end-to-end learning methods such as TasNet, which operate directly on waveforms, are in fact internally building a Time-Frequency (TF) representation, so that they encounter the same limitations as TF-based methods when separating audio patterns that overlap in the TF plane.
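To make the oracle idea in the abstract concrete, below is a minimal sketch (not the authors' code) of how an ideal-ratio-mask oracle can score the separability of one instrument: the mask is computed from the isolated stem and the mixture in the TF plane, the masked mixture is resynthesized, and a scale-invariant SDR is used as the separability score. The function names, the exact mask formula (one common ratio-mask variant), and the SI-SDR metric choice are illustrative assumptions.

```python
# Sketch of an ideal ratio mask (IRM) oracle as a separability proxy.
# Assumes access to the isolated source track ("stem") and the full mixture.
import numpy as np
from scipy.signal import stft, istft

def irm_separate(source, mixture, fs=44100, nperseg=4096, eps=1e-8):
    """Apply an oracle ratio mask for one source and resynthesize it."""
    _, _, S = stft(source, fs=fs, nperseg=nperseg)   # target source spectrogram
    _, _, X = stft(mixture, fs=fs, nperseg=nperseg)  # mixture spectrogram
    mask = np.clip(np.abs(S) / (np.abs(X) + eps), 0.0, 1.0)  # mask in the TF plane
    _, estimate = istft(mask * X, fs=fs, nperseg=nperseg)
    return estimate

def si_sdr(reference, estimate, eps=1e-8):
    """Scale-invariant SDR in dB; higher means the source is easier to separate."""
    n = min(len(reference), len(estimate))
    reference, estimate = reference[:n], estimate[:n]
    alpha = np.dot(estimate, reference) / (np.dot(reference, reference) + eps)
    target = alpha * reference
    noise = estimate - target
    return 10 * np.log10((np.sum(target**2) + eps) / (np.sum(noise**2) + eps))

# Usage: with mixture = sum of all stems,
#   score = si_sdr(stem, irm_separate(stem, mixture))
# A high oracle score suggests the instrument's TF patterns overlap little
# with the rest of the mix, so a trained DNN separator should also do well.
```

No network is trained or tuned here, which is what makes the estimate fast: the oracle mask upper-bounds what TF-masking separators can achieve, and, per the paper's second finding, waveform models like TasNet face the same TF-overlap limitation.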
