Project Page of Neural FCA

Neural Full-Rank Spatial Covariance Analysis for Blind Source Separation

Fig. 1: Overview of our unsupervised method called neural FCA.

Yoshiaki Bando, Kouhei Sekiguchi, Yoshiki Masuyama, Aditya Arie Nugraha, Mathieu Fontaine, and Kazuyoshi Yoshii

Abstract: This paper describes a neural blind source separation (BSS) method based on amortized variational inference (AVI) of a non-linear generative model of mixture signals. A classical statistical approach to BSS is to fit a linear generative model consisting of spatial and source models that represent the inter-channel covariances and power spectral densities of the sources, respectively. Although the variational autoencoder (VAE) has successfully been used as a non-linear source model with latent features, it must be pretrained on a sufficient amount of isolated signals. Our method, in contrast, enables the VAE-based source model to be trained only from mixture signals. Specifically, we introduce a neural mixture-to-feature inference model that directly infers the latent features from the observed mixture, and integrate it with a neural feature-to-mixture generative model consisting of a full-rank spatial model and a VAE-based source model. All the models are optimized jointly such that the likelihood of the training mixtures is maximized in the framework of AVI. Once the inference model is optimized, it can be used to estimate the latent features of the sources included in unseen mixture signals. The experimental results show that the proposed method outperformed state-of-the-art BSS methods based on linear generative models and was comparable to a method based on supervised learning of the VAE-based source model.
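To make the generative model concrete, here is a minimal NumPy sketch of the full-rank spatial covariance likelihood that the abstract describes: each source n is modeled by a power spectral density v[n,f,t] and a full-rank spatial covariance matrix H[n,f], and the mixture STFT is a zero-mean complex Gaussian whose covariance is the PSD-weighted sum of the spatial covariances. This is an illustrative sketch, not the authors' implementation; in neural FCA the PSDs v would be produced by the VAE decoder from latent features z, and z would be estimated by the mixture-to-feature inference network.

```python
import numpy as np

def fca_log_likelihood(X, v, H):
    """Log-likelihood of a mixture under a full-rank spatial covariance model.

    X : (F, T, M) complex observed mixture STFT (F freq bins, T frames, M mics)
    v : (N, F, T) non-negative source power spectral densities
        (in neural FCA, the output of the VAE decoder given latents z)
    H : (N, F, M, M) Hermitian positive-definite spatial covariance matrices
    """
    F, T, M = X.shape
    # Mixture covariance per time-frequency bin: Y_ft = sum_n v[n,f,t] * H[n,f]
    Y = np.einsum('nft,nfpq->ftpq', v, H)           # (F, T, M, M)
    # Complex Gaussian log-density: -M*log(pi) - log det(Y) - x^H Y^{-1} x
    _, logdet = np.linalg.slogdet(Y)                 # real log|det Y|
    Yinv_x = np.linalg.solve(Y, X[..., None])[..., 0]
    quad = np.einsum('ftm,ftm->ft', X.conj(), Yinv_x).real
    return float(np.sum(-M * np.log(np.pi) - logdet - quad))
```

Training would maximize this quantity (plus the usual VAE regularization terms) jointly over the inference and generative networks; at test time, sources can be extracted from the fitted model with a multichannel Wiener filter.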

Separation results for the spatialized WSJ0-2mix dataset

Three example mixtures are provided as audio demos. For each example, the page presents the input mixture, its dereverberation result, and the two separated sources (Src. 1 and Src. 2) obtained by each of the following methods: cACGMM, FCA, FastMNMF, Neural cACGMM, Pseudo supervised, Neural FCA (fix z), Neural FCA, and MVAE (supervised).

Source code

We value the reproducibility of research and are currently preparing our source code for release.