Open Access
Fast Variational Block-Sparse Bayesian Learning
Author(s) -
Jakob Möderl,
Erik Leitinger,
Bernard H. Fleury,
Franz Pernkopf,
Klaus Witrisal
Publication year - 2025
Publication title -
IEEE Transactions on Signal Processing
Language(s) - English
Resource type - Journal
SCImago Journal Rank - 1.638
H-Index - 270
eISSN - 1941-0476
pISSN - 1053-587X
DOI - 10.1109/tsp.2025.3611234
Subject(s) - Signal Processing and Analysis; Communication, Networking and Broadcast Technologies; Computing and Processing
We propose a variational Bayesian (VB) implementation of block-sparse Bayesian learning (BSBL) to compute proxy probability density functions (PDFs) that approximate the posterior PDFs of the weights and associated hyperparameters in a block-sparse linear model, resulting in an iterative algorithm coined variational BSBL (VA-BSBL). The priors of the hyperparameters are selected to belong to the family of generalized inverse Gaussian distributions. This family contains as special cases commonly used hyperpriors such as the Gamma and inverse Gamma distributions, as well as Jeffreys' improper distribution. Inspired by previous work on classical sparse Bayesian learning (SBL), we investigate the update stage in which the proxy PDFs of a single block of weights and of its associated hyperparameter are successively updated, while keeping the proxy PDFs of the other parameters fixed. This stage defines a nonlinear first-order recurrence relation for the mean of the proxy PDF of the hyperparameter. By iterating this relation “ad infinitum”, we obtain a criterion that determines whether the so-generated sequence of hyperparameter means converges or diverges. Incorporating this criterion into the VA-BSBL algorithm yields a fast implementation, coined fast-BSBL (F-BSBL), which achieves a two-order-of-magnitude runtime improvement. We further identify the range of parameters of the generalized inverse Gaussian distribution that results in an inherent pruning procedure switching off “weak” components in the model, which is necessary to obtain sparse results. Lastly, we show that expectation-maximization (EM)-based and VB-based implementations of BSBL are identical methods, thereby extending a well-known result from classical SBL to BSBL. Consequently, F-BSBL coincides with BSBL using coordinate ascent to maximize the marginal likelihood. These results provide a unified framework for interpreting existing BSBL methods.
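
The abstract itself contains no pseudocode, but the kind of block-wise update it describes can be illustrated with a minimal sketch. The snippet below is not the authors' VA-BSBL or F-BSBL algorithm; it is a generic BSBL iteration under assumptions chosen for brevity: a Gaussian likelihood with known noise variance, one variance hyperparameter per equal-sized block, an EM-style hyperparameter update (which, per the abstract, coincides with the VB update of the hyperparameter mean), and a simple threshold-based pruning step. All function and parameter names, the threshold value, and the update rules are illustrative assumptions, not the paper's specification.

```python
import numpy as np

def bsbl_sketch(Phi, y, block_size, noise_var=1e-2, n_iter=50, prune_tol=1e-4):
    """Illustrative block-sparse Bayesian learning iteration (not the paper's F-BSBL).

    Assumed model: y = Phi @ w + n, weights partitioned into equal-sized blocks,
    block g having prior N(0, gamma_g * I), and Gaussian noise of known variance.
    """
    n, m = Phi.shape
    n_blocks = m // block_size
    gamma = np.ones(n_blocks)              # per-block variance hyperparameters
    active = np.ones(n_blocks, dtype=bool)  # blocks not yet pruned

    for _ in range(n_iter):
        idx = np.repeat(active, block_size)                 # active weight indices
        Phi_a = Phi[:, idx]
        prior_var = np.repeat(gamma[active], block_size)
        # Posterior covariance and mean of the active weights.
        Sigma = np.linalg.inv(Phi_a.T @ Phi_a / noise_var + np.diag(1.0 / prior_var))
        mu = Sigma @ Phi_a.T @ y / noise_var

        # EM-style hyperparameter update per active block:
        # gamma_g <- (||mu_g||^2 + tr(Sigma_gg)) / block_size.
        for k, g in enumerate(np.flatnonzero(active)):
            sl = slice(k * block_size, (k + 1) * block_size)
            gamma[g] = (mu[sl] @ mu[sl] + np.trace(Sigma[sl, sl])) / block_size

        # Pruning: switch off "weak" blocks whose variance collapses.
        active &= gamma > prune_tol

    # Final posterior mean on the surviving blocks, expanded to full length.
    idx = np.repeat(active, block_size)
    prior_var = np.repeat(gamma[active], block_size)
    Phi_a = Phi[:, idx]
    Sigma = np.linalg.inv(Phi_a.T @ Phi_a / noise_var + np.diag(1.0 / prior_var))
    w_hat = np.zeros(m)
    w_hat[idx] = Sigma @ Phi_a.T @ y / noise_var
    return w_hat, gamma, active
```

The speed-up described in the abstract comes from a different mechanism than the numerical threshold used in this sketch: by iterating the per-block recurrence relation for the hyperparameter mean “ad infinitum”, the authors obtain a closed-form convergence/divergence criterion that decides whether a block is kept or switched off without running repeated inner updates.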
