A trust‐region method for the parameterized generalized eigenvalue problem with nonsquare matrix pencils
Author(s) - Li Jiaofen, Wang Kai, Liu Yueyuan, Duan Xuefeng, Zhou Xuelin
Publication year - 2021
Publication title - Numerical Linear Algebra with Applications
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.02
H-Index - 53
eISSN - 1099-1506
pISSN - 1070-5325
DOI - 10.1002/nla.2363
Subject(s) - mathematics , Stiefel manifold , eigenvalues and eigenvectors , trust region , rate of convergence , convergence , matrix (mathematics) , matrix pencil , mathematical optimization , eigendecomposition of a matrix , manifold (mathematics) , algorithm , computer science , pure mathematics
The ℓ-parameterized generalized eigenvalue problem for nonsquare matrix pencils, proposed by Chu and Golub [ SIAM J. Matrix Anal. Appl. , 28(2006), pp. 770‐787], can be formulated as an optimization problem on a corresponding complex product Stiefel manifold. Earlier algorithms rely only on first‐order information of the objective function, so fast convergence cannot be expected. In this article, we turn the generic Riemannian trust‐region method of Absil et al. into a practical algorithm for solving the underlying problem, one that enjoys global convergence and a local superlinear convergence rate. Numerical experiments illustrate the efficiency of the proposed method. Detailed comparisons with existing results for the special cases ℓ = 1 and ℓ = n are given first. For the case ℓ = n, the algorithm yields exactly the same optimal solution as Ito and Murota's algorithm, which is essentially a direct method for the problem, and for the case ℓ = 1 it runs faster than Boutry et al.'s algorithm. Further comparisons with earlier gradient‐based algorithms and with some recent infeasible methods for manifold optimization problems are also provided to show the merits of the proposed approach.
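
For intuition about the underlying optimization, the sketch below (illustrative only, not the authors' Riemannian trust-region algorithm) treats the simplest special case ℓ = 1 studied by Boutry et al.: given a nonsquare pencil A - λB with A, B in C^{m x n}, m > n, minimize the residual ||(A - λB)v||^2 over λ in C and unit vectors v, i.e., over the complex Stiefel manifold St(1, n). The alternating scheme, matrix sizes, tolerances, and starting point are assumptions made for illustration; the article itself works with ℓ orthonormal columns on a product Stiefel manifold and a second-order trust-region model.

    import numpy as np

    def residual(A, B, lam, v):
        # Objective value ||(A - lam*B) v||^2 for a unit vector v.
        r = (A - lam * B) @ v
        return np.vdot(r, r).real

    def nonsquare_eig_l1(A, B, max_iter=200, tol=1e-12, seed=0):
        # Alternating minimization for the l = 1 nonsquare pencil problem:
        #   (i)  for fixed unit v, the optimal lambda is the least-squares
        #        coefficient  lambda = <Bv, Av> / ||Bv||^2;
        #   (ii) for fixed lambda, the optimal unit v is the right singular
        #        vector of A - lambda*B with the smallest singular value.
        # Neither step increases the objective.
        m, n = A.shape
        rng = np.random.default_rng(seed)
        v = rng.standard_normal(n) + 1j * rng.standard_normal(n)
        v /= np.linalg.norm(v)
        lam, f_old = 0.0, np.inf
        for _ in range(max_iter):
            Bv, Av = B @ v, A @ v
            lam = np.vdot(Bv, Av) / np.vdot(Bv, Bv)       # step (i)
            _, _, Vh = np.linalg.svd(A - lam * B)
            v = Vh[-1].conj()                             # step (ii)
            f_new = residual(A, B, lam, v)
            if abs(f_old - f_new) < tol * max(1.0, abs(f_old)):
                break
            f_old = f_new
        return lam, v, f_new

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        m, n = 8, 5
        B = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
        A = 2.0 * B + 1e-3 * rng.standard_normal((m, n))  # lambda near 2
        lam, v, f = nonsquare_eig_l1(A, B)
        print("estimated lambda:", lam, "residual:", f)

Because each half-step solves its subproblem exactly, the residual is monotonically non-increasing; schemes of this first-order or alternating type converge at best linearly, which is the gap the Riemannian trust-region method described in the abstract is designed to close.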