About

Matrix Factorizations and Block Diagonalization Algorithms is a research project funded by the Croatian Science Foundation (HRZZ) from May 1, 2015 to August 31, 2019.

Summary

Matrix factorizations and decompositions are the main tools for solving a wide variety of problems from science, engineering and the social sciences that can be expressed in matrix form: systems of linear equations, least squares problems, and ordinary and generalized eigenvalue and singular value problems, to name a few. Although many algorithms exist for these problems and the theory is well developed, many questions remain open, so major improvements are still possible. For example, from a theoretical point of view, many algorithms work well in practice yet lack convergence proofs or accuracy estimates. Furthermore, new applications are regularly being found that require the existing algorithms to be modified to exploit the specific structure of a problem, either to speed up the computation or to produce more accurate results.

The aim of this project is to study and improve existing Jacobi-type algorithms, and to develop new ones, for standard and generalized eigenvalue and singular value problems, with a significant impact on the field of numerical linear algebra and on all applications that use these results. Special attention will be devoted to the real Schur decomposition and to the spectral decomposition of definite pairs of real symmetric and of Hermitian matrices. Since each implicit algorithm for the spectral decomposition is in fact an algorithm for the singular value decomposition, these problems are closely related. Furthermore, computing the two decompositions above requires ancillary algorithms for other matrix decompositions and factorizations. In particular, highly accurate algorithms are needed for the standard and hyperbolic singular value decompositions and for cosine-sine factorizations of orthogonal and J-orthogonal matrices. The last two belong to the class of orthosymmetric matrices; we intend to study structured matrices in this class and to develop efficient algorithms for them.
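To make the discussion concrete, the following is a minimal sketch of the classical Jacobi method for the symmetric eigenvalue problem, the prototype of the Jacobi-type algorithms studied in this project. It repeatedly applies plane rotations that annihilate off-diagonal entries, accumulating the eigenvectors along the way. This is a textbook illustration in pure Python, not the project's implementation; the function name, tolerance, and sweep limit are illustrative choices.

```python
import math

def jacobi_eigen(A, tol=1e-12, max_sweeps=50):
    """Classical Jacobi method for a real symmetric matrix A (list of lists).

    Returns (eigenvalues, V), where column k of V is an approximate
    eigenvector for eigenvalues[k]. Illustrative sketch only.
    """
    n = len(A)
    a = [row[:] for row in A]                       # work on a copy
    V = [[float(i == j) for j in range(n)] for i in range(n)]
    for _ in range(max_sweeps):
        # stop when the off-diagonal part is negligible
        off = sum(a[i][j] ** 2 for i in range(n) for j in range(n) if i != j)
        if off < tol:
            break
        for p in range(n - 1):                      # one cyclic sweep
            for q in range(p + 1, n):
                if abs(a[p][q]) < 1e-300:
                    continue
                # rotation angle that annihilates a[p][q]
                tau = (a[q][q] - a[p][p]) / (2.0 * a[p][q])
                t = math.copysign(1.0, tau) / (abs(tau) + math.sqrt(1.0 + tau * tau))
                c = 1.0 / math.sqrt(1.0 + t * t)
                s = t * c
                for k in range(n):                  # update columns p and q: A <- A*J
                    akp, akq = a[k][p], a[k][q]
                    a[k][p] = c * akp - s * akq
                    a[k][q] = s * akp + c * akq
                for k in range(n):                  # update rows p and q: A <- J^T*A
                    apk, aqk = a[p][k], a[q][k]
                    a[p][k] = c * apk - s * aqk
                    a[q][k] = s * apk + c * aqk
                for k in range(n):                  # accumulate eigenvectors: V <- V*J
                    vkp, vkq = V[k][p], V[k][q]
                    V[k][p] = c * vkp - s * vkq
                    V[k][q] = s * vkp + c * vkq
    return [a[i][i] for i in range(n)], V
```

Each rotation is a similarity transformation, so the eigenvalues are preserved while the off-diagonal mass decreases monotonically; this is the property that makes Jacobi-type methods attractive candidates for high relative accuracy.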

Modern numerical algorithms must be efficient (which is achieved by using block-matrix partitions and BLAS 3 subroutines), robust (well behaved for most inputs), stable (ideally attaining high relative accuracy, and absolute accuracy when that is not possible), and adaptable to the latest computing resources (CPUs, CPUs with numerical coprocessors, and GPU parallel processing). Since many of the standard diagonalization algorithms possess both high relative accuracy and inherent parallelism, one of the goals of this project is the development of new sequential and parallel block algorithms, which will be tested for efficiency and high relative accuracy. On the theoretical side, we will prove their global convergence (and, where possible, their quadratic asymptotic convergence) and derive high relative accuracy estimates.
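The inherent parallelism mentioned above comes from the fact that rotations acting on disjoint index pairs commute and can be applied concurrently. A standard way to schedule this is a round-robin (tournament) pivot ordering: for an even dimension n, the n(n-1)/2 pairs are grouped into n-1 rounds of n/2 disjoint pairs each. The sketch below generates such an ordering; it is one well-known choice among many parallel orderings, not necessarily the one used in the project.

```python
def round_robin_pairs(n):
    """Round-robin (tournament) pivot ordering for even n.

    Yields n-1 rounds; each round is a list of n//2 disjoint index pairs,
    and across all rounds every pair (p, q), p < q, appears exactly once.
    Rotations within one round act on disjoint rows/columns and can be
    applied in parallel. Illustrative sketch only.
    """
    assert n % 2 == 0, "this scheme assumes an even dimension"
    idx = list(range(n))
    for _ in range(n - 1):
        # pair the i-th entry with the (n-1-i)-th entry of the current layout
        yield [(min(idx[i], idx[n - 1 - i]), max(idx[i], idx[n - 1 - i]))
               for i in range(n // 2)]
        # keep the first index fixed, rotate the rest by one position
        idx = [idx[0]] + [idx[-1]] + idx[1:-1]
```

For n = 4 this produces three rounds of two independent pairs each; in a block Jacobi method the same schedule is applied to block indices, so each round becomes a set of independent BLAS 3-rich block rotations distributed across processors.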