Block Decompositions and Applications of Generalized Reflexive Matrices
Vol. 8, No. 3 (2018), Article ID: 87545, 12 pages
DOI: 10.4236/alamt.2018.83011
Hsin-Chu Chen
Department of Cyber-Physical Systems, Clark Atlanta University, Atlanta, GA, USA
Copyright © 2018 by author and Scientific Research Publishing Inc.
This work is licensed under the Creative Commons Attribution International License (CC BY 4.0).
http://creativecommons.org/licenses/by/4.0/
Received: August 25, 2018; Accepted: September 24, 2018; Published: September 27, 2018
ABSTRACT
Generalized reflexive matrices are a special class of matrices that satisfy the relation $A = PAQ$, where $P$ and $Q$ are some generalized reflection matrices. The nontrivial cases ($P \neq \pm I$ or $Q \neq \pm I$) of this class of matrices occur very often in many scientific and engineering applications. They are also a generalization of centrosymmetric matrices and reflexive matrices. The main purpose of this paper is to present block decomposition schemes for generalized reflexive matrices of various types and to obtain their decomposed explicit block-diagonal structures. The decompositions make use of unitary equivalence transformations and, therefore, preserve the singular values of the matrices. They lead to more efficient sequential computations and at the same time induce large-grain parallelism as a by-product, making them computationally attractive for large-scale applications. A numerical example is employed to show the usefulness of the developed explicit decompositions for decoupling linear least-squares problems whose coefficient matrices are of this class into smaller and independent subproblems.
Keywords:
Generalized Reflexive Matrices, Reflexive Matrices, Centrosymmetric Matrices, Generalized Simultaneous Diagonalization, Simultaneous Diagonalization, Linear Least-Square Problems
1. Introduction
In [1] we introduced two special classes of rectangular matrices A and B that have the relations

$$A = PAQ \qquad \text{and} \qquad B = -PBQ,$$

where P and Q are two generalized reflection matrices of dimensions n and m, respectively. A matrix X is said to be a generalized reflection matrix if $X^* = X$ and $X^2 = I$, i.e., if X is unitary and Hermitian. The matrices A (and B) are referred to as generalized reflexive (and antireflexive, respectively) matrices. They are a generalization of centrosymmetric (anti-centrosymmetric) matrices, whose special properties have been studied extensively [2] - [11] , and a generalization of reflexive (antireflexive) matrices U (V), exploited in [1] [12] [13] , that have the relations

$$U = PUP \qquad \text{and} \qquad V = -PVP,$$

where P is some reflection (symmetric signed permutation) matrix.
Like U, the generalized reflexive matrices A arise naturally and frequently from physical problems with some sort of reflexive symmetry. Although the generalized antireflexive matrices B also possess many interesting properties, in this paper we shall focus only on generalized reflexive matrices. Our main objective is, thus, to present a generalized simultaneous diagonalization theorem and various decomposition schemes for the matrices A so that linear least-squares problems (or linear systems) whose coefficient matrices are of this class can be solved more efficiently. The decomposition schemes can be applied to a great number of scientific and engineering problems.
The organization of this paper is as follows. In §2, we present a generalization of the classical simultaneous diagonalization of two diagonalizable commuting square matrices. Our generalization, referred to as the generalized simultaneous diagonalization, simultaneously diagonalizes a rectangular matrix H and two square matrices F and G that have the relation $FH = HG$, assuming F and G are diagonalizable. Based on this simultaneous diagonalization, we develop explicit and semi-explicit decomposed forms in §3 for some important types of generalized reflexive matrices. An application of the decompositions to linear least-squares problems of this class is also given to show the usefulness of the decompositions. More numerical examples are provided in §4 to demonstrate the frequent occurrence of generalized reflexive matrices in many scientific and engineering disciplines.
Throughout this paper, we use the superscripts T, *, and −1 to denote the transpose, conjugate transpose, and inverse of matrices (vectors), respectively. The symbol $\oplus$ stands for the direct sum of matrices as usual. Unless otherwise noted, we use $I_k$ to denote the identity matrix of dimension k. All matrix-matrix multiplications and additions are assumed to be conformable when their dimensions are not mentioned.
2. Generalized Simultaneous Diagonalization
Before developing the (semi-)explicit block-diagonal structures for some important types of generalized reflexive matrices, we first present the following theoretically simple yet computationally useful observation regarding a simultaneous diagonalization process. Although diagonalization usually refers to square matrices, in this paper we use the same term for rectangular matrices. In other words, a rectangular matrix $A = [a_{ij}]$ is also said to be diagonal if $a_{ij} = 0$ for $i \neq j$. Block-diagonal rectangular matrices are defined in an analogous way.
Theorem 2.1. (Generalized Simultaneous Diagonalization) Let $F \in \mathbb{C}^{n \times n}$ and $G \in \mathbb{C}^{m \times m}$ be diagonalizable, and let $A \in \mathbb{C}^{n \times m}$. If $FA = AG$, then there exist nonsingular matrices $X \in \mathbb{C}^{n \times n}$ and $Y \in \mathbb{C}^{m \times m}$ such that

$$X^{-1}FX, \qquad X^{-1}AY, \qquad Y^{-1}GY$$

are all diagonal matrices.
Proof. The proof given below basically employs the same technique used in [14] [15] for the simultaneous diagonalization of two square matrices that commute. Let $X_1$ and $Y_1$ be nonsingular matrices that diagonalize F and G, respectively:

$$X_1^{-1}FX_1 = \Lambda, \qquad Y_1^{-1}GY_1 = \Omega, \qquad (1)$$

where the diagonal elements of $\Lambda$ (respectively $\Omega$) are the eigenvalues of F (respectively G). Suppose that the matrix F has k distinct eigenvalues $\lambda_1, \ldots, \lambda_k$ with multiplicities $p_1, \ldots, p_k$, respectively, where $p_1 + \cdots + p_k = n$; and the matrix G has l distinct eigenvalues $\mu_1, \ldots, \mu_l$ with multiplicities $q_1, \ldots, q_l$, respectively, where $q_1 + \cdots + q_l = m$. Assume further that among the k distinct eigenvalues of F, s of them are also eigenvalues of G, $0 \le s \le \min(k, l)$. If $s = 0$, then all $\lambda_i$ and $\mu_j$ are distinct, implying that A is a null matrix, as can be seen later. Therefore, we exclude this trivial case. Without loss of generality, we can assume that

$$\Lambda = \operatorname{diag}(\lambda_1 I_{p_1}, \ldots, \lambda_k I_{p_k}), \qquad \Omega = \operatorname{diag}(\mu_1 I_{q_1}, \ldots, \mu_l I_{q_l}), \qquad (2)$$

where $\operatorname{diag}(\cdot)$ denotes a block-diagonal matrix and $\lambda_i = \mu_i$ for $i = 1, \ldots, s$. Note that $\lambda_{s+1}, \ldots, \lambda_k$ and $\mu_{s+1}, \ldots, \mu_l$ are all distinct. Now, partition the matrix $X_1^{-1}AY_1$, denoted by B, according to the block forms of $\Lambda$ and $\Omega$ as $B = [B_{ij}]$ so that $B_{ij}$ are $p_i$-by-$q_j$ submatrices, $1 \le i \le k$ and $1 \le j \le l$. If $FA = AG$, we have $\Lambda B = B\Omega$, which implies that

$$(\lambda_i - \mu_j)B_{ij} = 0, \qquad 1 \le i \le k, \quad 1 \le j \le l. \qquad (3)$$

Since $\lambda_i = \mu_j$ only if $i = j \le s$, we know that B is a block-diagonal matrix, or more precisely, $B_{ij} = 0$ if $i \neq j$ or if $i = j > s$. (This can be considered as a block-equivalence decomposition for rectangular matrices.) It is well-known that for any matrix B in $\mathbb{C}^{p \times q}$ there exist unitary matrices $U$ and $V$ such that the singular value decomposition $U^*BV$ is diagonal with nonnegative elements [16] . Now, let $U_i$ and $V_i$ be the unitary matrices that diagonalize $B_{ii}$, $i = 1, \ldots, s$, and take

$$X_2 = U_1 \oplus \cdots \oplus U_s \oplus I, \qquad Y_2 = V_1 \oplus \cdots \oplus V_s \oplus I. \qquad (4)$$

Let $\Sigma = X_2^*BY_2$. We see that $\Sigma$ is diagonal. Taking $X = X_1X_2$ and $Y = Y_1Y_2$, and noting that $X_2$ commutes with $\Lambda$ and $Y_2$ with $\Omega$ (the corresponding diagonal blocks of $\Lambda$ and $\Omega$ being scalar matrices), it is clear that

$$X^{-1}FX = \Lambda, \qquad X^{-1}AY = \Sigma, \qquad Y^{-1}GY = \Omega. \qquad (5)$$

Therefore, they are all diagonal matrices.
Remark 1: Note that the converse of this theorem is not true in general. It is simple to construct counterexamples using diagonal matrices.
Remark 2: If the diagonalizable matrix F is the same as G and A is diagonalizable (A is a square matrix in this case), by taking $X = Y$ to be a matrix such that both $X^{-1}FX$ and $X^{-1}AX$ are diagonal and replacing $Y^{-1}GY$ with $X^{-1}FX$, this theorem along with its converse part (which now holds) then reduces to the classical simultaneous diagonalization theorem for commuting square matrices as given in ( [15] , p. 50).
Note also that this theorem is different from the simultaneous diagonalization theorems presented in [14] [16] where the simultaneous diagonalization applies to rectangular matrices of the same size.
Corollary 2.2. Let $F \in \mathbb{C}^{n \times n}$ and $G \in \mathbb{C}^{m \times m}$ be Hermitian, and let $A \in \mathbb{C}^{n \times m}$. If $FA = AG$, then there exist unitary matrices $X \in \mathbb{C}^{n \times n}$ and $Y \in \mathbb{C}^{m \times m}$ such that

$$X^*FX, \qquad X^*AY, \qquad Y^*GY$$

are all diagonal matrices.
Proof. Since Hermitian matrices are diagonalizable by unitary matrices, the matrices $X_1$ and $Y_1$ in the proof of Theorem 2.1 can be taken to be unitary, and hence so are $X = X_1X_2$ and $Y = Y_1Y_2$.
The usefulness of Theorem 2.1 or Corollary 2.2 lies in the fact that if we know the eigenpairs of the matrices F and G, then the matrix A can be block-diagonalized into independent submatrices by the eigenvectors (with some proper ordering) of F and G, so that a single large problem can be handled via smaller and independent subproblems, yielding computational efficiency and large-grain parallelism at the same time. The question then boils down to whether those eigenpairs can easily be obtained or not. This of course depends on F and G. Fortunately, for generalized reflexive matrices that come from physical problems, the eigenpairs of P and Q are explicitly known in most cases, as can be seen from the example presented in Section 4. In the next section, we present several generalized reflexive decompositions that lead to either explicit or semi-explicit block-decomposed forms, which are computationally attractive.
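To make Corollary 2.2 concrete, the following small NumPy sketch (ours, not part of the original paper; the Householder-type reflections and random seeds are illustrative assumptions) builds two generalized reflection matrices P and Q, forces the relation $PA = AQ$, and checks that the eigenvector matrices of P and Q block-diagonalize A across distinct eigenvalues:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 6, 8

# Hypothetical generalized reflection matrices (Hermitian and unitary):
# Householder reflections I - 2 v v^T are simple real examples.
def reflection(k, seed):
    v = np.random.default_rng(seed).standard_normal(k)
    v /= np.linalg.norm(v)
    return np.eye(k) - 2.0 * np.outer(v, v)

P, Q = reflection(n, 1), reflection(m, 2)

# Force A = P A Q (equivalently P A = A Q, since Q^{-1} = Q) by
# symmetrizing an arbitrary matrix.
A0 = rng.standard_normal((n, m))
A = 0.5 * (A0 + P @ A0 @ Q)
assert np.allclose(P @ A, A @ Q)

# Eigenvector matrices of P and Q; eigh sorts eigenvalues (-1's first),
# which provides the "proper ordering" mentioned above.
dP, X = np.linalg.eigh(P)
dQ, Y = np.linalg.eigh(Q)

B = X.T @ A @ Y
# Entries of B coupling different eigenvalues of P and Q must vanish.
mask = np.abs(dP[:, None] - dQ[None, :]) > 1e-12
print("max off-block magnitude:", np.abs(B[mask]).max())  # ~ 1e-16
```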
3. Decompositions for Generalized Reflexive Matrices
We now turn to generalized reflexive matrices A, which are not necessarily square matrices. The decomposition schemes presented below for A are special applications of the general results developed in the previous section. Our main purpose is to obtain explicit forms of the block structure for some frequently encountered cases of A. Let $A \in \mathbb{C}^{n \times m}$ be generalized reflexive. Recall that P and Q are two generalized reflection matrices, which are unitary Hermitian matrices. Therefore, they have at most two distinct eigenvalues, 1 and −1. Furthermore, the relation $A = PAQ$ can be expressed as $PA = AQ$ since $Q^{-1} = Q$. From Corollary 2.2, we know that A can be block-diagonalized into two independent submatrices. This information alone, however, is not enough from the computational point of view. We still need to know the eigenpairs of P and Q in order to obtain the explicit decomposed form of A. In the following, we derive several explicit or semi-explicit decomposed forms for some important types of generalized reflexive matrices, starting with the simplest one.
Theorem 3.1. Let $P \in \mathbb{C}^{n \times n}$ and $Q \in \mathbb{C}^{m \times m}$, n and m even, be two matrices that take the following forms:

$$P = \begin{bmatrix} 0 & P_1^* \\ P_1 & 0 \end{bmatrix}, \qquad Q = \begin{bmatrix} 0 & Q_1^* \\ Q_1 & 0 \end{bmatrix}, \qquad (6)$$

where $P_1 \in \mathbb{C}^{(n/2) \times (n/2)}$ and $Q_1 \in \mathbb{C}^{(m/2) \times (m/2)}$ are unitary. Let $A \in \mathbb{C}^{n \times m}$ be partitioned as $A = [A_{ij}]$, $i, j = 1, 2$, with each $A_{ij} \in \mathbb{C}^{(n/2) \times (m/2)}$. If $A = PAQ$, then there exist two unitary matrices X and Y such that

$$X^*AY = (A_{11} + A_{12}Q_1) \oplus (A_{11} - A_{12}Q_1). \qquad (7)$$
Proof. Clearly, both P and Q are generalized reflection matrices. Therefore, A is a generalized reflexive matrix. Take X and Y to be the unitary matrices

$$X = \frac{1}{\sqrt{2}}\begin{bmatrix} I & I \\ P_1 & -P_1 \end{bmatrix}, \qquad Y = \frac{1}{\sqrt{2}}\begin{bmatrix} I & I \\ Q_1 & -Q_1 \end{bmatrix}. \qquad (8)$$

Then

$$X^*AY = \frac{1}{2}\begin{bmatrix} (A_{11} + P_1^*A_{21}) + (A_{12} + P_1^*A_{22})Q_1 & (A_{11} + P_1^*A_{21}) - (A_{12} + P_1^*A_{22})Q_1 \\ (A_{11} - P_1^*A_{21}) + (A_{12} - P_1^*A_{22})Q_1 & (A_{11} - P_1^*A_{21}) - (A_{12} - P_1^*A_{22})Q_1 \end{bmatrix} = (A_{11} + A_{12}Q_1) \oplus (A_{11} - A_{12}Q_1), \qquad (9)$$

where we have used the unitarity of $P_1$ and $Q_1$ and the relations $A_{21} = P_1A_{12}Q_1$ and $A_{22} = P_1A_{11}Q_1^*$, which result from the assumption $A = PAQ$. Note that $X^*PX = I \oplus (-I)$ and $Y^*QY = I \oplus (-I)$, which also explains, via Corollary 2.2, why this decomposition is possible.
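As a quick numerical sanity check (a sketch of ours, assuming the forms (6) and (8) as reconstructed above, with randomly generated blocks), the following NumPy code confirms the two-block form (7):

```python
import numpy as np

rng = np.random.default_rng(1)
p, q = 3, 4                            # n = 2p, m = 2q

# Unitary (here real orthogonal) blocks P1, Q1 from QR factorizations.
P1, _ = np.linalg.qr(rng.standard_normal((p, p)))
Q1, _ = np.linalg.qr(rng.standard_normal((q, q)))
P = np.block([[np.zeros((p, p)), P1.T], [P1, np.zeros((p, p))]])
Q = np.block([[np.zeros((q, q)), Q1.T], [Q1, np.zeros((q, q))]])

A0 = rng.standard_normal((2 * p, 2 * q))
A = 0.5 * (A0 + P @ A0 @ Q)            # enforce A = P A Q

s = 1.0 / np.sqrt(2.0)
X = s * np.block([[np.eye(p), np.eye(p)], [P1, -P1]])   # as in (8)
Y = s * np.block([[np.eye(q), np.eye(q)], [Q1, -Q1]])

B = X.T @ A @ Y                        # X* A Y in the real case
A11, A12 = A[:p, :q], A[:p, q:]
assert np.allclose(B[:p, :q], A11 + A12 @ Q1)   # first diagonal block
assert np.allclose(B[p:, q:], A11 - A12 @ Q1)   # second diagonal block
assert np.allclose(B[:p, q:], 0) and np.allclose(B[p:, :q], 0)
print("Theorem 3.1 decomposition verified")
```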
Theorem 3.2. Let $P \in \mathbb{C}^{n \times n}$ and $Q \in \mathbb{C}^{m \times m}$, $n > 2p$ and $m > 2q$, be the following two generalized reflection matrices:

$$P = \begin{bmatrix} 0 & 0 & P_1^* \\ 0 & P_2 & 0 \\ P_1 & 0 & 0 \end{bmatrix}, \qquad Q = \begin{bmatrix} 0 & 0 & Q_1^* \\ 0 & Q_2 & 0 \\ Q_1 & 0 & 0 \end{bmatrix}, \qquad (10)$$

where $P_1$ and $Q_1$ are unitary matrices of dimensions p and q, respectively; $P_2 \in \mathbb{C}^{(n-2p) \times (n-2p)}$ and $Q_2 \in \mathbb{C}^{(m-2q) \times (m-2q)}$ are generalized reflection matrices. Let $A \in \mathbb{C}^{n \times m}$ be partitioned as $A = [A_{ij}]$, $i, j = 1, 2, 3$, conformally with the block forms of P and Q, so that $A_{11}, A_{13}, A_{31}, A_{33} \in \mathbb{C}^{p \times q}$ and $A_{22} \in \mathbb{C}^{(n-2p) \times (m-2q)}$. If $A = PAQ$, then there exist two unitary matrices X and Y such that

$$X^*PX = I_p \oplus P_2 \oplus (-I_p), \qquad Y^*QY = I_q \oplus Q_2 \oplus (-I_q),$$

and

$$X^*AY = \begin{bmatrix} A_{11} + A_{13}Q_1 & \tfrac{1}{\sqrt{2}}A_{12}(I + Q_2) & 0 \\ \tfrac{1}{\sqrt{2}}(I + P_2)A_{23}Q_1 & A_{22} & -\tfrac{1}{\sqrt{2}}(I - P_2)A_{23}Q_1 \\ 0 & \tfrac{1}{\sqrt{2}}A_{12}(I - Q_2) & A_{11} - A_{13}Q_1 \end{bmatrix}.$$
Proof. Take X and Y to be the following two unitary matrices:

$$X = \frac{1}{\sqrt{2}}\begin{bmatrix} I_p & 0 & I_p \\ 0 & \sqrt{2}I & 0 \\ P_1 & 0 & -P_1 \end{bmatrix}, \qquad Y = \frac{1}{\sqrt{2}}\begin{bmatrix} I_q & 0 & I_q \\ 0 & \sqrt{2}I & 0 \\ Q_1 & 0 & -Q_1 \end{bmatrix}. \qquad (11)$$

Then the unitary transformation yields

$$X^*AY = \frac{1}{2}\begin{bmatrix} (A_{11} + P_1^*A_{31}) + (A_{13} + P_1^*A_{33})Q_1 & \sqrt{2}(A_{12} + P_1^*A_{32}) & (A_{11} + P_1^*A_{31}) - (A_{13} + P_1^*A_{33})Q_1 \\ \sqrt{2}(A_{21} + A_{23}Q_1) & 2A_{22} & \sqrt{2}(A_{21} - A_{23}Q_1) \\ (A_{11} - P_1^*A_{31}) + (A_{13} - P_1^*A_{33})Q_1 & \sqrt{2}(A_{12} - P_1^*A_{32}) & (A_{11} - P_1^*A_{31}) - (A_{13} - P_1^*A_{33})Q_1 \end{bmatrix}. \qquad (12)$$

If $A = PAQ$, we immediately have the following relations among the submatrices $A_{ij}$:

$$A_{11} = P_1^*A_{33}Q_1, \quad A_{12} = P_1^*A_{32}Q_2, \quad A_{13} = P_1^*A_{31}Q_1^*, \quad A_{21} = P_2A_{23}Q_1, \quad A_{22} = P_2A_{22}Q_2.$$

Employing these relations and the unitarity of $P_1$ and $Q_1$ for (12), we obtain a much simplified form of the transformation $X^*AY$. Namely,

$$X^*AY = \begin{bmatrix} A_{11} + A_{13}Q_1 & \tfrac{1}{\sqrt{2}}A_{12}(I + Q_2) & 0 \\ \tfrac{1}{\sqrt{2}}(I + P_2)A_{23}Q_1 & A_{22} & -\tfrac{1}{\sqrt{2}}(I - P_2)A_{23}Q_1 \\ 0 & \tfrac{1}{\sqrt{2}}A_{12}(I - Q_2) & A_{11} - A_{13}Q_1 \end{bmatrix}. \qquad (13)$$

Accordingly, we have the results we want:

$$X^*PX = I_p \oplus P_2 \oplus (-I_p)$$

and

$$Y^*QY = I_q \oplus Q_2 \oplus (-I_q),$$

which follow from (10) and (11) by direct computation.

Note that in (13), $A_{13}Q_1$ can be replaced by $P_1^*A_{31}$ and $A_{23}Q_1$ replaced by $P_2A_{21}$ since $A = PAQ$. Computationally, one should use the expressions that are easier to compute. Note also that X and Y do not depend on $P_2$ and $Q_2$, and they reduce to the X and Y of (8) when the central blocks are absent.
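The simplified form (13) is easy to verify numerically. The sketch below is ours; the block forms follow the reconstruction in (10)-(11), and all matrices are randomly generated stand-ins:

```python
import numpy as np

rng = np.random.default_rng(3)
p, q, r, s = 3, 2, 2, 3                # P2 is r x r, Q2 is s x s
n, m = 2 * p + r, 2 * q + s

def unitary(k, seed):
    U, _ = np.linalg.qr(np.random.default_rng(seed).standard_normal((k, k)))
    return U

def reflection(k, seed):               # Hermitian unitary central block
    v = np.random.default_rng(seed).standard_normal(k)
    v /= np.linalg.norm(v)
    return np.eye(k) - 2.0 * np.outer(v, v)

P1, Q1 = unitary(p, 1), unitary(q, 2)
P2, Q2 = reflection(r, 3), reflection(s, 4)
Z = np.zeros
P = np.block([[Z((p, p)), Z((p, r)), P1.T],
              [Z((r, p)), P2,        Z((r, p))],
              [P1,        Z((p, r)), Z((p, p))]])
Q = np.block([[Z((q, q)), Z((q, s)), Q1.T],
              [Z((s, q)), Q2,        Z((s, q))],
              [Q1,        Z((q, s)), Z((q, q))]])

A0 = rng.standard_normal((n, m))
A = 0.5 * (A0 + P @ A0 @ Q)            # a generalized reflexive matrix

c = 1.0 / np.sqrt(2.0)                 # X and Y as in (11)
X = np.block([[c * np.eye(p), Z((p, r)), c * np.eye(p)],
              [Z((r, p)),     np.eye(r), Z((r, p))],
              [c * P1,        Z((p, r)), -c * P1]])
Y = np.block([[c * np.eye(q), Z((q, s)), c * np.eye(q)],
              [Z((s, q)),     np.eye(s), Z((s, q))],
              [c * Q1,        Z((q, s)), -c * Q1]])

B = X.T @ A @ Y                        # compare against (13), block by block
A11, A12, A13 = A[:p, :q], A[:p, q:q+s], A[:p, q+s:]
A23 = A[p:p+r, q+s:]
assert np.allclose(B[:p, :q], A11 + A13 @ Q1)
assert np.allclose(B[:p, q:q+s], c * A12 @ (np.eye(s) + Q2))
assert np.allclose(B[p:p+r, :q], c * (np.eye(r) + P2) @ A23 @ Q1)
assert np.allclose(B[:p, q+s:], 0) and np.allclose(B[p+r:, :q], 0)
print("form (13) verified")
```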
Remark 3: In Theorem 3.1 and 3.2 if the unitarity requirement of P, Q, X, and Y is lifted, a slightly more general case can be obtained simply by replacing the conjugate transpose with the inverse (existence of and assumed) in places of , , , and . With this replacement, all the results in the proofs remain intact. The matrices A in this case, however, are not necessarily generalized reflexive since P and Q may not be generalized reflection matrices.
Remark 4: Obviously, Theorem 3.2 reduces to Theorem 3.1 if $P_2$ and $Q_2$ in (10) do not exist, i.e., $n = 2p$ and $m = 2q$. If $Q_2$ is present and $P_2$ disappears, then by partitioning A as $A = [A_{ij}]$, $i = 1, 2$ and $j = 1, 2, 3$, according to the block forms of P and Q, we have

$$X^*AY = \begin{bmatrix} A_{11} + A_{13}Q_1 & \tfrac{1}{\sqrt{2}}A_{12}(I + Q_2) & 0 \\ 0 & \tfrac{1}{\sqrt{2}}A_{12}(I - Q_2) & A_{11} - A_{13}Q_1 \end{bmatrix}, \qquad (14)$$

which is decoupled into two independent sub-blocks when $Q_2 = \pm I$. Analogous to (13), $A_{13}Q_1$ and $A_{12}Q_2$ can be expressed as $P_1^*A_{21}$ and $P_1^*A_{22}$, respectively, since in this case $A = PAQ$. Instead, if $Q_2$ disappears and $P_2$ remains and the matrix A is partitioned in accordance with P and Q as $A = [A_{ij}]$, $i = 1, 2, 3$ and $j = 1, 2$, then we have

$$X^*AY = \begin{bmatrix} A_{11} + A_{12}Q_1 & 0 \\ \tfrac{1}{\sqrt{2}}(I + P_2)A_{22}Q_1 & -\tfrac{1}{\sqrt{2}}(I - P_2)A_{22}Q_1 \\ 0 & A_{11} - A_{12}Q_1 \end{bmatrix}, \qquad (15)$$

where $A_{21} + A_{22}Q_1 = (I + P_2)A_{22}Q_1$ and $A_{21} - A_{22}Q_1 = -(I - P_2)A_{22}Q_1$ because $A_{21} = P_2A_{22}Q_1$. This transformation again decouples the matrix A into two independent sub-blocks when $P_2 = \pm I$.
4. Applications
As seen from the transformations presented in the previous section, the decomposed forms of A of this class are very simple to compute. This is especially true when P and Q are reflection (symmetric signed permutation) matrices, which arise frequently in a very wide range of real-world applications, because any reflection matrix can be symmetrically permuted to yield one of the forms of (6) and (10), with $P_1$ and $Q_1$ being some signed permutation matrices whereas $P_2$ and $Q_2$ some reflection matrices. Furthermore, the decompositions preserve all singular values because they make use of unitary equivalence transformations, which can be applied to both square matrices and rectangular matrices. Therefore, they are useful not only for linear systems but for linear least-squares problems and singular value problems as well. The only requirement is the existence of the generalized reflexivity property of the matrix A. When P is the same as Q, the decompositions lead to similarity transformations and, accordingly, preserve all eigenvalues. It is exactly this simplicity and the preservation of singular values or eigenvalues that make these decompositions computationally attractive. To demonstrate the usefulness of these decompositions in attacking applications of this type, we present in this section an application of the decompositions to one of the numerical examples described in [1] , where the same problem is solved using only basic generalized reflexive properties, without resorting to matrix decompositions.
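To illustrate the symmetric-permutation step (our sketch; the routine and the example matrix are not from the paper), the following code permutes a reflection matrix — a symmetric signed permutation R with $R = R^T$ and $R^2 = I$ — into the canonical form (10), collecting one index of each swapped pair first, then the fixed indices, then the partner indices:

```python
import numpy as np

def canonical_permutation(R, tol=1e-12):
    """Return a permutation putting the reflection matrix R into the form
    (10): anti-diagonal signed-permutation blocks P1, P1^T and a central
    diagonal reflection block P2 (the fixed indices)."""
    n = R.shape[0]
    first, fixed, second = [], [], []
    for i in range(n):
        j = int(np.flatnonzero(np.abs(R[i]) > tol)[0])  # one nonzero per row
        if j == i:
            fixed.append(i)       # R[i, i] = +/-1: index on the symmetry axis
        elif i < j:
            first.append(i)       # i <-> j form a swapped pair; record once
            second.append(j)
    return np.array(first + fixed + second)

# Example: a 5x5 reflection with two swapped pairs and one fixed index.
R = np.zeros((5, 5))
R[0, 3] = R[3, 0] = 1.0
R[1, 4] = R[4, 1] = -1.0
R[2, 2] = -1.0
perm = canonical_permutation(R)
Rc = R[np.ix_(perm, perm)]        # symmetric permutation of R
print(Rc)                         # zero blocks except P1, P1^T and P2 = [-1]
```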
Numerical example. Consider the following overdetermined linear system:
(16)
Let A be the coefficient matrix of the overdetermined system. It is simple to observe that A is a generalized reflexive matrix, $A = PAQ$, where
(17)
are two reflection matrices. It deserves mentioning that the coefficient matrix A is the edge-node incidence matrix of a level network with reflexive symmetry.
Whether this overdetermined linear system is to be solved via its normal equation or using a QR decomposition instead, we can decompose the original problem into two independent subproblems first, using the decomposition techniques presented in the previous section. Let
(18)
The overdetermined system $Ax = b$ is then transformed to $\hat{A}\hat{x} = \hat{b}$ with

$$\hat{A} = \sqrt{2}\,X^*AY, \qquad \hat{x} = Y^*x, \qquad \hat{b} = \sqrt{2}\,X^*b,$$

where the factor $\sqrt{2}$ is intentionally inserted to avoid unnecessary multiplications of $1/\sqrt{2}$ in forming $\hat{b}$ from b. Now, let $\hat{A}$, $\hat{x}$, and $\hat{b}$ be partitioned, according to the block forms of X and Y, as
(19)
The transformation $\hat{A} = \sqrt{2}\,X^*AY$ can easily be obtained without actually performing expensive matrix-matrix multiplications. We simply use the explicit form of (14) by substituting the corresponding blocks of (17) for $P_1$ and $Q_1$ and −1 for $Q_2$, yielding
$\hat{A} = \hat{A}_1 \oplus \hat{A}_2$, where
(20)
It is simple to obtain $\hat{b}$ without resorting to a dense matrix-vector multiplication.
This transformation then decouples the original system into
(21)
and
(22)
The normal equations of (21) and (22) are simply

$$\hat{A}_1^T\hat{A}_1\hat{x}_1 = \hat{A}_1^T\hat{b}_1 \qquad \text{and} \qquad \hat{A}_2^T\hat{A}_2\hat{x}_2 = \hat{A}_2^T\hat{b}_2,$$

respectively, from which $\hat{x}_1$ and $\hat{x}_2$ are readily computed. The final solution x can now be retrieved from $\hat{x}_1$ and $\hat{x}_2$ with ease via $x = Y\hat{x}$, whose correctness can be verified from the normal equation of the original system.
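The whole procedure is summarized in the NumPy sketch below. Since the entries of the original system (16) are not reproduced here, the matrix is a hypothetical stand-in, randomly generated and then projected so that $A = PAQ$ holds, with P as in (6) and Q as in (10) with a 1-by-1 central block $Q_2 = -1$, mirroring the structure of the example; the block dimensions and signs are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)
p, q = 5, 3                        # n = 2p rows, m = 2q + 1 columns
n, m = 2 * p, 2 * q + 1

P1 = np.diag([1.0, -1.0, 1.0, 1.0, -1.0])   # assumed signed-permutation blocks
Q1 = np.diag([1.0, 1.0, -1.0])
P = np.block([[np.zeros((p, p)), P1.T], [P1, np.zeros((p, p))]])
Q = np.zeros((m, m))
Q[:q, q + 1:], Q[q + 1:, :q], Q[q, q] = Q1.T, Q1, -1.0

A0 = rng.standard_normal((n, m))
A = 0.5 * (A0 + P @ A0 @ Q)        # enforce the generalized reflexivity
b = rng.standard_normal(n)

# Explicit decoupling per (14) with Q2 = -1: only additions and sign
# changes are needed, plus the sqrt(2) scaling of the central column.
A11, A12, A13 = A[:p, :q], A[:p, q:q + 1], A[:p, q + 1:]
A1 = A11 + A13 @ Q1                                   # first subproblem
A2 = np.hstack([np.sqrt(2.0) * A12, A11 - A13 @ Q1])  # second subproblem
b1 = b[:p] + P1.T @ b[p:]          # hat{b} = sqrt(2) X^T b: no scaling left
b2 = b[:p] - P1.T @ b[p:]

y1, *_ = np.linalg.lstsq(A1, b1, rcond=None)   # subproblem (21)
y2, *_ = np.linalg.lstsq(A2, b2, rcond=None)   # subproblem (22)

# Retrieve x = Y hat{x}; the lstsq solutions absorb a factor sqrt(2).
xc, x3 = y2[0], y2[1:]
x = np.empty(m)
x[:q] = 0.5 * (y1 + x3)
x[q] = xc / np.sqrt(2.0)
x[q + 1:] = Q1 @ (0.5 * (y1 - x3))

x_ref, *_ = np.linalg.lstsq(A, b, rcond=None)  # undecomposed solution
print("decoupled solution matches:", np.allclose(x, x_ref))
```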
At this point, it is clear that the main reason why transformations of this type are so cheap to obtain is not only that explicit forms are available but also that no arithmetic multiplications or divisions are involved in forming the decoupled subsystems, except for the central block row of A and b and the central block column of A, if any, such as the central block column $A_{12}$ and its scaling by $\sqrt{2}$ in this example. The dimensions of these blocks are usually very small for large-scale problems with reflexive symmetry because they involve only the nodes/edges on the line or plane of symmetry. Therefore, this extra work can easily be offset by the tremendous savings resulting from solving two smaller subproblems whose sizes are only about half of the original problem. It is worth mentioning that solving sequentially two independent decomposed subproblems, each of half the size of a single problem, is about four times faster than solving the undecomposed one. This is exactly where the computational efficiency comes from. The large-grain parallelism induced by these decompositions is an additional advantage when the subproblems are solved on a multiprocessor or on multiple networked computers.
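The factor-of-four claim can be made concrete with a rough operation count (ours, based on the standard $\approx 2mn^2$ flop cost of a QR-based least-squares solve for an m-by-n coefficient matrix):

$$\frac{C_{\text{full}}}{C_{\text{split}}} \approx \frac{2mn^2}{2 \cdot 2(m/2)(n/2)^2} = \frac{2mn^2}{mn^2/2} = 4,$$

and when the two subproblems are solved in parallel rather than sequentially, the ratio roughly doubles to eight, ignoring the small decoupling overhead discussed above.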
We close this section by emphasizing the fact that a great number of scientific and engineering applications require solutions to linear least-squares problems, singular value problems, linear systems, or eigenvalue problems whose coefficient matrices are either generalized reflexive with nontrivial reflection matrices P and Q or reflexive with a nontrivial P (or Q). Instead of giving more numerical examples, we just mention that the node-edge (or edge-node) incidence matrix of any finite network or graph that possesses reflexive symmetry, or that can be redrawn as one that displays reflexive symmetry, is generalized reflexive. Refer to [1] for more numerical examples.
5. Conclusions
Generalized reflexive matrices, a newly exploited special class of matrices that satisfy the relation $A = PAQ$ with P and Q being some generalized reflection matrices, are a generalization of centrosymmetric matrices and reflexive matrices. Although it is not trivial to recognize their existence purely from the entries of a given matrix, this new class of matrices indeed arises very often from physical problems in many areas of scientific and engineering applications, especially from those with reflexive symmetry. Three such nontrivial numerical examples, each from a distinct real-world application area, can be found in [1] .
A major part of this paper has been devoted to the exploration of computationally attractive decompositions for taking advantage of the special relation possessed by this class of matrices. The decompositions are based on the generalized simultaneous diagonalization theorem presented in this paper and are derived using the eigenvectors of P and Q via unitary equivalence transformations. When the eigenpairs of P and Q are explicitly known, which is usually the case for generalized reflexive matrices that arise from physical problems with reflexive symmetry, the decompositions yield simple and explicit forms of the decomposed submatrices of A. One of the generalized reflexive matrices presented in this paper has also been employed as an example to show the usefulness of the derived explicit decompositions for decoupling linear least-squares problems whose coefficient matrices are of this class into smaller and independent subproblems. These decompositions, though theoretically simple, can lead to much more efficient computation for large-scale applications. They also induce large-grain parallelism as a by-product. Furthermore, they preserve either the singular values or the eigenvalues of the matrices and are, therefore, immediately applicable not only to linear least-squares problems and linear systems but also to singular value problems and eigenvalue problems.
Conflicts of Interest
The author declares no conflicts of interest regarding the publication of this paper.
Cite this paper
Chen, H.-C. (2018) Block Decompositions and Applications of Generalized Reflexive Matrices. Advances in Linear Algebra & Matrix Theory, 8, 122-133. https://doi.org/10.4236/alamt.2018.83011
References
- 1. Chen, H.-C. (1998) Generalized Reflexive Matrices: Special Properties and Applications. SIAM Journal on Matrix Analysis and Applications, 19, 140-153. https://doi.org/10.1137/S0895479895288759
- 2. Zehfuss, G. (1862) Zwei Sätze über Determinanten. Zeitschrift für Angewandte Mathematik und Physik, VII, 436-439.
- 3. Aitken, A.C. (1949) Determinants and Matrices. 6th Edition, Wiley-Interscience, New York.
- 4. Good, I.J. (1970) The Inverse of a Centrosymmetric Matrix. Technometrics, 12, 925-928. https://doi.org/10.1080/00401706.1970.10488743
- 5. Andrew, A.L. (1973) Solution of Equations Involving Centrosymmetric Matrices. Technometrics, 15, 405-407. https://doi.org/10.1080/00401706.1973.10489052
- 6. Andrew, A.L. (1973) Eigenvectors of Certain Matrices. Linear Algebra and Its Applications, 7, 151-162. https://doi.org/10.1016/0024-3795(73)90049-9
- 7. Pye, W.C., Boullion, T.L. and Atchison, T.A. (1973) The Pseudoinverse of a Centrosymmetric Matrix. Linear Algebra and Its Applications, 6, 201-204. https://doi.org/10.1016/0024-3795(73)90020-7
- 8. Cantoni, A. and Butler, P. (1976) Eigenvalues and Eigenvectors of Symmetric Centrosymmetric Matrices. Linear Algebra and Its Applications, 13, 275-288. https://doi.org/10.1016/0024-3795(76)90101-4
- 9. Weaver, J.R. (1985) Centrosymmetric (Cross-Symmetric) Matrices, Their Basic Properties, Eigenvalues, and Eigenvectors. The American Mathematical Monthly, 92, 711-717. https://doi.org/10.1080/00029890.1985.11971719
- 10. Weaver, J.R. (1988) Real Eigenvalues of Nonnegative Matrices Which Commute with a Symmetric Matrix Involution. Linear Algebra and Its Applications, 110, 243-253. https://doi.org/10.1016/0024-3795(83)90138-6
- 11. Tao, D. and Yasuda, M. (2002) A Spectral Characterization of Generalized Real Symmetric Centrosymmetric and Generalized Real Symmetric Skew-Centrosymmetric Matrices. SIAM Journal on Matrix Analysis and Applications, 23, 885-895. https://doi.org/10.1137/S0895479801386730
- 12. Chen, H-C. and Sameh, A. (1989) A Matrix Decomposition Method for Orthotropic Elasticity Problems. SIAM Journal on Matrix Analysis and Applications, 10, 39-64. https://doi.org/10.1137/0610004
- 13. Chen, H.-C. and Sameh, A. (1989) A Domain Decomposition Method for 3D Elasticity Problems. In: Brebbia, C.A. and Peters, A., Eds., Applications of Supercomputers in Engineering: Fluid Flow and Stress Analysis Applications, Computational Mechanics Publications, Southampton University, Southampton, England, 171-188.
- 14. Gibson, P.M. (1974) Simultaneous Diagonalization of Rectangular Complex Matrices. Linear Algebra and Its Applications, 9, 45-53. https://doi.org/10.1016/0024-3795(74)90025-1
- 15. Horn, R.A. and Johnson, C.R. (1985) Matrix Analysis. Cambridge University Press, New York. https://doi.org/10.1017/CBO9780511810817
- 16. Eckart, C. and Young, G. (1939) A Principal Axis Transformation for Non-Hermitian Matrices. Bulletin of the American Mathematical Society, 45, 118-121. https://doi.org/10.1090/S0002-9904-1939-06910-3