Matrix equal to its transpose
This article is about a matrix symmetric about its diagonal. For a matrix symmetric about its center, see Centrosymmetric matrix.
For matrices with symmetry over the complex number field, see Hermitian matrix.
[Figure: Symmetry of a 5×5 matrix]
In the relentlessly structured world of linear algebra, a symmetric matrix is a square matrix that possesses a kind of mirror-image integrity. It is a matrix that remains unchanged when subjected to a transpose operation—that is, when its rows are flipped to become columns and its columns become rows. It’s the rare object that looks the same after you turn it inside out.
Formally, the condition is brutally simple. A matrix $A$ is symmetric if and only if
$$A = A^{\mathsf{T}}.$$
This definition immediately imposes a constraint: since the dimensions must match for the matrices to be equal, only square matrices can even attempt to be symmetric. A non-square matrix wouldn't survive the transpose with its dimensions intact, making the comparison impossible.
The elements of a symmetric matrix obey a strict rule of reflection across the main diagonal, the line of entries running from the top-left to the bottom-right corner. If we denote the entry in the $i$-th row and $j$-th column as $a_{ij}$, then for a matrix to be symmetric, the entry at position $(j, i)$ must be identical to the entry at $(i, j)$:
$$a_{ji} = a_{ij} \quad \text{for all indices } i \text{ and } j.$$
The diagonal itself is the axis of this symmetry, its own elements beholden to no such reflection.
Consider any square diagonal matrix. It is inherently symmetric by default, as all its off-diagonal elements are zero. There is nothing to reflect, so the condition (which becomes 0 = 0) is trivially satisfied. In a related vein, for a skew-symmetric matrix in any characteristic other than 2, every diagonal element must be zero, as each element is required to be its own negative.
In the context of linear algebra, a real symmetric matrix is more than just a pretty pattern; it is the concrete representation of a self-adjoint operator when expressed in an orthonormal basis over a real inner product space. When venturing into the realm of complex inner product spaces, the analogous concept is the Hermitian matrix, a matrix with complex-valued entries that equals its own conjugate transpose. Because of this parallel, in discussions centered on complex numbers, the term "symmetric matrix" is often implicitly understood to mean a matrix with real-valued entries. Symmetric matrices are not merely a theoretical curiosity; they emerge organically in numerous applications, and as a result, numerical linear algebra software is typically optimized to handle them with special efficiency.
Example
The following matrix is symmetric. Try not to be overwhelmed by the complexity.
$$A = \begin{bmatrix} 1 & 7 & 3 \\ 7 & 4 & 5 \\ 3 & 5 & 2 \end{bmatrix}$$
It qualifies because it is equal to its own transpose, $A = A^{\mathsf{T}}$. Observe the reflection: the 7 in the first row, second column is mirrored by the 7 in the second row, first column. The 3 is mirrored. The 5 is mirrored. The diagonal elements, 1, 4, and 2, are their own reflections.
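As a quick numerical check, here is a minimal sketch assuming NumPy is available; the array is just the example above:

```python
import numpy as np

# The example matrix: diagonal 1, 4, 2; mirrored off-diagonal entries 7, 3, 5.
A = np.array([[1, 7, 3],
              [7, 4, 5],
              [3, 5, 2]])

# A matrix is symmetric exactly when it equals its transpose.
print(np.array_equal(A, A.T))  # True
```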
Properties
Basic properties
Symmetric matrices exhibit a certain predictable behavior when combined.
- The sum and difference of two symmetric matrices will always result in another symmetric matrix. If you add or subtract two perfectly mirrored objects, the result maintains that mirror property.
- The product is more temperamental. Given two symmetric matrices $A$ and $B$, their product $AB$ is symmetric if and only if $A$ and $B$ commute, which is to say, if $AB = BA$. The order of operations usually matters for matrices; for the product to preserve symmetry, that order must be irrelevant.
- For any integer $n$, if $A$ is symmetric, then $A^n$ is also symmetric. Repeatedly multiplying a matrix by itself doesn't break this fundamental property.
- The rank of a symmetric matrix is precisely the number of its non-zero eigenvalues. There's a direct, elegant connection between its structural dimension and its spectral properties. (Each of these properties is illustrated numerically in the sketch after this list.)
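A short NumPy sketch of the properties above; the matrices are arbitrary illustrative choices:

```python
import numpy as np

A = np.array([[2., 1.], [1., 3.]])
B = np.array([[0., 5.], [5., 1.]])

# Sums and differences of symmetric matrices stay symmetric.
print(np.allclose(A + B, (A + B).T))      # True

# Products generally do not, unless the factors commute.
print(np.allclose(A @ B, (A @ B).T))      # False
print(np.allclose(A @ B, B @ A))          # False: A and B do not commute

# Powers of a symmetric matrix remain symmetric.
A3 = np.linalg.matrix_power(A, 3)
print(np.allclose(A3, A3.T))              # True

# Rank equals the number of non-zero eigenvalues (here A is full rank).
eigvals = np.linalg.eigvalsh(A)
print(np.linalg.matrix_rank(A), np.sum(~np.isclose(eigvals, 0.0)))  # 2 2
```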
Decomposition into symmetric and skew-symmetric
Any square matrix, no matter how arbitrary, can be uniquely expressed as the sum of a symmetric matrix and a skew-symmetric matrix. This is known as the Toeplitz decomposition. It’s like separating a vector into its orthogonal components, but for matrices.
Let $\operatorname{Mat}_n$ denote the space of $n \times n$ matrices. Let $\operatorname{Sym}_n$ be the subspace of symmetric matrices and $\operatorname{Skew}_n$ be the subspace of skew-symmetric matrices. The space of all matrices is the direct sum of these two subspaces:
$$\operatorname{Mat}_n = \operatorname{Sym}_n \oplus \operatorname{Skew}_n.$$
This means that not only can any matrix be written as such a sum, but the only matrix that is both symmetric and skew-symmetric is the zero matrix ($\operatorname{Sym}_n \cap \operatorname{Skew}_n = \{0\}$).
For any square matrix $X$, the decomposition is achieved as follows:
$$X = \tfrac{1}{2}\left(X + X^{\mathsf{T}}\right) + \tfrac{1}{2}\left(X - X^{\mathsf{T}}\right).$$
The first term, $\tfrac{1}{2}(X + X^{\mathsf{T}})$, is symmetric: taking its transpose simply swaps $X$ and $X^{\mathsf{T}}$, leaving the sum unchanged. The second term, $\tfrac{1}{2}(X - X^{\mathsf{T}})$, is skew-symmetric: taking its transpose flips the sign of the expression. This decomposition works for any square matrix with entries from any field whose characteristic is not 2. If the characteristic were 2, then $1 = -1$, the factor $\tfrac{1}{2}$ would not exist, and the distinction between symmetric and skew-symmetric collapses.
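A minimal NumPy sketch of this split; the matrix X is an arbitrary example:

```python
import numpy as np

X = np.array([[1., 2., 0.],
              [4., 3., 7.],
              [5., 6., 8.]])

sym  = 0.5 * (X + X.T)   # symmetric part
skew = 0.5 * (X - X.T)   # skew-symmetric part

print(np.allclose(sym, sym.T))        # True
print(np.allclose(skew, -skew.T))     # True
print(np.allclose(sym + skew, X))     # True: the decomposition recovers X
```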
A symmetric $n \times n$ matrix is determined by $\tfrac{1}{2}n(n+1)$ scalars (the number of entries on or above the main diagonal). The rest are determined by the symmetry requirement. Similarly, a skew-symmetric matrix is determined by $\tfrac{1}{2}n(n-1)$ scalars, as the diagonal must be zero and the lower triangle is just the negative of the upper triangle.
Matrix congruent to a symmetric matrix
Symmetry is a property that persists under congruence transformation. If $A$ is a symmetric matrix, then the matrix $X^{\mathsf{T}} A X$ is also symmetric for any compatible matrix $X$. This is a direct consequence of the properties of the transpose: $(X^{\mathsf{T}} A X)^{\mathsf{T}} = X^{\mathsf{T}} A^{\mathsf{T}} X = X^{\mathsf{T}} A X$.
Symmetry implies normality
Any real-valued symmetric matrix is also a normal matrix, meaning it commutes with its own transpose ($A A^{\mathsf{T}} = A^{\mathsf{T}} A$). Since $A^{\mathsf{T}} = A$ for a symmetric matrix $A$, this becomes $AA = AA$, a condition that is, to put it mildly, easily satisfied.
Real symmetric matrices
Let $\langle \cdot, \cdot \rangle$ denote the standard inner product on $\mathbb{R}^n$. A real $n \times n$ matrix $A$ is symmetric if and only if it satisfies the following condition for all vectors $x, y \in \mathbb{R}^n$:
$$\langle Ax, y \rangle = \langle x, Ay \rangle.$$
This means you can apply the linear operator represented by $A$ to either vector in the inner product and get the same result. It is self-adjoint. Since this definition is independent of the choice of basis, symmetry is an intrinsic property of the linear operator itself, given a specific inner product. This abstract characterization is fundamental in fields like differential geometry, where each tangent space of a manifold can be equipped with an inner product, leading to the concept of a Riemannian manifold. It is also a cornerstone of analysis in Hilbert spaces.
The finite-dimensional spectral theorem is the crown jewel of real symmetric matrices. It states that any symmetric matrix with real entries can be diagonalized by an orthogonal matrix. More explicitly: for every real symmetric matrix $A$, there exists a real orthogonal matrix $Q$ such that $D = Q^{\mathsf{T}} A Q$ is a diagonal matrix. This is profound. It means that for any transformation represented by a symmetric matrix, there exists a set of perpendicular axes (the columns of $Q$) along which the transformation acts simply by stretching or compressing. Every real symmetric matrix is, therefore, a diagonal matrix, up to a change of orthonormal basis.
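As a concrete check, here is a minimal NumPy sketch; the matrix is an arbitrary example, and np.linalg.eigh is the routine specialized for symmetric input:

```python
import numpy as np

A = np.array([[2., 1., 0.],
              [1., 2., 1.],
              [0., 1., 2.]])

# eigh returns real eigenvalues and an orthogonal matrix of eigenvectors.
eigenvalues, Q = np.linalg.eigh(A)
D = np.diag(eigenvalues)

print(np.allclose(Q.T @ Q, np.eye(3)))   # True: Q is orthogonal
print(np.allclose(Q.T @ A @ Q, D))       # True: Q^T A Q is diagonal
print(np.allclose(A, Q @ D @ Q.T))       # True: A is recovered
```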
Furthermore, if $A$ and $B$ are two real symmetric matrices that commute, they can be simultaneously diagonalized by a single orthogonal matrix. This implies the existence of a basis of $\mathbb{R}^n$ in which every basis vector is an eigenvector of both $A$ and $B$.
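A hedged sketch of simultaneous diagonalization: here B is deliberately built as a polynomial in A so that the two matrices commute by construction, and this particular A has distinct eigenvalues, so the orthogonal matrix computed for A also diagonalizes B:

```python
import numpy as np

A = np.array([[2., 1., 0.],
              [1., 3., 1.],
              [0., 1., 4.]])
B = A @ A + 2.0 * A        # symmetric and commutes with A by construction

_, Q = np.linalg.eigh(A)

# The same orthogonal Q diagonalizes both matrices.
M = Q.T @ B @ Q
print(np.allclose(M, np.diag(np.diag(M))))   # True: B is diagonal in A's eigenbasis
```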
Every real symmetric matrix is also Hermitian (since it equals its conjugate transpose, as all its entries are real). A direct and powerful consequence is that all of its eigenvalues are real. These eigenvalues are precisely the entries of the diagonal matrix $D$ mentioned above. Thus, $D$ is uniquely determined by $A$, apart from the ordering of its diagonal entries. In essence, the property of being symmetric for real matrices is the direct counterpart to the property of being Hermitian for complex matrices.
Complex symmetric matrices
A complex symmetric matrix, one where $A = A^{\mathsf{T}}$ but the entries of $A$ may be complex, behaves differently. It can be 'diagonalized' using a unitary matrix through a specific transformation: for any complex symmetric matrix $A$, there exists a unitary matrix $U$ such that $U A U^{\mathsf{T}}$ is a real diagonal matrix with non-negative entries. This is the Autonne–Takagi factorization, a result first established by Léon Autonne (1915) and Teiji Takagi (1925), and later rediscovered by several others.
The proof is constructive. The matrix $A^{\dagger} A$ is Hermitian and positive semi-definite. Therefore, a unitary matrix $V$ exists that diagonalizes it: $V^{\dagger} A^{\dagger} A V$ is a diagonal matrix with non-negative real entries. Now, consider the matrix $C = V^{\mathsf{T}} A V$. It is complex symmetric, and $C^{\dagger} C$ is real. If we write $C = X + iY$, where $X$ and $Y$ are real symmetric matrices, then the condition that $C^{\dagger} C = X^2 + Y^2 + i(XY - YX)$ is real implies that $XY = YX$, meaning $X$ and $Y$ commute.
Since $X$ and $Y$ are commuting real symmetric matrices, there is a real orthogonal matrix $W$ that simultaneously diagonalizes both $X$ and $Y$. By setting $U = W V^{\mathsf{T}}$ (which is a unitary matrix), the matrix $U A U^{\mathsf{T}}$ becomes complex diagonal. To make the diagonal entries real and non-negative, one final adjustment is needed. Suppose $U A U^{\mathsf{T}} = \operatorname{diag}(r_1 e^{i\theta_1}, r_2 e^{i\theta_2}, \dots, r_n e^{i\theta_n})$ with $r_k \ge 0$. We can construct the diagonal unitary matrix $D = \operatorname{diag}(e^{-i\theta_1/2}, \dots, e^{-i\theta_n/2})$. Pre-multiplying $U A U^{\mathsf{T}}$ by $D$ and post-multiplying by its transpose gives $D U A U^{\mathsf{T}} D^{\mathsf{T}} = \operatorname{diag}(r_1, \dots, r_n)$. The new unitary matrix is $U' = D U$. The squares of these diagonal entries are the eigenvalues of $A^{\dagger} A$, which makes them the singular values of $A$. It's crucial to note that the Jordan normal form of a complex symmetric matrix may not be diagonal, so $A$ may not be diagonalizable by a similarity transformation.
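The constructive argument above can be coded directly, but a shorter route to an Autonne–Takagi factorization goes through the singular value decomposition. The sketch below is an illustration of that SVD-based route, not a production implementation; the function name takagi is ours, and NumPy/SciPy are assumed. The square root of the unitary matrix Z plays the role of the phase correction $D$ in the proof above.

```python
import numpy as np
from scipy.linalg import sqrtm

def takagi(A):
    """Sketch of an Autonne-Takagi factorization A = U @ diag(s) @ U.T
    for a complex symmetric A, via the SVD rather than the proof above."""
    u, s, vh = np.linalg.svd(A)
    # For symmetric A, Z = u^H vh^T is unitary, symmetric, and commutes with
    # diag(s); its principal square root supplies the needed phase correction.
    Z = u.conj().T @ vh.T
    U = u @ sqrtm(Z)
    return s, U

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = M + M.T                      # a random complex symmetric matrix

s, U = takagi(A)
print(np.allclose(U @ np.diag(s) @ U.T, A))                 # True
print(np.allclose(U.conj().T @ U, np.eye(4)))               # True: U is unitary
print(np.allclose(s, np.linalg.svd(A, compute_uv=False)))   # diagonal = singular values
```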
Decomposition
Beyond the simple symmetric/skew-symmetric split, matrices can be broken down in other useful ways.
- Using the Jordan normal form, it can be proven that any square real matrix can be written as a product of two real symmetric matrices. Similarly, any square complex matrix can be written as a product of two complex symmetric matrices.
- Every real non-singular matrix has a unique factorization into the product of an orthogonal matrix and a symmetric positive definite matrix. This is the polar decomposition. Singular matrices can also be factored this way, though not uniquely.
- Cholesky decomposition states that every real positive-definite symmetric matrix $A$ can be written as the product of a lower-triangular matrix $L$ and its transpose, $A = L L^{\mathsf{T}}$. This is an efficient factorization used throughout numerical linear algebra, for instance to solve systems of linear equations (a numerical sketch follows this list).
- If a symmetric matrix is indefinite, it can still be decomposed. The Bunch–Kaufman decomposition takes the form $P A P^{\mathsf{T}} = L D L^{\mathsf{T}}$, where $P$ is a permutation matrix (to handle the need for pivoting), $L$ is a lower unit triangular matrix, and $D$ is a block diagonal matrix, composed of a direct sum of symmetric $1 \times 1$ and $2 \times 2$ blocks.
- A general complex symmetric matrix $A$ that is diagonalizable (not all are, as some can be defective) can be decomposed as $A = Q \Lambda Q^{\mathsf{T}}$, where $Q$ is an orthogonal matrix ($Q Q^{\mathsf{T}} = I$) and $\Lambda$ is a diagonal matrix containing the eigenvalues of $A$. If $A$ is a real symmetric matrix, then $Q$ and $\Lambda$ are also real. The orthogonality of the eigenvectors can be seen by considering two eigenvectors $\mathbf{x}$ and $\mathbf{y}$ corresponding to distinct eigenvalues $\lambda_1$ and $\lambda_2$. Then $\lambda_1 \langle \mathbf{x}, \mathbf{y} \rangle = \langle A\mathbf{x}, \mathbf{y} \rangle = \langle \mathbf{x}, A\mathbf{y} \rangle = \lambda_2 \langle \mathbf{x}, \mathbf{y} \rangle$. Since $\lambda_1 \neq \lambda_2$, the only way for this equality to hold is if $\langle \mathbf{x}, \mathbf{y} \rangle = 0$. They are orthogonal. It's a clean, inevitable conclusion.
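Two of these factorizations are easy to demonstrate numerically. The sketch below is illustrative only; it assumes NumPy and SciPy (np.linalg.cholesky and scipy.linalg.polar), and the matrices are arbitrary examples:

```python
import numpy as np
from scipy.linalg import polar

# Cholesky: a symmetric positive-definite matrix as L @ L.T, L lower-triangular.
A = np.array([[4., 2., 0.],
              [2., 3., 1.],
              [0., 1., 2.]])
L = np.linalg.cholesky(A)
print(np.allclose(L @ L.T, A))             # True

# Polar decomposition of a non-singular matrix: X = Q @ S with Q orthogonal
# and S symmetric positive definite.
X = np.array([[1., 2.], [3., 4.]])
Q, S = polar(X)                            # right polar decomposition
print(np.allclose(Q @ S, X))               # True
print(np.allclose(Q.T @ Q, np.eye(2)))     # True: Q is orthogonal
print(np.allclose(S, S.T))                 # True: S is symmetric
```

SciPy also exposes a symmetric indefinite $LDL^{\mathsf{T}}$ factorization as scipy.linalg.ldl, which is related to the Bunch–Kaufman pivoting approach mentioned above.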
Hessian
Symmetric $n \times n$ matrices appear as the Hessians of twice differentiable functions of $n$ real variables. The symmetry of the Hessian matrix is guaranteed by Schwarz's theorem on the equality of mixed partials, which holds if the second derivatives are continuous (though this condition is stricter than necessary).
Every quadratic form $q$ on $\mathbb{R}^n$ can be uniquely written as $q(\mathbf{x}) = \mathbf{x}^{\mathsf{T}} A \mathbf{x}$ with a symmetric $n \times n$ matrix $A$. Thanks to the spectral theorem, we can choose an orthonormal basis for $\mathbb{R}^n$ that diagonalizes $A$. In this new coordinate system, the quadratic form simplifies dramatically:
$$q(x_1, \dots, x_n) = \sum_{i=1}^{n} \lambda_i x_i^2,$$
where the $\lambda_i$ are the real eigenvalues of $A$. This transformation simplifies the study of quadratic forms and their level sets, $\{\mathbf{x} : q(\mathbf{x}) = 1\}$, which are generalizations of conic sections.
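A brief NumPy sketch of this change of variables (the matrix and the test vector are arbitrary; y = Q^T x gives the coordinates in the orthonormal eigenbasis):

```python
import numpy as np

A = np.array([[3., 1.],
              [1., 2.]])
eigenvalues, Q = np.linalg.eigh(A)

x = np.array([0.7, -1.3])
y = Q.T @ x                               # coordinates in the eigenbasis

q_original = x @ A @ x                    # x^T A x
q_diagonal = np.sum(eigenvalues * y**2)   # sum of lambda_i * y_i^2
print(np.isclose(q_original, q_diagonal)) # True
```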
This is significant because the second-order behavior of any smooth multi-variable function near a point is described by the quadratic form associated with its Hessian. This is a direct consequence of Taylor's theorem. The Hessian, and its symmetric nature, is the key to understanding the local geometry of functions.
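As a small symbolic illustration of Hessian symmetry, here is a sketch assuming SymPy is available; the function f is an arbitrary smooth choice:

```python
from sympy import symbols, exp, hessian, simplify

x, y = symbols('x y')
f = x**3 * y + exp(x * y)          # an arbitrary smooth function of two variables

H = hessian(f, [x, y])             # 2x2 matrix of second partial derivatives
print(simplify(H - H.T))           # zero matrix: the mixed partials coincide
print(H.is_symmetric())            # True
```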
Symmetrizable matrix
An $n \times n$ matrix $A$ is called symmetrizable if there exists an invertible diagonal matrix $D$ and a symmetric matrix $S$ such that $A = DS$. In essence, a symmetrizable matrix is one that can be made symmetric by scaling its rows.
The transpose of a symmetrizable matrix is also symmetrizable, since $A^{\mathsf{T}} = (DS)^{\mathsf{T}} = SD = D^{-1}(DSD)$, and the matrix $DSD$ is symmetric. A matrix $A = (a_{ij})$ is symmetrizable if and only if it meets these two conditions:
- $a_{ij} = 0$ implies $a_{ji} = 0$ for all $1 \le i \le j \le n$. The pattern of zero and non-zero entries must be symmetric.
- $a_{i_1 i_2} a_{i_2 i_3} \cdots a_{i_k i_1} = a_{i_2 i_1} a_{i_3 i_2} \cdots a_{i_1 i_k}$ for any finite sequence of indices $(i_1, i_2, \dots, i_k)$. This is a path-product condition: the product of entries along any closed loop of indices must be the same as the product along the loop in the reverse direction.
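A minimal NumPy sketch of the definition (D and S below are arbitrary choices; the final check verifies the path-product condition on one length-3 cycle):

```python
import numpy as np

D = np.diag([2.0, -1.0, 3.0])           # invertible diagonal
S = np.array([[1., 4., 2.],
              [4., 2., 5.],
              [2., 5., 3.]])             # symmetric
A = D @ S                                # A is symmetrizable by construction

# Recover the symmetric factor: D^{-1} A should be symmetric.
S_recovered = np.linalg.inv(D) @ A
print(np.allclose(S_recovered, S_recovered.T))   # True

# Path-product condition on the cycle (1, 2, 3):
forward  = A[0, 1] * A[1, 2] * A[2, 0]
backward = A[1, 0] * A[2, 1] * A[0, 2]
print(np.isclose(forward, backward))             # True
```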
See also
Other types of symmetry or pattern in square matrices have special names; see for example:
- Skew-symmetric matrix (also called antisymmetric or antimetric)
- Centrosymmetric matrix
- Circulant matrix
- Covariance matrix
- Coxeter matrix
- GCD matrix
- Hankel matrix
- Hilbert matrix
- Persymmetric matrix
- Sylvester's law of inertia
- Toeplitz matrix
- Transpositions matrix
See also symmetry in mathematics.
Notes
- ^ Jesús Rojo García (1986). Álgebra lineal (in Spanish) (2nd ed.). Editorial AC. ISBN 84-7288-120-2.
- ^ Bellman, Richard (1997). Introduction to Matrix Analysis (2nd ed.). SIAM. ISBN 0-89871-399-4.
- ^ Horn & Johnson 2013, pp. 263, 278
- ^ See:
- Autonne, L. (1915), "Sur les matrices hypohermitiennes et sur les matrices unitaires", Ann. Univ. Lyon, 38: 1–77
- Takagi, T. (1925), "On an algebraic problem related to an analytic theorem of Carathéodory and Fejér and on an allied theorem of Landau", Jpn. J. Math., 1: 83–93, doi:10.4099/jjm1924.1.0_83
- Siegel, Carl Ludwig (1943), "Symplectic Geometry", American Journal of Mathematics, 65 (1): 1–86, doi:10.2307/2371774, JSTOR 2371774, Lemma 1, page 12
- Hua, L.-K. (1944), "On the theory of automorphic functions of a matrix variable I–geometric basis", Amer. J. Math., 66 (3): 470–488, doi:10.2307/2371910, JSTOR 2371910
- Schur, I. (1945), "Ein Satz über quadratische Formen mit komplexen Koeffizienten", Amer. J. Math., 67 (4): 472–480, doi:10.2307/2371974, JSTOR 2371974
- Benedetti, R.; Cragnolini, P. (1984), "On simultaneous diagonalization of one Hermitian and one symmetric form", Linear Algebra Appl., 57: 215–226, doi:10.1016/0024-3795(84)90189-7
- ^ Bosch, A. J. (1986). "The factorization of a square matrix into two symmetric matrices". American Mathematical Monthly. 93 (6): 462–464. doi:10.2307/2323471. JSTOR 2323471.
- ^ Golub, G.H.; van Loan, C.F. (1996). Matrix Computations. Johns Hopkins University Press. ISBN 0-8018-5413-X. OCLC 34515797.
- ^ Dieudonné, Jean A. (1969). "Theorem (8.12.2)". Foundations of Modern Analysis. Academic Press. p. 180. ISBN 0-12-215550-5. OCLC 576465.