Semisimple Algebra

In ring theory, a branch of mathematics, a semisimple algebra over a field is an associative Artinian algebra whose Jacobson radical is trivial, containing only the zero element. If the algebra is finite-dimensional, this condition is equivalent to saying that the algebra can be decomposed as a Cartesian product of simple subalgebras: once the radical is stripped away, what remains is a collection of fundamental, indivisible building blocks.

Definition

For an algebra over a field, the Jacobson radical is the ideal consisting of all elements that annihilate every simple left module: multiplying such an element by anything in a simple module always yields zero. The Jacobson radical contains every nilpotent ideal (an ideal I with Iⁿ = {0} for some positive integer n), and if the algebra is finite-dimensional, the radical is itself a nilpotent ideal. A finite-dimensional algebra is therefore semisimple precisely when its Jacobson radical is the zero ideal.
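As a concrete illustration (the helper function and matrices below are purely illustrative, not drawn from any reference): the algebra of 2 × 2 upper triangular matrices has a nonzero Jacobson radical, namely the strictly upper triangular matrices, and so is not semisimple. A minimal pure-Python sketch:

```python
def matmul(a, b):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# N is strictly upper triangular, hence lies in the radical of the
# upper triangular matrix algebra.
N = [[0, 1],
     [0, 0]]

# N^2 = 0: the ideal spanned by N is nilpotent, so the radical is nonzero.
print(matmul(N, N))  # [[0, 0], [0, 0]]
```

Since the radical is nonzero, this algebra fails the semisimplicity criterion above.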

Now, an algebra A is called simple if it has no two-sided ideals other than {0} and A itself, and if additionally A² ≠ {0}, i.e. the product of some pair of its elements is nonzero. As the name suggests, simple algebras are semisimple: it suffices to check that A is not nilpotent. Since A² is an ideal of A and A is simple with A² ≠ {0}, we must have A² = A. Iterating this, Aⁿ = A for every positive integer n, so no power of A is ever zero.
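The step A² = A can be made concrete in the simple algebra M₂(ℚ): every matrix unit Eᵢⱼ factors as Eᵢᵢ · Eᵢⱼ, so every basis element already lies in A². A small pure-Python check (the helper names are illustrative, not from any source):

```python
def matmul(a, b):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def unit(i, j):
    """The matrix unit E_ij: 1 in position (i, j), 0 elsewhere."""
    return [[1 if (r, c) == (i, j) else 0 for c in range(2)] for r in range(2)]

# Each basis element E_ij of M_2(Q) is a product E_ii * E_ij, so A^2 = A.
for i in range(2):
    for j in range(2):
        assert matmul(unit(i, i), unit(i, j)) == unit(i, j)
```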

Consider a self-adjoint subalgebra A of the n × n matrices with complex entries. Such a subalgebra is always semisimple. Denote its radical by Rad(A), and let M be a matrix in Rad(A). Then M*M (the conjugate transpose of M multiplied by M) lies in some nilpotent ideal of A, so (M*M)ᵏ = 0 for some positive integer k. Because M*M is positive semidefinite, this can only hold if M*M itself is the zero matrix. But if M*M = 0, then multiplying M by any vector x gives the zero vector, so M must be the zero matrix. Thus the radical contains only the zero element.
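The positive-semidefiniteness step can be checked numerically: the trace of M*M is the sum of the squared moduli of the entries of M, so it vanishes only when M = 0. A pure-Python sketch (the sample matrix and helper names are arbitrary illustrations):

```python
def conj_transpose(m):
    """Conjugate transpose of a square complex matrix (nested lists)."""
    n = len(m)
    return [[complex(m[j][i]).conjugate() for j in range(n)] for i in range(n)]

def matmul(a, b):
    """Multiply two square matrices given as nested lists."""
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def trace(m):
    return sum(m[i][i] for i in range(len(m)))

M = [[1 + 2j, 0],
     [3j, 1]]

# trace(M* M) = |1+2j|^2 + |0|^2 + |3j|^2 + |1|^2 = 5 + 0 + 9 + 1 = 15;
# it is zero only when every entry of M is zero.
print(trace(matmul(conj_transpose(M), M)))  # (15+0j)
```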

Conversely, the Cartesian product A = Π Aᵢ of a finite collection of simple algebras {Aᵢ} is again semisimple. Suppose (aᵢ) lies in the radical of A, and let e₁ be the multiplicative identity of A₁ (assume each simple factor is unital). Then (a₁, a₂, ...) multiplied by (e₁, 0, ...) gives (a₁, 0, ..., 0), which lies in some nilpotent ideal of Π Aᵢ. Projecting to the first coordinate, a₁ lies in a nilpotent ideal of A₁ and hence in Rad(A₁). But A₁ is simple, so Rad(A₁) = {0} and a₁ = 0. The same argument shows that every aᵢ is 0, so the radical of the product of simple algebras is trivial.
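The projection trick in this argument can be mirrored in a toy model in which each simple factor is replaced, purely for illustration, by a field of scalars: multiplication in the product is coordinatewise, and multiplying by the embedded identity of the first factor isolates the first coordinate.

```python
def prod_mul(x, y):
    """Coordinatewise multiplication in a product algebra A1 x A2 x ..."""
    return tuple(a * b for a, b in zip(x, y))

a = (3.0, 5.0, 7.0)     # a hypothetical element (a1, a2, a3) of A1 x A2 x A3
e1 = (1.0, 0.0, 0.0)    # the identity of A1, embedded as (e1, 0, 0)

print(prod_mul(a, e1))  # (3.0, 0.0, 0.0)
```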

The converse is less immediate: proving that every finite-dimensional semisimple algebra can indeed be expressed as a Cartesian product of finitely many simple algebras requires deeper mathematical machinery.

Characterization

Let A be a finite-dimensional semisimple algebra, and let

{0} = J₀ ⊂ J₁ ⊂ ... ⊂ Jₙ ⊂ A

be a composition series of A, that is, an increasing chain of ideals. Then A is isomorphic to the Cartesian product

J₁ × (J₂/J₁) × (J₃/J₂) × ... × (Jₙ/Jₙ₋₁) × (A/Jₙ),

where each factor Jᵢ₊₁/Jᵢ is a simple algebra.

The essence of the proof can be sketched in a few steps. First, using the assumption that A is semisimple, one shows that J₁ is a simple algebra and, importantly, that it is unital (it has a multiplicative identity). Since J₁ is both a subalgebra and an ideal of J₂, the latter decomposes as a direct product: J₂ ≅ J₁ × (J₂/J₁). Because J₁ is a maximal ideal of J₂ and A is semisimple, the quotient algebra J₂/J₁ is also simple. The argument continues by induction: J₃ ≅ J₂ × (J₃/J₂) ≅ J₁ × (J₂/J₁) × (J₃/J₂), and so on until the structure of the entire algebra is unravelled.

This rather elegant result can be rephrased. If A = A₁ × ... × Aₙ is a semisimple algebra decomposed into its simple factors, and eᵢ is the unit of Aᵢ, then the elements Eᵢ = (0, ..., eᵢ, ..., 0) of A are idempotent (Eᵢ² = Eᵢ) and lie in the center of A, meaning they commute with every element of A. Furthermore, EᵢEⱼ = 0 whenever i ≠ j (the Eᵢ are central orthogonal idempotents), their sum is the multiplicative identity of A (Σ Eᵢ = 1), and A is isomorphic to the Cartesian product of the simple algebras E₁A × ... × EₙA.
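These properties can be verified in a concrete model in which A is realized, for illustration only, as block-diagonal 3 × 3 matrices standing in for M₁(ℂ) × M₂(ℂ): E₁ projects onto the 1 × 1 block and E₂ onto the 2 × 2 block.

```python
def matmul(a, b):
    """Multiply two square matrices given as nested lists."""
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

E1 = [[1, 0, 0],
      [0, 0, 0],
      [0, 0, 0]]
E2 = [[0, 0, 0],
      [0, 1, 0],
      [0, 0, 1]]
I  = [[1, 0, 0],
      [0, 1, 0],
      [0, 0, 1]]

assert matmul(E1, E1) == E1 and matmul(E2, E2) == E2   # idempotent
assert matmul(E1, E2) == [[0] * 3 for _ in range(3)]    # orthogonal
assert [[E1[i][j] + E2[i][j] for j in range(3)]
        for i in range(3)] == I                         # they sum to 1
```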

Classification

A pivotal theorem, credited to Joseph Wedderburn, provides a complete classification of finite-dimensional semisimple algebras over a field k. It states that any such algebra is isomorphic to a finite product of matrix algebras: ∏ Mₙᵢ(Dᵢ). Here, the nᵢ are positive integers, the Dᵢ are division algebras over k, and Mₙᵢ(Dᵢ) denotes the algebra of nᵢ × nᵢ matrices whose entries are elements of Dᵢ. This decomposition into matrix algebras is unique, up to the order of the factors.
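A well-known instance of the theorem is the complex group algebra ℂ[S₃], which decomposes as ℂ × ℂ × M₂(ℂ). The dimension bookkeeping below (a trivial sketch; the helper name is invented here) checks that the factor dimensions Σ nᵢ² · dim Dᵢ add up to |S₃| = 6:

```python
def semisimple_dim(factors):
    """Dimension over k of a product of matrix algebras prod M_{n_i}(D_i).

    `factors` is a list of pairs (n_i, dimension of D_i over k); the
    dimension of M_{n_i}(D_i) is n_i^2 times the dimension of D_i.
    """
    return sum(n * n * d for n, d in factors)

# C[S_3] = C x C x M_2(C): factors (1, 1), (1, 1), (2, 1) over k = C.
print(semisimple_dim([(1, 1), (1, 1), (2, 1)]))  # 6, the order of S_3
```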

This work was later extended by Emil Artin to encompass semisimple rings, not just algebras over fields. The generalized result is now known as the Wedderburn–Artin theorem.