The study of modules over associative algebras belongs to abstract algebra. In this context, an associative algebra is taken to be a ring, not necessarily a unital one. If the algebra lacks a unit element, one can be adjoined in a standard way; this construction, often discussed alongside adjoint functors, does not fundamentally alter the module theory. The representations of the original algebra correspond exactly to the modules over the resulting unital ring in which the identity element acts as the identity mapping.
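To make this augmentation concrete, here is a minimal Python sketch (assuming NumPy; the class name `Unitized` and the choice of strictly upper-triangular $2 \times 2$ matrices as the non-unital algebra are illustrative, not a standard API):

```python
# Sketch of adjoining a unit to a non-unital algebra.
# An element of the unitization is a pair (r, a): the formal sum r*1 + a,
# with r a real scalar and a in the original algebra A. Here A is modeled
# as strictly upper-triangular 2x2 matrices, which has no unit element.
import numpy as np

class Unitized:
    """The formal sum r*1 + a in the unitization of A."""
    def __init__(self, r, a):
        self.r = float(r)
        self.a = np.asarray(a, dtype=float)

    def __mul__(self, other):
        # (r1 + a1)(r2 + a2) = r1*r2 + (r1*a2 + r2*a1 + a1*a2)
        return Unitized(
            self.r * other.r,
            self.r * other.a + other.r * self.a + self.a @ other.a,
        )

one = Unitized(1.0, np.zeros((2, 2)))        # the adjoined unit
x = Unitized(0.0, [[0.0, 1.0], [0.0, 0.0]])  # an element of A itself

p = one * x   # the adjoined unit acts as the identity: p equals x
```

Multiplication here is the formal identity $(r_1 + a_1)(r_2 + a_2) = r_1 r_2 + (r_1 a_2 + r_2 a_1 + a_1 a_2)$, and `one` is a genuine unit even though $A$ itself has none.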
Examples
Linear Complex Structure
A particularly illuminating and relatively straightforward example of a non-trivial representation is what is known as a linear complex structure. This concept arises when we consider the complex numbers, denoted $\mathbb{C}$, not just as a number system but as an associative algebra over the real numbers, $\mathbb{R}$. This algebraic structure is concretely realized by the quotient ring:
$$ \mathbb{C} = \mathbb{R}[x] / (x^2 + 1) $$
This definition encapsulates the fundamental property of the imaginary unit $i$, namely $i^2 = -1$. A representation of this algebra over $\mathbb{R}$ then amounts to a real vector space $V$ together with an action of $\mathbb{C}$ on $V$, formally an algebra homomorphism from $\mathbb{C}$ to the algebra of linear transformations of $V$, denoted $\mathrm{End}(V)$:
$$ \mathbb{C} \to \mathrm{End}(V) $$
In practical terms, since the algebra is generated by $i$, the entire action is determined by the action of $i$. The operator that represents $i$ within $\mathrm{End}(V)$ is commonly designated as $J$. This notation is employed to preempt any potential confusion with the identity matrix, $I$, which might also be present in the linear transformations. The operator $J$ is the key element that imbues the real vector space $V$ with a complex structure, effectively turning it into a complex vector space.
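A minimal numerical sketch of this (assuming NumPy; the helper `act` is illustrative): on $V = \mathbb{R}^2$, take $J$ to be the 90-degree rotation matrix, so that $J^2 = -I$ and $a + bi$ acts as $aI + bJ$.

```python
# A linear complex structure on R^2: the operator J representing i is the
# 90-degree rotation matrix, which satisfies J @ J = -I (mirroring i^2 = -1).
import numpy as np

J = np.array([[0.0, -1.0],
              [1.0,  0.0]])
I = np.eye(2)

def act(a, b, v):
    """Action of the complex number a + b*i on a vector v in R^2."""
    return (a * I + b * J) @ v

# Compatibility with complex multiplication: (2 + 3i)(1 - i) = 5 + i,
# so composing the two actions agrees with the action of the product.
v = np.array([1.0, 2.0])
lhs = act(2.0, 3.0, act(1.0, -1.0, v))
rhs = act(5.0, 1.0, v)
```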
Polynomial Algebras
Another fundamental and widely studied class of examples involves representations of polynomial algebras. These algebras are the free commutative algebras, and they occupy a central position in both commutative algebra and its geometric counterpart, algebraic geometry.
Consider a polynomial algebra in $k$ variables over a field $K$. A representation of such an algebra is concretely understood as a $K$-vector space endowed with $k$ commuting operators. This abstract algebra, formally written as:
$$ K[x_1, \dots, x_k] $$
is represented by a mapping where the abstract variables $x_i$ correspond to these commuting operators, denoted as $T_i$:
$$ x_i \mapsto T_i $$
The representation is then denoted as:
$$ K[T_1, \dots, T_k] $$
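Concretely, a small sketch for $k = 2$ (assuming NumPy; the diagonal matrices and the polynomial are illustrative): the variables map to commuting matrices, and any polynomial is then evaluated on them.

```python
# A representation of K[x1, x2] on R^2: the generators map to two
# commuting operators T1 and T2.
import numpy as np

T1 = np.diag([2.0, 3.0])
T2 = np.diag([5.0, 7.0])
assert np.allclose(T1 @ T2, T2 @ T1)    # the images must commute

# Image of the polynomial p(x1, x2) = x1^2 + x1*x2 - 4 under x_i -> T_i:
p_T = T1 @ T1 + T1 @ T2 - 4.0 * np.eye(2)
```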
A pivotal result concerning these representations, valid when the underlying field $K$ is algebraically closed, asserts that the representing matrices can be simultaneously triangularised: there exists a single basis in which all the operators $T_i$ are upper triangular matrices.
The significance of even the simplest case, the representation of a polynomial algebra in a single variable, cannot be overstated. This is denoted by $K[T]$, and it serves as a crucial tool for dissecting the structure of a single linear operator acting on a finite-dimensional vector space. By applying the structure theorem for finitely generated modules over a principal ideal domain to this algebra, one can derive various canonical forms of matrices as corollaries, most notably the Jordan canonical form.
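As an illustration (assuming SymPy is available), the Jordan form of a non-diagonalizable $2 \times 2$ matrix can be computed directly; SymPy's `jordan_form` returns a transition matrix $P$ and the Jordan form $J$ with $T = PJP^{-1}$:

```python
# The Jordan canonical form as a description of the K[T]-module structure
# of the underlying vector space, computed exactly with SymPy.
from sympy import Matrix

T = Matrix([[5, 4],
            [-1, 1]])    # double eigenvalue 3, but not diagonalizable

P, J = T.jordan_form()   # T = P * J * P**-1; J is one 2x2 Jordan block
```

Here $T$ has the single eigenvalue $3$ with a one-dimensional eigenspace, so the module structure forces a single Jordan block rather than a diagonal matrix.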
In certain advanced theoretical frameworks, such as some approaches to noncommutative geometry, the free noncommutative algebra (polynomials in non-commuting variables) plays a role analogous to that of polynomial algebras. However, the analytical challenges in studying these noncommutative structures are considerably more formidable.
Weights
The concept of eigenvalues and eigenvectors, so fundamental in linear algebra, finds a sophisticated generalization within the study of algebra representations.
The direct analogue of an eigenvalue in the context of an algebra representation is not a mere scalar but a one-dimensional representation. Formally, this is an algebra homomorphism from the algebra $A$ to its base ring $R$: a linear functional that is also multiplicative. Such a map is termed a weight. Correspondingly, the generalizations of an eigenvector and an eigenspace are known as a weight vector and a weight space, respectively.
To illustrate, consider the case of the eigenvalue of a single operator, which corresponds to the algebra $R[T]$. A map of algebras $R[T] \to R$ is entirely determined by the scalar value to which it maps the generator $T$. A weight vector, in this broader context, is a vector within the representation space $M$ such that any element of the algebra $A$ maps this vector to a scalar multiple of itself. This scalar multiple is dictated by the weight map $\lambda$. More formally, a vector $m \in M$ is a weight vector if, for all elements $a \in A$, the action of $a$ on $m$ is given by:
$$ am = \lambda(a)m $$
where $\lambda$ is a linear functional on $A$. It’s important to note the distinction between the left side, where $am$ represents the action of the algebra element $a$ on $m$, and the right side, where $\lambda(a)m$ signifies scalar multiplication.
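A small numerical check (assuming NumPy): for the algebra $\mathbb{R}[T]$, the weight attached to an eigenvector with eigenvalue $c$ is evaluation at $c$, so any polynomial $p(T)$ acts on that eigenvector by the scalar $p(c)$.

```python
# For R[T], a weight is "evaluate at c": if T v = c v, then p(T) v = p(c) v.
import numpy as np

T = np.array([[2.0, 1.0],
              [0.0, 3.0]])
v = np.array([1.0, 0.0])     # eigenvector of T with eigenvalue c = 2

p_T = T @ T - 4.0 * T + np.eye(2)      # action of p(x) = x^2 - 4x + 1
weight = 2.0 ** 2 - 4.0 * 2.0 + 1.0    # p(2) = -3; p_T maps v to weight * v
```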
Since a weight $\lambda$ is a map to a commutative ring, it factors through the abelianization of the algebra $A$; equivalently, its kernel contains (annihilates) the derived algebra. In terms of matrices, if $v$ is a common eigenvector of operators $T$ and $U$, then $TUv = UTv$, so $T$ and $U$ commute when acting on $v$. Common eigenvectors of an algebra must therefore lie in a subspace on which the algebra acts commutatively, and the focus consequently shifts to the free commutative algebras, namely polynomial algebras.
In this particular and highly significant scenario, involving a set of commuting matrices, a weight vector of the algebra corresponds to a simultaneous eigenvector of these matrices. A weight itself is simply a $k$-tuple of scalars, $\lambda = (\lambda_1, \dots, \lambda_k)$, where each $\lambda_i$ is the eigenvalue of the corresponding matrix $T_i$. Geometrically, this $k$-tuple represents a point in $k$-dimensional space. These weights, and particularly their geometric configurations, are of paramount importance in understanding the representation theory of Lie algebras, especially the finite-dimensional representations of semisimple Lie algebras.
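A minimal sketch of this correspondence (assuming NumPy; the diagonal matrices are illustrative):

```python
# For commuting matrices, a weight vector is a simultaneous eigenvector,
# and the weight is the tuple of eigenvalues, one per matrix.
import numpy as np

T1 = np.diag([1.0, 4.0])
T2 = np.diag([-2.0, 6.0])
v = np.array([0.0, 1.0])     # simultaneous eigenvector of T1 and T2

lam = (4.0, 6.0)             # the weight: T1 v = 4 v and T2 v = 6 v
```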
As a direct application of this geometric perspective, consider an algebra that is a quotient of a polynomial algebra in $k$ generators. Geometrically, such an algebra corresponds to an algebraic variety embedded in $k$-dimensional space, and its weights must lie on that variety, i.e. satisfy its defining equations. This generalizes the single-variable fact that the eigenvalues of a matrix satisfy its characteristic polynomial.
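To sketch this constraint numerically (assuming NumPy; the matrices are illustrative), set $U = T^2$, so the pair $(T, U)$ represents the quotient algebra $K[x, y]/(y - x^2)$; every weight $(\lambda, \mu)$ then lies on the parabola $\mu = \lambda^2$:

```python
# Weights of a quotient algebra lie on the corresponding variety.
import numpy as np

T = np.diag([1.0, 2.0, 3.0])
U = T @ T                    # imposes the defining relation y = x^2

# Each standard basis vector is a simultaneous eigenvector; its weight is
# the pair of diagonal entries, which satisfies mu = lam**2.
weights = [(T[i, i], U[i, i]) for i in range(3)]
```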
See Also
- Representation theory
- Intertwiner
- Representation theory of Hopf algebras
- Lie algebra representation
- Schur's lemma
- Jacobson density theorem
- Double commutant theorem
Notes
- For a field, the endomorphism algebra of a one-dimensional vector space (a line) is canonically identified with the field itself, since every endomorphism of such a space is scalar multiplication. There is therefore no loss of generality in working with concrete maps to the base field rather than with abstract one-dimensional representations. For rings there are also maps to quotient rings, which need not factor through maps to the ring itself, but again abstract one-dimensional modules are not needed.