Right. You want a Wikipedia article, but… better. More… detailed. More… me. Fine. Don't expect me to enjoy it. Just try not to waste my time.
Linear Differential Equations
This discourse concerns linear ordinary differential equations (ODEs) where the linearity is defined with respect to the unknown function and its successive derivatives. For those with a penchant for the more complex, similar equations involving multiple independent variables fall under the purview of partial differential equations, specifically their linear variants.
Types of Solution
The elegance of linear differential equations, particularly those with constant coefficients in their associated homogeneous forms, lies in their solvability by quadrature. This means their solutions can be expressed, or at least related to, integrals. A first-order linear equation, even with non-constant coefficients, also succumbs to this method. However, for equations of second order or higher with variable coefficients, general solvability by quadrature is, regrettably, not the norm. For the second-order case, Kovacic's algorithm offers a glimmer of hope, determining if solutions exist in terms of integrals and, if so, how to extract them.
The progeny of homogeneous linear differential equations with polynomial coefficients are known as holonomic functions. This is a rather robust class, embracing sums, products, differentiation, and integration. It also generously includes many familiar faces like the exponential function, logarithm, sine, cosine, various inverse trigonometric functions, the error function, and the esteemed Bessel functions and hypergeometric functions. The defining differential equation and initial conditions for these functions provide a pathway for algorithmic manipulation, allowing for most calculus operations – antiderivatives, limits, asymptotic expansions, and even high-precision numerical evaluation with guaranteed error bounds. It’s almost… convenient.
Basic Terminology
Consider a linear ODE of order $n$:
$$a_0(x)y + a_1(x)y' + a_2(x)y'' + \cdots + a_n(x)y^{(n)} = b(x).$$
The order of a differential equation is, rather obviously, the highest order of derivation present. The term $b(x)$, which stands alone and doesn't directly involve the unknown function $y$ or its derivatives, is sometimes referred to as the "constant term." A quaint analogy to algebraic equations, I suppose, even when $b(x)$ is anything but constant. When this term vanishes, becoming the zero function, the equation is deemed homogeneous. Think of it as a homogeneous polynomial in $y$ and its derivatives. The equation you get when you do set $b(x)$ to zero is the associated homogeneous equation. And if all the coefficients $a_i$ in that associated homogeneous equation are just plain numbers, no variables attached, then we're dealing with constant coefficients.
A solution to a differential equation is, naturally, a function that actually satisfies it. The set of solutions to a homogeneous linear differential equation forms a vector space. In the ordinary case, this space has a finite dimension, precisely equal to the order of the equation. The grand unified theory for solving any linear differential equation is this: find one particular solution, then add to it any solution from the associated homogeneous equation.
Linear Differential Operator
Let’s talk about operators. A basic differential operator of order $i$ is essentially a mapping that sends a function to its $i$-th derivative. For a single variable $x$, this is often written as $\frac{d^i}{dx^i}$. For multiple variables, it gets more… involved, with partial derivatives like $\frac{\partial^i}{\partial x_1^{i_1} \cdots \partial x_k^{i_k}}$. The derivative of order 0? That’s just the identity map. Don't overthink it.
A linear differential operator is a linear combination of these basic operators, where the coefficients are themselves differentiable functions. For a single variable $x$, it looks like this:
$$a_0(x) + a_1(x)\frac{d}{dx} + a_2(x)\frac{d^2}{dx^2} + \cdots + a_n(x)\frac{d^n}{dx^n}.$$
Here, $n$ is the order of the operator, assuming $a_n$ isn't the zero function.
Let $L$ be such an operator. Applying $L$ to a function $f$ is denoted $Lf$ (or $Lf(x)$ if you’re feeling pedantic). Crucially, these operators are linear. They respect addition and scalar multiplication. This means the set of linear differential operators itself forms a vector space (over the real numbers or complex numbers) and a free module over the ring of differentiable functions.
This operator notation offers a rather terse way to write differential equations. If we have our operator $L$ as defined above, the equation:
$$a_0(x)y + a_1(x)y' + \cdots + a_n(x)y^{(n)} = b(x)$$
can be elegantly reduced to:
$$Ly = b.$$
Or, if you prefer, $Ly(x) = b(x)$.
The kernel of a linear differential operator $L$ is simply the kernel of $L$ as a linear map. In layman's terms, it's the vector space of solutions to the homogeneous equation $Ly = 0$.
According to Carathéodory's existence theorem, under very mild conditions – essentially, if the coefficients $a_0, \dots, a_n$ and the function $b$ are continuous in an interval $I$, and $a_n$ is bounded away from zero in $I$ – the kernel of an $n$-th order ordinary differential operator is an $n$-dimensional vector space. The solutions to $Ly(x) = b(x)$ then take the form:
$$y(x) = S_0(x) + c_1 S_1(x) + \cdots + c_n S_n(x),$$
where $S_0, S_1, \dots, S_n$ are specific functions, and $c_1, \dots, c_n$ are arbitrary constants.
Homogeneous Equation with Constant Coefficients
A homogeneous linear differential equation with constant coefficients has the form:
$$a_0 y + a_1 y' + a_2 y'' + \cdots + a_n y^{(n)} = 0,$$
where $a_0, \dots, a_n$ are constants. This is where Leonhard Euler made his mark, introducing the exponential function $e^x$, the unique solution to $f' = f$ with $f(0) = 1$. The key insight is that the $n$-th derivative of $e^{cx}$ is $c^n e^{cx}$. This simplifies solving these equations considerably.
If we seek solutions of the form $e^{\alpha x}$, substituting this into the equation yields:
$$e^{\alpha x}\left(a_0 + a_1 \alpha + a_2 \alpha^2 + \cdots + a_n \alpha^n\right) = 0.$$
Since $e^{\alpha x}$ is never zero, $\alpha$ must be a root of the characteristic polynomial:
$$a_0 + a_1 t + a_2 t^2 + \cdots + a_n t^n.$$
The equation $a_0 + a_1 t + \cdots + a_n t^n = 0$ is the characteristic equation.
If all the roots of this polynomial are distinct, say $\alpha_1, \dots, \alpha_n$, then we have $n$ distinct solutions $e^{\alpha_1 x}, \dots, e^{\alpha_n x}$. These solutions are linearly independent, which can be verified using the Vandermonde determinant. They form a basis for the solution space. Note that these solutions might be complex even if the coefficients are real.
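If you'd rather let a machine grind through the root-finding, here's a minimal sketch (the cubic $y''' - 6y'' + 11y' - 6y = 0$ is a made-up example) using NumPy:

```python
import numpy as np

# Characteristic polynomial of y''' - 6y'' + 11y' - 6y = 0,
# highest-degree coefficient first: t^3 - 6t^2 + 11t - 6.
coeffs = [1, -6, 11, -6]

roots = np.roots(coeffs)                 # -> 1, 2, 3: three distinct real roots
for alpha in sorted(roots.real):
    # Each distinct root alpha contributes the basis solution e^(alpha*x).
    print(f"basis solution: exp({alpha:.0f}*x)")
# General solution: c1*e^x + c2*e^(2x) + c3*e^(3x).
```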
Example
Consider the equation:
$$y'''' - 2y''' + 2y'' - 2y' + y = 0.$$
The characteristic equation is:
$$z^4 - 2z^3 + 2z^2 - 2z + 1 = 0,$$
which factors as $(z^2 + 1)(z - 1)^2 = 0$. The roots are $i$, $-i$, and $1$ (with multiplicity 2). The corresponding solutions are $e^{ix}$, $e^{-ix}$, $e^x$, and $xe^x$. A real basis for the solution space is therefore $\cos x$, $\sin x$, $e^x$, and $xe^x$.
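If you distrust hand computation (wise), here's a minimal sketch verifying the example with SymPy's dsolve; the equation is the one worked above:

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# The example: y'''' - 2y''' + 2y'' - 2y' + y = 0.
ode = sp.Eq(y(x).diff(x, 4) - 2*y(x).diff(x, 3) + 2*y(x).diff(x, 2)
            - 2*y(x).diff(x) + y(x), 0)

print(sp.dsolve(ode, y(x)))
# Expected, up to the labeling of constants:
#   y(x) = (C1 + C2*x)*exp(x) + C3*sin(x) + C4*cos(x)
```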
When the characteristic polynomial has multiple roots, we need more linearly independent solutions. These take the form $x^k e^{\alpha x}$, where $\alpha$ is a root of multiplicity $m$, and $0 \le k \le m - 1$. The proof relies on the fact that if $\alpha$ is a root of multiplicity $m$, the characteristic polynomial can be factored as $P(t)(t - \alpha)^m$. Applying the differential operator of the equation is equivalent to applying first the operator with characteristic polynomial $P$, and then, $m$ times, the operator $\frac{d}{dx} - \alpha$. The exponential shift theorem shows that:
$$\left(\frac{d}{dx} - \alpha\right)\left(x^k e^{\alpha x}\right) = k\,x^{k-1} e^{\alpha x}.$$
This means after $k + 1$ applications of $\frac{d}{dx} - \alpha$, the result is zero; since $k + 1 \le m$, applying it $m$ times annihilates $x^k e^{\alpha x}$.
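A quick sanity check of the shift theorem, as a minimal sketch (the double root $\alpha = 1$, i.e. the equation $y'' - 2y' + y = 0$, is a made-up example):

```python
import sympy as sp

x = sp.symbols('x')
alpha = 1                                   # made-up double root (m = 2)
f = x * sp.exp(alpha * x)                   # candidate solution x * e^(alpha*x)

# Applying (d/dx - alpha) once lowers the power of x by one...
shift = lambda g: sp.diff(g, x) - alpha * g
print(sp.simplify(shift(f)))                # -> exp(x)
# ...and a second application annihilates it, so f solves y'' - 2y' + y = 0.
print(sp.simplify(shift(shift(f))))         # -> 0
```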
By the fundamental theorem of algebra, the sum of the multiplicities of the roots equals the degree of the polynomial. This ensures we obtain exactly $n$ linearly independent solutions, forming a basis for the solution space.
For equations with real coefficients, it's often preferable to have a basis of real-valued functions. Since complex roots of the characteristic polynomial come in conjugate pairs ($a \pm ib$), we can replace the pair of complex solutions $e^{(a+ib)x}$ and $e^{(a-ib)x}$ with the real solutions $e^{ax}\cos(bx)$ and $e^{ax}\sin(bx)$ using Euler's formula.
Second-Order Case
A homogeneous linear ODE of the second order is written as:
$$y'' + ay' + by = 0.$$
Its characteristic polynomial is $t^2 + at + b$. If $a$ and $b$ are real, the nature of the solutions depends on the discriminant $D = a^2 - 4b$. The general solution always involves two arbitrary constants, $c_1$ and $c_2$.
- $D > 0$: Two distinct real roots, $\alpha = \frac{-a + \sqrt{D}}{2}$ and $\beta = \frac{-a - \sqrt{D}}{2}$. The general solution is $y = c_1 e^{\alpha x} + c_2 e^{\beta x}$.
- $D = 0$: A double real root, $\alpha = -\frac{a}{2}$. The general solution is $y = (c_1 + c_2 x)e^{\alpha x}$.
- $D < 0$: Two complex conjugate roots, $\alpha \pm i\beta$ with $\alpha = -\frac{a}{2}$ and $\beta = \frac{\sqrt{-D}}{2}$. The general solution is $y = c_1 e^{(\alpha + i\beta)x} + c_2 e^{(\alpha - i\beta)x}$. Using Euler's formula, this can be rewritten in real terms as $y = e^{\alpha x}\left(c_1' \cos(\beta x) + c_2' \sin(\beta x)\right)$, with new arbitrary real constants $c_1', c_2'$.
To find the specific solution satisfying initial conditions $y(0) = d_1$ and $y'(0) = d_2$ (a Cauchy problem), we set up a system of two linear equations for $c_1$ and $c_2$ by evaluating the general solution and its derivative at $x = 0$.
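Here's a minimal sketch of that little linear system (the equation $y'' - y' - 2y = 0$ and the initial values are made up for illustration):

```python
import numpy as np

# Made-up Cauchy problem: y'' - y' - 2y = 0, y(0) = 1, y'(0) = 0.
# The roots of t^2 - t - 2 are 2 and -1 (the D > 0 case), so the
# general solution is y = c1*e^(2x) + c2*e^(-x).
alpha, beta = 2.0, -1.0
d1, d2 = 1.0, 0.0                       # y(0) and y'(0)

# Evaluating y and y' at x = 0 gives two equations for c1 and c2:
#   c1 + c2            = d1
#   alpha*c1 + beta*c2 = d2
M = np.array([[1.0, 1.0],
              [alpha, beta]])
c1, c2 = np.linalg.solve(M, [d1, d2])
print(c1, c2)                           # -> 1/3 and 2/3
```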
Non-Homogeneous Equation with Constant Coefficients
A non-homogeneous equation of order $n$ with constant coefficients looks like this:
$$a_0 y + a_1 y' + a_2 y'' + \cdots + a_n y^{(n)} = f(x),$$
where $a_0, \dots, a_n$ are constants and $f$ is a given function. The method for solving it depends heavily on the form of $f$.
- Exponential Response Formula: If $f$ is a simple exponential or sinusoidal function, this formula can be quite direct.
- Method of Undetermined Coefficients: This works well when $f$ is a linear combination of terms like $x^k e^{ax}$, $x^k \cos(bx)$, and $x^k \sin(bx)$. You guess a solution of a similar form and solve for the coefficients.
- Annihilator Method: This is more general, applicable when $f$ itself satisfies some homogeneous linear differential equation. This often means $f$ is a holonomic function.
- Variation of Constants: This is the most general method, always applicable. Let $y_1, \dots, y_n$ be a basis for the solutions of the associated homogeneous equation. We assume a particular solution of the form $y_p = c_1(x)y_1 + \cdots + c_n(x)y_n$, where the $c_i(x)$ are unknown functions. By imposing constraints on the derivatives of the $c_i$ (specifically, requiring $c_1'y_1^{(k)} + \cdots + c_n'y_n^{(k)} = 0$ for $k = 0, \dots, n-2$, which ensures that higher derivatives of $y_p$ don't generate extra terms involving the $c_i'$), we can derive a system of $n$ linear equations for $c_1', \dots, c_n'$. This system, involving $f$ and the basis solutions and their derivatives, can be solved using methods from linear algebra. Integrating the $c_i'$ gives the $c_i$, and thus a particular solution $y_p$. The general solution is then $y_p$ plus the general solution of the homogeneous equation. A sketch for the second-order case follows this list.
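Here it is, a minimal sketch of variation of constants at order two (the forcing term $\sec x$ is a made-up example; for order two the constraints reduce to the standard Wronskian formulas $c_1' = -y_2 f / W$ and $c_2' = y_1 f / W$):

```python
import sympy as sp

x = sp.symbols('x')

# Solve y'' + y = sec(x) on an interval where cos(x) > 0.
y1, y2 = sp.cos(x), sp.sin(x)       # basis of the homogeneous equation y'' + y = 0
f = sp.sec(x)                       # forcing term

W = sp.simplify(y1 * y2.diff(x) - y2 * y1.diff(x))   # Wronskian (here: 1)

c1 = sp.integrate(-y2 * f / W, x)   # -> log(cos(x))
c2 = sp.integrate(y1 * f / W, x)    # -> x

y_p = c1 * y1 + c2 * y2             # particular solution
print(y_p)                          # -> x*sin(x) + cos(x)*log(cos(x))
print(sp.simplify(y_p.diff(x, 2) + y_p - f))   # -> 0, so it checks out
```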
First-Order Equation with Variable Coefficients
A linear ODE of the first order, after some algebraic manipulation, takes the form:
$$y' = f(x)y + g(x).$$
If the equation is homogeneous ($g = 0$), we can separate variables:
$$\frac{y'}{y} = f(x).$$
Integrating both sides gives $\log|y| = F(x) + k$, leading to the general solution $y = c\,e^{F(x)}$, where $F$ is an antiderivative of $f$ and $c$ is an arbitrary constant.
For the general non-homogeneous case, we can use an integrating factor. Multiplying the equation, rewritten as $y' - f(x)y = g(x)$, by $e^{-F(x)}$ (where $F$ is an antiderivative of $f$):
$$e^{-F(x)}y' - f(x)e^{-F(x)}y = g(x)e^{-F(x)}.$$
Recognizing that $\frac{d}{dx}e^{-F(x)} = -f(x)e^{-F(x)}$, by the product rule the left side becomes the derivative of the product $y\,e^{-F(x)}$:
$$\frac{d}{dx}\left(y\,e^{-F(x)}\right) = g(x)e^{-F(x)}.$$
Integrating both sides yields:
$$y\,e^{-F(x)} = \int g(x)e^{-F(x)}\,dx + c.$$
Thus, the general solution is:
$$y = e^{F(x)}\left(\int g(x)e^{-F(x)}\,dx + c\right).$$
Example
Consider the equation:
$$y' + \frac{y}{x} = 3x.$$
The associated homogeneous equation is $y' + \frac{y}{x} = 0$, which leads to $\frac{y'}{y} = -\frac{1}{x}$. Integrating gives $\log|y| = -\log|x| + k$, so $y = \frac{c}{x}$.

Now, let's use the integrating factor method. In the form $y' = f(x)y + g(x)$, here $f(x) = -\frac{1}{x}$ and $g(x) = 3x$, so $F(x) = -\log|x|$. Thus $e^{-F(x)} = |x|$, and we can take $x$ for $x > 0$. The integrating factor is $x$. Multiplying the original equation by $x$:
$$x y' + y = 3x^2.$$
The left side is precisely the derivative of the product $xy$:
$$\frac{d}{dx}(xy) = 3x^2.$$
Integrating both sides:
$$xy = x^3 + c.$$
Therefore, the general solution is:
$$y = x^2 + \frac{c}{x}.$$
This matches the homogeneous solutions $\frac{c}{x}$ plus the particular solution $x^2$. For the initial condition $y(1) = \alpha$, we get $1 + c = \alpha$, so $c = \alpha - 1$, which means the solution of the Cauchy problem is the particular solution
$$y = x^2 + \frac{\alpha - 1}{x}.$$
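A mechanical double check with SymPy, assuming a made-up initial value $\alpha = 2$ (so $c = 1$):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
y = sp.Function('y')

# The example: y' + y/x = 3x, with the made-up initial value y(1) = 2.
ode = sp.Eq(y(x).diff(x) + y(x)/x, 3*x)
print(sp.dsolve(ode, y(x), ics={y(1): 2}))
# -> Eq(y(x), x**2 + 1/x), i.e. c = alpha - 1 = 1 as computed above
```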
System of Linear Differential Equations
A system of linear differential equations involves multiple unknown functions and multiple linear differential equations. Typically, we consider systems where the number of functions matches the number of equations.
Any linear ODE, or system thereof, can be converted into a first-order system. For instance, an equation of order $n$ in $y$ can be transformed by introducing new variables $y_1 = y,\ y_2 = y',\ \dots,\ y_n = y^{(n-1)}$. This generates a system of first-order equations: $y_1' = y_2$, $y_2' = y_3$, and so on, up to $y_n'$, which is given by the original equation expressed in terms of these new variables.
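Here's a minimal sketch of that reduction (the equation $y'' + y = 0$ is a made-up example), handed to SciPy's numerical integrator:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Rewrite y'' + y = 0 as a first-order system with y1 = y and y2 = y':
#   y1' = y2
#   y2' = -y1
def system(x, Y):
    y1, y2 = Y
    return [y2, -y1]

# Initial conditions y(0) = 0, y'(0) = 1, so the exact solution is sin(x).
sol = solve_ivp(system, (0.0, np.pi), [0.0, 1.0], dense_output=True)
print(sol.sol(np.pi / 2)[0])        # ~ 1.0, i.e. sin(pi/2)
```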
A general first-order linear system of $n$ equations with $n$ unknown functions can be written in matrix form:
$$\mathbf{y}'(x) = A(x)\,\mathbf{y}(x) + \mathbf{b}(x),$$
where $\mathbf{y}$ is the vector of unknown functions, $A(x)$ is an $n \times n$ matrix of coefficients, and $\mathbf{b}(x)$ is a vector of forcing terms. If $A$ and $\mathbf{b}$ are constant, the system is simpler.
The associated homogeneous system is $\mathbf{y}' = A(x)\,\mathbf{y}$. Its solutions form an $n$-dimensional vector space. A basis for this space can be represented by the columns of an invertible matrix $U(x)$, such that $U'(x) = A(x)\,U(x)$. If $A$ is a constant matrix, or if $A(x)$ commutes with its antiderivative $B(x) = \int A(x)\,dx$, then we can choose $U(x) = e^{B(x)}$, the matrix exponential.
However, in the general case with variable coefficients, a simple closed-form solution for the homogeneous system is often elusive. Numerical methods or approximation techniques like the Magnus expansion become necessary.
Once we have the matrix $U(x)$ whose columns form a basis for the homogeneous solutions, the general solution to the non-homogeneous system is:
$$\mathbf{y}(x) = U(x)\left(\mathbf{c} + \int U^{-1}(x)\,\mathbf{b}(x)\,dx\right),$$
where $\mathbf{c}$ is an arbitrary constant vector (the constant of integration). If initial conditions $\mathbf{y}(x_0) = \mathbf{y}_0$ are given, the solution is unique:
$$\mathbf{y}(x) = U(x)U^{-1}(x_0)\,\mathbf{y}_0 + U(x)\int_{x_0}^{x} U^{-1}(t)\,\mathbf{b}(t)\,dt.$$
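For constant $A$, a minimal sketch using SciPy's matrix exponential (the matrix below is a made-up example encoding $y'' + y = 0$):

```python
import numpy as np
from scipy.linalg import expm

# Homogeneous constant-coefficient system y' = A y. For constant A we may
# take U(x) = expm(A*x), so the solution is y(x) = expm(A*x) @ y0.
A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])
y0 = np.array([0.0, 1.0])           # y(0) = 0, y'(0) = 1

x = np.pi / 2
print(expm(A * x) @ y0)             # -> approx [1, 0], i.e. [sin(x), cos(x)]
```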
Higher Order with Variable Coefficients
As mentioned, linear ODEs of order one with variable coefficients are solvable by quadrature. This isn't generally true for higher orders. This fact is a cornerstone of Picard–Vessiot theory, a field that has evolved into differential Galois theory.
This impossibility of general quadrature solutions for higher-order equations with variable coefficients is analogous to the Abel–Ruffini theorem, which states that polynomial equations of degree five or higher generally cannot be solved using radicals. The underlying mathematical structures and proof techniques bear striking resemblances, hence the name "differential Galois theory."
Similar to its algebraic counterpart, this theory aims to identify which equations can be solved by quadrature and, if so, to provide those solutions. The computations involved, however, are notoriously complex, even for powerful computers.
A notable exception is Kovacic's algorithm, which completely settles the case of second-order equations with rational coefficients.
Cauchy–Euler Equation
Cauchy–Euler equations are a class of equations with variable coefficients that can be solved explicitly, regardless of their order. They have the form:
$$x^n y^{(n)}(x) + a_{n-1} x^{n-1} y^{(n-1)}(x) + \cdots + a_1 x\,y'(x) + a_0 y(x) = 0,$$
where $a_0, \dots, a_{n-1}$ are constant coefficients. The trick here is to substitute $y = x^m$, which transforms the differential equation into an algebraic equation for $m$.
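A minimal sketch of that substitution in SymPy (the coefficients in $x^2 y'' - x y' - 3y = 0$ are made up):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
m = sp.symbols('m')
y = x**m                                  # trial solution

# Made-up second-order Cauchy-Euler equation: x^2 y'' - x y' - 3 y = 0.
expr = x**2 * y.diff(x, 2) - x * y.diff(x) - 3 * y

# Dividing by x^m leaves a polynomial (indicial) equation in m.
indicial = sp.expand(sp.simplify(expr / x**m))
print(indicial)                           # -> m**2 - 2*m - 3
print(sp.solve(indicial, m))              # -> [-1, 3], so y = c1/x + c2*x**3
```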
Holonomic Functions
A holonomic function, also known as a D-finite function, is simply a solution to a homogeneous linear differential equation with polynomial coefficients.
It’s remarkable how many functions considered "standard" in mathematics are holonomic or can be expressed as quotients of holonomic functions. This includes polynomials, algebraic functions, the logarithm, the exponential function, trigonometric, hyperbolic functions, their inverses, and many special functions like Bessel functions and hypergeometric functions.
Holonomic functions possess powerful closure properties. Their sums, products, derivatives, and integrals are also holonomic. Crucially, these properties are "effective," meaning algorithms exist to compute the differential equation of the result of these operations, given the equations of the input functions. This is the essence of Zeilberger's theorem.
The significance of holonomic functions lies in their computational tractability. If we represent them by their defining differential equations and initial conditions, most calculus operations can be automated. This includes differentiation, integration (both indefinite and definite), rapid computation of Taylor series (via recurrence relations on coefficients), high-precision evaluation with error bounds, finding limits, locating singularities, analyzing asymptotic behavior, and proving identities. The Dynamic Dictionary of Mathematical Functions (DDMF) is an example of such an endeavor.
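As a taste of the coefficient-recurrence idea, a minimal sketch (assuming the equation $y'' + y = 0$ with $y(0) = 0$, $y'(0) = 1$, which pins down the sine function):

```python
from math import factorial

# Writing y = sum c_k x^k in y'' + y = 0 and equating coefficients of x^k
# gives (k+2)(k+1)*c_{k+2} + c_k = 0, i.e. c_{k+2} = -c_k / ((k+1)(k+2)).
c = [0.0, 1.0]                       # c_0 = y(0), c_1 = y'(0)
for k in range(20):
    c.append(-c[k] / ((k + 1) * (k + 2)))

# The recurrence reproduces the Taylor coefficients of sin(x):
# even ones vanish, c_{2j+1} = (-1)^j / (2j+1)!.
print(c[3], -1 / factorial(3))       # both -0.1666...
print(c[5], 1 / factorial(5))        # both 0.00833...
```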