Calculus on Euclidean Space
This isn't just some academic exercise; it's the bedrock upon which much of our understanding of the physical world is built. When we talk about calculus on Euclidean space, that is, on $\mathbb{R}^n$ for those who prefer symbols, we're generalizing the familiar one-variable calculus to situations involving multiple dimensions. It's also known, particularly in the United States, as advanced calculus. It's closely related to multivariable calculus, but it takes things a step further, leaning more heavily on linear algebra and even dipping its toes into functional analysis. This more robust toolkit allows us to explore concepts from differential geometry, like differential forms and the rather powerful Stokes' theorem. The extensive use of linear algebra is key here; it provides a natural pathway to extend these ideas to calculus on Banach spaces or other topological vector spaces. Think of it as a local model for calculus on manifolds, which are far more abstract and complex structures.
Basic Notions
Let's recap the fundamentals, shall we?
Functions in One Real Variable
This is just a quick refresher, a nod to where we started. A real-valued function $f:\mathbb{R} \to \mathbb{R}$ is considered continuous at a point $a$ if its values stay close to $f(a)$ in the vicinity of $a$. Mathematically, this means $\lim_{h\to 0}(f(a+h)-f(a))=0$. It's like saying the function doesn't jump or have any sudden, jarring changes.
On the other hand, a function is differentiable at $a$ if it behaves linearly near $a$. This means there's a real number $\lambda$ such that $\lim_{h\to 0} \frac{f(a+h)-f(a)-\lambda h}{h} = 0$; equivalently, $f(a+h) \approx f(a) + \lambda h$ for small $h$. This $\lambda$, which depends on $a$, is the derivative, denoted $f'(a)$. When $f'$ is continuous and $f'(a)$ is not zero, the function is strictly increasing or strictly decreasing around $a$, allowing for a well-defined inverse function in a neighborhood. The inverse function theorem tells us that this inverse is also differentiable, with its derivative given by $(f^{-1})'(y) = \frac{1}{f'(f^{-1}(y))}$.
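The derivative formula for the inverse can be checked symbolically by differentiating the identity $f(f^{-1}(y)) = y$. This is a sketch of my own, assuming SymPy; the strictly increasing function $f(x) = x^3 + x$ is an illustrative choice, not one from the article:

```python
# A minimal sketch assuming SymPy; f(x) = x^3 + x is an illustrative,
# strictly increasing (hence invertible) example function.
import sympy as sp

x, y = sp.symbols('x y', real=True)
f = x**3 + x                       # f'(x) = 3x^2 + 1 > 0, so f is invertible
fprime = sp.diff(f, x)

g = sp.Function('g')               # g stands for the inverse function f^{-1}
# Differentiating the identity f(g(y)) = y gives f'(g(y)) * g'(y) = 1.
gp = sp.solve(sp.diff(f.subs(x, g(y)), y) - 1, sp.Derivative(g(y), y))[0]
# gp equals 1 / f'(g(y)), as the inverse function theorem predicts.
```

Solving the differentiated identity for $g'(y)$ reproduces exactly $(f^{-1})'(y) = 1/f'(f^{-1}(y))$.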
If $f$ is differentiable on an open interval $U$, and its derivative $f'$ is itself a continuous function on $U$, then $f$ is called a $C^1$ function. This extends to $C^k$ functions, where the $k$-th derivative is continuous. Taylor's theorem essentially states that a $C^k$ function can be accurately approximated near a point by a polynomial of degree $k$.
Derivative of a Map and Chain Rule
Now, let's step up the complexity. For functions $f: X \to Y$, where $X$ is an open subset of $\mathbb{R}^n$ and $Y$ is an open subset of $\mathbb{R}^m$, the concept of differentiability needs to account for vector-valued outputs. At a point $x \in X$, the derivative $f'(x)$ is no longer a scalar but a linear transformation $f'(x): \mathbb{R}^n \to \mathbb{R}^m$. The condition for differentiability is: $$ \lim_{h\to 0} \frac{1}{|h|} |f(x+h) - f(x) - f'(x)h| = 0 $$ This basically means that the linear transformation $f'(x)$ is the best linear approximation of $f$ near $x$. If $f$ is differentiable at $x$, it's also continuous at $x$, as the error term $f(x+h) - f(x) - f'(x)h$ goes to zero faster than $h$.
The chain rule is as essential here as it is in one dimension. If $f: X \to Y$ is differentiable at $x$ and $g: Y \to Z$ is differentiable at $y = f(x)$, then the composition $g \circ f: X \to Z$ is differentiable at $x$, with its derivative given by $(g \circ f)'(x) = g'(y) \circ f'(x)$. This is derived by carefully analyzing the error terms, much like in the single-variable case.
A map $f$ is called continuously differentiable, or $C^1$, if it's differentiable and its derivative map $x \mapsto f'(x)$ is continuous. This property is preserved under composition, so if $f$ and $g$ are $C^1$, then $g \circ f$ is also $C^1$.
In terms of coordinates, the linear transformation $f'(x)$ is represented by the Jacobian matrix $Jf(x)$. This $m \times n$ matrix contains the partial derivatives of the components of $f$: $$ (Jf)(x)_{ij} = \frac{\partial f_i}{\partial x_j}(x) $$ The chain rule in matrix form becomes $J(g \circ f)(x) = Jg(y)Jf(x)$.
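The matrix form of the chain rule can be verified symbolically. This is a sketch under my own assumptions (SymPy, and two arbitrary illustrative maps $f, g: \mathbb{R}^2 \to \mathbb{R}^2$), not code from the article:

```python
# A minimal sketch assuming SymPy; the maps f and g are illustrative choices.
import sympy as sp

x1, x2, u, v = sp.symbols('x1 x2 u v', real=True)

f = sp.Matrix([x1**2 + x2, sp.sin(x1 * x2)])   # f: R^2 -> R^2
g = sp.Matrix([u * v, u + sp.exp(v)])          # g: R^2 -> R^2

Jf = f.jacobian([x1, x2])
Jg = g.jacobian([u, v])

# Jacobian of the composition g ∘ f, computed directly...
Jcomp = g.subs({u: f[0], v: f[1]}).jacobian([x1, x2])
# ...and via the chain rule J(g∘f)(x) = Jg(f(x)) * Jf(x).
Jchain = Jg.subs({u: f[0], v: f[1]}) * Jf
```

The two Jacobians agree entry by entry, which is exactly the statement $J(g \circ f)(x) = Jg(y)Jf(x)$.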
A crucial link between differentiability and the existence of partial derivatives is provided by the mean value inequality. If the partial derivatives of $f$ exist and are continuous, then $f$ is continuously differentiable. The inequality states that for a line segment within $X$, the change in $f$ is bounded by the supremum of the derivative along that segment: $|f(x+h) - f(x)| \le |h| \sup_{0 \le t \le 1} \|f'(x+th)\|$. This inequality is instrumental in proving that if the partial derivatives are continuous, the map is indeed differentiable.
Consider the example of the inverse of an invertible matrix. If $f(g) = g^{-1}$ for $g \in GL(n, \mathbb{R})$, the derivative is $f'(g)h = -g^{-1}hg^{-1}$. This calculation, using the matrix exponential and chain rule, demonstrates that the inverse function is smooth, meaning all its derivatives are continuous.
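The formula $f'(g)h = -g^{-1}hg^{-1}$ can be spot-checked numerically against a finite-difference quotient. The sketch below is my own, assuming NumPy and a randomly generated, well-conditioned matrix:

```python
# A numerical sanity check, assuming NumPy; the matrices are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 4
g = rng.normal(size=(n, n)) + n * np.eye(n)   # dominated by n*I, so invertible
h = rng.normal(size=(n, n))                   # an arbitrary direction
eps = 1e-6

# Directional derivative of g -> g^{-1} in direction h, via finite differences
fd = (np.linalg.inv(g + eps * h) - np.linalg.inv(g)) / eps

# Closed form from the text: f'(g)h = -g^{-1} h g^{-1}
ginv = np.linalg.inv(g)
exact = -ginv @ h @ ginv
```

The finite-difference matrix agrees with the closed form up to $O(\varepsilon)$ error.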
Higher Derivatives and Taylor Formula
We can extend the notion of derivatives beyond the first. The second derivative, $f''(x)$, is a bilinear map from $(\mathbb{R}^n)^2$ to $\mathbb{R}^m$. For scalar-valued functions ($m=1$), this bilinear map is represented by the Hessian matrix. The entries of the Hessian are the second partial derivatives: $$ H_{ij} = \frac{\partial^2 f}{\partial x_i \partial x_j}(x) $$ A fundamental result, stemming from the mean value inequality, is the symmetry of second derivatives. If $f$ is $C^2$, then $\frac{\partial^2 f}{\partial x_i \partial x_j} = \frac{\partial^2 f}{\partial x_j \partial x_i}$. This implies that the Hessian matrix is symmetric. This symmetry extends to higher-order derivatives; if $f$ is $C^k$, then the $k$-multilinear map $f^{(k)}(x)$ is symmetric.
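For any concrete smooth function the symmetry is easy to observe symbolically. A small sketch of my own, assuming SymPy and an illustrative function $f$:

```python
# A minimal sketch assuming SymPy; f is an arbitrary smooth example function.
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3', real=True)
f = sp.exp(x1 * x2) * sp.sin(x3) + x1**3 * x2

# The Hessian of a C^2 (here smooth) function is symmetric (Schwarz's theorem).
H = sp.hessian(f, (x1, x2, x3))
is_symmetric = sp.simplify(H - H.T) == sp.zeros(3, 3)
```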
Taylor’s theorem can be extended to multiple variables, providing a polynomial approximation of a function. The formula involves higher-order derivatives and an integral remainder term. A fascinating application of Taylor’s formula is in showing that certain linear operators, like the Fourier transform, have specific properties when they commute with coordinate multiplication and differentiation. This is illustrated with an example involving the Schwartz space $\mathcal{S}$, demonstrating how Taylor’s formula can simplify the analysis of such operators.
Inverse Function Theorem and Submersion Theorem
The inverse function theorem is a cornerstone. It states that if a $C^k$ map $f$ has an invertible derivative $f'(x)$ at a point $x$, then $f$ is a diffeomorphism (a smooth map with a smooth inverse) between neighborhoods of $x$ and $f(x)$. This means that locally, $f$ is invertible and behaves much like its linear approximation $f'(x)$.
The implicit function theorem is a close relative. It allows us to work with implicitly defined functions: if we have an equation $f(a,b)=0$ and the partial derivative with respect to $b$ is invertible at $(a,b)$, then we can locally express $b$ as a function of $a$, say $b=g(a)$.
The submersion theorem deals with maps where the derivative has maximal rank. These maps behave locally like projections, mapping a higher-dimensional space onto a lower-dimensional one.
Integrable Functions on Euclidean Spaces
This brings us to integration. For a rectangle $D$ in $\mathbb{R}^n$, the Riemann integral is defined using partitions of the rectangle and upper and lower sums. A function is integrable if these upper and lower sums converge to the same value. A crucial theorem states that a bounded function is Riemann integrable if and only if the set of its discontinuities has measure zero.
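Numerically, the Riemann integral is the limit of Riemann sums over ever-finer partitions. This sketch (my own, assuming NumPy and an integrand chosen for illustration) approximates $\iint_{[0,1]\times[0,2]} xy \, dy \, dx = 1$ with a midpoint sum:

```python
# A minimal sketch assuming NumPy; the integrand f(x, y) = x*y is illustrative.
import numpy as np

def riemann_sum(f, a, b, c, d, n):
    """Midpoint Riemann sum of f over [a, b] x [c, d] with an n x n grid of cells."""
    xs = a + (np.arange(n) + 0.5) * (b - a) / n   # cell midpoints in x
    ys = c + (np.arange(n) + 0.5) * (d - c) / n   # cell midpoints in y
    X, Y = np.meshgrid(xs, ys, indexing='ij')
    cell_area = (b - a) * (d - c) / n**2
    return float(np.sum(f(X, Y)) * cell_area)

approx = riemann_sum(lambda x, y: x * y, 0.0, 1.0, 0.0, 2.0, 200)
# The exact value of the integral is 1.
```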
Fubini’s theorem is a powerful tool for computing multiple integrals. It allows us to reduce an $n$-dimensional integral to an iterated sequence of one-dimensional integrals. The order of integration, conveniently, does not matter for well-behaved functions.
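Fubini's theorem is easy to check symbolically for a concrete integrand. The sketch below is my own, assuming SymPy; the integrand is an illustrative choice, continuous on the rectangle so that the theorem applies:

```python
# A minimal sketch assuming SymPy; the integrand is an illustrative choice.
import sympy as sp

x, y = sp.symbols('x y', real=True)
f = x**2 * sp.exp(-y) + sp.sin(x) * y   # continuous, so Fubini applies

# Iterated integrals over [0, 1] x [0, 2], in both orders
I_xy = sp.integrate(sp.integrate(f, (x, 0, 1)), (y, 0, 2))
I_yx = sp.integrate(sp.integrate(f, (y, 0, 2)), (x, 0, 1))
```

Both orders of integration yield the same value, as the theorem guarantees.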
For more general subsets $M$ of $\mathbb{R}^n$, the integral is defined by extending the function $f$ to a larger rectangle $D$ containing $M$ using the characteristic function $\chi_M$, and then integrating $\chi_M f$ over $D$.
Surface Integral
For surfaces in $\mathbb{R}^3$, we define the surface integral. If a surface $M$ is parameterized by $\mathbf{r}(u,v)$, the surface integral of a function $F$ is given by: $$ \int_{M} F \, dS = \iint_{D} (F \circ \mathbf{r}) \, |\mathbf{r}_u \times \mathbf{r}_v| \, du \, dv $$ For vector-valued functions $F$, we integrate the normal component $F \cdot \mathbf{n}$. This leads to the integral form involving the determinant of $F$, $\mathbf{r}_u$, and $\mathbf{r}_v$.
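As a worked example (my own, assuming SymPy), the formula recovers the area $4\pi R^2$ of a sphere of radius $R$ by taking $F \equiv 1$ and the standard spherical parameterization:

```python
# A minimal sketch assuming SymPy; spherical coordinates (u, v) are used,
# with u the polar angle in [0, π] and v the azimuth in [0, 2π].
import sympy as sp

u, v = sp.symbols('u v', real=True)
R = sp.symbols('R', positive=True)

r = sp.Matrix([R*sp.sin(u)*sp.cos(v), R*sp.sin(u)*sp.sin(v), R*sp.cos(u)])
ru = sp.diff(r, u)
rv = sp.diff(r, v)

cross = ru.cross(rv)
# On 0 <= u <= π, sin u >= 0, so |r_u × r_v| = R² sin u; check the squared norms agree.
assert sp.simplify(cross.dot(cross) - (R**2 * sp.sin(u))**2) == 0
dS = R**2 * sp.sin(u)

area = sp.integrate(dS, (v, 0, 2*sp.pi), (u, 0, sp.pi))   # expected: 4πR²
```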
Vector Analysis
Tangent Vectors and Vector Fields
A differentiable curve $c:[0,1] \to \mathbb{R}^n$ has a tangent vector at each point, given by its derivative $c’(t)$. For a helix, for example, the tangent vector captures the instantaneous direction of motion.
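A quick numerical illustration (my own sketch, assuming NumPy; the unit helix is a standard example) compares a finite-difference quotient with the exact tangent $c'(t) = (-\sin t, \cos t, 1)$:

```python
# A minimal sketch assuming NumPy; the helix c(t) = (cos t, sin t, t) is illustrative.
import numpy as np

def c(t):
    return np.array([np.cos(t), np.sin(t), t])

def tangent(t, eps=1e-6):
    # central difference approximation of the derivative c'(t)
    return (c(t + eps) - c(t - eps)) / (2 * eps)

t0 = 0.7
approx = tangent(t0)
exact = np.array([-np.sin(t0), np.cos(t0), 1.0])   # c'(t) = (-sin t, cos t, 1)
```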
On a manifold $M$, the tangent space $T_pM$ at a point $p$ consists of all possible tangent vectors to curves passing through $p$. A vector field assigns a tangent vector to each point in $M$ in a smooth manner.
Differential Forms
The dual concept to a vector field is a differential form. A 1-form $\omega$ assigns a linear functional $\omega_p$ on the tangent space $T_pM$ to each point $p$, varying smoothly. The exterior derivative $df$ of a function $f$ is a 1-form where $df_p(v) = v(f)$, the directional derivative of $f$ in the direction $v$ at $p$. The basis 1-forms $dx_i$ are dual to the standard basis vectors $e_j$, in the sense that $dx_i(e_j) = \delta_{ij}$.
More generally, a $k$-form is an assignment of an element in the $k$-th exterior power of the cotangent space to each point. These forms can be expressed uniquely in terms of basis forms $dx_{i_1} \wedge \dots \wedge dx_{i_k}$.
The exterior derivative $d$ extends to all differential forms, satisfying $d \circ d = 0$. This property is a consequence of the symmetry of second derivatives.
Boundary and Orientation
Orientability is a crucial concept for integration on manifolds. A manifold is orientable if we can make a consistent choice of orientation across all of it (for a hypersurface, a consistent choice of normal vectors). A key result states that a $k$-dimensional manifold admits an orientation if and only if it has a non-vanishing $k$-form.
Integration of Differential Forms
The integral of an $n$-form $\omega = f \, dx_1 \wedge \dots \wedge dx_n$ over an oriented $n$-manifold $M$ is defined as $\int_M f \, dx_1 \cdots dx_n$. The sign of the integral depends on the orientation.
The fundamental relationship between the exterior derivative and integration is Stokes' formula: $$ \int_{\partial M} \omega = \int_M d\omega $$ This formula is a powerful generalization of the fundamental theorem of calculus, Green's theorem, and the divergence theorem. It connects integrals over a region to integrals over its boundary. The derivation involves approximating the characteristic function of $M$ with smooth functions and using integration by parts.
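As a concrete instance (my own example, assuming SymPy), Green's theorem, the two-dimensional case of Stokes' formula, can be verified for the 1-form $\omega = -y^3\,dx + x^3\,dy$ on the unit disk:

```python
# A minimal sketch assuming SymPy; the 1-form ω = -y³ dx + x³ dy is illustrative.
import sympy as sp

t, x, y = sp.symbols('t x y', real=True)
P, Q = -y**3, x**3

# Boundary side: pull ω back to the unit circle x = cos t, y = sin t.
xc, yc = sp.cos(t), sp.sin(t)
boundary = sp.integrate(
    P.subs({x: xc, y: yc}) * sp.diff(xc, t)
    + Q.subs({x: xc, y: yc}) * sp.diff(yc, t),
    (t, 0, 2*sp.pi))

# Interior side: dω = (∂Q/∂x - ∂P/∂y) dx ∧ dy, integrated in polar coordinates.
r, th = sp.symbols('r theta', nonnegative=True)
integrand = (sp.diff(Q, x) - sp.diff(P, y)).subs({x: r*sp.cos(th), y: r*sp.sin(th)})
interior = sp.integrate(integrand * r, (r, 0, 1), (th, 0, 2*sp.pi))
```

Both sides evaluate to $3\pi/2$, matching $\int_{\partial M} \omega = \int_M d\omega$.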
Stokes’ formula also leads to a generalization of Cauchy’s integral formula in complex analysis. By introducing complex derivatives $\partial/\partial z$ and $\partial/\partial \bar{z}$, and using Stokes’ theorem on a punctured disk, we arrive at the formula relating the value of a holomorphic function at a point to an integral over a contour.
Winding Numbers and Poincaré Lemma
A differential form is called closed if its exterior derivative is zero, and exact if it is the exterior derivative of another form. While all exact forms are closed, the converse is not always true. A classic example is the 1-form $d\theta$ from polar coordinates, which is closed on $\mathbb{R}^2 \setminus \{0\}$ but not exact there.
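This can be made computational (a sketch of my own, assuming SymPy): writing $d\theta = \frac{-y\,dx + x\,dy}{x^2+y^2}$, one checks that the form is closed, yet integrates to $2\pi$ around the unit circle, which an exact form never does:

```python
# A minimal sketch assuming SymPy; ω is the standard "dθ" form on R² \ {0}.
import sympy as sp

t, x, y = sp.symbols('t x y', real=True)
P = -y / (x**2 + y**2)
Q = x / (x**2 + y**2)

# Closed: ∂Q/∂x - ∂P/∂y vanishes away from the origin.
closed = sp.simplify(sp.diff(Q, x) - sp.diff(P, y)) == 0

# Not exact: the integral around the unit circle is 2π, not 0.
xc, yc = sp.cos(t), sp.sin(t)
pullback = sp.simplify(
    P.subs({x: xc, y: yc}) * sp.diff(xc, t)
    + Q.subs({x: xc, y: yc}) * sp.diff(yc, t))
winding_integral = sp.integrate(pullback, (t, 0, 2*sp.pi))
```

The value $2\pi$ is the winding number of the circle around the origin, times $2\pi$.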
The PoincarĂ© lemma states that on a simply connected open set, every closed 1-form is exact. This means that if there are no “holes” in the space, the distinction between closed and exact forms disappears for 1-forms.
Geometry of Curves and Surfaces
Moving Frame
A frame field on $\mathbb{R}^3$ consists of three mutually orthogonal vector fields. Examples include the standard basis fields and cylindrical coordinate fields. For studying curves, the Frenet frame (tangent, normal, and binormal vectors) is particularly important.
The Gauss–Bonnet Theorem
The Gauss–Bonnet theorem is a profound result connecting the intrinsic geometry of a surface to its topology. For a closed surface $M$, it states that the integral of the Gaussian curvature $K$ over $M$ is determined by its Euler characteristic $\chi(M)$: $\int_M K \, dA = 2\pi \chi(M)$.
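The sphere makes a quick sanity check (my own sketch, assuming SymPy): it has constant curvature $K = 1/R^2$ and $\chi(S^2) = 2$, so the total curvature should come out to $4\pi$:

```python
# A minimal sketch assuming SymPy; spherical coordinates with polar angle u.
import sympy as sp

u, v = sp.symbols('u v', real=True)
R = sp.symbols('R', positive=True)

K = 1 / R**2               # Gaussian curvature of a sphere of radius R
dS = R**2 * sp.sin(u)      # area element in spherical coordinates, u in [0, π]

total_curvature = sp.integrate(K * dS, (v, 0, 2*sp.pi), (u, 0, sp.pi))
euler_characteristic = 2   # χ(S²) = 2
# Gauss–Bonnet predicts total_curvature == 2π · χ = 4π.
```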
Calculus of Variations
Method of Lagrange Multiplier
The method of Lagrange multipliers is used to find extrema of a function subject to constraints. If $f$ is to be extremized subject to $g(x)=0$, we seek points where $\nabla f = \lambda \nabla g$ for some scalar $\lambda$. This principle is illustrated with an example of finding the minimum distance between a circle and a line. It also has a notable application in linear algebra, proving that a self-adjoint operator on a finite-dimensional vector space is diagonalizable.
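The recipe $\nabla f = \lambda \nabla g$ can be carried out symbolically. The sketch below is my own (assuming SymPy, and using a simpler constraint problem than the circle-and-line example): extremize $f(x,y) = x + y$ on the unit circle:

```python
# A minimal sketch assuming SymPy; the objective and constraint are illustrative.
import sympy as sp

x, y, lam = sp.symbols('x y lam', real=True)

f = x + y                 # objective
g = x**2 + y**2 - 1       # constraint g(x, y) = 0: the unit circle

# Stationarity ∇f = λ∇g together with the constraint
eqs = [sp.diff(f, x) - lam * sp.diff(g, x),
       sp.diff(f, y) - lam * sp.diff(g, y),
       g]
solutions = sp.solve(eqs, [x, y, lam], dict=True)
values = sorted(f.subs(s) for s in solutions)   # constrained extrema of f
# The minimum is -√2 and the maximum is √2, attained at (±1/√2, ±1/√2).
```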
Weak Derivatives
The concept of a weak derivative extends the notion of differentiation to functions that may not be classically differentiable. This is achieved by moving the derivative from the function to a test function (a smooth function with compact support) via integration by parts. The weak derivative is unique up to sets of measure zero: if $\int (f-g)\phi \, dx = 0$ for all test functions $\phi$, then $f=g$ almost everywhere. This allows us to define derivatives for a broader class of functions, including distributions like the Dirac delta function. The weak derivative of the Heaviside function, for instance, is the Dirac delta. Cauchy's integral formula also has an interpretation in terms of weak derivatives, identifying the fundamental solution for the operator $\partial/\partial \bar{z}$.
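The Heaviside example can be checked against the defining pairing $\langle H', \phi\rangle = -\langle H, \phi'\rangle$. A sketch of my own, assuming SymPy, and using the rapidly decaying $\phi(x) = e^{-x^2}$ in place of a compactly supported test function:

```python
# A minimal sketch assuming SymPy; φ(x) = exp(-x²) is an illustrative test function.
import sympy as sp

x = sp.symbols('x', real=True)
phi = sp.exp(-x**2)

# ⟨H', φ⟩ = -⟨H, φ'⟩ = -∫₀^∞ φ'(x) dx, since H vanishes for x < 0
pairing = -sp.integrate(sp.diff(phi, x), (x, 0, sp.oo))

# The Dirac delta pairs as ⟨δ, φ⟩ = φ(0)
delta_action = phi.subs(x, 0)
```

The pairing evaluates to $\phi(0)$, which is exactly the action of the Dirac delta.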
Calculus on Manifolds
Definition of a Manifold
A manifold is a topological space that locally resembles Euclidean space. It's equipped with an atlas of charts that map open sets to $\mathbb{R}^n$ and are compatible with each other via smooth transition maps. The atlas is usually taken to be maximal, in the sense that it cannot be enlarged by adding further compatible charts. A function on a manifold is smooth if it is smooth in the Euclidean sense within each chart. Manifolds are also paracompact, which allows for the existence of partitions of unity.
A manifold-with-boundary is similar but allows for charts mapping to half-spaces. The boundary of such a manifold is defined by the points that map to the boundary of the half-space.
A crucial theorem states that the zero set of a smooth map $g: U \to \mathbb{R}^r$ with derivative of rank $r$ is an $(n-r)$-manifold. This theorem provides a powerful way to construct manifolds, such as spheres.
Whitney’s embedding theorem guarantees that any $k$-manifold can be smoothly embedded into $\mathbb{R}^{2k}$. This means that abstract manifolds can always be realized as geometric objects within Euclidean space.
Tubular Neighborhood and Transversality
The tubular neighborhood theorem states that a compact submanifold $N$ within a manifold $M$ can be surrounded by a neighborhood that is diffeomorphic to its normal bundle. This is a fundamental tool in differential geometry.
Integration on Manifolds and Distribution Densities
Integration on manifolds can be approached in several ways: integrating differential forms, integrating with respect to a measure, or using a Riemannian metric. Integrating differential forms requires an orientation on the manifold. Integrating with respect to a measure is possible if the manifold can be embedded into Euclidean space. The third approach, using a Riemannian metric, leads to the concept of a density.
Generalizations
Extensions to Infinite-Dimensional Normed Spaces
The concepts of differentiability and calculus can be extended to infinite-dimensional normed spaces , opening up even more abstract and powerful mathematical landscapes.