Differentiation under the integral sign
This article is about a rule for differentiating integrals. For the convergence test for alternating series, see Alternating series test .
This article is part of a series on Calculus .
$$\int_{a}^{b} f'(t)\,dt = f(b) - f(a)$$
A Glimpse into the Foundations of Calculus
This section serves as a foundational pillar for understanding more advanced mathematical concepts. It’s a curated list of theorems and definitions that form the bedrock upon which calculus is built.
- The Fundamental theorem is, of course, paramount. It is the bridge between differentiation and integration, and nearly everything that follows rests on it.
- Limits are the very essence of calculus, the notion that makes precise how quantities change and accumulate. Without them, nothing else exists.
- Continuity ensures that functions behave predictably, without jarring jumps or sudden disappearances. It’s the mathematical equivalent of a smooth ride.
- Rolle’s theorem and the Mean value theorem provide crucial insights into the behavior of differentiable functions: the mean value theorem guarantees a point where the function’s instantaneous rate of change equals its average rate of change over an interval, and Rolle’s theorem is the special case where that average rate is zero.
- The Inverse function theorem deals with the conditions under which a function can be “undone,” a concept vital for solving equations and understanding transformations.
Differential Calculus
This is where the magic of change is truly explored.
Definitions
- The Derivative itself, along with its generalizations , is the star of the show. It’s the instantaneous rate of change, the slope of a tangent line, the very pulse of a function.
- The Differential and the concept of the infinitesimal are the building blocks, the infinitesimally small quantities that allow us to dissect and understand continuous change.
- Understanding the differential of a function , both in its general form and as a total derivative, is key to grasping how functions change in multiple dimensions.
Concepts
- Notation for differentiation is more than just symbols; it’s a language that allows us to communicate complex ideas efficiently.
- The Second derivative tells us about the curvature, the rate of change of the rate of change. It’s how we understand concavity and inflection points.
- Implicit differentiation and Logarithmic differentiation are powerful techniques for handling functions that aren’t explicitly defined in a simple y = f(x) format. They’re like secret passages for finding derivatives.
- Related rates problems are where calculus meets the real world, analyzing how the rates of change of different quantities are interconnected.
- Taylor’s theorem is a marvel, allowing us to approximate complex functions with simpler polynomials, a cornerstone of numerical analysis and approximation theory.
Rules and Identities
These are the workhorses of differentiation.
- The Sum , Product , and Quotient rules are fundamental for combining simpler derivatives into more complex ones.
- The Chain rule is indispensable for differentiating composite functions, effectively allowing us to peel back layers of functions.
- The Power rule is a simple yet elegant rule for differentiating powers of x.
- L’Hôpital’s rule is a godsend for indeterminate forms, providing a systematic way to evaluate limits that would otherwise be intractable.
- The Inverse function rule connects the derivative of a function to the derivative of its inverse.
- The General Leibniz rule (yes, the namesake of this article) is a more advanced version, dealing with the derivatives of products of functions, and it’s a beautiful piece of mathematical machinery.
- Faà di Bruno’s formula is a generalization of the chain rule for higher derivatives of composite functions. It’s complex, but incredibly powerful.
- The Reynolds transport theorem is a vital tool in continuum mechanics, extending the Leibniz rule to integrals over moving regions.
Integral Calculus
This is where we sum up the infinitesimal.
- Lists of integrals and Integral transforms are vast landscapes of mathematical tools.
- The Leibniz integral rule is the focus here, a way to handle integrals where the integrand itself changes with respect to a variable.
Definitions
- The Antiderivative is the inverse operation of differentiation, the function whose derivative is the given function.
- The Integral itself, whether improper , Riemann , Lebesgue , or Contour integration , represents accumulation, area, volume, and more.
- The Integral of inverse functions is a specific application that ties together different aspects of integration.
Integration Techniques
These are the methods by which we conquer integrals.
- Integration by parts is like the product rule for integration, a way to break down complex integrals.
- Disc and Shell integration are methods for calculating volumes of revolution.
- Integration by substitution , including trigonometric , tangent half-angle , and Euler substitutions, is a way to simplify integrals by changing variables.
- Euler’s formula provides a pathway to integration using complex exponentials.
- Partial fractions , often employing Heaviside’s method , is crucial for integrating rational functions.
- Changing the order of integration can transform a difficult double integral into a manageable one.
- Reduction formulae systematically reduce the complexity of certain integrals.
- Differentiating under the integral sign is our main topic, a powerful technique for evaluating integrals by turning them into differentiation problems.
- The Risch algorithm is a sophisticated method for finding symbolic integrals.
Series
Infinite sums, a world unto themselves.
- Geometric and arithmetico-geometric series are foundational.
- The Harmonic series and its alternating counterpart are classic examples with fascinating convergence properties.
- Power series and Binomial series are essential for approximating functions and solving differential equations.
- Taylor series are the ultimate function approximators, representing functions as infinite polynomials.
Convergence Tests
How do we know if an infinite sum actually adds up to something?
- The Summand limit (term test) is the first line of defense.
- The Ratio and Root tests are powerful tools for determining convergence.
- The Integral test for convergence links series convergence to integral convergence.
- Direct comparison and Limit comparison tests allow us to compare a series to one whose convergence is known.
- The Alternating series test , Cauchy condensation test , Dirichlet’s test , and Abel’s test offer specialized methods for various types of series.
Vector Calculus
The calculus of fields and curves in space.
- Gradient , Divergence , Curl , and the Laplacian are fundamental operators describing how vector fields change.
- The Directional derivative tells us the rate of change of a scalar function in any direction.
- Vector calculus identities are the algebraic rules governing these operations.
Theorems
These theorems connect different aspects of vector calculus.
- The Gradient theorem , Green’s theorem , Stokes’ theorem , and the Divergence theorem (also known as Gauss’s theorem) are fundamental results relating integrals over regions to integrals over their boundaries.
- The Generalized Stokes theorem is a powerful unification of these ideas.
- Helmholtz decomposition breaks down vector fields into simpler components.
Multivariable Calculus
Extending calculus to functions of several variables.
Formalisms
- Matrix calculus , Tensor calculus , Exterior calculus , and Geometric calculus provide different frameworks for understanding multivariable functions and their operations.
Definitions
- Partial derivatives measure change with respect to a single variable while holding others constant.
- Multiple integrals , Line integrals , Surface integrals , and Volume integrals extend the concept of integration to higher dimensions.
- The Jacobian matrix and determinant and the Hessian matrix are crucial for understanding local behavior and transformations in multivariable calculus.
Advanced Topics
Pushing the boundaries of calculus.
- Calculus on Euclidean space and the theory of Generalized functions (or distributions) allow us to work with even more abstract mathematical objects.
- Understanding the Limit of distributions is essential for advanced analysis.
Specialized Areas
Niche but important branches of calculus.
- Fractional calculus , Malliavin calculus , Stochastic calculus , and the Calculus of variations explore non-traditional forms of differentiation and integration.
Miscellanea
A catch-all for related concepts.
- Precalculus sets the stage.
- The History of calculus provides context and perspective.
- A Glossary of calculus terms is helpful for clarity.
- A comprehensive List of calculus topics offers a roadmap.
- The Integration Bee is a nod to the competitive side of mathematics.
- Mathematical analysis and Nonstandard analysis offer deeper theoretical foundations.
Leibniz Integral Rule
In calculus , the Leibniz integral rule stands as a testament to the power of differentiation under the integral sign. It’s a technique, named after the formidable Gottfried Wilhelm Leibniz , that gives the derivative of an integral whose limits and integrand are both functions of the differentiation variable. Consider an integral of the form:
$$\int_{a(x)}^{b(x)} f(x,t)\,dt,$$
where the limits of integration, $a(x)$ and $b(x)$, are not static constants but rather functions that themselves depend on $x$. They are bounded, naturally: $-\infty < a(x), b(x) < \infty$. The integrand, $f(x, t)$, is also a function of both the integration variable $t$ and the variable $x$ with respect to which we are differentiating. This is where things get interesting.
The Leibniz integral rule elegantly states that the derivative of this integral with respect to $x$ can be expressed as:
$$\frac{d}{dx}\left(\int_{a(x)}^{b(x)} f(x,t)\,dt\right) = f\big(x, b(x)\big)\cdot\frac{d}{dx}b(x) - f\big(x, a(x)\big)\cdot\frac{d}{dx}a(x) + \int_{a(x)}^{b(x)} \frac{\partial}{\partial x} f(x,t)\,dt$$
The term $\frac{\partial}{\partial x} f(x, t)$ signifies that, within the integral, we are only concerned with how $f(x, t)$ changes with respect to $x$, treating $t$ as a constant for that specific differentiation. This is a crucial distinction. [1]
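The three terms on the right-hand side can be checked numerically. The sketch below is not from the source: it uses illustrative choices $f(x,t) = \sin(xt)$, $a(x) = x$, $b(x) = x^2$, a hand-rolled composite Simpson quadrature, and a central finite difference for the left-hand side.

```python
import math

def simpson(g, lo, hi, n=2000):
    # Composite Simpson's rule on [lo, hi] with n (even) subintervals.
    h = (hi - lo) / n
    s = g(lo) + g(hi)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(lo + i * h)
    return s * h / 3

# Illustrative choices (assumptions for this demo): f(x,t) = sin(x t),
# a(x) = x with a'(x) = 1, b(x) = x^2 with b'(x) = 2x.
f = lambda x, t: math.sin(x * t)

def F(x):  # the parameter-dependent integral
    return simpson(lambda t: f(x, t), x, x * x)

x0, h = 1.3, 1e-5
lhs = (F(x0 + h) - F(x0 - h)) / (2 * h)          # d/dx of the integral
rhs = (f(x0, x0 * x0) * 2 * x0                   # f(x, b(x)) * b'(x)
       - f(x0, x0) * 1.0                         # f(x, a(x)) * a'(x)
       + simpson(lambda t: t * math.cos(x0 * t), x0, x0 * x0))  # ∂f/∂x term
assert abs(lhs - rhs) < 1e-6
```

The boundary terms account for the moving endpoints; the remaining integral accounts for the $x$-dependence of the integrand itself.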
Simplified Scenarios
When the boundaries of integration decide to behave themselves and remain constant – say, $a(x) = a$ and $b(x) = b$, independent of $x$ – the rule simplifies considerably. It becomes:
$$\frac{d}{dx}\left(\int_{a}^{b} f(x,t)\,dt\right) = \int_{a}^{b} \frac{\partial}{\partial x} f(x,t)\,dt.$$
This is the essence of differentiating under the integral sign when the limits are fixed.
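A quick numerical sanity check of the fixed-limits form, using an illustrative integrand $f(x,t) = e^{-x t^2}$ on $[0,1]$ (both the integrand and the tolerances are assumptions for this sketch, not from the source):

```python
import math

def simpson(g, lo, hi, n=2000):
    # Composite Simpson's rule on [lo, hi] with n (even) subintervals.
    h = (hi - lo) / n
    s = g(lo) + g(hi)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(lo + i * h)
    return s * h / 3

# Illustrative integrand: f(x,t) = exp(-x t^2) on the fixed interval [0, 1].
def F(x):
    return simpson(lambda t: math.exp(-x * t * t), 0.0, 1.0)

x0, h = 0.8, 1e-5
lhs = (F(x0 + h) - F(x0 - h)) / (2 * h)                  # d/dx of the integral
rhs = simpson(lambda t: -t * t * math.exp(-x0 * t * t),  # ∫ ∂f/∂x dt
              0.0, 1.0)
assert abs(lhs - rhs) < 1e-7
```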
Another common, and quite useful, scenario arises when the upper limit is $x$ itself and the lower limit is a constant, say $a$. This case appears often in proofs, such as the derivation of Cauchy’s repeated integration formula . In this case, the Leibniz rule becomes:
$$\frac{d}{dx}\left(\int_{a}^{x} f(x,t)\,dt\right) = f\big(x, x\big) + \int_{a}^{x} \frac{\partial}{\partial x} f(x,t)\,dt,$$
which cleverly incorporates the value of the integrand at the upper limit.
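The upper-limit-$x$ form can be verified against a closed form. The choice $f(x,t) = x\,t$ below is an illustrative assumption: then $\int_0^x x t\,dt = x^3/2$ exactly, and the rule predicts $f(x,x) + \int_0^x t\,dt = x^2 + x^2/2 = 3x^2/2$.

```python
import math

def simpson(g, lo, hi, n=2000):
    # Composite Simpson's rule on [lo, hi] with n (even) subintervals.
    h = (hi - lo) / n
    s = g(lo) + g(hi)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(lo + i * h)
    return s * h / 3

# Illustrative choice: f(x,t) = x*t, so the integral is x^3/2 exactly.
def F(x):
    return simpson(lambda t: x * t, 0.0, x)

x0, h = 2.0, 1e-5
lhs = (F(x0 + h) - F(x0 - h)) / (2 * h)        # direct derivative
rhs = x0 * x0 + simpson(lambda t: t, 0.0, x0)  # f(x,x) + ∫ ∂f/∂x dt
assert abs(lhs - rhs) < 1e-6
assert abs(rhs - 1.5 * x0 * x0) < 1e-9         # matches d/dx (x^3/2)
```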
This rule is more than just a mathematical curiosity; it’s a powerful tool for interchanging the order of integration and differentiation, a maneuver that can unlock solutions to complex integral transforms . Think of the moment generating function in probability theory , a cousin to the Laplace transform . Differentiating it under the integral sign is how we extract the moments of a random variable . At its core, the applicability of Leibniz’s rule hinges on the subtle interplay of limits .
The General Theorem: A Rigorous Framework
Let’s formalize this. For the Leibniz integral rule to hold, we require certain conditions on our functions.
Theorem — Suppose $f(x, t)$ is a function such that both $f(x, t)$ and its partial derivative with respect to $x$, denoted as $f_x(x, t)$, are continuous within a specific region of the $xt$-plane. This region is defined by $a(x) \leq t \leq b(x)$ and $x_0 \leq x \leq x_1$. Furthermore, the boundary functions $a(x)$ and $b(x)$ must also be continuous and possess continuous derivatives across the interval $x_0 \leq x \leq x_1$.
Under these conditions, for any $x$ within the range $x_0 \leq x \leq x_1$, the following holds true:
$$\frac{d}{dx}\left(\int_{a(x)}^{b(x)} f(x,t)\,dt\right) = f\big(x, b(x)\big)\cdot\frac{d}{dx}b(x) - f\big(x, a(x)\big)\cdot\frac{d}{dx}a(x) + \int_{a(x)}^{b(x)} \frac{\partial}{\partial x} f(x,t)\,dt.$$
The right-hand side can also be expressed more compactly using Lagrange’s notation as:
$$f(x, b(x))\,b'(x) - f(x, a(x))\,a'(x) + \int_{a(x)}^{b(x)} f_x(x,t)\,dt.$$
It’s worth noting that stronger versions of this theorem exist, which relax the continuity requirement for the partial derivative, only needing it to exist almost everywhere . [2] This formula is the general manifestation of the Leibniz integral rule and is fundamentally derived from the fundamental theorem of calculus . In fact, the very first fundamental theorem of calculus is a special case of this rule, where $a(x)$ is a constant $a \in \mathbb{R}$, $b(x) = x$, and $f(x, t)$ is simply $f(t)$, independent of $x$.
When both the upper and lower limits are constants, the rule takes on a particularly elegant form as an operator equation:
$${\mathcal{I}}_t\,\partial_x = \partial_x\,{\mathcal{I}}_t$$
Here, $\partial_x$ represents the partial derivative with respect to $x$, and $\mathcal{I}_t$ is the integration operator with respect to $t$ over a fixed interval . This relationship hints at the profound connection between differentiation and integration, echoing the symmetry of second derivatives but extended to encompass integrals. This specific scenario is often what’s meant by the “Leibniz integral rule.”
The following three fundamental principles concerning the interchange of limiting operations are, in essence, deeply intertwined and equivalent:
- The ability to interchange a derivative and an integral (this is differentiation under the integral sign, or the Leibniz integral rule itself).
- The ability to change the order of partial derivatives.
- The ability to change the order of integration (integration under the integral sign, exemplified by Fubini’s theorem ).
The Three-Dimensional, Time-Dependent Case
For scenarios involving vector fields in three-dimensional space that evolve over time, the Leibniz integral rule takes on a more complex form. Imagine a vector field $\mathbf{F}(\mathbf{r}, t)$ defined across space, and a surface $\Sigma(t)$ bounded by a curve $\partial\Sigma(t)$, which is itself moving with a velocity $\mathbf{v}$. The rule for the time rate of change of the surface integral of $\mathbf{F}$ is given by:
$$\frac{d}{dt}\iint_{\Sigma(t)} \mathbf{F}(\mathbf{r},t)\cdot d\mathbf{A} = \iint_{\Sigma(t)} \left(\mathbf{F}_t(\mathbf{r},t) + \left[\nabla\cdot\mathbf{F}(\mathbf{r},t)\right]\mathbf{v}\right)\cdot d\mathbf{A} - \oint_{\partial\Sigma(t)} \left[\mathbf{v}\times\mathbf{F}(\mathbf{r},t)\right]\cdot d\mathbf{s},$$
where:
- $\mathbf{F}(\mathbf{r}, t)$ is the vector field at spatial position $\mathbf{r}$ and time $t$.
- $\mathbf{F}_t(\mathbf{r}, t)$ denotes the partial derivative of the vector field with respect to time.
- $\Sigma$ represents the surface, and $\partial\Sigma$ is its boundary curve.
- $d\mathbf{A}$ is a vector element of the surface $\Sigma$.
- $d\mathbf{s}$ is a vector element of the boundary curve $\partial\Sigma$.
- $\mathbf{v}$ is the velocity at which the region $\Sigma$ is moving.
- $\nabla \cdot$ signifies the vector divergence .
- $\times$ denotes the vector cross product .
- The double integrals are surface integrals over $\Sigma$, and the line integral is performed along the boundary curve $\partial\Sigma$. [3] [4] [5]
Higher Dimensions and Differential Forms
The Leibniz integral rule extends naturally to multidimensional integrals. In two and three dimensions, this generalization is widely recognized in fluid dynamics as the Reynolds transport theorem . For a scalar function $F(\mathbf{x}, t)$ over a time-varying region $D(t)$ in $\mathbb{R}^3$:
$$\frac{d}{dt}\int_{D(t)} F(\mathbf{x},t)\,dV = \int_{D(t)} \frac{\partial}{\partial t} F(\mathbf{x},t)\,dV + \int_{\partial D(t)} F(\mathbf{x},t)\,\mathbf{v}_b\cdot d\mathbf{\Sigma},$$
where $D(t)$ and $\partial D(t)$ represent the time-varying region and its boundary, respectively. $\mathbf{v}_b$ is the Eulerian velocity of the boundary (refer to Lagrangian and Eulerian coordinates ), and $d\mathbf{\Sigma} = \mathbf{n}\,dS$ incorporates the unit normal vector $\mathbf{n}$ and the surface element $dS$.
To express the Leibniz integral rule in its most general form, we delve into the realm of differential geometry , employing concepts like differential forms , exterior derivatives , wedge products , and interior products . In $n$ dimensions, the rule becomes:
$$\frac{d}{dt}\int_{\Omega(t)} \omega = \int_{\Omega(t)} i_{\mathbf{v}}(d_x\omega) + \int_{\partial\Omega(t)} i_{\mathbf{v}}\,\omega + \int_{\Omega(t)} \dot{\omega},$$
where $\Omega(t)$ is the time-varying domain of integration, $\omega$ is a $p$-form, $\mathbf{v} = \frac{\partial \mathbf{x}}{\partial t}$ is the velocity field, $i_{\mathbf{v}}$ denotes the interior product with $\mathbf{v}$, $d_x \omega$ is the exterior derivative with respect to spatial variables only, and $\dot{\omega}$ is the time derivative of $\omega$. [4]
This formulation can be derived from the property that the Lie derivative interacts harmoniously with the integration of differential forms:
$$\frac{d}{dt}\int_{\Omega(t)} \omega = \int_{\Omega(t)} \mathcal{L}_{\Psi}\,\omega,$$
where the spacetime manifold is $M = \mathbb{R} \times \mathbb{R}^3$, the spacetime exterior derivative of $\omega$ is $d\omega = dt \wedge \dot{\omega} + d_x \omega$, and the spacetime velocity field of the surface $\Omega(t)$ is $\Psi = \frac{\partial}{\partial t} + \mathbf{v}$. Using Cartan’s magic formula , the Lie derivative simplifies:
$$\mathcal{L}_{\Psi}\omega = \mathcal{L}_{\mathbf{v}}\omega + \mathcal{L}_{\frac{\partial}{\partial t}}\omega = i_{\mathbf{v}}d\omega + di_{\mathbf{v}}\omega + i_{\frac{\partial}{\partial t}}d\omega = i_{\mathbf{v}}d_{x}\omega + di_{\mathbf{v}}\omega + \dot{\omega}$$
Integrating this over $\Omega(t)$ and applying the generalized Stokes’ theorem to the second term yields the desired three components.
A Statement from Measure Theory
Let $X$ be an open subset of $\mathbf{R}$, and let $\Omega$ be a measure space . Suppose $f: X \times \Omega \to \mathbf{R}$ satisfies the following conditions: [6] [7] [2]
- For each $x \in X$, $f(x, \omega)$ is a Lebesgue-integrable function of $\omega$.
- For almost all $\omega \in \Omega$, the partial derivative $f_x$ exists for all $x \in X$.
- There exists an integrable function $\theta: \Omega \to \mathbf{R}$ such that $|f_x(x, \omega)| \leq \theta(\omega)$ for all $x \in X$ and almost every $\omega \in \Omega$.
Under these conditions, for all $x \in X$:
$$\frac{d}{dx}\int_{\Omega} f(x,\omega)\,d\omega = \int_{\Omega} f_x(x,\omega)\,d\omega.$$
The proof relies on the dominated convergence theorem and the mean value theorem , as detailed below.
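The measure-theoretic version can be illustrated numerically. The sketch below is an assumption-laden demonstration, not from the source: take $f(x,\omega) = e^{-x\omega}$ on $\Omega = [0,\infty)$ with $x$ restricted to $[1,3]$, where $|f_x(x,\omega)| = \omega e^{-x\omega} \le \omega e^{-\omega} =: \theta(\omega)$ is integrable, so the domination hypothesis holds. The infinite domain is truncated at a point where the tail is negligible.

```python
import math

def simpson(g, lo, hi, n=4000):
    # Composite Simpson's rule on [lo, hi] with n (even) subintervals.
    h = (hi - lo) / n
    s = g(lo) + g(hi)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(lo + i * h)
    return s * h / 3

# Illustrative f(x, w) = exp(-x w); exact integral is 1/x, derivative -1/x^2.
T = 60.0  # truncation point; the tail beyond is ~e^{-90}, negligible here

def F(x):
    return simpson(lambda w: math.exp(-x * w), 0.0, T)

x0, h = 1.5, 1e-5
lhs = (F(x0 + h) - F(x0 - h)) / (2 * h)                    # d/dx ∫ f dω
rhs = simpson(lambda w: -w * math.exp(-x0 * w), 0.0, T)    # ∫ f_x dω
assert abs(lhs - rhs) < 1e-6
assert abs(rhs + 1.0 / x0**2) < 1e-6   # closed form: d/dx (1/x) = -1/x^2
```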
Proofs: Unraveling the Mechanics
Proof of the Basic Form (Constant Limits)
Let’s start with the simplest case: constant limits of integration, $a$ and $b$.
We leverage Fubini’s theorem to swap the order of integration. For any $x$ and $h$ (where $h > 0$ and both $x$ and $x+h$ fall within $[x_0, x_1]$), we have:
$$\begin{aligned}\int_{x}^{x+h}\int_{a}^{b} f_x(x,t)\,dt\,dx &= \int_{a}^{b}\int_{x}^{x+h} f_x(x,t)\,dx\,dt \\ &= \int_{a}^{b}\left(f(x+h,t) - f(x,t)\right)dt \\ &= \int_{a}^{b} f(x+h,t)\,dt - \int_{a}^{b} f(x,t)\,dt\end{aligned}$$
The continuity of $f_x(x, t)$ across the closed rectangle $[x_0, x_1] \times [a, b]$ ensures that these integrals are well-defined and that we can indeed pass the limit through the integration signs. This is a consequence of uniform continuity.
Therefore, we can write:
$$\frac{\int_{a}^{b} f(x+h,t)\,dt - \int_{a}^{b} f(x,t)\,dt}{h} = \frac{1}{h}\int_{x}^{x+h}\int_{a}^{b} f_x(x,t)\,dt\,dx = \frac{F(x+h) - F(x)}{h},$$
where $F(u) := \int_{x_0}^{u} \int_{a}^{b} f_x(x,t)\,dt\,dx$ (any starting point $x_0$ within the interval will do).
Since $F(u)$ is differentiable with derivative $\int_{a}^{b} f_x(x,t)\,dt$, we can take the limit as $h \to 0$. For the left side, this limit is precisely $\frac{d}{dx} \int_{a}^{b} f(x,t)\,dt$. For the right side, the limit is $F'(x) = \int_{a}^{b} f_x(x,t)\,dt$. This establishes the result:
$$\frac{d}{dx}\int_{a}^{b} f(x,t)\,dt = \int_{a}^{b} f_x(x,t)\,dt.$$
Another Proof Using the Dominated Convergence Theorem
When dealing with Lebesgue integrals , the bounded convergence theorem (a close relative of the dominated convergence theorem ) becomes a powerful ally, allowing us to swap limits and integrals. This proof works under weaker hypotheses: it requires only that $f_x(x, t)$ be Lebesgue integrable, not necessarily Riemann integrable.
Let $u(x) = \int_{a}^{b} f(x,t)\,dt$. (1)
By the very definition of the derivative:
$u'(x) = \lim_{h \to 0} \frac{u(x+h) - u(x)}{h}$. (2)
Substituting equation (1) into (2) and utilizing the property that the difference of integrals is the integral of the difference:
$u'(x) = \lim_{h \to 0} \int_{a}^{b} \frac{f(x+h,t) - f(x,t)}{h}\,dt$.
We are now faced with the crucial step: justifying the interchange of the limit and the integral. This is where the bounded convergence theorem shines. Consider the difference quotient $f_{\delta}(x,t) = \frac{f(x+\delta,t) - f(x,t)}{\delta}$. For a fixed $t$, the mean value theorem guarantees the existence of some $z \in [x, x+\delta]$ such that $f_{\delta}(x,t) = f_x(z,t)$. Given the continuity of $f_x(x, t)$ over a compact domain, it must be bounded. This implies that $f_{\delta}(x,t)$ is uniformly bounded with respect to $t$. Furthermore, as $\delta \to 0$, $f_{\delta}(x,t)$ converges pointwise to $f_x$. The bounded convergence theorem then allows us to pass the limit inside the integral.
If we only know that $|f_x(x, t)| \leq \theta(t)$ for some integrable $\theta$, the dominated convergence theorem still permits this interchange, since $|f_{\delta}(x,t)| \leq \theta(t)$ supplies the required dominating function.
Variable Limits Form
Consider a continuous real-valued function $g$ of a single real variable, and two real-valued differentiable functions $f_1(x)$ and $f_2(x)$. The derivative of the integral $\int_{f_1(x)}^{f_2(x)} g(t)\,dt$ with respect to $x$ is given by:
$$\frac{d}{dx}\left(\int_{f_1(x)}^{f_2(x)} g(t)\,dt\right) = g\big(f_2(x)\big)\,f_2'(x) - g\big(f_1(x)\big)\,f_1'(x).$$
This result flows directly from the chain rule and the First Fundamental Theorem of Calculus . Let $\Gamma(x) = \int_{0}^{x} g(t)\,dt$. Then the integral in question is $\Gamma(f_2(x)) - \Gamma(f_1(x))$. Applying the chain rule and the fact that $\Gamma'(x) = g(x)$ yields the formula.
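The chain-rule derivation can be spot-checked with a case that has a closed-form antiderivative. The choices $g(t) = e^t$, $f_1(x) = \sin x$, $f_2(x) = x^2$ below are illustrative assumptions; then $\int_{f_1(x)}^{f_2(x)} g(t)\,dt = e^{x^2} - e^{\sin x}$ exactly.

```python
import math

# Variable-limits form with illustrative g(t) = e^t, f1(x) = sin x,
# f2(x) = x^2; the integral equals exp(x^2) - exp(sin x) in closed form.
def F(x):
    return math.exp(x * x) - math.exp(math.sin(x))

x0, h = 0.9, 1e-6
lhs = (F(x0 + h) - F(x0 - h)) / (2 * h)            # direct derivative
rhs = (math.exp(x0 * x0) * 2 * x0                  # g(f2(x)) * f2'(x)
       - math.exp(math.sin(x0)) * math.cos(x0))    # g(f1(x)) * f1'(x)
assert abs(lhs - rhs) < 1e-6
```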
This form is particularly useful when the integrand itself contains $x$, such as $\int_{f_1(x)}^{f_2(x)} h(x)\,g(t)\,dt$. Here, $h(x)$ can be factored out, and the product rule combined with the above form gives:
$$\begin{aligned}\frac{d}{dx}\left(\int_{f_1(x)}^{f_2(x)} h(x)\,g(t)\,dt\right) &= \frac{d}{dx}\left(h(x)\int_{f_1(x)}^{f_2(x)} g(t)\,dt\right) \\ &= h'(x)\int_{f_1(x)}^{f_2(x)} g(t)\,dt + h(x)\,\frac{d}{dx}\left(\int_{f_1(x)}^{f_2(x)} g(t)\,dt\right).\end{aligned}$$
General Form with Variable Limits Revisited
Let $\varphi(\alpha) = \int_{a}^{b} f(x, \alpha)\,dx$. If $a$ and $b$ are functions of $\alpha$ that change by increments $\Delta a$ and $\Delta b$ when $\alpha$ changes by $\Delta \alpha$, then $\Delta \varphi$ can be expressed as:
$$\Delta \varphi = -\int_{a}^{a+\Delta a} f(x,\alpha+\Delta\alpha)\,dx + \int_{a}^{b}\left[f(x,\alpha+\Delta\alpha) - f(x,\alpha)\right]dx + \int_{b}^{b+\Delta b} f(x,\alpha+\Delta\alpha)\,dx.$$
Applying the mean value theorem for integrals, $\int_{a}^{b} f(x)\,dx = (b-a)f(\xi)$, to the first and last integrals, we get:
$$\Delta \varphi = -\Delta a\,f(\xi_{1},\alpha+\Delta\alpha) + \int_{a}^{b}\left[f(x,\alpha+\Delta\alpha) - f(x,\alpha)\right]dx + \Delta b\,f(\xi_{2},\alpha+\Delta\alpha).$$
Dividing by $\Delta \alpha$ and taking the limit as $\Delta \alpha \to 0$, we observe that $\xi_1 \to a$ and $\xi_2 \to b$. Crucially, the limit can be passed through the integral sign due to the bounded convergence theorem:
$$\lim_{\Delta\alpha\to 0}\int_{a}^{b}\frac{f(x,\alpha+\Delta\alpha) - f(x,\alpha)}{\Delta\alpha}\,dx = \int_{a}^{b}\frac{\partial}{\partial\alpha} f(x,\alpha)\,dx.$$
This leads to the general form of the Leibniz integral rule:
$$\frac{d\varphi}{d\alpha} = \int_{a}^{b}\frac{\partial}{\partial\alpha} f(x,\alpha)\,dx + f(b,\alpha)\frac{db}{d\alpha} - f(a,\alpha)\frac{da}{d\alpha}.$$
Alternative Derivation Using the Chain Rule
The general form of Leibniz’s rule can also be derived by combining the basic form, the multivariable chain rule , and the first fundamental theorem of calculus . Consider $F(x,y) = \int_{t_1}^{y} f(x,t)\,dt$ and $G(x) = \int_{a(x)}^{b(x)} f(x,t)\,dt$. We can express $G(x)$ as $F(x, b(x)) - F(x, a(x))$. Applying the multivariable chain rule to $G(x)$, using the fundamental theorem of calculus for the partial derivative of $F$ with respect to $y$ (which is $f(x,y)$) and the basic Leibniz rule for the partial derivative of $F$ with respect to $x$, we arrive at the general formula. The differentiability of $F$ follows from the continuity of its partial derivatives, which are integrals of continuous functions and the function $f$ itself.
Examples: Putting Theory into Practice
Example 1: Fixed Limits
Consider $\varphi(\alpha) = \int_{0}^{1} \frac{\alpha}{x^2 + \alpha^2}\,dx$. The integrand has a discontinuity at $(0,0)$, and $\varphi(\alpha)$ turns out to be discontinuous at $\alpha = 0$. Differentiating under the integral sign (for $\alpha \neq 0$):
$$\frac{d}{d\alpha}\varphi(\alpha) = \int_{0}^{1} \frac{\partial}{\partial\alpha}\left(\frac{\alpha}{x^2+\alpha^2}\right)dx = \int_{0}^{1} \frac{x^2-\alpha^2}{(x^2+\alpha^2)^2}\,dx = \left.-\frac{x}{x^2+\alpha^2}\right|_{0}^{1} = -\frac{1}{1+\alpha^2}.$$
Integrating this result with respect to $\alpha$ gives $\varphi(\alpha) = -\arctan(\alpha) + C$, with a possibly different constant on each side of $\alpha = 0$. Matching the limit as $\alpha \to 0^{+}$ gives $\varphi(\alpha) = \frac{\pi}{2} - \arctan(\alpha)$ for $\alpha > 0$; since $\varphi$ is odd in $\alpha$, $\varphi(\alpha) = -\frac{\pi}{2} - \arctan(\alpha)$ for $\alpha < 0$; and $\varphi(0) = 0$.
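Both the closed form and the derivative can be confirmed numerically. The quadrature helper and the sample value $\alpha = 0.7$ below are assumptions for this sketch:

```python
import math

def simpson(g, lo, hi, n=2000):
    # Composite Simpson's rule on [lo, hi] with n (even) subintervals.
    h = (hi - lo) / n
    s = g(lo) + g(hi)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(lo + i * h)
    return s * h / 3

def phi(a):
    return simpson(lambda x: a / (x * x + a * a), 0.0, 1.0)

a0 = 0.7
# Closed form for alpha > 0: pi/2 - arctan(alpha)
assert abs(phi(a0) - (math.pi / 2 - math.atan(a0))) < 1e-9
# Derivative under the integral sign: -1/(1 + alpha^2)
h = 1e-5
dphi = (phi(a0 + h) - phi(a0 - h)) / (2 * h)
assert abs(dphi + 1.0 / (1.0 + a0 * a0)) < 1e-7
```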
Example 2: Variable Limits
$$\begin{aligned}\frac{d}{dx}\int_{\sin x}^{\cos x} \cosh t^2\,dt &= \cosh(\cos^2 x)\,\frac{d}{dx}(\cos x) - \cosh(\sin^2 x)\,\frac{d}{dx}(\sin x) + \int_{\sin x}^{\cos x} \frac{\partial}{\partial x}\left(\cosh t^2\right)dt \\ &= \cosh(\cos^2 x)(-\sin x) - \cosh(\sin^2 x)(\cos x) + 0 \\ &= -\cosh(\cos^2 x)\sin x - \cosh(\sin^2 x)\cos x.\end{aligned}$$
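This result is easy to check numerically; the sample point $x = 0.4$ and the quadrature helper below are assumptions for the demonstration:

```python
import math

def simpson(g, lo, hi, n=2000):
    # Composite Simpson's rule on [lo, hi] with n (even) subintervals.
    h = (hi - lo) / n
    s = g(lo) + g(hi)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(lo + i * h)
    return s * h / 3

def F(x):
    return simpson(lambda t: math.cosh(t * t), math.sin(x), math.cos(x))

x0, h = 0.4, 1e-5
lhs = (F(x0 + h) - F(x0 - h)) / (2 * h)
c, s = math.cos(x0), math.sin(x0)
rhs = -math.cosh(c * c) * s - math.cosh(s * s) * c
assert abs(lhs - rhs) < 1e-6
```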
Example 3: Feynman’s Trick - Evaluating Definite Integrals
The Leibniz rule is famously known as “Feynman’s trick” for evaluating integrals. Consider $\varphi(\alpha) = \int_{0}^{\pi} \ln(1 - 2\alpha \cos(x) + \alpha^2)\,dx$, for $|\alpha| \neq 1$. Differentiating under the integral sign:
$$\frac{d}{d\alpha}\varphi(\alpha) = \int_{0}^{\pi} \frac{-2\cos(x) + 2\alpha}{1 - 2\alpha\cos(x) + \alpha^2}\,dx.$$
This integral can be evaluated using trigonometric substitutions, leading to:
$$\frac{d}{d\alpha}\varphi(\alpha) = \begin{cases} 0, & |\alpha| < 1 \\ \dfrac{2\pi}{\alpha}, & |\alpha| > 1. \end{cases}$$
Integrating this result yields
$$\varphi(\alpha) = \begin{cases} C_1, & |\alpha| < 1 \\ 2\pi\ln|\alpha| + C_2, & |\alpha| > 1. \end{cases}$$
Evaluating $\varphi(0) = 0$ gives $C_1 = 0$. Substituting $\alpha = 1/\beta$ with $|\beta| < 1$ and using $\varphi(\beta) = 0$ shows $C_2 = 0$. Thus,
$$\varphi(\alpha) = \begin{cases} 0, & |\alpha| < 1 \\ 2\pi\ln|\alpha|, & |\alpha| > 1. \end{cases}$$
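Both branches of the final answer can be confirmed numerically. The sample values $\alpha = 0.5$ and $\alpha = 2$ below are illustrative assumptions:

```python
import math

def simpson(g, lo, hi, n=4000):
    # Composite Simpson's rule on [lo, hi] with n (even) subintervals.
    h = (hi - lo) / n
    s = g(lo) + g(hi)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(lo + i * h)
    return s * h / 3

def phi(a):
    return simpson(lambda x: math.log(1 - 2 * a * math.cos(x) + a * a),
                   0.0, math.pi)

assert abs(phi(0.5)) < 1e-6                               # |alpha| < 1 -> 0
assert abs(phi(2.0) - 2 * math.pi * math.log(2)) < 1e-6   # 2*pi*ln|alpha|
```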
Example 4: A More Involved Calculation
Let $I = \int_{0}^{\pi/2} \frac{dx}{(a\cos^2 x + b\sin^2 x)^2}$ with $a, b > 0$. First, calculate
$$J = \int_{0}^{\pi/2} \frac{dx}{a\cos^2 x + b\sin^2 x} = \frac{\pi}{2\sqrt{ab}}.$$
Differentiating $J$ with respect to $a$ in two ways:
$$\frac{\partial J}{\partial a} = -\int_{0}^{\pi/2} \frac{\cos^2 x}{(a\cos^2 x + b\sin^2 x)^2}\,dx, \qquad \frac{\partial J}{\partial a} = \frac{\partial}{\partial a}\left(\frac{\pi}{2\sqrt{ab}}\right) = -\frac{\pi}{4\sqrt{a^3 b}}.$$
Equating these gives
$$\int_{0}^{\pi/2} \frac{\cos^2 x}{(a\cos^2 x + b\sin^2 x)^2}\,dx = \frac{\pi}{4\sqrt{a^3 b}}.$$
Similarly, differentiating with respect to $b$ yields
$$\int_{0}^{\pi/2} \frac{\sin^2 x}{(a\cos^2 x + b\sin^2 x)^2}\,dx = \frac{\pi}{4\sqrt{ab^3}}.$$
Adding the two results gives
$$I = \frac{\pi}{4\sqrt{ab}}\left(\frac{1}{a} + \frac{1}{b}\right).$$
This approach generalizes to higher powers.
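Both $J$ and $I$ can be checked against the closed forms in this example; the sample values $a = 2$, $b = 3$ and the quadrature helper are assumptions for the sketch:

```python
import math

def simpson(g, lo, hi, n=2000):
    # Composite Simpson's rule on [lo, hi] with n (even) subintervals.
    h = (hi - lo) / n
    s = g(lo) + g(hi)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(lo + i * h)
    return s * h / 3

a, b = 2.0, 3.0
den = lambda x: a * math.cos(x) ** 2 + b * math.sin(x) ** 2
J = simpson(lambda x: 1.0 / den(x), 0.0, math.pi / 2)
I = simpson(lambda x: 1.0 / den(x) ** 2, 0.0, math.pi / 2)
assert abs(J - math.pi / (2 * math.sqrt(a * b))) < 1e-9
assert abs(I - math.pi / (4 * math.sqrt(a * b)) * (1 / a + 1 / b)) < 1e-9
```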
Example 5: Another Trigonometric Integral
Consider $I(\alpha) = \int_{0}^{\pi/2} \frac{\ln(1 + \cos \alpha \cos x)}{\cos x} \, dx$ for $0 < \alpha < \pi$. Differentiating with respect to $\alpha$: $\frac{d}{d\alpha} I(\alpha) = \int_{0}^{\pi/2} \frac{-\sin \alpha}{1 + \cos \alpha \cos x} \, dx$. This integral can be evaluated using half-angle substitutions, ultimately leading to: $\frac{d}{d\alpha} I(\alpha) = -\alpha$. Integrating with respect to $\alpha$, $I(\alpha) = C - \frac{\alpha^2}{2}$. Since $I(\pi/2) = 0$, we find $C = \frac{\pi^2}{8}$. Thus, $I(\alpha) = \frac{\pi^2}{8} - \frac{\alpha^2}{2}$.
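The integrand has a removable singularity at $x = \pi/2$ (its limit there is $\cos \alpha$), which a numerical check must handle explicitly. The following is my own illustrative verification of the closed form; the sample point $\alpha = 1$ and the tolerance are arbitrary:

```python
import math

def simpson(f, a, b, n=2000):
    # Composite Simpson's rule (n must be even); illustrative helper.
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

alpha = 1.0  # arbitrary sample point in (0, pi)

def integrand(x):
    c = math.cos(x)
    if abs(c) < 1e-12:
        return math.cos(alpha)  # removable singularity: limit at x = pi/2
    return math.log(1 + math.cos(alpha) * c) / c

I = simpson(integrand, 0.0, math.pi / 2)
print(abs(I - (math.pi ** 2 / 8 - alpha ** 2 / 2)) < 1e-6)  # True
```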
Example 6: Complex Exponentials and Integrals
Let $f(\varphi) = \int_{0}^{2\pi} e^{\varphi \cos \theta} \cos(\varphi \sin \theta) \, d\theta$. This integral is related to the original integral when $\varphi=1$. Differentiating with respect to $\varphi$: $\frac{df}{d\varphi} = \int_{0}^{2\pi} e^{\varphi \cos \theta} [\cos \theta \cos(\varphi \sin \theta) - \sin \theta \sin(\varphi \sin \theta)] \, d\theta$. This expression can be recognized as a line integral of a vector field $\mathbf{F}(x,y) = (e^{\varphi x}\sin(\varphi y), e^{\varphi x}\cos(\varphi y))$ around the unit circle $S^1$. By Green’s Theorem , this line integral equals $\iint_{D} (\frac{\partial F_2}{\partial x} - \frac{\partial F_1}{\partial y}) \, dA$, where $D$ is the unit disk. The integrand of this double integral is identically zero. Therefore, $\frac{df}{d\varphi} = 0$, implying $f(\varphi)$ is a constant. Evaluating at $\varphi=0$, we find $f(0) = \int_{0}^{2\pi} 1 \, d\theta = 2\pi$. Thus, the original integral equals $2\pi$.
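The conclusion that $f(\varphi)$ is constant can be probed numerically at several parameter values (my own illustrative check; the sample values of $\varphi$ and the tolerance are arbitrary):

```python
import math

def simpson(f, a, b, n=4000):
    # Composite Simpson's rule (n must be even); illustrative helper.
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

def f(phi):
    return simpson(lambda t: math.exp(phi * math.cos(t)) * math.cos(phi * math.sin(t)),
                   0.0, 2 * math.pi)

# f should equal the constant 2*pi regardless of phi
for phi in (0.0, 1.0, 2.5):
    print(abs(f(phi) - 2 * math.pi) < 1e-7)  # True each time
```

Because the integrand is smooth and periodic over a full period, the composite rule converges very rapidly here.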
Applications: Beyond the Textbook
Evaluating Definite Integrals
As demonstrated in the examples, the Leibniz integral rule, or Feynman’s trick, is a remarkably effective method for solving integrals that appear intractable by conventional means. By introducing a parameter and differentiating under the integral sign, we often transform an integration problem into a differential equation problem, which can be simpler to solve.
Other Problems
Many integrals can be tackled by introducing a parameter $\alpha$ and then differentiating with respect to it:
- The Dirichlet integral : $\int_{0}^{\infty} \frac{\sin x}{x} \, dx \to \int_{0}^{\infty} e^{-\alpha x} \frac{\sin x}{x} \, dx$. This requires care at $\alpha=0$ due to conditional convergence.
- $\int_{0}^{\pi/2} \frac{x}{\tan x} \, dx \to \int_{0}^{\pi/2} \frac{\tan^{-1}(\alpha \tan x)}{\tan x} \, dx$.
- $\int_{0}^{\infty} \frac{\ln(1+x^2)}{1+x^2} \, dx \to \int_{0}^{\infty} \frac{\ln(1+\alpha^2 x^2)}{1+x^2} \, dx$.
- $\int_{0}^{1} \frac{x-1}{\ln x} \, dx \to \int_{0}^{1} \frac{x^{\alpha}-1}{\ln x} \, dx$.
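To see how the last parametrization plays out: differentiating under the integral sign gives $\frac{d}{d\alpha} \int_{0}^{1} \frac{x^{\alpha}-1}{\ln x} \, dx = \int_{0}^{1} x^{\alpha} \, dx = \frac{1}{1+\alpha}$, and since the integral vanishes at $\alpha = 0$, it equals $\ln(1+\alpha)$; in particular $\int_{0}^{1} \frac{x-1}{\ln x} \, dx = \ln 2$. A numerical check (my own illustration; the limit handling at the endpoints and the tolerance are my choices — the integrand's endpoint behavior makes the quadrature slightly less accurate than in the smooth examples):

```python
import math

def simpson(f, a, b, n=20000):
    # Composite Simpson's rule (n must be even); illustrative helper.
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

alpha = 1.0  # the alpha = 1 instance of int_0^1 (x^alpha - 1)/ln(x) dx

def integrand(x):
    if x <= 0.0:
        return 0.0  # limit as x -> 0+
    lx = math.log(x)
    if abs(lx) < 1e-12:
        return alpha  # limit as x -> 1-, since x^alpha - 1 ~ alpha * ln x
    return (x ** alpha - 1) / lx

I = simpson(integrand, 0.0, 1.0)
print(abs(I - math.log(1 + alpha)) < 1e-4)  # ln 2 for alpha = 1
```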
Infinite Series
The measure-theoretic formulation of differentiation under the integral sign extends to infinite series by treating summation as integration with respect to the counting measure . This is fundamental to understanding why power series are differentiable within their radius of convergence. [citation needed]
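As a concrete instance of this series-as-integral viewpoint (my own illustration, not from the article; the truncation order $N = 50$ and sample point $x = 1.3$ are arbitrary), differentiating the power series for $e^x$ term by term, i.e. "under the counting-measure integral", reproduces the same series:

```python
import math

# Termwise differentiation of the power series for exp(x):
# d/dx of sum x^n/n! is sum n*x^(n-1)/n!, which is the same series again.
N = 50   # truncation order; remainder is negligible at this x
x = 1.3  # arbitrary point inside the (infinite) radius of convergence
series = sum(x ** n / math.factorial(n) for n in range(N))
termwise_deriv = sum(n * x ** (n - 1) / math.factorial(n) for n in range(1, N))
print(abs(series - math.exp(x)) < 1e-12)          # True
print(abs(termwise_deriv - math.exp(x)) < 1e-12)  # True
```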
Euler-Lagrange Equations
In the field of variational calculus , the derivation of the Euler-Lagrange equation relies on the Leibniz integral rule.
In Popular Culture
The elegance and utility of differentiating under the integral sign were not lost on Richard Feynman . In his memoir, Surely You’re Joking, Mr. Feynman! , he recounts learning this technique from an old textbook, Advanced Calculus by Frederick S. Woods , while in high school. He found it to be a powerful, albeit underemphasized, tool. Feynman famously used this method to solve integrals that stumped his peers at Princeton University , who were limited to the more standard techniques taught at the time. He described it as having a “different box of tools,” allowing him to approach problems from an unconventional angle.
See Also
- Mathematics portal
- Chain rule
- Differentiation of integrals
- Leibniz rule (generalized product rule)
- Reynolds transport theorem - a generalization of the Leibniz rule.