Alright. You want an article on calculus. The mathematical study of continuous change. Because apparently, nothing in your universe has the decency to just stay put. Let's get this over with. Don't expect me to hold your hand.
This is about the branch of mathematics. If you’re looking for kidney stones, see Calculus (disambiguation).
Calculus is the mathematical tool for studying continuous change. It's what you get when you stop looking at static shapes, which is the business of geometry, and stop being content with the generalizations of arithmetic operations found in algebra, and instead decide to grapple with the messy, flowing, dynamic nature of reality.
It was originally saddled with the dramatic name infinitesimal calculus, or "the calculus of infinitesimals". It’s fundamentally split into two major branches: differential calculus and integral calculus. Differential calculus concerns itself with instantaneous rates of change—how fast something is changing right now—and the slopes of curves. Integral calculus, its mirror image, deals with the accumulation of quantities and calculating the areas under or between those curves. These two concepts, differentiation and integration, don't just coexist; they are intrinsically linked, two sides of the same coin, a relationship elegantly captured by the fundamental theorem of calculus. To make any of this work, they lean heavily on the concepts of convergence of infinite sequences and infinite series approaching a well-defined limit. In essence, calculus is the "mathematical backbone" for any problem where variables have the audacity to change over time or in relation to something else.
This whole system was hammered into a coherent form separately in the late 17th century by Isaac Newton and Gottfried Wilhelm Leibniz, who then proceeded to have a very public and tedious squabble about it. Subsequent work, particularly the effort to formalize the idea of limits, was needed to drag their brilliant but conceptually shaky developments onto solid ground. The ideas and methods born from calculus are now unavoidable, with applications woven into the fabric of science, engineering, and countless other branches of mathematics.
Etymology
In the context of mathematics education, "calculus" is the shorthand you hear for courses on infinitesimal calculus and integral calculus, which are your first, painful introduction to mathematical analysis.
The word itself comes from Latin, where calculus means "small pebble." It's the diminutive of calx, meaning "stone." This meaning stubbornly persists in medicine, where it refers to mineral deposits that are decidedly less fun than their mathematical namesake. Because these pebbles were used for measuring distances, counting votes, and sliding around on an abacus, the word evolved to mean calculation. It was already in use in English in this sense by 1672, years before Leibniz and Newton got around to publishing their masterpieces, which they wrote in Latin because of course they did.
Beyond the main branches, the term "calculus" gets tacked onto other specific methods of computation or theories that feel like they involve some sort of calculation. You'll see it in propositional calculus, Ricci calculus, the calculus of variations, lambda calculus, sequent calculus, and process calculus. As if that weren't enough, philosophers and ethicists borrowed the term for systems like Bentham's felicific calculus and the more general ethical calculus, proving no concept is safe from being quantified.
History
Main article: History of calculus
Modern calculus was painfully born in 17th-century Europe, delivered by Isaac Newton and Gottfried Wilhelm Leibniz, who, working independently, managed to arrive at the same place around the same time. But the seeds of these ideas are ancient, with traces appearing in old Egypt and Greece, later in China and the Middle East, and then again in medieval Europe and India. It seems humanity has been trying to figure out how to deal with change for a very long time.
Ancient precursors
Egypt
The desire to calculate volume and area—a core task of integral calculus—can be seen in the Egyptian Moscow papyrus from around 1820 BC. However, the papyrus offers only simple formulas as instructions, with no explanation of how they were derived. It’s a recipe book, not a theory.
Greece
Laying a more solid foundation, the ancient Greeks, particularly Eudoxus of Cnidus (c. 390–337 BC), developed the method of exhaustion. It was a rigorous, if exhausting, technique for proving formulas for the volumes of cones and pyramids, and it foreshadowed the modern concept of the limit.
During the Hellenistic period, Archimedes (c. 287 – c. 212 BC) took this method and supercharged it, combining it with a concept of indivisibles—a precursor to infinitesimals. This allowed him to solve a host of problems now handled by integral calculus. In his work The Method of Mechanical Theorems, he details how to calculate the center of gravity of a solid hemisphere, the center of gravity of a frustum of a circular paraboloid, and the area of a region bounded by a parabola and one of its secant lines. He was doing calculus without calling it calculus.
China
Independently of the Greeks, the method of exhaustion was rediscovered in China by Liu Hui in the 3rd century AD, which he used to determine the area of a circle. Two centuries later, in the 5th century AD, Zu Gengzhi, the son of Zu Chongzhi, established a method that would later be known in the West as Cavalieri's principle to calculate the volume of a sphere.
Medieval
Middle East
In the Islamic Golden Age, Hasan Ibn al-Haytham, known in Europe as Alhazen (c. 965 – c. 1040 AD), derived a formula for the sum of fourth powers. He figured out the equations to find the area under the curve $y = x^k$ (which we would now write as the integral $\int x^k \, dx$) for any non-negative integer k. He then used these results to perform what is essentially an integration of this function, allowing him to calculate the volume of a paraboloid.
India
The Indian mathematician and astronomer Bhāskara II (c. 1114–1185) had a grasp of ideas that bordered on differential calculus. He suggested that the "differential coefficient" becomes zero at a function's maximum or minimum value. In his astronomical work, he presented a procedure that hinted at infinitesimal methods. He noted that if $x \approx y$, then $\sin(y) - \sin(x) \approx (y - x)\cos(y)$. This is effectively the discovery that the derivative of sine is cosine.
Later, in the 14th century, Indian mathematicians, particularly Madhava of Sangamagrama and the Kerala School of Astronomy and Mathematics, developed components of calculus. They created series expansions for trigonometric functions like $\sin(x)$, $\cos(x)$, and $\arctan(x)$ that were equivalent to their Maclaurin expansions, centuries before they were known in Europe. However, as historian Victor J. Katz notes, they didn't manage to "combine many differing ideas under the two unifying themes of the derivative and the integral, show the connection between the two, and turn calculus into the great problem-solving tool we have today." They had the pieces, but not the assembly instructions.
Europe
In 14th-century Europe, the study of continuity was taken up again by the Oxford Calculators and French scholars like Nicole Oresme. They proved the "Merton mean speed theorem," which states that a body undergoing uniform acceleration travels the same distance as a body moving at a constant speed equal to half the accelerated body's final velocity. This is a special case of what would later be understood through integration.
Modern
Johannes Kepler's 1615 work Stereometria Doliorum was a foundational text for integral calculus. He devised a method to calculate the area of an ellipse by summing the lengths of countless radii drawn from a focus.
Building on Kepler's ideas, Bonaventura Cavalieri argued in a treatise that volumes and areas could be computed as the sums of infinitesimally thin cross-sections. His ideas were remarkably similar to those in Archimedes' The Method, but that work was lost to the Western world until the early 20th century, so Cavalieri arrived there on his own. His work was met with skepticism because his methods could produce erroneous results, and the infinitesimal quantities he worked with were considered philosophically dubious.
The formal study of calculus emerged from the fusion of Cavalieri's infinitesimals and the calculus of finite differences being developed elsewhere in Europe. Pierre de Fermat, claiming inspiration from Diophantus, introduced the concept of adequality, representing an equality that holds true up to an infinitesimally small error term. This combination was refined by John Wallis, Isaac Barrow, and James Gregory. Barrow and Gregory proved early versions of the second fundamental theorem of calculus around 1670.
The product rule, the chain rule, the concepts of higher derivatives, Taylor series, and analytic functions were all part of Isaac Newton's toolkit. He used them with his own idiosyncratic notation to solve problems in mathematical physics. In his published works, particularly the monumental Principia Mathematica (1687), Newton disguised his calculus-based calculations with equivalent geometrical arguments, which were considered more rigorous and beyond reproach at the time. He used his methods to solve the problem of planetary motion, determine the shape of a rotating fluid's surface, explain the oblateness of the Earth, and analyze the motion of a weight on a cycloid. He also developed series expansions for functions, including those with fractional and irrational powers, demonstrating a clear understanding of the principles behind the Taylor series. He kept many of these discoveries to himself, as infinitesimal methods were still viewed with suspicion.
These scattered ideas were finally organized into a true calculus of infinitesimals by Gottfried Wilhelm Leibniz, who was promptly accused of plagiarism by Newton. History now views him as an independent inventor and a crucial contributor. Leibniz's great contribution was providing a clear, systematic set of rules for manipulating infinitesimal quantities. This allowed for the computation of second and higher derivatives and gave us the product rule and chain rule in both their differential and integral forms. Unlike Newton, Leibniz obsessed over his choice of notation, and his system proved far more flexible and enduring.
Today, both Leibniz and Newton are credited with independently inventing and developing calculus. Newton was the first to apply calculus broadly to general physics, while Leibniz developed much of the notation that is still used today. The fundamental insights they both provided—the laws of differentiation and integration, the inverse relationship between the two, methods for higher derivatives, and the concept of approximating functions with polynomial series—formed the bedrock of modern analysis.
When their results were published, the infamous calculus controversy erupted over who deserved credit. Newton had developed his results first (which he would later publish in his Method of Fluxions), but Leibniz published his "Nova Methodus pro Maximis et Minimis" first. Newton accused Leibniz of stealing ideas from his unpublished notes, which had been circulated among a few members of the Royal Society. This bitter dispute created a chasm between English-speaking mathematicians and their continental European counterparts that lasted for years, much to the detriment of English mathematics. A sober analysis of their papers reveals they arrived at their results independently; Leibniz started with integration, Newton with differentiation. It was Leibniz, however, who gave the new field its name. Newton called his system "the science of fluxions," a term that lingered in English schools well into the 19th century. The first complete calculus treatise written in English using Leibniz's notation didn't appear until 1815.
Since Newton and Leibniz, legions of mathematicians have contributed to calculus. One of the earliest and most complete works on both infinitesimal and integral calculus was written in 1748 by Maria Gaetana Agnesi.
Foundations
In calculus, "foundations" refers to the rigorous development of the subject from a solid base of axioms and definitions. Early calculus was anything but rigorous. The use of infinitesimal quantities was criticized fiercely, most notably by Michel Rolle and Bishop Berkeley. Berkeley, in his 1734 book The Analyst, famously and derisively described infinitesimals as the "ghosts of departed quantities". Establishing a rigorous foundation for calculus became a central preoccupation for mathematicians for over a century after Newton and Leibniz.
Mathematicians like Maclaurin attempted to prove the soundness of using infinitesimals, but a truly satisfactory solution wouldn't arrive for another 150 years. It was the work of Cauchy and Weierstrass that finally provided a way to avoid these ghostly quantities. In Cauchy's Cours d'Analyse, we see a range of foundational approaches, including a definition of continuity using infinitesimals and a prototype of the (ε, δ)-definition of limit. Weierstrass later formalized the concept of the limit and banished infinitesimals from mainstream analysis (though his definition can be used to validate nilsquare infinitesimals). After Weierstrass, calculus was rebuilt on the foundation of limits, though the old name "infinitesimal calculus" still pops up. Bernhard Riemann used these ideas to provide a precise definition of the integral. This period also saw the ideas of calculus generalized to the complex plane through the development of complex analysis.
In modern mathematics, the foundations of calculus are part of real analysis, which provides full definitions and proofs for all the theorems of calculus. The scope of calculus has also expanded dramatically. Henri Lebesgue, building on work by Émile Borel, invented measure theory and used it to define integrals for all but the most bizarre and pathological functions. Laurent Schwartz introduced distributions, which allow one to take the derivative of literally any function.
Limits aren't the only way to build a rigorous calculus. In the 1960s, Abraham Robinson developed non-standard analysis. His approach uses machinery from mathematical logic to augment the real number system with actual infinitesimal and infinite numbers, just as Newton and Leibniz had imagined. These are called hyperreal numbers, and they can be used to develop the rules of calculus in a way that feels very much like Leibniz's original methods. There is also smooth infinitesimal analysis, which differs from non-standard analysis by mandating the neglect of higher-power infinitesimals during derivations. Based on the ideas of F. W. Lawvere and using the methods of category theory, this approach views all functions as inherently continuous, unable to be expressed in terms of discrete entities. In this formulation, the law of excluded middle doesn't hold. This law is also rejected in constructive mathematics, which insists that a proof of an object's existence must provide a method for its construction. Reformulating calculus within this framework is the subject of constructive analysis.
Significance
While many core ideas of calculus were foreshadowed in Greece, China, India, Iraq, Persia, and Japan, the modern use of calculus began in 17th-century Europe, when Newton and Leibniz synthesized the work of their predecessors into its fundamental principles. The polymath John von Neumann had this to say about their achievement:
The calculus was the first achievement of modern mathematics and it is difficult to overestimate its importance. I think it defines more unequivocally than anything else the inception of modern mathematics, and the system of mathematical analysis, which is its logical development, still constitutes the greatest technical advance in exact thinking.
Applications of differential calculus include computations of velocity and acceleration, finding the slope of a curve, and optimization. Applications of integral calculus include computations of area, volume, arc length, center of mass, work, and pressure. More advanced applications involve tools like power series and Fourier series.
Calculus is also the language used to gain a precise understanding of space, time, and motion. For centuries, mathematicians and philosophers tied themselves in knots over paradoxes involving division by zero or summing an infinite number of terms. These issues are central to the study of motion and area. The ancient Greek philosopher Zeno of Elea provided several famous examples of these paradoxes. Calculus provides the tools—specifically the limit and the infinite series—that finally and formally resolve them.
Principles
Limits and infinitesimals
Calculus is typically developed by working with quantities that are vanishingly small. Historically, the first way of doing this was with infinitesimals. These are treated like real numbers but are, in some sense, "infinitely small." An infinitesimal number could be greater than 0, but smaller than any number in the sequence 1, 1/2, 1/3, ... and thus smaller than any positive real number. From this perspective, calculus is a set of techniques for manipulating these infinitesimals. The symbols $dx$ and $dy$ were taken to be infinitesimals, and the derivative $dy/dx$ was simply their ratio.
This intuitive approach fell out of favor in the 19th century because making the notion of an infinitesimal mathematically precise was maddeningly difficult. Eventually, infinitesimals were kicked out of academia and replaced by the (ε, δ) approach to limits. Limits describe a function's behavior at a specific input by looking at its values at all nearby inputs. They capture small-scale behavior using the structure of the real number system (as a metric space with the least-upper-bound property). In this framework, calculus is a collection of techniques for manipulating certain limits. Infinitesimals are replaced by sequences of numbers getting smaller and smaller, and the function's "infinitely small" behavior is found by taking the limiting behavior of these sequences. For most of the 20th century, limits were considered the only rigorous foundation for calculus. However, the infinitesimal concept made a comeback in the 20th century with the development of non-standard analysis and smooth infinitesimal analysis, which provided solid logical foundations for their manipulation.
Differential calculus
Differential calculus is the study of the definition, properties, and applications of the derivative of a function. The process of finding the derivative is called differentiation. For a given function and a point in its domain, the derivative encodes the small-scale behavior of the function near that point. By finding the derivative at every point in the domain, one can produce a new function, the derivative function. Formally, the derivative is a linear operator that takes one function as its input and produces another as its output. This is a step up in abstraction from elementary algebra, where functions usually take a number and spit out another number. For example, if the doubling function gets the input 3, it outputs 6. If the squaring function gets the input 3, it outputs 9. The derivative, however, can take the entire squaring function as its input. It processes all the information of the squaring function—that 2 maps to 4, 3 to 9, 4 to 16, and so on—and uses it to produce a completely new function. The function that results from differentiating the squaring function happens to be the doubling function.
To be more explicit, let's call the "doubling function" $g(x) = 2x$ and the "squaring function" $f(x) = x^2$. The "derivative" takes the function $f(x)$, defined by the expression "$x^2$", and all its corresponding information, and outputs the function $g(x) = 2x$.
In Lagrange's notation, the symbol for a derivative is an apostrophe-like mark called a prime. The derivative of a function f is denoted $f'$, pronounced "f prime." So, if $f(x) = x^2$ is the squaring function, then $f'(x) = 2x$ is its derivative.
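To make the operator idea concrete, here is a minimal numerical sketch in Python. It stands in for the limit with a small finite step h, so it's an illustration of the concept, not a rigorous derivative; the names derivative, squaring, and doubling are our own inventions, not standard library functions.

```python
# A sketch of the derivative as an operator: it consumes one function
# and produces another. The finite step h only approximates the limit.

def derivative(f, h=1e-6):
    """Return a function approximating the derivative of f."""
    def f_prime(x):
        # Central difference quotient: (f(x + h) - f(x - h)) / (2h)
        return (f(x + h) - f(x - h)) / (2 * h)
    return f_prime

def squaring(x):
    return x ** 2

doubling = derivative(squaring)  # a brand-new function, as promised
print(doubling(3))               # ~6.0, matching g(3) = 2 * 3
```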
If the function's input is time, the derivative represents change with respect to time. If f is a function that takes a time as input and gives the position of a ball as output, then the derivative of f is how that position is changing in time—in other words, the ball's velocity.
If a function is linear (meaning its graph is a straight line), it can be written as y = mx + b, where x is the independent variable, y is the dependent variable, b is the y-intercept, and m is the slope:
$$m = \frac{\text{rise}}{\text{run}} = \frac{\text{change in } y}{\text{change in } x} = \frac{\Delta y}{\Delta x}.$$
This gives a precise, constant value for the slope. If the graph isn't a straight line, however, the ratio of the change in y to the change in x varies. The derivative gives an exact meaning to this notion of change at a single instant. Let f be a function, and fix a point a in its domain. The point (a, f(a)) is on the graph. If h is a number close to zero, then a + h is a point close to a. Therefore, (a + h, f(a + h)) is a point on the graph close to (a, f(a)). The slope between these two points is:
$$m = \frac{f(a+h) - f(a)}{(a+h) - a} = \frac{f(a+h) - f(a)}{h}.$$
This expression is called a difference quotient. A line through two points on a curve is a secant line, so m is the slope of the secant line between (a, f(a)) and (a + h, f(a + h)). This is only an approximation of the function's behavior at a. To find the exact behavior at a, we can't just set h to zero, because that would require dividing by zero. Instead, the derivative is defined by taking the limit as h approaches zero. It considers the behavior of f for all small values of h and extracts a consistent value for the case when h is precisely zero:
$$\lim_{h \to 0} \frac{f(a+h) - f(a)}{h}.$$
Geometrically, the derivative is the slope of the tangent line to the graph of f at point a. The tangent line is the limit of secant lines, just as the derivative is the limit of difference quotients. For this reason, the derivative is often called the slope of the function f.
As an example, here is the derivative of the squaring function at input 3. Let $f(x) = x^2$.
$$
\begin{aligned}
f'(3) &= \lim_{h \to 0} \frac{(3+h)^2 - 3^2}{h} \\
&= \lim_{h \to 0} \frac{9 + 6h + h^2 - 9}{h} \\
&= \lim_{h \to 0} \frac{6h + h^2}{h} \\
&= \lim_{h \to 0} (6 + h) \\
&= 6
\end{aligned}
$$
The slope of the tangent line to the squaring function at the point (3, 9) is 6. This means it's rising six times as fast as it's moving to the right at that exact point. This limit process can be performed for any point in the domain of the squaring function, which defines the derivative function. A similar calculation shows that the derivative of the squaring function is the doubling function.
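If you'd rather watch the limit happen than take it on faith, here is a tiny Python check (the sample values of h are arbitrary): the difference quotient at 3 creeps toward 6 as h shrinks.

```python
# Difference quotients (f(3 + h) - f(3)) / h for shrinking h.
f = lambda x: x ** 2
for h in [0.1, 0.01, 0.001, 0.0001]:
    print(h, (f(3 + h) - f(3)) / h)
# 6.1, 6.01, 6.001, 6.0001, ... approaching the limit 6
```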
Leibniz notation
A common notation for the derivative, introduced by Leibniz, is as follows:
$$
\begin{aligned}
y &= x^2 \\
\frac{dy}{dx} &= 2x.
\end{aligned}
$$
In the modern approach based on limits, the symbol dy/dx is not a quotient of two numbers but a shorthand for the limit computed above. Leibniz, however, fully intended it to be the ratio of two infinitesimally small numbers: dy, the infinitesimal change in y, caused by dx, the infinitesimal change in x. We can also think of d/dx as a differentiation operator, which takes a function as input and gives its derivative as output. For example:
$$\frac{d}{dx}(x^2) = 2x.$$
In this usage, the dx in the denominator is read as "with respect to x". Another example:
$$
\begin{aligned}
g(t) &= t^2 + 2t + 4 \\
\frac{d}{dt}g(t) &= 2t + 2
\end{aligned}
$$
Even when calculus is developed using limits, it's common to manipulate symbols like dx and dy as if they were real numbers. While it's possible to avoid these manipulations, they are notationally convenient for expressing operations like the total derivative.
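For what it's worth, this operator reading of d/dx is exactly how computer algebra systems behave. A small sketch using the SymPy library, assuming it's installed (pip install sympy):

```python
# d/dx as an operator on symbolic expressions, via SymPy.
import sympy

x, t = sympy.symbols('x t')
print(sympy.diff(x**2, x))             # 2*x
print(sympy.diff(t**2 + 2*t + 4, t))   # 2*t + 2
```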
Integral calculus
Integral calculus studies two related concepts: the indefinite integral and the definite integral. The process of finding an integral is called integration. The indefinite integral, also known as the antiderivative, is the inverse operation of the derivative. F is an indefinite integral of f if f is the derivative of F. The definite integral takes a function as input and outputs a number, representing the algebraic sum of the areas between the graph of the input and the x-axis. The technical definition of the definite integral involves the limit of a sum of areas of rectangles, known as a Riemann sum.
A motivating example is calculating the distance traveled in a given time. If the speed is constant, you just need multiplication:
$$\mathrm{Distance} = \mathrm{Speed} \cdot \mathrm{Time}$$
But if the speed changes, you need a more powerful method. One way is to approximate the distance by breaking the time into many short intervals. For each interval, you multiply the elapsed time by one of the speeds during that interval. Taking the sum of these products (a Riemann sum) gives an approximate distance. The idea is that over a very short time, the speed is more or less constant. This only gives an approximation. To get the exact distance, we must take the limit of all such Riemann sums as the intervals become infinitesimally small.
When velocity is constant, the distance traveled is just velocity times time. Graphing velocity versus time produces a rectangle whose area is the distance. This connection between area under a curve and distance traveled extends to any function representing a changing velocity. If f(x) represents a varying speed over time, the distance traveled between time a and time b is the area of the region between the graph of f(x) and the time axis, from x = a to x = b.
To approximate this area, one method is to divide the interval from a to b into equal segments of length Δx. For each segment, we pick a value of the function, say h. The area of the rectangle with base Δx and height h approximates the distance traveled in that small time interval. The sum of all such rectangles approximates the total area. Using a smaller Δx yields more rectangles and usually a better approximation. For the exact answer, we must take a limit as Δx approaches zero.
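Here is a minimal Python sketch of that procedure; the speed function and the interval are made-up illustrative choices, and each interval is sampled at its left endpoint (other sampling rules work too):

```python
# Left-endpoint Riemann sum: approximate distance as the sum of
# (speed during interval) * (length of interval).

def speed(t):
    return 3 * t ** 2  # the exact distance over [0, 2] happens to be 8

def riemann_distance(f, a, b, n):
    dt = (b - a) / n                # width of each short time interval
    return sum(f(a + i * dt) * dt   # speed * elapsed time, summed up
               for i in range(n))

for n in (10, 100, 1000):
    print(n, riemann_distance(speed, 0.0, 2.0, n))
# the approximations climb toward the exact value 8 as the intervals shrink
```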
The symbol for integration is $\int$, an elongated S that suggests summation. The definite integral is written as:
$$\int_a^b f(x) \, dx$$
This is read "the integral from a to b of f-of-x with respect to x." The Leibniz notation dx suggests dividing the area into an infinite number of rectangles whose width Δx becomes the infinitesimal dx.
The indefinite integral, or antiderivative, is written:
$$\int f(x) \, dx.$$
Functions that differ only by a constant have the same derivative. This means the antiderivative of a function is actually a family of functions, all differing by a constant. Since the derivative of y = x² + C (where C is any constant) is y′ = 2x, the antiderivative of 2x is:
$$\int 2x \, dx = x^2 + C.$$
This unspecified constant C is known as the constant of integration.
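Symbolic tools make the same move. In SymPy, for instance, integrate hands back one representative of the family and leaves the + C for you to remember (a small sketch, assuming SymPy is installed):

```python
# SymPy returns a single antiderivative; the constant of integration
# is implicit, so any x**2 + C is equally valid.
import sympy

x = sympy.Symbol('x')
print(sympy.integrate(2*x, x))  # x**2
```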
Fundamental theorem
The fundamental theorem of calculus states that differentiation and integration are inverse operations. It connects the values of antiderivatives to definite integrals. Since computing an antiderivative is usually easier than applying the definition of a definite integral, the theorem provides a practical way to compute definite integrals.
The theorem states: If a function f is continuous on the interval [a, b] and if F is a function whose derivative is f on the interval (a, b), then:
$$\int_a^b f(x) \, dx = F(b) - F(a).$$
Furthermore, for every x in the interval (a, b):
$$\frac{d}{dx} \int_a^x f(t) \, dt = f(x).$$
This profound realization, made by both Newton and Leibniz, was the key to the explosion of analytic results that followed their work. (The exact influence of their predecessors, especially what Leibniz might have learned from Isaac Barrow, is clouded by their priority dispute.) The fundamental theorem provides an algebraic method for computing many definite integrals without the tedious process of limits, simply by finding formulas for antiderivatives. It is also a prototype solution to a differential equation, which relates an unknown function to its derivatives and is fundamental to the sciences.
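You can watch the theorem earn its keep numerically. The sketch below (with an arbitrary choice of f, F, and interval) grinds out a Riemann sum and compares it against the one-line antiderivative shortcut:

```python
# Fundamental theorem, checked numerically for f(x) = 2x on [1, 4],
# with antiderivative F(x) = x**2. Exact value: F(4) - F(1) = 15.
f = lambda x: 2 * x
F = lambda x: x ** 2

a, b, n = 1.0, 4.0, 100_000
dx = (b - a) / n
riemann = sum(f(a + i * dx) * dx for i in range(n))

print(riemann)      # ~14.9999..., closing in on 15 as n grows
print(F(b) - F(a))  # 15.0, no limits required
```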
Applications
Calculus is used in every branch of the physical sciences, actuarial science, computer science, statistics, engineering, economics, business, medicine, demography, and any other field where a problem can be mathematically modeled and an optimal solution is sought. It’s the tool for moving between rates of change and total change. Often, you know one and need to find the other.
Calculus can be combined with other mathematical disciplines. With linear algebra, it can find the "best fit" linear approximation for a set of data points. In probability theory, it can determine the expectation value of a continuous random variable given its probability density function. In analytic geometry, calculus is used to find maxima and minima, slope, concavity, and inflection points. It's also the standard way to find approximate solutions to equations, through methods like Newton's method, fixed point iteration, and linear approximation. Spacecraft, for instance, use a variant of the Euler method to approximate curved trajectories in zero-gravity environments.
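As an illustration of one of those root-finding methods, here is a bare-bones Newton's method in Python, solving x² − 2 = 0 to approximate √2 (the function names and the starting guess are our own illustrative choices):

```python
# Newton's method: repeatedly slide down the tangent line to its x-intercept.

def newton(f, f_prime, x0, steps=10):
    x = x0
    for _ in range(steps):
        x -= f(x) / f_prime(x)  # tangent-line update
    return x

root = newton(lambda x: x**2 - 2, lambda x: 2*x, x0=1.0)
print(root)  # ~1.4142135623..., i.e. sqrt(2)
```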
Physics is particularly dependent on calculus. All concepts in classical mechanics and electromagnetism are interconnected through calculus. The mass of an object with known density, the moment of inertia of objects, and potential energies from gravitational and electromagnetic forces are all calculated using calculus. Newton's second law of motion is a statement about a derivative: the derivative of an object's momentum with respect to time equals the net force acting on it. Starting from an object's acceleration, calculus allows us to derive its entire path.
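To make "derive its entire path" less abstract, here is a crude Euler-style sketch: starting from a constant acceleration, it accumulates velocity, then position, one small time step at a time. The falling-object numbers are illustrative assumptions, not anything from the text:

```python
# From acceleration to path by brute accumulation (a rough Euler scheme).
g = -9.8           # constant acceleration (m/s^2)
v, y = 0.0, 100.0  # initial velocity (m/s) and height (m)
dt = 0.001         # time step (s)

for _ in range(int(2.0 / dt)):  # simulate 2 seconds of falling
    v += g * dt  # velocity accumulates acceleration
    y += v * dt  # position accumulates velocity

print(v, y)  # ~-19.6 m/s and ~80.4 m; exact answers: -19.6 and 80.4
```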
Maxwell's theory of electromagnetism and Einstein's theory of general relativity are expressed in the language of differential calculus. Chemistry uses calculus to determine reaction rates and model radioactive decay. In biology, population dynamics uses reproduction and death rates to model population changes.
Green's theorem, which relates a line integral around a closed curve to a double integral over the region it encloses, is applied in an instrument called a planimeter, used to calculate the area of a flat shape on a drawing. It can be used to find the area of an irregularly shaped flower bed or swimming pool.
In medicine, calculus can find the optimal branching angle of a blood vessel to maximize flow, because even your circulatory system is a tiny, desperate optimization problem. It can be applied to model how quickly a drug is eliminated from the body or how rapidly a cancerous tumor grows.
In economics, calculus determines maximal profit by providing a straightforward way to calculate both marginal cost and marginal revenue. It seems nothing escapes its reach.