Right, you want an article. On addition. The foundational pillar of quantitative reasoning, the first thing you learn after object permanence, and arguably, the last truly simple concept before everything gets… complicated. Let's get this over with. Don't expect balloons.
For other uses, see Addition (disambiguation). If you were looking for something else, congratulations on your brief escape.
"Add" redirects here. For other uses, see ADD (disambiguation). Try to focus.
3 + 2 = 5 with apples, a popular choice in textbooks, presumably because they're easier to draw than existential dread.
Addition, typically indicated by the plus sign (+), stands as one of the four foundational operations of arithmetic. The other three, in case you were wondering, are its moody inverse, subtraction, its repetitive cousin, multiplication, and the one that introduces sharing and fractions, division. At its most basic, the addition of two whole numbers yields the total quantity, or sum, of those values combined. The adjacent image, for instance, dutifully illustrates two columns of apples—one with three, the other with two—which, when considered together, amount to five apples. This thrilling observation is captured in the equation "3 + 2 = 5," which is read aloud as "three plus two equals five."
Beyond the mundane task of counting fruit or other concrete objects, addition can be defined and executed within the realm of pure abstraction, using concepts we call numbers. This includes the straightforward integers, the more fluid real numbers, and the frankly over-the-top complex numbers. Addition is a cornerstone of arithmetic, which itself is a branch of mathematics. In the higher echelons of math, like algebra, addition is applied to far more abstract things, such as vectors, matrices, and the elements of so-called additive groups, proving that no concept is too simple to be made bewilderingly complex.
Addition possesses several properties of note. It is commutative, which means the order of the numbers being added is irrelevant, so 3 + 2 is the same as 2 + 3. A rare moment of cosmic indifference that actually works in your favor. It is also associative, meaning that when adding more than two numbers, the grouping of the operations doesn't alter the outcome. The repeated addition of 1 is functionally identical to counting (see Successor function). Adding 0 to a number leaves it unchanged, a mathematical shrug. Furthermore, addition adheres to predictable rules when interacting with subtraction and multiplication.
Performing addition is one of the simplest numerical tasks imaginable. The addition of very small numbers is within the grasp of toddlers; the most elementary task, 1 + 1, can be processed by infants as young as five months old, and even some non-human animal species. In primary education, students are methodically taught to add numbers within the decimal system, starting with single digits and progressively wrestling with more difficult problems. The tools for this task have evolved from the ancient abacus to the modern computer, where research into the most efficient methods of implementing addition continues, because humanity never tires of finding faster ways to do the easy things.
Arithmetic operations

| Operation | Expression | Result |
|---|---|---|
| Addition (+) | term + term; summand + summand; addend + addend; augend + addend | sum |
| Subtraction (−) | term − term; minuend − subtrahend | difference |
| Multiplication (×) | factor × factor; multiplier × multiplicand | product |
| Division (÷) | dividend ÷ divisor; numerator ÷ denominator | fraction, quotient, ratio |
| Exponentiation | base^exponent; base^power | power |
| nth root (√) | ⁿ√radicand, where n is the degree | root |
| Logarithm (log) | log_base(anti-logarithm) | logarithm |
Notation and terminology
The plus sign
Addition is written using the plus sign "+" placed between the terms; the result is then presented with an equals sign. For instance,
1 + 2 = 3
is read as "one plus two equals three". There are, however, situations where addition is merely "understood" without a symbol. A whole number followed immediately by a fraction implies the sum of the two, forming what is called a mixed number. For example:
3½ = 3 + ½ = 3.5.
This notation is a masterclass in ambiguity, since in almost any other context, juxtaposition denotes multiplication. A perfect design for tripping up the unwary.
The terms, also called addends, of an addition operation
The numbers or objects being combined in an addition are collectively known as the terms, the addends, or the summands. This vocabulary extends to summations involving multiple terms. It's crucial to distinguish these from factors, which are components of multiplication.
Some historical texts refer to the first addend as the augend. During the Renaissance, many authors didn't even classify the first number as an "addend." Today, thanks to the commutative property of addition, which renders the order irrelevant, "augend" is rarely used. Both numbers are simply called addends. A relic from a time when people felt the need to assign roles in an operation as simple as piling things together.
This terminology is steeped in Latin. "Addition" and "add" are English words derived from the Latin verb addere, a compound of ad ("to") and dare ("to give"). This comes from the Proto-Indo-European root *deh₃- ("to give"). So, to add is "to give to." Applying the gerundive suffix -nd gives us "addend," meaning "a thing to be added." Similarly, from augere ("to increase"), we get "augend," or "a thing to be increased."
A redrawn illustration from The Art of Nombryng, one of the first English arithmetic texts, from the 15th century.
"Sum" and "summand" are derived from the Latin noun summa, meaning "the highest" or "the top." This traces back to the Medieval Latin phrase summa linea ("top line"), which referred to the total of a column of numbers, following the ancient Greek and Roman custom of writing the sum at the top of a column—as if they knew the most important part was getting it over with.
The terms addere and summare appear in the works of Boethius and possibly earlier Roman writers like Vitruvius and Frontinus. Boethius also used other terms for the operation. The later Middle English terms "adden" and "adding" were popularized by none other than Chaucer.
Definition and interpretations
Addition is one of the four basic operations of arithmetic, along with subtraction, multiplication, and division. The operation combines two or more terms into a single sum. An arbitrary number of additions is called a summation. An infinite summation is a more delicate affair known as a series, which can be expressed using capital-sigma notation (∑), a compact way to denote the iteration of addition over a given set of indices. For example:
∑ₖ₌₁⁵ k² = 1² + 2² + 3² + 4² + 5² = 55.
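For the compulsive, the sum-of-squares example above can be checked mechanically. A minimal Python sketch; the variable name is mine, not canon:

```python
# Sum of squares for k = 1 through 5, mirroring the sigma-notation example.
total = sum(k ** 2 for k in range(1, 6))  # range(1, 6) iterates k = 1..5
print(total)  # 55
```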
Addition is used to model countless physical processes. Even in the simple case of adding natural numbers, there are multiple interpretations and even more visual representations.
Combining sets
One set has three shapes while the other has two. The total number of shapes is five, a consequence of adding the objects of the two sets: 3 + 2 = 5.
Perhaps the most intuitive interpretation of addition is the combination of sets:
When two or more disjoint collections are combined into a single collection, the number of objects in the single collection is the sum of the numbers of objects in the original collections.
This interpretation is easy to visualize and leaves little room for ambiguity. It's also foundational in higher mathematics (see § Natural numbers for the rigorous version). However, its utility falters when one tries to extend it to fractional or negative numbers. An obvious design flaw.
One way around this is to consider collections of objects that are easily divisible, like pies or, even better, segmented rods. Instead of just combining collections of segments, rods can be joined end-to-end. This illustrates a different conception of addition: adding the lengths of the rods, not the rods themselves.
Extending a length
A number-line visualization of the algebraic addition 2 + 4 = 6. A "jump" of distance 2 followed by another of distance 4 is the same as a single translation by 6.
A number-line visualization of the unary addition 2 + 4 = 6. A translation by 4 is equivalent to four translations by 1.
A second interpretation of addition involves extending an initial length by a given length:
When an original length is extended by a given amount, the final length is the sum of the original length and the length of the extension.
The sum a + b can be seen as a binary operation that algebraically combines a and b. Alternatively, it can be viewed as adding b more units to a. In this latter view, the parts of the sum a + b play asymmetric roles: the operation is seen as applying the unary operation +b to a. Instead of calling both a and b addends, it is more fitting to call a the "augend" here, as it plays a passive role. One number just sits there while the other does all the work. A familiar dynamic. This unary perspective is also useful when discussing subtraction, as every unary addition has an inverse unary subtraction.
Properties
Commutativity
4 + 2 = 2 + 4 with blocks.
Addition is commutative, meaning you can swap the order of the terms in a sum and still arrive at the same, inevitable result. Symbolically, for any two numbers a and b:
a + b = b + a.
This fact is known as the "commutative law of addition" or the "commutative property of addition". Other binary operations like multiplication are also commutative, but many, like subtraction and division, are not.
Associativity
2 + (1 + 3) = (2 + 1) + 3 with segmented rods.
Addition is associative, meaning that when adding three or more numbers, the grouping of the operations is irrelevant to the final outcome. For any three numbers a, b, and c:
(a + b) + c = a + (b + c).
For example, (1 + 2) + 3 = 1 + (2 + 3). When adding a crowd of numbers, it doesn't matter which two you introduce first. They all end up in the same regrettable pile.
When addition is used with other operations, the order of operations becomes critical. In the standard hierarchy, addition has a lower priority than exponentiation, nth roots, multiplication, and division, but shares equal priority with subtraction.
Identity element
5 + 0 = 5 with bags of dots.
Adding zero to any number doesn't change the number. Zero is the identity element for addition, also known as the additive identity. In symbols, for any number a:
a + 0 = 0 + a = a.
This law was first formally identified in Brahmagupta's Brahmasphutasiddhanta in 628 AD. He wrote it as three separate laws, depending on whether a was negative, positive, or zero, using words instead of algebraic symbols. Later Indian mathematicians refined the idea. Around 830, Mahavira wrote, "zero becomes the same as what is added to it," corresponding to 0 + a = a. In the 12th century, Bhaskara wrote, "In the addition of cipher, or subtraction of it, the quantity, positive or negative, remains the same," which matches the statement a + 0 = a. It's the mathematical equivalent of doing nothing and expecting a result. And for once, it works.
Successor
Within the context of integers, adding one has a special function: for any integer a, the integer a + 1 is the smallest integer greater than a, known as the successor of a. For instance, 3 is the successor of 2. Because of this, the value of a + b can be viewed as the b-th successor of a, making addition an iterated succession. For example, 6 + 2 is 8, because 8 is the successor of 7, which is the successor of 6. This is the tedious, one-step-at-a-time version of getting somewhere.
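The iterated-succession view translates directly into code. A Python sketch, valid for non-negative b; the function names `successor` and `add` are illustrative, not standard:

```python
def successor(n: int) -> int:
    """The smallest integer greater than n."""
    return n + 1

def add(a: int, b: int) -> int:
    """a + b computed as the b-th successor of a (b must be non-negative)."""
    for _ in range(b):
        a = successor(a)
    return a

print(add(6, 2))  # 8: the successor of 7, which is the successor of 6
```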
Units
To numerically add physical quantities with units, they must be expressed in common units. For example, adding 50 milliliters to 150 milliliters yields 200 milliliters. Simple. However, if a measure of 5 feet is extended by 2 inches, the sum is 62 inches, since 60 inches is equivalent to 5 feet. You can't just add 5 and 2. It is generally meaningless to try to add 3 meters and 4 square meters, as their units are incomparable. This kind of consideration is fundamental to dimensional analysis, a concept that prevents the universe from collapsing into nonsense.
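The feet-and-inches example above, done the only legal way: convert to a common unit first. A trivial sketch; the constant name is mine:

```python
# Adding 5 feet and 2 inches: convert everything to inches first.
INCHES_PER_FOOT = 12

total_inches = 5 * INCHES_PER_FOOT + 2
print(total_inches)  # 62
```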
Performing addition
Innate ability
Studies in mathematical development since the 1980s have used the phenomenon of habituation: infants stare longer at unexpected outcomes. A key experiment by Karen Wynn in 1992, using Mickey Mouse dolls behind a screen, showed that five-month-old infants expect 1 + 1 to be 2. They exhibit surprise when a situation suggests 1 + 1 is 1 or 3. This finding has been corroborated by various labs using different methods. Another 1992 experiment with older toddlers (18 to 35 months) leveraged their motor control, allowing them to retrieve ping-pong balls from a box. The youngest could handle small numbers, while older subjects could compute sums up to 5.
Even some nonhuman animals demonstrate a limited capacity for addition, particularly primates. A 1995 experiment mimicking Wynn's 1992 study (using eggplants instead of dolls) found that rhesus macaque and cottontop tamarin monkeys performed similarly to human infants. More dramatically, after learning the meanings of the Arabic numerals 0 through 4, one chimpanzee could compute the sum of two numerals without further training. More recently, Asian elephants have also demonstrated an ability to perform basic arithmetic. Even infants and some primates can grasp the basics. Don't feel too special.
Addition by counting
Children typically master counting first. When faced with a problem of combining two items and three items, young children model the situation with physical objects—often fingers or a drawing—and then count the total. The clumsy, fleshy abacus we're all born with. As they gain experience, they discover the "counting-on" strategy: to find two plus three, a child counts three past two, saying "three, four, five," and arriving at five. This strategy is nearly universal; children pick it up from peers or teachers, or discover it independently. With more experience, they learn to exploit commutativity by counting up from the larger number—starting with three and counting "four, five."
Eventually, children begin to recall certain addition facts, or "number bonds," through either experience or rote memorization. Once some facts are memorized, they start deriving unknown facts from known ones. For example, a child asked to add six and seven might know that 6 + 6 = 12 and reason that 6 + 7 is one more, or 13. Such derived facts are found quickly, and most elementary students eventually rely on a mix of memorized and derived facts to add fluently.
Different nations introduce whole numbers and arithmetic at different ages, with many countries teaching addition in pre-school. However, across the globe, addition is taught by the end of the first year of elementary school.
Single-digit addition
An ability to add pairs of single digits (0 to 9) is a prerequisite for adding larger numbers in the decimal system. With 10 choices for each of two digits, there are 100 single-digit "addition facts," which can be organized in an addition table.
| + | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
| 1 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
| 2 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 |
| 3 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 |
| 4 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 |
| 5 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 |
| 6 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 |
| 7 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 |
| 8 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 |
| 9 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 |
Learning to compute these single-digit sums fluently is a major focus of early arithmetic education. While some students are forced into rote memorization of the entire table, pattern-based strategies are generally more enlightening and efficient. These are coping mechanisms for the tedious task of memorization:
- Commutative property: Using the pattern a + b = b + a reduces the number of facts to memorize from 100 to 55.
- One or two more: Adding 1 or 2 is a basic task, accomplished by counting on or, eventually, intuition.
- Zero: Since zero is the additive identity, adding zero is trivial. Still, some students are introduced to addition as a process that always increases the addends, so word problems may be needed to rationalize this "exception."
- Doubles: Adding a number to itself relates to counting by two and multiplication. Doubles facts are a backbone for many related facts and are relatively easy to grasp.
- Near-doubles: Sums like 6 + 7 = 13 can be quickly derived from the doubles fact 6 + 6 = 12 by adding one, or from 7 + 7 = 14 by subtracting one.
- Five and ten: Sums like 5 + x and 10 + x are usually memorized early and can be used to derive other facts. For example, 6 + 7 = 13 can be derived from 5 + 7 = 12 by adding one more.
- Making ten: An advanced strategy uses 10 as an intermediate for sums involving 8 or 9; for example, 8 + 6 = 8 + 2 + 4 = 10 + 4 = 14.
As students age, they commit more facts to memory and learn to derive others rapidly. Many never memorize all the facts but can still find any basic sum quickly.
Carry
The standard algorithm for adding multidigit numbers involves aligning the addends vertically and adding the columns, starting from the ones column on the right. If a column's sum exceeds nine, the extra digit is "carried" to the next column. For example, in the addition 59 + 27, the sum of the ones column is 9 + 7 = 16: the 6 is written in the ones place of the answer, and the 1 is the carry. This is the part where a column gets too crowded and spills a digit over to its neighbor. An alternate strategy starts adding from the most significant digit on the left; this makes carrying clumsier but is faster for getting a rough estimate.
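The column-by-column algorithm, carries and all, fits in a few lines of Python. A sketch; the helper name `add_with_carry` is invented for illustration:

```python
def add_with_carry(x: str, y: str) -> str:
    """Add two non-negative decimal numerals column by column, right to left."""
    width = max(len(x), len(y))
    x, y = x.zfill(width), y.zfill(width)  # pad the shorter numeral with zeros
    carry, digits = 0, []
    for dx, dy in zip(reversed(x), reversed(y)):
        carry, digit = divmod(int(dx) + int(dy) + carry, 10)
        digits.append(str(digit))
    if carry:  # a leftover carry becomes a new leading digit
        digits.append(str(carry))
    return "".join(reversed(digits))

print(add_with_carry("59", "27"))  # 86
```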
Decimal fractions
Decimal fractions are added via a simple modification of the above process. Align two decimal fractions so their decimal points are in the same location. If needed, add trailing zeros to the shorter decimal to match the length of the longer one. Finally, perform the same addition as above, placing the decimal point in the answer exactly where it was in the summands. A simple rule to keep chaos at bay. For a moment. As an example, 45.1 + 4.34 is solved as:
45.10
+ 04.34
-------
49.44
Scientific notation
In scientific notation, numbers are written as x = a × 10ᵇ, where a is the significand and 10ᵇ is the exponential part. To add numbers in this form, they must be expressed with the same exponent, so that their significands can simply be added. For numbers so offensively large or small they have to be dressed up in costumes.
For example:
2.34 × 10⁻⁵ + 5.67 × 10⁻⁶ = 2.34 × 10⁻⁵ + 0.567 × 10⁻⁵ = 2.907 × 10⁻⁵.
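Matching exponents before adding significands is mechanical enough to sketch. A toy function, assuming floats are precise enough for the job; the name and return convention are mine:

```python
def add_scientific(a: float, ea: int, b: float, eb: int) -> tuple[float, int]:
    """Add a*10**ea and b*10**eb by rescaling both to the larger exponent."""
    e = max(ea, eb)
    significand = a * 10.0 ** (ea - e) + b * 10.0 ** (eb - e)
    return significand, e

s, e = add_scientific(2.34, -5, 5.67, -6)
print(f"{s:.3f} x 10^{e}")  # 2.907 x 10^-5
```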
Non-decimal
Addition in other bases is similar to decimal addition. The same tedious process, but with fewer digits to work with. Consider addition in binary:
Adding two single-digit binary numbers is simple, using a form of carrying:

0 + 0 → 0
0 + 1 → 1
1 + 0 → 1
1 + 1 → 0, carry 1 (since 1 + 1 = 2 = 0 + 1 × 2¹)

Adding two "1"s produces a "0", while a 1 must be carried to the next column. This mirrors decimal addition: if the result equals or exceeds the radix (10), the digit to the left is incremented:

5 + 5 → 0, carry 1 (since 5 + 5 = 10 = 0 + 1 × 10¹)
7 + 9 → 6, carry 1 (since 7 + 9 = 16 = 6 + 1 × 10¹)
This is known as carrying. When a sum exceeds the value of a digit, the procedure is to "carry" the excess amount divided by the radix to the left, adding it to the next positional value. Carrying works the same way in binary:
1 1 1 1 1 (carried digits)
0 1 1 0 1
+ 1 0 1 1 1
-------------
1 0 0 1 0 0 = 36
In this example, two numerals are added: 01101₂ (13₁₀) and 10111₂ (23₁₀). The top row shows the carry bits. Starting in the rightmost column, 1 + 1 = 10₂. The 1 is carried left, and the 0 is written at the bottom. The second column from the right is 1 + 0 + 1 = 10₂ again; the 1 is carried, and 0 is written at the bottom. The third column: 1 + 1 + 1 = 11₂. This time, a 1 is carried, and a 1 is written in the bottom row. Proceeding this way gives the final answer 100100₂ (36₁₀).
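The worked example generalizes to any pair of bit strings. A Python sketch of binary column addition; the function name is invented:

```python
def add_binary(x: str, y: str) -> str:
    """Add two binary numerals column by column, carrying into the next bit."""
    width = max(len(x), len(y))
    x, y = x.zfill(width), y.zfill(width)  # pad to equal width with zeros
    carry, bits = 0, []
    for bx, by in zip(reversed(x), reversed(y)):
        carry, bit = divmod(int(bx) + int(by) + carry, 2)
        bits.append(str(bit))
    if carry:
        bits.append("1")
    return "".join(reversed(bits))

print(add_binary("01101", "10111"))  # 100100, i.e. 13 + 23 = 36
```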
Computers
Analog computers work directly with physical quantities, so their addition mechanisms depend on the addends' form. A mechanical adder might use sliding blocks and an averaging lever. If the addends are rotation speeds of shafts, they can be added with a differential. A hydraulic adder can sum the pressures in two chambers using Newton's second law. The most common setup for a general-purpose analog computer is adding two voltages (referenced to ground), roughly done with a resistor network, though an operational amplifier provides a better design. The steam-punk approach to calculation.
Addition is also fundamental to digital computers, where the efficiency of the carry mechanism is a major performance bottleneck.
The abacus, or counting frame, was a calculating tool used centuries before the modern numeral system and is still used by merchants and clerks in Asia, Africa, and elsewhere. It dates to at least 2700–2300 BC in Sumer.
Blaise Pascal invented the mechanical calculator in 1642; it was the first operational adding machine. Pascal's calculator was limited by a gravity-assisted carry mechanism, forcing its wheels to turn only one way. To subtract, the operator had to use the machine's complement, a process as lengthy as addition. Gottfried Leibniz built the stepped reckoner, another mechanical calculator, finished in 1694, and Giovanni Poleni improved the design in 1709 with a wooden calculating clock that could perform all four arithmetic operations. These early devices were not commercially successful but inspired later mechanical calculators in the 19th century.
"Full adder" logic circuit that adds two binary digits, A and B, with a carry input Cᵢₙ, producing a sum bit S and a carry output Cₒᵤₜ.
Adders execute integer addition in digital computers, typically using binary arithmetic. The simplest architecture is the ripple carry adder, which mimics the standard multi-digit algorithm. A slight improvement is the carry skip design, which follows human intuition: one doesn't perform all carries in 999 + 1 but bypasses the group of 9s to get the answer.
In practice, computational addition can be achieved via XOR and AND bitwise logical operations combined with bitshift operations. In modern digital computers, integer addition is usually the fastest arithmetic instruction, yet it has the largest impact on performance, as it underlies all floating-point operations and basic tasks like address generation during memory access. To increase speed, modern designs calculate digits in parallel, using schemes like carry select, carry lookahead, and the Ling pseudocarry. An entire field of engineering dedicated to making 1+1 happen infinitesimally faster.
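The XOR-and-AND recipe mentioned above really does add: XOR produces the carry-less sum, AND followed by a shift produces the carries, and the loop repeats until no carries remain. A sketch for non-negative Python integers:

```python
def add_bitwise(a: int, b: int) -> int:
    """Add two non-negative integers using only XOR, AND, and shifts."""
    while b:
        carry = (a & b) << 1  # bits where both operands are 1 generate carries
        a ^= b                # sum of the bits, ignoring carries
        b = carry             # feed the carries back in
    return a

print(add_bitwise(59, 27))  # 86
```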
Arithmetic on a computer can deviate from the mathematical ideal. If an addition result is too large for the computer to store, an arithmetic overflow occurs, leading to an error or an incorrect answer. Unanticipated overflow is a common cause of program errors. The Year 2000 problem was a series of bugs where overflow errors occurred from using a 2-digit format for years. A classic example of humanity's shortsightedness, immortalized in code.
Computers also use floating-point arithmetic, similar to scientific notation, which mitigates overflow. To add two floating-point numbers, their exponents must match. If the numbers' magnitudes are too different, a loss of precision can occur. This makes floating-point addition non-associative in general. The universe's rounding error, now in digital form.
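The non-associativity is easy to witness. With double-precision floats, adding 1.0 before or after a large cancellation changes the answer:

```python
# Floating-point addition is not associative: grouping changes the rounding.
a, b, c = 1e16, -1e16, 1.0
print((a + b) + c)  # 1.0  (the cancellation happens first, exactly)
print(a + (b + c))  # 0.0  (the 1.0 is absorbed: -1e16 + 1.0 rounds back to -1e16)
```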
Addition of numbers
To prove the properties of addition, one must first define it for the context in question. It's first defined for natural numbers. In set theory, it's then extended to larger sets: the integers, rational numbers, and real numbers. In mathematics education, positive fractions are added before negative numbers are even considered, mirroring the historical route.
Natural numbers
There are two primary ways to define the sum of two natural numbers a and b. If natural numbers are defined as the cardinalities of finite sets, their sum is defined as:
- Let N(S) be the cardinality of a set S. Take two disjoint sets A and B, with N(A) = a and N(B) = b. Then a + b is defined as N(A ∪ B).
Here, A ∪ B is the union of A and B.
The other popular definition is recursive:
- Let n⁺ be the successor of n (so 0⁺ = 1 and 1⁺ = 2). Define a + 0 = a. Define the general sum by a + b⁺ = (a + b)⁺. Hence 1 + 1 = 1 + 0⁺ = (1 + 0)⁺ = 1⁺ = 2.
These are two ways to formally state the obvious, because mathematicians have to justify their existence somehow. This recursive formulation was developed by Dedekind as early as 1854, who proved its associative and commutative properties through mathematical induction.
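The recursive definition can be transcribed almost verbatim, peeling off one successor at a time. A sketch for non-negative b; in true Peano arithmetic the successor is primitive rather than `+ 1`, so this is circular in spirit but faithful in structure:

```python
def add(a: int, b: int) -> int:
    """Recursive addition: a + 0 = a, and a + succ(b) = succ(a + b)."""
    if b == 0:
        return a
    return add(a, b - 1) + 1  # the successor of a + (b - 1)

print(add(1, 1))  # 2
```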
Integers
The simplest concept of an integer involves an absolute value and a sign. The integer zero is neither positive nor negative. The corresponding definition of addition proceeds by cases:
- For an integer n, let |n| be its absolute value. Let a and b be integers. If either is zero, it acts as an identity. If both are positive, define a + b = |a| + |b|. If both are negative, define a + b = −(|a| + |b|). If they have different signs, a + b is the difference between |a| and |b|, with the sign of the term with the larger absolute value.
For example, −6 + 4 = −2. While useful for concrete problems, the number of cases complicates proofs. A more elegant method defines integers as equivalence classes of ordered pairs of natural numbers under the equivalence relation (a, b) ∼ (c, d) if and only if a + d = b + c.
Addition of ordered pairs is component-wise: (a, b) + (c, d) = (a + c, b + d). This defines addition for integers and is equivalent to the case-based definition. From a clumsy list of rules to a slightly more abstract but cleaner method. Progress, I suppose.
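The ordered-pair construction can be sketched in Python, representing the integer a − b as the pair (a, b) of naturals. `normalize` picks a canonical representative of each equivalence class; both helper names are mine:

```python
def normalize(pair: tuple[int, int]) -> tuple[int, int]:
    """Canonical representative of the class of (a, b), which encodes a - b."""
    a, b = pair
    m = min(a, b)
    return (a - m, b - m)

def add_pairs(p: tuple[int, int], q: tuple[int, int]) -> tuple[int, int]:
    """Component-wise addition of ordered-pair integers."""
    return normalize((p[0] + q[0], p[1] + q[1]))

# -6 is represented by (0, 6) and 4 by (4, 0); their sum encodes -2.
print(add_pairs((0, 6), (4, 0)))  # (0, 2)
```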
Rational numbers (fractions)
Addition of rational numbers can be done using the least common denominator, but a conceptually simpler definition uses only integer operations:
a/b + c/d = (ad + bc)/(bd).
As an example, 3/4 + 1/8 = (3 × 8 + 4 × 1)/(4 × 8) = (24 + 4)/32 = 28/32 = 7/8. The reason your pizza-sharing word problems got complicated.
If the denominators are the same, one simply adds the numerators: a/c + b/c = (a + b)/c, so 1/4 + 2/4 = (1 + 2)/4 = 3/4.
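The cross-multiplication formula, sketched in Python and reduced to lowest terms; the standard library's `fractions.Fraction` does the same job with less ceremony. The function name `add_rationals` is mine:

```python
from fractions import Fraction
from math import gcd

def add_rationals(a: int, b: int, c: int, d: int) -> tuple[int, int]:
    """a/b + c/d = (ad + bc)/(bd), reduced to lowest terms."""
    num, den = a * d + b * c, b * d
    g = gcd(num, den)
    return num // g, den // g

print(add_rationals(3, 4, 1, 8))        # (7, 8)
print(Fraction(3, 4) + Fraction(1, 8))  # 7/8
```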
The commutativity and associativity of rational addition are easy consequences of integer arithmetic laws.
Real numbers
A common construction of real numbers is the Dedekind completion of the rationals. A real number is a Dedekind cut: a non-empty set of rationals that is closed downward and has no greatest element. The sum of real numbers a and b is defined element by element:
a + b = { q + r ∣ q ∈ a, r ∈ b }.
This definition was first published by Richard Dedekind in 1872.
Another approach is the metric completion of the rationals. A real number is the limit of a Cauchy sequence of rationals, lim aₙ. Addition is defined term by term:
lim aₙ + lim bₙ = lim (aₙ + bₙ).
This was first published by Georg Cantor, also in 1872. Once proven to be well-defined, all properties of real addition follow from the properties of rational numbers. Two terrifyingly abstract ways to define numbers that aren't nice and whole. Choose your poison.
Complex numbers
Complex numbers are added by adding their real and imaginary parts separately:
(a + bi) + (c + di) = (a + c) + (b + d)i.
Geometrically, in the complex plane, the sum of two complex numbers A and B is the fourth vertex X of the parallelogram whose other three vertices are the origin O, A, and B. Adding in two dimensions. Because one wasn't complicated enough.
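Python's built-in complex type implements exactly this componentwise rule, so the definition can be checked directly:

```python
# Componentwise addition of complex numbers: (a+bi) + (c+di) = (a+c) + (b+d)i
z = complex(3, 4) + complex(1, -2)
assert z == complex(4, 2)
assert z.real == 3 + 1 and z.imag == 4 + (-2)  # real and imaginary parts add separately
```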
Generalizations
Many binary operations can be seen as generalizations of addition. The field of algebra is centrally concerned with such operations.
Abelian group
In group theory, a group is an algebraic structure in which any two elements can be combined and every element can be inverted. When the order of combination doesn't matter, the operation is sometimes called addition. Such groups are called abelian or commutative, and the operator is often written as "+". A set where you can "add" things and the order doesn't matter. A pleasant, if rare, state of affairs.
Linear algebra
In linear algebra, a vector space allows for adding any two vectors and scaling vectors. A familiar example is the set of ordered pairs of real numbers. The sum of two vectors is obtained by adding their individual coordinates:
{\displaystyle (a,b)+(c,d)=(a+c,b+d).}
This operation is central to classical mechanics, where velocities, accelerations, and forces are vectors.
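A minimal sketch of coordinate-wise vector addition (the helper `vec_add` is our own naming, not a library API):

```python
# Coordinate-wise addition of ordered pairs (and vectors of any length)
def vec_add(u, v):
    assert len(u) == len(v)  # vectors must live in the same space
    return tuple(x + y for x, y in zip(u, v))

assert vec_add((1, 2), (3, 4)) == (4, 6)
# Commutativity carries over directly from the coordinates:
assert vec_add((1, 2), (3, 4)) == vec_add((3, 4), (1, 2))
```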
Matrix addition is defined for two matrices of the same dimensions. The sum of two m × n matrices A and B, denoted A + B, is another m × n matrix computed by adding corresponding elements:
{\displaystyle {\begin{aligned}\mathbf {A} +\mathbf {B} &={\begin{bmatrix}a_{11}&a_{12}&\cdots &a_{1n}\\a_{21}&a_{22}&\cdots &a_{2n}\\\vdots &\vdots &\ddots &\vdots \\a_{m1}&a_{m2}&\cdots &a_{mn}\end{bmatrix}}+{\begin{bmatrix}b_{11}&b_{12}&\cdots &b_{1n}\\b_{21}&b_{22}&\cdots &b_{2n}\\\vdots &\vdots &\ddots &\vdots \\b_{m1}&b_{m2}&\cdots &b_{mn}\end{bmatrix}}\\[8mu]&={\begin{bmatrix}a_{11}+b_{11}&a_{12}+b_{12}&\cdots &a_{1n}+b_{1n}\\a_{21}+b_{21}&a_{22}+b_{22}&\cdots &a_{2n}+b_{2n}\\\vdots &\vdots &\ddots &\vdots \\a_{m1}+b_{m1}&a_{m2}+b_{m2}&\cdots &a_{mn}+b_{mn}\end{bmatrix}}\end{aligned}}}
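Elementwise matrix addition is equally short in code. A sketch using lists of rows (the helper `mat_add` is illustrative, not a library function):

```python
# Elementwise sum of two matrices of the same dimensions,
# represented as lists of rows
def mat_add(A, B):
    if len(A) != len(B) or any(len(r) != len(s) for r, s in zip(A, B)):
        raise ValueError("matrices must have the same dimensions")
    return [[x + y for x, y in zip(r, s)] for r, s in zip(A, B)]

A = [[1, 2], [3, 4]]
B = [[10, 20], [30, 40]]
assert mat_add(A, B) == [[11, 22], [33, 44]]
```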
In modular arithmetic, the set of numbers is restricted to a finite subset of integers, and addition "wraps around" when it reaches the modulus. Addition on a leash. For example, the set of integers modulo 12 has twelve elements and an addition operation central to musical set theory.
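The wrap-around behavior is one line of Python (the helper name `add_mod12` is ours):

```python
# Addition modulo 12: results "wrap around" at the modulus,
# the arithmetic of clock faces and pitch classes
def add_mod12(a, b):
    return (a + b) % 12

assert add_mod12(9, 5) == 2   # 9 + 5 = 14, which wraps to 2
assert add_mod12(7, 5) == 0   # additive inverses exist: 7 + 5 ≡ 0 (mod 12)
```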
The general theory of abstract algebra allows an "addition" operation to be any associative and commutative operation on a set. Basic algebraic structures with such an operation include commutative monoids and abelian groups.
Set theory and category theory
A far-reaching generalization is the addition of ordinal numbers and cardinal numbers in set theory. These give two different generalizations of natural number addition to the transfinite. Ordinal addition is not commutative. Cardinal addition, however, is a commutative operation related to the disjoint union operation.
In category theory, disjoint union is a case of the coproduct operation, and general coproducts are perhaps the most abstract generalization of addition. Addition generalized to the point of near-unrecognizability. This is what happens when you let mathematicians get bored.
Related operations
Arithmetic
Subtraction can be seen as adding an additive inverse. Subtraction is an inverse to addition, as adding x and subtracting x are inverse functions.
Multiplication can be thought of as repeated addition. If a term x appears in a sum n times, the sum is the product of n and x.
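Both relationships are trivial to check numerically (the sample values are arbitrary):

```python
x, n = 7, 4
assert (10 + x) - x == 10     # subtracting x undoes adding x
assert sum([x] * n) == n * x  # adding x to itself n times gives n*x
```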
In real and complex numbers, addition and multiplication can be interchanged by the exponential function: {\displaystyle e^{a+b}=e^{a}e^{b}.} This identity allows multiplication to be performed by consulting a table of logarithms and adding by hand, or by using a slide rule.
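The slide-rule trick in miniature, with arbitrary sample values:

```python
import math

# Multiply by adding logarithms: the principle behind log tables and slide rules
a, b = 7.0, 13.0
product = math.exp(math.log(a) + math.log(b))
assert abs(product - a * b) < 1e-9
```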
Division is remotely related to addition. Since {\displaystyle a/b=ab^{-1}}, division is right distributive over addition: {\displaystyle (a+b)/c=a/c+b/c}. However, it is not left distributive.
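A quick numerical check of both claims, with values chosen so the floating-point results are exact (in general, rounding can perturb the right-hand identity):

```python
a, b, c = 6.0, 10.0, 2.0
assert (a + b) / c == a / c + b / c   # right distributive
assert c / (a + b) != c / a + c / b   # but not left distributive
```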
Ordering
The maximum operation {\displaystyle \max(a,b)} is a binary operation similar to addition. If two nonnegative numbers a and b have different orders of magnitude, their sum is approximately their maximum. When one number is vastly more important than the other, the smaller one might as well not exist. A harsh but useful approximation. This is useful in applications like truncating Taylor series, but causes difficulty in numerical analysis since "max" is not invertible. A calculation of {\displaystyle (a+b)-b} can accumulate an unacceptable round-off error. See also Loss of significance.
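Double-precision floating point makes this loss tangible (the magnitudes are chosen so the small term falls below the resolution of a double):

```python
# When magnitudes differ enough, a floating-point sum collapses to the maximum
big, small = 1e17, 1.0
assert big + small == big              # 'small' is below the resolution of a double
assert (big + small) - big == 0.0      # the round-off error: 'small' is unrecoverable
assert big + small == max(big, small)  # sum ≈ max, as the text describes
```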
Maximization is commutative and associative, like addition. Furthermore, addition distributes over "max": {\displaystyle a+\max(b,c)=\max(a+b,a+c).} For these reasons, in tropical geometry, multiplication is replaced with addition and addition with maximization.
Tying these together, tropical addition is approximately related to regular addition through the logarithm: {\displaystyle \log(a+b)\approx \max(\log a,\log b),} which becomes more accurate as the logarithm's base increases. The approximation can be made exact by introducing a constant h, analogous to the Planck constant, and taking the "classical limit" as h approaches zero:
{\displaystyle \max(a,b)=\lim _{h\to 0}h\log(e^{a/h}+e^{b/h}).}
In this sense, the maximum operation is a dequantized version of addition.
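The limit above can be watched numerically. A sketch (the helper name `soft_max` is ours; the max is factored out first, a standard trick to keep the exponentials from overflowing):

```python
import math

# h * log(e^(a/h) + e^(b/h)) approaches max(a, b) as h -> 0
def soft_max(a, b, h):
    m = max(a, b)  # factor out the max for numerical stability
    return m + h * math.log(math.exp((a - m) / h) + math.exp((b - m) / h))

assert abs(soft_max(2.0, 5.0, 1.0) - 5.0) < 0.1    # crude at large h
assert abs(soft_max(2.0, 5.0, 0.01) - 5.0) < 1e-6  # nearly exact as h shrinks
```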
In probability theory
Convolution is used to find the distribution of the sum of two independent random variables, each defined by a distribution function. Its definition combines integration, subtraction, and multiplication. Adding random variables, which is as chaotic as it sounds.
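For discrete distributions the integral becomes a sum, and the idea fits in a few lines. A sketch with two fair dice (the helper `convolve` is our own naming):

```python
from collections import Counter

# Discrete convolution: the distribution of a sum of two independent variables
def convolve(p, q):
    out = Counter()
    for x, px in p.items():
        for y, qy in q.items():
            out[x + y] += px * qy
    return dict(out)

die = {k: 1 / 6 for k in range(1, 7)}  # one fair six-sided die
two_dice = convolve(die, die)          # distribution of the sum of two dice
assert abs(two_dice[7] - 6 / 36) < 1e-12  # 7 is the most likely total
assert abs(sum(two_dice.values()) - 1.0) < 1e-12
```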
See also
- Lunar arithmetic, a version of arithmetic with addition and multiplication replaced by digit-by-digit max and min
- Mental arithmetic, methods for performing addition without mechanical or written aid
- Minkowski sum, an addition operation on geometric shapes
- Parallel addition (mathematics), the reciprocal value of a sum of reciprocal values
- Prefix sum, computational problem of finding running totals
- Pythagorean addition, combining two side lengths of a right triangle to produce the length of the hypotenuse
- Verbal arithmetic (also known as cryptarithms), puzzles involving addition
- Velocity-addition formula for adding relativistic velocities