QUICK FACTS
Created Jan 0001
Status Verified Sarcastic
Type Existential Dread
renormalization, regularization, physics, infinities, quantum field theory, statistical field theory, self-similar

Renormalization


Contents
  • 1. Renormalization and Regularization: Taming the Infinite Beast
  • 2. Self-interactions in classical physics
  • 3. Divergences in quantum electrodynamics
  • 4. A loop divergence
  • 5. Renormalized and bare quantities
  • 6. Renormalization in QED
  • 7. Running couplings
  • 8. Regularization
  • 9. Attitudes and interpretation
  • 10. Renormalizability
  • 11. Renormalization schemes
  • 12. In statistical physics

Ah, the delightful realm where numbers, much like inconvenient truths, refuse to behave. You’re asking about renormalization and regularization in physics, the elegant (or perhaps, desperately pragmatic) methods devised to wrestle with the infinities that stubbornly emerge from our most sophisticated theories. These are methods born of necessity, a testament to the fact that even the most brilliant minds occasionally design equations that scream “nonsense” until coaxed into submission.

Renormalization and Regularization: Taming the Infinite Beast

Renormalization isn’t just a single technique; it’s a suite of conceptual and computational strategies employed across various sophisticated theoretical frameworks, including quantum field theory (QFT), statistical field theory, and even the study of self-similar geometric structures. Its primary, and rather dramatic, purpose is to manage the appearance of infinities that inevitably arise when one attempts to calculate certain physical quantities within these theories. These calculations, often involving the intricate dance of particles and fields, frequently lead to divergent integrals—results that spiral off into the infinite, rendering the theory, on the surface, utterly useless.

The core idea behind renormalization is to adjust the fundamental parameters of a theory—such as a particle’s mass or charge—to compensate for the effects of its own self-interactions. It’s like trying to weigh a cat that keeps jumping on the scale; you have to account for its own chaotic influence to get a meaningful number. Even if, by some miracle, no explicit infinities manifested in the complex loop diagrams of quantum field theory, it would still be fundamentally necessary to renormalize the mass and fields initially postulated in the theory’s foundational Lagrangian. This isn’t merely an arbitrary mathematical fix; it reflects a deeper physical reality.

Consider, for instance, a theoretical electron. Our initial model might assign it a specific “bare” mass and charge. However, in the bustling quantum realm, this solitary electron is never truly alone. It’s perpetually enveloped by a dynamic, ephemeral cloud of virtual particles—photons flitting in and out of existence, electron-positron pairs spontaneously forming and annihilating, and other exotic entities. These virtual particles constantly interact with the “bare” electron, influencing its properties. When we meticulously account for these incessant interactions—for example, by summing up all possible collisions at various energies—we find that the composite “electron-system” behaves as if it possesses a mass and charge different from our initial, naive postulate. Renormalization, in this context, provides the mathematical framework to replace these hypothetical “bare” quantities with the experimentally observed, physical mass and charge. It’s a sophisticated sleight of hand that ensures our theoretical constructs align with the cold, hard facts of observation. Intriguingly, both positrons and more massive particles like protons are experimentally observed to exhibit precisely the same charge (in magnitude) as the electron, even though they exist within much stronger interaction environments and are surrounded by far more energetic and dense clouds of virtual particles. This consistency across disparate particles underscores the robustness and necessity of the renormalization procedure.

Furthermore, renormalization establishes critical relationships between the parameters within a theory, particularly when those parameters describe phenomena at vastly different distance scales. Physics at macroscopic, large distance scales often appears to be governed by different effective parameters than those governing interactions at microscopic, small distance scales. The “pileup” of contributions from an infinite spectrum of scales involved in a physical problem can, without proper handling, lead to the aforementioned infinities. When we attempt to describe spacetime as a continuous entity, certain fundamental statistical and quantum mechanical constructs simply aren’t well-defined. To render them unambiguous and truly defined, a rigorous continuum limit must be carefully applied, effectively removing the “construction scaffolding” of theoretical lattices that might be employed at various scales. Renormalization procedures are grounded in the fundamental requirement that observable physical quantities—such as the mass and charge of an electron—must precisely match their empirically measured (experimental) values. This anchoring to experiment is what gives the renormalized parameters their meaning; it also marks the places where quantum field theory takes measured values as inputs rather than deriving them from first principles.

The concept of renormalization first gained prominence and was systematically developed within the framework of quantum electrodynamics (QED). Its initial purpose was to bring mathematical coherence to the otherwise infinite integrals encountered in perturbation theory. At its inception, even some of its pioneering proponents regarded renormalization with a degree of suspicion, viewing it as a provisional, perhaps even illegitimate, mathematical workaround. However, over time, this “suspect” procedure evolved to be recognized as an indispensable, fundamental, and self-consistent mechanism for understanding the physics of scale across a multitude of fields in both physics and mathematics. Despite his later, well-documented skepticism concerning the philosophical implications of renormalization, it was in fact Paul Dirac who initially laid some of the groundwork for this approach, pioneering its earliest forms.

Today, the prevailing perspective has undergone a significant shift. Influenced profoundly by the groundbreaking insights of the renormalization group, particularly from the work of Nikolay Bogolyubov and Kenneth Wilson, the focus has moved beyond merely canceling infinities. The contemporary view emphasizes the systematic variation of physical quantities across contiguous scales, recognizing that distant scales are intrinsically related through “effective” theoretical descriptions. This perspective posits that all scales are interconnected in a broadly systematic manner, and the specific physics relevant to each scale is extracted using appropriate computational techniques tailored to that particular regime. Wilson’s seminal contributions were particularly instrumental in clarifying which variables within a complex system are truly crucial for describing its behavior at a given scale, and which can be considered redundant or less influential.

It is crucial to distinguish renormalization from regularization. While often employed together, regularization is a distinct technique specifically designed to control infinities by introducing an artificial cutoff or modifying the theory in a way that makes the problematic integrals finite. This often involves the implicit (or explicit) assumption that new, unknown physics exists at these newly introduced scales. Renormalization, on the other hand, is the subsequent process of absorbing these cutoff-dependent terms into the redefinition of physical parameters, ultimately yielding finite, cutoff-independent results. One might say regularization puts the infinities into a cage, and renormalization then teaches them manners.

Self-interactions in classical physics

One might be tempted to think that infinities are a uniquely quantum problem, a bizarre artifact of the subatomic realm. However, the thorny issue of infinities first reared its head much earlier, within the seemingly more straightforward domain of classical electrodynamics, particularly when attempting to describe point particles during the late 19th and early 20th centuries. It seems even classical physics harbored its own internal contradictions when pushed to its conceptual limits.

Consider the mass of a charged particle. Intuitively, this mass should encompass not only its intrinsic mechanical mass but also the mass–energy contained within its own electrostatic field—what came to be known as electromagnetic mass. If one models a charged particle as a perfectly spherical shell of charge with a radius $r_e$, the mass–energy stored in its surrounding electric field can be calculated. The field energy density is given by $\tfrac{1}{2}E^{2}$, where $E$ is the electric field (in Heaviside–Lorentz units with $c=1$). For a point charge $q$, the electric field at a distance $r$ is $E = q/(4\pi r^2)$. Integrating this energy density over all space outside the sphere, from its radius $r_e$ to infinity, yields the electromagnetic mass:

$m_{\text{em}}=\int {\frac {1}{2}}E^{2}\,dV=\int _{r_{\text{e}}}^{\infty }{\frac {1}{2}}\left({\frac {q}{4\pi r^{2}}}\right)^{2}4\pi r^{2}\,dr={\frac {q^{2}}{8\pi r_{\text{e}}}}.$

This elegant formula, however, presents a rather inconvenient problem: as the radius $r_e$ of the particle shrinks towards zero—the ideal of a point particle—the electromagnetic mass $m_{\text{em}}$ explodes to infinity. This implies that a true point particle would possess infinite inertia, rendering it utterly impossible to accelerate. A particle that cannot be accelerated is, by any reasonable definition, a rather uninteresting particle. As a side note, the specific value of $r_e$ that would make $m_{\text{em}}$ equal (up to a factor of order one) to the observed electron mass is known as the classical electron radius. Setting $q=e$ (the elementary charge) and reintroducing the fundamental constants $c$ (speed of light) and $\varepsilon_0$ (vacuum permittivity), this radius is calculated as:

$r_{\text{e}}={\frac {e^{2}}{4\pi \varepsilon _{0}m_{\text{e}}c^{2}}}=\alpha {\frac {\hbar }{m_{\text{e}}c}}\approx 2.8\times 10^{-15}~{\text{m}}.$

Here, $\alpha \approx 1/137$ represents the dimensionless fine-structure constant, a fundamental measure of the strength of the electromagnetic interaction, and $\hbar / (m_{\text{e}}c)$ is the reduced Compton wavelength of the electron, a characteristic quantum mechanical length scale.
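Since everything here is fixed by measured constants, the claim is easy to check numerically. Below is a minimal sketch (plain Python, CODATA-style SI values assumed for the constants) that reproduces $r_e \approx 2.8\times 10^{-15}$ m both ways and exhibits the $1/r$ blow-up of the electromagnetic mass as the shell shrinks:

```python
# Numerical check of the classical electron radius and of the divergence of
# the electromagnetic mass as the shell radius shrinks to zero.
import math

e = 1.602176634e-19        # elementary charge (C)
eps0 = 8.8541878128e-12    # vacuum permittivity (F/m)
m_e = 9.1093837015e-31     # electron mass (kg)
c = 299792458.0            # speed of light (m/s)
hbar = 1.054571817e-34     # reduced Planck constant (J s)

# Classical electron radius: r_e = e^2 / (4 pi eps0 m_e c^2)
r_e = e**2 / (4 * math.pi * eps0 * m_e * c**2)
print(f"r_e = {r_e:.3e} m")    # ~2.818e-15 m

# Same radius via the fine-structure constant times the reduced Compton wavelength
alpha = e**2 / (4 * math.pi * eps0 * hbar * c)
print(f"alpha * hbar/(m_e c) = {alpha * hbar / (m_e * c):.3e} m")

# Electromagnetic mass of a shell, m_em = e^2 / (8 pi eps0 r c^2): diverges as r -> 0
for r in (r_e, r_e / 10, r_e / 100):
    m_em = e**2 / (8 * math.pi * eps0 * r * c**2)
    print(f"r = {r:.2e} m  ->  m_em / m_e = {m_em / m_e:.1f}")
```

At $r = r_e$ the shell’s field energy already accounts for half the electron’s mass; shrink the radius tenfold and the “electromagnetic mass” obligingly grows tenfold, straight toward infinity.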

The concept of renormalization then emerges, even in this classical context, as a proposed solution to this dilemma. The total effective mass of a spherical charged particle, as observed, must include not only this electromagnetic mass but also the actual “bare” mass of the spherical shell itself. If one allows this bare mass to be a negative quantity—a rather unsettling prospect for classical physics, but a mathematical necessity—it might then be possible to take a consistent point limit without encountering infinities. This audacious idea was the genesis of what they called renormalization, and pioneering physicists like Hendrik Lorentz and Max Abraham famously attempted to construct a consistent classical theory of the electron using this very approach. This early, somewhat desperate, work served as the intellectual wellspring for the later, more sophisticated attempts at regularization and renormalization that would become indispensable in quantum field theory. (For those interested in alternative strategies to banish these classical infinities by postulating new physics at minuscule scales, one might consult the entry on regularization (physics).)

Beyond the problem of infinite mass, calculating the electromagnetic interactions of charged particles also runs into trouble when one attempts to ignore the particle’s own field acting upon itself—a phenomenon known as back-reaction. It’s akin to ignoring the back-EMF in circuit analysis; you simply can’t get a complete picture without it. This back-reaction is absolutely essential for explaining phenomena such as the frictional force experienced by charged particles when they emit radiation. However, if the electron is idealized as a point particle, the value of this back-reaction inevitably diverges, precisely for the same reason that the mass calculation diverges: the electric field strength follows an inverse-square law, leading to infinite energy concentration at zero radius.

The classical Abraham–Lorentz theory, an early attempt to incorporate this back-reaction, notoriously predicted a noncausal “pre-acceleration.” This bizarre effect suggested that an electron could begin to move before a force was even applied, a clear violation of causality that would make any reasonable physicist question their life choices. These fundamental problems persisted even in the relativistic formulations of the Abraham–Lorentz equation. Such inconsistencies were a stark indication that the point limit in classical electrodynamics was either fundamentally flawed, or that a more profound, quantum mechanical treatment was unequivocally required.

In a curious twist, the issue of infinities proved to be even more problematic in classical field theory than in its quantum counterpart. This is because, in quantum field theory, a charged particle experiences a peculiar jittering motion known as Zitterbewegung. This rapid, oscillatory motion, arising from interference effects with virtual particle–antiparticle pairs, effectively “smears out” the particle’s charge over a small region of spacetime, roughly comparable to its Compton wavelength. As a result, in quantum electrodynamics (QED) with small coupling constants, the electromagnetic mass doesn’t diverge as catastrophically as $1/r_e$, but rather as the logarithm of the particle’s radius, a much more manageable (though still infinite) divergence. It seems quantum mechanics, in its own peculiar way, offered a partial, if still incomplete, reprieve from the classical nightmare of point particles.

Divergences in quantum electrodynamics

When the brilliant minds of Max Born, Werner Heisenberg, Pascual Jordan, and Paul Dirac were busy constructing the edifice of quantum electrodynamics (QED) in the 1930s, they stumbled upon a rather unpleasant surprise. As they delved into the intricacies of perturbation theory to calculate corrections to their initial models, they found that many of the integrals they encountered were, to put it mildly, divergent. They yielded infinite answers, making the theory seem utterly nonsensical. This was “The problem of infinities,” a cosmic joke played on physicists.

A clearer, more systematic description of these divergences within perturbation theory emerged between 1947 and 1949, thanks to the pioneering efforts of Hans Kramers, Hans Bethe, Julian Schwinger, Richard Feynman, and Shin’ichiro Tomonaga. Their individual insights were later masterfully systematized by Freeman Dyson in 1949, who provided a coherent framework for understanding these troublesome phenomena. The divergences manifest themselves primarily in radiative corrections, which are represented graphically by Feynman diagrams containing closed loops of virtual particles. These loops, depicting particles popping into and out of existence, are where the mathematical nightmares reside.

While these virtual particles dutifully respect the fundamental laws of conservation of energy and momentum, they are, in a sense, rebels. They are allowed to possess any energy and momentum, even values that are explicitly forbidden by the relativistic energy–momentum relation for a real, “on-shell” particle with its observed mass. Such a particle, existing only ephemerally within a loop, is termed “off-shell.” When a Feynman diagram includes a closed loop, the precise momentum carried by the particles within that loop is not uniquely determined by the energies and momenta of the incoming and outgoing “real” particles. A shift in the energy of one particle traversing the loop can be perfectly counterbalanced by an equal and opposite shift in the energy of another particle in the same loop, all without affecting the observable external particles. This freedom implies a vast, indeed infinite, number of possible energy and momentum configurations for the virtual particles within the loop. To calculate the amplitude for such a loop process, one is therefore compelled to integrate over all these possible combinations of energy and momentum that could theoretically circulate within the loop.

These integrals, more often than not, turn out to be divergent, yielding those pesky infinite answers that make physicists sigh. The particular type of divergence that causes the most significant headache is the “ultraviolet” (UV) divergence. An ultraviolet divergence is characterized as one that originates from:

  • The high-energy, high-momentum regime: Specifically, the region within the integral where all virtual particles in the loop possess exceedingly large energies and momenta. This represents probing extremely short distances and brief time intervals.
  • Short wavelengths and high frequencies: In the path integral formulation of a field, these divergences correspond to very short wavelengths and intensely high-frequency fluctuations of the fields themselves.
  • Minimal proper-time intervals: If one conceptualizes the loop as a summation over various particle paths, these divergences arise from paths where the time elapsed between particle emission and absorption approaches zero.

Thus, these ultraviolet divergences are fundamentally phenomena of short distances and short times, probing the very fabric of spacetime at its most granular level.

In the case of quantum electrodynamics, a relatively simple theory in terms of its particle content, there are precisely three distinct one-loop divergent Feynman diagrams, as depicted in the accompanying illustrations:

  • Vacuum Polarization (a.k.a. charge screening): A photon momentarily transforms into a virtual electron–positron pair, which then almost immediately annihilates, reverting to a photon. This process effectively “screens” the bare charge of a particle, and the corresponding diagram exhibits a logarithmic ultraviolet divergence.
  • Electron Self-Energy: An electron rapidly emits a virtual photon and then reabsorbs it. This self-interaction modifies the electron’s effective mass, and the diagram is known as a self-energy diagram.
  • Vertex Renormalization (a.k.a. “penguin” diagram): An electron emits a photon, then emits a second photon, and subsequently reabsorbs the first photon. This process, which modifies the strength of the electron-photon interaction, is shown in more detail in Figure 2 below and is indeed referred to as a vertex renormalization. Amusingly, the Feynman diagram for this particular interaction is sometimes colloquially termed a “penguin diagram” due to its visual resemblance to the aquatic bird.

These three fundamental divergences in QED correspond directly to the three adjustable parameters inherent in the theory:

  • The field normalization Z: This accounts for the rescaling of the quantum fields themselves.
  • The mass of the electron: The observed mass is a renormalized quantity, distinct from the bare mass.
  • The charge of the electron: Similarly, the experimentally measured charge is a renormalized value.

Beyond these ultraviolet divergences, there exists a second, distinct class of divergence known as an infrared divergence. These arise specifically due to the presence of massless particles, such as the ever-present photon. The theory predicts that any process involving charged particles will inevitably emit an infinite number of coherent photons with infinitely long wavelengths (i.e., zero energy). Consequently, the calculated amplitude for emitting any finite number of photons is, paradoxically, zero. For photons, these infrared divergences are now exceptionally well understood and are typically handled differently from their ultraviolet counterparts.

For instance, at the one-loop order, the vertex function in QED displays both ultraviolet and infrared divergences. Unlike the ultraviolet divergence, the infrared divergence does not necessitate the renormalization of a fundamental parameter within the theory. Instead, the infrared divergence of the vertex diagram is neatly canceled by incorporating an additional diagram into the calculation. This additional diagram is structurally similar to the vertex diagram, but with a crucial difference: the virtual photon connecting the two electron “legs” is conceptually “cut” and replaced by two on-shell (meaning, real, observable) photons whose wavelengths tend towards infinity. This new diagram effectively represents the bremsstrahlung process, where a charged particle emits real photons when it is accelerated. The inclusion of this additional diagram is physically mandated because there is no practical, observable way to distinguish between a zero-energy virtual photon circulating within a loop (as in the vertex diagram) and real, zero-energy photons emitted via bremsstrahlung.

From a purely mathematical perspective, these infrared divergences can sometimes be regularized by employing techniques like fractional differentiation with respect to a parameter. For example, the expression $\left(p^{2}-a^{2}\right)^{\frac {1}{2}}$ is perfectly well-defined at $p=a$ but leads to a UV divergence when integrated; if one were to take the $3/2$-th fractional derivative with respect to $-a^2$, the result would be the IR divergent term ${\frac {1}{p^{2}-a^{2}}}$. This implies that one can, rather cleverly, cure infrared divergences by, in essence, transforming them into ultraviolet divergences, which can then be handled by standard renormalization techniques. A rather circuitous route, but often effective.

A loop divergence

Let’s dissect a specific example of a loop divergence, just to appreciate the kind of mathematical monster we’re dealing with. Figure 2 illustrates one of the many one-loop contributions to the process of electron–electron scattering within quantum electrodynamics. Imagine an electron on the left, represented by a solid line, entering the interaction with an initial 4-momentum $p^\mu$ and exiting with a final 4-momentum $r^\mu$. In the simplest interaction, it would emit a virtual photon carrying $r^\mu - p^\mu$ to transfer energy and momentum to another electron.

However, in the diagram shown, things get a bit more convoluted. Before this primary interaction can fully unfold, our electron spontaneously emits another virtual photon with 4-momentum $q^\mu$. It then, just as quickly, reabsorbs this same virtual photon after having emitted the first one. The crucial point here is that the laws of energy and momentum conservation do not uniquely determine the 4-momentum $q^\mu$ of this internal, ephemeral virtual photon. Since all possible values for $q^\mu$ are equally permissible within the confines of quantum uncertainty, we are forced to integrate over every single one of them to obtain the total amplitude for this process.

The amplitude corresponding to this particular diagram, among other factors, includes a term arising from the loop integral:

$-ie^{3}\int {\frac {d^{4}q}{(2\pi )^{4}}}\gamma ^{\mu }{\frac {i(\gamma ^{\alpha }(r-q)_{\alpha }+m)}{(r-q)^{2}-m^{2}+i\epsilon }}\gamma ^{\rho }{\frac {i(\gamma ^{\beta }(p-q)_{\beta }+m)}{(p-q)^{2}-m^{2}+i\epsilon }}\gamma ^{\nu }{\frac {-ig_{\mu \nu }}{q^{2}+i\epsilon }}.$

In this rather intimidating expression, the various $\gamma^\mu$ factors are the gamma matrices, which are essential components in the covariant formulation of the Dirac equation and account for the intrinsic spin of the electron. The constant $e$ represents the fundamental electric coupling constant, a measure of the strength of the electromagnetic interaction. The infinitesimal $i\epsilon$ term serves as a mathematical device, providing a heuristic definition for the contour of integration around the poles that inevitably appear in the complex space of momenta. For our current purpose of understanding divergences, the most critical aspect is the dependence on $q^\mu$ within the three large factors in the integrand. These factors originate from the propagators of the two internal electron lines and the virtual photon line that form the closed loop.

When one examines this integral at very large values of $q^\mu$, a specific part of the integrand dominates. This dominant piece scales with two powers of $q^\mu$ in the numerator, leading to the following simplified form (as detailed in Pokorski 1987, p. 122):

$e^{3}\gamma ^{\mu }\gamma ^{\alpha }\gamma ^{\rho }\gamma ^{\beta }\gamma _{\mu }\int {\frac {d^{4}q}{(2\pi )^{4}}}{\frac {q_{\alpha }q_{\beta }}{(r-q)^{2}(p-q)^{2}q^{2}}}.$

This integral, as it stands, is undeniably divergent and yields an infinite result. Unless, of course, we arbitrarily impose a cutoff at some finite energy and momentum to prevent it from running away to infinity. This is precisely the kind of problem that necessitated the development of regularization and renormalization. It’s not an isolated incident; similar loop divergences are a common and persistent feature in almost all other quantum field theories that attempt to describe fundamental interactions. The universe, it seems, has a penchant for mathematical extremes.
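To see the divergence without any spinor algebra, one can strip the integral down to its radial skeleton: at large $q$ the integrand falls off like $1/q^{4}$, while the four-dimensional measure contributes roughly $q^{3}\,dq$, leaving $\int dq/q$. The toy sketch below (plain Python; the unit infrared lower limit and the logarithmic grid are arbitrary choices, not part of the actual calculation) shows the cutoff dependence tracking $\ln \Lambda$:

```python
import math

# Radial skeleton of the loop: integrand ~ 1/q^4 times measure ~ q^3 dq,
# i.e. \int_1^Lambda dq/q, which grows like ln(Lambda) as the cutoff is removed.
def loop_radial(cutoff, steps=100000):
    # crude midpoint rule on a logarithmic grid for \int_1^cutoff dq/q
    total, q = 0.0, 1.0
    ratio = cutoff ** (1.0 / steps)
    for _ in range(steps):
        q_next = q * ratio
        total += (1.0 / (0.5 * (q + q_next))) * (q_next - q)
        q = q_next
    return total

for lam in (1e2, 1e4, 1e6):
    print(f"Lambda = {lam:.0e}: integral = {loop_radial(lam):7.3f}, "
          f"ln(Lambda) = {math.log(lam):7.3f}")
```

Doubling the number of decades in the cutoff doubles the answer: a textbook logarithmic ultraviolet divergence, and exactly the behavior the vacuum-polarization and vertex diagrams above exhibit.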

Renormalized and bare quantities

The fundamental breakthrough in dealing with these rampant infinities was the profound realization that the quantities initially appearing in the theory’s foundational equations (such as those in the Lagrangian), which purported to represent physical constants like the electron’s electric charge and mass, or the normalizations of the quantum fields themselves, did not actually correspond to the physical values measured in a laboratory. They were, in essence, “bare” quantities—unadorned, idealized values that utterly failed to account for the intricate, omnipresent contributions of virtual-particle loop effects to the physical constants themselves.

These loop effects, among other things, would naturally include the quantum mechanical analogue of the electromagnetic back-reaction that had so tormented classical theorists of electromagnetism. Crucially, these quantum corrections would, in general, be just as divergent as the problematic amplitudes we were trying to calculate in the first place. Therefore, to obtain finite, measurable physical quantities, one was forced to concede that the underlying “bare” quantities must, by definition, be divergent themselves. A rather uncomfortable truth, but a truth nonetheless.

To align the theoretical framework with observable reality, the equations had to be meticulously rewritten in terms of these measurable, “renormalized” quantities. For example, the charge of the electron—a quantity we can precisely measure—is defined not by its bare value, but by its value measured at a specific kinematic “renormalization point” or “subtraction point.” This point typically corresponds to a characteristic energy, often referred to as the renormalization scale or simply the energy scale. The residual parts of the Lagrangian, those involving the remaining, divergent portions of the bare quantities, could then be ingeniously reinterpreted as “counterterms.” These counterterms are specifically designed to generate divergent diagrams that exactly cancel out the troublesome divergences arising from other diagrams, thus restoring mathematical sanity to the calculations. It’s a bit like having a perfectly tailored antidote for every poison your theory produces.

Renormalization in QED

Let’s illustrate this with the Lagrangian of QED, the fundamental equation that describes the interactions of electrons and photons:

${\mathcal {L}}={\bar {\psi }}_{B}\left[i\gamma _{\mu }\left(\partial ^{\mu }+ie_{B}A_{B}^{\mu }\right)-m_{B}\right]\psi _{B}-{\frac {1}{4}}F_{B\,\mu \nu }F_{B}^{\mu \nu }$

Here, the fields ($\psi_B$ for the electron, $A_B^\mu$ for the photon) and the coupling constant ($e_B$ for the charge) are explicitly designated as “bare” quantities, hence the subscript B. This is the starting point, the raw, unphysical theory. Conventionally, these bare quantities are expressed as multiples of their renormalized, physical counterparts, introducing a set of renormalization constants ($Z$ factors):

$\left({\bar {\psi }}m\psi \right)_{B}=Z_{0}\,{\bar {\psi }}m\psi$

$\left({\bar {\psi }}\left(\partial ^{\mu }+ieA^{\mu }\right)\psi \right)_{B}=Z_{1}\,{\bar {\psi }}\left(\partial ^{\mu }+ieA^{\mu }\right)\psi$

$\left(F_{\mu \nu }F^{\mu \nu }\right)_{B}=Z_{3}\,F_{\mu \nu }F^{\mu \nu }.$

These $Z$ factors, which are generally infinite, absorb the divergences. A crucial consequence of gauge invariance, upheld by the Ward–Takahashi identity, implies that the two terms within the covariant derivative piece, ${\bar {\psi }}(\partial +ieA)\psi$, can be renormalized together (Pokorski 1987, p. 115). This means that the renormalization constant for the kinetic term of the electron field, $Z_2$, is actually identical to $Z_1$, which renormalizes the electron-photon interaction vertex. A small but elegant simplification in an otherwise complex process.

Now, let’s take a specific interaction term from this Lagrangian, such as the electron-photon interaction depicted in Figure 1. This term can be meticulously decomposed:

${\mathcal {L}}_{I}=-e{\bar {\psi }}\gamma _{\mu }A^{\mu }\psi -(Z_{1}-1)e{\bar {\psi }}\gamma _{\mu }A^{\mu }\psi$

In this expression, $e$ represents the physical, measurable charge of the electron. This constant is defined in terms of a specific, real-world experiment. We then choose our renormalization scale to be precisely the energy characteristic of this experiment. The first term in the equation then yields the interaction we actually observe in the laboratory, albeit with small, finite corrections arising from higher-order loop diagrams (which explain subtle phenomena like the higher-order corrections to the magnetic moment of the electron). The second term, $(Z_1-1)e{\bar {\psi }}\gamma _{\mu }A^{\mu }\psi$, is the counterterm. If the theory is indeed renormalizable (a concept we’ll get to shortly, and QED is a prime example), then the divergent parts of all loop diagrams can be systematically broken down into pieces with three or fewer external “legs.” These pieces possess an algebraic form that can be precisely canceled out by this second term, or by similar counterterms arising from $Z_0$ (mass renormalization) and $Z_3$ (photon field renormalization), thus elegantly banishing the mathematical infinities.

For instance, the diagram featuring the interaction vertex of the $Z_1$ counterterm, positioned as illustrated in Figure 3, performs the crucial function of canceling out the divergence that was so stubbornly present in the loop diagram shown in Figure 2. It’s a beautifully precise cancellation, turning mathematical chaos into empirical order.
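Schematically, the bookkeeping looks like the sketch below. Everything in it is a placeholder: the coefficient, the renormalization scale, and the “finite part” are toy numbers chosen for illustration, not the actual QED values. The structure, however, is the real one: the regulated loop produces a term proportional to $\ln(\Lambda/\mu)$, and $(Z_1 - 1)$ is chosen to subtract exactly that term, leaving a cutoff-independent remainder.

```python
import math

A = 0.0072973525693 / math.pi   # toy coefficient of order alpha/pi (assumed)
mu = 1.0                        # renormalization scale in GeV (hypothetical)

def loop(Lambda):
    # regulated loop: divergent log plus a finite part (0.42 is a stand-in)
    return A * math.log(Lambda / mu) + 0.42

def counterterm(Lambda):
    # the (Z_1 - 1) subtraction, tuned to remove the divergent log exactly
    return -A * math.log(Lambda / mu)

for Lambda in (1e3, 1e6, 1e12):
    total = loop(Lambda) + counterterm(Lambda)
    print(f"Lambda = {Lambda:.0e}: loop = {loop(Lambda):.4f}, sum = {total:.4f}")
```

Push the cutoff from $10^3$ to $10^{12}$ and the loop term grows without complaint, while the sum sits placidly at its finite value: the cancellation the text describes, in miniature.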

Historically, this practical separation of “bare terms” into their original components and the necessary counterterms predated the profound insights offered by the renormalization group, particularly those developed by Kenneth Wilson. From the perspective of renormalization group theory, as will be elaborated in the subsequent section, this splitting is actually considered unnatural and, perhaps more importantly, unphysical. This is because, according to renormalization group principles, all scales within a problem are interconnected and influence each other in a continuous, systematic fashion, making any sharp division between them seem artificial.

Running couplings

To minimize the computational burden of loop diagrams in any given calculation—and thereby simplify the extraction of meaningful results—one typically selects a renormalization point that is judiciously close to the characteristic energies and momenta exchanged in the interaction being studied. However, it’s crucial to understand that the renormalization point itself is not a physical quantity. The ultimate physical predictions derived from the theory, when calculated to all orders of perturbation theory, must, in principle, be entirely independent of this arbitrary choice of renormalization point, provided it remains within the valid domain of application for the theory. Changes in the chosen renormalization scale simply reallocate how much of a given result originates from the simpler, tree-level Feynman diagrams (those without loops) and how much arises from the remaining, finite parts of the more complex loop diagrams.

This fundamental independence from the renormalization scale can be cleverly exploited to calculate the effective variation of fundamental physical constants as the energy scale of an interaction changes. This fascinating variation is quantitatively described by what are known as beta-functions, and the overarching theoretical framework that describes this profound scale-dependence is precisely what we call the renormalization group.

In common parlance, particle physicists often speak of certain physical “constants” as if they literally “vary” with the energy of interaction. While a convenient shorthand, it’s more accurate to say that it’s the renormalization scale that acts as the independent variable. This “running” of couplings, however, provides an incredibly intuitive and powerful means of describing how the behavior of a field theory fundamentally changes under different interaction energies. For instance, in quantum chromodynamics (QCD), the strong coupling constant becomes remarkably small at very large energy scales. This implies that at high energies, the theory behaves much more like a collection of nearly free particles—a phenomenon famously known as asymptotic freedom. By selecting an increasing energy scale and diligently applying the renormalization group equations, this behavior becomes strikingly evident even from relatively simple Feynman diagrams. Without this conceptual framework, predicting asymptotic freedom would still be possible, but the result would emerge from an incredibly complex and arduous series of high-order cancellations, making the underlying physics far less transparent.
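As a concrete illustration, the standard one-loop running formula for the QCD coupling, $\alpha_s(Q)=\alpha_s(\mu)\big/\left(1+\alpha_s(\mu)\,\tfrac{b_0}{2\pi}\ln(Q/\mu)\right)$ with $b_0 = 11 - 2n_f/3$, can be tabulated in a few lines. The reference value $\alpha_s(M_Z)\approx 0.118$ and the fixed $n_f = 5$ below are the usual rough benchmarks, with flavor thresholds ignored for simplicity:

```python
import math

def alpha_s(Q, alpha_ref=0.118, mu=91.19, n_f=5):
    # one-loop QCD running coupling; b0 > 0 means the coupling shrinks with Q
    b0 = 11.0 - 2.0 * n_f / 3.0
    return alpha_ref / (1.0 + alpha_ref * b0 / (2.0 * math.pi) * math.log(Q / mu))

for Q in (10.0, 91.19, 1000.0, 1e4):
    print(f"Q = {Q:8.1f} GeV: alpha_s = {alpha_s(Q):.4f}")
```

The output falls from roughly 0.17 at 10 GeV to about 0.07 at 10 TeV: asymptotic freedom, extracted from a single logarithm instead of a tower of high-order cancellations.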

Consider a simple mathematical analogy to illustrate the concept of taming ill-defined expressions. Imagine an expression like:

$I=\int _{0}^{a}{\frac {1}{z}}\,dz-\int _{0}^{b}{\frac {1}{z}}\,dz=\ln a-\ln b-\ln 0+\ln 0$

This expression is manifestly ill-defined due to the presence of $\ln 0$, which is infinite. This is the kind of “infinity minus infinity” problem that plagues naive calculations. To make this expression tractable and remove the divergence, one can introduce a small, finite lower limit (a regulator) for the integrals, let’s call them $\varepsilon_a$ and $\varepsilon_b$:

$I=\ln a-\ln b-\ln {\varepsilon _{a}}+\ln {\varepsilon _{b}}=\ln {\tfrac {a}{b}}-\ln {\tfrac {\varepsilon _{a}}{\varepsilon _{b}}}$

Now, as long as we carefully ensure that the ratio $\varepsilon_b / \varepsilon_a$ approaches 1 in the limit, the problematic $\ln(\varepsilon)$ terms cancel, and we are left with a finite, well-defined result: $I = \ln (a/b)$. This simple mathematical maneuver captures the essence of how regularization and renormalization work together to extract finite, meaningful answers from potentially infinite expressions.
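The same toy can be checked numerically: each regulated piece blows up on its own as $\varepsilon \to 0$, while the difference sits motionless at $\ln(a/b)$. A minimal sketch (the values of $a$ and $b$ are arbitrary choices):

```python
import math

# Each bracket is separately divergent as eps -> 0, but with a common
# regulator (eps_a = eps_b = eps) the ln(eps) pieces cancel in the difference.
a, b = 2.0, 0.5
for eps in (1e-1, 1e-4, 1e-8):
    I_a = math.log(a) - math.log(eps)   # regulated \int_eps^a dz/z
    I_b = math.log(b) - math.log(eps)   # regulated \int_eps^b dz/z
    print(f"eps = {eps:.0e}: I_a = {I_a:7.3f}, I_b = {I_b:7.3f}, "
          f"I_a - I_b = {I_a - I_b:.6f}")
print(f"ln(a/b) = {math.log(a / b):.6f}")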

Regularization

Since the mathematical operation of $\infty - \infty$ is, by its very nature, utterly ill-defined, any attempt to precisely cancel divergences requires a preliminary step: the infinities must first be mathematically tamed. This crucial process is known as regularization (as detailed by Weinberg, 1995).

At its core, regularization involves introducing an essentially arbitrary, but mathematically precise, modification to the loop integrands. This modification, often called a “regulator,” is designed to make these integrands “drop off” more rapidly at extremely high energies and momenta. The consequence is that the integrals, which were previously divergent, now converge to a finite value. A regulator always introduces a characteristic energy scale, known as the “cutoff.” The original, divergent integrals are recovered by taking this cutoff to infinity (or, equivalently, by letting the corresponding length or time scale shrink to zero).

With a regulator in place and the cutoff set to a finite value, the previously divergent terms within the integrals are transformed into finite, but crucially, cutoff-dependent terms. It’s like putting a lid on the infinite pot. Once these cutoff-dependent terms are precisely canceled out by corresponding contributions from equally cutoff-dependent counterterms (which are part of the renormalization procedure), the cutoff can then be safely taken to infinity. At this point, finite, physically meaningful results are recovered. The philosophical underpinning here is that if the physics we observe and measure at accessible scales is truly independent of whatever bizarre phenomena might occur at the very shortest distance and time scales (which our theories can’t fully resolve), then our calculations should ultimately yield cutoff-independent results.

Numerous distinct types of regulators are employed in quantum field theory calculations, each possessing its own set of advantages and disadvantages. One of the most widely adopted and elegant methods in modern theoretical physics is dimensional regularization. This ingenious technique, conceived by the Nobel laureates Gerardus ’t Hooft and Martinus J. G. Veltman, tames the unruly integrals by analytically continuing them into a fictitious space with a fractional number of dimensions, typically $4-\epsilon$ dimensions, where $\epsilon$ is a small parameter. The divergences then manifest as poles in $\epsilon$, which can be systematically subtracted.
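To see the mechanics, take the textbook Euclidean one-loop integral $\int \frac{d^{d}q}{(2\pi)^{d}}\frac{1}{(q^{2}+\Delta)^{2}}=\frac{\Gamma(2-d/2)}{(4\pi)^{d/2}}\,\Delta^{d/2-2}$ and set $d = 4-\epsilon$. A short sympy sketch (assuming sympy’s ability to expand $\Gamma$ about its pole, which its series machinery supports) makes the $2/\epsilon$ pole explicit:

```python
import sympy as sp

eps = sp.symbols('epsilon', positive=True)
Delta = sp.symbols('Delta', positive=True)

d = 4 - eps
# Standard result for \int d^d q/(2 pi)^d 1/(q^2 + Delta)^2 in d dimensions
expr = sp.gamma(2 - d / 2) / (4 * sp.pi) ** (d / 2) * Delta ** (d / 2 - 2)

# Laurent expansion around eps = 0; expect
#   (2/eps - EulerGamma + log(4*pi) - log(Delta)) / (16*pi**2) + O(eps)
print(sp.series(expr, eps, 0, 1))
```

The divergence has not vanished; it has merely been relocated into a tidy $1/\epsilon$ pole, where a counterterm can dispose of it with a minimum of fuss.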

Another historical and conceptually important method is Pauli–Villars regularization. This approach introduces fictitious, massive particles into the theory. These “ghost” particles are assigned very large masses, such that their contributions to loop integrands precisely cancel out the existing divergences from the real particles at large momenta. It’s like fighting fire with fire, but with carefully controlled, imaginary fire.

Yet another powerful regularization scheme is lattice regularization, pioneered by Kenneth Wilson. This method posits that our continuous spacetime is, in fact, approximated by a discrete, hyper-cubical lattice with a fixed grid size. This grid size then acts as a natural cutoff for the maximal momentum that a particle can possess when propagating on this discrete lattice. After performing calculations on several lattices with varying grid sizes, the ultimate physical result is then extrapolated to a grid size of zero, effectively recovering our continuous, natural universe. This procedure, of course, fundamentally presupposes the existence of a well-behaved scaling limit.
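The extrapolation step itself is ordinary numerics. The sketch below is a deliberately humble stand-in for a lattice computation (a generic illustration, not an actual lattice field theory code): the “observable” is a finite-difference derivative whose discretization error is $O(a^{2})$, and a Richardson extrapolation in $a^{2}$ recovers the continuum answer:

```python
import math

def observable(a, x=1.0):
    # central difference for d/dx sin(x): error scales as O(a^2),
    # playing the role of the lattice-spacing artifact
    return (math.sin(x + a) - math.sin(x - a)) / (2 * a)

a1, a2 = 0.2, 0.1
v1, v2 = observable(a1), observable(a2)
# Richardson extrapolation assuming error ~ C * a^2
continuum = v2 + (v2 - v1) / ((a1 / a2) ** 2 - 1)

print(f"a = {a1}: {v1:.8f}")
print(f"a = {a2}: {v2:.8f}")
print(f"extrapolated a -> 0: {continuum:.8f}  (exact cos(1) = {math.cos(1.0):.8f})")
```

Two coarse “lattices” plus one extrapolation beat either lattice alone by several digits, which is the entire logic of taking the grid size to zero at the end.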

For a more mathematically rigorous approach to renormalization theory, one can turn to causal perturbation theory. In this elegant framework, ultraviolet divergences are avoided from the very outset of calculations. This is achieved by performing only well-defined mathematical operations within the sophisticated framework of distribution theory. In this approach, the problematic divergences are not “canceled” but rather replaced by a controlled ambiguity: a divergent diagram now corresponds to a finite term, but one with an undetermined coefficient. Other fundamental principles, such as gauge symmetry, must then be invoked to systematically reduce or entirely eliminate this ambiguity, ensuring a unique and physical result. It’s a testament to the ingenuity of physicists and mathematicians that so many paths lead to the same finite, physical results.

Attitudes and interpretation

The early architects of QED and other nascent quantum field theories were, almost without exception, profoundly uneasy with the state of affairs that renormalization presented. The notion of performing what amounted to “subtracting infinities from infinities” just to arrive at finite answers struck many as mathematically illicit, a desperate, ad hoc procedure rather than a genuine physical insight.

Freeman Dyson, one of the key figures in systematizing renormalization, argued that these infinities were of a fundamental nature, suggesting they could not be simply wished away by formal mathematical procedures like the renormalization method. He saw them as intrinsic features, not mere calculational artifacts.

Paul Dirac’s criticism was perhaps the most persistent and vocal. Even as late as 1975, long after renormalization had become a standard tool, he expressed his profound dissatisfaction:

“Most physicists are very satisfied with the situation. They say: ‘Quantum electrodynamics is a good theory and we do not have to worry about it any more.’ I must say that I am very dissatisfied with the situation because this so-called ‘good theory’ does involve neglecting infinities which appear in its equations, ignoring them in an arbitrary way. This is just not sensible mathematics. Sensible mathematics involves disregarding a quantity when it is small – not neglecting it just because it is infinitely great and you do not want it!”

One can almost hear the cosmic sigh in his words. His intellectual integrity simply couldn’t stomach the arbitrary dismissal of infinite quantities, a sentiment many purists still echo today.

Richard Feynman, despite his absolutely crucial role in the development of quantum electrodynamics and the creation of Feynman diagrams, also harbored deep reservations. In 1985, he famously penned:

“The shell game that we play to find n and j is technically called ‘renormalization’. But no matter how clever the word, it is still what I would call a dippy process! Having to resort to such hocus-pocus has prevented us from proving that the theory of quantum electrodynamics is mathematically self-consistent. It’s surprising that the theory still hasn’t been proved self-consistent one way or the other by now; I suspect that renormalization is not mathematically legitimate.”

“Dippy process.” A blunt assessment from a man who knew the machinery intimately. Feynman’s concern stemmed from a broader issue: all known field theories in the 1960s exhibited a property where interactions would become infinitely strong at sufficiently short distance scales. This phenomenon, known as a Landau pole, cast a long shadow, making it plausible that many quantum field theories were inherently inconsistent. However, a turning point arrived in 1973, when David Gross, Hugh David Politzer, and Frank Wilczek demonstrated that quantum chromodynamics (QCD), the theory of the strong nuclear force, did not suffer from a Landau pole. This discovery, asymptotic freedom, allowed Feynman and many others to accept QCD as a fully consistent theory, shifting the paradigm.

The general unease surrounding renormalization was almost universal in physics textbooks well into the 1970s and 1980s. However, starting in the 1970s, a significant shift in attitudes began to take hold, particularly among the younger generation of theorists. This change was largely inspired by groundbreaking work on the renormalization group and the concept of effective field theory. Despite the fact that titans like Dirac and others from the older guard never fully retracted their criticisms, the new insights began to reframe renormalization. Kenneth G. Wilson, whose contributions were pivotal, along with others, demonstrated that the renormalization group was not merely a trick for particle physics but an incredibly powerful tool in statistical mechanics, particularly when applied to condensed matter physics. Here, it offered profound insights into the complex behavior of phase transitions and critical phenomena.

In condensed matter physics, the philosophical discomfort with renormalization largely dissipates. A natural, physical short-distance regulator inherently exists: matter ceases to be continuous at the scale of atoms. Therefore, short-distance divergences in condensed matter physics do not pose a deep philosophical problem because the field theory itself is explicitly understood to be an effective, smoothed-out representation of the underlying atomic behavior. There are no true infinities because the cutoff is always finite and physically meaningful, and it makes perfect sense that the bare quantities are dependent on this finite cutoff.

This perspective has profound implications for particle physics. If quantum field theory itself is merely an effective description that holds true down to, but not beyond, the Planck length (the ultimate minimum length scale where quantum gravity effects are expected to dominate, perhaps leading to string theory, causal set theory, or something entirely different), then the problem of short-distance divergences in particle physics might not be a “real” problem at all. All field theories could simply be effective field theories, valid only up to a certain energy scale. In a sense, this approach echoes the older sentiment that divergences in quantum field theory highlight human ignorance about the deepest workings of nature, but it crucially adds that this ignorance can be quantified and that the resulting effective theories remain incredibly useful and predictive within their domain of applicability.

Despite this shift, Abdus Salam’s insightful remark from 1972 still resonates:

“Field-theoretic infinities – first encountered in Lorentz’s computation of electron self-mass – have persisted in classical electrodynamics for seventy and in quantum electrodynamics for some thirty-five years. These long years of frustration have left in the subject a curious affection for the infinities and a passionate belief that they are an inevitable part of nature; so much so that even the suggestion of a hope that they may, after all, be circumvented — and finite values for the renormalization constants computed – is considered irrational.”

He then drew a parallel to Bertrand Russell’s observation from his autobiography:

“In the modern world, if communities are unhappy, it is often because they have ignorances, habits, beliefs, and passions, which are dearer to them than happiness or even life. I find many men in our dangerous age who seem to be in love with misery and death, and who grow angry when hopes are suggested to them. They think hope is irrational and that, in sitting down to lazy despair, they are merely facing facts.”

A rather poignant indictment of human (and scientific) stubbornness, wouldn’t you say?

In contemporary quantum field theory, the value of a physical constant is generally dependent on the specific energy scale chosen as the renormalization point. This makes the study of how physical constants “run” under changes in the energy scale—the domain of the renormalization group—an incredibly fruitful and interesting area of research. The coupling constants in the Standard Model of particle physics exhibit fascinating variations with increasing energy scale: the strong coupling of quantum chromodynamics and the weak isospin coupling of the electroweak force both tend to decrease, while the weak hypercharge coupling of the electroweak force tends to increase. Intriguingly, at the colossal energy scale of $10^{15}$ GeV (an energy far beyond the capabilities of our current particle accelerators), these three seemingly disparate coupling constants all converge to approximately the same strength (Grotz and Klapdor 1990, p. 254). This remarkable convergence is a major driving force behind theoretical speculations about grand unified theories (GUTs), which propose that these forces are merely different manifestations of a single, more fundamental force at extremely high energies. Far from being a mere worrisome problem, renormalization has transformed into an indispensable theoretical tool for exploring the rich and complex behavior of field theories across different energy regimes.
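The near-convergence is easy to reproduce at one loop, where $\alpha_i^{-1}(Q) = \alpha_i^{-1}(M_Z) - \tfrac{b_i}{2\pi}\ln(Q/M_Z)$ with the standard one-loop Standard Model coefficients $(b_1, b_2, b_3) = (41/10,\,-19/6,\,-7)$ (GUT-normalized hypercharge). The starting values at $M_Z$ in the sketch below are rough benchmarks assumed for illustration:

```python
import math

MZ = 91.19                                    # GeV
inv_alpha_MZ = {1: 59.0, 2: 29.6, 3: 8.5}     # approximate alpha_i^{-1}(M_Z)
b = {1: 41.0 / 10.0, 2: -19.0 / 6.0, 3: -7.0} # one-loop SM beta coefficients

def inv_alpha(i, Q):
    # one-loop running of the inverse coupling
    return inv_alpha_MZ[i] - b[i] / (2 * math.pi) * math.log(Q / MZ)

for Q in (1e3, 1e9, 1e15):
    vals = [inv_alpha(i, Q) for i in (1, 2, 3)]
    print(f"Q = {Q:.0e} GeV: 1/alpha_1,2,3 = "
          + ", ".join(f"{v:.1f}" for v in vals))
```

At $M_Z$ the three inverse couplings span a factor of seven; by $10^{15}$ GeV they have drifted to within roughly ten percent of one another, which is precisely the near-unification that keeps GUT theorists in business.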

However, if a theory that relies on renormalization (such as QED) can only be sensibly interpreted as an effective field theory—that is, as a mere approximation reflecting the current limits of human understanding of nature’s deepest mechanisms—then the fundamental problem remains: to discover a more accurate, complete theory that does not suffer from these renormalization issues. As Lewis Ryder succinctly put it, “In the Quantum Theory, these classical divergences do not disappear; on the contrary, they appear to get worse. And despite the comparative success of renormalisation theory, the feeling remains that there ought to be a more satisfactory way of doing things.” It’s a sentiment that still haunts the fringes of theoretical physics, a lingering question mark over the triumphs of renormalization.

Renormalizability

From this profound philosophical re-evaluation of renormalization, a new and crucial concept naturally emerges: the notion of renormalizability. Not all theories are equally amenable to the renormalization process in the neat, systematic manner described above, where a finite number of counterterms can effectively cancel all divergences, ultimately yielding results that are independent of the arbitrary cutoff.

The issue arises when the Lagrangian of a theory contains combinations of field operators that possess a sufficiently high dimension when expressed in units of energy. In such cases, the number of counterterms required to cancel all possible divergences proliferates to an infinite degree. At first glance, such a theory would appear to gain an infinite number of free parameters, thereby losing all its predictive power and becoming, by any scientific standard, utterly worthless. These unfortunate theories are branded as nonrenormalizable.

The highly successful Standard Model of particle physics is, thankfully, composed exclusively of renormalizable operators. This is a crucial feature that allows it to be a predictive theory. However, if one attempts to construct a field theory of quantum gravity in the most straightforward manner—by treating the metric in the Einstein–Hilbert Lagrangian as a perturbation around the flat Minkowski metric—the interactions arising from general relativity inevitably manifest as nonrenormalizable operators. This strongly suggests that conventional perturbation theory is simply inadequate for a consistent quantum description of gravity, indicating a need for entirely new theoretical frameworks.

However, within the more encompassing framework of an effective field theory, the term “renormalizability” is, strictly speaking, a bit of a misnomer. In a nonrenormalizable effective field theory, terms in the Lagrangian do indeed proliferate without bound, but their coefficients are systematically suppressed by increasingly extreme inverse powers of the energy cutoff. If this cutoff is understood to be a real, physical quantity—meaning the theory is only an effective description of physics up to some maximum energy or minimum distance scale—then these additional terms are not mere mathematical artifacts. They could, in fact, represent genuine physical interactions that simply become negligible at lower energies.

Assuming that the dimensionless constants within the theory do not become excessively large, one can systematically group calculations by inverse powers of the cutoff. This allows for the extraction of approximate predictions to a finite order in the cutoff, which still retain a finite and manageable number of free parameters. It can even be useful to apply renormalization techniques to these “nonrenormalizable” interactions, effectively defining their strength at a particular energy scale.

A particularly elegant consequence of effective field theory is that nonrenormalizable interactions rapidly become weaker as the energy scale of interest becomes much smaller than the cutoff. The classic illustration of this principle is the Fermi theory of the weak nuclear force. This theory is a nonrenormalizable effective theory whose cutoff is approximately comparable to the mass of the W particle, the mediator of the weak force. This inherent suppression of nonrenormalizable interactions at low energies may provide a compelling explanation for why almost all the fundamental particle interactions we observe in our everyday world are describable by renormalizable theories. It could simply be that any other, more exotic interactions that might exist at the extreme GUT or Planck scale become far too weak to detect in the energy regimes we can currently probe. The singular exception, of course, is gravity itself, whose exceedingly weak interaction is dramatically magnified by the colossal masses of celestial bodies like stars and planets, making it profoundly observable despite its nonrenormalizable nature in a quantum field theory context.
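The Fermi example can be made quantitative in one line of matching: integrating out the W boson relates the contact-interaction strength to the full theory via $G_F/\sqrt{2} = g^{2}/(8M_W^{2})$, so the nonrenormalizable coupling is suppressed by the square of the cutoff-like scale $M_W$. Rough inputs for $g$ and $M_W$ (assumed values below, for illustration) land within about a percent of the measured Fermi constant:

```python
import math

g = 0.65      # SU(2) gauge coupling (approximate benchmark)
M_W = 80.4    # W boson mass in GeV (approximate)

# Tree-level matching of Fermi theory to the electroweak theory
G_F = math.sqrt(2) * g**2 / (8 * M_W**2)
print(f"G_F ~ {G_F:.3e} GeV^-2  (measured: ~1.166e-5 GeV^-2)")
```

At energies far below $M_W$ the interaction looks pointlike and feeble; probe near $M_W$ and the “cutoff” reveals itself as a perfectly respectable particle.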

Renormalization schemes

When performing actual calculations in quantum field theory, the precise manner in which counterterms are introduced and fixed to cancel the divergences arising from Feynman diagram calculations beyond the simplest “tree level” (diagrams without loops) is dictated by a set of specific renormalization conditions. These conditions define what is known as a “renormalization scheme.” The choice of scheme affects the intermediate steps and the definition of the running couplings, but physical observables must ultimately be independent of this choice.

Among the most common renormalization schemes in widespread use are:

  • Minimal subtraction (MS) scheme and its widely adopted variant, the modified minimal subtraction (MS-bar, or $\overline{\text{MS}}$) scheme. These schemes are designed to subtract only the divergent parts of the integrals, along with minimal finite pieces, making them computationally efficient and popular in perturbative QCD; the precise subtractions are sketched just after this list.
  • On-shell scheme. In this scheme, physical parameters like mass and charge are defined by their values at actual, physical on-shell scattering processes. For instance, the mass of a particle is defined as the pole in its propagator, and its charge is defined by the amplitude of a scattering process at zero momentum transfer. This provides a direct link to experimental measurements.
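To make “minimal” concrete: in dimensional regularization a one-loop divergence typically arrives packaged as (a textbook schematic, with $\Delta$ standing for whatever mass combination the particular diagram produces)

$I(\epsilon )={\frac {1}{16\pi ^{2}}}\left({\frac {2}{\epsilon }}-\gamma _{E}+\ln 4\pi -\ln \Delta \right)+O(\epsilon ).$

The MS scheme subtracts only the pole $2/\epsilon$, while $\overline{\text{MS}}$ subtracts the universal package $2/\epsilon -\gamma _{E}+\ln 4\pi$ that always accompanies it, leaving behind only the genuinely process-dependent pieces such as $-\ln \Delta$.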

Beyond these established methods, there exists a more “natural” definition of the renormalized coupling (often combined with the photon propagator) which emerges when considering the propagator of dual free bosons. This particular approach is noteworthy because it does not explicitly necessitate the introduction of counterterms in the traditional sense, offering an alternative conceptual pathway to managing divergences. A rather elegant circumvention, if one prefers not to explicitly acknowledge the mess.

In statistical physics

The profound physical significance and a more generalized understanding of the renormalization process, extending far beyond the conventional “dilatation group” of traditionally renormalizable theories, truly blossomed within the field of condensed matter physics. It was Leo P. Kadanoff’s seminal paper in 1966 that introduced the groundbreaking concept of the “block-spin” renormalization group. The ingenious idea behind “blocking” is to define the components of a theory at larger distance scales as effective aggregates, or “blocks,” of components defined at shorter, more fundamental distances. This provides a conceptual bridge between microscopic and macroscopic descriptions.

This block-spin approach provided the crucial conceptual foundation, which was then given full computational rigor and substance through the extensive and profoundly important contributions of Kenneth Wilson. Wilson’s ideas were so powerful that they enabled a constructive, iterative renormalization solution to a long-standing and notoriously difficult problem in condensed matter physics: the Kondo problem, which he definitively solved in 1974. This triumph followed his earlier, equally seminal developments of his new method in the theory of second-order phase transitions and critical phenomena in 1971. For these decisive contributions, which fundamentally reshaped our understanding of scale and complexity, Kenneth Wilson was rightfully awarded the Nobel Prize in Physics in 1982.

Principles

In more technical terms, let’s consider a physical theory described by a function, let’s call it $Z$. This function depends on a set of state variables, denoted as $\{s_i\}$, and a corresponding set of coupling constants, $\{J_k\}$. This function $Z$ could represent various fundamental quantities, such as a partition function in statistical mechanics, an action in field theory, or a Hamiltonian in quantum mechanics. Regardless of its specific form, it must encapsulate the complete physical description of the system under consideration.

Now, imagine we apply a “blocking transformation” to our state variables: $\{s_i\} \to \{\tilde{s}_i\}$. The crucial aspect of this transformation is that the number of new variables $\{\tilde{s}_i\}$ must be fewer than the original number of variables $\{s_i\}$. This effectively coarse-grains the system, moving to a larger scale where fine details are averaged out. If it is possible to rewrite the original function $Z$ entirely in terms of these new, coarser-grained variables $\{\tilde{s}_i\}$, and if this rewriting can be achieved solely by a corresponding transformation of the original parameters, $\{J_k\} \to \{\tilde{J}_k\}$, then the theory is said to be renormalizable in this broader, statistical-physics sense.
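A case where the blocking can be carried out exactly, and therefore a useful sanity check, is the zero-field one-dimensional Ising chain with reduced coupling $K = J/k_BT$: summing out every second spin returns the same chain at twice the lattice spacing with a new coupling $K' = \tfrac{1}{2}\ln\cosh 2K$, equivalently $\tanh K' = \tanh^2 K$. The sketch below simply iterates this map; the starting value is an arbitrary illustrative choice.

```python
import math

def decimate(K):
    """Exact RG map for the zero-field 1D Ising chain.

    Summing over every other spin in exp(K * sum_i s_i s_{i+1}) yields
    the same chain at twice the lattice spacing with coupling
    K' = 0.5 * ln(cosh(2K)), i.e. tanh(K') = tanh(K)**2.
    """
    return 0.5 * math.log(math.cosh(2.0 * K))

K = 1.5                      # an arbitrary microscopic coupling
for step in range(6):
    print(f"step {step}: K = {K:.6f}")
    K = decimate(K)
# Every finite K flows towards K = 0, the only finite fixed point of the map.
```

Every finite starting coupling flows to $K = 0$, which is the renormalization-group way of saying the one-dimensional chain has no finite-temperature phase transition.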

The macroscopic states of the system, when viewed at a sufficiently large scale , are then characterized by the “fixed points” of this renormalization group flow. These fixed points represent stable configurations where the system’s macroscopic properties no longer change with further coarse-graining.

Renormalization group fixed points

The most critical information contained within the intricate “flow” of the renormalization group is found in its fixed points. A fixed point is, by definition, a point where the associated beta function vanishes, so that the coupling constants of the theory cease to “run” with scale. Consequently, fixed points of the renormalization group are inherently scale invariant. In many cases of physical interest, this scale invariance is enhanced to a full conformal invariance, leading to the emergence of a conformal field theory at the fixed point. Such theories possess an even richer set of symmetries, governing how physical quantities transform under local scale transformations and special conformal transformations.
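To see what “the beta function vanishes” amounts to in practice, consider the textbook one-loop beta function of $\varphi^4$ theory in $d = 4 - \varepsilon$ dimensions, $\beta(g) = -\varepsilon g + 3g^2/(16\pi^2)$, whose nontrivial zero at $g_* = 16\pi^2\varepsilon/3$ is the Wilson-Fisher fixed point. The sketch below, with an arbitrarily chosen $\varepsilon$ and a plain bisection search, merely locates that zero numerically.

```python
import math

def beta(g, eps=0.1):
    """One-loop beta function of phi^4 theory in d = 4 - eps dimensions."""
    return -eps * g + 3.0 * g**2 / (16.0 * math.pi**2)

def find_fixed_point(eps=0.1, lo=1e-6, hi=100.0, tol=1e-12):
    """Locate the nontrivial zero of the beta function by bisection.

    beta is negative just above g = 0 and positive at large g, so the
    bracket [lo, hi] contains exactly one sign change at g*.
    """
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if beta(mid, eps) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

eps = 0.1
print(f"numerical fixed point     g* = {find_fixed_point(eps):.6f}")
print(f"analytic 16*pi^2*eps/3       = {16.0 * math.pi**2 * eps / 3.0:.6f}")
```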

The remarkable ability of several distinct theories to “flow” towards the same fixed point under the action of the renormalization group leads directly to the profound concept of universality . This means that systems with vastly different microscopic details can exhibit identical macroscopic behavior near a phase transition , as their underlying renormalization group flows converge to the same fixed point. This explains why, for example, the critical exponents describing phase transitions in very different materials can be identical.
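A cartoon of universality, under stated assumptions, using the same toy beta function: integrating the flow towards the infrared, $dg/dt = \varepsilon g - 3g^2/(16\pi^2)$ with $t$ the logarithm of the coarse-graining scale, from wildly different microscopic starting couplings. Every trajectory ends at the same fixed point, which is why the macroscopic critical behaviour has no memory of where it started. The crude Euler integration and the particular starting values are, of course, illustrative choices.

```python
import math

def ir_flow_step(g, dt=0.01, eps=0.1):
    """One Euler step of the flow towards the infrared:
    dg/dt = eps*g - 3*g**2/(16*pi**2), with t the log of the block size."""
    return g + dt * (eps * g - 3.0 * g**2 / (16.0 * math.pi**2))

g_star = 16.0 * math.pi**2 * 0.1 / 3.0
for g0 in (0.5, 2.0, 20.0):          # very different "microscopic" couplings
    g = g0
    for _ in range(20000):           # coarse-grain many times
        g = ir_flow_step(g)
    print(f"start g0 = {g0:5.1f}  ->  g = {g:.4f}   (g* = {g_star:.4f})")
```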

If these fixed points correspond to a free field theory —meaning a theory where particles do not interact—the theory is said to exhibit quantum triviality . This implies that, despite initial appearances, the interactions in such a theory effectively vanish at sufficiently large distance scales . The study of lattice Higgs theories , for instance, reveals numerous fixed points, but the precise nature of the quantum field theories associated with these remains a fascinating and open question, a challenge still waiting for a definitive answer. It seems even after all this work, the universe still likes to keep a few secrets.
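A rough numerical rendering of that triviality statement, using the standard one-loop relation between the bare and renormalized $\varphi^4$ couplings, $1/\lambda_R(\mu) = 1/\lambda_0 + (3/16\pi^2)\ln(\Lambda/\mu)$; the particular bare coupling and cutoff values below are illustrative. However large $\lambda_0$ is made at the cutoff $\Lambda$, the renormalized coupling at a fixed low scale is squeezed towards zero as $\Lambda \to \infty$.

```python
import math

def lambda_renormalized(lambda_0, cutoff, mu=1.0):
    """One-loop relation between bare and renormalized phi^4 couplings:
    1/lambda_R(mu) = 1/lambda_0 + (3/(16*pi**2)) * ln(cutoff/mu)."""
    return 1.0 / (1.0 / lambda_0 + 3.0 * math.log(cutoff / mu) / (16.0 * math.pi**2))

for cutoff in (1e2, 1e6, 1e12, 1e24):
    print(f"Lambda = {cutoff:.0e}:  lambda_R = {lambda_renormalized(100.0, cutoff):.4f}")
# No matter how large the bare coupling, lambda_R -> 0 as the cutoff is removed:
# the interaction is driven towards a free (trivial) theory.
```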



See also

• History of quantum field theory

• Quantum triviality

• Zeno’s paradoxes

• Nonoblique correction

References

• ^ See e.g., Weinberg vol I, chapter 10.

• ^ Sanyuk, Valerii I.; Sukhanov, Alexander D. (September 1, 2003). “Dirac in 20th century physics: a centenary assessment”. Physics-Uspekhi . 46 (9): 937–956. doi :10.1070/PU2003v046n09ABEH001165. ISSN  1063-7869.

• ^ • Kar, Arnab (2014). Renormalization from Classical to Quantum Physics (Thesis). University of Rochester.

• ^ • Griffiths, David J. (2023). Introduction to electrodynamics (5th ed.). New York: Cambridge University Press. ISBN   978-1-009-39773-5 .

• ^ Kramers presented his work at the 1947 Shelter Island Conference , repeated in 1948 at the Solvay Conference . The latter did not appear in print until the Proceedings of the Solvay Conference, published in 1950 (see Laurie M. Brown (ed.), Renormalization: From Lorentz to Landau (and Beyond) , Springer, 2012, p. 53). Kramers’ approach was nonrelativistic (see Jagdish Mehra , Helmut Rechenberg , The Conceptual Completion and Extensions of Quantum Mechanics 1932–1941. Epilogue: Aspects of the Further Development of Quantum Theory 1942–1999: Volumes 6, Part 2 , Springer, 2001, p. 1050).

• ^ • H. Bethe (1947). “The Electromagnetic Shift of Energy Levels”. Physical Review . 72 (4): 339–341. Bibcode :1947PhRv…72..339B. doi :10.1103/PhysRev.72.339. S2CID  120434909.

• ^ • Schwinger, J. (1948). “On quantum-electrodynamics and the magnetic moment of the electron”. Physical Review . 73 (4): 416–417. Bibcode :1948PhRv…73..416S. doi :10.1103/PhysRev.73.416.

• ^ • Schwinger, J. (1948). “I. A covariant formulation”. Physical Review . Quantum Electrodynamics. 74 (10): 1439–1461. Bibcode :1948PhRv…74.1439S. doi :10.1103/PhysRev.74.1439.

• ^ • Schwinger, J. (1949). “II. Vacuum polarization and self-energy”. Physical Review . Quantum Electrodynamics. 75 (4): 651–679. Bibcode :1949PhRv…75..651S. doi :10.1103/PhysRev.75.651.

• ^ • Schwinger, J. (1949). “III. The electromagnetic properties of the electron radiative corrections to scattering”. Physical Review . Quantum Electrodynamics. 76 (6): 790–817. Bibcode :1949PhRv…76..790S. doi :10.1103/PhysRev.76.790.

• ^ • Feynman, Richard P. (1948). “Space-time approach to non-relativistic quantum mechanics” (PDF). Reviews of Modern Physics . 20 (2): 367–387. Bibcode :1948RvMP…20..367F. doi :10.1103/RevModPhys.20.367.

• ^ • Feynman, Richard P. (1948). “A relativistic cut-off for classical electrodynamics” (PDF). Physical Review . 74 (8): 939–946. Bibcode :1948PhRv…74..939F. doi :10.1103/PhysRev.74.939.

• ^ • Feynman, Richard P. (1948). “A relativistic cut-off for quantum electrodynamics” (PDF). Physical Review . 74 (10): 1430–1438. Bibcode :1948PhRv…74.1430F. doi :10.1103/PhysRev.74.1430.

• ^ • Tomonaga, S. (August 1, 1946). “On a Relativistically Invariant Formulation of the Quantum Theory of Wave Fields”. Progress of Theoretical Physics . 1 (2). Oxford University Press (OUP): 27–42. Bibcode :1946PThPh…1…27T. doi :10.1143/ptp.1.27. ISSN  1347-4081.

• ^ • Koba, Z.; Tati, T.; Tomonaga, S.-i. (October 1, 1947). “On a Relativistically Invariant Formulation of the Quantum Theory of Wave Fields. II: Case of Interacting Electromagnetic and Electron Fields”. Progress of Theoretical Physics . 2 (3). Oxford University Press (OUP): 101–116. Bibcode :1947PThPh…2..101K. doi :10.1143/ptp/2.3.101. ISSN  0033-068X.

• ^ • Koba, Z.; Tati, T.; Tomonaga, S.-i. (December 1, 1947). “On a Relativistically Invariant Formulation of the Quantum Theory of Wave Fields. III: Case of Interacting Electromagnetic and Electron Fields”. Progress of Theoretical Physics . 2 (4). Oxford University Press (OUP): 198–208. Bibcode :1947PThPh…2..198K. doi :10.1143/ptp/2.4.198. ISSN  0033-068X.

• ^ • Kanesawa, S.; Tomonaga, S.-i. (March 1, 1948). “On a Relativistically Invariant Formulation of the Quantum Theory of Wave Fields. [IV]: Case of Interacting Electromagnetic and Meson Fields”. Progress of Theoretical Physics . 3 (1). Oxford University Press (OUP): 1–13. doi :10.1143/ptp/3.1.1. ISSN  0033-068X.

• ^ • Kanesawa, S.; Tomonaga, S.-i. (June 1, 1948). “On a Relativistically Invariant Formulation of the Quantum Theory of Wave Fields V: Case of Interacting Electromagnetic and Meson Fields”. Progress of Theoretical Physics . 3 (2). Oxford University Press (OUP): 101–113. Bibcode :1948PThPh…3..101K. doi :10.1143/ptp/3.2.101. ISSN  0033-068X.

• ^ • Koba, Z.; Tomonaga, S.-i. (September 1, 1948). “On Radiation Reactions in Collision Processes. I: Application of the “Self-Consistent” Subtraction Method to the Elastic Scattering of an Electron”. Progress of Theoretical Physics . 3 (3). Oxford University Press (OUP): 290–303. Bibcode :1948PThPh…3..290K. doi :10.1143/ptp/3.3.290. ISSN  0033-068X.

• ^ • Tomonaga, Sin-Itiro; Oppenheimer, J. R. (July 15, 1948). “On Infinite Field Reactions in Quantum Field Theory”. Physical Review . 74 (2). American Physical Society (APS): 224–225. Bibcode :1948PhRv…74..224T. doi :10.1103/physrev.74.224. ISSN  0031-899X.

• ^ • Dyson, F. J. (1949). “The radiation theories of Tomonaga, Schwinger, and Feynman”. Phys. Rev . 75 (3): 486–502. Bibcode :1949PhRv…75..486D. doi :10.1103/PhysRev.75.486.

• ^ • Peskin, Michael E. ; Schroeder, Daniel V. (1995). An Introduction to Quantum Field Theory . Reading: Addison-Wesley. Chapter 10. ISBN   978-0-201-50397-5 .

• ^ a b • Wilson, Kenneth G. (October 1, 1975). “The renormalization group: Critical phenomena and the Kondo problem”. Reviews of Modern Physics . 47 (4). American Physical Society (APS): 773–840. Bibcode :1975RvMP…47..773W. doi :10.1103/revmodphys.47.773. ISSN  0034-6861.

• ^ • ’t Hooft, G.; Veltman, M. (1972). “Regularization and renormalization of gauge fields”. Nuclear Physics B . 44 (1): 189–213. Bibcode :1972NuPhB..44..189T. doi :10.1016/0550-3213(72)90279-9. hdl :1874/4845.

• ^ • Dyson, F. J. (February 15, 1952). “Divergence of Perturbation Theory in Quantum Electrodynamics”. Physical Review . 85 (4). American Physical Society (APS): 631–632. Bibcode :1952PhRv…85..631D. doi :10.1103/physrev.85.631. ISSN  0031-899X.

• ^ • Stern, A. W. (November 7, 1952). “Space, Field, and Ether in Contemporary Physics”. Science . 116 (3019). American Association for the Advancement of Science (AAAS): 493–496. Bibcode :1952Sci…116..493S. doi :10.1126/science.116.3019.493. ISSN  0036-8075. PMID  17801299.

• ^ P.A.M. Dirac, “The Evolution of the Physicist’s Picture of Nature”, in Scientific American, May 1963, p. 53.

• ^ Kragh, Helge; Dirac: A scientific biography , CUP 1990, p. 184

• ^ Feynman, Richard P. QED: The Strange Theory of Light and Matter . Princeton: Princeton University Press, 1985, p. 128. The quoted passage is available here through Google Books (2014 electronic version of 2006 reprint of 1985 first printing).

• ^ • Isham, C. J.; Salam, Abdus; Strathdee, J. (May 15, 1972). “Infinity Suppression in Gravity-Modified Electrodynamics. II”. Physical Review D . 5 (10). American Physical Society (APS): 2548–2565. Bibcode :1972PhRvD…5.2548I. doi :10.1103/physrevd.5.2548. ISSN  0556-2821.

• ^ Russell, Bertrand. The Autobiography of Bertrand Russell: The Final Years, 1944-1969 (Bantam Books, 1970)

• ^ Ryder, Lewis. Quantum Field Theory , page 390 (Cambridge University Press 1996).

• ^ • Makogon, D.; Morais Smith, C. (2022). “Median-point approximation and its application for the study of fermionic systems”. Phys. Rev. B . 105 (17) 174505. arXiv :1909.12553. Bibcode :2022PhRvB.105q4505M. doi :10.1103/PhysRevB.105.174505. S2CID  203591796.

• ^ L.P. Kadanoff (1966): “Scaling laws for Ising models near $T_c$ “, Physics (Long Island City, N.Y.) 2 , 263.

Further reading

General introduction

• • Collins, John (2023). Renormalization: An Introduction to Renormalization, the Renormalization Group and the Operator-Product Expansion . Cambridge University Press . Bibcode :2023rair.book…..C. doi :10.1017/9781009401807. ISBN   978-1-009-40180-7 .

• DeDeo, Simon; Introduction to Renormalization (2017). Santa Fe Institute Complexity Explorer MOOC. Renormalization from a complex systems point of view, including Markov Chains, Cellular Automata, the real space Ising model, the Krohn-Rhodes Theorem, QED, and rate distortion theory.

• • Delamotte, Bertrand (2004). “A hint of renormalization”. American Journal of Physics . 72 (2): 170–184. arXiv :hep-th/0212049. Bibcode :2004AmJPh..72..170D. doi :10.1119/1.1624112. S2CID  2506712.

• Baez, John; Renormalization Made Easy , (2005). A qualitative introduction to the subject.

• Blechman, Andrew E.; Renormalization: Our Greatly Misunderstood Friend , (2002). Summary of a lecture; has more information about specific regularization and divergence-subtraction schemes.

• • Cao, Tian Yu; Schweber, Silvan S. (1993). “The conceptual foundations and the philosophical aspects of renormalization theory”. Synthese . 97 : 33–108. doi :10.1007/BF01255832. S2CID  46968305.

• Shirkov, Dmitry ; Fifty Years of the Renormalization Group , CERN Courier 41(7) (2001). Full text available at: IOP Magazines Archived December 5, 2008, at the Wayback Machine .

Mainly: quantum field theory

• N. N. Bogoliubov , D. V. Shirkov (1959): The Theory of Quantized Fields . New York, Interscience. The first text-book on the renormalization group theory.

• Ryder, Lewis H.; Quantum Field Theory (Cambridge University Press, 1985), • ISBN   0-521-33859-X Highly readable textbook, certainly the best introduction to relativistic Q.F.T. for particle physics.

• Zee, Anthony; Quantum Field Theory in a Nutshell , Princeton University Press (2003) • ISBN   0-691-01019-6 . Another excellent textbook on Q.F.T.

• Weinberg, Steven; The Quantum Theory of Fields (3 volumes) Cambridge University Press (1995). A monumental treatise on Q.F.T. written by a leading expert, Nobel laureate 1979 .

• Pokorski, Stefan; Gauge Field Theories , Cambridge University Press (1987) • ISBN   0-521-47816-2 .

• ’t Hooft, Gerard; The Glorious Days of Physics – Renormalization of Gauge theories , lecture given at Erice (August/September 1998) by the Nobel laureate 1999 . Full text available at: hep-th/9812203 .

• Rivasseau, Vincent; An introduction to renormalization , Poincaré Seminar (Paris, Oct. 12, 2002), published in : Duplantier, Bertrand; Rivasseau, Vincent (Eds.); Poincaré Seminar 2002 , Progress in Mathematical Physics 30, Birkhäuser (2003) • ISBN   3-7643-0579-7 . Full text available in PostScript .

• Rivasseau, Vincent; From perturbative to constructive renormalization , Princeton University Press (1991) • ISBN   0-691-08530-7 . Full text available in PostScript [ permanent dead link ] and in PDF (draft version).

• Iagolnitzer, Daniel & Magnen, J.; Renormalization group analysis , Encyclopaedia of Mathematics, Kluwer Academic Publisher (1996). Full text available in PostScript and pdf here .

• Scharf, Günter; Finite quantum electrodynamics: The causal approach , Springer Verlag Berlin Heidelberg New York (1995) • ISBN   3-540-60142-2 .

• A. S. Švarc (Albert Schwarz ), Математические основы квантовой теории поля, (Mathematical aspects of quantum field theory), Atomizdat, Moscow, 1975. 368 pp.

Mainly: statistical physics

• A. N. Vasil’ev; The Field Theoretic Renormalization Group in Critical Behavior Theory and Stochastic Dynamics (Routledge Chapman & Hall 2004); • ISBN   978-0-415-31002-4

• Nigel Goldenfeld ; Lectures on Phase Transitions and the Renormalization Group , Frontiers in Physics 85, Westview Press (June, 1992) • ISBN   0-201-55409-7 . Covering the elementary aspects of the physics of phase transitions and the renormalization group, this popular book emphasizes understanding and clarity rather than technical manipulations.

• Zinn-Justin, Jean; Quantum Field Theory and Critical Phenomena , Oxford University Press (4th edition – 2002) • ISBN   0-19-850923-5 . A masterpiece on applications of renormalization methods to the calculation of critical exponents in statistical mechanics, following Wilson’s ideas (Kenneth Wilson was Nobel laureate 1982 ).

• Zinn-Justin, Jean; Phase Transitions & Renormalization Group: from Theory to Numbers , Poincaré Seminar (Paris, Oct. 12, 2002), published in : Duplantier, Bertrand; Rivasseau, Vincent (Eds.); Poincaré Seminar 2002 , Progress in Mathematical Physics 30, Birkhäuser (2003) • ISBN   3-7643-0579-7 . Full text available in PostScript Archived October 15, 2005, at the Wayback Machine .

• Domb, Cyril; The Critical Point: A Historical Introduction to the Modern Theory of Critical Phenomena , CRC Press (March, 1996) • ISBN   0-7484-0435-X .

• Brown, Laurie M. (Ed.); Renormalization: From Lorentz to Landau (and Beyond) , Springer-Verlag (New York-1993) • ISBN   0-387-97933-6 .

• Cardy, John ; Scaling and Renormalization in Statistical Physics , Cambridge University Press (1996) • ISBN   0-521-49959-3 .

Miscellaneous

• Shirkov, Dmitry ; The Bogoliubov Renormalization Group , JINR Communication E2-96-15 (1996). Full text available at: hep-th/9602024

• Zinn-Justin, Jean; Renormalization and renormalization group: From the discovery of UV divergences to the concept of effective field theories , in: de Witt-Morette C., Zuber J.-B. (eds), Proceedings of the NATO ASI on Quantum Field Theory: Perspective and Prospective , June 15–26, 1998, Les Houches, France, Kluwer Academic Publishers, NATO ASI Series C 530, 375–388 (1999). Full text available in PostScript .

• Connes, Alain; Symétries Galoisiennes & Renormalisation , Poincaré Seminar (Paris, Oct. 12, 2002), published in : Duplantier, Bertrand; Rivasseau, Vincent (Eds.); Poincaré Seminar 2002 , Progress in Mathematical Physics 30, Birkhäuser (2003) • ISBN   3-7643-0579-7 . French mathematician Alain Connes (Fields Medallist 1982) describes the underlying mathematical structure of renormalization (a Hopf algebra ) and its link to the Riemann-Hilbert problem. Full text (in French) available at • arXiv :math/0211199.

External links

• Quotations related to Renormalization at Wikiquote