Hubble's Law
In the grand, sprawling narrative of physical cosmology, there exists a foundational observation, now officially known as the Hubble–Lemaître law. It's less a law in the strict, immutable sense, and more a recurring cosmic theme: galaxies are not merely drifting aimlessly, but are actively retreating from Earth at speeds directly proportional to their distance from us. To put it with the kind of brutal simplicity the universe itself employs, the further a galaxy resides, the more swiftly it appears to flee. This phenomenon of a galaxy's receding motion, its recessional velocity, is typically deduced through the meticulous measurement of its redshift: a subtle yet profound stretching of the wavelengths of the light the galaxy emits towards the red end of the spectrum as the source moves away.
Consider, if you must, a rather quaint analogy: a rising loaf of bread, speckled with raisins. As the dough expands, each raisin moves away from every other raisin. If one raisin is twice as far from a central point (our arbitrarily chosen "Earth") as another, it will appear to move away from that point twice as quickly. This homely metaphor, while lacking the universe's inherent drama, rather effectively encapsulates the core principle of Hubble's law – that the expansion is uniform and isotropic, meaning there's no privileged center.
The official renaming to the Hubble–Lemaître law, as recommended by the International Astronomical Union in 2018, acknowledges the rather complex and often contentious history of its discovery. While popular credit often defaults to Edwin Hubble for his seminal 1929 publication, the underlying theoretical framework and even some early observational insights predated his work. The concept of a universe in calculable expansion, for instance, was first meticulously derived from Albert Einstein's general relativity equations in 1922 by the Russian mathematician and meteorologist, Alexander Friedmann. His Friedmann equations didn't just hint at an expanding universe; they explicitly provided the mathematical tools to describe such an expansion and its potential speed.
Even before Friedmann's theoretical breakthroughs, the German astronomer Carl Wilhelm Wirtz made significant empirical strides. In publications from 1922 and 1924, Wirtz analyzed his own observational data, deducing that galaxies appearing smaller and dimmer also exhibited larger redshifts. This directly implied that these more distant galaxies were receding faster from the observer – a remarkable foreshadowing of the law to come, even if the full cosmological implications remained elusive at the time. Then, in 1927, the Belgian priest and astronomer Georges Lemaître independently arrived at the same conclusion: the universe was likely expanding, and the recessional velocity of distant celestial bodies was directly proportional to their respective distances. Lemaître even went so far as to estimate a value for this crucial ratio, a parameter that would, two years later, be refined and famously attributed to Hubble, becoming known as the Hubble constant. The historical record reveals that Lemaître himself, in the 1931 English translation of his earlier French paper, rather conspicuously omitted the critical equation containing his derived value, perhaps out of deference to Hubble's later, more precise measurement, or simply because he was too busy with other cosmic concerns to engage in a credit war.
It's also worth noting that Hubble's ability to infer these recession velocities from redshifts relied heavily on the pioneering work of Vesto Slipher, who, as early as 1917, had meticulously measured the redshifts of numerous "spiral nebulae" and correlated them with velocity. The combination of Slipher's kinematic data with the groundbreaking intergalactic distance calculations developed by Henrietta Swan Leavitt, particularly her work on Cepheid variable stars, provided Hubble with the essential toolkit to more accurately determine the universe's expansion rate. A collaborative effort, then, even if the spotlight often shines on a single name.
Hubble's law is not merely a curious observation; it stands as the inaugural observational pillar supporting the theory of the expansion of the universe. It's one of the most frequently cited pieces of evidence in favor of the Big Bang model, a concept that, despite its poetic name, describes a universe that began not with a bang in space, but with the expansion of space itself. The movement of astronomical entities driven solely by this cosmic expansion is often termed the Hubble flow. Mathematically, this fundamental relationship is encapsulated by the equation v = H0 D. Here, H0 represents the constant of proportionality—the Hubble constant itself—which links the "proper distance" D to a given galaxy (a distance that, rather inconveniently, evolves over time, unlike the more static comoving distance) and its speed of separation v. This velocity v is, precisely, the derivative of proper distance with respect to the cosmic time coordinate. A subtle point, often overlooked, is that while the Hubble constant H0 is indeed constant at any given moment in time across the universe, the broader Hubble parameter H, of which H0 is merely the current iteration, undeniably varies with time. Thus, referring to it as a "constant" can be somewhat misleading, a misnomer that surely causes mild headaches for those who prefer their terminology precise and unwavering.
The Hubble constant is most commonly expressed in units of kilometres per second per megaparsec (km/s/Mpc). This unit elegantly conveys that for every megaparsec (a truly vast distance, approximately 3.09×10^19 kilometres) a galaxy is away from us, its recessional speed increases by a certain number of kilometres per second. A commonly accepted value of 70 km/s/Mpc, for instance, means a galaxy 1 megaparsec away recedes at 70 km/s, while one 10 megaparsecs away recedes at 700 km/s. When one strips away the spatial units, the generalized form of H0 simplifies to a pure frequency (with the SI unit of s^−1). The reciprocal of H0, therefore, yields a unit of time, known as the Hubble time, which currently stands at approximately 14.4 billion years. This Hubble time provides a useful, albeit simplified, estimate for the age of the universe. Furthermore, the Hubble constant can also be conceptualized as a relative rate of expansion. In this form, H0 ≈ 7%/Gyr (gigayear), implying that, at the current rate, any unbound cosmic structure would expand by about 7% over the course of a billion years. The universe, in its own indifferent way, continues to grow.
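The unit conversions above can be sketched in a few lines. This is an illustrative calculation assuming H0 = 70 km/s/Mpc (the value used in the text's worked example; the 14.4-billion-year Hubble time quoted corresponds to a slightly smaller H0):

```python
# Sketch: converting the Hubble constant between its common forms.
# H0 = 70 km/s/Mpc is assumed purely for illustration.

KM_PER_MPC = 3.0857e19      # kilometres in one megaparsec
SECONDS_PER_GYR = 3.156e16  # seconds in one gigayear

H0_kms_mpc = 70.0           # Hubble constant, km/s/Mpc (assumed)

# Strip the spatial units: H0 as a pure frequency (s^-1)
H0_per_s = H0_kms_mpc / KM_PER_MPC

# Hubble time: the reciprocal of H0, expressed here in gigayears
hubble_time_gyr = 1.0 / H0_per_s / SECONDS_PER_GYR

# Relative expansion rate, in percent per gigayear
rate_percent_per_gyr = H0_per_s * SECONDS_PER_GYR * 100

print(f"Hubble time ≈ {hubble_time_gyr:.1f} Gyr")            # ≈ 14.0 Gyr
print(f"Expansion rate ≈ {rate_percent_per_gyr:.1f} %/Gyr")  # ≈ 7.2 %/Gyr
```

Choosing the community's favoured H0 ≈ 67.8 km/s/Mpc instead reproduces the 14.4 Gyr Hubble time quoted above.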
Discovery
One might imagine the discovery of the expanding universe as a sudden, singular epiphany, but like most profound scientific revelations, it was a tapestry woven from multiple threads of insight, often independently spun. The journey to the Hubble constant was, in essence, a three-stage rocket, each stage built upon the last, propelled by both theoretical brilliance and relentless observation.
A full decade before Edwin Hubble aimed his telescopes and published his now-famous observations, a cadre of brilliant physicists and mathematicians had already laid the theoretical groundwork for an expanding cosmos. They achieved this by masterfully manipulating Einstein field equations of general relativity. When applying the most fundamental and general principles to the very nature of the universe itself, their calculations consistently yielded a dynamic solution. This stood in stark contrast to the prevailing—and, as it turned out, stubbornly incorrect—notion of a static universe, a concept Albert Einstein himself initially favored, much to his later chagrin.
Slipher's observations
The seeds of observational evidence were sown in 1912 by Vesto M. Slipher at the Lowell Observatory. Slipher, a meticulous astronomer, measured the first Doppler shift of a celestial object then quaintly referred to as a "spiral nebula" (a term now largely obsolete, replaced by "spiral galaxies"). What he quickly discovered was astonishing: almost all such objects were exhibiting a redshift, indicating they were receding from Earth. Slipher's initial measurements, particularly of the Andromeda Nebula (M31), showed a blueshift, meaning it was approaching, a local anomaly due to gravitational interaction within the Local Group. However, the vast majority of other "nebulae" he observed showed significant redshifts. While Slipher meticulously documented these radial velocities, he did not, at the time, fully grasp the profound cosmological implications of a universe where nearly everything was moving away. Indeed, the very nature of these "nebulae"—whether they were mere gas clouds within our own Milky Way or "island universes" entirely separate from it—was still a matter of intense, often acrimonious, debate within the astronomical community. His work, however, provided the crucial kinematic data that others would later combine with distance measurements to unveil the universe's grand expansion.
FLRW equations
The theoretical stage was set in 1922 when Alexander Friedmann, working from Einstein field equations, derived his now-famous Friedmann equations. These equations mathematically demonstrated that the universe could indeed be expanding, and further, that its rate of expansion could be precisely calculated. The key parameter Friedmann utilized, known today as the scale factor, effectively served as a scale invariant representation of the proportionality constant that would later define Hubble's law. Unbeknownst to Friedmann, Georges Lemaître would independently arrive at a strikingly similar solution in his 1927 paper. The derivation of the Friedmann equations involves a rather elegant, if abstract, process: one inserts the metric for a homogeneous and isotropic universe (often called the FLRW metric) into Einstein's field equations, assuming the universe behaves like a fluid characterized by a specific density and pressure. This revolutionary idea of a dynamically expanding spacetime ultimately birthed the two dominant cosmology theories of the 20th century: the Big Bang model and its now largely superseded rival, the Steady State Theory.
Lemaître's equation
A historical injustice, or perhaps just a quirk of publishing and translation, surrounds the work of Georges Lemaître. Two years before Hubble's celebrated 1929 publication, Lemaître, the Belgian priest and astronomer, was the first to publish research that not only derived what we now call Hubble's law but also provided an initial estimate for the expansion rate. His 1927 paper, "Un univers homogène de masse constante et de rayon croissant rendant compte de la vitesse radiale des nébuleuses extra-galactiques" ("A homogeneous universe of constant mass and increasing radius accounting for the radial velocity of extra-galactic nebulae"), appeared in a relatively obscure French journal. As the Canadian astronomer Sidney van den Bergh pointed out with a touch of cosmic weariness, "the 1927 discovery of the expansion of the universe by Lemaître was published in French in a low-impact journal. In the 1931 high-impact English translation of this article, a critical equation was changed by omitting reference to what is now known as the Hubble constant." The irony, of course, is that Lemaître himself made these alterations, perhaps prioritizing the broader acceptance of the expanding universe concept over personal credit for a specific numerical value. A truly humble, or perhaps simply pragmatic, move in the cutthroat world of scientific discovery.
Shape of the universe
Before the elegant mathematical frameworks of modern cosmology and the definitive observational proofs of expansion, the very scale and shape of the universe were subjects of intense, often passionate, speculation. This intellectual ferment culminated in the famous Shapley–Curtis debate of 1920. On one side stood Harlow Shapley, arguing for a relatively small universe, confined largely to the dimensions of our own Milky Way galaxy. On the opposing side was Heber D. Curtis, who championed a vastly larger cosmos, replete with numerous "island universes" (what we now call galaxies) scattered beyond our own. This fundamental disagreement, a veritable clash of cosmic scales, remained unresolved until the subsequent decade, when Edwin Hubble's improved observational techniques and meticulous measurements provided the definitive evidence that tipped the scales firmly in favor of Curtis's grander vision.
Cepheid variable stars outside the Milky Way
Edwin Hubble conducted the bulk of his groundbreaking astronomical observations at the Mount Wilson Observatory in California. This was, at the time, home to the most formidable optical instrument in the world: the 100-inch Hooker telescope. It was with this colossal eye on the sky that Hubble meticulously studied Cepheid variable stars residing within the "spiral nebulae." These particular stars are celestial beacons, exhibiting a predictable relationship between their pulsation period and their intrinsic luminosity. By measuring their observed brightness and knowing their intrinsic brightness (thanks to Henrietta Swan Leavitt's earlier work), Hubble could, with unprecedented accuracy, calculate the distances to these enigmatic objects. What he found was nothing short of revolutionary: these "nebulae" were not clouds within our galaxy, but were situated at truly immense distances, placing them unequivocally outside the Milky Way. The term "nebulae" lingered for a time, a linguistic fossil, but it was only a matter of time before the more accurate and now ubiquitous term "galaxies" fully supplanted it, reflecting their true nature as independent stellar islands.
Combining redshifts with distance measurements
The velocities and distances that form the core of Hubble's law are not, in fact, directly plucked from the cosmic ether. Rather, they are inferred through a series of intricate measurements and calculations. The velocity of recession is derived from the redshift z = ∆λ / λ of the radiation emitted by distant objects, a phenomenon that speaks to the stretching of space itself. Distance, on the other hand, is inferred from an object's apparent brightness, relying on cosmic "standard candles" whose intrinsic luminosities are known. Hubble's task was to establish a correlation between this inferred brightness and the redshift parameter z.
By painstakingly combining his newly acquired, more accurate measurements of galaxy distances with the earlier, crucial redshift measurements made by Vesto Slipher and the equally diligent work of Milton Humason, Hubble stumbled upon a rough, yet undeniable, proportionality. It was a clear, if initially somewhat messy, relationship between an object's redshift and its inferred distance. While his initial plots exhibited considerable scatter—now understood to be caused by "peculiar velocities," the local, non-expansionary motions of galaxies due to gravitational interactions, which can obscure the smooth "Hubble flow" at closer ranges—Hubble managed to draw a compelling trend line from the 46 galaxies he studied. This initial work yielded a value for the Hubble constant of approximately 500 (km/s)/Mpc. This figure, though astronomically high compared to modern, more refined estimates (a discrepancy largely attributable to errors in his early distance calibrations, a testament to the ever-evolving cosmic distance ladder), nevertheless provided the first concrete evidence of an expanding universe.
Hubble diagram
The elegance of Hubble's law is beautifully rendered in what is known as a "Hubble diagram." This simple yet profound graphical representation plots the velocity of a celestial object (which, for practical purposes, is assumed to be approximately proportional to its redshift) against its distance from the observer. The hallmark of Hubble's law is visually depicted as a straight line with a positive slope on this diagram. The steeper the slope, the higher the value of the Hubble constant, indicating a faster rate of expansion. It's a testament to the power of visualization, transforming raw data into a clear, compelling statement about the universe's dynamic nature.
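The slope extraction described above can be sketched numerically. The data points below are invented for illustration (they are not Hubble's 46 galaxies); the scatter stands in for peculiar velocities, and the fit is a least-squares line forced through the origin:

```python
# Sketch of a Hubble-diagram fit: given (distance, velocity) pairs,
# recover the slope H0 by least squares through the origin.
# Data points are invented for illustration only.

# (proper distance in Mpc, recessional velocity in km/s)
data = [(5, 390), (10, 660), (20, 1450), (35, 2400), (50, 3600), (80, 5500)]

# Least-squares slope for a line v = H0 * D constrained through the origin:
# H0 = sum(D * v) / sum(D^2)
num = sum(d * v for d, v in data)
den = sum(d * d for d, v in data)
H0 = num / den

print(f"Fitted H0 ≈ {H0:.1f} km/s/Mpc")
```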
Cosmological constant abandoned
The publication of Hubble's profound discovery had immediate and far-reaching consequences, particularly for Albert Einstein. Prior to Hubble's observational proof of cosmic expansion, Einstein had, with a certain intellectual discomfort, introduced a cosmological constant into his equations of general relativity. This term, a rather arbitrary insertion, was designed to act as a repulsive force, effectively "coercing" his elegant equations into yielding a static universe solution—a state of affairs he, and most of the scientific community at the time, considered the correct description of reality. The unadorned Einstein equations, in their simplest and most natural form, inherently predicted either an expanding or contracting universe. The constant was Einstein's attempt to achieve a delicate balance, an unchanging and flat cosmos.
Upon learning of Hubble's undeniable evidence that the universe was, unequivocally, expanding, Einstein famously recanted. He referred to his initial, faulty assumption of a static universe, and by extension, the introduction of the cosmological constant to enforce it, as his "greatest mistake." The irony, of course, is that general relativity itself, without any forced modifications, had already provided the theoretical framework for an expanding universe. This expansion could now be observed and verified through empirical data, much like other predictions of his theory, such as the bending of light by large masses or the precession of the orbit of Mercury.
In a gesture of profound scientific humility and respect, Einstein journeyed to Mount Wilson Observatory in 1931 to personally thank Hubble for providing the critical observational basis that validated the dynamic universe predicted by his own theory and, in doing so, ushered in the era of modern cosmology.
Curiously, the cosmological constant, once deemed Einstein's greatest blunder, has enjoyed a spectacular resurgence in recent decades. It has been reimagined and re-evaluated as a leading hypothetical explanation for dark energy, the mysterious force now believed to be driving the universe's accelerating expansion. A blunder, perhaps, but one with a peculiar habit of returning to haunt—or rather, enlighten—future generations of physicists.
Interpretation
The initial discovery of a straightforward, linear relationship between redshift and distance, when paired with the equally straightforward (though, as we shall see, approximate) linear relation between recessional velocity and redshift, culminates in the elegant mathematical expression of Hubble's law:
v = H₀ D
Where:
- v represents the recessional velocity, typically quantified in kilometres per second (km/s). This is the speed at which a galaxy is moving away from an observer due to the expansion of space.
- H₀ is the Hubble constant. It precisely corresponds to the value of H (often more accurately termed the Hubble parameter, which is inherently time-dependent and can be expressed in terms of the scale factor) as derived from the Friedmann equations, specifically taken at the present moment of observation (hence the subscript 0). This particular value, at any given comoving time, is considered uniform throughout the observable universe.
- D denotes the proper distance from the observed galaxy to the observer. This distance, unlike the comoving distance which remains constant in an expanding universe, is dynamic and can change over time. It is typically measured in megaparsecs (Mpc) within the 3-space defined by a specific cosmological time. Fundamentally, the recessional velocity v is simply the derivative of this proper distance with respect to cosmic time (v = dD/dt).
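As a minimal illustration of the law itself, again assuming H0 = 70 km/s/Mpc:

```python
# Minimal sketch of v = H0 * D, assuming H0 = 70 km/s/Mpc for illustration.
H0 = 70.0  # km/s/Mpc (assumed value)

def recessional_velocity(distance_mpc: float) -> float:
    """Recessional velocity (km/s) for a proper distance given in Mpc."""
    return H0 * distance_mpc

print(recessional_velocity(1))   # 70.0  (km/s at 1 Mpc)
print(recessional_velocity(10))  # 700.0 (km/s at 10 Mpc)
```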
While Hubble's law is celebrated as a fundamental relation describing the expansion of the universe, it's crucial to understand that the direct, simple relationship between recessional velocity and redshift holds true only for relatively small redshifts. For larger redshifts, the relationship becomes distinctly non-linear and is, rather frustratingly, dependent on the specific cosmological model one chooses to adopt. The universe, it seems, prefers to keep its deeper secrets conditional.
A particularly intriguing, and often misunderstood, consequence arises for distances D that exceed the radius of the Hubble sphere, denoted as rHS. In such scenarios, objects are observed to recede at a rate faster than the speed of light. Before you immediately conjure images of Albert Einstein spinning in his grave, understand that this does not violate the principles of special relativity, which strictly prohibits objects from moving through space faster than light. Instead, this superluminal recession occurs because the space between the galaxy and the observer is expanding, carrying the distant galaxy away. The galaxies themselves are not moving faster than light locally through their own spacetime. The Hubble sphere radius is defined as:
rHS = c / H₀
Where c is the speed of light.
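Plugging in numbers gives a sense of scale. A sketch, again assuming H0 = 70 km/s/Mpc:

```python
# Sketch: the Hubble sphere radius r_HS = c / H0, assuming H0 = 70 km/s/Mpc.
C_KM_S = 299_792.458  # speed of light, km/s
H0 = 70.0             # km/s/Mpc (assumed)

r_hs_mpc = C_KM_S / H0  # radius of the Hubble sphere, in Mpc
print(f"Hubble radius ≈ {r_hs_mpc:.0f} Mpc")  # ≈ 4283 Mpc

# Just beyond this distance, v = H0 * D exceeds the speed of light
assert H0 * (r_hs_mpc + 1) > C_KM_S
```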
Since the Hubble "constant" H is, as we've already established, constant only in space but not in time, the radius of the Hubble sphere itself is subject to change, potentially increasing or decreasing over vast cosmic timescales. The subscript '0' is a small, but important, reminder that we are referring to the current value of the Hubble constant. Current observational evidence, rather counter-intuitively, suggests that the expansion of the universe is actually accelerating. This implies that for any given galaxy, its recession velocity (dD/dt) is increasing over time as it moves to greater and greater distances. However, and this is where it gets truly perplexing, the Hubble parameter H itself is generally thought to be decreasing with time. This means that if one were to fix a specific distance D and observe a succession of different galaxies passing through that distance over cosmic history, later galaxies would cross that threshold at a slower velocity than earlier ones. The universe, ever keen to defy simple intuition, manages to accelerate its expansion while its fundamental expansion rate (the Hubble parameter) simultaneously diminishes. A cosmic paradox, or simply a testament to the complexities of general relativity.
Redshift velocity and recessional velocity
Measuring redshift is a relatively straightforward observational endeavor. By identifying the characteristic wavelengths of known atomic transitions (like the hydrogen α-lines in distant quasars) and comparing them to their stationary counterparts, one can unambiguously determine the fractional shift, z. The real interpretive challenge, however, lies in translating this raw redshift value into a meaningful recessional velocity. For small redshift values, a simple, linear relationship between redshift and recessional velocity is a reasonable approximation. But venture into the realm of larger redshifts, and this linear simplicity crumbles. The true redshift-distance law becomes non-linear, demanding a specific derivation for each particular cosmological model and cosmic epoch under consideration. It’s almost as if the universe enjoys making things just complicated enough to keep us perpetually guessing.
Redshift velocity
Often, the redshift z is colloquially (and sometimes confusingly) referred to as a "redshift velocity" (vrs). This vrs is defined as the hypothetical recessional velocity that would produce the same redshift if it were solely caused by a linear Doppler effect. However, and this is crucial, this interpretation is fundamentally flawed for large redshifts. The velocities involved in cosmic expansion are often far too substantial for a non-relativistic Doppler shift formula to apply accurately. Indeed, this "redshift velocity" can easily, and quite misleadingly, exceed the speed of light, which often leads to confusion among those not steeped in the nuances of cosmology.
The relationship used to define this vrs is simply:
vrs ≡ c z
Where c is the speed of light.
There is, in essence, no fundamental theoretical distinction between "redshift velocity" and redshift itself; they are rigidly proportional by definition, not by any deep theoretical reasoning concerning actual motion. The motivation behind this "redshift velocity" terminology stems from the fact that it conveniently aligns with the velocity derived from a low-velocity simplification of the Fizeau–Doppler formula:
z = (λo / λe) - 1 = √((1 + v/c) / (1 - v/c)) - 1 ≈ v/c
Here, λo and λe represent the observed and emitted wavelengths, respectively. However, it bears repeating: this "redshift velocity" vrs quickly loses its direct correlation to a real physical velocity at higher actual speeds. Interpreting it as such is a common pitfall that generates unnecessary confusion. The true connection between redshift (or redshift velocity) and recessional velocity is considerably more nuanced.
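The divergence between the definitional "redshift velocity" and the velocity recovered from the full special-relativistic Doppler formula can be sketched directly. Inverting z = √((1 + v/c) / (1 − v/c)) − 1 for v gives v = c · ((1+z)² − 1) / ((1+z)² + 1):

```python
# Sketch comparing the "redshift velocity" v_rs = c*z with the velocity
# obtained by inverting the full relativistic Doppler formula.
# For z > 1, v_rs exceeds c, while the Doppler inversion never can.

C = 299_792.458  # speed of light, km/s

def redshift_velocity(z: float) -> float:
    """v_rs = c * z, by definition."""
    return C * z

def doppler_velocity(z: float) -> float:
    """Invert z = sqrt((1 + v/c) / (1 - v/c)) - 1 for v."""
    r = (1 + z) ** 2
    return C * (r - 1) / (r + 1)

for z in (0.01, 0.1, 1.0, 2.0):
    print(z, redshift_velocity(z), doppler_velocity(z))
```

At z = 0.01 the two agree to better than a percent; at z = 2 the "redshift velocity" is 2c while the Doppler inversion gives 0.8c. Neither, of course, is the cosmological recession velocity, which requires a model.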
Recessional velocity
Let's consider R(t) as the scale factor of the universe. This R(t) is a dynamic quantity that increases as the universe expands, its specific behavior dictated by the chosen cosmological model. Its profound significance lies in the fact that all measured proper distances D(t) between any two co-moving points (points that are not moving relative to their local cosmic environment, merely being carried along by the expansion of space) increase in direct proportion to R(t). Expressed mathematically:
D(t) / D(t₀) = R(t) / R(t₀)
Where t₀ denotes some arbitrary reference time. If light is emitted from a distant galaxy at time te and finally reaches our detectors at time t₀, it undergoes a redshift due to the expansion of the intervening space. This redshift z is elegantly defined as:
z = (R(t₀) / R(te)) - 1
Now, let's consider a galaxy situated at a proper distance D. This distance naturally changes with time at a rate dD/dt. We define this rate of recession as the "recession velocity" vr:
vr = dD/dt = (dR/dt / R) D
At this juncture, we can introduce the Hubble parameter H as:
H ≡ (dR/dt) / R
And, with a satisfying click, we rediscover Hubble's law in its fundamental form:
vr = H D
From this perspective, Hubble's law emerges not as an empirical correlation, but as a fundamental relation intrinsically linking (i) the recessional velocity that directly arises from the expansion of the universe and (ii) the proper distance to a celestial object. The connection between redshift and distance, while observationally crucial, acts more as a convenient "crutch" to bridge the theoretical law with actual astronomical observations. This law can be related to redshift z through an approximate Taylor series expansion:
z = (R(t₀) / R(te)) - 1 ≈ (R(t₀) / (R(t₀)(1 + (te - t₀)H(t₀)))) - 1 ≈ (t₀ - te)H(t₀)
For distances that are not excessively large, the more intricate complexities of the cosmological model recede into minor corrections. In such cases, the time interval (t₀ - te) can be simply approximated as the distance D divided by the speed of light c:
z ≈ (D/c) H(t₀)
Which elegantly rearranges to:
c z ≈ D H(t₀) = vr
According to this more rigorous approach, the familiar relation cz = vr is understood as a valid approximation primarily at low redshifts. For larger redshifts, this simple linear approximation breaks down, requiring a more sophisticated, model-dependent relationship, as visually demonstrated in velocity-redshift diagrams. The universe, it seems, only offers simple answers when you're not looking too far.
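In practice the low-redshift approximation is used in reverse: measure z, recover D. A sketch, assuming H0 = 70 km/s/Mpc:

```python
# Sketch of the low-redshift approximation c*z ≈ H0 * D, assuming
# H0 = 70 km/s/Mpc. Valid only for z << 1, where the model-dependent
# corrections are negligible.

C = 299_792.458  # speed of light, km/s
H0 = 70.0        # km/s/Mpc (assumed)

def distance_from_redshift(z: float) -> float:
    """Approximate proper distance in Mpc; reliable only for z << 1."""
    return C * z / H0

print(distance_from_redshift(0.01))  # ≈ 42.8 Mpc
```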
Observability of parameters
It’s a peculiar truth of cosmology that the very quantities we seek to measure—the recessional velocity v and the proper distance D in Hubble's law—are never directly observable as they are now. Our observations, by the very nature of light travel time, always pertain to the distant past, to the state of a galaxy at the precise moment the photons we now detect embarked on their epic journey across the cosmos.
For galaxies that are relatively close to us (those with a redshift z significantly less than one), the changes in v and D over the light travel time are minimal, almost negligible. In these cases, the recessional velocity v can be reliably estimated using the linear approximation v = zc, where c is the speed of light. This simple formula, in fact, directly mirrors the empirical relation that Hubble himself first uncovered.
However, for truly distant galaxies, the situation becomes considerably more complex. Calculating v (or D) from the observed redshift z is no longer a straightforward task. It absolutely necessitates the adoption of a detailed cosmological model that describes how the Hubble parameter H has evolved over cosmic time. The redshift itself, in these deep-space scenarios, is not even directly proportional to the recession velocity of the object at the exact moment its light was first emitted. Rather, it carries a more profound, yet deceptively simple, interpretation: the quantity (1 + z) precisely represents the factor by which the universe has expanded while the photon was making its arduous journey towards the observer. A subtle distinction, but one that underpins the entire edifice of modern cosmology.
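The "(1 + z) as expansion factor" interpretation reduces to a one-line computation on the scale factor. The numbers below are illustrative, with the present-day scale factor conventionally normalised to 1:

```python
# Sketch: (1 + z) is the factor by which the universe expanded while
# the photon was in transit, via z = R(t0)/R(te) - 1.
# Scale-factor values are illustrative only.

R_emit = 0.5  # scale factor when the light was emitted (assumed)
R_now = 1.0   # scale factor today (conventional normalisation)

z = R_now / R_emit - 1
expansion_factor = 1 + z

print(z)                 # 1.0
print(expansion_factor)  # 2.0: wavelengths (and proper distances) doubled
```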
Expansion velocity vs. peculiar velocity
When applying Hubble's law to deduce cosmic distances, one must exercise a degree of discernment, utilizing only the velocity component that is unequivocally attributable to the grand expansion of the universe. This is where the concept of "peculiar velocities" enters the fray, often as an inconvenient complication. Galaxies, being subject to the inexorable pull of gravity, do not simply drift apart in a perfectly smooth, expansionary flow. They interact gravitationally with their neighbors, forming clusters, groups, and superclusters. These local gravitational interactions induce motions that are entirely independent of the overall cosmic expansion. These localized, non-expansionary movements are termed peculiar velocities.
Consequently, the observed redshift of a nearby galaxy is a composite signal: it includes both the velocity imparted by the Hubble flow (the expansion of space) and its own peculiar velocity. For galaxies relatively close to us, these peculiar velocities can be a significant fraction of, or even exceed, their recessional velocity due to expansion, leading to considerable scatter in a Hubble diagram. This complicates distance measurements and gives rise to observable phenomena known as redshift-space distortions in large-scale structure surveys. Accurately accounting for and subtracting these peculiar velocities is a critical step in precisely applying Hubble's law and calibrating the cosmic distance ladder. The universe, it seems, rarely cooperates with our desire for pristine, isolated phenomena.
Time-dependence of Hubble parameter
The parameter H, widely dubbed the "Hubble constant," is, in truth, a misnomer that frequently trips up even seasoned enthusiasts. While it maintains a constant value across space at any given fixed cosmic time, it fundamentally varies with time in nearly every plausible cosmological model. Every observation we make of a profoundly distant object is, by its very nature, an observation peering back into the remote past, to an epoch when this "constant" likely held a different value. A more precise and less misleading term, therefore, is the "Hubble parameter," with H₀ specifically designating its present-day value. One might even argue that the universe has a rather dry sense of humor, naming something "constant" that is anything but.
Another common source of conceptual entanglement stems from the observation of an accelerating universe. Intuitively, one might assume that an accelerating expansion implies that the Hubble parameter H is itself increasing over time. However, this intuition is largely incorrect. The Hubble parameter is defined as H(t) ≡ ȧ(t) / a(t), where a(t) is the scale factor of the universe and ȧ(t) is its time derivative. In most models describing an accelerating universe, the scale factor a grows proportionally faster than its derivative ȧ, so the ratio H = ȧ/a actually decreases with time. While the recession velocity of any chosen galaxy does indeed increase due to acceleration, a galaxy crossing a sphere of fixed proper radius at a later time does so, paradoxically, more slowly than one crossing it at an earlier time.
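This behavior is easy to verify numerically. The sketch below uses the exact flat matter-plus-Λ scale factor, a(t) ∝ sinh^(2/3)(3√ΩΛ H₀ t / 2), with assumed parameters ΩΛ = 0.7 and time measured in units of 1/H₀: ȧ keeps growing (the expansion accelerates), yet H = ȧ/a keeps falling.

```python
import math

# Flat ΛCDM scale factor (matter + Λ): a(t) ∝ sinh^(2/3)(1.5*sqrt(ΩΛ)*H0*t).
# Units: time in 1/H0; assumed parameter ΩΛ = 0.7.
OL = 0.7
K = 1.5 * math.sqrt(OL)

def a(t):
    return math.sinh(K * t) ** (2.0 / 3.0)

def deriv(f, t, h=1e-6):
    """Central finite difference."""
    return (f(t + h) - f(t - h)) / (2 * h)

def H(t):                     # Hubble parameter H = ȧ/a
    return deriv(a, t) / a(t)

t1, t2 = 1.0, 2.0             # two late times (units of 1/H0)
print(deriv(a, t2) > deriv(a, t1))  # True: ȧ grows — expansion accelerates
print(H(t2) < H(t1))                # True: yet H itself is decreasing
```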
To further quantify this temporal evolution, we introduce the dimensionless deceleration parameter q, defined as:
q ≡ - (ä a / ȧ²)
From this definition, it naturally follows that the time derivative of the Hubble parameter is given by:
dH/dt = -H²(1 + q)
This equation reveals that the Hubble parameter H is consistently decreasing with time, unless q < -1. This latter condition, implying an accelerating expansion so extreme that H would actually increase, would require the existence of hypothetical phantom energy, a theoretical construct that, while mathematically possible, is generally regarded as somewhat improbable by the majority of physicists.
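As a sanity check on the identity dH/dt = −H²(1 + q), the sketch below differentiates a matter-dominated toy model, a(t) = t^(2/3), for which q = 1/2 exactly (illustrative finite-difference code, arbitrary units):

```python
# Numerical check of dH/dt = -H²(1 + q) for a matter-dominated toy model,
# where a(t) = t^(2/3), H = 2/(3t), and q = 1/2 exactly.

def a(t):
    return t ** (2.0 / 3.0)

def deriv(f, t, h=1e-6):
    return (f(t + h) - f(t - h)) / (2 * h)

def second_deriv(f, t, h=1e-4):
    return (f(t + h) - 2 * f(t) + f(t - h)) / (h * h)

def H(t):
    return deriv(a, t) / a(t)

t = 1.7                                   # arbitrary epoch
q = -second_deriv(a, t) * a(t) / deriv(a, t) ** 2
dH_dt = deriv(H, t, h=1e-4)
print(round(q, 3))                        # ≈ 0.5, as expected for matter domination
print(abs(dH_dt + H(t) ** 2 * (1 + q)) < 1e-3)  # True: the identity holds
```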
However, in the currently favored Lambda cold dark matter model (ΛCDM model), q is predicted to asymptotically approach a value of −1 from above in the distant future. This occurs as the cosmological constant (representing dark energy) progressively dominates over matter density. This implies that the Hubble parameter H will eventually converge from above to a constant value of approximately 57 (km/s)/Mpc. In this future epoch, the scale factor of the universe will then expand exponentially with time, leading to a desolate, ever-expanding void. A rather bleak, yet mathematically consistent, destiny.
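That asymptotic figure follows directly from the Friedmann equation: once Λ dominates, H → H₀√ΩΛ. A quick check with assumed round-number parameters (H₀ = 68 (km/s)/Mpc, ΩΛ = 0.70 — illustrative values, not a fit):

```python
import math

H0 = 68.0        # assumed present-day value, (km/s)/Mpc
Omega_L = 0.70   # assumed dark-energy density parameter

# As matter dilutes away, H² → H0² ΩΛ, so H tends to a constant floor:
H_inf = H0 * math.sqrt(Omega_L)
print(round(H_inf, 1))   # ≈ 56.9 (km/s)/Mpc, close to the ~57 quoted above
```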
Idealized Hubble's law
The mathematical underpinnings of an idealized Hubble's law for a uniformly expanding universe are, surprisingly, a rather elementary exercise in geometry. Within a 3-dimensional Cartesian/Newtonian coordinate space—which, when considered as a metric space, embodies perfect homogeneity and isotropy (meaning its properties are identical regardless of location or direction)—the theorem states:
Any two points which are moving away from the origin, each along straight lines and with speed proportional to distance from the origin, will be moving away from each other with a speed proportional to their distance apart.
This elegant principle isn't confined solely to Cartesian geometries. It extends its applicability to non-Cartesian spaces, provided they maintain local homogeneity and isotropy. This includes, crucially, the negatively and positively curved spaces frequently invoked in sophisticated cosmological models (as explored in discussions concerning the shape of the universe).
A profound implication flowing from this theorem is that our observation of objects receding from us here on Earth is not, as might be intuitively assumed, an indication that we are situated at some privileged "center" from which the cosmic expansion originates. Rather, it is an intrinsic property of an expanding universe that every observer, regardless of their location, will perceive all other unbound objects receding from them. The universe, in its grand impartiality, offers no special vantage point.
Ultimate fate and age of the universe
The profound questions of the age and ultimate fate of the universe are inextricably linked to the precise measurement of the Hubble constant H0 today, coupled with an accurate determination of the deceleration parameter q. These two values, in turn, are uniquely characterized by the various density parameters of the universe (ΩM for matter and ΩΛ for dark energy).
Consider a closed universe, hypothetically characterized by ΩM > 1 and ΩΛ = 0. Such a universe is destined to eventually halt its expansion, reverse course, and culminate in a cataclysmic Big Crunch. Intriguingly, such a universe would be considerably younger than its nominal Hubble age (the reciprocal of H0). Conversely, an open universe, defined by ΩM ≤ 1 and ΩΛ = 0, would expand indefinitely, its age aligning more closely with its Hubble age. For the accelerating universe that we, rather unexpectedly, inhabit—a universe with a non-zero ΩΛ (implying the presence of dark energy)—the calculated age of the universe is, by a rather remarkable cosmic coincidence, very nearly equivalent to the Hubble age.
The value of the Hubble parameter (H) is not static; it evolves over time, either increasing or decreasing depending on the precise value of the deceleration parameter q, which is defined as:
q = - (1 + Ḣ / H²)
In a simplified model of a universe where the deceleration parameter q is exactly zero (meaning a universe expanding at a constant rate), the relationship simplifies to H = 1/t, where t represents the time elapsed since the Big Bang. However, for a universe with a non-zero, time-dependent value of q, determining the age of the universe requires the more rigorous approach of integrating the Friedmann equations backward from the present moment to the point where the comoving horizon size effectively shrinks to zero.
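That backward integration is less forbidding than it sounds. For a flat matter-plus-Λ universe the age is t₀ = ∫₀¹ da / (a H(a)); the sketch below evaluates it with a simple midpoint rule and assumed Planck-like parameters (H₀ = 67.4, Ωm = 0.315):

```python
import math

# Age of a flat ΛCDM universe by integrating dt = da / (a H(a)) from a→0 to a=1.
# Assumed parameters roughly matching Planck: H0 = 67.4 (km/s)/Mpc, Ωm = 0.315.
H0 = 67.4
OM, OL = 0.315, 0.685
GYR_PER_HUBBLE = (3.0857e19 / H0) / 3.156e16   # 1/H0 in Gyr

def integrand(a):
    # 1 / (a * E(a)) with E(a) = sqrt(Ωm a⁻³ + ΩΛ); the integrand → 0 as a → 0.
    return 1.0 / math.sqrt(OM / a + OL * a * a)

# Midpoint rule; the integrand is finite on (0, 1].
N = 200_000
t0_over_tH = sum(integrand((i + 0.5) / N) for i in range(N)) / N
age_gyr = t0_over_tH * GYR_PER_HUBBLE
print(round(age_gyr, 1))   # ≈ 13.8 Gyr
```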
For a considerable portion of the 20th century, it was widely presumed that q would be positive. This assumption was based on the intuitive notion that the mutual gravitational attraction of all matter in the universe should inevitably cause the expansion to slow down. Such a positive q would necessarily imply an age of the universe that is less than the Hubble time (1/H, which is approximately 14 billion years). For example, a value of q = 1/2 (a figure once favored by many theorists) would yield an age of the universe equal to 2/(3H). The groundbreaking discovery in 1998, however, that q is, in fact, apparently negative, dramatically overturned this long-held expectation. A negative q signifies that the expansion is currently accelerating, which, counter-intuitively, means the universe could actually be older than the simple Hubble time estimate. Nonetheless, the most precise estimates of the age of the universe today remain remarkably close to the Hubble time, a testament to the complex interplay of cosmic parameters.
Olbers' paradox
The seemingly straightforward expansion of space, as eloquently summarized by the Big Bang interpretation of Hubble's law, offers a rather elegant resolution to an ancient and perplexing cosmic riddle known as Olbers' paradox. The paradox, named for the German astronomer Heinrich Wilhelm Olbers, who described it in 1823 (though the puzzle itself had troubled astronomers for centuries), poses a simple yet profound question: If the universe were truly infinite in its spatial extent, eternally static, and uniformly populated with an endless distribution of stars, then every conceivable line of sight in the night sky should eventually terminate on the surface of a star. Consequently, the entire night sky ought to be as blindingly bright as the surface of a star, a cosmic inferno. Yet, as anyone with a pair of eyes can attest, the night sky is predominantly, and rather reassuringly, dark.
Since the 17th century, a succession of astronomers, physicists, and philosophers have proposed myriad ingenious solutions to this cosmic conundrum. However, the currently accepted resolution, a synthesis of modern cosmology, leans heavily on two key pillars: the Big Bang theory and the Hubble expansion. Firstly, the Big Bang theory posits a universe that has existed for a finite amount of time, approximately 13.8 billion years. This finite age implies that light from only a finite number of stars and galaxies has had sufficient time to reach us since the beginning of the universe. Beyond a certain cosmic horizon, their light simply hasn't arrived yet. Secondly, the Hubble expansion plays a crucial role: as distant objects recede from us due to the expansion of space, the light they emit undergoes a significant redshift. This stretching of wavelengths not only shifts the light towards the red (and eventually infrared and beyond, rendering it invisible to our eyes) but also diminishes the objects' apparent brightness by the time their light finally reaches us. Thus, the combined effect of a finite cosmic age and the redshifting and dimming caused by an expanding universe conspire to paint our night sky in comforting shades of dark. A dark sky, it seems, is a profound cosmological statement.
Dimensionless Hubble constant
In the often-fraught endeavor of cosmology, where precise measurements are paramount but often elusive, a common and rather pragmatic practice has emerged: the introduction of the dimensionless Hubble constant, typically denoted by h and affectionately (or perhaps exasperatedly) referred to as "little h." This dimensionless constant serves as a convenient shorthand, allowing researchers to express the Hubble constant H0 in a standardized form: h × 100 km⋅s⁻¹⋅Mpc⁻¹. By doing so, all the inherent relative uncertainty surrounding the true, empirically determined value of H0 is neatly relegated to the value of h.
This little h is particularly useful when quoting distances that have been calculated from an observed redshift z using the approximate formula d ≈ c / H0 × z. Since H0 is not yet known with absolute certainty, the distance is often expressed in a way that explicitly carries the h dependence:
c z / H0 ≈ (2998 × z) Mpc h⁻¹
In practice, this means one calculates 2998 × z and then appends the units as Mpc h⁻¹ or h⁻¹ Mpc. This allows for calculations and comparisons without committing to a specific, potentially disputed, value of H0, deferring that final numerical insertion until more definitive measurements emerge.
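A small sketch of the convention (the factor 2998 is simply c divided by 100, with c in km/s and the result in Mpc; function names are my own):

```python
# Working with "little h": distances quoted as h⁻¹ Mpc defer the choice of H0.
C = 299_792.458   # speed of light, km/s

def distance_h_inv_mpc(z: float) -> float:
    """Low-z distance c*z/H0 expressed in h⁻¹ Mpc (valid only for z << 1)."""
    return (C / 100.0) * z        # ≈ 2998 * z

def to_mpc(d_h_inv: float, h: float) -> float:
    """Insert a specific h to convert h⁻¹ Mpc into plain Mpc."""
    return d_h_inv / h

d = distance_h_inv_mpc(0.1)
print(round(d, 1))                # ≈ 299.8 h⁻¹ Mpc
print(round(to_mpc(d, 0.7), 1))   # ≈ 428.3 Mpc if H0 = 70 (km/s)/Mpc
```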
Occasionally, a reference value other than 100 may be chosen for the scaling factor, in which case a subscript is added to h to prevent confusion. For example, h70 would denote H0 = 70 h70 (km/s)/Mpc, which logically implies that h70 = h / 0.7. This ensures clarity when different scaling conventions are employed.
It is crucial not to conflate this practical dimensionless Hubble constant (h) with the truly dimensionless value of the Hubble constant when expressed in terms of Planck units. The latter is obtained by multiplying H0 by a minuscule factor of 1.75×10⁻⁶³ (derived from the definitions of the parsec and the Planck time). For instance, if H0 is 70 km/s/Mpc, its Planck unit equivalent would be a truly minuscule 1.2×10⁻⁶¹. One is a practical placeholder for uncertainty, the other a fundamental constant in a system of natural units.
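Both conversions quoted above are straightforward unit arithmetic, sketched here with rounded constants:

```python
# Converting H0 into Planck units: multiply the SI value (s⁻¹) by the Planck time.
KM_PER_MPC = 3.0857e19      # kilometres in one megaparsec
T_PLANCK = 5.391e-44        # Planck time, seconds

def h0_in_planck_units(h0_km_s_mpc: float) -> float:
    h0_si = h0_km_s_mpc / KM_PER_MPC     # (km/s)/Mpc → s⁻¹
    return h0_si * T_PLANCK

print(h0_in_planck_units(1.0))    # ≈ 1.75e-63, the conversion factor quoted above
print(h0_in_planck_units(70.0))   # ≈ 1.2e-61
```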
Acceleration of the expansion
The scientific community experienced a profound shock in 1998. Observations of Type Ia supernovae, which serve as exceptionally reliable "standard candles" due to their consistent intrinsic brightness, unequivocally revealed that the deceleration parameter q was, astonishingly, negative. This groundbreaking discovery implied that the expansion of the universe is not, as previously assumed, slowing down due to gravity, but is, in fact, currently "accelerating." This revelation, which earned the Nobel Prize in Physics in 2011, fundamentally reshaped modern cosmology.
It's important to reiterate, however, that this cosmic acceleration does not mean the Hubble parameter itself is increasing with time. As discussed previously in the Interpretation section, the Hubble parameter H is actually still decreasing, albeit more slowly than it would in a decelerating universe. The acceleration refers to the fact that the recessional velocity of a given distant galaxy is increasing over time. This perplexing phenomenon is widely attributed to the enigmatic influence of dark energy, a mysterious form of energy that permeates space and exerts a repulsive gravitational force. The prevailing cosmological model that incorporates this acceleration is the ΛCDM model, which posits a universe dominated by a cosmological constant (Λ) and cold dark matter. The universe, it seems, is not just expanding, it's doing so with a peculiar, self-driven eagerness.
Derivation of the Hubble parameter
To truly appreciate the behavior of the Hubble parameter H, one must delve into the foundational equations of cosmology. The journey begins with the Friedmann equation, a cornerstone of general relativity when applied to a homogeneous and isotropic universe:
H² ≡ (ȧ / a)² = (8πG / 3)ρ - (kc² / a²) + (Λc² / 3)
Let's break down this rather imposing equation:
- H is the Hubble parameter, representing the instantaneous rate of expansion of the universe.
- a is the scale factor of the universe, a dimensionless quantity that describes the relative expansion of space. Its derivative with respect to time is ȧ.
- G is the gravitational constant, a fundamental constant in physics.
- ρ is the total mass density of the universe, encompassing all forms of matter and energy.
- k is the normalized spatial curvature of the universe. This parameter dictates the overall shape of the universe: k = −1 for an open, negatively curved universe; k = 0 for a flat universe; and k = +1 for a closed, positively curved universe.
- c is the speed of light in a vacuum.
- Λ is the cosmological constant, a term originally introduced by Albert Einstein and now associated with dark energy.
Matter-dominated universe (with a cosmological constant)
Let us consider a simplified scenario, an epoch where the universe is predominantly matter-dominated. In this case, the total mass density ρ is primarily composed of matter (ρm). According to the principles of thermodynamics and general relativity for non-relativistic particles, the mass density of matter decreases inversely proportionally to the volume of the universe, which scales as a³. Thus, we can write:
ρ = ρm(a) = ρm₀ / a³
Where ρm₀ is the density of matter at the present time (when a = 1).
To make this more universally applicable, we introduce the critical density (ρc) and the density parameter for matter (Ωm):
ρc = (3H₀² / 8πG)
Ωm ≡ (ρm₀ / ρc) = (8πG / 3H₀²) ρm₀
From these definitions, we can express the matter density as:
ρ = (ρc Ωm / a³)
Similarly, we define density parameters for curvature (Ωk) and the cosmological constant (ΩΛ) at the present epoch:
Ωk ≡ (-kc² / (a₀H₀)²)
ΩΛ ≡ (Λc² / 3H₀²)
Where a₀ = 1 represents the scale factor today.
Now, by substituting all these defined terms back into the original Friedmann equation and expressing the scale factor a in terms of redshift (a = 1 / (1 + z)), we arrive at an expression for the Hubble parameter as a function of redshift:
H²(z) = H₀² (Ωm(1 + z)³ + Ωk(1 + z)² + ΩΛ)
This equation succinctly captures how the expansion rate of a matter-dominated universe, potentially influenced by curvature and a cosmological constant, evolves through cosmic history as observed via redshift.
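The equation above translates directly into code. A minimal sketch with assumed illustrative density parameters (Ωm = 0.3, Ωk = 0, ΩΛ = 0.7; not fitted values):

```python
import math

# H(z) for a matter + curvature + Λ universe, per the equation above.
# Parameters are assumed illustrative values with Ωm + Ωk + ΩΛ = 1.
H0 = 70.0
OM, OK, OL = 0.3, 0.0, 0.7

def hubble(z: float) -> float:
    return H0 * math.sqrt(OM * (1 + z) ** 3 + OK * (1 + z) ** 2 + OL)

print(hubble(0.0))            # recovers H0 today
print(round(hubble(1.0), 1))  # ≈ 123.2 — expansion was faster at z = 1
```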
Matter- and dark energy-dominated universe
Now, let us consider a more realistic and complex scenario: a universe that is not only matter-dominated but also significantly influenced by dark energy. In this case, the total mass density ρ must account for both components:
ρ = ρm(a) + ρde(a)
Where ρde represents the mass density contributed by dark energy. In cosmology, the behavior of dark energy is typically characterized by its equation of state, P = wρc², where P is its pressure and w is the dimensionless equation of state parameter. If we substitute this into the fluid equation, which describes how the mass density of the universe changes over time (a fundamental consequence of energy conservation in an expanding spacetime):
ρ̇ + 3(ȧ / a)(ρ + P/c²) = 0
This can be rewritten as:
dρ / ρ = -3 (da / a) (1 + w)
If we assume that w is a constant (a common, though not universally accepted, simplification), we can integrate this equation:
ln ρ = -3(1 + w) ln a + const.
Which implies that the density scales as:
ρ ∝ a⁻³⁽¹⁺ʷ⁾
Therefore, for dark energy characterized by a constant equation of state w, its density evolves as:
ρde(a) = ρde₀ a⁻³⁽¹⁺ʷ⁾
Where ρde₀ is the dark energy density at the present time. If we then substitute this into the Friedmann equation, and for simplicity, assume a spatially flat universe (i.e., k = 0), the Hubble parameter as a function of redshift becomes:
H²(z) = H₀² (Ωm(1 + z)³ + Ωde(1 + z)³⁽¹⁺ʷ⁾)
If the dark energy originates from a cosmological constant, as originally conceived by Albert Einstein, it can be shown that w = -1. In this specific and widely favored case, the equation simplifies, reducing to the last equation from the matter-dominated universe section (with Ωk naturally set to zero). Here, the initial dark energy density ρde₀ is given by:
ρde₀ = (Λc² / 8πG)
Ωde = ΩΛ
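The constant-w form of H(z) is equally mechanical to evaluate. A sketch for a flat universe with assumed Ωm = 0.3 (so Ωde = 0.7), showing that w = −1 reproduces the cosmological-constant case while a less negative w raises H at high redshift:

```python
import math

# H(z) for a flat universe with matter plus dark energy of constant w,
# following the equation above. Illustrative parameters; w = -1 recovers ΛCDM.
H0, OM = 70.0, 0.3
ODE = 1.0 - OM     # flatness: Ωm + Ωde = 1

def hubble(z: float, w: float) -> float:
    return H0 * math.sqrt(OM * (1 + z) ** 3 + ODE * (1 + z) ** (3 * (1 + w)))

# For w = -1 the dark-energy term is constant, exactly as for Λ:
print(round(hubble(1.0, -1.0), 2))             # ≈ 123.25
# A slightly less negative w lets dark energy dilute, raising H at high z:
print(hubble(1.0, -0.9) > hubble(1.0, -1.0))   # True
```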
However, the universe, in its infinite complexity, might not be so simple. If dark energy possesses a non-constant equation of state w(a), then its density evolution becomes more intricate:
ρde(a) = ρde₀ exp(−3 ∫₁ᵃ (1 + w(a′)) da′ / a′)
To solve this, w(a) must be explicitly parameterized. A common parameterization, for instance, is w(a) = w₀ + wₐ(1 − a). If we substitute this into the Friedmann equation (recalling that a = 1 / (1 + z)), the Hubble parameter takes on an even more complex form:
H²(z) = H₀² (Ωm a⁻³ + Ωde a^(−3(1 + w₀ + wₐ)) e^(−3wₐ(1 − a)))
These progressively more complex equations reflect our ongoing efforts to precisely model the universe's expansion, trying to fit the observed data with the most accurate theoretical framework possible. It’s a painstaking process, but one that slowly, reluctantly, reveals the universe's true nature.
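As a consistency check on this machinery, the sketch below integrates the general exponential form for ρde(a) numerically under the w(a) = w₀ + wₐ(1 − a) parameterization (assumed illustrative values w₀ = −0.9, wₐ = 0.3) and confirms it matches the closed-form density factor appearing in the final equation:

```python
import math

# Check that the closed-form CPL density factor a^(-3(1+w0+wa)) * exp(-3*wa*(1-a))
# matches direct numerical integration of exp(-3 ∫ (1+w(a'))/a' da').
w0, wa = -0.9, 0.3          # assumed illustrative parameters

def w(a):
    return w0 + wa * (1.0 - a)

def density_factor_numeric(a, n=100_000):
    # Midpoint-rule integral of (1 + w(a'))/a' from a to 1.
    step = (1.0 - a) / n
    total = sum((1.0 + w(a + (i + 0.5) * step)) / (a + (i + 0.5) * step)
                for i in range(n)) * step
    return math.exp(3.0 * total)        # ρde(a) / ρde0

def density_factor_closed(a):
    return a ** (-3 * (1 + w0 + wa)) * math.exp(-3 * wa * (1 - a))

a = 0.5                                  # i.e. z = 1
print(abs(density_factor_numeric(a) - density_factor_closed(a)) < 1e-6)  # True
```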
Units derived from the Hubble constant
The Hubble constant H0 is not merely a number; it is a gateway to understanding the fundamental scales of the universe. Its unique units allow for the derivation of two profound cosmological measures: the Hubble time and the Hubble length. These units provide crucial insights into the age and sheer scale of our cosmos.
Hubble time
The Hubble constant H0 is quoted in (km/s)/Mpc, but kilometres and megaparsecs are both units of length; once they cancel, H0 fundamentally possesses units of inverse time. This makes it straightforward to define the Hubble time (tH) as simply the reciprocal of the Hubble constant:
tH ≡ 1 / H0 = 1 / 67.8 (km/s)/Mpc = 4.55 × 10¹⁷ s = 14.4 billion years
This value, approximately 14.4 billion years, serves as a useful benchmark. It represents the hypothetical age of the universe if its expansion had been perfectly linear and constant throughout its history. However, it is important to note that this Hubble time is slightly different from the actual age of the universe, which is currently estimated to be around 13.8 billion years. The discrepancy arises because the universe's expansion has not been linear; it has been influenced by its varying energy content (i.e., matter, radiation, and dark energy), leading to periods of deceleration and, more recently, acceleration (as detailed in the § Derivation of the Hubble parameter section).
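The unit conversion behind the 14.4-billion-year figure is a one-liner (rounded constants):

```python
# Hubble time: 1/H0, converting (km/s)/Mpc into seconds and then gigayears.
KM_PER_MPC = 3.0857e19
SECONDS_PER_YEAR = 3.156e7

def hubble_time_gyr(h0: float) -> float:
    t_seconds = KM_PER_MPC / h0            # 1/H0 in seconds
    return t_seconds / SECONDS_PER_YEAR / 1e9

print(round(hubble_time_gyr(67.8), 1))     # ≈ 14.4 Gyr, as quoted above
```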
We appear to be entering a fascinating cosmic epoch where the expansion of the universe is becoming increasingly exponential, primarily driven by the growing dominance of vacuum energy (a manifestation of dark energy). In this particular regime, the Hubble parameter H approaches a constant value, and the scale factor of the universe a grows by a factor of e (the base of the natural logarithm) during each Hubble time:
H ≡ ȧ / a = constant ⟹ a ∝ e^(Ht) = e^(t/tH)
Similarly, the value H0 ≈ 2.27 Es⁻¹ (that is, H0 expressed in inverse exaseconds, one exasecond being 10¹⁸ seconds) implies that, at the current rate, the universe would expand by a factor of e^2.27 ≈ 9.7 within a single exasecond.
Over the truly vast stretches of cosmic time, the universe's dynamics are complicated by the intricate interplay of general relativity, the mysterious influence of dark energy, and the fleeting, yet profound, epoch of inflation, all of which conspire to make the simple linear expansion a mere approximation.
Hubble length
The Hubble length or Hubble distance is another fundamental unit of distance in cosmology, intrinsically linked to the Hubble constant. It is defined as c H⁻¹—that is, the speed of light c multiplied by the Hubble time (1/H). This distance is equivalent to approximately 4,420 million parsecs or 14.4 billion light years. Intriguingly, the numerical value of the Hubble length when expressed in light years is, by its very definition, numerically identical to the Hubble time when expressed in years.
Substituting D = c H⁻¹ into the equation for Hubble's law (v = H₀ D) reveals its profound significance: the Hubble distance precisely specifies the distance from an observer to those distant galaxies that are currently receding from us at the exact speed of light. It marks a conceptual boundary in our understanding of the observable cosmos, not a physical barrier, but a point where the expansion of space itself reaches a speed equal to light.
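The corresponding arithmetic, again using the H₀ = 67.8 (km/s)/Mpc from the Hubble time section:

```python
# Hubble length: D = c / H0, the distance at which Hubble's law gives v = c.
C = 299_792.458   # speed of light, km/s
MLY_PER_MPC = 3.2616   # million light years per megaparsec

def hubble_length_mpc(h0: float) -> float:
    return C / h0                        # (km/s) / ((km/s)/Mpc) = Mpc

D = hubble_length_mpc(67.8)
print(round(D))                          # ≈ 4422 Mpc
print(round(D * MLY_PER_MPC / 1000, 1))  # ≈ 14.4 billion light years
```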
Hubble volume
The Hubble volume is a concept in cosmology that, like many things in this field, suffers from a degree of definitional ambiguity. It is sometimes broadly defined as a volume of the universe with a comoving size equal to the Hubble length (c H⁻¹). However, the precise geometric interpretation can vary: it is occasionally envisioned as the volume of a sphere with a radius of c H⁻¹, or, alternatively, as a cube with sides of length c H⁻¹.
It is crucial to distinguish the Hubble volume from the volume of the observable universe. While related, they are not interchangeable. The observable universe has a radius that is approximately three times larger than the Hubble length, meaning its volume is significantly greater. Some cosmologists do, colloquially, use the term Hubble volume to refer to the observable universe, but this loose terminology can lead to considerable confusion. The Hubble volume primarily delineates the region of space where objects are receding from us at less than the speed of light due to the Hubble flow at the present cosmic epoch. Beyond it, objects recede faster than light, not through space, but with space.
Determining the Hubble constant
The precise value of the Hubble constant H0 is not something one can simply pluck from the sky. It is, rather, a meticulously derived quantity, born from a complex interplay of astronomical observations and carefully considered cosmological model-dependent assumptions. For decades, as observational technology and theoretical models have advanced, this value has been refined, leading to increasingly accurate—yet, paradoxically, increasingly discordant—sets of measurements. This persistent and statistically significant disagreement between different measurement methodologies has come to be known as the "Hubble tension," a cosmic headache that continues to plague modern cosmology.
Earlier measurements
Edwin Hubble's initial, pioneering estimate of the constant that now bears his name, published in 1929, utilized observations of Cepheid variable stars. These pulsating giants, with their predictable period-luminosity relationship, served as crucial "standard candles" for measuring the vast distances to other galaxies. The figure Hubble arrived at was approximately 500 (km/s)/Mpc. While a monumental achievement for its time, this value was, by modern standards, dramatically too high. The universe, it turned out, was expanding far more slowly than Hubble initially calculated.
The reason for this significant discrepancy became clearer with the later work of astronomer Walter Baade in the 1940s and 50s. Baade's meticulous observations led him to a crucial realization: stars within a galaxy are not all of a single type. He distinguished between distinct "stellar populations" (Population I and Population II), each with different characteristics and evolutionary stages. More importantly for the cosmic distance ladder, Baade discovered that there were, in fact, two types of Cepheid variable stars, each possessing different intrinsic luminosities. This meant that Hubble's original calibration, which implicitly assumed a single type of Cepheid, was systematically flawed. Using this profound discovery, Baade recalculated the Hubble constant and, consequently, the scale of the known universe, effectively doubling Hubble's original 1929 estimate of cosmic distances. He unveiled this revised understanding to considerable astonishment at the 1952 meeting of the International Astronomical Union in Rome, fundamentally resetting our understanding of the cosmos.
For the majority of the latter half of the 20th century, the estimated value of H0 remained a subject of vigorous debate, generally fluctuating between 50 and 90 (km/s)/Mpc. This wide variance was exacerbated by a long-standing and, frankly, rather bitter controversy between two prominent astronomers: Gérard de Vaucouleurs, who staunchly advocated for a value closer to 100 (km/s)/Mpc, and Allan Sandage, who just as vehemently argued for a value nearer to 50 (km/s)/Mpc. The scientific discourse, in this period, was far from genteel. In a particularly vivid demonstration of the academic vitriol, when Sandage and his colleague Gustav Andreas Tammann formally acknowledged some shortcomings in their methodology in 1975, de Vaucouleurs responded with thinly veiled disdain: "It is unfortunate that this sober warning was so soon forgotten and ignored by most astronomers and textbook writers." The intensity of this disagreement even led to a public debate in 1996, moderated by John Bahcall, between Sidney van den Bergh and Gustav Tammann, echoing the earlier, equally dramatic Shapley–Curtis debate over the scale of the universe.
This previously expansive range of estimates began to narrow considerably with the advent of the ΛCDM model of the universe in the late 1990s. This standard cosmological model, combining a cosmological constant (Λ) and cold dark matter, provided a more robust theoretical framework. Incorporating this model, new observational techniques began to converge. Measurements of high-redshift galaxy clusters using X-ray and microwave wavelengths via the Sunyaev–Zel'dovich effect, detailed analyses of anisotropies in the cosmic microwave background radiation (the afterglow of the Big Bang), and large-scale optical surveys all began to yield values for the Hubble constant clustering around 50–70 km/s/Mpc. A consensus, it seemed, was finally emerging from the cosmic fog.
Precision cosmology and the Hubble tension
By the late 1990s and early 2000s, the field of cosmology had entered an era of "precision cosmology," marked by significant advancements in both theoretical understanding and observational technology. This allowed for measurements of unprecedented accuracy, pushing the boundaries of our understanding of the universe. However, this newfound precision brought with it a perplexing new problem: two major categories of measurement methods, each boasting high internal precision, stubbornly refused to agree on the value of H0.
"Late universe" measurements, which rely on the traditional calibrated distance ladder techniques (starting from local measurements and extending outwards), have consistently converged on a value of approximately 73 (km/s)/Mpc. Meanwhile, "early universe" techniques, which became available around 2000 and are based on meticulously measuring the properties of the cosmic microwave background (CMB) radiation—the universe's oldest light—have consistently yielded a value closer to 67.7 (km/s)/Mpc. It's crucial to note that these "early universe" measurements are not directly measuring H0 in the early universe, but rather inferring its current value by modeling the change in the expansion rate since the early universe, making them directly comparable to the "late universe" figures.
Initially, this discrepancy was considered to be within the estimated measurement uncertainties of the various techniques, and therefore, no immediate cause for alarm. However, as both sets of techniques have been rigorously refined and their estimated uncertainties meticulously reduced, the discrepancies have, rather inconveniently, not diminished. The gap has remained, and indeed, widened, to the point where the disagreement is now profoundly statistically significant, exceeding a 5-sigma level (meaning there's less than a 1 in 3.5 million chance that the difference is due to random error alone for Gaussian errors). This persistent and profound disagreement is what has been dubbed the Hubble tension.
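The "1 in 3.5 million" figure is simply the one-sided Gaussian tail probability at 5σ:

```python
import math

# One-sided Gaussian tail probability of a 5-sigma deviation:
# P(X > 5σ) = 0.5 * erfc(5 / sqrt(2)).
p = 0.5 * math.erfc(5.0 / math.sqrt(2.0))
print(p)                         # ≈ 2.87e-7
print(round(1.0 / p / 1e6, 1))   # ≈ 3.5 — i.e. about 1 chance in 3.5 million
```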
For example, the highly precise Planck mission, representing the pinnacle of "early universe" CMB measurements, published a value for H0 of 67.4±0.5 (km/s)/Mpc in 2018. In stark contrast, the "late universe" camp, led by measurements from the Hubble Space Telescope (HST), determined a higher value of 74.03±1.42 (km/s)/Mpc. This latter value was further corroborated by observations from the immensely powerful James Webb Space Telescope (JWST) in 2023, effectively ruling out many potential systematic errors in the HST data. The "early" and "late" measurements now disagree at a level that is beyond any plausible level of chance, forcing cosmologists to confront a fundamental challenge to their understanding of the universe. The resolution to this deep-seated disagreement remains one of the most active and pressing areas of research in modern cosmology.
[Figure: The landscape of H0 measurements around 2021, with the 2018 results from CMB measurements highlighted in pink and 2020 distance ladder values highlighted in cyan.]
Reducing systematic errors
The existence of the Hubble tension has naturally spurred an intense, almost frantic, effort within the cosmology community to scrutinize every aspect of the measurement methodologies. Since 2013, there has been a relentless focus on identifying and mitigating potential systematic errors, alongside a concerted drive to improve the reproducibility and independent verification of all measurements.
The "late universe" or distance ladder measurements typically unfold in a sequence of three crucial stages, or "rungs." The first rung involves determining distances to relatively nearby Cepheid variable stars. This requires meticulous care to minimize luminosity errors that can arise from interstellar dust and the subtle correlations of metallicity (the abundance of elements heavier than hydrogen and helium) with a Cepheid's intrinsic brightness. The second rung of the ladder utilizes Type Ia supernovae. These cataclysmic stellar explosions are particularly valuable because they are believed to result from the detonation of white dwarfs reaching a critical mass, leading to explosions that release an almost constant amount of light. This makes them exceptionally luminous and consistent "standard candles," capable of probing much greater cosmic distances. The primary systematic error here often stems from the limited number of such objects that can be observed with sufficient precision. The third and final rung involves measuring the redshift of these distant supernovae to extract the pure Hubble flow, from which the Hubble constant is then derived. At this stage, crucial corrections must be applied to account for any motion other than expansion, such as local gravitational pulls.
As a prime example of the painstaking work involved in reducing systematic errors, recent photometry observations from the James Webb Space Telescope (JWST) of extra-galactic Cepheids have provided a powerful independent check. The JWST's unparalleled resolution allowed astronomers to avoid issues like stellar crowding in the field of view that could potentially bias earlier Hubble Space Telescope (HST) measurements. Crucially, these JWST observations have confirmed the HST's findings, yielding the same value for H0. This strong agreement effectively rules out a significant class of systematic errors related to instrumental resolution or crowding, further solidifying the "late universe" value and, in turn, deepening the Hubble tension.
Conversely, the "early universe" or inverse distance ladder methods operate on an entirely different physical principle. They measure the observable consequences of spherical sound waves that propagated through the primordial plasma of the early universe. These pressure waves, known as baryon acoustic oscillations (BAO), effectively froze in place once the universe cooled sufficiently for electrons to bind with nuclei. This event, known as recombination, ended the opaque plasma era and allowed photons (light particles) that were previously trapped by interactions with the plasma to freely escape, forming the cosmic microwave background (CMB). The subsequent pressure waves left subtle, yet detectable, imprints: very small perturbations in the density of the plasma, which are evident both in the detailed structure of the cosmic microwave background and in the large-scale distribution of galaxies across the sky. By matching the intricate patterns observed in high-precision CMB measurements to theoretical physics models of these oscillations, a value for the Hubble constant can be derived. Similarly, the BAO features affect the statistical distribution of matter, which is observed as the clustering of distant galaxies.
The fact that these two entirely independent measurement approaches—one building up from local, relatively nearby objects, the other inferring from the universe's earliest light—produce consistently different values for the Hubble constant (within the framework of the current ΛCDM model) provides strong evidence that the discrepancy is not simply due to easily identifiable systematic errors within the measurements themselves. The problem, it seems, lies deeper, perhaps in the very fabric of our cosmological model.
Other kinds of measurements
Beyond the two dominant methodologies—the calibrated distance ladder and cosmic microwave background (CMB) measurements—a variety of other ingenious methods have been developed and employed to constrain the value of the Hubble constant, each offering a unique observational window into the universe's expansion.
One particularly promising alternative involves analyzing transient celestial events that are observed in multiple images produced by a phenomenon known as strong gravitational lensing. When a massive galaxy or galaxy cluster acts as a cosmic lens, it can bend the light from a more distant source, creating multiple distorted images of that source. If the distant source is a transient event, such as a supernova, its light will arrive at Earth at slightly different times through each lensed path due to varying path lengths and gravitational delays. If this "time delay" between the appearances of the multiple images can be precisely measured, it can be used to constrain the Hubble constant. This technique, known as "time-delay cosmography," was first proposed by Sjur Refsdal in 1964, decades before the first strongly lensed object was even observed. The first strongly lensed supernova discovered, SN Refsdal, was named in his honor. While Refsdal initially envisioned using supernovae, he also noted the potential of extremely luminous, distant, star-like objects—later identified as quasars. To date (April 2025), the majority of time-delay cosmography measurements have indeed relied on strongly lensed quasars, simply because the current samples of such lensed quasars vastly outnumber known lensed supernovae (of which fewer than 10 are currently known). This situation is expected to change dramatically in the coming years, with surveys like the Vera C. Rubin Observatory (LSST) projected to discover approximately 10 lensed supernovae within its first three years of operation. Examples of H0 constraints from this method include results from the STRIDES and H0LiCOW collaborations, which are included in the table below.
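The geometry behind time-delay cosmography can be sketched simply: the observed delay Δt equals the "time-delay distance" D_Δt = (1 + z_l)·D_l·D_s/D_ls multiplied by the lens model's Fermat potential difference and divided by c, and D_Δt scales as 1/H0. The lens redshifts and potential difference below are invented toy values, not those of any published system:

```python
import numpy as np

C_KM_S = 299792.458      # speed of light, km/s
SEC_PER_DAY = 86400.0
KM_PER_MPC = 3.0857e19

def comoving_dist(z, h0, om=0.3):
    """Comoving distance (Mpc) in flat LambdaCDM via a simple Riemann sum."""
    zs = np.linspace(0.0, z, 5000)
    e = np.sqrt(om * (1 + zs) ** 3 + (1 - om))
    return (C_KM_S / h0) * float(np.sum(1.0 / e)) * (zs[1] - zs[0])

def time_delay_distance(h0, z_lens, z_src):
    """D_dt = (1 + z_l) * D_l * D_s / D_ls, angular diameter distances in Mpc."""
    dc_l, dc_s = comoving_dist(z_lens, h0), comoving_dist(z_src, h0)
    d_l = dc_l / (1 + z_lens)
    d_s = dc_s / (1 + z_src)
    d_ls = (dc_s - dc_l) / (1 + z_src)   # flat-universe expression
    return (1 + z_lens) * d_l * d_s / d_ls

# Toy lens configuration and Fermat potential difference (both invented):
z_l, z_s = 0.5, 2.0
dphi = 2.0e-11   # dimensionless (radians squared), assumed lens-model output

# D_dt scales as 1/H0, so a higher H0 predicts a shorter delay between images.
for h0 in (67.0, 73.0):
    ddt_km = time_delay_distance(h0, z_l, z_s) * KM_PER_MPC
    dt_days = ddt_km * dphi / C_KM_S / SEC_PER_DAY
    print(h0, f"{dt_days:.1f} days")
```

Inverting the logic is the actual measurement: given the observed Δt and a well-constrained lens model, H0 is the value that makes the predicted delay match, with the lens mass model dominating the systematic error budget.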
In October 2018, scientists unveiled a revolutionary new approach: using information gleaned from gravitational wave events, particularly those involving the spectacular merger of neutron stars (like GW170817), to determine the Hubble constant. These "standard sirens" provide a unique, independent way to measure cosmic distances, as the amplitude of the gravitational wave signal directly relates to the luminosity distance of the source.
Building on this, in July 2019, astronomers reported a refined method for determining the Hubble constant using the mergers of neutron stars as "cosmic rulers." Following the detection of GW170817 together with its electromagnetic counterpart (gravitational wave events lacking such a counterpart are termed "dark sirens"), they measured H0 to be 70.3+5.3−5.0 (km/s)/Mpc, a value whose uncertainty comfortably spans both camps, though its central value sits slightly closer to the "late universe" measurements.
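At the low redshift of GW170817, the inference is essentially Hubble's law run in reverse: one division. The numbers below are approximately those reported for the event and its host galaxy NGC 4993:

```python
# For a nearby standard siren, H0 ≈ v_H / d_L: the gravitational wave
# amplitude yields the luminosity distance, and the host galaxy's redshift
# (corrected for peculiar velocity) yields the Hubble-flow velocity.
# Values approximate those reported for GW170817 / NGC 4993.
d_lum = 43.8        # luminosity distance from the GW signal, Mpc
v_hubble = 3017.0   # Hubble-flow velocity of the host, km/s
print(f"H0 ≈ {v_hubble / d_lum:.1f} (km/s)/Mpc")
```

This naive ratio gives about 69 (km/s)/Mpc; the published analysis, which marginalizes over the binary's inclination angle and the peculiar-velocity uncertainty, reported 70.0+12.0−8.0 (the 2017-10-16 entry in the table below).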
Also in July 2019, another independent method emerged, utilizing data from the Hubble Space Telescope and focusing on distances to red giant stars calculated using the "tip of the red-giant branch" (TRGB) distance indicator. This method yielded a value of 69.8+1.9−1.9 (km/s)/Mpc, placing it somewhat in between the two main camps, though still closer to the "early universe" estimates.
February 2020 saw the publication of independent results from the Megamaser Cosmology Project, which employs astrophysical masers—naturally occurring microwave lasers—visible at cosmological distances. These measurements offer a purely geometric distance, bypassing the multi-step calibration inherent in the distance ladder. This work confirmed the higher "late universe" distance ladder results, differing from the "early universe" values at a statistical significance level of 95%.
In July 2020, new measurements of the cosmic background radiation by the Atacama Cosmology Telescope further complicated the picture, predicting that the universe should be expanding more slowly than is currently observed by "late universe" methods, thus reinforcing the "early universe" side of the tension.
Most recently, in July 2023, an independent estimate of the Hubble constant was derived from a kilonova—the optical afterglow of a neutron star merger—using the expanding photosphere method. Due to the precise blackbody nature of early kilonova spectra, such systems provide exceptionally strong constraints on cosmic distances. Using the kilonova AT2017gfo (yet another consequence of the GW170817 event), these measurements indicated a local estimate of the Hubble constant of 67.0±3.6 (km/s)/Mpc, aligning more with the "early universe" predictions.
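A cartoon of the expanding photosphere method, with every number invented (chosen only so the result lands near the ~40 Mpc distance of AT2017gfo): a blackbody fit to the spectrum gives a temperature and a bolometric flux, which together fix the angular radius of the photosphere; spectral-line velocities times the elapsed time give its physical radius; the ratio of the two is the distance.

```python
import numpy as np

SIGMA_SB = 5.670374e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
MPC_M = 3.0857e22        # metres per Mpc

# Toy inputs (assumed, not the actual AT2017gfo analysis values):
T = 5000.0               # fitted blackbody temperature, K
F = 2.2e-15              # bolometric flux, W/m^2
v = 0.25 * 2.998e8       # photospheric expansion velocity ~0.25c, m/s
t = 1.5 * 86400.0        # time since merger, s

# Blackbody: F = sigma * T^4 * (R/D)^2, so the angular radius follows
# directly from the fit, and R = v * t fixes the physical radius.
theta = np.sqrt(F / (SIGMA_SB * T ** 4))   # angular radius, radians
D = v * t / theta                           # distance, metres
print(f"D ≈ {D / MPC_M:.1f} Mpc")
```

Combining such a geometric distance with the host galaxy's Hubble-flow velocity then yields H0, exactly as in the standard siren case.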
The sheer diversity of these independent methodologies, each operating on different physics and observing different cosmic phenomena, only serves to underscore the robustness of the individual measurements while simultaneously deepening the mystery of the Hubble tension. The universe, it seems, enjoys being consistently inconsistent.
Estimated values of the Hubble constant, 2001–2020. Estimates in black represent calibrated distance ladder measurements which tend to cluster around 73 (km/s)/Mpc; red represents early universe CMB/BAO measurements with ΛCDM parameters which show good agreement on a figure near 67 (km/s)/Mpc, while blue are other techniques, whose uncertainties are not yet small enough to decide between the two.
Possible resolutions of the Hubble tension
The root cause of the Hubble tension remains one of the most pressing and frustrating enigmas in modern cosmology. The lack of a clear explanation has spawned a veritable cottage industry of proposed solutions, each attempting to bridge the chasm between the "early universe" and "late universe" measurements.
The most conservative, and perhaps least exciting, explanation is that there exists an as-yet-undiscovered systematic error affecting either the "early universe" or "late universe" observations. While intuitively appealing (human error is, after all, a constant), this explanation faces significant hurdles. It would require multiple, unrelated systematic effects to be at play, regardless of which camp is ultimately deemed "incorrect." Furthermore, any such error would need to consistently affect several different instruments and observational techniques, as both sets of measurements are derived from diverse data sources. As of now, no obvious candidate for such a pervasive systematic error has emerged that could reconcile the discrepancy.
Alternatively, it could be that the observations themselves are perfectly correct, but our interpretation is flawed due to some unaccounted-for cosmic effect. One intriguing, albeit radical, possibility is that the fundamental cosmological principle—the assumption that the universe is homogeneous and isotropic on large scales—might, in some subtle way, be failing (see Lambda-CDM model § Violations of the cosmological principle). If, for instance, we happened to be located within an exceptionally large, local cosmic void, extending to a redshift of approximately 0.5, this could potentially bias our "late universe" measurements, leading to an artificially inflated Hubble constant. Such a scenario, however, would need to be carefully reconciled with existing supernovae and baryon acoustic oscillation observations, a non-trivial task. Another less likely possibility is that the uncertainties in the measurements themselves have been underestimated, but given the rigorous internal consistency within each measurement camp, this explanation seems increasingly improbable and, even if true, wouldn't fully resolve the fundamental tension.
Finally, and perhaps most excitingly for physicists yearning for new discoveries, the Hubble tension could herald the advent of new physics beyond the currently accepted cosmological model of the universe, the ΛCDM model. This category of solutions is vast and imaginative. For example, replacing general relativity with a modified theory of gravity could potentially resolve the tension by altering the expansion history of the universe. Alternatively, the introduction of an early dark energy component—a form of dark energy that had a non-negligible impact in the very early universe (unlike standard ΛCDM, where its effect is considered minimal until later epochs)—could also shift the "early universe" prediction of H0. Other proposals include dark energy with a time-varying equation of state (where w is not a constant -1), or even exotic scenarios where dark matter decays into "dark radiation" (new, relativistic particles).
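To see how a non-standard equation of state alters the expansion history, one can write the Friedmann equation for a flat universe with a constant dark-energy w; setting w = -1 recovers ΛCDM. A sketch with assumed parameter values:

```python
import numpy as np

def hubble_rate(z, h0=70.0, om=0.3, w=-1.0):
    """H(z) in a flat universe whose dark energy has constant equation of state w.

    The dark-energy density scales as (1+z)^(3*(1+w)), which is constant
    for w = -1 (a cosmological constant, i.e. LambdaCDM).
    """
    de = (1 - om) * (1 + z) ** (3 * (1 + w))
    return h0 * np.sqrt(om * (1 + z) ** 3 + de)

# Both models give H(0) = H0 by construction, but for w != -1 the
# expansion history differs at earlier times, shifting inferred parameters.
for w in (-1.0, -0.9):
    print(w, f"{hubble_rate(1.0, w=w):.1f}")
```

This is why such models can, in principle, move the "early universe" prediction of H0: the CMB constrains the expansion history as a whole, not H0 directly, so changing how dark energy evolves changes the H0 that best fits the same data.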
The inherent challenge for all these "new physics" theories is that both "early universe" and "late universe" measurements are supported by multiple, independent lines of physics. It is exceedingly difficult to modify one aspect of the ΛCDM model to resolve the Hubble tension without inadvertently undermining its spectacular successes in explaining other cosmological phenomena. The sheer scale of this challenge is evident in the ongoing debate: some authors argue that new early-universe physics alone is insufficient, while others contend that new late-universe physics alone also falls short. Nonetheless, the scientific community, driven by the tantalizing prospect of a paradigm shift, continues its relentless pursuit. Interest in the Hubble tension has grown exponentially since the mid-2010s, transforming it into a fertile ground for theoretical innovation and observational ingenuity. The universe, it seems, still has plenty of surprises left.
Measurements of the Hubble constant
| Date published | Hubble constant (km/s)/Mpc | Observer | Citation | Remarks / methodology |
|---|---|---|---|---|
| 2025-05-27 | 70.39±1.94 | W. Freedman et al. | [98] | This measurement utilized the Tip of the Red Giant Branch (TRGB) method, a powerful distance indicator relying on the consistent peak luminosity of the brightest red giant stars in a galaxy. The study incorporated data from both the James Webb Space Telescope (JWST) and the Hubble Space Telescope (HST), and also reported values derived from J-Region Asymptotic Giant Branch (JAGB) stars and Cepheid variable stars for comparison, offering a comprehensive look at late-universe calibrations. |
| 2025-01-14 | 75.7+8.1−5.5 | Pascale et al. | [100] | A groundbreaking measurement of H0 derived from the timing delay of gravitationally lensed images of Supernova H0pe, a Type Ia supernova. This method is entirely independent of the traditional cosmic distance ladder and the cosmic microwave background (CMB), offering a unique probe. The data for this study was acquired using the James Webb Space Telescope. (As of early 2025, this and the 2023-05-11 entry are the only two values obtained using this specific time-delay supernova lensing method). |
| 2024-12-01 | 72.6±2.0 | SH0ES+CCHP JWST | [101] | This represents a collaborative effort utilizing data from the James Webb Space Telescope (JWST) and combining three distinct distance measurement methods: Cepheid variable stars, the Tip of the Red Giant Branch (TRGB), and J-Region Asymptotic Giant Branch (JAGB) stars. The agreement across these different indicators, reinforced by JWST's high resolution, further strengthens the "late universe" value. |
| 2023-07-19 | 67.0±3.6 | Sneppen et al. | [86], [84] | This estimate was derived from the unique properties of kilonovae, specifically the optical afterglow of a neutron star merger (AT2017gfo, following GW170817). The early spectra of kilonovae exhibit a strong blackbody nature, allowing for the application of the expanding photosphere method to provide robust and strongly constraining estimates of cosmic distance. This measurement aligns more closely with "early universe" values. |
| 2023-07-13 | 68.3±1.5 | SPT-3G | [102] | A measurement derived from the temperature (TT), E-mode polarization (TE), and B-mode polarization (EE) power spectrum of the cosmic microwave background (CMB) as observed by the South Pole Telescope (SPT-3G). This result shows less than a 1-sigma discrepancy with the Planck mission results, reinforcing the "early universe" CMB values. |
| 2023-05-11 | 66.6+4.1−3.3 | P. L. Kelly et al. | [103] | This estimate utilized the rare phenomenon of time-delay measurements from gravitationally lensed images of Supernova Refsdal. This technique provides a geometric measurement of cosmic distances, making it independent of both the cosmic distance ladder and the cosmic microwave background. |
| 2022-12-14 | 67.3+10.0−9.1 | S. Contarini et al. | [104] | Derived from the statistics of cosmic voids—vast, empty regions of space—using the BOSS DR12 data set. This method offers an independent probe of large-scale structure and cosmic expansion, with its uncertainty reflecting the current challenges in precisely modeling void dynamics. |
| 2022-06-17 | 75.4+3.8−3.7 | T. de Jaeger et al. | [107] | This measurement employed Type II supernovae as standardizable candles, offering an independent route to the Hubble constant. The analysis used 13 Type II supernovae whose host-galaxy distances were determined using Cepheid variable stars, the Tip of the Red Giant Branch (TRGB), and geometric distances (specifically for NGC 4258). |
| 2022-02-08 | 73.4+0.99−1.22 | Pantheon+ | [106] | A highly precise measurement based on the Type Ia supernova distance ladder (combined with data from the SH0ES collaboration). This represents one of the strongest determinations from the "late universe" perspective, further solidifying the higher value and the statistical significance of the Hubble tension. |
| 2021-12-08 | 73.04±1.04 | SH0ES | [108] | A highly precise measurement from the SH0ES collaboration, combining Cepheid variable-Type Ia supernova distance ladder data from the Hubble Space Telescope (HST), Gaia EDR3 parallaxes, and the "Pantheon+" supernova sample. This result maintains a 5-sigma discrepancy with the Planck mission CMB measurements, highlighting the robustness of the Hubble tension. |
| 2021-09-17 | 69.8±1.7 | W. Freedman | [109] | An independent measurement derived from the Tip of the Red Giant Branch (TRGB) distance indicator, utilizing data from the Hubble Space Telescope (HST) and Gaia EDR3. This method provides a cross-check for Cepheid-based measurements and tends to yield values closer to the CMB results. |
| 2020-12-16 | 72.1±2.0 | Hubble Space Telescope and Gaia EDR3 | [110] | This work combined earlier studies on red giant stars (using the Tip of the Red Giant Branch (TRGB) distance indicator) with highly precise parallax measurements of Omega Centauri from Gaia EDR3. It aimed to improve the calibration of the TRGB method and its application to the Hubble constant. |
| 2020-12-15 | 73.2±1.3 | Hubble Space Telescope and Gaia EDR3 | [111] | A significant update from the SH0ES collaboration, combining Hubble Space Telescope (HST) photometry and Gaia EDR3 parallaxes for Milky Way Cepheid variable stars. This drastically reduced the uncertainty in the calibration of Cepheid luminosities to 1.0%, leading to an overall uncertainty in H0 of 1.8%, with expectations for further reduction to 1.3% with larger Type Ia supernova samples. |
| 2020-12-04 | 73.5±5.3 | E. J. Baxter, B. D. Sherwin | [112] | This method uses gravitational lensing in the cosmic microwave background (CMB) to estimate H0 without relying on the sound horizon scale, providing an independent way to analyze Planck mission data and offering an alternative perspective on the CMB constraints. |
| 2020-11-25 | 71.8+3.9−3.3 | P. Denzel et al. | [113] | This study determined H0 to a precision of 5% by analyzing eight quadruply lensed galaxy systems. This method is independent of both distance ladders and the cosmic microwave background and yielded a value consistent with both "early" and "late" universe estimates, suggesting a potential bridge for the tension. |
| 2020-11-07 | 67.4±1.0 | T. Sedgwick et al. | [114] | Derived from 88 Type Ia supernovae (with redshifts between 0.02 and 0.05) used as standard candle distance indicators. The H0 estimate was carefully corrected for the effects of peculiar velocities in the supernova environments, estimated from the local galaxy density field. This result assumes a ΛCDM model with Ωm = 0.3, ΩΛ = 0.7, and a sound horizon of 149.3 Mpc. |
| 2020-09-29 | 67.6+4.3−4.2 | S. Mukherjee et al. | [116] | This measurement utilized gravitational waves, specifically assuming that the transient ZTF19abanrh, discovered by the Zwicky Transient Facility, was the optical counterpart to the binary black hole merger GW190521. This offers a novel, independent method for cosmological parameter estimation. |
| 2020-06-18 | 75.8+5.2−4.9 | T. de Jaeger et al. | [117] | Another measurement using Type II supernovae as standardizable candles. This analysis focused on 7 Type II supernovae whose host-galaxy distances were determined from either Cepheid variable stars or the Tip of the Red Giant Branch (TRGB), further exploring independent distance indicators. |
| 2020-02-26 | 73.9±3.0 | Megamaser Cosmology Project | [82] | This measurement is based on purely geometric distance measurements to galaxies hosting megamasers—powerful natural masers that can be observed at cosmological distances. This method is entirely independent of the distance ladder and the cosmic microwave background and provides strong support for the higher "late universe" values. |
| 2019-10-14 | 74.2+2.7−3.0 | STRIDES | [118] | This result comes from modeling the mass distribution and time delay of the lensed quasar DES J0408-5354, a key component of the STRIDES collaboration's efforts to use gravitational lensing for cosmological parameter estimation. |
| 2019-09-12 | 76.8±2.6 | SHARP/H0LiCOW | [119] | This measurement utilized a combination of ground-based adaptive optics and Hubble Space Telescope observations to model three gravitationally lensed objects and their lensing galaxies. This method is independent of both the cosmic distance ladder and CMB measurements, providing a valuable cross-check. |
| 2019-08-20 | 73.3+1.36−1.35 | K. Dutta et al. | [120] | This value for H0 was obtained by analyzing a wide range of low-redshift cosmological data within the framework of the ΛCDM model. The datasets included Type Ia supernovae, baryon acoustic oscillations, time-delay measurements from strong lensing, H(z) measurements from cosmic chronometers, and growth measurements from large-scale structure observations. |
| 2019-08-15 | 73.5±1.4 | M. J. Reid, D. W. Pesce, A. G. Riess | [121] | This measurement involved precisely determining the distance to Messier 106 (NGC 4258) using its central supermassive black hole (via megamasers), combined with measurements of eclipsing binaries in the Large Magellanic Cloud. This provides a crucial geometric calibration for the distance ladder. |
| 2019-07-16 | 69.8±1.9 | Hubble Space Telescope | [79], [80], [81] | This measurement, utilizing data from the Hubble Space Telescope, derived distances to red giant stars using the Tip of the Red Giant Branch (TRGB) distance indicator. It offered an independent pathway to H0, generally yielding a value that sits between the CMB and Cepheid-based measurements. |
| 2019-07-10 | 73.3+1.7−1.8 | H0LiCOW collaboration | [122] | An updated and refined measurement from the H0LiCOW collaboration, based on observations of multiply imaged quasars. This iteration used six quasars, further strengthening the independence of this method from the cosmic distance ladder and cosmic microwave background measurements, and contributing to the higher range of H0 values. |
| 2019-07-08 | 70.3+5.3−5.0 | The LIGO Scientific Collaboration and The Virgo Collaboration | [78] | This measurement leveraged the radio counterpart of the gravitational wave event GW170817 (a neutron star merger), combining it with earlier gravitational wave (GW) and electromagnetic (EM) data. Because the electromagnetic counterpart was observed, this is a "standard siren" measurement, providing a way to determine H0 that is completely independent of both the distance ladder and the CMB. |
| 2019-03-28 | 68.0+4.2−4.1 | Fermi-LAT | [123] | This method derived H0 by analyzing the attenuation of gamma rays from distant blazars due to interaction with the extragalactic background light. It offers an independent probe of cosmic expansion, distinct from both the cosmic distance ladder and the cosmic microwave background. |
| 2019-03-18 | 74.03±1.42 | Hubble Space Telescope | [68] | A pivotal measurement from the SH0ES collaboration. Precision Hubble Space Telescope (HST) photometry of Cepheid variable stars in the Large Magellanic Cloud (LMC) significantly reduced the uncertainty in the LMC's distance from 2.5% to 1.3%. This revision pushed the tension with cosmic microwave background (CMB) measurements to the 4.4-sigma level (a probability of 99.999% for Gaussian errors), placing the discrepancy firmly beyond a plausible level of chance and intensifying the Hubble tension. |
| 2019-02-08 | 67.78+0.91−0.87 | Joseph Ryan et al. | [124] | This study used a combination of quasar angular size measurements and baryon acoustic oscillations (BAO) to constrain H0, assuming a flat ΛCDM model. The authors noted that alternative cosmological models could result in different (generally lower) values for the Hubble constant. |
| 2018-11-06 | 67.77±1.30 | Dark Energy Survey | [125] | This measurement utilized Type Ia supernovae from the Dark Energy Survey (DES) and applied the inverse distance ladder method, which is calibrated using baryon acoustic oscillations. This approach links the late-universe supernovae to early-universe physics. |
| 2018-09-05 | 72.5+2.1−2.3 | H0LiCOW collaboration | [126] | Observations of multiply imaged quasars provided this value, independent of both the cosmic distance ladder and cosmic microwave background measurements. This geometric method offers a valuable alternative probe of the universe's expansion rate. |
| 2018-07-18 | 67.66±0.42 | Planck Mission | [64] | These are the final results from the Planck Mission, representing the most precise "early universe" measurement of H0 based on the cosmic microwave background (CMB) within the ΛCDM model. This value forms the lower end of the Hubble tension discrepancy. |
| 2018-04-27 | 73.52±1.62 | Hubble Space Telescope and Gaia | [127], [128] | This measurement from the SH0ES collaboration combined additional Hubble Space Telescope (HST) photometry of galactic Cepheid variable stars with early Gaia parallax measurements. The revised value further increased the tension with cosmic microwave background (CMB) measurements to the 3.8-sigma level. |
| 2018-02-22 | 73.45±1.66 | Hubble Space Telescope | [129], [130] | This SH0ES collaboration result incorporated new parallax measurements of galactic Cepheid variable stars, obtained through spatially scanning the Hubble Space Telescope, to enhance the calibration of the distance ladder. The resulting H0 value suggested a discrepancy with CMB measurements at the 3.7-sigma level, with further reductions in uncertainty anticipated with the final release of the Gaia catalog. |
| 2017-10-16 | 70.0+12.0−8.0 | The LIGO Scientific Collaboration and The Virgo Collaboration | [131] | This was a pioneering "standard siren" measurement, entirely independent of traditional "standard candle" techniques. The gravitational wave analysis of a binary neutron star (BNS) merger, GW170817, directly estimated the luminosity distance to cosmological scales. This groundbreaking event provided the first direct measurement of H0 from gravitational waves, with expectations that fifty similar detections in the next decade could significantly arbitrate the tension between other methodologies. Future detections of neutron star-black hole mergers (NSBH) are also expected to offer even greater precision. |
| 2016-11-22 | 71.9+2.4−3.0 | Hubble Space Telescope | [134] | This measurement, from the H0 Lenses in COSMOGRAIL's Wellspring (H0LiCOW) collaboration, used time delays between multiple images of distant variable sources (like quasars) produced by strong gravitational lensing. This geometric method offers an independent and robust way to measure H0. |
| 2016-08-04 | 76.2+3.4−2.7 | Cosmicflows-3 | [135] | This comprehensive study compared redshift data with various other distance measurement methods, including the Tully–Fisher relation, Cepheid variable stars, and Type Ia supernovae. A more restrictive analysis of the data suggested a more precise value of 75±2 (km/s)/Mpc, reflecting the ongoing efforts to refine the distance ladder. |
| 2016-07-13 | 67.6+0.7−0.6 | SDSS-III Baryon Oscillation Spectroscopic Survey (BOSS) | [136] | This measurement was derived from baryon acoustic oscillations (BAO) observed in the distribution of galaxies. The extended survey (eBOSS), which began in 2014 and ran through 2020, was designed to explore the cosmic epoch when the universe was transitioning from gravity-driven deceleration to dark-energy-driven acceleration (3 to 8 billion years after the Big Bang), providing crucial data for H0 constraints. |
| 2016-05-17 | 73.24±1.74 | Hubble Space Telescope | [138] | A measurement by the SH0ES collaboration using Type Ia supernovae. This work highlighted the expectation that the uncertainty in H0 would be reduced by a factor of more than two with upcoming Gaia measurements and other improvements in distance ladder calibration. |
| 2015-02 | 67.74±0.46 | Planck Mission | [139], [140] | These results stem from a comprehensive analysis of the Planck mission's full dataset, publicly released in February 2015. They represented the most precise "early universe" constraints on H0 derived from the cosmic microwave background (CMB) at the time. |
| 2013-10-01 | 74.4±3.0 | Cosmicflows-2 | [141] | This study, a predecessor to Cosmicflows-3, compared redshift measurements with various other distance determination methods, including Tully–Fisher, Cepheid variable stars, and Type Ia supernovae, contributing to the "late universe" side of the debate. |
| 2013-03-21 | 67.80±0.77 | Planck Mission | [52], [142], [143], [144], [145] | The initial release of data from the European Space Agency's (ESA) Planck cosmology probe, launched in May 2009. Over a four-year period, Planck conducted an unprecedentedly detailed investigation of cosmic microwave radiation using advanced HEMT radiometers and bolometer technology. This initial data release included a new all-sky CMB map and the mission's first determination of the Hubble constant, which was significantly lower than many "late universe" estimates, setting the stage for the Hubble tension. |
| 2012-12-20 | 69.32±0.80 | WMAP (9 years), combined with other measurements | [146] | This value represents the culmination of nine years of data from the Wilkinson Microwave Anisotropy Probe (WMAP), combined with other cosmological measurements. WMAP provided crucial data on the cosmic microwave background, refining earlier estimates of H0 from the early universe. |
| 2010 | 70.4+1.3−1.4 | WMAP (7 years), combined with other measurements | [147] | These values were derived from fitting a combination of seven-year WMAP data and other cosmological datasets to the simplest version of the ΛCDM model. If more general versions of the model were used, H0 tended to be smaller and more uncertain, typically around 67±4 (km/s)/Mpc, though some models allowed values near 63 (km/s)/Mpc. |
| 2010 | 71.0±2.5 | WMAP only (7 years). | [147] | This specific value was obtained solely from the seven-year dataset of the Wilkinson Microwave Anisotropy Probe (WMAP) observations, without combining it with other external cosmological data. |
| 2009-02 | 70.5±1.3 | WMAP (5 years), combined with other measurements | [149] | A measurement based on five years of data from the Wilkinson Microwave Anisotropy Probe (WMAP), integrated with other cosmological observations to refine the Hubble constant. |
| 2009-02 | 71.9+2.6−2.7 | WMAP only (5 years) | [149] | This value was derived exclusively from the five-year dataset of the Wilkinson Microwave Anisotropy Probe (WMAP), offering an early universe constraint on H0. |
| 2007 | 70.4+1.5−1.6 | WMAP (3 years), combined with other measurements | [150] | This measurement incorporated three years of data from the Wilkinson Microwave Anisotropy Probe (WMAP), combined with additional cosmological observations, providing an earlier estimate of H0 from the cosmic microwave background. |
| 2006-08 | 76.9+10.7−8.7 | Chandra X-ray Observatory | [151] | This measurement combined Chandra X-ray observations of galaxy clusters with radio measurements of the Sunyaev–Zel'dovich effect, yielding geometric distance estimates to the clusters that are independent of the cosmic distance ladder. |