QUICK FACTS
Created Jan 0001
Status Verified Sarcastic
Type Existential Dread
peter debye, statistical mechanics, thermodynamics, kinetic theory, particle statistics, spin–statistics theorem, indistinguishable particles, maxwell–boltzmann, bose–einstein, fermi–dirac

Debye Model

Contents
  • 1. Derivation
  • 2. Debye’s derivation
  • 3. Another derivation
  • 4. Temperature limits
  • 5. Debye versus Einstein

This article, it seems, has multiple issues. A textbook, they say. Lacking citations. As if the sheer effort of explaining fundamental physics isn’t enough. Well, don’t worry, I’m here. Not because I want to be, mind you, but because apparently, some things just have to be spelled out. And if you’re going to learn, you might as well learn it properly, even if it means enduring my perpetually unimpressed commentary.

[Peter Debye](/Peter_Debye)

[Statistical mechanics](/Statistical_mechanics)

  • [Thermodynamics](/Thermodynamics)
  • [Kinetic theory](/Kinetic_theory_of_gases)

[Particle statistics](/Particle_statistics)

  • [Spin–statistics theorem](/Spin%E2%80%93statistics_theorem)
  • [Indistinguishable particles](/Indistinguishable_particles)
  • [Maxwell–Boltzmann](/Maxwell%E2%80%93Boltzmann_statistics)
  • [Bose–Einstein](/Bose%E2%80%93Einstein_statistics)
  • [Fermi–Dirac](/Fermi%E2%80%93Dirac_statistics)
  • [Parastatistics](/Parastatistics)
  • [Anyonic statistics](/Anyon)
  • [Braid statistics](/Braid_statistics)

[Thermodynamic ensembles](/Statistical_ensemble_(mathematical_physics))

  • NVE [Microcanonical](/Microcanonical_ensemble)
  • NVT [Canonical](/Canonical_ensemble)
  • µVT [Grand canonical](/Grand_canonical_ensemble)
  • NPH [Isoenthalpic–isobaric](/Isoenthalpic%E2%80%93isobaric_ensemble)
  • NPT [Isothermal–isobaric](/Isothermal%E2%80%93isobaric_ensemble)

Models

  • Debye
  • [Einstein](/Einstein_solid)
  • [Ising](/Ising_model)
  • [Potts](/Potts_model)

[Potentials](/Thermodynamic_potential)

  • [Internal energy](/Internal_energy)
  • [Enthalpy](/Enthalpy)
  • [Helmholtz free energy](/Helmholtz_free_energy)
  • [Gibbs free energy](/Gibbs_free_energy)
  • [Grand potential / Landau free energy](/Grand_potential)

Scientists

  • [Maxwell](/James_Clerk_Maxwell)
  • [Boltzmann](/Ludwig_Boltzmann)
  • [Helmholtz](/Hermann_von_Helmholtz)
  • [Bose](/Satyendra_Nath_Bose)
  • [Gibbs](/Josiah_Willard_Gibbs)
  • [Einstein](/Albert_Einstein)
  • [Dirac](/Paul_Dirac)
  • [Ehrenfest](/Paul_Ehrenfest)
  • [von Neumann](/John_von_Neumann)
  • [Tolman](/Richard_C._Tolman)
  • [Debye](/Peter_Debye)
  • [Fermi](/Enrico_Fermi)
  • [Synge](/John_Lighton_Synge)
  • [Ising](/Ernst_Ising)
  • [Landau](/Lev_Landau)

In the intricate realm of thermodynamics and solid-state physics, where the microscopic dance of atoms dictates macroscopic properties, the Debye model stands as a foundational method. Developed by the rather prolific Peter Debye in 1912, its primary purpose is to provide a more accurate estimation of the specific heat (or heat capacity) in a solid by accounting for the quantum nature of atomic lattice vibrations.

Before Debye, Albert Einstein had made a commendable, if ultimately flawed, attempt with his Einstein solid model in 1907. Einstein’s model treated the solid as an assembly of many individual, non-interacting quantum harmonic oscillators, each vibrating at a single, characteristic frequency. While this model correctly predicted the Dulong–Petit law at high temperatures (where classical physics surprisingly reasserts itself), it catastrophically failed at low temperatures, predicting an exponential decay of heat capacity rather than the observed power law.

Debye, with a clear-eyed assessment of Einstein’s limitations, recognized that the vibrations in a solid are not isolated. They are collective phenomena, propagating through the material as waves, much like sound. He conceptualized these quantized lattice vibrations as phonons – quasi-particles representing quanta of vibrational energy – confined within the solid’s volume, much like how Planck’s law of black body radiation treats electromagnetic radiation as a photon gas in a cavity.

This elegant shift in perspective allowed the Debye model to correctly predict the crucial low-temperature dependence of the heat capacity of solids, which is found to be proportional to the cube of the absolute temperature – a relationship famously known as the Debye T^3 law. Furthermore, it gracefully recovers the Dulong–Petit law at high temperatures, just as Einstein’s model did. The ingenuity lies in its ability to bridge the gap between these two extremes. However, due to certain simplifying assumptions – which we’ll get to, don’t worry – its accuracy tends to suffer somewhat at intermediate temperatures. But then again, what model doesn’t have its compromises?

Derivation

Now, for the part where we pretend the universe is a neatly calculable system. The Debye model fundamentally treats the collective atomic vibrations within a solid as phonons – quantized packets of vibrational energy – which are effectively “confined” within the boundaries of the solid’s volume. This approach draws a rather striking analogy to Planck’s law of black body radiation, where electromagnetic radiation is viewed as a photon gas contained within an empty space. The underlying mathematical framework shares considerable common ground, as both scenarios involve a massless Bose gas exhibiting a linear dispersion relation at low energies.

Consider, if you must, a cubic solid with a side-length of L. The resonating modes of these sonic disturbances, for simplicity, initially considered along just one axis, can be described by wavelengths that are quantized, much like a particle in a box problem. These wavelengths are given by the rather straightforward expression:

$$ \lambda_n = {2L \over n}\,, $$

where n is a positive integer representing the mode number. Each phonon carries a discrete amount of energy, which, as with all quantum particles, is related to its frequency:

$$ E_n = h\nu_n\,, $$

here, h is the venerable Planck constant, and $\nu_n$ is the frequency of the phonon. Now, making the approximation that the frequency is inversely proportional to the wavelength – essentially assuming a constant speed of sound – we can write the energy as:

$$ E_n = h\nu_n = {hc_{\rm {s}} \over \lambda_n} = {hc_{\rm {s}}n \over 2L}\,, $$

where $c_s$ represents the speed of sound propagating through the solid. This is a crucial simplification, and one that, predictably, introduces some limitations. In three dimensions, this energy relationship can be generalized, reflecting the directional nature of the momentum of these quasi-particles:

$$ E_n^2 = p_n^2c_{\rm {s}}^2 = \left({hc_{\rm {s}} \over 2L}\right)^2\left(n_x^2+n_y^2+n_z^2\right)\,, $$

where $p_n$ is the magnitude of the three-dimensional momentum of the phonon, and $n_x$, $n_y$, and $n_z$ are the integer components of the resonating mode along each of the three orthogonal axes.

It’s important to pause and acknowledge the approximation made here: assuming the frequency is inversely proportional to the wavelength (which implies a constant speed of sound across all frequencies). This simplification holds fairly well for low-energy phonons, those with long wavelengths, where the discrete atomic structure of the solid is less apparent. However, for high-energy phonons, with shorter wavelengths approaching the interatomic spacing, this linear dispersion relation breaks down. The actual relationship between frequency and wavelength becomes more complex. This divergence from reality is a primary reason why the Debye model yields inaccurate results at intermediate temperatures, though, somewhat conveniently, it becomes exact again at the low and high temperature limits. A model of compromises, as I said.

The total internal energy, U, within the solid is obtained by summing the energies of all individual phonons across all possible energy levels. Each level’s contribution is the product of its energy and the average number of phonons occupying that level.

$$ U = \sum_n E_n\,{\bar {N}}(E_n)\,, $$

where ${\bar {N}}(E_n)$ denotes the average number of phonons within the box possessing energy $E_n$. Extending this to three dimensions, where each combination of mode numbers $(n_x, n_y, n_z)$ corresponds to a distinct energy level, the total energy becomes:

$$ U = \sum_{n_x}\sum_{n_y}\sum_{n_z}E_n\,{\bar {N}}(E_n)\,. $$

Here lies a critical distinction between the Debye model and Planck’s law of black body radiation. Unlike electromagnetic photon radiation in a cavity, which can theoretically have infinitely high frequencies and thus infinite mode numbers, there is a finite number of phonon energy states in a solid. A phonon’s frequency is inherently bounded by the discrete nature of its propagation medium – the atomic lattice itself. The minimum possible wavelength for a phonon is approximately twice the atomic separation. Any wavelength shorter than this would imply atoms vibrating out of phase within a single unit cell, which isn’t a physically distinct mode.

If we consider a cubic solid containing N atoms, each axis of this cube would effectively span $N^{1/3}$ atoms. Consequently, the atomic separation can be expressed as $L/N^{1/3}$. Given our understanding of the minimum wavelength, $\lambda_{min}$, we can state:

$$ \lambda_{\rm {min}} = {2L \over {\sqrt[{3}]{N}}}\,. $$

This directly implies a maximum mode number, $n_{max}$, along each axis:

$$ n_{\rm {max}} = {\sqrt[{3}]{N}}\,. $$

This upper bound is the fundamental difference from photons, for which the maximum mode number is, rather inconveniently, infinite. This finite limit is then applied to the triple energy sum:

$$ U = \sum_{n_x}^{\sqrt[{3}]{N}}\sum_{n_y}^{\sqrt[{3}]{N}}\sum_{n_z}^{\sqrt[{3}]{N}}E_n\,{\bar {N}}(E_n)\,. $$

When the energy function, $E_n$, varies slowly with respect to n, these discrete sums can be approximated by continuous integrals, which, let’s be honest, are often far easier to deal with:

$$ U \approx \int_{0}^{\sqrt[{3}]{N}}\int_{0}^{\sqrt[{3}]{N}}\int_{0}^{\sqrt[{3}]{N}}E(n)\,{\bar {N}}\left(E(n)\right)\,dn_x\,dn_y\,dn_z\,. $$

To evaluate this integral, we first need to know the function ${\bar {N}}(E)$, which represents the average number of phonons with energy E. As phonons are bosons (particles with integer spin), they adhere to Bose–Einstein statistics. Their distribution is given by the rather elegant Bose–Einstein formula:

$$ \langle N\rangle_{BE} = {1 \over e^{E/kT}-1}\,. $$

However, a phonon isn’t quite as simple as a single scalar vibration. It possesses three possible polarization states: one longitudinal (vibrating parallel to the direction of propagation) and two transverse (vibrating perpendicular to the direction of propagation). These different polarizations, while approximately not affecting the phonon’s energy, must be accounted for in the density of states. Therefore, the formula for ${\bar {N}}(E)$ must be multiplied by 3:

$$ {\bar {N}}(E) = {3 \over e^{E/kT}-1}\,. $$

Considering all three polarization states collectively also necessitates defining an effective speed of sound, $c_{\rm {eff}}$, which is used in place of the standard sonic velocity, $c_s$. This $c_{\rm {eff}}$ is a weighted average of the longitudinal ($c_{\rm {long}}$) and transverse ($c_{\rm {trans}}$) sound-wave velocities, reflecting their respective contributions:

$$ T_{\rm {D}}^{-3}\propto c_{\rm {eff}}^{-3}:={\frac {1}{3}}c_{\rm {long}}^{-3}+{\frac {2}{3}}c_{\rm {trans}}^{-3} $$

The Debye temperature, $T_{\rm {D}}$ (which we’ll define more formally soon), is directly proportional to this effective speed of sound: since $T_{\rm {D}}^{-3}\propto c_{\rm {eff}}^{-3}$, we have $T_{\rm {D}}\propto c_{\rm {eff}}$, which means that a higher Debye temperature signifies a harder, more rigidly bonded crystal.

Substituting the expression for ${\bar {N}}(E)$ back into the energy integral yields:

$$ U = \int_{0}^{\sqrt[{3}]{N}}\int_{0}^{\sqrt[{3}]{N}}\int_{0}^{\sqrt[{3}]{N}}E(n)\,{3 \over e^{E(n)/kT}-1}\,dn_x\,dn_y\,dn_z\,. $$

These integrals, particularly with the finite upper limits, are not always trivial to evaluate, especially for complex geometries. To approximate this triple integral, Peter Debye employed a rather clever, if geometrically imprecise, trick. He switched to spherical coordinates:

$$ (n_x,n_y,n_z)=(n\sin \theta \cos \phi ,\,n\sin \theta \sin \phi ,\,n\cos \theta )\,, $$

and, in a move that would make any pure mathematician wince, approximated the cubic integration volume with an eighth of a sphere. This is done because, in the positive octant ($n_x, n_y, n_z > 0$), the spherical coordinate system simplifies the integration considerably.

$$ U \approx \int_{0}^{\pi /2}\int_{0}^{\pi /2}\int_{0}^{R}E(n)\,{3 \over e^{E(n)/kT}-1}\,n^2\sin \theta \,dn\,d\theta \,d\phi \,, $$

where R is the radius of this imaginary sphere. Since the energy function $E(n)$ depends only on the magnitude n (which is analogous to the radial coordinate in spherical space) and not on the angular variables $\theta$ or $\phi$, the angular integrals can be performed separately:

$$ 3\int_{0}^{\pi /2}\int_{0}^{\pi /2}\sin \theta \,d\theta \,d\phi \int_{0}^{R}E(n)\,{\frac {1}{e^{E(n)/kT}-1}}n^2\,dn={\frac {3\pi }{2}}\int_{0}^{R}E(n)\,{\frac {1}{e^{E(n)/kT}-1}}n^2\,dn\,. $$

The radius R of this approximating sphere is determined by ensuring that the total number of modes (or “particles”) within the eighth of a sphere is equivalent to the total number of modes in the original cube. Given that the cube in mode space has a volume of N (one mode per unit cell of mode space), and the volume of an eighth-sphere is $\frac{1}{8} \cdot \frac{4}{3}\pi R^3$, we equate these:

$$ N = {1 \over 8}\cdot {4 \over 3}\pi R^3\,. $$

Solving for R gives:

$$ R = {\sqrt[{3}]{6N \over \pi }}\,. $$

This substitution of a sphere for a cube is, of course, another source of inherent inaccuracy in the Debye model, particularly at the boundaries of the integration. It’s a simplification for mathematical tractability, not a perfect representation of reality.
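A one-off numeric check that this radius really does conserve the mode count (the helper name below is mine, purely illustrative):

```python
import math

# Eighth-sphere radius chosen so that (1/8)(4/3)*pi*R^3 = N,
# i.e. R = (6N/pi)^(1/3), as derived above.
def debye_sphere_radius(num_atoms):
    return (6 * num_atoms / math.pi) ** (1 / 3)

N = 10_000
R = debye_sphere_radius(N)
eighth_sphere_volume = (1 / 8) * (4 / 3) * math.pi * R**3
assert math.isclose(eighth_sphere_volume, N)  # mode counts agree
```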

After performing the spherical substitution and inserting the expression for $E(n)$, the energy integral transforms into:

$$ U={3\pi \over 2}\int_{0}^{R}{hc_{\rm {s}}n \over 2L}\,{n^{2} \over e^{hc_{\rm {s}}n/2LkT}-1}\,dn\,. $$

To further simplify this rather cumbersome expression, a new dimensionless integration variable, $x$, is introduced:

$$ x={hc_{\rm {s}}n \over 2LkT} $$

With this substitution, and a bit of algebraic rearrangement, the energy integral becomes:

$$ U={3\pi \over 2}kT\left({2LkT \over hc_{\rm {s}}}\right)^3\int_{0}^{hc_{\rm {s}}R/2LkT}{x^3 \over e^x-1}\,dx\,. $$

Now, to make this expression less of an eyesore, we define the celebrated Debye temperature, $T_{\rm {D}}$. This parameter encapsulates the material properties and fundamental constants into a single, convenient temperature scale:

$$ T_{\rm {D}}\ {\stackrel {\mathrm {def} }{=}}\ {hc_{\rm {s}}R \over 2Lk}={hc_{\rm {s}} \over 2Lk}{\sqrt[{3}]{6N \over \pi }}={hc_{\rm {s}} \over 2k}{\sqrt[{3}]{{6 \over \pi }{N \over V}}} $$

where V is the volume of the cubic box (which is $L^3$).
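Plugging numbers into this definition makes the scale concrete. A minimal sketch, assuming illustrative values – a sound speed of 3000 m/s and a copper-like number density of $8.5\times 10^{28}$ atoms/m³, which are rough assumptions rather than measured constants of any particular material:

```python
import math

h = 6.626e-34  # Planck constant, J s
k = 1.381e-23  # Boltzmann constant, J/K

def debye_temperature(c_s, n_density):
    """T_D = (h c_s / 2k) * (6/pi * N/V)^(1/3), from the definition above."""
    return (h * c_s / (2 * k)) * (6 / math.pi * n_density) ** (1 / 3)

# Assumed illustrative values: c_s ~ 3000 m/s, N/V ~ 8.5e28 atoms/m^3.
T_D = debye_temperature(3000, 8.5e28)
# The result lands in the few-hundred-kelvin range typical of real solids.
assert 300 < T_D < 500
```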

Some authors, in their infinite wisdom, might describe the Debye temperature as merely a shorthand for a collection of constants and material-dependent variables. However, a more insightful interpretation is that $kT_{\rm {D}}$ (where k is the Boltzmann constant) roughly corresponds to the energy of the highest-frequency phonon mode in the solid – the minimum wavelength mode, as it were. This means we can conceptualize the Debye temperature as the characteristic temperature at which these highest-frequency vibrational modes become significantly excited. By extension, if the highest-frequency modes are excited, it implies that all lower-energy modes are also excited. It’s a measure of the stiffness or “hardness” of the crystal, reflecting the strength of interatomic bonds and the speed at which vibrations propagate.

From the total energy U, the specific internal energy (energy per mole or per atom) can be calculated. Dividing by $Nk$ (where N is the number of atoms and k is the Boltzmann constant) yields the specific internal energy, here expressed in units of temperature:

$$ {\frac {U}{Nk}}=9T\left({T \over T_{\rm {D}}}\right)^3\int_{0}^{T_{\rm {D}}/T}{x^3 \over e^x-1}\,dx=3T\,D_3\left({T_{\rm {D}} \over T}\right)\,, $$

where $D_3(x)$ is the third-order Debye function, a special mathematical function that arises frequently in this context. To obtain the heat capacity at constant volume, $C_V$, we differentiate this expression with respect to temperature T. This produces the dimensionless heat capacity:

$$ {\frac {C_V}{Nk}}=9\left({T \over T_{\rm {D}}}\right)^3\int_{0}^{T_{\rm {D}}/T}{x^4e^x \over \left(e^x-1\right)^2}\,dx\,. $$

These comprehensive formulae describe the Debye model’s predictions across all temperature ranges. The more elementary formulae, which we will revisit later, represent the asymptotic behaviors in the limits of very low and very high temperatures. The remarkable accuracy of the Debye model at these extremes stems from two key factors: at low frequencies, it accurately captures the dispersion relation $E(\nu)$, and at high temperatures, it correctly approximates the overall density of states (where $\int g(\nu)\,d\nu \equiv 3N$), which dictates the total number of vibrational modes per frequency interval.
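At intermediate temperatures the heat-capacity integral has no closed form, so in practice it is evaluated numerically. A minimal sketch (the helper below is my own, not a standard routine) using a midpoint rule, with sanity checks against the two asymptotic limits:

```python
import math

def debye_cv_over_nk(T, T_D, steps=100_000):
    """Dimensionless heat capacity C_V/(Nk) = 9 (T/T_D)^3 * integral,
    evaluated with a simple midpoint rule (the integrand is finite at x=0)."""
    upper = T_D / T
    h = upper / steps
    total = 0.0
    for i in range(steps):
        x = (i + 0.5) * h
        ex = math.exp(x)
        total += x**4 * ex / (ex - 1.0) ** 2
    return 9 * (T / T_D) ** 3 * total * h

# High-temperature limit: the Dulong-Petit value, C_V/Nk -> 3.
assert abs(debye_cv_over_nk(10_000, 100) - 3.0) < 1e-3
# Low-temperature limit: the Debye T^3 law, C_V/Nk -> (12 pi^4 / 5)(T/T_D)^3.
T, T_D = 1.0, 100.0
assert abs(debye_cv_over_nk(T, T_D) - (12 * math.pi**4 / 5) * (T / T_D) ** 3) < 1e-6
```

The midpoint rule suffices because the integrand is smooth and bounded; any quadrature routine would do equally well.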

Debye’s derivation

One might think that such a complex derivation was Debye’s first and only pass at the problem. However, Peter Debye arrived at his famous equation through a somewhat different, and arguably more intuitive, path. His original derivation relied on insights from continuum mechanics, treating the solid as a continuous elastic medium rather than a discrete lattice of atoms.

From this continuum perspective, he determined that the number of vibrational states with a frequency less than a particular value, $\nu$, asymptotically followed the relationship:

$$ n\sim {1 \over 3}\nu ^3VF\,, $$

where V represents the volume of the solid, and F is a factor that Debye meticulously calculated using the material’s elasticity coefficients and its density. This implies that the density of states, or the number of available vibrational modes, increases rapidly with frequency.

Combining this formula with the average energy expected for a harmonic oscillator at a given temperature T (a concept already utilized by Einstein in his own model), one might initially propose an expression for the total energy U as:

$$ U=\int_{0}^{\infty }{h\nu ^3VF \over e^{h\nu /kT}-1}\,d\nu \,, $$

if, rather naively, one were to assume that vibrational frequencies could extend indefinitely to infinity. This form, as it turns out, correctly produces the characteristic $T^3$ behavior observed at low temperatures. However, Debye, being acutely aware of physical reality, recognized a fundamental constraint: for a solid composed of N atoms, there could not possibly be more than $3N$ distinct vibrational states (three degrees of freedom for each atom).

To reconcile this continuum approximation with the discrete nature of atomic solids, Debye introduced a crucial assumption: he proposed that the spectrum of frequencies for the vibrational states would continue to follow the aforementioned cubic rule, but only up to a specific maximum frequency, which he denoted as $\nu_m$. This maximum frequency was carefully chosen such that the total number of states, when integrated up to $\nu_m$, precisely equaled $3N$:

$$ 3N={1 \over 3}\nu_m^3VF\,. $$

Debye, ever the pragmatist, was well aware that this assumption wasn’t perfectly accurate. He knew that the higher frequencies in a real crystal lattice are not as widely spaced as his continuum approximation suggested. Nevertheless, this pragmatic cutoff served a vital purpose: it guaranteed that the model would correctly reproduce the Dulong–Petit law at high temperatures, where all $3N$ classical vibrational modes are fully excited.

With this cutoff frequency $\nu_m$ in place, the total energy U is then given by the modified integral:

$$ {\begin{aligned}U&=\int_{0}^{\nu_{m}}{h\nu ^3VF \over e^{h\nu /kT}-1}\,d\nu \,,\\&=VFkT(kT/h)^3\int_{0}^{T_{\rm {D}}/T}{x^3 \over e^x-1}\,dx\,.\end{aligned}} $$

Finally, by substituting the Debye temperature $T_{\rm {D}}$ for $h\nu_m/k$ (where k is the Boltzmann constant), the expression for the total internal energy takes its characteristic form:

$$ {\begin{aligned}U&=9NkT(T/T_{\rm {D}})^3\int_{0}^{T_{\rm {D}}/T}{x^3 \over e^x-1}\,dx\,,\\&=3NkT\,D_3(T_{\rm {D}}/T)\,,\end{aligned}} $$

where $D_3$ is the function that was later formally named the third-order Debye function. This derivation, while conceptually simpler, captured the essential physics and provided the same powerful results as the more detailed approach. It demonstrates that sometimes, a brilliant approximation can be more fruitful than a perfectly rigorous, yet intractable, calculation.

Another derivation

For those who enjoy redundancy, or perhaps prefer a slightly different flavor of mathematical exposition, let’s explore yet another derivation. This one, drawing inspiration from Terrell L. Hill’s “An Introduction to Statistical Mechanics,” meticulously constructs the vibrational frequency distribution from first principles.

Imagine a three-dimensional isotropic elastic solid – a conceptual block of material where properties are uniform in all directions – containing N atoms, shaped as a rectangular parallelepiped with side-lengths $L_x$, $L_y$, and $L_z$. Within this solid, elastic waves propagate, obeying the wave equation. For simplicity, we consider these to be plane waves characterized by a wave vector $\mathbf{k} = (k_x, k_y, k_z)$. We define the directional cosines $l_x, l_y, l_z$ as:

$$ l_x={\frac {k_x}{|\mathbf {k} |}}\,,\quad l_y={\frac {k_y}{|\mathbf {k} |}}\,,\quad l_z={\frac {k_z}{|\mathbf {k} |}}\,, $$

such that they satisfy the condition:

$$ l_x^2+l_y^2+l_z^2=1. $$

Solutions to the wave equation for such a system can be expressed as sinusoidal functions:

$$ u(x,y,z,t)=\sin(2\pi \nu t)\sin \left({\frac {2\pi l_xx}{\lambda }}\right)\sin \left({\frac {2\pi l_yy}{\lambda }}\right)\sin \left({\frac {2\pi l_zz}{\lambda }}\right) $$

To ensure these waves are standing waves confined within the parallelepiped, we apply boundary conditions where the displacement u must be zero at the surfaces: $u=0$ at $x,y,z=0$ and $x=L_x, y=L_y, z=L_z$. These conditions quantize the possible wavelengths:

$$ {\frac {2l_xL_x}{\lambda }}=n_x\,;\quad {\frac {2l_yL_y}{\lambda }}=n_y\,;\quad {\frac {2l_zL_z}{\lambda }}=n_z\,, $$

where $n_x, n_y, n_z$ are positive integers representing the mode numbers along each dimension. Now, incorporating the fundamental dispersion relation for sound waves, $c_s = \lambda\nu$ (where $c_s$ is the speed of sound and $\nu$ is the frequency), we can substitute $\lambda = c_s/\nu$ into the boundary conditions:

$$ {\frac {n_x^2}{(2\nu L_x/c_s)^2}}+{\frac {n_y^2}{(2\nu L_y/c_s)^2}}+{\frac {n_z^2}{(2\nu L_z/c_s)^2}}=1. $$

This equation, for a fixed frequency $\nu$, describes the surface of an eighth of an ellipsoid in “mode space” – a conceptual space defined by the integer mode numbers $(n_x, n_y, n_z)$. We only consider an eighth because $n_x, n_y, n_z$ must be positive integers. The number of allowed modes with a frequency less than $\nu$ is equivalent to the number of integer points enclosed within this ellipsoid. In the limit of a very large parallelepiped ($L_x, L_y, L_z \to \infty$), this discrete counting can be accurately approximated by the continuous volume of the ellipsoid.

Thus, the number of modes, $N(\nu)$, with frequencies ranging from 0 to $\nu$, is given by:

$$ N(\nu )={\frac {1}{8}}{\frac {4\pi }{3}}\left({\frac {2\nu }{c_{\mathrm {s} }}}\right)^3L_xL_yL_z={\frac {4\pi \nu ^3V}{3c_{\mathrm {s} }^3}}, $$

where $V = L_xL_yL_z$ is the total volume of the parallelepiped. It’s also important to note that the speed of a longitudinal wave can differ from that of a transverse wave. Furthermore, there is one longitudinal polarization and two transverse polarizations for sound waves. To account for this, an effective speed of sound $c_s$ is often defined by:

$$ {\frac {3}{c_{s}^{3}}}={\frac {1}{c_{\text{long}}^{3}}}+{\frac {2}{c_{\text{trans}}^{3}}} $$

Following the derivation presented in “A First Course in Thermodynamics,” we introduce an upper limit to the possible frequencies of vibration, denoted as $\nu_D$. Since a solid with N atoms possesses $3N$ distinct quantum harmonic oscillators (three degrees of freedom per atom), these oscillators collectively vibrate across the frequency range $[0, \nu_D]$. This maximum frequency $\nu_D$ is determined by equating the total number of modes to $3N$:

$$ 3N=N(\nu_{\rm {D}})={\frac {4\pi \nu_{\rm {D}}^3V}{3c_{\rm {s}}^3}} $$

By defining $\nu_D = \frac{kT_D}{h}$, where k is the Boltzmann constant and h is the Planck constant, and substituting this into the expression for $N(\nu)$, we get a more standard definition for the density of states:

$$ N(\nu )={\frac {3Nh^3\nu ^3}{k^3T_{\rm {D}}^3}}\,. $$

This expression allows us to determine the energy contribution from all oscillators vibrating at a particular frequency $\nu$. For quantum harmonic oscillators, the allowed energy levels are $E_i = (i + 1/2)h\nu$, where $i = 0, 1, 2, \ldots$. Using Maxwell–Boltzmann statistics (which, for bosons at these energy scales, approximates Bose–Einstein statistics when the occupation numbers are low), the number of particles with energy $E_i$ is:

$$ n_i={\frac {1}{A}}e^{-E_i/(kT)}={\frac {1}{A}}e^{-(i+1/2)h\nu /(kT)}. $$

The total energy contribution from oscillators with frequency $\nu$ is then found by summing over all possible energy levels:

$$ dU(\nu )=\sum_{i=0}^{\infty }E_i{\frac {1}{A}}e^{-E_i/(kT)}. $$

Recognizing that the sum of all $n_i$ must equal the total number of modes $dN(\nu)$ at frequency $\nu$:

$$ {\frac {1}{A}}e^{-h\nu /(2kT)}\sum_{i=0}^{\infty }e^{-ih\nu /(kT)}={\frac {1}{A}}e^{-h\nu /(2kT)}{\frac {1}{1-e^{-h\nu /(kT)}}}=dN(\nu )\,. $$

From this, we can derive an expression for $1/A$. Substituting this back into the energy contribution equation $dU(\nu)$ and performing some algebraic manipulation (which I won’t bore you with in excruciating detail, but trust me, it involves geometric series sums and derivatives), we arrive at the average energy per mode:

$$ {\begin{aligned}dU&=dN(\nu )\,e^{h\nu /(2kT)}(1-e^{-h\nu /(kT)})\sum_{i=0}^{\infty }h\nu (i+1/2)e^{-h\nu (i+1/2)/(kT)}\\&=dN(\nu )(1-e^{-h\nu /(kT)})\sum_{i=0}^{\infty }h\nu (i+1/2)e^{-h\nu i/(kT)}\\&=dN(\nu )\,h\nu \left({\frac {1}{2}}+(1-e^{-h\nu /(kT)})\sum_{i=0}^{\infty }ie^{-h\nu i/(kT)}\right)\\&=dN(\nu )\,h\nu \left({\frac {1}{2}}+{\frac {1}{e^{h\nu /(kT)}-1}}\right).\end{aligned}} $$

The term $h\nu/2$ represents the zero-point energy of the harmonic oscillator, a quantum mechanical effect present even at absolute zero. The second term is the temperature-dependent contribution from excited states. Integrating this expression over all allowed frequencies from 0 to $\nu_D$ yields the total internal energy U:

$$ U={\frac {9Nh^4}{k^3T_{\rm {D}}^3}}\int_{0}^{\nu_{\rm {D}}}\left({\frac {1}{2}}+{\frac {1}{e^{h\nu /(kT)}-1}}\right)\nu ^3\,d\nu \,. $$

This derivation, while perhaps more circuitous than Debye’s original, reinforces the quantum mechanical underpinnings of the model and explicitly shows the contribution of zero-point energy, even if it often cancels out in calculations of heat capacity (which depend on changes in energy).
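The geometric-series manipulation waved through above – in particular $(1-q)\sum_i i\,q^i = q/(1-q)$ with $q = e^{-h\nu/(kT)}$, which produces the $1/(e^{h\nu/(kT)}-1)$ occupation factor – can be spot-checked numerically (the helper name is mine):

```python
import math

def mean_occupation(x, terms=5_000):
    """(1 - q) * sum_i i*q^i with q = exp(-x), where x = h*nu/(k*T);
    by the geometric-series identity this equals 1/(e^x - 1)."""
    q = math.exp(-x)
    partial = sum(i * q**i for i in range(terms))
    return (1 - q) * partial

x = 0.7  # an arbitrary value of h*nu/(k*T)
assert math.isclose(mean_occupation(x), 1 / (math.exp(x) - 1), rel_tol=1e-9)
```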

Temperature limits

The real beauty of the Debye model often lies in its ability to accurately describe the behavior of heat capacity at the extremes of temperature, where the full integral can be simplified into more tractable forms.

Low-Temperature Limit: A Debye solid is considered to be at a low temperature when T is significantly less than the Debye temperature, i.e., $T \ll T_{\rm {D}}$. In this regime, the upper limit of the integral, $T_{\rm {D}}/T$, becomes very large. Consequently, the integral can be approximated by extending its upper limit to infinity without introducing significant error, as the exponential term in the denominator ($e^x - 1$) grows very rapidly, effectively making contributions from higher x values negligible.

$$ {\frac {C_V}{Nk}}\sim 9\left({T \over T_{\rm {D}}}\right)^3\int_{0}^{\infty }{x^4e^x \over \left(e^x-1\right)^2}\,dx\,. $$

This definite integral is a known result from complex analysis (related to the Riemann zeta function) and can be evaluated exactly, yielding a rather elegant constant:

$$ {\frac {C_V}{Nk}}\sim {12\pi ^4 \over 5}\left({T \over T_{\rm {D}}}\right)^3\,. $$

This result is the famous Debye T^3 law. Its profound significance lies in its accurate prediction of the observed cubic dependence of heat capacity on temperature for many insulating, crystalline solids at very low temperatures. Physically, at these low temperatures, only the longest-wavelength, lowest-energy phonons are excited. These long-wavelength vibrations are insensitive to the discrete atomic structure and behave much like waves in a continuous elastic medium, which is precisely what the Debye model captures so well in this limit. The approximations inherent in the model, particularly the continuum approximation and the spherical integration, become remarkably accurate here.
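For the skeptical, the quoted constant is easy to verify: the definite integral equals $4\pi^4/15$, and multiplying by the prefactor 9 recovers $12\pi^4/5$. A quick numeric confirmation:

```python
import math

def integrand(x):
    ex = math.exp(x)
    return x**4 * ex / (ex - 1.0) ** 2

# Midpoint rule on [0, 60]; the integrand decays like x^4 * e^{-x},
# so the tail beyond 60 is negligible.
upper, steps = 60.0, 600_000
h = upper / steps
val = sum(integrand((i + 0.5) * h) for i in range(steps)) * h

assert math.isclose(val, 4 * math.pi**4 / 15, rel_tol=1e-6)            # integral value
assert math.isclose(9 * (4 * math.pi**4 / 15), 12 * math.pi**4 / 5)    # prefactor
```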

High-Temperature Limit: Conversely, a Debye solid is said to be at a high temperature when T is much greater than the Debye temperature, i.e., $T \gg T_{\rm {D}}$. In this scenario, the upper limit of the integral, $T_{\rm {D}}/T$, becomes very small. For small values of x, the exponential term $e^x$ can be approximated by its Taylor expansion: $e^x \approx 1 + x$. Thus, the denominator $e^x - 1 \approx x$.

$$ {\frac {C_V}{Nk}}\sim 9\left({T \over T_{\rm {D}}}\right)^3\int_{0}^{T_{\rm {D}}/T}{x^4 \over x^2}\,dx $$

Simplifying the integrand to $x^2$ and performing the integration:

$$ {\frac {C_V}{Nk}}\sim 9\left({T \over T_{\rm {D}}}\right)^3\int_{0}^{T_{\rm {D}}/T}x^2\,dx = 9\left({T \over T_{\rm {D}}}\right)^3 \left[\frac{x^3}{3}\right]_0^{T_{\rm {D}}/T} = 9\left({T \over T_{\rm {D}}}\right)^3 \frac{1}{3}\left({T_{\rm {D}} \over T}\right)^3 = 3\,. $$

This yields the result:

$$ {\frac {C_V}{Nk}}\sim 3\,. $$

This is precisely the Dulong–Petit law, which states that the molar heat capacity of a solid is approximately $3R$ (where R is the ideal gas constant; $3R = 3N_{\rm A}k$, with $N_{\rm A}$ being Avogadro’s number). This law is a hallmark of classical physics, suggesting that at high temperatures, each atom effectively acts as a classical harmonic oscillator with three degrees of freedom, each contributing $kT$ to the energy (or $k$ to the heat capacity). The Debye model correctly recovers this classical limit, demonstrating its versatility. While remarkably accurate, it doesn’t account for phenomena like anharmonicity – the deviation from perfect harmonic oscillations at very high temperatures – which can cause the heat capacity to continue rising slightly above the Dulong–Petit limit. Additionally, for electrical conductors or semiconductors, the total heat capacity may also include a non-negligible contribution from the electrons, which the Debye model does not address, being solely focused on lattice vibrations.

Debye versus Einstein

Ah, the eternal struggle between models. The Debye model and the earlier Einstein model both aimed to explain the heat capacity of solids, and both, in their own ways, corresponded to experimental data – at least in certain regimes. The critical difference, the one that makes Debye’s approach superior, is its accuracy at low temperatures, a regime where Einstein’s model faltered rather spectacularly.

To truly appreciate the distinction, one might naturally want to plot them on the same graph. However, this isn’t as straightforward as it sounds. Both models provide a functional form for the heat capacity, but they operate on different intrinsic energy scales.

The Einstein model characterizes its energy scale by $\epsilon/k$, where $\epsilon$ is the energy of a single quantum harmonic oscillator and k is the Boltzmann constant . Its heat capacity is given by:

$$ C_V=3Nk\left({\epsilon \over kT}\right)^2{e^{\epsilon /kT} \over \left(e^{\epsilon /kT}-1\right)^2}. $$
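The Einstein expression is simple enough to evaluate directly; a short sketch in units of $Nk$, with $x = \epsilon/kT$ and illustrative temperatures of my own choosing:

```python
import math

def einstein_cv_over_Nk(T, T_E):
    """Einstein heat capacity C_V/(N k) = 3 x^2 e^x / (e^x - 1)^2,
    with x = T_E/T = epsilon/(k T)."""
    x = T_E / T
    ex = math.exp(x)
    return 3.0 * x**2 * ex / (ex - 1.0)**2

print(einstein_cv_over_Nk(T=3000.0, T_E=300.0))  # high T: close to 3 (Dulong-Petit)
print(einstein_cv_over_Nk(T=30.0, T_E=300.0))    # low T: exponentially suppressed
```

The exponential low-temperature suppression visible here is exactly where Einstein's model parts company with experiment, which shows a gentler $T^3$ decay.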

The Debye model , on the other hand, uses the Debye temperature , $T_{\rm {D}}$, as its characteristic scale. These scales, $\epsilon/k$ and $T_{\rm {D}}$, are not interchangeable. That is to say:

$$ {\epsilon \over k}\neq T_{\rm {D}}\,, $$

This means a direct, unscaled comparison on a single plot would be misleading. They are two different lenses through which to view the same physical phenomenon, each with its own calibration. To facilitate comparison, we can define an “Einstein temperature” as:

$$ T_{\rm {E}}\ {\stackrel {\mathrm {def} }{=}}\ {\epsilon \over k}\,. $$

And, predictably, $T_{\rm {E}}\neq T_{\rm {D}}$. The key to relating these two models lies in understanding the underlying physical assumptions. The Einstein solid is built upon the premise of single-frequency quantum harmonic oscillators , where $\epsilon = \hbar \omega = h\nu$. If such a single, characteristic frequency truly existed and dominated the solid’s vibrations, it would be related to the speed of sound within the material. Imagine sound propagating as atoms bumping into each other; the frequency of oscillation would correspond to the minimum wavelength that the atomic lattice can sustain, $\lambda_{min}$. This minimum wavelength is approximately twice the atomic separation.

For a solid with N atoms in volume V, the average atomic separation implies a characteristic length scale. If we consider this, the characteristic frequency $\nu$ can be approximated as:

$$ \nu ={c_{\rm {s}} \over \lambda }={c_{\rm {s}}{\sqrt[{3}]{N}} \over 2L}={c_{\rm {s}} \over 2}{\sqrt[{3}]{N \over V}} $$

This leads to the Einstein temperature:

$$ T_{\rm {E}}={\epsilon \over k}={h\nu \over k}={hc_{\rm {s}} \over 2k}{\sqrt[{3}]{N \over V}}\,. $$

Now, we can finally establish a meaningful ratio between the Einstein and Debye temperatures:

$$ {T_{\rm {E}} \over T_{\rm {D}}}={\sqrt[{3}]{\pi \over 6}}=0.805995977\ldots $$

This specific ratio, approximately 0.806, is more than just a numerical curiosity. It is the cube root of the ratio of the volume of one octant of a three-dimensional sphere to the volume of the cube that contains it. This is precisely the geometric correction factor that Debye introduced when he approximated the cubic integration volume with a spherical one in his derivation. It highlights the fundamental geometrical difference in how the two models count vibrational modes.
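The geometric claim takes all of three lines to verify (the radius cancels in the ratio, so any value works):

```python
import math

r = 1.0                                       # sphere radius; cancels in the ratio
octant = (4.0 / 3.0) * math.pi * r**3 / 8.0   # volume of one octant of the sphere
cube = r**3                                   # cube of side r containing that octant
print((octant / cube) ** (1.0 / 3.0))         # (pi/6)^(1/3) = 0.805995977...
```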

Alternatively, the ratio of the two temperatures can be seen as the ratio of Einstein’s single, universal frequency (at which all oscillators in his model supposedly vibrate) to Debye’s maximum cutoff frequency. In this interpretation, Einstein’s single frequency can be viewed as a kind of mean frequency derived from the continuous spectrum of frequencies allowed in the Debye model . The mean frequency in Debye’s model is given by:

$$ {\bar {\nu }}=3\int_{0}^{\nu_{\rm {D}}}{\frac {\nu ^3}{\nu_{\rm {D}}^3}}d\nu ={\frac {3}{4}}\nu_{\rm {D}} $$

Thus, the relationship between Einstein’s characteristic frequency ($\nu_E$) and Debye’s maximum frequency ($\nu_D$), and consequently between their respective temperatures, is:

$$ {\frac {\nu_{\rm {E}}}{\nu_{\rm {D}}}}={\frac {T_{\rm {E}}}{T_{\rm {D}}}}=0.75\,. $$

This implies that Einstein’s single frequency is about 75% of Debye’s maximum frequency. While the exact numerical ratio differs depending on how “Einstein’s frequency” is defined (either from the geometric argument or as a mean frequency), both interpretations underscore the fact that the Debye model , by incorporating a spectrum of frequencies up to a cutoff, provides a far more realistic description of lattice vibrations than Einstein’s simpler, single-frequency approach. This is why Debye’s model triumphs at low temperatures, where the long-wavelength, low-frequency modes are crucial.
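The mean-frequency integral quoted above, $\bar{\nu} = \tfrac{3}{4}\nu_{\rm D}$, is easy to check numerically; a quick midpoint-rule sketch, working in units where $\nu_{\rm D}=1$:

```python
nu_D = 1.0            # work in units of the Debye frequency
n = 100000
d = nu_D / n
# midpoint rule for the mean:  integral of 3 nu^3 / nu_D^3  over [0, nu_D]
mean = sum(3.0 * ((i + 0.5) * d) ** 3 / nu_D**3 * d for i in range(n))
print(mean)  # approaches 0.75, i.e. (3/4) nu_D
```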

Debye temperature table

The Debye model , despite its inherent approximations, offers a remarkably good approximation for the low-temperature heat capacity of many insulating, crystalline solids . In these materials, the lattice vibrations are the dominant contributors to heat capacity , and other contributions (such as those from highly mobile conduction electrons ) are often negligible. For metals , however, the electron contribution to the heat capacity is proportional to T (a linear dependence), which at sufficiently low temperatures will eventually dominate the cubic ($T^3$) dependence predicted by the Debye model for lattice vibrations. In such cases, the Debye model still approximates the lattice’s contribution, but one must remember to add the electronic component.

The Debye temperature , $T_{\rm {D}}$, serves as a characteristic material parameter, reflecting the stiffness of the lattice and the strength of the interatomic bonds. A high Debye temperature indicates a material where atoms are tightly bound and vibrate at high frequencies, requiring more energy to excite these vibrations. Conversely, a low Debye temperature suggests a softer material with weaker bonds and lower vibrational frequencies.

The following table provides a selection of Debye temperatures for various pure elements and, for good measure, sapphire . Don’t expect me to explain why each one is what it is; that’s what material science is for.

| Element/Material | Debye Temperature (K) |
| --- | --- |
| Aluminium | 428 |
| Beryllium | 1440 |
| Cadmium | 209 |
| Caesium | 38 |
| Carbon (diamond) | 2230 |
| Chromium | 630 |
| Copper | 343 |
| Germanium | 374 |
| Gold | 170 |
| Iron | 470 |
| Lead | 105 |
| Manganese | 410 |
| Nickel | 450 |
| Platinum | 240 |
| Rubidium | 56 |
| Sapphire | 1047 |
| Selenium | 90 |
| Silicon | 645 |
| Silver | 215 |
| Tantalum | 240 |
| Tin (white) | 200 |
| Titanium | 420 |
| Tungsten | 400 |
| Zinc | 327 |

Notice how diamond , the hardest known natural material, boasts an exceptionally high Debye temperature of 2230 K. This reflects the incredibly strong covalent bonds between its carbon atoms, leading to very high-frequency vibrations. In stark contrast, soft metals like caesium and lead exhibit much lower Debye temperatures (38 K and 105 K, respectively), indicative of their weaker metallic bonds and lower characteristic vibrational frequencies. Beryllium also stands out with a very high Debye temperature, consistent with its high melting point and stiffness.
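To get a feel for what these numbers actually imply, one can evaluate the standard low-temperature Debye $T^3$ law, $C_V/(Nk) = \frac{12\pi^4}{5}(T/T_{\rm D})^3$, for a few entries from the table; a minimal sketch (function name mine):

```python
import math

def debye_lowT_cv_over_Nk(T, T_D):
    """Low-temperature Debye T^3 law: C_V/(N k) = (12 pi^4 / 5) (T/T_D)^3."""
    return (12.0 * math.pi**4 / 5.0) * (T / T_D) ** 3

# Debye temperatures (K) taken from the table above
for name, T_D in [("Lead", 105.0), ("Copper", 343.0), ("Diamond", 2230.0)]:
    print(name, debye_lowT_cv_over_Nk(10.0, T_D))
```

At 10 K the lattice of lead carries roughly four orders of magnitude more heat capacity per atom than diamond's, purely because of the cubed ratio of Debye temperatures. Soft metals, it turns out, are easy to heat.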

It’s also worth noting that the Debye model ’s fit to experimental data can sometimes be “phenomenologically improved” – which is a polite way of saying “fudged” – by allowing the Debye temperature itself to become temperature-dependent. For instance, the value for ice, a surprisingly complex material, has been observed to increase from approximately 222 K at absolute zero to around 300 K as the temperature rises to about 100 K. This temperature dependence hints at the model’s inherent simplifications and the subtle ways real materials deviate from ideal Debye behavior.

Extension to other quasi-particles

The utility of the Debye model framework isn’t strictly limited to phonons , those quantized sound waves in solids. The underlying statistical mechanics can be adapted to describe the thermal properties arising from other bosonic quasi-particles – collective excitations that behave like particles but aren’t fundamental particles themselves.

Consider, for example, magnons , which are quantized spin waves found in ferromagnets . Instead of lattice vibrations, magnons represent collective excitations of the electron spins in a magnetic material. While the general approach of counting states and applying Bose–Einstein statistics remains similar, the crucial difference lies in their dispersion relation . For magnons at low frequencies, the energy $E$ typically scales with the square of the wave vector $k$ (i.e., $E \propto k^2$), rather than the linear relationship ($E \propto k$) seen with phonons .

This altered dispersion relation , combined with a different normalization of the density of states (e.g., $\int g(\nu)\,{\rm d}\nu \equiv N$, where N is the number of magnetic ions), leads to a different temperature dependence for their contribution to heat capacity . Specifically, in ferromagnets one typically finds a magnon contribution to the heat capacity proportional to $T^{3/2}$ ($\Delta C_{V,\rm magnon} \propto T^{3/2}$). At sufficiently low temperatures, this $T^{3/2}$ dependence can actually dominate over the phonon contribution, which is proportional to $T^3$ ($\Delta C_{V,\rm phonon} \propto T^{3}$).
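The crossover between the two power laws is easy to locate: if $C_{\rm magnon} = a_m T^{3/2}$ and $C_{\rm phonon} = a_p T^3$, equality occurs at $T^* = (a_m/a_p)^{2/3}$. A sketch with purely hypothetical coefficients (real values are material-specific and are not taken from this article):

```python
# Hypothetical, purely illustrative coefficients (material-dependent in reality)
a_m = 0.010   # magnon term:  C = a_m * T**1.5
a_p = 0.001   # phonon term:  C = a_p * T**3

T_cross = (a_m / a_p) ** (2.0 / 3.0)   # setting the two terms equal
print(T_cross)                          # = 10^(2/3), roughly 4.64 here

# Below T_cross the magnon T^{3/2} term dominates:
T = 1.0
print(a_m * T**1.5 > a_p * T**3)  # True
```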

In stark contrast, for metals , the primary low-temperature contribution to the heat capacity comes not from bosonic excitations, but from the conduction electrons . These electrons are fermions (particles with half-integer spin), and their behavior is governed by Fermi–Dirac statistics , not Bose–Einstein statistics . Their contribution to heat capacity is linear with temperature ($\propto T$), a result famously derived using different methods, such as Arnold Sommerfeld ’s extension of the free electron model . So, while the Debye model is a powerful tool for lattice vibrations, one must always be mindful of which quasi-particles are actually at play in a given material.

Extension to liquids

For a remarkably long time, the prevailing wisdom held that phonon theory was fundamentally incapable of explaining the heat capacity of liquids . The conventional argument was that liquids , unlike solids, could only sustain longitudinal phonons (compressional waves), but not transverse phonons (shear waves). Since transverse phonons are responsible for a substantial two-thirds of the heat capacity in solids, their absence in liquids seemed to invalidate any Debye-like approach.

However, as is often the case in science, the prevailing wisdom eventually faced a challenge. Groundbreaking Brillouin scattering experiments, utilizing both neutrons and X-rays , provided compelling evidence to the contrary. These experiments, confirming an astute intuition from the renowned physicist Yakov Frenkel , unequivocally demonstrated that transverse phonons do exist in liquids . The catch? They are restricted to frequencies above a certain threshold, a characteristic value now known as the Frenkel frequency . Below this frequency, the liquid behaves like a fluid, unable to support shear. Above it, it exhibits solid-like elastic response to shear deformations.

This discovery was a game-changer. Given that most of the thermal energy in a system is contained within these higher-frequency modes, a relatively simple modification of the Debye model proved sufficient to yield a surprisingly good approximation for the experimental heat capacities of many simple liquids . The key was to incorporate the existence of these high-frequency transverse phonons .

More recently, research has delved even deeper into the microscopic dynamics of liquids . It has been shown that “instantaneous normal modes” – specific vibrational patterns associated with relaxations from saddle points in the liquid’s complex energy landscape – play a crucial role. These modes tend to dominate the frequency spectrum of liquids at lower frequencies and may, in fact, be instrumental in determining the specific heat of liquids as a function of temperature over a broad range. This ongoing work continues to refine our understanding of how collective atomic motions contribute to the thermal properties of even seemingly disordered states of matter.

Debye frequency

The Debye frequency (often symbolized as $\omega_{\rm {Debye}}$ or $\omega_{\rm {D}}$) is a critical parameter within the Debye model . It essentially represents a cut-off angular frequency for the vibrational waves (phonons) within a material, effectively imposing an upper limit on the possible frequencies of oscillation. This concept is vital for accurately describing the movement of ions within a crystal lattice and, perhaps most importantly, for correctly predicting that the heat capacity of such crystals approaches a constant value at high temperatures – the Dulong–Petit law . Peter Debye himself introduced this fundamental concept back in 1912.

For the sake of simplifying the mathematics, throughout this discussion, we generally assume periodic boundary conditions . This means we imagine the crystal extending infinitely, with waves passing through one boundary and reappearing on the opposite side, preventing surface effects from complicating our analysis.

Definition

Assuming a linear dispersion relation , which states that the angular frequency $\omega$ is directly proportional to the magnitude of the wave vector $|\mathbf{k}|$:

$$ \omega =v_{\rm {s}}|\mathbf {k} | $$

where $v_{\rm {s}}$ is the speed of sound in the crystal, the value of the Debye frequency can be derived based on the dimensionality of the system.

For a one-dimensional monatomic chain – imagine a string of identical atoms equally spaced – the Debye frequency is given by:

$$ \omega_{\rm {D}}=v_{\rm {s}}\pi /a=v_{\rm {s}}\pi N/L=v_{\rm {s}}\pi \lambda, $$

Here, a represents the equilibrium distance between two neighboring atoms in the chain when the system is in its ground state (i.e., no vibrational motion). N is the total number of atoms in the chain, and L is the total length of the chain. These are related by $L=Na$. The term $\lambda$ here denotes the linear number density of atoms, which is $N/L$. This expression establishes the maximum frequency based on the shortest possible wavelength, which is twice the interatomic spacing ($2a$).

For a two-dimensional monatomic square lattice – a flat sheet of atoms arranged in a grid – the square of the Debye frequency is:

$$ \omega_{\rm {D}}^{2}={\frac {4\pi }{a^{2}}}v_{\rm {s}}^{2}={\frac {4\pi N}{A}}v_{\rm {s}}^{2}\equiv 4\pi \sigma v_{\rm {s}}^{2}, $$

In this case, $A \equiv L^2 = Na^2$ represents the total area of the lattice, and $\sigma$ is the surface number density ($N/A$). The $a^2$ in the denominator implies the area occupied by each atom.

For a three-dimensional monatomic primitive cubic crystal – a simple 3D arrangement of atoms – the cube of the Debye frequency is:

$$ \omega_{\rm {D}}^{3}={\frac {6\pi ^{2}}{a^{3}}}v_{\rm {s}}^{3}={\frac {6\pi ^{2}N}{V}}v_{\rm {s}}^{3}\equiv 6\pi ^{2}\rho v_{\rm {s}}^{3}, $$

Here, $V \equiv L^3 = Na^3$ is the total volume of the system, and $\rho$ is the volume number density ($N/V$). The $a^3$ in the denominator represents the volume occupied by each atom.

These formulas can be generalized for a (hyper)cubic lattice in n dimensions. The general formula for the n-th power of the Debye frequency is:

$$ \omega_{\rm {D}}^{n}=2^{n}\pi ^{n/2}\Gamma \left(1+{\tfrac {n}{2}}\right){\frac {N}{L^{n}}}v_{\rm {s}}^{n}, $$

where $\Gamma$ is the gamma function , a generalization of the factorial function to real and complex numbers. This unified expression elegantly captures the dimensionality dependence of the Debye cutoff.
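One can sanity-check this general formula against the three special cases derived above: the prefactor $2^n\pi^{n/2}\Gamma(1+n/2)$ should reduce to $\pi$, $4\pi$, and $6\pi^2$ for $n=1,2,3$. A quick Python check (function name mine):

```python
import math

def debye_prefactor(n):
    """Coefficient c_n in  omega_D^n = c_n (N/L^n) v_s^n
    for an n-dimensional hypercubic lattice."""
    return 2.0**n * math.pi ** (n / 2.0) * math.gamma(1.0 + n / 2.0)

print(debye_prefactor(1), math.pi)           # 1D: pi
print(debye_prefactor(2), 4.0 * math.pi)     # 2D: 4 pi
print(debye_prefactor(3), 6.0 * math.pi**2)  # 3D: 6 pi^2
```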

It’s crucial to remember that the speed of sound ($v_{\rm {s}}$) in a real crystal is not a single, universal constant. It depends on various factors, including the mass of the constituent atoms, the strength of their interatomic interactions (analogous to spring constants), the pressure exerted on the system, and the specific polarization of the sound wave (whether it’s longitudinal or transverse ). For simplicity in these initial derivations, we often assume the speed of sound is uniform for all polarizations, though this idealization limits the direct applicability of the result to real, anisotropic materials.

Finally, while the assumed linear dispersion relation ($\omega = v_s |\mathbf{k}|$) might seem overly simplistic and is easily proven inaccurate for a one-dimensional chain of masses (where the true relation involves a sine function), this approximation doesn’t necessarily render Debye’s model problematic in its core predictions. The cutoff frequency concept remains robust even with more complex dispersion relations, as we’ll explore further.

Relation to Debye’s temperature

The Debye temperature , $\theta_{\rm {D}}$ (or $T_{\rm {D}}$ as used previously), which we’ve already encountered as another fundamental parameter in the Debye model , is directly and elegantly related to the Debye frequency . This relationship bridges the energy scale of the highest-frequency phonon with a characteristic temperature:

$$ \theta_{\rm {D}}={\frac {\hbar }{k_{\rm {B}}}}\omega_{\rm {D}}, $$

Here, $\hbar$ is the reduced Planck constant (Planck’s constant divided by $2\pi$), and $k_{\rm {B}}$ is the Boltzmann constant . This equation essentially states that the energy corresponding to the Debye frequency ($\hbar \omega_{\rm {D}}$) is equivalent to the thermal energy at the Debye temperature ($k_{\rm {B}}\theta_{\rm {D}}$). It provides a direct link between the quantum mechanical vibrational properties of the lattice and the macroscopic thermal behavior of the material.
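Converting between the two scales is a one-liner; a sketch using the CODATA values of $\hbar$ and $k_{\rm B}$, with copper's $T_{\rm D}\approx 343\,$K from the table above as the example:

```python
hbar = 1.054571817e-34   # reduced Planck constant, J*s (CODATA)
k_B = 1.380649e-23       # Boltzmann constant, J/K (exact, SI definition)

def debye_temperature(omega_D):
    """theta_D = hbar * omega_D / k_B"""
    return hbar * omega_D / k_B

def debye_frequency(theta_D):
    """Inverse relation: omega_D = k_B * theta_D / hbar"""
    return k_B * theta_D / hbar

omega = debye_frequency(343.0)    # copper
print(omega)                       # on the order of 10^13 rad/s
print(debye_temperature(omega))    # round-trips to 343 K
```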

Debye’s derivation

Let’s revisit Debye’s approach to deriving the Debye frequency , specifically focusing on how he established the cutoff. In his derivation of the heat capacity , he summed over all possible vibrational modes of the system, meticulously accounting for different directions and polarizations. He made a crucial assumption: the total number of modes per polarization in a system should be equal to the number of constituent masses N. Since each mode can have three distinct polarizations (one longitudinal, two transverse), the total number of polarization-mode combinations sums to $3N$:

$$ \sum_{\rm {modes}}3=3N, $$

This assumption, rooted in classical mechanics , states that for N atoms, there are $3N$ independent ways they can vibrate. Debye’s genius was in recognizing that this finite number of modes implied a maximum possible frequency.

The left-hand side of this equation can be made more explicit, revealing its dependence on the Debye frequency . Assuming the system size L is very large ($L \gg 1$), the smallest possible increment in the wave vector in any direction ($dk_i$) can be approximated as $2\pi/L$ (due to periodic boundary conditions ). This transforms the discrete summation over modes into a continuous integral in k-space :

$$ \sum_{\rm {modes}}3={\frac {3V}{(2\pi )^3}}\iiint d\mathbf {k} , $$

where $\mathbf{k} \equiv (k_x, k_y, k_z)$, and $V \equiv L^3$ is the volume of the system. The integral is performed over all possible modes, which Debye assumed to occupy a finite region in k-space , bounded by a cutoff frequency.

To simplify the triple integral , it’s often more convenient to switch to spherical coordinates in k-space . This allows us to integrate over the magnitude of the wave vector , $|\mathbf{k}|$, rather than its individual components. The transformation yields:

$$ {\frac {3V}{(2\pi )^3}}\iiint d\mathbf {k} ={\frac {3V}{2\pi ^2}}\int_{0}^{k_{\rm {D}}}k^2\,dk, $$

Here, $k_{\rm {D}}$ represents the maximum magnitude of the wave vector , corresponding to the Debye frequency . From the dispersion relation $\omega = v_{\rm {s}}|\mathbf{k}|$, we have $k_{\rm {D}}=\omega_{\rm {D}}/v_{\rm {s}}$. Substituting this allows us to express the integral in terms of angular frequency $\omega$:

$$ {\frac {3V}{2\pi ^2}}\int_{0}^{k_{\rm {D}}}k^2\,dk={\frac {3V}{2\pi ^2v_{\rm {s}}^3}}\int_{0}^{\omega_{\rm {D}}}\omega ^2\,d\omega , $$

Evaluating this integral, we get $\frac{V}{2\pi^2 v_s^3} \omega_D^3$. Equating this to the total number of modes, $3N$:

$$ {\frac {V}{2\pi ^2v_{\rm {s}}^3}}\omega_{\rm {D}}^3=3N. $$

Rearranging this expression to solve for $\omega_{\rm {D}}^3$ gives the familiar formula for the Debye frequency in three dimensions:

$$ \omega_{\rm {D}}^3={\frac {6\pi ^2N}{V}}v_{\rm {s}}^3. $$

This derivation clearly illustrates how the finite number of available vibrational modes ($3N$) fundamentally imposes a cutoff on the frequency spectrum, a cornerstone of the Debye model .

One-dimensional chain in 3D space: The same logical framework can be applied to a one-dimensional chain of atoms, even though it exists within a three-dimensional space . The total number of modes remains $3N$, as each atom still has three degrees of freedom for vibration (even if the chain itself is 1D):

$$ \sum_{\rm {modes}}3=3N. $$

The sum over modes now transforms into a one-dimensional integral in k-space . For a 1D system of length L, the density of states in k-space is $L/(2\pi)$. The integral is taken from $-k_{\rm {D}}$ to $k_{\rm {D}}$ (representing waves propagating in both positive and negative directions).

$$ \sum_{\rm {modes}}3={\frac {3L}{2\pi }}\int_{-k_{\rm {D}}}^{k_{\rm {D}}}dk={\frac {3L}{\pi v_{\rm {s}}}}\int_{0}^{\omega_{\rm {D}}}d\omega . $$

The integral from $-k_{\rm {D}}$ to $k_{\rm {D}}$ of $dk$ is $2k_{\rm {D}}$. By substituting $k = \omega/v_s$, the bounds of integration become 0 to $\omega_{\rm {D}}$. This yields:

$$ {\frac {3L}{\pi v_{\rm {s}}}}\int_{0}^{\omega_{\rm {D}}}d\omega ={\frac {3L}{\pi v_{\rm {s}}}}\omega_{\rm {D}}=3N. $$

From this, the Debye frequency for a one-dimensional chain is found to be:

$$ \omega_{\rm {D}}={\frac {\pi v_{\rm {s}}N}{L}}. $$

Two-dimensional crystal: Extending this to a two-dimensional crystal (e.g., a sheet of atoms with area A), the total number of modes is still $3N$. The integral in k-space becomes a two-dimensional integral over an area of $A/(2\pi)^2$.

$$ \sum_{\rm {modes}}3={\frac {3A}{(2\pi )^2}}\iint d\mathbf {k} ={\frac {3A}{2\pi v_{\rm {s}}^2}}\int_{0}^{\omega_{\rm {D}}}\omega d\omega ={\frac {3A\omega_{\rm {D}}^2}{4\pi v_{\rm {s}}^2}}=3N, $$

where $A \equiv L^2$ is the area of the system. Solving for $\omega_{\rm {D}}^2$:

$$ \omega_{\rm {D}}^2={\frac {4\pi N}{A}}v_{\rm {s}}^2. $$

These derivations, while progressively more complex in their integration, all follow the same core logic: defining a density of states in k-space and then imposing a cutoff frequency such that the total number of allowed states matches the total number of degrees of freedom ($3N$) in the atomic system. It’s a pragmatic, effective way to bridge the continuum approximation with the discrete reality.

Polarization dependence

In the simplified derivations above, we made the convenient, but often unrealistic, assumption that the speed of sound ($v_s$) is the same for all polarizations . In reality, longitudinal waves (compressional waves) typically propagate at a different velocity than transverse waves (shear waves). Reintroducing this distinction significantly improves the accuracy of the model, though it does, predictably, complicate the mathematics.

If we acknowledge that each polarization i has its own characteristic speed $v_{s,i}$, then the dispersion relation becomes $\omega_i = v_{s,i}|\mathbf{k}|$. However, the Debye frequency , $\omega_{\rm {D}}$, as a system-wide cutoff, does not depend on the individual polarization index i. To account for this, we reformulate the total number of modes as a sum over polarizations, where each polarization sum accounts for its specific speed: $\sum_i \sum_{\rm {modes}}1$, which, naturally, still equals $3N$. Here, the summation over modes for each polarization i will implicitly depend on $v_{s,i}$.

One-dimensional chain in 3D space: For a one-dimensional chain, considering the three distinct polarizations (one longitudinal, two transverse, or whatever specific set applies to the material’s anisotropy), the summation over modes becomes:

$$ \sum_{i}\sum_{\rm {modes}}1=\sum_{i}{\frac {L}{\pi v_{s,i}}}\int_{0}^{\omega_{\rm {D}}}d\omega_i=3N. $$

Evaluating the integral for each polarization and summing the results:

$$ {\frac {L\omega_{\rm {D}}}{\pi }}({\frac {1}{v_{s,1}}}+{\frac {1}{v_{s,2}}}+{\frac {1}{v_{s,3}}})=3N. $$

Solving for $\omega_{\rm {D}}$ gives:

$$ \omega_{\rm {D}}={\frac {\pi N}{L}}{\frac {3}{{\frac {1}{v_{s,1}}}+{\frac {1}{v_{s,2}}}+{\frac {1}{v_{s,3}}}}}={\frac {3\pi N}{L}}{\frac {v_{s,1}v_{s,2}v_{s,3}}{v_{s,2}v_{s,3}+v_{s,1}v_{s,3}+v_{s,1}v_{s,2}}}={\frac {\pi N}{L}}v_{\mathrm {eff} }\,. $$

The calculated effective velocity, $v_{\mathrm {eff}}$, in this case, is the harmonic mean of the velocities for each of the three polarizations . If we simplify by assuming the two transverse polarizations have the same speed, $v_{s,t}$, and the longitudinal polarization has speed $v_{s,l}$, the expression becomes:

$$ \omega_{\rm {D}}={\frac {3\pi N}{L}}{\frac {v_{s,t}v_{s,l}}{2v_{s,l}+v_{s,t}}}. $$

Setting $v_{s,t}=v_{s,l}$ (the initial simplification) correctly recovers the earlier, simpler expression for the 1D chain.
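The harmonic-mean structure is worth seeing with numbers; a minimal sketch with illustrative sound speeds (one longitudinal, two equal transverse, values entirely mine):

```python
def v_eff_1d(v1, v2, v3):
    """Harmonic mean of the three polarization sound speeds (1D-chain case)."""
    return 3.0 / (1.0 / v1 + 1.0 / v2 + 1.0 / v3)

# Equal speeds recover the single-speed expression:
print(v_eff_1d(5000.0, 5000.0, 5000.0))   # 5000.0
# Two transverse speeds of 3000 m/s, one longitudinal of 6000 m/s (illustrative):
print(v_eff_1d(3000.0, 3000.0, 6000.0))   # 3600.0
```

Note how the harmonic mean is pulled toward the slower speeds: the slow transverse branches contribute the most modes below the cutoff, so they dominate the effective velocity.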

Two-dimensional crystal: For a two-dimensional crystal, a similar derivation yields:

$$ \omega_{\rm {D}}^{2}={\frac {4\pi N}{A}}{\frac {3}{{\frac {1}{v_{s,1}^{2}}}+{\frac {1}{v_{s,2}^{2}}}+{\frac {1}{v_{s,3}^{2}}}}}={\frac {12\pi N}{A}}{\frac {(v_{s,1}v_{s,2}v_{s,3})^{2}}{(v_{s,2}v_{s,3})^{2}+(v_{s,1}v_{s,3})^{2}+(v_{s,1}v_{s,2})^{2}}}={\frac {4\pi N}{A}}v_{\mathrm {eff} }^{2}\,. $$

Here, $v_{\mathrm {eff}}$ is the square root of the harmonic mean of the squares of the velocities. Assuming $v_{s,t}=v_{s,l}$:

$$ \omega_{\rm {D}}^{2}={\frac {12\pi N}{A}}{\frac {(v_{s,t}v_{s,l})^{2}}{2v_{s,l}^{2}+v_{s,t}^{2}}}. $$

Again, setting all velocities equal recovers the simpler expression.

Three-dimensional crystal: Finally, for a three-dimensional crystal, the Debye frequency with polarization dependence is:

$$ \omega_{\rm {D}}^{3}={\frac {6\pi ^{2}N}{V}}{\frac {3}{{\frac {1}{v_{s,1}^{3}}}+{\frac {1}{v_{s,2}^{3}}}+{\frac {1}{v_{s,3}^{3}}}}}={\frac {18\pi ^{2}N}{V}}{\frac {(v_{s,1}v_{s,2}v_{s,3})^{3}}{(v_{s,2}v_{s,3})^{3}+(v_{s,1}v_{s,3})^{3}+(v_{s,1}v_{s,2})^{3}}}={\frac {6\pi ^{2}N}{V}}v_{\mathrm {eff} }^{3}\,. $$

In this 3D case, $v_{\mathrm {eff}}$ is the cube root of the harmonic mean of the cubes of the velocities. With $v_{s,t}=v_{s,l}$:

$$ \omega_{\rm {D}}^{3}={\frac {18\pi ^{2}N}{V}}{\frac {(v_{s,t}v_{s,l})^{3}}{2v_{s,l}^{3}+v_{s,t}^{3}}}. $$

And, predictably, setting all velocities equal returns the initial, simplified 3D expression. These more nuanced derivations highlight the importance of considering the anisotropic nature of sound propagation in real materials, even within a seemingly simple model.

Derivation with the actual dispersion relation

So far, we’ve largely relied on the rather convenient, but ultimately approximate, linear dispersion relation ($\omega = v_s k$). However, in reality, the relationship between angular frequency ($\omega$) and wave vector (k) for atomic vibrations in a lattice is more complex, particularly at shorter wavelengths. The fact that only discretized points (the atoms themselves) matter means that different waves might produce physically indistinguishable manifestations, a phenomenon known as aliasing .

In classical mechanics , it’s well-established that for an equidistant chain of masses interacting harmonically (like springs), the true dispersion relation is not linear but sinusoidal:

$$ \omega (k)=2{\sqrt {\frac {\kappa }{m}}}\left|\sin \left({\frac {ka}{2}}\right)\right|, $$

where m is the mass of each atom, $\kappa$ is the effective spring constant representing the interatomic forces, and a is, as before, the spacing between atoms in the ground state .

If you were to plot this relation, you’d notice it’s linear for small k (long wavelengths) but flattens out as k approaches $\pi/a$. This flattening indicates that the group velocity (the speed at which energy propagates) goes to zero at the edge of the Brillouin zone . Debye’s estimation of the cutoff wavelength (and thus cutoff frequency) remains surprisingly accurate even with this more complex dispersion. This is because for any wavenumber k greater than $\pi/a$ (i.e., for a wavelength $\lambda$ smaller than $2a$), there exists a corresponding wavenumber smaller than $\pi/a$ that yields the same angular frequency . This means the physical manifestation of a mode with a larger wavenumber is indistinguishable from one with a smaller wavenumber within the first Brillouin zone . Therefore, we can limit our study of the dispersion relation to the first Brillouin zone , $k \in \left[-{\frac {\pi }{a}},{\frac {\pi }{a}}\right]$, without any loss of accuracy or information. This concept is vividly demonstrated in the animated image of lattice vibrations, where short-wavelength modes can be “folded back” into longer-wavelength equivalents.

If we divide the dispersion relation by k to find the phase velocity $\omega(k)/k$, and then insert the maximum k value, $k = \pi/a$, we find the speed of a wave at this cutoff:

$$ v_{\rm {s}}(k=\pi/a)={\frac {2a}{\pi }}{\sqrt {\frac {\kappa }{m}}}. $$

By simply inserting $k=\pi/a$ into the actual dispersion relation , we find the maximum angular frequency , which is precisely the Debye frequency :

$$ \omega (k=\pi/a)=2{\sqrt {\frac {\kappa }{m}}}=\omega_{\rm {D}}. $$

Combining these results, we once again arrive at the same expression for the Debye frequency :

$$ \omega_{\rm {D}}={\frac {\pi v_{\rm {s}}}{a}}. $$

This consistency is reassuring, demonstrating that even with a more accurate, non-linear dispersion relation , the fundamental concept of a cutoff frequency determined by the atomic spacing remains robust for a simple monatomic chain. However, for more complex systems, such as diatomic chains, the associated cutoff frequency and wavelength become less straightforward. Diatomic chains introduce additional “branches” to the dispersion relation (e.g., acoustic and optical branches), and the simple cutoff based on $2a$ is no longer entirely accurate. It’s also not immediately clear from this derivation whether Debye’s cutoff predictions remain accurate for higher-dimensional systems when considering their more complex, realistic dispersion relations. This highlights the inherent trade-offs between simplicity and absolute accuracy in physical models.
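The monatomic-chain consistency claimed above is easy to verify numerically, with illustrative values of $\kappa$, $m$, and $a$ (mine, not from any particular material):

```python
import math

kappa, m, a = 10.0, 1.0, 1.0   # illustrative spring constant, mass, lattice spacing

def omega(k):
    """Sinusoidal dispersion of the 1D harmonic chain: omega(k) = 2 sqrt(kappa/m) |sin(ka/2)|."""
    return 2.0 * math.sqrt(kappa / m) * abs(math.sin(k * a / 2.0))

k_D = math.pi / a              # zone-edge (cutoff) wavenumber
omega_D = omega(k_D)           # = 2 sqrt(kappa/m) at the zone edge
v_s = omega_D / k_D            # phase velocity at the cutoff: (2a/pi) sqrt(kappa/m)

print(abs(omega_D - 2.0 * math.sqrt(kappa / m)) < 1e-12)   # True
print(abs(omega_D - math.pi * v_s / a) < 1e-12)            # True: omega_D = pi v_s / a
```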

Alternative derivation

Sometimes, the same physical result can be arrived at through multiple conceptual paths. The crucial insight is that two different waves can manifest identically if at least one of them has a wavelength shorter than twice the initial distance between the masses. This phenomenon, often discussed in signal processing, is known as aliasing .

For a one-dimensional chain of atoms, the formula for the Debye frequency can also be derived by directly applying a theorem from signal processing that describes this aliasing effect: the Nyquist–Shannon sampling theorem . The core difference in its application here is that instead of discretization in time (as typically seen in digital signal processing), we’re dealing with discretization in space (the discrete positions of atoms in the lattice).

According to the Nyquist–Shannon sampling theorem , to perfectly reconstruct a continuous signal from discrete samples, the sampling rate must be at least twice the highest frequency (or lowest wavelength) present in the signal. In our spatial context, this means that for wavelengths smaller than $2a$ (where $a$ is the sampling distance, i.e., the interatomic spacing), every mode is effectively a “repeat” or an alias of a mode with a wavelength larger than $2a$. Physically, this means that the shortest distinct wavelength we can resolve in a discrete lattice is $2a$.

Therefore, the cutoff wavelength, $\lambda_{\rm {D}}$, beyond which no new physical information about the vibrational modes can be obtained, should be set at:

$$ \lambda_{\rm {D}}=2a $$

This directly leads to the maximum wavenumber , $k_{\rm {D}}$:

$$ k_{\rm {D}}={\frac {2\pi }{\lambda_{\rm {D}}}}=\pi /a $$

And, finally, using the linear dispersion relation $\omega = v_s k$, we arrive at the Debye frequency :

$$ \omega_{\rm {D}}={\frac {\pi v_{\rm {s}}}{a}}. $$

This derivation, leveraging the principles of information theory, demonstrates that the choice of the cutoff frequency is not arbitrary but is fundamentally dictated by the discrete nature of the atomic lattice. It doesn’t even matter which specific dispersion relation (linear or sinusoidal) is used for this step, as the cutoff wavelength (and thus frequency) is determined by the sampling limit imposed by the atomic spacing itself. This is a rather elegant way to arrive at the same conclusion, and perhaps a more intuitive one for those accustomed to digital signal processing.
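The aliasing argument itself can be demonstrated directly: two waves whose wavenumbers differ by a reciprocal-lattice vector $2\pi/a$ produce identical displacements at every atomic site, so the lattice literally cannot tell them apart. A minimal Python demonstration:

```python
import math

a = 1.0                          # lattice spacing
k = 0.3 * math.pi / a            # a wavenumber inside the first Brillouin zone
k_alias = k + 2.0 * math.pi / a  # differs by a reciprocal-lattice vector

# Sampled only at the atomic positions x = n*a, the two waves coincide:
u1 = [math.cos(k * n * a) for n in range(8)]
u2 = [math.cos(k_alias * n * a) for n in range(8)]
print(max(abs(x - y) for x, y in zip(u1, u2)))  # zero up to floating-point noise
```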

See also