
Particle-In-Cell



In the rarefied, and often desperate, world of plasma physics, the particle-in-cell (PIC) method stands as a monument to brute force. It is a computational technique, a mathematical sledgehammer used to solve a particularly stubborn class of partial differential equations. The core conceit is to track individual particles—or, more accurately, computational effigies of them—in a Lagrangian frame, letting them roam through a continuous phase space as they please. Simultaneously, the collective mess they make—their densities, their currents, the very fields they generate—is calculated on a static, unforgiving Eulerian mesh. It’s a hybrid approach, a forced marriage between the particle and the grid, and like most forced marriages, it's fraught with compromise.

This technique is not some new, shiny thing. PIC methods were being cobbled together as early as 1955, a time when programmers were wrestling with machines before the relative luxury of the first Fortran compilers. The method truly found its calling in the late 1950s and early 1960s, becoming the tool of choice for simulating plasma thanks to the efforts of the usual suspects: Buneman, Dawson, Hockney, Birdsall, Morse, and others. For plasma physics applications, this boils down to a Sisyphean task: meticulously tracking the trajectories of countless charged particles as they move through self-consistent electromagnetic or electrostatic fields—fields that are, in turn, generated by the particles themselves and calculated on that fixed grid. It's a feedback loop from hell, and we're just here to watch.

Technical aspects

For a great many problems, the classical PIC method is considered relatively intuitive. This is true only if your intuition is calibrated to appreciate managed chaos. Its enduring success, especially in plasma simulation, likely stems from its straightforward, almost brutish, implementation, which typically involves the following ritual (a minimal code sketch of one such cycle follows the list):

  • Integration of the equations of motion: Calculating where each of the millions of particles is going to go next. One by one.
  • Interpolation of charge and current source terms to the field mesh: Taking the continuous positions of all those particles and smearing their properties onto the discrete grid points, like making a statistical mosaic out of individual points of light.
  • Computation of the fields on mesh points: Solving for the electric and magnetic fields on the grid, based on the collective mess interpolated in the previous step.
  • Interpolation of the fields from the mesh to the particle locations: Taking the fields from the grid points and projecting them back onto each individual particle, so you can tell it which way to move in the first step. And then you do it all over again. And again.
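If you want the ritual spelled out in something a computer will actually run, here is a minimal sketch of one cycle of a 1D electrostatic PIC loop in Python. It assumes normalized units ($\varepsilon_0 = 1$), a periodic box, and crude nearest-grid-point weighting; the function and parameter names are illustrative choices for this sketch, not anyone's actual API.

```python
import numpy as np

def pic_step(x, v, qm, q_sp, dx, dt, ng):
    """One cycle of a minimal 1D electrostatic PIC loop (periodic box,
    normalized units with eps0 = 1). All names here are illustrative.

    x, v : super-particle positions and velocities
    qm   : charge-to-mass ratio of the species
    q_sp : charge carried by each super-particle
    dx   : grid spacing; ng : number of grid points; dt : time step
    """
    L = ng * dx

    # Step 2: deposit charge onto the mesh (nearest-grid-point, for brevity).
    idx = np.floor(x / dx).astype(int) % ng
    rho = np.bincount(idx, minlength=ng) * q_sp / dx

    # Step 3: compute the field on mesh points by solving Poisson's equation
    # with an FFT: -k^2 phi_hat = -rho_hat, then E_hat = -i k phi_hat.
    k = 2 * np.pi * np.fft.fftfreq(ng, d=dx)
    k[0] = 1.0                                   # avoid division by zero at k = 0
    phi_hat = np.fft.fft(rho - rho.mean()) / k**2
    E_grid = np.real(np.fft.ifft(-1j * k * phi_hat))

    # Step 4: gather the field from the mesh back to each particle.
    E_part = E_grid[idx]

    # Step 1: integrate the equations of motion (leapfrog kick + drift).
    v = v + qm * E_part * dt
    x = (x + v * dt) % L
    return x, v
```

A real code would use higher-order weighting for the deposit and gather steps (see the weighting section below), but the skeleton is exactly this loop, repeated until the budget or your patience runs out.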

Models that restrict particle interactions to occur only through these averaged, grid-based fields are known as PM (particle-mesh). It's efficient, but impersonal. Models that dare to include direct, binary particle-particle interactions are called PP (particle-particle), a computationally expensive nightmare of N-squared complexity. Naturally, a compromise exists: PP-PM, or P3M, which uses both. Choose your poison.

It was recognized from the very beginning that the PIC method is plagued by an error source charmingly referred to as discrete particle noise. This isn't a flaw; it's a feature of your audacity in trying to represent a smooth, infinite-resolution universe with a finite number of computational stand-ins. The error is statistical, and even today, our understanding of it lags behind the more civilized, well-understood errors of traditional fixed-grid methods, such as Eulerian or semi-Lagrangian schemes.

In stark contrast, modern geometric PIC algorithms are built on a far more elegant theoretical foundation. They don't just brute-force the problem; they approach it with a certain respect for the underlying physics. These methods employ the sophisticated toolkit of discrete manifolds, interpolating differential forms, and canonical or non-canonical symplectic integrators. The point of all this mathematical heavy lifting is to guarantee fundamental properties are upheld: gauge invariance, conservation of charge, and conservation of energy-momentum. Most critically, they preserve the infinite-dimensional symplectic structure of the particle-field system. These desirable features aren't happy accidents; they are a direct consequence of building the algorithms on a more fundamental field-theoretical framework, linking them directly to the variational principles of physics. They work well because they remember the rules.

Basics of the PIC plasma simulation technique

Within the plasma research community, the systems under investigation are teeming ecosystems of different species: electrons, ions, neutrals, molecules, dust particles, and whatever else the universe decided to throw in. The set of equations governing PIC codes therefore comprises the Lorentz force, which serves as the equation of motion and is solved in the pusher or particle mover part of the code, and Maxwell's equations, which dictate the electric and magnetic fields and are handled by the (field) solver.

Super-particles

The real systems are, to put it mildly, vast. The number of particles is astronomically large. To make simulation even remotely feasible, we cheat. We use super-particles. A super-particle (or macroparticle) is a single computational entity that stands in for a huge number of real particles. It could represent millions of electrons or ions in a plasma simulation, or something like a vortex element in a fluid simulation. This sleight of hand is permissible because the acceleration from the Lorentz force depends only on the charge-to-mass ratio. Consequently, a super-particle, carrying the combined charge and mass of its constituents, will follow the exact same trajectory as a single, real particle. You're just turning up the volume.
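The cancellation is trivial to write down: if a super-particle lumps together $w$ real particles of charge $q$ and mass $m$, its acceleration under the Lorentz force is

\[
\mathbf{a}_{\mathrm{super}}=\frac{wq}{wm}\left(\mathbf{E}+\mathbf{v}\times\mathbf{B}\right)=\frac{q}{m}\left(\mathbf{E}+\mathbf{v}\times\mathbf{B}\right)=\mathbf{a}_{\mathrm{real}},
\]

so the weighting factor $w$ drops out of the trajectory entirely and only reappears when charge and current are deposited on the grid.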

The number of real particles bundled into a single super-particle must be chosen with care. You need enough of them to collect meaningful statistics on particle motion. If your system has a severe population imbalance between species—say, ions and neutrals—you can use different representation ratios for each, giving the minority species a fighting chance to be statistically relevant.

The particle mover

Even with the super-particle shortcut, the number of simulated particles is typically enormous (> $10^{5}$), and pushing each one forward in time is often the most computationally expensive part of a PIC simulation. It's a task that must be performed for every single particle, at every single time step. As such, the pusher algorithm must be both brutally efficient and highly accurate. Entire careers have been spent optimizing these schemes.

The schemes for moving particles fall into two main camps: implicit and explicit solvers. Implicit solvers, like the implicit Euler scheme, are precognitive; they calculate a particle's velocity using the fields that have already been updated for the next time step. Explicit solvers are simpler and faster, using only the old force from the previous time step to calculate the new position. They are less computationally demanding but require a smaller time step to remain stable. In the world of PIC, the leapfrog method, a second-order explicit method, is a common choice. Another heavyweight is the Boris algorithm, which cleverly cancels out the magnetic field term in the Newton-Lorentz equation.

For plasma applications, the leapfrog method is formulated as follows:

\[
\frac{\mathbf{x}_{k+1}-\mathbf{x}_{k}}{\Delta t}=\mathbf{v}_{k+1/2},
\]

\[
\frac{\mathbf{v}_{k+1/2}-\mathbf{v}_{k-1/2}}{\Delta t}=\frac{q}{m}\left(\mathbf{E}_{k}+\frac{\mathbf{v}_{k+1/2}+\mathbf{v}_{k-1/2}}{2}\times\mathbf{B}_{k}\right).
\]

Here, the subscript $k$ denotes "old" quantities from the previous time step, while $k+1$ points to the updated quantities at the next time step (i.e., $t_{k+1}=t_{k}+\Delta t$). Notice that the velocities are calculated at half-time steps, leapfrogging over the positions and fields, which are defined at integer time steps.

The Boris scheme's equations, which are substituted into the above, are:

\[
\mathbf{x}_{k+1}=\mathbf{x}_{k}+\Delta t\,\mathbf{v}_{k+1/2},
\]

\[
\mathbf{v}_{k+1/2}=\mathbf{u}'+q'\mathbf{E}_{k},
\]

with the intermediate steps:

\[
\mathbf{u}'=\mathbf{u}+(\mathbf{u}+(\mathbf{u}\times\mathbf{h}))\times\mathbf{s},
\]

\[
\mathbf{u}=\mathbf{v}_{k-1/2}+q'\mathbf{E}_{k},
\]

\[
\mathbf{h}=q'\mathbf{B}_{k},
\]

\[
\mathbf{s}=2\mathbf{h}/(1+h^{2}),
\]

and $q'=\Delta t\,q/(2m)$.

The Boris algorithm is the de facto standard for advancing charged particles, and for good reason: it exhibits excellent long-term accuracy. It was later understood that this remarkable stability in the nonrelativistic case comes from the fact that it conserves phase space volume, even though it isn't strictly symplectic. The global bound on energy error that one usually associates with symplectic algorithms still holds, making it incredibly effective for the multi-scale dynamics of plasmas. It has also been shown that the relativistic Boris push can be improved to be both volume-preserving and to maintain a constant-velocity solution in crossed E and B fields, which is more than you can say for most things.
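For the morbidly curious, here is a minimal non-relativistic Boris push in Python, transcribed directly from the update equations above. The array shapes and function signature are our own choices for this sketch, not a fragment of any particular production code.

```python
import numpy as np

def boris_push(x, v, E, B, q, m, dt):
    """Advance particles one step with the (non-relativistic) Boris scheme.

    x, v : (N, 3) arrays of positions and velocities at t_k and t_{k-1/2}
    E, B : (N, 3) arrays of fields already gathered to the particle positions
    q, m : charge and mass of the species; dt : time step
    """
    qp = dt * q / (2.0 * m)                  # q' = Δt q / (2m)

    # Half electric acceleration: u = v_{k-1/2} + q' E_k
    u = v + qp * E

    # Magnetic rotation: h = q' B_k,  s = 2 h / (1 + h²)
    h = qp * B
    s = 2.0 * h / (1.0 + np.sum(h * h, axis=-1, keepdims=True))
    u_prime = u + np.cross(u + np.cross(u, h), s)

    # Second half electric acceleration: v_{k+1/2} = u' + q' E_k
    v_new = u_prime + qp * E

    # Leapfrog position update: x_{k+1} = x_k + Δt v_{k+1/2}
    x_new = x + dt * v_new
    return x_new, v_new
```

The fields E and B are assumed to have already been gathered to the particle positions; in a full code this routine sits between the gather and deposit stages of the loop described earlier.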

The field solver

The methods for solving Maxwell's equations (or any partial differential equation (PDE) for that matter) generally fall into one of three categories:

  • Finite difference methods (FDM): The continuous domain is replaced by a discrete grid of points. At these points, fields are calculated. Derivatives are crudely approximated by the differences between values at neighboring points, transforming the elegant language of calculus into the blunt instrument of algebra. (A one-dimensional sketch follows this list.)
  • Finite element methods (FEM): The domain is tessellated into a discrete mesh of smaller elements. The PDEs are recast as an eigenvalue problem. A trial solution is constructed using basis functions localized to each element, and this solution is then optimized until it reaches the desired accuracy. It's like building a sculpture out of a set of pre-defined blocks.
  • Spectral methods: These methods, such as the fast Fourier transform (FFT), also transform PDEs into an eigenvalue problem. However, the basis functions are high-order and defined globally across the entire domain. The domain itself remains continuous. A trial solution is found and optimized to determine the best parameters. It's a more holistic, and often more arrogant, approach.
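To make the first camp concrete: in the 1D electrostatic case Gauss's law reduces to Poisson's equation, $\mathrm{d}^{2}\phi/\mathrm{d}x^{2}=-\rho/\varepsilon_{0}$, and the second-order central difference turns it into a tridiagonal linear system. The sketch below assumes normalized units and grounded (Dirichlet) boundaries; it illustrates the method, it does not recommend dense linear algebra for production grids.

```python
import numpy as np

def solve_poisson_fdm(rho, dx, eps0=1.0):
    """Solve d²φ/dx² = -ρ/ε0 on a 1D grid with φ = 0 at both ends.

    The second derivative is replaced by the central difference
    (φ[i-1] - 2φ[i] + φ[i+1]) / dx², giving a tridiagonal system A φ = b.
    Returns φ and the electric field E = -dφ/dx (central differences).
    """
    n = len(rho)
    A = np.zeros((n, n))
    np.fill_diagonal(A, -2.0)
    np.fill_diagonal(A[1:], 1.0)      # sub-diagonal
    np.fill_diagonal(A[:, 1:], 1.0)   # super-diagonal
    b = -np.asarray(rho) * dx**2 / eps0

    # Dirichlet boundaries: pin φ = 0 at the first and last grid point.
    A[0, :] = 0.0; A[0, 0] = 1.0; b[0] = 0.0
    A[-1, :] = 0.0; A[-1, -1] = 1.0; b[-1] = 0.0

    phi = np.linalg.solve(A, b)
    E = -np.gradient(phi, dx)
    return phi, E
```

A real code would exploit the tridiagonal structure directly (or use an FFT for periodic boundaries) rather than building and inverting the full matrix.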

Particle and field weighting

The name "particle-in-cell" comes from the crucial step of assigning macroscopic plasma quantities (like number density or current density) from the particles to the grid. This is particle weighting. The particles exist in a continuous domain, but the macro-quantities, like the fields, are only known at the discrete mesh points. To bridge this gap, we assume each particle has a "shape," defined by a shape function $S(\mathbf{x}-\mathbf{X})$, where $\mathbf{x}$ is the particle's coordinate and $\mathbf{X}$ is the observation point on the grid.

Perhaps the simplest and most common choice is the cloud-in-cell (CIC) scheme, a first-order (linear) weighting scheme. It treats the particle not as a point, but as a small, uniformly charged cloud that overlaps with nearby grid points, distributing its charge among them based on proximity. Whatever the scheme, the shape function must satisfy a few non-negotiable conditions: space isotropy, charge conservation, and increasing accuracy (convergence) for higher-order implementations.
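A minimal 1D cloud-in-cell deposit might look like the sketch below: each super-particle splits its charge between the two grid points bracketing it, in proportion to how close it sits to each. The periodic boundaries and the function name are assumptions of this sketch, not features of any particular code.

```python
import numpy as np

def cic_deposit(x, q, dx, ng):
    """First-order (cloud-in-cell) charge deposition on a periodic 1D grid.

    x  : particle positions
    q  : charge carried by each super-particle (scalar or array)
    dx : grid spacing; ng : number of grid points
    """
    rho = np.zeros(ng)
    s = x / dx                        # position in units of cells
    i = np.floor(s).astype(int)       # index of the grid point to the left
    w = s - i                         # fractional distance past that point
    np.add.at(rho, i % ng, q * (1.0 - w) / dx)    # share to the left point
    np.add.at(rho, (i + 1) % ng, q * w / dx)      # share to the right point
    return rho
```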

The fields calculated by the solver are also only defined at the grid points. To figure out the force acting on a particle, which can be anywhere, these fields must be interpolated back to the particle's exact location. This is field weighting:

\[
\mathbf{E}(\mathbf{x})=\sum_{i}\mathbf{E}_{i}\,S(\mathbf{x}_{i}-\mathbf{x}),
\]

where the subscript $i$ labels the grid point. For the simulation to be self-consistent, the method of gathering information from particles onto the grid and the method of interpolating information from the grid back to the particles must be consistent. They both appear in Maxwell's equations, after all. Above all, the field interpolation scheme must conserve momentum. This is typically achieved by using the same weighting scheme for both particles and fields and ensuring the field solver itself has the appropriate spatial symmetry, which prevents a particle from exerting a net force on itself and upholds the action-reaction law.
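Correspondingly, a field gather that reuses exactly the same linear weights as the deposit sketch above might look like the following (again a normalized 1D illustration, not anyone's production routine); using matching weights in both directions is what keeps a particle from pushing itself around.

```python
import numpy as np

def cic_gather(E_grid, x, dx):
    """Interpolate a 1D grid field back to particle positions using the
    same cloud-in-cell weights as the charge deposit."""
    ng = len(E_grid)
    s = x / dx
    i = np.floor(s).astype(int)
    w = s - i
    return (1.0 - w) * E_grid[i % ng] + w * E_grid[(i + 1) % ng]
```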

Collisions

Because the field solver is designed to be free of self-forces, the field generated by a particle within its own cell must decrease as the distance to the particle decreases. This is an artifact, and it leads to an underestimation of inter-particle forces at close range. This deficiency can be patched up by explicitly adding Coulomb collisions between charged particles. Simulating the N-body interaction for every pair in a large system is computationally suicidal, so various Monte Carlo methods have been developed as a workaround. A widely used approach is the binary collision model, where particles are grouped by cell, randomly paired off, and then made to collide computationally.

In a real plasma, a whole zoo of other reactions can occur. These range from simple elastic collisions, like those between charged particles and neutrals, to inelastic collisions, such as electron-neutral ionization, and even chemical reactions. Each requires its own specialized treatment. Most models handling charged-neutral collisions use either a direct Monte-Carlo scheme, where every particle carries its own collision probability, or the null-collision scheme, which uses the maximum collision probability for each species to streamline the process.
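A bare-bones null-collision step could look like the sketch below. The collision-frequency function and the bound nu_max are placeholders you would have to supply from cross-section data, and the collision outcome is reduced to speed-preserving isotropic scattering purely for brevity.

```python
import numpy as np

def null_collision_step(v, nu_max, nu_of_speed, dt, rng=None):
    """Null-collision Monte Carlo step for one species (illustrative only).

    v            : (N, 3) particle velocities
    nu_max       : upper bound on the total collision frequency [1/s]
    nu_of_speed  : callable returning the real collision frequency per speed
    """
    if rng is None:
        rng = np.random.default_rng()
    n = len(v)
    # Every particle is tested against the *maximum* frequency ...
    collides = rng.random(n) < 1.0 - np.exp(-nu_max * dt)
    speed = np.linalg.norm(v[collides], axis=1)
    # ... but only a fraction nu/nu_max of those are real collisions;
    # the rest are "null" collisions that leave the particle untouched.
    real = rng.random(speed.size) < nu_of_speed(speed) / nu_max
    idx = np.flatnonzero(collides)[real]
    # Toy outcome: isotropic scattering that preserves each particle's speed.
    direction = rng.normal(size=(idx.size, 3))
    direction /= np.linalg.norm(direction, axis=1, keepdims=True)
    v[idx] = np.linalg.norm(v[idx], axis=1, keepdims=True) * direction
    return v
```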

Accuracy and stability conditions

As with any simulation, the choices of time step and grid size are not arbitrary suggestions; they are fundamental constraints that dictate whether your simulation produces physics or garbage. The time step $\Delta t$ and grid size $\Delta x$ must be small enough to properly resolve the phenomena you claim to be studying.

For an electrostatic plasma simulation using an explicit time integration scheme (like the ever-popular leapfrog), two critical conditions must be met to ensure the solution doesn't violently explode into numerical instability:

\[
\Delta x<3.4\,\lambda_{D},
\]

\[
\Delta t\leq 2\,\omega_{pe}^{-1}.
\]

These conditions arise from considering the harmonic oscillations of a simple one-dimensional, unmagnetized plasma. The second condition is a hard requirement, but for the sake of energy conservation and sanity, practical applications use a much stricter constraint, replacing the factor of 2 with something an order of magnitude smaller. A time step of $\Delta t\leq 0.1\,\omega_{pe}^{-1}$ is typical. It should come as no surprise that the natural scales of the plasma itself (the Debye length $\lambda_{D}$ and the inverse plasma frequency $\omega_{pe}^{-1}$) dictate the resolution required.

For an explicit electromagnetic plasma simulation, the time step must also respect the Courant–Friedrichs–Lewy (CFL) condition:

\[
\Delta t<\Delta x/c,
\]

where $\Delta x\sim\lambda_{D}$ and $c$ is the speed of light. This is the universe's ultimate speed limit, translated into computational terms: information cannot be allowed to propagate more than one grid cell per time step.
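Putting the three constraints in one place, a quick sanity check might look like the sketch below (SI units; the density, temperature, and step sizes are hypothetical numbers chosen only to make the example run):

```python
import numpy as np

# Physical constants (SI)
e = 1.602176634e-19        # elementary charge [C]
m_e = 9.1093837015e-31     # electron mass [kg]
eps0 = 8.8541878128e-12    # vacuum permittivity [F/m]
kB = 1.380649e-23          # Boltzmann constant [J/K]
c = 2.99792458e8           # speed of light [m/s]

def check_pic_resolution(n_e, T_e, dx, dt):
    """Check a PIC setup against the standard resolution constraints
    for electron density n_e [1/m^3] and temperature T_e [K]."""
    omega_pe = np.sqrt(n_e * e**2 / (eps0 * m_e))       # plasma frequency
    lambda_D = np.sqrt(eps0 * kB * T_e / (n_e * e**2))  # Debye length
    print(f"Debye length        : {lambda_D:.3e} m")
    print(f"1/omega_pe          : {1.0 / omega_pe:.3e} s")
    print(f"dx < 3.4 lambda_D   : {dx < 3.4 * lambda_D}")
    print(f"dt <= 0.1/omega_pe  : {dt <= 0.1 / omega_pe}")
    print(f"CFL  dt < dx/c      : {dt < dx / c}")   # only binding for EM runs

# Hypothetical example: a 1e18 m^-3, 10 eV laboratory plasma; dt is chosen
# small enough to satisfy the electromagnetic CFL limit as well.
check_pic_resolution(n_e=1e18, T_e=10 * e / kB, dx=1e-5, dt=3e-14)
```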

Applications

Within its native habitat of plasma physics, PIC simulation has been thrown at a wide range of problems with considerable success. It has been used to study laser-plasma interactions, the acceleration of electrons and heating of ions in the auroral ionosphere, large-scale phenomena in magnetohydrodynamics, the violent tearing and rejoining of magnetic field lines in magnetic reconnection, and various microinstabilities such as the ion-temperature-gradient mode inside tokamaks—those magnetic doughnuts designed to contain miniature suns. It has also been applied to model vacuum discharges and the complex behavior of dusty plasmas.

Hybrid models also exist, which use the PIC method for a kinetic treatment of certain species while modeling others (those that are conveniently Maxwellian) with a less computationally expensive fluid model.

The reach of PIC simulations has even extended beyond plasma physics, finding applications in problems of solid and fluid mechanics. Apparently, the idea of tracking particles and their interactions is a universally applicable form of punishment.

Electromagnetic particle-in-cell computational applications

Here is a list of tools people have built to perform this task. Make of it what you will.

Computational application | Web site | License | Availability | Canonical reference
Ansys Charge Plus | [17] | Proprietary | Commercially available from Ansys |
SHARP | [18] | Proprietary | | doi:10.3847/1538-4357/aa6d13
ALaDyn | [19] | GPLv3+ | Open Repo: [20] | doi:10.5281/zenodo.49553
EPOCH | [21] | GPLv3 | Open Repo: [22] | doi:10.1088/0741-3335/57/11/113001
FPIC | | Proprietary | Unknown | doi:10.3847/2041-8213/ae06a6
FBPIC | [23] | 3-Clause-BSD-LBNL | Open Repo: [24] | doi:10.1016/j.cpc.2016.02.007
LSP | [25] | Proprietary | Available from ATK | doi:10.1016/S0168-9002(01)00024-9
MAGIC | [26] | Proprietary | Available from ATK | doi:10.1016/0010-4655(95)00010-D
OSIRIS | [27] | GNU AGPL | Open Repo: [28] | doi:10.1007/3-540-47789-6_36
PhotonPlasma | [29] | Unknown | Open Repo: [30] | doi:10.1063/1.4811384
PICCANTE | [31] | GPLv3+ | Open Repo: [32] | doi:10.5281/zenodo.48703
PICLas | [33] | GPLv3+ | Open Repo: [34] | doi:10.1016/j.crme.2014.07.005, doi:10.1063/1.5097638
PICMC | [35] | Proprietary | Available from Fraunhofer IST |
PIConGPU | [36] | GPLv3+ | Open Repo: [37] | doi:10.1145/2503210.2504564
SMILEI | [38] | CeCILL-B | Open Repo: [39] | doi:10.1016/j.cpc.2017.09.024
iPIC3D | [40] | Apache License 2.0 | Open Repo: [41] | doi:10.1016/j.matcom.2009.08.038
The Virtual Laser Plasma Lab (VLPL) | [42] | Proprietary | Unknown | doi:10.1017/S0022377899007515
Tristan v2 | [43] | 3-Clause-BSD | Open source, [44] but also has a private version with QED/radiative [45] modules | doi:10.5281/zenodo.7566725 [46]
VizGrain | [47] | Proprietary | Commercially available from Esgee Technologies Inc. |
VPIC | [48] | 3-Clause-BSD | Open Repo: [49] | doi:10.1063/1.2840133
VSim (Vorpal) | [50] | Proprietary | Available from Tech-X Corporation | doi:10.1016/j.jcp.2003.11.004
Warp | [51] | 3-Clause-BSD-LBNL | Open Repo: [52] | doi:10.1063/1.860024
WarpX | [53] | 3-Clause-BSD-LBNL | Open Repo: [54] | doi:10.1016/j.nima.2018.01.035
ZPIC | [55] | AGPLv3+ | Open Repo: [56] |
ultraPICA | | Proprietary | Commercially available from Plasma Taiwan Innovation Corporation. |

See also