Particle-in-cell

From HandWiki
Short description: Mathematical technique used to solve a certain class of partial differential equations

In plasma physics, the particle-in-cell (PIC) method refers to a technique used to solve a certain class of partial differential equations. In this method, individual particles (or fluid elements) in a Lagrangian frame are tracked in continuous phase space, whereas moments of the distribution such as densities and currents are computed simultaneously on Eulerian (stationary) mesh points.

PIC methods were already in use as early as 1955,[1] even before the first Fortran compilers were available. The method was popularized for plasma simulation in the late 1950s and early 1960s by Buneman, Dawson, Hockney, Birdsall, Morse and others. In plasma physics applications, the method amounts to following the trajectories of charged particles in self-consistent electromagnetic (or electrostatic) fields computed on a fixed mesh.[2]

Technical aspects

For many types of problems, the classical PIC method invented by Buneman, Dawson, Hockney, Birdsall, Morse and others is relatively intuitive and straightforward to implement. This probably accounts for much of its success, particularly for plasma simulation, for which the method typically includes the following procedures:

  • Integration of the equations of motion.
  • Interpolation of charge and current source terms to the field mesh.
  • Computation of the fields on mesh points.
  • Interpolation of the fields from the mesh to the particle locations.
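
The four steps above can be sketched as a minimal one-dimensional electrostatic cycle. All names below are illustrative, and the nearest-grid-point weighting and FFT field solve are simplifying choices (with unit charges and [math]\displaystyle{ \varepsilon_0 = 1 }[/math]), not part of the method's definition:

```python
import numpy as np

def pic_step(x, v, rho_bg, dx, dt, qm, L):
    """One hypothetical PIC cycle for a periodic 1D electrostatic plasma of
    unit-charge particles; qm is the charge-to-mass ratio and rho_bg a
    neutralizing background charge density."""
    ng = int(round(L / dx))
    # 1) interpolate charge to the mesh (nearest-grid-point, for brevity)
    idx = np.floor(x / dx + 0.5).astype(int) % ng
    rho = np.bincount(idx, minlength=ng) / dx + rho_bg
    # 2) compute the field on mesh points: solve dE/dx = rho spectrally
    k = 2 * np.pi * np.fft.fftfreq(ng, d=dx)
    k[0] = 1.0                        # dummy value; the mean field is zeroed
    E_hat = np.fft.fft(rho) / (1j * k)
    E_hat[0] = 0.0
    E = np.real(np.fft.ifft(E_hat))
    # 3) interpolate the field from the mesh to the particle locations
    E_p = E[idx]
    # 4) integrate the equations of motion (leapfrog: v is half-step shifted)
    v = v + qm * E_p * dt
    x = (x + v * dt) % L
    return x, v
```

A production code would replace the nearest-grid-point weighting with a higher-order scheme (see the weighting discussion below) and add diagnostics, but the loop structure is the same.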

Models which include interactions of particles only through the average fields are called PM (particle-mesh). Those which include direct binary interactions are PP (particle-particle). Models with both types of interactions are called PP-PM or P3M.

Since the early days, it has been recognized that the PIC method is susceptible to error from so-called discrete particle noise.[3] This error is statistical in nature, and today it remains less well understood than errors in traditional fixed-grid methods, such as Eulerian or semi-Lagrangian schemes.

Modern geometric PIC algorithms are based on a very different theoretical framework. These algorithms use tools of discrete manifolds, interpolating differential forms, and canonical or non-canonical symplectic integrators to guarantee gauge invariance and conservation of charge and energy-momentum, and, more importantly, the infinite-dimensional symplectic structure of the particle-field system.[4][5] These desired features are attributed to the fact that geometric PIC algorithms are built on the more fundamental field-theoretical framework and are directly linked to the variational principles of physics.

Basics of the PIC plasma simulation technique

Inside the plasma research community, systems of different species (electrons, ions, neutrals, molecules, dust particles, etc.) are investigated. The equations associated with PIC codes are therefore the Lorentz force as the equation of motion, solved in the so-called pusher or particle mover of the code, and Maxwell's equations, which determine the electric and magnetic fields and are calculated in the (field) solver.

Super-particles

The real systems studied are often extremely large in terms of the number of particles they contain. In order to make simulations efficient or possible at all, so-called super-particles are used. A super-particle (or macroparticle) is a computational particle that represents many real particles; it may stand for millions of electrons or ions in the case of a plasma simulation, or, for instance, a vortex element in a fluid simulation. Rescaling the number of particles is permissible because the acceleration from the Lorentz force depends only on the charge-to-mass ratio, so a super-particle follows the same trajectory as a real particle would.

The number of real particles corresponding to a super-particle must be chosen such that sufficient statistics can be collected on the particle motion. If there is a significant difference between the densities of different species in the system (between ions and neutrals, for instance), separate real-to-super-particle ratios can be used for each.

The particle mover

Even with super-particles, the number of simulated particles is usually very large (> [math]\displaystyle{ 10^5 }[/math]), and often the particle mover is the most time-consuming part of PIC, since it has to be done for each particle separately. Thus, the pusher is required to be of high accuracy and speed, and much effort is spent on optimizing the different schemes.

The schemes used for the particle mover can be split into two categories: implicit and explicit solvers. While implicit solvers (e.g. the implicit Euler scheme) calculate the particle velocity from the already updated fields, explicit solvers use only the old force from the previous time step, and are therefore simpler and faster, but require a smaller time step. In PIC simulation the leapfrog method, a second-order explicit method, is used.[6] The Boris algorithm is also used, which cancels out the magnetic field in the Newton-Lorentz equation.[7][8]

For plasma applications, the leapfrog method takes the following form:

[math]\displaystyle{ \frac{\mathbf{x}_{k+1} - \mathbf{x}_{k}}{\Delta t} = \mathbf{v}_{k+1/2}, }[/math]
[math]\displaystyle{ \frac{\mathbf{v}_{k+1/2} - \mathbf{v}_{k-1/2}}{\Delta t} = \frac{q}{m} \left( \mathbf{E}_k + \frac{\mathbf{v}_{k+1/2} + \mathbf{v}_{k-1/2}}{2} \times \mathbf{B}_{k} \right), }[/math]

where the subscript [math]\displaystyle{ k }[/math] refers to "old" quantities from the previous time step, [math]\displaystyle{ k+1 }[/math] to updated quantities from the next time step (i.e. [math]\displaystyle{ t_{k+1} = t_k + \Delta t }[/math]), and velocities are calculated in-between the usual time steps [math]\displaystyle{ t_k }[/math].

The equations of the Boris scheme that are substituted into the above equations are:

[math]\displaystyle{ \mathbf{x}_{k+1} = \mathbf{x}_{k} + {\Delta t} \mathbf{v}_{k+1/2}, }[/math]
[math]\displaystyle{ \mathbf{v}_{k+1/2} = \mathbf{u}' + q' \mathbf{E}_k, }[/math]

with

[math]\displaystyle{ \mathbf{u}' = \mathbf{u} + (\mathbf{u} + (\mathbf{u} \times \mathbf{h})) \times \mathbf{s}, }[/math]
[math]\displaystyle{ \mathbf{u} = \mathbf{v}_{k-1/2} + q' \mathbf{E}_k, }[/math]
[math]\displaystyle{ \mathbf{h} = q' \mathbf{B}_k, }[/math]
[math]\displaystyle{ \mathbf{s} = 2 \mathbf{h}/(1 + h^2) }[/math]

and [math]\displaystyle{ q' = \Delta t \times (q/2m) }[/math].
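
A direct transcription of these equations into code (function and variable names are of course arbitrary) might read:

```python
import numpy as np

def boris_push(x, v, E, B, q, m, dt):
    """Advance one particle by one Boris step; x, v, E, B are 3-vectors,
    with v staggered half a time step behind x as in the leapfrog scheme."""
    qp = dt * q / (2.0 * m)                         # q' = Delta t * q / 2m
    u = v + qp * E                                  # first half electric kick
    h = qp * B                                      # rotation vector h = q'B
    s = 2.0 * h / (1.0 + np.dot(h, h))              # s = 2h / (1 + h^2)
    u_prime = u + np.cross(u + np.cross(u, h), s)   # magnetic rotation
    v_new = u_prime + qp * E                        # second half electric kick
    x_new = x + dt * v_new                          # position update
    return x_new, v_new
```

In a pure magnetic field the step reduces to an exact-magnitude rotation of the velocity, which is the property behind the scheme's good long-term behavior discussed below.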

Because of its excellent long-term accuracy, the Boris algorithm is the de facto standard for advancing a charged particle. It was realized that the excellent long-term accuracy of the nonrelativistic Boris algorithm is due to the fact that it conserves phase-space volume, even though it is not symplectic. The global bound on energy error typically associated with symplectic algorithms still holds for the Boris algorithm, making it an effective algorithm for the multi-scale dynamics of plasmas. It has also been shown [9] that one can improve on the relativistic Boris push to make it both volume-preserving and have a constant-velocity solution in crossed E and B fields.

The field solver

The most commonly used methods for solving Maxwell's equations (or more generally, partial differential equations (PDE)) belong to one of the following three categories:

  • finite difference methods (FDM),
  • finite element methods (FEM),
  • spectral methods.

With the FDM, the continuous domain is replaced with a discrete grid of points, on which the electric and magnetic fields are calculated. Derivatives are then approximated with differences between neighboring grid-point values and thus PDEs are turned into algebraic equations.
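
As a sketch of the FDM approach, consider a hypothetical one-dimensional electrostatic example (not tied to any particular code): the Poisson equation, second derivative of the potential equal to minus the charge density, with grounded boundaries becomes a linear algebraic system on the grid:

```python
import numpy as np

def poisson_fd(rho, dx):
    """Solve phi'' = -rho on a 1D grid with phi = 0 at both ends, using
    second-order central differences; the PDE becomes A phi = -rho."""
    n = len(rho)
    # tridiagonal second-difference operator (Dirichlet boundaries)
    A = (np.diag(-2.0 * np.ones(n)) +
         np.diag(np.ones(n - 1), 1) +
         np.diag(np.ones(n - 1), -1))
    return np.linalg.solve(A / dx**2, -rho)
```

A dense solve is used here only for brevity; real codes exploit the tridiagonal (or sparse) structure of the operator.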

Using FEM, the continuous domain is divided into a discrete mesh of elements. The PDEs are treated as an eigenvalue problem and initially a trial solution is calculated using basis functions that are localized in each element. The final solution is then obtained by optimization until the required accuracy is reached.

Spectral methods, such as those based on the fast Fourier transform (FFT), also transform the PDEs into an eigenvalue problem, but this time the basis functions are high order and defined globally over the whole domain. The domain itself is not discretized in this case; it remains continuous. Again, a trial solution is found by inserting the basis functions into the eigenvalue equation and then optimized to determine the best values of the initial trial parameters.
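
In the same spirit, a periodic spectral field solve takes only a few lines, since each global Fourier mode diagonalizes the Poisson operator (an illustrative one-dimensional sketch, with hypothetical names):

```python
import numpy as np

def poisson_fft(rho, L):
    """Spectral solution of phi'' = -rho with periodic boundaries; in
    Fourier space the equation is algebraic: -k^2 phi_hat = -rho_hat."""
    n = len(rho)
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)   # wavenumbers of the modes
    rho_hat = np.fft.fft(rho)
    phi_hat = np.zeros_like(rho_hat)
    phi_hat[1:] = rho_hat[1:] / k[1:] ** 2       # skip the k = 0 mean mode
    return np.real(np.fft.ifft(phi_hat))
```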

Particle and field weighting

The name "particle-in-cell" originates in the way that plasma macro-quantities (number density, current density, etc.) are assigned to simulation particles (i.e., the particle weighting). Particles can be situated anywhere on the continuous domain, but macro-quantities are calculated only on the mesh points, just as the fields are. To obtain the macro-quantities, one assumes that the particles have a given "shape" determined by the shape function

[math]\displaystyle{ S(\mathbf{x}-\mathbf{X}), }[/math]

where [math]\displaystyle{ \mathbf{x} }[/math] is the coordinate of the particle and [math]\displaystyle{ \mathbf{X} }[/math] the observation point. Perhaps the simplest and most widely used choice for the shape function is the so-called cloud-in-cell (CIC) scheme, which is a first-order (linear) weighting scheme. Whatever the scheme, the shape function has to satisfy the following conditions:[10] space isotropy, charge conservation, and increasing accuracy (convergence) for higher-order terms.
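
A minimal sketch of first-order (CIC) charge assignment on a periodic one-dimensional grid, with hypothetical names, might read:

```python
import numpy as np

def cic_deposit(x, q, dx, ng):
    """Cloud-in-cell charge assignment: each particle of charge q shares
    its charge linearly between the two nearest nodes of a periodic grid."""
    rho = np.zeros(ng)
    s = x / dx
    i = np.floor(s).astype(int)
    frac = s - i                                  # offset from left node
    np.add.at(rho, i % ng, q * (1.0 - frac) / dx)  # weight to left node
    np.add.at(rho, (i + 1) % ng, q * frac / dx)    # weight to right node
    return rho
```

Note that the two weights always sum to the full particle charge, which is the charge-conservation condition above.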

The fields obtained from the field solver are determined only on the grid points and cannot be used directly in the particle mover to calculate the force acting on particles; they must be interpolated via the field weighting:

[math]\displaystyle{ \mathbf{E}(\mathbf{x}) = \sum_{i}\mathbf{E}_i S(\mathbf{x}_i-\mathbf{x}), }[/math]

where the subscript [math]\displaystyle{ i }[/math] labels the grid point. To ensure that the forces acting on particles are self-consistently obtained, the way macro-quantities are calculated from particle positions on the grid points and the way fields are interpolated from grid points to particle positions have to be consistent as well, since both appear in Maxwell's equations. Above all, the field interpolation scheme should conserve momentum. This can be achieved by choosing the same weighting scheme for particles and fields and by ensuring the appropriate space symmetry (i.e. no self-force and fulfillment of the action-reaction law) of the field solver at the same time.[10]
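
A matching field gather, using the same linear weights as the charge assignment so that the momentum-conservation requirement is respected, could look like the following sketch (names are illustrative):

```python
import numpy as np

def cic_gather(E, x, dx):
    """Interpolate the grid field E to particle positions x on a periodic
    1D grid, with the same linear (CIC) weights used for charge deposit."""
    ng = len(E)
    s = x / dx
    i = np.floor(s).astype(int)
    frac = s - i                     # offset from the left node, in cells
    return E[i % ng] * (1.0 - frac) + E[(i + 1) % ng] * frac
```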

Collisions

As the field solver is required to be free of self-forces, inside a cell the field generated by a particle must decrease with decreasing distance from the particle; hence the inter-particle forces inside the cells are underestimated. This can be balanced with the aid of Coulomb collisions between charged particles. Simulating the interaction for every pair in a big system would be computationally too expensive, so several Monte Carlo methods have been developed instead. A widely used method is the binary collision model,[11] in which particles are grouped according to their cell, the particles within each cell are paired randomly, and finally the pairs are collided.
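
The grouping-and-pairing step of the binary collision model can be sketched as follows (the collision kinematics themselves are omitted, and the function name is illustrative):

```python
import numpy as np

def pair_within_cells(cell_index, rng):
    """Group particles by cell and pair them randomly, as in the binary
    collision model; returns index pairs to be collided. In this sketch an
    odd particle in a cell is simply left unpaired."""
    pairs = []
    for c in np.unique(cell_index):
        members = np.flatnonzero(cell_index == c)  # particles in this cell
        rng.shuffle(members)                       # random pairing
        for a, b in zip(members[::2], members[1::2]):
            pairs.append((a, b))
    return pairs
```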

In a real plasma, many other reactions may play a role, ranging from elastic collisions, such as collisions between charged and neutral particles, through inelastic collisions, such as electron-neutral ionization collisions, to chemical reactions, each of them requiring separate treatment. Most of the collision models handling charged-neutral collisions use either the direct Monte Carlo scheme, in which all particles carry information about their collision probability, or the null-collision scheme,[12][13] which does not analyze all particles but uses the maximum collision probability for each charged species instead.

Accuracy and stability conditions

As in every simulation method, in PIC the time step and the grid size must be well chosen so that the time- and length-scale phenomena of interest are properly resolved in the problem. In addition, time step and grid size affect the speed and accuracy of the code.

For an electrostatic plasma simulation using an explicit time integration scheme (e.g. leapfrog, which is most commonly used), two important conditions regarding the grid size [math]\displaystyle{ \Delta x }[/math] and the time step [math]\displaystyle{ \Delta t }[/math] should be fulfilled in order to ensure the stability of the solution:

[math]\displaystyle{ \Delta x \lt 3.4 \lambda_D, }[/math]
[math]\displaystyle{ \Delta t \leq 2 \omega_{pe}^{-1}, }[/math]

which can be derived by considering the harmonic oscillations of a one-dimensional unmagnetized plasma. The latter condition is strictly required, but practical considerations related to energy conservation suggest using a much stricter constraint, where the factor 2 is replaced by a number one order of magnitude smaller. The use of [math]\displaystyle{ \Delta t \leq 0.1 \omega_{pe}^{-1} }[/math] is typical.[10][14] Not surprisingly, the natural time scale in the plasma is given by the inverse plasma frequency [math]\displaystyle{ \omega_{pe}^{-1} }[/math] and the length scale by the Debye length [math]\displaystyle{ \lambda_D }[/math].
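
As a numerical illustration (the density and temperature below are made-up values, not from the text), the two constraints can be checked directly:

```python
import numpy as np

# SI constants and assumed plasma parameters (illustrative only)
eps0 = 8.854e-12           # vacuum permittivity [F/m]
e = 1.602e-19              # elementary charge [C]
m_e = 9.109e-31            # electron mass [kg]
n = 1e16                   # electron density [m^-3] (assumed)
kT_e = 1.602e-19           # electron temperature of 1 eV, in joules (assumed)

w_pe = np.sqrt(n * e**2 / (eps0 * m_e))      # electron plasma frequency [rad/s]
lam_D = np.sqrt(eps0 * kT_e / (n * e**2))    # Debye length [m]

dx = lam_D                  # grid spacing resolving the Debye length
dt = 0.1 / w_pe             # the typical time-step choice quoted above
assert dx < 3.4 * lam_D and dt <= 2.0 / w_pe   # both stability conditions hold
```

For these parameters the plasma frequency is of order [math]\displaystyle{ 10^9 }[/math] rad/s and the Debye length of order [math]\displaystyle{ 10^{-4} }[/math] m, so the time step is sub-nanosecond even for this dilute plasma.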

For an explicit electromagnetic plasma simulation, the time step must also satisfy the CFL condition:

[math]\displaystyle{ \Delta t \lt \Delta x / c , }[/math]

where [math]\displaystyle{ \Delta x \sim \lambda_D }[/math], and [math]\displaystyle{ c }[/math] is the speed of light.

Applications

Within plasma physics, PIC simulation has been used successfully to study laser-plasma interactions, electron acceleration and ion heating in the auroral ionosphere, magnetohydrodynamics, magnetic reconnection, ion-temperature-gradient and other microinstabilities in tokamaks, as well as vacuum discharges and dusty plasmas.

Hybrid models may use the PIC method for the kinetic treatment of some species, while other species (that are Maxwellian) are simulated with a fluid model.

PIC simulations have also been applied outside of plasma physics to problems in solid and fluid mechanics. [15] [16]

Electromagnetic particle-in-cell computational applications

Computational application | Web site | License | Availability | Canonical reference
------------------------- | -------- | ------- | ------------ | -------------------
SHARP | [17] | Proprietary | | doi:10.3847/1538-4357/aa6d13
ALaDyn | [18] | GPLv3+ | Open Repo:[19] | doi:10.5281/zenodo.49553
EPOCH | [20] | GPL | Open to academic users but signup required:[21] | doi:10.1088/0741-3335/57/11/113001
FBPIC | [22] | 3-Clause-BSD-LBNL | Open Repo:[23] | doi:10.1016/j.cpc.2016.02.007
LSP | [24] | Proprietary | Available from ATK | doi:10.1016/S0168-9002(01)00024-9
MAGIC | [25] | Proprietary | Available from ATK | doi:10.1016/0010-4655(95)00010-D
OSIRIS | [26] | GNU AGPL | Open Repo:[27] | doi:10.1007/3-540-47789-6_36
PICCANTE | [28] | GPLv3+ | Open Repo:[29] | doi:10.5281/zenodo.48703
PICLas | [30] | GPLv3+ | Open Repo:[31] | doi:10.1016/j.crme.2014.07.005
PIConGPU | [32] | GPLv3+ | Open Repo:[33] | doi:10.1145/2503210.2504564
SMILEI | [34] | CeCILL-B | Open Repo:[35] | doi:10.1016/j.cpc.2017.09.024
iPIC3D | [36] | Apache License 2.0 | Open Repo:[37] | doi:10.1016/j.matcom.2009.08.038
The Virtual Laser Plasma Lab (VLPL) | [38] | Proprietary | Unknown | doi:10.1017/S0022377899007515
Tristan v2 | [39] | 3-Clause-BSD | Open source,[40] but also has a private version with QED/radiative[41] modules | doi:10.5281/zenodo.7566725 [42]
VizGrain | [43] | Proprietary | Commercially available from Esgee Technologies Inc. |
VPIC | [44] | 3-Clause-BSD | Open Repo:[45] | doi:10.1063/1.2840133
VSim (Vorpal) | [46] | Proprietary | Available from Tech-X Corporation | doi:10.1016/j.jcp.2003.11.004
Warp | [47] | 3-Clause-BSD-LBNL | Open Repo:[48] | doi:10.1063/1.860024
WarpX | [49] | 3-Clause-BSD-LBNL | Open Repo:[50] | doi:10.1016/j.nima.2018.01.035
ZPIC | [51] | AGPLv3+ | Open Repo:[52] |
ultraPICA | | Proprietary | Commercially available from Plasma Taiwan Innovation Corporation. |

References

  1. F.H. Harlow (1955). A Machine Calculation Method for Hydrodynamic Problems. Los Alamos Scientific Laboratory report LAMS-1956. 
  2. Dawson, J.M. (1983). "Particle simulation of plasmas". Reviews of Modern Physics 55 (2): 403–447. doi:10.1103/RevModPhys.55.403. Bibcode1983RvMP...55..403D. 
  3. Hideo Okuda (1972). "Nonphysical noises and instabilities in plasma simulation due to a spatial grid". Journal of Computational Physics 10 (3): 475–486. doi:10.1016/0021-9991(72)90048-4. Bibcode1972JCoPh..10..475O. 
  4. Qin, H. et al. (2016). "Canonical symplectic particle-in-cell method for long-term large-scale simulations of the Vlasov-Maxwell system". Nuclear Fusion 56 (1): 014001. doi:10.1088/0029-5515/56/1/014001. Bibcode2016NucFu..56a4001Q. 
  5. Xiao, J. et al. (2015). "Explicit high-order non-canonical symplectic particle-in-cell algorithms for Vlasov-Maxwell systems". Physics of Plasmas 22 (11): 12504. doi:10.1063/1.4935904. Bibcode2015PhPl...22k2504X. 
  6. Birdsall, Charles K.; A. Bruce Langdon (1985). Plasma Physics via Computer Simulation. McGraw-Hill. ISBN 0-07-005371-5. 
  7. Boris, J.P. (November 1970). "Relativistic plasma simulation-optimization of a hybrid code". Naval Res. Lab., Washington, D.C.. pp. 3–67. 
  8. Qin, H. (2013). "Why is Boris algorithm so good?". Physics of Plasmas 20 (5): 084503. doi:10.1063/1.4818428. Bibcode2013PhPl...20h4503Q. https://digital.library.unt.edu/ark:/67531/metadc843445/m2/1/high_res_d/1090047.pdf. 
  9. Higuera, Adam V.; John R. Cary (2017). "Structure-preserving second-order integration of relativistic charged particle trajectories in electromagnetic fields". Physics of Plasmas 24 (5): 052104. 
  10. 10.0 10.1 10.2 Tskhakaya, David (2008). "Chapter 6: The Particle-in-Cell Method". in Fehske, Holger; Schneider, Ralf; Weiße, Alexander. Computational Many-Particle Physics. Lecture Notes in Physics 739. 739. Springer, Berlin Heidelberg. doi:10.1007/978-3-540-74686-7. ISBN 978-3-540-74685-0. https://cds.cern.ch/record/1105877. 
  11. Takizuka, Tomonor; Abe, Hirotada (1977). "A binary collision model for plasma simulation with a particle code". Journal of Computational Physics 25 (3): 205–219. doi:10.1016/0021-9991(77)90099-7. Bibcode1977JCoPh..25..205T. 
  12. Birdsall, C.K. (1991). "Particle-in-cell charged-particle simulations, plus Monte Carlo collisions with neutral atoms, PIC-MCC". IEEE Transactions on Plasma Science 19 (2): 65–85. doi:10.1109/27.106800. ISSN 0093-3813. Bibcode1991ITPS...19...65B. 
  13. Vahedi, V.; Surendra, M. (1995). "A Monte Carlo collision model for the particle-in-cell method: applications to argon and oxygen discharges". Computer Physics Communications 87 (1–2): 179–198. doi:10.1016/0010-4655(94)00171-W. ISSN 0010-4655. Bibcode1995CoPhC..87..179V. https://zenodo.org/record/1253854. 
  14. Tskhakaya, D.; Matyash, K.; Schneider, R.; Taccogna, F. (2007). "The Particle-In-Cell Method". Contributions to Plasma Physics 47 (8–9): 563–594. doi:10.1002/ctpp.200710072. Bibcode2007CoPP...47..563T. 
  15. Liu, G.R.; M.B. Liu (2003). Smoothed Particle Hydrodynamics: A Meshfree Particle Method. World Scientific. ISBN 981-238-456-1. 
  16. Byrne, F. N.; Ellison, M. A.; Reid, J. H. (1964). "The particle-in-cell computing method for fluid dynamics". Methods Comput. Phys. 3 (3): 319–343. doi:10.1007/BF00230516. Bibcode1964SSRv....3..319B. 
  17. Shalaby, Mohamad; Broderick, Avery E.; Chang, Philip; Pfrommer, Christoph; Lamberts, Astrid; Puchwein, Ewald (23 May 2017). "SHARP: A Spatially Higher-order, Relativistic Particle-in-Cell Code". The Astrophysical Journal 841 (1): 52. doi:10.3847/1538-4357/aa6d13. Bibcode2017ApJ...841...52S. 
  18. "ALaDyn". https://aladyn.github.io/ALaDyn/. 
  19. "ALaDyn: A High-Accuracy PIC Code for the Maxwell-Vlasov Equations". 18 November 2017. https://github.com/ALaDyn/ALaDyn. 
  20. "Codes". http://www.ccpp.ac.uk/codes.html. 
  21. "Sign in". https://cfsa-pmw.warwick.ac.uk. 
  22. "FBPIC documentation — FBPIC 0.6.0 documentation". https://fbpic.github.io/. 
  23. "fbpic: Spectral, quasi-3D Particle-In-Cell code, for CPU and GPU". 8 November 2017. https://github.com/fbpic/fbpic. 
  24. "Orbital ATK". http://www.mrcwdc.com/lsp. 
  25. "Orbital ATK". http://www.mrcwdc.com/magic. 
  26. "OSIRIS open-source - OSIRIS". https://osiris-code.github.io/. 
  27. "osiris-code/osiris: OSIRIS Particle-In-Cell code". https://github.com/osiris-code/osiris/. 
  28. "Piccante". https://aladyn.github.io/piccante/. 
  29. "piccante: a spicy massively parallel fully-relativistic electromagnetic 3D particle-in-cell code". 14 November 2017. https://github.com/ALaDyn/piccante. 
  30. "PICLas". https://piclas.readthedocs.io/. 
  31. "piclas-framework/piclas". https://github.com/piclas-framework/piclas. 
  32. "PIConGPU - Particle-in-Cell Simulations for the Exascale Era - Helmholtz-Zentrum Dresden-Rossendorf, HZDR". http://picongpu.hzdr.de/. 
  33. "ComputationalRadiationPhysics / PIConGPU — GitHub". 28 November 2017. https://github.com/ComputationalRadiationPhysics/picongpu. 
  34. "Smilei — A Particle-In-Cell code for plasma simulation". http://www.maisondelasimulation.fr/smilei/. 
  35. "SmileiPIC / Smilei — GitHub". 29 October 2019. https://github.com/SmileiPIC/Smilei. 
  36. Markidis, Stefano; Lapenta, Giovanni; Rizwan-uddin (17 Oct 2009). "Multi-scale simulations of plasma with iPIC3D". Mathematics and Computers in Simulation 80 (7): 1509. doi:10.1016/j.matcom.2009.08.038. 
  37. "iPic3D — GitHub". 31 January 2020. https://github.com/CmPA/iPic3D. 
  38. Dreher, Matthias. "Relativistic Laser Plasma". http://www2.mpq.mpg.de/lpg/research/RelLasPlas/Rel-Las-Plas.html. 
  39. "Tristan v2 wiki | Tristan v2". https://princetonuniversity.github.io/tristan-v2/. 
  40. "Tristan v2 public github page". https://github.com/PrincetonUniversity/tristan-v2/. 
  41. "QED Module | Tristan v2". https://princetonuniversity.github.io/tristan-v2/tristanv2-qed.html. 
  42. "Tristan v2: Citation.md". https://github.com/PrincetonUniversity/tristan-v2/blob/master/CITATION. 
  43. "VizGrain". http://esgeetech.com/products/vizgrain-particle-modeling/. 
  44. "VPIC". https://github.com/lanl/vpic. 
  45. "LANL / VPIC — GitHub". https://github.com/lanl/vpic. 
  46. "Tech-X - VSim". https://txcorp.com/vsim. 
  47. "Warp". http://warp.lbl.gov/. 
  48. "berkeleylab / Warp — Bitbucket". https://bitbucket.org/berkeleylab/warp. 
  49. "WarpX Documentation". https://ecp-warpx.github.io. 
  50. "ECP-WarpX / WarpX — GitHub". https://github.com/ECP-WarpX/WarpX. 
  51. "Educational Particle-In-Cell code suite". https://picksc.idre.ucla.edu/software/educational/zpic/. 
  52. "ricardo-fonseca / ZPIC — GitHub". https://github.com/ricardo-fonseca/zpic. 

Bibliography

  • Birdsall, Charles K.; A. Bruce Langdon (1985). Plasma Physics via Computer Simulation. McGraw-Hill. ISBN 0-07-005371-5. 
