There are many beam physics simulation codes. A key component of these simulations is the inclusion of the electromagnetic interactions between particles in the beam, also known as space charge forces. One way to compute space charge forces is the particle-in-cell (PIC) method. This post implements a basic version of the PIC method in Python.
Theoretical model
We’ll use bunch to refer to a group of particles in three-dimensional (3D) space, and we’ll use a local Cartesian coordinate system whose origin moves with the center of the bunch as shown below:
The \(s\) coordinate specifies the position of the bunch in the accelerator, and the path can be curved. Now for a few assumptions and approximations. First, assume all particles in the bunch move at a constant velocity \(\beta c\), where \(c\) is the speed of light. We then make the paraxial approximation. It’s conventional to use the slope \(x' = dx/ds\) instead of the velocity, and the paraxial approximation assumes this slope is very small. We usually report slopes in milliradians since \(\tan\theta \approx \theta\) for small angles; a particle with \(x' = 1\) mrad drifts 1 mm transversely for every meter traveled. Next we assume that the transverse (\(x\)-\(y\)) size of the bunch varies slowly along the \(s\) axis. If this is true and we look at the electric field in a transverse slice of the bunch, there won’t be much difference between the true field and the field of an infinitely long, uniform-density cylinder. Our focus will be on the transverse dynamics of such a slice, so we’ll treat each “particle” as an infinite line of charge. The figure below illustrates this approximation.
Another approximation is to neglect any magnetic fields generated by the beam, which is again valid if the transverse velocities are very small relative to \(\beta c\). All this being said, the equations of motion without any external forces, i.e., in free space, can be written as

\[
\mathbf{x}'' = \frac{q \mathbf{E}}{m c^2 \beta^2 \gamma^3},
\tag{1}\]
where \(\mathbf{x} = [x, y]^T\) is the coordinate vector, \(\mathbf{E} = [E_x, E_y]^T\) is the self-generated electric field, \(q\) and \(m\) are the particle charge and mass, and \(\gamma = \left({1 - \beta^2}\right)^{-1/2}\). Let’s first address the factor \(\gamma^{-3}\) in the equation of motion, which means that the space charge force goes to zero as the velocity approaches the speed of light. This is because parallel moving charges generate an attractive magnetic force which grows with velocity, completely cancelling the electric force in the limit \(v \rightarrow c\).
One may ask: what about the rest frame, in which there is no magnetic field? But special relativity says that electromagnetic fields change with reference frame. Using the transformations defined here, you can quickly prove that

\[
F_x^{\mathrm{lab}} = q \left( E_x^{\mathrm{lab}} - \beta c B_y^{\mathrm{lab}} \right) = q \gamma E_x^{\mathrm{rest}} \left( 1 - \beta^2 \right) = \frac{q E_x^{\mathrm{rest}}}{\gamma};
\tag{2}\]

the net transverse force in the lab frame is the rest-frame electric force reduced by a factor of \(\gamma\).
This inverse relationship between velocity and the space charge force has real-life consequences. It tells us that space charge is important if 1) the beam is very intense, meaning there are many particles in a small area, or 2) the beam has a low energy, meaning the factor \(\gamma^{-3}\) is not far from one. For example, space charge can usually be ignored in electron beams, which move near the speed of light at very modest energies due to their tiny mass, but it is significant in high-intensity, low-energy hadron accelerators such as FRIB, SNS, and ESS.
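To put rough numbers on this (my own back-of-the-envelope comparison, with illustrative energies), we can evaluate the suppression factor \(1/(\beta^2 \gamma^3)\) from Equation 1 for a 10 MeV electron and a 1 GeV proton:

import numpy as np

# Space charge suppression factor 1 / (beta^2 * gamma^3) from Equation 1.
# Masses in GeV/c^2, kinetic energies in GeV (illustrative values).
for name, mass, kin_energy in [("10 MeV electron", 0.511e-3, 0.010),
                               ("1 GeV proton", 0.938, 1.0)]:
    gamma = 1.0 + kin_energy / mass
    beta = np.sqrt(1.0 - 1.0 / gamma**2)
    print(f"{name}: 1/(beta^2 gamma^3) = {1.0 / (beta**2 * gamma**3):.2e}")

The electron's factor is roughly three orders of magnitude smaller than the proton's, even though the electron carries far less kinetic energy.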
We should now address the difficulty in determining the evolution of this system: the force on a particle in an \(n\)-particle bunch depends on the positions of the other \(n - 1\) particles. The approach of statistical mechanics to this problem is to introduce a distribution function \(f(\mathbf{x}, \mathbf{x}', s)\) which gives the particle density at axial position \(s\) and phase space coordinates \(\mathbf{x}\), \(\mathbf{x}'\). The Vlasov-Poisson system of equations determines the evolution of \(f\) as long as we ignore collisions between particles:

\[
\frac{\partial f}{\partial s} + \mathbf{x}' \cdot \frac{\partial f}{\partial \mathbf{x}} + \frac{q \mathbf{E}}{m c^2 \beta^2 \gamma^3} \cdot \frac{\partial f}{\partial \mathbf{x}'} = 0,
\tag{3}\]

\[
\nabla^2 \phi = -\frac{\rho}{\varepsilon_0}, \qquad \mathbf{E} = -\nabla \phi.
\tag{4}\]
Finally, the transverse charge density \(\rho\) is determined by
\[
\rho = q \int f \, dx' \, dy'.
\tag{5}\]
Although these equations are easy to write down, they are generally impossible to solve analytically. We need to turn to a computer for help.
Computational method
The Vlasov equation could be solved directly, but this is difficult, especially in 2D or 3D. On the other end of the spectrum, the notion of a fluid in phase space could be abandoned and each particle could be tracked individually, computing the forces using direct sums. But this is infeasible with current hardware; the time complexity would be \(O(n^2)\), where \(n\) is the number of particles, and \(n\) may be on the order of \(10^{14}\). The particle-in-cell (PIC) method is a sort of combination of these two approaches. The idea is to track a group of macroparticles according to Equation 1, each of which represents a large number of real particles. The fields, however, are solved from Equation 4. The key step is transforming back and forth between a discrete and continuous representation of the bunch. The simulation loop for the PIC method is shown below.
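For contrast, here is a toy sketch (my own illustration, not part of the PIC code) of the direct pairwise calculation; the nested loop over all pairs is what makes it \(O(n^2)\). For 2D line charges, the field of each particle falls off as \(1/r\):

import numpy as np

def direct_fields(positions):
    """Unscaled pairwise electric field for n line charges: O(n^2)."""
    n = positions.shape[0]
    fields = np.zeros((n, 2))
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            r = positions[i] - positions[j]
            fields[i] += r / np.dot(r, r)  # line charge: E ~ r_hat / |r|
    return fields

# Even a few thousand particles make this painfully slow in pure Python.
fields = direct_fields(np.random.normal(size=(500, 2)))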
In the next sections I will discuss each of these steps and implement them in Python. The hidden cell below shows all the imports needed to run the code.
Imports
import Cython
%load_ext cython
import numpy as np
from matplotlib import animation
from matplotlib import pyplot as plt
from matplotlib.lines import Line2D
from matplotlib.patches import Ellipse
import proplot as pplt
from scipy.fft import fft2
from scipy.fft import ifft2
from scipy.interpolate import RegularGridInterpolator
from scipy.integrate import odeint
from scipy.stats import truncnorm
import seaborn as sns
from tqdm.notebook import trange
Let’s first create a Bunch class, which is a simple container for the bunch coordinates.
class Bunch:
    """Container for four-dimensional phase space distribution.

    Attributes
    ----------
    intensity : float
        Number of physical particles in the bunch.
    length : float
        Length of the bunch [m].
    mass, kin_energy : float
        Mass [GeV/c^2] and kinetic energy [GeV] per particle.
    nparts : int
        Number of macroparticles in the bunch.
    X : ndarray, shape (nparts, 4)
        Array of particle coordinates. Columns are [x, x', y, y'].
        Units are meters and radians.
    positions : ndarray, shape (nparts, 2)
        Just the x and y positions (for convenience).
    """
    def __init__(self, intensity=1e14, length=250., mass=0.938, kin_energy=1.0):
        self.intensity, self.length = intensity, length
        self.mass, self.kin_energy = mass, kin_energy
        self.gamma = 1 + (kin_energy / mass)  # Lorentz factor
        self.beta = np.sqrt(1 - (1 / self.gamma)**2)  # v/c
        r0 = 1.53469e-18  # classical proton radius [m]
        self.perveance = 2 * r0 * intensity / (length * self.beta**2 * self.gamma**3)
        self.nparts = 0
        self.compute_macrosize()
        self.X = None

    @property
    def positions(self):
        # Recomputed on each access so that it tracks in-place updates to X.
        # (Fancy indexing returns a copy, so a stored array would go stale.)
        return None if self.X is None else self.X[:, [0, 2]]

    def compute_macrosize(self):
        """Update the macrosize (physical particles per macroparticle)."""
        self.macrosize = self.intensity // self.nparts if self.nparts > 0 else 0

    def fill(self, X):
        """Fill with particles. X is the 4D phase space coordinate array."""
        self.X = X if self.X is None else np.vstack([self.X, X])
        self.nparts = self.X.shape[0]
        self.compute_macrosize()

    def compute_extremum(self):
        """Get extreme x and y coordinates."""
        self.xmin, self.ymin = np.min(self.positions, axis=0)
        self.xmax, self.ymax = np.max(self.positions, axis=0)
        self.xlim, self.ylim = (self.xmin, self.xmax), (self.ymin, self.ymax)
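As a quick usage sketch (the beam and distribution parameters here are arbitrary, chosen only for illustration), we can fill a bunch with Gaussian-sampled coordinates:

bunch = Bunch(intensity=1e14, length=250.0, mass=0.938, kin_energy=1.0)
X = np.random.normal(scale=[0.010, 0.001, 0.010, 0.001], size=(10000, 4))  # [x, x', y, y']
bunch.fill(X)
print(bunch.nparts, bunch.macrosize, bunch.perveance)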
Weighting
Starting from a group of macroparticles, we need to produce a charge density \(\rho_{i,j}\) on a grid. The simplest approach is the nearest grid point (NGP) method, which, as the name suggests, assigns the full particle charge to the closest grid point. This is known as zero-order weighting; although it is very fast and easy to implement, it is rarely used because it can lead to significant noise. A better method called cloud-in-cell (CIC) treats each particle as a rectangular, uniform-density cloud of charge with dimensions equal to the grid spacing. A fractional part of the charge is assigned to each grid point based on the fraction of the cloud overlapping with the corresponding cell. This can be thought of as first-order weighting. To get a sense of what these methods are doing (in 1D), we can slide a particle across a cell and plot the resulting density of the cell at each position, thus giving an effective particle shape.
Code
def shape_func(u, v, cell_width, method='NGP'):
    S, diff = 0.0, np.abs(u - v)
    if method.upper() == 'NGP':
        S = 1.0 if diff < (0.5 * cell_width) else 0.0
    elif method.upper() == 'CIC':
        S = 1.0 - diff / cell_width if diff < cell_width else 0.0
    return S / cell_width

fig, ax = pplt.subplots(figsize=(4.0, 1.5))
xvals = np.linspace(-1.0, 1.0, 1000)
for i, method in enumerate(['NGP', 'CIC']):
    densities = [shape_func(x, 0.0, 1.0, method) for x in xvals]
    ax.plot(xvals, densities, color='black', ls=['-', ':'][i], label=method)
ax.format(ylim=(0.0, 1.1), xlabel=r'$(x - x_k) \,/\, \Delta x$', ylabel='Density')
ax.legend(loc='r', ncols=1, framealpha=0.0)
plt.close()
The NGP method leads to a discontinuous boundary, while the CIC method leads to a continuous boundary (but a discontinuous derivative). There are also higher-order methods which lead to a smooth boundary, but I don’t cover those here.
We also need to perform the inverse operation: given the electric field at each grid point, interpolate the value at each particle position. The same methods apply here. NGP just uses the electric field at the nearest grid point, while CIC weights the four nearest grid points. The following Grid class implements the CIC method. Notice that Cython is used for the for-loop in the distribute method. I couldn’t figure out a way to vectorize this operation, and in pure Python the loop took about 90% of the runtime for a single simulation step. Using Cython gave a significant performance boost.
%%cython
import numpy as np
from scipy.interpolate import RegularGridInterpolator


class Grid:
    """Class for 2D grid.

    Attributes
    ----------
    xmin, ymin, xmax, ymax : float
        Minimum and maximum coordinates.
    Nx, Ny : int
        Number of grid points.
    dx, dy : float
        Spacing between grid points.
    x, y : ndarray, shape (Nx,) or (Ny,)
        Positions of each grid point.
    cell_area : float
        Area of each cell.
    """
    def __init__(self, xlim=(-1.0, 1.0), ylim=(-1.0, 1.0), size=(64, 64)):
        self.xlim = xlim
        self.ylim = ylim
        (self.xmin, self.xmax) = self.xlim
        (self.ymin, self.ymax) = self.ylim
        self.size = size
        (self.Nx, self.Ny) = size
        self.dx = (self.xmax - self.xmin) / float(self.Nx - 1)
        self.dy = (self.ymax - self.ymin) / float(self.Ny - 1)
        self.cell_area = self.dx * self.dy
        self.x = np.linspace(self.xmin, self.xmax, self.Nx)
        self.y = np.linspace(self.ymin, self.ymax, self.Ny)

    def set_lims(self, xlim, ylim):
        """Set the min and max grid coordinates."""
        self.__init__(xlim=xlim, ylim=ylim, size=self.size)

    def zeros(self):
        """Create array of zeros with same size as the grid."""
        return np.zeros(self.size)

    def distribute(self, positions):
        """Distribute points on the grid using the cloud-in-cell (CIC) method.

        Parameters
        ----------
        positions : ndarray, shape (n, 2)
            List of (x, y) positions.

        Returns
        -------
        rho : ndarray, shape (Nx, Ny)
            The value rho[i, j] gives the number of macroparticles in the
            i, j cell.
        """
        # Compute area overlapping with 4 nearest neighbors (A1, A2, A3, A4)
        ivals = np.floor((positions[:, 0] - self.xmin) / self.dx).astype(int)
        jvals = np.floor((positions[:, 1] - self.ymin) / self.dy).astype(int)
        ivals[ivals > self.Nx - 2] = self.Nx - 2
        jvals[jvals > self.Ny - 2] = self.Ny - 2
        x_i, x_ip1 = self.x[ivals], self.x[ivals + 1]
        y_j, y_jp1 = self.y[jvals], self.y[jvals + 1]
        _A1 = (positions[:, 0] - x_i) * (positions[:, 1] - y_j)
        _A2 = (x_ip1 - positions[:, 0]) * (positions[:, 1] - y_j)
        _A3 = (positions[:, 0] - x_i) * (y_jp1 - positions[:, 1])
        _A4 = (x_ip1 - positions[:, 0]) * (y_jp1 - positions[:, 1])
        # Distribute fractional areas
        rho = self.zeros()
        cdef double[:, :] rho_view = rho
        cdef int i, j
        for i, j, A1, A2, A3, A4 in zip(ivals, jvals, _A1, _A2, _A3, _A4):
            rho_view[i, j] += A4
            rho_view[i + 1, j] += A3
            rho_view[i, j + 1] += A2
            rho_view[i + 1, j + 1] += A1
        return rho / self.cell_area

    def interpolate(self, grid_vals, positions):
        """Interpolate values from the grid using the CIC method.

        Parameters
        ----------
        grid_vals : ndarray, shape (Nx, Ny)
            Scalar value at each grid point.
        positions : ndarray, shape (n, 2)
            List of (x, y) positions.

        Returns
        -------
        int_vals : ndarray, shape (n,)
            Interpolated value at each position.
        """
        int_func = RegularGridInterpolator((self.x, self.y), grid_vals)
        return int_func(positions)

    def gradient(self, grid_vals):
        """Compute gradient using 2nd order centered differencing.

        Parameters
        ----------
        grid_vals : ndarray, shape (Nx, Ny)
            Scalar values at each grid point.

        Returns
        -------
        gradx, grady : ndarray, shape (Nx, Ny)
            The x and y gradient at each grid point.
        """
        return np.gradient(grid_vals, self.dx, self.dy)
It should also be mentioned that the field interpolation method should be the same as the charge deposition method; if this is not true, it is possible for a particle to exert a force on itself! Let’s test the method with a \(64 \times 64\) grid.
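A minimal version of that test (a sketch; the plotting that produced the figure is omitted) checks two properties: the deposited charge sums to the number of particles, and linear functions are interpolated exactly:

grid = Grid(xlim=(-1.0, 1.0), ylim=(-1.0, 1.0), size=(64, 64))
positions = np.random.uniform(-0.5, 0.5, size=(10000, 2))

# Total deposited charge should equal the number of macroparticles.
rho = grid.distribute(positions)
print(np.isclose(np.sum(rho) * grid.cell_area, positions.shape[0]))  # True

# CIC (linear) interpolation reproduces a linear function exactly.
fvals = grid.x[:, None] + grid.y[None, :]  # f(x, y) = x + y on the grid
print(np.allclose(grid.interpolate(fvals, positions), positions.sum(axis=1)))  # True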
Field solver
The next step is to compute the electric potential from the charge density on the grid. Discretizing Equation 4 with centered differences gives

\[
\frac{\phi_{i+1,j} - 2\phi_{i,j} + \phi_{i-1,j}}{\Delta_x^2} + \frac{\phi_{i,j+1} - 2\phi_{i,j} + \phi_{i,j-1}}{\Delta_y^2} = -\frac{\rho_{i,j}}{\varepsilon_0}
\tag{6}\]

for a grid with spacing \(\Delta_x\) and \(\Delta_y\). There are multiple paths to a solution; we will focus on the method implemented in PyORBIT, which utilizes the Fourier convolution theorem. Let’s briefly go over this method. The potential from an infinite line of elementary charges at the origin with number density \(\lambda\) is

\[
\phi(\mathbf{x}) = -\frac{\lambda e}{2\pi\varepsilon_0} \ln{|\mathbf{x}|},
\tag{7}\]

and the potential from the entire distribution follows by superposition:

\[
\phi(\mathbf{x}) = -\frac{1}{2\pi\varepsilon_0} \int \rho(\mathbf{y}) \ln{|\mathbf{x} - \mathbf{y}|} \, d^2\mathbf{y}.
\tag{8}\]
Note that \(\mathbf{y}\) is just a dummy variable. By letting \(G(\mathbf{x} - \mathbf{y}) = -\ln{|\mathbf{x} - \mathbf{y}|}\) and \(\rho(\mathbf{x}) = \delta(\mathbf{x})\), we recover Equation 7 up to a scaling factor; in general we have

\[
\phi(\mathbf{x}) = \int G(\mathbf{x} - \mathbf{y}) \, \rho(\mathbf{y}) \, d^2\mathbf{y} = (G * \rho)(\mathbf{x}).
\tag{9}\]
In this form the potential is a convolution (represented by \(*\)) of the charge density \(\rho\) with \(G\), which is called the Green’s function. On the grid this will look like

\[
\phi_{i,j} = \sum_{i'} \sum_{j'} G_{i - i', j - j'} \, \rho_{i', j'}.
\tag{10}\]
This solves the problem in \(O(N^2)\) time complexity for \(N\) grid points. This is already much faster than a direct force calculation but could still get expensive for fine grids. We can speed things up by exploiting the convolution theorem, which says that the Fourier transform of a convolution of two functions is equal to the product of their Fourier transforms. The discrete Fourier transform is defined by

\[
\hat{g}_{m,n} = \sum_{j=0}^{N_x - 1} \sum_{k=0}^{N_y - 1} g_{j,k} \, e^{-2\pi i \left( m j / N_x + n k / N_y \right)},
\tag{11}\]

so that Equation 10 becomes

\[
\hat{\phi}_{m,n} = \hat{G}_{m,n} \, \hat{\rho}_{m,n},
\tag{12}\]
where the hat represents the discrete Fourier transform. The time complexity can be reduced to \(O\left(N \log N\right)\) with the FFT algorithm at our disposal; the potential is then recovered with an inverse transform.
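As a quick sanity check (my own addition, not part of the solver), we can verify on a small grid that the FFT product reproduces a direct circular convolution:

import numpy as np
from scipy.fft import fft2, ifft2

rng = np.random.default_rng(0)
N = 8
G = rng.random((N, N))
rho = rng.random((N, N))

# Direct circular convolution: O(N^2) in the number of grid points.
phi_direct = np.zeros((N, N))
for i in range(N):
    for j in range(N):
        for ip in range(N):
            for jp in range(N):
                phi_direct[i, j] += G[(i - ip) % N, (j - jp) % N] * rho[ip, jp]

# FFT-based convolution: O(N log N).
phi_fft = ifft2(fft2(G) * fft2(rho)).real

print(np.allclose(phi_direct, phi_fft))  # True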
There is a caveat to this method: Equation 10 must be a circular convolution in order to use the FFT algorithm, which means \(G\) must be periodic. But the beam is in free space (we’ve neglected any conducting boundary), so this is not true. We can make it true by doubling the grid size in each dimension. We then make \(G\) a mirror reflection in the new quadrants so that it is periodic, and also set the charge density equal to zero in these regions. After running the method on this larger grid, the potential in the new quadrants will be unphysical; however, the potential in the original quadrant will be correct. There are also some tricks we can play to reduce the space complexity, and in the end doubling the grid size is not much of a price to pay for the gain in speed. The method is implemented in the PoissonSolver class.
class PoissonSolver:
    """Class to solve Poisson's equation on a 2D grid.

    Attributes
    ----------
    rho, phi, G : ndarray, shape (2*Nx, 2*Ny)
        The density (rho), potential (phi), and Green's function (G) at each
        grid point on a doubled grid. Only one quadrant (i < Nx, j < Ny)
        corresponds to the real potential.
    """
    def __init__(self, grid):
        self.grid = grid
        new_shape = (2 * self.grid.Nx, 2 * self.grid.Ny)
        self.rho = np.zeros(new_shape)
        self.G = np.zeros(new_shape)
        self.phi = np.zeros(new_shape)

    def set_grid(self, grid):
        self.__init__(grid)

    def compute_greens_function(self):
        """Compute Green's function on doubled grid."""
        Nx, Ny = self.grid.Nx, self.grid.Ny
        Y, X = np.meshgrid(self.grid.x - self.grid.xmin, self.grid.y - self.grid.ymin)
        # -0.5 * log(r^2) = -log(r); the `where` clause avoids log(0) at the origin.
        self.G[:Nx, :Ny] = -0.5 * np.log(X**2 + Y**2, out=np.zeros_like(X), where=(X + Y > 0))
        # Mirror-reflect into the new quadrants so that G is periodic.
        self.G[Nx:, :] = np.flip(self.G[:Nx, :], axis=0)
        self.G[:, Ny:] = np.flip(self.G[:, :Ny], axis=1)

    def get_potential(self, rho):
        """Compute the scaled electric potential on the grid.

        Parameters
        ----------
        rho : ndarray, shape (Nx, Ny)
            Number of macroparticles at each grid point.

        Returns
        -------
        phi : ndarray, shape (Nx, Ny)
            Scaled electric potential at each grid point.
        """
        Nx, Ny = self.grid.Nx, self.grid.Ny
        self.rho[:Nx, :Ny] = rho
        self.compute_greens_function()
        self.phi = ifft2(fft2(self.G) * fft2(self.rho)).real
        return self.phi[:Nx, :Ny]
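A sketch of how the solver is driven (this mirrors what the Simulation class does later; the particle distribution here is an arbitrary example):

grid = Grid(xlim=(-1.0, 1.0), ylim=(-1.0, 1.0), size=(64, 64))
solver = PoissonSolver(grid)
positions = np.random.uniform(-0.5, 0.5, size=(10000, 2))
rho = grid.distribute(positions)  # macroparticle density on the original grid
phi = solver.get_potential(rho)   # scaled potential on the original grid
Ex, Ey = grid.gradient(-phi)      # E = -grad(phi)
# solver.phi holds the full doubled-grid potential shown in the figure below.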
Running the algorithm gives the following potential on the doubled grid:
Particle pusher
All we need to do in this step is integrate the equations of motion. A common method is leapfrog integration, in which the position and velocity are integrated out of phase as follows:
\[
m \left(\frac{\mathbf{v}_{i+1/2} - \mathbf{v}_{i-1/2}}{\Delta_t}\right) = \mathbf{F}(\mathbf{x}_i),
\tag{14}\]

\[
\frac{\mathbf{x}_{i+1} - \mathbf{x}_{i}}{\Delta_t} = \mathbf{v}_{i+1/2}.
\tag{15}\]
Note that a different scheme must be used when velocity-dependent forces are present. Leapfrog is a symplectic integrator, meaning it exactly preserves phase space volume; in practice the energy error stays bounded rather than drifting. It is also second-order accurate, meaning that its error is proportional to the square of the step size \(\Delta_t\). Finally, it is time-reversible. The only complication is that, because the velocity and position are out of phase, we need to push the velocity back one half-step before starting the simulation and push it one half-step forward when taking a measurement.
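Here is a minimal 1D illustration of this bookkeeping (a toy harmonic oscillator of my own choosing, with \(m = k = 1\)):

def force(x):
    return -x  # harmonic oscillator

dt = 0.1
x, v = 1.0, 0.0
v -= 0.5 * dt * force(x)          # desync: push velocity back to v_{-1/2}
for _ in range(1000):
    v += dt * force(x)            # kick: v_{i-1/2} -> v_{i+1/2}
    x += dt * v                   # drift: x_i -> x_{i+1}
v_sync = v + 0.5 * dt * force(x)  # resync before measuring
print(0.5 * (x**2 + v_sync**2))   # stays near the initial energy of 0.5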
Putting it all together
Simulation loop
We have all the tools to implement the simulation loop. While \(s < s_{\mathrm{max}}\), we do the following (a compact sketch of one pass follows the list):
Compute the charge density on the grid.
Find the electric potential on the grid.
Interpolate the electric field at the particle positions.
Update the particle positions.
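In code, one pass of the loop looks roughly like this (a condensed sketch using the classes above; the real implementation below also handles the leapfrog half-step offsets and measurements):

bunch = Bunch()
bunch.fill(np.random.normal(scale=[0.010, 0.001, 0.010, 0.001], size=(10000, 4)))
grid = Grid(size=(64, 64))
solver = PoissonSolver(grid)
s, s_max, ds = 0.0, 1.0, 0.025
while s < s_max:
    bunch.compute_extremum()
    grid.set_lims(bunch.xlim, bunch.ylim)       # fit the grid to the bunch
    solver.set_grid(grid)
    rho = grid.distribute(bunch.positions)      # 1. charge density on the grid
    phi = solver.get_potential(rho)             # 2. potential on the grid
    Ex, Ey = grid.gradient(-phi)                # 3. field on the grid...
    ex = grid.interpolate(Ex, bunch.positions)  #    ...interpolated to particles
    ey = grid.interpolate(Ey, bunch.positions)
    factor = bunch.perveance / bunch.nparts
    bunch.X[:, 1] += factor * ex * ds           # 4. kick the slopes...
    bunch.X[:, 3] += factor * ey * ds
    bunch.X[:, 0] += bunch.X[:, 1] * ds         #    ...and drift the positions
    bunch.X[:, 2] += bunch.X[:, 3] * ds
    s += ds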
We’ll first create a History class which stores the beam moments or phase space coordinates.
class History:
    """Class to store bunch data over time.

    Attributes
    ----------
    moments : list
        Second-order bunch moments. Each element is an ndarray of shape (10,).
    coords : list
        Bunch coordinate arrays. Each element is an ndarray of shape (nparts, 4).
    moment_positions, coord_positions : list
        Positions corresponding to each element of `moments` or `coords`.
    """
    def __init__(self, bunch, samples='all'):
        self.X = bunch.X
        self.moments, self.coords = [], []
        self.moment_positions, self.coord_positions = [], []
        if samples == 'all' or samples >= bunch.nparts:
            self.idx = np.arange(bunch.nparts)
        else:
            self.idx = np.random.choice(bunch.nparts, samples, replace=False)

    def store_moments(self, s):
        Sigma = np.cov(self.X.T)
        self.moments.append(Sigma[np.triu_indices(4)])
        self.moment_positions.append(s)

    def store_coords(self, s):
        self.coords.append(np.copy(self.X[self.idx, :]))
        self.coord_positions.append(s)

    def package(self):
        self.moments = np.array(self.moments)
        self.coords = np.array(self.coords)
Now we’ll create a Simulation class.
class Simulation:
    """Class to simulate the evolution of a charged particle bunch in free space.

    Attributes
    ----------
    bunch : Bunch
        The bunch to track.
    distance : float
        Total tracking distance [m].
    step_size : float
        Distance between force calculations [m].
    nsteps : int
        Total number of steps = int(distance / step_size).
    steps_performed : int
        Number of steps performed so far.
    s : float
        Current bunch position.
    history : History
        Object storing historic bunch data.
    meas_every : dict
        Dictionary with keys 'moments' and 'coords'. Values give the number
        of simulation steps between storing these quantities. For example,
        `meas_every = {'coords': 4, 'moments': 2}` will store the coordinates
        every 4 steps and the moments every other step. Defaults to storing
        only the initial and final positions.
    samples : int
        Number of bunch particles to store when measuring phase space
        coordinates. Defaults to the entire coordinate array.
    """
    def __init__(self, bunch, distance, step_size, grid_size, meas_every=None, samples='all'):
        self.bunch = bunch
        self.distance, self.step_size = distance, step_size
        self.nsteps = int(distance / step_size)
        self.grid = Grid(size=grid_size)
        self.solver = PoissonSolver(self.grid)
        self.fields = np.zeros((bunch.nparts, 2))
        self.history = History(bunch, samples)
        self.s, self.steps_performed = 0.0, 0
        # Copy instead of storing the caller's dict; `setdefault` would
        # otherwise mutate a shared (mutable default) argument.
        self.meas_every = dict(meas_every) if meas_every else {}
        self.meas_every.setdefault('moments', self.nsteps)
        self.meas_every.setdefault('coords', self.nsteps)
        self.sc_factor = bunch.perveance / bunch.nparts

    def set_grid(self):
        """Set grid limits from bunch size."""
        self.bunch.compute_extremum()
        self.grid.set_lims(self.bunch.xlim, self.bunch.ylim)
        self.solver.set_grid(self.grid)

    def compute_electric_field(self):
        """Compute self-generated electric field."""
        self.set_grid()
        rho = self.grid.distribute(self.bunch.positions)
        phi = self.solver.get_potential(rho)
        Ex, Ey = self.grid.gradient(-phi)
        self.fields[:, 0] = self.grid.interpolate(Ex, self.bunch.positions)
        self.fields[:, 1] = self.grid.interpolate(Ey, self.bunch.positions)

    def kick(self, step_size):
        """Update particle slopes."""
        self.bunch.X[:, 1] += self.sc_factor * self.fields[:, 0] * step_size
        self.bunch.X[:, 3] += self.sc_factor * self.fields[:, 1] * step_size

    def push(self, step_size):
        """Update particle positions."""
        self.bunch.X[:, 0] += self.bunch.X[:, 1] * step_size
        self.bunch.X[:, 2] += self.bunch.X[:, 3] * step_size

    def store(self):
        """Store bunch data."""
        store_moments = self.steps_performed % self.meas_every['moments'] == 0
        store_coords = self.steps_performed % self.meas_every['coords'] == 0
        if not (store_moments or store_coords):
            return
        Xp = np.copy(self.bunch.X[:, [1, 3]])
        self.kick(+0.5 * self.step_size)  # sync positions/slopes
        if store_moments:
            self.history.store_moments(self.s)
        if store_coords:
            self.history.store_coords(self.s)
        self.bunch.X[:, [1, 3]] = Xp

    def run(self):
        """Run the simulation."""
        self.store()
        self.compute_electric_field()
        self.kick(-0.5 * self.step_size)  # desync positions/slopes
        for i in trange(self.nsteps):
            self.compute_electric_field()
            self.kick(self.step_size)
            self.push(self.step_size)
            self.s += self.step_size
            self.steps_performed += 1
            self.store()
        self.history.package()
Benchmark: freely expanding Vlasov equilibrium distribution
We need some way of checking our method’s accuracy. Luckily there is an analytic benchmark available: the Kapchinskij-Vladimirskij (KV) distribution. Without going into any detail, the beam projects to a uniform density ellipse in the \(x\)-\(y\) plane, and the space charge forces produced within this ellipse are linear (in general space charge forces are nonlinear). If we plug the KV distribution into the Vlasov equation, it can be seen that these forces will remain linear for all time if the external focusing forces are also linear. As a consequence, a set of self-consistent differential equations describing the evolution of the ellipse boundary can be written down. If we consider the beam to be an upright ellipse with semi-axis \(a\) along the \(x\) axis and \(b\) along the \(y\) axis, then without external fields the equations read:

\[
\begin{aligned}
a'' &= \frac{2Q}{a + b} + \frac{\varepsilon_x^2}{a^3}, \\
b'' &= \frac{2Q}{a + b} + \frac{\varepsilon_y^2}{b^3}.
\end{aligned}
\tag{16}\]
These are known as the KV envelope equations or simply envelope equations. \(Q\), called the perveance, is a dimensionless number which is proportional to the beam intensity but reduced by the beam energy. We can think of this constant as a measure of the space charge strength. The \(\varepsilon_x\) and \(\varepsilon_y\) terms are called the emittances and determine the area occupied by the beam in \(x\)-\(x'\) and \(y\)-\(y'\) phase space. For example, a beam with all particles sitting perfectly still in the \(x\)-\(y\) plane has no emittance, but a beam which is instead spreading out has a nonzero emittance. These emittances will also be conserved for the KV distribution. The following function integrates the envelope equations.
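One note on conventions: the integrator below works with rms sizes and rms emittances, for which the free-space envelope equations keep the same form with \(2Q\) replaced by \(Q/2\). For concreteness, here is how the rms emittances are computed from a coordinate array (the same expressions appear at the start of the function below; `bunch` is the example bunch from earlier):

Sigma = np.cov(bunch.X.T)                      # 4x4 covariance of [x, x', y, y']
eps_x = np.sqrt(np.linalg.det(Sigma[:2, :2]))  # rms emittance in x-x' [m rad]
eps_y = np.sqrt(np.linalg.det(Sigma[2:, 2:]))  # rms emittance in y-y' [m rad]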
def track_env(X, positions, perveance=0.0):
    """Track beam moments (assuming KV distribution) through free space.

    Parameters
    ----------
    X : ndarray, shape (nparts, 4)
        Transverse bunch coordinate array.
    positions : list
        List of positions at which to evaluate the equations.
    perveance : float
        The dimensionless space charge perveance.

    Returns
    -------
    ndarray, shape (len(positions), 4)
        Each row gives [a, a', b, b'], where a and b are the rms beam sizes
        in the x and y dimension, respectively.
    """
    Sigma = np.cov(X.T)
    a, b = np.sqrt(Sigma[0, 0]), np.sqrt(Sigma[2, 2])
    ap, bp = Sigma[0, 1] / a, Sigma[2, 3] / b
    epsx = np.sqrt(np.linalg.det(Sigma[:2, :2]))
    epsy = np.sqrt(np.linalg.det(Sigma[2:, 2:]))

    def derivs(env, s):
        a, ap, b, bp = env
        envp = np.zeros(4)
        envp[0], envp[2] = ap, bp
        envp[1] = 0.5 * perveance / (a + b) + epsx**2 / a**3
        envp[3] = 0.5 * perveance / (a + b) + epsy**2 / b**3
        return envp

    return odeint(derivs, [a, ap, b, bp], positions, atol=1e-14)
Some care must be taken in the choice of simulation parameters; we need a fine enough grid to resolve the hard edge of the beam and enough macroparticles per grid cell to collect good statistics. I chose what I thought was reasonable: 128,000 macroparticles, a step size of 2.5 cm, and a \(128 \times 128\) grid. Let’s create and track four identical KV distributions, each with a different intensity.
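The original code cell isn't reproduced here, but a sketch of what it could look like follows. The KV sampler draws points uniformly from the surface of a unit 3-sphere in 4D (normalized Gaussian samples) and scales each coordinate; the radii, emittances, tracking distance, and intensities below are illustrative guesses:

def sample_kv(nparts, rx=0.020, ry=0.020, epsx=20.0e-6, epsy=20.0e-6):
    """Sample an upright KV distribution (uniform on a 4D ellipsoid surface)."""
    U = np.random.normal(size=(nparts, 4))
    U /= np.linalg.norm(U, axis=1)[:, None]  # uniform on the unit 3-sphere
    X = np.zeros((nparts, 4))
    X[:, 0] = rx * U[:, 0]
    X[:, 1] = (epsx / rx) * U[:, 1]
    X[:, 2] = ry * U[:, 2]
    X[:, 3] = (epsy / ry) * U[:, 3]
    return X

sims = []
for intensity in [0.0, 2.5e13, 5.0e13, 1.0e14]:
    bunch = Bunch(intensity=intensity)
    bunch.fill(sample_kv(128000))
    sim = Simulation(bunch, distance=5.0, step_size=0.025, grid_size=(128, 128),
                     meas_every={'moments': 1}, samples=10000)
    sim.run()
    sims.append(sim)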
This plot shows the horizontal and vertical beam size over time for each of the four chosen beam intensities. The solid lines are the result of integrating the envelope equations, while the dots are the result of the PIC calculation. Notice that the beam expands on its own due to its nonzero emittance and that the effect of space charge is to increase the expansion rate. The PIC solution seems quite accurate over this distance, and the runtime is acceptable for my purposes. Here is the evolution of 10,000 randomly sampled macroparticles compared with the KV envelope.
This post implemented an electrostatic PIC solver in Python. I learned quite a bit from doing this and was happy to see my calculations agree with the theoretical benchmark. One extension of this code would be to include the velocity-dependent force from magnetic fields. It would also be straightforward to extend the code to 3D. Finally, all the methods used here are applicable to gravitational simulations.
Fig. 1. Coordinate system defined in the beam rest frame.
Fig. 2. Coasting beam approximation. (Source: G. Franchetti, "Space charge in circular machines," CERN Accelerator School Proceedings (2017). https://e-publishing.cern.ch/index.php/CYRSP/article/view/413)
Fig. 3. The magnetic force between parallel currents is attractive. (Source: OpenStax University Physics.)
Fig. 4. The particle-in-cell (PIC) loop.
Fig. 5. Effective particle shape for nearest-grid-point (NGP) and cloud-in-cell (CIC) weighting.
Fig. 6. Test of cloud-in-cell (CIC) particle weighting on a regular 64 x 64 grid.
Fig. 7. Electric potential on the doubled grid.
Fig. 8. Electric field on the original grid.
Fig. 9. Leapfrog integration. (Source: S. Lund.)
Fig. 11. Evolution of 10,000 macroparticles (black) and KV envelope (red) during free expansion.