**MPI implementation**

The calculation of the self-gravitating accelerations with FFTs is parallelized. For this purpose, we use version 2.1.5 (with MPI support) of the `FFTW` library.

To avoid the well-known aliasing issue, the FFTs are calculated on an additional grid whose radial range is twice that of the hydrodynamics mesh (hereafter HYDRO mesh, where all the hydrodynamics fields are defined). This grid, referred to as the FFTW mesh, has however the same azimuthal range as the HYDRO mesh. Note that the FFTW mesh is dedicated solely to the calculation of the forward and backward Fourier transforms.

There are therefore two domain decompositions (hereafter dd): one for the HYDRO mesh and one for the FFTW mesh. It is however not possible to impose the dd of the FFTW mesh once the dd of the HYDRO mesh has been set; we must proceed the other way around. The FFTW library function `rfftwnd_mpi_local_sizes` (called in split.c) builds the dd of the FFTW mesh: the cpu numbers are ordered within the FFTW mesh. We then adapt the dd of the HYDRO mesh so as to minimize the amount of communication between the two meshes. The following figure displays the dds for an even number of cpus, along with the communications required between the two meshes for the calculation of self-gravity.

The surface density field is defined on the HYDRO mesh. In this example, cpu 3 communicates its local surface density to cpu 1, while cpu 2 communicates with cpu 0. In the FFTW mesh, only cpus 0 and 1 hold the surface density of the HYDRO mesh; the buffered surface densities of cpus 2 and 3 are filled with zeros (hence the black boxes in the figure). The self-gravitating accelerations are then calculated on the FFTW mesh as explained here. Once calculated, cpu 1 communicates its accelerations back to cpu 3, and cpu 0 to cpu 2. The velocities can then be updated on the HYDRO mesh.

The case of an odd number of cpus is displayed in the following figure.