Running Pre-processing, Processing, and Post-processing Jobs with Dependencies

Sometimes you may want to run several pieces of a workflow that depend on each other. One option is to run them one by one, waiting for each dependent job to finish before submitting the next. That strategy becomes inconvenient if you have many parametric studies running simultaneously. An easier approach is to use the Slurm option "#SBATCH --dependency=singleton", which allows you to submit many jobs at once and have them run one after another in the order they were submitted. For the singleton dependency to apply, the dependent jobs must share the same job name. The best way to understand it is to try it yourself. The test case below is based on the CFD code OpenFOAM. In the first job, the mesh is generated and partitioned on one node. In the second job, the solver icoFoam runs on two nodes, and in the third job the solution is reconstructed. Since these steps require different numbers of nodes, you cannot (and should not) combine them in a single job script.
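The tarball extracted below contains the actual job scripts. As a minimal sketch of what they look like, assuming an illustrative job name (ofoam_case), time limits, and task counts, and omitting site-specific environment setup such as loading the OpenFOAM module, the three scripts could be written as follows:

#!/bin/bash
# job1.sh: mesh generation and domain decomposition on one node
#SBATCH --job-name=ofoam_case      # all three jobs must share this name
#SBATCH --dependency=singleton     # start only after earlier jobs with the same name and user finish
#SBATCH --nodes=1
#SBATCH --time=00:30:00
blockMesh                          # generate the mesh
decomposePar                       # partition the case for the parallel solver

#!/bin/bash
# job2.sh: parallel solver run on two nodes
#SBATCH --job-name=ofoam_case
#SBATCH --dependency=singleton
#SBATCH --nodes=2
#SBATCH --ntasks=64                # illustrative; must match the case decomposition
#SBATCH --time=02:00:00
srun --ntasks=64 icoFoam -parallel # run the solver across both nodes

#!/bin/bash
# job3.sh: reconstruct the decomposed solution on one node
#SBATCH --job-name=ofoam_case
#SBATCH --dependency=singleton
#SBATCH --nodes=1
#SBATCH --time=00:30:00
reconstructPar                     # merge the per-processor results

Because all three scripts share the same job name and request the singleton dependency, Slurm runs them strictly one after another in submission order, even though they ask for different numbers of nodes.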


Test it yourself:

tar -xf /sw/xc40cle7/openfoam/openfoam_dependency.tar 

cd openfoam_dependency/

sbatch job1.sh

sbatch job2.sh

sbatch job3.sh

squeue -u $USER
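
Since the execution order under a singleton dependency follows the submission order, the three submissions can also be issued in a small loop, which is convenient when many parametric cases are prepared side by side (the script names below are the ones from the tarball):

for job in job1.sh job2.sh job3.sh; do
    sbatch "$job"    # submission order defines execution order under the singleton dependency
done

While the first job runs, squeue lists the later jobs as pending with a dependency reason; each starts automatically as soon as the previous job with the same name completes.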