== Compile and Execute CLaMS programs on JURECA/JUWELS ==
* Load the following modules:
{{{
module load Intel
module load ParaStationMPI
module load netCDF-Fortran
}}}
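The loaded versions can be checked with the standard module command (module names and default versions differ between software stages):
{{{
module list
}}}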
* Add current directory to PATH in .bash_profile:
{{{
PATH=.:$PATH
export PATH
}}}
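The new PATH takes effect in the next login shell; to apply it to the current session and verify the result:
{{{
source ~/.bash_profile
echo $PATH
}}}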
* To compile all CLaMS programs, go to the clams directory and simply type
{{{
make all
}}}
* To compile a specific program ''progname'' in package ''package-name'':
{{{
cd clams-directory
make libs
cd package-name
make progname
}}}
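As a concrete sketch, building only the trajectory program ''traj_mpi'' used in the batch script below could look like this (the package directory name ''traj'' is an assumption and may differ in your checkout):
{{{
cd clams-directory
make libs
cd traj        # assumed package directory; adjust to the actual name
make traj_mpi
}}}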
* It may be necessary to set the stack size to unlimited:
{{{
ulimit -s unlimited
}}}
Without raising this limit, a memory fault (segmentation fault) can occur.
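To raise the limit automatically in every session, the command can be appended to .bash_profile:
{{{
echo "ulimit -s unlimited" >> ~/.bash_profile
}}}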
* Create a batch script
{{{#!highlight ksh numbers=disable
#!/bin/ksh -x
#SBATCH --nodes=1
#SBATCH --ntasks=4
#SBATCH --ntasks-per-node=4
#SBATCH --output=mpi-out.%j
#SBATCH --error=mpi-err.%j
#SBATCH --time=00:05:00
#SBATCH --partition=batch
#SBATCH --account=icg1
srun ./traj_mpi
}}}
* Submit job <<BR>> The job script is submitted using:
{{{
sbatch <jobscript>
}}}
On success, sbatch writes the job ID to standard out.
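For example, if the batch script above is saved under the (arbitrary) name ''traj_mpi.job'':
{{{
sbatch traj_mpi.job
}}}
sbatch then prints a line such as `Submitted batch job 123456`; the number is the job ID used by the commands below.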
* Other Slurm Commands
 * ''squeue'' <<BR>> Show the status of all jobs.
 * ''scancel <jobid>'' <<BR>> Cancel a job.
 * ''scontrol show job <jobid>'' <<BR>> Show detailed information about a pending, running or recently completed job.
 * ''scontrol update job <jobid> set ...'' <<BR>> Update a pending job.
 * ''scontrol -h'' <<BR>> Show detailed information about scontrol.
 * ''sacct -j <jobid>'' <<BR>> Query information about old jobs.
 * ''sprio'' <<BR>> Show job priorities.
 * ''smap'' <<BR>> Show the distribution of jobs. For a graphical interface, users are referred to '''llview'''.
 * ''sinfo'' <<BR>> View information about nodes and partitions.
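A typical sequence after submitting a job, with ''123456'' standing in for the job ID reported by sbatch:
{{{
squeue -u $USER            # show only your own jobs
scontrol show job 123456   # details while the job is pending or running
sacct -j 123456            # accounting information after the job has finished
}}}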
* Interactive sessions:
* Allocate an interactive session:
{{{
salloc --partition=devel --nodes=2 --time=00:30:00
}}}
* salloc starts a bash shell on the login node. Start the program from within this shell with:
{{{
srun --nodes=2 --ntasks-per-node=8 progname
}}}
* The interactive session is terminated by exiting the shell:
{{{
exit
}}}
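Put together, a short interactive test of ''traj_mpi'' could look like this (partition, node count and tasks per node as in the examples above):
{{{
salloc --partition=devel --nodes=2 --time=00:30:00
srun --nodes=2 --ntasks-per-node=8 ./traj_mpi
exit
}}}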