Compile and Execute CLaMS programs on JURECA
- Load the following modules:
module load Intel
module load ParaStationMPI
module load netCDF-Fortran
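A quick way to check that the environment is in place (a minimal sketch; module and wrapper names may differ between software stages) is to list the loaded modules and look for the MPI Fortran compiler wrapper:
# Sketch: verify the loaded modules and the MPI Fortran wrapper
module list
which mpif90   # assumed wrapper name provided by ParaStationMPI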
- Add the current directory to PATH in .bash_profile:
PATH=.:$PATH
export PATH
- To compile all CLaMS programs, go to the clams directory and simply type:
make all
- To compile a specific program progname in package package-name:
cd clams-directory
make libs
cd package-name
make progname
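For example, to build the trajectory program traj_mpi used in the batch script below (the package directory name traj is an assumption; use the directory that actually contains the program):
# Hypothetical example: build only the MPI trajectory program
cd clams          # the CLaMS source directory
make libs         # build the CLaMS libraries first
cd traj           # assumed package directory containing traj_mpi
make traj_mpi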
- It may be necessary to set the stack size to unlimited:
ulimit -s unlimited
Without removing this limit, a memory fault can occur.
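To avoid typing this in every session, one option (a sketch, assuming a bash/ksh login shell) is to put the command in .bash_profile, or directly in the batch script of the next step, just before the srun call:
# Sketch: raise the stack limit inside the batch script, right before launching the program
ulimit -s unlimited
srun ./traj_mpi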
- Create a batch script:
#!/bin/ksh -x
#SBATCH --nodes=1
#SBATCH --ntasks=4
#SBATCH --ntasks-per-node=4
#SBATCH --output=mpi-out.%j
#SBATCH --error=mpi-err.%j
#SBATCH --time=00:05:00
#SBATCH --partition=batch
#SBATCH --account=icg1
srun ./traj_mpi
- Submit the job. The job script is submitted using:
sbatch <jobscript>
On success, sbatch writes the job ID to standard out.
- Other Slurm Commands:
  - squeue: Show status of all jobs.
  - scancel <jobid>: Cancel a job.
  - scontrol show job <jobid>: Show detailed information about a pending, running or recently completed job.
  - scontrol update job <jobid> set ...: Update a pending job.
  - scontrol -h: Show detailed information about scontrol.
  - sacct -j <jobid>: Query information about old jobs.
  - sprio: Show job priorities.
  - smap: Show distribution of jobs. For a graphical interface users are referred to llview.
  - sinfo: View information about nodes and partitions.
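Putting the submission and monitoring commands together, a typical workflow might look like this (jobscript.sh and the job ID 123456 are placeholders):
sbatch jobscript.sh    # submit; prints something like "Submitted batch job 123456"
squeue -u $USER        # check the status of your own jobs
scancel 123456         # cancel the job if it is no longer needed
sacct -j 123456        # query accounting information after the job has finished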
- Interactive sessions:
  - Allocate an interactive session:
    salloc --partition=devel --nodes=2 --time=00:30:00
  - salloc will start a bash on the login node. Start the program within this shell with:
    srun --nodes=2 --ntasks-per-node=8 progname
  - The interactive session is terminated by exiting the shell:
    exit
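As a complete example, an interactive test run of the trajectory program might look like this (the partition, node counts and the program name traj_mpi are taken from the examples above and may need adjusting):
salloc --partition=devel --nodes=2 --time=00:30:00   # request two nodes for 30 minutes
srun --nodes=2 --ntasks-per-node=8 ./traj_mpi        # run the MPI program on the allocation
exit                                                 # release the allocation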