Compile and Execute CLaMS programs on JURECA

  • Load the following modules:
    module load intel-para
    module load netCDF-Fortran
  • Add current directory to PATH in .bash_profile:
    PATH=.:$PATH
    export PATH
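The effect of the two .bash_profile lines above can be checked in a running shell. A minimal sketch, assuming a POSIX-like bash environment:

```shell
# Prepend the current directory to the search path, as in .bash_profile.
PATH=.:$PATH
export PATH

# The first entry of PATH is now "." — executables in the current
# directory can be started without a ./ prefix.
echo "${PATH%%:*}"   # → .
```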
  • The libraries used by CLaMS are installed in the directory /homec/jicg11/jicg1108
  • The CLaMS programs can be compiled with:
    make [useMPI=true] progname
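The optional useMPI=true argument is an ordinary make command-line variable. How such a toggle typically works can be sketched with a small stand-in makefile; the file name Makefile.demo and the compiler names mpif90/ifort are illustrative only, not taken from the actual CLaMS makefile:

```shell
# Illustrative stand-in makefile showing how a command-line variable
# such as useMPI can switch the compiler (not the real CLaMS makefile).
printf 'ifdef useMPI\nFC = mpif90\nelse\nFC = ifort\nendif\n\nprogname:\n\t@echo "would compile with $(FC)"\n' > Makefile.demo

make -f Makefile.demo progname              # → would compile with ifort
make -f Makefile.demo useMPI=true progname  # → would compile with mpif90
```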
  • It may be necessary to set the stack size to unlimited:
    ulimit -s unlimited
    Without raising this limit, a memory fault (segmentation fault) can occur.
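Because ulimit is a shell builtin, the setting applies only to the current shell and the processes it starts, so it belongs in the batch script or profile rather than a separate command. A short sketch:

```shell
# Show the current soft stack limit (a size in kB, or "unlimited")
ulimit -s

# Remove the limit for this shell and everything started from it
ulimit -s unlimited
ulimit -s   # verify the new setting
```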
  • Create a batch script
    #!/bin/ksh -x
    #SBATCH --nodes=1
    #SBATCH --ntasks=4
    #SBATCH --ntasks-per-node=4
    #SBATCH --output=mpi-out.%j
    #SBATCH --error=mpi-err.%j
    #SBATCH --time=00:05:00
    #SBATCH --partition=batch
    
    srun ./traj_mpi
    
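In the script above the task counts must be consistent: --ntasks equals --nodes times --ntasks-per-node (here 1 × 4 = 4). That relation, with the values from the example script:

```shell
nodes=1            # --nodes
ntasks_per_node=4  # --ntasks-per-node

# --ntasks must equal --nodes * --ntasks-per-node
ntasks=$((nodes * ntasks_per_node))
echo "$ntasks"     # → 4, matching #SBATCH --ntasks=4
```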
  • Submit the job
    The job script is submitted using:

    sbatch <jobscript>
    On success, sbatch writes the job ID to standard output.
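The confirmation line has the form "Submitted batch job <jobid>", so the ID can be captured for later use in scripts. A sketch using a stand-in string in place of a live sbatch call (the job ID 123456 is made up); alternatively, sbatch --parsable prints only the job ID:

```shell
# Stand-in for: output=$(sbatch jobscript)
# On success sbatch prints "Submitted batch job <jobid>".
output="Submitted batch job 123456"

# Extract the last word, i.e. the job ID
jobid=${output##* }
echo "$jobid"   # → 123456
```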
  • Other Slurm Commands
    • squeue
      Show status of all jobs.

    • scancel <jobid>
      Cancel a job.

    • scontrol show job <jobid>
      Show detailed information about a pending, running or recently completed job.

    • scontrol update jobid=<jobid> ...
      Update a pending job.

    • scontrol -h
      Show detailed information about scontrol.

    • sacct -j <jobid>
      Query information about old jobs.

    • sprio
      Show job priorities.

    • smap
      Show distribution of jobs. For a graphical interface, users are referred to llview.

    • sinfo
      View information about nodes and partitions.
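These commands compose with standard shell tools: squeue's -h flag suppresses the header and -o controls the output format (%i is the job ID), so a user's job IDs can be collected and fed to scancel. A hedged sketch using stand-in job IDs instead of a live Slurm queue:

```shell
# On JURECA one would generate this list with:
#   squeue -u $USER -h -o %i > myjobs.txt
# Stand-in job IDs are used here instead of a live queue.
printf '1001\n1002\n' > myjobs.txt

while read -r id; do
  echo "scancel $id"   # replace echo with scancel to actually cancel
done < myjobs.txt
```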

  • Interactive sessions:
    • Allocate an interactive session:
      salloc --partition=devel --nodes=2 --time=00:30:00
    • salloc starts a bash shell on the login node. Start the program within this shell with:
      srun --nodes=2 --ntasks-per-node=8 progname
    • The interactive session is terminated by exiting the shell:
      exit

Jureca/CompileExecute (last edited 2022-10-07 08:38:31 by NicoleThomas)