= MESSy/CLaMS: Running the model =
Before running the model, you have to adjust both the run script and the namelist setup.
== Changes in run script (e.g. messy-dir/messy/util/xmessy_mmd.climtest) ==
* On '''JUWELS/JURECA''' set SBATCH commands in the first lines of the run script:
{{{#!highlight ksh numbers=disable
#!/bin/bash -e
#SBATCH --nodes=2
#SBATCH --ntasks=48
#SBATCH --ntasks-per-node=24
#SBATCH --output=mpi-out.%j
#SBATCH --error=mpi-err.%j
#SBATCH --time=00:30:00
#SBATCH --partition=batch
#SBATCH --account=jicg11
}}}
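The SBATCH values above have to be consistent with each other: with these options, ''--ntasks'' equals ''--nodes'' times ''--ntasks-per-node''. A minimal shell sketch of that sanity check, using the values from the example above:
{{{#!highlight bash numbers=disable
# Sanity check for the example SBATCH header:
# --ntasks should equal --nodes * --ntasks-per-node.
NODES=2
NTASKS=48
NTASKS_PER_NODE=24

if [ $((NODES * NTASKS_PER_NODE)) -ne "$NTASKS" ]; then
    echo "SBATCH mismatch: ${NODES} nodes x ${NTASKS_PER_NODE} tasks/node != ${NTASKS} tasks" >&2
    exit 1
fi
echo "SBATCH header consistent: $NTASKS tasks on $NODES nodes"
}}}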
* Set the working directory, e.g.:
{{{
WORKDIR=/private/icg112/messy_out/climtest
}}}
* Set experiment name (used for output filenames):
{{{
EXP_NAME=climtest
}}}
* On '''JUWELS/JURECA''' you have to set BASEDIR:
{{{
BASEDIR=/p/project/clams-esm/thomas2/messy-clams
}}}
* Set start date and end date:
{{{
START_YEAR=1988
START_MONTH=01
START_DAY=01
START_HOUR=12
STOP_YEAR=1988
STOP_MONTH=01
STOP_DAY=04
STOP_HOUR=12
}}}
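A quick way to verify that the stop date lies after the start date is a few lines of shell. This is a sketch assuming GNU ''date'' (as on Linux clusters); the values follow the example above:
{{{#!highlight bash numbers=disable
# Check that the stop date lies after the start date (assumes GNU date).
START_YEAR=1988; START_MONTH=01; START_DAY=01; START_HOUR=12
STOP_YEAR=1988;  STOP_MONTH=01;  STOP_DAY=04;  STOP_HOUR=12

start=$(date -d "${START_YEAR}-${START_MONTH}-${START_DAY} ${START_HOUR}:00" +%s)
stop=$(date -d "${STOP_YEAR}-${STOP_MONTH}-${STOP_DAY} ${STOP_HOUR}:00" +%s)

if [ "$stop" -le "$start" ]; then
    echo "Error: stop date is not after start date" >&2
    exit 1
fi
echo "Simulated period: $(( (stop - start) / 86400 )) days"
}}}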
* Specify subdirectory with namelist setup:
{{{
NML_SETUP=MBM/clams_clim
}}}
* Select model:
CLaMS run:
{{{
MINSTANCE[1]=clams
}}}
ECHAM5/CLaMS run:
{{{
MINSTANCE[1]=ECHAM5
}}}
* Set NPX[1] and NPY[1]:
{{{
NPY[1]=6 # => NPROCA for ECHAM5
NPX[1]=8 # => NPROCB for ECHAM5
}}}
NPX[1]*NPY[1] cores are used in total. On JUWELS/JURECA this product has to match the number of tasks specified in the first lines of the script (#SBATCH --ntasks).
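The consistency between the MPI decomposition and the Slurm allocation can be checked with a few lines of shell (here NTASKS stands in for the value given via #SBATCH --ntasks):
{{{#!highlight bash numbers=disable
# Check that the MPI decomposition matches the allocated tasks.
# NTASKS stands in for the value given via "#SBATCH --ntasks".
NPY=6
NPX=8
NTASKS=48

if [ $((NPX * NPY)) -ne "$NTASKS" ]; then
    echo "Error: NPX*NPY = $((NPX * NPY)), but $NTASKS tasks allocated" >&2
    exit 1
fi
echo "Decomposition OK: using $((NPX * NPY)) cores"
}}}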
== Change namelist setup (e.g. messy-dir/messy/nml/MBM/clams_clim) ==
==== MESSy setup ====
* channel.nml
This file contains entries to control the output and restart handling for all channels and channel objects (see ''MESSy CHANNEL User Manual'')
* qtimer.nml
Set the queue time limit (see ''Restarts'' and ''Development Cycle 2 of the Modular Earth Submodel System'')
* switch.nml
* The CLaMS submodels (traj, chem, mix, bmix and cirrus) can be switched on or off
* USE_CLAMS must be TRUE for all CLaMS runs
* If USE_CLAMSCHEM is TRUE, USE_DISSOC has to be TRUE too
* timer.nml
* IO_RERUN_EV: interval for writing rerun files; for example, to write rerun files once a month:
{{{
IO_RERUN_EV = 1,'months','first',0,
}}}
* NO_CYCLES: how many times rerun files are written without interrupting the simulation
* delta_time: model time step in seconds
(see ''MESSy TIMER User Manual'')
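Put together, a timer.nml fragment with the entries discussed above might look as follows. This is only a sketch: the namelist group name (&CTRL) and the delta_time value are assumptions, so check the ''MESSy TIMER User Manual'' for the layout used by your version:
{{{
! Sketch of a timer.nml fragment (the group name and the delta_time
! value are assumptions; see the MESSy TIMER User Manual).
&CTRL
  delta_time  = 1800,                  ! model time step in seconds
  IO_RERUN_EV = 1,'months','first',0,  ! write rerun files once a month
  NO_CYCLES   = 3,                     ! rerun files written 3 times without interruption
/
}}}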
==== Setup for CLaMS submodels ====
* clams.nml
* clamstraj.nml
* clamschem.nml
* dissoc.nml
* clamsmix.nml
* clamsbmix.nml
* clamscirrus.nml
* clamssedi.nml
== Execute program ==
* Create the working directory (as specified in the run script), e.g.:
{{{
mkdir /private/icg112/messy_out/climtest
}}}
* Change to working directory:
{{{
cd /private/icg112/messy_out/climtest
}}}
* Running the model:
* Start the run script on a workstation cluster:
{{{
~/messy-dir/messy/util/xmessy_mmd.climtest 2>&1 | tee out
}}}
* Submit the run script on '''JUWELS/JURECA''':
{{{
sbatch ~/messy-dir/messy/util/xmessy_mmd.climtest
}}}
* On success, sbatch writes the job ID to standard output.
* Other Slurm commands:
* ''squeue [-u userid]'': show the status of all jobs (or only those of the given user).
* ''scancel <jobid>'': cancel a job.
* ''scontrol show job <jobid>'': show detailed information about a pending, running or recently completed job.
* ''scontrol update job set ...'': update a pending job.
* ''scontrol -h'': show detailed information about scontrol.
* ''sacct -j <jobid>'': query information about completed jobs.
* ''sprio'': show job priorities.
* ''smap'': show the distribution of jobs. For a graphical interface users are referred to '''llview'''.
* ''sinfo'': view information about nodes and partitions.
* The executable and the namelist setup are copied to the working directory
* All output and restart files are created in the working directory
* The file MSH_NO is created for automatic reruns. If MSH_NO is present in the working directory, the model starts in rerun mode; MSH_NO contains the number of the last chain element. If you want to run the example again from the beginning, '''remove the file MSH_NO''' before starting the run script.
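A fresh start can be scripted along these lines. This is a sketch: it uses a temporary directory as a stand-in for the real working directory (e.g. /private/icg112/messy_out/climtest):
{{{#!highlight bash numbers=disable
# Sketch of a fresh start: remove MSH_NO so the model does not resume
# in rerun mode. A temporary directory stands in for the real WORKDIR.
WORKDIR=$(mktemp -d)
cd "$WORKDIR"
echo 3 > MSH_NO    # simulate a previous run chain (3rd chain element)

if [ -f MSH_NO ]; then
    echo "Removing MSH_NO (last chain element: $(cat MSH_NO))"
    rm -f MSH_NO
fi
echo "Ready for a fresh start in $WORKDIR"
}}}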