Run CLaMS-MESSy
Namelist Setup
An example namelist setup for a CLaMS standalone run (with basemodel CLaMS) can be found in directory messy/nml/MBM/clams.
There you will find all necessary namelists with detailed descriptions of all parameters, a README with compile and run instructions and the corresponding xmessy_mmd.header.
Example CLaMS standalone run:
- one-day CLaMS run starting on 01/01/1979
- all CLaMS submodels (except sedi) are used: traj, mix, bmix, dissoc, chem, cirrus, deepconv, tracer
- example ERA5 data (for 2 days) is used
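For orientation, a listing of the setup directory should show the files discussed in the following sections; the exact contents may differ between MESSy versions:
ls messy/nml/MBM/clams
# among others: README, xmessy_mmd.header, channel.nml, qtimer.nml, switch.nml, timer.nml,
# clams.nml, clamstraj.nml, clamschem.nml, dissoc.nml, clamsmix.nml, clamsbmix.nml, clamscirrus.nml, clamssedi.nml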
In directory messy/nml/EMAC/LG/CLaMS/base there is an example namelist setup for a coupled CLaMS-EMAC run (with basemodel ECHAM5).
See the EMAC-CLaMS page for further information.
Create Run Script
- Create run script for basemodel CLaMS with:
./messy/util/xmkr MBM/clams
The run script messy/util/xmessy_mmd.MBM-clams is created.
- Create run script for a CLaMS-EMAC run with:
./messy/util/xmkr EMAC/LG/CLaMS/base
The run script messy/util/xmessy_mmd.EMAC-LG-CLaMS-base is created.
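As a minimal sketch of this step, assuming xmkr is called from the top-level MESSy directory (here the example BASEDIR used further below):
cd /p/project/clams-esm/thomas2/messy-clams   # top-level MESSy directory (example path)
./messy/util/xmkr MBM/clams                   # creates messy/util/xmessy_mmd.MBM-clams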
Modify Run Script
On JUWELS/JURECA set SBATCH commands in the first lines of the run script:
#!/bin/bash -e
#SBATCH --nodes=2
#SBATCH --ntasks=48
#SBATCH --ntasks-per-node=24
#SBATCH --output=mpi-out.%j
#SBATCH --error=mpi-err.%j
#SBATCH --time=00:30:00
#SBATCH --partition=batch
#SBATCH --account=clams-esm
Information about the JSC Supercomputers can be found here.
The batch system, for example, is explained here.
Please keep an eye on the resource utilization using the job reports, and adjust the settings if the maximum memory usage is low, so that costs are reduced and performance is increased.
Access to job reports for the JSC Supercomputers: llview
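Once a job has been submitted, the standard Slurm commands can be used alongside llview to monitor it; the job ID below is just a placeholder:
squeue -u $USER                                   # list your queued and running jobs
sacct -j <jobid> --format=JobID,Elapsed,MaxRSS    # elapsed time and peak memory after completion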
- Set experiment name (used for output filenames):
EXP_NAME=climtest
- Change working directory, e.g.:
WORKDIR=/p/scratch/clams-esm/thomas2/messy_out/clams
On JUWELS/JURECA you have to set BASEDIR, e.g.:
BASEDIR=/p/project/clams-esm/thomas2/messy-clams
- Set start date and end date:
START_YEAR=1979
START_MONTH=01
START_DAY=01
START_HOUR=12
STOP_YEAR=1979
STOP_MONTH=01
STOP_DAY=02
STOP_HOUR=12
- Specify subdirectory with namelist setup:
For basemodel CLaMS: NML_SETUP=MBM/clams
For CLaMS-EMAC: NML_SETUP=EMAC/LG/CLaMS/base
- Set MESSy input directory:
On JUWELS: INPUTDIR_MESSY=/p/data1/slmet/model_data/MESSy/DATA/MESSy2
On ICE4 workstations: INPUTDIR_MESSY=/usr/nfs/data/meteocloud/data1/model_data/MESSy/DATA/MESSy2
- Select model:
CLaMS run: MINSTANCE[1]=clams
CLaMS-EMAC run: MINSTANCE[1]=ECHAM5
- Set NPX[1] and NPY[1]:
NPY[1]=6   # => NPROCA for ECHAM5
NPX[1]=8   # => NPROCB for ECHAM5
NPX[1]*NPY[1] cores are used. On JURECA/JUWELS this product has to match the number of tasks specified in the first lines of the script (#SBATCH --ntasks=...).
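With the example values above, NPX[1]*NPY[1] = 8*6 = 48 cores are used, which matches #SBATCH --ntasks=48 in the example batch header shown earlier.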
Change Namelist Setup
The namelist setup in directory messy/nml/MBM/clams should work on the ICE-4 cluster and on JUWELS without any changes.
MESSy setup
- channel.nml
  This file includes entries to control the output and restart handling for all channels and channel objects
  (see MESSy CHANNEL User Manual)
- qtimer.nml
  Set the queue time limit
  (see Restarts and Development Cycle 2 of the Modular Earth Submodel System)
- switch.nml
  - CLaMS submodels (traj, chem, mix, bmix and cirrus) can be switched on or off
  - USE_CLAMS must be TRUE for all CLaMS runs
  - If USE_CLAMSCHEM is TRUE, USE_DISSOC has to be TRUE too
- timer.nml
  - delta_time: model time step in seconds
  - IO_RERUN_EV: interval for rerun output, for example write rerun files once a month:
    IO_RERUN_EV = ${RESTART_INTERVAL},'${RESTART_UNIT}','first',0,
  - NO_CYCLES: how many times rerun files are written without interrupting the simulation
  - RESTART_INTERVAL, RESTART_UNIT and NO_CYCLES are set in the messy run script (see the sketch below)
  (see MESSy TIMER User Manual)
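The way these three values typically appear in the run script is sketched here; the variable names are the ones mentioned above, while the values are purely illustrative and should be adapted to the experiment:
RESTART_INTERVAL=1     # illustrative: write rerun files every 1 ...
RESTART_UNIT=months    # ... month
NO_CYCLES=3            # illustrative: number of rerun cycles written without stopping the simulation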
Setup for CLaMS submodels
- clams.nml
- clamstraj.nml
- clamschem.nml
- dissoc.nml
- clamsmix.nml
- clamsbmix.nml
- clamscirrus.nml
- clamssedi.nml
A detailed description of all namelist parameters can be found in the example namelists in directory messy/nml/MBM/clams.
Execute
- Create the working directory (as specified in the run script), e.g.:
mkdir /p/scratch/clams-esm/thomas2/messy_out/clams
- Change to the working directory, e.g.:
cd /p/scratch/clams-esm/thomas2/messy_out/clams
Start the run script from the working directory:
- Start run script on workstation cluster, e.g.:
~/messy-clams/messy/util/xmessy_mmd.MBM-clams 2>&1 | tee out
- Start run script on JUWELS/JURECA, e.g.:
sbatch /p/project/clams-esm/thomas2/juwels/messy-clams/messy/util/xmessy_mmd.MBM-clams
- The executable and the namelist setup are copied to the working directory
- All output and restart files are created in the working directory
File MSH_NO is created for automatic reruns.
If MSH_NO is in the working directory, the model is started in rerun mode. MSH_NO contains the number of the last chain element.
If you want to run the example again from the beginning, remove the file MSH_NO and the END* files before starting the run script.
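For such a fresh start, the following, run from the working directory, is sufficient (workstation example, paths as above):
rm -f MSH_NO END*
~/messy-clams/messy/util/xmessy_mmd.MBM-clams 2>&1 | tee out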