CESM
The Community Earth System Model (CESM) is a coupled climate model for simulating the earth's climate system. It is composed of four separate sub-models that simultaneously simulate the earth's atmosphere, ocean, land surface and sea ice, together with one central coupler component.
CESM is one of the leading global climate models, widely recognised and used successfully by the international research community. The state-of-the-art CESM models complement both the Met Office's Unified Model and WRF/WRF-CHEM in several respects: CESM simulates tropical climate better than current alternative models; CESM is a prominent model developed by the US climate research community, and comparison between US and UK state-of-the-art models is needed to resolve uncertainties in climate projections; CESM can exploit the high level of parallelism provided by ARCHER; and CESM can be configured in many ways, opening a wide range of avenues for climate research.
Both CESM 1.0.6 and CESM 1.2.2 have been installed and tested extensively on ARCHER. There are no central CESM modules: by design, each user must download and install their own local copy of CESM. To aid in this process, instructions for using CESM on ARCHER have been made available.
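For reference, CESM 1.x releases were distributed via Subversion checkout. A command along the following lines fetches CESM 1.2.2 into a local directory (the repository URL shown here is an assumption and should be confirmed against the CESM release pages):

svn co https://svn-ccsm-release.cgd.ucar.edu/model_versions/cesm1_2_2 cesm1_2_2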
The associated work was done as part of an eCSE project, "Porting and enabling the use of CESM, the Community Earth System Model, on ARCHER" (eCSE01-016, April-November 2014), PIs: Massimo Bollasina and Mike Mineter.
Relevant Documents
eCSE White Paper: Porting and Enabling Use of the Community Earth System Model on ARCHER
Licensing and Access
CESM Copyright Notice and Disclaimer
Building and running CESM
Input, archive and scratch directories
NB: the input, archive and scratch directories must exist before CESM is built.
When users build their case, the input directory is probed to check whether the required input files are present. If they are not, the scripts automatically pull the missing files from the CESM svn repository and place them in the input directory. As a consequence, the input directory can become very large.
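The same check can also be run by hand from within a case directory. The CESM 1.x case scripts include a check_input_data utility; a typical invocation (option names as in the CESM 1.2 scripts, and worth verifying against your installed version) is:

cd $CASEROOT
./check_input_data -inputdata /work/ecse0116/ecse0116/gavin2/cesm1_2_2/inputdata -check
./check_input_data -inputdata /work/ecse0116/ecse0116/gavin2/cesm1_2_2/inputdata -export

Here -check only reports which files are missing, while -export downloads the missing files from the CESM svn repository into the named input directory.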
Ideally, there would be a directory to which every ARCHER user has both read and write access. Unfortunately, no such directory exists on ARCHER, so there is no shared, writable inputdata directory: users must create and manage their own input directory.
Please note that there is a shared input directory containing the largest and most popular input data files, but it is read only. This CESM shared input data directory is located at /work/n02/shared/cesm/inputdata/. It may be used by any ARCHER user, not just NCAS (n02) users, and may only be read from. Users may copy the relevant input files from this shared directory into their own local input directory. Using this shared directory saves a significant amount of disk space and time.
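For example, to seed a local input directory with the atmosphere data held in the shared area (the atm subtree is shown purely as an illustration; copy whichever subdirectories your case needs, and substitute your own input directory path):

cp -r /work/n02/shared/cesm/inputdata/atm /work/ecse0116/ecse0116/gavin2/cesm1_2_2/inputdata/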
The input, archive and scratch directories must all be created by hand by each user in their own work directory, e.g.
mkdir /work/ecse0116/ecse0116/gavin2/cesm1_2_2/archive
mkdir /work/ecse0116/ecse0116/gavin2/cesm1_2_2/scratch
mkdir /work/ecse0116/ecse0116/gavin2/cesm1_2_2/inputdata
NB: if you will use parallel-netcdf rather than plain netcdf then, to obtain the best performance, you should set the Lustre stripe count to -1 (stripe across all available OSTs) for these three directories, using the following three commands:
lfs setstripe -c -1 /work/ecse0116/ecse0116/gavin2/cesm1_2_2/scratch
lfs setstripe -c -1 /work/ecse0116/ecse0116/gavin2/cesm1_2_2/archive
lfs setstripe -c -1 /work/ecse0116/ecse0116/gavin2/cesm1_2_2/inputdata
This has already been done for /work/ecse0116/shared/CESM1.0/inputdata.
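You can verify the stripe settings of a directory with lfs getstripe, e.g.

lfs getstripe -c /work/ecse0116/ecse0116/gavin2/cesm1_2_2/inputdata

A stripe count of -1 indicates that files created in the directory will be striped across all available OSTs.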
The locations of the scratch, archive and input directories are referenced in config_machines.xml.
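As an illustration, the relevant entries in the ARCHER machine definition in config_machines.xml look something like the following sketch (the element names are standard CESM 1.2 xml variables, but the exact layout should be checked against the config_machines.xml shipped with your CESM version, and the paths replaced with your own):

<machine MACH="archer">
  ...
  <RUNDIR>/work/ecse0116/ecse0116/gavin2/cesm1_2_2/scratch/$CASE/run</RUNDIR>
  <EXEROOT>/work/ecse0116/ecse0116/gavin2/cesm1_2_2/scratch/$CASE/bld</EXEROOT>
  <DIN_LOC_ROOT>/work/ecse0116/ecse0116/gavin2/cesm1_2_2/inputdata</DIN_LOC_ROOT>
  <DOUT_S_ROOT>/work/ecse0116/ecse0116/gavin2/cesm1_2_2/archive/$CASE</DOUT_S_ROOT>
  ...
</machine>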