Both CASTEP 7 and CASTEP 8 are installed and available on ARCHER. We also have serial versions of the program compiled. The different versions can be accessed by loading the correct module.
- CASTEP web page
- CASTEP Tutorials
- CASTEP Frequently Asked Questions - includes answers to problems encountered in running CASTEP on ARCHER.
Licensing and Access
To run CASTEP you need to add the correct module:
module add castep
The current default CASTEP module will load CASTEP 8.0.0, but there are modules available for CASTEP 7.0.3, for previous versions of CASTEP 7, and for serial versions of CASTEP.
Once the module has been added, the main CASTEP executable is available as castep.mpi. Executables for the tools distributed with CASTEP will also be available.
An example CASTEP job submission script is shown below.
```bash
#!/bin/bash --login
#PBS -N castep_job
#PBS -V
# Select 128 nodes (maximum of 3072 cores)
#PBS -l select=128
#PBS -l walltime=03:00:00
# Make sure you change this to your budget code
#PBS -A budget

# Make sure any symbolic links are resolved to absolute path
export PBS_O_WORKDIR=$(readlink -f $PBS_O_WORKDIR)

# Change to the directory that the job was submitted from
cd $PBS_O_WORKDIR

# Load the CASTEP module
module add castep

# This line sets the temporary directory - without it CASTEP will fail
export GFORTRAN_TMPDIR=$PBS_O_WORKDIR

# Change the name of the input file to match your own job
aprun -n 3072 castep.mpi my_job
```
- Compiling CASTEP 16.1.2 for ARCHER using Intel 16 compilers (on GitHub)
- Compiling CASTEP 8 and 16 on ARCHER
- Compiling CASTEP 8 and 16 for the serial nodes on ARCHER
- Compiling CASTEP 7.0.* on ARCHER
Hints and Tips
When setting the GFORTRAN_TMPDIR environment variable you must use the absolute /work path rather than a symbolic link to /work from the /home filesystem, otherwise your calculation will fail. The example script above handles this by resetting the PBS_O_WORKDIR environment variable using the readlink command.
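The effect of the readlink step can be sketched as follows (the paths here are illustrative stand-ins for the real /home symlink and /work directory):

```shell
# PBS_O_WORKDIR may arrive as a symlink path; readlink -f resolves it
# to the real absolute path before it is used for GFORTRAN_TMPDIR.
mkdir -p /tmp/castep_demo/realdir                            # stands in for the /work directory
ln -sfn /tmp/castep_demo/realdir /tmp/castep_demo/linkdir    # stands in for the /home symlink
PBS_O_WORKDIR=/tmp/castep_demo/linkdir
PBS_O_WORKDIR=$(readlink -f $PBS_O_WORKDIR)
echo $PBS_O_WORKDIR                                          # now the fully resolved path
```

After the reassignment, PBS_O_WORKDIR no longer passes through the symlink, so GFORTRAN_TMPDIR inherits a genuine /work path.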
CASTEP has an option to trade off speed against memory use during a run. This is controlled by either the OPT_STRATEGY parameter or the OPT_STRATEGY_BIAS parameter:
We normally recommend that users choose OPT_STRATEGY=SPEED (equivalently, OPT_STRATEGY_BIAS=+3) unless the run attempts to use more memory than is available. (Even in that case, the first choice should be to increase the number of processors used and distribute the memory, if possible.)
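As a sketch, the recommended setting would go in your job's .param file like this (the seed name my_job is illustrative, matching the example script above):

```
# my_job.param - bias the run towards speed rather than memory saving
OPT_STRATEGY : SPEED
```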