ONETEP
ONETEP Version 4.3.0 is installed and available on ARCHER. An earlier version (3.4.0) can be accessed by loading the appropriate module.
Licensing and Access
ONETEP is licensed software. Please see the ONETEP web page for details. Users who wish to access the ONETEP package should submit a request via SAFE.
Running ONETEP
To run ONETEP you need to load the correct module:
module load onetep
The current default ONETEP module will load ONETEP 4.3.0, but there is a module available for ONETEP 3.4.0:
module load onetep/3.4
Once the module has been loaded the main ONETEP executable is available as onetep.
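As a quick check (standard environment-modules and shell commands, not part of the official instructions), you can list the ONETEP modules installed on ARCHER and confirm which executable the loaded module places on your path:

module avail onetep     # list the ONETEP modules installed on ARCHER
module load onetep      # load the default module (currently 4.3.0)
which onetep            # confirm that the onetep executable is on your PATH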
An example ONETEP job submission script is shown below.
#!/bin/bash
#PBS -N ONETEP_job
#PBS -l select=4
#PBS -l walltime=01:00:00
#PBS -j oe
#PBS -A budget

# Make sure any symbolic links are resolved to absolute path
export PBS_O_WORKDIR=$(readlink -f $PBS_O_WORKDIR)

# Change to the directory that the job was submitted from
cd $PBS_O_WORKDIR

# Load the ONETEP module
module load onetep

# Set the seed name for the calculation (controls what .dat file is loaded)
SEED="seedname"

# Set parameters for OpenMP
export MKL_NUM_THREADS=1
export MKL_DYNAMIC=false
export OMP_NESTED=true
export MKL_NESTED=true

# Set number of threads per MPI process (must be a divisor of 24)
export OMP_NUM_THREADS=3

# Number of processor cores per node (24 on ARCHER) and number of nodes
# allocated to the job (must match the "#PBS -l select" line above)
export NUM_PPN=24
export NODE_COUNT=4

# Set number of MPI processes per node
export NP=$((NUM_PPN / OMP_NUM_THREADS))

# Set total number of MPI processes
export NMPI=$((NP * NODE_COUNT))

# Gather aprun arguments
export APRUN_ARGS="-n $NMPI -N $NP -d $OMP_NUM_THREADS -S $((NP / 2)) -cc numa_node"

# Set output filename
export OUT=$SEED".out_"$NMPI"mpi_"$OMP_NUM_THREADS"thr"

# Run the job
echo "Executing aprun $APRUN_ARGS onetep $SEED >> $OUT"
aprun $APRUN_ARGS onetep $SEED >> $OUT
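As a minimal usage sketch (the script filename onetep_job.pbs is an assumption), save the script above in the directory containing your .dat input file and submit it with qsub; qstat shows its progress in the queue:

qsub onetep_job.pbs     # submit the job script (filename assumed here)
qstat -u $USER          # check the status of your jobs in the queue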
Hints and Tips
ONETEP supports hybrid OpenMP / MPI parallelism. It is not necessary to use OpenMP for a very small calculation, but for anything over a few hundred atoms it is highly advisable. The above example script uses 4 nodes and thus a total of 96 cores, and it runs with 3 OpenMP threads per MPI process, so there are 8 MPI processes per node and 32 MPI processes in total.
The script chooses the number of MPI processes per node from the user-supplied number of threads so that each thread runs on its own core, and derives the total number of MPI processes from the number of nodes. The two parameters to adjust to control the parallelisation are therefore the total number of nodes (#PBS -l select=N) and the number of threads per process (OMP_NUM_THREADS).
It is very rarely necessary to under-populate the nodes: if you find yourself running out of memory, simply increase the number of OpenMP threads per MPI process. Efficient results are obtainable with OMP_NUM_THREADS=2, 3, 4 and 6. OMP_NUM_THREADS=12 can be used, but the efficiency will be less than ideal. You should not go beyond OMP_NUM_THREADS=12, as this would mean a single MPI process spanning both NUMA regions of a node, which incurs a considerable performance hit.
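As a sketch of the arithmetic the script performs on a 24-core ARCHER node (NP = NUM_PPN / OMP_NUM_THREADS), the recommended thread counts give the following numbers of MPI processes per node:

# Print MPI processes per node for each recommended thread count on a
# 24-core node; the total number of MPI processes is this value times the
# number of nodes requested with "#PBS -l select"
for threads in 2 3 4 6 12; do
    echo "OMP_NUM_THREADS=$threads -> $((24 / threads)) MPI processes per node"
done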
You will also need to set the budget code (#PBS -A) and set the seed name to match the name of your input file (e.g. "silane" for the first tutorial on the ONETEP website).
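For example, to run the first tutorial under a hypothetical budget code (e123 below is a placeholder; substitute your own), only these two lines of the script need changing:

#PBS -A e123            # hypothetical budget code; replace with your own
SEED="silane"           # reads silane.dat from the submission directory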
Compiling
ONETEP should be compiled with the Intel compiler, using the supplied configuration file (config/conf.archer), which contains compilation settings appropriate to ARCHER. From a fresh login:
module swap PrgEnv-cray PrgEnv-intel
gmake onetep ARCH=archer