2. System Architecture and Configuration


For an overview of the Cray XC30 architecture used in ARCHER, please see the relevant About ARCHER section.

2.1 Processor architecture

2.1.1 Vector-type instructions

One of the keys to getting good performance out of the Xeon architecture is writing your code so that the compiler can make use of the vector floating-point operations available on the processor. Two families of vector instructions are available, and they execute in a similar manner: SSE (Streaming SIMD Extensions) instructions and AVX (Advanced Vector eXtensions) instructions. Please note that AVX instructions are not available on the serial/PP nodes.

These instructions use the floating-point unit (FPU) to operate on multiple floating-point numbers simultaneously, provided the numbers are contiguous in memory. SSE provides a range of operations (for example: arithmetic, comparison, type conversion) that act on two operands held in 128-bit registers. AVX extends SSE by allowing three-operand instructions and by widening the data path from 128 to 256 bits. In the E5-2600 architecture, each core has a 256-bit floating-point pipeline.

2.2 Memory architecture

The two processors on a standard ARCHER compute node share 64 GB of DDR3 memory. There are a small number of high-memory nodes with 128 GB of memory shared between the two processors. ARCHER has 4544 standard memory nodes, along with 376 high-memory nodes, bringing the total memory on ARCHER to over 300 TB.

Each node has a main memory bandwidth of 85.3 GB/s (42.7 GB/s per socket, 5.3 GB/s per module, 2.7 GB/s per core).

2.3 Available file systems

ARCHER has a number of different filesystems, each with its own purpose. Detailed information on ARCHER filesystems can be found in the User Guide.

2.4 Operating system (CLE)

The operating system on ARCHER is the Cray Linux Environment (CLE), which in turn is based on SuSE Linux. CLE consists of two components: a full-featured Linux that runs on the service nodes and Compute Node Linux (CNL), which runs on the compute nodes.

The service nodes, external login nodes, and post-processing nodes of ARCHER run a full-featured version of Linux.

The compute nodes of ARCHER run CNL. CNL is a stripped-down version of Linux that has been extensively modified to reduce both the memory footprint of the OS and also the amount of variation in compute node performance due to OS overhead.