Tuesday 12th November 2019

A Fully Lagrangian Dynamical Core for the Met Office/NERC Cloud Model - Online webinar

Wednesday 13th November 2019 15:00

Steven Boeing, University of Leeds and Gordon Gibb, EPCC

We discuss the development of an essentially Lagrangian model of atmospheric cloud dynamics, carried out in the eCSE project 'A fully Lagrangian Dynamical Core for the Met Office/NERC Cloud Model'. We describe the work, which aimed to incorporate the 'Moist Parcel in Cell' (MPIC) code into the Met Office/NERC Cloud Model (MONC), thereby granting MPIC MPI parallelism and the ability to scale beyond a single node.

In the Moist-Parcel-In-Cell (MPIC) model, parcels represent both the thermodynamic and the dynamical properties of the flow. The representation of parcel properties is fully Lagrangian, but an efficient grid-based solver calculates parcel advection velocities. The Lagrangian approach of MPIC has a number of advantages: in particular, parcel properties are naturally conserved, and the amount of mixing between parcels can be explicitly controlled. MPIC also lends itself well to parallelisation, because most of the communication required between processors is local, and an efficient solver is available where global communication is required.
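The parcel-in-cell idea described above can be illustrated with a toy 1-D advection step, in which parcels carry their properties unchanged while their velocities are interpolated from a grid. This is a minimal sketch only, not the MPIC implementation; the function name and parameters are illustrative.

```python
import numpy as np

# Toy 1-D parcel-in-cell advection step (illustrative, not the MPIC code).
# Parcels carry their properties with them; only their velocities come
# from an efficient grid-based solver, here via linear interpolation.

def advect_parcels(positions, grid_u, dx, dt, domain_length):
    """Move parcels using velocities linearly interpolated from grid points."""
    n = len(grid_u)
    # Grid cell to the left of each parcel, and the fractional offset within it
    idx = np.floor(positions / dx).astype(int) % n
    frac = (positions / dx) - np.floor(positions / dx)
    # Linear interpolation between neighbouring grid velocities (periodic domain)
    u = (1.0 - frac) * grid_u[idx] + frac * grid_u[(idx + 1) % n]
    # Parcel properties (moisture, buoyancy, ...) need no update here:
    # they are naturally conserved because they travel with the parcel.
    return (positions + u * dt) % domain_length

# Uniform velocity field: every parcel simply shifts by u*dt
positions = np.array([0.25, 1.5, 3.0])
grid_u = np.full(8, 1.0)   # 8 grid points, dx = 0.5, u = 1 everywhere
new_pos = advect_parcels(positions, grid_u, dx=0.5, dt=0.1, domain_length=4.0)
print(new_pos)             # each parcel shifted by 0.1
```

Because each parcel only needs velocities from nearby grid points, most inter-processor communication in such a scheme is local, which is the property that makes MPIC well suited to parallelisation.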

First, we describe the modifications to MONC in order for it to be parcel-aware. Subsequently, we outline the implementation of MPIC's physics into MONC. We then provide an in-depth comparison of MPIC with the newly developed code, and investigate how it scales to many thousands of MPI processes. Finally, we discuss the limitations of the code and the future work to be carried out to mitigate them. We found that the code can scale to many thousands of cores for large problem sizes, although the main limiters of performance at scale are the Fourier transform routines. Despite the code's good MPI performance, its OpenMP performance is poor, achieving a speedup of only 3 on 12 threads. Overall, on a single node the new code performs better than MPIC, carrying out more parcel operations per core per second.

We will also describe how users can install the code and develop new components.

Full details and join link

RDF Maintenance - Wednesday 20th November from 9am to 5pm

The RDF filesystems /nerc, /epsrc and /general will be unavailable on Wednesday 20th November from 9am to 5pm.

The network between ARCHER and the RDF will be upgraded from 20Gb/s to 80Gb/s in order to speed up transfers of data from ARCHER to the RDF.
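As a back-of-the-envelope illustration of what the upgrade means for users, the sketch below compares ideal transfer times at the two link rates; the 10 TB figure is purely hypothetical, and real throughput will be lower than line rate due to filesystem and protocol overheads.

```python
# Illustrative effect of the 20 Gb/s -> 80 Gb/s network upgrade.
# Ideal line-rate arithmetic only; real transfers see extra overheads.

def transfer_time_hours(size_tb, link_gbps):
    """Ideal time to move size_tb terabytes over a link_gbps link."""
    size_gbits = size_tb * 1000 * 8      # terabytes -> gigabits
    return size_gbits / link_gbps / 3600

before = transfer_time_hours(10, 20)     # hypothetical 10 TB at 20 Gb/s
after = transfer_time_hours(10, 80)      # the same 10 TB at 80 Gb/s
print(f"{before:.2f} h -> {after:.2f} h")  # a fourfold reduction in the ideal case
```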

This will also involve updates to the configuration on the login nodes, and so users are advised to access the login nodes using the standard login address rather than directing logins to a specific login node.

ARCHER Image and Video Competition 2019 : Results

Thank you to everyone who submitted images and videos to this year's competition. The outstanding range of entries made judging a most enjoyable experience, with so many great demonstrations of the important work being done using ARCHER.

We are pleased to announce the winners as follows:

  • Winning Early Career Researcher entry: Dr Georgios Deskos, Imperial College London with their image "Turbulence-resolving simulations of a utility-scale offshore wind farm, highlighting the flow structures and turbulence of turbine wakes".
  • Winning Image: Dr David Abbasi Perez, Department of Physics, King's College London with their image "Chiral molecules on a surface can be separated under the presence of an external oscillating field".
  • Winning Video and Overall Winner: Benjamin Fernando, Department of Earth Sciences, University of Oxford with their video "Seismic Waves in the Ocean".

A gallery of all the submitted images and videos is available to view at

Thanks to the entrants, judges and all involved in making the competition such a great success. We hope you will take a few moments to enjoy looking at the gallery.

Enabling multi-node MPI parallelisation of the LISFLOOD flood inundation model

Postponed to Wednesday 4th December 2019 15:00

Arno Proeme, EPCC

The availability of higher-resolution topographic datasets covering greater spatial domains has soared in recent years, pushing the limits of computational resources beyond those typically found in regional HPC services. In addition, many countries that are subject to large-scale flooding each year do not have access to real-time flood forecasting software. This webinar describes how HAIL-CAESAR, a geosciences code that implements the LISFLOOD flood inundation model, was ported to make use of LibGeoDecomp - a C++ stencil code HPC library - to enable multi-node parallelism. Whilst currently single inundation scenarios can take multiple days to run using standard hydrological modelling software, this project paves the way for ensemble runs that can be initiated on the basis of a 24 or 48 hour rainfall forecast and complete within shorter timescales, which should ultimately have major implications for flood warnings in developing countries.
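The reason a library like LibGeoDecomp can parallelise such a model is that flood inundation updates are stencil computations: each cell's new value depends only on its immediate neighbours, so the grid can be split across processes with a one-cell halo exchanged at each step. The sketch below shows the pattern on a 1-D toy diffusion update; it is a hypothetical illustration, not the LISFLOOD/HAIL-CAESAR equations or the LibGeoDecomp API.

```python
import numpy as np

# Toy stencil update illustrating why flood models decompose well:
# each cell reads only its left and right neighbours, so sub-domains
# need exchange only a one-cell halo per step.
# (Hypothetical diffusion of water depth, not the LISFLOOD equations.)

def diffuse_step(depth, alpha=0.25):
    """One explicit diffusion step on water depth with fixed boundaries."""
    new = depth.copy()
    new[1:-1] = depth[1:-1] + alpha * (depth[2:] - 2 * depth[1:-1] + depth[:-2])
    return new

depth = np.zeros(9)
depth[4] = 1.0                 # a single wet cell in the middle
depth = diffuse_step(depth)
print(depth)                   # water spreads to the two neighbouring cells
```

In a multi-node run, each process would own a slice of this array and swap its boundary cells with its neighbours before every step, which is exactly the communication pattern a stencil library manages on the application's behalf.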

Full details and join link

HPC-Europa3 Transnational Access programme

Collaborative research visits using High Performance Computing

Call for applications: next closing date 20th November 2019

HPC-Europa3 funds research visits for computational scientists in any discipline which can use High Performance Computing (HPC).

Visits can be made to research institutes in Finland, Germany, Greece, Ireland, Italy, the Netherlands, Spain, Sweden or the UK.

UK-based researchers can benefit in two ways: either by visiting a research group elsewhere in Europe, or by hosting a research visitor from another country.

What does HPC-Europa3 provide?

  • Funding for travel, living and accommodation expenses for visits of up to 13 weeks.
  • Access to world-class High Performance Computing (HPC) facilities.
  • Technical support to help you make best use of the HPC systems.
  • Collaborative environment with an expert in your field of research.

Who can apply?

  • Researchers of all levels, from postgraduate to full professors.
  • Researchers from academia or industry.
  • Researchers currently working in a European Union country or Associated State (see for full list of Associated States).
  • Researchers may not visit a group in the country where they currently work.
  • A small number of places are available for researchers working outside these countries - please contact for more information.

How do I apply?

Apply online at

The next closing date is 20th November 2019. There are four closing dates per year, and applications can be submitted at any time. You should receive a decision approximately 6 weeks after the closing date.

For more information and to apply online, visit:

Follow us on Twitter for project news:

Upcoming Training Opportunities

Registration open now

  • A Fully Lagrangian Dynamical Core for the Met Office/NERC Cloud Model, Online webinar, 13th November 2019 15:00
  • Shared-memory Programming with OpenMP - Online course, four Wednesday afternoons, 13, 20, 27 November and 4 December 2019
  • Enabling multi-node MPI parallelisation of the LISFLOOD flood inundation model, Online webinar, postponed to 4 December 2019 15:00
  • HPC Carpentry, EPCC, Edinburgh, 9 - 10 December 2019

Registration opening very soon

  • Hands-on Introduction to HPC, EPCC, Edinburgh 7-8 January 2020
  • GPU Programming with CUDA, EPCC, Edinburgh 9-10 January 2020

Full details and registration at