Virtual Tutorials and Webinars

This section contains details on the virtual tutorials and webinars offered by the ARCHER service.

These virtual sessions usually take place at 15:00 UK time on Wednesdays.

The virtual tutorials and webinars are live online interactive sessions conducted using Blackboard Collaborate. As well as presenting on selected topics in high-performance computing relevant to ARCHER users and reporting on scientific software development work done as part of ARCHER eCSE projects, these sessions provide a forum for users to ask questions on any topic related to the ARCHER service. They also provide an opportunity for attendees of recent ARCHER training courses to ask questions that may have arisen since attending the course.

Webinars usually have a presentation followed by the opportunity for attendees to ask questions.

Virtual tutorials normally start with a presentation and live demonstration on a particular topic followed by time for discussion with participants, both about the presented materials and in response to general questions about ARCHER and HPC/parallel computing in general. We hope that the discussions will be driven by what users wish to know and we expect the format to be that of a Question and Answer (Q&A) session.

Upcoming Sessions

See our Blackboard Collaborate page for more information on how to connect.

In August 2016, Blackboard Collaborate was updated to Collaborate Ultra. This has a very different look and feel from the previous version, and no longer requires any software to be pre-installed other than a web browser.
Chrome is the recommended browser, but most other modern browsers can be used to attend a session. Please see our Blackboard Collaborate page for more information and for advice on testing your connection prior to the tutorial.

Report a Blackboard Collaborate or Connection Problem

Title Description Type Associated files Date
eCSE05-05 Open source exascale multi-scale framework for the UK solid mechanics community

Anton Shterenlikht, University of Bristol, Luis Cebamanos, EPCC and Lee Margetts, University of Manchester.

Fortran coarrays have been used as an extension to the standard for over 20 years, mostly on Cray systems. Their appeal to users increased substantially when they were standardised in 2010. In this work we show that coarrays offer simple and intuitive data structures for 3D cellular automata (CA) modelling of material microstructures. We show how coarrays can be used together with an MPI finite element (FE) library to create a two-way concurrent, hierarchical and scalable multi-scale CAFE deformation and fracture framework. The design of CGPACK, a coarray cellular automata library for microstructure evolution, is also described.

We have developed hybrid MPI/coarray multi-scale miniapps from the MPI finite element library ParaFEM and the Fortran 2008 coarray cellular automata library CGPACK. We show that, independently, CGPACK and ParaFEM programs can scale well into tens of thousands of cores. The miniapps represent multi-scale fracture models of polycrystalline solids. The software from which these miniapps have been derived will improve predictive modelling in the automotive, aerospace, power generation, defence and manufacturing sectors. The libraries and miniapps are distributed under the BSD licence, so they can be used by computer scientists and hardware vendors to test various tools, including compilers and performance monitoring applications.

CrayPAT tools have been used for sampling and tracing analysis of the miniapps. Two routines with all-to-all communication structures have been identified as primary candidates for optimisation. New routines have been written implementing a nearest-neighbour algorithm and using coarray collectives. The strong scaling limit of the miniapps has been increased by a factor of 3, from about 2,000 to over 7,000 cores. Excessive synchronisation may be one factor contributing to this scaling limit, so we conclude with a comparative analysis of the synchronisation requirements of MPI and coarray programs. Specific challenges of synchronising a coarray library are discussed.
Webinar Slides Wednesday 29th March 2017 15:00

Previous Sessions

Slides and video/audio recordings are available for some of the virtual tutorial and webinar sessions. Most videos are hosted on the ARCHER YouTube channel. Any .jar recordings should play automatically using Blackboard Collaborate. Please leave us feedback by clicking the Feedback icon to help us improve future courses.



Title Description Type Associated files Date
eCSE05-13 Optimisation of LESsCOAL for large-scale high-fidelity simulation of coal pyrolysis and combustion

Kaidi Wan, Zhejiang University, and Jun Xia, Brunel University

This webinar will summarise the eCSE05-13 project on optimising LESsCOAL (Large-Eddy Simulations of COAL combustion) on ARCHER. The specific objective was to achieve 80% of the theoretical parallel efficiency when up to 3,000 compute cores are used on ARCHER. The software package can be used for a broad range of turbulent gas-solid and gas-liquid two-phase combustion problems using high-fidelity simulation techniques. The Navier-Stokes equations are solved for the turbulent carrier gas phase. The characteristic size of the discrete second phase is much smaller than the Eulerian grid spacing, so a point-source Lagrangian approach can be applied. Full coupling between the two phases in total and species mass, momentum and energy has been incorporated.

The major work included upgrading the pressure, radiation and particle modules, which had been identified as the three major hurdles degrading scaling performance. Particle load balance is achieved by redistributing particles among all the cores on the fly.

In addition, using one-sided MPI_Put communication instead of the collective MPI_Allgather has greatly enhanced the parallel efficiency of redistributing particles among cores. A wavefront sweep algorithm and a priority queuing technique were compared when optimising the radiation model. The former is more suitable for modelling radiation in a long channel or tube with a large aspect ratio; the latter was found to be more suitable for the cases under consideration in this project.

For the pressure module, 14 different combinations of a solver and a pre-conditioner available in the open-source software package HYPRE have been tested. It was found that GMRES-PFMG and PCG-PFMG consume less computational time per time step for the cases under consideration. After optimisation, the parallel efficiency of the code reaches 80% of the theoretical one for weak scaling. For strong scaling, the parallel efficiency is over 80% when up to 1,200 cores are used for typical cases, but reduces to 48% on 3,072 ARCHER cores.

The root cause is the small number of grid cells and particles handled by each core, which leads to a high communication-to-computation ratio. Further work is therefore needed to optimise the parallel efficiency of the code. On the other hand, weak scaling is more relevant to the laboratory-scale turbulent pulverised-coal combustion cases under consideration: in general, the number of compute cores increases with the problem size, which is measured by the total number of grid cells and particles/droplets.
Webinar Video
Slides

Feedback
22 March 2017
Parallel IO on ARCHER Andy Turner, EPCC
The ARCHER CSE team at EPCC have recently been benchmarking the performance of different parallel I/O schemes on ARCHER and this has led to the production of the "I/O on ARCHER" chapter in the ARCHER Best Practice Guide (http://www.archer.ac.uk/documentation/best-practice-guide/io.php).
This webinar discusses the results from the benchmarking activity and what this means for optimising the I/O write performance for applications on ARCHER.
Virtual Tutorial Slides

Video

Feedback
8 February 2017
eCSE Tutorial Embedded CSE (eCSE) support provides funding to both the established ARCHER user community and communities new to ARCHER to develop software in a sustainable manner for running on ARCHER. This enables the employment of a researcher or code developer to work specifically on the relevant software to enable new features or improve the performance of the code. Calls are issued 3 times per year.
This tutorial provides an overview of the eCSE submission process, including material focussing on applications to use the new XC40 Xeon Phi System.
Webinar Slides

Video

Feedback
18 January 2017
Modern Fortran Adrian Jackson discusses the features of "modern" Fortran (Fortran90 and beyond), and the steps that need to be considered to port older Fortran (primarily Fortran 77) to new Fortran standards. We also briefly discuss some of the new Fortran features (in Fortran 2003 and beyond). Virtual Tutorial Slides

Video

Feedback
11 January 2017
eCSE03-9 Adjoint ocean modelling with MITgcm and OpenAD
Dan Jones and Gavin Pringle
Adjoint methods can be used to gain unique insights into the ocean and cryosphere, for example via (1) the construction of observationally-constrained estimates of the ocean state and (2) sensitivity studies that quantify how a selected model output depends on the full set of model inputs.
In this webinar, we discuss oceanic adjoint modelling on ARCHER using two open source tools, MITgcm and OpenAD. In particular, we discuss how available test cases may be modified for new applications. This webinar may be of interest to anyone wishing to apply adjoint modelling techniques to oceanic and/or cryospheric problems.
Virtual tutorial Slides

Video

Feedback
Wednesday 30th November 2016
CP2K CP2K: Recent performance improvements and new TD-DFT functionality
Iain Bethune and Matthew Watkins
We will present a round-up of recent additions and improvements to CP2K, most of which are available in the recent CP2K 4.1 release that is now installed on ARCHER. The eCSE06-06 project has improved performance for large, load-imbalanced calculations, and implemented OpenMP parallelism for the pair-potential and non-local dispersion corrections. We will also present new functionality for Time-Dependent Density Functional Theory (TD-DFT) implemented in eCSE03-11, including the Maximum Overlap Method, support for hybrid density functionals (direct and ADMM methods) in linear-response TD-DFT, calculation of transition dipole moments and oscillator strengths, and parallel computation of excited states.
Virtual tutorial Slides
- Iain's
- Matt's

Video

Feedback
Wednesday 23rd November 2016
eCSE05-14 Large-Eddy Simulation Code for City Scale Environments
Zheng-Tong Xie and Vladimir Fuka
The atmospheric large eddy simulation code ELMM (Extended Large-eddy Microscale Model) was optimised and its capabilities enhanced within project eCSE05-14 to allow simulations of city-scale problems (~10 km). The model is used mainly for simulations of turbulent flow and contaminant dispersion in the atmospheric boundary layer, especially in urban areas. A hybrid OpenMP/MPI version of the code was developed. The new features include one-way and two-way grid nesting, which enables simulation of large domains at affordable resolution, with high resolution in selected areas, while still maintaining a simple uniform grid with the associated code simplicity and efficiency. Synthetic turbulence generation is employed at nested boundaries to generate turbulent structures not resolved in the outer grid.
Virtual tutorial Slides

Video

Feedback
Wednesday 9th November 2016 15:00
eCSE04-07 and eCSE05-10 Multi-resolution modelling of biological systems in LAMMPS
Iain Bethune and Oliver Henrich
Two recent ARCHER eCSE projects (eCSE04-07 and eCSE05-10) have implemented extensions to LAMMPS to allow efficient simulations of biological systems using multiscale or dual-resolution coarse-grained/atomistic approaches.
The first project concerns the implementation of ELBA, an ELectrostatics-BAsed force-field originally designed for modelling lipid bilayers, but which is also transferable to applications such as the computation of solvation free energies of biomolecules in water. Simulations using ELBA may combine coarse-grained beads with atomistic particles described using standard force-fields, for example membrane proteins embedded in a bilayer. We will discuss the implementation of a new integration algorithm for stable long-timescale Molecular Dynamics, and extensions to the LAMMPS load balancer to improve parallel performance in dual-resolution simulations.
The second project implemented the oxDNA force field, a coarse-grained model for nucleic acids. New pair interactions have been developed that now permit dynamical simulation of single- and double-stranded DNA on very large time and length scales. LAMMPS has emerged as a suitable computational platform for coarse-grained modelling of DNA, and this eCSE project makes it possible to compare and optimise such models with a view towards later multiscale modelling of DNA. We will explain how the oxDNA force field is parametrised and discuss a set of new Langevin-type rigid-body integrators with improved stability.
Webinar Slides

Video

Feedback
Wednesday 19th October 2016
Using KNL on ARCHER It was recently announced that a 12-node Cray system containing Intel's new Knights Landing (KNL) manycore processors is to be added to the existing ARCHER service.
This webinar, which follows on from the previous KNL Overview and the eCSE programme for KNL-related projects, will focus on how to use the ARCHER KNL system in practice.
As ever, the main aim is to promote discussion between attendees and members of the ARCHER CSE team.
The webinar should be of interest to anyone considering porting applications to Knights Landing.
Virtual tutorial Slides

Video

Chat

Feedback
Wednesday 12th October 2016
eCSE Embedded CSE (eCSE) support provides funding to both the established ARCHER user community and communities new to ARCHER to develop software in a sustainable manner for running on ARCHER. This enables the employment of a researcher or code developer to work specifically on the relevant software to enable new features or improve the performance of the code.
Calls are issued 3 times per year.
This tutorial will provide an overview of the eCSE submission process, with a particular focus on applications to use the new XC40 Xeon Phi system. Following this there will be a drop-in session for potential applicants to ask the eCSE team any questions they may have about the eCSE programme.
Webinar Video

Feedback
Wednesday 21 Sept 2016
The Intel Knights Landing Processor It was recently announced that a 12-node Cray system containing Intel's new Knights Landing (KNL) manycore processors is to be added to the existing ARCHER service.
This webinar will focus on the architecture of the KNL processor, including details of its new high-bandwidth memory and a discussion of how best to use all of the 250-plus possible threads of execution in real applications. As ever, the main aim is to promote discussion between attendees and members of the ARCHER CSE team.
The webinar should be of interest to anyone considering porting applications to Knights Landing. This is the first of three webinars - subsequent events will focus on applying to the eCSE programme for KNL-related projects, and on how to use the ARCHER KNL system in practice.
Virtual tutorial Slides

Video

Feedback
Wednesday 14th Sept 2016
eCSE05-3 Construction of a general purpose parallel supermeshing library for multimesh modelling Models which use multiple non-matching unstructured meshes generally need to solve a computational geometry problem, constructing intersection meshes in a process known as supermeshing. The algorithm for solving this problem is known and has an existing implementation in the unstructured finite element model Fluidity, but this implementation is deeply embedded within the code and unavailable for widespread use. This limits access to a powerful numerical technique and restricts the scope of present multimesh modelling applications.

In general a numerical model may need to consider not only two non-matching unstructured meshes, but also allow the two meshes to have different parallel partitionings. An implementation of a parallel supermeshing algorithm in this general case is lacking, and this has limited the scale of multimesh modelling applications.

This project will address these issues by:
  • Creating a standalone general-purpose numerical library which can be easily integrated into new and existing numerical models.
  • Integrating general parallel supermeshing into the new numerical library, allowing new large-scale multimesh problems to be tackled.
The general parallel supermeshing algorithm will be optimised to ensure parallel scaling up to 10,000 ARCHER cores for multimesh problems with one hundred million degrees of freedom. This will allow multimesh simulations to be conducted on a previously inaccessible scale.
Webinar Slides

Video

Feedback
13th July 2016
TAU HPC Tool Guest speaker Sameer Shende, from the University of Oregon, talks about TAU (Tuning and Analysis Utilities), an HPC performance tool which is available on ARCHER. Virtual Tutorial Slides

Video

Feedback
7th July 2016
libcfd2lcs: A general purpose library for computing Lagrangian coherent structures during CFD simulations Justin Finn, School of Engineering, The University of Liverpool A number of diagnostics have been proposed in recent years that allow transport and mixing patterns to be objectively defined in time dependent or chaotic flows. One such family of diagnostics, coined Lagrangian Coherent Structures (LCS), correspond to the locally most attracting or repelling material surfaces in a flow. They have been instrumental in developing new understanding of transport in a number of geophysical, biological, and industrial applications. However, the resource-intensive post-processing procedure typically used to extract these features from experimental or numerical simulation data has probably limited the impact of LCS to date. In this web-forum I'll describe libcfd2lcs, a new computational library that provides the ability to extract LCS on-the-fly during a computational fluid dynamics (CFD) simulation, with modest additional overhead. Users of the library create LCS diagnostics through a simple but flexible API, and the library updates the diagnostics as the user's CFD simulation evolves. By harnessing the large scale parallelism of platforms like ARCHER, this enables CFD practitioners to make LCS a standard analysis tool, even for large scale three dimensional data sets. This seminar will focus on the current capabilities of libcfd2lcs and how to develop applications that use it. Development of libcfd2lcs was funded by an eCSE grant from the Edinburgh Parallel Computing Centre (EPCC). It is currently accessible to all ARCHER users. Webinar Slides

Video

Feedback
22nd June 2016
Modern Fortran Adrian Jackson will discuss the features of "modern" Fortran (Fortran90 and beyond), and the steps that need to be considered to port older Fortran (primarily Fortran 77) to new Fortran standards. We will also briefly discuss some of the new Fortran features (in Fortran 2003 and beyond). Virtual Tutorial Please see the re-run of this session January 2017 8th June 2016
Introduction to the RDF
An introduction to the Research Data Facility, what it is and how to use it. Virtual Tutorial Slides

Video

Feedback
4th May 2016
Using MPI - Q&A Webinar
Learning the basic MPI syntax and writing small example programs can be relatively straightforward, but many questions only arise when you first tackle a large-scale parallel application. Typical topics include how best to avoid deadlock, overlapping communication and calculation, debugging strategies or parallel IO.
This virtual tutorial is an opportunity for users to ask any questions they want about using MPI in real applications, which we will try to answer based on the practical experience gained by the ARCHER CSE team.
Webinar Questions
Video

Feedback
6th April 2016
OpenMP4
The latest versions of the compilers on ARCHER now all support OpenMP 4.0.
This webinar will give a tour of the new features in this version of the OpenMP standard and some examples of how they can be used.
Virtual Tutorial Slides

Video

Feedback
9th March 2016
ARCHER Programming Environment Update
Harvey Richardson and Michael Neff give an overview of recent changes and updates in the Cray programming environment available on ARCHER. Webinar Video

Feedback
Wednesday 10th February 2016
PBS Job Submission How PBS job submission works on ARCHER, which is not as simple as it may first appear. Virtual Tutorial Slides
Video
Chat

Feedback
Wednesday 13th January 2016
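As a reference point for the PBS session above, a minimal ARCHER-style batch script might look like the following sketch (the budget code t01 and the executable name my_app are placeholders, not real values):

```shell
#!/bin/bash --login
#PBS -N myjob                # job name
#PBS -l select=2             # number of nodes (24 cores per node on ARCHER)
#PBS -l walltime=00:20:00    # maximum wall-clock time
#PBS -A t01                  # budget code to charge (placeholder)

# Jobs start in the home directory; move to the directory the job
# was submitted from, which should be on the /work filesystem
cd $PBS_O_WORKDIR

# Launch 48 MPI processes (2 nodes x 24 cores) on the compute nodes
aprun -n 48 ./my_app
```

The script is submitted with qsub; as the tutorial explains, the details of how such requests are scheduled are not as simple as they may first appear.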
eCSE virtual drop-in session An opportunity for ARCHER users to ask the eCSE team any questions about the eCSE programme Virtual Tutorial Video
Chat

Feedback
Wednesday 16th December 2015 at 11:00
Lustre and I/O Tuning /work on ARCHER is a Lustre file system. This virtual tutorial is a brief introduction to Lustre with tips on how to choose Lustre settings that will improve I/O performance. Virtual Tutorial Slides

Video

Feedback
Wednesday 25th November 2015
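Striping settings of the kind discussed in the Lustre tutorial can be inspected and changed with the lfs command; a brief illustrative sketch (file and directory names are examples only):

```shell
# Show the current striping of a file or directory
lfs getstripe results.dat

# New files created in this directory will be striped across 4 OSTs
lfs setstripe -c 4 output_dir/

# Stripe across all available OSTs (-1), often useful for a single
# large file written by many processes
lfs setstripe -c -1 shared_file_dir/
```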
Parallel Software usage on UK National HPC Facilities 2009-2015: How well have applications kept up with increasingly parallel hardware? One of the largest challenges facing the HPC user community on moving from terascale, through petascale, towards exascale HPC is the ability of parallel software to meet the scaling demands placed on it by modern HPC architectures. In this webinar I will look at the usage of parallel software across two UK national HPC facilities, HECToR and ARCHER, to understand how well applications have kept pace with hardware advances.
These systems have spanned the rise of multicore architectures: from 2 to 24 cores per compute node. We analyse and comment on: trends in usage over time; trends in the calculation size; and changes in research areas on the systems. The in-house Python tool that is used to collect and analyse the application usage statistics is also described. We conclude by using this analysis to look forward to how particular parallel applications may fare on future HPC systems.
Andy Turner, EPCC.
Webinar Slides

Video

Chat

Feedback
11th November 2015
Supporting Diversity Outlining some of the issues which students may experience when taking part in training and courses, and some strategies which we can use to make our courses more accessible. Virtual Tutorial Slides

Video

Feedback
4th November 2015
Using OpenFOAM on ARCHER Several versions of OpenFOAM are installed on ARCHER. This webinar explains how to use the installed versions and how to compile OpenFOAM yourself. It also provides tips on how to use OpenFOAM efficiently on the /work Lustre filesystem. Virtual Tutorial Slides

Video

Chat

Feedback
23rd September 2015
Data Management This talk looks at some of the potential bottlenecks of transferring data between systems, specifically in relation to the UK Research Data Facility (RDF) and ARCHER service. Archiving is proposed as a method of reducing the number of files and addressing the bottleneck of filesystem metadata operations. Different copying utilities are examined, and the trade-off between security and performance is considered. Virtual Tutorial Video

Slides

Chat

Feedback
Wednesday 9th September 2015
eCSE Webinar Embedded CSE (eCSE) support provides funding to both the established ARCHER user community and communities new to ARCHER to develop software in a sustainable manner for running on ARCHER. This enables the employment of a researcher or code developer to work specifically on the relevant software to enable new features or improve the performance of the code.
Calls are issued 3 times per year.
This tutorial aims to give an overview of what the eCSE is and to provide examples of the kind of work which is supported. It also gives a short description of the submission and review process.
virtual tutorial Slides

Video

Chat transcript

Feedback
26th August 2015
Bash is Awesome! Andy Turner
bash has many useful features, particularly for scripting. In this short webinar I will give an introduction to a number of the features I find most useful on a day-to-day basis, and hopefully hear from attendees which other features I should be using!
Virtual Tutorial Slides

Example script

Video

Chat transcript

Feedback
12 August 2015
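As a taster for the bash webinar above, here are a few of the scripting features of the kind it covers (the file path is purely illustrative):

```shell
# Parameter expansion: manipulate a path without calling external tools
f=/work/y01/y01/user/results.dat

echo "${f##*/}"           # strip the directory part: results.dat
echo "${f%.dat}.log"      # swap the extension: /work/y01/y01/user/results.log
echo "${f/results/run42}" # substring substitution: /work/y01/y01/user/run42.dat

# Brace expansion and integer arithmetic
for i in {1..3}; do
  echo $(( i * 8 ))
done
```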
Compilation and the mysticism of Make

"Make" is the standard Unix tool for managing multi-file programs, where the compilation instructions are encoded in a "makefile". Most large packages will come with associated makefiles, so it is important to understand how they work so code can be successfully compiled on new platforms, e.g. when porting a code to ARCHER.

Unfortunately, makefiles have a tendency to become very complicated and attain something of a mystical status. They are handed down from father to son, mother to daughter, supervisor to student, requiring magical incantations and only fully understood by a select few ...

In fact, make is fundamentally a very straightforward tool and most makefiles can be understood with some basic knowledge and a bit of practical experimentation.

In an initial 30-minute presentation, David Henty (EPCC and ARCHER CSE Support) will cover:

  • the difficulties of managing multi-file codes;
  • the fundamentals of make;
  • simple makefiles for compiling Fortran and C codes;
  • common mistakes and misconceptions.
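A simple makefile of the kind the presentation covers might look like the sketch below (compiler flags and file names are illustrative; note that recipe lines must begin with a TAB character, a classic source of mistakes):

```makefile
# Build an executable "prog" from two C source files
CC     = cc
CFLAGS = -O2

OBJ = main.o utils.o

prog: $(OBJ)
	$(CC) $(CFLAGS) -o $@ $(OBJ)

# Pattern rule: how to turn any .c file into a .o file
%.o: %.c
	$(CC) $(CFLAGS) -c $<

.PHONY: clean
clean:
	rm -f prog $(OBJ)
```

Running make rebuilds only the objects whose sources have changed since the last build; make clean removes the generated files.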

The aim of the presentation is to initiate online discussions between users and staff on compilation and makefiles, or any other aspects of the ARCHER service.

Here are PDF copies of the slides and the sample programs as a gzipped tar file.

Virtual Tutorial Video

Feedback
Wednesday July 8th 2015
Flexible, Scalable Mesh and Data Management using PETSc DMPlex [Full Description] webinar Slides

Video to come
Wednesday June 24th 2015
Introduction to Version Control [Full Description] virtual tutorial Slides:
Part 1
Part 2

Videos:
Part 1, Demo
Part 2

Practical

Feedback
June 3rd & 17th 2015
Voxel-based biomechanical engineering with VOX-FE2 Richard Holbrey (University of Hull), Neelofer Banglawala (EPCC)
[Full Description]
webinar Slides

Video

Demo videos:
1, 2, 3.

Feedback
April 29th 2015
Single Node Performance Optimisation - A Computational Scientist's Guide to Computer Architecture
This two-part tutorial provides an introduction to the architecture of processors and memory systems, and is aimed at providing a grounding in these topics for anyone who is involved in optimising the single-node performance of codes. virtual tutorial Videos

Part 1

Part 2

Feedback
April 8th & 15th 2015
Not-so-old Fortran (Harvey Richardson, Cray Inc.) [Full Description] webinar Video

Feedback
Wednesday March 18th 2015
Performance analysis on ARCHER using CrayPAT - a case study Iain Bethune, EPCC virtual tutorial Slides
Video

Feedback
Wednesday March 11th 2015
Experiences porting the Community Earth System Model (CESM) to ARCHER (Dr Gavin J. Pringle, EPCC) [Full Description] webinar Slides

Video

Feedback
Wednesday February 25th 2015
PBS Job Submission An online session open to all where you can put questions to ARCHER staff. All you need is a computer and internet access. This session starts with a talk on how PBS job submission works on ARCHER, which is not as simple as it may first appear, followed by an open Q&A. virtual tutorial Slides Wednesday February 11th 2015
A Pinch of salt in ONETEP's solvent model (Lucian Anton, Scientific Computing Department, STFC, Daresbury Laboratory; Jacek Dziedzic, Chris-Kriton Skylaris, School of Chemistry, University of Southampton) [Full Description] webinar Slides

Video

Feedback
Wednesday January 28th 2015
ARCHER eCSE projects Embedded CSE (eCSE) support provides funding to both the established ARCHER user community and communities new to ARCHER to develop software in a sustainable manner for running on ARCHER. This enables the employment of a researcher or code developer to work specifically on the relevant software to enable new features or improve the performance of the code.
Calls are issued 3 times per year. This tutorial aims to give an overview of what the eCSE is and to provide examples of the kind of work which is supported. It also gives a short description of the submission and review process.
virtual tutorial Wednesday January 7th 2015
An Introduction to GPU Programming with CUDA Graphics Processing Units (GPUs) offer significant performance advantages over traditional CPUs for many types of scientific computations. However, adaptations are needed to applications in order to effectively use GPUs. During this tutorial, I will present an introduction to GPU programming focusing on the CUDA language which can be used in conjunction with C, C++ or Fortran to program NVIDIA GPUs. After this presentation, there will be opportunities for questions and discussion. virtual tutorial Video

Feedback
Wednesday December 10th 2014
Introduction to Version Control [Full Description] virtual tutorial Slides

Terminal Logs:
SVN A
SVN B
Git

Video

Feedback
Wednesday November 12th 2014
Hybridisation: Adding OpenMP to MPI for the plasma simulation code GS2 (Adrian Jackson, EPCC) [Full Description] webinar Slides

Video

Feedback
Wednesday October 29th 2014
Parallel IO and the ARCHER Filesystem

File input and output often become a severe bottleneck when parallel applications are scaled up to large numbers of processors. In this tutorial, I will outline the methods commonly used for parallel IO and discuss how well each of these is likely to perform on ARCHER. The work space on ARCHER uses the parallel Lustre file system, and I will explain how the design and configuration of Lustre affects IO performance. The tutorial will end with a brief overview of ways to measure the IO behaviour of your own applications, and what steps to take if the performance is not as good as expected.

The main aim of the 45-minute presentation is to initiate online discussions between users and staff on parallel IO, or any other aspects of the ARCHER service.

virtual tutorial Slides

Video

Feedback
Wednesday October 8th 2014
Abstraction of Lattice Based Parallelism with Portable Performance (Alan Gray, EPCC) [Full Description] webinar Video

Feedback
Paper
Wednesday September 24th 2014
Tools for Building and Submitting an eCSE Proposal This virtual tutorial follows on from last week's session, which provided an overview of the embedded CSE (eCSE) programme and summarised the kind of work supported. Firstly, this tutorial will provide a practical overview of the on-line submission system. Secondly, the tutorial will look at the type of supporting performance data required for a successful eCSE submission and discuss the profiling and performance monitoring tools available to obtain this data. eCSE calls are issued 3 times per year. virtual tutorial Slides

Video

Feedback
Wednesday September 3rd 2014
ARCHER eCSE Tutorial Embedded CSE (eCSE) support provides funding to the ARCHER user community to develop software in a sustainable manner for running on ARCHER. This enables the employment of a researcher or code developer to work specifically on the relevant software to enable new features or improve the performance of the code.
Calls are issued 3 times per year. This tutorial aims to give an overview of what the eCSE is and to provide examples of the kind of work which is supported. We will also give a short description of the submission and review process.

[Full Description]
virtual tutorial Slides

Video

Feedback
Wednesday August 27th 2014
Python for High Performance Computing

The Python language is becoming very popular in all areas of Scientific Computing due to its flexibility and the availability of many useful libraries. It is increasingly being used for HPC, with typical use cases being:

  • for pre- and post-processing of simulation data;
  • to coordinate or glue together existing pieces of code written, for example, in Fortran or C;
  • for entire applications, possibly parallelised using mpi4py.

In this short tutorial we give a brief overview of the aspects of Python most relevant to HPC, and cover the pros and cons of these typical use cases. The goal is to give guidance on how to utilise the many attractive features of Python without sacrificing too much performance.
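As a small illustration of the performance trade-off discussed here, the sketch below compares a pure-Python loop with the equivalent vectorised NumPy call (NumPy is assumed to be available; the function names are purely illustrative):

```python
import numpy as np

def dot_loop(a, b):
    """Pure-Python dot product: flexible and readable, but slow at scale
    because every element is handled by the interpreter."""
    total = 0.0
    for x, y in zip(a, b):
        total += x * y
    return total

def dot_numpy(a, b):
    """The same operation dispatched to NumPy's compiled routines."""
    return float(np.dot(a, b))

n = 1000
a = np.arange(n, dtype=float)
b = np.ones(n)

# Both give the identical result; the NumPy version is typically
# orders of magnitude faster for large arrays.
assert dot_loop(a, b) == dot_numpy(a, b) == 499500.0
```

The general pattern, pushing inner loops down into compiled libraries, is how Python codes usually retain acceptable HPC performance.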

The main aim of this presentation was to initiate online discussions between users and staff on using Python for HPC, or any other aspects of the ARCHER service.

virtual tutorial Slides

Video

Feedback
Wednesday August 13th 2014
Experiences of porting real codes to Intel's Xeon Phi (Iain Bethune, EPCC) [Full Description] webinar (pdf)
Video

Feedback
Wednesday July 30th 2014
Make and Compilation Issues

"Make" is the standard Unix tool for managing multi-file programs, where the compilation instructions are encoded in a "makefile". Most large packages will come with associated makefiles, so it is important to understand how they work so code can be successfully compiled on new platforms, e.g. when porting a code to ARCHER.

Unfortunately, makefiles have a tendency to become very complicated and attain something of a mystical status. They are handed down from father to son, mother to daughter, supervisor to student, requiring magical incantations and only fully understood by a select few ...

In fact, make is fundamentally a very straightforward tool and most makefiles can be understood with some basic knowledge and a bit of practical experimentation.

In an initial 30-minute presentation, David Henty (EPCC and ARCHER CSE Support) will cover:

  • the difficulties of managing multi-file codes;
  • the fundamentals of make;
  • simple makefiles for compiling Fortran and C codes;
  • common mistakes and misconceptions.

The aim of the presentation is to initiate online discussions between users and staff on compilation and makefiles, or any other aspects of the ARCHER service.
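To make the fundamentals concrete, a minimal makefile of the kind the presentation covers might look like this, for a hypothetical two-file C code (the file and target names, and the use of the `cc` compiler wrapper, are illustrative):

```make
# Compiler and flags (on ARCHER the Cray 'cc' wrapper would typically be used)
CC     = cc
CFLAGS = -O2

OBJS   = main.o utils.o

# Link the final executable from the object files
myprog: $(OBJS)
	$(CC) $(CFLAGS) -o myprog $(OBJS)

# Pattern rule: rebuild a .o whenever its .c (or the shared header) changes
%.o: %.c utils.h
	$(CC) $(CFLAGS) -c $<

# Remove build products; note that recipe lines must begin with a tab
clean:
	rm -f myprog $(OBJS)
```

Running `make` then rebuilds only the targets whose dependencies have changed, which is the essential point of the tool.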

virtual tutorial slides and examples

Video

Feedback
Wednesday July 9th 2014
Optimising MPI Rank Placement at Runtime (Tom Edwards, Cray Inc.) [Full Description] virtual tutorial Video

Feedback
Wednesday June 25th 2014
A Very Brief Introduction to Parallel Programming Models In this short presentation Andy Turner (EPCC and ARCHER CSE Support) provides a brief outline of the two different parallel programming models in common use on ARCHER: shared memory (through OpenMP) and message passing (through MPI), and an analysis of their strengths and weaknesses. This requires an understanding of how an operating system works with processes and threads, so we provide a high-level overview of these concepts. virtual tutorial Video

Feedback
Wednesday June 11th 2014
The OpenSHMEM Single-sided PGAS Communications Library (David Henty, EPCC) [Full Description] webinar (pdf) Wednesday May 28th 2014
ARCHER Filesystems The ARCHER system has a number of different filesystems and data storage hardware associated with it. Understanding the purposes and configurations of the different filesystems, the disk quotas, and how disk quotas are managed in SAFE helps you make proper use of ARCHER and ensures that your jobs do not fail and that you do not lose valuable data. A short presentation at the start of this tutorial, given by Adrian Jackson (EPCC and ARCHER CSE Support), addresses the following:
  • what filesystem hardware do we have on ARCHER?
  • which filesystems are backed up?
  • what is the RDF?
  • which filesystems can be accessed from production jobs?
  • how do you work out how much disk space you are using?
  • how are disk quotas managed in projects?
The aim of the presentation was to initiate online discussions between users and staff on the ARCHER filesystems or any other aspect of the ARCHER service.
virtual tutorial Slides
Recording
Video

Feedback
Wednesday May 14th 2014
Parallel IO in Code_Saturne (Charles Moulinec, STFC Daresbury Laboratory) [Full Description] webinar (pdf)
Video

Feedback
Wednesday April 30th 2014
PBS Job Submission Slides
Recording
Video

Feedback
virtual tutorial Wednesday April 9th 2014
Vectorisation and optimising for processor architecture (Peter Boyle, University of Edinburgh) webinar (pdf) Wednesday March 26th 2014
SAFE virtual tutorial Wednesday March 12th 2014
Python for HPC: Tools and performance (Nick Johnson, EPCC) webinar (pdf)
Meeting Notes
Wednesday February 26th 2014
OpenMP Tips, Tricks and Gotchas: Experiences from programming and teaching OpenMP (Mark Bull, EPCC) webinar (pdf)
Video

Feedback
Wednesday January 29th 2014
Introduction to ARCHER and Cray MPI virtual tutorial Wednesday January 22nd 2014
ARCHER Tips and Tricks: A few notes from the CSE Team (Andy Turner, EPCC) webinar (pptx|pdf)
Video

Feedback
November 26th 2013