GPU Programming with CUDA

Dates: 11-12 October 2016

Location: University College London

Timetable

Tuesday 11th October 2016

  • 9.00 Arrival
  • 9.30 Introduction
  • 9.45 GPU Architecture
  • 10.15 Programming GPUs with CUDA
  • 11.00 Break
  • 11.30 Practical Session 1: Your first CUDA code
    • Click here (not available before session starts)
    • Enter the password and log in
    • Navigate to YourName->intro and open main_notebook.ipynb
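As a taste of what the first practical covers, a minimal CUDA program might look like the following. This is a sketch only: the array size, kernel name, and launch configuration are illustrative choices, not taken from the course notebooks.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Each thread adds one pair of elements: the classic "first CUDA code".
__global__ void vector_add(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)                      // guard against the final partial block
        c[i] = a[i] + b[i];
}

int main(void)
{
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);

    float *a, *b, *c;
    cudaMallocManaged(&a, bytes);   // unified memory: visible to host and device
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);

    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    int threads = 256;
    int blocks = (n + threads - 1) / threads;   // round up to cover all elements
    vector_add<<<blocks, threads>>>(a, b, c, n);
    cudaDeviceSynchronize();        // wait for the kernel before reading c

    printf("c[0] = %f\n", c[0]);

    cudaFree(a);
    cudaFree(b);
    cudaFree(c);
    return 0;
}
```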
  • 12.30 Lunch
  • 13.30 GPU Optimisation
  • 14.00 Practical Session 2: Optimising a CUDA Application
    • Click here
    • Enter the password and log in
    • Navigate to YourName->reconstruct and open main_notebook.ipynb
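CUDA optimisation work of the kind this session covers often centres on memory access patterns. Purely as an illustrative example (not the course's reconstruct application), a matrix transpose shows the idea: staging a tile in shared memory turns strided global-memory accesses into coalesced ones.

```cuda
#define TILE 32

// Naive transpose: the writes to "out" are strided across threads,
// so they are uncoalesced and waste memory bandwidth.
__global__ void transpose_naive(const float *in, float *out, int n)
{
    int x = blockIdx.x * TILE + threadIdx.x;
    int y = blockIdx.y * TILE + threadIdx.y;
    if (x < n && y < n)
        out[x * n + y] = in[y * n + x];
}

// Tiled transpose: stage a tile in shared memory so that both the global
// read and the global write are coalesced. The +1 padding on the second
// dimension avoids shared-memory bank conflicts.
__global__ void transpose_tiled(const float *in, float *out, int n)
{
    __shared__ float tile[TILE][TILE + 1];

    int x = blockIdx.x * TILE + threadIdx.x;
    int y = blockIdx.y * TILE + threadIdx.y;
    if (x < n && y < n)
        tile[threadIdx.y][threadIdx.x] = in[y * n + x];

    __syncthreads();                 // all loads finish before any stores

    // Swap the block indices so the write is row-contiguous in "out".
    x = blockIdx.y * TILE + threadIdx.x;
    y = blockIdx.x * TILE + threadIdx.y;
    if (x < n && y < n)
        out[y * n + x] = tile[threadIdx.x][threadIdx.y];
}
```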
  • 15.00 Break
  • 15.30 Practical Session 2 (cont)
  • 16.00 Optional Lecture: Deep Learning with GPUs (a recording of a previous version of this lecture is available here)
  • 16.45 Close

Wednesday 12th October 2016

10:00 Seminar: A Lightweight Approach to Performance Portability with targetDP

(followed by discussion of the seminar and/or the previous day's training)

Alan Gray, EPCC, The University of Edinburgh

See preprint

Leading HPC systems achieve their status through use of highly parallel devices such as NVIDIA GPUs or Intel Xeon Phi many-core CPUs. The concept of performance portability across such architectures, as well as traditional CPUs, is vital for the application programmer. In this paper we describe targetDP, a lightweight abstraction layer which allows grid-based applications to target data parallel hardware in a platform agnostic manner. We demonstrate the effectiveness of our pragmatic approach by presenting performance results for a complex fluid application (with which the model was co-designed), plus a separate lattice QCD particle physics code. For each application, a single source code base is seen to achieve portable performance, as assessed within the context of the Roofline model. TargetDP can be combined with MPI to allow use on systems containing multiple nodes: we demonstrate this through provision of scaling results on traditional and GPU-accelerated large scale supercomputers.
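TargetDP's actual syntax is described in the preprint linked above. Purely to illustrate the general idea of such a lightweight abstraction layer, and not targetDP's real API, a macro-based scheme can map a single data-parallel kernel source onto either a CUDA kernel or a plain host loop:

```cuda
// Illustrative only -- NOT targetDP's real interface. One kernel source
// compiles either for the GPU (nvcc defines __CUDACC__) or as a serial
// host loop, keeping the application code platform-agnostic.
#ifdef __CUDACC__
  #define KERNEL         __global__
  #define FOR_EACH(i, n) int i = blockIdx.x * blockDim.x + threadIdx.x; \
                         if (i < (n))
#else
  #define KERNEL
  #define FOR_EACH(i, n) for (int i = 0; i < (n); ++i)
#endif

KERNEL void scale(float *x, float a, int n)
{
    FOR_EACH(i, n) {       // data-parallel site, written once for all targets
        x[i] *= a;
    }
}
```

On the GPU the macro expands to the usual per-thread index calculation; on the host it becomes an ordinary loop that a compiler can vectorise, which is the essence of targeting data-parallel hardware from a single source.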