A two-day deep learning for GPUs training course will take place at Sheffield on the 26th/27th July. Places will be extremely limited. Sign up on the GPUComputing@Sheffield Training page.
GPU Training is advertised first through the GPUComputing@Sheffield mailing list!
On Monday the 25th April (next Monday) I will be giving a two-hour lecture on Performance Optimisation for GPU computing in CUDA. The lecture is part of my 4th year undergraduate course on GPU computing, but any CUDA developers who are keen on optimising their code are welcome to attend.
The lecture is unlikely to benefit you if you are only just starting out in CUDA programming: it covers some advanced topics explaining the performance limitations of code and is aimed at people developing in CUDA at the intermediate level. If you are interested in attending, the lecture will be in Diamond LT-09 from 15:00. Please drop me an email (email@example.com) to let me know if you plan to attend.
As part of GPUComputing@Sheffield we will be hosting a seminar series for sharing practical issues relating to GPU computing and HPC. The first of these will take place on the 22nd of July at 14:30 in the Information Commons, IC-126. The invited speaker is Dr Alan Gray from EPCC.
Alan's bio and the abstract for the talk are below. The talk's topic of performance portability will give some great insight into scaling HPC applications to multi-GPU environments and how to write code that can easily be targeted at both CPU and GPU environments.
Bio: Alan's research career began in the area of theoretical physics: his Ph.D. thesis was awarded the UK-wide Ogden Prize in 2004 for the best thesis in particle physics phenomenology. He continued this work under a University Fellowship at The Ohio State University, before moving to EPCC in 2005. His current research focuses on the exploitation of GPUs to the benefit of real scientific and industrial applications: he has a particular interest in the programming of large-scale GPU-accelerated supercomputers. He was awarded the status of CUDA Fellow in 2014. Alan leads EPCC's GPU-related activities, and is involved in management, teaching and supervision for the EPCC MSc in High Performance Computing.
Talk Abstract: Many fluid dynamics problems are made tractable through the discretisation of space and time, to allow representation and evolution within a computer simulation. The continued rise in performance of the largest supercomputers has permitted increasingly complex and realistic models. But increases in complexity within the computational architectures themselves, such as the reliance on multiple levels of hierarchical parallelism coupled with non-uniform and distributed memory spaces, pose a tremendous challenge for programmers. The emergence of powerful accelerators such as Graphics Processing Units (GPUs) has further increased diversity. Applications must intelligently map to the hardware whilst retaining intuitiveness and portability. We will describe our efforts to manage such issues in relation to a particular application, Ludwig. We believe that our experiences, techniques and software components may be of interest more widely.
Ludwig is a versatile package which can simulate a wide range of complex fluids using lattice Boltzmann and finite difference techniques. A current research focus involves combining liquid crystals with colloidal particles to create substances with potentially interesting optical properties: these simulations are extremely computationally demanding due to the range of scales involved. We will first describe a multi-GPU implementation, and present results showing excellent scaling to thousands of GPUs in parallel on the Titan supercomputer at Oak Ridge National Laboratory. We will then go on to describe our work to re-develop Ludwig using our new domain specific abstraction layer, targetDP, which targets data parallel hardware in a platform agnostic manner, by abstracting the memory spaces and the hierarchy of task, thread and instruction levels of parallelism. We will present performance results using targetDP for Ludwig, where the same source code is targeted at both GPU-accelerated and traditional CPU-based architectures. These demonstrate both performance portability and also the benefit gained through the intelligent exposure of the lattice-based parallelism to this hierarchy.
Statement from NVIDIA.
"At GTC 2015, we highlighted the incredible contributions our academic partners make in every area of GPU computing: graphics, virtualization, high performance computing, data sciences to just name a few. To that end we would like to broaden the scope of our partnerships to recognize your growing contributions by changing the program name from "CUDA" to "GPU"."
As such, the University of Sheffield will now be referred to as a GPU Research Center.
The aim of the course is to provide a basic understanding of the principles of CUDA GPU programming and of GPU programming with directives using OpenACC. Prior knowledge of CUDA or parallel programming is not required. Previous knowledge of C/C++ is required in order to get the most out of the course; familiarity with concepts such as pointers, arrays and functions is essential. The course consists of approximately 3 hours of lectures and 4 hours of practical training each day.
Registration is required using the link below:
A maximum of 30 participants is allowed on a first-come, first-served basis.
Date: Tuesday 19th and Wednesday 20th May 2015
Time: 9.30-17.30 (Tue), 9.30-17.15 (Wed)
Venue: Hadfield Building, HB-E61; Pool Computer Room E61
Useful links for the course
Shared Notepad (etherpad)
Short link to shared notepad: http://mzl.la/1ecFPKw
There will be a one-day "Introduction to CUDA" training course held at the University of Sheffield on May 5th 2015. The course will be delivered by Dr Paul Richmond (Comp Sci) and Dr Mike Griffiths (CICS), supported by EPCC. The signup form is managed by ARCHER; details are on the discussion group.
Unfortunately, the upcoming GPU programming with CUDA course is oversubscribed and places have been allocated on a first-come, first-served basis.
We intend to run this course again, so to receive priority access and information on further GPU training and events, please subscribe to the Sheffield GPU computing Google group (https://groups.google.com/a/sheffield.ac.uk/forum/?hl=en#!forum/gpucomputing) and ensure that you have opted to receive email notifications.