Graphics Processing Units (GPUs) are processors built to render graphics on the user's screen, and, driven by heavy demand from the gaming industry, they have advanced considerably in the rate at which they can perform floating point operations (FLOPS). Since computational modelling typically requires a great many such operations, GPU acceleration is becoming an important part of scientific computing, as it can reduce the runtime of programs dramatically.
Anthony Morse gave a comprehensive three-day workshop at the NGCM Summer Academy, spending the first two days on CUDA, the language used to program NVIDIA GPUs for scientific computing. Even with very basic training, attendees were able to achieve speed-ups of a factor of several hundred, possibly even a thousand, in certain simple codes. The other method of programming GPUs covered during the three days was the more general OpenACC. It applies to all graphics cards and is very similar in style to the OpenMP approach to multi-threading. After three days of learning these techniques, it is clear that there is a great deal to be gained from such a powerful computational tool.
An NVIDIA Tesla Graphics Processing Unit