GPU acceleration

A large computing rack containing numerous GPUs
© The University of Nottingham

A brief overview of GPU acceleration, CUDA, and their use in PyTorch and Colab.

Although to an extent we don't need to worry too much about GPU acceleration with CUDA (we will simply use it), it's worth giving a brief overview of what it is and how we will use it in Colab.

What is GPU acceleration?

Generally, most computing tasks are done using the CPU (Central Processing Unit) on the computer's motherboard, which is (as the name suggests) the central chip that controls everything that happens in the computer, from running the operating system, to running applications, to running any machine learning algorithms you may have written. Though CPUs usually have multiple cores, a typical program will only perform its calculations one at a time, sequentially. For large deep learning networks operating on large datasets, particularly image data, the many millions of calculations required soon become very time-consuming.

Graphics Processing Units, or GPUs, on the other hand, are able to process many calculations involving large amounts of data in parallel, meaning lots of calculations can be done at the same time, vastly improving the speed of some programs, such as the training of deep learning networks. Many high-spec PCs have these high-performance GPUs in their graphics cards.

Though originally designed for applications making heavy use of 3D graphics, such as games and animation, additional software allows programmers to harness the extra processing power of GPUs in graphics cards for many other tasks. One such piece of software is CUDA (short for Compute Unified Device Architecture), which enables GPU acceleration on machines with NVIDIA graphics cards and is compatible with PyTorch.
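As a rough illustration of the kind of speed-up this can give, the following sketch (assuming PyTorch is installed and a CUDA-capable GPU is available; the matrix size is just an example) times a large matrix multiplication on the CPU and then on the GPU:

import time
import torch

# Two large random matrices, created on the CPU
a = torch.randn(4000, 4000)
b = torch.randn(4000, 4000)

# Time the multiplication on the CPU
start = time.time()
c_cpu = a @ b
print("CPU time:", time.time() - start)

# Copy the matrices to the GPU and time the same multiplication there
a_gpu, b_gpu = a.cuda(), b.cuda()
torch.cuda.synchronize()   # make sure the copies have finished
start = time.time()
c_gpu = a_gpu @ b_gpu
torch.cuda.synchronize()   # wait for the GPU to finish before stopping the clock
print("GPU time:", time.time() - start)

The torch.cuda.synchronize() calls matter because GPU operations are launched asynchronously; without them the timing would only measure how long it takes to queue the work, not to complete it.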

PyTorch, CUDA, and Colab

Since we will be using Colab for this course, we don't need to worry too much about hardware configuration to make use of GPU acceleration with CUDA, provided we take one additional step in the settings of the Colab notebook. This is done by selecting "Change runtime type" from the "Runtime" menu at the top of the Colab window, and in the dropdown box under "Hardware accelerator" selecting "GPU" rather than "CPU". After that, the CUDA functionality should be available in PyTorch.
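A quick way to confirm this is to run a few standard PyTorch calls in a notebook cell, along the lines of this minimal sketch:

import torch

# Check whether a CUDA-capable GPU is visible to PyTorch
print("CUDA available:", torch.cuda.is_available())

if torch.cuda.is_available():
    # Report basic details of the GPU Colab has assigned to this runtime
    print("Number of GPUs:", torch.cuda.device_count())
    print("GPU name:", torch.cuda.get_device_name(0))

If this prints False, check that the runtime type was changed and that the notebook has reconnected to the new runtime.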

You can try this out for yourself by following the link below.

Colab CUDA example

This Colab notebook shows you how to check whether GPU acceleration using CUDA is available, prints out some technical information, and demonstrates some basic PyTorch CUDA commands.
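For reference, the basic pattern such commands follow usually looks something like the sketch below (the tensor names and sizes are illustrative, not taken from the notebook):

import torch

# Use the GPU if one is available, otherwise fall back to the CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Create a tensor on the CPU, then move it to the chosen device
x = torch.randn(1000, 1000)
x = x.to(device)

# Operations on tensors that live on the GPU are carried out on the GPU
y = x @ x

# Move the result back to the CPU, e.g. before converting to NumPy
y_cpu = y.cpu()
print(y.device, y_cpu.device)

The same .to(device) pattern is used to move a whole model onto the GPU before training, so code written this way runs unchanged whether or not a GPU is available.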

This article is from the free online course Deep Learning for Bioscientists.
