
Practical: Creating layers and using activation functions in PyTorch

A link to a Colab notebook demonstrating how to make convolutional and max pooling layers, and how to use activation functions, in PyTorch.
[Figure: An RGB colour image of a cat, leading to the same image split into the three colour channels, leading to eight convolved versions of the same image, via a convolutional layer.]
© The University of Nottingham
After tensors, the next key components we need to make our own CNNs are layers and activation functions.

In the linked Colab notebook we look at the following aspects of using PyTorch:

  • how to make convolutional layers, and set the number of input and output channels
  • how to set the padding and stride in convolutional layers
  • how to make max pooling layers
  • how to make dropout layers
  • how to make fully connected layers
  • how to use activation functions, in particular ReLU
  • the softmax and log softmax functions
  • loss functions, in particular cross entropy.
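As a taste of what the notebook covers, here is a minimal sketch of the layer types listed above. The tensor shapes and layer sizes (a batch of 4 RGB images at 32×32 pixels, 8 output channels, 10 output classes) are illustrative assumptions, not values from the notebook:

```python
import torch
import torch.nn as nn

# Hypothetical input: a batch of 4 RGB images, 32x32 pixels
x = torch.randn(4, 3, 32, 32)

# Convolutional layer: 3 input channels (RGB), 8 output channels;
# a 3x3 kernel with padding=1 and stride=1 keeps the spatial size at 32x32
conv = nn.Conv2d(in_channels=3, out_channels=8, kernel_size=3, padding=1, stride=1)
print(conv(x).shape)  # torch.Size([4, 8, 32, 32])

# Max pooling layer: a 2x2 window halves the spatial dimensions
pool = nn.MaxPool2d(kernel_size=2)
print(pool(conv(x)).shape)  # torch.Size([4, 8, 16, 16])

# Dropout layer: randomly zeroes activations during training
drop = nn.Dropout(p=0.5)

# Fully connected (linear) layer: flatten the feature maps first,
# then map the 8 * 16 * 16 features to 10 outputs
fc = nn.Linear(8 * 16 * 16, 10)
out = fc(torch.flatten(drop(pool(conv(x))), start_dim=1))
print(out.shape)  # torch.Size([4, 10])
```

Note how padding and stride control the output size of the convolution, while pooling deliberately shrinks it.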
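The activation functions and losses in the list can be sketched in the same way. The batch size, class count, and labels below are made-up values for illustration:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical raw network outputs (logits) for 4 examples and 10 classes
logits = torch.randn(4, 10)

# ReLU: zeroes negative values, leaves positive values unchanged
relu_out = F.relu(logits)

# Softmax turns logits into probabilities that sum to 1 per row;
# log_softmax is its numerically more stable logarithm
probs = F.softmax(logits, dim=1)
log_probs = F.log_softmax(logits, dim=1)
print(probs.sum(dim=1))  # each row sums to 1

# Cross-entropy loss takes raw logits and integer class labels;
# it applies log_softmax internally, so don't apply softmax yourself
labels = torch.tensor([0, 3, 1, 7])
loss = nn.CrossEntropyLoss()(logits, labels)

# Equivalent: negative log likelihood on the log-softmax output
loss_manual = F.nll_loss(log_probs, labels)
print(torch.isclose(loss, loss_manual))  # tensor(True)
```

A common pitfall, covered in the notebook, is double-applying softmax: `nn.CrossEntropyLoss` already includes it.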

Follow the link below and work through the Colab notebook as before.

Layers in PyTorch

You may wish to open the link in a new browser tab so you can refer back here quickly.

Please leave any questions or comments below.

This article is from the free online course

Deep Learning for Bioscientists

Created by
FutureLearn - Learning For Life
