
Implementing gradient descent in Python

In this video, you will learn how to implement gradient descent with TensorFlow.

Automatic differentiation is one of the core features of deep learning libraries such as TensorFlow and PyTorch. To train a neural network, the library records the values computed at every neuron during the forward pass; in the backward pass, these recorded values allow the gradients to be computed and the model parameters to be updated recursively.
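As a rough illustration of this record-and-replay idea, the sketch below uses TensorFlow's tf.GradientTape to record a forward pass and then query a gradient in the backward pass. The function and values are made up for the example and are not taken from the video.

```python
# A minimal sketch of automatic differentiation with tf.GradientTape
# (illustrative only; not the exact code shown in the video).
import tensorflow as tf

x = tf.Variable(3.0)

with tf.GradientTape() as tape:
    # Forward pass: TensorFlow records the operations applied to x.
    y = x ** 2 + 2.0 * x

# Backward pass: the tape replays the recorded operations to compute dy/dx.
dy_dx = tape.gradient(y, x)
print(dy_dx.numpy())  # dy/dx = 2*x + 2 = 8.0 at x = 3.0
```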

In this video, we show how to write the gradient descent algorithm from scratch in TensorFlow and use it to fit a linear model to a constructed dataset. At the end, we introduce the high-level API provided by TensorFlow that packs this procedure concisely.
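For reference, a minimal sketch of such a from-scratch fit is given below; the synthetic data, variable names, and hyperparameters are illustrative assumptions rather than the exact code used in the video.

```python
# Gradient descent from scratch on a synthetic linear dataset (a sketch).
import tensorflow as tf

# Construct a dataset: y = 3x + 2 plus a little Gaussian noise (made up for illustration).
x = tf.random.normal([200, 1])
y = 3.0 * x + 2.0 + 0.1 * tf.random.normal([200, 1])

# Model parameters to be learned.
w = tf.Variable(tf.random.normal([1, 1]))
b = tf.Variable(tf.zeros([1]))

learning_rate = 0.1
for step in range(200):
    with tf.GradientTape() as tape:
        y_pred = tf.matmul(x, w) + b               # forward pass
        loss = tf.reduce_mean((y_pred - y) ** 2)   # mean squared error
    # Backward pass: gradients of the loss with respect to the parameters.
    grad_w, grad_b = tape.gradient(loss, [w, b])
    # Gradient descent update.
    w.assign_sub(learning_rate * grad_w)
    b.assign_sub(learning_rate * grad_b)

print(w.numpy(), b.numpy())  # should be close to 3 and 2
```

The same fit can be expressed with TensorFlow's high-level Keras API, which packages the loop, the loss, and the parameter updates; again, this is a sketch rather than the exact code from the video.

```python
# The same linear fit using the high-level Keras API (a sketch).
model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.1), loss="mse")
model.fit(x, y, epochs=50, verbose=0)
```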

This article is from the free online course An Introduction to Machine Learning in Quantitative Finance.
