Trusting results

In this video, we think about how to know if we can trust a machine learning system

Do we believe the models we have trained?

What does an accuracy score really relate to in the real world?

When you’ve built a machine learning system, you have some confidence about how well it’s working: you’ve monitored your training, perhaps used a validation set, and you’ve tested it on (hopefully) unseen, realistic data. But can you really trust those results?

Some useful concepts for this video:

True positive (TP): a correct prediction of the presence of something, e.g. disease is predicted, and there is disease in reality.

False positive (FP): an incorrect prediction of the presence of something, e.g. disease is predicted, but there is no disease in reality.

True negative (TN): a correct prediction of the absence of something, e.g. no disease is predicted, and there is no disease in reality.

False negative (FN): an incorrect prediction of the absence of something, e.g. no disease is predicted, but there is disease in reality.
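The four counts above can be tallied directly from a classifier’s predictions. As a minimal sketch (the labels below are made-up toy data, not from the course, with 1 meaning “disease” and 0 meaning “no disease”):

```python
# Toy ground-truth labels and model predictions (illustrative only).
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 1, 0, 1, 0]

# Tally each cell of the confusion matrix by comparing pairs.
tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # correctly predicted disease
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # predicted disease, none in reality
tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)  # correctly predicted no disease
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # missed a real case of disease

print(tp, fp, tn, fn)  # 3 1 3 1
```

Note that plain accuracy, (TP + TN) / total, hides the difference between the two kinds of error, which is why the measures below are often more informative.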

“Precision” and “recall” combine the above measures in different ways.

Precision: TP / (TP + FP)

Recall: TP / (TP + FN)

You may also encounter the “F1” score as a metric: the harmonic mean of precision and recall, which balances the two in a single number.

This article is from the free online course Experimental Design for Machine Learning.

Created by
FutureLearn - Learning For Life
