
Leveraging visual representations to understand AI behavior

By leveraging visual representations, black-box AI behaviour can be demystified: how the AI makes automated decisions, whether it relies on unnecessary or insignificant data, and which biases persist in that data.

A heatmap of three images: lions, a dog, and a flamingo.


A heatmap (or saliency map) is a popular visual representation for displaying user attention data collected with eye trackers during traditional usability testing.

Rather than showing what users are actually paying attention to in your AI products, heatmap-like visual representations can be used to exhibit the AI's own decision patterns. As illustrated in the image above, the regions marked in red are the most critical areas for recognising an object (here, a lion, dog, or flamingo).

The heatmap indicates whether the data are sufficiently helpful for learning and whether unnecessary elements are being used to make predictions. Visual representations can therefore also reveal the points at which human involvement is required to make better decisions, and the measures needed to secure the reliability of AI behaviour.
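One simple way such heatmaps can be produced, without access to a model's internals, is occlusion-based saliency: slide a blanking patch over the input image and record how much the model's score drops at each location. Regions whose occlusion hurts the score most are the ones the model relies on. Below is a minimal sketch of this idea in NumPy; the function names (`occlusion_saliency`, `toy_score`) and the toy "model" that scores an image by the brightness of its centre region are illustrative assumptions, not part of the course material.

```python
import numpy as np

def occlusion_saliency(image, score_fn, patch=2):
    """Occlusion-based saliency sketch: zero out one patch at a time
    and record how much the model's score drops for that region."""
    base = score_fn(image)
    h, w = image.shape
    heat = np.zeros_like(image, dtype=float)
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0.0  # blank this patch
            heat[i:i + patch, j:j + patch] = base - score_fn(occluded)
    return heat

# Hypothetical toy "model": scores an image by the brightness of its
# centre 4x4 region, so the saliency map should highlight the centre.
def toy_score(img):
    return img[2:6, 2:6].sum()

img = np.ones((8, 8))
heat = occlusion_saliency(img, toy_score, patch=2)
# Centre patches show a large score drop; corner patches show none.
```

A real system would use the class score of a trained classifier as `score_fn`; the heatmap would then show which image regions, such as a lion's face, drive the prediction.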

This article is from the free online

Designing Human-Centred AI Products and Services

Created by
FutureLearn - Learning For Life
