
Designing ethical products and experiences

How should we design conversational interfaces? Georgina Bourke, from [Projects by IF](https://www.projectsbyif.com/), considers key ethical principles.

Conversational interfaces (CIs) are designed to simulate human conversation. But it’s important that people know these interfaces aren’t human, and that they are limited in what they can do and how they can respond. Georgina Bourke is a designer from Projects by IF. Here she explains that to earn users’ trust, designers should build honesty and transparency into their products throughout.

Everyone who interacts with an automated interface should know they are interacting with a machine, so they understand what to expect from it. Google was criticised when it launched Google Duplex in 2018, an AI voice assistant that sounded so human that people couldn’t tell it was a bot.

The designers had intentionally tried to trick people, adding sounds humans use to signal thinking or listening, like “hmm” and “umm”. It’s unethical to make people think they’re speaking to a person, as they won’t be able to interact with a machine in the same way they would socially. The huge reaction meant Google was forced to halt the product and re-release it a few months later, with a voice assistant that was more explicit about the fact that it was a machine.

Let’s now take a look at some of the principles that should be employed when designing and building a conversational interface:

Be honest about what the interface can do and ensure support is always available

Errors are inevitable with automated systems. Without visual screens to explain mistakes, it’s harder for users to understand what’s happened and to know how to recover or access support. Designers must think about ways of ‘failing gracefully’ to create conversational interfaces that people understand and trust. This might mean being transparent about the limitations of the interface, as Arnita Saini describes, or making sure there is always a way to ‘speak to a human’. This is especially important when designing for vulnerable people with more complex or urgent needs. In these cases, consider if and when it’s more appropriate for people to speak to a human rather than an automated assistant.
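To make the idea of ‘failing gracefully’ concrete, here is a minimal sketch of how a chatbot might respond when it isn’t confident it has understood, or when the topic is too sensitive for a machine. The intent names, confidence threshold and wording are assumptions for illustration only, not part of any real product or of IF’s work.

```python
# Hypothetical sketch of a "fail gracefully" fallback for a chatbot.
# The intent names, confidence threshold and messages are assumptions.

from dataclasses import dataclass


@dataclass
class Understanding:
    intent: str        # e.g. "book_appointment", from an assumed intent classifier
    confidence: float  # 0.0 to 1.0

CONFIDENCE_THRESHOLD = 0.6                                   # below this, don't pretend to understand
SENSITIVE_INTENTS = {"report_crisis", "medical_emergency"}   # always escalate these


def respond(understanding: Understanding) -> str:
    """Choose a reply that is honest about the system's limitations."""
    if understanding.intent in SENSITIVE_INTENTS:
        # Urgent or complex needs go straight to a person.
        return "I'm going to connect you with a human advisor right away."
    if understanding.confidence < CONFIDENCE_THRESHOLD:
        # Be transparent about the failure and offer a way out.
        return ("I'm an automated assistant and I'm not sure I understood that. "
                "Would you like me to try again, or speak to a person?")
    return f"Okay, I think you want to {understanding.intent.replace('_', ' ')}. Is that right?"


print(respond(Understanding(intent="book_appointment", confidence=0.4)))
```

The key design choice is that low confidence and sensitive topics produce an honest admission and a route to a human, rather than a best guess delivered with false certainty.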

IF collaborated with design studio Comuzi to develop new ways of explaining automated decisions in the context of a fictional mental health chatbot, MoodJar. The example diagram below shows how someone could flag if they felt uncomfortable talking to a CI about a specific topic and how a CI might replay what it has understood so the person using it can check for any mistakes.

Diagram: ethical design of a conversational interface (CC BY 4.0). Person: “I get frustrated. I want to do well but I’m just not confident right now about my exams.” CI: “You said you are frustrated and not confident. Has MoodJar understood this correctly?” Person: “Yes.” The screen on the left shows a CI replaying what it understood from the data input so the person can check for mistakes. The screen on the right shows how someone could flag that they are uncomfortable talking to a CI about a topic.
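The MoodJar screens could translate into a simple ‘replay and confirm’ loop. The sketch below is a speculative illustration of that interaction, not IF or Comuzi’s implementation: the keyword spotting stands in for a real language model, and the wording and topic flag are assumptions.

```python
# Illustrative sketch of a "replay and confirm" step, loosely inspired by the
# MoodJar example. The keyword list, wording and topic flag are assumptions.

UNCOMFORTABLE_TOPICS = set()  # topics the person has asked the CI not to discuss

FEELING_KEYWORDS = ("frustrated", "anxious", "not confident", "stressed")


def extract_feelings(message: str) -> list:
    """Naive keyword spotting standing in for a real language model."""
    return [kw for kw in FEELING_KEYWORDS if kw in message.lower()]


def replay_and_confirm(message: str, topic: str) -> str:
    if topic in UNCOMFORTABLE_TOPICS:
        return "You've asked not to talk about this topic here. Would you rather speak to a person?"
    feelings = extract_feelings(message)
    if not feelings:
        return "I'm not sure I understood. Could you tell me more?"
    # Replay what was understood so the person can check for mistakes.
    return (f"You said you are {' and '.join(feelings)}. "
            "Has this been understood correctly? (yes / no / don't ask about this topic)")


print(replay_and_confirm(
    "I get frustrated. I want to do well but I'm just not confident right now about my exams.",
    topic="exams",
))
```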

Design for multiple people to understand what happens to data

With screen-based interfaces there are common design patterns for how someone might understand what happens with their data and give consent. For example, when you download a new app you might see a list of data it needs access to. With voice interfaces, how will people give permission to share data without these visual prompts?

There’s a limit to how much you can communicate verbally without making it difficult for people to follow and understand. That means terms and conditions and lengthy privacy policies will be even more redundant in voice interfaces than they are on screens. How might you help someone understand data flows with a voice interface without overwhelming them or causing unwanted friction? IF collects ways to communicate data sharing in the Data patterns catalogue. So far we only have patterns for screen-based permissions, but we’ll be adding conversational interface patterns soon.

There is another challenge with communicating data flows, particularly with home assistants like Alexa. As they’re currently used, these devices collect lots of data about multiple people, including family members, other residents and visitors. Designers should think about how to offer choice to each member of the household, allowing them to exercise their rights under the General Data Protection Regulation (GDPR).

Also consider people who don’t own the device but may come into contact with it, for example visiting friends or someone fixing an appliance. These groups also have a right to know what data is collected about them and to give or withdraw consent. IF and the ODI have started research in this area, but there are still lots of unknowns, and this will only become more pressing as people’s homes and lives become more connected.
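One way to think about offering each person their own choices is to keep consent records per individual rather than per device. The sketch below is a speculative data model, not a real smart-speaker or GDPR compliance API; the purpose names and fields are assumptions for illustration.

```python
# Speculative sketch of per-person consent records for a shared voice device.
# Field names, purposes and the "latest record wins" rule are assumptions.

from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class ConsentRecord:
    person: str      # household member, visitor, tradesperson...
    purpose: str     # e.g. "voice_recording_storage"
    granted: bool
    recorded_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


class ConsentRegister:
    """Tracks who has agreed to what, and lets anyone withdraw consent later."""

    def __init__(self):
        self._records = []

    def record(self, person: str, purpose: str, granted: bool) -> None:
        self._records.append(ConsentRecord(person, purpose, granted))

    def is_allowed(self, person: str, purpose: str) -> bool:
        # The most recent decision for this person and purpose wins, so
        # withdrawing consent is just recording a new "False" entry.
        for rec in reversed(self._records):
            if rec.person == person and rec.purpose == purpose:
                return rec.granted
        return False  # no record means no consent


register = ConsentRegister()
register.record("resident_a", "voice_recording_storage", granted=True)
register.record("visitor_b", "voice_recording_storage", granted=False)
print(register.is_allowed("visitor_b", "voice_recording_storage"))  # False
```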

Make sure people can hold the automated system to account

Beyond just designing for the people using the product, think about the support and maintenance systems around it that will need to monitor what’s happening. Product teams need to have oversight of how interfaces respond to different communities of people to check for bias in the system.

A recent study found that voice recognition systems run by Apple, Google and Facebook had higher error rates for Black Americans than for white Americans. New ways of investigating and exposing bias are emerging: Google’s Model Cards, for example, explain how engineers built image recognition models so that researchers and experts can test the system. Governance teams will also need tools to monitor data flows and responses in real time. It’s important to consider how different groups might investigate whether conversational interfaces treat people differently or impact people at scale.
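A simple starting point for this kind of audit is to break error rates down by group rather than reporting a single average. The sketch below is a generic illustration with invented transcripts and group labels; it is not the method used in the study mentioned above.

```python
# Generic sketch of auditing a speech recogniser's error rate by group.
# The transcripts and group labels are invented for illustration only.

from collections import defaultdict

# Each record: (demographic group, reference transcript, system transcript)
results = [
    ("group_a", "turn on the kitchen light", "turn on the kitchen light"),
    ("group_a", "play my morning playlist", "play my morning play list"),
    ("group_b", "turn on the kitchen light", "turn on the chicken light"),
    ("group_b", "play my morning playlist", "playing morning play list"),
]


def word_error_count(reference: str, hypothesis: str) -> int:
    """Levenshtein distance over words: substitutions, insertions, deletions."""
    ref, hyp = reference.split(), hypothesis.split()
    dist = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dist[i][0] = i
    for j in range(len(hyp) + 1):
        dist[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dist[i][j] = min(dist[i - 1][j] + 1,         # deletion
                             dist[i][j - 1] + 1,         # insertion
                             dist[i - 1][j - 1] + cost)  # substitution
    return dist[len(ref)][len(hyp)]


errors, words = defaultdict(int), defaultdict(int)
for group, reference, hypothesis in results:
    errors[group] += word_error_count(reference, hypothesis)
    words[group] += len(reference.split())

for group in sorted(errors):
    print(f"{group}: word error rate = {errors[group] / words[group]:.0%}")
```

Reporting per-group figures like this makes a disparity visible that a single overall error rate would hide.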

Have your say

Think about the devices you have interacted with. Have you ever given a conversational interface permission to use your data? Do you believe that when using a voice device, it is important to have explicit ways of giving consent regarding data before you interact with it?
Have you ever encountered bias, or had an interface make assumptions based on your gender, age or nationality? How would it make you feel if a device consistently did not understand you, or made recommendations based on an assumed profile of your needs?
Share your thoughts below in the comments section.
This article is from the free online course Introduction to Conversational Interfaces, created by FutureLearn.
