Best practices for co-learning from user feedback


1. Communicate with user feedback

Whether feedback is explicit or implicit, users need to know what information is being collected, what it is for, and how its use benefits them.

Once a user gives feedback, acknowledge that it has been received. Whenever possible, find ways to use that feedback to improve your AI.

2. Interpret dual feedback

Sometimes the explicit and implicit feedback do not match, or a single piece of feedback simultaneously contains both implicit and explicit signals. For example, public ‘likes’ are both a way to communicate with others (explicitly) and useful data to tune a recommendations model (implicitly). Feedback like this can be confusing because there isn’t always a clear link between what a user does and what a user wants from the AI model. In these cases, it’s crucial to analyse the root causes and consult with designers, users, and other stakeholders to generate a comprehensive review.
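One way to make this dual nature concrete is to route a single user action into separate explicit and implicit channels before analysis. The sketch below does this for the 'like' example; the event fields and channel names are illustrative assumptions, not part of any particular product's API.

```python
# Sketch: routing one "like" event into both an explicit channel (the user
# deliberately communicated approval) and an implicit channel (the same
# action doubles as training data for a recommendations model).
# Field names ("action", "user", "item") are hypothetical.

def route_feedback(event: dict) -> dict:
    """Split one user action into explicit and implicit signals."""
    signals = {}
    if event.get("action") == "like":
        # Explicit: a deliberate communication to others.
        signals["explicit"] = {
            "user": event["user"],
            "item": event["item"],
            "sentiment": "positive",
        }
        # Implicit: a positive label usable for model tuning.
        signals["implicit"] = {
            "user": event["user"],
            "item": event["item"],
            "label": 1.0,
        }
    return signals

routed = route_feedback({"action": "like", "user": "u1", "item": "song42"})
```

Keeping the two channels separate makes mismatches easier to spot later, since each signal can be reviewed against what the user actually wanted from the AI.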

3. Account for negative impact

As AI moves into higher-stakes applications and use cases, it becomes even more important to plan for and monitor the negative impacts of your product. Following the Google PAIR report, you could set standards like the following for addressing potential adverse outcomes:

  • If users’ average rejection rate of intelligent playlists and routes goes above 20%, we should check our AI model.
  • If over 60% of users download our app and never use it, we should revisit our marketing strategy.
  • If users are opening the app frequently but only completing runs 25% of the time, we’ll talk to users about their experiences and potentially revisit our notification frequency.
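Standards like the three above can be encoded as automated monitoring checks, so breaches surface without anyone re-reading a document. The sketch below is a minimal version of that idea; the metric names, threshold values, and example numbers are assumptions for illustration only.

```python
# Sketch: encoding the example standards above as threshold checks.
# Metric names and values are illustrative, not from a real product.

THRESHOLDS = [
    # (metric, limit, comparison, follow-up action)
    ("playlist_rejection_rate", 0.20, "above", "check the AI model"),
    ("download_never_use_rate", 0.60, "above", "revisit marketing strategy"),
    ("run_completion_rate", 0.25, "below", "talk to users; revisit notifications"),
]

def triggered_actions(metrics: dict) -> list:
    """Return the follow-up actions whose thresholds have been breached."""
    actions = []
    for name, limit, comparison, action in THRESHOLDS:
        value = metrics.get(name)
        if value is None:
            continue  # metric not measured this period
        breached = value > limit if comparison == "above" else value < limit
        if breached:
            actions.append(action)
    return actions

# Example: a 25% rejection rate breaches the 20% limit;
# a 30% completion rate is above the 25% floor, so it passes.
actions = triggered_actions({
    "playlist_rejection_rate": 0.25,
    "run_completion_rate": 0.30,
})
```

The point is less the code than the discipline: each standard pairs a measurable signal with a pre-agreed response, so the team decides what counts as harm before the numbers come in.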

4. Last but not least, prioritise feedback

Instead of uncritically accepting all feedback, run a comprehensive audit with engineers, designers, product counsel, and other stakeholders to review and prioritise which feedback needs to be collected to improve the AI.

When prioritising feedback opportunities to improve the user experience with the AI, ask yourselves:

  • Do all of your user groups benefit from this feedback?
  • How might the level of control the user has (or wants to have) over the AI influence the user’s desire to provide feedback?
  • How will the AI change from this feedback?
This article is from the free online course Designing Human-Centred AI Products and Services, created by FutureLearn.
