

AI Transparency Paradox

Transparency can help mitigate issues of fairness, discrimination, and trust. At the same time, however, disclosures about AI pose risks.

Facebook has started to disclose what factors affect its post rankings in News Feed through the Why Am I Seeing This service.

The information that generally has the most significant influence over the order of posts includes:

  • How often you interact with posts from people, Pages, or Groups.
  • How often you interact with a specific type of post, for example videos, photos, or links.
  • The popularity of the posts shared by the people, Pages, and Groups you follow.

This service helps users understand and control the posts they see. 
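Factors like these are typically combined into a single relevance score per post, and the feed is sorted by that score. The following is a toy sketch of that idea only; the factor names and weights are illustrative assumptions, not Facebook's actual ranking algorithm.

```python
# Toy illustration of combining ranking factors into one score.
# The weights below are arbitrary assumptions for demonstration,
# not Facebook's real News Feed algorithm.

def ranking_score(interaction_freq, media_type_affinity, popularity,
                  w_interact=0.5, w_media=0.3, w_popularity=0.2):
    """Combine normalised factor values (0-1) into a single relevance score."""
    return (w_interact * interaction_freq
            + w_media * media_type_affinity
            + w_popularity * popularity)

# Hypothetical posts with made-up factor values.
posts = {
    "friend_photo": ranking_score(0.9, 0.8, 0.4),
    "page_video": ranking_score(0.2, 0.6, 0.9),
}

# Sort post identifiers by descending score to produce a feed order.
feed = sorted(posts, key=posts.get, reverse=True)
print(feed)  # posts ordered from highest to lowest score
```

A "Why am I seeing this?" style explanation would then amount to reporting which of these factors contributed most to a given post's score.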

Recently, courts and regulatory bodies have intensified calls for transparent AI and encouraged companies to take responsibility for automated decision-making systems. In Week 3, we covered how federal regulation in the US would affect companies that deploy automated decision-making systems in high-stakes scenarios, such as access to education, employment, financial services, healthcare, and legal services (refer to the Algorithmic Accountability Act of 2019).

Macron, the president of France, has asserted his plans for increasing the collective pressure to make AI algorithms transparent. France will take initiatives to open data from government and publicly funded projects, and plans to incentivise private players to make their AI algorithms public and transparent. Transparency can help mitigate issues of fairness, discrimination, and trust. At the same time, however, it is becoming clear that disclosures about AI pose risks, for instance:

  • If AI models are found to behave in a biased or unfair way, customer trust is automatically lost.
  • AI transparency isn’t necessary because organisations can self-regulate.
  • The model can’t be biased if protected data (such as gender, race, region, and age) isn’t used in model building.
  • AI transparency leaves you vulnerable to losing intellectual property and trade secrets.

Due to these concerns, many companies are still hesitant to disclose details of their AI models, fearing this would leave them susceptible to lawsuits or regulatory action. Call it AI's "transparency paradox": while generating more information about AI may create tangible benefits, it may also lead to new downsides.

Taking this into account, please share your view on how to navigate this paradox and reduce the tension between the desire for AI transparency and a company's interest in maintaining secrecy over its AI tools.

This article is from the free online

Designing Human-Centred AI Products and Services

Created by
FutureLearn - Learning For Life
