Types of transparency

In this article, Prof. Ching-Fu Lin explores types of transparency.
© Ching-Fu Lin and NTHU, proofread by ChatGPT

Next, let’s turn our focus to the notion of transparency and its relationship with black boxes.

By ex ante or ex post

Ex Ante Transparency
  • Systematic Transparency
This emphasizes comprehending the entire system, capturing the clarity of algorithmic operations, the type of data used, the formulation process, and the guiding principles behind decisions. Essentially, it entails being forthcoming about the overarching workings of an AI framework.
  • Output Simulatability
This pertains to the capacity of human users to understand or reproduce an algorithm’s results. Essentially, if given the identical inputs an algorithm processes, could an individual draw the same conclusion? This ensures the decision-making process remains transparent and can be duplicated, building trust and accountability.
Take, for example, a system determining credit scores. Here, output simulatability would entail that with identical applicant details (like income, past credit behavior), a human evaluator could emulate the algorithm’s decision process and match its credit score outcome.
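To make the credit-scoring example above concrete, here is a minimal sketch of a fully simulatable scorer. The feature names, thresholds, and point values are all hypothetical, chosen only to illustrate that every step is visible and reproducible by a human reviewer:

```python
def credit_score(income: float, missed_payments: int) -> int:
    """Toy rule-based scorer: every rule is explicit, so a human
    evaluator given the same inputs can reproduce the result.
    All thresholds and point values are hypothetical."""
    score = 500
    if income >= 50_000:        # hypothetical upper income threshold
        score += 150
    elif income >= 25_000:      # hypothetical lower income threshold
        score += 75
    score -= 40 * missed_payments  # hypothetical penalty per missed payment
    return max(300, min(850, score))   # clamp to a familiar score range

# A human applying the same written rules reaches the same number:
print(credit_score(income=60_000, missed_payments=1))  # 610
```

Because the rules fit on a page, the algorithm’s output can be simulated by hand, which is exactly what output simulatability demands; a deep neural network scorer would not offer this property.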
Ex Post Transparency
  • User-Specific Transparency

This dimension aims at making an AI system’s functions clear and transparent to a particular user, considering their expertise and the context they’re operating in. It ensures users grasp why a system lands on a particular verdict or suggestion.

For instance, in a tailor-made learning platform, user-specific transparency could imply that the system offers reasons for suggesting certain educational tools or courses based on the learner’s past performance, inclinations, and objectives. This data should be conveyed in a manner that resonates with the student, allowing them to comprehend the system’s choices and their relevance to their learning journey.
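A sketch of how such a platform might pair each suggestion with a user-facing reason follows. The data fields (`goal`, `scores`, `topic`) and the matching logic are assumptions for illustration, not a real recommender:

```python
def recommend_with_reason(learner: dict, courses: list) -> tuple:
    """Return a course title plus a plain-language reason the
    learner can understand. Fields and logic are hypothetical."""
    for course in courses:
        if course["topic"] == learner["goal"]:
            reason = (
                f"Recommended because your stated goal is '{learner['goal']}' "
                f"and you scored {learner['scores'].get(course['topic'])}% "
                f"on the placement quiz for this topic."
            )
            return course["title"], reason
    # Fall back transparently rather than silently
    return courses[0]["title"], "Default suggestion: no course matched your goal."

learner = {"goal": "statistics", "scores": {"statistics": 62}}
courses = [
    {"title": "Intro to Python", "topic": "programming"},
    {"title": "Statistics Basics", "topic": "statistics"},
]
title, why = recommend_with_reason(learner, courses)
print(title)  # Statistics Basics
```

The point is not the recommendation logic itself but that the explanation is phrased in terms the learner already understands (their goal and quiz score), which is what makes the transparency user-specific.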

By disclosure or explanation

Consider a model that differentiates between cats and dogs:
Systematic Transparency
  • “How does the model define the appearance of a cat?”
For those of us who are regulators or overseers of these algorithms, our attention is primarily directed towards systematic transparency. Our goal is to assess how comprehensible and thorough the algorithm is as a whole.
Individually Actionable Transparency
  • “What reasons does the model have for classifying THIS specific picture as a cat?”

It’s essential to recognize that the significance of each transparency type and explanation depends heavily on the audience’s perspective.

By contrast, if we are individuals directly affected by the algorithm’s outcomes, our main interest lies in the counterfactual explanation: we want to know which factors led to a particular result, rather than the intricate details of the entire system.
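A counterfactual explanation can be sketched as a search for the smallest change to an input that flips the decision. The toy loan model and the single-feature search below are assumptions for illustration only:

```python
def counterfactual(model, instance: dict, feature: str, step: int, max_tries: int = 100):
    """Find the smallest increase to one feature that flips the
    model's decision -- a minimal counterfactual explanation.
    Model, feature names, and step size are all hypothetical."""
    original = model(instance)
    changed = dict(instance)
    for i in range(1, max_tries + 1):
        changed[feature] = instance[feature] + i * step
        if model(changed) != original:
            return (f"If {feature} had been {changed[feature]} "
                    f"instead of {instance[feature]}, the decision would change.")
    return "No counterfactual found by varying this feature."

# Hypothetical loan model: approve when income >= 30,000
approve = lambda x: x["income"] >= 30_000
print(counterfactual(approve, {"income": 27_000}, "income", step=1_000))
# If income had been 30000 instead of 27000, the decision would change.
```

This is the kind of answer an affected individual wants ("earn 3,000 more and the loan is approved"), and note that it says nothing about how the model works internally, which is precisely the contrast with systematic transparency.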

This article is from the free online course AI Ethics, Law, and Policy, created by FutureLearn.
