In this activity, we move on to the second topic: transparency.
Please download and read the article below; we will discuss this topic in the audio lecture that follows.
A recurrent concern about machine learning algorithms is that they operate as “black boxes,” making it difficult to identify how and why the algorithms reach particular decisions, recommendations, or predictions. Yet judges will confront machine learning algorithms with increasing frequency, including in criminal, administrative, and tort cases. This Essay argues that judges should demand explanations for these algorithmic outcomes. One way to address the “black box” problem is to design systems that explain how the algorithms reach their conclusions or predictions. If and as judges demand these explanations, they will play a seminal role in shaping the nature and form of “explainable artificial intelligence” (or “xAI”). Using the tools of the common law, courts can develop what xAI should mean in different legal contexts.
There are advantages to having courts play this role: Judicial reasoning that builds from the bottom up, using case-by-case consideration of the facts to produce nuanced decisions, is a pragmatic way to develop rules for xAI. Further, courts are likely to stimulate the production of different forms of xAI that are responsive to distinct legal settings and audiences. More generally, we should favor the greater involvement of public actors in shaping xAI, which to date has largely been left in private hands.
Ashley Deeks, The Judicial Demand for Explainable Artificial Intelligence, 119 Colum. L. Rev. __ (forthcoming 2019), pp. 1829-1838, available at SSRN: https://ssrn.com/abstract=3440723
Required reading:
Part I (The What and Why of Explainable AI);
Part III (Criminal Sentencing Algorithms).
Pages 1835-1837 and 1844, along with footnote 87, are helpful for answering the quiz and the final test.
© Ashley Deeks