
Week 1 Introduction

A brief description of Week 1.
This week we’ll discuss a short overview of A.I., and then we’ll discuss transparency. What I’d like to accomplish in this first week is to give you a short introduction to how one goes about gathering data and creating models so that we can use them in useful ways. So in the first lecture we’ll talk about how we make an A.I. model using input from the world, and we’ll also talk about some of A.I.’s limitations. Then we’ll discuss how lawyers are using A.I., and I’d like to break this down by discussing the who, what, when, where, and why.
And not only the who, what, when, where, why, how, and how much, but also how combining these questions helps us think about how we organize facts about the world. This is the kind of exercise lawyers do when analyzing a problem. Then we’ll talk about some basic contemporary issues: lawyer and judge competency, prediction and forecasting, surveillance and A.I., A.I. at the border, the A.I. arms race, facial recognition, health issues, bias, and social manipulation. We’ll give a short overview of those issues. In this first week, I also introduce the first of my two favorite quotes. It is by Melvin Kranzberg, who basically says, “Technology is neither good nor bad; nor is it neutral.” The common idea is: well, computers and technologies are made by humans; they just do what we tell them to do, so they’re neither good nor bad. Correct. But are they neutral? Algorithms are one tool that humans use in this very complicated world, so can we really say they’re neutral? Then we discuss a fundamental case from the US that has to do with predicting whether or not someone is likely to commit another crime in the future, and the issues surrounding the use of a tool in the judicial system that’s not open, that’s not transparent. Then we’ll talk about this idea of accountability.
The basic premise here is that historically, humans have been the ones making decisions; important decisions are made by human beings. But now some decisions may be made by technology, perhaps by artificial intelligence. So how do we assure citizens that we remain accountable for those decisions, particularly when governments are using A.I. to make them? And finally we’ll talk about some initiatives to govern A.I.: international initiatives, national initiatives, initiatives from labor unions, and private-party initiatives.
So again this week, I challenge you to really think about the basic issues of artificial intelligence and transparency: do we really know what’s going on when artificial intelligence is working? Good luck. Thanks everybody, and take care.

This week, we’re going to discuss a short overview of A.I. and transparency, including:

  1. How we make an A.I. model

  2. Some limitations of A.I.

  3. How lawyers use A.I.

This article is from the free online

AI for Legal Professionals (I): Law and Policy

Created by
FutureLearn - Learning For Life
