
The laws of AI

Let us remind ourselves of Jacob Turner’s definition of AI: Artificial Intelligence is the ability of a non-natural entity to make choices by an evaluative process. This definition allows us to focus on the question of whether we should regulate a decision made by a machine which may affect us as humans. We’ll come back to this question throughout the session, but we will start our analysis by looking at how AI makes these decisions. The rise of AI is the result of a combination of three factors. First, as we looked at in week one, the amount of, and access to, data (digital data or big data): AI learns from, and improves as it is able to process, more data. Secondly, algorithms.
The growth and improved efficiency of coded algorithms means that more effective use of, and decisions based on, the data being received can be made. Thirdly, computing power. The ever-increasing power of computers means that the algorithms can work more efficiently and process more data. The three factors work symbiotically in a virtuous loop: more data can be processed because of increased computing power, the algorithms learn and become better, and the cycle continues. Don’t be put off by this table; you will see that there is a large amount of law which applies to AI and the effects of AI.
I’m going to go over three specific areas in the next few slides, the ones underlined: data laws, intellectual property rights, and negligence. In relation to the other points, it’s worth noting the following. You’ll see that I’ve differentiated between private laws and public laws. Public law involves the state taking action against a company or individual; for example, the fines the EU has leveled against Google for competition law breaches, what the Americans call antitrust. Private law involves a company or individual taking action in the courts. This will involve potentially very high costs, that is, the cost of the lawyers, and might not always be feasible for that reason. Product liability laws give consumers direct rights against manufacturers for defective products.
This creates some issues for AI: is it a product or a service? ECHR means the European Convention on Human Rights. Article 8 provides that everyone has the right to respect for his private and family life, his home and his correspondence. Public authorities cannot interfere with these rights except in accordance with the limits set out in the ECHR, and the courts must also take these rights into account even if all the parties are private. RIPA is the Regulation of Investigatory Powers Act 2000. The statute regulates the powers of public bodies to carry out surveillance and investigation, including the interception of communications. There are also related regulations specifying when an employer may monitor their employees.
This will be relevant where AI is used covertly. In the UK, it is against the law to discriminate against anyone because of certain protected characteristics, for example age, gender or race. Anyone designing AI will need to ensure that the effect of the AI is not to discriminate in breach of these laws. The GDPR contains provisions which are directly relevant to the effects of AI and its decisions. Article 22 provides that the data subject has the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her. This right is perhaps not as strong as it seems.
It is subject to a number of exceptions in Article 22(2); in particular, decisions which are necessary for entering into, or the performance of, a contract between the data subject and a data controller are exempt. So, for example, using AI to credit-score someone for a mortgage is not subject to the restriction in Article 22(1). There is also a right to be told about the existence of automated decision making and to be given meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject. But note that this is a subject access right, which we looked at in week one.
So you would have to make an application to the data controller to get this information; you don’t get it automatically. And what is meant by meaningful information? How do you explain complex technology which may utilize hundreds of algorithms in code? Data cannot be owned. In a famous case, a student who stole an exam paper, and then returned it after he had found out the answers, was acquitted of theft: information cannot be owned. However, in the EU there is protection for collections of data, databases, and many companies will own and keep improving large and very valuable datasets. They will be able to use the EU database right and copyright laws in other countries to prevent others from extracting data from these databases.
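The structure of Article 22 described above, a right that is engaged only by solely automated decisions with significant effects, and then switched off by the exceptions in Article 22(2), can be sketched as simple logic. This is a minimal illustration, not legal advice: the `Decision` class and its field names are invented for this sketch; only the rule structure comes from the Article.

```python
# Sketch of the GDPR Article 22 logic (illustrative only, not legal advice).
# The Decision class and its fields are hypothetical names for this example.
from dataclasses import dataclass

@dataclass
class Decision:
    solely_automated: bool          # no meaningful human involvement
    significant_effect: bool        # legal or similarly significant effect
    necessary_for_contract: bool    # Art. 22(2)(a) exception
    authorised_by_law: bool         # Art. 22(2)(b) exception
    explicit_consent: bool          # Art. 22(2)(c) exception

def article_22_right_applies(d: Decision) -> bool:
    """Does the data subject's Article 22(1) right bite on this decision?"""
    if not (d.solely_automated and d.significant_effect):
        return False  # Article 22(1) is not engaged at all
    # Article 22(2): the right does not apply if any exception is met
    exempt = (d.necessary_for_contract
              or d.authorised_by_law
              or d.explicit_consent)
    return not exempt

# The mortgage example from the text: solely automated, significant effect,
# but necessary for entering into the loan contract, so the right falls away.
mortgage = Decision(True, True, True, False, False)
print(article_22_right_applies(mortgage))  # → False
```

The sketch makes the lecture’s point visible: the headline right in Article 22(1) is only the starting condition, and the contract-necessity exception removes it in exactly the kind of credit-scoring case where you might most expect it to apply.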
Algorithms are also capable of intellectual property protection. They are protected by the law of confidentiality as trade secrets. For example, if Google changes its search algorithms, this will affect where a company may rank on a search listing. The company may adjust its website code as a consequence, but it will have no right to see the rules Google has encoded: these are proprietary to Google. Intellectual property rights therefore give the owners of AI some quite strong protections regarding the underlying technologies in the AI. But they do not really address the issue of the quality of the decision making of these algorithms. What happens when AI causes harm?
Most people will be familiar with the concept of negligence in the legal sense. But when we start to apply this concept, which is nearly 100 years old, to AI, it runs into some problems. The first requirement is to show that there is a duty of care owed by the defendant to the claimant. In the case of AI, who is this? The designers of the AI, the manufacturers of a product with embedded AI, or even the AI itself? We’ll look at this point again. Secondly, the duty must be breached. The question is whether the defendant acted in the same way as the average reasonable person in that situation.
But this may be a difficult standard to determine with AI, especially where it is novel. How is the standard set? Or what of the case where AI developed for one purpose is used for other purposes, causing unintended consequences? Thirdly, there must be causation: the breach must have caused the damage. What happens when a human fails to upgrade the software used for the AI, or intervenes to override the AI? And lastly, you must prove loss, and the loss caused must be reasonably foreseeable. As AI develops, many of its consequences may be unforeseeable; that is both the power and the risk of AI.
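The four elements just described, duty, breach, causation and foreseeable loss, are cumulative: failing any one defeats the claim. A minimal sketch of that structure, with a hypothetical `Claim` record invented for illustration (a real court weighs each element on the evidence, which is precisely where AI makes the answers hard to pin down):

```python
# Sketch of the cumulative elements of a negligence claim (illustrative only).
# The Claim class and its fields are hypothetical names for this example.
from dataclasses import dataclass

@dataclass
class Claim:
    duty_of_care: bool        # owed by whom: designer, manufacturer, the AI?
    breach: bool              # conduct below the reasonable-person standard?
    causation: bool           # did the breach cause the damage?
    foreseeable_loss: bool    # was the loss reasonably foreseeable?

def negligence_made_out(c: Claim) -> bool:
    """All four elements must be proved; failing any one defeats the claim."""
    return all([c.duty_of_care, c.breach, c.causation, c.foreseeable_loss])

# AI's unforeseen consequences typically knock out the final element:
print(negligence_made_out(Claim(True, True, True, False)))  # → False
```

This is why unforeseeability matters so much in the AI context: even with duty, breach and causation established, an unforeseeable loss leaves the claimant without a remedy.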
Governments are recognizing the limits of existing laws and are starting to legislate, but only on a piecemeal basis in relation to specific applications of AI. The UK government has passed the Automated and Electric Vehicles Act 2018, although many of its provisions are not yet in force. This makes the insurer liable for damage stemming from an accident caused by an insured automated vehicle when the vehicle is in self-driving mode, and where an insured person or any other person suffers damage as a result of the accident. This makes sure the insurer cannot escape liability because of the use of a self-driving car. But it is dodging the bigger issues. What is the liability of the manufacturer of the car?
What is the basis on which the insurer can recover from the manufacturer? Do we need to amend product liability laws to deal with this? Contrast this with the German law on autonomous vehicles, which provides that, in the event of unavoidable accident situations, any distinction based on personal features (age, gender, physical or mental constitution) is strictly prohibited. Think about this in the context of driving a car with a child. You might instinctively act to save the child over yourself or other adults; AI is required not to favor the child under this law. Facebook’s old motto, that you should move fast and break things, reflects that technological disruption moves much quicker than the law.
Given the importance of AI, should we not be regulating at the start, at the design stage? Leaving the issues about AI to private citizens also has limitations. There is an access to justice issue: going to court is expensive, and English litigation carries the risk of having to pay the other side’s costs if you lose. The law also leaves many gaps where discretion is allowed; the law can’t cover every eventuality. It relies on industries to develop standards and ethical codes to supplement the law and provide guidance. Often, the law is not concerned with the morality of a situation but is there for certainty of coordination as a society, for example, requiring that we all drive on the left in the UK.
So let’s go and consider some issues around ethical self regulation of AI and giving rights to robots.
This article is from the free online course The Laws of Digital Data, Content and Artificial Intelligence (AI), created by FutureLearn.
