History and definition of AI

This step explains the foundation and history of artificial intelligence.
Robot finger. © Tara Winstead via Pexels

The examples of modern AI that you have seen so far all stem from decades of dedicated research. This step explains the origins of AI and how it evolved into the techniques used today.

Symbolic AI

The beginnings of modern AI are seeded in antiquity, in classical philosophers’ attempts to describe human thinking as a symbolic system. But it wasn’t until the summer of 1956 that the field of AI research was formally founded, at a workshop held on the campus of Dartmouth College.

The aim of this workshop was clear:

“The study is to proceed on the basis of the conjecture that every aspect of learning or any feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves.”

Thus, the idea of artificial intelligence came to cover any technique used to mimic human intelligence by means of a machine. MIT cognitive scientist Marvin Minsky and others who attended the workshop were extremely optimistic about the future of AI. They believed that within a generation, this ‘problem’ of creating artificial intelligence would be solved.

Following the workshop, tremendous efforts were made to develop methods that could create this artificial intelligence. These methods were initially based on programming complex rules encompassing symbolic (human-readable) representations of problems, logic and search. This paradigm is often referred to as symbolic AI or Good Old-Fashioned AI (GOFAI).

A well-known example of such symbolic AI is the rule-based chatbot ELIZA, a natural language processing computer program that imitated a psychotherapist. By means of pattern matching, the program gave users the feeling that they were talking to a real therapist. ELIZA is often cited as one of the first programs to pass the Turing Test, proposed by Alan Turing. Passing this test meant that users believed they were communicating with a real person even though the responses were produced by a machine.

An ELIZA conversation. Image by Marcin Wichary via Wikipedia
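
To make the idea of pattern matching concrete, here is a minimal, purely illustrative sketch in Python. The rules and responses are invented for this example and are far simpler than ELIZA’s actual script, but they show the basic mechanism: match a pattern in the user’s input and reflect part of it back as a question.

import re

# Illustrative reflection rules (invented for this example, not ELIZA's real script):
# each rule matches a pattern in the input and echoes part of it back as a question.
rules = [
    (re.compile(r"i feel (.*)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.IGNORECASE), "Tell me more about your {0}."),
]

def respond(user_input: str) -> str:
    for pattern, template in rules:
        match = pattern.search(user_input)
        if match:
            return template.format(match.group(1))
    return "Please go on."  # fallback when no rule matches

print(respond("I feel anxious about my exams"))
# -> Why do you feel anxious about my exams?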

However, AI researchers encountered significant problems with symbolic AI, such as limited computing power and the explosive growth in complexity as the size of the inputs grew. After several reports criticizing progress in AI, government funding and interest in the field dropped off. This period, from 1974 to 1980, became known as the first “AI winter”.

In the 1980s, a new form of AI called ‘expert systems’ emerged. An expert system is a program that solves problems within a specific domain of knowledge, using logical rules derived from the knowledge of human experts. One of the most celebrated examples was the MYCIN system, which identified bacteria causing severe infections and recommended antibiotics with the dosage adjusted for the patient’s body weight.
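
To illustrate how reasoning with expert rules works, here is a small, purely hypothetical sketch in Python. The facts and rules are invented for this example (they are not MYCIN’s actual clinical rules); the point is the mechanism, known as forward chaining, in which rules are applied repeatedly until no new conclusions can be drawn.

# Hypothetical facts and rules, invented for illustration only (not real clinical knowledge).
facts = {"gram_negative", "rod_shaped", "anaerobic"}

rules = [
    ({"gram_negative", "rod_shaped", "anaerobic"}, "organism_may_be_bacteroides"),
    ({"organism_may_be_bacteroides"}, "consider_recommending_antibiotic_x"),
]

# Forward chaining: keep applying rules until no new facts can be derived.
derived_new_fact = True
while derived_new_fact:
    derived_new_fact = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)
            derived_new_fact = True

print(sorted(facts))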

However, these systems were very expensive to develop because of the scarcity of domain experts and the effort needed to encode their knowledge. They were difficult to update and they were ‘brittle’ (i.e., they couldn’t handle ambiguous inputs). Thus, AI research entered another ‘winter’ between 1987 and 1993, a period marked by severe funding cuts and the bankruptcy of many AI companies.

Connectionist AI

In the 1990s, AI models that resembled the way humans learn became more popular. Newer data-driven methods, such as decision trees, k-means algorithms and artificial neural networks (the latter often called connectionist models), allowed a shift from a knowledge-driven approach to a data-driven approach. Instead of coding the machine by hand, the developer trains the machine on examples or experiences so that it learns to make inferences on new data. This is called machine learning.
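
As a concrete (and simplified) illustration of this data-driven approach, the sketch below trains a decision tree using the scikit-learn library on its bundled iris dataset. The dataset and parameters are chosen purely for demonstration; the point is that the model learns its own rules from labelled examples rather than having them coded by hand.

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Labelled examples: flower measurements (features) and species labels.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# The model infers its own decision rules from the training examples.
model = DecisionTreeClassifier(max_depth=3, random_state=0)
model.fit(X_train, y_train)

# It can then make inferences on data it has never seen before.
print("Accuracy on unseen data:", model.score(X_test, y_test))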

Since around 2010, a subfield of machine learning called deep learning has been gaining traction. Deep learning models consist of neural networks with many layers. In contrast to earlier approaches, they do not require the developer to design the input features that are relevant for the problem at hand. For example, convolutional neural networks (CNNs) can be used to analyse images and recurrent neural networks (RNNs) to process natural language, without the features in the images or sound waves having to be specified by hand.
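
The sketch below, written with the PyTorch library, shows what a very small convolutional neural network can look like. The layer sizes are arbitrary and chosen only for illustration; the key point is that the convolutional layers learn image features directly from the raw pixels, so the developer does not have to design those features by hand.

import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        # Convolutional layers learn their own filters (features) from raw pixels.
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # A final layer maps the learned features to class scores.
        self.classifier = nn.Linear(32 * 7 * 7, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = TinyCNN()
dummy_image = torch.randn(1, 1, 28, 28)  # one 28x28 grayscale image
print(model(dummy_image).shape)           # torch.Size([1, 10])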

A recent example of an influential deep learning model is Google’s AlphaFold, which can predict the three-dimensional shapes that biological proteins fold into. The protein folding problem is incredibly hard and stood as a grand challenge in biology for some 50 years. This breakthrough AI model now makes it possible to accelerate scientific discovery across many fundamental problems in the life sciences.

© AIProHealth Project
This article is from the free online course How Artificial Intelligence Can Support Healthcare.
