
Modern AI: a short history

Read this article for a short history of AI leading up to modern AI.

Present-day AI is often called the third wave of AI: data-centric systems trained on big data.

Artificial intelligence (AI) is a trendy topic now, but the concept of AI was first proposed almost 70 years ago. What happened in the AI research community over this period, and why did it take so long for the breakthroughs to occur?


The field of AI research was officially born in 1956 at the Dartmouth Conference, which introduced the term “artificial intelligence” to unify the various research efforts in cybernetics, automata theory, and complex information processing to give machines the ability to “think”.

A small group of prominent researchers, including John McCarthy, Marvin Minsky, Claude Shannon and Norbert Wiener, proposed that “every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it.” This provided a clear, pragmatic direction for subsequent AI research efforts.

In 1958, Frank Rosenblatt created the perceptron learning algorithm, the simplest type of neural network, with only one layer of neurons connecting inputs to outputs. The New York Times sensationally reported the perceptron as “the embryo of an electronic computer that will be able to walk, talk, see, write, reproduce itself and be conscious of its existence.” However, it was later proven (notably by Minsky and Papert in 1969) that a single-layer perceptron can only learn linearly separable patterns, and fails on problems as simple as the XOR function.
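To make the idea concrete, here is a minimal sketch of Rosenblatt's perceptron learning rule (the function names and training data below are illustrative, not from the article): a single neuron adjusts its weights whenever it misclassifies an input.

```python
# Minimal single-layer perceptron, trained with Rosenblatt's update rule.
def train_perceptron(samples, epochs=20, lr=0.1):
    """Train on a list of (inputs, label) pairs, labels in {0, 1}."""
    n = len(samples[0][0])
    weights = [0.0] * n
    bias = 0.0
    for _ in range(epochs):
        for inputs, label in samples:
            # Step activation: fire (1) if the weighted sum exceeds zero.
            activation = sum(w * x for w, x in zip(weights, inputs)) + bias
            prediction = 1 if activation > 0 else 0
            # Nudge weights toward the correct answer on a mistake.
            error = label - prediction
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

def predict(weights, bias, inputs):
    return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0

# Logical AND is linearly separable, so the perceptron learns it:
and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_data)
print([predict(w, b, x) for x, _ in and_data])  # [0, 0, 0, 1]
```

Swap in the XOR truth table and no choice of weights will ever separate the classes; this is exactly the limitation that deflated the early hype around the perceptron.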


The first AI winter came!

Coming out of the first AI winter, expert systems became very popular in the 1980s and 90s: specialised systems that simulated the decision-making ability of human experts to solve narrow, specific problems, such as diagnosing infectious diseases or identifying chemical compounds (e.g., DENDRAL).

These expert systems eventually proved too expensive to maintain: they were difficult to update, could not learn, and were brittle rather than robust when handling unusual inputs. Buyers saw little value in an expensive machine specialised for a single domain, and the market for expert systems collapsed.

[Image: screenshots of two articles — one titled “DENDRAL, first Expert System”; the other captioned “SUMEX, a computer designed to encourage the application of artificial intelligence in medicine.”]

The second AI winter came!

Finally, for about the last decade, we have been in a period of real blossoming of interest and investment in AI: the third wave of AI, if you will. The innovation of AI algorithms, combined with the availability of big data and the experience of working with it, is one of the biggest reasons that artificial intelligence has been able to leave hibernation.

In particular, the development of deep learning is another reason we have come out of the second AI winter. However, with all this investment, interest, and funding in AI, are we heading toward another AI winter? Are we once again over-promising and under-delivering on what AI is capable of? Are we going to be disappointed with the limitations of driverless vehicles, natural language processing, and AI-powered predictive analytics?

This article is from the free online course Designing Human-Centred AI Products and Services, created by FutureLearn.
