
History of AI and Common AI Algorithms

Okay, so let me start with a little bit of AI history. AI is actually not a new thing; it has been "the next big thing" ever since the 1950s. If you remember, the first formal general-purpose computer, ENIAC, was unveiled in 1946. Although it was designed during the Second World War, by the time the researchers finished the project the war was over, so the first computer was never actually used in a war, which is a good thing.
But shortly after this general-purpose computer was invented, a professor coined the term "artificial intelligence" at a workshop at Dartmouth College, and ever since, the term has sounded very exciting and made people imagine great things based on AI. That was the birth of the first AI hype. But after about 20 to 25 years, people grew disappointed with all the hype, because by definition hype is something you hope for but have not yet achieved. So there was a so-called AI winter in the 1970s and 1980s, when people avoided talking about AI because they thought it was a hoax, not a real thing, not something we could achieve.
But by the end of the 1980s and into the 1990s there were new hopes, thanks to advances in computer speed, hardware, and software. In Japan, for example, they were talking about a fifth-generation computer that could listen to people and carry out computation by voice command, like what we see in Star Trek: you could just talk to your computer, and it would answer any question you had in a pleasant voice.
Unfortunately, of course, we did not manage to achieve anything like that, so from the beginning of the 1990s to the end there was another so-called AI winter: people grew disappointed in AI, research on AI came to a halt, the budgets stopped coming in, and researchers even renamed their work to avoid the term AI. It was not until IBM Watson won the Jeopardy! competition in 2011, and AlphaGo defeated a top human Go player in 2016, that this changed. But this current wave of AI really started with a breakthrough in recognizing objects in random images from the internet: the famous ImageNet project from Dr. Fei-Fei Li.
She built the ImageNet dataset with about 15 million random images gathered from the internet and asked computers to recognize about a thousand different object classes in these images. Of course, she first had to go through a very long process of having humans recognize and label each object in every image. In the resulting competition, most teams achieved only about sixty percent accuracy, which was seen as the well-known limit of artificial intelligence in recognizing images. But one team stood out.
This was a team from Canada that achieved more than eighty percent accuracy the first time they entered the competition, and the rest is history. The team used something called CNN; we are going to talk about it. It is not the cable news CNN, but an AI algorithm called CNN. With it, they achieved a breakthrough in recognizing objects in random images. Nowadays such systems can achieve an accuracy of more than ninety-nine point four percent, which is actually better than human performance.
So if you want to recognize an object, or several objects, in a random image, AI actually does a better job than the average human. Now, when we talk about AI, people commonly get confused: some people think AI is machine learning; some people think AI is deep learning. People mix all these terms together. But actually AI is a field of research, and there are several major classes of algorithms that AI researchers employ.
The first and most common is rule-based inference, which was very popular in the first and second AI waves; using rules, researchers produced many so-called expert systems. There is also probabilistic inference, a type of algorithm that is quite suitable for medicine, because there is so much uncertainty in medicine that many rules cannot be expressed in a deterministic way. If something happens, there is a probability that something else will happen; it is not zero or one, there is uncertainty involved. So probabilistic inference became quite popular in the medical field.
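To make the idea of rule-based inference concrete, here is a minimal sketch of forward chaining, the mechanism behind classic expert systems. The rules, symptom names, and conclusions below are invented for illustration, not taken from any real clinical system.

```python
# A toy rule-based inference engine: each rule pairs a condition
# (a predicate over the known facts) with a conclusion to assert.
# All rule content here is hypothetical, for illustration only.

def infer(facts, rules):
    """Forward chaining: apply rules repeatedly until no new
    facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for condition, conclusion in rules:
            if condition(facts) and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

rules = [
    (lambda f: "fever" in f and "cough" in f, "respiratory infection"),
    (lambda f: "respiratory infection" in f and "wheezing" in f,
     "consider bronchitis"),
]

print(infer({"fever", "cough", "wheezing"}, rules))
```

Note how the second rule fires only after the first has added "respiratory infection" to the facts; chaining conclusions like this, deterministically, is exactly what expert systems did.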
Bayesian probability in particular, which some of you may have heard of, is also quite popular as an AI approach in healthcare. And of course you have heard of statistical models: there are different types of regression, for example, used for prediction and model fitting. These also work as AI algorithms. You can also assign ad hoc scores to the variables that you believe contribute to the outcome you are trying to predict or model. And machine learning itself is not a new thing.
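The medical appeal of Bayesian probability can be shown with a short worked example: updating the probability of a disease after a positive test. The prevalence, sensitivity, and specificity values below are hypothetical, chosen only to illustrate the arithmetic.

```python
# Bayes' theorem for a diagnostic test (hypothetical numbers):
# P(disease | positive) =
#     sens * prev / (sens * prev + (1 - spec) * (1 - prev))

def posterior_positive(prevalence, sensitivity, specificity):
    """Probability of disease given a positive test result."""
    true_pos = sensitivity * prevalence          # diseased and test positive
    false_pos = (1 - specificity) * (1 - prevalence)  # healthy but positive
    return true_pos / (true_pos + false_pos)

# With 1% prevalence, 90% sensitivity, and 95% specificity, a
# positive test still leaves the probability of disease fairly low,
# because false positives from the large healthy group dominate.
p = posterior_positive(0.01, 0.90, 0.95)
print(round(p, 3))  # → 0.154
```

This is the kind of non-deterministic, "not zero or one" reasoning that rule-based systems struggle to express but probabilistic inference handles naturally.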
Actually, I think when I was in high school, which was a very long time ago, I was already using machine learning. There was a package, I still remember it, called something like "Cooperate Maker", that ran on the Apple II, that very old personal computer from Apple, and it let you construct a neural network. But I think the upper limit was something like 20 nodes, and it took hours or days to train a network. So it is not anything new. However, the ability to run millions or even billions of nodes in a very big neural network in minutes or seconds is something we had never been able to achieve until recently.
And also thanks to the people who reinvented the artificial neural network (ANN) and derived new branches from it, such as the CNN, alongside other methods like the support vector machine (SVM). Let's talk a little bit more about CNN on the next slide. A CNN is one class of network derived from the ANN. It has become a mainstream method for machine learning because, as I described before, it provided a breakthrough in recognizing random objects in random images. In the middle of the slide you can see the full name: CNN stands for Convolutional Neural Network.
It is a type of neural network, or ANN, but with multiple layers; these days networks can go up to something like 150 layers. So people also call it a deep neural network, or, combining everything together, a deep convolutional neural network (DCNN). When people started working on CNNs, they did not want to call them neural networks, because that term reminded everyone of the earlier, failed generations of AI research, so they called the field deep learning.
And that is why we have all these terms mixed together: deep learning, machine learning, CNN, DCNN. But these are all the same thing under the umbrella of the artificial neural network, which is a popular way to do machine learning. Most people now, when they say machine learning, are implying that they use a neural network, and sometimes a deep neural network like a CNN. The CNN is one type of deep learning.
The way a CNN works is by decomposing an image into smaller units, and each smaller unit is again divided into smaller units until we reach individual pixels. The network then uses a multi-layer architecture to look at each pixel and the arrangement of the pixels, and eventually recognizes the image. If you are interested, there are many materials online from which you can learn more about CNNs.
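The core operation behind this pixel-by-pixel decomposition is the convolution: sliding a small kernel over the image and taking a weighted sum of each neighbourhood. The sketch below shows one such step in plain Python; the image and kernel values are made up (a hand-picked vertical-edge detector), whereas a real CNN learns its kernels and stacks many such layers.

```python
# A minimal sketch of the convolution step at the heart of a CNN.
# As in most deep-learning libraries, the kernel is not flipped
# (strictly, this is cross-correlation).

def convolve2d(image, kernel):
    """Valid 2D convolution (no padding) over nested lists."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            s = sum(image[i + a][j + b] * kernel[a][b]
                    for a in range(kh) for b in range(kw))
            row.append(s)
        out.append(row)
    return out

# A 4x4 "image" whose right half is bright, and a kernel that
# responds where brightness increases from left to right.
image = [[0, 0, 1, 1]] * 4
kernel = [[-1, 1],
          [-1, 1]]

print(convolve2d(image, kernel))  # → [[0, 2, 0], [0, 2, 0], [0, 2, 0]]
```

The output peaks exactly at the column where the dark half meets the bright half, so this one kernel acts as a tiny edge detector; a CNN combines thousands of learned kernels across layers to build up from edges to whole objects.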

Dr. Yu-Chuan Jack Li explores the history of AI and common AI algorithms. Common AI algorithms such as machine learning and artificial neural networks will be discussed in this activity.

In this video, we saw that AI has not always gained in popularity over the past decades. Given the state of AI today, do you think people will be disappointed in AI again? Why or why not? Can you define the common AI algorithms mentioned by Dr. Li and their applications in healthcare? Share your comments below and discuss with others.

This article is from the free online course Artificial Intelligence for Healthcare: Opportunities and Challenges, created by FutureLearn.
