Previously we reflected on what we mean when we talk about AI. We also considered what AI could become in the future. In this step we are going to look at some current AI applications that are being used in industry.
What do we mean by intelligence?
When we talk about artificial intelligence, what level of intelligence are we talking about? The systems we work with today are not alive or aware in any sense of the word. These systems can appear intelligent to the end user, but they are mainly statistical models that give us insight into patterns and trends in data. When a system identifies a pattern, it can then apply that pattern to new input data to create original output. HAL, the sentient computer system from the film 2001: A Space Odyssey, is still what it was when it appeared in 1968: science fiction.
At some point in the future, HAL may make the transition from science fiction to reality. However, the revolution that is happening around us, including in the creative industries, is not in any way because of self-aware AI systems. What we’re now seeing is an increased automation of tasks and processes that, up until now, have been very difficult to implement using computers.
These implementations include:
- natural language processing (NLP)
- speech recognition
- image recognition and classification
- gesture recognition
- recommender systems.
Let’s look at each in turn.
Natural Language Processing
Natural language processing (NLP) has a long history. As with many tasks associated with AI, we have only started to see its full potential in recent years. NLP is an umbrella term describing the ability of software to analyse and process human language, which the computer can then use as data or commands to perform a task. You can see everyday examples of NLP in action in text completion features in your email, website chat boxes that answer your queries while shopping, and services such as Google Translate.
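As a toy illustration of the pattern-finding behind features like text completion, the Python sketch below counts which word most often follows another in a handful of invented sentences, then suggests the most likely next word. The corpus and function names are made up for illustration; real systems use far larger models and data.

```python
from collections import defaultdict

# A handful of made-up sentences standing in for training data.
corpus = [
    "thank you for your email",
    "thank you for your time",
    "thank you for the update",
]

# Count how often each word follows another (a simple bigram model).
following = defaultdict(lambda: defaultdict(int))
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        following[prev][nxt] += 1

def suggest(word):
    """Suggest the word most frequently seen after `word` in the corpus."""
    candidates = following.get(word)
    if not candidates:
        return None
    return max(candidates, key=candidates.get)

print(suggest("for"))  # "your" follows "for" most often in this corpus
```

The same idea, counting patterns in past text and applying them to new input, is what the statistical models described above do at a vastly larger scale.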
Speech recognition

Speech recognition is a subset of NLP and stretches all the way back to the development of an automatic digit recognition machine, Audrey, at Bell Labs in 1952. This machine was a gigantic computer that could recognise phonemes, the basic units of human speech. For a more in-depth look at the history of speech technology, see the link to the BBC article The Machines That Learned To Listen below. Speech recognition systems listen for vocal commands and process the information to complete (some) requests. This technology has now progressed to the point that we have products such as Google’s Assistant and Amazon’s Alexa in many homes, all providing information for the user based on voice input.
Image recognition and classification
Whilst the field of AI research is many years old, recent developments in ‘deep learning’ have provided a boost for some really amazing work over the last decade. Deep learning is a term used in AI to describe the process of feeding massive amounts of data into a large neural network, which can then identify patterns in that data.
A neural network is a computer algorithm that uses small units to process bits of information. By connecting many of these small units together, we can identify features in the data that are difficult to describe manually. When such a network is trained on massive numbers of photos, it will eventually be able to describe the features of the photos’ content. Once the content features are identified, some networks can then produce new images from this information.
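To make the idea of a ‘small unit’ concrete, here is a minimal sketch of a single artificial neuron in Python: it combines its inputs with weights and squashes the result into a value between 0 and 1. The weights and inputs below are invented for illustration; in a real network they are learned from data, and millions of such units are connected in layers.

```python
import math

def neuron(inputs, weights, bias):
    """One small unit: a weighted sum of inputs passed through a
    sigmoid activation, which squashes the result into (0, 1)."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))

# Example: three input values, which could stand for pixel intensities.
# These particular numbers are made up.
output = neuron([0.5, 0.8, 0.1], weights=[0.4, -0.2, 0.9], bias=0.1)
print(round(output, 3))
```

A network ‘learns’ by nudging the weights and biases of its units until the outputs match the patterns in the training data.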
You can see examples of this deep learning recognition and classification in the systems that power self-driving cars, as well as in more creative applications such as Google’s Deep Dream generator, which lets you see what a neural network ‘sees’, with startling results, and Magenta, a software library tailored towards creating music and image applications powered by machine learning algorithms. You can find out more about both in the links below this article.
Gesture recognition

A gesture recognition system can interpret the user’s body movement to perform a task. The user’s movement is captured using a sensor and used as input to a machine learning algorithm, which then produces some output depending on the gesture. You can now change the channel on your TV by simply swiping your hand through the air, and control your laptop by moving your fingers in a certain way. Both are thanks to gesture recognition.
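As a very simplified sketch of the idea, the toy Python classifier below matches a new sensor reading against a few stored example gestures and picks the closest one. The gesture names and numbers are entirely invented; real systems work from much richer sensor data and learned models rather than hand-written examples.

```python
import math

# Each "gesture" here is a short list of made-up sensor readings.
known_gestures = {
    "swipe_left":  [9.0, 5.0, 1.0],
    "swipe_right": [1.0, 5.0, 9.0],
    "push":        [5.0, 9.0, 5.0],
}

def distance(a, b):
    """Straight-line (Euclidean) distance between two readings."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify(reading):
    """Return the stored gesture whose readings are closest to the input."""
    return min(known_gestures,
               key=lambda name: distance(known_gestures[name], reading))

print(classify([8.5, 5.2, 1.3]))  # closest to the "swipe_left" example
```

The pipeline is the same shape as the one described above: a sensor produces numbers, an algorithm maps those numbers to a gesture, and the system acts on the result.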
Recommender systems

Anyone who has used the internet will be familiar with targeted advertising. It’s used by companies to promote products they think you will be interested in, based on your previous online activity. Thanks to recommender systems, it’s now possible to predict the kinds of things people might like to do based on what they have done in the past, and how those past activities might relate to other things.
Recommender systems are deeply embedded in everyday life, from your YouTube watchlist to your favourite playlists on apps like Spotify. Because they can influence the type of media you are exposed to, they are incredibly powerful tools.
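A minimal sketch of how such a system might work, assuming a simple collaborative-filtering approach: find the user whose past ratings most resemble yours, then suggest something they liked that you haven’t tried. All the users, items, and ratings below are invented for illustration.

```python
import math

# Made-up listening ratings (1 = disliked, 5 = loved).
ratings = {
    "alice": {"jazz": 5, "rock": 1, "folk": 4},
    "bob":   {"jazz": 4, "rock": 2, "folk": 5, "metal": 1},
    "carol": {"jazz": 1, "rock": 5, "metal": 4},
}

def similarity(u, v):
    """Cosine similarity over the items both users have rated."""
    shared = set(ratings[u]) & set(ratings[v])
    if not shared:
        return 0.0
    dot = sum(ratings[u][i] * ratings[v][i] for i in shared)
    norm_u = math.sqrt(sum(ratings[u][i] ** 2 for i in shared))
    norm_v = math.sqrt(sum(ratings[v][i] ** 2 for i in shared))
    return dot / (norm_u * norm_v)

def recommend(user):
    """Suggest the item the most similar other user rated highest,
    among items the target user hasn't rated yet."""
    others = [u for u in ratings if u != user]
    nearest = max(others, key=lambda u: similarity(user, u))
    unseen = set(ratings[nearest]) - set(ratings[user])
    return max(unseen, key=lambda i: ratings[nearest][i], default=None)

print(recommend("alice"))  # bob's tastes match alice's best
```

Production systems use far more signals and far larger models, but the core move, predicting what you might like from what similar people have already done, is the same.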
Have your say
- Can you think of any creative or artistic applications that could be possible with these technologies?
- What forms of artistic output could be generated?
Share your ideas with other learners in the Comments section.