Introduction to Neural Networks

Neural networks are a cutting-edge learning technology. You will need to bring all of your knowledge of AI and machine learning to help you understand how these “digital brains” work.

What is a Neural Network?

Neural networks are algorithms that mimic parts of how the cells in our brain (neurons) collaborate to make decisions.
Neural networks provide a framework for machine learning algorithms. You can use them to classify data and apply labels, to predict numeric outcomes, and even to process natural language to understand commands. Whilst they are extremely useful, creating and training a neural network requires a lot of time and very powerful processors.

Nodes, Layers and Weights

A simple neural network can be represented as a graph made up of nodes and connections.
The circles are called nodes or neurons, and they are organised into layers, with each column of nodes representing a single layer. The first layer in any neural network is called the input layer, and each of its nodes is fed by a single attribute of the data point the neural network is currently working on. For example, the input layer of a neural network being used to predict house prices might have one node each for attributes such as floor area, number of bedrooms and age.
The final layer in every neural network is the output layer. If the desired result is a single value, like a number for a regression problem, there will only be one node in the output layer. If a network is being used to classify, then you would need an output node for each potential label.
In between these two layers are the hidden layers, which can be a single layer or many layers connected together. These layers are called “hidden” because their nodes are not part of the input or the output: the programmer never sets or reads their values directly.
Nodes in the hidden layers generate their values from their inputs (the nodes that connect to them from previous layers) by applying weights. Every connection in a neural network has a weight, and the higher the weight, the more influence that input has on the value of the node. A node’s value is the weighted sum of its inputs, so the nodes and connections in the hidden layers combine multiple inputs into a single value.
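As a rough sketch in Python, a single node’s value can be computed as a weighted sum of its inputs. The attribute values and weights below are made up purely for illustration:

```python
# A single node's value: the weighted sum of its inputs.
# The inputs and weights here are invented for illustration.

inputs = [120.0, 3.0, 25.0]   # e.g. floor area, bedrooms, age of a house
weights = [0.8, 1.5, -0.2]    # one weight per incoming connection

# Higher weights make an input more influential on the node's value.
node_value = sum(x * w for x, w in zip(inputs, weights))
print(round(node_value, 2))  # 95.5
```

Notice that the third weight is negative, so a higher age pulls the node’s value down rather than up.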

Why Use Neural Networks?

Neural networks are really good at learning and representing complex relationships between parts of an input and the desired outcome. For example, in a movie data set, a neural network could model the influence that combinations of run time, number of lines of dialogue and genre have on box office sales, whereas other methods may not spot such relationships.
You can think of each node as its own mapping function that combines different parts of the input to create a feature – a combination of inputs and their weights that represents a relationship in the data. These features then act as inputs for other nodes and more mapping functions. Combining all of these mapping functions makes neural networks more accurate than many other machine learning methods when working with complex data (like images).

Neural Network Topologies

The way the nodes are connected together is called a topology.
Feed-forward neural networks connect all the nodes from a layer to all of the nodes in the next layer. This means that every node in the input layer is connected to every node in the first hidden layer and so on up until the output layer.
They are called feed-forward because the direction of the data flow is always forward and all the data from one layer is always passed on to the next.
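To make this concrete, here is a minimal feed-forward pass in Python: each layer’s values are the weighted sums of all the previous layer’s values. The layer sizes and weights are invented for illustration, and real networks also apply an activation function at each node, which is omitted here to keep the sketch simple:

```python
# A minimal feed-forward pass: every node in one layer connects to
# every node in the next. Layer sizes and weights are illustrative.

def layer_output(inputs, weight_matrix):
    # One weighted sum per node in the next layer; each row of the
    # weight matrix holds that node's incoming connection weights.
    return [sum(x * w for x, w in zip(inputs, row)) for row in weight_matrix]

inputs = [1.0, 2.0]                          # input layer: 2 nodes
hidden_weights = [[0.5, -1.0], [1.0, 0.25]]  # hidden layer: 2 nodes
output_weights = [[2.0, -0.5]]               # output layer: 1 node

hidden = layer_output(inputs, hidden_weights)   # [-1.5, 1.5]
output = layer_output(hidden, output_weights)   # [-3.75]
```

Data only ever moves forward: the input layer feeds the hidden layer, and the hidden layer feeds the single output node.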
Recurrent neural networks allow connections to go backwards, so the output of one node can be fed into an earlier layer for the next iteration of the model.
This is useful for situations where a previous result of a node has an impact on the next output, such as in predictive text where the last few words have a big impact on the next recommendation.
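A single recurrent node can be sketched in Python by feeding the node’s previous output back in as an extra weighted input on the next iteration. The weights and input sequence below are purely illustrative:

```python
# A sketch of one recurrent node: its previous output is fed back
# as an extra input on the next iteration. Weights are illustrative.

input_weight = 0.6
feedback_weight = 0.4

state = 0.0  # the node's previous output, initially zero
for x in [1.0, 1.0, 1.0]:  # the same input repeated over 3 time steps
    state = x * input_weight + state * feedback_weight
    print(round(state, 3))  # 0.6, then 0.84, then 0.936
```

Even though the input is identical at every step, the output keeps changing, because each result depends on what the node produced previously.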

How Neural Networks Learn

Whilst neural networks do not function exactly as the brain does, it is a useful analogy to help you understand the technique. Inside the brain, individual brain cells are connected to one another through synapses. You can think of the nodes in a neural network as brain cells and the connections as synapses.
Information is passed between brain cells in the form of electrical signals, and receiving cells combine those signals to decide whether they should send a signal in response. We learn because our brains strengthen useful neuron connections and weaken (or “prune”) less useful connections. Neural networks do this too by adjusting their weights as they learn, amplifying useful connections whilst effectively nullifying others by reducing their weights to zero.
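As a simplified sketch of this idea in Python, the loop below repeatedly nudges a single weight so that the node’s output moves towards a target value. Real training algorithms adjust every weight in the network at once; the numbers here are invented for illustration:

```python
# A sketch of learning by adjusting one weight: nudge it so the
# node's output moves towards a target, strengthening or weakening
# the connection. Real networks do this for every weight at once.

x = 2.0            # input value on the connection
weight = 0.1       # initial connection weight
target = 1.0       # the output we want for this input
learning_rate = 0.1

for _ in range(20):
    output = x * weight
    error = output - target
    # Step the weight in the direction that reduces the error.
    weight -= learning_rate * error * x

print(round(weight, 3))  # 0.5, since 2.0 * 0.5 hits the target
```

If a connection never helps reduce the error, updates like this drive its weight towards zero, mirroring how the brain prunes unhelpful synapses.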