Glossary

The definitions below are adapted from those provided in the UK House of Lords’ Select Committee on Artificial Intelligence report “AI in the UK: Ready, Willing and Able”.

Algorithm

A series of instructions for performing a calculation or solving a problem, especially with a computer. Algorithms form the basis of everything a computer can do, and are therefore a fundamental aspect of all AI systems.
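
Purely for illustration, and not part of the Committee's definition, the short Python sketch below shows one simple algorithm: Euclid's method for finding the greatest common divisor of two whole numbers. The function name is chosen here only for clarity.

    def greatest_common_divisor(a, b):
        # Repeat a fixed set of instructions until the remainder is zero.
        while b != 0:
            a, b = b, a % b
        return a

    print(greatest_common_divisor(48, 36))  # prints 12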

Deep Learning

A more recent variation of neural networks, which uses many layers of artificial neurons (forming a “deep neural network”) to solve more difficult problems. Its popularity as a technique increased significantly from the mid-2000s onwards, and it is behind much of the wider interest in AI today. It is often used to classify information from images, text or sound.
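
As an illustrative sketch only, the Python code below stacks several layers of artificial neurons so that each layer transforms the output of the one before it; the layer sizes and random weights are invented for the example and a real deep network would learn its weights from data.

    import numpy as np

    def relu(x):
        # A common activation function: negative values are set to zero.
        return np.maximum(0.0, x)

    # Invented layer sizes: 4 inputs passed through three hidden layers
    # ("deep" simply means many stacked layers) to 3 outputs.
    rng = np.random.default_rng(0)
    layer_sizes = [4, 8, 8, 8, 3]
    weights = [rng.standard_normal((m, n))
               for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

    def forward(x):
        # Each layer transforms the output of the layer before it.
        for w in weights:
            x = relu(x @ w)
        return x

    print(forward(rng.standard_normal(4)))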

Expert system

A computer system that mimics the decision-making ability of a human expert by following pre-programmed rules, such as ‘if this occurs, then do that’. These systems fuelled much of the earlier excitement surrounding AI in the 1980s, but have since become less fashionable, particularly with the rise of neural networks.
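
For illustration only, the Python sketch below follows the ‘if this occurs, then do that’ style described above; the rules, facts and advice are invented for the example and are not drawn from any real system.

    # A toy rule base of pre-programmed 'if this, then that' rules.
    rules = [
        (lambda facts: facts.get("temperature", 0) > 38.0, "advise seeing a doctor"),
        (lambda facts: facts.get("engine_light") is True, "advise a garage inspection"),
    ]

    def consult(facts):
        # Apply each rule in turn and collect the advice that applies.
        return [action for condition, action in rules if condition(facts)]

    print(consult({"temperature": 39.2}))  # ['advise seeing a doctor']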

Machine Learning

One particular form of AI, which gives computers the ability to learn and improve at a task from experience, without being explicitly programmed for that task. When provided with sufficient data, a machine learning algorithm can learn to make predictions or solve problems, such as identifying objects in pictures or winning at particular games.
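
As a minimal sketch of learning from examples rather than explicit rules, the Python code below uses a one-nearest-neighbour classifier; the labelled data points and the ‘cat’/‘dog’ labels are invented purely for illustration.

    # Labelled examples the program "learns" from (invented for illustration).
    examples = [((1.0, 1.0), "cat"), ((1.2, 0.8), "cat"),
                ((5.0, 5.2), "dog"), ((4.8, 5.1), "dog")]

    def predict(point):
        # No explicit rule for 'cat' or 'dog' is programmed; the answer comes
        # from whichever stored example is closest to the new point.
        def distance(example):
            (x, y), _ = example
            return (x - point[0]) ** 2 + (y - point[1]) ** 2
        return min(examples, key=distance)[1]

    print(predict((4.9, 5.0)))  # prints 'dog'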

Neural Network

Also known as an artificial neural network, this is a type of machine learning loosely inspired by the structure of the human brain. A neural network is composed of simple processing nodes, or “artificial neurons”, which are connected to one another in layers. Each node receives data from several nodes “above” it and passes data to several nodes “below” it. Nodes attach a “weight” to the data they receive and attribute a value to that data. If the data does not pass a certain threshold, it is not passed on to the next node. The weights and thresholds of the nodes are adjusted as the algorithm is trained, until similar data inputs produce consistent outputs.
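
The Python sketch below is purely illustrative of the description above: each artificial neuron weights its inputs, sums them, and passes the result on only if it clears a threshold. The weights and thresholds are fixed, invented values; in a real network they would be adjusted during training.

    import numpy as np

    def neuron(inputs, weights, threshold):
        # Weight each input, sum, and pass the result on only if it
        # exceeds the node's threshold.
        total = np.dot(inputs, weights)
        return total if total > threshold else 0.0

    # A tiny two-layer network with invented weights and thresholds.
    x = np.array([1.0, 0.5, 0.25])                     # data entering the top layer
    hidden_weights = np.array([[0.5, -0.2],
                               [0.3,  0.8],
                               [-0.4, 0.1]])
    hidden = [neuron(x, hidden_weights[:, j], threshold=0.1) for j in range(2)]
    output = neuron(np.array(hidden), np.array([0.6, 0.4]), threshold=0.05)
    print(output)  # the value passed out of the bottom layer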