January 19th, 2020
Welcome, reader, to the first Canaria Technologies blog post. In this series we provide a comprehensive, plain-English introduction to AI. All too often at technology conferences we find people either nodding along and pretending to understand what AI is, or giving pseudo-engineering presentations so incomprehensible that even fellow AI experts can’t follow along.
We’re here to fix that! By the end of this 6-part series you’ll be able to:
You don’t need a background in maths, coding, or engineering to follow this series. We’re not going to be covering anything like algorithm creation (there are plenty of online courses for that already) – we’re going to be covering an overview of different methods used to create self-learning systems with easy-to-follow illustrations.
So, without further ado, let’s jump in!
The term ‘Artificial Intelligence’ refers not to any one specific technique, but to any programming technique that results in the imitation of human intelligence. As an umbrella term, AI covers machine learning (ML) and deep learning, as well as other, more classical mathematical techniques (such as decision trees) that result in simpler imitations of human logic.
There are many different definitions and levels of complexity within artificial intelligence. A good rule-of-thumb definition is ‘a program that can sense, reason, act, and adapt. Any technique that enables computers to mimic human intelligence.’1 A complex example of artificial intelligence requires a team of different specialists to create. A perfect example of this type of complex AI is the software within the Spot robot by Boston Dynamics. The AI within Spot has to combine spatial data (from multiple gyroscopes and LIDAR on the legs and body of the robot) with image recognition from its cameras, alongside movement-stabilization programs, in order to move through its surroundings. Spot is always improving its ability to move through its surroundings based on new information (data), so the AI within Spot is never ‘finished’. It is, however, successful. This is because it has achieved its functions of being able to:
So, AI systems are never ‘finished’. But they are different degrees of ‘successful’ or ‘unsuccessful’ with regard to their ability to complete the specific functions defined by their team of creating engineers.
A simple example of AI would be a customer service chatbot that uses decision trees to mimic an understanding of human language. Even this application is contested: because of its simplicity, it is debatable whether such a bot is really ‘sensing, reasoning, acting, and adapting’ at all. AI covers a wide range.
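To make the decision-tree idea concrete, here is a minimal, purely illustrative sketch of how such a chatbot works under the hood: it doesn’t understand language at all, it just walks a hand-written tree of keyword checks until it reaches a canned reply. (The keywords and responses below are invented for the example.)

```python
def reply(message: str) -> str:
    """Walk a hand-written decision tree to pick a canned response."""
    text = message.lower()
    # Branch 1: anything mentioning a refund.
    if "refund" in text:
        # Deeper branch: do they also mention an order?
        if "order" in text:
            return "Please share your order number and we'll process the refund."
        return "Refunds are handled by our billing team - what did you purchase?"
    # Branch 2: questions about opening hours.
    if "hours" in text or "open" in text:
        return "We're open 9am-5pm, Monday to Friday."
    # Fallback leaf: the tree has no matching branch.
    return "Sorry, I didn't understand. Could you rephrase that?"

print(reply("I want a refund for my order"))
```

Every possible conversation path has to be written out by an engineer in advance, which is exactly why people argue this doesn’t really count as ‘sensing, reasoning, acting, and adapting’.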
A core component of all AI systems is that they require vast amounts of historical data in order to be successful. A good metric for judging how successful or unsuccessful an AI system is likely to be is to establish how much historical data it has access to. For example, for Spot’s image-recognition-based navigation to be deemed likely to be ‘successful’, it requires access to thousands of hours of video footage. A Spot with access to 10 hours of video footage may still function, but it will probably crash into a lot of objects.
Spot is a great example of AI. Unfortunately, AI has become a marketing buzzword over the last few years, and many companies advertising their ‘AI capabilities’ are not using any AI. A recent survey of European technology startups by London-based VC firm MMC revealed that 40% of AI companies do not use any AI1. These companies are usually using forms of statistical mathematics which, although crucial as building blocks towards an eventual AI system, are certainly not ‘true’ AI. The most common example of this is linear regression (a graph-based method of finding patterns in simple data by establishing the relationship between two or more variables) being marketed as AI.
For further information, this short video clearly explains what linear regression is.
An example of a non-AI predictive system that uses linear regression would be using historical data of housing prices to predict what the following year’s property prices are likely to be.
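To show just how un-mysterious this is, here is a minimal sketch of that housing-price example using ordinary least squares, the textbook formula behind linear regression. The prices are made-up illustrative numbers, not real market data.

```python
def fit_line(xs, ys):
    """Ordinary least squares: fit y = slope * x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope = covariance of x and y divided by variance of x.
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
            / sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return slope, intercept

years = [2015, 2016, 2017, 2018, 2019]
prices = [300_000, 312_000, 325_000, 339_000, 352_000]  # hypothetical data

slope, intercept = fit_line(years, prices)
prediction = slope * 2020 + intercept
print(f"Predicted 2020 price: {prediction:,.0f}")  # extrapolate one year ahead
```

This is just fitting a straight line through five points and extending it, which is exactly the kind of statistical building block that often gets rebadged as ‘AI’: there is no sensing, reasoning, or adapting involved.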
The history of AI is a fascinating subject for further reading. Its popularity has been ebbing and waning since the 1950s. Periods of popular interest in AI are followed by AI ‘winters’. This is because the capabilities of AI were frequently overplayed by the mainstream press (‘Robots take over earth! Super-intelligence coming any day now!’), and funders of research or commercial innovation were subsequently disappointed by the real results (‘erm, we’ve managed to train a military AI to recognize tanks in photos…but it’s only right 40% of the time’). This disappointment resulted in funding being pulled from these projects en masse. Anglo-Saxon cultures are currently in the midst of another AI boom. Expect another AI winter to happen within the next decade or so.