
Why we should rename AI

Posted on: Tuesday 9th of January 2018

My first involvement with an AI project came five years ago. Except we weren’t calling it AI then; we were calling it analytics. The project entailed integrating predictive modelling with a decision engine to personalise what customers would be offered when they shopped online. Since then I have read a number of articles on how to personalise using AI which describe pretty much what we did. But at the time AI seemed more futuristic and exotic.

Merriam-Webster defines AI as an area of computer science that deals with giving machines the ability to seem like they have human intelligence. As AI has permeated mainstream consciousness, the nuance that it is a branch of computer science has been lost. Intelligence viewed through the lens of computer science yields a narrower definition, based on what it can deliver, than the more general one most people would recognise.

AI solves problems by using complicated maths. A better name for it would be computer-based reasoning. That is why AI’s first applications were in deductive thinking – deduction being an arithmetic transformation as well as a branch of logic – for example, experts distilling their knowledge into a series of rules that can be coded, such as for legal and tax compliance. Inductive logic was next, as it can be codified using statistical approaches – for example, the models banks use to assess our credit-worthiness by estimating how likely we are to default on a loan. And with the advent of Big Data, we are increasingly seeing abductive approaches that use much wider but less complete datasets to draw logical inferences about how we will behave.
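The contrast between deductive and inductive approaches can be sketched in a few lines of code. Both functions below are toy illustrations: the registration threshold and the model coefficients are invented for this post, not taken from any real tax rule or bank scorecard.

```python
import math

# Deductive reasoning: an expert's knowledge distilled into a hard-coded rule.
# The threshold is a hypothetical figure, purely for illustration.
def must_register_for_vat(annual_turnover):
    REGISTRATION_THRESHOLD = 85_000
    return annual_turnover > REGISTRATION_THRESHOLD

# Inductive reasoning: a statistical model whose coefficients a bank would
# estimate from historical loan outcomes (e.g. via logistic regression).
# These coefficients are made up for the sake of the example.
def default_probability(annual_income, outstanding_debt):
    score = -2.0 - 0.00005 * annual_income + 0.0001 * outstanding_debt
    return 1 / (1 + math.exp(-score))  # squash the score into a probability

print(must_register_for_vat(90_000))                         # → True
print(round(default_probability(40_000, 10_000), 3))         # → 0.047
```

The rule fires deterministically; the model only ever outputs a probability, which is exactly the difference between deduction and induction that the paragraph above describes.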

The reason why AI is causing so much excitement (and concern) is that it can now mimic other aspects of intelligence that until now have been exclusively human – the abilities to learn, perceive and create. But the underlying process remains mathematical, meaning the process is time-consuming or the results are mixed.

Take learning, for example. Libratus, the AI solution which beat the world’s best poker players, learned by playing trillions of hands against itself. Similarly, for an AI solution to correctly perceive a cat in a picture, it must be trained on hundreds of thousands of images tagged as containing a cat or not. For a five-year-old, a training set of five is usually sufficient. Some specific forms of perception are very amenable to maths – facial recognition, speech recognition, emotion recognition (both voice and expression) – so long as the quality of the sound or image is good enough. But for general perception, the rules are much more complex.

With creativity, the process involves consuming and deconstructing enough examples that rules can be derived for creating new ones. This works well when generating granular weather forecasts from meteorological data, but not when translating a photograph into a painting in the style of Van Gogh.

Not all intelligence is founded in logic, and trying to derive it mathematically creates deficiencies. Renaming AI as computer-based reasoning would be more accurate and less scary.