Artificial Intelligence: Separating Fact from Science Fiction

  • Published: 29 October 2019
  • The Say Team

Artificial intelligence has been a recurring theme in film and literature since the ’60s, and the likes of 2001: A Space Odyssey and The Terminator have left the majority of us misunderstanding its capabilities and, in some cases, with an outright distrust of the technology. Some worry it will take over the world, others think it’s just a buzzword that will never fulfil its potential, and most are unsure of its future. But I’m here to tell you that AI akin to HAL 9000, Blade Runner’s Roy Batty or Disney’s less sinister iteration, WALL-E, is a long way off.

The state of AI

AI has an image problem, much of it due to the way it is presented by media outlets and marketing teams. New technologies and products with only minor elements of AI or learning are triumphantly branded with the AI label, like Oral-B’s AI toothbrush. People should ultimately know that, despite the advertising, the brand’s new ‘AI powered toothbrush’ won’t pass the Turing Test. An MMC survey revealed that as many as 40% of European startups claiming to use AI in their products actually don’t.

Another factor that fuels misinformation about AI is the way the media treats the subject. With each incremental advancement in the field there’s usually a host of articles alluding to our impending doom, often accompanied by an image of Arnold Schwarzenegger’s alter ego – like this piece on Mail Online, for instance. The media also has a tendency to use the terms artificial intelligence, machine learning and deep learning interchangeably, and seemingly at random. Though the three are very closely linked, it is important to make the distinction between them: machine learning is a subset of AI, and deep learning a subset of machine learning. These phrases, along with AI itself, have become buzzwords and have only added to the confusion around their capabilities.

Interestingly, one of the biggest contributors to our skewed understanding of AI is our tendency to anthropomorphise. This is most pronounced in the way we evaluate software like Siri and Alexa, as the slightest hint of human behaviour from these machines can lead us to greatly overestimate their intelligence. In reality, they’re simply using pattern recognition to give predetermined responses based on the data the user has input (their voice), albeit in a very convincing way.

That being said, there’s certainly a scale of intelligence, ranging from the aforementioned ‘genius’ toothbrush to more complex iterations, like AI journalism or Google’s AI that independently decided to teach itself about cats.

Defining AI: Alexa vs Skynet

Whilst writing this blog, I came across a litany of definitions for AI; my personal favourite, although totally unrelated, was the three-toed ai. But tree-dwelling mammals aside, the complexity of AI makes it difficult to explain concisely, so it’s unsurprising that it has so many different definitions.

Most definitions take a very wide scope, as there are so many sets and sub-sets that make up the branches of AI, with still more yet to be discovered and categorised. So, for the purpose of this blog, we’ll stick to the most rudimentary level and divide it into two main categories.

Specialised AI (or Weak AI) is what we have now: a machine with a very narrow focus that, generally speaking, can only perform one task, albeit well. Siri and Alexa are prime examples; rather than ‘learning’ in a human sense, they employ more advanced pattern-recognition tools.
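To make the idea concrete, here is a deliberately toy sketch of pattern recognition mapped to predetermined responses. This is not how Siri or Alexa actually work internally (they use far more sophisticated speech and language models); the patterns and replies below are purely hypothetical, invented for illustration.

```python
import re

# Hypothetical pattern-to-response table for a minimal assistant sketch.
# Each recognised pattern in the user's input triggers a canned reply.
RESPONSES = [
    (re.compile(r"\bweather\b", re.IGNORECASE), "Here is today's forecast."),
    (re.compile(r"\btimer\b", re.IGNORECASE), "Starting a timer now."),
    (re.compile(r"\bmusic\b", re.IGNORECASE), "Playing your playlist."),
]

def respond(utterance: str) -> str:
    """Return the first predetermined response whose pattern matches."""
    for pattern, reply in RESPONSES:
        if pattern.search(utterance):
            return reply
    return "Sorry, I didn't understand that."

print(respond("What's the weather like?"))  # Here is today's forecast.
```

The point of the sketch is that nothing here ‘understands’ weather or music: the system only matches surface patterns and emits pre-written answers, which is exactly why a convincing response is not evidence of human-like intelligence.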

Generalised AI (or Strong AI) would be a machine that can think and learn like a human being, performing tasks to the same or a better standard – or, more simply, AI the way it’s depicted in Hollywood films. Unfortunately, this does not exist yet, though many countries are aiming to change that.

What’s next for AI?

We’ll no doubt see the applications of specialised AI continue to increase dramatically over the coming years – from increased use in our homes to automation in the workplace. The journey to AI dominance is being likened to the space race that began in 1957, with China declaring its ambition to be the world leader by 2030. Whether China can stick to this timeline is uncertain.

The hype surrounding generalised AI as a defining technology for humanity could not be higher – if we ever manage to achieve it, the possibilities are said to be endless. In 1964 Isaac Asimov, considered one of the fathers of robotics, made a startlingly accurate prediction for the future: “The world of A.D. 2014 will have few routine jobs that cannot be done better by some machine than by any human being. Mankind will therefore have become largely a race of machine tenders.”

However, Asimov also predicted that the advancement of technology would make traditional education obsolete, reasoning that information would be so readily available from home computers that there would simply be no need for school. Which illustrates that, no matter how much thought you put into it, it’s easy to underestimate the human capacity for wasting time on the internet – and the folly of trying to predict the future.

Understanding AI

It’s important to have a basic understanding of AI to avoid being misled by vague or outright incorrect news and consequently getting swept up in the hype. Understanding the uses of AI is crucial, as it’s already becoming central to everyday life, both in business and in our private lives. It’s vital that we grasp the opportunities and limitations that AI presents to drive businesses forward. Gartner predicts that in 2021 AI will generate $2.9 trillion of business value.

Marketing teams, on the other hand, can learn from this confusion and adjust their positioning accordingly. They can shift their brand messaging to be more clear, concise and upfront to drive interest and sales without being misleading – or risk leaving their audiences confused, disengaged and waiting for doomsday.

By Charlie E. 
