Depending on who you ask, artificial intelligence is either going to save humanity or kill us all. At the same time, AI seems to be in everything from home assistants to washing machines. But what is AI, and why is everyone going on about it?
Here’s how to explain artificial intelligence to your family over Christmas when they ask whether they’ve unwittingly invited the AI apocalypse into their living room.
What is artificial intelligence?
Roughly speaking, artificial intelligence research is all about creating computers that can perform tasks that usually require human intelligence. Speech and image recognition, translation and complex decision making are some examples of the kinds of tasks that, until now, have always required a human touch.
So it’s just a fancy way of describing computers then?
Not really. Many of the tasks computers perform don’t involve anything that looks much like intelligence. Just look at the word computer: a computer computes. It’s a glorified calculator (sort of). The computer takes an input from a human – a number, image, or command – performs a series of predefined calculations, and spits out an answer.
And what does AI do that’s different?
Maybe it’ll help to talk about how humans perform tasks, to see how we’re so different. Say we see a nice dog in the street. Most of us can pretty immediately recognise that it’s a dog, even if we can’t see most of it, or perhaps can only hear it. We’re pretty good at recognising dogs no matter what they look like – even if they’re just cartoons or weird sketches. And we learned all this without anyone explicitly sitting us down, pointing at all these different kinds of dogs, cartoon and real, and telling us that they are indeed dogs. Most kinds of computing tasks just don’t work like that – older image recognition software required humans to tell the computer precisely what to look out for in order for it to recognise an image.
Now I’m more confused. What have dogs got to do with anything?
It’s just an example. The basic idea behind artificial intelligence is to see if we can give computers some of the decision-making abilities that humans have. We’re really good at making sense of the world even based on relatively little data and without being given many explicit rules. We learn to speak, after all, just by hearing and seeing other people speak. No machine comes with that seemingly innate capability to acquire new skills without explicit guidance.
Alright then, how do we go about doing that?
The field of artificial intelligence has been around for decades, but things have really heated up lately because of breakthroughs in the area of machine learning. There are lots of ways of approaching machine learning, but one common approach is to gather together a whole bunch of data and feed it through a special type of algorithm that gradually learns to extract meaning from that data. So to go back to our dog example, you could show tens of thousands of different photos labelled as ‘dog’ to a machine learning algorithm and – if everything is working as expected – it should eventually become able to recognise photos of dogs it has never seen before.
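For the curious, the idea of learning from labelled examples can be sketched in a few lines of Python. This is a deliberately toy version – a ‘nearest centroid’ classifier over two made-up feature numbers, nothing like the neural networks behind modern image recognition – but the shape is the same: feed in labelled examples, then ask the program to label something it hasn’t seen before.

```python
# Toy illustration of learning from labelled examples.
# Each "photo" is reduced to two made-up numbers (imagine hypothetical
# features such as snout length and tail wagginess); real systems learn
# their own features from raw pixels.

def train(examples):
    """Average the feature vectors for each label (its 'centroid')."""
    sums, counts = {}, {}
    for features, label in examples:
        s = sums.setdefault(label, [0.0, 0.0])
        s[0] += features[0]
        s[1] += features[1]
        counts[label] = counts.get(label, 0) + 1
    return {label: (s[0] / counts[label], s[1] / counts[label])
            for label, s in sums.items()}

def predict(centroids, features):
    """Label a new example by whichever centroid it sits closest to."""
    def dist(c):
        return (features[0] - c[0]) ** 2 + (features[1] - c[1]) ** 2
    return min(centroids, key=lambda label: dist(centroids[label]))

# Labelled "training photos": (features, label)
training_data = [
    ((0.9, 0.8), "dog"), ((0.8, 0.9), "dog"), ((0.95, 0.7), "dog"),
    ((0.1, 0.2), "cat"), ((0.2, 0.1), "cat"), ((0.15, 0.25), "cat"),
]

model = train(training_data)
print(predict(model, (0.85, 0.75)))  # a dog-like photo it has never seen
```

The crucial point is that nobody wrote a rule saying what a dog is – the program worked it out from the labelled examples, which is the essence of the machine learning approach described above.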
Oh, is that it?
It’s early days, but researchers are working to recreate this approach across lots of different tasks, and now everything from Amazon’s Alexa to Google Translate incorporates something a little like what was described above. So far, most of the practical uses of machine learning are for very specific tasks. Machine learning algorithms have been pretty good at teaching computers to perform dedicated tasks, but they’re yet to bring us much beyond that.
What does the future of AI look like?
One area that researchers are really interested in is called artificial general intelligence (AGI). Humans aren’t just good at learning really specific tasks, we’re also pretty great at transferring knowledge between tasks too. Once we’ve learned to pick up a mug, for example, we don’t need to relearn from scratch how to pick up a book. AGI researchers are interested in creating machines that are able to transfer knowledge from one domain into another. That’s why the researchers at Google’s machine learning outfit, DeepMind, are so chuffed that AlphaZero, the successor to the AlphaGo algorithm that beat the world’s best Go players, can also learn how to play chess.
So what about that AI apocalypse?
Some people – Elon Musk in particular – are worried that we’re heading down a path towards super-intelligent robots that could eventually decide they know better than we do, and use their skills to kill us all. Or something like that. There’s no imminent threat, but Musk might have a point. If we’re going to give machines human-like abilities, many people think we should be seriously thinking about how we make sure they don’t end up using those abilities to harm us. But for the time being, there’s probably no danger that your washing machine will turn sentient and kill all your family. Probably.