
Rise of the machines: Is a robot going to take your job – or worse?

06:11 24/04/2019
Perhaps the movies can give us a clue about where artificial intelligence might take us.

Every evening when I start my car near my office, Siri infers from my GPS position that I’m on my way home, trawls for traffic information online and calculates an approximate driving time. Without my having asked for any of it. This is just one area in which the artificially intelligent assistant inside an iPhone has begun to facilitate our daily lives.

Other early AI applications available today include systems that are rapidly transforming the financial, retail, legal and other sectors, Google and Amazon’s smart speakers, and self-driving cars. Our world is gradually evolving into the kind of place that was prophesied decades ago in the movies. Science fiction classics were keen on showing us the benefits of computer technology – which was still in its infancy when these movies hit theatres – but also on alerting us to the dangers it could pose.

The 2001 scenario

When director Stanley Kubrick and writer Arthur C Clarke unleashed 2001: A Space Odyssey on us in 1968, there were no personal computers to speak of. But the machine antagonist they created for the movie, the soft-spoken, dangerously intelligent computer HAL 9000, became an early warning of what can go wrong with technology once we allow it to take over our lives. Of course, the actual year 2001 has long since passed, and AI hasn’t yet materialised in the same way. In fact, current AI systems, or those their developers call artificially intelligent, are pretty dumb. Siri, for instance, doesn’t really understand that I’m heading ‘home’; it simply infers a destination from the fact that my drive tends to end near a certain address (the wrong one, at that: for some reason, Siri consistently thinks I want to drive to our neighbour across the street).
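
To make the anecdote concrete, here is a minimal sketch of what such a guessing heuristic might look like: snap the end point of a drive to the nearest saved address and declare that the destination. Every name and coordinate below is invented for illustration – Apple has not published how Siri actually does this – but it shows how a naive nearest-match rule can land on the neighbour’s house instead of your own.

```python
import math

# Hypothetical saved addresses as (latitude, longitude) pairs; all values invented.
known_addresses = {
    "home":             (51.05430, 3.71740),
    "neighbour_across": (51.05440, 3.71760),
    "office":           (51.05000, 3.73000),
}

def guess_destination(trip_end):
    """Naive heuristic: assume the nearest saved address is the destination."""
    return min(
        known_addresses,
        key=lambda name: math.dist(trip_end, known_addresses[name]),
    )

# A drive that ends a few metres on the wrong side of the street is enough
# for the nearest match to be the neighbour's house rather than home.
print(guess_destination((51.05442, 3.71757)))  # -> 'neighbour_across'
```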

But it’s getting there. Siri, Google Assistant and Alexa, the three leading consumer-grade AI systems, are derived from much more complex systems that Apple, Google and Amazon are developing in their AI labs – systems that are capable of learning by themselves. The principle behind this machine learning is simple: once you have seen 1,000 images of a zebra, you can say with a fair degree of certainty that image 1,001 shows a zebra too. That’s how self-learning computer systems work.
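
As a toy illustration of that learn-by-example principle, here is a minimal sketch in Python. The feature vectors are invented stand-ins for real images – real systems learn from millions of examples, not four – and the nearest-neighbour classifier simply labels a new example by its resemblance to the labelled ones it has already seen.

```python
# Toy sketch of learning by example; each "image" is reduced to two invented
# feature scores, e.g. how striped and how horse-shaped the animal looks.
from sklearn.neighbors import KNeighborsClassifier

training_features = [
    [0.90, 0.80],  # strongly striped, horse-shaped -> zebra
    [0.85, 0.90],  # zebra
    [0.10, 0.90],  # barely striped, horse-shaped -> horse
    [0.05, 0.85],  # horse
]
training_labels = ["zebra", "zebra", "horse", "horse"]

model = KNeighborsClassifier(n_neighbors=3)
model.fit(training_features, training_labels)

# "Image 1,001": labelled by its resemblance to what the model has seen before.
print(model.predict([[0.88, 0.87]]))  # -> ['zebra']
```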

The study of artificial intelligence has been around since the 1950s, but it has moved into the fast lane since the rise of neural computing chips less than a decade ago. Today’s neural network processors mimic the workings of a human brain: they consist of neural connections between all their smaller components, just as the neurons in the brain are connected by synapses. Just like your average computer chip, these neural processors grow in computing power over time, which means that computers will not only become smarter by learning; the hardware inside them will also be able to make more and more calculations. The human brain is still vastly more powerful than the most powerful computer in the world: it consists of approximately 100 billion neurons, while the systems built by IBM and Google can barely reach a few million artificial ones. But just as conventional computing power has grown tremendously over recent decades, these new neural processors are expected to follow the same growth curve.
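
A single artificial ‘neuron’ of the kind these processors run in vast numbers can be written in a few lines. This is only a schematic sketch with invented numbers: inputs arrive over weighted connections, the rough analogue of synapses, and the neuron’s output rises as the weighted sum crosses its threshold.

```python
import numpy as np

# One artificial neuron: a weighted sum of inputs plus a bias, squashed
# through a sigmoid so the output lands between 0 and 1. All values invented.
inputs  = np.array([0.2, 0.9, 0.4])   # signals arriving from other neurons
weights = np.array([0.5, -0.1, 0.8])  # connection strengths ("synapses")
bias    = -0.3                        # shifts the neuron's firing threshold

output = 1 / (1 + np.exp(-(np.dot(inputs, weights) + bias)))
print(f"neuron output: {output:.2f}")  # -> 0.51
```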

Which means that, somewhere in the future, computers might actually become ‘smarter’ than humans. “But you shouldn’t expect that in the near future,” says Jonathan Berte, founder and CEO of Ghent technology company Robovision. “A first hurdle computers will have to clear is the evolution to artificial general intelligence: computer systems that know how to handle not just one specific skillset but a great number of them, just like a human brain. The current consensus among computer scientists is that we won’t reach that moment before 2045.”

The Westworld scenario

Alongside AI, we’re also seeing the advent of robotics. We’re still a long way off the human-like cowboy robots we saw in the 1973 movie Westworld and its recent TV adaptation, but there have been vast improvements in the technology. Just look at the recent evolution of the dog-like robot created by American company Boston Dynamics, or the first humanoid robots built by Japanese professor Hiroshi Ishiguro. These machines are still not very smart or dexterous, but that’s where the concurrent growth of AI steps in. “All these new evolutions in tech are beginning to converge at this point, and are reinforcing each other,” says Belgian tech visionary Peter Hinssen. “Robots get better because AI is also growing at breakneck speed. Their motor skills are improving fast too: there are bricklaying robots out there that can build a wall of five thousand bricks in a day. That’s a feat no human bricklayer can match.”

But just like today’s single-purpose AI systems, robots are still a long way off the sentient beings we saw in Westworld. “An AI system is not an artificially intelligent being,” Berte says. “The development of machines towards biomechanical entities tends to run much slower than, for instance, Moore’s Law, which predicts the growth of computing power. Robots today are not much further evolved than the ones that entered the market ten years ago.”

The Metropolis scenario

In the silent-era movie Metropolis (1927), a Maschinenmensch – a machine built in human form – is unleashed on the city’s workforce. For many, it was a cautionary tale about man’s dependence on machines. Its theme seems more relevant than ever, now that machines are starting to enter our actual workplaces: not worker-bot machines, but AI-driven systems that are beginning to do the same white-collar jobs as humans.

In customer support departments it’s already happening, with AI-driven bots slowly taking over simple client communications through web chat. And two years ago, IBM installed a version of its Watson AI computer at the American law firm BakerHostetler. The machine helps the firm’s lawyers find answers to legal questions by letting them search its ‘brain’, which contains millions of pages of legal text. Clerks, lawyers, accountants and other professionals whose jobs were until a few years ago deemed safe from the onslaught of computer technology now see their roles threatened by the rise of artificial intelligence. Consultancy firm McKinsey believes that half of all the tasks the world’s workforce performs today could be executed just as well by these new technologies, resulting in a worst-case scenario of 800 million jobs lost by 2030.

“For people who work with information, with data, it’s almost guaranteed that an algorithm will pop up that does their job better than they can,” Hinssen says. “The biggest question for the workforce today is: what can you do on top of that, as a human? You can’t stop the evolution. But we can change our education system, so that people entering the workforce today are ready to use these new technologies as tools, to work alongside them, and to look for the added value that we as humans bring to our jobs. We’ll have to go all-in on human creativity in the future. We’ll be able to work better and smarter with a machine by our side than without one. Today’s education system isn’t doing a good job in that regard: we’re still training people to reproduce things, which is exactly what computers will soon be able to do better.”

The Terminator scenario

And then there’s the growing nervousness that the computers we’re creating today will want to destroy us tomorrow. It’s a theme explored in films like 1984’s The Terminator, in which a sentient AI system, Skynet, takes over the earth by weaponising robot technology and seizing control of human military systems – including nuclear weapons. It’s fiction, of course, but according to people like futurologist Ray Kurzweil, Microsoft co-founder Bill Gates and physicist Stephen Hawking, it’s a very real scenario. It’s the story of the Singularity: the moment when computers grow to an artificial intelligence beyond our puny human minds, and no longer need us.

“Many say it’s game over for us, but that paints a bleak picture I don’t subscribe to,” Hinssen says. “Look back at evolution: humans have evolved from the same unicellular organisms as squirrels or giraffes. We’ve all evolved into an astonishing palette of creatures. A squirrel’s brain is wired to remember the locations of up to three thousand nuts it has stored over the course of five years. That’s a memory we, as humans, simply don’t have. The same differences will remain when AI becomes our cognitive equal. Some systems will be smarter than us, some won’t: I think we’ll see an explosion of different AI types that will coexist with us – not just one that wants to rule the world and enslave us all.”

Written by Ronald Meeus