The debate over how machines and artificial intelligence (AI) will affect the future of work is only intensifying. How can we, as humans, predict the impact of machines that are developing intelligence beyond our own capabilities?
We now know that machines will automate many routine tasks, and even entire jobs, which in turn will free up time for work that requires human judgement, creativity and empathy. After all, machines can’t mimic these innately human qualities, right?
In the 1980s, author Richard Susskind wrote his doctorate on AI and its impact on the law. Together with Professor Phillip Capper, he was part of the vanguard that created the first commercially available AI system in law. Their approach was to sit down with a lawyer, have them explain their methodology, and capture that knowledge as a set of instructions and rules for the machine to follow.
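That rule-capture approach can be sketched as a simple program: the expert's reasoning is written down as explicit if/then rules. The function and the rules below are invented for illustration, loosely inspired by the limitation-period questions such legal systems dealt with; they are not taken from Susskind and Capper's actual system.

```python
# A minimal sketch of the rule-based (expert-system) approach:
# an expert's reasoning is hand-coded as explicit if/then rules.
# All rules and thresholds here are illustrative inventions.

def limitation_period_expired(years_since_damage: float,
                              damage_was_latent: bool,
                              years_since_discovery: float) -> bool:
    """Answer a toy legal question by following fixed rules."""
    if not damage_was_latent:
        # Ordinary claim: six-year limitation period (illustrative rule).
        return years_since_damage > 6
    # Latent damage: three years from the date of discovery (illustrative).
    return years_since_discovery > 3

print(limitation_period_expired(8, damage_was_latent=False,
                                years_since_discovery=0))  # True
print(limitation_period_expired(8, damage_was_latent=True,
                                years_since_discovery=1))  # False
```

The machine here is entirely transparent: every conclusion can be traced back to a rule a human wrote down.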
This approach followed the belief that machines have to copy the way that human beings think and reason in order to outperform them. It is a belief that Richard Susskind’s son, Daniel Susskind, is now challenging 30 years later.
Advances in processing power, data storage capacity and algorithm design mean that the distinction between routine tasks (those humans can easily explain) and non-routine tasks (those that require human intuition) is blurring. This is not to say machines are starting to think or reason like humans, and maybe that’s the problem. Are we looking at AI through the wrong lens?
Machines cannot think, reason and feel like humans because, quite simply, they are not human. They can, however, perform tasks by running pattern-recognition algorithms over hundreds of thousands of past cases. While this may lead to the same conclusion a human would reach, the machine arrives there in a distinctly non-human way. And whereas 30 years ago machines were transparent, working only with information fed to them by humans, today they are far more opaque.
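The contrast with the rule-based approach can be sketched in a few lines: instead of encoding anyone's reasoning, a pattern-recognition system copies the outcome of the most similar past case. The data and features below are entirely invented for illustration, using a simple nearest-neighbour rule.

```python
import math

# Toy "past cases": (feature vector, outcome). Invented data; the
# features might be, say, claim size and years elapsed, with
# outcome 1 = claim upheld, 0 = claim dismissed.
past_cases = [
    ((1.0, 2.0), 1), ((1.5, 1.8), 1), ((1.2, 2.2), 1),
    ((5.0, 9.0), 0), ((5.5, 8.5), 0), ((4.8, 9.2), 0),
]

def predict(features):
    """1-nearest-neighbour: return the outcome of the most similar
    past case. No rules are written down anywhere; the 'reasoning'
    is just distance in the data."""
    nearest = min(past_cases,
                  key=lambda case: math.dist(features, case[0]))
    return nearest[1]

print(predict((1.1, 2.1)))  # near the first cluster -> 1
print(predict((5.2, 9.1)))  # near the second cluster -> 0
```

Notice there is nothing to inspect except the data: the system's "methodology" was never articulated by anyone, which is exactly the opacity described above.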
There is a common misconception that a machine can only be as intelligent as the human who programmed it, but we now know that human intelligence does not represent any sort of finishing line for machine capability. Machines no longer need to be spoon-fed human knowledge to surpass human intelligence. Take the match between world Go champion Lee Sedol and AlphaGo, the machine built by Google’s DeepMind.
Go is an ancient Chinese board game which reportedly has more possible positions than there are atoms in the observable universe. AlphaGo was given no handcrafted strategy for winning; it learned from past games and from playing against itself, and in a spectacular and surprising way it went on to beat Lee four games to one.
In a world where we cannot use human logic to predict machine intelligence, what does this mean for the future of work? Last year McKinsey found that fewer than 5% of jobs can be fully automated, though individual tasks within most jobs can be automated far more readily. This leaves the next generation of workers with two routes: build machines, or compete against them.
In a recent TED talk, Daniel Susskind explained that he used the word “compete” very deliberately, where most people would say “collaborate”. We still hold to the belief that man with machine is better than man versus machine, but as I’ve just described, machine learning means machines can now outperform man on their own.
While it is true that many automated tasks and AI capabilities complement human activity – take the trusty satnav – in the future workers will increasingly have to compete with machines for jobs. That means the next generation of workers will need digital skills of a high standard.
This may all seem doom and gloom, but just because machines can perform some human tasks better than humans does not mean they should. AI now aids decisions on granting parole, but should it have the same hand in handing down a life sentence? Machines help make accurate medical diagnoses, but should they play the same role in deciding whether to switch off life support? For now, some tasks remain fundamentally human in nature.
Thanks to artificial intelligence, machines can now perform tasks in the workplace better than humans. But should they do so?