There was one telling sentence in SiliconAngle’s article covering the fact that AlphaGo Zero, DeepMind’s latest AI, had mastered chess: “IBM spent more than 10 years perfecting Deep Blue before it successfully mastered chess. AlphaGo Zero did it in just 24 hours.”
This is after AlphaGo Zero had already gained “superhuman performance” in shogi and Go, which is another crucial differentiator from traditional AIs. We humans never needed to fear an intelligence that could master one game, but now that AlphaGo can play three, the rules have changed. For the first time, AI can adapt, not simply follow the rails we humans provide at the point of design.
There are parallels with the development of computers. We started with single-use machines that had to be fine-tuned with human assistance to perform a certain task, such as breaking ciphers. Or early word processors, such as the IBM DisplayWriter (watch the brilliant promo video below).
Within two decades, though, these machines were software-controlled and could do pretty much anything – so long as it involved the manipulation and presentation of dots on a screen or paper.
Computers didn’t rise up and take over the world, but they did transform it. So we can expect a similar change to occur as AI matures and becomes another consumable technology, in the same way that we can now buy £400 laptops from PC World.
Clearly, AI is different to a Windows PC. We never needed to fear that Microsoft Word would overpower us through its ingenious use of Comic Sans. But it’s still a digitally contained “thing”: the dots on a page have become 0s and 1s in an audio file.
So why are people so scared of AI? One key reason is that AI only needs to become marginally better than humans at creating other AIs before their development becomes exponential – and we lose control.
It’s this fear that led to various headlines from an interview Stephen Hawking gave to Wired: “AI may replace humans altogether,” he warned. “If people design computer viruses, someone will design AI that replicates itself. This will be a new form of life that will outperform humans.”
So are there reasons to be fearful? Absolutely. New technology always brings danger to accompany the exciting new opportunities. And it’s true that there are people out there bent on destruction.
But it would be a huge mistake to be swayed by Daily Mail headlines such as “AI could ‘replace humans altogether’: Professor Stephen Hawking warns that robots will soon be a ‘new form of life’ that can outperform us”.
Why? Because if we stopped investing in technologies that also brought danger then we wouldn’t have trains, planes, roads, cars, bicycles or indeed the buildings we live in.
As Stephen Hawking also said in his interview, we should keep developing AI – we just need to be mindful of the dangers as we go. Just like every single technology that’s ever existed.