Mankind's challenge: A morality code for machines
Gwynne Dyer says paranoia about artificial intelligence is nothing new, and we can design machines that aren't just smart, but also good
The experts run the whole gamut from A to B, and they're practically unanimous: artificial intelligence (AI) is going to destroy human civilisation. Expert A is Elon Musk, polymath co-founder of PayPal, head of the electric-car maker Tesla, creator of SpaceX, the first privately funded company to send a spacecraft into orbit, and much else. "I think we should be very careful about artificial intelligence," he told an audience at the Massachusetts Institute of Technology in October. "If I were to guess what our biggest existential threat is, it's probably that." Musk warned AI engineers to "be very careful" not to create robots that could rule the world.
Expert B is Stephen Hawking, the world's most famous theoretical physicist. He told the BBC this week that "the development of full artificial intelligence could spell the end of the human race". A genuinely intelligent machine, Hawking warned, "would take off on its own, and redesign itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn't compete and would be superseded".
Musk and Hawking are almost 50 years behind popular culture in their fear of rogue AI turning against human beings (HAL in 2001: A Space Odyssey). They are a full 30 years behind the concept of a supercomputer that achieves consciousness and launches a war of extermination against mankind (Skynet in the Terminator films). Popular culture has produced plenty of similar variations since. It's taken a while for the respectable thinkers to catch up with all this paranoia, but they're there now.
Let's look at this more calmly. Full AI, with capacities comparable to the human brain or better, is at least two or three decades away, so we have time to think about how to handle it.
A society that builds genuinely intelligent machines might well end up as a place in which those machines had "human" rights before the law, but that's not what worries the sceptics. Their fear is that machines, having achieved consciousness, will see human beings as a threat (because we can turn them off, at least at first), and that they will therefore seek to control or even eliminate us. That's not very realistic.
The saving grace in the real scenario is that AI will not arrive all at once. It will be built over decades, which gives us time to introduce a kind of moral sense into the basic programming, rather like the innate morality that human beings are born with.
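What might a moral sense in the basic programming look like in practice? Here is a purely illustrative sketch, not anything Dyer or the researchers he cites propose, in which every rule and name is hypothetical: an agent whose planner may propose whatever actions it likes, but whose fixed constraint layer, which the agent cannot rewrite, vetoes any action that breaks a core rule.

# A toy illustration of a built-in "moral sense": the planner proposes,
# a frozen layer of core rules disposes. All names here are hypothetical.
from dataclasses import dataclass
from typing import Callable, List

@dataclass(frozen=True)  # frozen: the agent cannot alter its own rules
class MoralRule:
    name: str
    violates: Callable[[str], bool]  # True if the action breaks this rule

# Hard constraints checked before any action is carried out.
CORE_RULES: List[MoralRule] = [
    MoralRule("no_harm", lambda action: "harm_human" in action),
    MoralRule("no_deception", lambda action: "deceive" in action),
]

def choose_action(proposed: List[str]) -> str:
    """Return the first proposed action that passes every core rule."""
    for action in proposed:
        if all(not rule.violates(action) for rule in CORE_RULES):
            return action
    return "do_nothing"  # safe default when every proposal is vetoed

if __name__ == "__main__":
    plans = ["harm_human_to_save_time", "ask_for_permission"]
    print(choose_action(plans))  # prints "ask_for_permission"

The point of the toy is the architecture, not the particular rules: the moral layer sits beneath the planner and is frozen, so a machine that grows smarter does not thereby grow free to discard it.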