Professor Stephen Hawking has said, on several occasions, that efforts to create thinking machines pose a threat to humankind. He might be the most famous thinker to hold this view, but he is certainly not the only one. The fear is that thinking machines (AI), when they finally reach human-level intelligence, will enhance themselves and become superintelligent, completely outwitting us humans. An explosive technological development would follow, and the moment it begins is known as the technological singularity.
Many thinkers, including Nick Bostrom, believe that the singularity will be either very good or very, very bad. In the utopian scenario, machines or robots would enhance the lives of humans; they might invent pharmaceuticals that can cure diseases such as bipolar disorder and Alzheimer's. In the dystopian scenario, the machines or robots would be more like the computer HAL or a terminator, seeing humans as a threat to their position as the new rulers of the world.
A Disneyland without children
“A society of economic miracles and technological awesomeness, with nobody there to benefit,” as Bostrom writes in his new book Superintelligence. “A Disneyland without children.”
A majority of experts believe that human-level AI will be achieved fairly soon, according to a review of the book in the Financial Times:
“About half the world’s AI specialists expect human-level machine intelligence to be achieved by 2040, according to recent surveys, and 90 per cent say it will arrive by 2075. Bostrom takes a cautious view of the timing but believes that, once made, human-level AI is likely to lead to a far higher level of ‘superintelligence’ faster than most experts expect – and that its impact is likely either to be very good or very bad for humanity.”
We have read a brand new, hugely entertaining and accessible book on this subject, The Technological Singularity (MIT Press) – it’s published in MIT Press’s very fine Essential Knowledge series – and we had the opportunity to ask its author, Professor Murray Shanahan, a few questions. Mr. Shanahan is Professor of Cognitive Robotics in the Department of Computing at Imperial College London.
Could you please tell us a little bit about your background, about how you began to take an interest in theories about the technological singularity and how this interest has developed?
– I was fascinated by robotics and artificial intelligence at an early age, mainly thanks to science fiction books and movies. I was especially influenced by Asimov’s robot stories. But I also loved Dr Who as a kid, and always wanted to know what a Dalek was like inside. I used to draw pictures of the insides of robots – mainly random boxes and wires, if I recall.
– Years later, when I was studying computer science at high school, I started thinking more seriously about the possibility of AI. I read I. J. Good’s seminal paper “Speculations Concerning the First Ultraintelligent Machine” in the 1980s when I was still a teenager. This 1965 paper was the first serious treatment of the idea of the technological singularity. At that time I was sure that, if human-level AI was developed, it would be a wholly positive thing. I still think so, on balance. But now – partly thanks to the work of Nick Bostrom – I feel it’s important to exercise caution and think about the risks and downsides too.
In your book, the aim is not to make predictions but rather to explore different scenarios. In the literature as a whole on the technological singularity, would you say that the discourse is more dystopian than utopian? Has it changed over time?
– Non-fictional literature on the subject used to exhibit a nice balance of positive and negative portrayals of AI in the future. Recently this has shifted somewhat towards the discussion of risks and dangers, particularly since the publication of Nick Bostrom’s book. Of course, the general public loves a bit of apocalypse, so the media tends to amplify the concerns. And for the same reason, AI often gets a negative portrayal in science fiction movies. But it’s very important not to muddle up science fiction with reality.
– There are real concerns with how to build AI with human-level capabilities and beyond. But it’s not because researchers are worried that AI will have human foibles such as greed and cruelty. Rather it’s because a very powerful AI may be very good at achieving the goals we set it, but in ways with very harmful unanticipated side-effects. Nevertheless, I’m confident that we will solve these problems and that AI will be beneficial to humanity.
Could you please explain why human-level AI could develop very rapidly into superhuman AI and set off an intelligence explosion?
– In my view it would be a small step from human-level AI to superhuman-level AI. As soon as human-level intelligence is realised in a digital substrate then there are some simple, conservative ways to improve it dramatically. For example, it could be speeded up. Or aspects of the system, such as memory capacity, could simply be expanded. These things wouldn’t require any sort of further conceptual breakthrough. The prospect of an intelligence explosion is a different thing. Here we are imagining the implications of self-improving AI – artificial intelligence that is good at creating better artificial intelligence. If an AI can make its own improved successor, which in turn can make an even better successor (perhaps more quickly) then the result could be very dramatic.
Could you please describe a scenario in which a superintelligent AI goes rogue? What kind of damage could it do and what are the most reasonable ways of avoiding AIs going rogue?
– The sort of scenario that worries people is analogous to the sort of folktale where a genie gives you three wishes, and the genie makes them all come true but in horrible ways you weren’t expecting. The worry is that we will be unable to specify what we want a powerful AI to do precisely enough to ensure that it doesn’t think the best way to achieve the goal involves the destruction of humanity as a side-effect. We don’t yet know how serious this problem really is, but people are starting to do research to make sure we can avoid it.
Human-level AI and superhuman AI raise a lot of philosophical questions. What are the most important ones?
– I think the most important philosophical questions relate to what it means to be human. The prospect of human-level AI obliges us to consider very different kinds of intelligence, very different kinds of consciousness even, which throws new light on our own form of intelligence and consciousness. It also makes us ask ourselves what we really want. If we could create servants with god-like powers, what would we ask them to do for us? How would we reshape human society and human life if we could do whatever we wanted?
Ola Wihlke