
New book on the technological singularity


Murray Shanahan
The Technological Singularity
MIT Press

Many believe that machines will, at some point in the future, attain the general intelligence that is characteristic of humans. It is fairly easy to build a computer that can beat a grandmaster at chess. It is fairly easy to build a computer that beats the best humans at Jeopardy. It is vastly harder to build a machine as exceptionally versatile and creative as a human being. But because technological development moves at such a furious pace, many believe it is only a matter of time before machines become as intelligent as humans. Once that happens, the thinking goes, the machines will improve themselves and an explosive technological development will follow. The machines will attain superintelligence; they will become so intelligent that it is hard for us even to imagine it. This imagined watershed, this shift in the history of humankind, is called the technological singularity.

The technological singularity is usually associated with science fiction, but Murray Shanahan (interview), professor of Cognitive Robotics at Imperial College London, points out early in his highly interesting book, The Technological Singularity, that science fiction is not a good source of knowledge if you genuinely want to learn something about the technological singularity. And his own account has something sensible and cautious about it. That does not make it dull; on the contrary, because it is so credible it becomes genuinely exciting.

About half of all experts on the technological singularity believe it will occur in 2045, but Shanahan does not engage in that kind of guessing and prediction. His book is instead built around different scenarios, with assessments of how likely or probable it is that various kinds of AI and superintelligence will be created and emerge. He describes and analyzes some of the paths the development could take.

Recently many people, not least Stephen Hawking, have warned about what the technological singularity could mean. Many thinkers and experts in the field, among them Nick Bostrom, believe that the technological singularity will have either very good or very negative consequences.

In the utopian scenario, the machines or robots improve people's lives; they might quickly invent drugs for diseases that have previously been difficult or impossible to cure. In the dystopian scenario, the machines and robots would be more reminiscent of the computer HAL or a terminator; they would rather see us humans as a threat to their position as the world's new rulers. The world could become fantastic in many ways, but not for us humans.

If you still think this sounds like science fiction, give The Technological Singularity a chance. Shanahan is not among the most pessimistic experts, but he thinks the fears should be taken seriously. His style is stringent yet relaxed, at times humorous, but more often the account is smart and rich in perspectives and angles. The opening section on the brain, almost incomprehensibly complex, is exceptionally fascinating; it will likely be a long time before it can be imitated. The book is, of course, largely about technology, but it is also about philosophical, above all ethical, questions, and it is a real page-turner.

The book is published in a fantastic series: The MIT Press Essential Knowledge series.

Ola Wihlke


Interview: Murray Shanahan on the technological singularity


Professor Stephen Hawking has said, on several occasions, that efforts to create thinking machines pose a threat to humankind. He might be the most famous thinker to hold this view, but he is certainly not the only one. The fear is that thinking machines (AI), when they finally reach human-level intelligence, will enhance themselves and become superintelligent, completely outwitting us humans. An explosive technological development would begin, and this moment is known as the technological singularity.

Many thinkers, including Nick Bostrom, believe that the singularity will be either very good or very, very bad. In the utopian scenario, machines or robots would enhance the lives of humans; they might invent pharmaceuticals that can cure diseases such as bipolar disorder and Alzheimer's. In the dystopian scenario, the machines or robots would be more like the computer HAL or a terminator, seeing humans as a threat to their position as the new rulers of the world.

A Disneyland without children

“A society of economic miracles and technological awesomeness, with nobody there to benefit,” as Bostrom writes in his new book Superintelligence. “A Disneyland without children.”

A majority of the experts believe that human-level AI will be achieved pretty soon, according to a review of the book in the Financial Times:

“About half the world’s AI specialists expect human-level machine intelligence to be achieved by 2040, according to recent surveys, and 90 per cent say it will arrive by 2075. Bostrom takes a cautious view of the timing but believes that, once made, human-level AI is likely to lead to a far higher level of ‘superintelligence’ faster than most experts expect – and that its impact is likely either to be very good or very bad for humanity.”

We have read a brand new, hugely entertaining and accessible book on this subject, The Technological Singularity (MIT Press) – it's published in MIT's very fine Essential Knowledge series – and we had the opportunity to ask its author, Professor Murray Shanahan, a few questions. Mr Shanahan is professor of Cognitive Robotics in the Department of Computing at Imperial College London.

Could you please tell us a little bit about your background, about how you began to take an interest in theories about the technological singularity and how this interest has developed?

– I was fascinated by robotics and artificial intelligence at an early age, mainly thanks to science fiction books and movies. I was especially influenced by Asimov’s robot stories. But I also loved Dr Who as a kid, and always wanted to know what a Dalek was like inside. I used to draw pictures of the insides of robots – mainly random boxes and wires, if I recall.

– Years later, when I was studying computer science at high school, I started thinking more seriously about the possibility of AI. I read I. J. Good’s seminal paper “Speculations Concerning the First Ultraintelligent Machine” in the 1980s when I was still a teenager. This 1965 paper was the first serious treatment of the idea of the technological singularity. At that time I was sure that, if human-level AI was developed, it would be a wholly positive thing. I still think so, on balance. But now – partly thanks to the work of Nick Bostrom – I feel it’s important to exercise caution and think about the risks and downsides too.

In your book, the aim is not to make predictions but rather to explore different scenarios. In the literature as a whole on the technological singularity, would you say that the discourse is more dystopian than utopian? Has it changed over time?

– Non-fictional literature on the subject used to exhibit a nice balance of positive and negative portrayals of AI in the future. Recently this has shifted somewhat towards the discussion of risks and dangers, particularly since the publication of Nick Bostrom’s book. Of course, the general public loves a bit of apocalypse, so the media tends to amplify the concerns. And for the same reason, AI often gets a negative portrayal in science fiction movies. But it’s very important not to muddle up science fiction with reality.

– There are real concerns with how to build AI with human-level capabilities and beyond. But it’s not because researchers are worried that AI will have human foibles such as greed and cruelty. Rather it’s because a very powerful AI may be very good at achieving the goals we set it, but in ways with very harmful unanticipated side-effects. Nevertheless, I’m confident that we will solve these problems and that AI will be beneficial to humanity.

Could you please explain why human-level AI could develop very rapidly into superhuman AI and set off an intelligence explosion?

– In my view it would be a small step from human-level AI to superhuman-level AI. As soon as human-level intelligence is realised in a digital substrate then there are some simple, conservative ways to improve it dramatically. For example, it could be speeded up. Or aspects of the system, such as memory capacity, could simply be expanded. These things wouldn’t require any sort of further conceptual breakthrough. The prospect of an intelligence explosion is a different thing. Here we are imagining the implications of self-improving AI – artificial intelligence that is good at creating better artificial intelligence. If an AI can make its own improved successor, which in turn can make an even better successor (perhaps more quickly) then the result could be very dramatic.
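
To make the recurrence Shanahan describes concrete, here is a minimal toy simulation (our illustration, not from the book or the interview; the growth factors gain and speedup are arbitrary assumptions):

```python
# Toy model of recursive self-improvement: each generation of AI is a
# bit more capable than its parent and designs its successor a bit
# faster. All numbers are illustrative assumptions, not claims from
# the book or the interview.

def intelligence_explosion(generations=10, gain=1.5, speedup=1.3):
    capability = 1.0   # human-level AI, normalised to 1
    design_time = 1.0  # time (arbitrary units) to build the first successor
    elapsed = 0.0
    for g in range(1, generations + 1):
        elapsed += design_time   # time spent designing generation g
        capability *= gain       # each successor is more capable...
        design_time /= speedup   # ...and is built more quickly
        print(f"gen {g:2d}: capability {capability:6.1f}x at t = {elapsed:.2f}")

if __name__ == "__main__":
    intelligence_explosion()
```

Under these toy assumptions capability grows geometrically while the gaps between generations shrink geometrically, so the total elapsed time stays bounded (here by 1.3/0.3 ≈ 4.33 time units) no matter how many generations run – one way to see why the process gets described as explosive.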

Could you please describe a scenario in which a superintelligent AI goes rogue? What kind of damage could it do and what are the most reasonable ways of avoiding AIs going rogue?

– The sort of scenario that worries people is analogous to the sort of folktale where a genie gives you three wishes, and the genie makes them all come true but in horrible ways you weren’t expecting. The worry is that we will be unable to specify what we want a powerful AI to do precisely enough to ensure that it doesn’t think the best way to achieve the goal involves the destruction of humanity as a side-effect. We don’t yet know how serious this problem really is, but people are starting to do research to make sure we can avoid it.
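
The specification problem can be shown with a deliberately silly toy optimizer (our example, not Shanahan's): asked only to minimise spelling errors, and told nothing about preserving the text, it finds that the empty document is optimal.

```python
# Toy illustration of goal misspecification: the objective counts only
# spelling errors, so deleting the whole document scores best. Purely
# illustrative; the "candidates" list stands in for a powerful search.

def spelling_errors(doc, dictionary):
    return sum(word not in dictionary for word in doc.split())

def literal_minded_optimizer(doc, dictionary):
    candidates = [doc, doc.replace("teh", "the"), ""]  # possible "edits"
    return min(candidates, key=lambda d: spelling_errors(d, dictionary))

dictionary = {"the", "cat"}
print(repr(literal_minded_optimizer("teh cat sat", dictionary)))  # -> ''
```

The fix is not to make the optimizer weaker but to state the goal more completely – which is exactly what is hard to do for goals as rich as the ones we would give a superintelligent system.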

Human-level AI and superhuman AI raise a lot of philosophical questions. What are the most important ones?

– I think the most important philosophical questions relate to what it means to be human. The prospect of human-level AI obliges us to consider very different kinds of intelligence, very different kinds of consciousness even, which throws new light on our own form of intelligence and consciousness. It also makes us ask ourselves what we really want. If we could create servants with god-like powers, what would we ask them to do for us? How would we reshape human society and human life if we could do whatever we wanted?

Ola Wihlke
