I read Nick Bostrom's Superintelligence about three years ago. It is an ambitious and impressive book that anyone interested in the AI revolution ought to read. These are grand, existential perspectives we are talking about here. From the preface:
If some day we build machine brains that surpass human brains in general intelligence, then this new superintelligence could become very powerful. And, as the fate of the gorillas now depends more on us humans than on the gorillas themselves, so the fate of our species would depend on the actions of the machine superintelligence. […] In this book, I try to understand the challenge presented by the prospect of superintelligence, and how we might best respond. This is quite possibly the most important and most daunting challenge humanity has ever faced. And – whether we succeed or fail – it is probably the last challenge we will ever face.
Bostrom sets up models and thought experiments and thinks them through with real rigor, going into detail about the assumptions. In chapter 4, "The kinetics of an intelligence explosion", he opens by asking: "Once machines attain some form of human-equivalence in general reasoning ability, how long will it then be before they attain radical superintelligence? Will this be a slow, gradual, protracted transition? Or will it be sudden, explosive?" In light of the explosive development in AI models' capabilities over the past few months (yesterday I saw an article reporting that ChatGPT can pass a written university law exam better than 90% of human students), the following observation is particularly thought-provoking:
It is also possible that our natural tendency to view intelligence from an anthropocentric perspective will lead us to underestimate improvements in sub-human systems, and thus to overestimate recalcitrance. Eliezer Yudkowsky, an AI theorist who has written extensively on the future of machine intelligence, puts the point as follows […]:
AI might make an apparently sharp jump in intelligence purely as the result of anthropomorphism, the human tendency to think of “village idiot” and “Einstein” as the extreme ends of the intelligence scale, instead of nearly indistinguishable points on the scale of minds-in-general. Everything dumber than a dumb human may appear to us as simply “dumb”. One imagines the “AI arrow” creeping steadily up the scale of intelligence, moving past mice and chimpanzees, with AIs still remaining “dumb” because AIs cannot speak fluent language or write science papers, and then the AI arrow crosses the tiny gap from infra-idiot to ultra-Einstein in the course of one month or some similarly short period.
I have long believed that AI models ought to be handled with extreme caution, in the same way that lethal viruses are handled in biosafety labs. But the exact opposite is happening: artificial intelligence is being built into anything and everything, and apparently it cannot happen fast enough.
Whatever you believe about the future prospects, one thing is certain given what ChatGPT and other AI models are capable of already: the world will never be the same again. From here on out it is uncharted territory, completely and utterly.
