Stephen Cave
AI vs IQ
The Singularity: Could Artificial Intelligence Really Out-Think Us (And Would We Want It To)?
By Uziel Awret (ed)
Imprint Academic 426pp £60
I used to be a sceptic about the ‘technological singularity’, the idea that we are on the verge of creating machines so unimaginably intelligent that human history will enter a wholly new and unfathomable phase. My scepticism stemmed largely from the quasi-religious awe that surrounds the idea. Like the mysterious black monolith in the film 2001: A Space Odyssey, the singularity is a blankness onto which adherents project primordial fantasies of immortality and omnipotence, heaven and hell.
As a keen student of mythology, I am always inclined to take a wry view of such projections, which recur in different variations, secular and religious, in every generation, couched in the language of the day. For us, that is the vocabulary of advanced technology: robots, avatars and AI. According to this version of the story, when this technology reaches a sufficiently advanced state, we will be able to solve all the world’s troubles, cure all diseases and upload ourselves onto indestructible silicon platforms. We will, in other words, achieve by modern means the transformation described by St Paul: ‘when this corruptible shall have put on incorruption, and this mortal shall have put on immortality, then shall be brought to pass the saying that is written, Death is swallowed up in victory.’
Recently, the idea of the technological singularity has started to receive more serious attention. David Chalmers, a philosophy professor at NYU, was the first important mainstream philosopher to take it at face value. In 2010, he published a lengthy article analysing the idea and its assumptions in the Journal of Consciousness Studies. The Singularity republishes that essay alongside a wide range of responses to it from prominent thinkers in many fields.
In his essay, Chalmers first lays out the premise of the idea, which is seductively simple. We are, it states, designing machines that are ever more intelligent. One day, it is possible that we will create a machine that is better at designing intelligent machines than we are. At that point, the rate of improvement in such machines is likely to increase significantly – software, after all, doesn’t sleep or take coffee breaks; it can duplicate itself to work in parallel and can write new computer code much faster than a human can type. Therefore very soon – perhaps in a few years, perhaps in just days or hours – we could be faced with what is known as an ‘intelligence explosion’.
A superintelligent machine at the pinnacle of this process, the argument goes, could stand in relation to us intellectually as we stand to an earthworm. The lowly worm, we assume, cannot begin to fathom the complexities of the human world; it cannot understand us as we understand ourselves, let alone grasp its own role in the universe. Consequently, though we might accord worms some respect, we don’t worry too much about stepping on them. Following this analogy, we are unlikely to be able to fathom the thoughts and purposes of a superintelligence, or share its insights into the nature of reality. As to whether it would think twice about stepping on us, this is the kind of question that keeps the contributors to this book up at night.
This might sound a bit far-fetched. But just a couple of decades ago, the prospect of tiny portable devices that enable us to access all the world’s knowledge while video-calling distant colleagues would also have seemed far-fetched. Similarly, the fact that adherents project ancient fantasies onto the singularity also does not mean that it won’t happen, or that they are necessarily wrong in their projections. As techno-utopians like to point out, humans dreamed of flying for thousands of years, and now they really can.
Hence I have been overcoming my scepticism. Increasingly I am persuaded that, as far-fetched as these ideas might sound, there is good reason to take them seriously, as Chalmers does. Of course, one can take the arguments seriously and still disagree with either their premises or their conclusions, as many of the contributors to this book do. The eminent neuroscientist Susan Greenfield, for example, challenges the idea that computation can ever lead to understanding, let alone wisdom, both of which she considers to be bound up with real intelligence.
Some of the most interesting disagreements centre on the question of whether the rise of superintelligent machines would bode well or ill for humanity. On one side are those who, following Immanuel Kant, argue that intelligence and values are intimately linked, such that great intellect brings moral rectitude. The cosmologist Frank J Tipler takes this line to argue for the ‘inevitable goodness’ of the singularity. The majority, however, follow David Hume in assuming that intelligence and values are completely separate, and therefore ‘’Tis not contrary to reason to prefer the destruction of the whole world to the scratching of my finger.’ If Hume is right, a superintelligence could still be a malign force, and a number of contributors discuss how such an entity might be contained.
The cognitive roboticist Murray Shanahan, in contrast, subtly argues that such discussions are hopeless. We mere humans, he believes, are unlikely to have the conceptual apparatus required to understand such a profoundly alien being. Perhaps, he speculates, such a superintelligence would attain a kind of enlightenment and not be inclined towards the egocentric cycle of creation and destruction that marks human existence.
The journalist Bryan Appleyard, who provides a brilliant preface to the book, agrees. If a singularity were to occur all bets would be off, he argues, and then ‘there is nothing merely human that is worth saying’. Nonetheless, we should engage with the idea because it plays a pivotal role in the contemporary imagination. We are, he suggests, already allowing ourselves to be overtaken – or taken over – by our own machines.
As this stimulating book shows, considering the singularity – whether you think it plausible or not – can help us to unpick the narrative of runaway progress and carefully consider what we really want from our inventions.