John Naughton

Computer Says Go

Human Compatible: Artificial Intelligence and the Problem of Control

By Stuart Russell

Allen Lane 336pp £25

The biggest question facing us today in relation to artificial intelligence (AI) is: what if we actually succeed in building superintelligent machines? In particular, what would be the consequences for humankind? This possibility is one of the four ‘existential risks’ that Martin Rees and his colleagues at Cambridge University’s Centre for the Study of Existential Risk are pondering. Such questions go back a long way – at least to 1965, when one of Alan Turing’s colleagues, the mathematician I J Good, observed that ‘the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control’.
