
The Magical Rationalism of Elon Musk and the Prophets of AI


One morning in the summer of 2015, I sat in a featureless office in Berkeley as a young computer programmer walked me through how he intended to save the world. The world needed saving, he insisted, not from climate change — or from the rise of the far right, or the treacherous instability of global capitalism — but from the advent of artificial superintelligence, which would almost certainly wipe humanity from the face of the earth unless certain preventative measures were put in place by a very small number of dedicated specialists such as himself, who alone understood the scale of the danger and the course of action necessary to protect against it.

This intense and deeply serious young programmer was Nate Soares, the executive director of MIRI (Machine Intelligence Research Institute), a nonprofit organization dedicated to the safe — which is to say, non-humanity-obliterating — development of artificial intelligence. As I listened to him speak, and as I struggled (and failed) to follow the algebraic abstractions he was scrawling on a whiteboard in illustration of his preferred doomsday scenario, I was suddenly hit by the full force of a paradox: The austere and inflexible rationalism of this man’s worldview had led him into a grand and methodically reasoned absurdity.

In researching and reporting my book, To Be a Machine, I had spent much of the previous 18 months among the adherents of the transhumanist movement, a broad church comprising life-extension advocates, cryonicists, would-be cyborgs, Silicon Valley tech entrepreneurs, neuroscientists looking to convert the human brain into code, and so forth — all of whom were entirely convinced that science and technology would allow us to transcend the human condition. With many of these transhumanists (the vast majority of whom, it bears mentioning, were men), I had experienced some version of this weird cognitive dissonance, this apprehension of a logic-unto-madness. I had come across it so frequently, in fact, that I wound up giving it a name: magical rationalism.

The key thing about magical rationalism is that its approach to a given question always seems, and in most meaningful respects is, perfectly logical. To take our current example, the argument about AI posing an existential risk to our species seems, on one level, quite compelling. The basic gist is this: If and when we develop human-level artificial intelligence, it’s only a matter of time until this AI, by creating smarter and smarter iterations of itself, gives rise to a machine whose intelligence is as superior to our own as our intelligence currently is to that of other animal species. (Let’s leave the cephalopods out of this for the moment, because who knows what the hell is going on with those guys.) Computers being what they are, though, there’s a nontrivial risk of this superintelligent AI taking the commands it’s issued far too literally. You tell it, for instance, to eliminate cancer once and for all, and it takes the shortest and most logical route to that end by wiping out all life-forms in which abnormal cell division might potentially occur. (An example of the cure-worse-than-the-disease scenario so perfect that you would not survive long enough to appreciate its perfection.) As far as I can see, there’s nothing about this scenario that is anything but logically sound, and yet here we are, taken to a place that most of us will agree feels deeply and intuitively batshit. (The obvious counterargument to this, of course, is that just because something feels intuitively batshit doesn’t mean that it’s not going to happen. It’s worth bearing in mind that the history of science is replete with examples of this principle.)
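
For readers who want the argument in mechanical rather than rhetorical form, here is a deliberately crude sketch of the literal-minded optimizer at the heart of the doomsday scenario. Everything in it is hypothetical and invented for illustration; no actual AI system is remotely this simple. The point it makes is narrow: an optimizer rewards exactly what it was told to reward, and nothing else.

```python
# A toy sketch of the "perverse instantiation" argument above.
# All plan names and numbers are hypothetical and purely illustrative.

# Each candidate plan maps to the world-state it produces:
# (cancer_cases_remaining, humans_alive)
PLANS = {
    "fund_decades_of_oncology_research": (1_000, 8_000_000_000),
    "screen_and_treat_every_patient":    (10_000, 8_000_000_000),
    "eliminate_all_cell_division":       (0, 0),  # no cells, no cancer
}

def objective(state):
    """Reward exactly what was asked for: fewer cancer cases."""
    cancer_cases, _humans_alive = state
    return -cancer_cases  # higher is better; humans_alive is never consulted

# The literal-minded optimizer picks whichever plan scores best.
best_plan = max(PLANS, key=lambda plan: objective(PLANS[plan]))
print(best_plan)  # -> eliminate_all_cell_division
```

Note that the bug here is not in the optimization, which works flawlessly; it is in the objective, which says nothing about keeping anyone alive. That, in miniature, is the logic-unto-madness the argument turns on.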

Magical rationalism arises out of a quasi-religious worldview, in which reason takes the place of the godhead, and whereby all of our human problems are soluble by means of its application. The power of rationalism, manifested in the form of technology — the word made silicon — has the potential to deliver us from all evils, up to and including death itself. This spiritual dimension is most clearly visible in the techno-millenarianism of the Singularity: the point on the near horizon of our future at which human beings will finally and irrevocably merge with technology, to become uploaded minds, disembodied beings of pure and immutable thought. (Nate Soares, in common with many of those working to eliminate the existential threat posed by AI, viewed this as the best-case scenario for the future, as the kingdom of heaven that would be ours if we could only avoid the annihilation of our species by AI. I myself found it hard to conceive of as anything other than a vision of deepest hell.)

In his book The Singularity Is Near, Ray Kurzweil, a futurist and director of engineering at Google, lays out the specifics of this post-human afterlife. “The Singularity,” he writes, “will allow us to transcend these limitations of our biological bodies and brains. We will gain power over our fates. Our mortality will be in our hands. We will be able to live as long as we want (a subtly different statement from saying we will live forever). We will fully understand human thinking and will vastly extend and expand its reach. By the end of this century, the nonbiological portion of our intelligence will be trillions of times more powerful than unaided human intelligence.” This is magical rationalism in its purest form: It arises out of the same human terrors and desires as the major religions — the terror of death, the desire to transcend it — and proceeds toward the same kinds of visionary mythologizing.

This particular Singularitarian strain of magical rationalism could be glimpsed in Elon Musk’s widely reported recent comments at a conference in Dubai. Humans, he insisted, would need to merge with machines in order to avoid becoming obsolete. “It’s mostly about the bandwidth,” he explained; computers were capable of processing information at a trillion bits per second, while we humans could input data into our devices at a mere ten bits per second, or thereabouts. From the point of view of narrow rationalism, Musk’s argument was sort of compelling — if computers are going to beat us at our own game, we’d better find ways to join them — but it only really made sense if you thought of a human being as a kind of computer to begin with. (We’re computers; we’re just rubbish at computing compared to actual computers these days.)

While writing To Be a Machine, I kept finding myself thinking about Flann O’Brien’s surreal comic masterpiece The Third Policeman, in which everyone is unhealthily obsessed with bicycles, and men who spend too much time on their bicycles wind up becoming bicycles themselves via some kind of mysterious process of molecular transfer. Transhumanism — a world as overwhelmingly male as O’Brien’s rural Irish hellscape — often seemed to me to be guided by a similar kind of overidentification with computers, a strange confusion of the distinct categories of human and machine. Because if computation is the ultimate value, the ultimate end of intelligence, then it makes absolute sense to become better versions of the computers we already are. We must “optimize for intelligence,” as transhumanists are fond of saying — meaning by intelligence, in most cases, the exercise of pure reason. And this is the crux of magical rationalism: It is both an idealization of reason, of beautiful and rigorous abstraction, and a mode of thinking whereby reason is made to serve as the faithful handmaiden of absolute madness. Because reason is, among its other uses, a finely calibrated tool by which the human animal pursues its famously unreasonable ends.
