As Jeopardy! Robot Watson Grows Up, How Afraid of It Should We Be?

Who’s afraid of IBM’s Jeopardy! robot? Photo: Zohar Lazar/Patrick/Flickr
Boyhood
Watson was just 4 years old when it beat the best human contestants on Jeopardy! As it grows up and goes out into the world, the question becomes: How afraid of it should we be?
Illustrations by Zohar Lazar

On the first weekend of January, many of the leading researchers in artificial intelligence traveled to Puerto Rico to take part in an unusual private conference. Part of what made it unusual was its topic: whether the rise of intelligent machines would be good or bad for people, something endlessly discussed by the public but rarely by the scientists themselves. But the conference’s organizers were interesting, too. The meeting had been arranged by something called the Future of Life Institute, a young think tank run by a cosmologist at MIT named Max Tegmark, who had become a little bit famous when he’d published a book hypothesizing that the universe might merely be the articulation of a mathematical structure; it was underwritten mainly by Jaan Tallinn, the co-founder of Skype. Elon Musk, the CEO of Tesla, flew in to give a Sunday-evening talk.

The researchers in the audience found themselves presented with two propositions. The first was that they were the stewards of an exceptional breakthrough. “We’re in the midst of one of the greatest onetime events in history,” declared the MIT economist Erik Brynjolfsson. Attendees were asked to predict when machines would become better than humans at all human tasks; virtually all were willing to put a date on the event, and their median answer was 2050. The second proposition was more complicated, because it asked the researchers to consider that this breakthrough might be a very bad thing. Famed technologists have been warning about the threats that AI might pose: not just Musk and Tallinn but Steve Wozniak, Bill Gates, and Stephen Hawking, who recently said that “the development of full artificial intelligence could spell the end of the human race.” Or as Musk has said, “With artificial intelligence, we are summoning the demon.”

Tegmark’s conference was designed to sketch that demon so that the researchers might begin to see the worries as more serious than science fiction. Brynjolfsson and other economists explained the risk that economic inequality might escalate as machines grow more adept at more tasks, rendering some jobs obsolete and enriching those who designed and capitalized the machines. Academic and industry researchers detailed exactly how expansive the machine brain had been growing, now able to comprehend and generate concepts that could plausibly be described as “beliefs.” Law professors explained the challenges of assigning legal responsibility to computers that identify a target for bombing or suggest driving directions.

The Sunday-evening session, titled the Intelligence Explosion, described the possibility, theorized by the Oxford philosopher Nick Bostrom, that machines might, very rapidly, come to far exceed human capacities. Tallinn recalled how the AI company DeepMind, in which he was an early investor, had instructed its algorithm to play Atari’s Breakout, in which the player has to knock out bricks with a bouncing ball, and try to maximize its score. The program had no concept of a ball or a paddle and was given no explanation of how to win points. But within two hours, it was playing capably, and within four hours, it had figured out how to win, using the ball to create a tunnel through the bricks so that it could knock out the formation from behind. Tallinn thought that this could be a glimpse of the future, both mesmerizing and terrifying. For those who were worried about AI, each time the machines got smarter, it raised once again the question of control.
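The learning loop underneath a system like DeepMind’s can be sketched in miniature. What follows is a toy reinforcement-learning example, not DeepMind’s actual code: the three-state environment, the reward rule, and the hyperparameters are all invented for illustration. The point is only that nothing in the loop mentions balls or paddles; the agent simply nudges its value estimates toward whatever earns score.

```python
import random

# Toy stand-in for the Breakout setup: the agent knows nothing about
# "balls" or "paddles," only states, actions, and a score to maximize.
# All specifics here (3 states, 2 actions, the reward rule) are invented.
N_STATES, N_ACTIONS = 3, 2
q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

def step(state, action):
    """Hypothetical environment: returns (next_state, reward)."""
    reward = 1.0 if action == state % N_ACTIONS else 0.0
    return random.randrange(N_STATES), reward

alpha, gamma, epsilon = 0.1, 0.9, 0.1  # learning rate, discount, exploration
state = 0
for _ in range(10_000):
    if random.random() < epsilon:          # explore occasionally
        action = random.randrange(N_ACTIONS)
    else:                                  # otherwise exploit what has worked
        action = max(range(N_ACTIONS), key=lambda a: q[state][a])
    next_state, reward = step(state, action)
    # Nudge the estimate toward reward plus discounted future value.
    target = reward + gamma * max(q[next_state])
    q[state][action] += alpha * (target - q[state][action])
    state = next_state
```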

One good rule of thumb for life in capitalism is that if billionaires start to be alarmed about the ethical implications of the work you are doing, then you are doing something significant. Musk would soon pledge $10 million to the Institute to fund artificial-intelligence research, and more than a thousand leading AI practitioners, including all who were at the conference, would sign a statement, designed by Tegmark, vowing to ensure that intelligent machines are socially beneficial. “AI is beginning to work at a level where it is coming out of the lab and into society,” Tegmark said when I asked him why these topics were suddenly so urgent. Robots are already very capable at sensing the world around them and performing physical tasks within it: The driverless car is a reality, and a fully autonomous kitchen (where a robot can prepare a dish by itself) is planned for 2017. There have also been advancements in making robots social — machines are now able to interpret what they see at a “near human” level and are being taught to comprehend the emotions behind human facial expressions in order to model our experience. To some of the workaday AI practitioners in the audience in Puerto Rico, who struggle daily with the shortcomings of the machines and tend to see progress as incremental rather than explosive, the alarms onstage could seem a little florid. “The algorithms don’t work that way,” an Oregon State professor named Tom Dietterich, the president of the Association for the Advancement of Artificial Intelligence, murmured after taking in Bostrom’s ideas. “People ask what is the relationship between humans and machines,” Dietterich told me later, “and my answer is that it’s very obvious: Machines are our slaves.” But Dietterich signed Tegmark’s letter, too, and the AAAI’s annual conference, held a few weeks later, focused on lengthy discussions of robot ethics.

That all of this is even a question — that the machines can be said to be social at all — turns out to have a lot to do with the work of a small group at IBM’s research headquarters in northern Westchester County who for nearly a decade have been building an intelligent machine called Watson. The machine began as the product of a long-shot corporate stunt, in which IBM engineers set out to build an artificial intelligence that could beat the greatest human champions at Jeopardy!, one that could master language’s subtleties: rhymes, allusions, puns. Watson’s 2011 victory was a landmark in the man-versus-machine wars, but since then Watson has continued to evolve, its thinking becoming more creative, its design more responsive, so that it might fit more exactly into our needs.

Watson has now been trained in molecular biology and finance, written a cookbook, been put to work in oil exploration. It is learning to help solve crimes. This fall, Wired published predictions that Watson would soon be the world’s most perfect medical diagnostician. It has folded so seamlessly into the world that, according to IBM, the Watson program has been applied in 75 industries in 17 countries, and tens of thousands of people are using its applications in their own work. In these experiences, Watson has functioned as an early probe into the relationship between humans and intelligent machines — what we need from them, what gaps they fill, what fears they generate.

The honest way into philosophy is by accident, through the disorienting flash of a new experience. Watson’s creators, many of whom have worked on the project since its inception, talk about their machine differently than Tegmark does. They see a chronology of experiences, of developing skills and propulsive failures, as if the machine had its own biography. Some of them describe their experience with Watson in more personal terms, as if they were the parent and the machine the child. Last October, IBM moved the project into a new home, a foreboding office tower that inscribes shadows over Astor Place. The move has given Watson a neat, humanlike path through time: Its early years spent in a family atmosphere in the suburbs; then an educational course to prepare it to support itself in a more complicated world; then, to make money, a move to the East Village and a search for employment. It has also meant that it is possible to leave your office, as I did one recent afternoon, walk across lower Manhattan, take an elevator upstairs, situate yourself before a stadium of screens, notice what looks like a small stack of hard drives in the corner, contemplate the human place in the scheme of things, and hear a placid computerized voice say, “Hello, Watson here. What are we working on today?”

“It feels very much like Watson has grown up and gone places by itself.” Photo: Zohar Lazar/Stacy Walsh Rosenstock/Alamy

IBM employs 400,000 people around the globe — it is itself a kind of forgotten country — but its headquarters are in Armonk and its research is based in Yorktown Heights, two places that feel very far away from Silicon Valley. The research center is housed in a long, arcing building designed by Eero Saarinen a half-century ago. It is, like the new Apple headquarters currently under construction in Cupertino, an architectural vision of the utopian corporation. But it is also a period piece, still outfitted with Saarinen’s original furniture and more monastic than a technology company would now get away with (most of the offices are identical, windowless compartments). Downstairs there are plaques celebrating the work of IBM’s 13 Nobel Prize–winning researchers. “Just the history in that building,” a researcher named Dave Ferrucci told me. “The history of IBM is the history of computing.”

The Watson project began here, and from its inception it belonged to Ferrucci, a 53-year-old computer scientist from the Bronx who had been working full-time for IBM ever since he’d finished his doctorate, at Rensselaer Polytechnic Institute, in 1994. He is a voluble, likable, intense man — close-cropped goatee, slicked hair, outer-borough vowels. In graduate school, he’d built a program called Brutus that could be given a theme (betrayal, for instance) and then let loose to script an original piece of fiction. Ferrucci still retains some of the idealism of that project. Language is the “holy grail,” he said, “the reflection of how we think about the world.” He tapped his head. “It’s the path into here.”

The plan to build a machine that could win Jeopardy! was a peculiarly IBM way into the problem of artificial intelligence, an explicit echo of the success the company had in the ’90s, when its chess computer, Deep Blue, beat Garry Kasparov. But gimmick aside, the Watson project forced researchers to grapple with Ferrucci’s preoccupation — language — because it was the only information system that could communicate with Alex Trebek. To teach machines language had once required programmers to describe each concept mathematically — one at a time, a formula to explain “large” and “small,” a separate one to describe “expensive.” It was a fairy tale of a project, and an impossible one. By 2007, when Ferrucci’s team began working together, they were able to take advantage of the efficiency of Big Data technology and the algorithms of machine learning. Watson wouldn’t need to formulate concepts behind the clues; it would depend upon semantic context, proximity, statistical patterns, as if it were playing a version of the child’s game Memory. Ferrucci’s team uploaded a vast database of text (encyclopedias, websites, reference texts) and built hundreds of bots, or programs, each designed to scrutinize a different aspect of the clue and produce candidate responses, whose viability was weighed and then ranked. Send enough bots scurrying through enough text for candidate answers that fit with “English author” and “Stratford-upon-Avon,” assign central algorithms to evaluate those candidate answers, and you’d quickly come up with “William Shakespeare.” “Optimized for accuracy,” explained John Prager, one of the Watson scientists; “pessimized for understanding.” What is Crete? Watson would bleat out. Who is Maurice Chevalier? But it had no comprehension. Who is Maurice Chevalier indeed.
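That generate-and-rank pipeline is easy to caricature in a few lines of code. The sketch below is an illustration only, under invented assumptions: a three-sentence “corpus,” a single candidate-generating bot, and a crude co-occurrence score standing in for Watson’s hundreds of bots and learned ranking algorithms.

```python
# A cartoon of Watson's generate-and-rank pipeline: narrow "bots" propose
# candidates; a central scorer ranks the union. The three-sentence corpus,
# single bot, and co-occurrence score are all invented for illustration.
SNIPPETS = [
    "William Shakespeare was an English author born in Stratford-upon-Avon.",
    "Charles Dickens was an English author born in Portsmouth.",
    "Stratford-upon-Avon is a market town in Warwickshire.",
]

def name_bot():
    """One bot among hundreds: propose capitalized two-word phrases."""
    candidates = set()
    for snippet in SNIPPETS:
        words = snippet.rstrip(".").split()
        for a, b in zip(words, words[1:]):
            if a[0].isupper() and b[0].isupper():
                candidates.add(f"{a} {b}")
    return candidates

def evidence(candidate, clue_terms):
    """Crude ranking score: clue terms co-occurring with the candidate."""
    return sum(
        term.lower() in snippet.lower()
        for snippet in SNIPPETS if candidate in snippet
        for term in clue_terms
    )

clue = ["English", "author", "Stratford-upon-Avon"]
ranked = sorted(name_bot(), key=lambda c: evidence(c, clue), reverse=True)
print(ranked[0])  # -> William Shakespeare
```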

Watson did have the mark of artificial intelligence: It could learn from experience. Each time it responded correctly to a clue from the Jeopardy! archive, it remembered which bots to trust for that kind of problem. Watson’s engineers taught it to recognize the tricks the show’s producers played with language. The machine learned how to better parse each clue. Slowly, over years, its responses got faster and more accurate, and by 2010 it just about matched the success rate of the grand champions. In the beginning, it had taken so long for the machine to respond that the programmers were in the habit of giving Watson a clue and then going to lunch. Eventually IBM’s engineers got the response time down to three seconds.
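What “remembering which bots to trust” might look like mechanically: keep a per-category weight for each bot and nudge it after every graded response. A minimal sketch, with the categories, bot names, and multiplicative update rule all invented for illustration.

```python
from collections import defaultdict

# Sketch of "remembering which bots to trust": a per-(category, bot) weight,
# nudged after each response is graded against the Jeopardy! archive.
weights = defaultdict(lambda: 1.0)

def update(category, bot, was_correct, lr=0.1):
    """After grading an answer against the archive, adjust trust."""
    weights[(category, bot)] *= (1 + lr) if was_correct else (1 - lr)

def best_candidate(category, votes):
    """votes maps bot -> {candidate: raw score}; weight and total them."""
    totals = defaultdict(float)
    for bot, candidates in votes.items():
        for candidate, raw in candidates.items():
            totals[candidate] += weights[(category, bot)] * raw
    return max(totals, key=totals.get)

update("wordplay", "pun_bot", was_correct=True)
update("wordplay", "date_bot", was_correct=False)
print(best_candidate("wordplay", {
    "pun_bot": {"Crete": 0.9},
    "date_bot": {"Chevalier": 0.9},
}))  # -> Crete (pun_bot's vote now counts for more)
```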

Language turned out to be a strange, intimate room into which to lead a machine. Once inside, Watson could peruse, as one IBM researcher later explained to me, “the entire cultural corpus” — everything that people had written down in order to explain the world to one another. Watson scrutinized this whole corpus equally, with no bias or preference, and it grew adept at excavating information that people had once been intensely interested in and then forgotten. Watson was an echo of abandoned human expertise. It could also be extremely naïve. In test matches throughout 2010, it started adding a d sound to the end of answers that ended in n: What is Pakistand? It pronounced the radical black thinker “Malcolm Ten.” Once, Watson was asked to name the first woman in space. “Who is Wonder Woman?” the machine ventured. “I really liked that,” said Jennifer Chu-Carroll, one of IBM’s specialists in natural language processing. The machine was making the same kind of mistakes that a child might, mispronouncing new words, mistaking the line between reality and myth, misinterpreting what adults could not express clearly.

Watson appeared on Jeopardy! in a special multipart broadcast, facing off against Ken Jennings, who had set the program’s record with a 74-match winning streak, and Brad Rutter, who had beaten Jennings. According to IBM’s calculations, Watson had about a 70 percent chance of winning. “A good bet, but still a bet,” Ferrucci said, and when the cameras panned the crowd in between rounds, they captured him, hair slicked back, looking tense. “At that point, you have no control,” Ferrucci’s deputy, Eric Brown, said. “You just cross your fingers and hope for the best.” Watson ran rampant in the category “Etude, Brute?,” correctly naming classical-music composers. It dominated the category about hedgehogs. It announced it wanted to wager precisely $6,435 on a Daily Double clue, and Trebek, baffled by the specificity, said, “I won’t ask.” Though Jennings mounted a comeback in a later round, with the machine struggling to surface the names of Denzel Washington and Sean Penn as quickly as its competitors remembered them, the sheer force of Watson’s pattern recognition prevailed. By the end, it had won more than $77,000, three times more than either Jennings or Rutter.

“Jeopardy! was never the goal for me,” Ferrucci told me. In the final days of preparation, the engineers had begun to expand what Watson could do, to render it into something more than a factoid machine. In part this was because of competitive necessity. Chu-Carroll noticed that Watson kept botching a certain type of clue, in which a crucial piece of information was kept hidden. One such clue: “Upon hearing of the discovery of George Mallory’s body, he told reporters he still believed he was first.” The right answer was Sir Edmund Hillary, who had summited Everest and survived (Mallory had died in the effort), but Hillary was buried down deep in Watson’s proposed answers. The concept the machine needed to understand was Mount Everest, which was everywhere in the text Watson had focused on but was discarded because it was obviously not the answer. Chu-Carroll made an adjustment: When the machine noticed a phrase that seemed to orbit the answer like this, it would run a second query that included that phrase in the search, opening up its inquiry in the same way a researcher does when she stumbles upon an obscure text in the library — finding new connections, building on them. The machine could explore, Chu-Carroll said, “in more pointed directions.” Soon, a summer intern wrote a program that let Watson trawl the internet to expand what it knew. The legend is that Watson then started cursing.
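Here is a guess at the shape of that adjustment in code. Everything below is a stand-in sketch, not IBM’s implementation: the toy retrieval function, the list of ruled-out entities, and the “orbits every passage” test are assumptions made for illustration.

```python
from collections import Counter

def search(terms, corpus):
    """Toy retrieval: passages containing every query term."""
    return [p for p in corpus if all(t.lower() in p.lower() for t in terms)]

def expanded_search(clue_terms, corpus, ruled_out):
    """If one ruled-out phrase saturates the evidence, query with it too."""
    passages = search(clue_terms, corpus)
    counts = Counter(
        phrase for p in passages for phrase in ruled_out if phrase in p
    )
    if counts:
        pivot, hits = counts.most_common(1)[0]
        if hits == len(passages):  # the phrase orbits every passage
            # Second, "more pointed" query: let the pivot pull in new text.
            for extra in search([pivot], corpus):
                if extra not in passages:
                    passages.append(extra)
    return passages

corpus = [
    "Mallory died on Mount Everest in 1924.",
    "Hillary reached the summit of Mount Everest in 1953 and survived.",
]
print(expanded_search(["Mallory"], corpus, ruled_out=["Mount Everest"]))
# -> both passages; the second query surfaces the Hillary evidence
```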

The machine began to accumulate capacities. IBM acquired an Australian company to “teach Watson emotional intelligence”; a computer vision specialist I met told me his job was “teaching Watson how to see.” By the time I started visiting IBM’s Watson facilities this winter, it was no longer said that the computer was generating candidate answers but generating hypotheses, new ideas. “Just by changing the name, you suddenly significantly expand the potential applications,” Brown said. From this angle, the nature of an even higher order of human thought — creativity — didn’t seem so elusive or mysterious.

Consider a Broadway composer, Ferrucci said, sitting down at his piano searching for the perfect way to end a musical phrase. “He’s got two notes and he’s looking for a third,” Ferrucci said, jabbing his index finger downward in search of something great. “No.” He did it again and shook his head like an exasperated maestro. “No.” Then a third time, and this time Ferrucci lit up, his pointer finger in the air. “Aha!” Inspiration! Ferrucci was gleeful. Searching for closure to a statistical pattern, matching it to experience — this was exactly what a computer could do. Without knowing it, the maestro was behaving like an experimental machine. “The composer — he’s doing generate and test!”
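The “generate and test” loop Ferrucci pantomimed is, in skeletal form, one of the oldest patterns in computing. A toy sketch, with the scale and the sounds_right heuristic invented as stand-ins for the composer’s ear:

```python
import random

# "Generate and test" in skeleton form: propose, score, keep what passes.
SCALE = ["C", "D", "E", "F", "G", "A", "B"]

def sounds_right(phrase):
    """Toy test: accept phrases that resolve back to their opening note."""
    return phrase[-1] == phrase[0]

phrase = None
for attempt in range(100):              # generate...
    candidate = ["C", "E", random.choice(SCALE)]
    if sounds_right(candidate):         # ...and test
        phrase = candidate              # the "Aha!" moment
        break
print(phrase)  # e.g. ['C', 'E', 'C']
```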

Watson didn’t really understand the woman’s suffering. But even so, it had done exactly what a doctor would do — pinpointed the relevant parts of the clinical report, discerned the disease, identified the biological cause. Photo: Zohar Lazar/Ken Chernus/Getty Images

After the Jeopardy! win, Ferrucci took more than 40 trips to show off the world’s most famous robot. He debated Tegmark on the impact of artificial intelligence, gave presentations at universities, sat for a long interview at the Computer History Museum. Ferrucci was not accustomed to this kind of itinerary, and it wiped him out. But alongside him, less exhaustible, his computer was becoming a public character, too, one that superseded him: Watson was the subject of a PBS documentary and an Off Broadway play. Conan O’Brien told jokes about it.

A basic confusion had surrounded the project from the beginning, and it only escalated as Watson became famous. What exactly is Watson? It has certain identifiable human attributes: It can learn on its own; you can make the case that it can generate new ideas. Brown was once asked if Watson could pass the famous Turing Test — whether the machine could trick a person into thinking it was a human. He answered that it could in certain highly constrained circumstances (like when it acted as a Jeopardy! contestant), but a more general impersonation was far off. Experts believe that another model for artificial intelligence called Deep Learning will soon lead to a far more supple and plastic machine mind than Watson’s (Google has invested heavily); other machines are far more advanced at understanding human expression and navigating the physical world. But if Watson doesn’t define the AI frontier, it is still the rare general-purpose intelligent machine that people interact with. The natural inclination has been to anthropomorphize. “You see a machine up there answering questions like a person would, and it’s the most natural thing,” Ferrucci said. “All I need is a couple of cues. Two eyes. A smile. Boom. I’m there.”

At times, IBM has encouraged this confusion. The company’s marketing executives had explored whether to outfit the machine with a humanoid face and body for its television appearances, and though they gave up those ideas, when Watson appeared on Jeopardy!, it had a tinny voice, a mechanical finger, and a sort-of-face fashioned from a corporate logo. Perhaps most important, it had a name. Now Watson exists mostly in the cloud, but still there is confusion. Last month, IBM CEO Ginni Rometty, speaking on Charlie Rose’s program, began by describing Watson as “it,” but soon her pronoun changed. “He looks at all your medical records,” she said. “He has been fed and taught by the best doctors in the world.” Rose didn’t call her on the slip, and in some crudely functional way it made sense. The Watson project had atomized, with experts in different fields arriving to teach the peculiar language and content of their profession, evaluating what questions about that material the machine got right. (One lengthy project required medical students to explain to Watson how they comprehended diseases and therapies, so that the machine could answer questions from the medical-licensing exam.) Watson was being prepared for professional life.

It was flattering to work with Watson. If you were an expert, you had spent your career building up shortcuts and intuition that were hard to pass on to anyone else. Now came a machine and some engineers who were deeply curious about that intuition — they wanted to interrogate it, test what was real, translate that knowledge into formal mathematics. Detectives described how they solved crimes; utility dispatchers explained how they tried to behave in an emergency. Mark Kris, a lung oncologist at Sloan-Kettering, told me that Watson had the hardest time making subtle judgments: when a patient requires an unconventional approach, or when a new study is so compelling that it ought to change how patients are treated. The doctors couldn’t encode their intuition.

The gaze went both ways. Watson sometimes wanted to know about the ­person it was working with, too. (For artificial intelligence to really work, Ferrucci said, “the machine has got to have a model of you.”) IBM engineers, working with an oil-exploration company, have been developing questions for Watson to ask each individual geologist so that it can understand that scientist’s appetite for risk, thus judging his potential biases and weighing his recommendations accordingly. A company called Elemental Path is building a question-answering toy dinosaur that can use Watson’s technology to learn a child’s interests and comprehension, and tailor its responses accordingly.

Watson was becoming something strange, and new — an expert that was only beginning to understand. One day, a young Watson engineer named Mike Barborak and his colleagues wrote something close to the simplest rule they could imagine, which, translated from code to English, roughly meant: Things are related to things. They intended the rule as an instigation, an instruction to begin making a chain of inferences, each idea leaping to the next. Barborak presented a medical scenario, a few sentences from a patient note that described an older woman entering the doctor’s office with a tremor. He ran the program — things are related to things — and let Watson roam.

In many ways, Watson’s truest expression was a graph, a concept map of clusters and connective lines that showed the leaps it was making. Barborak began to study its clusters — hundreds, maybe thousands of ideas that Watson had explored, many of them strange or obscure. “Just no way that a person would ever manually do those searches,” Barborak said. The inferences led it to a dense node that, when Barborak examined it, concerned a part of the brain called the substantia nigra that becomes degraded by Parkinson’s disease. “Pretty amazing,” Barborak said. Watson didn’t really understand the woman’s suffering. But even so, it had done exactly what a doctor would do — pinpointed the relevant parts of the clinical report, discerned the disease, identified the biological cause. To make these leaps, all you needed was to read like a machine: voraciously and perfectly.
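Run mechanically, “things are related to things” is just graph expansion. A minimal sketch of the idea, with a four-entry medical graph and a density heuristic invented for illustration (a real run would traverse relations mined from Watson’s whole corpus and score edges far more carefully):

```python
from collections import deque

# Start from terms in the patient note and walk a relation graph
# breadth-first, watching for densely revisited nodes. Invented graph.
GRAPH = {
    "tremor": ["Parkinson's disease", "essential tremor"],
    "older patient": ["Parkinson's disease"],
    "Parkinson's disease": ["substantia nigra", "dopamine", "tremor"],
    "substantia nigra": ["dopamine", "Parkinson's disease", "midbrain"],
}

def densest_node(seeds, max_depth=3):
    """Chain inferences outward, counting how often each node is reached."""
    hits = {}
    frontier = deque((seed, 0) for seed in seeds)
    while frontier:
        node, depth = frontier.popleft()
        hits[node] = hits.get(node, 0) + 1
        if depth < max_depth:
            frontier.extend((nxt, depth + 1) for nxt in GRAPH.get(node, []))
    return max(hits, key=hits.get)

print(densest_node(["tremor", "older patient"]))  # -> Parkinson's disease
```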

It is hard to hear stories like this and not wonder about what will become of human prowess. A cancer biologist at Baylor named Lawrence Donehower explained to me that he had spent his career researching a well-studied cancer gene called p53; he estimated that all oncology researchers put together added one new potential site for drugs to target each year. Watson had ingested all 70,000 academic papers that somehow related to p53 and found eight new targets. They were the Russian études of protein biology, discoveries that once seemed very important but had since been abandoned in the scholarly backlog. Donehower found their rediscovery exhilarating. “I’m getting older,” he said. “You get impatient.”

From a closer perspective — from the perspective of its creators — it seemed like Watson had matured into independence. “I have a teenage daughter,” Chu-Carroll told me. “It feels very much like Watson has grown up and gone places by itself.” She laughed out loud. “Left the nest.”

“So many of the scenarios that people worry about have to do with artificial intelligence becoming more powerful and doing things on its own,” Max Tegmark said when I called him to talk about Watson. “But long before that you have powerful expert machines.” To Tegmark, expertise seemed like a pretty fundamental thing for people to give away to nonhumans. “If you think about it, why do human beings have more civilization than lions? It’s because we’re smarter. Because we have more expertise.”

Ferrucci and Tegmark are a well-matched pair: the theorist and the engineer, both of them accomplished and charismatic, both self-enrolled in the debate over the future of humanity. If you assume that Watson and machines like it will become more plastic and supple, capable of being reshaped and formed to fill in what people can’t do well, then your position on AI may depend less on how you view the machines than on how much you think society could be improved if less of it were in the hands of people.

On this Ferrucci is an absolutist. To see human beings from the point of view of the machine is to see how badly we need their assistance — how blinkered and incomplete our thinking is, how dependent we are upon our own personal experience, how much information and perspective we lack. In a roundabout way, Watson is becoming a foot soldier in one of the great intellectual fights of our time, over how rational we really are. The Princeton Nobel laureate Daniel Kahneman, the theorist of cognitive bias and human irrationality, appears in IBM promotional videos and has visited an advanced lab at IBM’s research headquarters, twice, to speak. “There is no doubt that these cognitive biases exist and that they’re alarming,” Ferrucci told me. He said he hoped that the ascent of artificial intelligence would prod “human beings to ask themselves, ‘Wow. Should I really be doing this without help?’ ”

It may not be much easier for a machine to escape the culture of its creation than it is for a person. IBM has spent decades selling computers to help businesses supplement the cognitive limitations of their employees; the more sophisticated the machines have gotten, the easier it is to notice the human shortcomings. One executive, Dario Gil, told me about a Watson project he worked on in which electrical-utility executives were trying to improve how they allocated their resources in the face of storms. Their decision-making depended on old wives’ tales. “It was like,” Gil said, miming a man dumbly sticking a finger at a map, “ ‘When the storm comes from the north and the dog rises …’ ” Perhaps Watson might have acquired a different mission if Kahneman’s ideas had not been in such vogue this decade, or if he had lived farther from Yorktown Heights, or if the project had been developed in Silicon Valley. But Watson comes from a different tradition: one in which success is collaborative, individuals are prone to limitations, and machines are in the service not of the Übermensch but the organization man.

IBM sells Watson as both a business service for large companies and a technology to be licensed by entrepreneurs, and prospective clients are invited to meet the machine at Astor Place: a vertical stack of hard drives whose inner workings are projected onto screens that surround you like a private planetarium. The presentation begins with video projections of two young professional women, a chef and a lawyer. “We see causal patterns where there are none,” a narrator says, suggesting that the way human beings solve problems can be as unscientific as astrology. “We can become overconfident and make costly mistakes.” My human guide, a young man named Frederik Tunvall, gestured at the image of the chef, projected onto the screen behind us. “She believes olive oil is the only oil for cooking Mediterranean food,” he said. “Maybe because of this bias she’s missing out on an opportunity.” Here, Tunvall became slightly self-conscious. Recently, he told me, he’d been giving this presentation to a group of Italian financial executives, who had become incensed at the suggestion that you could cook Italian food in anything but olive oil. Tunvall’s tone suggested that this was funny but also ridiculous. After all, who would you bet on to know more about Italian cooking, a supercomputer that had ingested uncountable formulations of the underlying chemistry of food or some banker with the dumb luck to have been born near Bologna?

Watson learned how to cook in the same way that it learns everything else. The machine read the entire database of recipes published by Bon Appétit and extracted from them a way of comprehending what Mexican chefs were doing when they converted raw ingredients into a dish and how that process differed from the French one. Then, prodded by programmers and a chef named James Briscione, Watson started suggesting new recipes. Its specialty was in taking an ingredient or method of preparation from one food tradition and splicing it, Wolfgang Puck–ishly, into another. It generated Tanzanian-Jewish matzo-ball soup, Czech pork-belly moussaka, and a recipe called Harlem chicken, which fused African-American and West African ingredients and techniques. As he worked through the recipes, trying them out, Briscione started to see how these chemical relationships could become a whole language. The effect was liberating. “It’s interesting to be forced to think, What can I do with a tomato if I’m not going to pair it with basil?” he said.
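The splicing move itself can be caricatured in a few lines: hold one tradition’s dish template fixed and substitute ingredients that play a similar role in another. The role-swap table below is invented for illustration; the real system worked from Bon Appétit’s recipe database and the chemistry underneath the ingredients.

```python
# A cartoon of Chef Watson's cross-splicing: rewrite one tradition's
# ingredient list in another tradition's idiom. The table is invented.
ROLE_SWAPS = {
    "Czech": {"eggplant": "pork belly"},
    "Tanzanian": {"chicken broth": "coconut broth"},
}

def splice(recipe, tradition):
    """Swap in ingredients that play a similar role in the new tradition."""
    swaps = ROLE_SWAPS.get(tradition, {})
    return [swaps.get(ingredient, ingredient) for ingredient in recipe]

moussaka = ["eggplant", "lamb", "bechamel"]
print(splice(moussaka, "Czech"))  # -> ['pork belly', 'lamb', 'bechamel']
```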

I sent a copy of Cognitive Cooking With Chef Watson, a cookbook that was published last month, to Ari Taymor, the young chef of Alma, an inventive restaurant in Los Angeles. As Taymor paged through, it seemed obvious to him that Watson was performing some kind of cognitive act, but whatever the nature of that act was, it didn’t seem to have anything to do with pleasure. “I just can’t imagine anyone ever sitting down in a restaurant and deciding that what they wanted was this food,” he told me. The thrill of the Chef Watson project, to Briscione, had been its intellectual abandon — the way in which it took food and unmoored it from culture. To Taymor, this seemed to misunderstand what food was about: “evoking a specific place, a specific experience,” a kind of mooring that a machine couldn’t replicate or comprehend. A sense of home.

The modern fear of artificial intelligence is that it will mold so perfectly to human weakness and incapacity that we no longer notice the ways it boxes us in: driverless cars we don’t pay attention to until they run amok, military drones that suddenly develop minds of their own, the fiendishly clever robot in the recent movie Ex Machina who seduces her human protagonist. In Superintelligence, the book that so persuaded Elon Musk, Nick Bostrom cites an absurd example: “An AI, designed to manage production in a factory, is given the final goal of maximizing the manufacture of paper clips, and proceeds by converting first the Earth and then increasingly large chunks of the observable universe into paper clips.”

But up close, the fears about artificial intelligence, even the more general atmosphere of uncertainty, don’t seem to really be about your basic comfort with technology. The world’s most senior technologists are as alarmed as anyone else. It may not even have so much to do with how you perceive the risk, because that depends on technologies that have not developed yet and so is hard to measure. My suspicion is that it depends mainly upon what you think about people: where you line up in the intellectual wars over human limitation and irrationality, and how much you agree with Ferrucci that we badly need the help.

In late 2013, Playwrights Horizons staged a play by Madeleine George titled The (Curious Case of the) Watson Intelligence. The play twined together four stories of beleaguered scientific assistants, each named Watson: Sherlock Holmes’s physician sidekick, Alexander Graham Bell’s faithful deputy, a fictional modern-day IT grunt, and the computer itself. Eric Brown, who lives in Connecticut, traveled down to see two separate performances. Each time, he stuck around for the panel discussion afterward. “It was pretty insightful,” Brown said. “Just this notion of the assistant — of the role that assistants play and in some cases do they get the credit that they sometimes deserve.” Brown had been an assistant of a kind for years to Ferrucci. “And just,” Brown said, “can you be happy in that role as the assistant, to the more famous and better-known master?”

One icy Monday evening, at the very end of a historically icy winter, I drove to Yorktown Heights to meet Ferrucci for dinner at his home. He lives at the end of the same cul-de-sac where he has been since he finished his doctorate and returned to IBM, which was then investing heavily in artificial intelligence. In his living room, he showed me a memento from the Jeopardy! challenge he seems to cherish, a kind of fight poster advertising the Watson match that everyone on his team had signed.

Ferrucci left IBM two years ago to take a job at the hedge fund Bridgewater Associates in Westport. “I loved IBM,” he said. “IBM was my home.” He mentioned how he’d first found out about the company’s research when he was a high-school kid absorbed by the whole notion of AI and plunking away on an ancient Apple in his basement: IBM had run ads in his father’s magazines, petitioning scientists to come work for it, promising a freedom to explore. But though he thought what IBM was doing with Watson was interesting and tactically smart, it wasn’t for him. “The pure commercialization,” he said. “It was just less meaningful.”

The hedge-fund job has obvious benefits, though it has taken him further from the cutting edge of AI research, and Ferrucci was eager to reminisce about Watson and imagine where the field might go. He told a story about Tchaikovsky’s Sixth Symphony, a work with deep meaning to the composer, who believed it mapped the whole arc of human experience. The first night in St. Petersburg it bombed. The composer stayed up long into the night and made a modest change, appending to the program an explanation of the symphony’s meaning. The second night it was a hit. What Tchaikovsky had, Ferrucci said, was “the human simulator” — the ability to comprehend what a human would and wouldn’t respond to. A computer could never make that change, he said, unless it could simulate the world at a level deeper than statistics. “That is where my life is going,” Ferrucci said. “That is the true AI.”

Ferrucci’s wife, Elizabeth, had made a countertop sous-vide surf and turf. “To me, there’s a very deep philosophical question that I think will rattle us more than the economic and social change that might occur,” Ferrucci said as we ate. “When machines can solve any given task more successfully than humans can, what happens to your sense of self? As humans, we went from the chief is the biggest and the strongest because he can hurt anyone to the chief is the smartest, right? How smart are you at figuring out social situations, or business situations, or solving complex science or engineering problems? If we get to the point where, hands down, you’d give a computer any task before you’d give a person any task, how do you value yourself?”

Ferrucci said that though he found Tegmark’s sensitivity to the apocalypse fascinating, he didn’t have a sense of impending doom. (He hasn’t signed Tegmark’s statement.) Some jobs would likely dissolve and policymakers would have to grapple with the social consequences of better machines, Ferrucci said, but this seemed to him just a fleeting transition. “I see the endgame as really good in a very powerful way, which is human beings get to do the things they really enjoy — exploring their minds, exploring thought processes, their conceptualizations of the world. Machines become thought-­partners in this process.”

This reminded me of a report I’d read of a radical group in England that has proposed a ten-hour human workweek for the day when we are dependent upon a class of beneficent robot labor. Their slogan: “Luxury for All.” So much of our reaction to artificial intelligence is relative. The billionaires fear usurpation, a loss of control. The middle-class engineers dream of leisure. The idea underlying Ferrucci’s vision of the endgame was that perhaps people simply aren’t suited for the complex cognitive tasks of work because, in some basic biological sense, we just weren’t made for it. But maybe we were made for something better.

Ferrucci showed me a long video on his tablet of his daughter playing the piano. When it ended, he pressed PLAY again, and so we watched the recital a second time. He said, “I am not afraid.”

*This article appears in the May 18, 2015 issue of New York Magazine.