Proust Wasn’t a Neuroscientist. Neither was Jonah Lehrer.


“We are all bad apples,” wrote Jonah Lehrer, in probably the last back-cover endorsement of his career. “Dishonesty is everywhere … It’s an uncomfortable message, but the implications are huge.”

Lehrer’s blurb was for behavioral economist Dan Ariely’s The (Honest) Truth About Dishonesty: How We Lie to Everyone—Especially Ourselves. Among Ariely’s bite-size lessons: We all cheat by a “fudge factor” of roughly 15 percent, regardless of how likely we are to get caught; a few of us advance gradually to bigger and bigger fudges, often driven by social pressures; and it’s only when our backs are up against the wall that we resort to brazen lies.

Lehrer, 31, had already established the kind of reputation that made his backing invaluable to a popular science writer. Thanks to three books, countless articles and blog posts, and many turns on the lecture circuit, Lehrer was perhaps the leading explainer of neuroscience this side of a Ph.D. He was kind enough to interview Ariely this past June for the Frontal Cortex, a blog Lehrer had started in 2006 and carried with him from one high-profile appointment to the next. The New Yorker had begun hosting it that month, after Lehrer was hired as a staff writer—another major career milestone. But newyorker.com didn’t run the Ariely story, because by the time he wrote it, Lehrer had already been banned from his own blog. Two weeks earlier, readers had discovered that he had been rampantly “self-plagiarizing,” recycling his own blog posts among different media outlets. Lehrer held onto his three-day-old print contract, but the blog was on ice.

Then it got so much worse. Four excruciating months later, Jonah Lehrer is known as a fabricator, a plagiarist, a reckless recycler. He’s cut-and-pasted not just his own stories but at least one from another journalist; he’s invented or conflated quotes; and he’s reproduced big errors even after sources pointed them out. His publisher, Houghton Mifflin Harcourt, will soon conclude a fact-check of his three books, the last of which, Imagine, was recalled from bookstores—a great expense for a company that, like all publishing houses, can’t afford to fact-check most books in the first place. In the meantime, he’s been completely ostracized. It’s unclear if he’ll ever write for a living again.

If the public battering seems excessive now, four months in, that should come as no surprise. That’s how modern scandals go—burning bright, then burning out, leaving a vacuum that fills with sympathy. It’s especially true in cases like Lehrer’s, where the initial fury is narrowly professional, fueled by Schadenfreude and inside-baseball ethical disputes. It was fellow journalists who felled Lehrer, after all, not the sources he betrayed. But the funny thing is that while the sins they accused him of were relatively trivial, more interesting to his colleagues than his readers, Lehrer’s serious distortions—of science and art and basic human motivations—went largely unnoticed. In fact, by the time he was caught breaking the rules of journalism, Lehrer was barely beholden to the profession at all. He was scrambling up the slippery slope to the TED-talk elite: authors and scientists for whom the book or the experiment is just part of a multimedia branding strategy. He was on a conveyor belt of blog posts, features, lectures, and inspirational books, serving an entrepreneurial public hungry for futurist fables, easy fixes, and scientific marvels in a world that often feels tangled, stagnant, and frustratingly familiar. He was less interested in wisdom than in seeming convincingly wise.

The remarkable thing about that transformation is that it wasn’t all that unusual. In his short, spectacular career, Lehrer had two advocate-editors who quickly became his exilers. The first, Wired editor Chris Anderson, has himself been caught plagiarizing twice, the second time in an uncorrected proof. The often-absentee editor of a futurist magazine that may be the house journal of the lecture circuit, Anderson makes his living precisely as Lehrer did—snipping and tailoring anecdotal factoids into ready-to-wear tech-friendly conclusions.

The second, David Remnick, has invested resources in The New Yorker’s own highbrow talk series, The New Yorker Festival, in which staff writers function as boldfaced brand experts in everything from economics to medicine to creativity. The tone of those talks mixes the smooth technospeak of the Aspen Ideas Festival—co-hosted by rival magazine The Atlantic—with the campfire spirit of first-person storyteller confab the Moth. It was at the Moth that The New Yorker’s biggest brand, The Tipping Point author Malcolm Gladwell, got into hot water in 2005 by telling a story about games he played in the pages of the Washington Post that turned out to be almost entirely untrue. In print, Gladwell is often knocked for reducing social science to easy epiphanies and is occasionally called out for ignoring evidence that contradicts his cozy theories—most recently over a piece this past September on Jerry Sandusky. Yet he also serves as a pioneer in the industry of big-idea books—like those by his New Yorker colleague James Surowiecki, the “Freakonomics” guys, Dan Ariely, and others. Theirs is a mixed legacy, bringing new esoteric research to a lay audience but sacrificing a great deal of thorny complexity in the process.


In the world of magazines, of course, none of us is immune to slickness or oversimplification—New York included. But two things make Lehrer’s glibness especially problematic, and especially representative. First, conferences and corporate speaking gigs have helped replace the journalist-as-translator with the journalist-as-sage; in a magazine profile, the scientist stands out, but in a TED talk, the speaker does. And second, the scientific fields that are the most exciting to today’s writers—neuroscience, evolutionary biology, behavioral economics—are fashionable despite, or perhaps because of, their newness, which makes breakthrough findings both thrilling and unreliable. In these fields, in which shiny new insights so rarely pan out, every popularizer must be, almost by definition, a huckster. When science doesn’t give us the answers we want, we find someone who will.

“I’ve never really gotten over the sense of fraudulence that comes with being onstage,” Lehrer once said. Young and striving and insecure, he was both a product of this glib new world and a perpetrator of its swindles. He was also its first real victim.

Lehrer, who grew up in L.A. and attended prestigious North Hollywood High School, was always precocious. At 15, he won $1,000 in a contest run by Nasdaq with an essay calling the stock market “a crucial bond between plebeians and patricians.” Two years later, he and some other students made the finals of the national Fed Challenge competition with a cogent argument against raising interest rates. “We felt we shouldn’t act on a guess or a premonition,” he told a newspaper. “We should act on the basis of statistics.”

At Columbia University, Lehrer majored in neuroscience, helped edit the literary Columbia Review, and spent a few years working in the lab of Eric Kandel. (Journalists and scientists often mistook this undergraduate experience for lab work that left Lehrer just shy of a Ph.D.) The Nobel Prize–winning neuroscientist, who helped uncover the molecular workings of memory, remembers his former lab assistant fondly. “He was the most gracious, decent, warm, nice kid to interact with,” says Kandel. “Cultured, fun to have a conversation with—and knew a great deal about food. I was surprised he didn’t go into science, because he had a real curiosity about it.”

Lehrer won a Rhodes Scholarship, then used some of his research at Oxford to write his first book, Proust Was a Neuroscientist. Published in late 2007, it was a grab bag of fun facts in the service of an earnest point: that great modernist artists anticipated the discoveries of brain science. It had a senior-thesis feel, down to an ambitious coda. The scientist and novelist C. P. Snow, in a 1963 follow-up to his famous “Two Cultures” lecture, had called for a “third culture” to bridge science and art—a prophecy that had been fulfilled, but to the advantage, Lehrer thought, of science. In Proust, Lehrer proposed a “fourth culture,” in which art would be a stronger “counterbalance to the glories and excesses of scientific reductionism.”

A year earlier, Lehrer had begun blogging on Frontal Cortex. After hundreds of posts, he began to find traction in magazines. Mark Horowitz, then an editor at Wired, brought him in to write a feature on a project to map every gene expressed in the brain. “It was a very complex piece,” says Horowitz, “with lots of reporting, lots of science. I thought that was a breakthrough for him.” It was, he adds, thoroughly fact-checked; none of Lehrer’s magazine stories have been found to have serious errors.

“If you asked him, ‘How many ideas do you have for an article?’ he had ten ideas, more than anyone else,” Horowitz says. “That’s why he was able to churn out so many blog posts.” They were long posts, too, the kind that quickly became the basis for print stories. In 2010, Frontal Cortex moved over to wired.com. “Chris [Anderson] loved Jonah Lehrer—loved him,” Horowitz says. “Any story idea he had, it was, ‘See if Jonah will do it.’ He was good, he was young, and he was getting better with every story.”

Lehrer’s tortuous fall began on what should have been a day of celebration. Monday, June 18, was his official start date as a New Yorker staff writer. That evening, an anonymous West Coast journalist wrote to media watchdog Jim Romenesko, noting that one of Lehrer’s five New Yorker blog posts—“Why Smart People Are Stupid”—had whole paragraphs copied nearly verbatim from Lehrer’s October 2011 column for The Wall Street Journal.

Within 24 hours, journalists found several more recycled posts, setting off a feeding frenzy one blogger called the “Google Game”: find a distinctive passage, Google it, pay dirt. On Wednesday, the irascible arts blogger Ed Champion unleashed “The Starr Report of the Lehrer Affair,” an 8,000-word catalogue of previously published story material Lehrer had worked into Imagine. (Never mind that drawing on earlier stories for book projects is standard practice.) That day Lehrer told the New York Times that repurposing his own material “was a stupid thing to do and incredibly lazy and absolutely wrong.”

At The New Yorker, David Remnick initially saw the “self-plagiarism” pile-on as overkill. “There are all kinds of crimes and misdemeanors in this business,” The New Yorker editor said that Thursday, explaining his decision to retain Lehrer. “If he were making things up or appropriating other people’s work, that’s one level of crime.” A source says Remnick did consider firing Lehrer outright, but decided against it.

Ironically, it was another journalist’s sympathy for Lehrer that led to his complete unraveling. “The Schadenfreude with Lehrer was pretty aggressive,” says Michael Moynihan, a freelance writer who was then guest-blogging for the Washington Post. “I was going to write a bit about the mania for destroying journalists because they’re popular and have more money than you do.” Having never read Lehrer’s books, he dug into Imagine (which purports to explain the brain science of “how creativity works”), not even knowing that its first chapter focused on one of his favorite musicians, Bob Dylan. He found some suspiciously unfamiliar quotes. “Every Dylan quote, every citation, is online,” Moynihan says. A new quote is “like finding another version of the Bible.”

He e-mailed Lehrer, who claimed to be on vacation until just after Moynihan’s Post gig was up. But off the top of his head, Lehrer offered one source for a quote—a book on Marianne Faithfull. It was wildly out of context, but no matter: Where did the other six come from? When Moynihan reached him the following week, Lehrer expressed surprise that he still planned to run the piece. That was when, as Moynihan puts it, “the calls started.”

On the phone, Lehrer seemed charming and cooperative. He said he’d pulled some quotes from a Dylan radio program as well as unaired footage for the documentary No Direction Home. Dylan’s manager, Jeff Rosen, had given him the latter. Moynihan pressed him for more details over the next several days, but Lehrer stalled.

Finally, Moynihan was able to reach Rosen, who said he’d never heard from Lehrer. When Moynihan spoke to the author, while walking down Flatbush Avenue near his Brooklyn home, the conversation grew so heated that a passing acquaintance thought it was a marital spat. Lehrer finally came clean about making up his sources. He was impressed that Moynihan had figured out how to reach Rosen. “It shows,” he told Moynihan, “you’re a better journalist than I am.”

An editor Moynihan knew at the online magazine Tablet had happily accepted Moynihan’s exposé. The Sunday before it was published, July 29, Moynihan had to ignore Lehrer’s late-night calls just to write the piece.

That same evening of July 29, David Remnick was at his first Yankees game of the season. After getting an e-mail from Tablet’s editor, Alana Newhouse, he spent most of the game in the aisle, calling and e-mailing with Newhouse, his editors, and Lehrer. It was all, as Remnick said the next day, “a terrifically sad situation.”

The next morning, a desperate Lehrer finally managed to reach Moynihan. Didn’t he realize, Lehrer pleaded, that if Moynihan went forward, he would never write again—would end up nothing more than a schoolteacher? The story was published soon after. That afternoon, Lehrer announced through his publisher that he’d resigned from The New Yorker and would do everything he could to help correct the record. “The lies,” he said, “are over now.”

The ensuing flurry of tweets and columns was split between the Google Game fact-checkers and opiners like David Carr, who felt that Lehrer’s missteps were the result of “the Web’s ferocious appetite for content” and the collapse of hard news. All of them were struggling to name Lehrer’s pathology. What none of them really asked, and what Houghton Mifflin’s fact-check won’t answer, is what Imagine would look like if it really were scrubbed of every slippery shortcut and distortion. In truth, it might not exist at all. The fabricated quotes are not just slight aberrations; they’re more like the tells of a poker player who’s gotten away with bluffing for far too long.

In case after case, bad facts are made to serve forced conclusions. Take that Dylan chapter. First, of course, there are the quotes debunked by Moynihan. Then there are the obvious factual errors: Dylan did not immediately repair from his 1965 London tour to a cabin in Woodstock to write “Like a Rolling Stone” (he took a trip with his wife first and spent only a couple of days in that cabin), and did not coin the word juiced, as Lehrer claims; it had meant “drunk” for at least a decade. (These errors were discovered weeks before Moynihan’s exposé by Isaac Chotiner, who wrote in The New Republic that “almost everything—from the minor details to the larger argument—is inaccurate, misleading, or simplistic.”) Lehrer’s analysis of Dylan’s “Like a Rolling Stone” breakthrough is also wrong. The song was hardly Dylan’s first foray into elliptical songwriting, and hardly the first piece to defy the “two basic ways to write a song”—a dichotomy between doleful bluesy literalism and “Sugar pie, honeybunch” that no serious student of American pop music could possibly swallow.

Finally and fatally, what ties the narrative together is not some real insight into the nature of Dylan’s art, but a self-help lesson: Take a break to recharge. To anyone versed in Dylan, this story was almost unrecognizable. Lehrer’s intellectual chutzpah was startling: His conclusions didn’t shed new light on the facts; they distorted or invented facts, with the sole purpose of coating an unrelated and essentially useless lesson with the thinnest veneer of plausibility.

It’s the same way with the science that “proves” the lesson. Lehrer quotes one neuroscientist, Mark Beeman, as saying that “an insight is like finding a needle in a haystack”—presumably an insight like Dylan’s, though Beeman’s study hinges on puzzles. Beeman tells me, “That doesn’t sound like me,” because it’s absolutely the wrong analogy for how the brain works—“as if a thought is embedded in one connection.” In the next chapter, Lehrer links his tale of Dylan’s refreshed creativity to Marcus Raichle’s discoveries on productive daydreaming. But Raichle tells me those discoveries aren’t about daydreaming. Then why, I ask, would Lehrer draw that conclusion? “It sounds like he wanted to tell a story.”

Consider another tall tale, this one from Lehrer’s previous book, How We Decide. Discussing what happens when we choke under pressure, Lehrer invokes the famous case of Jean Van de Velde, a golfer who blew a three-stroke lead on the eighteenth hole of the final round of the 1999 British Open. In Lehrer’s telling, the pressure caused Van de Velde to choke, focusing on mechanics and “swinging with the cautious deliberation of a beginner with a big handicap.”

Lehrer tees this up as a transition to a psychological study on overthinking. It fits perfectly into what one critic called “the story-study-lesson cycle” of this kind of book. And just like Dylan’s “insight,” it’s largely made up. Here, too, he flubs an important fact: Van de Velde didn’t lose outright; he tied and then lost the subsequent playoff. But then there is the larger deception. Most golf commentators thought at the time that he simply chose risky clubs—that he wasn’t handicapped by anxiety but undone by cockiness. Van de Velde agreed that he had played too aggressively. A month after the disaster, he said, “I could not live with myself knowing that I tried to play for safety and blew it.” Lehrer just rewrote the history to reach a conclusion flatly contradicted by the story of how Van de Velde actually decided.

Unlike the books, Lehrer’s New Yorker pieces were thoroughly fact-checked. But even there, his conclusions are facile. One popular story, published in 2010, is especially symptomatic of how he misrepresents science—and harms it in the process. Headlined “The Truth Wears Off,” it sets out to describe a curious phenomenon in scientific research: the alarmingly high number of study results that couldn’t be repeated in subsequent experiments. Researchers worry a lot about this tendency, sometimes called the “decline effect.” But they’ve settled on some hard, logical truths: Studies are incredibly difficult to design well; scientists are biased toward positive results; and the more surprising the finding, the more likely it is to be wrong. Good theories require good science, and science that can’t be replicated isn’t any good.
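Those truths are easy to make concrete. A minimal simulation (my own sketch, not anything from Lehrer’s story; every number is an illustrative assumption) shows how a bias toward publishing only surprising positive results inflates effect sizes, which then appear to “decline” once replications are run without that filter:

```python
# A sketch of how publication bias alone can manufacture a "decline effect":
# if only estimates above an "exciting" threshold get published, the
# published literature overstates the true effect, and faithful
# replications regress back toward it. All numbers are illustrative.
import random
import statistics

random.seed(42)

TRUE_EFFECT = 0.2        # the real (modest) effect size
NOISE_SD = 0.5           # sampling noise in each study's estimate
PUBLISH_THRESHOLD = 0.5  # only estimates at least this surprising get published

def run_study():
    """Simulate one study's noisy estimate of the true effect."""
    return random.gauss(TRUE_EFFECT, NOISE_SD)

published_originals = []
replications = []
for _ in range(100_000):
    original = run_study()
    if original >= PUBLISH_THRESHOLD:      # selection on surprise
        published_originals.append(original)
        replications.append(run_study())   # the replication faces no filter

print(f"true effect:               {TRUE_EFFECT:.2f}")
print(f"mean published estimate:   {statistics.mean(published_originals):.2f}")
print(f"mean replication estimate: {statistics.mean(replications):.2f}")
# The published mean lands far above the truth; replications average ~0.2.
# The effect appears to "decline," though nothing about reality changed.
```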

That wasn’t Lehrer’s approach. His story begins, instead, with the question, “Is there something wrong with the scientific method?” To answer that question definitively would require a very rigorous review of research practice—one that demonstrated persuasively that even the most airtight studies produced findings that couldn’t be replicated. Lehrer’s conclusion is considerably more mystical, offering bromides where analysis should be: “Just because an idea is true doesn’t mean it can be proved. And just because an idea can be proved doesn’t mean it’s true. When the experiments are done, we still have to choose what to believe.” It sounds an awful lot like the Zen-lite conclusion of Imagine: “Every creative story is different. And every creative story is the same. There was nothing. Now there is something. It’s almost like magic.”

By August 14, the storm seemed to be passing. That morning, Lehrer’s aunt had breakfast in California with an old friend. While fretting over her nephew, she mentioned innocently that Lehrer still had an outstanding contract with Wired. Her friend happened to be the mother of Jonah Peretti, a founder of the website BuzzFeed, who gladly published the “update.” Wired confirmed that Lehrer was still under contract, but said it wouldn’t publish anything more until a full vetting of his blog posts, already under way, was completed.

In fact, the real online vetting hadn’t even begun, and probably wouldn’t have happened if not for Lehrer’s chatty aunt. On the 16th, wired.com editor-in-chief Evan Hansen called Charles Seife, a science writer and journalism professor at NYU, to ask if he could investigate Lehrer’s hundreds of Frontal Cortex posts. It was too much work, so they settled on a mere eighteen, some of them known to have problems. Seife found a range of issues, from recycling—in most of the stories—to lazy copying from press releases, a couple of slightly fudged quotes, and three cases of outright plagiarism. (In fairness, there was only one truly egregious case of stealing.)

Accounts vary over whether Seife was expected to publish a Wired story about Lehrer, and whether his 90-minute conversation with Lehrer was on the record. Lehrer told a friend that Chris Anderson assured him there wouldn’t be a story—but then Hansen called him to ask if his remarks were on the record. Lehrer said they weren’t. Wired decided against running a full story, but allowed Seife to take it elsewhere.

Seife’s story went up at Slate on August 31, just after Wired began posting its first corrections to Lehrer’s blog posts. All Seife could say about his phone conversation with Lehrer was that it “made me suspect that Lehrer’s journalistic moral compass is badly broken.” An hour later, wired.com issued a statement saying it had “no choice but to sever our relationship” with Lehrer.

If Anderson did indeed quietly defend Lehrer against Seife, it would fit the pattern: For all the hand-wringing about the decline of print-media standards, Lehrer was not a new-media wunderkind but an old-media darling. Just as newyorker.com had banned Lehrer weeks before David Remnick accepted his resignation, it was wired.com that led the charge against Lehrer, and the print magazine that cut him loose only when it “had no choice”—after Seife published his exposé at another web magazine.

Lehrer’s biggest defenders today tend to be veterans of traditional journalism. NPR’s longtime correspondent Robert Krulwich has known Lehrer for almost a decade and used him many times on the science program “Radiolab.” “I find myself uncomfortable with how he’s been judged,” Krulwich wrote in an e-mail, weeks after “Radiolab” ran six corrections online. “If in a next round, he produces work that’s better, more careful, I hope his editors and his readers will welcome him back.” Malcolm Gladwell wrote me, “[Lehrer] didn’t twist anyone’s meaning or libel anyone or manufacture some malicious fiction … Surely only the most hardhearted person wouldn’t want to give him a chance to make things right.”

If anyone could have gotten a second chance, it was Lehrer. He’d made himself the perfect acolyte. Lehrer seemed to relish exciting ideas more than workaday craft, but editors are ravenous for ideas, and Lehrer had plenty. He fed pitches to deskbound editors and counted on them and their staffs to clean up the stories for publication. He was a fluid writer with an instinctive sense of narrative structure. In fact, he was much better at writing magazine stories than he was at blogging. His online posts were not only repetitive but too long and full of facts—true or not.

Seife spent a chunk of his time tracking down a change made to an E. O. Wilson quote in one of Lehrer’s New Yorker stories, only to find that a fact-checker had altered it at Wilson’s insistence. The piece’s editor told Seife that Lehrer was “a model of probity.” Meanwhile, wired.com—the very site that hired Seife—couldn’t vouch for any of the work Lehrer had published there. Lehrer told a friend that the first time he heard from Hansen in his two years at wired.com was during the vetting. The lack of oversight became distressingly clear when Seife, on the phone with Lehrer, demanded to know why he hadn’t asked his blog editor to fix his errors. Lehrer shot back in frustration that there was no editor.

Lehrer spent much of August writing about the affair, trying to figure out where it had all gone wrong. He came to the conclusion that he’d stretched himself too thin. His excuses fall along those lines: He told Seife that his plagiarized blog post was a rough draft he’d posted by mistake. And his latest explanation for those fabricated Dylan quotes is that he had written them into his book proposal and forgotten to fix them later. Even by his own account, then, the writing wasn’t his top priority.

The lectures, though, were increasingly important. Lehrer gave between 30 and 40 talks in 2010, all while meeting constant deadlines, starting a family, and buying a home in the Hollywood Hills. It was more than just a time suck; it was a new way of orienting his work. Lehrer was the first of the Millennials to follow his elders into the dubious promised land of the convention hall, where the book, blog, TED talk, and article are merely delivery systems for a core commodity, the Insight.

The Insight is less of an idea than a conceit, a bit of alchemy that transforms minor studies into news, data into magic. Once the Insight is in place—Blink, Nudge, Free, The World Is Flat—the data becomes scaffolding. It can go in the book, along with any caveats, but it’s secondary. The purpose is not to substantiate but to enchant.

The tradition of the author’s lecture tour goes back at least as far as Charles Dickens. But its latest incarnation began with Gladwell in 2000. The Tipping Point, his breakthrough best seller, didn’t sell itself. His publisher, Little, Brown, promoted the book by testing out its theory—that small ideas become blockbusters through social networks. Gladwell was sent across the country not just to promote his book but to lecture to booksellers about the secrets of viral marketing. Soon The New Yorker was dispatching him to speak before advertisers, charming them and implicitly promoting the magazine’s brand along with his own. Increasingly, he became a commodity in his own right, not just touring a book (which authors do for free) but giving “expert” presentations to professional groups who pay very well—usually five figures per talk.

Gladwell was quickly picked up by Bill Leigh, whose Leigh Bureau handles many of the journalist-lecturers of the aughts wave. Asked what bookers require from his journalist clients, Leigh simply says, “The takeaway. What they’re getting is that everyone hears the same thing in the same way.” The writers, in turn, get a paying focus group for their book-in-progress. Leigh remembers talking to his client, the writer Steven Johnson, about how to package his next project. “He wanted to take his book sales to the next level,” says Leigh. “Out of those conversations came his decision to slant his material with a particular innovation feel to it.” That book was titled Where Good Ideas Come From: The Natural History of Innovation. His new one is called Future Perfect.

One of the sharpest critiques of this new guard of nonspecialist Insight peddlers came from a surprising source, a veteran of the lecture circuit who decried “our thirst for nonthreatening answers.” “I’m essentially a technocrat, a knowledge worker,” says Eric Garland, who was a futurist long before that became a trendy descriptor. A past consultant to some of the Insight men’s favorite companies—3M, GM, AT&T—Garland is wistful for a time when speakers were genuine experts in sales, leadership, and cell phones. “Has Jonah Lehrer ever presented anything at a neuroscience conference?” he asks, a touch dismissively.

Lehrer has not. But Gladwell actually did give a talk, in 2004, at an academic conference devoted to decision-making. “Some people were outraged by the simplification,” remembers one attendee, who likes Gladwell’s work. Someone stood up and asked if he should be more careful about citing sources.

In reply, Gladwell offered another anecdote. A while back, he’d found out that the playwright Bryony Lavery’s award-winning play, Frozen, cribbed quotes from one of his stories. Though he might have sued Lavery for plagiarism, Gladwell concluded that, no, the definition of plagiarism was far too broad. The important thing is not to pay homage to the source material but to make it new enough to warrant the theft. Lavery’s appropriation wasn’t plagiarism but a tribute. “I thought it was a terrible answer,” says the attendee. “If there was ever an answer that was about rationalization, this was it.”

The worst thing about Lehrer’s “decline effect” story is that the effect is real—science is indeed in trouble—and Lehrer is part of the problem. Last month, the Nobel laureate behavioral economist and psychologist Daniel Kahneman sent a mass e-mail to colleagues warning that revelations of shoddy research and even outright fraud had cast a shadow over the hot new subfield of “social priming” (which studies how perceptions are influenced by subtle cues and expectations). Others blamed the broader “summer of our discontent,” as one science writer called it, on a hunger for publicity that leads to shaved-off corners or worse.

“There’s a habit among science journalists to treat a single experiment as something that is newsworthy,” says the writer-psychologist Steven Pinker. “But a single study proves very little.” The lesson of the “decline effect,” as Pinker sees it, is not that science is fatally flawed, but that readers have been led to expect shocking discoveries from a discipline that depends on slow, stutter-step progress. Call it the “TED effect.” Science writer Carl Zimmer sees it especially in the work of Lehrer and Gladwell. “They find some research that seems to tell a compelling story and want to make that the lesson. But the fact is that science is usually a big old mess.”

Sadly, Lehrer knows exactly how big a mess it is, especially when it comes to neuroscience. One of his earliest blog posts, back in 2006, was titled “The Dirty Secrets of fMRI.” Its subject was the most appealing tool of brain science, functional magnetic-resonance imaging. Unlike its cousin the MRI, fMRI can take pictures of the brain at work, tracking blood-oxygen levels in particular regions while the subject performs assigned tasks. The most active sections are thus “lit up,” sometimes in dazzling colors, seeming to show clumps of neurons in mid-thought. But in that early blog post, Lehrer warned of the machine’s deceptive allure. “The important thing,” he concluded, “is to not confuse the map of a place for the place itself.”

Lehrer repeated this warning about the limitations of fMRI in later stories. And yet, both in How We Decide and Imagine, fMRI is Lehrer’s deus ex machina. No supermarket decision or sneaker logo or song lyric is conceived without “lighting up” a telltale region of the brain. Here comes the anterior cingulate cortex, and there goes the superior temporal sulcus, and now the amygdala has its say. It reads like a symphony—magical, authoritative, deeply true.

This contradiction was pointed out back in March in a critique of Imagine published at the literary site the Millions by Tim Requarth and Meehan Crist. “It’s baffling,” they wrote, “that in Imagine Lehrer makes statements so similar to ones he thoroughly discredits” elsewhere. Then they offer an analogy to explain what’s wrong with drawing vast conclusions from pretty fMRI pictures. “Brain regions, like houses, have many functions,” they write, and just because there are people at someone’s house doesn’t mean you know what they’re doing. “While you can conclude that a party means there will be people,” they write, “you cannot conclude that people means a party.”
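The analogy is, at bottom, a point about conditional probability. A toy Bayes calculation (my own illustration, not the Millions critics’; every number is invented for the sake of the example) shows why a region that “lights up” proves far less than the pictures suggest:

```python
# A back-of-the-envelope illustration of the "reverse inference" problem.
# Suppose a region reliably activates during fear, but also activates
# during plenty of other mental states. All probabilities are made up.
p_fear = 0.10                # prior: the subject is experiencing fear
p_active_given_fear = 0.90   # the region "lights up" when fear is present
p_active_given_other = 0.30  # ...but it also lights up for other reasons

# Total probability that the region is active, then Bayes' rule.
p_active = (p_active_given_fear * p_fear
            + p_active_given_other * (1 - p_fear))
p_fear_given_active = p_active_given_fear * p_fear / p_active
print(f"P(fear | region active) = {p_fear_given_active:.2f}")  # ~0.25
# Activation raises the odds of fear, but a one-in-four chance is nowhere
# near certainty: people at the house don't mean there's a party.
```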

Rebecca Goldin, a mathematician-writer who often criticizes “neurobabble,” points out that this is exactly what’s so enticing about this brand-new science: its mystery. Imagine that fMRI is a primitive telescope, and those clumps of neurons are like all the beautiful stars you can finally see up close, but “may in fact be in different galaxies.” You still can’t discern precisely how they’re interacting. Journalist David Dobbs recently asked a table full of neuroscientists: “Of what we need to know to fully understand the brain, what percentage do we know now?” They all gave figures in the single digits. Imagine makes it look like we’re halfway there.

If Lehrer was misusing science, why didn’t more scientists speak up? When I reached out to them, a couple did complain to me, but many responded with shrugs. They didn’t expect anything better. Mark Beeman, who questioned that “needle in the haystack” quote, was fairly typical: Lehrer’s simplifications were “nothing that hasn’t happened to me in many other newspaper stories.”

Even scientists who’ve learned to write for a broad audience can be fatalistic about the endeavor. Kahneman had a surprise best seller in 2011, Thinking, Fast and Slow. His writing is dense and subtle, as complicated as pop science gets. But as he once told Dan Ariely, his former acolyte, “There’s no way to write a science book well. If you write it for a general audience and you are successful, your academic colleagues will hate you, and if you write it for academics, nobody would want to read it.”

For a long time, Lehrer avoided the dilemma by assuming it didn’t apply to him, writing not for the scientists (who shrugged off his oversimplifications) or for the editors (who fixed his most obvious errors) but for a large and hungry audience of readers. We only wanted one thing from Jonah Lehrer: a story. He told it so well that we forgave him almost everything.
