Ten years ago, William MacAskill came to MIT in search of converts.
The Scottish philosopher was already one of the world’s most prominent proponents of “effective altruism” (or “EA”), a social movement dedicated to “using evidence and reason to figure out how to benefit others as much as possible.” In 2012, MacAskill believed that many idealistic young people had misconceptions about how they could best improve the world. Specifically, such individuals had a tendency to seek low-paying jobs at philanthropies and progressive nonprofits even though, in many cases, such institutions had no great need for their labor. On the contrary, such jobs often attracted a superabundance of qualified applicants. Therefore, a young idealist would make virtually no positive contribution to the world by taking such a position; in their absence, another similarly skilled person would perform the same role roughly as well.
In truth, highly effective charities didn’t need more idealistic workers; they needed more deep-pocketed donors. An effective altruist who went into finance, earned $200,000, and then devoted $50,000 of that sum to purchasing bed nets for the global poor would do far more good than one who elbowed past equally gifted Ivy League grads for an entry-level job at UNICEF.
The former strategy was called “earning to give.” And MacAskill hoped to guide MIT’s aspiring altruists onto that path. While in Cambridge, he learned of an especially promising candidate for the cause, an undergraduate named Sam Bankman-Fried.
Over lunch, Bankman-Fried told MacAskill that he had recently become a vegan and wanted a job in which he could advance the cause of animal welfare. MacAskill suggested that he would reduce animal suffering far more if he tried to make a lot of money and then donated it to relevant charities. Bankman-Fried took his advice.
A couple weeks ago, that lunch looked like the best thing that had ever happened to effective altruism. Now, it looks like the worst.
A brief history of Sam Bankman-Fried’s rise and fall.
Following his encounter with MacAskill, Bankman-Fried completed his degree and then pursued a career in finance. First, he took a job at the proprietary trading firm Jane Street Capital. Then, with a fellow effective altruist, he founded his own hedge fund, Alameda Research. In 2018, SBF made his fortune by exploiting a flaw in the global bitcoin market, which kept its price reliably higher in Japan than in the U.S. One year later, he founded FTX, an exchange for cryptocurrency derivatives. Along the way, his net worth swelled to $26 billion. And as he prospered, EA-aligned philanthropies did too. Indeed, the movement stopped encouraging idealistic young people to “earn to give,” as the most promising causes now needed talent more than money.
And then things fell apart.
As interest rates rose, crypto markets began crashing. And Bankman-Fried (allegedly) started taking money deposited by FTX’s customers and lending it to Alameda Research, so that his increasingly illiquid hedge fund could finance risky trades. As collateral for these loans, Alameda Research put up FTT tokens, which is to say, FTX’s own made-up currency, a token with little inherent value. This was probably illegal. And it was definitely a bad idea. Alameda Research executed a lot of losing trades with FTX customers’ money. Pretty soon, its balance sheet consisted largely of FTT tokens. When that fact became public knowledge, FTX’s chief rival, Binance, responded by selling off its reserve of 23 million FTT tokens, torpedoing their value and triggering a de facto bank run at FTX as customers scrambled to trade in their tokens for cash, only to find that the exchange had misplaced billions of their dollars.
Now, effective altruism is in crisis. Some of the movement’s institutions are suddenly broke. The FTX Future Fund, which focuses on minimizing threats to humanity’s long-term future, will not be able to honor many of its committed grants.
For the movement as a whole, though, Bankman-Fried’s collapse may be less devastating in its financial costs than in its reputational ones. After all, Bankman-Fried was, by most accounts, an EA true believer. And the movement’s teachings apparently led him to seek a fortune in the most lawless and socially useless segment of the finance industry, and then, allegedly, to commit fraud on a massive scale. His career now resembles a caricature of the “earning to give” concept. And yet, before last week, effective altruists held him up as a model to emulate. In its explainer on the concept of earning to give, the EA career-advice organization 80,000 Hours features a testimonial from Bankman-Fried (below a recently appended disclaimer disavowing him).
All of which raises the question: Did Sam Bankman-Fried corrupt effective altruism, or did effective altruism corrupt Sam Bankman-Fried?
Years ago, effective altruists argued that engaging in harmful financial activities was never acceptable (except when it was).
Many people think of effective altruism as a ruthlessly utilitarian philosophy. Like utilitarians, EAs strive to do the greatest good for the greatest number. And they seek to subordinate common-sense moral intuitions to that aim. Typically, people feel that they have profound moral obligations to their family, few to poor people in other countries, and none whatsoever to chickens. EAs insist that all suffering is equally bad, and all joy equally good, regardless of the identity of the sentient being experiencing it. And we are therefore obliged to save the lives of as many of the global poor as we can, and to spare those of factory-farmed animals. Fulfilling that obligation means prioritizing efficacy over emotional satisfaction. Volunteering in a soup kitchen might make one feel good. But earning a thousand dollars and then donating it to malnourished families will do more to alleviate hunger.
Effective altruists’ insistence on the supreme importance of consequences invites the impression that they would countenance any means for achieving a righteous end. But EAs have long disavowed that position. While there is some philosophical diversity within the movement, its dominant view is that consequentialist moral reasoning must be tempered by fidelity to basic human rights and popular moral intuitions. This is because doing harm to achieve a greater good rarely succeeds on its own terms (harmful actions often produce unexpected, harmful consequences), and because no one can be certain whether consequentialism is the correct moral framework. So it’s best to hedge one’s bets a bit by showing some deference to rules-based ethical systems.
After news of SBF’s apparent acts of fraud and thievery broke, William MacAskill reiterated these points on Twitter. “For years, the EA community has emphasised the importance of integrity, honesty, and the respect of common-sense moral constraints,” MacAskill wrote. “If customer funds were misused, then Sam did not listen; he must have thought he was above such considerations.”
A 2017 article by MacAskill and Benjamin Todd titled “Is It Ever OK to Take a Harmful Job in Order to Do More Good? An In-Depth Analysis” lends credence to MacAskill’s claim. And yet, that same document suggests that, in some extraordinary circumstances, profoundly good ends can justify odious means.
MacAskill and Todd argue that, “in the vast majority of cases, it’s a mistake to pursue a career in which the direct effects of the work are seriously harmful, even if the overall benefits of that work seem greater than the harms.” They provide a variety of arguments for this position. One is that there is reason to believe that rights-based moralities are correct, and so one should show some deference to them. Intuition tells most people that it is not ethical to kill one person, harvest their organs, and then use those organs to save the lives of five people. And if that’s wrong, then engaging in harmful economic activity to generate funds for charity probably is too.
Separately, they suggest that performing a socially destructive job for the sake of bankrolling effective altruism is liable to fail on its own terms. They list several reasons why this is the case. The first seems rather prophetic, in the wake of SBF’s financial ruin:
What’s more, MacAskill and Todd’s go-to example of an impermissible career is “a banker who commits fraud.” Repeatedly, they use the figure of a fraudulent banker as a foil for morally neutral (or else, minimally harmful) economic actors, such as a banker who is merely overpaid.
All this said, MacAskill and Todd’s case against doing harm is laced with escape clauses. Although it is wrong to kill one person to save five people, the authors write, “Almost all ethicists agree that these rights and rules are not absolute. If you had to kill one person to save 100,000 others, most would agree that it would be the right thing to do.”
In “exceptional circumstances,” the EAs allow, consequentialism may trump other considerations. And Sam Bankman-Fried might reasonably have considered his own circumstances exceptional. It is highly unusual for a devotee of an altruistic movement to amass a $16 billion fortune, thereby liberating all of that movement’s institutions from cash constraints. If killing one person to save 100,000 is morally permissible, then couldn’t one say the same of scamming crypto investors for the sake of feeding the poor (and/or, preventing the robot apocalypse)?
Bankman-Fried apparently believed that effective altruism required him to gamble away his fortune.
If effective altruism might have helped SBF rationalize unscrupulous dealings, it also seems to have informed his extraordinary tolerance for risk.
As Matthew Zeitlin notes in Grid, Bankman-Fried was quite open about his philosophical commitment to making reckless financial bets. In an interview with the effective altruist Robert Wiblin last April, SBF argued that EA investors should have a higher tolerance for risk than ordinary rich people. After all, for an individual human being, money has diminishing marginal value. Seeing your net worth grow from $0 to $1 million will do more for your quality of life than seeing your net worth rise from $14 billion to $14.1 billion. At a certain point, then, the hedonistic billionaire is better off safeguarding their fortune than gambling with it.
But things are different for the philanthropic plutocrat. For them, every additional $1 million translates into more lives saved. If SBF feeds 1,000 hungry children, the world’s remaining malnourished kids don’t become any less desperate for his benevolence. So every dollar counts.
“More good is more good,” Bankman-Fried told Wiblin. “It’s not like you did some good, so good doesn’t matter anymore. But how about money? Are you able to donate so much that money doesn’t matter anymore?”
Bankman-Fried went on to say that “the expected value of how much impact you have, I think, is going to be a function sort of weighted towards upside tail cases. That’s what I think my prior would be. And if your impact is weighted towards upside tail cases, then what’s that probability distribution of impact probably look like? I think the odds are, it has decent weight on zero. Maybe majority weight.”
Here, Bankman-Fried is referring to an expected-value equation, a rough heuristic for determining the potential value of a given course of action, which is popular among effective altruists. Basically, you take the probability that a given action will be successful, multiply it by the amount of good that action would accomplish if successful, and the product is the “expected value” of that action.
SBF’s intuition was that, since the scope of worthwhile philanthropic endeavors is near infinite, even actions with a very small probability of drastically increasing his fortune (and thus, his capacity to fund EA causes) were worthwhile. After all, if you multiply .01 (i.e., one percent) by a near-infinite number, you end up with a very large sum.
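The heuristic described above is simple enough to write down directly. Here is a minimal sketch of that expected-value calculation, using entirely hypothetical dollar figures for illustration:

```python
def expected_value(p_success, payoff_if_success):
    """Expected value of an all-or-nothing bet: the probability of
    success multiplied by the payoff if it succeeds (and zero otherwise)."""
    return p_success * payoff_if_success

# A guaranteed $1 billion of philanthropic impact vs. a 1 percent shot
# at $200 billion (hypothetical numbers, for illustration only):
safe = expected_value(1.0, 1_000_000_000)      # $1.0 billion
risky = expected_value(0.01, 200_000_000_000)  # $2.0 billion

# On expected value alone, the long shot dominates the sure thing,
# even though it comes up empty 99 percent of the time.
```

This is the sense in which a small probability multiplied by an enormous payoff can outweigh a certainty: the arithmetic is indifferent to how often the bet actually fails.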
And yet, although the expected value of taking such risks was high, the “probability distribution of impact” would have “decent weight on zero”; meaning that pursuing the highest expected value would, more often than not, cause SBF to lose his entire fortune. Yet Bankman-Fried suggested that this was a risk he was morally compelled to take.
Caroline Ellison, an SBF associate and CEO of Alameda Research, put the point more accessibly in a now-deleted Tumblr post. “Is it infinitely good to do double-or-nothing coin flips forever? Well, sort of, because your upside is unbounded and your downside is bounded at your entire net worth. But most people don’t do this, because … they really don’t want to lose all of their money. (Of course those people are lame and not EAs; this blog endorses double-or-nothing coin flips and high leverage.)”
In other words, according to their own public statements, Bankman-Fried and Ellison both believed that they had a moral obligation to make “double-or-nothing” financial bets over and over again — even though this would almost certainly lead to financial ruin eventually — because there was a small chance that they would just keep winning such bets, and thereby save humanity.
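The tension between a growing expected value and near-certain ruin can be made concrete with a few lines of arithmetic. The sketch below assumes a slightly favorable double-or-nothing bet that stakes the entire bankroll each time; the 60 percent win probability is a hypothetical figure chosen purely for illustration:

```python
def expected_wealth(p_win, flips, start=1.0):
    """Expected wealth after repeatedly betting everything double-or-nothing.
    Each flip multiplies expected wealth by 2 * p_win."""
    return start * (2 * p_win) ** flips

def survival_prob(p_win, flips):
    """Probability of not being wiped out: you must win every single flip."""
    return p_win ** flips

# A bet that wins 60 percent of the time (hypothetical), taken 20 times in a row:
ev = expected_wealth(0.6, 20)  # roughly 38x the starting bankroll
p = survival_prob(0.6, 20)     # under a 0.004% chance of keeping anything
```

As the number of flips rises, expected wealth grows without bound while the probability of ending with anything at all shrinks toward zero: the distribution puts “majority weight” on zero, just as Bankman-Fried and Ellison described.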
And some of the world’s most reputable investors handed them billions of dollars anyway.
It seems unfair to attribute SBF’s bizarre financial philosophy to effective altruism writ large. I’m sure that if SBF went to MacAskill, or any of his largesse’s other beneficiaries, and asked, “Do you think I should make incredibly risky financial bets over and over again until I’m liquidated or become a trillionaire?,” they would have said, “No, please do not bankrupt our institutions.”
On the other hand, many within the effective-altruism community have argued that MacAskill and his acolytes turned expected-value calculations from a tool into a kind of fetish object. MacAskill and SBF were both proponents of a specific variant of EA thought known as “longtermism.” Longtermists argue that EAs should prioritize improving humanity’s long-term future, since the number of potential future humans is vastly greater than the number currently alive. This has led them to focus on minimizing catastrophic risks like human extinction, even though one’s odds of successfully reducing the risk of an apocalypse in the year 3000 are far lower than, say, one’s odds of saving a single human life through philanthropy today. MacAskill has justified this prioritization with reference to expected-value equations, arguing that donating $10,000 toward initiatives that would reduce the probability of an AI apocalypse by just “0.001 percent” would do orders of magnitude more “expected” good than donating a similar sum to anti-malaria initiatives.
It seems fair to say that SBF’s approach to risk-taking represented an abuse of effective altruism’s tools. But one might say the same of some EAs’ approach to longtermism more broadly. As Tyler Cowen notes, FTX’s Future Fund failed to recognize the existential risk to itself, even though SBF was advertising that risk in interviews on effective-altruism-themed podcasts. Given that longtermists failed to recognize a risk so blatant and proximate to themselves, they shouldn’t have much confidence in their ability to accurately anticipate existential risks to humanity a millennium or two from now.
Effective altruists aren’t responsible for SBF’s alleged fraud. But some were complicit in FTX’s legal misconduct.
Ultimately, effective altruists bear little responsibility for SBF’s apparent theft of customer money. As mentioned above, the movement’s archetypal example of something one shouldn’t do for the sake of generating philanthropic funds was financial fraud. And if Bankman-Fried’s investors and business partners failed to sniff out his alleged corruption, there is little reason to have expected MacAskill & Co. to discern it.
Thus, the latter do not need to answer for any fraud. But they do need to answer for their celebration of Bankman-Fried’s legitimate crypto dealings. In their 2017 article on the justifiability of pursuing a harmful career, MacAskill and Todd emphasize that not all finance jobs are created equal, and they counsel against taking work in the most deceitful and exploitative realms of the industry.
Even before this month’s revelations, it was quite clear that SBF was operating in such a realm. During the 2022 Super Bowl, FTX aired an ad that explicitly encouraged retail investors who do not understand cryptocurrency to invest in it anyway.
That appeal is exploitative on its face. But what makes it especially unscrupulous is that SBF himself argued that much of the crypto industry consisted of Ponzi schemes. In an infamous interview with the Odd Lots podcast, Bankman-Fried likened many crypto enterprises to companies that dress up an inherently worthless box as “life-changing,” then persuade people to place their money in the box in exchange for a box token. And the more people put money in the box, the more seemingly valuable the box tokens become. “This box is worth zero obviously,” SBF said. “But on the other hand, if everyone kind of now thinks that this box token is worth about a billion-dollar market cap, that’s what people are pricing it at and [it] sort of has that market cap.”
In context, SBF was not describing his own business, but merely the business model of many of the firms that used his FTX exchange. In principle, it is possible to operate a legitimate and functional exchange that facilitates Ponzi schemes, but which is not itself a Ponzi scheme.
Effective altruists couldn’t have known that SBF was describing how his own business apparently worked. But had they merely listened to what SBF said on podcasts and watched what his firm advertised during the Super Bowl, they would have gleaned that Bankman-Fried was encouraging amateur investors to put their money into crypto they didn’t understand, even as he believed much of the crypto world to consist of Ponzi schemes.
To be sure, EAs had a massive incentive to blind themselves to this reality. If one of your social movement’s members finds a way to generate tens of billions of dollars for the cause, then it becomes awfully unappealing to view his enterprises with a critical eye. And SBF styled himself as the most ethical man in crypto. He didn’t scam people into buying meme coins; he merely ran an exchange. What’s more, against the wishes of his peers, he lobbied for federal regulation of the industry. For months, the financial reporter Michael Lewis had been embedded with SBF. And until last week, the author had imagined his book on FTX as the story of a battle between “the Luke Skywalker and Darth Vader of crypto,” with Bankman-Fried cast in the role of the Jedi knight. It isn’t hard then to see how EAs may have convinced themselves that the crypto industry was better off for having SBF in it.
Alternatively, it is possible that MacAskill and his peers recognized that running a crypto exchange was inherently unethical, but concluded that it was nevertheless justifiable given the scale of the good that SBF’s fortune would do. Given EA’s premises, it is not obviously unjust to impose a de facto tax on the most gullible gamblers within the top decile of the global income distribution, and then transfer those funds to the global poor and/or pandemic prevention.
But if legally scamming crypto enthusiasts to finance the greater good was justifiable in theory, it seems to have failed in practice. And for reasons that MacAskill and Todd had anticipated years earlier. In their list of reasons to avoid taking a harmful job, the EAs noted that “being around unethical people all day may mean that you’ll become less motivated, build a worse network for social impact, and become a less moral person in general. That’s because you might pick up the attitudes and social norms of the people you spend a lot of time with.”
Before Sam Bankman-Fried went into crypto, he was an idealistic vegan interested in animal welfare. As of last week, he was (apparently) a Ponzi schemer who spent millions sustaining an opulent lifestyle in the Bahamas while obsessing over the threat of the robot apocalypse.
Personally, I still think effective altruism is a worthwhile movement. It has successfully transferred hundreds of millions of dollars to the global poor, lobbied for full-employment macroeconomic policies in the United States, and encouraged lawmakers to make prudent investments in pandemic preparedness.
But the SBF saga spotlights the philosophy’s greatest liabilities. Effective altruism invites “ends justify the means” reasoning, no matter how loudly EAs disavow such logic. And it also lends itself to maniacal fetishization of “expected-value” calculations, which can then be used to justify virtually anything.
No belief system is invulnerable to extremism. But if EAs don’t learn from SBF’s mistakes, this may not be the last time that their movement inspires ineffective avarice.