Inside effective altruism, where the far future counts a lot more than the present

Oregon 6th Congressional District candidate Carrick Flynn seemed to drop out of the sky. With a stint at Oxford’s Future of Humanity Institute, a track record of voting in only two of the past 30 elections, and $11 million in support from a political action committee established by crypto billionaire Sam Bankman-Fried, Flynn didn’t fit into the local political scene, even though he’d grown up in the state. One constituent called him “Mr. Creepy Funds” in an interview with a local paper; another said he thought Flynn was a Russian bot.

The specter of crypto influence, a slew of expensive TV ads, and the fact that few locals had heard of or spoken to Flynn raised suspicions that he was a tool of outside financial interests. And while the rival candidate who led the primary race promised to fight for issues like better worker protections and stronger gun legislation, Flynn’s platform prioritized economic growth and preparedness for pandemics and other disasters. Both are pillars of “longtermism,” a growing strain of the ideology known as effective altruism (or EA), which is popular among an elite slice of people in tech and politics.

Even during an actual pandemic, Flynn’s focus struck many Oregonians as far-fetched and foreign. Perhaps unsurprisingly, he ended up losing the 2022 primary to the more politically experienced Democrat, Andrea Salinas. But despite his lackluster showing, Flynn made history as effective altruism’s first candidate for political office.

Since its birth in the late 2000s, effective altruism has aimed to answer the question “How can those with means have the most impact on the world in a quantifiable way?”—and supplied clear methodologies for calculating the answer. Directing money to organizations that use evidence-based approaches is the technique EA is best known for. But as it has expanded from an academic philosophy into a community and a movement, its ideas of the “best” way to change the world have evolved as well.

“Longtermism,” the belief that unlikely but existential threats like a humanity-destroying AI revolt or international biological warfare are humanity’s most pressing problems, is integral to EA today. Of late, it has moved from the fringes of the movement to its fore with Flynn’s campaign, a flurry of mainstream media coverage, and a new treatise published by one of EA’s founding fathers, William MacAskill. It’s an ideology that’s poised to take the main stage as more believers in the tech and billionaire classes—which are, notably, mostly male and white—start to pour millions into new PACs and projects like Bankman-Fried’s FTX Future Fund and Longview Philanthropy’s Longtermism Fund, which focus on theoretical menaces ripped from the pages of science fiction.

EA’s ideas have long faced criticism from within the fields of philosophy and philanthropy that they reflect white Western saviorism and an avoidance of structural problems in favor of abstract math—not coincidentally, many of the same objections lobbed at the tech industry at large. Such charges are only intensifying as EA’s pockets deepen and its purview stretches into a galaxy far, far away. Ultimately, the philosophy’s influence may be limited by the accuracy of those charges.

What is EA?

If effective altruism were a lab-grown species, its origin story would begin with DNA spliced from three parents: applied ethics, speculative technology, and philanthropy. 

EA’s philosophical genes came from Peter Singer’s brand of utilitarianism and Oxford philosopher Nick Bostrom’s investigations into potential threats to humanity. From tech, EA drew on early research into the long-term impact of artificial intelligence carried out at what’s now known as the Machine Intelligence Research Institute (MIRI) in Berkeley, California. In philanthropy, EA is part of a growing trend toward evidence-based giving, driven by members of the Silicon Valley nouveau riche who are eager to apply the strategies that made them money to the process of giving it away.

While these origins may seem diverse, the people involved are linked by social, economic, and professional class, and by a tech-utopian worldview. Early players—including MacAskill, a Cambridge philosopher; Toby Ord, an Oxford philosopher; Holden Karnofsky, cofounder of the charity evaluator GiveWell; and Dustin Moskovitz, a cofounder of Facebook who founded the nonprofit Open Philanthropy with his wife, Cari Tuna—are all still leaders in the movement’s interconnected constellation of nonprofits, foundations, and research organizations.

For effective altruists, a good cause is not good enough; only the very best should get funding in the areas most in need. Those areas are usually, by EA calculations, developing nations. Personal connections that might encourage someone to give to a local food bank or donate to the hospital that treated a parent are a distraction—or worse, a waste of money.

The classic example of an EA-approved effort is the Against Malaria Foundation, which purchases and distributes mosquito nets in sub-Saharan Africa and other areas heavily affected by the disease. The price of a net is very small compared with the scale of its life-saving potential; this ratio of “value” to cost is what EA aims for. Other popular early EA causes include providing vitamin A supplements and malaria medication in African countries, and promoting animal welfare in Asia.

Within effective altruism’s framework, selecting one’s career is just as important as choosing where to make donations. EA defines a professional “fit” by whether a candidate has comparative advantages like exceptional intelligence or an entrepreneurial drive, and if an effective altruist qualifies for a high-paying path, the ethos encourages “earning to give,” or dedicating one’s life to building wealth in order to give it away to EA causes. Bankman-Fried has said that he’s earning to give, even founding the crypto platform FTX with the express purpose of building wealth in order to redirect 99% of it. Now one of the richest crypto executives in the world, Bankman-Fried plans to give away up to $1 billion by the end of 2022.

“The allure of effective altruism has been that it’s an off-the-shelf methodology for being a highly sophisticated, impact-focused, data-driven funder,” says David Callahan, founder and editor of Inside Philanthropy and the author of a 2017 book on philanthropic trends, The Givers. Not only does EA suggest a clear and decisive framework, but the community also offers a set of resources for potential EA funders—including GiveWell, a nonprofit that uses an EA-driven evaluation rubric to recommend charitable organizations; EA Funds, which allows individuals to donate to curated pools of charities; 80,000 Hours, a career-coaching organization; and a vibrant discussion forum at Effectivealtruism.org, where leaders like MacAskill and Ord regularly chime in.

Effective altruism’s original laser focus on measurement has contributed rigor to a field that has historically lacked accountability for big donors with last names like Rockefeller and Sackler. “It has been an overdue, much-needed counterweight to the typical practice of elite philanthropy, which has been very inefficient,” says Callahan.

But where exactly are effective altruists directing their earnings? Who benefits? As with all giving—in EA or otherwise—there are no set rules for what constitutes “philanthropy,” and charitable organizations benefit from a tax code that incentivizes the super-rich to establish and control their own charitable endeavors at the expense of public tax revenues, local governance, or public accountability. EA organizations are able to leverage the practices of traditional philanthropy while enjoying the shine of an effectively disruptive approach to giving.

The movement has formalized its community’s commitment to donate with the Giving What We Can Pledge—mirroring another old-school philanthropic practice—but there are no giving requirements to be publicly listed as a pledger. Tracking the full influence of EA’s philosophy is tricky, but 80,000 Hours has estimated that $46 billion was committed to EA causes between 2015 and 2021, with donations growing about 20% each year. GiveWell calculates that in 2021 alone, it directed over $187 million to malaria nets and medication; by the organization’s math, that’s over 36,000 lives saved.
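
A back-of-envelope division of those published figures (a rough magnitude, not GiveWell’s official cost-effectiveness estimate) shows the kind of value-to-cost ratio EA prizes:

$$\frac{\$187{,}000{,}000 \text{ directed}}{36{,}000 \text{ lives saved}} \approx \$5{,}200 \text{ per life saved}$$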

Accountability is significantly harder with longtermist causes like biosecurity or “AI alignment”—a set of efforts aimed at ensuring that the power of AI is harnessed toward ends generally understood as “good.” Such causes, for a growing number of effective altruists, now take priority over mosquito nets and vitamin A medication. “The things that matter most are the things that have long-term impact on what the world will look like,” Bankman-Fried said in an interview earlier this year. “There are trillions of people who have not yet been born.”

Bankman-Fried’s views are influenced by longtermism’s utilitarian calculations, which flatten lives into single units of value. By this math, the trillions of humans yet to be born represent a greater moral obligation than the billions alive today. Any threats that could prevent future generations from reaching their full potential—either through extinction or through technological stagnation, which MacAskill deems equally dire in his new book, What We Owe the Future—are priority number one. 
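
A stylized version of that expected-value math, with illustrative numbers assumed here rather than drawn from any EA source, shows why the future dominates: grant even a one-in-ten-thousand chance of averting an extinction that would erase $10^{14}$ potential future people, and the expected payoff swamps everyone alive now:

$$10^{-4} \times 10^{14} \text{ future lives} = 10^{10} \text{ lives in expectation} > 8 \times 10^{9} \text{ lives today}$$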

In his book, MacAskill discusses his own journey from longtermism skeptic to true believer and urges others to follow the same path. The existential risks he lays out are specific: “The future could be terrible, falling to authoritarians who use surveillance and AI to lock in their ideology for all time, or even to AI systems that seek to gain power rather than promote a thriving society. Or there could be no future at all: we could kill ourselves off with biological weapons or wage an all-out nuclear war that causes civilisation to collapse and never recover.”

It was to help guard against these exact possibilities that Bankman-Fried created the FTX Future Fund this year as a project within his philanthropic foundation. Its focus areas include “space governance,” “artificial intelligence,” and “empowering exceptional people.” The fund’s website acknowledges that many of its bets “will fail.” (Its primary goal for 2022 is to test new funding models, but the fund’s site does not establish what “success” may look like.) As of June 2022, the FTX Future Fund had made 262 grants and investments, with recipients including a Brown University academic researching long-term economic growth, a Cornell University academic researching AI alignment, and an organization working on legal research around AI and biosecurity (which was born out of Harvard Law’s EA group). 

Bankman-Fried is hardly the only tech billionaire pushing forward longtermist causes. Open Philanthropy, the EA charitable organization funded primarily by Moskovitz and Tuna, has directed $260 million to addressing “potential risks from advanced AI” since its founding. Together, the FTX Future Fund and Open Philanthropy supported Longview Philanthropy with more than $15 million this year before the organization announced its new Longtermism Fund. Vitalik Buterin, one of the founders of the blockchain platform Ethereum, is the second-largest recent donor to MIRI, whose mission is “to ensure [that] smarter-than-human artificial intelligence has a positive impact.”

MIRI’s donor list also includes the Thiel Foundation; Ben Delo, cofounder of crypto exchange BitMEX; and Jaan Tallinn, one of the founding engineers of Skype, who is also a cofounder of Cambridge’s Centre for the Study of Existential Risk (CSER). Elon Musk is yet another tech mogul dedicated to fighting longtermist existential risks; he’s even claimed that his for-profit operations—including SpaceX’s mission to Mars—are philanthropic efforts supporting humanity’s progress and survival. (MacAskill has recently expressed concern that his philosophy is getting conflated with Musk’s “worldview.” However, EA aims for an expanded audience, and it seems unreasonable to expect rigid adherence to the exact belief system of its creators.)

Criticism and change

Even before the foregrounding of longtermism, effective altruism had been criticized for elevating the mindset of the “benevolent capitalist” (as philosopher Amia Srinivasan wrote in her 2015 review of MacAskill’s first book) and emphasizing individual agency within capitalism over more foundational critiques of the systems that have made one part of the world wealthy enough to spend time theorizing about how best to aid the rest.

EA’s earn-to-give philosophy raises the question of why the wealthy should get to decide where funds go in a highly inequitable world—especially if they may be extracting that wealth from employees’ labor or the public, as may be the case with some crypto executives. “My ideological orientation starts with the belief that folks don’t earn tremendous amounts of money without it being at the expense of other people,” says Farhad Ebrahimi, founder and president of the Chorus Foundation, which funds mainly US organizations working to combat climate change by shifting economic and political power to the communities most affected by it. 

Many of the foundation’s grantees are groups led by people of color, and it is what’s known as a spend-down foundation; in other words, Ebrahimi says, Chorus’s work will be successful when its funds are fully redistributed. 

Ebrahimi objects to EA’s approach of supporting targeted interventions rather than endowing local organizations to define their own priorities: “Why wouldn’t you want to support having the communities that you want the money to go to be the ones to build economic power? That’s an individual saying, ‘I want to build my economic power because I think I’m going to make good decisions about what to do with it’ … It seems very ‘benevolent dictator’ to me.” 

Effective altruists would respond that their moral obligation is to fund the most demonstrably transformative projects as defined by their framework, no matter what else is left behind. In an interview in 2018, MacAskill suggested that in order to recommend prioritizing any structural power shifts, he’d need to see “an argument that opposing inequality in some particular way is actually going to be the best thing to do.”

However, when a small group of individuals with similar backgrounds has determined the formula for the most critical causes and “best” solutions, the unbiased rigor EA is known for deserves scrutiny. While the top nine charities featured on GiveWell’s website today work in developing nations with communities of color, the EA community stands at 71% male and 76% white, with the largest percentage living in the US and the UK, according to a 2020 survey by the Centre for Effective Altruism (CEA).

This may not be surprising given that the philanthropic community at large has long been criticized for homogeneity. But some studies have demonstrated that charitable giving in the US is actually growing in diversity, which casts EA’s breakdown in a different light. A 2012 report by the W. K. Kellogg Foundation found that both Asian-American and Black households gave away a larger percentage of their income than white households. Research from the Indiana University Lilly Family School of Philanthropy found in 2021 that 65% of Black households and 67% of Hispanic households surveyed donated charitably on a regular basis, along with 74% of white households. And donors of color were more likely to be involved in more informal avenues of giving, such as crowdfunding, mutual aid, or giving circles, which may not be accounted for in other reports. EA’s sales pitch does not appear to be reaching these donors.

While EA proponents say its approach is data driven, EA’s calculations defy best practices within the tech industry around dealing with data. “This assumption that we’re going to calculate the single best thing to do in the world—have all this data and make these decisions—is so similar to the issues that we talk about in machine learning, and why you shouldn’t do that,” says Timnit Gebru, a leader in AI ethics and the founder and executive director of the Distributed AI Research Institute (DAIR), which centers diversity in its AI research. 

Gebru and others have written extensively about the dangers of leveraging data without undertaking deeper analysis and making sure it comes from diverse sources. In machine learning, it leads to dangerously biased models. In philanthropy, a narrow definition of success rewards alliance with EA’s value system over other worldviews and penalizes nonprofits working on longer-term or more complex strategies that can’t be translated into EA’s math.

The research that EA’s assessments rely on may also be flawed or subject to change; a 2004 study that elevated deworming—distributing drugs for parasitic infections—to one of GiveWell’s top causes has come under serious fire, with some researchers claiming to have debunked it and others unable to replicate the results that led to the conclusion it would save huge numbers of lives. Despite the uncertainty surrounding this intervention, GiveWell directed more than $12 million to deworming charities through its Maximum Impact Fund this year.

The voices of dissent are growing louder as EA’s influence spreads and more money is directed toward longtermist causes. A longtermist himself by some definitions, CSER researcher Luke Kemp believes that the growing focus of the EA research community is based on a limited and minority perspective. He’s been disappointed with the lack of diversity of thought and leadership he’s found in the field. Last year, he and his colleague Carla Zoe Cremer wrote and circulated a preprint titled “Democratizing Risk” about the community’s focus on the “techno-utopian approach”—which assumes that pursuing technology to its maximum development is an undeniable net positive—to the exclusion of other frameworks that reflect more common moral worldviews. “There’s a small number of key funders who have a very particular ideology, and either consciously or unconsciously select for the ideas that most resonate with what they want. You have to speak that language to move higher up the hierarchy and get more funding,” Kemp says. 

Even the basic concept of longtermism, according to Kemp, has been hijacked from legal and economic scholars in the 1960s, ’70s, and ’80s, who were focused on intergenerational equity and environmentalism—priorities that have notably dropped away from the EA version of the philosophy. Indeed, the central premise that “future people count,” as MacAskill says in his 2022 book, is hardly new. The Native American concept of the “seventh generation principle” and similar ideas in indigenous cultures across the globe ask each generation to consider the ones that have come before and will come after. Integral to these concepts, though, is the idea that the past holds valuable lessons for action today, especially in cases where our ancestors made choices that have led to environmental and economic crises.

Longtermism sees history differently: as a forward march toward inevitable progress. MacAskill references the past often in What We Owe the Future, but only in the form of case studies on the life-improving impact of technological and moral development. He discusses the abolition of slavery, the Industrial Revolution, and the women’s rights movement as evidence of how important it is to continue humanity’s arc of progress before the wrong values get “locked in” by despots. What are the “right” values? MacAskill has a coy approach to articulating them: he argues that “we should focus on promoting more abstract or general moral principles” to ensure that “moral changes stay relevant and robustly positive into the future.”

Climate change, which is already under way worldwide and affects the under-resourced far more than the elite, is notably not a core longtermist cause, as philosopher Emile P. Torres points out in his critiques. While it poses a threat to millions of lives, longtermists argue, it probably won’t wipe out all of humanity; those with the wealth and means to survive can carry on fulfilling our species’ potential. Tech billionaires like Thiel and Larry Page already have plans and real estate in place to ride out a climate apocalypse. (MacAskill, in his new book, names climate change as a serious worry for those alive today, but he considers it an existential threat only in the “extreme” form where agriculture won’t survive.)

The final mysterious feature of EA’s version of the long view is how its logic ends up in a specific list of technology-based far-off threats to civilization that just happen to align with many of the original EA cohort’s areas of research. “I am a researcher in the field of AI,” says Gebru, “but to come to the conclusion that in order to do the most good in the world you have to work on artificial general intelligence is very strange. It’s like trying to justify the fact that you want to think about the science fiction scenario and you don’t want to think about real people, the real world, and current structural issues. You want to justify how you want to pull billions of dollars into that while people are starving.”

Some EA leaders seem aware that criticism and change are key to expanding the community and strengthening its impact. MacAskill and others have made it explicit that their calculations are estimates (“These are our best guesses,” MacAskill offered on a 2020 podcast episode) and said they’re eager to improve through critical discourse. Both GiveWell and CEA have pages on their websites titled “Our Mistakes,” and in June, CEA ran a contest inviting critiques on the EA forum; the Future Fund has launched prizes of up to $1.5 million for critical perspectives on AI.

“We recognize that the problems EA is trying to address are really, really big and we don’t have a hope of solving them with only a small segment of people,” GiveWell board member and CEA community liaison Julia Wise says of EA’s diversity statistics. “We need the talents that lots of different kinds of people can bring to address these worldwide problems.” Wise also spoke on the topic at the 2020 EA Global Conference, and she actively discusses inclusion and community power dynamics on the CEA forum. The Centre for Effective Altruism supports a mentorship program for women and nonbinary people (founded, incidentally, by Carrick Flynn’s wife) that Wise says is expanding to other underrepresented groups in the EA community, and CEA has made an effort to facilitate conferences in more locations worldwide to welcome a more geographically diverse group. But these efforts appear to be limited in scope and impact; CEA’s public-facing page on diversity and inclusion hasn’t even been updated since 2020. As the tech-utopian tenets of longtermism take a front seat in EA’s rocket ship and a few billionaire donors chart its path into the future, it may be too late to alter the DNA of the movement.

Politics and the future

Despite the sci-fi sheen, effective altruism today is a conservative project, consolidating decision-making behind a technocratic belief system and a small set of individuals, potentially at the expense of local and intersectional visions for the future. But EA’s community and successes were built around clear methodologies that may not transfer into the more nuanced political arena that some EA leaders and a few big donors are pushing toward. According to Wise, the community at large is still split on politics as an approach to pursuing EA’s goals, with some dissenters believing politics is too polarized a space for effective change. 

But EA is not the only charitable movement looking to political action to reshape the world; the philanthropic field generally has been moving into politics for greater impact. “We have an existential political crisis that philanthropy has to deal with. Otherwise, a lot of its other goals are going to be hard to achieve,” says Inside Philanthropy’s Callahan, using a definition of “existential” that differs from MacAskill’s. But while EA may offer a clear rubric for determining how to give charitably, the political arena presents a messier challenge. “There’s no easy metric for how to gain political power or shift politics,” he says. “And Sam Bankman-Fried has so far demonstrated himself not the most effective political giver.” 

Bankman-Fried has articulated his own political giving as “more policy than politics,” and has donated primarily to Democrats through his short-lived Protect Our Future PAC (which backed Carrick Flynn in Oregon) and the Guarding Against Pandemics PAC (which is run by his brother Gabe and publishes a cross-party list of its “champions” to support). Ryan Salame, the co-CEO with Bankman-Fried of FTX, funded his own PAC, American Dream Federal Action, which focuses mainly on Republican candidates. (Bankman-Fried has said Salame shares his passion for preventing pandemics.) Guarding Against Pandemics and the Open Philanthropy Action Fund (Open Philanthropy’s political arm) spent more than $18 million to get an initiative on the California state ballot this fall to fund pandemic research and action through a new tax.

So while longtermist funds are certainly making waves behind the scenes, Flynn’s primary loss in Oregon may signal that EA’s more visible electoral efforts need to draw on new and diverse strategies to win over real-world voters. Vanessa Daniel, founder and former executive director of Groundswell, one of the largest funders of the US reproductive justice movement, believes that big donations and 11th-hour interventions will never rival grassroots organizing in making real political change. “Slow and patient organizing led by Black women, communities of color, and some poor white communities created the tipping point in the 2020 election that saved the country from fascism and allowed some window of opportunity to get things like the climate deal passed,” she says. And Daniel takes issue with the idea that metrics are the exclusive domain of rich, white, and male-led approaches. “I’ve talked to so many donors who think that grassroots organizing is the equivalent of planting magical beans and expecting things to grow. This is not the case,” she says. “The data is right in front of us. And it doesn’t require the collateral damage of millions of people.”

The question now is whether the culture of EA will allow the community and its major donors to learn from such lessons. In May, Bankman-Fried admitted in an interview that there are a few takeaways from the Oregon loss, “in terms of thinking about who to support and how much,” and that he sees “decreasing marginal gains from funding.” In August, after distributing a total of $24 million over six months to candidates supporting pandemic prevention, Bankman-Fried appeared to have shut down funding through his Protect Our Future PAC, perhaps signaling an end to one political experiment. (Or maybe it was just a pragmatic belt-tightening after the serious and sustained downturn in the crypto market, the source of Bankman-Fried’s immense wealth.) 

Others in the EA community draw different lessons from the Flynn campaign. On the forum at Effectivealtruism.org, Daniel Eth, a researcher at the Future of Humanity Institute, posted a lengthy postmortem of the race, expressing surprise that the candidate couldn’t win over the general audience when he seemed “unusually selfless and intelligent, even for an EA.”

But Eth didn’t encourage radically new strategies for a next run apart from ensuring that candidates vote more regularly and spend more time in the area. Otherwise, he proposed doubling down on EA’s existing approach: “Politics might somewhat degrade our typical epistemics and rigor. We should guard against this.” Members of the EA community contributing to the 93 comments on Eth’s post offered their own opinions, with some supporting Eth’s analysis, others urging lobbying over electioneering, and still others expressing frustration that effective altruists are funding political efforts at all. At this rate, political causes are not likely to make it to the front page of GiveWell anytime soon. 

Money can move mountains, and as EA takes on larger platforms with ever-larger sums from tech industry insiders, the wealth of a few billionaires will likely continue to elevate pet causes and candidates. But if the movement aims to conquer the political landscape, EA leaders may find that whatever its political strategies, its messages don’t connect with people living with local and present-day challenges like insufficient housing and food insecurity. EA’s academic and tech industry origins as a heady philosophical plan for distributing inherited and institutional wealth may have gotten the movement this far, but those same roots likely can’t support its hopes for expanding its influence.

Rebecca Ackermann is a writer and artist in San Francisco.
