{"id":53383,"date":"2024-04-29T20:08:50","date_gmt":"2024-04-29T20:08:50","guid":{"rendered":"https:\/\/peymantaeidi.net\/stem-cell\/?p=53383"},"modified":"2024-04-29T20:38:02","modified_gmt":"2024-04-29T20:38:02","slug":"effective-altruisms-bait-and-switch-from-global-poverty-to-ai-doomerism","status":"publish","type":"post","link":"https:\/\/peymantaeidi.net\/stem-cell\/2024\/04\/29\/effective-altruisms-bait-and-switch-from-global-poverty-to-ai-doomerism\/","title":{"rendered":"Effective Altruism\u2019s Bait-and-Switch: From Global Poverty To AI Doomerism"},"content":{"rendered":"<div class=\"postbody\">\n<h3>from the <i>come-for-the-ending-global-poverty,-stay-for-the-ai-doomerism<\/i> dept<\/h3>\n<h2 class=\"wp-block-heading\"><strong>The Effective Altruism movement<\/strong><\/h2>\n<p>Effective Altruism (EA) is typically explained as a philosophy that encourages individuals to do the \u201cmost good\u201d with their resources (money, skills). Its \u201ceffective giving\u201d aspect was marketed as evidence-based charities serving the global poor. The Effective Altruism philosophy was formally crystallized as a social movement with the launch of the Centre for Effective Altruism (CEA) in February 2012 by Toby Ord, Will MacAskill, Nick Beckstead, and Michelle Hutchinson. Two other organizations, \u201cGiving What We Can\u201d (GWWC) and \u201c80,000 Hours,\u201d were brought under CEA\u2019s umbrella, and the movement became officially known as Effective Altruism.<\/p>\n<p>Effective Altruists (EAs) were praised in the media as \u201ccharity nerds\u201d looking to maximize the number of \u201clives saved\u201d per dollar spent, with initiatives like providing anti-malarial bed nets in sub-Saharan Africa.<\/p>\n<p>If this movement sounds familiar to you, it\u2019s thanks to Sam Bankman-Fried (SBF). With FTX, Bankman-Fried was attempting to fulfill what William MacAskill taught him about Effective Altruism: \u201cEarn to give.\u201d In November 2023, SBF was&nbsp;<a target=\"_blank\" rel=\"noreferrer noopener\" href=\"https:\/\/apnews.com\/article\/sam-bankman-fried-ftx-crypto-bitcoin-baa4c94f2c4237c860475ff92e6bcf42\">convicted of seven fraud charges<\/a>&nbsp;(stealing $10 billion from customers and investors).&nbsp;In March 2024, SBF was <a target=\"_blank\" rel=\"noreferrer noopener\" href=\"https:\/\/www.theverge.com\/2024\/3\/28\/24112507\/sam-bankman-fried-sentence-ftx-alameda\">sentenced to&nbsp;25 years in prison<\/a>. Since SBF was one of the largest Effective Altruism donors, public perception of this movement has declined due to his fraudulent behavior. It turned out that the \u201cEarn to give\u201d concept was susceptible to the \u201cEnds justify the means\u201d mentality.<\/p>\n<p>In 2016, the main funder of Effective Altruism, Open Philanthropy, designated \u201cAI Safety\u201d a priority area, and the leading EA organization 80,000 Hours declared artificial intelligence (AI) existential risk (x-risk) is the world\u2019s most pressing problem. It looked like a major shift in focus and was portrayed as a \u201cmission drift.\u201d It wasn\u2019t.<\/p>\n<p>What looked to outsiders \u2013 in the general public, academia, media, and politics \u2013 as a \u201csudden embrace of AI x-risk\u201d was a misconception. 
The confusion existed because many people were unaware that Effective Altruism has always been devoted to this agenda.

## Effective Altruism's "brand management"

Since its inception, Effective Altruism has been obsessed with the existential risk (x-risk) posed by artificial intelligence. Because the movement's leaders recognized that this focus could be perceived as "confusing for non-EAs," they decided to solicit donations and recruit new members through different causes, like poverty and "sending money to Africa."

When the movement was still small, they planned these **bait-and-switch** tactics in plain sight (in old forum discussions).

A dissertation by Mollie Gleiberman methodically analyzes the distinction between the "**public-facing EA**" and the inward-facing "**core EA**." Among the study's findings: "From the beginning, EA discourse operated on two levels, one for the general public and new recruits (focused on global poverty) and one for the core EA community (focused on the transhumanist agenda articulated by Nick Bostrom, Eliezer Yudkowsky, and others, centered on AI-safety/x-risk)."

"EA's key intellectual architects were all directly or peripherally involved in transhumanism, and the global poverty angle was merely a stepping stone to rationalize the progression from a non-controversial goal (saving lives in poor countries) to transhumanism's far more radical aim," explains Gleiberman. Concealing the latter was part of their "brand management" strategy.

The public-facing discourse of "giving to the poor" (in popular media and books) was a mirage designed to draw people into the movement and then lead them to the "core EA" cause, x-risk, which is discussed in inward-facing spaces. The guidance was to promote the public-facing cause and keep quiet about the core cause. Influential Effective Altruists explicitly wrote that this was the best way to grow the movement.

In public-facing/grassroots EA, the target recipients of donations, typically understood to be GiveWell's top recommendations, are causes like AMF – the Against Malaria Foundation. "Here, the beneficiaries of EA donations are disadvantaged people in the poorest countries of the world," says Gleiberman. "In stark contrast to this, **the target recipients of donations in core EA are the EAs themselves**. Philanthropic donations that support privileged students at elite universities in the US and UK are suddenly no longer one of the *worst* forms of charity but one of the *best*. Rather than living frugally (giving up a vacation/a restaurant) so as to have more money to donate to AMF, providing such perks is now understood as *essential* for the well-being and productivity of the EAs, since they are working to protect the entire future of humanity."

## "We should be kind of quiet about it in public-facing spaces"

Let the evidence speak for itself.
The following quotes are from three community forums where Effective Altruists converse with each other: Felicifia (inactive since 2014), LessWrong, and the EA Forum.

On June 4, 2012, Will Crouch (before he changed his last name to MacAskill) had already pointed out on the *Felicifia* forum that "new effective altruists tend to start off concerned about global poverty or animal suffering and then [hear, take seriously, and often are convinced](https://felicifia.github.io/thread/614.html#p5507) by the arguments for existential risk mitigation."

On November 10, 2012, Will Crouch (MacAskill) wrote on the *LessWrong* forum that "it seems fairly common for people who start thinking about effective altruism to ultimately think that x-risk mitigation is one of or [the most important cause area](https://web.archive.org/web/20121130091244/http:/lesswrong.com/lw/fej/giving_what_we_can_80000_hours_and_metacharity/)." In the same message, he also argued that "it's still a good thing to save someone's life in the developing world"; however, "of course, if you take the arguments about x-risk seriously, then alleviating global poverty is dwarfed by existential risk mitigation."

In 2011, a leader of the EA movement, an influential GWWC leader/CEA affiliate who used the username "utilitymonster" on the *Felicifia* forum, had a discussion with a high-school student about the "High Impact Career" organization (HIC, later rebranded as 80,000 Hours). The high schooler wrote: "But HIC always seems to talk about things in terms of 'lives saved,' I've never heard them mentioning other things to donate to." Utilitymonster replied: "That's exactly the right thing for HIC to do. [Talk about 'lives saved' with their public face](https://felicifia.github.io/thread/497.html#p4242), let *hardcore members* hear about x-risk, and then, in the future, if some excellent x-risk opportunity arises, direct resources to x-risk."

Another influential figure, Eliezer Yudkowsky, wrote on *LessWrong* in 2013: "I regard the [non-x-risk parts](https://www.lesswrong.com/posts/E3beR7bQ723kkNHpA/a-critique-of-effective-altruism#37WQGqigivLAxGrFR) of EA as being important only insofar as they raise visibility and eventually get more people involved in, as I would put it, **the actual plot**."

In a comment on a Robert Wiblin post in 2015, Eliezer Yudkowsky clarified: "As I've said repeatedly, *xrisk cannot be the public face of EA*, OPP [OpenPhil] can't be the public face of EA. **Only 'sending money to Africa'** is immediately comprehensible as Good and only an immediately comprehensible Good can make up for the terrible PR profile of maximization or cause neutrality.
And [putting AI in there](https://archive.md/elTRO) is just shooting yourself in the foot."

![Screenshot of Eliezer Yudkowsky's comment](https://peymantaeidi.net/stem-cell/wp-content/uploads/2024/04/785e7b9c-5d13-4b70-b719-daa4aae43b71-RackMultipart20240429-210-fqnevf.png)

Rob Bensinger, the research communications manager at MIRI (and a prominent EA movement member), argued in 2016 for a middle approach: "In fairness to the 'MIRI is bad PR for EA' perspective, I've seen MIRI's cofounder (Eliezer Yudkowsky) make the argument himself that things like malaria nets should be the public face of EA, not AI risk. Though I'm not sure I agree [...]. If we were optimizing for having [the right 'public face'](https://archive.ph/94UTI#selection-1437.0-1441.435) I think we'd be talking more about things that are in between malaria nets and AI [...] like biosecurity and macroeconomic policy reform."

Scott Alexander (Siskind) is the author of the influential rationalist blogs *Slate Star Codex* and *Astral Codex Ten*. In 2015, he acknowledged that he supports the AI-safety/x-risk cause area but believes Effective Altruists should not mention it in public-facing material: "Existential risk [isn't the most useful public face](https://web.archive.org/web/20150813153432/http:/slatestarcodex.com/2015/08/12/stop-adding-zeroes/) for effective altruism – everyone including Eliezer Yudkowsky agrees about that." In the same year, 2015, he also wrote: "Several people have recently argued that the effective altruist movement should distance itself from AI risk and other far-future causes lest it make them seem weird and turn off potential recruits. Even proponents of AI risk charities like myself agree that we [should be kind of quiet about it](https://slatestarcodex.com/2015/09/22/beware-systemic-change/) in public-facing spaces."

In 2014, Peter Wildeford (then Hurford) published a conversation about "EA Marketing" with EA communications specialist Michael Bitton. Peter Wildeford is the co-founder and co-CEO of Rethink Priorities and Chief Advisory Executive at IAPS (the Institute for AI Policy and Strategy). The following segment was about why most people will not be *real* Effective Altruists (EAs):

> *"Things in the EA community could be a turn-off to some people. While the connection to utilitarianism is ok, things like cryonics, transhumanism, insect suffering, AGI, eugenics, whole brain emulation, suffering subroutines, the cost-effectiveness of having kids, polyamory, intelligence-enhancing drugs, the ethics of terraforming, bioterrorism, nanotechnology, synthetic biology, mindhacking, etc.
> might not appeal well.*
>
> *There's a chance that people might accept the more mainstream global poverty angle, but be [turned off by other aspects of EA](https://web.archive.org/web/20141030073244/http:/everydayutilitarian.com/essays/conversation-with-michael-bitton-about-ea-marketing/). Bitton is unsure whether this is meant to be a reason for de-emphasizing these other aspects of the movement. Obviously, we want to attract more people, but also people that are more EA."*

"Longtermism is a bad 'on-ramp' to EA," wrote a community member on the *Effective Altruism Forum*. "AI safety is new and complicated, making it more likely that people [...] find the focus on AI risks to be cult-like (potentially causing them to never get involved with EA in the first place)."

Jan Kulveit, who leads the European Summer Program on Rationality (ESPR), shared on Facebook in 2018: "I became an EA in 2016, and at the time, while a lot of the 'outward-facing' materials were about global poverty etc., with notes about AI safety or far future at much less prominent places. I wanted to discover what is the actual cutting-edge thought, went to EAGx Oxford, and my impression was the core people from the movement mostly thought far future is the most promising area, and xrisk/AI safety interventions are top priority. I was quite happy with that [...] However, I was somewhat at unease that there was this discrepancy between a lot of outward-facing content and what the core actually thinks. With some exaggeration, it felt like [the communication structure is somewhat resembling a conspiracy or a church](https://web.archive.org/web/20200929190922/https:/www.facebook.com/groups/effective.altruists/permalink/1750780338311649/?comment_id=1752865068103176), where the outward-facing ideas are easily digestible, like anti-malaria nets, but as you get deeper, you discover very different ideas."

Prominent EA community member and blogger Ozy Brennan summarized this discrepancy in 2017: "A lot of introductory effective altruism material uses global poverty examples, even articles which were written by people I know perfectly fucking well [only donate to MIRI](https://thingofthings.wordpress.com/2017/09/20/against-ea-pr/)."

As Effective Altruists engaged more deeply with the movement, they were encouraged to shift to AI x-risk:

> *"My perception is that many x-risk people have been clear from the start that they view the rest of EA merely as a [recruitment tool](https://archive.md/elTRO) to get people interested in the concept and then convert them to Xrisk causes." (Alasdair Pearce, 2015)*
>
> *"I used to work for an organization in EA, and I am still quite active in the community.
> 1 – I've heard people say things like, 'Sure, we say that effective altruism is about global poverty, but — wink, nod — that's just what we do to get people in the door so that we can **convert** them to [helping out with AI](https://forum.effectivealtruism.org/posts/QYGNmzqjhHgxfbyhy/anonymous-ea-comments?commentId=ddc7LhQyMKdXSqAio)/animal suffering/(insert weird cause here).' This disturbs me." (Anonymous#23, 2017)*
>
> *"In my time as a community builder [...] I saw the downsides of this. [...] Concerns that the EA community is doing [a bait-and-switch tactic](https://archive.is/yFyDr) of 'come to us for resources on how to do good. Actually, the answer is this thing and we knew all along and were just pretending to be open to your thing.' [...] Personally feeling uncomfortable because it seemed to me that my 80,000 Hours career coach had a hidden agenda to push me to work on AI rather than anything else." (weeatquince [Sam Hilton], 2020)*

Austin Chen, the co-founder of Manifold Markets, wrote on the *Effective Altruism Forum* in 2020: "On one hand, basically all the smart EA people I trust seem to be into longtermism; it seems well-argued and I feel a [vague obligation to join in](https://forum.effectivealtruism.org/posts/LdZcit8zX89rofZf3/evidence-cluelessness-and-the-long-termhilary-greaves?commentId=DEQJMnhBEETkJWrLS) too. On the other, the argument for near-term evidence-based interventions like AMF [Against Malaria Foundation] is what got me [...] into EA in the first place."

In 2019, EA Hub published a guide: "Tips to help your conversation go well." Among tips like "Highlight the process of EA" and "Use the person's interest," there was "Preventing 'Bait and Switch.'" The post acknowledged that "many leaders of EA organizations are [more] focused on community building and the long-term future than animal advocacy and global poverty." Therefore, to avoid the perception of a bait-and-switch, it recommended mentioning AI x-risk at some point:

> *"It is likely easier, and possibly more compelling, to talk about cause areas that are more widely understood and cared about, such as global poverty and animal welfare. However, mentioning only one or two less controversial causes [might be misleading](https://archive.md/0EmQK), e.g. a person could become interested through evidence-based effective global poverty interventions, and feel misled at an EA event mostly discussing highly speculative research into a cause area they don't understand or care about. This can feel like a 'bait and switch'—they are baited with something they care about and then the conversation is switched to another area.
> One way of reducing this tension is to ensure you mention a wide range of global issues that EAs are interested in, even if you spend more time on one issue."*

Oliver Habryka is influential in EA as a fund manager for the LTFF, a grantmaker for the Survival and Flourishing Fund, and the leader of the *LessWrong*/Lightcone Infrastructure team. He claimed that the only reason EA should continue supporting non-longtermist efforts is to preserve the public's perception of the movement:

> *"To be clear, my primary reason for why EA shouldn't entirely focus on longtermism is because that would to some degree violate some [implicit promises that the EA community has made to the external world](https://forum.effectivealtruism.org/posts/fStCX6RXmgxkTBe73/towards-a-weaker-longtermism?commentId=widZvbTdt88PF9GaY). If that wasn't the case, I think it would indeed make sense to deprioritize basically all the non-longtermist things."*

## The structure of Effective Altruism rhetoric

The researcher Mollie Gleiberman explains EA's "strategic ambiguity": "EA has multiple discourses running simultaneously, using the same terminology to mean different things depending on the target audience. The most important aspect of this double rhetoric, however, is not that it maintains two distinct arenas of understanding, but that it also serves as a credibility bridge between them, across which movement recruits (and, increasingly, the general public) are led in incremental steps from the less controversial position to the far more radical position."

When Effective Altruists talked in public about "doing good," "helping others," "caring about the world," and pursuing "the most impact," **the public understanding was that it meant eliminating global poverty and helping the needy and vulnerable**. Inward, "doing good" and the "most pressing problems" were understood as working to mainstream core EA ideas like extinction from unaligned AI.

In communication with "core EAs," "the initial focus on global poverty is explained as merely an *example* used to illustrate the concept – not the *actual* cause endorsed by most EAs."

Jonas Vollmer has been involved with EA since 2012 and has held positions of considerable influence over the allocation of funding (EA Foundation/CLR Fund, CEA EA Funds). In 2018, when asked about his EA organization "Raising for Effective Giving" (REG), he candidly explained: "REG prioritizes long-term future causes, it's just [much easier to fundraise](https://web.archive.org/web/20200929190922/https:/www.facebook.com/groups/effective.altruists/permalink/1750780338311649/?comment_id=1752865068103176) for poverty charities."

The entire point was to identify whatever messaging works best to produce the outcomes that movement founders, thought leaders, and funders actually wished to see.
It was all about **marketing** to outsiders.

## The "Funnel Model"

According to the Centre for Effective Altruism, "When describing the target audience of our projects, it is useful to have labels for different parts of the community."

The levels are: audience, followers, participants, contributors, core, and leadership.

In 2018, in a post entitled [The Funnel Model](https://web.archive.org/web/20190429035527/https:/www.centreforeffectivealtruism.org/the-funnel-model), CEA elaborated that "Different parts of CEA operate to bring people into different parts of the funnel."

![The funnel model diagram](https://peymantaeidi.net/stem-cell/wp-content/uploads/2024/04/228f4055-1ebb-477f-aa3d-4d5f0371c8a9-RackMultipart20240429-117-8fy7bk.png)

*The Centre for Effective Altruism: [The Funnel Model](https://web.archive.org/web/20190429035527/https:/www.centreforeffectivealtruism.org/the-funnel-model).*

At first, CEA concentrated outreach on the top of the funnel, through extensive popular media coverage, including MacAskill's *Quartz* column and book, "Doing Good Better," Singer's TED talk, and Singer's "The Most Good You Can Do." The idea was to create a broad base of poverty-focused, grassroots Effective Altruists to help maintain momentum and legitimacy, and to act as an initial entry point to the funnel, from which members sympathetic to core aims could be recruited.

The 2017 edition of the movement's annual survey of participants (conducted by the EA organization Rethink Charity) noted that this is a common trajectory: "New EAs are typically attracted to poverty relief as a top cause initially, but subsequently branch out after exploring other EA cause areas. An extension of this line of thinking credits increased familiarity with EA for making *AI* more palatable as a cause area. In other words, the top of the EA outreach funnel is most relatable to newcomers (poverty), while cause areas toward the [bottom of the funnel (AI)](https://forum.effectivealtruism.org/posts/A54Lrz5vQK5kgtxA2/ea-survey-2017-series-haveea-priorities-changed-over-time) seem more appealing with time and further exposure."

According to the Centre for Effective Altruism, that's the **ideal** route.
It wrote in 2018: "Trying to get a few people [all the way through the funnel](https://beta.effectivealtruism.org/posts/tny62ELsfnPNtY2Tx/a-model-of-an-ea-group) is more important than getting every person to the next stage."

The magnitude and implications of Effective Altruism, says Gleiberman, "cannot be grasped until people are willing to look at the evidence beyond EA's glossy front cover, and see what activities and aims the EA movement *actually* prioritizes, how funding is *actually* distributed, whose agenda is *actually* pursued, and whose interests are *actually* served."

## Key takeaways

### – Public-facing EA vs. core EA

In public-facing/grassroots EA (audience, followers, participants):

1. The main focus is effective giving à la Peter Singer.
2. The main cause area is global health, targeting the 'distant poor' in developing countries.
3. Donors support organizations doing direct anti-poverty work.

In core/highly engaged EA (contributors, core, leadership):

1. The main focus is x-risk/longtermism à la Nick Bostrom and Eliezer Yudkowsky.
2. The main cause areas are x-risk, AI-safety, 'global priorities research,' and EA movement-building.
3. Donors support highly engaged EAs to build career capital, boost their productivity, and/or start new EA organizations; research; and policy-making/agenda-setting.

### – Core EA's policy-making

In "[2023: The Year of AI Panic](https://www.techdirt.com/2023/12/22/2023-the-year-of-ai-panic/)," I discussed the Effective Altruism movement's growing influence in the US (on [Joe Biden's AI order](https://www.politico.com/news/2023/12/15/billionaire-backed-think-tank-played-key-role-in-bidens-ai-order-00132128)), the UK (influencing [Rishi Sunak's AI agenda](https://www.politico.eu/article/rishi-sunak-artificial-intelligence-pivot-safety-summit-united-kingdom-silicon-valley-effective-altruism/)), and the EU AI Act (x-risk [lobbyists' celebration](https://www.aipanic.news/i/143088555/eu-ai-act-pushing-for-regulation-of-general-purpose-ai-systems-gpais)).

More details can be found in this rundown of how "[The AI Doomers have infiltrated Washington](https://www.thedailybeast.com/the-ai-doomers-have-infiltrated-washington)" and how "[AI doomsayers funded by billionaires ramp up lobbying](https://www.politico.com/news/2024/02/23/ai-safety-washington-lobbying-00142783)." The broader landscape is detailed in "[The Ultimate Guide to the 'AI Existential Risk' Ecosystem](https://www.aipanic.news/p/panic-as-a-business-is-expanding)."

Two
things you should know about EA's influence campaign:

1. AI Safety organizations constantly examine how to [target "human extinction from AI" and "AI moratorium" messages](https://www.aipanic.news/p/the-ai-panic-campaign-part-1) based on political party affiliation, age group, gender, educational level, field of work, and residency. In "[The AI Panic Campaign – part 2](https://www.aipanic.news/p/the-ai-panic-campaign-part-2)," I explained that "framing AI in extreme terms is intended to motivate policymakers to adopt stringent rules."
2. The lobbying goals include [pervasive surveillance and criminalization](https://www.aipanic.news/p/the-665m-shitcoin-donation-to-the) of AI development. Effective Altruists lobby governments to "establish a strict licensing regime, clamp down on open-source models, and impose civil and [criminal liability on developers](https://twitter.com/aipolicyus/status/1777689593774555546)."

With [AI doomers](https://www.techdirt.com/2023/04/14/the-ai-doomers-playbook/) intensifying their attacks on the open-source community, it becomes clear that this group's "doing good" is other groups' nightmare.

### – Effective Altruism was a Trojan horse

It's now evident that "sending money to Africa," as Eliezer Yudkowsky acknowledged, **was never the "actual plot."** Or, as Will MacAskill wrote in 2012, "alleviating global poverty is dwarfed by existential risk mitigation." The Effective Altruism founders planned – from day one – *to mislead donors and new members* in order to build the movement's brand and community.

Its core leaders prioritized the x-risk agenda and treated global poverty alleviation only as an initial step toward converting new recruits to longtermism/x-risk – the same pipeline through which they persuaded more people to help make them rich.

This needs to be investigated further.

Gleiberman observes that "The movement clearly prioritizes 'longtermism'/AI-safety/x-risk, but still wishes to benefit from the credibility that global poverty-focused EA brings." We now know it was a PR strategy all along. So, no. They do not deserve this kind of credibility.

*Dr. Nirit Weiss-Blatt, Ph.D.
([**@DrTechlash**](https://twitter.com/DrTechlash)), is a communication researcher and the author of the book "The [**TECHLASH**](https://www.niritweissblatt.com/) and Tech Crisis Communication" and the "[**AI Panic**](https://www.aipanic.news/)" newsletter.*

Filed Under: [ai](https://www.techdirt.com/tag/ai/), [ai doomerism](https://www.techdirt.com/tag/ai-doomerism/), [bait and switch](https://www.techdirt.com/tag/bait-and-switch/), [effective altruism](https://www.techdirt.com/tag/effective-altruism/), [existential risk](https://www.techdirt.com/tag/existential-risk/), [x-risk](https://www.techdirt.com/tag/x-risk/)