Transcript
1
00:00:00,400 --> 00:00:46,950
Kayla: Longtermism is the view that positively influencing the long-term future is a key moral priority of our times. It's about taking seriously the sheer scale of the future and how high the stakes might be in shaping it. It means thinking about the challenges we might face in our lifetimes that could impact civilization's whole trajectory, and taking action to benefit not just the present generation, but all generations to come. Okay, we are back with Cult or Just Weird, while we're making our way through the TESCREAL bundle, an acronym referring to the prominent futurist ideology currently defining fucking Silicon Valley, of all things.
2
00:00:47,250 --> 00:00:53,202
Chris: That must be the fastest we've ever started the actual topic.
3
00:00:53,266 --> 00:00:54,234
Kayla: Oh, I haven't started the topic.
4
00:00:54,282 --> 00:00:55,258
Chris: It was instantly.
5
00:00:55,314 --> 00:01:13,228
Kayla: This is just for people who maybe haven't listened to previous episodes and are just tuning in, just catching them up to speed on what we're talking about. Essentially, what we're talking about is how this is all a cope for our innate and deeply human fear of death and whether all this stuff is a cult or if it's just weird.
6
00:01:13,364 --> 00:01:21,924
Chris: Yeah, I'm still impressed, though. I don't know. Cause usually, usually we start with, how was your day today? It was a good day. How did yours go? I had a good day, too.
7
00:01:21,972 --> 00:01:24,860
Kayla: We'll get to that. We'll get to that right here. Cause we're doing our introductions.
8
00:01:24,980 --> 00:01:25,876
Chris: Oh, right. Okay.
9
00:01:25,948 --> 00:01:33,656
Kayla: I'm Kayla. I'm a television writer, fear of death enthusiast, probably a lot of other things. Thanks for listening to Cult or Just Weird. Who are you?
10
00:01:33,728 --> 00:01:37,856
Chris: I'm Chris. I make games. I do podcasts. I sometimes look at data.
11
00:01:37,968 --> 00:01:56,264
Kayla: If you're listening to the show, you are currently supporting the show, and we really appreciate that. If you'd like to support us further, you can go to patreon.com culturesweird. And if you'd like to talk more about any of the show's topics, you can find us on discord linked in the show notes. Speaking of our Patreon, we actually have two new patrons to shout out this week.
12
00:01:56,352 --> 00:01:56,944
Chris: Yes.
13
00:01:57,072 --> 00:02:07,520
Kayla: So thank you so much to Karen and Jim for joining our Patreon. You enjoy the outtakes and the polls and some of the other stuff we got going on over there.
14
00:02:08,620 --> 00:02:09,916
Chris: Our outtakes are free.
15
00:02:10,027 --> 00:02:11,240
Kayla: The outtakes are free.
16
00:02:12,540 --> 00:02:19,804
Chris: But hey, you know what? That makes the top of the funnel really wide, because everybody listening right now can just go on over to our Patreon, listen to outtakes.
17
00:02:19,892 --> 00:02:29,476
Kayla: You can hear our cats. You can hear us burping. Motorcycles, a lot of motorcycles. It's a fun time. Also swear words. Definitely swears. Which we do not do on the show.
18
00:02:29,548 --> 00:02:30,228
Chris: Fuck no.
19
00:02:30,324 --> 00:02:31,180
Kayla: That was really good.
20
00:02:31,260 --> 00:02:36,844
Chris: Thanks. That was. Yeah. Classic. Classic. I have one more bit of business, actually.
21
00:02:36,932 --> 00:02:37,804
Kayla: Business, us.
22
00:02:37,932 --> 00:02:39,932
Chris: We have transcripts now.
23
00:02:40,076 --> 00:02:42,068
Kayla: Ooh, podcast transcripts.
24
00:02:42,124 --> 00:03:04,922
Chris: Finally. I know, I was like, oh, it only took us six seasons, but we do. So if you are listening to this and are unable to hear, then go on over to our website. Actually, the transcripts should be available wherever the podcast is available. But I know for sure they're also on the website, where the episodes live, and you can read episodes instead of listening to episodes.
25
00:03:04,986 --> 00:03:09,730
Kayla: Or at the same time, if you are a person like me who has to have the subtitles on while you watch television.
26
00:03:09,810 --> 00:03:13,522
Chris: That's right. It actually technically is a subtitle file.
27
00:03:13,706 --> 00:03:14,490
Kayla: Cool.
28
00:03:14,650 --> 00:03:18,386
Chris: Which I thought would make a difference on YouTube, but YouTube already subtitled it.
29
00:03:18,458 --> 00:03:28,864
Kayla: YouTube does already subtitle it. Okay, well, go check out our transcripts. Enjoy. We hope it makes the show more accessible to more people. Are you ready to jump into today's topic?
30
00:03:28,952 --> 00:03:30,344
Chris: I'm already ready already.
31
00:03:30,472 --> 00:03:32,776
Kayla: So last week. I think you made that joke last week, actually.
32
00:03:32,808 --> 00:03:34,624
Chris: Did I? Okay, well then I'm not gonna do it again.
33
00:03:34,672 --> 00:03:35,312
Kayla: Well, no, we're keeping it.
34
00:03:35,336 --> 00:03:36,320
Chris: I have to cut it. Please.
35
00:03:36,440 --> 00:03:43,224
Kayla: Last week we talked about the C in TESCREAL, cosmism. We've gone a little bit out of order on the acronym so far.
36
00:03:43,232 --> 00:03:44,584
Chris: Oh, we've been way out of order.
37
00:03:44,672 --> 00:03:51,930
Kayla: But now we're finally tackling the last two letters, EA and L. Effective altruism and longtermism.
38
00:03:52,050 --> 00:04:00,802
Chris: Okay, I have a problem with the EA. Every other letter in TESCREAL I know is just one thing. And EA, for some reason, gets two letters in TESCREAL. Come on.
39
00:04:00,866 --> 00:04:27,654
Kayla: I mean, it is two words. Everything else is just one word. I guess we've touched on the EA and the L a little bit as we've gone through these last 18 episodes. Obviously you talked about it with Doctor Emile Torres in the TESCREAL episodes. A lot of this stuff came up in the rationalism episodes. We've kicked the tires, so to speak. So now it's time for us to look under the hood and really get to know what these letters stand for.
40
00:04:27,742 --> 00:04:49,252
Chris: Part of my understanding, actually, of why Doctor Torres and Doctor Gebru created the TESCREAL acronym in the first place was because it's impossible to talk about one thing without at least touching on another. So I think it kind of makes sense that we've already sort of bumped into pretty much everything that we're gonna be talking about today. You can't.
41
00:04:49,276 --> 00:04:52,508
Kayla: It's like wading through a pool full of corpses. I don't know why. That was my.
42
00:04:52,564 --> 00:04:59,388
Chris: Wow. Is that your go to? I was gonna say it's like a cork board with yarn, but I guess corpses is good, too.
43
00:04:59,444 --> 00:05:03,396
Kayla: I guess, like, you know why that was.
44
00:05:03,428 --> 00:05:04,412
Chris: Dude, you are morbid.
45
00:05:04,476 --> 00:05:08,164
Kayla: Cause, like, if you're wading through a pool of corpses, you'd, like, keep bumping into them.
46
00:05:08,292 --> 00:05:13,196
Chris: Oh, okay. Yeah, I guess in your mind, that would be the thing that you'd think of first.
47
00:05:13,268 --> 00:05:14,600
Kayla: I'm sorry, everyone.
48
00:05:16,370 --> 00:05:17,546
Chris: No, you're not.
49
00:05:17,738 --> 00:05:27,754
Kayla: So first, let's talk about. I just have death on the brain because this is the death season, even though we're talking about AI. First: effective altruism.
50
00:05:27,842 --> 00:05:28,274
Chris: Yes.
51
00:05:28,362 --> 00:06:09,914
Kayla: A lot of our listeners might already know a little bit about EA, even outside of our podcast, because of the whole Sam Bankman-Fried FTX fiasco that unfolded in 2022, which we will get deeper into. But the short version is that Sam Bankman-Fried, known widely as SBF, was a cryptocurrency entrepreneur. He founded a cryptocurrency exchange called FTX, made a shit ton of money, and then got arrested and jailed for, like, a bunch of fraud-related crimes. And I think generally, investors, like, lost a bunch of money. But before he got in trouble, SBF was a big effective altruism guy, donated to a number of EA causes before his downfall. And so it was, like, kind of a big deal in the news at the time.
52
00:06:09,962 --> 00:06:16,404
Kayla: And everybody, a lot of the news was talking about his EA connections, and that kind of helped bring EA into the mainstream.
53
00:06:16,532 --> 00:06:53,096
Chris: So can you help me clarify? Because I think I had this notion, but I'd never really, like, explicitly clarified it. So FTX, which is Sam Bankman-Fried's cryptocurrency fund, that didn't in and of itself have anything to do with effective altruism, but he himself, as a person, was a big advocate for EA. And then that's what made EA. So, like, when FTX fell through and Sam Bankman-Fried turned out to be a giant fraud, that's the thing that tarnished the EA image, because FTX wasn't itself about EA, right?
54
00:06:53,168 --> 00:07:15,452
Kayla: As far as I know, and we'll probably talk more about Sam Bankman-Fried on the next episode rather than this episode, so hold anything we say here with a little bit of a grain of salt. As far as I know, FTX was just a cryptocurrency exchange. So I don't think it was about EA, but he himself was like, he made a shit ton of money. He was an extraordinarily wealthy person and.
55
00:07:15,476 --> 00:07:18,196
Chris: Was a big, like, did he make the money?
56
00:07:18,228 --> 00:07:21,700
Kayla: EA? Well, money was there, and it was in his name.
57
00:07:21,740 --> 00:07:22,748
Chris: He acquired money.
58
00:07:22,804 --> 00:07:32,950
Kayla: Money came to be. And he, as a Silicon Valley guy, was like, power. A powerful enough figure that he was, like, getting people into EA.
59
00:07:33,060 --> 00:07:33,658
Chris: Got it.
60
00:07:33,754 --> 00:07:35,554
Kayla: And spreading the word about EA kind of thing.
61
00:07:35,602 --> 00:07:35,954
Chris: Okay.
62
00:07:36,002 --> 00:07:38,058
Kayla: As far as I know. And again, we'll talk more about it.
63
00:07:38,074 --> 00:07:53,722
Chris: No, that makes sense. A little bit later. When the news first broke on all this stuff, I was just a little confused, cause I was like, is he in charge of some EA organization, or is it just. So it sounds like it was mainly his own personal charisma that was driving that.
64
00:07:53,786 --> 00:07:55,306
Kayla: Yeah, he was just a TESCREAList.
65
00:07:55,418 --> 00:07:56,738
Chris: Right. Okay.
66
00:07:56,914 --> 00:08:07,190
Kayla: But effective altruism has a deeper history than just SBF. It's actually been around as a concept for over a decade. So let's go back to the beginning. Over a decade doesn't sound like that long.
67
00:08:07,270 --> 00:08:16,678
Chris: No, dude, these days, ten years. It is ten years. And not even just these days, but in the thing we're talking about, ten years is forever.
68
00:08:16,774 --> 00:08:18,222
Kayla: It's more than ten years.
69
00:08:18,286 --> 00:08:18,934
Chris: Jeez.
70
00:08:19,062 --> 00:08:22,342
Kayla: I think some of the earliest stuff we're talking about is, like, 2000.
71
00:08:22,526 --> 00:08:22,958
Chris: Wow.
72
00:08:23,014 --> 00:08:24,006
Kayla: And that's, like, ancient.
73
00:08:24,118 --> 00:08:28,570
Chris: That is super ancient. That's back when Eliezer Yudkowsky was predicting the end of the world in 2008.
74
00:08:30,030 --> 00:08:46,804
Kayla: In 2011, before the world ended, an organization called Giving What We Can and an organization called 80,000 Hours decided to merge into a joint effort. Giving What We Can had been founded at Oxford University just two years prior, headed up by philosopher Toby Ord, his wife and physician in training.
75
00:08:46,852 --> 00:08:48,044
Chris: Pondering my ord.
76
00:08:48,132 --> 00:09:05,114
Kayla: Pondering my ord, his wife and physician in training, Bernadette Young, and philosopher William MacAskill. I'm pausing here because I don't know how much I want to say about William MacAskill in this episode or save it for the next episode. I have so many thoughts and feelings about William MacAskill.
77
00:09:05,252 --> 00:09:07,782
Chris: You're bringing up the usual suspects here.
78
00:09:07,846 --> 00:09:34,872
Kayla: These are the usual suspects of TESCREAL and specifically of the EA and L. Members of Giving What We Can pledged to give 10% of their income or more to, quote unquote, effective charities, which at the time were largely focused on alleviating global poverty. 80,000 Hours was a nonprofit focused on researching what careers are the most, quote unquote, effective in terms of positive social impact. Like, 80,000 hours refers to the average amount of time a person will spend in their career.
79
00:09:34,976 --> 00:09:40,312
Chris: Oh, you just poked a neuron. I feel like I remember 80,000 hours now.
80
00:09:40,376 --> 00:09:56,700
Kayla: There you go. I do remember that philosopher William MacAskill was also one of its founders. And, like, this guy was like, okay, how many years ago was 2011? What's 37 minus 13? 24. Yeah, this guy's, like, 24 at the time.
81
00:09:57,080 --> 00:09:59,072
Chris: I hate math. Don't make me do math.
82
00:09:59,176 --> 00:10:36,224
Kayla: When the two organizations merged, the members voted on a new name, and the Centre for Effective Altruism was born. The convergence and kind of, like, introduction of the phrase effective altruism to describe the kind of ethical approaches taken by some philosophers at the time coincided with a couple other things that would eventually kind of fall under either the EA umbrella or at least the wider TESCREAL umbrella. Okay, we're talking charity assessment organizations. I'm gonna, like, hopefully trigger some more neurons for you. GiveWell and Open Philanthropy, which were founded in 2007 and 2017, respectively.
83
00:10:36,312 --> 00:10:37,248
Chris: I remember both of those.
84
00:10:37,304 --> 00:10:41,632
Kayla: We're, of course, talking LessWrong, the rationalist discussion forum, founded in 2009.
85
00:10:41,696 --> 00:10:42,928
Chris: I am trying to forget that one.
86
00:10:42,984 --> 00:10:52,580
Kayla: We're talking the Singularity Institute, founded to study the. I think it has a different name now, but at the time, it was the Singularity Institute, and it was founded to study the safety of artificial intelligence.
87
00:10:52,920 --> 00:10:56,728
Chris: In 2000. SIAI. Yeah, so that was Eliezer's thing.
88
00:10:56,784 --> 00:10:57,704
Kayla: I think it's called something else.
89
00:10:57,752 --> 00:10:58,440
Chris: And now it's MIRI.
90
00:10:58,480 --> 00:10:59,572
Kayla: MIRI. Thank you.
91
00:10:59,736 --> 00:11:01,156
Chris: Machine Intelligence Research Institute.
92
00:11:01,228 --> 00:11:09,884
Kayla: And we're also talking about the now defunct Future of Humanity Institute, founded to study things like existential risk for humanity in 2005.
93
00:11:09,972 --> 00:11:11,852
Chris: And that was the Nick Bostrom joint.
94
00:11:11,916 --> 00:11:23,516
Kayla: Bostrom joint, which was in Oxford. I think I may leave that to you to talk about in future episodes, because there's also a lot to say about Nick Bostrom. There's so much left to talk about here.
95
00:11:23,588 --> 00:11:24,340
Chris: Too many things.
96
00:11:24,420 --> 00:11:26,420
Kayla: Everybody is so scared of dying.
97
00:11:27,240 --> 00:11:41,776
Chris: And so am I, by the way. The fall of the Future of Humanity. Wait, what was it? No, not Future Humanity. What was it called? Oh, it was called Future of Humanity. Oh, that's why we named our episodes that. That was only a few months ago. It was, like, April as of publishing here.
98
00:11:41,808 --> 00:12:35,380
Kayla: Yeah, it was April 2024, I believe. More loosely related, there were also followers of this moral philosopher named Peter Singer who also gravitated toward these circles. And Peter Singer, I think, started publishing in the seventies. So this stuff's been around for a while. All these groups and the people who either belonged to them, believed in them, promoted them, or followed them kind of all got munged together in the mid aughts and obviously beyond. In 2013, philanthropists hosted the first annual Effective Altruism Global conference, which has taken place every year since. But what exactly is effective altruism? We'll go back to that age-old question: what would you say you do here? William MacAskill, who we've talked about multiple times already, is one of the main architects behind the movement, and he defines EA as this in his essay introducing effective altruism.
99
00:12:36,160 --> 00:12:45,014
Kayla: Effective altruism is the project of using evidence and reason to figure out how to benefit others as much as possible and taking action on that basis. End quote.
100
00:12:45,152 --> 00:12:50,682
Chris: See, again, the first, like, when you first dip your toes into this stuff.
101
00:12:50,746 --> 00:12:52,218
Kayla: I think it's noble.
102
00:12:52,314 --> 00:12:54,790
Chris: Yeah. I'm like, that sounds great.
103
00:12:55,330 --> 00:13:08,270
Kayla: I have to say, I don't have a lot of. I went into this with a real bad attitude, and I came out of it with not a real bad attitude. I kind of turned around on it. I think that maybe next episode, I'm gonna have a bad attitude again.
104
00:13:08,610 --> 00:13:09,866
Chris: That's how it goes here, man.
105
00:13:09,938 --> 00:13:14,900
Kayla: This episode's kind of like background, and next episode's kind of gonna be more like the poking of the holes.
106
00:13:15,020 --> 00:13:26,320
Chris: Yeah, that's how we do things here. That's what we did with. Remember the Hare Krishna episode? The first one was like, wow, that's so neat. They do awesome singing, and the place was cool, and it's like, cheap, good food. And then the next one was like, murders.
107
00:13:26,860 --> 00:13:48,930
Kayla: Yeah, that is a trope on our show. William MacAskill's pinned tweet on Twitter goes a step further. Quote, effective altruism is not a package of particular views. It's about using evidence and careful reasoning to try to do more good. What science is to the pursuit of truth, EA is, or at least aspires to be, to the pursuit of good. End quote.
108
00:13:49,350 --> 00:13:54,678
Chris: That's. Man, I like that Easter egg.
109
00:13:54,734 --> 00:14:38,364
Kayla: For our listeners who may be into this stuff, I think that quote tweet was in reply to a Steven Pinker tweet about the pitfalls of EA. I'm not gonna talk about Steven Pinker right now, but just an Easter egg for anybody who might be listening and has any opinions about Steven Pinker. Largely, effective altruists work to select the most effective charities to donate to and the most effective careers to dedicate their lives to, either by making the most money so that they can donate more, which is known as, quote unquote, earning to give, or by choosing careers that are focused on the greater good. And as we've learned, this is not really a niche movement. It's fairly widespread across academia and has launched a number of institutes, research centers, advisory organizations, and charities.
110
00:14:38,562 --> 00:14:50,860
Kayla: It's estimated by EA-critical scholars that EA-based charities have donated at least several hundreds of millions of dollars, probably over a billion dollars at this point, to their chosen causes. There's a lot of money here.
111
00:14:51,360 --> 00:14:57,232
Chris: I see. Now I'm kind of like, wondering, how are they calculating what is the most good?
112
00:14:57,296 --> 00:15:07,844
Kayla: That's why there are research centers and institutes and stuff, is that they have people whose work is to calculate and figure it out and decide and recommend it.
113
00:15:07,852 --> 00:15:11,404
Chris: Sounds like utilitarianism, the movement. Like, that's what the whole thing kind of sounds.
114
00:15:11,452 --> 00:15:17,804
Kayla: It is. There are differences that we'll get to, but there are similarities as well.
115
00:15:17,892 --> 00:15:18,520
Chris: Right.
116
00:15:19,340 --> 00:16:04,920
Kayla: What are some of those chosen causes, by the way? What are EAers donating their money to? The Human Fund? Well, yes. No. They actually, they've got some very specific things. First, before we get into the actual causes, I wanted to note that EA considers something that they call, quote unquote, cause prioritization. So, like, unlike other nonprofits who focus on a single issue, so, like, Susan G. Komen, we all know that's specifically for breast cancer, effective altruists believe the most money should be given to the cause that will do the most good. So there's not, like, there's not a Human Fund. There's not a, like, we are effective altruism, donate to us, and we'll make the most money for effective altruism. They're like, we're gonna work to figure out where the money needs to go, rather than picking a specific thing.
117
00:16:05,300 --> 00:16:26,028
Kayla: They also do not subscribe to local ideals of philanthropy. So, like, helping your local community versus helping a community halfway across the world. Like, a lot of nonprofits are very, like, you know, donate to this nonprofit because it helps, like, people in your city, versus donate to EA causes because they help the most people, even if.
118
00:16:26,044 --> 00:16:27,412
Chris: It's regardless of where.
119
00:16:27,596 --> 00:16:28,140
Kayla: Yeah, right.
120
00:16:28,180 --> 00:16:28,910
Chris: Okay.
121
00:16:29,100 --> 00:16:37,602
Kayla: Effective. Like I mentioned, effective altruists have organizations specifically for researching and analyzing cause prioritization.
122
00:16:37,746 --> 00:16:38,226
Chris: Okay.
123
00:16:38,298 --> 00:16:39,802
Kayla: That's the whole thing.
124
00:16:39,826 --> 00:16:44,830
Chris: Now, just noting here that I'm skeptical of such activities.
125
00:16:46,530 --> 00:16:47,890
Kayla: I might un skeptic you.
126
00:16:47,970 --> 00:16:50,314
Chris: Okay. I have a degree of skepticism going into it.
127
00:16:50,322 --> 00:16:59,242
Kayla: I think that you should. And I also think that I went into this being like, you guys don't do anything. And then I went, oh, my God, these guys do quite a bit, actually.
128
00:16:59,386 --> 00:17:05,622
Chris: Yeah. I'm not denying that they do a lot of work. I'm sure they do a lot of work. But you know what? I'll let you get to that.
129
00:17:05,646 --> 00:17:53,432
Kayla: Well, hold your thoughts. In general, though, to go to the specific causes, EA focuses currently on, as we mentioned, the alleviation of global poverty, tropical diseases such as malaria, deworming initiatives, human deworming, and animal welfare. Like, this is a big one. A lot of especially early effective altruists focused on this. And interestingly, a number of EA critics are also animal welfare people, like animal ethics philosophers. Recently there was a book that came out that was, I forget exactly the title. I think I'm linking it in the show notes because I referenced these academics. But there was recently a book of essays that came out criticizing EA, and the three academics were, among other areas of study, animal ethics philosophers.
130
00:17:53,496 --> 00:18:27,570
Chris: That's interesting. It surprises me a little bit because I remember Emile saying in one part of our interview that, and I hate to quote this because I don't remember who he was quoting, but it might have been MacAskill or might have been from somebody in the book that he wrote, and that's why I don't know if it's an EAer or a longtermist, but he quoted somebody as saying basically, like, if certain species go extinct, that's fine, because they're not sentient or sapient like we are, so they don't. That would be like a net positive.
131
00:18:27,950 --> 00:18:54,502
Kayla: I think that there's some. I think that they have an interesting set of ethics around animals, because it does seem like EAers are very clear that, like, animals are not humans, animals are not sentient. And it also seems like they still can ascribe suffering to animals and say that animals suffer. And so it's better to not cause the suffering of the animals even though they're not sentient. Like, a lot of EA people are vegan and vegetarian. Like, MacAskill, I think, is a vegetarian.
132
00:18:54,566 --> 00:18:54,974
Chris: Oh, really?
133
00:18:55,022 --> 00:18:59,424
Kayla: Yes. And this is a result specifically of their EA beliefs.
134
00:18:59,512 --> 00:19:00,672
Chris: Right. Okay.
135
00:19:00,856 --> 00:19:11,656
Kayla: And last on the list of causes, the long-term future and existential risk. They want to make sure we don't do catastrophic shit now that makes life a disaster for potential future humankind.
136
00:19:11,848 --> 00:19:14,340
Chris: Okay. Yep. There's the x risk thing.
137
00:19:14,640 --> 00:19:33,566
Kayla: The first three are relatively mainstream, normal causes. The last one is where we start to tip over into, like, that weirder side of the TESCREAL, as we've already covered. That's where we get into AI risk. How do we save trillions of future humans, even if that means worsening the suffering of billions of current humans? That kind of stuff, right?
138
00:19:33,598 --> 00:19:34,850
Chris: That's the l, right?
139
00:19:35,230 --> 00:19:45,894
Kayla: In short, longtermism. Yeah, but we're not there yet. We're still talking about effective altruism. I want to talk about how effective effective altruism really is.
140
00:19:45,982 --> 00:19:47,846
Chris: Oh, effective. Effective altruism.
141
00:19:47,918 --> 00:19:57,578
Kayla: Altruism, which, like, is kind of a difficult thing to measure because it's such a big thing. And it's already hard to be like, if I donate a million dollars, how much help is this doing?
142
00:19:57,634 --> 00:20:00,226
Chris: That's hard to measure. Who effects the effectors?
143
00:20:00,338 --> 00:20:13,698
Kayla: But luckily for us, Scott Alexander, a rationalist blogger you may remember from our episodes on LessWrong, has an essay titled In Continued Defense of Effective Altruism that does do the work of giving us some hard numbers.
144
00:20:13,834 --> 00:20:26,378
Chris: Yeah, he has a bunch of, like, famous, I guess, if you want to say, posts on LessWrong. And he also created Slate Star Codex, which is, like, where part of the rationalist diaspora on the Internet went.
145
00:20:26,474 --> 00:21:24,122
Kayla: Now, these numbers were dug up by him, and I do believe that he's done the work to verify this stuff. But I only verified. I verified one of the claims personally, because I'm bad at math, and it checked out. So he claims. This is the one that I verified. He claims that effective altruism has prevented around 200,000 deaths from malaria, citing a number from the Against Malaria Foundation, or AMF. Okay, so GiveWell, the EA charity assessor we mentioned earlier, identifies the Against Malaria Foundation as one of their top recommendations. Scott Alexander says that GiveWell funds about 90 million of AMF's $100 million revenue. So to quote from Alexander's essay: GiveWell estimates that the Malaria Consortium can prevent one death for $5,000, and EA has donated about $100 million per year for several years. So 20,000 lives per year times some number of years.
146
00:21:24,186 --> 00:21:32,810
Kayla: I have rounded these two sources combined off to 200,000. Side note from me: yeah, I saw anywhere between, like, 150,000 to 185,000 to 200,000.
147
00:21:32,890 --> 00:21:33,562
Chris: Okay.
148
00:21:33,706 --> 00:21:50,160
Kayla: As a sanity check, malaria death toll declined from about 1 million to 600,000 between 2000 and 2015, mostly because of bed net programs like these, meaning EA-funded donations in their biggest year were responsible for about 10% of the yearly decline, end quote.
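[Editor's note: to make the back-of-envelope arithmetic in that quote easy to follow, here is a minimal sketch of the math as quoted. The $5,000-per-death estimate and the roughly $100 million per year are taken straight from the quoted essay; the ten-year window is an assumption standing in for "some number of years," chosen only to show how the rounded 200,000 total falls out.]

```python
# Minimal sketch of the back-of-envelope malaria math quoted above.
# Dollar figures are as quoted from the essay; the 10-year window is an
# assumption standing in for "some number of years".

cost_per_death_averted = 5_000        # dollars per death prevented (GiveWell estimate, as quoted)
ea_donations_per_year = 100_000_000   # dollars per year attributed to EA (as quoted)
years_of_giving = 10                  # assumed number of years

lives_per_year = ea_donations_per_year / cost_per_death_averted   # 20,000 lives per year
total_lives = lives_per_year * years_of_giving                    # roughly 200,000 in total

print(f"Lives saved per year: {lives_per_year:,.0f}")
print(f"Rounded total over ~{years_of_giving} years: {total_lives:,.0f}")
```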
149
00:21:50,460 --> 00:22:01,020
Chris: Okay, that sounds good. I know I've heard, like elsewhere, that malaria nets are like a thing, and that's like, you know, an effective thing.
150
00:22:01,100 --> 00:22:14,514
Kayla: I remember that being like a big Bill Gates thing, like malaria has been talked about by people with a lot of money that they're looking to donate for a long time. And clearly the deaths have gone down globally and that's a good thing.
151
00:22:14,642 --> 00:22:16,350
Chris: Good job. I agree.
152
00:22:16,970 --> 00:22:26,698
Kayla: Scott Alexander also credits this to effective altruism: has treated 25 million cases of chronic parasite infection. These are the numbers that I have not verified.
153
00:22:26,754 --> 00:22:27,282
Chris: Okay.
154
00:22:27,386 --> 00:23:18,276
Kayla: Given 5 million people access to clean drinking water. Supported clinical trials for a currently approved malaria vaccine and a malaria vaccine also on track for approval. Supported additional research into vaccines for syphilis, malaria, some other things that I don't know, hepatitis C, hepatitis E. Supported teams giving developmental economics advice in Ethiopia, India, Rwanda. Convinced farms to switch 400 million chickens from caged to cage-free. That's where some of the animal ethics stuff comes in. Freed 500,000 pigs from tiny crates where they weren't able to move around, and gotten 3,000 companies, including Pepsi, Kellogg's, CVS, and Whole Foods, to commit to selling low-cruelty meat. Those are all. If we can trace those efforts back to either EA donors or EA charity assessors, that's not small shit. That's big shit.
155
00:23:18,388 --> 00:23:19,132
Chris: Big if true.
156
00:23:19,196 --> 00:23:39,316
Kayla: Big if true. My next sentence is: now, these are big claims. If you're like me, you might be going, okay, like, are all these things actually effective altruists? Are we just, like, calling some efforts EA because it's easier to absorb something than, like, actually do something? Like, is there a malaria foundation out there that's doing all the work and EA is taking the credit for it?
157
00:23:39,348 --> 00:24:07,274
Chris: Yeah, and again, on that note, I'm also, like, unclear. Like, there's clearly. GiveWell is an EA-specific organization, but isn't EA more like a movement? So if I work for XYZ charity that's doing the malaria nets, that isn't GiveWell. What did you call it, the name of it? Against Malaria. If I'm working for Against Malaria and I self-identify as an EA, is that being counted?
158
00:24:07,362 --> 00:24:22,530
Kayla: Well, I think what Scott Alexander was counting there was the fact that GiveWell is responsible for 90% of Against Malaria Foundation's funding, and GiveWell is EA, specifically. To him, and I agree, that counts as, like, a quote unquote EA effort.
159
00:24:22,610 --> 00:24:24,202
Chris: Totally. Yeah. Yeah. Okay.
160
00:24:24,306 --> 00:24:51,046
Kayla: He also says this, quote, I'm counting it. And this is of everything he's evaluating here. I'm counting it as an EA accomplishment if EA either provided the funding or did the work. Further explanations in the footnotes. And this is a very well footnoted essay. Okay. I'm also slightly, and this is called TESCREAL, Scott, I'm also slightly conflating EA, rationalism, and AI doomerism rather than doing the hard work of teasing them apart.
161
00:24:51,198 --> 00:24:54,630
Chris: See, you can't do it. If only you had the acronym.
162
00:24:54,790 --> 00:25:30,834
Kayla: Side note, Alexander does have a section on EA's impact on AI. That's where the AI doomerism comes in. But we're skipping that for now because, again, the hard work of teasing them apart is hard. And for organizational purposes, discussions of AI, to me, fit better in the framework of what we're discussing next, which is longtermism. Why are we hewing longtermism so closely to effective altruism? Why am I doing two episodes at once? Again, it's because longtermism essentially grew out of EA. There's a reason why it's the last letter in the TESCREAL bundle and why it follows EA. It's because it's literally a subset or a subculture of effective altruism.
163
00:25:30,962 --> 00:25:32,770
Chris: If you take just those, it's eel.
164
00:25:32,850 --> 00:25:38,002
Kayla: It's eel. I'm viewing the L as kind of like the final boss of TESCREAL.
165
00:25:38,066 --> 00:25:39,282
Chris: Yeah, yeah.
166
00:25:39,466 --> 00:25:49,854
Kayla: I'm saying that now. And something worse is going to come along. Not that long termism is necessarily bad. It's not necessarily bad. And actually, I will say there is another final boss that may or may not come up in the show.
167
00:25:49,902 --> 00:25:51,702
Chris: Oh, is this like a secret boss?
168
00:25:51,846 --> 00:25:53,030
Kayla: I think there's a hidden boss.
169
00:25:53,110 --> 00:25:54,558
Chris: Hidden boss. Cool.
170
00:25:54,654 --> 00:26:01,638
Kayla: There's something. I'll just say it here. There's something called effective accelerationism. That's like a movement that's currently taking shape.
171
00:26:01,774 --> 00:26:03,126
Chris: Well, now it's not a secret boss anymore.
172
00:26:03,158 --> 00:26:04,486
Kayla: And that's the secret boss.
173
00:26:04,638 --> 00:26:10,130
Chris: Okay, is this like one of those bosses that is optional, but if you fight it's harder?
174
00:26:11,430 --> 00:26:12,758
Kayla: Yes, sure.
175
00:26:12,854 --> 00:26:14,090
Chris: Ruby Weapon.
176
00:26:14,580 --> 00:26:40,514
Kayla: Effective altruism is one thing. I'm just trying to explain what it is. Effective altruism is like, maybe we shouldn't let AI kill everyone and we should have some safety regulations. And effective accelerationism says, fuck you. No, the only way we can save the world and the future of humanity is if we go pedal to the metal. No regulations on AI, get wrecked. But they're not in the TESCREAL bundle yet.
177
00:26:40,652 --> 00:26:56,894
Chris: Mm. They're sort of, like, orbiting around it. By the way, speaking of letters, do you know how hard it is for somebody in the video game industry to rework their brain around EA? Meaning, I know, effective altruism and not Electronic Arts.
178
00:26:56,942 --> 00:27:18,170
Kayla: I know. Me too. One important thing to know about EA, the movement, not Electronic Arts, is that it's primarily a quote unquote, like, elite movement, meaning that it originated in high-status educational institutions and appeals directly to the very wealthy. Obviously, it's all about, like, give a lot of your money, earn to give, make a lot of money so you can give it. And it has therefore become.
179
00:27:18,210 --> 00:27:19,106
Chris: Alleviate your guilt.
180
00:27:19,178 --> 00:27:52,740
Kayla: Yeah. It's therefore become very pervasive in Silicon Valley culture. And that's where the longtermist subculture incubated and hatched. To define longtermism more deeply, we'll go back to MacAskill again. He says, quote, longtermism is the view that positively influencing the long-term future is a key moral priority of our times. It's about taking seriously the sheer scale of the future and how high the stakes might be in shaping it. It means thinking about the challenges we might face in our lifetimes that could impact civilization's whole trajectory, and taking action to benefit not just the present generation, but all generations to come.
181
00:27:53,480 --> 00:28:01,664
Chris: Okay. Like, again, like with every other letter on the intro bit, I'm sort of on board.
182
00:28:01,752 --> 00:28:03,536
Kayla: Yeah. It's the argument for climate change.
183
00:28:03,608 --> 00:28:12,760
Chris: Right, right. There's just a lot of broadness and assumptions there about when you say long term future, how long? What do you mean?
184
00:28:13,180 --> 00:28:25,060
Kayla: Who, who is a good question. In his recent book, What We Owe the Future, MacAskill breaks it down further. And then Wikipedia pulled a great quote so I didn't have to do the hard work of going and checking the book out from the library.
185
00:28:25,180 --> 00:28:26,148
Chris: Thanks, Jimmy Wales.
186
00:28:26,244 --> 00:28:34,968
Kayla: Wikipedia describes the book as such: his argument has three parts. First, future people count morally as much as the people alive today.
187
00:28:35,104 --> 00:28:36,104
Chris: All right, now I'm off.
188
00:28:36,232 --> 00:28:46,340
Kayla: Second, the future is immense because humanity may survive for a very long time. And third, the future could be very good or very bad, and our actions could make the difference. End quote.
189
00:28:46,640 --> 00:28:56,734
Chris: Okay. Yeah. Two and three seem alright. I don't know about the valuing the future humans just as much as existing humans.
190
00:28:56,792 --> 00:28:58,034
Kayla: I got a problem with that one.
191
00:28:58,122 --> 00:28:59,666
Chris: That is like mad speculative.
192
00:28:59,738 --> 00:29:05,770
Kayla: I got a problem with that one. Yeah, I'm gonna not talk about my problems with that one yet. I'm gonna hold off.
193
00:29:05,810 --> 00:29:08,962
Chris: You're just gonna say it. You're just gonna tease it.
194
00:29:09,026 --> 00:29:16,314
Kayla: I just. This episode again, is more for like information and background. And the next episode is the color episode where I get to go like, I think that this is dumb.
195
00:29:16,402 --> 00:29:17,458
Chris: Oh, that's my favorite part.
196
00:29:17,514 --> 00:29:31,156
Kayla: I know. If you'll remember from previous episodes, this boils down to, quote, bringing more happy people into existence is good, all other things being equal. Longtermists are generally focused on existential risks and preventing the destruction of humanity. Which is a good thing.
197
00:29:31,268 --> 00:29:34,428
Chris: It's a good thing. I can't disagree with that. As broadly as it's stated.
198
00:29:34,484 --> 00:29:44,772
Kayla: I'm back around on longtermism after this episode. There's problems, there's problems. But also fearing climate change and wanting to fix it, that is a.
199
00:29:44,796 --> 00:29:50,916
Chris: Long termist issue, if that's what. For the long termists that care about that kind of thing, I agree with you.
200
00:29:50,948 --> 00:30:10,672
Kayla: A lot of them do. A lot of them do. Okay, existential risk. I keep bringing up climate change, but this can also cover nuclear war, pandemics, global totalitarianism, and then, of course, the weirder stuff like nanotechnology and the grey goose stuff, and artificial intelligence. AI AGI, that stuff.
201
00:30:10,776 --> 00:30:12,056
Chris: Grey goose is good.
202
00:30:12,128 --> 00:30:36,578
Kayla: Grey goo. Grey goo. The nanobots just turn everything into grey goo, not into vodka. Yeah. Longtermists seek to reduce these risks so that we can improve the number and quality of future lives over long time scales. They also believe that humanity. The reason why this is, like, important to them now is they believe that humanity is currently at a critical inflection point where what we do now determines the ultimate future of humanity, which has.
203
00:30:36,634 --> 00:30:38,030
Chris: Never been true before.
204
00:30:38,370 --> 00:30:43,592
Kayla: It's. I'm. I don't think they're totally right, but I also don't think they're totally wrong.
205
00:30:43,786 --> 00:30:44,276
Chris: Yeah.
206
00:30:44,348 --> 00:30:50,796
Kayla: If you look, especially, again, climate change. If you look at climate change and we hear all the time, like, if we don't get our emissions down, then it's gonna be ruining the world forever.
207
00:30:50,868 --> 00:30:57,356
Chris: My only joke there was, at all points in time, humanity is affecting what comes after us.
208
00:30:57,388 --> 00:30:59,148
Kayla: Yes, you're right.
209
00:30:59,284 --> 00:31:02,900
Chris: But, but we're extra special. You're totally right.
210
00:31:02,940 --> 00:31:09,360
Kayla: Yeah, I think we're extra special. I think that. I think that. I can't argue with the climate change thing. We are extra special in that.
211
00:31:09,690 --> 00:31:15,402
Chris: Yes. And also, it's not. Climate change isn't the first environmental catastrophe that we've had to contend with.
212
00:31:15,466 --> 00:31:16,858
Kayla: Oh, really?
213
00:31:16,994 --> 00:31:17,710
Chris: Yeah.
214
00:31:18,610 --> 00:31:20,790
Kayla: You sound like a climate change denier.
215
00:31:21,770 --> 00:31:25,850
Chris: No, I'm not saying it's. It's not the first man made environmental.
216
00:31:25,890 --> 00:31:26,810
Kayla: We all know.
217
00:31:26,970 --> 00:31:30,946
Chris: Just don't be upset that you're. You're taking the L here. You're doing the L episode.
218
00:31:30,978 --> 00:31:32,522
Kayla: There absolutely is no L here for me to take.
219
00:31:32,546 --> 00:31:35,306
Chris: Me to take all kinds of l's. It's raining l's.
220
00:31:35,458 --> 00:32:16,568
Kayla: But again, we go back to the question, what would you say you do here? And then we go back to Scott Alexander's article on the effectiveness of these movements. And I'm going to now focus on the AI section, because, again, that's such a big subset for longtermists. So, quoting from Scott Alexander's article, things that they have done include founded the field of AI safety and incubated it from nothing up until the point where many people are talking about this, endorsing it. We've got Sam Altman, which, oh boy, do we need to talk about that next episode. We've got Bill Gates, we've got big names, and even, I think, the US government. We're all talking about AI safety, right?
221
00:32:16,664 --> 00:32:22,128
Chris: We have enough of a notion of it that Andreessen Horowitz can just steamroll right over.
222
00:32:22,304 --> 00:32:23,336
Kayla: He's an e/acc guy.
223
00:32:23,408 --> 00:32:24,100
Chris: I know.
224
00:32:24,440 --> 00:32:53,032
Kayla: Another thing is, EA helped convince OpenAI to dedicate 20% of company resources to a team working on aligning future superintelligences. They've gotten major AI companies, including OpenAI, to work with ARC Evals and evaluate their models for dangerous behavior before releasing them. They became so influential in AI-related legislation that Politico accuses effective altruists of having, quote, taken over Washington, and, quote, largely dominating the UK's efforts to regulate advanced AI.
225
00:32:53,136 --> 00:32:54,940
Chris: Ooh, that's some language.
226
00:32:56,920 --> 00:33:12,076
Kayla: They helped the British government create its frontier AI task force. And I like this assertion from Scott Alexander: won the PR war. A recent poll shows that 70% of US voters believe that mitigating extinction risk from AI should be a, quote, global priority.
227
00:33:13,348 --> 00:33:14,920
Chris: Wonder where that poll came from.
228
00:33:15,300 --> 00:33:20,900
Kayla: I believe that quote comes from the Artificial Intelligence Policy Institute, or AIPI.
229
00:33:21,060 --> 00:33:22,876
Chris: Okay, so they did some polling.
230
00:33:22,988 --> 00:33:25,968
Kayla: Did some polling. It was conducted by YouGov.
231
00:33:26,084 --> 00:33:28,120
Chris: It was conducted by the T-101.
232
00:33:28,160 --> 00:33:29,540
Kayla: It was definitely conducted by.
233
00:33:30,280 --> 00:33:37,380
Chris: It came door to door. Hello. Are you afraid of my metal body?
234
00:33:39,240 --> 00:33:57,438
Kayla: And it's the ones that say no you really got to watch out for. A couple non-AI but still longtermist-related wins: helped organize the SecureDNA consortium, which helps DNA synthesis companies figure out what their customers are requesting and avoid accidentally selling bioweapons to terrorists.
235
00:33:57,574 --> 00:33:58,302
Chris: That's good.
236
00:33:58,406 --> 00:34:08,790
Kayla: Yeah. That's also, like, a thing that people buy on the dark web. I watched this show on Netflix that I told you about. Remember the roommate from hell or whatever that show was called?
237
00:34:08,830 --> 00:34:09,398
Chris: Oh, yeah.
238
00:34:09,494 --> 00:34:22,951
Kayla: And one of the people had a roommate that was constantly trying to poison and kill her. And she ordered. She didn't order staph infection. She ordered a worse, unsurvivable version of staph infection off of the dark web.
239
00:34:23,014 --> 00:34:23,735
Chris: Jesus Christ.
240
00:34:23,806 --> 00:34:26,998
Kayla: And, like, luckily the FBI found it or something.
241
00:34:27,159 --> 00:34:29,351
Chris: Don't do that. Don't do that, don't.
242
00:34:29,455 --> 00:34:34,659
Kayla: They also provided a significant fraction of all funding for DC groups trying to lower the risk of nuclear war.
243
00:34:34,958 --> 00:34:36,215
Chris: Okay, that's a good one.
244
00:34:36,286 --> 00:34:44,139
Kayla: They donated tens of millions of dollars to pandemic preparedness causes years before COVID and positively influenced some countries' COVID policies.
245
00:34:44,599 --> 00:34:45,175
Chris: Okay.
246
00:34:45,246 --> 00:34:58,103
Kayla: And again, these are claims from Scott Alexander. You know, take everything with a little bit of a grain of salt, but these are EA and longtermist causes and things that they're talking about, thinking about, saying we should donate our time, attention, and money to.
247
00:34:58,271 --> 00:35:11,831
Chris: All right, keeping your Scott Alexander hat on, what do you think he would say to Eliezer Yudkowsky's thing where he's like, it's okay if we get into a global thermonuclear war if it prevents AI catastrophe?
248
00:35:11,975 --> 00:35:37,022
Kayla: I don't get the sense that Scott Alexander would think that was a good idea, but I don't know. I get the sense, and I'm not. I haven't read the Sequences, but Scott Alexander seems, maybe, I don't want to say more measured, but definitely seems less single-mindedly focused. Eliezer Yudkowsky is very focused on the AI threat. And I think that Scott Alexander's focus is a little wider.
249
00:35:37,086 --> 00:35:37,390
Chris: A little.
250
00:35:37,430 --> 00:35:56,410
Kayla: Okay, a little broader. The key argument for longtermism is basically this, quoting from a Vox article: quote, future people matter morally just as much as the people alive today. There may well be more people alive in the future than there are at the present or have ever been in the past, and we can positively affect future people's lives.
251
00:35:56,990 --> 00:36:06,574
Chris: I'm, again, exactly like I was before, down with all of that, except for I don't know where they're getting the future hypothetical people are as important as.
252
00:36:06,662 --> 00:36:31,740
Kayla: I don't either. I don't either. But, like, imagine if you lived 500 years from now and you lived in a world where global nuclear war happened 500 years prior, and now your life fucking sucks. Would you have some anger at your ancestors? Would you think that they had morally owed you better?
253
00:36:33,160 --> 00:37:24,960
Chris: And this is hypothetical, but this doesn't need to be hypothetical, because we already do live 500 years after other humans, and we also live 100 years after other humans. I don't particularly care for a lot of actions of my ancestors, and some of them do impact me and my fellow citizens to this day. So I think sometimes the answer to that is yes. I wish there were some effective altruists in the 1800s that had ended slavery sooner. Right. That would have been nice, right. Or if they were around when redlining was a thing and had managed to have that not be. That would be nice. By the same token, I don't know. You go back far enough, and there have been world wars. Certainly there's been world wars in this past century, but even before that, there's wars that consumed all of Europe.
254
00:37:26,620 --> 00:37:39,094
Chris: I'm not saying that's a good thing. I'm just saying that once you get far enough in the future, it's kind of like, I don't know. I don't know if that would have been better off a different way. I don't even know if I would exist.
255
00:37:39,182 --> 00:37:48,494
Kayla: But I think that's why these guys talk about x risk, because x risk is different than what previous peoples have been capable of.
256
00:37:48,582 --> 00:37:59,614
Chris: Sure. That's why they're concerned with the utter erasure of humankind. And I get that. God, now I'm, like, arguing in their favor because I'm saying, like, even more.
257
00:37:59,662 --> 00:38:20,878
Kayla: I don't think it's super wrong to argue in their favor. I think we'll get into some of the problems in the next episode. The problem comes from fucking people. It's always, people fuck shit up. Like, we are not perfect. And even if you take a perfect ideology, which this is not, it's gonna go in some weird ways. And it has gone in some weird ways, and it continues to go in some weird ways.
258
00:38:20,934 --> 00:38:21,222
Chris: Right.
259
00:38:21,286 --> 00:38:30,806
Kayla: And I think that issue of future people matter morally as much as the people today has gotten really warped in some of these guys' brains to mean future people matter more.
260
00:38:30,918 --> 00:38:31,310
Chris: Right.
261
00:38:31,390 --> 00:38:39,286
Kayla: And we must do things to save those future people. Fuck everyone alive today. They can suffer and die. Those people matter. And that's a problem.
262
00:38:39,358 --> 00:38:47,890
Chris: That dog ends up wagging that tail with the like. Therefore, all the stuff I'm doing as a billionaire is already good. Oh, God.
263
00:38:48,270 --> 00:38:55,106
Kayla: I think that's my biggest problem with this stuff, is that these guys that are talking about it are all rich. And I don't care what they have.
264
00:38:55,138 --> 00:38:57,018
Chris: There's zero diversity. It's like they're all.
265
00:38:57,154 --> 00:39:01,258
Kayla: It's all rich white people. This is a very, very white movement.
266
00:39:01,434 --> 00:39:02,122
Chris: Yeah.
267
00:39:02,266 --> 00:39:12,498
Kayla: And there's just. There's far too much wealth here for me to, like, be comfortable with these guys talking to each other and planning stuff for my life and my children's lives and my great grandchildren's lives and.
268
00:39:12,514 --> 00:39:13,882
Chris: Your great, great.
269
00:39:14,026 --> 00:39:30,676
Kayla: And some of these people, you would be shocked. I'm sure you're shocked. Terrible records on, like, how they talk about disabled people and how they talk about. You don't say, yeah, it's not great. It's not great. But that's for a future episode.
270
00:39:30,788 --> 00:39:43,236
Chris: Yeah. I just. I don't know. I do like your. Your question, though. I do like your question of, like, if you live 500 years, because I'm thinking of, like, how much do I give a shit about what they were doing in the year 1600.
271
00:39:43,308 --> 00:39:43,564
Kayla: Right.
272
00:39:43,612 --> 00:39:47,570
Chris: You know? Like, I don't know. I don't know. I do, and I don't. I don't know.
273
00:39:48,470 --> 00:40:07,438
Kayla: Like I said, doing this episode kind of brought me back around on some of these ideologies, and then. And then I scurried away. And then they brought me back, and then I scurried away. It's like you doing the LessWrong episodes. Like, these movements have contributed to some pretty inarguably good things. Malaria. Great.
274
00:40:07,614 --> 00:40:10,350
Chris: Yeah, malaria is awesome. I'm glad they contributed to it.
275
00:40:10,470 --> 00:40:28,926
Kayla: There's a lot of really bad things here, and it's. It's no fun to just talk about the good stuff. So next time on Cult or Just Weird, we are going to get into the W part of our acronym, the Weird. What the hell is going on with EA and L that's had it in the headlines over the last year? And where is it going now?
276
00:40:29,038 --> 00:40:31,286
Chris: And the J part of our acronym, Juicy.
277
00:40:31,358 --> 00:40:36,142
Kayla: Juicy. Cult or Juicy Weird. This is Kayla, this is Chris, and.
278
00:40:36,166 --> 00:40:40,150
Chris: This has been the longterm Cult or Just Weird.