July 2, 2024

S6E14 - The Future of Humanity: TESCREAL


Wanna chat about the episode? Or just hang out?

Come join us on discord!


---

🎵 it's like 10^58 motes 🎵 when all you neeeed is a torture 🎵


Chris & Kayla discuss the third & final piece of Émile Torres's interview.

---

*Search Categories*

Anthropological; Science / Pseudoscience; Common interest / Fandom; Internet Culture


---

*Topic Spoiler*

TESCREAL & interview with Dr. Émile P. Torres


---

Further Reading

Dr. Émile P. Torres's Website

Dr. Émile P. Torres Wikipedia

Dr. Émile P. Torres Twitter

Dr. Torres & Dr. Gebru's joint paper on TESCREAL

Interesting claim that the existence of birth control means we're already Transhuman (and therefore have lots of evidence about its effects)

---

*Patreon Credits*

Michaela Evans, Heather Aunspach, Alyssa Ottum, David Whiteside, Jade A, amy sarah marshall, Martina Dobson, Eillie Anzilotti, Lewis Brown, Kelly Smith Upton, Wild Hunt Alex, Niklas Brock, Jim Fingal

<<>>

Jenny Lamb, Matthew Walden, Rebecca Kirsch, Pam Westergard, Ryan Quinn, Paul Sweeney, Erin Bratu, Liz T, Lianne Cole, Samantha Bayliff, Katie Larimer, Fio H, Jessica Senk, Proper Gander, Nancy Carlson, Carly Westergard-Dobson, banana, Megan Blackburn, Instantly Joy, Athena of CaveSystem, John Grelish, Rose Kerchinske, Annika Ramen, Alicia Smith, Kevin, Velm, Dan Malmud, tiny, Dom, Tribe Label - Panda - Austin, Noelle Hoover, Tesa Hamilton, Nicole Carter, Paige, Brian Lancaster, tiny, GD

Transcript
1
00:00:00,320 --> 00:00:16,574
Emile Torres: So I think there's sort of two problems. One is the values that are used to determine what it means to become a superior post human. You know, if you say like, we're gonna make ourselves better, well, better is a.

2
00:00:16,622 --> 00:00:17,390
Chris: By what standard?

3
00:00:17,430 --> 00:00:19,010
Emile Torres: Yeah, by what standard?

4
00:00:38,250 --> 00:01:37,514
Chris: Hello, everybody, and welcome back to Cult or Just Weird. What you're about to hear is part three of a three-part interview about the transhumanist movement, its offshoots, and its cultural impact. If you'd like to listen to the first two episodes in that series, those would be the two episodes just before this one. Now, without further ado, here's the final piece of my chat with Doctor Emile Torres. I want to talk about your journey, and even my journey, honestly, within this idea of TESCREALism or singularitarianism or whatever I would have called myself. A lot of the things that these guys say are at first blush hard to disagree with. We want to improve the human condition and lessen suffering. We want to create the most good for the most people, is what effective altruists say. Where does that go wrong?

5
00:01:37,602 --> 00:01:45,030
Chris: How does that, I know we've talked about it a little bit already, but what takes that good stuff and turns it into bad stuff?

6
00:01:45,490 --> 00:02:31,196
Emile Torres: Yeah, I mean, fantastic question. There are a bunch of things to say. I think maybe one thing to mention, or to start with, is something that you had pointed to earlier, which is this notion of shut up and multiply. And so that was illustrated by Eliezer Yudkowsky, very much a part of the TESCREAL bundle. A transhumanist who participated in the Extropian movement, was a big singularitarian. He hired Ben Goertzel, the leading cosmist, to head his Singularity Institute. He founded rationalism, which has played a huge part in the development of EA and longtermism. So, okay, sorry, but this is precisely why we came up with the acronym. That is too much to say.

7
00:02:31,308 --> 00:02:32,480
Chris: Right, right.

8
00:02:32,980 --> 00:03:24,124
Emile Torres: So he is a TESCREAList. That's the much more economical way of putting it, yes. So he argued that if you take seriously this idea that morality comes down to the mathematics, then if you're presented in a forced choice situation between two possibilities, one is that a single individual is tortured mercilessly and relentlessly for 50 years straight. And the other is that some absolutely enormous, unfathomable number of people suffer the almost imperceptible discomfort of having a speck of dust in their eye. Which of these two scenarios should you choose? In other words, which is better, or which is worse would be a more intuitive way of putting it. And he argued that if you just do the math, then the second scenario of dust specks is worse, because even though.

9
00:03:24,172 --> 00:03:26,620
Chris: Depending on the number, if the number of people is large enough.

10
00:03:26,740 --> 00:04:24,764
Emile Torres: If it's large enough. Exactly. So just do the math. Add up the tiny, tiny discomfort that each individual feels, and ultimately, then the total quantity of suffering will be greater than one person being tortured brutally. And so shut up and multiply. This is a key, influential idea within these communities. I think it just illustrates one way that it goes wrong, because when you hear these ideologies and their communities advertised as valuing reason, rationality, science, evidence, clear thinking and so on, that sounds really good. And then you see that their understanding of what it means to be rational and ethical and so on leads them to say, oh, we should pick the person being tortured for 50 years because that is less bad than, you know, bazillion people.
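For readers who want to see the aggregation being described here spelled out, a minimal sketch follows. The per-person suffering values and population sizes are illustrative placeholders, not figures from the episode or from the original LessWrong post.

```python
# Naive utilitarian aggregation ("shut up and multiply"), with made-up numbers.
TORTURE_SUFFERING = 1_000_000.0  # assumed total suffering of one person tortured for 50 years
SPECK_SUFFERING = 1e-9           # assumed suffering of one barely-noticeable dust speck

def total_speck_suffering(num_people: float) -> float:
    """Sum the tiny per-person discomfort over the whole population."""
    return SPECK_SUFFERING * num_people

print(total_speck_suffering(1e12) < TORTURE_SUFFERING)  # True: a trillion specks total less than the torture
print(total_speck_suffering(1e58) > TORTURE_SUFFERING)  # True: at 10^58 people the specks dominate
```

On this strictly aggregative reading, the "rational" choice flips once the population is large enough, which is the conclusion Chris and Kayla push back on below.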

11
00:04:24,892 --> 00:04:29,148
Chris: What if that person's really annoying, though? I mean, then. Right, then it stands to reason, I.

12
00:04:29,164 --> 00:04:33,560
Emile Torres: Think, yeah, maybe there are other factors that.

13
00:04:34,060 --> 00:05:05,690
Chris: Maybe if it was me, then people would be like, okay, right? Yeah, I think this goes on off-podcast, too. But my wife, co-host and I, we discuss the mote-in-the-eye thing a lot. And I try to take. I actually take the Eliezer position about, like, well, if it's big enough. And she's like, no, you're an idiot. I think she's right. But it's, you know, we just. We talk about it a lot. All right, I have to break us here, because I feel like we need to revisit the mote discussion.

14
00:05:06,070 --> 00:05:07,290
Kayla: I don't want to.

15
00:05:07,750 --> 00:05:09,806
Chris: You want to describe my face right now?

16
00:05:09,878 --> 00:05:11,126
Kayla: No, I want to describe what I'm.

17
00:05:11,158 --> 00:05:14,878
Chris: Feeling because I'm feeling my grinning, devilish face.

18
00:05:14,974 --> 00:05:21,342
Kayla: I'm feeling unbridled rage. Okay, first of all, you think I'm right?

19
00:05:21,486 --> 00:05:27,622
Chris: Well, all right. Right is a pretty definitive statement in a world where.

20
00:05:27,726 --> 00:05:29,370
Kayla: Cause you've never said that to me.

21
00:05:29,550 --> 00:06:03,954
Chris: Well, first of all, I'm not going to admit that to you, like, in your face. Okay? Like, I'm not. I'm not a bad arguer. Second of all, right, again is. It's a. It's a relative sort of thing. Okay, let me. Let me back up. I do think that your position, the position that people take, that. Well, actually, I don't even know your position now that I think about it, because there's kind of like two positions. There's like, you're pro mote or you're pro torture. And I kind of feel like your position is like, this is stupid.

22
00:06:04,042 --> 00:06:06,754
Kayla: Okay? That's two different things.

23
00:06:06,842 --> 00:06:55,840
Chris: So I'm not sure. Like, I think that the people who ultimately say don't torture the person, I think, have the more correct position. Yeah, I have a hard time squaring that with a utilitarian view. I don't think you can square that with a utilitarian view, which either points to the bankruptcy of the utilitarian view, which I think is kind of what Torres was saying, or, what it does for me, it just makes it challenging. And I'm not sure if the utilitarian view is totally bankrupt. I tend to think of it as, like, there are different situations and scenarios in which different tools in the toolbox are useful. And I think sometimes it is. You do have to do shut up and multiply and do the math. I think sometimes that is what is required to make a good decision.

24
00:06:56,300 --> 00:07:08,972
Chris: I don't think that it's always that, but I think that if we didn't have that in our toolbox, I think that we'd be down a tool. Yeah. Can't disagree with that, can you?

25
00:07:09,116 --> 00:07:19,622
Kayla: What I can disagree, what I can agree with you here is that I think that there's a fundamental flaw in the setup of the thought experiment.

26
00:07:19,686 --> 00:07:20,126
Chris: Okay.

27
00:07:20,198 --> 00:07:42,690
Kayla: It feels like we're trying to say, like, oh, you can compare the calculations that go into quantifying the suffering that is the result of horrendous 50 years of torture, and the calculations that go into quantifying the mote of dust in trillions of people's eyes.

28
00:07:43,720 --> 00:07:48,328
Chris: Trillions is so unbelievably smaller than the number they're talking about as to not.

29
00:07:48,344 --> 00:08:13,510
Kayla: Even be saying, I know, but that makes it even worse for me. And that's a different thing. I think that the bigger the number gets, the more ridiculous and absolutely, like, unhinged. The question is. And makes me go, like, I want to go back and revisit our less wrong episodes and get even more angry, but that's a personal problem. I just. I don't think that the. I don't think it's apples and oranges, my guy.

30
00:08:14,050 --> 00:08:54,060
Chris: So I agree with you. And I think that actually, I might even mention kind of what you're talking about. I think I might mention that as part of my belief about this to Doctor Torres in a few minutes here in the interview. But I think part of the problem is, yeah, multiplying extremely large numbers. When I say extremely large, I mean larger than trillions, larger than you would believe. Okay? Like, the number they're talking about is in a notation where, if you were to write the number down on paper, not only is there not enough paper in the universe, there's not. Even if you wrote a single digit on every single atom in the universe, even that wouldn't be enough to write the number down. Not the number of atoms in the universe.

31
00:08:54,600 --> 00:09:07,170
Chris: If the number of atoms in the universe each constituted a little piece of paper to write a number down, it still wouldn't be enough to write the number down in a regular notation. Unfathomably large number, right? Just to level set what they're talking about.

32
00:09:07,210 --> 00:09:09,698
Kayla: So large as to not function, exist.

33
00:09:09,794 --> 00:09:46,490
Chris: As to cease to have meaning, right? Like, we have numbers that are that small. We have the Planck length, right? So I kind of feel like at a certain point, numbers get large enough that, like, your normal day-to-day morality calculations just aren't relevant anymore, right? And this kind of goes a little bit with what I think we're going to talk about in the bonus episode, with Pascal's mugging, which is that when you do start to get arbitrarily large numbers multiplied against some increasingly small percentage benefit or whatever, then decision theory kind of goes out the window. So I agree with that.

34
00:09:46,650 --> 00:10:06,868
Chris: I will also say that in the intervening time since I talked to Emile, and also, obviously, since we had our previous Roko's basilisk episode several years ago, when I was doing the less wrong research, of course, I found the mote of dust post. Because you have to, right? It's, like, famous. Like, if I didn't research that, I wouldn't have done my research properly.

35
00:10:06,924 --> 00:10:09,924
Kayla: It's the thing tearing our marriage apart. I'm glad you went back to the scene of the crime.

36
00:10:09,972 --> 00:10:32,834
Chris: Right, well, exactly. And I want to get more tools in my toolbox on how to tear our marriage apart. So I went there and I read it. There was a good argument against it that I hadn't thought of. A poster replied with an argument in favor of please don't torture the guy, the mote of dust is fine. But they argued it in a way that was, like, within the mathematics.

37
00:10:32,962 --> 00:10:36,258
Kayla: Oh, see, that's good. That's not what I can do, but that's good, right?

38
00:10:36,314 --> 00:10:38,650
Chris: And that's why it was, like, very compelling to me. So basically.

39
00:10:38,690 --> 00:10:43,226
Kayla: But also that's like, oh, sorry, sorry, sorry.

40
00:10:43,338 --> 00:10:49,346
Chris: Yeah, but you know what, though? But rage, though, still rage, don't worry, you can keep your rage. Put your rage in a little box.

41
00:10:49,378 --> 00:10:50,938
Kayla: That was the right thing to do. That was the right thing to do.

42
00:10:50,954 --> 00:10:59,340
Chris: Put your rage in a little box. It's totally fine. So their argument was of the form: okay, you have to consider recovery time.

43
00:11:00,000 --> 00:11:03,660
Kayla: Mmm mmm. That's real good.

44
00:11:04,520 --> 00:11:14,584
Chris: One person being tortured for 50 years, assuming that they even can live an amount of time in order to recover from that, God knows how long you'd have to live. Hundreds of years maybe, to recover fully from something like that.

45
00:11:14,632 --> 00:11:15,104
Kayla: Right?

46
00:11:15,232 --> 00:11:23,904
Chris: Who knows? If you consider the recovery time of just to bring it back down to, like, an understandable number, let's just say trillions, even though trillions is basically zero.

47
00:11:23,952 --> 00:11:25,256
Kayla: That's why I said trillions earlier.

48
00:11:25,368 --> 00:11:58,510
Chris: You know what? You were correct. If you consider the recovery time of trillions of people. Yes. Maybe the recovery time, if you add it all together, might be longer than the 300 years it would take for the tortured person to recover. But they're all doing the recovery. You multiply the recovery time by the number of people the same way you do the discomfort times the number of people. So the recovery time for the mote of dust is roughly 1 nanosecond. Because the multiplication happens on both sides.

49
00:11:58,590 --> 00:11:59,190
Kayla: Right.

50
00:11:59,350 --> 00:12:06,294
Chris: Multiplication happens on the discomfort side and also on the recovery side. And I thought that was actually a really compelling argument.

51
00:12:06,342 --> 00:12:15,574
Kayla: That is a really compelling argument. One I probably would not have arrived at on my own in those terms. So thank you, poster for me either.

52
00:12:15,622 --> 00:12:25,950
Chris: And I'm, like, pro math. And I think that utilitarianism in this case, though, it's like saying pro life. No, you're anti math. Why are you anti math, Kayla? You hate math.

53
00:12:26,690 --> 00:12:31,490
Kayla: I want to just touch on the second reason why I have rage about this thought experiment.

54
00:12:31,530 --> 00:12:32,690
Chris: This is going to have to be its own episode.

55
00:12:32,730 --> 00:13:15,206
Kayla: It's just the thought experiment itself. I think part of what bothers me about it is that it just feels like a worse The Ones Who Walk Away from Omelas. It just feels like a lopsided version of that. The Ones Who Walk Away from Omelas is a very famous short story by Ursula K. Le Guin. It's a very short story, everybody go read it. It is posited that there's a town that exists called Omelas, and everybody in the town prospers and is happy, and their children have enough to eat, joyous and running around. But the only way to make this happen is that locked away in a closet somewhere is a child who is horrendously abused all of the time. Tortured, tortured. This child's suffering and the joy of the entire town are linked. The joy is dependent on the suffering.

56
00:13:15,278 --> 00:13:24,102
Chris: What I like about this short story, by the way, is that it takes us back down to normal numbers. So it's much more compelling, I think, to talk about.

57
00:13:24,166 --> 00:13:35,608
Kayla: I think the numbers. Yes, but I think the more compelling thing is the. The levels of experience are actually on the same balance. The joy.

58
00:13:35,664 --> 00:13:36,224
Chris: That's what I mean.

59
00:13:36,272 --> 00:13:45,968
Kayla: And prosperity of an entire town being dependent on the suffering and isolation and torture of a single person that I can wrap my head around. And maybe I'm stupid.

60
00:13:46,024 --> 00:13:46,776
Chris: No, that's what I mean, actually.

61
00:13:46,808 --> 00:13:51,312
Kayla: Maybe I'm a dumb ass. No, that's literally what I mean. I get the quadrillions of people, but I just feel like that is.

62
00:13:51,376 --> 00:13:59,460
Chris: No, you're a human with human experiences and human experiences. It's easier to digest a town full of people, not a multiverse full of people.

63
00:13:59,640 --> 00:14:28,436
Kayla: Compelling story and thought experiment to me. And I highly recommend everybody go read it. You can't put the math on it, though. That's the thing. You can't. It's not a utilitarian story. It has that utilitarian trapping, I think, but it's not ultimately, because of the conclusion. I don't want to spoil it too much. Versus the conclusions that are being drawn on less wrong, which is what we just talked about, like, oh, how do we quantify things like recovery? It's all about the quantifying. And I think The Ones Who Walk Away from Omelas is trying to get.

64
00:14:28,468 --> 00:14:31,292
Chris: Away from the ones who walk away from less wrong.

65
00:14:31,436 --> 00:14:33,996
Kayla: Yeah. Isn't that more right or whatever?

66
00:14:34,108 --> 00:15:23,882
Chris: Yeah, that's more right. Okay, I should shut up now. Yes. No, no. Well, I mean, in general, yeah. But yes, I think that is kind of what that short story is saying. But I also think that short story is, more than anything, just asking us to think about this scenario and the ways in which we make these types of calculations in our own civilization, because we do. And to be completely fair, and this is my other big point that I want to make. I think that's all that the less wrongers were doing, too. I think they get a lot of heat for this particular thought experiment from me personally. From you personally? Well, if you calculate all of the heat mathematically that they get in the world, roughly half of it comes from Kayla.

67
00:15:23,986 --> 00:15:42,514
Chris: There's a little bit from Emile, and then the rest is from everybody else. No, but I just. I think that, like, when I went into the post and, like, read the initial post, read all the replies, my sense wasn't that, like somebody was saying, or even that Eliezer himself was saying, hey, guys, I just wanted to give you this scenario and tell you what the correct answer was.

68
00:15:42,562 --> 00:15:42,914
Kayla: Right.

69
00:15:43,002 --> 00:16:10,102
Chris: It was, hey, we have this utilitarian mindset. So I'm going to purposely pick something. Like, that was the whole point of the thought experiment, was to pick something as horrible as possible on one side with the smallest number, and, like, literally, the reason he chose the mote of dust in the post was because he was like, what is the least something that's still discomforting, like, the minimum thing that's discomforting.

70
00:16:10,166 --> 00:16:10,822
Kayla: Right.

71
00:16:11,006 --> 00:16:27,740
Chris: So it was purposely chosen that way as a thought experiment to consider these utilitarian ideas. And, like, where do they break? If they break? Let's discuss this. Like, I don't think there's anything wrong. No. You know, like, I don't know. I just think that it gets kind of miscast is like, do you see what these idiots believe?

72
00:16:27,900 --> 00:16:55,810
Kayla: I think ultimately one of the things it might boil down to is that some people are going to extrapolate more, like, value and learning and meaning making out of the post on less wrong. And some people are going to extract more value and learning and meaning making out of something like The Ones Who Walk Away from Omelas. And those are just two different kinds of people. These are just two different approaches to a similar thought experiment. And I'm just enraged.

73
00:16:55,970 --> 00:17:06,810
Chris: No, I think you're right. Like, I think that the sort of, like, online debate, and in this case, as I've said in the less wrong episodes, very respectful debate.

74
00:17:06,890 --> 00:17:07,442
Kayla: Right.

75
00:17:07,586 --> 00:17:23,530
Chris: But I definitely think that's still, like, either an acquired taste or, like, something that doesn't appeal to everybody. I know that my sort of, like, penchant for, like, getting into, like, heated discussions with people is pretty rare. I acknowledge that.

76
00:17:24,109 --> 00:17:25,569
Kayla: I don't think it is.

77
00:17:26,869 --> 00:17:53,336
Chris: Well, the fact that I like it, and at the end, I'm not, like, God, piece of shit. I think that's pretty rare. So I agree with you that The Ones Who Walk Away from Omelas is much more of a digestible sort of parable because it's a narrative form. Yeah, just, I guess, long way to say that I agree with what you just said. There's some people who might find that to be more palatable.

78
00:17:53,448 --> 00:18:01,184
Kayla: Boom. I win. I win. I win the debate. I win it all. I win less wrong. I win the argument. The end.

79
00:18:01,312 --> 00:18:05,392
Chris: Okay. But you better thank that guy that posted about the recovery time. I did.

80
00:18:05,416 --> 00:18:06,920
Kayla: I said thank you to the poster, and I say thank you.

81
00:18:06,960 --> 00:18:12,456
Chris: You already did. Wow, that was fast. We were just talking about it. Okay, I think that we should go back to the interview now.

82
00:18:12,488 --> 00:18:13,380
Emile Torres: Yes, we should.

83
00:18:15,790 --> 00:19:06,078
Chris: I feel like part of the problem with utilitarianism, and even with the longtermism, is actually people think that things scale, and things don't scale, especially when you're talking about something like ethics or morality or human behavior. I think that things do scale within a certain limit of small numbers that we are used to dealing with. The trolley problem, you can kind of answer. You can kind of say, well, I'd like to save the five people, I guess. And I think that makes sense. But I think at some point there's, like, a phase change where it kind of doesn't matter if you have ten to the 58 people and you multiply that by the 0.01 of utility that they lose from having the mote in the eye, compared to the one person being tortured.

84
00:19:06,254 --> 00:19:54,466
Chris: Because we've gotten into sort of like a different realm of thinking. And I think that actually kind of comes into play with the longtermism as well. It's like, well, there's so many people, and it's so far in the future, that it kind of jumps beyond the point where we can really actually have a rational discussion about it in this sort of utilitarian framework. Because I still think that there's some value to utilitarianism. I just think that when it gets absurd, to absurd degrees with the mote in the eye example, I think that's where it just really breaks down. Let me turn this into a question. So do you think there's something fundamental, are we misusing utilitarianism here, or do you think that there's still a value to sometimes multiplying?

85
00:19:54,498 --> 00:20:49,974
Emile Torres: I guess my view is, I would say it's a bit complex, although I'm oftentimes annoyed when people describe their own views that way, as if they're, like, a more nuanced thinker than others. I don't mean it that way. But I guess part of the issue is, I don't think utilitarianism is completely worthless as an ethical theory. You mentioned the trolley problem. Like, in a situation where, I mean, there are different variants of it, of course, but in the standard case, you're standing next to a railroad track lever, and, you know, that would switch the track so that the trolley would go straight or to the side. And in that situation, it's sort of like you are actively involved in it, but you're also sort of a passive actor, because the trolley is going to kill somebody.

86
00:20:50,102 --> 00:20:51,150
Chris: It's already happening.

87
00:20:51,270 --> 00:21:06,330
Emile Torres: It's already happening. Yeah. And so in that situation, it does seem right, you know, I would pull the lever so it only kills one person, although I would feel sick about that. Still just feel sick about the situation.

88
00:21:06,630 --> 00:21:07,610
Chris: Yeah. Yeah.

89
00:21:08,110 --> 00:21:31,450
Emile Torres: But I think utilitarianism, I feel like it's a good theory to sort of have in one's moral toolbox, but it's just one of many tools. And I think deontological constraints are really important. I mean, I myself am probably most aligned with a theory called moral particularism.

90
00:21:31,950 --> 00:21:33,720
Chris: I have not heard of that. Yeah.

91
00:21:33,760 --> 00:22:27,350
Emile Torres: So this is. It's basically. So it could be understood as the opposite of a moral generalist approach. And moral generalism is what characterizes most of the other major theories within western ethics, so consequentialism or utilitarianism, deontology and so on. What makes them generalist is the fact that they are trying to identify these general rules for good ethical behavior that apply, and hopefully it's just a very small number, maybe it's just one, in the case of utilitarianism, maximize utility. These rules apply in every single situation at any point in time, past or future. And so one of the particularist critiques is that maybe that's just the wrong way to think about ethics. Maybe ethics isn't like science in this sense.

92
00:22:27,390 --> 00:23:18,584
Emile Torres: Like, in science, we try to at least, like fundamental science, we try to come up with these laws that are invariant, as philosophers say, invariant across space and time. So, you know, Einstein's equations for gravity are supposed to work here on earth, no less than they were in the most distant galaxies across the universe, 93 million light years away. And so maybe ethics, like, isn't like that. Maybe. Maybe that works for science. Maybe there are these invariant laws that just. This is just the way reality is everywhere at all times. But maybe ethics is more like the domain of comedy. You know, it's like there are no rules of comedy. Like, if there were, I would be a stand up comedian because I just, like, memorize the rules.

93
00:23:18,632 --> 00:23:54,842
Emile Torres: I go up there, tell the jokes in every situation, no matter where you are, which culture, cultural context, and so on. Here are the rules for being funny. But comedy isn't like that. It, like, really depends on the particular situation. It's like, yeah, you could even say they're guidelines, right? Like, one thing that often characterizes a joke is a violation of expectation, right? You think somebody's gonna say something, then they say the opposite. It took you off guard. That's funny. But, like, that's not a hard and fast rule, because, like, I could, you know, throw something right now as we're talking that would break a window. That would violate an expectation. You wouldn't think that's funny.

94
00:23:54,946 --> 00:23:59,202
Chris: I would think it was hilarious. Whose window? Actually.

95
00:23:59,346 --> 00:24:52,060
Emile Torres: Right. Yeah, like that. Okay. That would be, like, that would be weird if I did. So maybe ethics is like that. And so that's kind of my view. It's like, I think utilitarianism is an instance of this generalist approach to ethics that maybe is just wrong. And I would say, in addition to that, it's the quantitative approach of utilitarianism that also, in my view, makes it problematic, thinking that ethics is just a matter of adding up the numbers and basically treating people, you and I, as mere means to an end. Because you and I have zero intrinsic value on the utilitarian view. We have only instrumental value as a means to an end, to the end of maximizing value. And like I said before, we're just the containers. We're just these fungible, you know, that is, replaceable units.

96
00:24:53,040 --> 00:25:08,952
Emile Torres: So I also object to that view that, yeah, like, we should create new, happy people, because they're just these means to an end. No, no. I think people are ends in and of themselves, which is a very kind of deontological way of thinking about it.

97
00:25:09,056 --> 00:25:09,336
Chris: Yeah.

98
00:25:09,368 --> 00:25:10,864
Emile Torres: So does that make sense?

99
00:25:10,952 --> 00:25:53,348
Chris: Totally. Actually, I think, yeah, what you're saying makes a lot of sense. I think, you know, I don't know if I've been, like. Certainly haven't heard the term. I said I haven't heard the term, but in my head, I kind of feel like that matches me personally in large part, too. You brought up some good points about the trolley problem specifically. First is that it's kind of already smuggling some assumptions in there that we don't really talk about. We just talk about it as, well, one versus five. But, okay, actually, there's a lot of assumptions about where did the trolley come from? Why is it there? Why am I there? Who put me there? There's actually all this context that makes my decision go one way or the other on the trolley problem that isn't just one versus five.

100
00:25:53,484 --> 00:26:11,026
Chris: What is moral or ethical in one case may not be in another case. And I think that's a scary thought, but I think it's kind of unavoidable. It just doesn't feel like you can universalize kind of like what you're saying. You can't universalize something, a law, around something like ethics or morality.

101
00:26:11,148 --> 00:26:20,262
Emile Torres: Yeah. So that is sort of my approach to thinking about these things. And that's why I said it's complex without trying to compliment myself.

102
00:26:20,366 --> 00:26:21,502
Chris: It is, though. It is?

103
00:26:21,606 --> 00:26:22,250
Emile Torres: Yeah.

104
00:26:22,590 --> 00:26:51,530
Chris: You said on a previous episode of this interview that you were a TESCREAList. You would have called yourself a TESCREAList, probably, before you coined the term, but now that's what you would call it. So what was that journey like? What brought you from being into that stuff to being more, like, you know, warning about its dangers? Because I got to admit, like, there's things that I still am, like, I want to live forever. That'd be cool, you know. Like, the Matrix would be fun, you know? I don't know.

105
00:26:51,950 --> 00:27:41,594
Emile Torres: Yeah. I mean, so I think there are two problems with the eugenic aspect of transhumanism, and part of the transhumanist project is like figuring out how to live forever, to overcome diseases and the aging process and so on, which also sounds kind of good to me. You know, I, like a lot of other people, I have death anxiety. I don't want to die. And so I think there's sort of two problems. One is the values that are used to determine what it means to become a superior posthuman. You know, if you say, like, we're going to make ourselves better, well, better is a.

106
00:27:41,642 --> 00:27:42,394
Chris: By what standard?

107
00:27:42,442 --> 00:28:14,384
Emile Torres: Yeah, by what standard? It's a normative term, meaning you need some norms or criteria or values to understand how that word is being used or defined. So one of the problems, then, concerns the values that underlie this notion of human betterment or perfection or transcendence. There's a lot of ableism, in that a sort of defining feature of a posthuman utopia is that there's no disability. There's this notion that, okay, we're going to be more intelligent. Well, what does intelligence mean?

108
00:28:14,512 --> 00:28:15,860
Chris: Obviously, IQ.

109
00:28:16,520 --> 00:29:06,436
Emile Torres: Right. So IQ is, like, this super limited way of thinking about intelligence. I mean, IQ, the concept, IQ tests and so on, were developed by various eugenicists in the 20th century who were super racist and really sexist. And they thought, you know, women, they're not as intelligent. There are certain racial groups that are less intelligent than other groups. And so IQ consequently was shaped by individuals who were deeply motivated by these prejudices. So this notion of IQ sort of has inherited some of those deeply problematic ideas. Yeah. So all of this is to say that, on the one hand, it's just the criteria according to which they are defining posthumanity that I think is really problematic. And then I also think that, in practice, the process of trying to become posthuman would probably have pretty catastrophic consequences.

110
00:29:06,588 --> 00:29:59,970
Emile Torres: So we hinted at this earlier. Like, on the one hand, it could massively increase wealth inequality. You know, it could make the rich even richer than they are. I mean, since 2020, the five richest individuals in the world have doubled their wealth, while the bottom 60%, which is roughly 5 billion people, have seen their wealth decline. With these enhancement technologies, you could expect that to be amplified greatly. But also, there was this philosopher named Robert Sparrow who had this really good paper where he pointed out that, so, imagine a bunch of parents who have the option of radically enhancing their children. Let's imagine that they are embedded in a racist, sexist, homophobic society, maybe a society exactly like ours.

111
00:30:01,350 --> 00:30:24,038
Emile Torres: And now, if they are rational and if they're inclined towards consequentialism, so what matters for the rightness or wrongness of their actions is the actual consequences, then they're going to be inclined, if they're rational, to ensure that their children are men, are white, are straight, and so on. Because. Straight.

112
00:30:24,094 --> 00:30:24,814
Chris: Neurotypical.

113
00:30:24,902 --> 00:30:32,422
Emile Torres: Yep, neurotypical. Exactly. So because those people are the ones who do best in our racist, homophobic, sexist society.

114
00:30:32,566 --> 00:30:33,134
Chris: Right?

115
00:30:33,262 --> 00:31:19,534
Emile Torres: And so as more and more parents choose to have children like this, the individuals, the parents who have objected and resisted this urge, they're going to become increasingly the minority. And so there's more and more pressure for them to say, well, I don't want to have a kid who is not white, because now 90% of society is white, and just think about all of the opportunities that my child is going to be denied by virtue of their skin color. And so he points out, like, it really could just take one or two generations for society to homogenize and ultimately to become exactly the sort of society that those old eugenicists from the 20th century wanted to bring about. You know, blonde hair, blue eyes, white men, and so on.

116
00:31:19,582 --> 00:32:05,218
Chris: Right. I mean, kind of without even intending it. Which is interesting, because I forget the vocab word, but I know that TESCREALists differentiate themselves, particularly transhumanists combat the eugenicist label, by saying, well, we are not, I think you used the term, first wave eugenicists. We're not like that. Not top down. We're not. It's okay because we're a democracy. It's going to be ground up. I know they use a term like genealogical something or other. It's ground up. So morphological freedom. Yeah, yeah, exactly. So it'll be fine this time. But the scenario you just described was literally that. And because of these forces that people don't really like to think or talk about, it would just push us into that same thing anyway.

117
00:32:05,354 --> 00:32:23,584
Emile Torres: Yeah. So I think that is a key point, because they'll say, well, our version of eugenics is not authoritarian. The old eugenics in the US and in Germany and elsewhere, there were forced sterilizations and so on. No, no, this is a liberal form of eugenics. So you get to choose whether you want to be enhanced or not.

118
00:32:23,712 --> 00:32:29,528
Chris: You get to choose whether you want to be in a lower class or have all of the benefits. It's totally up to you.

119
00:32:29,664 --> 00:33:17,834
Emile Torres: Exactly. Yes. So the state doesn't need to be involved for there to be a kind of illiberalism to this whole situation. Because when you're in a society and, like, okay, you're a college student and, like, you know, half of your peers are enhancing their minds with these nootropics, you know, drugs, like they're taking modafinil or whatever to enable themselves to sleep for just 2 hours a day, study more, and so on, you're going to feel enormous pressure. Or if you're a parent, and, like, all the other parents around are enhancing, or even just 10% of parents are radically enhancing their children so they have a leg up. You know, those kids are able to sit still in class better than the other kids who are just normal and can't be sitting for 8 hours a day.

120
00:33:17,922 --> 00:33:38,770
Emile Torres: Then there's, I mean, there is this invisible hand, if you will, that is deeply oppressive and is going to be forcing them to make a decision. I feel like one of the more explicit acknowledgments of this illiberal aspect to so-called liberal eugenics was in Ray Kurzweil's 2005 book, The Singularity Is Near. Yeah.

121
00:33:38,810 --> 00:33:54,328
Chris: So, hey, sorry. That book has been enormously influential on me, and I would say, when I read it in 2006 or seven, it was a big reason why I think I sort of went down that singularitarian path.

122
00:33:54,424 --> 00:33:55,080
Emile Torres: Yeah.

123
00:33:55,240 --> 00:34:00,768
Chris: So it's. Yeah, and I know that's, you know, stands to reason. Right. It's like the bible of singularitarianism. But anyway, just.

124
00:34:00,824 --> 00:34:05,584
Emile Torres: Yeah, that book was also what got me into transhumanism in the first place.

125
00:34:05,752 --> 00:34:06,056
Chris: Right.

126
00:34:06,088 --> 00:34:47,994
Emile Torres: Yeah. And so he has this fictional conversation with Ned Ludd, whose name gave us the term Luddite, of course. And so in this conversation, Ludd is saying, what if I don't want to genetically modify myself to become a posthuman, right? And Kurzweil's response is, well, if you don't want to do that, it's completely fine. We're not authoritarians. You are free to do whatever you want. But if you don't genetically enhance yourself, then you won't be around for very long to influence the debate. And to me, that is kind of a chilling statement. You know, you're going to get left behind, if I may use that term.

127
00:34:48,161 --> 00:34:49,858
Chris: Oh, callback. Excellent.

128
00:34:49,954 --> 00:35:01,530
Emile Torres: Yeah, good callback. So you'll get left behind. And in that sense, it's like Kurzweil is acknowledging. Okay, yeah, this is quote unquote liberal, but do you really have a choice? No, you don't.

129
00:35:01,650 --> 00:35:12,690
Chris: I mean, in a way, it's almost like more dangerous than top down because it's. Yeah, exactly. It's like it's easier to just kind of let it in. Yeah.

130
00:35:12,810 --> 00:35:39,856
Emile Torres: So these are some of my problems with transhumanism. And, you know, I do not accept the anti-transhumanist argument that we should not pursue radical human enhancement because that's playing God. I don't have any problem fundamentally with, like, modifying ourselves. And as a matter of fact, if our species exists for long enough, you know, for the next 100,000-plus years, we will evolve into a posthuman species, isn't that right?

131
00:35:39,888 --> 00:35:40,872
Chris: It'll happen regardless. Yeah.

132
00:35:40,936 --> 00:36:14,236
Emile Torres: Unless we use genetic technology to fix our genotypes in some way. But otherwise we will evolve. So I don't have a problem with that. It's just, like, on the one hand, it's what are the values that determine who we should become as posthumans? And on the other hand, it's like, in practice, it just looks like it's going to be an absolute disaster. It's going to worsen wealth disparities and could result in homogenization of society, which is exactly counter to what some of the transhumanists say: oh, we want a diversity of different kinds of posthuman beings. Well, I don't know. In practice, is that going to happen? I don't think so.

133
00:36:14,388 --> 00:36:41,210
Chris: Yeah. I mean, the thing that's most attractive to me, and I think probably most people, and is also, like, number one on the agenda, is the anti-aging. And even me, who is like, oh man, anti-aging, yes, let's do it, I don't know how you can look at that and not see that, you know, rich people are going to get it first and there's going to be a feedback effect there. Right? Plain as day. You don't even have to be particularly into this stuff to see that that's the case.

134
00:36:41,590 --> 00:37:03,168
Emile Torres: A couple years ago there was a clip from an interview with Jared Kushner that was being passed around my circles, of him saying, we may be the first generation to never die. And just seeing him say that underlined it: if we got a radical life extension technology in the next few years, Donald Trump could be.

135
00:37:03,304 --> 00:37:06,992
Chris: I'm gonna cue a sound effect of brakes screeching here.

136
00:37:07,096 --> 00:37:07,740
Emile Torres: Right?

137
00:37:09,640 --> 00:37:22,940
Chris: So was it that notion that sort of like, oh, this is going to probably exacerbate existing problems? Was that the thing that kind of started pushing you away from it? Or was it something else? Was it like, a particular moment in time?

138
00:37:23,800 --> 00:38:27,772
Emile Torres: So it was that. And there were really two other things. One is related, I guess, which has to do with realizing that the particular utopian vision that is at the heart of this bundle of ideologies was designed almost entirely by a bunch of super privileged white dudes at elite universities like Oxford and in Silicon Valley, who were all, like, very libertarian, kind of neoliberal in their political leanings. And consequently, this vision, which is basically an extension of techno capitalism, is deeply impoverished. And so I've said before, like, one of the most striking features of the TESCREAL literature is that there's almost no acknowledgment of, much less serious consideration of, what the future could or should look like from alternative perspectives. There's no discussion of indigenous communities and what the future might look like from their particular traditions.

139
00:38:27,916 --> 00:39:17,092
Emile Torres: A lot of them have visions of the future. Islam, even. I mean, I'm very much an atheist, but given the fact that religion is growing around the world, not shrinking. On the global stage, religion is increasing. Islam is the fastest growing religion in the world. There'll be almost 3 billion Muslims in the world by 2050. 2.6 billion, to be more exact. I don't know how you can be a serious futurist and just ignore that, just pretend like religion doesn't exist. No, if you want to seriously talk about what the future should look like, you need to take seriously these various alternative perspectives of what the future could be. And I think it's not enough to just even acknowledge that these perspectives exist or try to incorporate them into your vision of the future.

140
00:39:17,156 --> 00:39:29,720
Emile Torres: There's a colleague of mine whose name is Monica Vilsquita, and the way she puts it is that you can never design for, you must always design with. So this is a claim about the process.

141
00:39:30,580 --> 00:39:31,452
Chris: That's good.

142
00:39:31,596 --> 00:40:02,460
Emile Torres: It's really good. So it's not even enough to say, we're sensitive to social justice issues and to the fact that there are these cultural perspectives and so on. You need to include in the design process voices that represent these particular perspectives. And if you don't do that, then the utopia that you are designing is going to be a dystopia for most people. And so this is why I've argued that avoiding an existential catastrophe would be absolutely catastrophic.

143
00:40:03,800 --> 00:40:04,792
Chris: Interesting.

144
00:40:04,976 --> 00:40:58,270
Emile Torres: It's a weird way to put it. The idea is that if we avoid an existential catastrophe, then by definition, we will have realized the utopian vision at the heart of the TESCREAL bundle. And if we realize that utopian vision at the heart of the TESCREAL bundle, I think that would be completely catastrophic for most of humanity, because there is just no place for other cultures, for other thought traditions and so on. This vision is just all about maximizing value, going out, plundering the universe. It's not even clear that the natural world, that nonhuman creatures, have a future in this utopia that they're trying to bring about. So, like, for example, Will MacAskill, one of the leading figures in this TESCREAL group, wrote in his 2022 book What We Owe the Future that our systematic obliteration of the biosphere might be net positive.

145
00:40:58,650 --> 00:41:04,642
Emile Torres: And the reason for that is that while wild animals suffer because they're engaged in this Darwinian struggle.

146
00:41:04,666 --> 00:41:07,230
Chris: Oh, yeah. This is such a bizarre. Yeah, please.

147
00:41:08,920 --> 00:41:13,880
Emile Torres: So the fewer wild animals there are, the less wild animal suffering there will be.

148
00:41:14,040 --> 00:41:18,540
Chris: And so it's like the reverse of the, like, we should make more people because people are happy.

149
00:41:18,840 --> 00:41:23,312
Emile Torres: Exactly right. So he claims that people in general have net positive lives.

150
00:41:23,416 --> 00:41:23,728
Chris: Yeah.

151
00:41:23,784 --> 00:41:41,848
Emile Torres: He further claims that wild animals in general probably have net negative lives. So if you take this utilitarian view, then if you eliminate those beings, even if they're sentient beings that have net negative lives, you are increasing the total amount of value because they were subtracting value.
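The same kind of aggregation is at work in the claim just described. A minimal sketch, with invented welfare numbers that are not from MacAskill's book, shows why removing net-negative lives raises the total on a strictly additive view:

```python
# Total-welfare arithmetic with invented, illustrative values.
humans = [2.0, 1.5, 3.0]            # assumed net-positive lives
wild_animals = [-0.5, -1.0, -0.25]  # assumed net-negative lives

total_with_animals = sum(humans) + sum(wild_animals)   # 4.75
total_without_animals = sum(humans)                    # 6.5

# Removing the net-negative lives raises the aggregate, which is the
# "net positive obliteration" logic being criticized in the interview.
print(total_without_animals > total_with_animals)  # True
```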

152
00:41:41,904 --> 00:41:43,340
Chris: They were subtract. Right. Right.

153
00:41:43,760 --> 00:42:27,244
Emile Torres: So this is the kind of logic. So, you know, would indigenous communities, would Muslims, would people who embody different views on what the world ought to look like, would they have any place in the utopian world? Utopia, by its nature, is exclusionary. It always leaves out something. You know, if there are nonbelievers in the Christian heaven, it's not heaven. If there are capitalists in the communist utopia, then it's not utopia. Something's gone wrong. So who's left out of this utopia that they're trying to bring about? Actually, I think it's most of humanity, and in fact, I think it's probably most living creatures on earth. And so that is also what led me to go, like, wow, this utopian vision is just fundamentally very exclusionary.

154
00:42:27,332 --> 00:43:15,060
Chris: Yeah, well, it kind of reminds me of the whole, like, yep, they're totally chill with the fact that they expect ten of them to survive the apocalypse, because those are the only people they really kind of gave a shit about in the first place, and that would maximize value, after all. It just reminds. Yeah, it kind of brings it back to that. Is there anything, like you said, you're still. You still think that human enhancement, for example, could be something that is good, with a lot of caveats about how we manage it and what we value when we bring it about? Do you have other examples of that? Like, things where you're like, well, I still like this concept, it's just, when it's not bastardized into some weird shut up and multiply situation, it could be a good thing that we do.

155
00:43:19,850 --> 00:43:23,474
Emile Torres: Nothing really occurs to me. Should I get a Neuralink?

156
00:43:23,522 --> 00:43:27,190
Chris: Because it'll let me play civilization? That seems pretty good.

157
00:43:29,650 --> 00:44:10,150
Emile Torres: Yeah, I would probably not advise that one does that. I'd be worried about. I don't know. That's a very intimate, invasive technology. And a lot of these companies, their business model is based on surveillance capitalism, the collection of data. There's all sorts of ways you could collect data if you have a neural chip. So I would be very worried. In the real world, I am deeply opposed to transhumanism and a lot of these technologies. In a very abstract sense, I'm much less opposed and would make an argument that they could potentially be used for good.

158
00:44:10,190 --> 00:44:44,400
Emile Torres: I mean, it's a similar thing, like saying, you know, a lot of philosophers would say that torture in a completely idealized situation where, you know for a fact that it's effective, you know for a fact that, you know, 10 billion people are going to die, and so on, they would say, yes, you should torture this individual. But the large majority of those philosophers, myself included, would say, in the real world, we will never, ever have anything like that. And so all these thought experiments about the ticking time bomb, and would you torture someone? They're just irrelevant. And so that's sort of how I feel about a lot of these ideologies.

159
00:44:44,440 --> 00:45:17,840
Emile Torres: Like, yes, I don't know, in a vacuum where you're controlling every single aspect of the scenario or situation, and you know with certainty that it's this way or that way, then, like, okay, yeah, maybe then human enhancement might be completely fine, if you know that it's done democratically and everybody is aware of the pitfalls of eugenic thinking, that certain groups of people are better, that humanity can be organized in a hierarchy, and so on. If and when people have, like, completely eradicated all of these biases, then should we modify humanity? I don't know. Sure.

160
00:45:18,220 --> 00:45:20,852
Chris: But there was a lot of assumptions that you just said there.

161
00:45:20,876 --> 00:46:19,460
Emile Torres: Yeah, huge amount of assumptions. And so that's sort of how, I mean, maybe the one thing I would say, which is not so much a value, doesn't concern a value judgment, but nonetheless is related, I think, to your question, concerns the singularity. And so it's not so much a claim that, okay, like, maybe there's a version of the singularity that might actually be good if this advanced technology were actually used to benefit all of humanity, which I think in the real world, it absolutely won't be. It'll just empower the already powerful. Nonetheless, I would say that I think it is possible that we create technology that is completely transformative. And I am somewhat skeptical of these arguments that we might have an intelligence explosion, one of the definitions of the singularity. I am skeptical that's even physically possible for. For various reasons.

162
00:46:20,360 --> 00:47:08,520
Emile Torres: Really good article by a philosopher named David Thorstad called, I think, Against the Singularity Hypothesis, which basically explains my view, but he articulates it in a way that is more sophisticated than I possibly could. So I'm skeptical of that. But nonetheless, I think it's possible. And that worries me. I'm not just saying, like, oh, the singularity idea is ridiculous because it's never, ever going to happen. I don't know. It could. And we might be in a world where we create these machines that are able to solve problems and process information much better, much faster than we possibly could. But I'm anxious about that world. Does that make sense? So it's a. Yeah, I don't know. It could happen, but I don't see any really good outcome.

163
00:47:09,220 --> 00:47:24,620
Chris: Right. So what's the action item, then, for, you know, for all of us that are concerned about this? Like, what is. Is there anything that we can. That we should be doing to try and combat some of the more, like, you know, nefarious bits that we've been talking about?

164
00:47:25,520 --> 00:48:28,840
Emile Torres: Yeah. So this is, I think, a really important question that I don't have a good answer to. I don't really know what we should do. In the TESCREAL FAQ, frequently asked questions, that I'll publish soon, the last question is exactly this. And I mentioned that a primary goal of future research projects is to try to formulate a good answer. What can we do? What should we do? Should we go stand outside of OpenAI and protest? Should people actually join hands with some of the doomer TESCREAL people? I don't think so, but that's one possibility. Beyond that, I'm just not really sure. I mean, these companies, and the TESCREAL individuals who have founded these companies and now run them, they are massively powerful. And someone like me, I just publish articles in Truthdig and Salon and various other outlets. So I'm just sort of screaming into the void.

165
00:48:29,220 --> 00:48:35,720
Emile Torres: I really don't know what to do. It's okay. That's the reason why I'm deeply pessimistic about the future.

166
00:48:39,230 --> 00:49:07,200
Chris: Back in the beginning of the interview, we were sort of talking about the eschatological nature of this bundle of beliefs. And I've seen takes where AGI is likened to summoning God. You talked about the singularity being rapture for nerds, right? So I guess the question then is, does this bundle, do TESCREALists, constitute a cult? I got to get the name of the podcast in. I got to brand the interview here.

167
00:49:07,320 --> 00:49:59,720
Emile Torres: Yeah, I would say that there is a good case to make that a lot of these communities are run like a cult. It might sound hyperbolic, but I really don't think it is. So take EA, for example. It is very hierarchical. There's a lot of hero worship. I mean, by their own admission, you could find them talking about this. People in the EA community have said there's way too much hero worship. There's a lot of worship of MacAskill and Toby Ord, Eliezer Yudkowsky and Nick Bostrom and so on. You know, sort of like, if you're on one of the lower rungs of this hierarchy, you oftentimes need some kind of permission to do certain things. You know, the Center for Effective Altruism has guidelines for what people in the EA community should and should not say to journalists.

168
00:50:00,700 --> 00:50:33,970
Emile Torres: They tested a ranking system, secretly tested an internal ranking system of members of the EA community, based in part on IQ. So members who have an IQ of less than 100 would have points removed; once they have an IQ of 120, then they could start to have points added. All of this sounds very cultish. They even do something many members participate in: what they call doom circles, where you get in a circle and you criticize each other one at a time. And if you.

169
00:50:34,390 --> 00:50:37,006
Chris: Oh, man, yeah, that is for sure.

170
00:50:37,198 --> 00:51:26,598
Emile Torres: Right? And if you go on the EA forum website, you can find the statement that they begin each critique with. And it is, like, you know, for the greater good, for the benefit of this individual and the aim of improving themselves and their epistemic hygiene and so on. It sounds so bizarre. I mean, there are many examples. There's a lot of myth making in the community. You know, leading figures like MacAskill basically propagated lies about Sam Bankman-Fried, you know, one of the saints of the EA, rationalist, longtermist community. He drove a beat up Toyota Corolla and lived this very humble lifestyle. What they didn't tell you, what they knew, was that Bankman-Fried lived in a huge mansion and owned $300 million in Bahamian real estate.

171
00:51:26,774 --> 00:52:19,892
Emile Torres: So, just like a lot of the televangelists who preach one thing and then are flying around in private jets without anybody really supposed to know about that. MacAskill himself celebrated the release of his book What We Owe the Future by partying at an ultra luxurious vegan restaurant in New York City, where the menu started at, I think it was, $430 a person. And then a couple months later, he went on The Daily Show and boasted to Trevor Noah that he gives away more than 50% of his income. Well, yeah, he's got a bunch of millionaire and billionaire friends who pay for him to go down to the Bahamas and eat at these ultra luxurious restaurants and so on. So, anyways, there's probably 50 more things to say about that, but it is very cultish in many ways.

172
00:52:19,916 --> 00:52:28,480
Emile Torres: And in fact, when I was part of the community, we had a number of conversations about whether or not we are part of a cult.

173
00:52:29,060 --> 00:52:29,452
Chris: Interesting.

174
00:52:29,476 --> 00:52:43,858
Emile Torres: I have even. Here's the last thing I'll say. I've even had a. This is 100% true. I've had family members of some of the leading EAs contact me in private to say that they're worried their family member is part of a cult.

175
00:52:43,994 --> 00:53:08,720
Chris: Oh, my gosh. Some of the things you mentioned are pretty classic. Yeah. So I just have one more question before I let you go. Are you worried that this interview is going to get you tortured by a future super intelligent robot? Or maybe this is. Maybe the interview itself is the torture. Maybe you're already being tortured.

176
00:53:09,580 --> 00:53:23,982
Emile Torres: No, this was fun. This is fun. This is not a version of hell. I've done some interviews that I wanted to escape. This was a whole lot of fun. I mean. Yeah, good question. I mean, there's this idea of Roko's basilisk.

177
00:53:24,076 --> 00:53:24,418
Chris: Yeah.

178
00:53:24,474 --> 00:54:09,980
Emile Torres: Yeah. So for all I know, I might end up being tortured. Maybe this would be presented as evidence. I remember Colbert once showed a clip of a trend that was going around TikTok and YouTube of people who were taking their old dryer outside and plugging it in with an extension cord and then turning the dryer on with a cinder block or brick in it. And these are hilarious videos. If you watch it, the dryer will self destruct over ten minutes. And it's pretty funny. And anyways, the reason I mentioned this is that he suggested that this would be exhibit A when we are put on trial by the machines.

179
00:54:10,960 --> 00:54:14,112
Chris: Look what you did to these poor dryer machines. Yeah.

180
00:54:14,256 --> 00:54:15,704
Emile Torres: Mistreated us so horribly.

181
00:54:15,752 --> 00:54:18,802
Chris: I know. And after they did so much for us.

182
00:54:18,976 --> 00:54:26,438
Emile Torres: Yeah. So, I don't know. Maybe this interview and my writings opposing the creation of superintelligence would be used against me. I don't know.

183
00:54:26,574 --> 00:54:40,090
Chris: Well, if this gets you tortured forever in the future, then I apologize. But if it doesn't, then I'll just say I really appreciate your time and expertise on this. I've had a lot of fun talking to you.

184
00:54:40,550 --> 00:54:45,302
Emile Torres: Maybe we'll be there together. You can apologize to me, right, while.

185
00:54:45,326 --> 00:54:47,838
Chris: We're getting our lashes or whatever the robot does.

186
00:54:48,014 --> 00:54:50,970
Emile Torres: Yeah. Cool. Great chatting with you.

187
00:54:55,190 --> 00:54:56,770
Chris: So there you have it. It's a cult.

188
00:54:57,510 --> 00:55:00,050
Kayla: Honestly, our job's done.

189
00:55:01,030 --> 00:55:12,734
Chris: It was more culty than I thought. When I asked that last question, I thought it was gonna be like, oh, yeah, they have some weird jargon. But then Emile kept going on about these different things, and I was like, no, that's mega culty. What are you talking about?

190
00:55:12,782 --> 00:55:40,896
Kayla: Yeah, I might have some follow-ups for Doctor Torres. I want to know more about these shame circles. And, like, we definitely need to look up, like, the vocabulary and the verbiage that's used. And, like, I want to know about these, like, family members, when they're saying, like, hello, is my family member in a cult? I want to know, are they looking for deprogrammers? Like, I just, I have some follow-ups for Doctor Torres, and I'm very grateful that they shared those little culty tidbits with us.

191
00:55:41,008 --> 00:56:25,970
Chris: Yeah, it's funny, because I keep going back and forth on this. That was mostly about TESCREAL, but it sounded like there was a heavy emphasis on the EA part, on the effective altruist community. And I keep going back and forth on that, where I'm like, oh, man, these guys earn to give. That's so culty and dumb. But then I'll read stuff or I'll talk to someone and get this sense that it's like, no, they're just trying to do the most good with the fewest resources. That's smart. And then I'll hear stuff like that where I'm like, whoa, what? Yeah, I just have such whiplash. This whole last month and a half, two months of episodes, I just keep having this whiplash about the various appendages of the TESCREAL community.

192
00:56:26,090 --> 00:56:57,394
Kayla: I think for folks who are not full-fledged TESCREALists, there's a lot of take what you like and then leave the rest here. There are a lot of cool things to be gleaned and, like, interesting ideas, and, like, you know, I hope that I'm humble enough to not think that I have all the answers and know everything about everything, and have enough curiosity to be like, ooh, what's going on in this community? But, yeah, leave the rest. Leave it.

193
00:56:57,482 --> 00:57:24,612
Chris: Yeah, there's definitely a bit to leave. Such as, let's talk a little bit about the eugenics thing. I think that was one of the biggest points of certainly this part of the interview, and maybe of the entire interview. And we've discussed this in previous episodes, and we will definitely bring it up again: transhumanism, in particular, has some roots in eugenics of the early 20th century, which.

194
00:57:24,636 --> 00:57:25,380
Kayla: Was not a good.

195
00:57:25,460 --> 00:57:53,022
Chris: Which was not good. Not that there's a good eugenics, but. Well, actually. That's maybe some of the argument that might be put forth by certain transhumanists: that the current version you might call eugenics is not as bad because it's liberal eugenics. It's this new wave eugenics where it's ground up. There's no Hitler at the top saying, like, we have to sterilize these bad people. It's people making their own choices.

196
00:57:53,086 --> 00:57:53,526
Kayla: Right.

197
00:57:53,638 --> 00:58:26,448
Chris: And the scenario outlined in the interview makes a lot of sense, though. It's like, if the societal pressures exist, if these structures existed, then it doesn't matter if it's the Fuhrer saying it or if it's ground up, if it's just this influence that has this homogenizing effect over time, because everybody wants to have babies that are the most adapted to whatever societal conditions we have. And we were talking about this during the interview. This kind of already exists.

198
00:58:26,624 --> 00:58:27,380
Kayla: Yeah.

199
00:58:27,680 --> 00:59:11,522
Chris: And through our own fertility journey, we've had to think about this. When we had our genetic testing for our son, one of the things that genetic testing is for is they will test to see if Down syndrome exists in the fetus. And what you and I were talking about was, like, this exact thing that Emile Torres was talking about, where society is set up so that you are incentivized to not want to have a child that would suffer, right, with Down syndrome. Like, we can already do that. There are already people who elect, based on genetic testing, to not have a child with Down syndrome, because they will suffer in a society that is ableist and not set up for them.

200
00:59:11,586 --> 00:59:11,906
Kayla: Right.

201
00:59:11,978 --> 00:59:19,834
Chris: But the problem isn't necessarily the child, it's that we're in a society that's not set up for them. So literally, the stuff that Doctor Torres was talking about is already happening.

202
00:59:19,922 --> 01:00:19,972
Kayla: And I think that's the thing that I get more worried about here. Rather than, like, a Gattaca-type future where we're just, like, genetically engineering all the babies to, like, have, you know, really great eyesight and, like, blue eyes and blonde hair and, like, be Übermensch. I don't know who'll ever get there, whatever. I just. I do think about and have moral dilemmas about what we as a society are going to expand to consider: oh, well, I, as a parent, this is an acceptable reason for me to terminate a pregnancy for quote unquote medical reasons. And that's not to get into any sort of, like, pro life, pro choice argument. It's simply that we currently have a window for termination of a fetus for medical reasons. Some people elect to terminate pregnancies when Down syndrome expresses. That's on a weird borderline for me.

203
01:00:20,036 --> 01:00:38,716
Kayla: Some people choose to terminate when things like trisomy 13 and 18, I think, are expressed, which are chromosomal or genetic abnormalities or conditions that will all but ensure a very painful, short life full of suffering for a baby that could potentially be born with that. Short.

204
01:00:38,748 --> 01:00:40,920
Chris: Like, a day or less? Hours. Days.

205
01:00:42,200 --> 01:00:42,976
Kayla: Not good.

206
01:00:43,088 --> 01:00:44,260
Chris: Not good, right?

207
01:00:45,840 --> 01:01:11,704
Kayla: I think that when we continue to hone our science and technology around identifying genes for things, it's concerning. It is concerning to think, like, oh, what if we identify an autism gene or this gene or that gene? Like, do we then live in a society that is going to expand the reasons why we terminate pregnancies?

208
01:01:11,872 --> 01:01:48,680
Chris: Well, we can't do that with autism, because then nobody would know when the trains are coming. I mean, society would just collapse. But to your point, though, yeah, I don't know, it's fuzzy, right? There's the slam dunk of, like, this child, if it survives outside the womb at all, will have, like, a painful two hours and then suffocate, right? Like, that's easy. And then there's the other side, where it's like, I want to terminate this fetus because it's going to have brown eyes. Like, that seems pretty bad. And then there's stuff in the middle. I mean, for something like autism, I'm certainly way on that side.

209
01:01:49,620 --> 01:02:06,232
Chris: But then you go back to, like, the Down syndrome thing, and it's like, yeah, I don't know. Like, it seems horrible, but yes, they would suffer in a world not set up for them. You and I were talking. You actually mentioned this. Like, imagine a world where ableism was a thing of the past, and we had systems set up for.

210
01:02:06,296 --> 01:02:10,528
Kayla: We built an infrastructure that accommodated all different kinds of disabilities.

211
01:02:10,624 --> 01:02:15,808
Chris: Right. There's like a system waiting for people with this sort of disposition. Right.

212
01:02:15,864 --> 01:02:24,192
Kayla: You know what school you're going to get into, you know what group home you're going to be able to get into. You have government subsidies for various therapies and counselings and the things that you.

213
01:02:24,296 --> 01:02:56,276
Chris: Need in order to live a functional, happy life. Then I'm like, well, yeah. Like, then it's. Then it seems like a slam dunk. Like, why would you not have that child? But again, goes back to Doctor Torres argument. Not argument, but Doctor Torres scenario, where these pressures of living in a society that is or is not constructed for. You still have this, even though there's not a top down, like, you know, sterilize these people. It's still there. And that's. It's disconcerting that it's like, it's kind of already there.

214
01:02:56,348 --> 01:03:37,128
Kayla: The pressure from the system to conform to certain standards is, like, extremely powerful. Yeah. Whether or not there's a Hitler-type figure saying, do this, do that. Like, that's just a powerful thing. And I think that. I don't know, I think that our society would not be better off if we were able to select for or, like, select out disability. Like, that's just not a controversial statement. And I don't think our society. I think our society would be better off building systems around people, not building people around systems.

215
01:03:37,264 --> 01:04:23,170
Chris: Yeah. I certainly think that if we became more homogenized, neurotypical, that would be a net loss for everyone, including the neurotypicals. Speaking of birth control, this also reminds me of a TikTok that we were served, probably because we've been researching this, but there was sort of an open forum about transhumanism. There was just, like, a bunch of people who came together to discuss transhumanism. And there's a columnist named Mary Harrington who brought up a really good point. I'll try to summarize it here. There's an article I'll link to in the show notes, on unherd.com, where Mary Harrington talks about how transhumanism is already here. And basically what she says is that she considers birth control to be transhuman, which I think I agree with. Right. I think that makes sense.

216
01:04:24,870 --> 01:05:19,430
Chris: We have taken control of our biology in some way. It's literally in the word, right? Birth control: I can control when I give birth. And so that constitutes something that's transhuman. And she's saying that we have 60 years of data now to go on to show that actually doing this sort of thing does lead to exacerbated inequality in society. So, like, people that have access to birth control are able to retain generational wealth and, you know, protect their power more so than people without money and wealth who can't access birth control. And then that has sort of, like, a feedback effect. So, from the thing we were just talking about regarding genetic testing of fetuses to birth control, we already have examples of this doing the things that we were talking about in the interview.

217
01:05:19,770 --> 01:05:26,162
Chris: I don't really have a question for that other than just pointing out that this seems to be empirically true.

218
01:05:26,226 --> 01:05:29,026
Kayla: Right. Yeah. Well, what's the this, when you say.

219
01:05:29,058 --> 01:05:34,030
Chris: This, this concept, this warning that transhumanism.

220
01:05:34,570 --> 01:05:38,130
Kayla: Has, like, inequalities built in, that transhumanism.

221
01:05:38,170 --> 01:05:42,390
Chris: Will constitute an upgrading of inequality, as Emile put it?

222
01:05:43,120 --> 01:05:46,808
Kayla: Yeah. Yeah. I don't see how it's not, I.

223
01:05:46,824 --> 01:06:09,018
Chris: Mean, but here's the thing, though. Like, as I think about this, and I've thought about this TikTok a lot, as I'm sure you have, I also go and say, like, all right, so then would I go back in time and uninvent birth control because of this inequality that it's upgraded for us? No.

224
01:06:09,154 --> 01:06:09,830
Kayla: Right.

225
01:06:11,290 --> 01:06:57,142
Chris: I don't know what that means. I don't know if that means I'm just insensitive to the inequality or if we just have to accept that there's bad that comes with the good. I certainly think that, at the bare minimum, it's a splash of cold water on the utopian idea of transhumanism. And the good transhumanists that I know, certainly at least Max More, have said, I'm not a utopian person. They disavow utopianism, even though they kind of are. Think about that anyway. But that's, like, what the word extropian is for, right? But I do think that we have real world examples that show, at least, that this is not all sunshine and rainbows.

226
01:06:57,286 --> 01:07:53,266
Kayla: I think a question for me is, does there exist enough global wealth currently not distributed in a way, but does there exist enough global wealth that could, theoretically, were it distributed differently, grant equal access to birth control around the world? And if the answer to that is yes, then that supports a belief that I have that a transhumanist future built on the current system of inequality is simply going to exacerbate or reflect that inequality. It's not going to do away with it. Perhaps a step to be taken or some steps to be taken before we start building our future society around TESCREAL beliefs is fixing and dismantling some of the systems we have now and building something better that can lead to a more equal future.

227
01:07:53,438 --> 01:07:55,442
Chris: Yeah, tell that to the accelerationists.

228
01:07:55,506 --> 01:07:57,178
Kayla: I don't want to. I don't want to.

229
01:07:57,354 --> 01:08:12,930
Chris: The accelerationists are. And this is why, I think, yeah, we've mentioned before, TESCREAL is not a monolith, right? You have people like Eliezer Yudkowsky, who we've kind of disparaged in this interview, but who also is more on your side in terms of, like, hey, let's wait until we figure this out.

230
01:08:12,970 --> 01:08:13,506
Kayla: Right?

231
01:08:13,658 --> 01:08:26,790
Chris: Versus, like, a Marc Andreessen who's, like, pedal to the metal. You know why? Cause I'm rich. But I won't say that. I'll say that the AI will fix all these problems for everyone. And you know what? If everything goes wrong, then I have my bunker, so who cares, right?

232
01:08:26,910 --> 01:08:32,877
Kayla: I will say that I wish I had disparaged Eliezer Yudkowsky a little bit less in some of those previous episodes.

233
01:08:32,934 --> 01:08:34,518
Chris: Oh, but he's so disparageable, though.

234
01:08:34,573 --> 01:08:44,118
Kayla: Yes. And also, I think that more voices on the side of putting some brakes on some of this stuff is good.

235
01:08:44,254 --> 01:08:48,174
Chris: It is. I wish his voice wasn't so, like, zany.

236
01:08:48,221 --> 01:08:52,380
Kayla: I wish his voice wasn't also calling himself a hero, but he is.

237
01:08:52,460 --> 01:08:56,564
Chris: Yeah, but, I mean, I would take two Eliezer Yudkowskys for every one Marc Andreessen.

238
01:08:56,612 --> 01:08:57,292
Kayla: Yeah, Marc Andreessen.

239
01:08:57,316 --> 01:09:08,028
Chris: If you're listening to this, I will never get funded by Andreessen Horowitz, which, you know, we definitely need some VC funding for this podcast. If you are listening, you know, if.

240
01:09:08,044 --> 01:09:12,028
Kayla: You are a billionaire who likes to be disparaged. Yeah, give us your money.

241
01:09:12,084 --> 01:09:50,754
Chris: I can make a pitch deck that has the letters A and I in it. I can do that if that's what it takes. I also want to point out, I'm not sure, I think I more agree with you that there is a world in which some of these enhancement technologies and control of our biological destiny, blah, blah, stuff could be good. I think I agree with you that there exists a scenario where that is a benefit. But it is interesting to hear Emile talk about their opinion that, I don't know if there's ever anything that's idealized enough that this would be a good bit of gasoline to throw on. Yeah, I don't think I agree with that.

242
01:09:50,841 --> 01:10:39,766
Chris: But it's certainly a good point and a good check to say, okay, like, yeah, you're saying if this were in place, and if this were in place, and if this was solved and if this was fixed, and if this was fixed, then it would be totally good. And their point was like, okay, that's pretty unrealistic, right? Like, we always think that, like, the better society is around the corner, and it's kind of always, like, inequality is always there. Like, even engaging that thought is, like, pseudo utopian. Right? Like, as soon as we have the initial utopia, when we have the socialist utopia, then we can do the transhumanist utopia. So I think there's a really good point there, too. I don't know if I agree with it, but I also. If I didn't, that might also be a cope. I don't know.

243
01:10:39,878 --> 01:11:04,228
Kayla: I think there's also, like, this conversation is going to be imperfect because we're talking about genetically selecting out certain people or not certain people, or, like, that's always bad. Yes, yes. But we are having that conversation. Like, you and I are both mostly able bodied folks who are moving through a world that is mostly built for us, and we're speaking for people who have a different experience.

244
01:11:04,364 --> 01:11:04,884
Chris: Sure, sure.

245
01:11:04,932 --> 01:11:24,886
Kayla: Blah, blah, blah. Imperfect conversation. That being said. Yeah. I think that it is idealistic to believe that we can achieve a utopia. And there's something that you and Doctor Torres talked about in your interview of, like, utopia requires somebody being left out.

246
01:11:24,918 --> 01:11:27,030
Chris: Oh, that was a great point. Yeah.

247
01:11:27,110 --> 01:11:39,444
Kayla: I think that in turn, makes me more interested in, for myself, doing thought experiments on, like, how can we build a nice. A nice future?

248
01:11:39,532 --> 01:11:40,756
Chris: Nice utopia.

249
01:11:40,948 --> 01:11:57,980
Kayla: Not a nice utopia, because we can't have a utopia. But how can we design a society? Or how can we build a society that includes the most amount of people, right. While excluding Nazis and those who would tear down such a nice society.

250
01:11:58,140 --> 01:11:59,924
Chris: Don't you want diversity of thought, Kayla?

251
01:11:59,972 --> 01:12:02,892
Kayla: I do want diversity of thought, but I don't want fascism.

252
01:12:02,996 --> 01:12:04,164
Chris: I want diversity of thought.

253
01:12:04,212 --> 01:12:11,540
Kayla: T h o t. Lots of different kinds of whores. That's... nothing? I got nothing from you.

254
01:12:12,240 --> 01:12:14,660
Chris: No, I was just in suppressed laughter mode.

255
01:12:17,360 --> 01:12:26,720
Kayla: I am personally more interested in that type of future thought than pedal to the metal. How do we make AI solve all of our problems?

256
01:12:26,800 --> 01:12:34,064
Chris: Yeah, but the AI will come up with the correct conclusion. Look, I'm just presenting the accelerationist.

257
01:12:34,232 --> 01:12:49,700
Kayla: As long as all the AI is based on the fucking language learning from the Enron emails, I don't give a shit. I can't. I cannot. The racism is baked into the AI, and we have to not have that. Sweet baked goods.

258
01:12:50,360 --> 01:13:38,996
Chris: Yeah, no, I think the inclusion bit that Doctor Torres and that you were just talking about is huge. I was going to make a big argument about, like, well, the solution is just fucking tax the rich, right? Hell yeah, that's the solution. But I think that there is at least one other big pillar, which is that it needs to be inclusive, and it's not. So I was going to use the quote that they said, I forget the name of their colleague, but: you can't design for. You must design with. And I just. Obviously I moated about it at the time, but I really like it. As a designer myself, this is something that you encounter a lot. Let's take out the sort of inequality, marginalized groups aspect of it and just look at it in terms of designing a product for a company.

259
01:13:39,188 --> 01:14:20,660
Chris: If you're designing for, then you're telling the customer what they need, and you're going to have a bad product. If the customer is telling you what they want, if you're only listening to the customer and you're just going by what the analytics say, and you don't have any vision of your own, I think that's also bad. So that's why it's with. Because it's not for, it's not from, it's with. And that, to me, is what's encapsulated by this inclusiveness thing: if we're going to design a future, it has to be with. It can't be for. The end. I don't know. I don't know. That's it. No, the other point, the other pillar I was going to say, is the tax the rich thing.

260
01:14:21,560 --> 01:15:08,352
Chris: I know that the ending there that we had was a little bit of a downer, at least in terms of how Emile feels about. I did not intend to run, but how they feel about the future. Pessimistic was the word they used, and I understand that. I think when this is the thing that you're doing for a living, and frankly, this is what they were doing for a living before they were anti-TESCREAL, right, they were working on existential risks to humanity. This is sort of their bread and butter. So I understand having that notion of pessimism, and I share it sometimes, for sure. I don't think, like, officially on the podcast, I can't really get behind that, though. And I'll probably talk about this with you on the couch.

261
01:15:08,416 --> 01:15:53,540
Chris: I don't think I can get behind it talking with you on the couch, either. And the reason for that is that, even after having all of this discussion about, like, the bankruptcy of the TESCREAL ideology, the ideology, the bundle of ideologies, is diverse enough and has enough, like, fractures within it and different modes of thinking that I don't think that the issue is with the ideology. And even if the ideology was plainly evil, yeah, we're just going to redo Nazism, you know, the actual kind, like, just plainly evil. Even if it was, it wouldn't necessarily be as much of a problem if it was, like, two wackos in Indiana. And then the SPLC could write an article about how dumb these people are. Right? Nobody cares.

262
01:15:53,720 --> 01:16:28,560
Chris: But if it's the richest, most powerful, influential people in all the world that the world has ever seen, maybe then that's a problem, regardless of whether it's a good or bad ideology, right? So that's just where it just kind of comes down to the boring answer. It's tax the rich. It's this inequality, it's this power imbalance that we have that's the problem. And it's a boring solution, and it always comes back to that same boring thing. But I think oftentimes solutions are that way.

263
01:16:29,900 --> 01:16:35,956
Kayla: Do you have hope that that is something we'll see before some sort of.

264
01:16:35,988 --> 01:16:38,356
Chris: Horrendous collapse? That, I don't know.

265
01:16:38,548 --> 01:17:26,380
Kayla: That's the thing that I can get Doomer about. And I'm not here to denigrate anyone for being a doomer. And I'm not here to denigrate anyone for being optimistic. I think there's validity to both. Personally, I tend to be more optimistic myself, but, you know, I gotta get through the goddamn day. Loser. Shut up. That's the thing that I can get Doomer about, is that I worry that we're not gonna be able to solve some of these quote unquote problems or solve that particular problem by choice in time before there's some, before something bad happens. I hope that we're able to get to a point where we can enact systems and taxation that leads to more income equality, just something better than what we got now.

266
01:17:26,500 --> 01:17:49,774
Kayla: I hope that we have the political will or we'll be able to, I don't know, somehow manifest that political will, particularly in the country of the United States, to have something like taxing the extraordinarily wealthy. I have a hard time believing that will happen before a collapse. And that's what I can get Doomer about.

267
01:17:49,822 --> 01:18:26,214
Chris: Sure. Yeah. It's a bit of a complex question to answer succinctly, and we're already getting long. I guess the short answer is still that I just don't know. But certainly history is full of examples of both. History has examples of, like, oh boy, we fixed that before it got real shitty. You could even point to, I know this is going to sound weird, but, like, 18th century England. That's what Charles Dickens was writing about in A Tale of Two Cities. It was like, oh my God, do you see what's happening in France right now? Don't let that happen here. And they kind of enacted some policies to help the poor, and they actually listened a little bit.

268
01:18:26,262 --> 01:18:54,230
Chris: So there are examples of us being able to release the pressure, and then there are also examples of it, you know, spiraling until something phase-changey happens, something threshold-crossing happens. And I know that's what we're kind of worried about. That's what you're worried about. I don't know. We're definitely in a system that has a feedback loop of, like, money buys you political power and political power entrenches your money. Yeah, that's not good.

269
01:18:55,010 --> 01:18:57,202
Kayla: Why does it always come back to capitalism?

270
01:18:57,346 --> 01:19:01,980
Chris: Capitalism? Oh, yeah. I don't know, man.

271
01:19:02,480 --> 01:19:03,540
Kayla: I have a question.

272
01:19:04,040 --> 01:19:13,480
Chris: But the bottom line is, I think, that if there's an answer to what do we do next. And I'm not smarter than Doctor Torres.

273
01:19:13,560 --> 01:19:15,384
Kayla: And his colleagues what the answer is.

274
01:19:15,432 --> 01:19:24,366
Chris: But if I had to sit here and answer it because I do have to answer it because we're on a podcast. That's my answer. Tax the fucking richest.

275
01:19:24,468 --> 01:19:25,506
Kayla: I have a question for you.

276
01:19:25,618 --> 01:19:26,430
Chris: Okay.

277
01:19:26,890 --> 01:19:28,738
Kayla: Are we doing the criteria?

278
01:19:28,834 --> 01:19:47,610
Chris: Oh yeah, that's right. We talked about this. So, okay, here's the thing. I thought about this. I think we're running long enough on time that we should forgo the criteria. Now, I think that also we're going to be doing episodes, most likely, on both EA and longtermism.

279
01:19:47,690 --> 01:19:48,250
Kayla: Okay.

280
01:19:48,370 --> 01:19:54,242
Chris: You have a couple episodes coming up. So I think let's do individual criteria for those groups.

281
01:19:54,306 --> 01:19:55,578
Kayla: Okay. And not TESCREAL?

282
01:19:55,634 --> 01:20:07,826
Chris: And then at some point in the future, whether it's at, like, the end of one of those episodes, or maybe we do, like, a very minisode where we just, like, sum up all of the TESCREAL stuff, then we do criteria at that point.

283
01:20:08,018 --> 01:20:08,778
Kayla: Okay.

284
01:20:08,914 --> 01:20:10,458
Chris: Because it's getting long.

285
01:20:10,554 --> 01:20:11,710
Kayla: I have a prediction.

286
01:20:12,370 --> 01:20:13,514
Chris: The prediction is yes.

287
01:20:13,602 --> 01:20:15,430
Kayla: Cult. Cult. Cult, cult.

288
01:20:15,730 --> 01:20:36,028
Chris: Okay. Yeah. Done. I mean, I don't know. Across the board, honestly, the case that Doctor Torres laid out at sort of the end of the interview there is probably stronger than we would have done with our criteria anyway. So I don't think we're gonna beat that. Like, what is it, the circles? They have doom circles. Shame circles. They have shame circles.

289
01:20:36,164 --> 01:20:42,684
Kayla: But they were bad. I just wanna go on the record right now and say I love robots. I love the future. I love science.

290
01:20:42,732 --> 01:20:43,716
Chris: I know. I know.

291
01:20:43,748 --> 01:20:55,618
Kayla: I love AI. I love virtual reality. I love, I love. I do not love it being born from the current system that we have. Because then we see robots with guns.

292
01:20:55,754 --> 01:21:00,210
Chris: And Boston Dynamics police dogs. Transhumanism.

293
01:21:00,290 --> 01:21:07,690
Kayla: Leading to, like, genetically selecting for blue eyes only. And I don't like this shit. I like robots. I don't like these robots.

294
01:21:07,770 --> 01:21:11,082
Chris: So that's called solarpunk, Kayla. So you have to be behind solarpunk.

295
01:21:11,106 --> 01:21:12,386
Kayla: It's called hope punk, baby.

296
01:21:12,458 --> 01:21:13,114
Chris: Hope punk.

297
01:21:13,162 --> 01:21:13,346
Kayla: Yeah.

298
01:21:13,378 --> 01:21:14,106
Chris: Did you just make that up?

299
01:21:14,138 --> 01:21:15,576
Kayla: No, there's problems with it too.

300
01:21:15,698 --> 01:21:16,516
Chris: Hope punk.

301
01:21:16,588 --> 01:21:17,164
Kayla: Yeah.

302
01:21:17,292 --> 01:21:19,932
Chris: I hate that. Because of the alliteration. That's not even alliteration. What is that?

303
01:21:19,956 --> 01:21:21,252
Kayla: It's like you don't like that glottal stop?

304
01:21:21,316 --> 01:21:27,092
Chris: The glottal stop? Is that what they're calling Solarpunk now? Or is it like a different subgenre?

305
01:21:27,116 --> 01:21:27,908
Kayla: It's a different subgenre.

306
01:21:27,964 --> 01:21:35,348
Chris: Oh, well, let's talk about that off show. Yeah, let's go talk about that on the Cult or Just Weird Discord. Links in the.

307
01:21:35,364 --> 01:21:37,644
Kayla: Show notes and then give us money on Patreon.

308
01:21:37,692 --> 01:21:38,876
Chris: Give us money on Patreon. Please.

309
01:21:38,908 --> 01:21:40,476
Kayla: Patreon.com, Cult or Just Weird.

310
01:21:40,508 --> 01:21:47,232
Chris: Unless you're a VC, in which case, let's talk about my routing number for my bank account. This is Kayla, and this is Chris.

311
01:21:47,336 --> 01:21:48,248
Kayla: And this is Ben.

312
01:21:48,344 --> 01:22:13,590
Chris: Cult or just TESCREAL.


Dr. Émile P. Torres

Author / Eschatology expert

Émile P. Torres is a philosopher and historian whose work focuses on existential threats to civilization and humanity. They have published on a wide range of topics, including machine superintelligence, emerging technologies, and religious eschatology, as well as the history and ethics of human extinction. Their most recent book is Human Extinction: A History of the Science and Ethics of Annihilation (Routledge).