Transcript
1
00:00:00,480 --> 00:00:29,624
Chris: I wonder that myself, and I kind of want to follow up with Doctor Torres about that. Is the position of these longtermists that creating people at all is good, right? Or that creating people is good, but they also have to be happy? In the stack rank of value in the universe, where is happy person, rock, and unhappy person? Right. Clearly happy person is on top.
2
00:00:29,712 --> 00:00:30,296
Kayla: Right.
3
00:00:30,448 --> 00:00:35,944
Chris: But is rock better than unhappy person or is unhappy person better than rock? I think unhappy person beats rock.
4
00:00:36,032 --> 00:00:37,528
Kayla: Then it just makes me wonder.
5
00:00:37,664 --> 00:01:44,480
Chris: Which beats scissors. Welcome to Cult or Just Weird. What you are about to hear is part two of a three-part interview on transhumanism, its offshoots, and its social impact, especially in Silicon Valley. If you'd like to hear part one of this interview, it's in the previous episode. Now, without further ado, here's the second part of my chat with Doctor Emile Torres. I just get really interested in the. Honestly, I really should read Survival of the Richest and possibly have him on the show. But the other part of that is the mindset of, isn't it a good thing that the people surviving are all of these high, productive, high, what would be the word? High talent, high, whatever, the transhumanists.
6
00:01:44,560 --> 00:01:59,730
Chris: Because part of the bundle here is we only want the super people to reproduce because that's how we're going to bring about this utopia. Not to get ahead in my questions about eugenics, but that's part of it, right?
7
00:02:00,470 --> 00:02:35,802
Emile: Yeah. So I think there are a lot of transhumanists who would say that the explicit goal is to ensure that radical enhancement technologies are distributed in an egalitarian manner so that they're widely accessible. That sounds nice, but in practice, is that going to happen? No. Actually, you know, I'm not a big fan of Harari's work, his book Homo Deus, "Human the God." But there is a section titled "Upgrading Inequality," and he makes a good argument.
8
00:02:35,826 --> 00:02:37,050
Chris: Wow, that's such a good.
9
00:02:37,170 --> 00:02:54,242
Emile: Yeah, yeah, absolutely. And he makes a good argument that with the twentieth-century biomedical advances and technologies, the socioeconomic elite had a reason to want them to be widely distributed.
10
00:02:54,386 --> 00:02:54,690
Chris: Right?
11
00:02:54,730 --> 00:03:35,680
Emile: Like, if you want to keep the engines of the economy roaring, you know, you need a healthy population. So vaccines should be made widely available and so on. But with these radical enhancement technologies, assuming that it's even possible to enhance human beings in some kind of radical way, and I'm actually very skeptical about that, I think enhancement is going to be way harder than a lot of people assume. But just bracketing that for a moment: if it is possible, the socioeconomic elite are going to have a very strong reason to prevent it from being widely available, because, at least in theory, it will level the playing field of power. If there's one thing that power wants, it's to hold on to power.
12
00:03:36,020 --> 00:04:04,230
Emile: And so by leveling the playing field, people in power lose their positions atop the hierarchy of power in our society. So it's like in practice, what is going to happen? Well, they want themselves and their class to remain dominant. So this sort of ties into the eugenics issue that you're pointing out. And I do think a lot of people who are building bunkers see themselves as elite in the sense of being better.
13
00:04:04,780 --> 00:04:20,120
Chris: Yeah. It's not just that they have more power, it's that they deserve the power. Right. Is that they deserve that. And so it's good, you know, they wouldn't think of it as, well, I'm holding on to power. They would think of it as, you know, we deserve to get here, so we should hold on to our genetic superiority. Although they probably wouldn't say those words.
14
00:04:20,420 --> 00:04:22,732
Emile: I mean, they might even.
15
00:04:22,796 --> 00:04:23,920
Chris: Yeah, they might even.
16
00:04:24,660 --> 00:05:04,006
Emile: Yeah. So I think you're absolutely right. I mean, the reason they are in the position that they're in is, precisely, because they're better. Which is just false. I mean, there are tons of studies showing that luck, you know, catching a lucky break and so on, is a major determining factor when it comes to people becoming successful. You know, it reminds me a bit of Sam Bankman-Fried and his brother, Gabe Bankman-Fried, both of whom were in this tradition. Sam Bankman-Fried was probably the most prominent effective altruist and longtermist in the world. Maybe he still is at this point.
17
00:05:04,038 --> 00:05:44,078
Emile: But his brother Gabe exchanged some emails with, I can't remember exactly, but perhaps individuals from Open Philanthropy, which is another one of these TESCREAL organizations, people in the community, and they talked about buying the island nation of Nauru in the Pacific Ocean. And the explicit plan was to build a bunker on Nauru to house effective altruists, so that if there is a global catastrophe in which, as they put it in their emails, 99.9% of humanity dies out, then EAs will be able to repopulate the world. And so there is this sense of.
18
00:05:44,094 --> 00:05:46,806
Chris: Like, well, EAs, they're the right ones to do it.
19
00:05:46,838 --> 00:05:59,860
Emile: Yeah, they're the right ones to do it. They think of themselves as unusually rational and intelligent. They have high IQs and so on. So who better to repopulate the planet than these superior individuals?
20
00:06:00,520 --> 00:06:30,160
Chris: The fallacies abound, too. Like, I don't even know if we'll have time to get into the IQ stuff here. I don't even know where to go next. Actually, the next question I have written is probably a pretty good one, because we're sort of talking about this already. But why is it important for us to, like. I'm sitting here laughing, right? Like, oh, these crazy people. But, like, also, it's kind of scary, right? Why is it important that we keep an eye on this whole TESCREAL thing? Why should our listeners feel that this is an important thing to know about?
21
00:06:30,660 --> 00:07:09,420
Emile: Right. So I think the world is full of all sorts of bizarre and potentially dangerous ideologies, but focusing on most of them, critiquing them and so on, is just not worth it. Why? Because a lot of these ideologies don't have much influence in the world. TESCREALism is different. These ideologies are absolutely widespread in Silicon Valley. I mean, I would describe them as the water that these people swim in. It's just the air they breathe.
22
00:07:09,720 --> 00:07:11,584
Chris: It's that influential there?
23
00:07:11,752 --> 00:08:22,566
Emile: Yes, absolutely. Pretty much everybody's a transhumanist. Effective altruism is massively influential. I mean, there's a New York Times article, which I believe was a profile of Sam Altman, that explicitly says the fingerprints of EA are all over every single one of the major AI companies. So it's absolutely widespread. It's driving, motivating a lot of the projects that people in the tech industry are working on. And I would also say that the AI companies, like DeepMind, OpenAI, Anthropic, and more recently xAI, founded by Elon Musk, are producing products that are shaping our world already in pretty significant ways. GPT, Bard, the other LLM, large language model based systems. Yeah, they're transforming our world right now. So consequently, the impact of these companies is quite significant. Well, once again, where did these companies come from? They emerged out of the TESCREAL movement.
24
00:08:22,678 --> 00:09:22,944
Emile: One of the arguments that Gebru and I make is that if not for the TESCREAL movement, these companies would not exist, and we almost certainly would not be talking about AGI right now. Interesting. The significance of the TESCREAL bundle, its impact on the world, has already been quite profound. And so that is why I think it's really important for people to understand what these ideologies are, what their vision of the future is, what their various value commitments are, and how they are infiltrating major governments, not just Silicon Valley. They're shaping policy in the US. There's a leading figure named Paul Christiano who has a history with EA and longtermism and rationalism that goes back more than a decade. He was just hired as the leader of the AI safety group. I'm forgetting the exact agency, but I.
25
00:09:22,952 --> 00:09:25,728
Chris: Think it was Homeland Security, actually, which is a little weird.
26
00:09:25,864 --> 00:10:01,492
Emile: I think that's right, yeah. So he now has this massively influential, powerful position within the US government. Foreign policy circles in general, and the United Nations in particular, are beginning to embrace longtermism. There's lots more I could say about this. Elon Musk says longtermism is a close match for his philosophy. Sam Altman has a long history with EA, rationalism, longtermism, and so on. So these ideologies just have enormous impact and power, and that is why studying them is so important.
27
00:10:01,646 --> 00:10:20,940
Chris: Yeah. Once you sort of answer the question of, like, all of the elite people and governing organizations are extremely influenced by it, that kind of answers the question of why the rest of us also need to pay attention to it. Why do you think it's gained so much traction with the Silicon Valley ultra wealthy crowd?
28
00:10:21,680 --> 00:11:23,630
Emile: I think there are two reasons. One has to do with how much money that community has. So the effective altruist community, for example, which is basically the sibling of the rationalist community, and there's tremendous overlap. You know, a lot of people who consider themselves rationalists are also EAs and vice versa. So before the collapse of FTX, which of course was run by Sam Bankman-Fried, they had $46.1 billion in committed funding. Just an absolutely mind-boggling amount of money. They lost many billions when FTX went under, but they still had tens of billions in committed funding. So some of that money comes from Dustin Moskovitz, the co-founder of Facebook. Jaan Tallinn, co-founder of Skype. Vitalik Buterin is another crypto guy who funds a lot of these.
29
00:11:24,690 --> 00:11:26,310
Chris: Ethereum inventor. Yeah.
30
00:11:26,370 --> 00:12:16,560
Emile: Yep. He just gave 660 million to the Future of Life Institute, which is another one of these TESCREAL organizations. And one of the reasons that the community has ended up getting so much money is because they have promoted this idea of earn to give. And this is the idea that the best way you can do the most good in the world, or benefit future generations hundreds or millions or billions or trillions of years from now, is to go and pursue the maximally lucrative jobs, go work on Wall Street or work for a petrochemical company if you have to, and then donate all of your disposable income back into the community. And so that's exactly what Bankman-Fried did. I mean, he was the great success story of earn to give.
31
00:12:17,220 --> 00:12:20,900
Chris: That's the backbone of EA, is that notion.
32
00:12:21,200 --> 00:12:32,768
Emile: Yeah. So they backed off of it just a little bit in recent years. And also, I think Bankman-Fried sort of. It was a bit of a PR nightmare for earn to give.
33
00:12:32,944 --> 00:12:39,064
Chris: Yeah, yeah, I know. It was like a big disaster. But in some ways I'm like, maybe it's good that happened.
34
00:12:39,232 --> 00:13:02,490
Emile: Yeah, I know. I mean, obviously there were a lot of people who were seriously hurt. That is unequivocally very bad. But in terms of the influence of EA and longtermism, it might have been good, because it sort of exposed, I think, a certain kind of moral bankruptcy that lies at the heart of these communities.
35
00:13:02,830 --> 00:13:11,292
Chris: You have to have the financial bankruptcy or else nobody cares about the moral bankruptcy. Right. I mean, that just goes back to the profit motive thing you were talking about earlier.
36
00:13:11,446 --> 00:14:03,050
Emile: Yeah. On the one hand, they have these homegrown billionaires. On the other hand, I think another main reason that these ideologies have become so pervasive in Silicon Valley and been able to infiltrate major world governments, like the UK government, the United Nations, and so on, is because they have a natural appeal to tech billionaires, who then are willing to donate large amounts of money to the community. Because one of the key ideas of longtermism is that the future could contain so many people spread throughout the universe, almost all of whom would be living in computer simulations. But assuming that they'd be on average happy, then it's just a numbers game.
37
00:14:03,090 --> 00:14:36,766
Emile: Simply by virtue of how many there could be, we should prioritize ensuring that they have good lives and that they exist in the first place over contemporary problems that are non-existential in nature. So existential risks are those that would prevent this "vast and glorious" future from being realized, and I'm quoting Toby Ord, the leading EA longtermist. And non-existential risks are the ones that wouldn't threaten this vast, glorious future. So that would be like global poverty or factory farming, Social Security. Probably depends.
38
00:14:36,798 --> 00:14:48,730
Chris: On who you ask too, I think. Don't some even say global warming is not actually that big of a deal in the long term? So even something like that, unless it threatens the project, right?
39
00:14:49,070 --> 00:15:09,530
Emile: Absolutely. That's the key idea of an existential risk. It's something that threatens the project. And so the implication of that is that longtermism is telling these tech billionaires that they're not only morally excused from neglecting the plight of the global poor, but that they're morally better people for it.
40
00:15:10,430 --> 00:15:16,246
Chris: If there's one thing that they love, it is being told how awesome they are, right?
41
00:15:16,358 --> 00:15:46,302
Emile: Yes. So you don't want to give money to solve global hunger. Instead, you're focused on your company, Neuralink, trying to merge our brains with AI, and your company, SpaceX, trying to get us to Mars, which is the stepping stone to the rest of the galaxy, which is a stepping stone to the rest of the universe. Okay, that's what you're into, Elon Musk. Well, actually, that's completely fine. And in fact, what you're doing is exactly what this ethical framework prescribes.
42
00:15:46,406 --> 00:16:13,070
Emile: You are doing more good by focusing on space and the transhumanist project, merging our brains with AI, than if you were just pouring your money into trying to help the global poor, many of whom are poor precisely because of, one could argue at least, the capitalist system that enabled people like Elon Musk to become so massively wealthy, to make, last I checked, roughly $450,000 per hour.
43
00:16:13,850 --> 00:16:17,590
Chris: So, yes, that is a lot of burritos.
44
00:16:19,890 --> 00:16:25,550
Emile: I would love to make that in one year. I'd just retire and live off that for the rest of my life, given how frugal I am.
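For scale, here is the back-of-the-envelope arithmetic behind that exchange, using the roughly $450,000-per-hour figure Torres quotes just above (his off-hand estimate, not a number verified here):

\[
\$450{,}000/\text{hour} \times 24\ \text{hours/day} \times 365\ \text{days/year} \approx \$3.9\ \text{billion/year}
\]

That is the contrast the joke turns on: Torres is saying he could happily retire on $450,000 as a single year's income.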
45
00:16:28,680 --> 00:17:23,839
Chris: Yeah, that makes a lot of sense regarding how the ideology both influenced folks that are homegrown billionaires, but then also provides this delicious framework of not just excusing them, but also patting them on the back for being the Übermensches that they so clearly are. Yeah, it's so weird to me. The longtermism philosophy in particular starts off with this sort of, like, we want to think in the long term and, like, prevent existential threats to humanity. I'm like, oh, yeah, I can be on board with that. And then they're like, in the year 3077, there will be quadrillions of people living on the planet Zebes, and I'm like, hold on, like, what? Can we back up a second? Like, it's just the extrapolation and the shut-up-and-multiply concept, where it's like this utilitarianism on multiple doses of steroids.
46
00:17:24,459 --> 00:17:42,959
Chris: And then you take that again. You take it and extrapolate it into this future that people can't even predict. We can't predict tomorrow, much less ten, much less a thousand years from now. But it's being used, at least as an excuse, as a basis for decision making. It just blows my mind, I guess. There's not really a question there.
47
00:17:43,699 --> 00:18:43,986
Emile: Yeah, you're absolutely right. You know, I would distinguish between long-term thinking and longtermism. Longtermism goes way beyond long-term thinking. One of my criticisms of our society is precisely its short-termism, its myopia, which is sort of built into our institutions, built into quarterly reports and four- or six-year election cycles, where politicians just really can't be focused on generations 100 years from now and expect to get elected again. So I think that is an enormous problem, especially as we are confronting these transgenerational catastrophic scenarios like climate change. What we're doing right now, what we've done the past century, will determine the livability of our planet for the next 10,000 years.
48
00:18:44,098 --> 00:19:14,450
Emile: It could be the case that there's some magic technology we invent that is able to, at scale, extract carbon dioxide from the atmosphere. It's possible we invent something like that and the whole climate change problem is fixed, you know, in a couple decades. Fingers crossed. But nobody should count on that. I mean, the IPCC basically does count on that for a lot of its projections. But, I mean, for the same reason that you shouldn't start smoking now, thinking, well, when I get cancer in 20 years, they'll have a cure. I don't know. Maybe.
49
00:19:15,230 --> 00:19:16,170
Chris: Right, right.
50
00:19:16,550 --> 00:19:20,370
Emile: That's just not judicious. That's not nice.
51
00:19:22,150 --> 00:19:22,970
Chris: Reckless.
52
00:19:23,270 --> 00:20:07,930
Emile: Yeah. And so all of this is to say that, okay, if we are realistic and acknowledge that we might not have these technologies that save us from the climate catastrophe and biodiversity loss, which doesn't get enough attention because it's also absolutely catastrophic. All of this is to say that these are long-term problems, and we desperately need long-term thinking to solve them. Longtermism. I think it's such a huge shame that, at exactly the moment when we most need long-term thinking, longtermism swooped in with this word, which sounds very nice. Longtermism. It's a great word. I wish they hadn't gotten it.
53
00:20:08,510 --> 00:20:13,150
Chris: The audience can't see me right now. I'm just shaking my head. I'm like, oh, my God. Yeah, you're right.
54
00:20:13,310 --> 00:20:48,786
Emile: Yeah. And at exactly the moment they say, oh, actually, we are the long-term thinkers. Well, you look more closely and they're focused on literally billions, trillions of years from now, with the goal of realizing a particular kind of techno-utopian world among the stars, full of literally astronomical numbers of digital people. Ten to the fifty-eighth is one of the estimates from Nick Bostrom, a leading figure in the TESCREAL movement. Ten to the fifty-eighth: a one followed by 58 zeros. That's how many digital people there could be in the future.
55
00:20:48,938 --> 00:20:50,070
Chris: Sounds crowded.
56
00:20:50,980 --> 00:20:52,228
Emile: Well, the universe is big.
57
00:20:52,324 --> 00:20:53,800
Chris: That's true. That's a good point.
58
00:20:56,220 --> 00:21:43,512
Emile: Yeah. So it's really unfortunate that they have, to some extent, sort of monopolized the conversation. They grabbed the microphone, and they positioned themselves as having the answer to short-termism. When I would say that longtermism is an extreme philosophy built, in many ways, on utilitarian ethics, as you had alluded to earlier. And a lot of the claims that they make and present as being intuitive, like: future people matter, there could be a lot of them, and there's something you could do to influence them. That sounds very intuitive. Actually, if you look at the details, it's extremely counterintuitive. I mean, they would say that, you know, there are.
59
00:21:43,656 --> 00:22:25,910
Emile: When they say future people matter, and I had sort of gestured at this earlier, they're not just saying that it matters that their lives are good, but that it matters that they exist in the first place. Because on their view, there are two ways to benefit someone. There are ordinary benefits. This is what philosophers call ordinary benefits. This would be like holding the door for someone or giving somebody some money if they need it. All of the normal ways that you might help someone in the world. And then there are these so-called existential benefits. An existential benefit is where you bring someone into existence on the assumption that they'll have a good life. So if they do actually have a good life, then you have benefited them by creating them in the first place.
60
00:22:26,530 --> 00:23:17,426
Emile: And even more, on the utilitarian view, they would say that there are two ways to make the world in general a better place. On the one hand, you can increase the happiness of the people who currently exist. Okay, that makes sense. But if your goal is to maximize the total amount of happiness in the universe, there's a second possibility: you just create new, happy people. So utilitarianism, on this particular interpretation, the one that is influential within longtermism, says we should, in order to maximize value, create the largest population possible. And so these are some of the really counterintuitive implications. Like, how can you make the world better? Well, I don't know. I could go and help the individual who's unhoused down the street. I could give them money, get them an apartment, put them in a position to get a job, and so on.
61
00:23:17,538 --> 00:23:36,258
Emile: Alternatively, I could just have a kid, and maybe that's a better way to improve the world than helping an actual, living, breathing person who can bleed and who is currently suffering down the street. So, very counterintuitive, even though they present it sneakily as a deeply intuitive position.
62
00:23:36,394 --> 00:24:03,342
Chris: Yeah. It just doesn't seem to stand up to logic. Right? How is making a new person exist either beneficial or not beneficial? It's simply making another person exist. They didn't have any choice in the matter, and now they're here. But before that, it seems that there's no definitive answer. Like, you cannot answer the question logically. Like, why would it be a benefit to have more people?
63
00:24:03,526 --> 00:24:17,886
Emile: Yeah. So, you know, ultimately, a lot of moral philosophy does come down to just intuitions. And, you know, this just seems right to me. This doesn't seem right to me.
64
00:24:17,918 --> 00:24:18,690
Chris: Right, right.
65
00:24:19,040 --> 00:25:12,378
Emile: And when you start to think about ethics as a branch of economics, you accept this fundamental postulate: the more value there is in the universe, which you could understand as happiness, whatever that is exactly, or pleasure or something, the better the universe as a whole becomes. So if you accept that's the case, then it makes perfectly good sense to say, oh, we need to create more people, you know, because people are the containers or the vessels of value. So if somebody has a net positive amount of value, then that person, that vessel, increases the total amount of value in the universe, thereby making the universe itself, as a whole, better. So, I mean, utilitarianism and capitalism emerged around the exact same time, and I don't think that's a coincidence.
66
00:25:12,434 --> 00:25:43,540
Emile: It's a very quantitative way of thinking about ethics or the economy. It's all about maximization. Maximize value. Again, well, okay, we can make people happier, or we can just make new happy people. But, yeah, these are very strange ideas that really are the foundation for a lot of, in particular, the longtermist vision of the future. And, yeah, I think they're really philosophically problematic. The longtermists don't let on just how controversial these claims are.
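A minimal sketch of the "total view" arithmetic Torres is describing, with invented welfare numbers purely for illustration (none of this is from the interview). Compare world A, three people each at welfare 8, with world B, the same three people plus one new person at welfare 2:

\[
U_{\text{total}}(A) = 8 + 8 + 8 = 24, \qquad U_{\text{total}}(B) = 8 + 8 + 8 + 2 = 26
\]
\[
U_{\text{avg}}(A) = 24/3 = 8, \qquad U_{\text{avg}}(B) = 26/4 = 6.5
\]

The total view prefers B (26 > 24), so creating the extra, barely happy person counts as making the universe better even though the average falls; an average view prefers A, which is closer to the intuition Chris raises later that what matters is making the people who already exist happier.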
67
00:25:45,480 --> 00:25:59,384
Chris: So, Kayla, we gotta get to making those babies. We gotta stop killing the babies that we make. I am so keeping that.
68
00:25:59,552 --> 00:26:06,680
Kayla: Please don't. I don't even think that I've had, like, seven abortions. I would like you to do it differently this time.
69
00:26:06,720 --> 00:26:09,000
Chris: There needs to be more people. I swear to God, I'm keeping that.
70
00:26:09,040 --> 00:26:09,560
Kayla: Please stop.
71
00:26:09,600 --> 00:26:11,316
Chris: There's more people. There's more people.
72
00:26:11,388 --> 00:26:12,036
Kayla: Then do it differently.
73
00:26:12,068 --> 00:27:00,326
Chris: You gotta make more people. Okay, look, I said this to you while we were listening, but what. What the fuck? Like, I just can't fathom the idea that somebody thinks that there is moral value in, and I get it, I know there are people that do think this, but that there's moral value in, like, transforming inanimate matter into people. Like, turn rock into person, and therefore I did good. Like, I guess I'm maybe still mathematical about it, but, like, it seems like the average happiness is what's important. Like, making the people that exist more happy. Totally. Why is it not just that? Like, I just don't understand why "add more person" equals "more happy."
74
00:27:00,438 --> 00:27:21,382
Kayla: Just, I have a really hard time believing that this is a good-faith description of why they have these beliefs. I have a really hard time believing that, like, why Elon Musk makes $450,000 an hour, which I can't even talk about.
75
00:27:21,406 --> 00:27:24,518
Chris: Don't spend it all in one place, Elon.
76
00:27:24,694 --> 00:27:53,778
Kayla: And chooses to spend that money on, quote unquote, "solving" problems by not making the hyperloop and not going to Mars and not doing good AI. And not, and not. Like, he has so many companies that have these grandiose ideas, but truly, in practice, have really just served to, what's the good word?
77
00:27:53,914 --> 00:27:54,786
Chris: And bigify.
78
00:27:54,938 --> 00:27:57,514
Kayla: Add to his wealth, his personal wealth.
79
00:27:57,602 --> 00:27:59,470
Chris: Aggrandize his wealth.
80
00:27:59,870 --> 00:28:42,032
Kayla: The weird thing about him, I guess, like, side note, one of the weird things about him, about talking about him: like, I don't believe that he actually believes this. I do not believe that Elon Musk actually believes this. I think that it is more likely in my head, and this could be a cope, totally open to that possibility, it's more likely that, like I said to you off podcast, this feels like a prosperity gospel thing. This feels like Joel Osteen saying, what God wants is for me to be rich. That's what I feel like with Elon Musk, where he's saying, what we got to do is we got to add to my wealth. We got to add to the wealth of the wealthy so that we can make more happy people in the future. The future people.
81
00:28:42,136 --> 00:28:46,528
Kayla: I don't think Elon Musk is thinking about future people. I think he's thinking about his $450,000 an hour.
82
00:28:46,624 --> 00:28:46,880
Chris: Right.
83
00:28:46,920 --> 00:28:54,430
Kayla: The weird thing about him. Sorry, before you respond, the weird thing about him is how many fucking kids he has and how he's obsessed with having kids.
84
00:28:54,510 --> 00:28:55,254
Chris: He's doing it, man.
85
00:28:55,302 --> 00:28:57,302
Kayla: Cuz, do you know how many kids this man has?
86
00:28:57,446 --> 00:28:59,318
Chris: I don't know. Is it like six or something?
87
00:28:59,374 --> 00:29:02,110
Kayla: No, he had five with his first wife.
88
00:29:02,190 --> 00:29:02,862
Chris: Holy shit.
89
00:29:02,926 --> 00:29:17,606
Kayla: Five children. He's got two or three with Grimes. And then like, two or three more, I think, God damn. Maybe one or two more. But he's got multiple from his first marriage, multiple from his second, others with, like, his employees.
90
00:29:17,718 --> 00:29:33,890
Chris: This makes sense, why he identifies as a longtermist. Because clearly he wants to just have more babies. Actually, this is a perfect segue, 'cause I just wanted to disembark. Piggy, wait here. So we've been talking about TESCREAL these last two episodes, and on the next episode we will also be talking about. Actually, we'll just be talking about TESCREAL for the rest of the season.
91
00:29:33,970 --> 00:29:35,114
Kayla: Rest of my life, the rest of.
92
00:29:35,122 --> 00:29:35,858
Chris: My whole fucking life.
93
00:29:35,874 --> 00:29:41,298
Kayla: The rest of my short life. Because they're not trying to solve the problem of my extinction. They're simply thinking about.
94
00:29:41,314 --> 00:29:42,514
Chris: They're just thinking about how they can.
95
00:29:42,562 --> 00:29:47,210
Kayla: To the nth power of people in 400 million years, which aren't going to exist because we're all going to die.
96
00:29:47,250 --> 00:30:12,070
Chris: And so that's the L part of TESCREAL, specifically what you were just talking about. Take the big L. Actually, I think, honestly, like, of all of the letters in TESCREAL, L is the L-est. L is the L of TESCREAL. I think there's a lot to be said for transhumanism. There's some to be said for extropianism. I think I once identified as a singularitarian, and I think there's some cool stuff there.
97
00:30:12,190 --> 00:30:12,638
Kayla: I agree.
98
00:30:12,694 --> 00:30:49,014
Chris: Blah, blah. Cosmism, space travel, okay. Like, effective altruism, at least the people that are into being efficient about it and not earn-to-give, great. Longtermism is weird. Like, longtermism is like, what the fuck? This whole concept that, I know I said this in the interview, but we're basing our decisions off of an assumption that we can predict what's going to happen a thousand years from now. And we assume that there's going to be millions, not even millions. What did you say? Ten to the 58th or something?
99
00:30:49,062 --> 00:30:49,550
Kayla: Ten to the nth.
100
00:30:49,590 --> 00:30:53,808
Chris: Power, like an insane number of people in the future is just, what if.
101
00:30:53,824 --> 00:31:02,544
Kayla: We make all these people and they're all miserable? Did we fail? Why is there this assumption that if we have more people, there'll be more happy people?
102
00:31:02,712 --> 00:31:40,514
Chris: That's a good question. I wonder that myself, and I kind of want to follow up with Doctor Torres about that. Is the position of these longtermists that creating people at all is good, right? Or that creating people is good, but they also have to be happy? In the stack rank of value in the universe, where is happy person, rock, and unhappy person? Right. Clearly happy person is on top. But is rock better than unhappy person, or is unhappy person better than rock? I think unhappy person beats rock.
103
00:31:40,602 --> 00:32:13,280
Kayla: Then it just makes me wonder, which beats scissors. This is me extrapolating, and this is not anything that was said in your interview with Doctor Torres. This is not something I read somewhere, but I just have to. I can't help but wonder, with how much this butts up against the eugenics and the Übermensches and the whatever of the world: are these people, especially the quote unquote elites, saying, well, you know, we should be having as many kids as we can because, like, we can give them happy lives? Like, I don't really care about what, like, the poors are doing.
104
00:32:13,400 --> 00:32:13,952
Chris: I think there's.
105
00:32:13,976 --> 00:32:17,392
Kayla: That the rich should be having lots of kids because the riches can give them.
106
00:32:17,456 --> 00:33:08,382
Chris: It's kind of both. So it's kind of like, I think that there's, like, this general sense that, you know, in, like, an abstract, moral way, it's good for there to be more people, however that comes about. But then isn't there also a eugenics component to it of: and also, actually, the best people to reproduce would be us? And I think this comes up later in this interview, if I recall our talk correctly. But that's part of why they're not super concerned about half-existential risks. Their only concern is actually wiping everybody out. Down to zero, they call that. Either wiping everybody down to zero or erasing civilization enough that it can't climb back. That's called the rungless ladder. I think we've talked about that a little bit.
107
00:33:08,406 --> 00:33:52,310
Chris: The idea is that as you climb the ladder of civilization and technology, do the rungs stay behind, below you, or not? Because if they don't and you fall down, you might not be able to get back up. So it's what we were saying. It's about whether it threatens the project or not, right? So if we're talking about, like, a light existential threat, such as, they would say, maybe global warming or global poverty or some other sort of civilizational collapse, that's actually not that bad, because the billionaires are going to survive in their bunkers, and then they will repopulate, and then we'll be better off anyway, because it'll be all of the effective altruists that repopulated, and then we'll have our totally-not-eugenics-by-the-way thing.
108
00:33:53,370 --> 00:34:11,130
Kayla: It also just, like, Elon's got a bee in his bonnet about declining birth rates. That's the thing that he talks about. And the only reason we talk so much about fucking Elon Musk is because he is so front-facing with his beliefs on a goddamn day-to-day. 'Cause he tweets all the fucking time, right?
109
00:34:11,170 --> 00:34:18,810
Chris: There are dozens and hundreds of Elons in Elon-land, in Silicon Valley, that just don't tweet every day.
110
00:34:18,889 --> 00:34:30,110
Kayla: But then I'm just like, bro, why are you trying to kill trains with the hyperloop and not, like, trying to figure out how to get the microplastics out of our balls?
111
00:34:31,210 --> 00:34:32,737
Chris: I like the microplastics in my balls.
112
00:34:32,754 --> 00:34:38,126
Kayla: Well, but supposedly the microplastics in the balls might be contributing to why we're having declining, like, fertility rates.
113
00:34:38,158 --> 00:34:40,158
Chris: Yeah, no, I remember that. I remember that coming out of school.
114
00:34:40,174 --> 00:35:06,364
Kayla: I don't know if that's true, but. But either way. There are. I just, like, I get SpaceX. I get the, like, okay, we've got to go to space so that we can, like, colonize the universe. That makes sense with the longtermist philosophy. I get that. What else is he doing? I don't get how anything else fits, in terms of self-driving cars, longtermism, except for, like. Yeah, nothing else really goes along with that. Like, why? I guess Neuralink.
115
00:35:06,412 --> 00:35:08,480
Chris: Yeah, Neuralink does. AI does.
116
00:35:08,900 --> 00:35:29,220
Kayla: But, like, there are so many other things that could be invested in when you have the access and the abundance that he does, and what he does invest in does not address any of these issues. And it just makes me go, like, why is he doing this? Oh, it's because he invests in Hyperloop, because then he can kill trains and maximize profit for Tesla.
117
00:35:29,260 --> 00:36:20,952
Chris: Okay, so I think that putting a magnifying glass on the whole, like, well, what about declining birth rates, why aren't they trying to fix that with XYZ programs? If you look at that, it really highlights what they really care about. Right? Because they say they care about this future with more humans that are doing more things in more galaxies or whatever, but only insofar as it doesn't conflict with them holding on to their power and wealth. Right? 'Cause the easy answer to that, and there have been studies on this, and I've read articles about it: a big contributing factor to the declining birth rate is income inequality, right? People have kids when they feel like they have the resources to have kids, and they don't when they don't. That's an average statement, right? It's not 100%.
118
00:36:21,016 --> 00:36:22,920
Kayla: If you can't afford a kid, you can't have a kid, right?
119
00:36:22,960 --> 00:36:29,422
Chris: People are less likely to get pregnant if they feel like they cannot afford the kid. Like, pretty simple stuff. Particularly countries that.
120
00:36:29,446 --> 00:36:31,718
Kayla: Have, like, great access to birth control, et cetera.
121
00:36:31,774 --> 00:36:51,702
Chris: That's the thing that they don't do, right? The thing that they don't do is, like, okay, let's work on this income inequality problem, because that will demonstrably solve this, like, declining birth rate thing. But they won't do that. So I think that is, like, a really good bellwether for, like, what's actually going on in their heads, anyway.
122
00:36:51,846 --> 00:37:36,022
Kayla: I also wonder if there's a sense of, like, well, we don't really need to solve those problems because, like, people usually figure it out. Like, folks who are not extraordinarily wealthy make up most people, right? Like, the vast majority of humans on the planet are people who do not have access to much wealth at all. And yet we continue to have children. We continue to figure out ways to survive. Not everybody. Lots of people die early. Lots of people die due to poverty. Lots of people die due to malnutrition. And yet, steadily, our population globally has increased. And I wonder if these longtermists of the world kind of feel like, I don't really need to figure that out, because that gets figured out.
123
00:37:36,046 --> 00:37:37,038
Chris: People gonna fuck.
124
00:37:37,174 --> 00:37:37,814
Kayla: Yeah.
125
00:37:37,942 --> 00:38:35,992
Chris: Look, I don't really have a great answer to that, other than I do know that there are definitely governments in the world for which this is a huge problem. Thinking of Japan specifically: their population is aging, their birth rate is declining. I think they're having a similar problem in Korea, and that is a huge socioeconomic problem for them. Are the Silicon Valley elites thinking about this or dismissing it? I don't know. But I do know that there are people in some forms of power that do care about it a lot. I think, honestly, I've talked so much about. Clearly, I'm anti-longtermist here. I think that's the easiest one for me to punt out of my belief orbit. Maybe I'm a medium-termist. I don't know. Because Emile makes a good point. We do think too short-term.
126
00:38:36,096 --> 00:39:03,468
Chris: There's a lot of stuff with our current systems that makes it hard for us to tackle long-term issues, such as climate change. But, like, a kajillion people living in a simulation in the Matrix on planet X-532? That's not it. That's not long-term thinking. So, yeah, I don't know. I guess I agree with him about the whole, like, man, what a shame that they stole that word.
127
00:39:03,644 --> 00:39:10,568
Kayla: It does feel like the better long-term solution is to solve the issues of the now so that we have a long term.
128
00:39:10,764 --> 00:39:19,500
Chris: Right. But again, a, if it doesn't threaten the project, they don't care, and b, if it winnows down the population to just the billionaires, then maybe that's good.
129
00:39:20,280 --> 00:39:22,640
Kayla: But then who's gonna do the work that makes them wealthy?
130
00:39:22,720 --> 00:39:23,704
Chris: They're not smart, Kayla.
131
00:39:23,752 --> 00:39:26,736
Kayla: How do they extract wealth from no one?
132
00:39:26,888 --> 00:39:38,724
Chris: They're not smart. Like, that's why you have people saying, like, how do I incentivize my security guard to not kill me if there's no money? Like, they literally don't understand economics. Literally.
133
00:39:38,812 --> 00:40:14,266
Kayla: Like, it's just the snake eating its tail. If you're billionaire status and you decide everybody else is expendable, then you're not gonna have billionaire status anymore, because no one's doing the labor. Yeah, this is more of a comment than a question, but you and Doctor Torres were talking about why this is something important to know about, to think about, to talk about, and why the TESCREAL movement is something important for us to think about. And it just made me think of that quote of, like, oh, you might not be interested in politics, but it's sure as hell interested in you. And that feels very similar to this.
134
00:40:14,338 --> 00:40:17,386
Chris: Where like, you might not be interested in robots, but they're interested.
135
00:40:17,498 --> 00:40:34,690
Kayla: They are very interested in you. You might. One may not think this is important, but given the magnitude of power behind it, it is important in your daily life. Unfortunately. I'm not very happy about it. I don't want to think about Silicon Valley.
136
00:40:34,810 --> 00:40:53,878
Chris: We are all forced to give a shit about the things that the billionaires give a shit about because they are fucking in control of everything except for the end of the world, apparently. So they don't know about that. So they're gonna go in the bunkers. But like I said, remember, it's not just that they can't stop the end of the world. It's like they may not want to if they're the only ones left in their bunkers procreating.
137
00:40:53,934 --> 00:40:57,518
Kayla: Well, power. Something that Doctor Torres said is, like, power wants to hang on to power.
138
00:40:57,574 --> 00:40:59,374
Chris: One thing power wants to do is hang on to it.
139
00:40:59,422 --> 00:41:02,944
Kayla: And power will try and hang on to power until it is holed up.
140
00:41:02,952 --> 00:41:07,392
Chris: In a bunker until it is wrenched from their cold, dead, TESCREAL hands.
141
00:41:07,456 --> 00:41:08,100
Emile: Yeah.
142
00:41:08,880 --> 00:41:16,872
Chris: So next time on Cult or Just Weird, we will wrap up this interview with Emile Torres. I forget the rest of the stuff that we talk about, but I'm sure that it's very good.
143
00:41:16,936 --> 00:41:18,024
Kayla: I think it's really interesting.
144
00:41:18,112 --> 00:41:19,000
Chris: Is it really interesting?
145
00:41:19,040 --> 00:41:23,808
Kayla: I mean, you came out of the interview being like, oh, my God, that was the best interview I ever did. So I'm assuming we're gonna get to some good stuff.
146
00:41:23,864 --> 00:41:27,344
Chris: I think I was mostly just enamored with my own smart insights.
147
00:41:27,472 --> 00:41:27,784
Kayla: Yeah.
148
00:41:27,832 --> 00:41:31,340
Chris: So anytime I talk, I'm like, oh, man, that was so interesting.
149
00:41:33,290 --> 00:41:49,442
Kayla: If you want to hear more of Chris's just wonderful insights and just bask in his glorious intelligence, you can hang out with us on Discord. You'll find the link in our show notes. And if you want to give him money for how smart and wonderful he is, you can go to patreon.com/cultorjustweird.
150
00:41:49,546 --> 00:41:52,858
Chris: You can also just on the street, you can just hand me cash. Find us if you want.
151
00:41:52,914 --> 00:42:01,866
Kayla: Find us and give us cash. And then I'll say to you, when you hand that cash over, I'll say to you, this is Kayla, this is.
152
00:42:01,898 --> 00:42:06,510
Chris: Chris, and this has been cult or just cash.