Transcript
1
00:00:02,080 --> 00:01:05,732
Chris: For our audience. Singularity. Why is it called that? It refers to the idea that technological progress accelerates, so eventually it'll accelerate to the point where knowing what was going on this morning is irrelevant to this afternoon, which I know, like, kind of already feels that way. But a big part of this is because the singularity will allegedly be the result of an intelligence explosion, which is what will happen when we make an AI smarter than us. And then it wants to make an AI smarter than it, which wants to make an even smarter AI, and so on, until you wind up with an AI of God like powers that brings about transformative change to humanity, which, as you mentioned, absolutely is a millenarian ethos. And if transformative to humanity sounds familiar, that's because that's a transhumanist goal. Hello again. Welcome to Kel.
2
00:01:05,916 --> 00:01:09,876
Kayla: What? Welcome to Celtics, Celts or just weird?
3
00:01:09,908 --> 00:01:16,788
Chris: Celts or Just Weird? A spin-off podcast where we talk about Celtic peoples and are they weird?
4
00:01:16,844 --> 00:01:18,236
Kayla: And the Boston Celtics.
5
00:01:18,428 --> 00:01:20,412
Chris: Boston Celtics. Kayla.
6
00:01:20,476 --> 00:01:20,740
Kayla: Sorry.
7
00:01:20,780 --> 00:01:28,854
Chris: That's how you pronounce it correctly. I'm Chris. I am a game designer. I pronounce things wrong. And I also occasionally do some data science, analytics type stuff.
8
00:01:29,052 --> 00:01:33,390
Kayla: I'm Kayla. I'm a television writer, and my co-host is a dirty Bostonite.
9
00:01:33,850 --> 00:01:38,322
Chris: No, I'm not. I moved away from Massachusetts when I was, like, five.
10
00:01:38,386 --> 00:01:39,830
Kayla: You can't escape your roots, Chris.
11
00:01:40,370 --> 00:01:44,190
Chris: I was born in Danvers Hospital, but not that Danvers hospital.
12
00:01:44,810 --> 00:01:46,066
Kayla: Not the Arkham asylum one.
13
00:01:46,098 --> 00:02:00,358
Chris: Not the Arkham Asylum. Not the one that Arkham Asylum was based on, I think. All right, Kayla, I know how much you like talking about weird hypothetical scenarios, so I've got one for you.
14
00:02:00,414 --> 00:02:05,454
Kayla: Is it a lot, or is it a little? Because it's both. It's. I love it, and I hate it so much.
15
00:02:05,582 --> 00:02:14,966
Chris: Don't stop me. Actually, if you've heard this before, because this is for the show, consider the following paradox. And this is called Newcomb's paradox.
16
00:02:15,158 --> 00:02:16,814
Kayla: I. I don't want to do this.
17
00:02:16,942 --> 00:02:27,262
Chris: That's too bad. Cause here we go. Imagine a super intelligent computer algorithm, or AI, or whatever you want, that is extremely proficient at predicting human decision making.
18
00:02:27,326 --> 00:02:27,862
Kayla: Okay?
19
00:02:27,966 --> 00:03:21,720
Chris: It's accurate 99.999% of the time. Or if you prefer, the original way that this was posited is it's infallible. It's always accurate at predicting human decision making. I find that a little distracting because I'm like, well, nothing's infallible, so I prefer the 99.999% version. Either way, it doesn't really change the way we talk about it. So you have this extremely accurate computer in terms of predicting what humans will decide. Okay, now, imagine you are presented with two boxes. You are given one of two choices. You can take the contents of both boxes, or you can only take the contents of the second box, box B. What is in the boxes? Box A is transparent and always contains $1,000, guaranteed. Box B is opaque, and it can contain either nothing or $1 million.
20
00:03:22,220 --> 00:03:49,436
Chris: Now, the prediction computer we talked about a minute ago predicted your choice ahead of time, but you don't know what it predicted. What you do know is that if it predicted that you would only take box B, then it put the million dollars in box B. If, on the other hand, it predicted you would take both boxes, it didn't put anything in box B. So the question is, do you take both boxes, or do you only take box B?
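For anyone who wants the arithmetic behind the one-box intuition, here's a minimal sketch that treats the predictor's stated accuracy as the probability it called your choice correctly, using the dollar amounts from the setup above. The code is an illustration, not something from the episode.

```python
# Expected payoff of each choice in Newcomb's problem, treating the predictor's
# accuracy as the probability that it correctly anticipated your choice.
def expected_payoffs(accuracy, small=1_000, big=1_000_000):
    # One-box: you get the big prize only if the predictor foresaw one-boxing.
    one_box = accuracy * big
    # Two-box: you always get the small prize, plus the big prize only when the
    # predictor wrongly expected you to one-box.
    two_box = small + (1 - accuracy) * big
    return one_box, two_box

print(expected_payoffs(0.99999))  # roughly (999990, 1010)    -> one-boxing looks better
print(expected_payoffs(0.5))      # roughly (500000, 501000)  -> two-boxing looks better
```

The break-even point works out to an accuracy just over 50%, which is part of why the 99.999% framing pulls so hard toward taking only box B.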
21
00:03:49,508 --> 00:03:57,748
Kayla: I don't understand why you wouldn't just take, like, I don't understand. I think I'm too stupid for this. I think I literally am too dumb, because I don't understand.
22
00:03:57,884 --> 00:03:58,572
Chris: Which one do you?
23
00:03:58,596 --> 00:04:05,516
Kayla: I don't understand why you simply would not just take box B. Because if you have a million dollars, who cares about $1,000?
24
00:04:05,628 --> 00:04:34,232
Chris: What's fascinating about this is that your reaction of, like, I don't understand why you wouldn't do blank is extremely common, and it's, like, 50/50 among people. And this includes, like, professional philosophers, too, by the way. So, like, oh, I don't want to listen to that. I'm just saying, half of people, including the professionals, are like, well, it's obviously box B. And then half the people are like, it's obviously A and B.
25
00:04:34,296 --> 00:04:35,260
Kayla: What do you think?
26
00:04:37,370 --> 00:04:43,034
Chris: By the way, including, we talked about this on our discord, which, oh, we didn't do our call to action this time. Oh, my God.
27
00:04:43,082 --> 00:04:44,154
Kayla: You just started with the anecdote.
28
00:04:44,202 --> 00:04:49,994
Chris: I just started right with the anecdote. Everybody, if you want to support us, go to our Patreon. Patreon.
29
00:04:50,082 --> 00:04:53,778
Kayla: If you want to see the hours long conversation Chris had about this, if.
30
00:04:53,794 --> 00:05:10,958
Chris: You want to debate this with me personally, then come to our discord. The link is in the show notes. One of our lovely discord members completely disagreed with that and also thought that it was kind of like, obviously, why wouldn't you just take both boxes?
31
00:05:11,094 --> 00:05:13,330
Kayla: I don't get it.
32
00:05:15,750 --> 00:05:16,742
Chris: It's a.
33
00:05:16,926 --> 00:05:17,566
Kayla: Is it positive?
34
00:05:17,598 --> 00:05:18,046
Chris: It's a toughie.
35
00:05:18,078 --> 00:05:22,350
Kayla: If the computer predicted that you would take both boxes, then you only get $1,000.
36
00:05:22,470 --> 00:05:31,774
Chris: If the computer predicted you would take both boxes, then it put nothing in the second box, which, therefore, if you take both boxes, then you get $1,000 instead of.
37
00:05:31,902 --> 00:05:36,130
Kayla: That's what I don't understand. Why would you pick $1,000 over a million dollars?
38
00:05:36,440 --> 00:05:41,808
Chris: So I'm gonna try to explain the logic of the two-boxer people, which is.
39
00:05:41,864 --> 00:05:44,740
Kayla: Please do. Cause I think there's, like, brain damage going on.
40
00:05:45,600 --> 00:05:52,700
Chris: Ignore the fact for a second that there was a computer that did a prediction and put stuff in boxes. What should you do in that case?
41
00:05:53,280 --> 00:05:55,072
Kayla: Well, what's the setup then?
42
00:05:55,136 --> 00:05:58,872
Chris: You just. Nobody tells you that a computer did this. What would you do?
43
00:05:58,976 --> 00:06:00,208
Kayla: What's the setup?
44
00:06:00,384 --> 00:06:02,692
Chris: The setup is, do you take one box or two boxes?
45
00:06:02,816 --> 00:06:06,220
Kayla: So if there's a box of a million dollars and a box of $1,000, do I take both or one?
46
00:06:06,260 --> 00:06:18,676
Chris: No, you don't know what's in the second box. It could be zero. It could be a million. The first box, you know, has a thousand. So do you take only the second box? That might be zero, might be a million. Or do you take both boxes?
47
00:06:18,788 --> 00:06:22,172
Kayla: Oh, I see. Okay. So you're saying that.
48
00:06:22,356 --> 00:06:25,852
Chris: What would you do in that scenario? I didn't tell you there was that prediction thing.
49
00:06:25,876 --> 00:06:26,772
Kayla: That's too stupid.
50
00:06:26,916 --> 00:06:27,720
Chris: I just.
51
00:06:28,140 --> 00:06:30,004
Kayla: I'm too dumb for this. I think I am.
52
00:06:30,052 --> 00:06:34,712
Chris: Nobody told you that there is a predicting computer. They just say, like, hey, box B might have one or the other.
53
00:06:34,836 --> 00:06:35,696
Kayla: Then you would take both.
54
00:06:35,768 --> 00:06:47,752
Chris: You would obviously take both. Right. So if that's the case, then why does it matter what the computer did in the past? No matter what the computer did, taking both boxes will always give you $1,000 more.
55
00:06:47,816 --> 00:06:48,280
Kayla: Okay.
56
00:06:48,360 --> 00:06:48,704
Chris: Than taking.
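And here's the two-boxers' dominance argument in the same sketch form: by the time you choose, box B's contents are already fixed, and in either case taking both boxes is worth exactly $1,000 more. Again, just an illustration of the reasoning, not anything from the episode.

```python
# Dominance argument: whatever the predictor already did, box B's contents are
# now fixed. Enumerate both possibilities and compare the two choices.
for box_b in (0, 1_000_000):
    take_both = 1_000 + box_b
    take_b_only = box_b
    print(box_b, take_both - take_b_only)  # the difference is 1000 both times
```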
57
00:06:48,752 --> 00:06:52,180
Kayla: So those people are saying that. I guess that's where the.
58
00:06:52,480 --> 00:06:53,784
Chris: That's why it's a paradox.
59
00:06:53,912 --> 00:06:55,368
Kayla: It's not a paradox, though.
60
00:06:55,424 --> 00:06:56,176
Chris: Oh, it totally is.
61
00:06:56,208 --> 00:06:57,260
Kayla: No, it's not.
62
00:06:58,840 --> 00:07:02,260
Chris: The other thing I've noticed about this is it makes people angry for some reason.
63
00:07:02,680 --> 00:07:03,776
Kayla: I'm enraged.
64
00:07:03,928 --> 00:07:07,616
Chris: Yeah. I don't know. It's a weird phenomenon.
65
00:07:07,768 --> 00:07:14,456
Kayla: Okay. I'm like, I can kind of see it. I can kind of understand, I think, the problem.
66
00:07:14,528 --> 00:07:16,376
Chris: The computer made its decision in the past.
67
00:07:16,448 --> 00:07:21,856
Kayla: I know, but you're saying that the computer is a perfect predictor. Yeah.
68
00:07:21,968 --> 00:07:22,464
Chris: Yeah.
69
00:07:22,552 --> 00:07:33,740
Kayla: So if it's a perfect predictor, but people who say take both think that the prediction was made in the past, so it really doesn't bear any weight. Exactly, but.
70
00:07:34,320 --> 00:07:39,472
Chris: Exactly. No, this is why it's a paradox. Because whenever you go down one mode of reasoning, you always kind of go like, but.
71
00:07:39,496 --> 00:07:47,420
Kayla: But what I'm saying is that it does bear weight because it happened and it happened correctly.
72
00:07:48,360 --> 00:07:48,944
Chris: Correct.
73
00:07:49,032 --> 00:07:55,736
Kayla: So if I take both boxes, I'm only getting $1,000. If I take the second box, I'm getting a million dollars.
74
00:07:55,848 --> 00:07:58,392
Chris: Look, I want to know the correlation.
75
00:07:58,576 --> 00:08:04,252
Kayla: I want to know the correlation of what people say versus whether or not they believe in God.
76
00:08:04,386 --> 00:08:16,800
Chris: I think that's actually very salient, because I think that there's definitely. And, like, a lot of. I'm not the only one to say this. Like, a lot of people have said this. Like, the premise that there is an infallible predictor is, like.
77
00:08:16,880 --> 00:08:17,624
Kayla: It makes it hard.
78
00:08:17,712 --> 00:08:32,562
Chris: This is a crux here, right? That's something we're not used to, and it's something, frankly, that I don't think is possible. And this paradox is part of why I think that. Because my feeling is, as written, you definitely take box B. You don't two-box. That's my feeling.
79
00:08:32,626 --> 00:08:34,250
Kayla: My. Me too.
80
00:08:34,370 --> 00:08:35,833
Chris: You definitely take box B.
81
00:08:35,881 --> 00:08:37,634
Kayla: Wait, but I thought you and I got in an argument about this.
82
00:08:37,682 --> 00:09:05,782
Chris: No, we got in an argument about the motes of dust. We're gonna. We'll do that in a future episode. No, we're gonna. Kayla, I'm sorry. This is the podcast now. The podcast is just dumb hypotheticals, okay? So the reason I say it's box B is because if you are telling me that this infallibly predicts what I do, then that means that if I pick box B, it'll definitely be a million dollars. And if I pick both, then box B will definitely be empty. Like, you told me that. Now, what that also implies is that there's something going on here called retrocausality.
83
00:09:05,926 --> 00:09:06,718
Kayla: Yes. Yes.
84
00:09:06,774 --> 00:09:27,198
Chris: Which is, my decision retro-caused the computer to decide whether it put money in one or the other or whatever. My decision in the present influenced what the computer in the past did. Because of that. Because of that implication of retrocausality. That's part of why I think it's impossible to have something that predicts decisions infallibly.
85
00:09:27,294 --> 00:09:28,366
Kayla: Because of the implication.
86
00:09:28,438 --> 00:09:29,382
Chris: Because of the implication.
87
00:09:29,446 --> 00:09:44,106
Kayla: I think that's why I want to know the correlation between people who believe in God or not. Because it's like, if you don't believe in God, then you have to take both boxes. Yeah, you fucking take the boxes. Because that's magical. It is. It's magical thinking. It doesn't make any rational, logical sense.
88
00:09:44,138 --> 00:09:44,874
Chris: In the real world.
89
00:09:44,922 --> 00:09:48,754
Kayla: In the real world. But if I'm existing within the context.
90
00:09:48,802 --> 00:09:50,698
Chris: In the confines of the hypothetical, and.
91
00:09:50,714 --> 00:09:54,234
Kayla: I'm taking, then I feel like retro causality exists.
92
00:09:54,402 --> 00:09:54,970
Chris: I agree.
93
00:09:55,050 --> 00:10:03,930
Kayla: And so if you're asking me, you have to ask me two different things. And I will give you two different answers if you ask me this question as is, it's b. If you ask me real life, it's both.
94
00:10:04,090 --> 00:10:45,806
Chris: Agreed. Cause if you told me this was real life, then I would say that you were wrong about the infallible predictor. I would say that's not something that's possible. And part of it is because of the retrocausality. And part of it is also, like, there's a lot of chaos theory going on here. So my decision as to which box to take affects the computer's prediction, which also affects my decision, which also affects the computer's prediction. So there's a feedback cycle there, right? And feedback cycles are the canary in the coal mine for when you have a chaotic system. And a chaotic system is inherently impossible to predict in any pragmatic way, even for a supercomputer. It's not that it's not deterministic.
95
00:10:45,958 --> 00:10:56,662
Chris: The chaotic system is deterministic, but it quickly becomes impossible, even for a computer made of all of the atoms in the universe, to predict something that has that property.
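A standard toy illustration of that point, and not something from the episode, is the logistic map: the update rule is completely deterministic, but two starting points a billionth apart end up nowhere near each other after a few dozen steps, so any tiny measurement error ruins a long-range prediction.

```python
# Logistic map: a deterministic rule that is chaotic for r = 4, so it shows
# sensitive dependence on initial conditions.
def step(x, r=4.0):
    return r * x * (1 - x)

a, b = 0.2, 0.2 + 1e-9   # two starting points, one part in a billion apart
for _ in range(50):
    a, b = step(a), step(b)
print(abs(a - b))  # typically a sizable fraction of the whole 0-1 interval
```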
96
00:10:56,766 --> 00:11:04,278
Kayla: That's why the dinosaurs had babies, even though they were all female. Spoiler alert for Jurassic Park.
97
00:11:04,374 --> 00:11:05,318
Chris: Life finds a way.
98
00:11:05,374 --> 00:11:05,726
Kayla: Yeah.
99
00:11:05,798 --> 00:11:15,054
Chris: And as a side note here, this isn't like, this is decision theory. Like, this isn't just like, well, some nerds arguing on the Internet. I actually.
100
00:11:15,102 --> 00:11:17,350
Kayla: Please don't put me in the prisoner's dilemma with the computer.
101
00:11:17,390 --> 00:11:57,780
Chris: This is not prisoner's dilemma with the computer. This is prisoner's dilemma between the United States and the Soviet Union. I recently heard an interview. It was actually on. It was one of Dan Carlin's interviews with Annie Jacobsen, who wrote a book called Nuclear War: A Scenario. And one of the things they talked about was there was this elderly former director of the United States nuclear STRATCOM, which is like the people in charge of doing the nuclear strike, if that ever came to pass, and determining the strategy. And for people who don't know, in the Cold War, we had this policy, and I say we, I mean both the US and the USSR, of what's called deterrence. We each produced an insane number of nuclear weapons, all pointed at the other.
102
00:11:57,940 --> 00:12:10,924
Chris: And the reason we know the Russians aren't going to nuke us is because we have stuff pointed at them. And the reason that they know, we're not going to nuke them is because they have stuff pointed at us. Right. There's this mutually assured destruction which deters either side from doing anything.
103
00:12:11,012 --> 00:12:11,644
Kayla: Right.
104
00:12:11,812 --> 00:12:20,736
Chris: By the way, the paradox we were talking about was called Nukem's paradox. I just now noticed that. Anyway, so this nuclear.
105
00:12:20,848 --> 00:12:23,940
Kayla: More like Don't-Nukem's paradox, am I right?
106
00:12:24,400 --> 00:12:50,662
Chris: So this former director of nuclear STRATCOM was talking as, like, an old man, basically being like, you know, I think back now, and I wonder, like, if somebody had showed me that missiles were inbound to the United States, would I launch my missiles? Would I launch against Russia? And, like, it kind of feels like the answer is no, because at that point, deterrence failed.
107
00:12:50,726 --> 00:12:51,294
Kayla: Yeah.
108
00:12:51,422 --> 00:13:00,918
Chris: So even if we're all gonna die over here, even if the entire North American continent is gonna be glassed, what sense is there in, like, glassing the other hemisphere?
109
00:13:01,014 --> 00:13:01,494
Kayla: Yeah.
110
00:13:01,582 --> 00:13:12,610
Chris: What sense is there in killing everyone when half of everyone's already gonna die? Like, to the point where some people even, like, sort of thought-experimenty, were like, maybe the red button shouldn't be connected to anything.
111
00:13:13,460 --> 00:13:15,332
Kayla: Mmm. Mm.
112
00:13:15,436 --> 00:13:21,780
Chris: It should just be there as, like, a deterrent. Deterrence is the strategy, but if it fails, we shouldn't nuke, I think.
113
00:13:21,820 --> 00:13:24,996
Kayla: But that the nuclear button, is it a button?
114
00:13:25,068 --> 00:13:25,716
Chris: I don't know.
115
00:13:25,828 --> 00:13:28,132
Kayla: Should be inside a human body.
116
00:13:28,276 --> 00:13:28,932
Chris: Yeah.
117
00:13:29,076 --> 00:13:37,124
Kayla: So that if someone's gotta push it, they gotta kill a literal guy. You have to. If you're mister president and you want a nuke, you're gonna have to kill a guy.
118
00:13:37,172 --> 00:13:53,704
Chris: You have to, like, open him up with a knife, like Saw style. Yeah. The paradox here, though, is that if you know that the other side is thinking that, if both sides know that the other side is like, well, they're not really gonna do it, right, then deterrence goes away, and now it's more likely to happen.
119
00:13:53,792 --> 00:13:57,360
Kayla: Yeah. I hate it. Don't talk to me about this.
120
00:13:57,440 --> 00:14:45,716
Chris: It's horrible. So decision theory, which is what this whole Newcomb's paradox and what I just talked about with the nuclear exchange is, decision theory is a thing, and it's actually, like, not just an esoteric thing, it's a thing at, like, the highest possible stakes for civilization, for the human species. So I just, you know, and I invite you guys in the audience, too, like, ping us on social media or come on Discord and talk about it, because I do get really curious about how people think about these types of paradoxes, and I need to pump the brakes here so that we can move on, because, like, God knows we've already spilled, like, way too much digital ink over this on Discord, as you mentioned. But this thought experiment is one of the favorite thought experiments of LessWrongers, and they like thinking about decision theory.
121
00:14:45,748 --> 00:15:31,982
Chris: It's a field they care about a lot. Eliezer Yudkowsky even invented his own branch of decision theory, or sloppily reinvented, depending on who you ask, called functional decision theory. If you've recently listened to our season two, episode four on Roko's basilisk, we called it timeless decision theory then, but Mister Yudkowsky has since rebranded it. Anyway, it has to do with what decisions we do make or should make under different conditions. And I think the connection to AI development is kind of obvious here, right? You're literally trying to program a decision machine if you're making AI. And I won't go into all the crazy details that underpin the Roko's basilisk idea because we did that on the other episode. But I will mention that decision theory is a big part of it, since we're talking about things LessWrongers are into here.
122
00:15:32,006 --> 00:16:17,016
Chris: This may also be a good time to briefly recap what Roko's basilisk is. So basically it's this thought experiment that somebody posted on LessWrong. The poster's name was Roko, hence the name Roko's basilisk. And the idea was like, if there's a future superintelligent AI, even if it's good for humanity, it may decide that it should torture people that knew that it could come into existence but didn't give their resources to help it come into existence. And the reason it will decide that is that might influence people in the past. Sort of like the Newcomb computer thing. It will retro-cause and influence people in the past to try to bring it about sooner. And it will make that calculation based on the utilitarian idea.
123
00:16:17,048 --> 00:16:45,222
Chris: Like, every moment I can make myself come into existence sooner is, like, one more person I can save from cancer, because I, as the superintelligent god-like AI, cured all cancers and did all this cool stuff. And so, like, every moment that I do that, I push that back, I save X number of lives. So it's totally worth it to torture these people that knew that I could come into existence and didn't support it. And that's really scary to these folks.
124
00:16:45,406 --> 00:17:01,696
Kayla: I also want to know now, and I know it's probably zero, because I doubt there's a lot of faith in God and belief in God in the LessWrong community, is my biased assumption. But again, I want to know the correlation of, like, what your stance on Roko's basilisk is and what your stance on God is. I want to know.
125
00:17:01,848 --> 00:17:02,648
Chris: Yeah, I know.
126
00:17:02,744 --> 00:17:10,215
Kayla: And when I say God, I don't mean like, aren't you a Christian? I mean like, do you view the universe as having some sort of sentience or not?
127
00:17:10,328 --> 00:17:13,784
Chris: Right. Did somebody create the universe or is it just all random?
128
00:17:13,952 --> 00:17:23,030
Kayla: I think that my answer and my change in answer has reflected my change in relationship to God.
129
00:17:24,279 --> 00:17:25,231
Chris: What's your change in answer?
130
00:17:25,255 --> 00:17:31,207
Kayla: Well, now I'm like, fuck you, Roko's basilisk. Fuck you if you're gonna exist. Fucking torture me, bitch. Like, that's how I feel.
131
00:17:31,263 --> 00:17:37,743
Chris: I mean, under decision theory conditions, that might actually be the right choice because then that means you're uninfluenceable.
132
00:17:37,831 --> 00:17:40,127
Kayla: And in fact, that's how you become uninfluenceable.
133
00:17:40,183 --> 00:17:51,179
Chris: Eliezer Yudkowsky himself said that basically if you, like, pre-commit to saying, like, fuck off, then you are uninfluenceable. And then they won't. The superintelligent AI won't torture you because it wouldn't have had any effect.
134
00:17:51,359 --> 00:18:08,244
Kayla: I don't care. I'm just saying you agree with Eliezer. I hate that. Because then it means like, oh, saying fuck you is the right answer. Because then he won't get you. No, get me, get me, bitch. I'm really angry about this because I'm angry about God and the universe.
135
00:18:08,332 --> 00:18:09,628
Chris: Yeah. And I'm with you, man.
136
00:18:09,684 --> 00:18:13,920
Kayla: I just. I have a lot of anger. And so now I don't care about Roko's basilisk anymore.
137
00:18:15,700 --> 00:18:20,876
Chris: To me, I still find it like an interesting thought experiment. That's what it was originally supposed to be. But, like, it really scared a lot of people.
138
00:18:20,948 --> 00:18:22,332
Kayla: It scared the shit out of me.
139
00:18:22,356 --> 00:18:59,722
Chris: It scared the shit out of me. Of you. It scared me. But, like, people on LessWrong were, like, having literal nightmares about it. Some people. Some people, not everyone. Some people are like, this doesn't scare me, this is dumb. Some people are having nightmares. I read on RationalWiki. RationalWiki has an entry on Roko's basilisk, and half of it is devoted to explaining why it's BS, to make you feel better. And the reason they put that there, they say: the reason we put this here is because it wasn't here before. We just had, here's what Roko's basilisk is about. And we kept getting emails from LessWrong community members saying, like, nobody will let us talk about this on LessWrong, because it became, like, a taboo subject.
140
00:18:59,786 --> 00:19:00,082
Kayla: Right.
141
00:19:00,146 --> 00:19:41,732
Chris: That's in our other episode, about how it got taboo-ified there. Nobody will let us talk about this on LessWrong. I'm so scared about this, and, like, I don't have anybody to talk about it with. You guys have this article that seems pretty, you know, well researched. Like, what do you guys think? So apparently they got, like, so many emails from, like, scared LessWrong people that now they have this whole section called, so you're worrying about the basilisk. And it's, like, literally, like, paragraphs and paragraphs long, and, like, going into all the different reasons why it's not something to worry about. But that's just, like, that's how much effect it had, right? And, like, people even, like, did donate quite a bit of money to AI research and, like, what's called the AI alignment problem because of that.
142
00:19:41,756 --> 00:19:55,152
Chris: Like, now some people donate money because they think that's an existential risk and we should donate money anyway, right? I don't think that's necessarily a bad idea, but some people definitely donated money specifically because it's like, holy shit, I'm gonna get tortured in the future by this, like, god AI.
143
00:19:55,216 --> 00:19:57,968
Kayla: Oh my God. I hope that this was a marketing psyop.
144
00:19:58,064 --> 00:20:05,944
Chris: Nice. Yeah. Roko was actually working for the organ. By the way, the organization that a lot of people donate to is run by Eliezer Yudkowsky.
145
00:20:06,032 --> 00:20:08,500
Kayla: Oh, that seems extremely important to say.
146
00:20:08,920 --> 00:20:10,424
Chris: Yeah, we did say that in our episode.
147
00:20:10,472 --> 00:20:11,296
Kayla: I didn't remember that.
148
00:20:11,328 --> 00:20:32,592
Chris: It's not the only organization that people will donate to, but, like, he has an organization dedicated. It was called the Singularity Institute, and then it got changed to MIRI. Anyway, the Machine Intelligence Research Institute, dedicated to the quote unquote alignment problem. Aligning a superintelligent AI's motivations and desires and personality with human goals and desires.
149
00:20:32,656 --> 00:20:39,512
Kayla: I have a lot of problems with somebody coming up with a thought experiment.
150
00:20:39,576 --> 00:20:41,344
Chris: He didn't come up with it. Roko did.
151
00:20:41,512 --> 00:20:42,812
Kayla: You're right. Thank you. You're right.
152
00:20:42,836 --> 00:20:45,436
Chris: And Eliezer tried to purge it from the site.
153
00:20:45,508 --> 00:20:47,620
Kayla: I'm wrong. I'm wrong there.
154
00:20:47,740 --> 00:20:49,180
Chris: I'm just making you less wrong.
155
00:20:49,340 --> 00:20:52,580
Kayla: Thank you for making me less wrong. That helps me a lot.
156
00:20:52,620 --> 00:20:58,796
Chris: You could counter me that Eliezer, when he was trying to stamp it out and it only made it worse, knew what he was doing. But I don't.
157
00:20:58,868 --> 00:21:07,806
Kayla: No, and I think that. I don't think there's any. I think that the Barbra Streisand effect is extremely difficult to deal with.
158
00:21:07,908 --> 00:21:28,282
Chris: I completely agree. So that whole thing alienated a lot of people in the community and was part of what created this rationalist diaspora, which is, like, a lot of people, either because they thought it was silly and dumb, or because they were scared of it, or both, sort of detached from LessWrong as a website.
159
00:21:28,346 --> 00:21:28,962
Kayla: Fascinating.
160
00:21:29,026 --> 00:21:51,958
Chris: Yeah. Speaking of the basilisk, this brings us to another common trope on less wrong and in subsequent rationalist communities, the idea of stopping existential threats to humanity. I mean, specifically ones where AI kills us all. This community doesn't really care as much about more mundane existential threats like climate change or things like that.
161
00:21:52,014 --> 00:21:52,838
Kayla: Why not?
162
00:21:53,014 --> 00:22:11,084
Chris: To be fair, I'm really generalizing here. There are definitely a lot of people in the community that are worried about the climate crisis. But like, overall, on average, the people that are in this rationalist community care much more about the Sci-Fi type threats like AI and nanotechnology. And you just asked me why not.
163
00:22:11,172 --> 00:22:12,660
Kayla: Yeah, the goo problem.
164
00:22:12,740 --> 00:22:40,380
Chris: Yeah, grey goo. So the reason, the answer to that is they feel like it's much more of an existential threat. Climate change will be disastrous, but it won't potentially kill every single human being and prevent us from ever existing in the future the way that they think AI or nanotechnology might. So they think that it's a bigger problem to solve, even though it's like one problem is kind of here and the other one is like, yeah, I don't know if that's gonna happen, man.
165
00:22:40,420 --> 00:23:03,560
Kayla: That's the problem. Like, when you have like hundreds and hundreds of scientists being like, hello, this is the existential crisis of our time and we must act now versus, like, your friends on the Internet saying, wouldn't it be nutso if like, we invented nanobots and then they, like, replicated and they replicated so hard, they replaced every single, like, thing on planet Earth until planet Earth was just nothing.
166
00:23:04,020 --> 00:23:36,664
Chris: Nanobots. Yeah. So I want you, and also you, our listeners, to keep this in mind because this is going to be a recurring trope, a recurring idea for the next, like, several episodes, is that, like, there's definitely a disconnect between folks in this community and what they care about in terms of this sort of, like, really speculative thing that they, like, think is probably true, though, and is really dangerous and, like, actual stuff that's happening right now in the real world.
167
00:23:36,752 --> 00:23:37,820
Kayla: Right, right.
168
00:23:38,280 --> 00:23:39,448
Chris: That's gonna come up again.
169
00:23:39,544 --> 00:23:40,340
Kayla: Okay.
170
00:23:41,600 --> 00:24:11,114
Chris: Yudkowsky himself was even featured in a Time magazine article, which, in my opinion, and this was last year, this is 2023, I think this was a gigantic mistake on Time's part to platform him this way. But anyway, the whole article was like this ultra-panicked screed about how AI is going to kill everyone if we don't stop development immediately. And he has this whole section about how he and his wife are really scared about their kids not having a future.
171
00:24:11,202 --> 00:24:14,482
Kayla: There's this whole, why would you be scared of that? Their kids are signed up for cryonics.
172
00:24:14,666 --> 00:24:35,110
Chris: Right? But see, that's the thing, is that AI, I'm just being a bitch, but it's still kind of salient because the reason he'd be scared for that and not scared for something like climate change is because like, climate change will still have the cryonically frozen guy. But if AI kills everyone, right, if the grey goo consumes the entire earth, then cryonics be damned. It doesn't matter.
173
00:24:35,190 --> 00:24:57,454
Kayla: I guess I just kind of feel about not AI, but definitely about the grey goo and like maybe AI to some extent, the way that like climate change deniers feel about climate change in like the, I mean, that's such a big, not even the come on, it's like, it's such a big problem that it's like not even worth worrying about. I don't have any control there. Like, if the planet becomes grey goo, then the planet becomes grey goo.
174
00:24:57,502 --> 00:25:01,914
Chris: Kayla, maybe you should donate to Eliezer Yudkowsky's Machine Intelligence Research Institute.
175
00:25:01,982 --> 00:25:05,790
Kayla: I would rather donate to Mitt Romney.
176
00:25:07,210 --> 00:25:09,194
Chris: He does have a binder full of women.
177
00:25:09,242 --> 00:25:09,910
Kayla: Exactly.
178
00:25:10,650 --> 00:25:28,310
Chris: So, yeah, so like part of this article too is like he has this little anecdote where he's like, his wife texts him about how their kid lost their first tooth and how sad it was. She was sad watching this happen because it made her think of how doomed the kid is.
179
00:25:28,810 --> 00:25:58,216
Kayla: Okay, see, this I need to have, I need to talk to him. I need to talk to Eliezer Yudkowsky, because I'm just like, how do you make it through the fucking day without thinking about, you fucked up by having kids? Like, you did a bad. The article also. And I'm like, honestly, I'm a little worried when you get that anxious, which I totally understand, talking again, as a parent of a dead child. Totally get it. But when you get that level of anxious, I really worry about the mental health impact it has on you. Oh, and your ability to.
180
00:25:58,248 --> 00:26:31,736
Chris: Oh God, I am super worried about the guy. Yeah, absolutely. He is very upset right now. Here's another quote from the article. Make it explicit in international diplomacy that preventing AI extinction scenarios is considered a priority above everything else, including preventing a full nuclear exchange, and that allied nuclear countries are willing to run some risk of nuclear exchange if that's what it takes to reduce the risk of large AI training runs.
181
00:26:31,768 --> 00:26:33,656
Kayla: What is nuclear exchange?
182
00:26:33,808 --> 00:26:54,184
Chris: Nuclear exchange is what we talked about at the top of this episode where, like, we nuke each other and, like, almost everyone dies. So because he thinks that AI is everyone dies, he's saying that nuclear exchange is preferable if it prevents everyone dies. It's preferable to have almost everyone dies.
183
00:26:54,232 --> 00:26:59,568
Kayla: See, I watched BBC's Threads, which is kind of nuts, and I think that the grey goo scenario is better than that.
184
00:26:59,624 --> 00:27:00,744
Chris: I think so, too.
185
00:27:00,912 --> 00:27:07,288
Kayla: Not that I'm not. I'm not advocating for. I'm not advocating for anything here. I'm just simply saying.
186
00:27:07,304 --> 00:27:08,112
Chris: Are you on the side of the nanobots?
187
00:27:08,136 --> 00:27:08,720
Kayla: I'm on the side of the.
188
00:27:08,720 --> 00:27:12,000
Chris: Oh, my God. Are you native nanobots? Are you nanobots, man?
189
00:27:12,040 --> 00:27:17,244
Kayla: I'm nanobots, man. I don't know. I don't even know what I'm saying anymore.
190
00:27:17,332 --> 00:27:48,080
Chris: When he says international diplomacy, he basically means everybody needs to agree to have this not happen. And if there's a rogue actor, that's like, I'm going to make some. Even if it's a state entity, that's like, yeah, we're doing AI research. The other nations in this accord should be willing to use nuclear weapons and even be retaliated against. If Russia did it, if everybody was like, let's agree to stop doing this, and Russia was like, fuck you, we're doing it, then everybody should be willing to nuke Russia and be nuked.
191
00:27:48,580 --> 00:27:50,396
Kayla: I can't follow you all the way down.
192
00:27:50,548 --> 00:27:51,492
Chris: That's good, Kayla.
193
00:27:51,556 --> 00:27:57,860
Kayla: Yeah. I'm not going to lie. I'm scared of AI.
194
00:27:58,020 --> 00:27:58,460
Chris: Sure.
195
00:27:58,540 --> 00:28:02,956
Kayla: I do not think it's wrong to consider AI an existential threat.
196
00:28:03,148 --> 00:28:04,588
Chris: I agree. I agree.
197
00:28:04,644 --> 00:28:07,004
Kayla: And potentially a very bad one.
198
00:28:07,052 --> 00:28:12,432
Chris: The danger is not nearly clear and present enough to warrant comparison to nuclear exchange.
199
00:28:12,496 --> 00:28:32,664
Kayla: I think as long as we're having the experience of, like, hey, this AI company just revealed that all of their AI was actually, like, a warehouse full of people in India googling for you, we can calm down a little. And also, like, we know what. Yeah, this is what you said. We know what nukes would do to us. We don't know what AI will do.
200
00:28:32,712 --> 00:28:36,040
Chris: Right. It's an unknown quantity that a lot of these people think of as known.
201
00:28:36,160 --> 00:28:44,836
Kayla: And I think you can convince yourself of how it's actually going. This is the problem with Roko's basilisk, is that you can convince yourself that you know absolutely what's going to happen.
202
00:28:44,988 --> 00:29:02,836
Chris: We are more rational than everyone else. We have thought this through, and it's a logical inevitability. If you just simply apply utilitarianism and the decision theory and you've read the Sequences, this is gonna happen. Like, you can definitely convince yourself. That's why people got so scared of it. It's not because it was a scary thought. It was because it was like, it felt like certainty.
203
00:29:02,908 --> 00:29:06,484
Kayla: Right. And I, and I have been in that. I've, I've been there.
204
00:29:06,572 --> 00:29:06,868
Chris: Yeah.
205
00:29:06,924 --> 00:29:07,636
Kayla: And I get it.
206
00:29:07,748 --> 00:29:28,748
Chris: And, like, I don't disagree with the rest of Eliezer's article. You know, there's part of the article where it's like, we have to, like, stop this now, and, like, we have to, like, really take it seriously, and, like, we don't have any idea what we're making right now. And, like, it's not that we shouldn't make it, like, 50 years from now, once we've, like, done the work to understand what the fuck is going on, but we should stop it right now.
207
00:29:28,844 --> 00:29:30,420
Kayla: I also don't disagree with any of that.
208
00:29:30,460 --> 00:29:34,068
Chris: But then when he goes to like, and we should nuke people, like, that's like, whoa.
209
00:29:34,164 --> 00:29:49,598
Kayla: Yeah. And I think it's also, like, it's not bad to have in this community, to have the voices that are like, let's find some brakes, when so many voices in this community are like.
210
00:29:49,654 --> 00:29:50,838
Chris: But like, when the, let's put a.
211
00:29:50,854 --> 00:29:52,382
Kayla: Fucking brick on the pedal.
212
00:29:52,406 --> 00:29:59,094
Chris: Oh, yeah, absolutely. And I, yeah, that's, I think that there's a lot of value in the sort of what's called, you know, the doomer community.
213
00:29:59,222 --> 00:29:59,822
Kayla: Right.
214
00:29:59,966 --> 00:30:11,116
Chris: But like, actually I shouldn't even say that because I think doomers, the proper definition of Doomer is like, you think everything's fucked and nothing matters, which is as much of.
215
00:30:11,308 --> 00:30:12,804
Kayla: I know it's gonna happen.
216
00:30:12,892 --> 00:30:15,724
Chris: Paralyzed. Yeah, exactly. Like, I know this is gonna happen, so it doesn't matter.
217
00:30:15,772 --> 00:30:19,124
Kayla: Right. I know it's gonna happen. I know this is gonna happen. So therefore Roko's basilisk is gonna happen.
218
00:30:19,172 --> 00:30:43,700
Chris: Right. So that's just as bad. But, like, for people that are saying let's pump the brakes, yeah, I definitely think that's super valuable. I think it's counterproductive, though, when you have somebody, when you have a pump-the-brakes guy also saying these crazy things. Then it's easier to go, like, okay, the doomers are stupid, obviously, we can just go full forward, because look at the guy saying we shouldn't. He's a nut job.
219
00:30:44,040 --> 00:30:44,776
Kayla: Is there.
220
00:30:44,888 --> 00:30:49,592
Chris: Right. Like, reading this article made me for a second go like, oh, I guess we're probably fine. This is nuts.
221
00:30:49,696 --> 00:30:51,380
Kayla: How does Eliezer make his money?
222
00:30:51,840 --> 00:31:07,536
Chris: I believe he is. He still runs MIRI, the Machine Intelligence Research Institute. Okay, so it's a nonprofit that is, like, again, their whole thing is, like, AI alignment. We want to make safe AI. We want to make sure that people that are doing AI do it safely.
223
00:31:07,648 --> 00:31:37,926
Kayla: And this could be a bias, but I feel like that at least gives me a sense of whether or not this is somebody who is profiting off or grifting from fear mongering versus somebody who is a true believer and not profiting off of that necessarily. You can make the argument that, well, if he's saying that, oh, we're all going to die from AI, then people are more likely to donate to his thing. But I think that there's way more money these days in being on the pro-AI side.
224
00:31:38,038 --> 00:31:38,670
Chris: Way more money.
225
00:31:38,710 --> 00:31:46,710
Kayla: So if you were simply in it for the grift, and again, this could be bias, whatever, but I just, that's a data point. You're more likely to be in it for the grift.
226
00:31:47,050 --> 00:32:02,618
Chris: Yeah. You're gonna be Marc Andreessen, right? Not Eliezer Yudkowsky. Like, he says crazy things, but I do think that he is being authentic in his beliefs. I'll give you a little bit of context, though. This isn't the first doomsday he's been wrong about.
227
00:32:02,714 --> 00:32:07,314
Kayla: Oh, no. Was he a 2012 guy? Did he think the world would end in 2012?
228
00:32:07,362 --> 00:32:29,420
Chris: No, no. He only does science-based doomsdays. So in the 1990s, a young Yudkowsky predicted that nanotech would destroy the world by 2010, which it didn't. Apparently, he abandoned that idea at some point, because he later predicted his development team, so the team he was working with at the time, would build a superintelligent AI by 2008.
229
00:32:29,920 --> 00:32:30,584
Kayla: This is not.
230
00:32:30,632 --> 00:32:31,632
Chris: It also didn't happen.
231
00:32:31,736 --> 00:32:40,584
Kayla: This is not, this is now going exactly against what I was just saying because this is the pattern of somebody. This is like those cult leaders that keep changing the date of the end.
232
00:32:40,592 --> 00:32:41,596
Chris: Of the world, right?
233
00:32:41,668 --> 00:32:46,476
Kayla: They're like, oh, yeah, it's totally end on this date versus, oh, that date passed. So it's totally going to end on this date.
234
00:32:46,548 --> 00:33:15,316
Chris: So the build-a-superintelligent-AI date wasn't a doomsday prediction at the time. This is when he was like, rah, I think we should build AIs. And then at some point, I don't exactly know when, he went from, like, we should build AIs and it's cool and we'll align it, okay, to, like, we need to really hardcore pump the brakes. At some point, he made that switch. So that wasn't, like, a doomsday prediction, but the first one was. And that was, like, even weirder, because I'm like, really? We're gonna have nanobots by 2010 that are gonna destroy everything? Dude. Come on, man.
235
00:33:15,348 --> 00:33:16,772
Kayla: The grey goo really got some people.
236
00:33:16,836 --> 00:34:11,393
Chris: I know, but let's bring it back. The takeaway here is that the rationalist community really cares a lot about existential risks, and they tend to care about the singularity-related ones much more. And we've talked about the singularity a little bit. I feel like it's time. I'm kind of obligated to mention singularitarianism here and kind of explain it a little bit, because there's a huge overlap between the rationalist community and the singularitarian community, largely because of Mister Yudkowsky. He can definitely be considered a singularitarian, because he thinks that we're moving towards this future with a super god-like AI. Kayla, I know you know all about the technological singularity, because I was totally in the cult after reading Ray Kurzweil's book The Singularity Is Near, which was written in 2005, so we can extrapolate his definition of near.
237
00:34:11,442 --> 00:34:13,121
Chris: It has to be at least 19 years.
238
00:34:13,186 --> 00:34:16,353
Kayla: I think you literally said to me, like, have you heard the good news?
239
00:34:16,442 --> 00:34:17,025
Chris: Did I rinse?
240
00:34:17,058 --> 00:34:18,230
Kayla: No, you didn't. No, you didn't.
241
00:34:19,250 --> 00:34:22,466
Chris: Is it bad that I thought that might be something that I could have said?
242
00:34:22,538 --> 00:34:36,437
Kayla: No, but it was. You did recommend this book to me with. With the fervor of a religious man. Well, and it wasn't wrong. I mean, maybe it was. I don't know. I like it.
243
00:34:36,453 --> 00:35:22,254
Chris: The Singularity Is Near is like a burn salve if you're, like, scared about existential threats and AI, because, like, the whole book is basically like, don't worry, this is going to be awesome. It doesn't say, like, there won't be existential threats. You know, it is honest about, like, we're just going to face different ones, right? Like, it's still. There's still going to be challenges. But if you're worried about Terminator 2: Judgment Day, that's unlikely to happen. AIs are going to be good for us, and also we might become immortal. So if you're worried about dying, and worried about specifically dying from robots, it's like, oh, man, this guy actually says that it's going to be a utopia. And that's kind of, like, where singularitarianism was born. That's, like, the bible-ish text for the idea of the singularity.
244
00:35:22,432 --> 00:36:04,318
Chris: But, yeah, so we read that a while ago for our audience. Singularity. Why is it called that? It refers to the idea that technological progress accelerates, so eventually it'll accelerate to the point where knowing what was going on this morning is irrelevant to this afternoon, which I know, like, kind of already feels that way. But a big part of this is because the singularity will allegedly be the result of an intelligence explosion, which is what will happen when we make an AI smarter than us. And then it wants to make an AI smarter than it, which wants to make an even smarter AI, and so on, until you wind up with an AI of God like powers that brings about transformative change to humanity, which, as you mentioned, absolutely is a millenarian ethos.
245
00:36:04,494 --> 00:36:53,054
Chris: And if transformative to humanity sounds familiar, that's because that's a transhumanist goal. So most singularitarians are sort of, like, by definition, also transhumanists. And Eliezer is definitely one of them, as someone who's very concerned with future superintelligences. And the rest of the LessWrong community, more or less, either by association or by adulation, most of the LessWrong community also believes in the technological singularity as well. Okay, so I want to mention the rationalist diaspora one more time here, because throughout these episodes, I've been talking about LessWrong and stuff that's happened there. And I want to emphasize that most of these events are, well, in the past, like, a decade in some cases. Eliezer himself hasn't posted on LessWrong in quite some time. And the site activity is way down from its peak.
246
00:36:53,142 --> 00:37:31,914
Chris: It's definitely still active, but a lot of folks have moved on to other homes on the Internet. And I think we mentioned this last episode. But one of the most prominent posters on LessWrong, a man by the name of Scott Alexander, created a sort of LessWrong 2.0 with a slightly different flavor called Slate Star Codex, which, again, it's just like another online forum. And as near as I can tell, this is where, like, the largest chunk of the rationalist diaspora currently lives online. And now I found myself with only one more thing to talk about. And I don't know why this is coming last, because it's one of the weirdest and, like, most important bits about LessWrong and rationalists: Bayes theorem.
247
00:37:32,082 --> 00:37:33,194
Kayla: Bayes theorem.
248
00:37:33,242 --> 00:37:45,362
Chris: Bayes theorem. You might remember this from the definition where the guy said, like, oh, it's like a pragmatism philosophy that's concerned with cognitive biases and Bayes theorem, which, like, at the time, was totally contextless. What the fuck is he talking about?
249
00:37:45,426 --> 00:37:47,002
Kayla: Yeah, yeah.
250
00:37:47,026 --> 00:38:01,710
Chris: So Bayes theorem is kind of two different things. The first thing that Bayes theorem is, is just a theorem in mathematics that deals with how probability works. I won't go into the details. It's just something in math.
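For reference, the something in math being gestured at is just the rule for flipping a conditional probability around: the probability of a hypothesis given some evidence equals the probability of that evidence if the hypothesis were true, times your prior for the hypothesis, divided by the overall probability of the evidence. Written out, P(H | E) = P(E | H) × P(H) / P(E).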
251
00:38:05,210 --> 00:38:18,038
Kayla: Sorry. Something in math is. That's just like how I approached all math classes. Like, that's what. That's what trying to learn math in school felt like to me. Was somebody going like, oh, it's something in math here. It's just something in math. It was that vague. I hated it.
252
00:38:18,134 --> 00:39:08,628
Chris: I mean, that's what it is. It's basically that it's just something in math. The second thing that Bayes theorem is, is this sort of, like, magical word of invocation that rationalists use, and it contains several ideas, a few of which are, like, even maybe sort of related to the actual math, but kind of not really. When rationalists say Bayes or Bayesianism, and they say those things a lot, what they mean is a worldview where you acknowledge you are acting on uncertainty and that your actions and decisions can only ever have some probability assigned. Nothing is ever really 100% guaranteed truth. And as a part of this worldview, you should also have an initial probability of something being true. Bayesians refer to this as your quote unquote priors. There's some more jargon for you, by the way.
253
00:39:08,684 --> 00:39:10,782
Kayla: I like that one, though, which you.
254
00:39:10,806 --> 00:39:13,610
Chris: Then update based on what actually happens.
255
00:39:15,510 --> 00:39:16,822
Kayla: This is good. I like it.
256
00:39:16,886 --> 00:39:53,998
Chris: I like it, too. I'm honestly a big fan of this sort of worldview, and I definitely would say I even share it. I think it would benefit a lot of people if they were able to think in a more probabilistic manner. And we'll talk about that in a second. I just want to help clarify an example of all this prior stuff I was talking about. Say you have a coin and you want to know if it's weighted or not. You should flip the coin a whole bunch of times to find out. Right? Yes. But a Bayesian would also say that you can and kind of should assign some chance ahead of time based on things like, well, how does the coin look? Does it look normal, or does it look like it has a little bump in it?
257
00:39:54,094 --> 00:40:30,978
Chris: Did I get this coin from a bank or from Bob's discounted weighted coin emporium? Did Bob himself tell me it was weighted? All of those things are your priors, right? So, based on those priors, you might say, like, I got it from Bob's weighted coin emporium, so it's probably weighted. Or maybe you got it from the bank, so you're like, okay, it's probably just, like, a regular coin. It's 50/50. Those are your priors. Then you flip the coin and update your priors based on the results. Maybe you flip Bob's weighted coin a thousand times, and it comes up heads 497 times and tails 503 times, and you're like, oh, shit. Actually, that's not weighted. You update your priors based on the results.
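As a sketch of what that update looks like in the simplest two-hypothesis version, fair coin versus weighted coin: the 0.8 heads rate and the 90% prior are made-up numbers for illustration, while the 497-heads-in-1,000-flips figure is the one from the example above.

```python
import math

def log_binom(k, n, p):
    # Log-likelihood of seeing k heads in n flips if the heads probability is p.
    return (math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
            + k * math.log(p) + (n - k) * math.log(1 - p))

heads, flips = 497, 1000
prior_weighted = 0.9                     # prior: it came from Bob's emporium
lw = math.log(prior_weighted) + log_binom(heads, flips, 0.8)      # weighted coin
lf = math.log(1 - prior_weighted) + log_binom(heads, flips, 0.5)  # fair coin

m = max(lw, lf)                          # log-sum-exp for numerical stability
posterior_weighted = math.exp(lw - m) / (math.exp(lw - m) + math.exp(lf - m))
print(posterior_weighted)                # effectively zero: the flips win out
```

Even a strong prior toward "weighted" gets swamped by a thousand flips that look fair, which is the whole update-your-priors point.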
258
00:40:31,074 --> 00:40:34,190
Kayla: This is like how to think like a robot, but the right way.
259
00:40:34,640 --> 00:41:03,998
Chris: And that's why they talk about it, because they're constantly trying to solve that problem. But it's all just kind of a convoluted way for less wrongers to say that they think it's a good idea to update your beliefs, which, yeah, I think there's also a dash of extraordinary claims require extraordinary evidence in there, right? That's kind of what it means. If your prior, quote unquote, is extraordinary, then you need to say, okay, well, it's probably a low chance for this happening based on my prior knowledge of what could happen.
260
00:41:04,104 --> 00:41:04,778
Kayla: Right.
261
00:41:04,954 --> 00:41:24,970
Chris: I also definitely detect a hint of trust network in there, too. Right? Like, updating your priors is kind of like what we do when, like, somebody we respect says something dumb or quotes something from a site that we don't like or whatever, and we have to be like, okay, well, you know, before, my probability of this person saying something true was 70%, and now it's, like, 65%.
262
00:41:25,090 --> 00:41:25,562
Kayla: Right.
263
00:41:25,666 --> 00:42:12,070
Chris: Maybe that also updated my priors about the stupid website. Maybe it's not so stupid. I don't know. And I think everyone would benefit from checking their hindsight bias, too, and be more comfortable using uncertainty in their reasoning. Just because you gambled and won a lot of money doesn't mean that was a wise investment. And just because you invested in an index fund that went down this month doesn't mean that it was a bad investment. So I think it's a good way of thinking about things overall. It's just that rationalists use the word Bayesian in this really bizarre in group signaling sort of way, which has, like, little bearing on the actual mathematical Bayes theorem. But it's really important to them. They say it all the time. They consider themselves. Some even think that their community should be called Bayesianism instead of rationalism.
264
00:42:12,110 --> 00:42:41,884
Chris: Like, that's how into this they are. Which is so weird, because, again, it's just about, like, I know that I'm acting on uncertainty and I should update my beliefs. That's, like, tangentially related to the actual Bayes theorem. But, you know, Yudkowsky extolled Bayesianism in the scripture, I mean, the Sequences, and it became a Thing with a capital T. All right, Kayla, it's time. I know we were kind of holding off on doing criteria since.
265
00:42:41,932 --> 00:42:44,604
Kayla: Oh, I was gonna ask. I was gonna ask if we could do criteria.
266
00:42:44,732 --> 00:42:56,540
Chris: Yeah, we were kind of holding off because we were like, let's wait till we've covered transhumanism. But, like, as we pull on that thread, I think we're getting to a point where we should stop and reflect. And also, like, LessWrong is a nicely contained community.
267
00:42:56,660 --> 00:42:59,092
Kayla: Yeah, we don't have to do transhumanism. We can just do these things.
268
00:42:59,116 --> 00:43:00,564
Chris: I mean, I think we still should.
269
00:43:00,652 --> 00:43:01,436
Kayla: Not right now, though.
270
00:43:01,508 --> 00:43:08,064
Chris: Not right now. I think we still should at sort of, like, the end of this. But I think we can also stop here and be like, okay, what about LessWrong?
271
00:43:08,112 --> 00:43:17,496
Kayla: Stop here and extol judgment upon a group, which I have been dying to do. No, I haven't been dying to do it. But we haven't done it in a while.
272
00:43:17,688 --> 00:43:24,648
Chris: We haven't done the criteria in a while. I've definitely judged groups. I do that every day. All right, let's start. Charismatic leader.
273
00:43:24,744 --> 00:43:27,340
Kayla: Check, check, check.
274
00:43:27,880 --> 00:43:32,854
Chris: I mean, you might disagree with how charismatic the guy is, but people in.
275
00:43:32,862 --> 00:43:34,330
Kayla: The community, I mean, he is a hero.
276
00:43:35,110 --> 00:43:45,210
Chris: He is a hero. Self-proclaimed hero. People in the community do appeal to authority there a lot. So I'd say this is definitely present and fairly high. Expected harm.
277
00:43:45,790 --> 00:43:48,198
Kayla: The basilisk. It hurt people.
278
00:43:48,294 --> 00:44:04,430
Chris: The basilisk did hurt people. I think that Eliezer's particular brand of doomerism is harmful too, because it kind of discredits ideas that might actually be good, like AI safety.
279
00:44:04,550 --> 00:44:25,254
Kayla: I think that if Roko's basilisk hadn't been so impactful that it tore the community apart, I could be like, maybe that was just a one-off. But it seems, like, kind of fundamental to the group. So I'm just gonna. I'm gonna say. And, yeah, Eliezer, with the things that he's talking about now, and, like, having Time news articles that are like, we're all gonna die, it feels like we can say the harm is high. Yeah, expected harm is high.
280
00:44:25,302 --> 00:44:35,144
Chris: There's also, like, you know, SneerClub, which is, like, a community for, like, people who were in LessWrong that kind of, like, hate it now, right? That implies some harm there, too. So I'd say it's high-ish.
281
00:44:35,192 --> 00:44:36,016
Kayla: They're just haters.
282
00:44:36,088 --> 00:44:37,660
Chris: It's not, like, extremely high.
283
00:44:38,120 --> 00:44:38,976
Kayla: It's not Heaven's Gate.
284
00:44:39,008 --> 00:44:44,340
Chris: It's not like Heaven's Gate. It's not like Jonestown, but it's high-ish. Niche in society.
285
00:44:45,400 --> 00:44:46,944
Kayla: It is niche, isn't it?
286
00:44:47,112 --> 00:44:51,220
Chris: Yeah. I think it depends on what you consider, like, society in this case.
287
00:44:51,520 --> 00:44:55,454
Kayla: We live in a society. We live in a society, and I don't want to anymore.
288
00:44:55,592 --> 00:45:02,594
Chris: I think that if we're saying, like, all of America, I think it's definitely niche. I think when we're saying Silicon Valley, it's not niche.
289
00:45:02,642 --> 00:45:06,890
Kayla: Like, if I went to a party with a bunch of tech bros from Silicon Valley and I talked about Less Wrong, they would know what I was talking about.
290
00:45:06,930 --> 00:45:08,234
Chris: Absolutely.
291
00:45:08,402 --> 00:45:09,162
Kayla: I never want to go.
292
00:45:09,186 --> 00:45:21,114
Chris: And especially, I've heard that people will go to effective altruist gatherings, and you're gonna be a little bit lost if you don't know all of the jokes from Less Wrong.
293
00:45:21,162 --> 00:45:42,090
Kayla: Oh, my God, I'm so glad I live here. Even Hollywood's not this bad. And, like, I was gonna say this before, Hollywood has so much fucking jargon that it can be hard to have conversations with industry people, because just the nature of being on set, you have to have jargon, and there's so much jargon. There's not this much jargon.
294
00:45:42,670 --> 00:46:01,592
Chris: It's very jargonized. I would also say that if you're saying influence on society, it's pretty high, because the people that this is popular with are extremely powerful, or potentially powerful, folks in Silicon Valley. They're the rich, they're the elite. They're the ones that are making your iPhone and its apps. So.
295
00:46:01,656 --> 00:46:11,260
Kayla: But is it these people, or is it the opposite people? Is it Less Wrong that's these powerful people? Or is it the we're-not-scared-of-AI people that are the powerful people?
296
00:46:12,200 --> 00:46:58,120
Chris: Look, Eliezer Yudkowsky himself is very scared of AI. That doesn't mean that everybody in the rationalist community is. There's definitely some that aren't. There's some people that are like, I'm all about alignment and safety, and some people are more like, I'm an accelerationist, and I just think that we should put the pedal to the metal, and then there's people in between. Okay, so I bring the influence thing up as just sort of like, well, is it niche if, like, the influential people have this in their brain all the time? I don't know. I kind of feel like I'm leaning away from niche. Yeah. I don't know. Based on the fact that it's ubiquitous within Silicon Valley and the influence that Silicon Valley has on the rest of culture, I'm thinking not niche, actually.
297
00:46:59,180 --> 00:47:05,308
Kayla: I hate it. I don't know. This exists in a gray area, and I don't operate well in that.
298
00:47:05,484 --> 00:47:07,076
Chris: Yeah, I'm sorry that everything's gray.
299
00:47:07,148 --> 00:47:07,868
Kayla: Grey goo.
300
00:47:07,964 --> 00:47:09,440
Chris: Dogma.
301
00:47:09,900 --> 00:47:13,168
Kayla: Seems pretty dogmatic to me, actually. You tell me.
302
00:47:13,184 --> 00:47:13,720
Chris: I don't know.
303
00:47:13,800 --> 00:47:14,304
Kayla: You tell me.
304
00:47:14,352 --> 00:47:22,560
Chris: Yes and no. Right? This is part of the problem I've been having this whole time, is, like, these guys seem very reasonable and they're fighting bias and they're engaging in good faith.
305
00:47:22,640 --> 00:47:27,824
Kayla: And if part of the whole thing is update your body of knowledge, that's not dogmatic.
306
00:47:27,872 --> 00:47:49,572
Chris: But I've also seen. Yeah, but then I've seen people kind of referring to their, like, before: before I discovered the rationalist community, I was, blah, blah, thinking wrongly and dumb, but now that I've discovered it, everything's different. And some of that is like, oh, that's great, you found something that worked for you really well. Some of it's like, there's a before and after where I was, like, dumb before and I'm smart now. So that feels a little dogmatic.
307
00:47:49,636 --> 00:47:50,684
Kayla: Does that feel dogmatic?
308
00:47:50,732 --> 00:47:53,404
Chris: Yeah. Everybody else is wrong except me because I'm the rational one.
309
00:47:53,452 --> 00:47:57,252
Kayla: Yeah. Yeah. Okay. I think it's high.
310
00:47:57,436 --> 00:47:58,996
Chris: I think it's high-ish.
311
00:47:59,028 --> 00:48:01,800
Kayla: But, like, also high in a weird way. Yeah.
312
00:48:02,820 --> 00:48:04,052
Chris: Ritual: high.
313
00:48:04,196 --> 00:48:05,796
Kayla: Yeah, yeah.
314
00:48:05,868 --> 00:48:12,376
Chris: Read the Sequences and you'll know. Safe or unsafe exit? I did not get the impression that.
315
00:48:12,408 --> 00:48:13,592
Kayla: Like, seems safe to me.
316
00:48:13,616 --> 00:48:22,656
Chris: Bailing from the community was gonna get you ostracized or anything. In fact, there's a whole diaspora of rationalists that seem to be fine with the fact that they're a diaspora.
317
00:48:22,728 --> 00:48:37,736
Kayla: And you can be involved in this without having to sever all of your ties with everyone around you. So even if you leave Less Wrong, it's not like QAnon, where the philosophy slowly forces you to weed out everyone in your life IRL, and you're stuck with only the Internet. I don't know.
318
00:48:37,848 --> 00:48:45,560
Chris: But at the same time. At the same time, it makes me think, like, you know, if you get to the point where, like, all you can talk about at parties is AI.
319
00:48:45,640 --> 00:48:52,224
Kayla: Right. But that's different than the explicit things in QAnon of, like, the "how can you associate with those baby killers?" type stuff.
320
00:48:52,312 --> 00:48:57,648
Chris: Yeah. And it's not. We're talking about exit here, not the in-group jargon we already talked about.
321
00:48:57,744 --> 00:48:58,048
Kayla: Right.
322
00:48:58,104 --> 00:48:59,320
Chris: This is, like, whether exiting is okay.
323
00:48:59,360 --> 00:49:05,664
Kayla: This is what I'm saying. Like, you're not going to leave Less Wrong and not have a community existing. If you had a community going in, you have a community going out.
324
00:49:05,712 --> 00:49:19,188
Chris: Yeah, yeah. Percent of life consumed. I don't really know how to answer this one. I didn't run across anything where people were like, holy shit, I spend all my time here. That being said, though, like, there's some people that are, like, really prolific posters where I'm like, how do you have time to do anything else?
325
00:49:19,244 --> 00:49:22,644
Kayla: Well, it seems like this is all that people can, like, fucking talk about in some of these circles.
326
00:49:22,732 --> 00:49:25,692
Chris: Yeah. I mean, this has the potential to be pretty high.
327
00:49:25,716 --> 00:49:28,420
Kayla: I'm gonna say at least medium. Medium to the best of our knowledge.
328
00:49:28,500 --> 00:49:31,740
Chris: Especially if you're one of the, like, "I can only talk about AI at parties" people.
329
00:49:31,820 --> 00:49:32,440
Kayla: Right.
330
00:49:32,800 --> 00:49:34,888
Chris: All right, then finally, chain of recruits.
331
00:49:35,064 --> 00:49:36,256
Kayla: You tell me.
332
00:49:36,448 --> 00:49:44,424
Chris: I don't think anybody's recruiting anybody else here. I don't get the sense that people are like, come join this community. We're gonna love bomb you.
333
00:49:44,472 --> 00:49:55,544
Kayla: I feel like it's this. It's another. There's that similarity with Empty Spaces again, in that it's just, like, something that you happen upon or come upon or seek out, versus it knocking on your doorstep.
334
00:49:55,672 --> 00:49:56,000
Chris: Yeah.
335
00:49:56,040 --> 00:49:59,870
Kayla: Knocking on your doorstep. Can you knock on someone's doorstep? Why would you do that?
336
00:49:59,910 --> 00:50:33,580
Chris: Yeah, we're getting to the end here, folks, so our brains are even more wrong than normal. Okay, so let me go through the criteria here. We have charismatic leader: very high. Expected harm: somewhat high. Niche in society: not really, if you count the influence. Dogma: high-ish, depending. Ritual: very high. Exit: safe. Percent of life consumed: it can consume your life. Chain of recruits is low. What do you feel like that is?
337
00:50:34,480 --> 00:50:35,616
Kayla: I think I have a problem.
338
00:50:35,728 --> 00:50:37,664
Chris: Are you confused now? Because that's the whole thing.
339
00:50:37,712 --> 00:50:38,992
Kayla: I think I have a general problem.
340
00:50:39,056 --> 00:50:44,192
Chris: Though, is that we shouldn't be doing this and our whole premise is flawed.
341
00:50:44,256 --> 00:50:50,820
Kayla: I feel like I don't think I can call anything a cult. What the. Who the fuck am I? What are you talking about?
342
00:50:52,010 --> 00:50:55,470
Chris: We probably should have done that before episode like 100 or whatever.
343
00:50:58,050 --> 00:50:59,354
Kayla: I don't think it's a cult.
344
00:50:59,442 --> 00:51:02,210
Chris: You don't think it's a cult. Why? There's a lot of high scores there.
345
00:51:02,290 --> 00:51:27,986
Kayla: It feels that way. I think it scored low on some of the really important cult ones. I think that chain of recruits and safe or unsafe exit are really big for me. I think if there's not an unsafe exit, that's like a weighted answer. But the other things are so high. What did we call... I mean, QAnon we called a cult, obviously. Empty Spaces? Did we?
346
00:51:28,058 --> 00:51:29,058
Chris: I don't remember.
347
00:51:29,154 --> 00:51:33,314
Kayla: I'm just thinking of other Internet-based things. Did we call Cicada a cult? We called Cicada a beneficial cult.
348
00:51:33,442 --> 00:51:38,018
Chris: I mean, we already answered this. We did this episode. We did, yeah.
349
00:51:38,034 --> 00:51:39,802
Kayla: What did we say in Roko? What did we say about Less Wrong?
350
00:51:39,826 --> 00:51:42,818
Chris: Before, we said it was a cult. I can tell you what I think.
351
00:51:42,874 --> 00:51:43,538
Kayla: Yeah. But now I have.
352
00:51:43,594 --> 00:51:48,924
Chris: We also have more criteria now and more knowledge because this is all new stuff we're talking about. Do you wanna know what I think?
353
00:51:48,972 --> 00:51:50,040
Kayla: I would love to.
354
00:51:51,300 --> 00:51:53,092
Chris: I think it's a cult, but who cares?
355
00:51:53,276 --> 00:51:54,932
Kayla: That's kind of what I feel like.
356
00:51:55,076 --> 00:52:18,002
Chris: I think that it's, like, a lot of this. Okay, so first of all, these criteria are bullshit, and we're bullshit, and we're just doing this for fun. Like, the criteria are a way to kind of, like, talk about and think about and summarize all of the things that we've discussed on the episode. It's not an "it is a cult, it is not a cult" thing, because cults aren't real. That's not a real thing.
357
00:52:18,066 --> 00:52:18,338
Kayla: Right.
358
00:52:18,394 --> 00:52:38,034
Chris: It's just a way to, like, kind of have fun talking about very real stuff. So I think under the, like, yeah-we're-going-to-call-it-something thought experiment, hypothetically, yeah, it's a cult. In the real world? Like, I don't know. Maybe it is, maybe it isn't. Who cares? Like, the important stuff is that, like, it's a diverse community in terms of, like, the good and the bad.
359
00:52:38,122 --> 00:52:43,682
Kayla: I think a religious studies anthropologist could write a really compelling, like, paper on this community.
360
00:52:43,826 --> 00:52:46,242
Chris: Yeah. And I. And I like a lot, again, it's.
361
00:52:46,266 --> 00:52:49,010
Kayla: A new religious movement or a high control group or whatever.
362
00:52:49,170 --> 00:52:52,730
Chris: Sure, sure. And I like a lot of the stuff that they talk about.
363
00:52:52,770 --> 00:52:59,618
Kayla: Yeah. It's a cult, though. Like, I think a lot of it is a cult. It's the same way that, like, in my head I went, well, I think the singularity is a cult. Yeah.
364
00:52:59,674 --> 00:53:00,050
Chris: Yeah.
365
00:53:00,130 --> 00:53:01,210
Kayla: It's totally.
366
00:53:01,330 --> 00:53:02,714
Chris: It's a cult. Doesn't mean it's bad.
367
00:53:02,802 --> 00:53:03,138
Kayla: Yeah.
368
00:53:03,194 --> 00:53:05,266
Chris: It just. It's got some cult-like stuff going on.
369
00:53:05,298 --> 00:53:14,538
Kayla: Yeah, it's a cult. It's. It's probably not as bad as some of the other ones. I think that you can get out of this one a little easier than you can get out of something like QAnon, for sure. But it is a cult. It is a cult.
370
00:53:14,714 --> 00:53:20,778
Chris: I agree. And that's good because we agree with ourselves. So at least we're consistent, self-contained.
371
00:53:20,874 --> 00:53:22,378
Kayla: We're not dogmatic at all.
372
00:53:22,554 --> 00:53:29,114
Chris: The sources are all in the show notes. We're kind of just doing that now rather than yapping about them on here. I don't want to listen.
373
00:53:29,122 --> 00:53:32,350
Kayla: Don't you want to hear me say Wikipedia every episode?
374
00:53:32,770 --> 00:53:36,354
Chris: Do you know how many tabs I have open that are just Wikipedia, Kayla?
375
00:53:36,402 --> 00:53:36,826
Kayla: I do.
376
00:53:36,898 --> 00:53:59,106
Chris: Oh, my God. I have so many Less Wrong tabs open, too. Holy shit. Next time on Cult or Just Weird: we take notice of the fact that we are deep into an interconnected rabbit hole, and we talk to an expert about this whole transhumanist, extropian, singularitarian, rationalist, yada yada rabbit hole that we've been down.
377
00:53:59,178 --> 00:54:01,186
Kayla: Thank God. Tell me what to think. Someone tell me what to think.
378
00:54:01,218 --> 00:54:13,092
Chris: I know. Just. Just tell me what's going on. Tell me what to think. Please. I want to join your cult. Whatever it is, I'll join your cult. You tell me what to think so I don't have to worry about it anymore. And with that, this is Kayla, this.
379
00:54:13,116 --> 00:54:15,404
Kayla: Is Chris, and this has been Cult or Just Weird.
380
00:54:15,452 --> 00:54:17,380
Chris: Just please don't make me have to think about it.