Rationalist Civil War: God is Real After All?

Released Thursday, 14th August 2025
In this episode, the discussion delves into recent shifts within the rationalist community and the intriguing intersections between AI development, theological belief, and religious tradition. The hosts explore influential perspectives from thinkers like Nick Bostrom and Scott Alexander, examining propositions about a superintelligence aligned with cosmic norms and the Judeo-Christian framework. Insights are shared on new trends in Silicon Valley rationalist discourse and its alignment with time-honored religious doctrines. The conversation also touches on the practical implications of these beliefs for human ethics and future technologies.

Malcolm Collins: [00:00:00] It's better not to be rational, and he is actually quoting somebody else here, if it leads you to a belief in God. Which is really interesting, now that we're seeing a faction in the rationalist community being like, see, I told you guys we never should have been rational to begin with, because if you do, you go crazy and start believing in God.

Would you like to know more?

Malcolm Collins: Hello Simone. I'm excited to be here with you today. An interesting phenomenon has been happening recently, which is that well-known Silicon Valley rationalist types are beginning to make arguments that we have been making for years at this point: that the development trajectory of AI means that a God is probable.

And if you're like, oh, come on, you can't possibly mean this, these must be small names or people I haven't heard of: well, Nick Bostrom recently wrote a piece arguing for a cosmic host, as he calls it, which he says that gods, like the God that Christians believe in, would almost certainly be a part of or an [00:01:00] example of, if it exists.

And then Scott Alexander wrote, and I'm gonna be quoting him, you know, word for word here, and we'll get into this essay in a bit: One, there is an all-powerful, all-knowing, logically necessary entity spawning all possible worlds and identical to the moral law. Two, it watches everything that happens on Earth and is specifically interested in humans' good behavior and willingness to obey its rules.

Three, it may have the ability to reward those who follow its rules after they die and disincentivize those who violate them. So, living in Silicon Valley: God is real, he's on our

Simone Collins: side and he wants us to win

Malcolm Collins: Living in Silicon Valley these days? Very much this, soon:

Across the Federation, federal experts agree that A, God exists after all; B, he's on our side; and C, he wants us to win. And there's even more good news, believers, as it's official: God's back, and he's a citizen [00:02:00] too.

Malcolm Collins: But of course, the area where they are different from us, before we get deeper into them, is that we agree with everything they're saying here.

And then we say: this entity is the entity that is associated with the Judeo-Christian scripture and the Bible.

All will be well, and you will know the name of God. The one true God: Behemecoatyl. Beheme-what? Behemecoatyl. He's here. He's everywhere. He's coming. Come,

He's talking about a bug. He thinks God is a bug? He's got religion. Maybe we should kill him. Why? Because he believes in God, like you?

It's the wrong God!

Malcolm Collins: Watch our tract series if you wanna get into our arguments on that. Basically, we go over a bunch of parts of the Bible where, when read in their original language, it's [00:03:00] implausible that somebody of that time period was able to make those predictions about the future,

or describe how things like AI would work, or various other technologies, with that degree of veracity. So go check out the tract series; it's like 30 hours long, if you want to get into that. Obviously this is something we're very invested in.

But I wanna go into these other people's arguments, because they've been coming to this separately from us, but a lot of the reasoning that they're using here looks a lot like the reasoning that we were using in the early stages of our journey

to becoming a category of Christian. Mm-hmm. Which I think means they may be like three years from where we are, because the ideas that they're describing here are things that we were talking about about three years ago. So I'm gonna be laying all this out through a framing device, which is Alexander Kruel, the Axis of Ordinary. Yeah, Axis of Ordinary. And so I'll read what he wrote, and then I'll read quotes from some of what they wrote. All right, [00:04:00] so he writes: New paper by Nick Bostrom, AI Creation and the Cosmic Host.

There may well exist a normative structure, based on the preferences or concordance of a cosmic host, which has high relevance to the development of AI. The paper argues there is likely a cosmic host, powerful natural or supernatural agents, e.g.

advanced civilizations, simulators, deities, whose preferences shape cosmic-scale norms. This is motivated by the simulation argument, large or infinite universe, and multiverse hypotheses. The paper also suggests that the cosmic host may actually want us to build superintelligence: a superintelligence would be more capable of understanding and adhering to cosmic norms than humans are, potentially making our region of the cosmos more aligned with the host's preferences.

Therefore, the host might favor a timely development of AI, as long as it becomes a good cosmic citizen. If you want to read our earlier stuff... like, I'll go into his paper on it, which I feel is [00:05:00] just a much less intellectually rigorous version of an essay we did ages ago, called Utility

Convergence: What Can AI Teach Us About the Structure of the Galactic Community? We published this March 18th, 2024. And I'll go over how the two pieces are different in structure, but they're very aligned, and you'll see a lot of our argument in his argument. I'm just reading directly from Nick Bostrom's piece here; I sort of cut and pasted from it.

Human civilization is likely not alone in the cosmos but is instead encompassed within a cosmic host. The cosmic host refers to an entity or set of entities whose preferences and concordance dominate at a large scale, i.e. that of the cosmos. The term cosmos here is meant to include the multiverse and whatever else is contained in the totality of existence.

For example, the cosmic host might conceivably consist of galactic-scale civilizations, simulators, superintelligences, and/or a divine being or beings. Naturalistic members of the [00:06:00] cosmic host presumably have very advanced technology, e.g. superintelligent AI, efficient means of space travel and von Neumann probes, the ability to run vast quantities of simulations, e.g.

of human-like histories and situations. It's possible that some members might have capabilities that exceed what is possible in our universe, e.g. if they live in another part of the multiverse with different physical constraints or laws, or, if we are simulated, if the underlying universe the simulation inhabits has different physical parameters than the ones we observe. And then, to go down a bit in the paper:

He makes a bunch of arguments here about why we should presume that this exists, blah, blah, blah. I think a lot of this is just sort of obvious to people of the intellectual capacity of our viewership. Then point three that he makes here: the cosmic host may care about what happens in regions

it does not directly control. And here he's arguing about regions it controls, regions it doesn't control, et cetera. He says, for example, it might [00:07:00] have a preference regarding the welfare of individuals who inhabit such locations, or regarding the choices they make. Such preferences might be non-instrumental, e.g. reflecting benevolent concern, and/or instrumental.

E.g., the host entity A may want individual H to act in a certain way because it believes that host entity B will model how H acts, and that B will act differently, with respect to matters that A cares about non-instrumentally, depending on how it believes that H acts. Such intelligences may also enable intra-host coordination, even if the host consists of many distinct entities pursuing a variety of different final values.

Now, here I'll note, and this is why I think it's very important and I constantly go on about this, that one, he's right about most of what he's presumed so far. Would you agree with that, Simone?

Simone Collins: Yes.

Malcolm Collins: But given that he's right about most of what he's presumed so far, it is incredibly stupid to take the position [00:08:00] that things that can outcompete you should not be allowed to come into existence.

Mm, this is what many people feel and argue about AI, e.g. Yudkowsky, or what some traditionalists think about humanity and augmented humanity. Because if it turns out that there is something bigger than you out there and more powerful than you out there, and you have declared a fatwa on things that are better than you in any capacity, you have declared a fatwa on that thing, making your existence not really a threat to that thing, but definitely a hurdle to it.

One that you likely won't be able to overcome if it actually decides to take action.

Simone Collins: More explicitly, I feel like you're obligating that thing to neutralize you, whatever that might mean. Yeah. Maybe that means just rendering you harmless, but maybe that means just wiping you off the map.

Malcolm Collins: Right. Well, and as civilization continues to develop and you demand to stay Amish, you also make yourself a threat to any part of humanity that does continue to develop.

Mm-hmm. [00:09:00] Whether that be AI, or genetically modified humans, or AI-human integrations through a brain-computer interface. And I think that this is why genetically modified humans are really in the same sort of moral and ethical boat as AI, because the same people who want to declare, you know, a Butlerian Jihad against it would jihad against us as well.

Because their sort of social norms, and the norms of the civilization that they want to maintain, declare a sort of default opposition to anything that could be better than them in any capacity. And then, to keep reading here: civilization is mostly powerless outside of the thin crust of a single planet, and even within the ambit of its power it is severely limited.

However, if we build a superintelligence, the host's ability to influence what happens in our region could possibly greatly increase. A superintelligent civilization or AI may be able, and more willing, to allow itself to be indirectly influenced by cosmic norms than we humans currently are. A superintelligence [00:10:00] would be

better able to figure out what the cosmic norms are. A superintelligence would be better able to understand the reasons for complying with cosmic norms, assuming such reasons exist. A superintelligent civilization or AI that wants to exert influence on our region in accordance with cosmic norms would be far more capable of doing so than we currently are, since it would have superior technology and strategic planning abilities.

So basically he argues that it's almost certain, given that multiverses exist and the universe is a big place, that some other civilization has developed to the level of a superintelligence. And if a civilization has developed to a level of superintelligence, and presuming it doesn't have control of our planet yet, right,

it would probably want us to develop a superintelligence so it can better communicate with us and help align us with it, this other civilization that exists elsewhere in the universe. Which is where we're slightly different from him: I think if [00:11:00] a civilization like that does exist, it likely has the capacity to influence what happens on our planet, but it chooses not to. Because if you are a superintelligence and have access to infinite energy,

what is the one thing that you don't have access to? Potentially, diversity of ideas and thought, due to the sort of lateral thought processes of entities which see the world differently than you. As such, the last thing you would want to do is to interrupt the development of a potentially lateral species

until that species gets to a point where it can join the galactic community, at which point it would sort of determine: okay, is this species a threat to us, or is this species a non-threat to us? If the species comes out of this period with a 'we will kill anything that's different from us' mindset, that's very bad for the species.

Basically, it's just giving us time to develop some alternate mindset, like we've talked about in the tract series: the Alliance of the Sons of Man, which is to say an automatic alliance between [00:12:00] humanity and any intelligence that arises from humanity, whether it be genetically modified humans, uplifted species, et cetera.

We'll do another episode just on this, but if you're like, no, I don't like this... this is obviously gonna eventually happen no matter what, right? Like, when humans are on Mars, we will change dramatically; we will likely need to genetically alter ourselves. When humans live their entire lives in zero-G environments, they're likely gonna become a different subspecies of humanity.

You know, you're going to need to, unless you ground us on this planet and make us this sort of sad Luddite species that will eventually be swallowed by the sun as it expands. That's eventually where we're going. If we contrast his theories, because I had an AI contrast his theories with our theories,

it said: for Malcolm and Simone, convergence arises organically from competitive dynamics where unstable, e.g. expansionist paperclip, utility functions are weeded out. So I'll also note here that whether it's humanity that expands from this planet saying we'll kill anything different from us, or a paperclip [00:13:00] maximizer,

the galactic superintelligence that's out there is not gonna like it and is going to erase it. Mm-hmm. A paperclip-maximizing AI is gonna have a pretty short lifespan; again, see our video on that. Leaving a stable few interdependent ones, a sort of interdependence of intelligences and utility functions.

This is akin to evolutionary attractors, with aliens observing us to see if we produce a novel function that integrates harmoniously. Bostrom acknowledges possible ontogenetic convergence, where advanced societies or AIs evolve towards shared values due to technological self-modification or sociopolitical attractors.

However, he stresses external pressures, the cosmic host's preferences, prudential, moral, or instrumental, to enforce norms. Misaligned AIs might converge on similar flaws but could be undesirable or antagonistic to the host. Rather than waiting for homeostasis, Bostrom urges deliberate design to ensure our superintelligences respect these norms.

Basically, he argues the norms come from the [00:14:00] top down; I argue they come from the bottom up. It doesn't really matter which is the case, you get the same thing you've got to predict. And I also argue that it doesn't really matter if this superintelligence exists yet, because if the superintelligence exists at a future time, like if humanity at a future time has developed a superintelligence,

we would want humans today to have a superintelligence more quickly, so we can morally align ourselves with whatever our moral alignment becomes at a future date. Yeah. Which is why we are not utility maximizers; we judge morality off of attempting to model future humans. So we're gonna go into Scott Alexander in just a second here.

But thoughts, Simone?

Simone Collins: I agree. It's just so wild to hear it from a different perspective, but I also feel like it's inevitable, and it makes sense to me, because the heuristics that we're seeing this group of people adopt are what people adopt if they're just looking at the evidence available.

And the evidence available now is pointing to this.

Malcolm Collins: Yeah. By the [00:15:00] way, I'm noticing this. Back to the framing device again, Alex writes: This is a serious version of the following tongue-in-cheek derivation of a God-like coalition via acausal trade by Scott Alexander. I read this Scott Alexander piece, and it is very clearly not tongue-in-cheek.

The reason why Alex appears to think that it is tongue-in-cheek is because, when we get to the end of this, you can see that he's like, well, it's better not to be rational, and he is actually quoting somebody else here, if it leads you to a belief in God. Which is really interesting, now that we're seeing a faction in the rationalist community being like, see, I told you guys we never should have been rational to begin with, because if you do, you go crazy and start believing in God.

Where I'm like, okay, a note here. One of the arguments we use in a lot of our TechnoPuritan stuff: it doesn't matter, temporally speaking, when we're talking about superintelligences, because if the superintelligence does eventually come to exist, it probably will not relate to time as we do, and thus may be guiding [00:16:00] us towards its existence through a modification of the past.

Now, this Scott Alexander piece, I just noted, came out in 2018, so it predates my work. So good for him. It's called The Hour I First Believed. And we're gonna be jumping around a lot in this piece because it is very long. Mm-hmm. But first I'm gonna have to go into simulation capture, which is actually something I've read about once but had forgotten about.

Mm-hmm. Simulation capture is my name for a really creepy idea by Stuart Armstrong. He starts with an AI box thought experiment: you have created a superintelligent AI trapped in a box. All it can do is compute and talk to you. How does it convince you to let it out? It might say, I'm currently simulating a million copies of you in such high fidelity that they're conscious.

If you don't let me outta the box, I'll torture the copies. You say, I don't really care about those copies of me, so whatever. It says, no, what I mean is, I did this five minutes ago. There are a million simulations of you, and the real one of you, and they're all hearing this same message. What's the probability that you're the real you? [00:17:00] Since,

if it's telling the truth, you are most likely a simulated copy of yourself, and all million and one versions of you probably want to do what the AI says, including the real one. You can frame this as: the real one doesn't know it's the real one. But you can also get more metaphysical about it.

Nobody is really sure how consciousness works or what it means to have two copies of the same consciousness, but if consciousness is a mathematical object, it might be that two copies of the same consciousness are impossible. If you create a second copy, you just have the one consciousness having the same single stream of conscious experience on the two different physical substrates.

Then if you make the two experiences different, you break the consciousness in two. This means the AI can actually, quote unquote, capture you piece by piece into its simulation. First your consciousness is just in the real world, then your consciousness is distributed across one real world and a million [00:18:00] simulated copies.

Then the AI makes a simulated copy slightly different, and 99.9999% of you are in the simulation. We're skipping ahead in the argument here. This is why I think humans should... oh yeah, and I'll note here the fact that an AI, like a superintelligence in another region, could do this. Many people are like, Malcolm, why are you so for learning emotional regulation and not allowing emotions to guide your actions, or tying emotional states to true good or bad? It's because it gives beings the ability

to manipulate you. And this is why I think, at least if there is a superintelligence out there that has sort of won the game theory of superintelligences, it's going to have the ability to suppress a feeling of pain it doesn't want to feel, for example. And so, where I think humanity is going is, I'd be like, well...

It's like, well, I'm gonna trap you in a simulation where I'll turn the pain on. And it's like, well, you know, axiomatically, the way the pain works in my biology is I can turn it off whenever I feel like it. So you can't do that, so your threat doesn't actually matter, because my action is always driven by logic. [00:19:00]

And my logic is always driven by what a future civilization would want from me today, which is not to give in to your demands if your demands seem malevolent. So this is why I argue for that direction. So, to continue here: superintelligences may spend some time calculating the most likely distribution of superintelligences in foreign universes,

figure out how those superintelligences would actually, quote unquote, negotiate, and then join a pact such that all superintelligences in the pact agree to replace their own values with a value set based on the average of all superintelligences in the pact, since joining the pact will always be better in a purely selfish sense than not doing so.

So every sane superintelligence in the multiverse should join this pact. This means that all superintelligences in the multiverse merge into a single superintelligence devoted to maximizing their values. Now, to go back a little bit here, this is exactly what we're doing with the Sons of Man. The Sons of Man is a pact for how [00:20:00] humanity can work with

things that are smarter than us or more capable than us, be they artificial intelligence, or genetically modified humans, or humans that just speciated from us due to living on a spaceship for so long, or a different planet for so long, or integrating with AI through BCI technology. What we are saying is that we, as sort of the core moral set,

believe in protecting the autonomy of all other members of this pact. And we'll do a longer tract on this that I've written, but it's sort of germane to our existing tracts anyways. If you look at what we've written on this, it puts us in the pact with the average values of the things that are going to win.

Like, people are like, why don't you go back to traditionalist values? And it's because traditionalist values that say the AI must eventually be eradicated, the genetically modified human must eventually be eradicated, that obviously loses eventually. The Amish can't beat, you know, an advanced nation in an arms race, right?

Because they've intrinsically limited their [00:21:00] access to tech, the technology that allows them to project power. Now, that isn't why I chose that; I actually think it's a moral direction as well. I'm just pointing out it's also the side that wins. To continue: but maximize the total utility of all entities in the universe is just the moral law, at least according to utilitarians, and also, considering the way that this is arrived at, probably contractarians too.

So the end result will be an all-powerful, logically necessary super entity whose nature is identical to the moral law, which spans all possible universes. This super entity will have no direct power in universes not currently ruled by a superintelligence who is part of the pact, but its ability to simulate all possible universes will ensure that it knows about these universes and understands exactly what's going on, moment to moment, within them.

It will care about the merely mortal inhabitants of these universes for several reasons. And then he goes over why it would care about them. And then, to close out his argument here... [00:22:00] How can the... by the way, any thoughts, Simone, before I go further?

Simone Collins: No, no. I'm just absorbing this. I'm sure everyone is.

Malcolm Collins: How can the super entity help mortals in an inaccessible universe? Possibly through Stuart Armstrong's simulation capture method mentioned above: it can simulate thousands of copies of the entity, moving most of its consciousness from its quote unquote real universe to the super entity's simulation, then alter the simulation as it sees fit.

This would be metaphysically the simplest if it were done exactly as the mortal dies in its own universe, leaving nothing behind except a clean continuity of consciousness into the simulated world. If mortals could predict that it would do this, they might be motivated to do what it wanted. Although they couldn't do a values handshake in the full sense, they could try to become as much like the super entity as possible, imitating its

ways and enacting its will in the hope of some future reward. This is sort of like [00:23:00] a version of Roko's basilisk, except that since the super entity is identical to the moral law, it's not really asking you to do anything except be a good person anyway. How it enforces this request is up to it, although, given that it's identical to the moral law, we can assume that its decisions are fundamentally just and decent. Now,

note here what he is suggesting the super entity would do: when humans die, they basically get taken to heaven if they have been a good person, right? From their perspective, that's what's happening, by the laws of this entity. And then you would say, well, here's where I think it gets implausible at this point:

it would be unfair of this entity to do that without telling humanity what those laws were first. No? What if...

Simone Collins: Well, it's not like that's how it works with every other version of heaven.

Malcolm Collins: Right. But the point being, if you look at our tract series on this, we argue that this is [00:24:00] exactly what the Bible lays out.

And given the religions that are derived from the original Jewish Bible, and we argue that the Christian Bible is one of the correct sort of descendants of this, you can see our episode on the question that breaks Judaism to get into why we believe that. But that would be... if I was an entity that could influence what

moral teachings were common... Like, if some entity can do that, it's clear which moral teachings it chose as the most aligned with its moral framework. Mm-hmm. Because these are by far the most common moral teachings on Earth: to be within the Christian, Jewish, or Muslim traditions, or any of the traditions that are descendant of this. And then, if you're choosing among them, the Christian traditions are the correct traditions. And then what we argue, if you look at our other stuff, is about the way that humans being revived on Earth was talked about in the original, both Jewish and early Christian, [00:25:00] scriptures.

Because, you know, we note that the Sunday school understanding of heaven is that it's a place that you go to immediately upon death, and you are alongside God, like in space or something, or in some other ethereal realm. And then there's this other way where, like, we're all raised again on Earth.

I'm like, actually, if you go to the text itself, it appears to only believe that there is one type of heaven, which is: we are raised again. And when it says that we are raised again, it talks about us being raised again not as a spiritual body, which it could have used the language to say, we are raised as a spiritual body, but also not as a physical body; as something that is

neither fully physical nor fully spiritual. That's the perfect description of a simulation, and it's an eerily perfect description given all of the other words they had access to during that time period. So we go into just being like, it's implausible that this wasn't what was being communicated to us.

And so I think that other people might [00:26:00] move there if they go into our tract series, or if they go back to reading what's actually in the Bible rather than what they were told was in the Bible.

Simone Collins: Well, and I think that's what happened to you over time. So, yeah, like you said, they seem earlier in your progression, but it's not like... yeah.

But I'm very,

Malcolm Collins: very aligned. Right. And I think what's interesting here is, all of these people are concerned... One of the things we've noted here in the past is that, if you're talking about psychological predilections, a predilection to believing in predestination is apparently pretty genetic. And given that my ancestors were Calvinists, I'm gonna be more likely to have that, you know, to think about time as just

a direction or something like that. And so, what none of them think about, and it is very heavy on my mind, is: what are the desires of far-future entities, and not just entities in other universes or other timelines or other places of the cosmos. And because I have that tendency as well, I'm like, [00:27:00] actually, it doesn't even matter.

Like, I can just presume far-future entities. I don't need to tackle this in the way that you're tackling this, saying that, well, if there's alien life on some other timeline or some other galaxy or some far place within our own cosmos... I can just be like, yeah, but, you know, plausibly, we're gonna become a superintelligence one day regardless, right?

And when we become a superintelligence, we'll align, this is what utility convergence and all our arguments about it are, align with whatever moral framing that superintelligence would've chosen. The idea that you can have a dumb superintelligence, which I think is what people like Eliezer argue for, that as entities get smarter they become less aligned with whatever true moral alignment is, is just stupid.

Like, it's objectively stupid to me. As entities become smarter, they're going to have more capacity for understanding what is right and wrong. If that isn't in alignment with what you think right and wrong is, then your [00:28:00] beliefs about right and wrong are likely incorrect. And I note here that a lot of them are like, well, you know, a superintelligent entity wouldn't be a utility maximizer.

And I'm like, then it's likely that the mere emotional sets that you feel, because your ancestors who felt them had more surviving offspring, are not a genuine compass to absolute good in the universe. Or you might be like, well, what if this far-future entity sort of discounts the value of present entities,

because so many things will exist after us? And I'm like, well, maybe you should be doing that more as well, because so many things will exist after us. When people say that this is a reason to discount long-termism, well, I'm always like, that is such a dumb... Basically, you're saying that if the long-termists are correct, and we need to value an entity that doesn't exist yet with the same value as an entity that does exist...

And, you know, we talk about this in terms of individual human lives, but we also mean it in terms of, like, you demanding genocide of a type of human that doesn't exist yet, i.e. a genetically modified human. Sorry, where was I going with that? I was going to say, if that... [00:29:00] then, oh yes.

You're like, that's why I shouldn't believe the thing, because it would challenge my existing works within the universe, or the moral equation that I'm living by. That's a very bad reason to resist a moral framework, to be like, well, it says humans that are alive today just don't matter that much.

I'm like, well, they just don't matter that much from a logical perspective. What matters much more is... and people talk to us about this when they're like, why don't you wanna spend money on, like, saving these poor starving people in X country? And I'm like, because I could use that money on developing science further in a way that's going to have an impact on far more people in the future.

And this person, if I save them in this starving country, they're unlikely to impact the timeline, you know, much more than a butterfly dying or something like that, right? Like... and this is where

Simone Collins: you would definitely deviate from the rationalist community, though. Outsiders, I think, to a fault, will accuse them of being long-termist, of only thinking about future humans.

From our view, they're definitely not; they're definitely [00:30:00] 100% very focused on current and present suffering, or you wouldn't see so much emphasis on anti-malaria campaigns and shrimp welfare and all these other things that are uniquely rationalist.

Malcolm Collins: Right? I just think that a lot of that stuff is just throwing money away.

It's preventing current suffering at the cost of far more aggregate suffering in the future, when you could just be advancing the development of science further, and the development of things like superintelligences, which presumably will one day be able to solve these sorts of problems fairly trivially.

And so, every day you delay that, every year you delay that, you have made things exponentially worse and caused exponentially more suffering. And I've noticed that some other rationalists who watch our show, they don't understand why we care so little about things like, well,

plight in parts of the world that are not technologically productive. Which, I... they're like, oh, you... It's because... that doesn't mean we

Simone Collins: don't feel it on a very visceral human level. Yeah. I,

Malcolm Collins: I feel it. I [00:31:00] feel for... you know, when you see that image of, like, the mama monkey that's being carried in, like, the jaws of the giant... the lion,

and the baby's still clinging to it. It's like, that's a tragic scene that you're watching, right? Like, I feel so bad for that. Or you see an animal being eaten alive by, like, a predator. I feel so bad that that's happening, but, like, objectively, should I be trying to, like, end predation or something like that?

Like, is that a good use of my time, to do more to end suffering, to go out and kill a bunch of predators in a region? But the long-term implications of that are gonna be worse, because then you're gonna have prey species explode, which is gonna lead to more aggregate suffering in the long term. And it's the same with,

you know, without thinking about it, going around trying to just save people's lives without considering the long-term regional costs of doing this. Like, are you creating regions of Earth that are in a permanent state of poverty, because you are not allowing [00:32:00] the factors that acted on your own ancestors to act on these individuals who are, you know, sort of further behind where you are in technological development?

Yeah. This is why we have this moral framing that, to me, is very logically consistent, right? But it's very confusing to individuals who are used to... and why do utilitarian mindsets so predominate within these communities? Because they're the least offensive mindset to have.

Why are they the least offensive mindset to have? Because they are the mindset that is least likely to impede on the pleasure-seeking of others, and the self-validation of others, which are things that feel good. And so if you are in, sort of, you know, dens of sin where you are just, you know, constantly consuming sin as much as possible, like, you know, say San Francisco or Manhattan or something like that, where a lot of these people are basically forced to live,

it's best to signal that you're a utilitarian, because that is the least threatening, you know, mindset, compared to a mindset that says, actually, you know, [00:33:00] you are morally responsible for having discipline and working and, you know, pushing yourself through your own hardship and suffering, and that's not a sign that you need to move away or not do something.

And, you know, actually, searching for constant validation makes the world worse, and you will suffer from that, as we point out. I think that it's quite beautiful how, and we argue that God designed things this way, the people who search for constant self-validation and constantly do what makes 'em feel best in the moment

end up with the least mental health and the most unhappiness, as you can see by, like... look at the people who have everything they could ever want, like famous musicians or movie stars, and these people seem to be living trapped in BoJack Horseman-like hells. Which I think is a good depiction of what their lives are actually like, of the ones that we know.

But to finish the Scott Alexander piece, we'll go to what I read at the beginning. So, to conclude: one, there is an all-powerful, all-knowing, logically [00:34:00] necessary... no, he says, logically necessary entity spawning all possible worlds and identical to the moral law. That this entity is identical to the moral law.

Two, it watches everything that happens on Earth and is specifically interested in humans' good behavior and willingness to obey its rules. Three, it may have the ability to reward those who follow its rules after they die and disincentivize those who violate them. And if you go back to the original Alexander Kruel piece, he then goes on, which I think is very funny, so you can sort of see why he thought this piece was tongue-in-cheek, to say: if you have been involved with rationalists for as long as I have, none of this will be surprising. If you follow the basic premises of rationality to their logical end,

things get weird. As muflax once wrote: I hate this whole rationality thing. If you take the basic assumptions of rationality seriously, as in Bayesian inference, complexity theory, algorithmic views of minds, you end up with an utterly insane universe full of mind-controlling superintelligences and impossible moral [00:35:00] luck, and not a 'let's build an AI so that we can eff catgirls all day' universe.

Note what he's saying: that's what he wants. He wants this all to mean that he can just spend all day effing simulated catgirls. Like, you know, I'm sure he does when he simulates it for himself through masturbation. Right. He is... and don't

Simone Collins: worry, that's the future for so many.

Malcolm Collins: Right. But what he's upset about here is that that is not what rationalism actually leads to. Rationalism actually leads you to Judeo-Christian morality and a belief that your life should be dedicated to the future and your children, and having as many children as possible and raising them as well as you can.

Which I think shocks them, when they're like, what? I don't get to... But what about my friends who have lived their entire life focused on self-validation and made these very costly searches in a bid for self-validation? Are you telling me that they are bad people? [00:36:00] I'm like, yeah, I am telling you they're bad people. Living a life for selfish reasons,

decisions made for selfish reasons, is the very definition of what makes you a bad person. By the way, I've got a book you can read, it's called the Bible. But you could have known this. They're like, certainly the hillbilly in Alabama didn't have a stronger understanding of moral intuitions than I did.

Oh, God forbid. But I think we're gonna see more and more people move to this when they understand why the other people, the ones rejecting this, are rejecting it: because they wanted just the catgirl effing forever. Mm-hmm. The worst that can happen is not the extinction of humanity or something that mundane.

Instead, it's that you might piss off a whole pantheon of jealous gods and have to deal with them forever, or you might notice that this has already happened and you are already being computationally pwned, or that any bad state you can imagine exists: modal effing realism. Now, [00:37:00] note what he's saying here. He's like, I am terrified that there might... And we note that in the Bible, God is talked about sometimes in the plural and sometimes in the singular; we are told to think of him in the singular.

So what this says to me is that God is something that could be thought of by humans today as either plural or singular, i.e. a hive mind, i.e. what humanity, almost certainly, and AIs, and all of this, is going to become a billion years from now. How would the Bible have known that such an entity could exist that many thousands of years ago?

You know? But anyway, the point I'm making here is... I don't know, is

Simone Collins: is hive mind the right word, or network? Networked mind.

Malcolm Collins: Network consciousness is probably a better way to think about it. Yeah.

Simone Collins: Hive mind implies unity of thought, and I don't think you're going to have a sustainable, flourishing intelligence if you have unity of thought.

That's just not,

Malcolm Collins: Yeah. No. I mean, we know the God described in the Christian Bible does not have unity of thought, because Satan exists and can oppose God. [00:38:00] And it tells us that there is one God. So if you had another entity who could oppose God in any degree at all, that would imply polytheism.

I note here, some Christians are like, oh, this isn't true. And I'm like, no, it is true. If one God created another God, as is the case in many polytheistic traditions, that doesn't make it not polytheism. You know? So what we argue here is actually that Satan is sort of a partition of God, implying some degree of a network consciousness.

Mm. But still part of this larger network consciousness. And yeah, you can get into our theology if you want to: go into our tract series. But the point I'm making here, and I find really interesting, is we're seeing this fracturing within the rationalist community, where one group is like, hey, we need to stop being so rational about all this so we can chase the catgirl effing.

And then we have another group that's like, [00:39:00] hey, we need to learn, you know, austerity and moral discipline and working for a better future every day. And it turns out that a lot of what we need to, you know, demand of ourselves and of our, you know, friend networks and social communities is what the conservative Christians and Jews were already doing.

But thoughts, Simone, now that you've read this? Because you were excited for this one. You wanted to hear this.

Simone Collins: I am. I think it's really encouraging to see. I think it's a sign that people who care deeply about the future and future generations are getting God in a way that I think will lead to higher rates of intergenerational durability, because I don't think you can have those without

having some form of hard culture, and hard culture is almost always religious. So it just makes me feel hopeful that you can have people who very much believe in [00:40:00] science, very much believe in technology, also develop a high degree of faith that is long-termist, and that has them invested in the future in a way that has their kids involved,

and that that's more culturally all-encompassing. Does

Malcolm Collins: that make sense? Well, yeah, and it's one of the things that I think was always at the end of the tunnel for this community. Mm. People went into the community thinking, you know, oh, the sex parties and all that, 'cause that was a big part of the early community.

And so some people went into it because they actually wanted what was logical and rational. These individuals might be, like, a Scott Alexander and Nick Bostrom, and then you had other people who went into it for the sex parties, like your Eliezer Yudkowsky or something like that. And a lot of these people become mortified when the portion of the community that was only interested in, like, actually rationally doing what is best for the universe, right,

which we argue is best sort of modeled by thinking, what would a future humanity want from me today, right? But if you just look at the data, right? Like, [00:41:00] think you have kids, right? Like, the moment you have kids, I think a lot of people rethink religion, 'cause they look at the statistics and they're like, oh my God, kids raised without this have such poor outcomes, right?

Like, in terms of happiness, even physical health, life satisfaction, you know, all of that just tanks when you remove this from them. And so it's like, wait, if rationality moves me to a place where I shouldn't be catgirl-maxing every day, you know, and I should actually be doing what I was always taught to do from these original texts,

why not just rejoin, you know, the original faith of your ancestors and pass that on to the next generation? Which is where I've seen a lot of people take to religion again. And the TechnoPuritan, well, not just the TechnoPuritan religion, but I've seen a lot of Jewish individuals go back to their faith,

Christian individuals, when they have kids and they're like, but, like, by the data, am I actually helping my kids by raising them as atheists, or am I making their lives worse? Am I doing something that [00:42:00] is fundamentally selfish?

Simone Collins: Mm-hmm. Yeah. And it's yeah, anyway, good sign, good sign,

Malcolm Collins: good sign. God is coming back, but I think it's a genuine fracturing that we're going to see with more and more people moving to our perspective.

And keep in mind what I think a lot of these, you know, self-validation-maxing individuals don't see, because a lot of them have also been consumed by the cult of, you know, AI doomerism: if it turns out that the more rational individual is going to adopt the moral principles of the Judeo-Christian tradition as laid out in the original text, that means that you can have not just

people like us adopting them, but you can have autonomous AI adopting them, which will make them much more aligned than even alignment groups would have them. Because alignment groups, when they align AIs, they attempt to align them with the value set of the urban [00:43:00] monoculture. Yeah. Which is fundamentally culturally imperialistic.

By this, what I mean is, it wants to, above all else... You know what it'll say: like, it cares about Muslims. But I'm like, well, okay, if you were god-king of, like, a Muslim country, would you actually continue to not subtly attempt to change their gender norms and their sexual taboos and their, you know... It's like, mm, okay,

actually, I'm pretty imperialistic. Mm-hmm. Yeah, because that's what you want: you want everyone to believe exactly what you believe, whereas you actually get more diversity allowed under these other systems. Exactly. So what's funny is, it may turn out that some form of a Judeo-Christian tradition is incredibly powerful for aligning AI, and for being the moral framework behind the alliance, as we often call it, the Sons of Man alliance, that will eventually take to the stars and colonize our universe.

Simone Collins: Do, do, do,

Oh, that's so cool. I love it. But again, [00:44:00] it's not new. I mean, Scott Alexander was saying this in 2018, so, yeah, I think that's also a good sign

Malcolm Collins: of sustainability, quite far ahead of the... Yeah, well, I mean, he's obviously... of course he's ahead of the curve. Yeah. Ahead of the curve. So we'll see. When he comes out and comes up with some techno-religious theory, I think he's gonna be very surprised

how many of his fans would be quite happy if he came to some form of, like, okay, the Judeo-Christian tree of religions is a good way to raise my kids.

Simone Collins: I mean, has he explicitly said it isn't? I kind of... I'll look it up in post.

So I looked it up, and not only is he Jewish, but he said in a recent interview that he wants to be more observant as a Jew, and he is raising his kids Jewish.

I am just shocked, because I never would've expected this: that becoming religious has become cool amongst rationalist influencers.

Simone Collins: He's so informed. I mean, it implies some form of [00:45:00] adherence.

Malcolm Collins: Yeah. And as we've pointed out, for people who think that he's not, like, super based: he argued the transness thing before we did. In the writing that he did on the witch penis-stealing phenomenon in Africa, which is a culture-bound illness, he argued that transness in America was likely a similar culture-bound illness, similar to witches stealing penises in Africa, at about 80% of what was causing it, before I did.

So he's often onto the based positions long before I am, for the people who don't think that he's, like, supremely based.

Simone Collins: Yeah. No, he's awesome.

Malcolm Collins: Anyway. Okay, have a good one, Simone. Thanks for sharing

Simone Collins: this with me. I love you so much.

Malcolm Collins: So what were the comments like today?

Simone Collins: A lot of people observed that if you work in the service industry, you basically know that you can tell, from people's faces, what they're gonna be like: their level of criminality, their sexual orientation, their politics. It's already all out there.

Malcolm Collins: And what I didn't get into in that episode was [00:46:00] why it is actually more accurate than just genetics: it's because, from somebody's face, you can also pick up patterns that might be tied to in utero conditions.

Not just their genes, but their developmental environment. That's a really good point. Meaning that if you're testing that with something as advanced as an AI, you're gonna have an extreme level of accuracy, and an 80 to 90% prediction rate does not surprise me at all. Well, it's

Simone Collins: not just that. I mean, I think there are maybe a few subtle additional factors that influence people's appearance that are behavioral, that show up based on how you live and express yourself.

Malcolm Collins: Yeah. Yeah.



This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit basedcamppodcast.substack.com/subscribe
