Despite having developed in entirely different environments, with entirely different histories, fictional cultures very often believe things that are similar to one or more human religions. Sometimes the similarity is nearly perfect. TVTropes calls that the Lowest Cosmic Denominator. For the particular case of Christianity being duplicated, it becomes “Crystal Dragon Jesus”. But does this make sense?
Note that there is a hard distinction between ethics and religion. Ethics is prescriptive statements about behavior. Religion is descriptive statements about the universe that assert some supernatural element. “I undertake the training rule to abstain from taking life” is an ethical statement. “If you murder, your self will spend its next incarnation in one of the Naraka realms” is a religious one.
Some things about religion in the fantasy literature make perfect sense. If there are zombies running around that a priest can turn back by waving a particular symbol, or if the armies of the Valar are fighting Morgoth across a continent, or if there is a reincarnating elemental-power kung-fu master saving the world every generation, it is quite obvious that something important is going on.
But this does raise a question of terminology: I have defined religion as characterized by belief in something supernatural. But if Aslan is running around the landscape fighting Tash, isn’t that automatically now part of the normal world? Belief in something we would call supernatural is irrelevant if it is an everyday occurrence. And religion no longer applies.
Terry Pratchett plays with this in Discworld. In that setting, gods exist, many of them. But they only have as much power as they have true believers. The Great God Om is significantly inconvenienced when he comes down to the Disc and finds that he only has one faithful follower left, leaving him incarnated as a maimed tortoise. Because most of the Discworld gods are gratuitously cruel, much of the population of the Disc is quietly Nay Theistic to avoid giving them more power than they have (“Of course they exist. But don’t go around believe’n in ’em. It only encourages ’em”). It’s rather like being in a city dominated by rival mafia dons: either you get one to protect you, or you keep your head down to avoid the attention. Vocal atheists tend to get struck by lightning from the gods that do have power, and so the surviving population of them are mainly fireproof golems. This being Pratchett, the social commentary is of course quite deliberate.
Human Religions in SciFi
When an author has incorporated religion(s) into a science fiction setting, particularly those set in the future, human societies tend to have those religions either be current ones or be similar in many ways. This makes sense if there has indeed been historical continuity, but it is important to remember that all real religions change dramatically over decades and centuries. Special mention here goes to Dune, where Frank Herbert took some liberties with Zen Buddhism and Sunni Islam to create the Zensunni adepts. Furthermore, Dune has the Bene Gesserit, who exploit religion for their own political ends – deliberately seeding legends on planets for the protection of their agents.
Herbert also did something very important with Dune: he did the research. Herbert was raised Catholic and became an atheistic Zen Buddhist later in life, but he took care to incorporate Muslim and Jewish as well as Christian and Buddhist elements into his world-building. That level of preparation is rare. It is far too easy to fall into Write What You Know and Author Appeal without doing the research, and to produce a fictional culture dominated by the single religion the author is familiar with or professes themselves – or by a complete lack of religion, if the author is an agnostic or atheist. I do not have the statistics to back up the statement, but it seems to me that there is an excess of Christian themes in at least the English-language scifi and fantasy literature as compared to the actual worldwide distribution of religions (although this is perhaps offset by religion, or the lack thereof, not being that important in many scifi and fantasy works).
There is a related problem, where a fictional culture that is supposed to follow one specific real religion is portrayed as something else entirely. In Buffy the Vampire Slayer and a lot of other works, Wicca is misrepresented. Going back a few decades and somewhat more abstracted, James Blish was significantly confused about 1950s Catholic doctrine when he wrote “A Case of Conscience”. There are far too many examples of both kinds: some misrepresentation comes from people not doing the research, and some from people wanting to make a religion look as bad (or good) as possible.
For scifi aliens, there shouldn’t be anything exactly identical in an alien religion as compared to any human religion – there are two entirely different histories. Again, this is religion and not ethics. There are two themes that work as an excuse for there being too many identical elements: ancient astronauts or time travel. In Babylon 5, everyone thinks that the Vorlons look like angels. That was deliberately engineered by the Vorlons, who liked to go around the galaxy hacking the genetics of non-technological races so that they would like flying bilaterally-symmetric glowing figures. Babylon 5 also had a messianic religion centered around Valen, a Minbari prophet who said that he would return in the future. That was explained by Valen being a time traveler, Jeffrey Sinclair, who was born a thousand years later.
Other times there is a partial excuse for Crystal Dragon Jesus. If the religion of an alien culture is defined by the needs of the story the writer wants to tell, they will slant the world-building appropriately. Taking one more from Babylon 5: the Centauri were themed like the Roman Empire, so they have an extensive pantheon of various misbehaving gods and an imperial cult where emperors are elevated to godhood. In Star Trek: Deep Space Nine, the writers wanted to make the Captain into an actual messiah, so Bajor has a religion based around dual gods – good and evil – who are both actually Starfish Aliens that like to live inside wormholes. Captain Sisko becomes the emissary of the good ones (“The Prophets”), and disappears into heaven/closed time-like curves inside the wormhole at the end. Cargo Cults are popular in science fiction too, as a way for otherwise technologically-limited groups to have access to something without being able to replicate it.
But, these excuses for similarities aside, why should aliens have anything like human religions at all?
The origins of many individual human religions are argued. But a tendency to invoke supernatural explanations for phenomena is obviously common among humans, and has been for a very long time. Anthropological models of the development of religion describe religions as an emergent property or byproduct of known cognitive biases of human brains. We tend to assume correlation and causation even where neither exists, tend to falsely assume intelligent intent, and are easily manipulated by even entirely false fears. We fool ourselves into being more sure of our statements than we actually are, over-estimate how much others agree with us or how much we disagree with them, and prefer beliefs that we know others hold. We also reflexively divide others into people in our group and outsiders, and favor the in-group over the out-group.
And so unless their members are very careful to avoid it, human societies quite rapidly develop numerous elaborate and very specific fictional scenarios to try to explain things that may not even exist. And things can get very dangerously confused when those different scenarios conflict with each other. To use TVTropes vocabulary again, religions are very devoted fandoms.
Would intelligent aliens necessarily have any of the same biases that we have? And if they didn’t have one or another, would religions as humans know them still appear? If not, what else might emerge instead?
Is some level of in-group favoritism inevitable for an intelligent species? Or can intelligence develop without it, automatically valuing all members of the species equally? What society evolves from that, and would something recognizably similar to human religions appear? Can we say that any religious institutions that do appear would be far less hierarchical, and perhaps far less important in society, if people did not tend to put the needs of those who share their beliefs in some supernatural concepts above the needs of those who do not?
Of course, given such a large difference in cognition, many things other than religion would be different. I explored this with the ursians, where over-valuing the in-group quickly leads to a genetic diversity crisis, and so they have less such favoritism than humans do. This shows up in their sexual ethics, which are different from human norms because that was what optimized survival. But I have not considered what religions they might or might not have.
Pareidolia makes most of us prone to see human faces and figures and other patterns we consider significant everywhere: clouds, sand dunes and hills, the shells of crabs, a colon and a single parenthesis. Some level of pareidolia is an evolutionary advantage: it is good for any animal to be sensitive to patterns corresponding to its prey, its predators, and others of its species. But consider an alien species with much weaker pareidolia than we have. They would not have emoticons, and very different art. They would also not have people asserting that random patterns of char on toast were a miraculous appearance of a religious figure, or that the reflection of light off of a polished steel dome was a sign from God. Would such people still come up with anything we would call a religion? If so, what might it be like?
And one more:
Agent detection is the tendency to assume an intelligent intent where one does not necessarily exist. We do it very easily – just consider how we anthropomorphize even relatively simple devices, such as dice or a deck of cards. Taking a more complex system: when did you last complain that your computer is out to get you? This can be explained as having a survival advantage: anything that could potentially indicate actions by a predator or an adversary should be approached with caution, and false positives cost far less than false negatives.
I don’t think an intelligent species could evolve without some level of agent detection. Part of any successful intelligence has to be being able to identify other intelligences, whether to cooperate with, confront, or avoid them. But like pareidolia, we could consider a species where the criteria for what makes them think “there is intent there” are more or less stringent or just different. How does that change a society, as well as any tendency for religions to appear or not?
Many of these questions may seem a bit abstract, but I think they’re useful to think about. Truly realistic alien cultures will differ from human norms in ways that are not simply derived from their environments, and recognizing and confronting the biases inherent in how we think shows some possibilities to explore. I’ve focused on religion or the lack thereof here, but this extends to everything that such aliens might think or do.
I think there is a dearth of good science fiction that explores these themes. We have space opera, where the aliens are often indistinguishable from humans in how they think. Other works have aliens whose thought patterns are said to be incomprehensible, but that usually seems to me an excuse to skimp on the world-building. There is a large body of literature (including some of my own attempts) that explores how cultures and behaviors can be directly changed by the environment a species lives in, but that usually assumes ‘like humanity unless noted’. Given that Most Writers Are Human, it is hard to work through the implications of alien cognition consistently. Does anyone know of such a work?
You know it’s coming. You’ve got your shotgun, your food and water, and useful barricades for blocking doors and windows. But, the numerous fictional portrayals aside, what would really happen in the event of a zombie outbreak? Or perhaps we should ask instead: what couldn’t happen?
They just keep going and going and going…
Zombies should not be the Energizer Bunny, any more than humans are.
The most obvious limitation is that the zombies are decaying. Wait long enough, and they’ll presumably stop moving. There’s a reason normal humans shut down after they’ve taken that degree of trauma. If the spinal cord has been significantly damaged, the zombie can’t be walking anywhere. Even if the nerves are structurally intact, inadequate blood flow will lead to nerve damage very quickly: within 5 minutes for the central nervous system, maybe 20 minutes for the spinal cord, digestive organs, and muscles. Since the trauma that turns someone into a zombie is usually accompanied by lots of blood loss, how are their tissues getting oxygen and why aren’t they permanently down within tens of minutes?
If we grant that the zombie is structurally sound and getting enough oxygen that it doesn’t immediately go into an ischemic cascade, there are still problems. Consider energy. If a zombie doesn’t eat, how far can it go?
There is enough ATP in normal human muscles for a few hundred meters of walking. So clearly the zombies still have ATP synthesis going, or they’d be notably non-threatening. If we assume glycolysis but no way to replenish sugars, after thirty or forty kilometers the zombie will drop – hitting the wall like any endurance athlete. If zombies have massively up-regulated fat metabolism, they can go a couple of hundred kilometers before they run out of body fat to burn into motion. This would be a convenient way to explain why the zombies would like high-fat foods like brraaiinnss, but as we’ll see that distance limit makes zombies relatively easy to contain.
If the zombies can eat each other and/or humans, then they can go further. But absent an external food supply, the population of zombies will decay exponentially with distance (the math is the same as the rocket equation), with an e-folding distance of ~200 km. That means that any initial population of zombies will die off quickly. Shambling around twenty-four hours a day looking for brains is energy-intensive. At normal walking speed, the population of zombies will die off with a timescale of a couple of days.
All of this assumes that the zombies are limited by their food supply. Omnivorous zombies or zombie cows are much more dangerous – there are far more grains than brains. But the normal human-eating zombie is pretty easy to contain, because the infection dies off so quickly. All you need to do is give the zombies something to chase until they drop dead (or un-undead, or more dead, or whatever). It’s persistence hunting in reverse: in the zombie apocalypse, you survive by having the zombies chase you.
In the interests of not losing the zombies when you run far enough ahead of them to have a snack and refill your water bottles, it’s probably best to start with a car with a full tank of gas. Have a couple of your friends serve as rear-guard to make sure the zombies don’t get too close, and drive in a big loop around the infected area, pied-pipering the zombies to their doom. With 600 km of driving, almost all of the zombies trailing you will be dead. Just 5% will survive if they can eat each other and run at the same time. You don’t need to do anything silly like armor the car. A zombie limited to normal human strength can’t break the windows, and the extra weight would cut into the gas mileage.
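The die-off math above is simple enough to sketch. This is a minimal back-of-envelope script, assuming only the ~200 km e-folding distance stated earlier; everything else follows from the exponential form.

```python
import math

# The stated e-folding distance: the horde shrinks by a factor of e
# for every 200 km it shambles while eating only itself.
E_FOLD_KM = 200.0

def surviving_fraction(distance_km: float) -> float:
    """Fraction of the initial horde still moving after covering
    distance_km, assuming exponential decay with distance (the same
    form as the rocket equation: N = N0 * exp(-d / d_e))."""
    return math.exp(-distance_km / E_FOLD_KM)

print(f"after 600 km: {surviving_fraction(600):.1%} remain")  # → 5.0%
```

Three e-foldings (600 km of driving the loop) leaves e⁻³ ≈ 5% of the original horde, which is where the 5% figure above comes from.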
Speed and Transmission
Slow zombies. Why do these work at all? In that case, it’s not even a matter of outrunning your friends. These zombies are so slow, it just doesn’t make sense that they’ve been able to infect anybody beyond the initial carriers without something else going on.
Which brings up the problem of the transmission of the zombification. The zombies, regardless of speed, want to eat your brains. And in most cases, a zombie without a head isn’t going anywhere. So, assuming transmission via bites or other bodily fluids, the only people who get turned into zombies and not just eaten will be those slow and unlucky enough to get bitten or scratched, but fast enough to get away afterwards. This is a big limit on infection. In order for the outbreak to spread, each zombie has to bite far more people than the one-per-two-days limit from the energy content of their muscles and also avoid getting eaten by fellow zombies.
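The spread condition here is the same branching-process threshold used in epidemiology: the outbreak grows only if each zombie creates, on average, more than one new zombie before it drops. A toy sketch, with all numbers purely illustrative assumptions rather than canon from any particular film:

```python
def new_zombies_per_zombie(bites_per_day: float,
                           lifetime_days: float,
                           p_escape: float) -> float:
    """Expected number of new zombies each zombie creates: bites it
    lands per day, times its active lifetime, times the chance the
    victim gets away afterwards (bitten-and-escaped turns; bitten-and-
    caught just gets eaten). The outbreak grows only if this exceeds 1."""
    return bites_per_day * lifetime_days * p_escape

# A zombie that lasts ~2 days on muscle energy, lands one bite a day,
# and whose victims escape half the time:
r0 = new_zombies_per_zombie(bites_per_day=1, lifetime_days=2, p_escape=0.5)
print(r0)  # → 1.0, right at the threshold: the outbreak barely sustains itself
```

With the couple-of-days energy budget from the previous section, even these generous assumptions put the outbreak right at break-even, which is the point: transmission by bite is a very inefficient way to start an apocalypse.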
The Resident Evil series gets around all of this by having the virus become airborne or blood-borne depending on the situation. But then all of the characters that are fighting the zombies should have been infected as soon as they got into the same room as one. And of course those zombies are far too fast and too long-lived to be consistent.
You can avoid a lot of these issues by ignoring physics by means of magic. Dungeons & Dragons includes zombies, of course, generally under the control of some sort of evil necromancer. More necromancy shows up in Dead Beat, a book in the Dresden Files, where the zombies require an ongoing rhythm, metaphysically replacing their heartbeats, in order to keep moving. Either way, physics is not the limitation.
That said, the usual issues with zombies still apply — just stabbing one won’t stop it, though head shots probably work better. On the plus side, they’re generally not contagious.
Preferred solution for D&D? Bring a cleric.
For Dead Beat? Bigger zombie. No, really. We’re talking getting your undead pet T-Rex to munch the evil zombies for you.
Of course, with the zombie apocalypse, an ounce of prevention is worth a pound of cure. You know it’s a nasty virus? Fine. Treat it like ebola and stash it in a biosafety level 4 lab. Biohazard suits with positive air pressure (to avoid contamination even if there is an accidental puncture) are mandatory, along with multiple airlocks and serious decontamination of anything from inside. And all of the air vents run through micropore filters and then directly into furnaces, to burn up anything airborne that escapes (there goes Resident Evil). Similarly, if you’re testing some experimental new retrovirus on humans or animals… keep them in a nice, safe quarantine, long enough to check for weird side effects. And really, test the animals first.
So the zombie apocalypse is really quite ineffective as apocalypses go. We’ll end this with a few funny things.
Here’s a take on the corporate zombie by Jonathan Coulton.
And, regardless of your political affiliation, we present what should be an entertaining “endorsement” by Joss Whedon.
Both of us wrote this one – RMR & MWB.
I keep talking about the importance of considering what technologies (in the broadest sense of the word ‘technique’) can do to a society when writing a story. In many stories, the technology itself isn’t the point of the story. Inception is a heist film – in some sense, the dreaming technology is less relevant than the interactions between Cobb and his memories of his wife, and his quest to return to his children and home even if that means assaulting Fischer. This is the second of Asimov’s three kinds of science fiction: the technology is incidental to the adventure.
But what about role-playing and strategy games, where the techniques concerned are the main way that the players’ characters interact with the game world? Here it is essential to consider how the technologies interact with one another, in the form of possible combinations of the rules. Otherwise the game may end up essentially unplayable. Most good games don’t have that severe of a problem, but considering all of the possible combinations of rules becomes very difficult for complicated games where the full description of the rules may be hundreds of pages long. In particular, economics is very hard to do right.
If the game developers and beta testers haven’t found and fixed all possible problematic combinations of rules, there will be exploits to take advantage of. And given a large enough base of players, they will be found. There are very long lists of such game-breaking techniques, but here I’ll focus on economic ones.
In all of the versions of Dungeons and Dragons, there are exploits that allow relatively low-level characters to defeat any opponent. The most notorious example is Pun-Pun, a level-1 starting character in DnD 3.5 with arbitrarily high power. But there are far easier exploits in DnD than that. The purchase price of a ten-foot ladder is less than the sale price of the two ten-foot poles and shorter rungs that it is made of. A player can in theory drain all of the cash out of the local economy.
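The ladder arbitrage fits in a few lines. The prices below are as I recall them from the 3.5 SRD equipment list (treat them as assumptions): a ten-foot ladder costs 5 cp, a ten-foot pole lists at 2 sp (20 cp), and used gear sells at half its list price.

```python
# Assumed 3.5-era prices, in copper pieces (cp).
LADDER_COST_CP = 5
POLE_LIST_CP = 20
POLE_SALE_CP = POLE_LIST_CP // 2            # gear sells at half list
PROFIT_PER_CYCLE_CP = 2 * POLE_SALE_CP - LADDER_COST_CP  # 15 cp per ladder

def cycles_to_drain(town_treasury_cp: int) -> int:
    """How many buy-ladder, saw-into-poles, sell-poles cycles it takes
    before the local economy has no copper left to pay out."""
    return town_treasury_cp // PROFIT_PER_CYCLE_CP

# A small town holding 500 gp (50,000 cp) of liquid coin:
print(cycles_to_drain(50_000))  # → 3333 trips to the shop
```

At 15 cp profit per cycle a patient level-1 character empties the town in a few thousand trips, which is exactly why the moderator-is-always-right rule exists.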
In the Dresden Files RPG, there is another simple exploit. A lot of the competitive balancing in the game relies on high-powered magical characters being unable to use complicated technology, particularly computers and other electronics. This is justified by the characters’ ‘magical energy’ damaging the electronics. They are walking techbane. But moving water is also established to block magical energy. You can still use a cell phone, a GPS, a computer terminal, and all of the fancy gadgets in a modern hospital as long as either they or you are encased in a thin layer of circulating water. Time to go shopping at a fire-fighter uniform supplier.
In Mage: The Ascension, some characters can magic the laws of probability and win the lottery. That may get a certain amount of unwanted attention, but you only need to do it once. To deal with this sort of thing, all of these games have one basic rule: the moderator is always right.
Things get a bit more problematic in games without a moderator constantly adjusting the rules to avoid or limit game-breaking. It doesn’t even have to require a large player base – AI programs can do the same thing. In 1981 and 1982, a challenge using the rules from the sci-fi RPG Traveller was twice won by an early learning program, Douglas Lenat’s Eurisko. It found that thousands of kamikaze ships would defeat any other solution.
A more recent example: Starcraft is very close to competitively balanced, but in scenarios with nearly-infinite resources and equally-skilled players, the Protoss race has a slight advantage over the others. They can assimilate enemy units and potentially have three times the army of anyone else (if the game lasts that long).
Game software can be updated. This is easier for online games. In World of Warcraft, there have been both positive and negative game-breaks. On the positive side, there was a bug that could be exploited to allow a single paladin character to do death by a thousand cuts to the hardest-to-defeat enemies in the game in one move (as opposed to the usual method of two dozen characters taking several minutes to bring it down). On the negative side, a programming bug caused the Corrupted Blood debuff to turn into a pandemic inside the game, killing off or wounding almost all characters. Those were all fixed by obvious rule patches in short order.
The Limits of Rule Patching
But rule patching can only go so far. The most complicated game breaks arise from interactions between many player characters, in effect a large synthetic economy. The World Of Warcraft internal auction markets are nowhere close to equilibrium, and arbitrageurs can make lots of in-game money. In some cases, trading bot programs have accumulated up to several times the total amount of money in circulation on any one game server. Blizzard deals with that by shutting down bot accounts whenever they are detected. But Matt Fisher at Stanford tells me that bot programs can be programmed to appear almost identical to a human player who obsessively trades on the market, so there is no way to fix that.
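The core of what such a trading bot does is ordinary arbitrage: scan the listings for items priced well below their typical sale value, buy them, and pocket the spread on resale. A minimal sketch, with item names and prices invented for illustration:

```python
# Hypothetical auction-house snapshot: (item, asking price) listings,
# plus each item's typical going rate. All values are made up.
listings = [("iron ore", 8), ("iron ore", 25), ("silk", 40), ("silk", 12)]
typical_price = {"iron ore": 20, "silk": 35}

def arbitrage_profit(listings, typical_price, margin=0.8):
    """Buy any listing priced below margin * typical value; the profit
    on each is the spread up to the typical price (auction-house fees
    ignored for simplicity)."""
    profit = 0
    for item, price in listings:
        if price < margin * typical_price[item]:
            profit += typical_price[item] - price
    return profit

print(arbitrage_profit(listings, typical_price))  # → 35
```

The hard part is not this logic but its scale and disguise: spread the same loop across thousands of listings, add human-like pacing, and the bot becomes indistinguishable from an obsessive human trader, which is exactly the detection problem described above.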
Perhaps the limits of rule patching can be excused. When game systems are so complicated that no-one can predict their outcomes, and when they involve the interactions of thousands of separate agents, fixing problems with them is as hard as fixing problems with the real-life economy. Doing significantly better than random chance would be impressive.