Think religion is all about individual beliefs? Think again. The great sociologist Émile Durkheim argued that religion is "eminently social." This means that, in the real world, religion depends on relationships between people - sharing beliefs, performing rituals together, arguing about theology. So if you're going to study religion, you need to account for those social factors. And if you want to model religion using computer simulations, then you need your virtual people (or "agents") to influence one another in ways that don't necessarily match an individualistic worldview.
In economics, people are often modeled as rational actors who try to maximize their individual utility. (I'm not doing full justice to economics here - but economists haven't done full justice to reality for centuries, so fair's fair.*) Of course, actual people make instrumentally irrational choices all the time. Part of the reason for this is that humans are intensely social animals with all kinds of weird motivations that aren't captured in rational models. For instance, psychological research shows that people have a strong drive to affiliate with members of their own ingroups or tribes, and that this desire arises partly from a fear of death. At the same time, people's tribal commitments can cause them to value some things - a sacred piece of land, say - far more highly than any market assessment would. In fact, they might value it infinitely.
These "sacred values" play a big role in religion. In the Modeling Religion Project, it just so happens that we're trying to create computer simulations that give us insight into how religion affects people and societies. As such, one of our current tasks is to build an agent-based framework - a "virtual mind" - whose behavior will be guided by psychological and anthropological insights into sacred values, religiosity, and tribal dynamics.
In most economic models, agents assign values to things that reflect basic economic exchange tradeoffs: one piece of land is worth X dollars, given its location, potential uses, soil quality, and so forth. But in our virtual mind simulation, agents might assign values that have exactly diddly-squat to do with economics. They're dealing instead with sacred values, which can't be exchanged for any amount of money. This framework comes from researchers such as cognitive anthropologist Scott Atran, who uses sacred values theory to explain why conflict negotiations so often fail - especially when they have religious overtones.
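To make the contrast concrete, here's a toy sketch (my own illustration, not the project's actual code) of how an agent's decision rule changes once some resources are flagged as sacred. The `Resource` class and `will_trade` function are invented for this example:

```python
from dataclasses import dataclass

@dataclass
class Resource:
    name: str
    market_value: float   # ordinary exchange value in dollars
    sacred: bool = False  # sacred resources refuse monetary trade-offs

def will_trade(resource: Resource, offer: float) -> bool:
    """Toy decision rule: an ordinary resource trades whenever the offer
    beats its market value; a sacred one never trades, no matter how
    large the offer - its value is effectively infinite."""
    if resource.sacred:
        return False
    return offer > resource.market_value

# Hypothetical example: farmland trades, an ancestral shrine does not.
farmland = Resource("farmland", market_value=50_000)
shrine = Resource("ancestral shrine", market_value=50_000, sacred=True)
```

The key design point is that sacredness isn't just a very high price tag - it's a categorical refusal to enter the market at all, which is exactly why offers of money in negotiations over sacred land can backfire.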
But that's only one aspect of the model we're building. This virtual mind - which we're currently calling "LuCy," or "Lucid Cybernetic Agent" - will also seek out others of its kind whenever the world starts to look really threatening. For example, military conflict or a famine will justifiably increase LuCy's fear of death. In order to assuage this overwhelming fear, LuCy will become more tribal, grouping tightly with others who share the same group identity. At the same time, LuCy will start to believe more strongly in the spirits or gods that are associated with her particular group. In this way, the threat of death can make LuCy both more "religious" and more ingroup-oriented.
These dynamics reflect what's known as "terror management theory," which claims that people innately react to reminders of death by affiliating more closely with their cultural ingroup. Most cultures teach that a greater, overarching reality supersedes death - for example, a heavenly afterlife. Even cultures that don't believe in an afterlife usually teach that people's lives have some greater, cosmic meaning. This sense of meaning makes death seem less threatening - as an individual, you might die, but the meaning of your life isn't affected by your death. So when death looms in the real world, people become more motivated to believe in the "meaning worlds" that are created by their cultural ingroups. This urge explains why, for example, people become more religious after natural disasters, and why religiosity is higher in countries with more resource scarcity and higher mortality rates.
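The feedback loop described above - threat raises fear of death, and fear raises both tribalism and religiosity - can be sketched as a simple update rule. This is a hand-rolled illustration under my own assumptions; the coupling constant and the bounded state variables are invented, not taken from the actual LuCy architecture:

```python
def update_agent(fear, tribalism, religiosity, threat, rate=0.1):
    """One toy update step loosely inspired by terror management theory.
    All state variables live in [0, 1]. Environmental threat (0..1)
    raises fear of death, and fear in turn nudges both ingroup
    affiliation (tribalism) and religiosity upward."""
    fear = min(1.0, fear + rate * threat)
    tribalism = min(1.0, tribalism + rate * fear)
    religiosity = min(1.0, religiosity + rate * fear)
    return fear, tribalism, religiosity
```

Even this crude rule captures the qualitative prediction: a famine or a war (high `threat`) pushes the same agent to be both more tribal and more religious in tandem, rather than affecting either trait in isolation.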
So...these are the first two theories we're working to incorporate into our virtual religious mind. We're building LuCy step by step, one theory at a time, in what will be a two-year process. In actual simulation runs, many hundreds (or thousands) of individual agents, each running the LuCy mind, will interact together on a landscape that might be resource-rich, resource-poor, or somewhere in between. Using just terror management theory and sacred values theory, we'll be able to generate some really interesting things: tribal allegiance, ingroup/outgroup distinctions, religious violence, sacred commitments, and more. And once the many other theories are added to LuCy, we'll be able to simulate even more aspects of real-world religion.
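A full simulation run of the kind described above - many agents on a landscape whose resource level shapes their exposure to death threats - might look, in drastically simplified form, like the loop below. Everything here (the agent fields, the scarcity-as-threat mechanic, the constants) is my own stand-in, not the project's platform:

```python
import random

def run_simulation(n_agents=500, steps=50, scarcity=0.7, seed=42):
    """Minimal agent-loop sketch: each agent carries (fear, tribalism,
    religiosity) in [0, 1]; resource scarcity on the landscape acts as
    an ambient death threat each step. Returns the population's mean
    religiosity, which should rise with scarcity."""
    rng = random.Random(seed)
    agents = [{"fear": rng.random() * 0.2,
               "tribalism": rng.random() * 0.2,
               "religiosity": rng.random() * 0.2} for _ in range(n_agents)]
    for _ in range(steps):
        for a in agents:
            threat = scarcity * rng.random()  # stochastic exposure to scarcity
            a["fear"] = min(1.0, a["fear"] + 0.05 * threat)
            a["tribalism"] = min(1.0, a["tribalism"] + 0.05 * a["fear"])
            a["religiosity"] = min(1.0, a["religiosity"] + 0.05 * a["fear"])
    return sum(a["religiosity"] for a in agents) / n_agents
```

Comparing runs on a resource-poor landscape against a resource-rich one reproduces, in miniature, the pattern noted earlier: populations facing more scarcity end up more religious on average.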
Eventually, LuCy will be incorporated into a new modeling platform, which we'll make available to scholars outside our project. The hope is that people from many different fields - especially, but not limited to, religious studies - will be able to come up with research ideas, set up a simulation that reflects their hypotheses, and run tests. If enough people with enough unique perspectives and expertise start using simulations, we might have a whole new way to rigorously study religion.
LeRon Shults, one of the principal investigators on the project, is leading the way on LuCy, after several years of developing similar models based on current research in religion. He and my fellow postdoc Justin Lane, who's programming the architecture behind LuCy, have high hopes for the virtual mind software. In Justin's words, LuCy
will bring the study of religion up to speed and begin to reframe many of the questions in the field. The project is increasingly showing that there are tools available to deal with complex social phenomena like religion...we can no longer be content with rudimentary answers to the hard questions that religion poses.
Cool, huh? One of the reasons that religion is often hard to study is that it inspires people to behave in ways that are tough to predict. But if our models take into account the super-complex motivations that spring from sacred commitments, and the social dynamics that inspire them, we might be surprised at how much we learn.
* Ooooh, burn