A few weeks ago the cover of Newsweek featured an article by Eben Alexander, a neurosurgeon who previews what he refers to in his forthcoming book as "proof of Heaven." In his piece, Alexander argues that heaven exists because he had the experience of heaven without the biological capacity for producing experiences "himself." "Him" here refers to his body and, specifically of course, his brain: He was, the proof goes, in a coma while he had his experience of the divine.
There are a number of missing or weak links in what Dr. Alexander calls "proof of" -- but surely means "evidence for" -- the existence of Heaven. Sam Harris recently detailed one of the most important things missing from Alexander's case: evidence that his premise (that the experience occurred while he had no brain activity) actually holds. That is, evidence that all brain activity ceased during his coma, that his heavenly experience occurred while he was in the coma, and that Alexander is in a position to know how likely those things are. I think Harris offers a convincing case that Alexander greatly inflates each of these odds.
But if we really want to know whether heaven exists based on Alexander's account, there is something else we need to uncover, beyond the answer to the question of how likely it is that brain activity actually explains Alexander's experience (a likelihood I'll assume everyone can agree is at least greater than 0 percent, particularly since Alexander is now very much alive). Those of us truly serious about evaluating Alexander's proof also need to know how likely it is that Alexander would reach for the divine as an explanation for his unexpected and incredibly salient experience, whether or not a divine source was actually at work. That is, we want to know whether Alexander is biased in his perception of the source of his experience.
Why this is important was first formalized by an 18th-century minister named Thomas Bayes and discussed most recently in Nate Silver's book The Signal and the Noise. Both Bayes in his day and Silver now are very much concerned with questions like the one Alexander is attempting to answer: How do I figure out the state of affairs in the world (or, in this case, beyond it) based on the information I have access to? Bayes suggested that we accomplish this by interpreting new information in the context of our prior beliefs about the world, including beliefs about how likely it is that an event falls under one kind of explanation or another. Even though there is good evidence that our brain can do a version of this, doing it optimally (i.e., reaching the best possible explanation for incoming information) requires the additional step of correcting for known biases in our initial explanations. For instance, when Silver applies these formulas to real-world polling data, he finds that he arrives at more consistent explanations (and, by extension, predictions for future data) if he adjusts the data for the Democratic or Republican bias that has been observed in a given polling source.
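Bayes's suggestion can be written down in a few lines. The sketch below is a minimal illustration of the rule itself, with purely hypothetical numbers chosen for the example (a 10 percent prior, and likelihoods of 1.0 and 0.5); none of these figures come from Alexander's case or from Silver's data.

```python
def posterior(prior, likelihood, likelihood_alt):
    """Bayes' rule: P(H | E) = P(E | H) * P(H) / P(E),
    where P(E) = P(E|H)*P(H) + P(E|~H)*(1 - P(H))."""
    evidence = likelihood * prior + likelihood_alt * (1 - prior)
    return likelihood * prior / evidence

# Hypothetical numbers: a 10% prior belief in hypothesis H, an
# observation certain under H (P(E|H) = 1.0) but also fairly likely
# under rival explanations (P(E|~H) = 0.5).
print(posterior(0.10, 1.0, 0.5))  # ≈ 0.18 -- the evidence shifts belief only modestly
```

The point of the sketch is that the same observation can be strong or weak evidence depending on how likely it is under the competing explanation, which is exactly the term a bias correction adjusts.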
Alexander has biases that similarly need to be corrected for when he tries to explain his experience. We know that these biases exist not only because of his prior commitments to faith, but because psychologists have shown that, as a human, he is intuitively hard-wired to seek out exactly the answer he arrived at. It is among our most natural and predictable tendencies to look for answers that involve goals, intentions, and emotions rather than those centered on chance or complex interactions among molecules and neurons. We can see that these intuitive tendencies are biases, and not just adaptive methods for truth-discovery, by looking at the large set of cases in which we have had such intuitions but knew they were unwarranted: the cloud formation that looks suspiciously like a certain animal, the complex thoughts and feelings we attribute to our goldfish, those chance encounters that couldn't possibly be a coincidence, and so on. And we know that these intuitions come on most naturally when an individual has an unexpected and vivid experience (whether positive, as in Alexander's case, or negative, as in the case of a grieving widow).
My colleagues and I have recently shown that how strongly you trust your intuitions (possibly including those just mentioned) plays a role in how strongly you believe in God and the afterlife. We found that intuitiveness correlated with belief, and that people who had just written about a positive experience with intuition reported stronger belief in God than those who wrote about a negative experience with intuition (or a positive experience reasoning through a problem). The fact that even a neurosurgeon might fall prey to these "divine intuitions" is completely in line with our results: the strength of someone's faith is related to the weight they place on intuition, independently of any effect that education level and traditional measures of intelligence might also have had on their beliefs.
In other words, not only does Alexander the human have a baseline bias toward thinking his experience was caused by a divine source, but Alexander the believer has an even greater cognitive bias when approaching the question, "What is it that I just experienced?"
None of this goes any distance toward disproving the existence of Heaven (nor is it meant to), but if you combine the likelihood that Alexander's experience can be explained by worldly sources and the likelihood that he would come to the explanation he did whether or not heaven exists, you have to conclude that his proof is still at the very least incomplete.
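That combination can itself be sketched in Bayesian terms: once we grant that an observer's intuitions make a divine report likely whether or not heaven exists, the report carries little evidential weight. The numbers below are purely illustrative assumptions, not estimates of the actual probabilities involved.

```python
def posterior(prior, p_report_if_true, p_report_if_false):
    """Bayes' rule for the probability of a hypothesis given a report."""
    evidence = p_report_if_true * prior + p_report_if_false * (1 - prior)
    return p_report_if_true * prior / evidence

prior = 0.10  # hypothetical prior that the experience had a non-worldly source

# An unbiased observer would rarely report a divine source without one
# (P(report | no heaven) = 0.2); a strongly intuition-driven observer
# reports one almost regardless (P(report | no heaven) = 0.9).
unbiased = posterior(prior, 1.0, 0.2)
biased = posterior(prior, 1.0, 0.9)
print(unbiased, biased)  # the biased observer's report moves the posterior far less
```

The larger the observer's bias, the closer the posterior stays to the prior, which is the formal version of calling the proof incomplete.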