The fun just never stops in parapsychology. This is the story of a significant experiment published in 2011 and the absolutely enormous tsunami of skeptical idiocy that followed. It was entirely predictable. Lies, distortions, character attacks, and shady attempts to discredit research are just another Tuesday for this field of science.
The Experiment by Daryl Bem
The researcher was Daryl Bem, professor emeritus at Cornell University, and the landmark study was titled Feeling the Future: Experimental Evidence for Anomalous Retroactive Influences on Cognition and Affect. It was basically a well-known psychology experiment with a couple of parameters switched, so that instead of measuring a subject’s physiological reactions after they were exposed to certain stimuli, the measurements were taken before the exposure. The study showed that people were subconsciously reacting to random stimuli even before they experienced them.
As far as parapsychology experiments go, this was nothing out of the ordinary, and it was built on existing studies. In other words, Bem would have had a pretty good idea of what the results would be even before he started. (When I saw the study for the first time, I also had a pretty good idea what the results would be even before I saw it. The nature of the experiment would suggest that a relatively small, but identifiable effect would be found, and that’s what happened.) Want to dive into the weeds? Read the long description here.
So why did this experiment generate such an uproar? It was published in a high-prestige psychology journal: the Journal of Personality and Social Psychology (Bem, 2011). Skeptic James Alcock, who was attacking parapsychology before Luke Skywalker blew up the Death Star, immediately wrote an article for the Skeptical Inquirer attempting to discredit the research. Ever true to The Skeptical Way, he threw out any semblance of careful inquiry in favor of a bombastic declaration:
Careful scrutiny of this report reveals serious flaws in procedure and analysis, rendering this interpretation untenable.
Yup, the skeptic thinks he’s somehow found flaws that the peer reviewers and the journal editor missed, even though they reject about 80% of submitted studies and weren’t exactly open to the idea of psychic ability in the first place. They clearly didn’t measure up to his vast intellectual powers.
The article then launches into a revisionist history of parapsychology, claiming that it is a failed science. This is a necessary preamble to convince his audience that this newest experiment is a failure as well. There were rebuttals by Dean Radin and Daryl Bem. Long story short, this particular criticism went nowhere because it was riddled with errors.
Ray Hyman, another critic dating back to before Star Wars, was determined to get his two cents in:
“It’s craziness, pure craziness. I can’t believe a major journal is allowing this work in,” Ray Hyman, an emeritus professor of psychology at the University of Oregon and longtime critic of ESP research, said. “I think it’s just an embarrassment for the entire field.”
All that was missing was a big “HARRUMPH!” followed by “Get off my lawn.”
The skeptics never considered the possibility that the peer reviewers and journal editor-in-chief accepted the experiment for publication because their scientific integrity outweighed their cognitive dissonance.
In an unusual move, Eric-Jan Wagenmakers was allowed to respond in a comment published alongside Bem’s paper. You can find it here. Wagenmakers performed a Bayesian analysis, which differs from the statistical methods normally applied to psi research.
Bayesian statistics incorporates prior beliefs and probabilities into the analysis. This is great for dealing with problems where the evidence is vague or conflicting. Did Bob shoot his neighbor? The blood samples matched (probable guilt under a frequentist analysis), but they were close friends, he had no motive, and he had an alibi (probably not guilty under a Bayesian analysis).
The problem with using Bayesian statistics on controversial subjects is that you’re required to plug in your biases, which can lead to garbage in, garbage out. That’s what happened here. Wagenmakers set the prior odds against psi existing at 99,999,999,999,999,999,999 to one. You also have to input the probable effect size. This is a more complicated number, but the gist is that Wagenmakers’s premise was that psi is essentially impossible, and that if it did exist, it would show up in the form of superpowers. Starting from a premise like that, it’s not surprising at all that he failed to find evidence for psi. (You can find Bem’s response to Wagenmakers here.)
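To see why the prior choice decides the outcome, here is a minimal sketch of the posterior-odds arithmetic. The prior mirrors the roughly 10^20-to-1 figure quoted above; the Bayes factors are illustrative values I picked for the example, not numbers taken from the actual papers.

```python
# Sketch: how an extreme prior swamps the evidence in a Bayesian analysis.
# Posterior odds against a hypothesis = prior odds against / Bayes factor in favor.

def posterior_odds_against(prior_odds_against: float, bayes_factor_for_psi: float) -> float:
    """Update prior odds against psi by the Bayes factor the data provide for psi."""
    return prior_odds_against / bayes_factor_for_psi

PRIOR = 1e20  # ~99,999,999,999,999,999,999 to 1 against psi, as in the critique

# Illustrative Bayes factors: modest, strong, and overwhelming evidence for psi.
for bf in (3, 100, 10_000):
    odds = posterior_odds_against(PRIOR, bf)
    print(f"Bayes factor {bf} for psi -> still about {odds:.0e} to 1 against")
```

Even data that would count as overwhelming evidence in any ordinary dispute (a Bayes factor of 10,000) still leaves the posterior odds at 10^16 to 1 against. With a prior like that, no experiment of realistic size could ever change the conclusion, which is the garbage-in, garbage-out problem in a nutshell.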
Wagenmakers was not finished, however. He then wrote a rebuttal, which you can find here. And here is where things went completely off the deep end. In this paper, he argues that he’s not wrong: the problem is the entire field of psychology. In other words, he’s in favor of throwing out an entire field of science to make one positive result go away.
The absurd premise goes like this: Psychic ability is impossible. Therefore, if Bem’s experiment showed psychic ability to be real, then the very process that demonstrated this must be broken. These are the musings of people who would never, ever consider the alternative hypothesis: They. Are. Wrong.
Bem’s experiment survived this challenge because the ultimate premise of Wagenmakers’s argument was clearly wrong. (He made other accusations, such as calling the experiments “preliminary”, but these went nowhere.)
But this was far from the end of the battle. Enter Richard Wiseman, Chris French, and Stuart Ritchie. The first two are fellows of the Committee for Skeptical Inquiry and likely skeptic lifers. Wiseman, in all his years of association with parapsychology, has never run a successful psi experiment to my knowledge. He and French have collaborated before on at least one other failed psi experiment. What’s really weird is that both of them make a point of publicizing their failures. Here and here. Wiseman also publicized his supposed debunking of Sheldrake’s famous dog experiment and his meta-analysis of the Ganzfeld experiments, which supposedly showed no effect. (The latter two have no media links to follow; they were pre-Internet.) It’s almost as if they WANT to fail.
I was well aware of Wiseman and French before the saga of Bem’s experiment got rolling, so imagine my surprise when I read that Wiseman and French had failed to replicate Bem’s study. I was Shocked, SHOCKED I tell you. (You can find their studies here.) Of course, they publicized their failure, here and here.
Skeptics everywhere breathed a sigh of relief that a crisis had been avoided. A failed replication? End of story. But as usual, things were far from that simple. Wiseman had set up a registry for replication attempts with a deadline of December 2011, far too short a window for most scientists to replicate an experiment that had just been published.
Also, there were six experiments in total, including Bem’s. Two studies, both with significant levels of success, had been pre-registered with Wiseman yet were not included. The three studies done by Wiseman et al. amounted to the same number of trials as Bem’s original study. In other words, there was an apparent attempt to make it sound like there were more replication failures than there really were, and data were deliberately excluded, apparently to reach a particular conclusion. (A description of these issues can be found in a comment at the end of Wiseman’s paper.)
All of that became a moot point, scientifically at least, in 2015 with the publication of a meta-analysis of 90 experiments. In science, a few failed replications are meaningless if more studies are coming. In the case of Wiseman and French’s failed studies, they were simply drowned out by more studies with positive results.
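The arithmetic of being "drowned out" is easy to illustrate. One standard way to pool independent studies is Stouffer's method, which combines their z-scores. The numbers below are made up for illustration and are not data from the 2015 meta-analysis; they just show why a handful of null replications barely dents a pooled result when many studies report a positive effect.

```python
# Sketch: Stouffer's z-score method for pooling independent studies.
# Combined z = sum(z_i) / sqrt(k) for k studies; |z| > 1.96 is the
# conventional two-tailed significance threshold.
import math

def stouffer_z(z_scores: list[float]) -> float:
    """Combine independent studies' z-scores into one pooled z-score."""
    return sum(z_scores) / math.sqrt(len(z_scores))

positive_studies = [2.0] * 10  # ten hypothetical studies, modest positive effect
null_studies     = [0.0] * 3   # three hypothetical failed replications

print(round(stouffer_z(positive_studies), 2))                 # pooled z, positives only
print(round(stouffer_z(positive_studies + null_studies), 2))  # pooled z with nulls added
```

Adding the three null studies lowers the pooled z-score somewhat but leaves it far above the 1.96 threshold, which is the sense in which a few failed replications get drowned out by a larger body of positive results.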
There are still claims out there that there were methodological errors in Bem’s study, but here’s the thing: errors in methodology almost always produce null results. They create noise that drowns out the measurement of the effect, and they are not replicable. If you were looking for errors in Bem’s work, you’d focus on finding a statistical error: a wrong decimal point, a misplaced number, that sort of thing. Those types of errors, though, tend to get spotted when a lot of people start looking at a study, as was the case here. Bem’s paper was vetted by people who clearly understand statistics; in particular, Jessica Utts, past president of the American Statistical Association, examined the study. In any case, that type of error was never brought up.
The point I want to make here, in conclusion, is that the skeptical position on Bem’s experiment is all smoke and mirrors. Once you see the smoke and mirrors for what they are, you see what this skepticism really is: cognitive dissonance writ large. The problem here is not the study design, the number of replications, or the statistics used; the problem is people: people who refuse to accept study results because they can’t ever admit they were wrong.
The discussion isn’t about science anymore. This is about zealotry. Psi is real and there is plenty of science to back it up. The objections are insulting to any sane person’s intelligence at this point, and the lengths that skeptics go to in order to maintain their narrative are fundamentally dishonest. And that’s the real problem. The skepticism is less than sane, and it’s time to take a much closer look at THAT problem. You can start here.
I have seen these people go to the ends of the earth and beyond, conjuring fictional narratives so that they never, ever have to concede. I’ve seen this from garden variety skeptics on the Internet and from academics. How much longer do we have to placate their delicate egos by maintaining the fiction that psychic ability isn’t real?