As scientists, we often run an experiment and look at the p-value. If p>0.05, we sigh and ask: do I have evidence for the absence of an effect? In their Nature Neuroscience paper published on Tuesday 30 June, Christian Keysers and Valeria Gazzola of the NIN, in collaboration with EJ Wagenmakers at the UvA, explain why p>0.05 does not show that an experiment had no effect.
They provide an accessible tutorial, with public domain software, for reanalyzing the data using Bayesian t-tests, ANOVAs, or regressions to determine whether you have hard evidence that the experiment had no effect (evidence of absence) or whether your data are undiagnostic. These simple Bayesian tests profoundly change the way we look at 'null findings', and give us the power to turn negative findings into exciting, publishable conceptual advances that counteract publication biases against negative results. It is a true revolution, and surprisingly simple to perform.
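To give a flavour of how such a reanalysis distinguishes evidence of absence from undiagnostic data, here is a minimal sketch in Python. Note the assumptions: this is not the paper's recommended software (the authors point to dedicated Bayesian tools) and not the default JZS Bayes factor those tools compute; it uses the simpler BIC approximation to the Bayes factor for a one-sample t-test (Wagenmakers, 2007), which needs only a t-statistic and a sample size. A BF01 well above 1 favours the null (evidence of absence); a BF01 near 1 means the data are undiagnostic.

```python
import math

def bf01_from_t(t, n):
    """BIC approximation to the Bayes factor BF01 (null over alternative)
    for a one-sample t-test with t-statistic `t` and sample size `n`.

    This is a rough stand-in for a full Bayesian t-test: it converts the
    t-statistic to R-squared, takes the BIC difference between the null
    model and the one-parameter alternative, and maps that difference to
    an approximate Bayes factor via exp(delta_BIC / 2).
    """
    df = n - 1
    r2 = t**2 / (t**2 + df)          # variance explained by the effect
    delta_bic = n * math.log(1 - r2) + math.log(n)  # BIC(H1) - BIC(H0)
    return math.exp(delta_bic / 2)    # > 1 favours H0, < 1 favours H1

# A non-significant t with a decent n can yield genuine evidence for the
# null, whereas a similar t with tiny n is merely undiagnostic:
print(bf01_from_t(0.5, 50))   # comfortably above 3: evidence of absence
print(bf01_from_t(0.5, 5))    # close to 1: the data say little either way
```

The point of the example is the asymmetry the article highlights: the same unimpressive t-value can mean "the effect is probably absent" or "we simply cannot tell", and only a Bayesian quantity like BF01 separates the two.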