Re-blogged from The Conversation
Not even wrong: why it matters when science is misunderstood
By James Dyke, University of Southampton
What is science?
I would hazard a guess that someone randomly accosted on the street and asked for a working definition of science would flounder a little. They may mumble something about white coats, test tubes and impenetrable maths. That is sometimes the response I get from my non-science undergraduate students.
So it was with real interest that I began reading a much-shared article on The Week by Pascal-Emmanuel Gobry entitled How our botched understanding of “science” ruins everything as I hoped to learn how he proposed shedding some light on this matter. Gobry gives the following working definition of science:
“Science is the process through which we derive reliable predictive rules through controlled experimentation.”
That sounds quite narrow, but then scientists predict things and they do experiments right? But he continues with:
“Because people don’t understand that science is built on experimentation, they don’t understand that studies in fields like psychology almost never prove anything, since only replicated experiment proves something.”
No scientist would claim that an experiment “proves” a theory, only that the theory proposed has not been shown to be false. It’s a “put it up and try to knock it down” version of science in which all scientific models are wrong, but some are more useful than others. However, it’s Gobry’s view of statistics which leads us to very strange territory:
“People think that a study that uses statistical wizardry to show correlations between two things is “scientific” because it uses high school math and was done by someone in a university building, except that, correctly speaking, it is not.”
Someone needs to have an urgent conversation with the editors of journals such as Nature, Science and Proceedings of the National Academy of Sciences, because much of what these journals publish is nothing more than “statistical wizardry” that shows all sorts of correlations and relationships.
For example, this study argued that many Earth system processes have tipping points that can produce large and sudden changes. But the authors didn’t go and remove all the ice from the Arctic or chop down most of the Amazon rain forest to prove such conclusions. They didn’t do any experiments. Instead they did some “statistical wizardry”. In a university building.
Such misunderstanding of what scientists do and the basis of scientific knowledge matters a lot, as Gobry goes on to demonstrate:
“While it is a fact that increased carbon dioxide in the atmosphere leads, all else equal, to higher atmospheric temperatures, the idea that we can predict the impact of global warming — and anti-global warming policies! — 100 years from now is sheer lunacy. But because it is done using math by people with tenure, we are told it is “science” even though by definition it is impossible to run an experiment on the year 2114.”
We can sidestep how this “fact” was established. That is, we can ignore the large amount of statistical analysis that was conducted on ice core, tree ring and other proxy data to correlate changes in global temperature to changes in carbon dioxide. We can also ignore how this is a straw man argument because no one claims to be able to produce precise predictions for the Earth system 100 years into the future – the best we can do is produce scenarios.
But we shouldn’t ignore the argument that because we cannot run experiments into the future we cannot call results about future climate “scientific”. By that same reasoning, we cannot call theories about dinosaurs scientific because we cannot run experiments into the past. After all, no one has ever conducted experiments on dinosaurs. All we have available are a bunch of bones. Not even that, just fossils.
Does that matter? If you are motivated to find methods to produce reliable knowledge about the natural world, then no. Conducting controlled experiments, computing statistics, and running simulations can be effective tools if they are applied in a logical and consistent manner. Take palaeontology. We’ve found all these fossilised bones. How can we put them together? What did that animal eat? How did it die? What was the climate like when it was alive? We can formulate hypotheses associated with these questions and test them using scientific methods.
For some reason Gobry argues that building statistical models based on previously obtained experimental data isn’t proper science – even when conducting statistical analysis is essential when attempting to show how reliable any result is. You always have to subject your results to statistical tests to make sure that you didn’t get a result simply from chance.
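The kind of chance-checking described above can be made concrete. Here is a minimal sketch in Python of one standard approach, a permutation test: shuffle one variable many times and ask how often a correlation as strong as the observed one turns up by accident. The function names and the toy data are invented for illustration, not taken from any study discussed here.

```python
import random

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def permutation_p_value(xs, ys, trials=10_000, seed=0):
    """Estimate how often random shufflings of ys produce a correlation
    at least as strong as the observed one -- i.e. the chance of getting
    the result 'simply from chance'."""
    rng = random.Random(seed)
    observed = abs(pearson_r(xs, ys))
    ys = list(ys)  # copy so the caller's data is untouched
    hits = 0
    for _ in range(trials):
        rng.shuffle(ys)
        if abs(pearson_r(xs, ys)) >= observed:
            hits += 1
    return hits / trials

# Invented toy data: a strong linear trend with a little noise.
x = [1, 2, 3, 4, 5, 6, 7, 8]
y = [2.1, 3.9, 6.2, 8.1, 9.8, 12.2, 13.9, 16.1]
print(permutation_p_value(x, y))  # a small value: very unlikely to be chance
```

A small result here means almost no random shuffling matches the observed correlation, which is exactly the reassurance against fluke results that the statistical tests in published papers provide.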
Doing science and testing hypotheses is a bit like trying a case in a court of law. Given the evidence available we have to decide what is the most probable explanation. For questions such as what colour an adult male Tyrannosaurus rex was, we currently don’t have sufficient evidence or theory to be able to provide robust answers. For questions such as what are the impacts of humans on the Earth’s climate, we have a lot of evidence. Does that mean we can state with any precision what will happen in 100 years’ time? No. Does that mean we can assign certainty to the prospects of dangerous climate change happening? Yes.
Nature is complex so it should come as no surprise that our methods to understand it can be similarly complex. Unfortunately, in bemoaning the public’s understanding of science Gobry’s contribution to this issue is what physicist Wolfgang Pauli once dismissed as not even wrong.
James Dyke does not work for, consult to, own shares in or receive funding from any company or organisation that would benefit from this article, and has no relevant affiliations.