By Emily Maloney
The word science comes from the Latin scientia (“knowledge”), derived from the verb scire, “to know.” Throughout all the iterations of science – from ancient mythologies to current astrophysics, from the Enlightenment to postmodernism, from Copernicus to Rachel Carson – the discipline has aimed to answer, at varying levels of specificity, the question Why?, in an effort to understand more, to know something more about the world we live in.
Through the experience of answering questions of this nature, we have developed the scientific method, which, when used correctly, works. In essence, the scientific method standardizes the way we go about acquiring new knowledge in disciplines ranging from international affairs to biological engineering. So why – if we have been practicing science for so long, have benefitted astronomically from scientific discoveries, and have created methodologies largely agreed to produce correct conclusions – does science seem to be constantly at odds with society? A typical response points to the battle between science and public opinion for legitimacy – Galileo’s infamous trial by the Catholic Church, or today’s rejection of scientific findings about climate change – and blames the public’s lack of scientific literacy.
While this is certainly true, focusing on that side of science – the communication of scientific discovery to society at large – lets science off the hook for some of its more serious internal problems.
First, science is facing an internal struggle with its reliance on p-values, a reliance that shapes which research goes public, which research gets done, and ultimately which conclusions are communicated to the scientific community (and, in turn, to the public). When setting up a study, one of the most common approaches is to construct a null hypothesis, in which the population parameter is equal to a certain value, and an alternative hypothesis, in which the population parameter is greater than, less than, or simply not equal to that value. To summarize a central statistical maxim, a p-value indicates the probability of obtaining a result at least as extreme as the one observed, assuming that the null hypothesis is true. A small p-value can be grounds for rejecting the null hypothesis, but it does not necessarily indicate that the alternative hypothesis is true. P-values are what supply the word significant to the phrase “statistically significant” seen all over science journal articles, reports of scientific studies, and undergraduate research papers. The issue with p-values lies not with the concept itself, but with how it has taken over the world of scientific research.
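To make the mechanics concrete, here is a minimal sketch in Python of a single null-hypothesis test; the data, the hypothesized mean of 100, and the sample size are all hypothetical, chosen purely for illustration.

```python
# A minimal sketch of a null-hypothesis significance test on hypothetical data.
# Null hypothesis: the population mean equals 100.
# Alternative hypothesis: the population mean does not equal 100 (two-sided).
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)
sample = rng.normal(loc=103, scale=15, size=40)  # 40 hypothetical measurements

result = stats.ttest_1samp(sample, popmean=100)

# The p-value is the probability of a result at least this extreme if the null
# hypothesis were true; a small p-value is evidence against the null, not proof
# that the alternative hypothesis is true.
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.3f}")
```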
Vox recently reported on a meta-study investigating the prevalence of p-values in medical journals, which found that “96% [of articles] reported at least 1 P value of .05 or lower”. This exceedingly high number suggests a couple of things. First, journals are prioritizing the publication of research with at least one statistically significant result, forgetting that null results are important too: they tell us that our preconceived ideas are not true, that this hypothesis, tested in this way, could not explain what is happening. Second, researchers are likely participating in “p-hacking, in which researchers test their data against many hypotheses and only report those that have statistically significant results.” This is a disservice to the scientific community, because it undermines the validity of the published research.
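A short simulation makes the arithmetic behind p-hacking visible. The setup below is hypothetical: twenty independent outcomes are tested, all drawn from pure noise with no real effect, so any “significant” result is a false positive. With twenty tests at the .05 threshold, the chance that at least one clears it by luck alone is roughly 1 − 0.95^20 ≈ 64%.

```python
# Rough simulation of p-hacking: test pure noise against many hypotheses and
# report only the "significant" results. Every hit here is a false positive.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)
n_runs, n_hypotheses, n_subjects = 1000, 20, 30

runs_with_false_positive = 0
for _ in range(n_runs):
    # 20 independent outcome measures, all noise with true mean 0 (no real effect)
    data = rng.normal(loc=0.0, scale=1.0, size=(n_hypotheses, n_subjects))
    pvalues = stats.ttest_1samp(data, popmean=0.0, axis=1).pvalue
    if (pvalues < 0.05).any():  # the one "significant" result that gets reported
        runs_with_false_positive += 1

# Expect roughly 1 - 0.95**20, i.e. about 64% of runs
print(f"Runs with at least one p < .05: {runs_with_false_positive / n_runs:.0%}")
```

Nothing in this sketch is specific to any field; the inflation follows purely from running many tests and reporting only the smallest p-value.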
However, this type of manipulation likely does not come from a malicious place – instead, the world of academia is structured such that researchers can only succeed if they publish. Graduate students and professors vying for tenure-track positions face intense pressure to publish in the best journals as often as possible, in the hope that it may give them some measure of job security in the unforgiving environment of higher education. Indeed, those who end up staying in academia as adjunct professors face alarmingly low wages – about thirty-three percent of adjuncts live below the federal poverty line.
The funding crisis feeds this anxiety as well. Since conducting research costs money, researchers must apply for funding from various sources (including industry) every time they conduct a study, usually through a competitive grant process. As a result, research questions are shaped around the goals of those funders and the topics deemed most relevant or economically important. Instead of dictating their own research questions and feeling free to explore the areas that interest them most, some researchers feel conflicted by the pressure of securing funding for current and future work, and acquiesce to questions or research designs they believe are more likely to yield statistical significance or answers that support industry interests. A telling example is a recent study that uncovered the Sugar Research Foundation’s deliberate efforts to influence research conducted by Harvard professors: the foundation pushed them to find and emphasize a relationship between fats and coronary heart disease (CHD) while devaluing research showing that sugar intake was also a factor in the rise of CHD. As this example shows, science, especially medical science, has a direct effect on public policy choices, and in this case may have led people to eat unhealthy amounts of sugar, not believing it to be bad for their health.
Although these are only a few of the problems that the institution of science needs to confront and solve within itself, they show that not every conflict between science and the public can be explained away by a lack of scientific literacy. While the U.S. public could certainly gain immensely from a more sophisticated way of reading and understanding science, the internal battles science is facing now cannot be disregarded. For science to truly maintain its status as a deliverer of knowledge, it needs to examine each of these problems in turn and provide solutions for them.