You can hardly turn on the news or look at Facebook’s “trending” list without hearing about the latest results of some scientific study. However, the science news reported to the general public is often misleading: reporters are known for twisting a study’s results to make a catchier headline. If you’ve seen John Oliver’s discussion of this topic on “Last Week Tonight” (click here to watch it; it’s very interesting), you know what a serious problem this can be. Inaccurate or misconstrued scientific results have led people to believe that vaccines cause autism or that global climate change is not caused by human activity. In this article, I will provide five questions you should always ask when evaluating the validity of a science news claim. Keeping these questions in mind will help you tell science from pseudoscience.
Question #1: What news source am I viewing or reading?
As we all know, some news sources are more accurate and unbiased than others. Even seemingly reputable sources like Scientific American have been guilty of presenting the results of scientific studies out of context. Whenever possible, try to find a link to the original scientific paper, which is often provided in online news articles. This will let you view the results in a format that’s less likely to be biased. Pay attention to the journal in which the paper was published. Papers published in larger journals like Science, Nature, Neuroscience, or PLOS ONE are more likely to be both valid and important than those published in smaller, more specialized journals.
Question #2: How statistically significant were the results?
Statistical significance is usually reported as a p-value: the probability of obtaining results at least as extreme as those observed if there were no real effect and only random chance were at work. Each study sets a threshold that the p-value must fall below for the results to be declared statistically significant. A common threshold is 0.05, meaning there is at most a 5% chance of seeing results this strong purely by chance. This may seem small, but it means that roughly 1 in 20 “significant” findings could be a fluke. For results that are more likely to hold up, look for studies that use a lower significance threshold, such as 0.01 or 0.001.
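To make this concrete, here’s a small illustration of my own (a hypothetical coin-flip “study,” not an example from any paper discussed here). If a supposedly fair coin comes up heads 60 times in 100 flips, the one-sided p-value is the chance that a truly fair coin would produce at least that many heads:

```python
from math import comb

def binomial_p_value(n_flips, n_heads):
    """One-sided p-value: the probability of seeing at least n_heads
    heads in n_flips of a genuinely FAIR coin (the 'null hypothesis')."""
    total_outcomes = 2 ** n_flips
    extreme = sum(comb(n_flips, k) for k in range(n_heads, n_flips + 1))
    return extreme / total_outcomes

# Hypothetical study: 60 heads observed in 100 flips.
p = binomial_p_value(100, 60)
print(f"p-value: {p:.3f}")                 # roughly 0.028
print("significant at 0.05:", p < 0.05)    # True
print("significant at 0.01:", p < 0.01)    # False
```

Note how the same result clears the common 0.05 threshold but not the stricter 0.01 one, which is exactly why the choice of threshold matters.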
Question #3: What model organisms were studied?
Clinical trials and other experiments conducted directly on humans are the most likely to produce results that generalize to other humans. However, such trials often require years of supporting data before they are authorized, so the majority of neuroscience and biology studies are conducted on organisms that serve as “models” for humans. Common model organisms include bacteria, yeast, frogs, fruit flies, zebrafish, mice, and rats. Mice are the most common model for Alzheimer’s disease, though their accuracy in reproducing the disease’s symptoms has recently been called into question. Whenever experiments are conducted on model organisms rather than humans, be sure to take the results with a grain of salt.
Question #4: What were the sample size and timeframe?
Sample size is the number of humans or animals observed during the study. For noninvasive research such as online surveys, sample sizes often reach hundreds or thousands of people. In contrast, studies that require surgery might have a sample size of only a few dozen. The smaller the sample size, the less likely it is that the results will generalize to a larger population. On a similar note, pay attention to the timeframe of the study. Particularly for age-related diseases like Alzheimer’s, human studies conducted over only a few weeks or months may not be as informative as those conducted over many years.
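To see why sample size matters, here is a toy simulation of my own (the 30% trait rate is made up). Many imaginary studies each estimate the same population rate, and the estimates from small samples scatter far more widely than those from large ones:

```python
import random
import statistics

random.seed(0)       # fixed seed so the simulation is reproducible

TRUE_RATE = 0.30     # hypothetical: 30% of the population has some trait
REPEATS = 500        # number of simulated studies per sample size

def run_study(sample_size):
    """Estimate the trait rate from one simulated random sample."""
    hits = sum(random.random() < TRUE_RATE for _ in range(sample_size))
    return hits / sample_size

spreads = {}
for n in (20, 2000):
    estimates = [run_study(n) for _ in range(REPEATS)]
    spreads[n] = statistics.stdev(estimates)
    print(f"n={n:5d}: estimates scatter about +/- {spreads[n]:.3f} around {TRUE_RATE}")
```

With samples of 20, individual studies routinely miss the true rate by ten percentage points or more; with samples of 2,000, the estimates cluster tightly around it.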
Question #5: Does the study provide correlational or causational evidence?
This is probably the most important question of all, and it’s also the one most often ignored in popular reporting of science news. Correlational evidence (sometimes called epidemiological evidence) is based on observation, while causational evidence is based on random experimental assignment. Let me explain the distinction through an example.

A famous 2002 study observed that people who consumed higher levels of caffeine were less likely to be diagnosed with Alzheimer’s disease. This evidence is correlational: it shows that caffeine is associated with a reduced Alzheimer’s risk, but it cannot prove that caffeine causes the reduced risk. People who consume more caffeine might also be more likely to exercise, have an active social life, or be more educated. Any of these other factors (called confounding variables) might be the real cause of the reduced risk, rather than the caffeine itself.

In contrast, a causational study on this subject was conducted in 2006. The researchers fed lab mice different levels of caffeine and found that the mice with higher caffeine intake had reduced levels of amyloid plaques in their brains. Here, the caffeine is much more likely to be the cause of the result, because all other variables were carefully controlled and the mice were assigned their caffeine intake at random.

An easy way to remember the difference: correlation typically comes from studies where the subjects have free choice, whereas causation can only be established when the experimenter randomly assigns each subject a condition.
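To make the confounding-variable idea concrete, here is a toy simulation of my own (invented numbers, not data from either study). In the code, caffeine has zero effect on risk; exercise alone drives both caffeine intake and risk. Yet caffeine still ends up strongly correlated with lower risk:

```python
import math
import random
import statistics

random.seed(1)   # fixed seed so the simulation is reproducible
N = 5000         # number of simulated people

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# The confounder: how much each simulated person exercises.
exercise = [random.random() for _ in range(N)]
# Exercise drives caffeine intake (plus some noise)...
caffeine = [e + random.gauss(0, 0.3) for e in exercise]
# ...and exercise ALONE drives risk. Caffeine appears nowhere in this line.
risk = [1.0 - e + random.gauss(0, 0.3) for e in exercise]

r = pearson(caffeine, risk)
print(f"correlation between caffeine and risk: {r:.2f}")  # clearly negative
```

An observational study of these simulated people would find that heavy caffeine drinkers have lower risk, even though we built the world so that caffeine does nothing. Only random assignment, as in the mouse experiment, breaks the link between caffeine and the confounder.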