Spotting bad science: the definitive guide for journalists

Being able to evaluate the evidence behind a scientific claim is important.

Being able to recognise bad science reporting, or faults in scientific studies, is equally important. These 12 points will help you separate the science from the pseudoscience. This content is also available in our downloadable Desk Guide for Covering Science.

1. Correlation & Causation

Be wary of confusing correlation with causation. A correlation between variables doesn’t always mean one causes the other. Global temperatures have risen since the 1800s while pirate numbers have fallen, but a lack of pirates doesn’t cause global warming.
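
To make this concrete, here is a rough sketch in Python, using invented numbers rather than real climate or pirate data: two quantities that merely trend in opposite directions over time show a strong correlation even though neither causes the other.

```python
import numpy as np

# Invented example: two series that both change steadily over time
# but have no causal connection to each other.
rng = np.random.default_rng(0)
years = np.arange(1900, 2020)

temperatures = 0.01 * (years - 1900) + rng.normal(0, 0.1, years.size)  # slowly rising
pirates = 5000 - 40 * (years - 1900) + rng.normal(0, 200, years.size)  # steadily falling

r = np.corrcoef(temperatures, pirates)[0, 1]
print(f"Correlation: {r:.2f}")  # strongly negative, yet neither variable causes the other
```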

2. Unsupported Conclusions

Speculation can often help to drive science forward. However, studies should be clear about which facts they establish and which conclusions are, as yet, unsupported. A statement framed in speculative language may require further evidence to confirm it.

3. Problems with Sample Size

In trials, the smaller the sample size, the lower the confidence in the results from that sample. Conclusions drawn from small samples can still be valid, and in some cases small samples are unavoidable, but larger samples often give more representative results.
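
As a rough illustration (a small Python simulation with an invented 60% "true" response rate, not data from any real trial), repeated sampling shows how much more the estimates from small samples vary:

```python
import numpy as np

# Invented simulation: estimating a true 60% response rate from samples of
# different sizes, repeated many times, to see how much the estimates vary.
rng = np.random.default_rng(1)
true_rate = 0.60

for n in (10, 100, 1000):
    estimates = rng.binomial(n, true_rate, size=2000) / n  # 2000 repeated samples of size n
    print(f"n={n:4d}  mean estimate={estimates.mean():.2f}  spread (std)={estimates.std():.3f}")

# The spread of the estimates shrinks roughly with the square root of the
# sample size, which is why small samples give less certain results.
```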

4. Unrepresentative Samples Used

In human trials, subjects should be selected so that they are representative of a larger population. If the sample differs from the population as a whole, then the conclusions from the trial may be biased towards a particular outcome.

5. No Control Group Used

In clinical trials, results from test subjects should be compared to a ‘control group’ not given the substance being tested. Groups should also be allocated randomly. In general experiments, a control test should be used where all variables are controlled.

6. No Blind Testing Used

To try to prevent bias, subjects should not know whether they are in the test group or the control group. In ‘double-blind’ testing, even the researchers don’t know which group subjects are in until after testing. Note that blind testing isn’t always feasible, or ethical.

7. Sensationalised Headlines

Article headlines are commonly designed to entice readers into clicking on and reading the article. At times, they over-simplify the findings of scientific research. At worst, they sensationalise and misrepresent them.

8. Misinterpreted Results

News articles can distort or misinterpret the findings of research for the sake of a good story, intentionally or otherwise. If possible, read the original research rather than relying on the article based on it.

9. Conflicts of Interest

Many companies employ scientists to carry out and publish research – whilst this doesn’t necessarily invalidate the research, it should be analysed with this in mind. Research can also be misrepresented for personal or financial gain.

10. Selective Reporting of Data

Also known as ‘cherry-picking’, this involves selecting data from the results which support the conclusion of the research, whilst ignoring those that do not. If a research paper draws conclusions from only a selection of its results, rather than all of them, it may be guilty of this.
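
As a rough illustration (a small Python simulation with invented data and a treatment that has no real effect), testing many outcomes and reporting only the best-looking one can make a null result appear significant:

```python
import numpy as np
from scipy import stats

# Invented sketch: a "treatment" with no real effect is compared to a control
# on 20 unrelated outcomes; reporting only the best-looking one is cherry-picking.
rng = np.random.default_rng(2)
p_values = []
for _ in range(20):
    treated = rng.normal(0, 1, 30)  # no true difference between the groups
    control = rng.normal(0, 1, 30)
    p_values.append(stats.ttest_ind(treated, control).pvalue)

print(f"Smallest p-value out of 20 tests: {min(p_values):.3f}")
# By chance alone, the best of many null results will often look "significant",
# which is why conclusions should rest on all of the data, not a selection.
```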

11. Unreplicable Results

Results should be replicable by independent research, and tested over a wide range of conditions (where possible) to ensure they are consistent. Extraordinary claims require extraordinary evidence – that is, much more than one independent study!

12. Non-Peer Reviewed Material

Peer review is an important part of the scientific process. Other scientists appraise and critique studies before they are published in a journal. Research that has not gone through this process is not as reputable, and may be flawed.

Source for content: COMPOUND INTEREST 2015©, compoundchem.com. Used with permission.
