How much are readers misled by headlines that imply correlational findings are causal?

By Alex Fradera

What do you take from this hypothetical headline: “Reading the Research Digest blog is associated with higher intelligence”? How about this one: “Reading this blog might increase your intelligence”? According to science writing guides like HealthNewsReview.org, taking the first, correlational finding from a peer-reviewed article and reporting it for the public using the second wording, which implies causation, is a sin of exaggeration: it makes a relationship appear more causal than the evidence suggests.

Yet this happens a lot. A 2014 British Medical Journal (BMJ) article showed these exaggerations to be rife in media coverage of correlational studies, with 81 per cent of news articles committing the sin. Dismayingly, one third of press releases were also guilty. Press releases normally involve editorial input from professionals, and often from the scientists themselves, who should really know better.

Reading about this, we might conclude that science communicators of all stripes need to get more familiar with best practice for describing causality. However, the authors of that BMJ analysis began to wonder whether readers actually interpret these headlines literally, or whether they draw their own conclusions. Now their research group has tested this for a paper in the Journal of Experimental Psychology, and their findings suggest that while science writers need to up their game, science-writing guides also have some catching up to do.

Rachel Adams and her col...
Source: BPS RESEARCH DIGEST - Category: Psychiatry & Psychology - Tags: Media - Source Type: blogs