Yes, Let’s Start Tracking Misleading Press Releases About Scientific Findings

Science of Us is not a fan of press releases that misrepresent research findings. Here’s what I wrote on the subject back in August: “Every day, countless universities send countless press releases hyping new findings to journalists around the country. It’s shocking how often these press releases overhype the findings — there have been times when I’ve been sent a press release stating X, only to open up the actual study itself and find no evidence for that claim. To take one [then-]recent example, multiple outlets recently reported that new research suggests Tetris can help curb addictions, but really, when you read the actual paper in question, there’s no reason to think that.”

The reason this stuff matters is that press releases look pretty official, what with their university affiliation and everything. Oftentimes journalists will read a press release but not the study behind it, and they end up amplifying shaky claims as a result. Bad press releases are a net negative for scientific literacy — which isn’t exactly an area where we’re thriving as it is.

I bring this up because over at his blog, the biologist and stem-cell specialist Paul Knoepfler of the University of California, Davis, makes a really good point: Why isn’t anyone holding universities accountable for their press releases? He writes:

It might be interesting to follow such PR retractions and collect data on whether they relate to particular fields, whether particular institutions are overrepresented in the PR hype, and if there is an increasing number of science PR retractions. What do you folks over at RetractionWatch think? I note that they have posted relatively often on press releases.

Of course, one problem with this idea is that PR retractions don’t get published. Another difficult issue is that institutions going off the deep end with their PRs often do not retract or correct the PRs even if they are bad.

What do you think of science PRs behaving badly? How often is it the PR writer versus the scientists that they are quoting who are engaging in hype? Both? Does scarce funding play a role in this?

I don’t have answers to these questions, but they’re all interesting. I do think that one easy way to fix the problem is for science journalists to simply get a bit more assertive in calling out press releases that don’t pass muster and to put pressure on universities to retract or edit them in a timely and transparent manner. Universities shouldn’t need to be told that they’re obligated to make a good-faith effort to publish not just accurate research, but accurate summaries of that research.