Serious vs sensationalistic studies

Have you ever been confronted by someone who claims something outrageous like “75% of serious dog attacks are by ‘pit bulls’”?

Then they’ll point to a “study” reporting this, as if that settles the issue. It doesn't seem convincing to you, but it can be hard to articulate why, especially if the other person is using it simply to support their existing preconceptions. 

You probably know that not all “studies” are created equal. Anyone can publish a “study” using Google research, or using ten friends as a survey sample. If you want to go a step further, there are even fake journals, some with official-sounding titles, that will “publish” your study for a fee.

There are many things you can look for to ensure a study is legit. Read through the introduction to learn about the study's design. Serious researchers will do everything possible to eliminate bias in their studies and obtain their data from trustworthy sources.

Many of the more sensationalistic “studies” on dog bite stats rely on internet search data or media clippings. This is not a good source of data, because it’s unlikely that all incidents would be equally reported in the news. It’s not the media’s job to report on every dog attack – just the ones they hear about, or they think will be interesting to their readers.

Let’s use an example we can more easily wrap our heads around. If we wanted to study something related to car accidents, would we scan the news and assume that gives us an accurate picture of all mishaps on our roads? No way!

Some days, relatively minor accidents might get covered. Maybe it was a slow news day. Maybe the reporter happened to be there and got some great footage. Maybe a local celebrity was involved. Maybe it tied up traffic during rush hour.

But on the other hand, a serious car accident may not get reported at all. It could have happened in the middle of the night. Maybe reporters were tied up with other assignments. Maybe it got bumped because the US President sent out an especially dramatic tweet that day. Maybe the wreckage was cleared up quickly and the families weren’t willing to speak to reporters.

Conclusion? By NO MEANS would media clippings be a reliable source for studying the prevalence or seriousness of car accidents in our community. We’d be better off looking at towing company records, police records, or insurance claims. Of course, these sources also have their limitations, since it is humans collecting and recording the data, but they would be infinitely more useful than press clippings.
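What the car-accident analogy describes is selection bias, and we can put rough numbers on the mechanism. Here is a minimal sketch with entirely made-up figures: the breed share and reporting rates below are assumptions chosen only to show how uneven media coverage can inflate an apparent percentage, not real data about any breed.

```python
# Hypothetical figures, for illustration only -- not real dog-bite data.
total_bites = 10_000
share_breed_x = 0.06     # assume breed X causes 6% of all bites
p_report_x = 0.90        # assume the media covers 90% of breed-X bites...
p_report_other = 0.10    # ...but only 10% of bites by all other breeds

# Expected counts of incidents that make it into the news clippings.
reported_x = total_bites * share_breed_x * p_report_x              # ~540
reported_other = total_bites * (1 - share_breed_x) * p_report_other  # ~940

# Share of breed X among *reported* incidents -- what a clipping-based
# "study" would measure.
apparent_share = reported_x / (reported_x + reported_other)

print(f"True share of breed X:       {share_breed_x:.0%}")
print(f"Share in media clippings:    {apparent_share:.0%}")
```

With these assumed numbers, a breed responsible for 6% of bites accounts for roughly 36% of the clippings, simply because its incidents are covered more often. Nothing about the dogs changed; only the sampling did.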

When someone sends you a study, the first steps in the critical thinking process are as follows:

- Who did this study? Did they set themselves up to be objective and seek the truth? Or did they go looking for information that supports their existing opinion?

- Where is it published? Is it a personal website or blog? A niche publication or advocacy/propaganda site? A small independent journal, or an academic, peer-reviewed journal? (Tip: Researchers have methods of evaluating a journal’s “impact factor” - try Googling the journal name and "impact factor".)

- Does the researcher disclose their study design? Do they discuss the limitations of this approach, and any factors that might skew the results?

- Are there attempts to seek unbiased data - using control groups, for example, or putting measures in place to make sure subjects/evaluators don't bring their personal biases into the experiment?

- Is the analysis objective? A study exists to - well - study a topic! It should recognize its own limitations. The discussion and analysis should not be emotional or draw hasty conclusions. 

----

Further reading:

http://www.iflscience.com/editors-blog/four-scientific-journals-accept-fake-study-about-midichlorians-from-star-wars/

https://en.wikipedia.org/wiki/Predatory_open_access_publishing

https://en.wikipedia.org/wiki/Impact_factor