Thursday, August 13, 2015

Reporting on Science News

Wow!  Who would have thought the first year of grad school, with its classes, teaching, exams, research, picking an adviser, and deciding what to do with the rest of your life, would take up so much time?  As a result of this minor interruption, you'll have to forgive me for referring to what is now a year-old Skeptics' Guide episode, but it sparked an idea that I believe is still important.

The episode in question mentions a potentially misleading science news article about the extinction of the dinosaurs.  In the episode, Dr. Steven Novella argues that the focus of the news article leads readers to the opposite conclusion about the dinosaur extinction from the one the original research article intended.  The research article examines new data from around the time of the dinosaur extinction and finds that, although there are some indications of a small drop in diversity among some types of herbivores in North America, it is still overwhelmingly likely that the Chicxulub impact was responsible for the mass extinction of the dinosaurs.  The news article emphasizes the idea that the diversity decrease made the entire dinosaur population more vulnerable to extinction, and that if the impact had come at a different time, the dinosaurs might have avoided extinction.  Although the second half of the news article moves back toward the conclusion of the original work, it gives an inordinate amount of attention to a small suggestion raised in the paper and doesn't consider the overall scientific consensus.

In general, I've found that scientists' views on science news reporting are reserved at best.  Unlike opinion-based topics, science tends to build up enough evidence to support one side or another.  While new information can be uncovered, and sometimes we don't have enough information to make a decision, there is often still a logically correct and an incorrect conclusion.  This aspect of science, combined with journalists' tendency to portray both sides of a story as equally valid regardless of the known evidence, leads to many misleading articles and creates a phenomenon widespread enough to have its own name: "false balance."

But how widespread is false balance, actually?  It doesn't seem fair to dismiss all of science news as poor quality without looking into it.  Science writing is essential, after all.  Science writers bridge the gap between scientists and the public.  Most people don't have the time, the background knowledge, or a good reason to sift through narrowly specialized scientific journals to find out what scientific and technological advances have recently occurred.  I can barely drag myself to read the papers my adviser sends me, and it takes a few months to get acquainted with the terminology of a new field.  Scientists still aren't off the hook when it comes to science communication, but science writers can help translate for the public.  They can shed light on important ideas.  They can inspire and excite and teach!  But they need to be doing it well.

It would be illuminating to investigate the quality of science writing across various media outlets using a sort of "grading rubric."  The following set of guidelines is my attempt at objectively evaluating science articles for good science.  To clarify, I'm trained as a scientist, not a writer, so these criteria focus on the quality of the science and how it's portrayed, not the quality of the writing.  Obviously there are restrictions on what fits in a column, but good science should make the cut.

We assume all articles start with a perfect score of "A" and deduct grade points as necessary.  Let's begin with the headline:
  • Headline -- One-third of a grade point is subtracted for each of the following:
    • Contains the word(s) "unexplainable", "mind-boggling", "baffling", "miraculous", "holy grail", and/or "boffin".
    • Is irrelevant to the topic to be discussed.
I gave the headlines less significance than the bulk of the article by limiting the magnitude of their deductions to a third of a grade point.  Now let's look at the bulk:
  • Individual claims -- The rating drops by a full grade point for each false or inaccurate statement claimed by the article.
I had some difficulty deciding how to apply the "individual claims" guideline, because articles can vary greatly in length and mistakes can vary in magnitude.  Rather than attempt to create a spectrum of inaccuracy, I decided to stick with a binary system where a statement is either factual or it isn't.  This also reduces any bias that might come from people being more protective and picky about facts from their particular field.  Of course, necessary simplifications are considered different from factual errors and do not warrant deductions.  For example, describing an electron as a point particle instead of discussing wave-particle duality would be considered a simplification, not an error, in most contexts.

We also need to look at the article as a whole and not only in terms of individual claims.  For that I include criteria on contextualization:
  • Context -- Statements and conclusions in the article need to be tempered by context.  What is the scientific consensus?  Why have scientists come to that conclusion?  If false balance is present in the article then a grade point will be deducted.  For example, if the article spends equal time presenting the views of scientists who claim that climate change is real and those of scientists who claim it is not real when, in reality, 99% of scientists would agree that climate change is happening, the writer will be penalized.
  • Conclusions -- An automatic failure is given to any article that leaves the reader with an overall impression or conclusion that is the opposite of the conclusion reached in the original scientific paper.
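The rubric above can be sketched as a small scoring function.  The deduction sizes and the automatic-failure rule follow my guidelines exactly; the function name, parameters, and letter-grade cutoffs are my own illustrative choices, not part of the rubric itself.

```python
# A minimal sketch of the grading rubric described above.
# Deduction sizes follow the guidelines; names and grade cutoffs are illustrative.

SENSATIONAL_WORDS = {"unexplainable", "mind-boggling", "baffling",
                     "miraculous", "holy grail", "boffin"}

def grade_article(headline, irrelevant_headline, false_claims,
                  false_balance, opposite_conclusion):
    """Return a letter grade, starting from an A (4.0) and deducting points."""
    # Automatic failure: overall conclusion opposite to the original paper's.
    if opposite_conclusion:
        return "F"
    score = 4.0
    # Headline: one-third of a point per sensational word, one-third if irrelevant.
    lowered = headline.lower()
    score -= sum(1/3 for w in SENSATIONAL_WORDS if w in lowered)
    if irrelevant_headline:
        score -= 1/3
    # One full grade point per false or inaccurate claim.
    score -= false_claims
    # One grade point if the article presents false balance.
    if false_balance:
        score -= 1.0
    # Map the numeric score back to a letter grade (cutoffs are my assumption).
    for cutoff, letter in [(3.5, "A"), (2.5, "B"), (1.5, "C"), (0.5, "D")]:
        if score >= cutoff:
            return letter
    return "F"
```

For instance, an article with two false claims and nothing else wrong would land at 2.0, a "C" under these cutoffs, while the automatic-failure criterion overrides everything else.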
All I'm asking for is science articles that are factual, appropriately contextualized, and not sensationalized to the point of being misleading, and I'm curious to see what the current state of science writing actually is.  That's why I need your help.  My guidelines need to be tested to see if they are reasonable.  Are they objective and fair?  Does the same article receive the same grade regardless of who is reading it?  Is the final criterion too subjective and too powerful?  I am happy to have The New York Times volunteer as my first test subject, by virtue of my noticing their article on social octopuses in the Google News science section, and their providing a publicly accessible link to the main paper it references.  Let's see how they, and my criteria, hold up.

We will need many, many data points before we can tell which media outlets are trustworthy and which need better quality control when it comes to science news.  If you take the time to test a news article and compare it to the press release or original paper (or both), please let me know!  In the comments section below, just list the journalist, the journal, the titles of the news article and the comparison piece (a research paper or a press release), your grade, and some remarks about how you applied the guidelines so we can work out any bugs.  And I hope my older posts can pass my own test!