Using data in science journalism
Is this data journalism?
Is this data journalism?
Is this data journalism?
Data + journalism = story
… which can be told visually
But it’s not only maps and graphs!
Data can be used in both reporting and storytelling.
And visualization can be a powerful tool for both finding and telling stories.
But think carefully about what you need to show to your audience. Some of the best data-driven stories contain little in the way of numbers or graphs.
Have you …
A new thing?
A thing of the future?
The pioneer: Philip Meyer
Now emeritus professor of journalism, University of North Carolina at Chapel Hill.
Pioneered the use of quantitative methods in journalism with Knight Newspapers in the 1960s.
Author of Precision Journalism, first published 1973.
43 dead
467 injured
7231 arrests
A Pulitzer for data journalism: 1967 Detroit riot
Data: Survey conducted in the immediate aftermath of the riot.
Findings: One theory held that the rioters were reacting to being stuck at the foot of the economic ladder. Another blamed southern blacks who had moved to Detroit. But Philip Meyer showed that college graduates were as likely to have rioted as high-school dropouts, and that those born in the South were less likely to have participated.
Attention turned instead to pervasive racial discrimination in policing and housing in Detroit.
What’s in it for me?
Where do I start?
Usually, with a question you want to answer, or a point you want to demonstrate.
Good data journalism rarely starts by aimlessly poking at a dataset.
Approach data like you would an interview: What do you and your audience want to know?
The data frame of mind
The data frame of mind
This is very different from:
“I’ve written my story. Now I’d better find some numbers for a graph.”
A note of caution:
data is often ‘dirty’
Data can be seductive, but never simply assume that it is correct and consistent. Examine any data you obtain to see how it is organized, and scan for potential errors.
You will almost always need to reformat and edit data to suit your purposes; frequently you will have to do extensive data “cleaning”.
Please clean me!
But science journalists are lucky:
Lots of clean, well curated data …
…which scientists may be willing to share
The basics
(OK, I have some data. What now?)
Sort: largest to smallest, alphabetical, etc.
Summarize: count, sum, mean, median, maximum, minimum, etc.
Filter: select a defined subset of the data.
Join: merge entries from two or more datasets based on common field(s), e.g. a unique ID number, or last name and first name.
Think of those operations as
‘interviewing’ the data
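To make that concrete, here is a minimal sketch of the four operations in Python with pandas; the file names and columns (grants.csv, institutions.csv, amount, year, institution_id) are hypothetical placeholders, not data from any of the stories that follow.

```python
import pandas as pd

grants = pd.read_csv("grants.csv")              # hypothetical: one row per grant
institutions = pd.read_csv("institutions.csv")  # hypothetical: one row per institution

# Sort: largest awards first
grants_sorted = grants.sort_values("amount", ascending=False)

# Summarize: count, total and median award per institution
summary = grants.groupby("institution_id")["amount"].agg(["count", "sum", "median"])

# Filter: keep only grants awarded after 2007
recent = grants[grants["year"] > 2007]

# Join: merge the two tables on a common field (here, a unique ID)
merged = grants.merge(institutions, on="institution_id", how="left")

print(grants_sorted.head())
print(summary.head())
```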
Tools and stories: online databases
Data: Pfizer’s records of payments to doctors, scraped from the web. Data on disciplinary actions from state medical boards in the four largest states; FDA warning letters to clinical investigators.
Findings: Some of Pfizer’s “experts” had questionable records for patient safety
Tools and stories: work with databases on your own computer
Tools and stories: databases
Data: US Food and Drug Administration’s database of clinical investigators, joined to data on disciplinary actions from state medical boards. (Also FDA’s list of disqualified investigators, and its database of inspections of clinical sites.)
Findings: Dozens of doctors selected to work on clinical trials over the past five years had previously been censured by state medical boards for problems with patient care; some had their own problems with substance abuse.
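For illustration only, here is a sketch of the kind of join that analysis relies on, using Python’s built-in sqlite3 module; the database file, table and column names are hypothetical, not the FDA’s or the medical boards’ actual schemas.

```python
import sqlite3

# Hypothetical local database built from the two downloaded datasets
conn = sqlite3.connect("trials.db")

query = """
SELECT i.last_name, i.first_name, d.action_date, d.reason
FROM investigators AS i
JOIN disciplinary_actions AS d
  ON i.last_name = d.last_name
 AND i.first_name = d.first_name;
"""

# Print every clinical investigator who also appears in the disciplinary records
for row in conn.execute(query):
    print(row)

conn.close()
```

Joining on last name and first name rather than a unique ID will throw up false matches, which is why manual checking of the results matters.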
Data: Downloads of my own genetic scans, performed by 23andMe and DeCode Genetics. Corresponding data for my DNA markers read from the same companies’ online “genome browsers”.
Findings: DeCode had a glitch in its database software that could cause the presentation of an erroneous mitochondrial DNA profile in its genome browser.
Tools and stories: databases
Tools and stories: putting data onto maps
Data: GIS data on clear-cuts, landslides and prior studies of the hazards from the state Dept of Natural Resources; logging company Weyerhaeuser’s logging permits.
Findings: With little scrutiny from state geologists, Weyerhaeuser was allowed to clear-cut unstable slopes.
Using mapping software, the reporters showed that clear-cut sites that had at least half of their acreage in a moderate- to high-hazard zone accounted for a disproportionate number of landslides in the December 2007 storms.
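As a hedged sketch of that kind of overlay (not the reporters’ actual workflow), here is how a spatial join might look in Python with geopandas; the shapefile names are hypothetical.

```python
import geopandas as gpd

clearcuts = gpd.read_file("clearcuts.shp")    # hypothetical polygons of logged areas
landslides = gpd.read_file("landslides.shp")  # hypothetical points where slides occurred

# Put both layers in the same coordinate reference system
landslides = landslides.to_crs(clearcuts.crs)

# Spatial join: which landslides fall inside which clear-cut?
slides_in_cuts = gpd.sjoin(landslides, clearcuts, how="inner", predicate="within")

print(len(slides_in_cuts), "landslides fall within clear-cut sites")
```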
Tools and stories: putting data onto maps
Statistical analysis, network analysis, etc:
Not just for our scientist sources
Data: Time-to-acceptance for papers involving “induced pluripotent stem cells” – an exciting alternative to embryonic stem cells, and a discovery that later won its pioneer a Nobel prize.
Data preparation and analysis: Searches and downloads from Web of Science database. Annotation of spreadsheet to document time of receipt, acceptance and publication, and location of primary author. Statistical and graphical analysis of time-to-acceptance data. See methods.
Findings: Papers from corresponding authors outside the US took significantly longer to be accepted for publication
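One way such a comparison might be run, sketched in Python with pandas and scipy; the file and column names are hypothetical, and the test shown is an assumption, not necessarily the method used in the story (see its published methods).

```python
import pandas as pd
from scipy import stats

papers = pd.read_csv("ips_papers.csv")  # hypothetical: one row per paper

us = papers.loc[papers["us_author"], "days_to_acceptance"]
non_us = papers.loc[~papers["us_author"], "days_to_acceptance"]

# Time-to-acceptance data are typically skewed, so a rank-based test
# is one reasonable choice; the story's own methods may have differed.
stat, p = stats.mannwhitneyu(us, non_us, alternative="two-sided")

print(f"Median US: {us.median():.0f} days; non-US: {non_us.median():.0f} days; p = {p:.4f}")
```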
Data: Citations between primary authors of papers on iPS cells.
Data preparation and analysis: Citation information extracted from Web of Science using academic bibliometric software; database queries to link the citations by primary author. Manual checking for errors. Then analysis of network graph. See methods.
Findings: The citation network graph maps influence and connections in the field. Does this help explain why non-US scientists seem to be losing the race to publish?
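A minimal sketch of building and querying such a network in Python with networkx; the edge-list file and its columns are hypothetical, and in-degree is just one simple proxy for influence.

```python
import pandas as pd
import networkx as nx

edges = pd.read_csv("citations.csv")  # hypothetical columns: citing_author, cited_author

# Build a directed graph: an edge points from the citing author to the cited author
G = nx.DiGraph()
for _, row in edges.iterrows():
    G.add_edge(row["citing_author"], row["cited_author"])

# A crude measure of influence: how often each primary author is cited by the others
most_cited = sorted(G.in_degree(), key=lambda pair: pair[1], reverse=True)

for author, cites in most_cited[:10]:
    print(author, cites)
```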
Data: Metadata for 34,000+ papers published in PNAS from 2004-2013, plus citation counts, scraped from the journal’s website.
Findings: Few academy members “contribute” papers at close to the maximum rate, but this group includes several members of the journal’s editorial board. Contributed papers are cited less often than those reviewed in the normal way – although the gap has narrowed in recent years.
Data preparation and analysis: After the web scraping, extensive data cleaning to remove variants of authors’ names, giving one name format for each academy member. Database queries to count papers of different types from each academy member. Statistical analysis of citation rates for different classes of paper. See methods.
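A sketch in Python/pandas of the cleaning and counting steps described above; the file, the columns and the name-variant map are hypothetical placeholders, not the actual PNAS data.

```python
import pandas as pd

papers = pd.read_csv("pnas_papers.csv")  # hypothetical: one row per paper

# Cleaning: collapse variant spellings to one canonical name per academy member
name_map = {"Smith, J.": "Smith, John", "Smith, J. Q.": "Smith, John"}  # illustrative only
papers["member"] = papers["member"].str.strip().replace(name_map)

# Count papers of each type ("contributed" vs normally reviewed) per member
counts = papers.groupby(["member", "track"]).size().unstack(fill_value=0)

# Compare citation rates for the two classes of paper
median_cites = papers.groupby("track")["citations"].median()

print(counts.head())
print(median_cites)
```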
Beware running with scissors:
Seek expert help if you need rigorous statistical analysis!
Even Nate Silver can get his fingers burned …
… and be forced to backtrack
Read this before committing an act of data journalism!
Why aren’t we seeing more data-driven science journalism?
Some basic data resources for science reporters
Scientific literature
Patents
European Patent Office
http://worldwide.espacenet.com/advancedSearch?locale=en_EP
Search European patent applications, WIPO patent applications, or all patents across 90+ nations
US Patent and Trademark Office
Search issued patents and published applications
For more information about a patent (diagrams, history of correspondence with the patent office, etc.), enter the patent or application number here:
http://portal.uspto.gov/external/portal/pair
Google Advanced Patent Search
http://www.google.com/advanced_patent_search
Clear presentation of claims, abstract, diagrams etc.
Grant funding
National Institutes of Health RePORTER
http://projectreporter.nih.gov/reporter.cfm
Grants from NIH, with links to papers and patents arising from projects
National Science Foundation
http://www.nsf.gov/awardsearch/advancedSearch.jsp
Research.gov
Also includes NASA grants.
Note the option to download all grants by year, from 2007 onwards.
Clinical trials
ClinicalTrials.gov
Information on more than 239,000 clinical trials in the US and beyond
International Standard Randomised Controlled Trial Number Register
Information on more than 15,000 registered clinical trials
WHO International Clinical Trials Registry Platform
http://apps.who.int/trialsearch/default.aspx
Incorporates data from ClinicalTrials.gov, ISRCTN, and national trials registries