IdeaLab reporting:
Jonathan Stray has opened a new conversation about measuring accuracy in news reports. Stray, who works at the Associated Press and blogs on the side, comes at the issue with a refreshingly analytical, data-driven perspective. His in-depth post, which I urge you to read, does a couple of things. It summarizes important research: There seems to be no escaping the conclusion that, according to the newsmakers themselves, about half of all American newspaper stories contained a simple factual error as of 2005, and that rate has held roughly steady since we started measuring it seven decades ago.
And it offers some useful ideas: We could continuously sample a news source's output to produce ongoing accuracy estimates, and build social software to help the audience report and filter errors.
Stray understands that it's no good to count correction rates without tracking error rates, and vice versa -- you need to know both to assess a news organization's performance. So he imagines a not-too-distant future in which many or most newsrooms regularly sample their story output to gauge the frequency of errors and encourage readers to submit (and rank) error reports. With some standardization of both metrics, and if newsrooms could get comfortable publishing those numbers, we'd finally have a useful yardstick for accuracy in news coverage.
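The continuous-sampling idea has a simple statistical core: if a newsroom fact-checks a random sample of its stories each month, the observed error fraction plus a binomial confidence interval gives an ongoing accuracy estimate, and the interval width tells you how large the sample needs to be. A minimal sketch of that arithmetic, using a standard Wilson score interval (the sample numbers here are hypothetical, not from Stray's post):

```python
import math

def wilson_interval(errors_found, sample_size, z=1.96):
    """95% Wilson score confidence interval for the true error rate,
    given errors found in a random sample of fact-checked stories."""
    if sample_size == 0:
        raise ValueError("need at least one sampled story")
    p = errors_found / sample_size
    denom = 1 + z**2 / sample_size
    center = (p + z**2 / (2 * sample_size)) / denom
    margin = (z / denom) * math.sqrt(
        p * (1 - p) / sample_size + z**2 / (4 * sample_size**2))
    return max(0.0, center - margin), min(1.0, center + margin)

# Hypothetical month: the newsroom fact-checks 100 randomly chosen
# stories and finds simple factual errors in 48 of them.
low, high = wilson_interval(48, 100)
print(f"estimated error rate: 48% (95% CI {low:.0%}-{high:.0%})")
```

With a sample of 100 stories the interval spans roughly twenty percentage points, which is why ongoing sampling (rather than a one-off audit) matters: accumulating checked stories month over month is what narrows the estimate enough to compare newsrooms or track change over time.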
http://www.pbs.org/idealab/2011/04/theres-no-problem-newsrooms-in-denial-about-rampant-errors115.html