The Future of Publishing and the Accuracy of Information

August 4th, 2009

One of the continuing controversies we all face as the Web becomes our primary tool for news, knowledge, reference…indeed for all forms of information, is determining the quality of the information published on the Web. The topic arises frequently in articles and blog entries. Much of the controversy has been associated with Wikipedia. Wikipedia has certainly suffered a number of embarrassing gaffes, but a study released by the journal Nature in late 2005 claimed that Wikipedia is essentially as accurate as the Encyclopedia Britannica. In reviewing 42 articles on the same topics from each source, it found an average of 2.92 mistakes per article for Britannica and 3.86 for Wikipedia. (The original article in Nature is available only to subscribers or for a $32 purchase; however, the link to Nature provided here does offer some free material on the subsequent controversy.) The respected technology writer Nicholas Carr later criticized the study, concluding his detailed analysis with:

If you were to state the conclusion of the Nature survey accurately, then, the most you could say is something like this: “If you only look at scientific topics, if you ignore the structure and clarity of the writing, and if you treat all inaccuracies as equivalent, then you would still find that Wikipedia has about 32% more errors and omissions than Encyclopedia Britannica.” That’s hardly a ringing endorsement….The open source model is not a democratic model. It is the combination of community and hierarchy that makes it work. Community without hierarchy means mediocrity.
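(A quick note on where that 32% figure comes from: it appears to follow directly from the per-article error counts above, since 3.86 ÷ 2.92 ≈ 1.32, meaning Wikipedia averaged roughly 32% more errors per article than Britannica. That's my own back-of-the-envelope arithmetic from the published averages, not a calculation taken from the study itself.)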

It’s worth noting here that Wikipedia continues to take new steps and implement new procedures to improve the accuracy of its content. This blog entry is not intended to suggest that Wikipedia is fatally flawed: I think it’s a great resource and a publishing miracle. I raised the Wikipedia-versus-Britannica story because it well illustrates the main point of this entry.

The broader controversy over information quality on the Web has myriad ramifications. Without exploring them all in this entry, I stand by the statement that the value of the proliferation of new and original voices on the Web is seriously diminished if the accuracy of what is represented as fact remains suspect and undependable.

This leads me to recommend a very fine site I discovered the other day while researching this issue. The Virtual Chase: Teaching Legal Professionals How To Do Research has an excellent section called “Evaluating the Quality of Information on the Internet.” Chapters include “Why Information Quality Matters” and “How to Evaluate Information,” along with my favorite feature, a checklist for evaluating the quality of information found on the Web.

While the site is targeted at the legal profession, and therefore not all of it will be relevant to each of us, what better profession to turn to than one where a false fact can mean, in some circumstances, the death of a client, and in others, the loss of a $5 million lawsuit!

I’ll be continuing to cover this topic. I hope this entry provides a suitable introduction.