
PublicEditor

Year / 2015

PublicEditor is an online editing and review system allowing the public to openly assess the credibility of news articles and organizations, and share their results with the world. We assess news content by transparent, neutral criteria that everyone (even fierce partisans) can recognize as legitimate.

Our mass collaboration software trains volunteers to evaluate news and opinion reporting according to indicators of credibility painstakingly developed over centuries of scientific practice. From the early philosophy of formal logic to contemporary psychology’s catalogue of the ways we are predictably irrational, scientists have amassed a thorough knowledge of how we humans fool ourselves and how we can avoid doing so. PublicEditor uses cutting-edge citizen science software to organize this scientific know-how into an online system that brings thousands of people together in a rigorous process of evaluating the daily news.

Once our citizen scientists have labeled all the points in an article where an author successfully avoided, or fell prey to, inferential mistakes, psychological biases, or argumentative fallacies, PublicEditor computes a credibility score for the piece. These scores are displayed (initially through a browser extension) as compact, data-rich credibility badges alongside the news article hyperlinks that appear in readers’ news feeds and search results. The badges allow readers of all skill levels to instantly assess the credibility of the content they are consuming, providing clear signals of information quality that will drive reader traffic to the news sources most dedicated to high-quality content. Since PublicEditor keeps a database of credibility scores for all articles, we can also compute credibility scores for authors and for the organizations that publish their work.
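As a rough illustration of how such scores could roll up from labeled points to articles, authors, and organizations, here is a minimal sketch in Python. The data model and formulas are assumptions made for clarity (the names LabeledPoint, article_score, and aggregate_score are hypothetical), not PublicEditor's actual scoring method: an article's score is taken here as the share of labeled points where the author avoided the pitfall, and author or organization scores as averages over their articles.

from dataclasses import dataclass
from statistics import mean

@dataclass
class LabeledPoint:
    indicator: str   # e.g. "hasty generalization" or "unsupported causal claim" (illustrative)
    avoided: bool    # True if the author avoided the pitfall at this point

def article_score(points: list[LabeledPoint]) -> float:
    # Fraction of labeled points where the author avoided the pitfall (0 to 1).
    return mean(1.0 if p.avoided else 0.0 for p in points)

def aggregate_score(article_scores: list[float]) -> float:
    # Author- or organization-level score as a simple average over their articles.
    return mean(article_scores)

# Example: one article with three labeled points, two of them handled well.
points = [
    LabeledPoint("ad hominem", avoided=True),
    LabeledPoint("unsupported causal claim", avoided=False),
    LabeledPoint("source attribution", avoided=True),
]
print(round(article_score(points), 2))  # 0.67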

Credibility scores and badges will have little impact on the media ecosystem or on readers if they are not widely trusted. That is why, rather than merely performing high-level evaluations of overall website credibility (a ‘black box’ approach that can easily be gamed), our tools collect fine-grained human input through a transparent, scientific methodology that builds a deep and broad network of trust. To prevent contributors’ biases from entering credibility scores, PublicEditor breaks the work into small pieces, trains contributors to specialize in identifying only one or a few credibility indicators at a time, rejects data correlated with contributors’ political biases, and weights contributors’ judgments by the accuracy of their previous work.
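The paragraph above describes the core aggregation idea: small, specialized judgments, bias filtering, and accuracy weighting. A minimal sketch of one way this could work is shown below; the function name, the bias_cutoff threshold, and the majority-style combination rule are assumptions made for illustration, not the project's published algorithm.

def weighted_label(judgments, accuracy, bias_corr, bias_cutoff=0.5):
    # Combine per-contributor judgments (0 or 1) for one indicator on one text span.
    #   judgments: {contributor_id: 0 or 1}
    #   accuracy:  {contributor_id: accuracy on past gold-standard tasks, 0 to 1}
    #   bias_corr: {contributor_id: correlation of past judgments with political leaning}
    # Contributors whose judgments track their politics too closely are dropped;
    # the rest vote with weights proportional to their past accuracy.
    kept = {c: v for c, v in judgments.items() if abs(bias_corr.get(c, 0.0)) < bias_cutoff}
    if not kept:
        return None  # not enough trustworthy input to emit a label
    total = sum(accuracy[c] for c in kept)
    score = sum(accuracy[c] * v for c, v in kept.items()) / total
    return 1 if score >= 0.5 else 0

# Three contributors judge whether a sentence contains an unsupported causal claim.
print(weighted_label(
    judgments={"a": 1, "b": 1, "c": 0},
    accuracy={"a": 0.9, "b": 0.6, "c": 0.8},
    bias_corr={"a": 0.1, "b": 0.7, "c": 0.2},  # "b" is filtered out by the bias check
))  # -> 1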

PublicEditor’s citizen science approach addresses another major social problem, too: the public’s sense of helplessness and anxiety in the face of so much misinformation and disinformation. For the many people frustrated by today’s confusing and combative discourse, PublicEditor will provide more than hope. It will engage the public as members of a lasting community taking concrete action to improve the quality of the information all citizens consume.

PublicEditor will be sustainable, too. While our community is committed to providing the service as a public good, PublicEditor can easily generate revenue as a pre-publishing tool for newsrooms, a subscription service for corporate readers, and/or by licensing its data to content aggregators and platforms like Google, Facebook, Wikipedia, and more.

As part of an ongoing research and development effort at the UC Berkeley Institute for Data Science, we have created a working prototype of our credibility labeling system using over 150 politically neutral, scientific indicators of news content credibility. Deploying state-of-the-art TextThresher crowd labeling software with a community of UC Berkeley students, we are now testing the system on a set of over 200 news articles.
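Testing a labeling system like this typically means checking how consistently independent annotators apply each indicator (several team roles below focus on inter-annotator agreement). The sketch that follows computes simple pairwise percent agreement as one possible starting point; it is an illustrative measure, not necessarily the metric the team uses.

from itertools import combinations

def pairwise_agreement(labels_by_annotator):
    # Mean fraction of items on which each pair of annotators gave the same label.
    #   labels_by_annotator: {annotator_id: [label for item 0, label for item 1, ...]}
    rates = []
    for (_, a), (_, b) in combinations(labels_by_annotator.items(), 2):
        rates.append(sum(x == y for x, y in zip(a, b)) / len(a))
    return sum(rates) / len(rates)

# Two annotators label five sentences for the same credibility indicator.
print(pairwise_agreement({
    "ann1": [1, 0, 1, 1, 0],
    "ann2": [1, 0, 0, 1, 0],
}))  # -> 0.8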


Project Team

Emlen Metz, Ph.D.
SCHEMA DEVELOPMENT

Artemis Jenkins
WEB DEVELOPMENT

Tom Gilbert
SOFTWARE TESTING

Scott Peterson
SCHEMA DEVELOPMENT

Catharine Wu
OPERATIONS, ALGORITHMS

Eric Wimsatt
INTER-ANNOTATOR AGREEMENT

Benjamin Lee
ALGORITHMS, EDITING

Joshua Chung
INTER-ANNOTATOR AGREEMENT

Sam Wu
ALGORITHMS

Younjin Song
ALGORITHMS

Rishabh Meswani
DATA ANALYSIS, RECRUITING

Oscar Syu
VISUAL DESIGN SPECIALIST

Alan Pham
DATA ANALYSIS, RECRUITING

Zain Khan
INTER-ANNOTATOR AGREEMENT

Shoumik Jamil
ALGORITHMS

Anchit Sood
INTER-ANNOTATOR AGREEMENT

Yash Agarwal
ALGORITHMS

Quang-Minh Pham
D3 VISUALIZATION

