|Timestamp||Please describe one or more ideas, especially ones which make you hopeful|
|4/16/2018 20:56:44||I think there needs to be a standardized format between journalism orgs, fact checkers, and content curators. The content curators, i.e., Facebook, Google, Reddit, Twitter, need to be able to digest input from independent fact checkers for various news events. Content curators need to be free to design their own algorithms and UI/UX; fact checkers ought to be independent; and journalists ought to be able to dispute news stories.|
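The interchange format this response calls for already has a close real-world analogue: schema.org's ClaimReview vocabulary, which fact checkers publish and curators ingest. Below is a minimal sketch of assembling such a record in Python; the field names follow the schema.org vocabulary, but the helper function, its arguments, and the example organization are illustrative assumptions.

```python
import json

def make_claim_review(claim, verdict, rating, checker_name, checker_url):
    """Build a schema.org ClaimReview record that a content curator can ingest.

    Property names follow schema.org/ClaimReview; this helper and its
    argument names are illustrative, not part of any standard.
    """
    return {
        "@context": "https://schema.org",
        "@type": "ClaimReview",
        "claimReviewed": claim,
        "reviewRating": {
            "@type": "Rating",
            "ratingValue": rating,   # e.g. 1 (false) .. 5 (true)
            "bestRating": 5,
            "worstRating": 1,
            "alternateName": verdict,
        },
        "author": {
            "@type": "Organization",
            "name": checker_name,
            "url": checker_url,
        },
    }

# Hypothetical claim and fact-checking organization, for illustration only.
record = make_claim_review(
    claim="City X banned bicycles in 2018.",
    verdict="False",
    rating=1,
    checker_name="Example Fact Checks",
    checker_url="https://factcheck.example.org",
)
print(json.dumps(record, indent=2))
```

A curator receiving such records can rank or label content however it likes, which preserves the response's separation of roles: the format is shared, the algorithms and UI/UX are not.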
|4/17/2018 23:03:17||Trained machines inspect web pages and calculate a credibility score, which is then published and used to guide search results and feed placement. Trained people do it too, to help train and validate the machines.|
Professional fact-checkers examine popular articles and attempt to establish the veracity of key claims via independent evidence. They publish their results, with links to citations, which are then presented to consumers near the original content.
People and organizations publish information about who they trust, and for what. This is aggregated (like PageRank) to allow the trustworthiness of many sources to be deduced starting from a few. People can be anonymized by trusted services to reduce social pressure.
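The PageRank-style aggregation of "who trusts whom" statements can be sketched with a few lines of power iteration. The graph, damping factor, and example sources below are invented for illustration; a real system would weight edges by what each source is trusted *for*.

```python
# Toy PageRank-style trust propagation over published trust statements.
# Damping factor and iteration count are conventional PageRank defaults.
def trust_scores(trusts, damping=0.85, iters=50):
    """trusts: dict mapping each source to the list of sources it trusts."""
    nodes = set(trusts) | {t for ts in trusts.values() for t in ts}
    score = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        # Teleport term: every node keeps a small baseline of trust.
        new = {n: (1 - damping) / len(nodes) for n in nodes}
        for src in nodes:
            targets = trusts.get(src, [])
            if targets:
                share = damping * score[src] / len(targets)
                for t in targets:
                    new[t] += share
            else:
                # Dangling node: spread its trust evenly over everyone.
                for n in nodes:
                    new[n] += damping * score[src] / len(nodes)
        score = new
    return score

# Hypothetical trust statements: alice and bob trust a couple of outlets.
scores = trust_scores({
    "alice": ["reuters", "ap"],
    "bob": ["reuters"],
    "reuters": ["ap"],
})
```

Starting from a few explicit statements, sources with many incoming trust edges (here the two outlets) end up with higher scores than the individuals who seeded the graph, which is exactly the "deduce many from a few" behavior the response describes.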
|4/16/2018 22:12:06||Scientific article landing pages include an enumeration of incoming links annotated with an analysis from the authors/editors/peer-reviewers about whether the citation is legitimate. These annotations affect the browser experience and search/feed rankings.|
|4/19/2018 12:53:16||Credibility of Web content can be increased by taking into account validation against open, standard, machine-readable schemas duly approved by internationally recognized standards development organizations.|
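Schema validation as a credibility signal can be illustrated with a deliberately tiny checker. The required fields below are invented for this sketch; a real deployment would validate against a schema published by a standards body (for example using JSON Schema tooling) rather than this hand-rolled check.

```python
# Minimal sketch: a record "conforms" if every required field is present
# with the expected type. Field names here are illustrative assumptions.
REQUIRED = {"headline": str, "datePublished": str, "author": str}

def conforms(record, required=REQUIRED):
    """Return True if the record satisfies the (toy) schema."""
    return all(isinstance(record.get(key), typ) for key, typ in required.items())

article = {
    "headline": "Council approves budget",
    "datePublished": "2018-04-19",
    "author": "A. Writer",
}
```

The point is not the checker itself but the signal: content that validates against an openly published, standards-approved schema gives downstream consumers something machine-checkable to anchor credibility on.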
|4/19/2018 13:02:01||Here are some thoughts offered by others with respect to the elements of credibility: https://incrediblemessages.com/five-keys-to-credibility-at-work/ & https://courses.lumenlearning.com/boundless-communications/chapter/credibility-appeals/ & https://seotradenews.com/5-elements-of-credibility/ & https://www.userlike.com/en/blog/10-crucial-elements-website-credibility|
|4/20/2018 14:00:17||Web content will be more credible when it has the attributes outlined in ISO 15489-1: integrity, reliability, authenticity, and usability.|
|4/20/2018 14:04:36||Executive Order 13642 established openness and machine-readability as the default for U.S. federal government information. One of the unrecognized implications is that schemas should be specified for all public records series. Web content will be more credible when it conforms to publicly specified schemas.|
First of all, a typology of disinformation should be defined and used to mark up web content and elements where possible. Different types of disinformation could be distinguished: "stating false facts", "providing inaccurate information", "presenting a tampered image", "posting media content in a false context", "presenting unproven claims", etc.

Once such a typology is available and in use (initially by media organizations and interested parties), it becomes possible to build large standardized corpora of annotations, which can in turn be used to train models that produce these annotations automatically. When the annotations are standardized and available in the HTML markup, browsers or browser extensions can use them to issue appropriate warnings to end users.

An important consideration is that annotations should be provided at the appropriate level of granularity: not necessarily a whole page, but a specific statement, an image and its caption, etc.
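A span-level annotation of the kind described above might look like the sketch below. The typology labels and record layout are assumptions invented for illustration (the W3C Web Annotation model would be a natural real-world carrier for such records); only the idea of targeting a character range rather than a whole page comes from the response.

```python
# Invented five-label typology mirroring the examples in the response.
TYPOLOGY = {
    "false-fact",
    "inaccurate-info",
    "tampered-image",
    "false-context",
    "unproven-claim",
}

def annotate(doc_url, start, end, label, note=""):
    """Annotate the character range [start, end) of a page, not the whole page.

    The record layout is a sketch, not a standardized format.
    """
    if label not in TYPOLOGY:
        raise ValueError(f"unknown typology label: {label}")
    return {
        "target": {"url": doc_url, "start": start, "end": end},
        "label": label,
        "note": note,
    }

# Hypothetical example: a photo caption placed in a false context.
a = annotate(
    "https://news.example/story",
    120, 198,
    "false-context",
    note="Photo is from a 2015 event, not the one described.",
)
```

Because every annotation carries a label from a closed typology and a precise target span, a corpus of such records is directly usable both as model training data and as machine-readable input for browser warnings.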
The fact that this group exists, and that Facebook and Google are involved, means that a standard really does have a chance to emerge.
Train readers through bias read-alouds. On the tech side: a browser extension with a slider covering varying scales of credibility; present data both in aggregate and as individuals; and allow people to make a list (their curated experts) or a group (like a class).