Design Solutions for Fake News

Note

Important to keep in mind...

Stay informed

Basic concepts

Propaganda

Breaking News

General Ideas

Behavioral economics and other disciplines

Human Editors

Under the Hood

Facebook

Analysis of Headlines and Content

Reputation systems

Verified sites

Distribution - Social Graph

Fact Checking

Organizations

Special databases

Interface - Design

Flags

Downrank, Suspension, Ban of accounts

Contrasting Narratives

Filter Bubbles

Verified Pages - Bundled news

Viral and Trending Stories

Patterns and other thoughts

Ad ecosystem

More ideas…

Surprising Validators

Snopes

Ideas in Spanish - Case Study: Mexico

Not just for Facebook to Solve

Media Literacy - Ideas

Media Literacy Programs

The Old School Approach - Pay to Support Real Journalism

Suggestions from a Trump Supporter on the other side of the cultural divide

First Amendment/Censorship Issues

Rating Agencies: Bi-partisan News and Information Objectivity

“Pooled” White House News Dashboard

Specialized Models

A Wikipedia for Debates

Leveraging Blockchain, Smart Contracts, Crowd Knowledge, and Machine Learning

Create a SITA for news

Knowledgeable, validated but anonymous validation and fact checking by the crowd

Outlier analysis

Transparency at user engagement

“FourSquare” for News

Delay revenue realisation for unverified news sources

Reading Corner - Resources

First Look

Social Networks

Facebook

Twitter

Google

Filter Bubbles

User Generated Content

Dynamics of Fake Stories

Click Farms

Viral content

Satire

Propaganda

Behavioral Economics

Ethical Design

Ethics

Journalism in the Era of Trump

Cybersecurity

Cultural Divide

Experiences from Abroad

Media Literacy

Resources list for startups in this space

Interested in founding

Interested in investing

Interested in advising

Interested in partnering

A Special Thank You


Hi, I’m Eli, I started this document. Add ideas below after a bullet point, preferably with attribution. Add +[your name] (+Eli) if you like someone else’s note. Bold key phrases if you can. Feel free to crib with attribution.

OF NOTE: A number of the ideas below have significant flaws. It’s not a simple problem to solve -- some of the things that would pull down false news would also pull down news in general. But we’re in brainstorm mode.


Important to keep in mind...


Stay informed 

There must be dozens of articles and studies published daily analyzing this issue we are working on. The more we know about the topic, what solutions have been implemented, etc., the better off we are.


NYT Magazine - Is social media disconnecting us from the big picture?

Nieman Lab - Obama: New media has created a world where “everything is true and nothing is true”

Taking back social media could be world changing - by @craigambrose <

Know who the key players are

Social networks, the media industry, politicians, celebrities, corporations, academics, advocacy organizations and associations, regulatory agencies, think tanks, librarians, cybersecurity experts, military, programmers, social media managers, regular folk... all have vested interest in this topic. Each can give a different perspective.

Throughout the document, you will see some symbols, simply pointers alongside @names, to serve as guides:

< verified account   < key contact  < collaborator  < resources

Delve into the underworld

One can’t assume that there is criminal intent behind every story but, when up against state actors, click farms and armies of workers hired for specific ‘gigs’, it helps to know exactly how they operate. In any realm, be it ISIS, prostitution networks, illegal drugs, etc., they are experts on these platforms.

Trust me. We did NOT learn this in college.

Future Crimes by Marc Goodman @FutureCrimes <<<

Cybersecurity Summit Stamford 2016
Munich Security Conference -

“Going Dark: Shedding light on terrorist and criminal use of the internet” [1:29:12] <<

Gregory Brower (Deputy General Counsel, FBI), Martin Hellman (Professor Emeritus of Electrical Engineering, Stanford University), Joëlle Jenny (Senior Advisor to the Secretary General, European External Action Service), Joseph P. McGee (Deputy Commander for Operations, United States Army Cyber Command), Peter Neumann (Director, International Centre for the Study of Radicalisation, King's College London), Frédérick Douzet (Professor, French Institute of Geopolitics, University of Paris 8; Chairwoman, Castex Chair in Cyber Strategy; mod.).

Keep a tally of solutions - and mess ups

Aside from this, what else has been implemented by Google, Facebook, Twitter? How have people reacted? What are they suggesting? How is the media covering this? Have there been critical turning points? The bots, so much in the news nowadays, how are they being analyzed and dealt with? What has been the experience with these… abroad?

So many questions…

Statement by Mark Zuckerberg 

November 19, 2016

[Image pending while we format] --@IntugGB

Be prepared to unlearn everything you know

For years, other countries have dealt with issues of censorship, propaganda, etc. It is useful to understand what has happened, to see what elements of their experience we can learn from. Case studies, debates, government interventions, reasoning, legislation, everything helps.

Essential here - insights from individuals who have experienced it and understand the local language and ideology.

Create alliances and partnerships

See what has been done or published that could serve as a blueprint going forward. Mentioned in this work is a special manual set up by leading journalists from the BBC, Storyful, ABC, Digital First Media and other verification experts. There are four contacts there who might be interested in this project.

One final word

Before starting, go through the document quickly to get a sense of the many areas of discussion that are developing. Preliminary topics and suggestions are being put in place but many other great ideas appear further on, unclassified for the time being while the team gets to them.

This is a massive endeavour but well worth it. Godspeed.

Basic concepts



Discussions on Twitter 

@DiffrMedia centered around this article [Nov-23-16]

161121 - WSJ - Most students don’t know when news is fake, Stanford study finds

“The study seems to have looked at sponsored content vs. news. Very important type of fake news that journalism promulgates”
  @journethics <

@trevortimm centered around this article [Nov-24-16]

161122 - Politico - The cure for fake news is worse than the disease

Compilation of basic terms

Possible areas of research and working definitions:

- fake / true
- factual / opinion
- factually true / factually untrue
- original source / meta-commentary on the source

- user generated content

- personal information

- news

- paid content

- commercial clickbait

- state-sponsored
- propaganda

- fake accounts
- fake reviews
- fake followers
- click farms
- patterns

- satire

- bias
- misinformation
- disinformation

- organic / non organic
- viral

Further reference:

161122 - Nieman Lab
“No one ever corrected themselves on the basis of what we wrote”: A look at European fact-checking sites

161122 - Medium
Fake news is not the only problem
by @gilgul Chief Data Scientist @betaworks <<


Breaking News

Watch developments with this story… intriguing to see this in real time.

New report provides startling evidence of a massive Russian propaganda operation using Facebook and fraud sites - MT @profcarroll Associate Professor of Media Design @parsonsdesign The New School @thenewschool

161124 - Washington Post
Russian propaganda effort helped spread ‘fake news’ during election, experts say 
MT @jonathanweisman Deputy Washington Editor, The New York Times via Centre for International Governance Innovation @CIGIonline 

Crucial to read this article in context of Russian psyop allegations. FB weaponized targeting by susceptibilities - MT @profcarroll

161119 - NYT
Opinion: The secret agenda of a Facebook quiz


It's now imperative that the Electoral College get schooled on how Facebook may have been used to swing the election - @profcarroll

Also consider the 98 personal data points that attackers can use on victims for propaganda campaigns on Facebook - @profcarroll

160819 - The Washington Post
98 personal data points that Facebook uses to target ads to you

Unfortunately, I fear media will be reluctant to poke at the underlying ad-targeting issues here because their revenue still depends on it. But given the national emergency of a propaganda revelation, journalists must whittle this down to data privacy, because that is the security solution here.

Precision-targeted ads based on a duopoly’s concentrated user data profile are a national security risk for propaganda attacks. Now we know. - @profcarroll

Good thing Germans believe in defending their privacy, knowing why it must be cherished and always defended: @profcarroll

161124 - Reuters
Merkel fears social bots may manipulate German election

Whether Russia tipped the election or not, it’s interesting to realize social media offers a way to influence countries’ elections. Eliminating fake news is a matter of National Security, not just an administrative problem for Facebook. - Paul Graham @Paulg >>

Answering the question:

But how does one or many curtail this massive violation of our trust and common morality?

From the New Yorker - MT @harrisj: Mission Accomplished

160712 - The New Yorker

The real paranoia inducing purpose of Russian Hacks

When I began researching the story, I assumed that paid trolls worked by relentlessly spreading their message and thus indoctrinating Russian Internet users. But, after speaking with Russian journalists and opposition members, I quickly learned that pro-government trolling operations were not very effective at pushing a specific pro-Kremlin message—say, that the murdered opposition leader Boris Nemtsov was actually killed by his allies, in order to garner sympathy. The trolls were too obvious, too nasty, and too coördinated to maintain the illusion that these were everyday Russians. Everyone knew that the Web was crawling with trolls, and comment threads would often devolve into troll and counter-troll debates.

The real effect, the Russian activists told me, was not to brainwash readers but to overwhelm social media with a flood of fake content, seeding doubt and paranoia, and destroying the possibility of using the Internet as a democratic space. One activist recalled that a favorite tactic of the opposition was to make anti-Putin hashtags trend on Twitter. Then Kremlin trolls discovered how to make pro-Putin hashtags trend, and the symbolic nature of the action was killed. “The point is to spoil it, to create the atmosphere of hate, to make it so stinky that normal people won’t want to touch it,” the opposition activist Leonid Volkov told me.


Russian propaganda effort helped spread ‘fake news’ during election, experts say

Never mind the algorithms: the role of click farms & exploited digital labor in Trump’s election

General Ideas

Behavioral economics and other disciplines

Human Editors

Under the Hood


FB filter.png

Facebook message that now shows for the link provided by Snopes to the original source of the hoax.

Medium - How I detect fake news by @timoreilly

NPR - Post-Election, Overwhelmed Facebook Users Unfriend, Cut Back

Analysis of Headlines and Content

Reputation systems

Authority of shares, authority of sources...

It would seem, given recent developments, that even that is in question. What happens when someone you trust (and is also an expert) shares info from someone you trust (who is also an expert)?

“Someone needs to research the quality of fake news about fake news” -


Build an algorithm that privileges authority over popularity. Create a ranking of authoritative sources, and score the link appropriately. I’m a Brit, so I’ll use British outlets as an example: privilege the FT with, say, The Times, along with the WSJ, the WashPo, the NYT, the BBC, ITN, Buzzfeed, Sky News, ahead of more overtly partisan outlets such as the Guardian, the Telegraph, which may count as quality publications but which are more inclined to post clickbaity, partisan bullshit. Privilege all of those ahead of the Mail, the Express, the Sun, the Mirror.

Also privilege news pieces above comment pieces; privilege authoritative and respected commentators above overtly partisan commentators. Privilege pieces with good outbound links - to, say, a report that’s being used as a source rather than a link to a partisan piece elsewhere.

Privilege pieces from respected news outlets above rants on Medium or individual blogs. Privilege blogs with authoritative followers and commenters above low-grade ranting or aggregated like farms. Use the algorithm to give a piece a clearly visible authority score and make sure the algorithm surfaces pieces with high scores in the way that it now surfaces stuff that’s popular.

Of course, those judges of authority will have to be humans; I’d suggest they’re pesky experts, senior journalists with long experience of assessing the quality of stories, their relative importance, etc. If Facebook can privilege popular and drive purchasing decisions, I’m damn sure it can privilege authority and step up to its responsibilities to its audience as well as its responsibilities to its advertising customers. @katebevan
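The authority-over-popularity ranking sketched above can be illustrated in a few lines of Python. This is a minimal sketch, not anyone's real algorithm: the outlet list, the scores, and the 80/20 weighting are all invented for the example, standing in for the human-assigned authority ratings the proposal describes.

```python
# Sketch of ranking links by authority rather than popularity.
# Scores would come from the human judges described above;
# these values are illustrative only.
AUTHORITY = {
    "ft.com": 0.95,
    "bbc.co.uk": 0.90,
    "theguardian.com": 0.70,
    "dailymail.co.uk": 0.40,
}
DEFAULT_AUTHORITY = 0.20  # unknown outlets start low

def score(link):
    """Blend authority with popularity, weighting authority heavily."""
    base = AUTHORITY.get(link["domain"], DEFAULT_AUTHORITY)
    popularity = min(link["shares"] / 10_000, 1.0)  # capped share signal
    return 0.8 * base + 0.2 * popularity

def rank_feed(links):
    """Surface high-authority pieces the way feeds now surface popular ones."""
    return sorted(links, key=score, reverse=True)

feed = [
    {"domain": "dailymail.co.uk", "shares": 9000},
    {"domain": "bbc.co.uk", "shares": 1200},
]
ranked = rank_feed(feed)  # the BBC piece outranks the more-shared tabloid piece
```

With this weighting, popularity acts mainly as a tie-breaker between outlets of similar standing rather than a signal that can lift clickbait to the top on its own.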

I doubt FB will get into the curating business, nor do they want to be accused of limiting free speech. The best solution will likely involve classifying Verified News, Non-Verified News, Offensive News.

Offensive News should be discarded and that would likely include things that are highly racist, sexist, bigoted, etc.  Non-Verified News should continue with a “Non-Verified” label and encompass blogs, satire, etc.  Verified News should include major news outlets and others with a historical reputation for accuracy.

How? There are a variety of ML algorithms that can incorporate NLP, page links, and cross-references of other search sites to output the three classifications. Several startups use a similar algorithm of verified news sources and their impact for financial investing (Accern, for example).
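A toy version of that three-bucket labelling might look like the sketch below. The whitelist and the 5% flag-ratio threshold are invented for illustration, standing in for the real NLP/ML models the paragraph above refers to.

```python
# Minimal sketch of the Verified / Non-Verified / Offensive buckets.
VERIFIED_OUTLETS = {"reuters.com", "apnews.com", "bbc.co.uk"}  # illustrative

def classify(domain, flag_count, total_views):
    """Return one of the three labels proposed above."""
    if domain in VERIFIED_OUTLETS:
        return "Verified News"
    # Stand-in for an offensive-content model: heavy user flagging.
    if total_views and flag_count / total_views > 0.05:
        return "Offensive News"
    # Blogs, satire, unknown outlets keep circulating, but with a label.
    return "Non-Verified News"
```

The important design point is that only the "Offensive" bucket is suppressed; everything else stays visible, just labelled.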

We could set up a certification system for verified news outlets. Similar to Twitter, where there are a thousand Barack Obama accounts but only one ‘verified’ account. A certification requirement might include the following: +@IntugGB

Possible requirements

(time in existence, number of field reporters in each country/locale should be required (I’m not a national

Verified sites

Distribution - Social Graph

Fact Checking


Special databases

Linking corrections to the fake news they correct:


161122 - Nieman Lab
“No one ever corrected themselves on the basis of what we wrote”: A look at European fact-checking sites
Via Nieman Journalism Lab at Harvard @NiemanLab

Rise of the fact checker - A new democratic institution?
Via Nieman Journalism Lab at Harvard @NiemanLab

Interface - Design

This may achieve the following:


Downrank, Suspension, Ban of accounts

Contrasting Narratives  

Filter Bubbles


Read more:
Trump won because he listened to the American people. It's really that simple 
@johncardillo via @DiffrMedia <<

Stewart Butterfield @Stewart, Co-founder of @Flickr and Slack @SlackHQ CEO, wrote on Oct. 2 the following:

Had a profound offline weekend at a retreat contemplating digital distraction and design ethics in the attention economy.

It’s a nearly intractable problem: tug at any thread, you start pulling at capitalism or democracy or individual freedoms.

[It] was amazing to see the shared perception of magnitude of the problem and the depth of commitment to find solutions from industry leaders.

One frame:


Verified Pages - Bundled news 

Viral and Trending Stories

Patterns and other thoughts 

Thread by Heather

Ad ecosystem

More ideas…


Surprising Validators

Related to cross-partisan / cross-spectrum notes above - Richard Reisman (@rreisman)

This outlines some promising strategies for making the filter bubble more smartly permeable and making the echo chamber smarter about what it echoes. Summarizing from my 2012 blog post: Filtering for Serendipity — Extremism, “Filter Bubbles” and “Surprising Validators”:

...Quoting Sunstein:

People tend to dismiss information that would falsify their convictions. But they may reconsider if the information comes from a source they cannot dismiss. People are most likely to find a source credible if they closely identify with it or begin in essential agreement with it. In such cases, their reaction is not, “how predictable and uninformative that someone like that would think something so evil and foolish,” but instead, “if someone like that disagrees with me, maybe I had better rethink.”

Our initial convictions are more apt to be shaken if it’s not easy to dismiss the source as biased, confused, self-interested or simply mistaken. This is one reason that seemingly irrelevant characteristics, like appearance, or taste in food and drink, can have a big impact on credibility. Such characteristics can suggest that the validators are in fact surprising — that they are “like” the people to whom they are speaking.

It follows that turncoats, real or apparent, can be immensely persuasive. If civil rights leaders oppose affirmative action, or if well-known climate change skeptics say that they were wrong, people are more likely to change their views.

Here, then, is a lesson for all those who provide information. What matters most may be not what is said, but who, exactly, is saying it.

...My post picked up on that:

This struck a chord with me, as something to build on. Applying the idea of “surprising validators” (people who can make us think again):

This provides a specific, practical method for directly countering the worst aspects of the echo chambers and filter bubbles…

This offers a way to more intelligently shape the “wisdom of crowds,” a process that could become a powerful force for moderation, balance, and mutual understanding.
We need not just to make our “filter bubbles” more permeable, but much like a living cell, we need to engineer a semi-permeable membrane that is very smart about what it does or does not filter.

Applying this kind of strategy to conventional discourse would be complex and difficult to do without pervasive computer support, but within our electronic filters (topical news filters and recommenders, social network services, etc.) this is just another level of algorithm.
Just as Google took old academic ideas about hubs and authority, and applied these seemingly subtle and insignificant signals to make search engines significantly more relevant, new kinds of filter services can use the subtle signals of surprising validators (and surprising combinators) to make our filters more wisely permeable.

(My original post also suggested broader strategies for managed serendipity: “with surprising validators we have a model that may be extended more broadly — focused not on disputes, but on crossing other kinds of boundaries — based on who else has made a similar crossing…”)

- Richard Reisman (@rreisman) (#SurprisingValidators)
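As a thought experiment, the surprising-validators idea could be wired into a filter as a simple selection rule: among the sources a user already trusts, surface the ones that disagree with the user's current stance. Everything here is a hypothetical data model (per-user affinity scores, -1/0/+1 stances on a claim), not a real platform API.

```python
# Hedged sketch: pick "surprising validators" - trusted sources that
# disagree with the user - rather than agreeable ones.
def surprising_validators(user_affinity, stances, user_stance, k=3):
    """user_affinity: {source: trust 0..1}; stances: {source: -1, 0 or 1}."""
    candidates = [
        (affinity, source)
        for source, affinity in user_affinity.items()
        if stances.get(source, 0) == -user_stance  # disagrees with the user
    ]
    # Most-trusted dissenters first: they are hardest to dismiss,
    # which is exactly Sunstein's point about credibility.
    return [source for _, source in sorted(candidates, reverse=True)[:k]]
```

The filter stays permeable in a targeted way: it does not flood the user with opposing views generally, only with dissent from sources the user cannot easily write off.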


Ideas in Spanish - Case Study: Mexico

Hi, I can only add some ideas in Spanish, but I believe there is a way to identify fake news by analyzing its patterns of propagation. I’ll be glad if someone can translate. >> @LoQueSigue

Years ago I tried to develop software that could fight this battle on Twitter, and there are some ideas that could maybe also be useful on Facebook:

Súmate como socio para crear un medio 3.0 en México (Join as a partner to create a 3.0 media outlet in Mexico) [Full screen]

Please believe me: in Mexico we live in a kind of future where fake news and armies of bots and trolls have already been at work for some years, and maybe the knowledge developed here could help around the world.

Many fake news stories leave a propagation trail that reflects how they were shared. That is how I have been able to identify whether they were generated organically, that is, out of people’s legitimate interest in the story, or by a team of people dedicated to spreading false information. Here are a couple of examples with Twitter data that could equally well be used on Facebook. – @LoQueSigue_

More case examples (in Spanish) here:

The day society defeated the bots: #EstamosHartosdeEPN vs #EstamosHartosCNTE:

This is how the massive bot attack on @GIEIAYOTZINAPA unfolded. Demonstration

Trace of a fake news story / “non-organic” trending topic

Case study on Twitter:
People were paid to spread false information about a journalist.

Link to full screen video:
Carmen Aristegui ¿verdad o manipulación? (truth or manipulation?) - [Full screen]

Captura de pantalla 2016-11-17 a la(s) 16.34.53.jpg

Trace of a real “organic” news story. A lot of people sharing real information, connecting communities

Captura de pantalla 2016-11-17 a la(s) 16.35.48.jpg

Source? Because if true, hoo boy, we can do this with graph theory. -- N. +@IntugGB

@LoQueSigue_ --- Those graphs are my own; I generated them to try to explain. The first one is about a trending topic generated yesterday spreading fake news about a popular Mexican journalist. The second one is about a popular campaign in Mexico.

You can find something about this idea on Wired: 

Pro-Government Twitter Bots Try to Hush Mexican Activists
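A rough sketch of the propagation-trail heuristic described in this case study: coordinated (non-organic) pushes tend to arrive in tight time bursts from a small, repetitive set of accounts, while organic spread is staggered and comes from diverse sharers. The features and thresholds below are illustrative guesses, not calibrated values, and a real system would analyze the share graph itself, as in the visualizations above.

```python
# Illustrative heuristic for "organic vs. non-organic" propagation traces.
from statistics import pstdev

def looks_organic(share_times, sharer_ids, burst_std=30.0, min_unique=0.8):
    """share_times: seconds since the first share; sharer_ids: account ids."""
    # Coordinated pushes cluster tightly in time -> low spread.
    spread = pstdev(share_times) if len(share_times) > 1 else 0.0
    # Bot campaigns reuse accounts -> low ratio of unique sharers.
    unique_ratio = len(set(sharer_ids)) / len(sharer_ids)
    return spread > burst_std and unique_ratio >= min_unique
```

Even this crude version separates a burst of near-simultaneous shares from the same few accounts from a story spreading over hours through distinct communities.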

Not just for Facebook to Solve

Some background: The INTUG Story

I wrote up a couple of suggestions here…

A Suggestion on how Facebook could fix its Fake News problem

In short:

Media Literacy - Ideas


Facebook, Google, Twitter et al need to be champions for media literacy by @DanGillmor

Telegraph - Humans have shorter attention span than goldfish, thanks to smartphones

Media Literacy Programs 

There has been a *lot* of research in this area. I’m working on a PhD in this topic.

Broadly, there are three ways of assessing whether information is in fact trustworthy - Credibility-based, Computational, and Crowdsourced.

All of them can be gamed and hacked, so FB does have a tough problem.
I think that users show different patterns of interaction with information depending on whether they are trying to find an answer to a question or merely trying to support a bias. The trick is in teasing out how to reliably tell whether the user you’re watching at the moment is doing one or the other. Aggregating the traces of the people who are looking hard for answers might wind up being very helpful.
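One way to picture the answer-seeking vs. bias-confirming distinction is as a session classifier. The assumption (mine, for illustration, not a finding from the research above) is that answer-seekers consult ideologically diverse sources and dwell on them; the features and cutoffs are invented.

```python
# Toy sketch of separating answer-seeking from bias-confirming sessions.
def looks_like_answer_seeking(session):
    """session: list of (source_lean, dwell_seconds), lean in [-1, 1]."""
    leans = [lean for lean, _ in session]
    avg_dwell = sum(t for _, t in session) / len(session)
    diversity = max(leans) - min(leans)  # span of leanings consulted
    # Diverse sources plus real reading time suggests genuine inquiry.
    return diversity >= 1.0 and avg_dwell >= 30
```

Aggregating the traces of sessions this classifier accepts would give exactly the "people looking hard for answers" signal the paragraph above proposes.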

A pop-up box that comes up when someone is about to share an article that’s been frequently flagged as fake, saying “many Facebook users have reported this article contains false information - do you still want to share it?”, could slow the spread of bad information without letting flagging-as-warfare totally drown out the opposition.
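The share-time prompt above can be sketched as a single gate. The flag threshold is a made-up parameter; a real system would normalise flags by reach and guard against coordinated flagging campaigns, which is exactly the "flagging as warfare" concern.

```python
# Sketch of the share-time warning prompt described above.
FLAG_THRESHOLD = 100  # illustrative; should scale with article reach

def share_prompt(flag_count):
    """Return a warning message if the article is heavily flagged, else None."""
    if flag_count >= FLAG_THRESHOLD:
        return ("Many Facebook users have reported this article contains "
                "false information - do you still want to share it?")
    return None  # share proceeds silently
```

Crucially the prompt only adds friction; it never blocks the share, so flagging cannot be weaponized to silence a story outright.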

Evolve the platform’s objective

The biggest problem with fake news is that many of the links looked like real news. Sure, there were the hyper-partisan sites that were clearly iffy, but I’m talking about ones like and the like. This needs to be rectified, and can be in a bunch of ways, mostly a sort of blacklist with keywords from legit news sites - for example, ‘abcnews’ can be keyed out of any other permissible link. Secondary to this, a flagging feature which allows users (through an algorithm) to flag a story as ‘false, misleading, inaccurate’, etc. Thirdly, FB/Twitter/et al need to take it upon themselves to ban hate-speech posting/sharing.

Second, there needs to be a human editorial staff, hands down.  The problem with this is you have to hire a legit staff of non-partisan journalists that can curate the sidebar news section and deal with other flags that pop up.

Third, a ‘trusted source’ white-list.  National TV, national print, major local papers, etc. can be whitelisted and re-approved (annually, bi-annually, etc) to post and the ‘flagging’ feature would have to be remarkably high to merit a review of the article posted/shared.

Lastly, there needs to be a clear divide between “news” and “opinion”. TV news, especially Fox News, has taken the “news” and twisted it into “opinion-based consumption.” News organizations need to be held accountable for clearly stating whether an article on FB/Twitter/etc is NEWS or OPINION. Hard news, facts, fact checks, quotes, etc. that are said/done/reported on as news should be posted and verified as that. If the NYT/WaPo/Fox/MSNBC/etc posts an editorial or opinion piece, label it as such. -- Ian
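The first point above, keying legitimate outlet names out of lookalike links (the abcnews.com.co pattern), can be sketched as a keyword check. The keyword-to-domain map is illustrative, not a real whitelist, and a production check would also need proper hostname parsing.

```python
# Sketch of flagging lookalike domains that trade on a trusted outlet's name.
TRUSTED = {
    "abcnews": "abcnews.go.com",
    "nytimes": "nytimes.com",
}

def is_lookalike(domain):
    """Flag domains that contain a trusted outlet's name but are not it."""
    for keyword, real_domain in TRUSTED.items():
        if keyword in domain and not (
            domain == real_domain or domain.endswith("." + real_domain)
        ):
            return True
    return False
```

So `abcnews.com.co` is caught while the genuine `abcnews.go.com` passes, which is the "keyed out of any other permissible link" behavior described above.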

I think the most feasible and therefore most likely implemented solution will be one that:

  1. Utilizes a simple rating system, like others have suggested
  2. Is logistically easy for Facebook to implement
  3. Partners with existing platforms / harnesses existing resources such as
  4. If including member ratings at all, minimizes their impact to avoid a "feedback war"
  5. Has a two-pronged mechanism to efficiently evaluate existing "news" sources and accurately rate new ones.

— zach.fried@gmail


Special Campaigns

Promote relevant social issues applying viral techniques. - @IntugGB

All too often, people are unaware of what is stake either because they do not understand or it simply does not interest them.

Case Study: India 

(Info taken from website)

The SaveTheInternet campaign is a volunteer-led effort to uphold a fair and open Internet. Over one million Indian citizens have participated at one point or another in the campaign, which was started by a group of like-minded individuals across India in May 2015.




AIB: Save the Internet [Fullscreen] 

The Old School Approach - Pay to Support Real Journalism

Thread starting with “Its tough when media sells” is being moved to “Suggestions from a Trump Supporter”.  Excellent insights - @linmart

Suggestions from a Trump Supporter on the other side of the cultural divide

Please note this section has a number of contributions with very valuable perspectives. We ask that the text be left as is, intact, with no modifications of their personal opinions. Thank you -@linmart

I’m not clear on whether you want comments from any member of the public or only full-time researchers and specialists that you know. But since I managed to find my way in here, I’ll go ahead and offer some good faith suggestions to help your project. I have a background in user interface design and software engineering, but I’m not here to comment as a specialist only as an interested observer from the other side of the cultural divide.


I think you can get buy-in from people on my side of the divide to stop circulating dubious news if you devise a process perceived to be fair and open.


Some Suggestions:


- Avoid devising processes or algorithms that rely on sites like Snopes or fact-check. The quality of the debunking work on those sites is uneven, excellent at times horrible at others. It is the reason many people on my side of the cultural divide do not consider those sites credible. Any debunking tied to snopes will be dismissed by many people for legitimate reasons.


- Use as much transparency as possible.




- Ensure there is real and meaningful ideological diversity on human teams.




Adding some observations to this, in the context of what you just mentioned (underlined text). All points I am identifying with my name but please add any additional ones, as needed. The comments function I am not using, as I need this to appear in the preview - @linmart, @IntugGB

Thread posted under The Old School Approach has been transferred to this section: - @linmart

It’s tough when media essentially sells their stories to viewers. You are always going to have those people trying to push out something that isnt worth anything but is really flashy just to turn a quick dollar. This is also the problem in most media today. The press should not collude with gov officials. The press should report the facts against ALL officials. Media figure heads should not be pushing their opinions on the masses.  I shouldn’t have to dig into the internet to find the truth about people. The press should be telling me personal histories. Especially during the elections. 

I heard all of this negative press for trump and what he did in the past and the mean things he says. But I never heard anything about hillarys past scandals. Lets not forget that they DO exist. Its not conspiracy or lies. Its in the headlines of newspapers from decades ago. Yet I never heard an ounce of that. Why? I understand that trump was not a great person morally. But most of us common men and women aren’t, and if you think that you are, then maybe you should try practicing some humility. Lets not set these ideals that we should have some immaculately clean personas running for president. Clinton was just as bad as trump in her own ways.

Lets dial it back to JFK and his address to the press. This is what media should be like. Reporting without bias. Without opinions. Telling facts to the American people. You cant call someone a racist just because they say a stereotyped blurb about another race. That’s not racism. We need the media now more than ever to start reporting unbiased news. Facts, fair and balanced facts and not embellish them with opinion or heresay. Its so dangerous to the people as a whole. Put away your personal feelings and report fact, and if you want you bash a candidate like trump you should dig some dirt up on his opponent as well. People didn’t vote for racism or bigotry or against gays or women. They voted against the governmental system, against a failing and biased press.

As soon as we stop selling these dramatic stories that ignorant americans eat up, then we will be able to distinguish fake news from real news. Embellishing truth distorting words, these are tactics of a corrupt press. Yet these prevail in almost every media outlet in the nation. Why? Because it sells. People love drama in their dull lives and media outlets know this. They need numbers and viewers. It makes money. The press isn’t in it to help the people out anymore. Its just in it for the money. Just like the fake news sites.

So really, can they be stopped? No. Not in this day and age. Not until we unwind the current state of the media and transform it into something informative and real. Don’t speculate on someone’s personality. Report the facts, even if it’s something bad about our government. Go against the grain. Give us real stories; keep your opinions out of it.

The masses love personalities in the media; that’s why we all know their names. But too often these personalities get in the way of separating what’s real from what’s the personality’s opinion. We identify with them. We emulate them. If they say “I don’t like Clinton” or “I don’t like Trump,” then viewers will mirror that. The media is in a state of distress. It’s no wonder it is falling to these barbaric hordes of Enquirer-type outlets: people can’t distinguish real from fake.

The media is all hype and passion and opinion. Even when it is laced with fact, there is a heavily biased undertone to all of it. If you are intelligent enough to see this, you may be safe. If not, you are going to end up being the one who believes the fake news headlines. Please, please, please put away your hate and divisiveness toward the other side of the opinion, and practice listening more than you speak. Listen and try to understand both sides. See the facts on both sides; don’t dismiss what you don’t hear from your circle as BS, and if you hear something from the news that interests you, dig a little further and see how much truth there is to it before you plaster it all over the brains of your cohorts like it’s fact.

Stop labeling people, too; it’s disgusting. In a world where we are so passionate about gender pronouns and not offending people, we need to look at ourselves. Are we labeling someone corrupt? Racist? Sick? Bigot? Stop. We honestly don’t know more about that person than what the press tells us, and that is no ground to stick labels on people. Think about a bad thing that you said or did once or maybe twice. Think about the press getting hold of that and blasting the world with labels for you based on that mistake. Is that who you are? No, you are human. Just like I expect our president to be. As soon as the press starts being compassionate to people as humans, we may be able to defeat fake news.


Apologies to all my friends. Call it “Proof of concept” --@linmart

How about testing out everything we are discussing on the platform itself?

Thread on Facebook by @AndreaKuszewski (Nov 24, 2016), in reference to an article posted by the Washington Times.

How long before the white working class realizes Trump was just scamming them?


The Washington Post - This researcher programmed bots to fight racism on Twitter. It worked.

First Amendment/Censorship Issues

While we are focused on rooting out fake news - with legitimate concern - information, real or fake, is protected under the First Amendment. More information is considered the antidote to bad information. SCOTUS took that to extremes with its “more money = more speech” philosophy.

That all said, the road to hell is often paved with good intentions. While I support the endeavor to ensure quality information is distributed, we must also be cautious not to lead ourselves into the waters of censorship. What may start out as a framework for determining truth or credibility could easily - especially under this Administration - turn far more Orwellian. The tools and algorithms we develop for good could be used for straight-up censorship OF the truth.

Regardless of what comes of the overall debate around fake news, we must build in transparency, accountability and a process that ensures that the purpose of the First Amendment - to engage and inform our citizenry with a marketplace of ideas - continues without censorship. +Diane R +@linmart


NYT - Facebook said to create censorship tool to get back into China

Rating Agencies: Bi-partisan News and Information Objectivity

Could major news producers, distributors and advertisers come together to fund a ratings agency system, organized to objectively analyze the spread of misinformation in American society?

Working like financial credit agencies or the Better Business Bureau, could ratings agencies provide valuable services to both consumers and organizations?

“Pooled” White House News Dashboard

Just as there is a White House Pool that covers the President, I would like to propose something similar: a new kind of aggregation site for specific topics, such as coverage of the President. In that case, the “page” might have the following sections and draw on the existing reporters and publications that are part of the White House Press Pool.

The site / page could include:

While it may be difficult to encourage competing publications to collaborate on this initiative, it may help highlight the different and competing narratives that are circulating. It may also be a way of having a neutral place to debunk news articles. So, when a friend shares something on social media that is incorrect (example: Trump incorrectly stating how much he self-funded his campaign), you can go to this pooled site, look at relevant articles and suggest your friend go there to confirm.

This is not a perfect approach and doesn’t address all the issues recently raised (example: Separating fact from commentary). Curious to read comments from others on something like this.

Specialized Models 

A Wikipedia for Debates

I've been posting some articles about an idea I've been evolving for the last five years. It would use almost every idea posted here, with a couple of distinctions - @bigokro +@IntugGB

  1. Not Facebook - FB's business model is just wrong for what we want. They want to entertain, and make money that way.
  2. Completely open and transparent, shared by all
  3. Adversarial - Even the fact-checkers would be fact-checked
  4. Crowd-sourced - not just relying on algorithms
  5. Other sites, like FB and news outlets, would permit users to link out to the "real" debate whenever one pops up inside their platform

Here is my quick write-up: Where is the Wikipedia for Debates?

And some more information:

I'll be adding more details in the near future.

Leveraging Blockchain, Smart Contracts, Crowd Knowledge, and Machine Learning 

Below are some ideas we have been working on for a while: @nickhencher

Create a SITA for news

Without collective skin in the game it is unlikely that this initiative alone will work; it should offer bottom-line benefit. Fake news is not the only problem.

SITA, or Société Internationale de Télécommunications Aéronautiques, was founded in February 1949 by 11 airlines to achieve shared infrastructure cost efficiency by combining their communications networks.

A shared cooperative initiative for news would seem to offer wide and far-reaching benefits.

Knowledgeable, validated but anonymous validation and fact checking by the crowd

News articles are graphed and outliers are … (?)

Graphs of users (checkers) are created, scoring and matching them against the articles being questioned: interests, location, knowledge.
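The matching step above could be sketched as follows. This is a minimal illustration, not a spec: the profile fields, tag sets, and weights are all assumptions invented for the example.

```python
# Hypothetical sketch: match fact-checkers to a questioned article by
# comparing simple profile tag sets (interests, expertise, region).
# Field names and blend weights are illustrative assumptions.

def jaccard(a: set, b: set) -> float:
    """Overlap between two tag sets (0.0 = disjoint, 1.0 = identical)."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

def match_score(checker: dict, article: dict) -> float:
    """Weighted blend of interest overlap, topic knowledge, and locality."""
    interests = jaccard(set(checker["interests"]), set(article["topics"]))
    knowledge = jaccard(set(checker["expertise"]), set(article["topics"]))
    locality = 1.0 if checker["region"] == article["region"] else 0.0
    return 0.4 * interests + 0.4 * knowledge + 0.2 * locality

def rank_checkers(checkers, article, top_n=3):
    """Return the best-matched checkers for a questioned article."""
    return sorted(checkers, key=lambda c: match_score(c, article),
                  reverse=True)[:top_n]
```

In practice the "graph" would be richer than flat tag sets, but the idea is the same: route each questioned article to the checkers whose interests, knowledge, and location best fit it.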

Use BlockOneId (Reuters is open to the use of this technology, contact: Ash) to ensure anonymity of fact checkers and ensure collusion is prevented.

Fact checkers have to be rewarded - blockchain and smart contracts can facilitate this.

Outlier analysis

Leverage machine learning to graph stories. Look for stories that are outliers, flag them as exceptional, and begin further validation checks; these can be both automated and human (crowd). Reward checkers.
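As a minimal sketch of that outlier step: score each story against the population on some engagement feature and flag the extremes for further validation. The feature (shares per hour) and the z-score threshold are assumptions for illustration; a real system would use richer features and a proper model.

```python
# Illustrative sketch: flag stories whose share velocity is an extreme
# outlier relative to the population, queueing them for automated or
# crowd validation. Feature name and threshold are assumed, not specified.
from statistics import mean, stdev

def flag_outliers(stories, threshold=3.0):
    """Return stories whose shares-per-hour z-score exceeds the threshold."""
    rates = [s["shares_per_hour"] for s in stories]
    mu, sigma = mean(rates), stdev(rates)
    if sigma == 0:
        return []
    return [s for s in stories
            if abs(s["shares_per_hour"] - mu) / sigma > threshold]
```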

Transparency at user engagement

Fact checking is not simple and is time consuming; assuming readers will do it themselves is not a solution.

Assume that fake news is going to become more sophisticated and weaponised. At the moment this initiative seems focused on static event news; this is going to move to live events, and there will be consequences: fake news consumed at an event can quickly turn a demo into a riot. Location data will become essential when validating eyewitness accounts.

“FourSquare” for News

Hand in hand with the above, it should be possible to allow positive validation of an article: if someone reads an article, they can check in as having read it, and this engagement can then be taken down multiple paths.
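A minimal sketch of the check-in idea, assuming nothing beyond "record who read what and count distinct readers" (all names here are hypothetical):

```python
# Hypothetical sketch of the "check in as having read it" idea:
# record reader check-ins per article and expose a positive-validation
# count that downstream features could build on.
from collections import defaultdict

class ReadCheckins:
    def __init__(self):
        self._readers = defaultdict(set)  # article_id -> set of user_ids

    def check_in(self, user_id: str, article_id: str) -> None:
        """Register that a user has read (and implicitly vouches for) an article."""
        self._readers[article_id].add(user_id)

    def validations(self, article_id: str) -> int:
        """Number of distinct readers who have checked in on this article."""
        return len(self._readers[article_id])
```

The set semantics make repeat check-ins by the same user idempotent, which matters if the count feeds a trust signal.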

Delay revenue realisation for unverified news sources

Turn the model on its head and give the fact checkers the revenue from the fake news stories. One reason (excluding state-sponsored actors) these stories are produced is money; delay the money and change the terms.

(similar to the YouTube model for pirated content)
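The escrow mechanics could look something like this. A hedged sketch only: the class, verdict values, and payout split are invented for illustration, not drawn from any existing system.

```python
# Hypothetical sketch of delayed revenue realisation: ad revenue for
# stories from unverified sources is held in escrow. If the story is
# verified, funds go to the publisher; if it is ruled fake, they are
# split among the fact checkers instead (the YouTube-style claim model).

class RevenueEscrow:
    def __init__(self):
        self.held = {}       # story_id -> amount currently held
        self.payouts = []    # (recipient, amount) payout log

    def accrue(self, story_id, amount):
        """Hold revenue instead of paying the unverified source immediately."""
        self.held[story_id] = self.held.get(story_id, 0) + amount

    def resolve(self, story_id, verdict, publisher, checkers):
        """Release held funds: to the publisher if real, to checkers if fake."""
        amount = self.held.pop(story_id, 0)
        if verdict == "real":
            self.payouts.append((publisher, amount))
        else:
            share = amount / len(checkers)
            self.payouts.extend((c, share) for c in checkers)
```

On a blockchain this resolve step would be a smart contract triggered by the verification verdict, which is what makes the checker reward credible without a trusted middleman.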


Wikipedia: How to Identify Reliable Sources: (Mel @mkramer)

Other ideas

Reading Corner - Resources

Developing stories, research, ongoing initiatives, tools, etc. all related with the project.

Please post links with a reference to who posted them. MT indicates the article was brought in from a post on Twitter. Some entries include a description of the source, along with areas of expertise.

< verified account  << key contact  <<< collaborator

First Look

Trending now…

161113 - Medium - How the Trump campaign built an identity database and used Facebook ads to win the election  Joel Wilson @MedicalReport Consumer protection litigator. Former deputy attorney general in Trenton. Via @WolfieChristl Researcher, activist - updated

161123 - The Drive - These are the lobbyists behind the site attacking Elon Musk and Tesla via @ElonMusk Tesla, SpaceX, SolarCity, PayPal & OpenAI

Social Networks


161125 - The Guardian - Facebook doesn't need to ban fake news to fight it via @charlesarthur Freelance tech journalist; The Guardian's Technology editor 2009-14 <<

161123 - MIT Review - Facebook’s content blocking sends some very mixed messages via @techreview <

161122 - Reuters - Facebook builds censorship tool to attain China re-entry MT @dillonmann Communications Director @webfoundation <

161122 - NYT - Facebook said to create censorship tool to get back into China - MT @lhfang Investigative Journalist. @theintercept

161119 - Recode - Here’s how Facebook plans to fix its fake-news problem - Steffen Konrath @LiquidNewsroom <

160520 - Guardian - The inside story of Facebook’s biggest setback - MT @GrahamBM << Founder of Learning Without Frontiers (LWF) 


161123 - VB - Twitter Cortex team loses some AI researchers MT @LiquidNewsroom <

17117 - The Washington Post - This researcher programmed bots to fight racism on Twitter. It worked. MT @mstrohm


Filter Bubbles

161122 - NYT Magazine - Is social media disconnecting us from the big picture? - MT Howard Riefs @hriefs Director, Corporate Communications @SearsHoldings  

161120 - NPR - Post-election, overwhelmed Facebook users unfriend, cut back - MT @newsalliance <

161116 - Tiro al aire: Romper la burbuja (A shot in the air: bursting the bubble) - by @noalsilencio

The Filter Bubble: What the Internet is hiding from you  Slide presentation by @EliPariser <<< MT @noalsilencio

Automated Systems

Algorithms

161123 - Medium - Detecting fake viral stories before they become viral using FB API by @baditaflorin <<  Data Scientists at Organised Crime and Corruption Reporting Project (OCCRP)  

161123 - Medium - How I detect fake news by @timoreilly << Founder and CEO of O’Reilly Media @OReillyMedia  

Common Crawl - @msukmanowsky <
Page Rank - authority of web domain - @timoreilly via @elipariser

User Generated Content

161121 - Slate - Countries don't control the Internet. Companies do.

141023 - Wired - The laborers who keep dick pics and beheadings out of your Facebook feed 

Verification Handbook [PDF] - @Storify via @JapanCrisis 

Wikipedia: Identifying reliable sources

Dynamics of Fake Stories

161123 - NPR - We tracked down a fake news creator in the suburbs. Here's what we learned. 

161123 - Medium - Fixing fake news: Treat the problem not just the symptom 

161122 - Medium - Fake news is not the only problem by @gilgul Chief Data Scientist @betaworks, co-founder @scalemodel | Adjunct Professor @NYU | @globalvoices

161120 - NYT - How fake stories go viral

161111 - Medium - How we broke democracy MT @TobiasRose 

150215 - Tow Center for Digital Journalism - Lies, damn lies and viral content - @TowCenter via Steve Runge <

Click Farms

161120 - Never mind the algorithms: The role of click farms and exploited digital labor in Trump's election - MT @FrankPasquale < Author The Black Box Society: The Secret Algorithms Behind Money & Information

Viral content

161123 - Digiday - 'It was a fad': Many once-hot viral publishers have cooled off - via  @Digiday  NEW

161111 - The Verge - Understanding how news goes viral: Facebook buys CrowdTangle, the tool publishers use to win the internet - MT @betaworks


161123 - Could satire get caught in the crossfire of the fake news wars? - MT @JeffJarvis


161124 - Washington Post - Russian propaganda effort helped spread ‘fake news’ during election, experts say  MT Emilly Bell @emillybell Director of Tow Centre for Digital Journalism at Columbia J School

161124 - Washington Post - Russian propaganda effort helped spread ‘fake news’ during election, experts say  MT @jonathanweisman Deputy Washington Editor, The New York Times via Centre for International Governance Innovation @CIGIonline 

140602 - BuzzFeed - Documents show how Russia’s troll army hit America - MT @PhoebeFletcher Director Social Media Studies and Political Science - Centre for Strategic Cyberspace + Security Science (CSCSS) @cybercenter, Auckland

Behavioral Economics

16114 - Medium - I’m sorry Mr. Zuckerberg, but you are wrong - MT Danah Boyd @zephoria <<

Predictably Irrational: The Hidden Forces that Shape Our Decisions by Dan Ariely @danariely << Professor of Psychology and Behavioral Economics

Thinking, Fast and Slow by Daniel Kahneman - Professor of Psychology and Public Affairs

Ethical Design

160518 - How technology hijacks people’s minds — from a magician and Google’s design ethicist by Tristan Harris @tristanharris < Ex-Design Ethicist @Google <<


161122 - Medium - An open letter to my boss, IBM CEO Ms. Ginni Rometty MT @katecrawford Expertise: machine learning, AI, power and ethics  

161113 - Medium - The code I’m still ashamed of

Journalism in the Era of Trump

161123 - Washington Post - Journalists report Google warnings about ‘government-backed attackers’ 

161122 - Medium - What journalism needs to do post-election by @Brizzyc Social Journalism Director at CUNY MT @jeffjarvis <<

161122 - CJR - Maneuvering a new reality for US journalism MT @astroehlein European Media Director, @HRW <<

161122 - The Washington Post - What TV journalists did wrong — and the New York Times did right — in meeting with Trump - MT @JayRosen << Professor of Journalism at @NYUniversity

161121 - In Trump territory, local press tries to distance itself from national media

161109 - NYT - A ‘Dewey defeats Truman’ lesson for the digital age MT Karen Rundlet @kbmiami << Journalism Program Officer @KnightFdn <<


Cultural Divide

161116 - Vox - For years, I've been watching anti-elite fury build in Wisconsin. Then came Trump.

161115 - CJR - Q&A: Chris Arnade on his year embedded with Trump supporters - MT @mlcalderone Senior media reporter, @HuffingtonPost; Adjunct, @nyu_journalism <<

161123 - CNN - What about the black working class? - MT @tanzinavega CNN National reporter race/inequality

Experiences from Abroad

161122 - CJR - Maneuvering a new reality for US journalism by  @NicDawes Media at Human Rights Watch (@hrw)  MT @astroehlein European Media Director, @HRW <<

161123 - [IND] The Times of India - Ban on misleading posts: Collector served legal notice- MT @jackerhack  Co-founder @internetfreedom <

Media Literacy

161122 - Quartz - Stanford researchers say young Americans have no idea what’s news - MT @MatthewCooney Principal @DellEMC

160829 - Common Sense Media - How to raise a good human in a Digital World - MT @CooneyCenter <

Resources list for startups in this space

Everyone is interested in this right now, so I think it’d be useful to have a list of those wanting to actively work on it, and any other resources that could be applied to the task.

Interested in founding


Interested in investing

Interested in advising

Interested in partnering

A Special Thank You

To all of you who are making this possible. None of us is smarter than all of us.

And for allowing this dream to spread far and wide:

Eli Pariser @elipariser <

CEO and Founder
New York

Andrew Rasiej @Rasiej
Co-Founder Civic Hall
Personal Democracy Forum
Senior Advisor
Sunlight Foundation

New York

Micah Sifry @Mlsif
Co-Founder Civic Hall

Personal Democracy Forum

New York

Lenny Mendonca @Lenny_Mendonca <
McKinsey and Company
San Francisco

Raju Narisetti @raju <

Gizmodo Media Group

New York

Zeynep Tufekci @zeynep
Sociology Associate Professor University of North Carolina @UNC

Contributing Op-Ed Writer New York Times @NYTimes 
Former Fellow
Berkman Klein Center  @BKCHarvard
Author of forthcoming book on Networked Social Movements

Taylor Owen @Taylor_Owen
Assistant Professor of Digital Media & Global Affairs - University of British Columbia @UBC <
Senior Fellow Tow Center for Digital Journalism @towcenter 
Founder and Editor in Chief of  @OpenCanada 

Author of Disruptive Power: The Crisis of the State in the Digital Age

Connie Moon Sehat
News Frames Director
Global Voices @GlobalVoices

Sameer Padania @sdp
External Assessor for the Google Digital News Initiative's Innovation Fund
Program Officer Program on Independent Journalism - Open Society Foundations (OSF) @OpenSociety 

Christoph Schlemmer @schlemmer
Reuters Institute for the Study of Journalism (RISJ) at the University of Oxford @risj_oxford
Business Reporter
Austrian Press Agency

Rushi Bhavsar @parapraxist

Sally Lehrman @JournEthics


… making the list still, checking it twice