Media ReDesign: The New Realities

Before we start ...
Stay informed
Know who the key players are
Create alliances and partnerships
'Truth in Media' Ecosystem
Keep a tally of solutions - and mess ups
Delve into the Underworld
Be prepared to unlearn everything you know
Stay connected
A final word

Stay Informed
Background
Initiatives
Robert Reich: Inequality Media
Dan Rather: On journalism & finding the truth in the news
Web Literacy for Student Fact-Checkers
Syllabus: Social Media Literacies

In the News
Essential reads
Featured
General
Trump Presidency
International
Full Archive [work in progress]
Press Room: Research and articles related with this project

About Eli Pariser
Conferences
Related articles

Bulletin Board
Upcoming Events
Bits and pieces: Liberal democracy in the digital era
International Fact Checking Day
Workshop: Propaganda and Media Manipulation
Ongoing Initiatives
The News Literacy Project
Stony Brook’s Center for News Literacy
Past Events
The Future of News in an Interconnected World
MisinfoCon: A Summit on Misinformation
Combating Fake News: An Agenda for Research and Action
Media Learning Seminar
The Future of News: Journalism in a Post-Truth Era
Dear President: What you need to know about race
Knight Foundation & Civic Hall Symposium on Tech, Politics, and the Media
Berkeley Institute for Data Science - UnFakingNews Working Group

Start of Document
Basic concepts
Definition of Fake News
Compilation of basic terms
Classifying fake news, fake media & fake sources
Considerations → Principles → The Institution of Socio-Economic Values
General Ideas
Behavioral economics and other disciplines
Human Editors
Under the Hood
Facebook
Analysis of Headlines and Content
Reputation systems
Verified sites
Distribution - Social Graph
Fact-Checking
Special databases
Interface - Design
Flags
Downrank, Suspension, Ban of accounts
Contrasting Narratives
Points, counterpoints and midpoints
Model the belief-in-true / belief-in-fake lifecycle
Verified Pages - Bundled news
Viral and Trending Stories
Patterns and other thoughts
Ad ecosystem
More ideas…
FiB extension
Taboola and Outbrain involvement
Verified Trail + Trust Rating
Bias Dynamics & Mental Models
Classifying fake news, fake media & fake sources
Pattern & Patch Vulnerabilities to Fake News
Surprising Validators
Snopes
Ideas in Spanish - Case Study: Mexico
Not just for Facebook to Solve
A Citizen Science (Crowd Work) Approach
“Pooled” White House News Dashboard
The Old School Approach - Pay to Support Real Journalism
Critical Thinking
Media Literacy
Programs
Suggestions from a Trump Supporter on the other side of the cultural divide
Your mind is your reality
Who decides what is fake?
Journalism in an era of private realities
Propaganda
Scientific Method
Alternate realities - Narratives and storytelling - Radicalization
Legal aspects
First Amendment / Censorship Issues
Espionage Act
Copyright issues
Trademark
Libel
Online Abuse
Harassment
Trolling and Targeted Attacks
Threats
Hate Speech
Specialized Agencies
Rating Agencies
Media Governing Body
Specialized Models
Create a SITA for news
Outlier analysis
Transparency at user engagement
“FourSquare” for News
Delay revenue realisation for unverified news sources
Other ideas
Linked-Data, Ontologies and Verifiable Claims

Reading Corner - Resources
Social Networks
Facebook
Twitter
Google
Filter Bubbles
Automated Systems
Algorithms
User Generated Content
Dynamics of Fake Stories
Joined-up Thinking - Groupthink
Manipulation and ‘Weaponization’ of Data
Click Farms - Targeted attacks
Propaganda
Viral content
Satire
Behavioral Economics
Political Tribalism - Partisanship
Ethical Design
Ethics
Journalism in the Era of Trump
Cultural Divide
Cybersecurity
Experiences from Abroad
Media Literacy

Resources list for startups in this space
Interested in founding
Interested in advising
Interested in partnering
Interested in investing/funding

A Special Thank You

ANNEX
Themes and keywords to look for
Hashtags
Fact Checking Guides
Selected reads
Russian interference


Note: Hi, I’m Eli, I started this document. Add ideas below after a bullet point, preferably with attribution. Add +[your name] (+Eli) if you like someone else’s note. Bold key phrases if you can. Feel free to crib with attribution.

A number of the ideas below have significant flaws. It’s not a simple problem to solve -- some of the things that would pull down false news would also pull down news in general. But we’re in brainstorm mode.

  • November 17, 2016


@IntugGB on duty - 18 Feb 2017
Lina - 19:30 UTC

@IntugGB on duty - 24 Feb 2017

Kate - 21:45 UTC
Lina - 00:50 UTC



In all this vastness
there is no hint
That help will come
from elsewhere
To save us
from ourselves

- Carl Sagan, Pale Blue Dot

 

      “None of us is as smart as all of us” ― Kenneth H. Blanchard


Before we start ...

 

Stay informed 

Dozens of articles and studies are being published daily on the topic of ‘fake news’. The more we know about what is going on - the different angles, implications, etc. - the better off we are.

Know who the key players are


Technologists, journalists, politicians, academics, think tanks, librarians, advocacy organizations and associations, regulatory agencies, corporations, cybersecurity experts, military, celebrities, regular folk... all have a vested interest in this topic. Each can give a different perspective.
 

Throughout the document, you will see some symbols, simply pointers alongside @names, to serve as guides:

verified account     key contact       collaborator  

Create alliances and partnerships

See what has been done or published that could serve as a blueprint going forward. Mentioned in this work, for example, is a special manual set up by leading journalists from the BBC, Storyful, ABC, Digital First Media and other verification experts. That manual alone offers four contacts who might be interested in this project.

Related research:

'Truth in Media' Ecosystem

Organizations and individuals important to the topic of fake news and its solutions
Work in progress - Contributions and suggestions welcome

Keep a tally of solutions - and mess ups

Aside from this, what else has been implemented by Google, Facebook, Twitter and other organizations? How have people reacted? What are they suggesting? How is the media covering this? Have there been any critical turning points? The bots - so much in the news nowadays - how are they being analyzed and dealt with? What has been the experience with them… abroad?

So many questions...

Statement by Mark Zuckerberg 

November 19, 2016

Delve into the Underworld


One can’t assume that there is criminal intent behind every story but, when up against state actors, click farms and armies of workers hired for specific ‘gigs’, it helps to know exactly how they operate. In any realm - be it ISIS, prostitution networks or illegal drugs - these operators are experts on these platforms.

Recommended

Future Crimes by Marc Goodman @FutureCrimes

The Kremlin Handbook - October 2016
Understanding Russian Influence in Central and Eastern Europe


Cybersecurity Summit Stanford 2016
Munich Security Conference -
Agenda

Panel:
“Going Dark: Shedding light on terrorist and criminal use of the internet” [1:29:12]

Gregory Brower (Deputy General Counsel, FBI), Martin Hellman (Professor Emeritus of Electrical Engineering, Stanford University), Joëlle Jenny (Senior Advisor to the Secretary General, European External Action Service), Joseph P. McGee (Deputy Commander for Operations, United States Army Cyber Command), Peter Neumann (Director, International Centre for the Study of Radicalisation, King's College London), Frédérick Douzet (Professor, French Institute of Geopolitics, University of Paris 8; Chairwoman, Castex Chair in Cyber Strategy; mod.)

Related


170212 - Medium
The rise of the weaponized AI Propaganda machine  There’s a new automated propaganda machine driving global politics. How it works and what it will mean for the future of democracy.


Signal

Install it. Just because.


160622 - The Intercept
Battle of the secure messaging apps: How Signal beats WhatsApp

Be prepared to unlearn everything you know


For years, other countries have dealt with issues of censorship, propaganda, etc. It is useful to understand what has happened, to see what elements of their experience we can learn from. Case studies, debates, government interventions, reasoning, legislation, everything helps.

Essential here - insights from individuals who have experienced it and understand the local language and ideology.


Note: This also means learning from the “natives”, the ones born with - you know - a chip in their brains.

Stay connected

Please join us on Twitter at @Media_ReDesign and on Facebook for the latest news updates.

A Slack team (group messaging) pertaining to a number of related projects is available for those who wish to connect or do further research on the topic. You can sign up here. For those not familiar, two introductory videos are available, one showing how it can be used in teams, the other describing the platform itself.

Twelve channels have been created so far. Click on the CHANNEL heading to expand the different categories and click on any you want to join. Clicking on DIRECT MESSAGES allows you to contact all members who are currently on there; the full “Team Directory” is accessible through the menu.

A final word

Before starting, go through the document quickly to get a sense of the many areas of discussion that are developing. Preliminary topics and suggestions are being put in place but many other great ideas appear further on, unclassified for the time being while the team gets to them.

This is a massive endeavour but well worth it. Godspeed.



Stay Informed

Background

A superb summary dealing with fake news is being written up over at Wikipedia. With almost 200 sources to date, it gives an overview of many of the issues, starting with a detailed look at prominent sources, going on to impact by country and responses on the part of industry players and, finally, academic analysis.

As a starting point and perhaps a guideline to better structure the document going forward, it is highly recommended. - @linmart [26 DEC 16]

Initiatives

Robert Reich: Inequality Media


After years of collaboration, Jacob Kornbluth @JacobKornbluth worked with Robert Reich @RBReich to create the feature film Inequality for All. The film was released into 270 theaters in 2013 and won the U.S. Documentary Special Jury Award for Achievement in Filmmaking at the Sundance Film Festival.  Building off this momentum, Kornbluth and Reich founded Inequality Media in 2014 to continue the conversation about inequality with viewers.

Inequality for All: Website - FB /InequalityForAll - @InequalityFilm - Trailer

Inequality Media:  Website - FB /InequalityMedia - @InequalityMedia 

Robert Reich: LinkedIn -  FB /RBReich  FB Videos - @RBReich



Kickstarter Campaign
How does change happen? We go on a trip with Robert Reich outside the “bubble” to reach folks in the heartland of America to find out.

3,790 backers pledged $298,436 to help bring this project to life.


Saving Capitalism: For the Many, Not for the Few  
#SavingCapitalism @SavingCapitalism

Perhaps no one is better acquainted with the intersection of economics and politics than Robert B. Reich, and now he reveals how power and influence have created a new American oligarchy, a shrinking middle class, and the greatest income inequality and wealth disparity in eighty years. He makes clear how centrally problematic our veneration of the free market is, and how it has masked the power of moneyed interests to tilt the market to their benefit.

“… Passionate yet practical, sweeping yet exactingly argued, Saving Capitalism is a revelatory indictment of our economic status quo and an empowering call to civic action.”



As featured in:

160120 - Inc
5 Books that billionaires don't want you to read


Dan Rather: On journalism & finding the truth in the news

Learn to ask the right questions & tell captivating stories. Practical advice for journalists & avid news consumers.

 



Web Literacy for Student Fact-Checkers 

by Mike Caulfield @holden

Work in progress but already excellent. Recommended by @DanGillmor


Syllabus: Social Media Literacies

Instructor: Howard Rheingold - Stanford Winter Quarter 2013



Andrew Wilson, Virtual Politics: Faking Democracy in a Post-Soviet World (Yale University Press, 2005)

This seminal work by one of the world’s leading scholars in the field of “political technology” is a must-read for anyone interested in how the world of Russian propaganda and the political technology industry works and how it impacts geopolitics. It has received wide critical acclaim and offers unparalleled insights. Many of the names seen in connection with both Trump’s business dealings and the Russian propaganda apparatus appear in Wilson’s work.


In the News


Essential reads


170212 - Medium
The rise of the weaponized AI Propaganda machine 

There’s a new automated propaganda machine driving global politics. How it works and what it will mean for the future of democracy.


170127 - IFLA
Alternative Facts and Fake News – Verifiability in the Information Society

161228 - MediaShift
How to fight fake news and misinformation? Research helps point the way

161122 - NYT Magazine
Is social media disconnecting us from the big picture?
By Jenna Wortham @jennydeluxe

161118 - Nieman Lab
Obama: New media has created a world where “everything is true and nothing is true”
By Joseph Lichterman @ylichterman √

161118 - Medium
A call for cooperation against fake news 

by @JeffJarvis 
@BuzzMachine blogger and j-school prof; author of Public Parts, What Would Google Do?

161116 - CNET
Maybe Facebook, Google just need to stop calling fake news 'news' 
by Connie Guglielmo @techledes 
Commentary: The internet has a problem with fake news. Here's an easy fix.

Featured


Knight Foundation - Civic Hall Symposium on Tech, Politics and Media 
Agenda and Speakers
New York Public Library - January 18, 2017

2017
Ethical Journalism Network Ethics in the News [PDF]
EJN Report on the challenges for journalism in the post-truth era

161219 - First Draft News
Creating a Trust Toolkit for journalism

Over the last decade newsrooms have spent a lot of time building their digital toolbox. But today we need a new toolbox for building trust

170114 - Huffington Post
Why do people believe in fake news?

160427 - Thrive Global
12 Ways to break your filter bubble

General

161211 - NPR
A finder's guide to facts

Behind the fake news crisis lies what's perhaps a larger problem: Many Americans doubt what governments or authorities tell them, and also dismiss real news from traditional sources. But we've got tips to sharpen our skepticism.


Web Literacy for Student Fact-Checkers by Mike Caulfield @holden

Work in progress but already excellent. Recommended by @DanGillmor

161209 - The Guardian
Opinion: Stop worrying about fake news. What comes next will be much worse

By Jonathan Albright @d1gi, professor at Elon University in North Carolina, expert in data journalism
In the not too distant future, technology giants will decide what news sources we are allowed to consult, and alternative voices will be silenced

161128 - Fortune
What a map of the fake-news ecosystem says about the problem

By Mathew Ingram @mathewi, Senior Writer at Fortune

Jonathan Albright’s work arguably provides a scientifically-based overview of the supply chain underneath that distribution system. That could help determine who the largest players are and what their purpose is.

161128 - Digiday
‘The underbelly of the internet’: How content ad networks fund fake news
Forces work in favor of sketchy sites. As ad buying has become more automated, with targeting based on audience over site environment, ads can end up in places the advertiser didn’t intend, even if they put safeguards in place.

161125 - BPS Research Digest
Why are some of us better at handling contradictory information than others?

Trump Presidency


'Alternative Facts': how do you cover powerful people who lie?
A collaborative initiative headed by Alan Rusbridger @arusbridger, ex-editor of The Guardian, Rasmus Kleis Nielsen @rasmus_kleis & Heidi T. Skjeseth @heidits. View only.

170216 - Politico
How a Politico reporter helped bring down Trump’s Labor Secretary pick

"This was the most challenging story I’ve ever done. But it taught me that with dedication and persistence, and trying every avenue no matter how unlikely, stories that seem impossible can be found in the strangest of ways." - Marianne LeVine


Reuters
Covering Trump the Reuters Way
Reuters Editor-in-Chief Steve Adler


170115 - Washington Post
A hellscape of lies and distorted reality awaits journalists covering President Trump
Journalists are in for the fight of their lives. They will need to work together, be prepared for legal persecution, toughen up for punishing attacks and figure out new ways to uncover and present the truth. Even so — if the past really is prologue — that may not be enough.


Dec 2016 - Nieman Lab
Feeling blue in a red state

I hope the left-leaning elements of journalism (of which I would be a card-carrying member if we actually printed cards) take a minute for reflection before moving onto blaming only fake news and Russian hacking for the rise of Trump.

161111 - Medium
What’s missing from the Trump Election equation? Let’s start with military-grade PsyOps
Too many post-election Trump think pieces are trying to look through the “Facebook filter” peephole, instead of the other way around. So, let’s turn the filter inside out and see what falls out.

161109 - NYMag
Donald Trump won because of Facebook

Social media overturned the political order, and this is only the beginning.

International

170113 - The Guardian
UK media chiefs called in by minister for talks on fake news

Matt Hancock, the minister of state for digital and culture policy, has asked UK newspaper industry representatives to join round-table discussions on the issue of fake news.

170107 - The Guardian
German police quash Breitbart story of mob setting fire to Dortmund church

170105 - Taylor Francis Online
Russia’s strategy for influence through public diplomacy and active measures: the Swedish case
Via Patrick Tucker @DefTechPat Tech editor at @DefenseOne

161224 - The Times of Israel
Pakistan makes nuclear threat to Israel, in response to fake news


161215 - The Guardian
Opinion: Truth is a lost game in Turkey. Don’t let the same thing happen to you
We in Turkey found, as you in Europe and the US are now finding, that the new truth-building process does not require facts. But we learned it too late

161223 - The Wire
The risks of India ignoring the global fake news debate

A tectonic shift in the powers of the internet might be underway as you read this. 

161123 - Naked Security
Fake news still rattling cages, from Facebook to Google to China

Chinese political and business leaders speaking at the World Internet Conference last week used the spread of fake news, along with activists’ ability to organize online, as signs that cyberspace has become treacherous and needs to be controlled.



161220 - NYT
Russian hackers stole millions a day with bots and fake sites

A criminal ring is diverting as much as $5 million in advertising revenue a day in a scheme to show video ads to phantom internet users.


160418 - Politico
Putin's war of smoke and mirrors
We are sleepwalking through the end of our era of peace. It is time to wake up.

Full Archive [work in progress]


Press Room: Research and articles related with this project

'Alternative Facts': how do you cover powerful people who lie?
A collaborative project headed by Alan Rusbridger @arusbridger, ex-editor of The Guardian, Rasmus Kleis Nielsen @rasmus_kleis & Heidi T. Skjeseth @heidits

170207 - Bill Moyers
Your guide to the sprawling new Anti-Trump Resistance Movement

170203 - Mashable
Google Docs: A modern tool of powerful resistance in Trump's America

How fake news sparked a political Google Doc movement

170108 - The Guardian
Eli Pariser: activist whose filter bubble warnings presaged Trump and Brexit

“The more you look at it, the more complicated it gets,” he says, when asked whether he thinks Facebook’s plan will solve the problem. “It’s a whole set of problems; things that are deliberately false designed for political ends, things that are very slanted and misleading but not false; memes that are neither false nor true per se, but create a negative or incorrect impression. A lot of content has no factual content you could check. It’s opinion presented as fact.”


Fake news has exposed a deeper problem – what Pariser calls a “crisis of authority”.

“For better and for worse, authority and the ability to publish or broadcast went hand in hand. Now we are moving into this world where in a way every Facebook link looks like every other Facebook link and every Twitter link looks like every other Twitter link, and the new platforms have not figured out what their theory of authority is.”

161223 - The Wire
The risks of India ignoring the global fake news debate

A tectonic shift in the powers of the internet might be underway as you read this. 

161215 - Washington Post
Fake news is sickening. But don’t make the cure worse than the disease.

161215 - USA Today
Fake-news fighters enter breach left by Facebook, Google
A cottage industry of fake-news fighters springs up as big platforms move slowly to roll out fixes.

161206 - Digital Trends
Forget Facebook and Google, burst your own filter bubble 

161130 - First Draft News
Timeline: Key moments in the fake news debate

161129 - The Guardian
How to solve Facebook's fake news problem: experts pitch their ideas

161127 - Forbes
Eli Pariser's Crowdsourced Brain Trust is tackling fake news 

Upworthy co-founder and hundreds of collaborators gather the big answers

161125 - Wired
Hive Mind Assemble
by Matt Burgess @mattburgess1
Upworthy co-founder Eli Pariser is leading a group of volunteers to try to find a way to determine whether news online is real or not

161119 - Quartz
Facebook’s moves to stamp out “fake news” will solve only a small part of the problem

161118 - CNET
The internet is crowdsourcing ways to drain the fake news swamp
Pundits and even President Obama are bemoaning fake news stories that appeared online leading up to the election. A solution might be found in an open Google Doc.

161116 - The Verge
The author of The Filter Bubble on how fake news is eroding trust in journalism

‘Grappling with what it means to look at the world through these lenses is really important to us as a society’

161115 - Digiday [Podcast 23:12] 
Nieman’s Joshua Benton: Facebook has ‘weaponized’ the filter bubble

161109 - Nieman Lab
The forces that drove this election’s media failure are likely to get worse

By Joshua Benton @jbenton 

Segregated social universes, an industry moving from red states to the coasts, and mass media’s revenue decline: The disconnect between two realities shows no sign of abating.

[Press Room - Full Archive]




About Eli Pariser


Eli is an early online organizer and the author of The Filter Bubble, published by Penguin Press in May 2011.

Shortly after the September 11th terror attacks, Eli created a website calling for a multilateral approach to fighting terrorism. In the following weeks, over half a million people from 192 countries signed on, and Eli rather unexpectedly became an online organizer.

The website merged with MoveOn.org in November of 2001, and Eli - then 20 years old - joined the group to direct its foreign policy campaigns. He led what the New York Times Magazine termed the “mainstream arm of the peace movement,” tripling MoveOn’s member base in the process, demonstrating for the first time that large numbers of small donations could be mobilized through online engagement, and developing many of the practices that are now standard in the field of online organizing.

In 2004, Eli co-created the Bush in 30 Seconds online ad contest, the first of its kind, and became Executive Director of MoveOn. Under his leadership, MoveOn.org Political Action has grown to five million members and raised over $120 million from millions of small donors to support advocacy campaigns and political candidates, helping Democrats reclaim the House and Senate in 2006.

Eli focused MoveOn on online-to-offline organizing, developing phone-banking tools and precinct programs in 2004 and 2006 that laid the groundwork for Barack Obama’s remarkable campaign. MoveOn was one of the first major progressive organizations to endorse Obama for President in the presidential primary.

In 2008, Eli transitioned the Executive Director role at MoveOn to Justin Ruben and became President of MoveOn’s board.

Eli grew up in Lincolnville, Maine, and graduated summa cum laude in 2000 with a B.A. in Law, Politics, and Society from Bard College at Simon's Rock. He is currently serving as the CEO of Upworthy and lives in Brooklyn, NY.

Contact: @elipariser 

Conferences

Combating Fake News: An Agenda for Research and Action 

February 17, 2017 - 9:00 am - 5:00 pm
Full programme - #FakeNewsSci on Twitter


Related articles

170214 - Forbes
Political issues take center stage at SXSW

170205 - The College Reporter
Workshop provides students with knowledge pertaining to fake news



170207 - Backchannel
Politics have turned Facebook into a steaming cauldron of hate

170201 - Triple Pundit
Upworthy and GOOD announce merger, join forces to become the leader in Social Good Media


170127 - Observer
These books explain the media nightmare we are supposedly living in


170118 - OpenDemocracy.net
The internet can spread hate, but it can also help to tackle it

161216 - NPR TED Radio Hour
How can we look past (or see beyond) our digital filters?

161122 - NYT Magazine
Is social media disconnecting us from the big picture?
By Jenna Wortham @jennydeluxe

161112 - Medium
How we broke democracy

Our technology has changed this election, and is now undermining our ability to empathize with each other

1108 - TED Talks
Eli Pariser: Beware online "Filter Bubbles"

110525 - Huffington Post
Facebook, Google giving us information junk food, Eli Pariser warns


0305 - Mother Jones
Virtual Peacenik


030309 - NYT Magazine
Smart-mobbing the war

[Eli Pariser - Full Archive]


Bulletin Board


Upcoming Events

Bits and pieces: Liberal democracy in the digital era


April 25, 2017 12:00 PM - 1:30 PM
RSVP required by 5PM April 19

CISAC Central Conference Room

Encina Hall, 2nd Floor

616 Serra St

Stanford, CA 94305

Speaker: Toomas Hendrik Ilves, former President of Republic of Estonia (2006 - 2016)

The event is a joint sponsorship between CISAC, The European Security Initiative (Europe Center) and the Center for Russian, East European and Eurasian Studies (CREEES).

International Fact Checking Day

April 2nd, 2017

International Fact-Checking Day will be held on April 2, 2017, with the cooperation of dozens of fact-checking organizations around the world. Organized by the International Fact-Checking Network, it will be hosted digitally on www.factcheckingday.com. The main components of our initiative will be:

  1. A lesson plan on fact-checking for high school teachers.
  2. A factcheckathon exhorting readers to flag fake stories on Facebook.
  3. A “hoax-off” among top debunked claims.
  4. A map of global activities.

If you are interested in finding out more/participating, reach out to factchecknet@poynter.org.

Workshop: Propaganda and Media Manipulation

On May 19, 2017, Data & Society @datasociety will host a workshop in NYC on the ways in which technology and algorithmic practices have altered dynamics of propaganda and media manipulation.

The structure of the D&S Workshop is designed to maximize scholarly thinking about the evolving and societally important issues surrounding data-driven technologies.  Participants will be asked to read three full papers in advance of the event and prepare comments for intensive discussion. 

Deadline for application: February 15, 2017.

Find out more here.


Ongoing Initiatives

The News Literacy Project 

@TheNewsLP

A national educational program that mobilizes seasoned journalists to help students sort fact from fiction in the digital age.

Stony Brook’s Center for News Literacy 

Hosted on the online learning platform Coursera, the course will help students develop the critical thinking skills needed to judge the reliability of information no matter where they find it — on social media, the internet, TV, radio and newspapers.

Each week will tackle a challenge unique to the digital era:

Week 1:         The power of information is now in the hands of consumers

Week 2:         What makes journalism different from other types of information

Week 3:        Where can we find trustworthy information

Week 4:         How to tell what’s fair and what’s biased

Week 5:         How to apply news literacy concepts in real life

Week 6:         Meeting the challenges of digital citizenship

The course is free, but people can opt to pay $49 and do the readings and quizzes (which are otherwise optional) and, if they pass muster, end up with a certificate.


Past Events

The Future of News in an Interconnected World

01 Mar 2017
12:30 - 15:00
European Parliament, Room P5B00


Independent journalism is under pressure as a result of financial constraints. Local media is barely surviving, and free online content is sprawling. On social media platforms built for maximum profit, sensational stories easily go viral, even if they are not true. Propaganda is at an all-time high, and personalised newsfeeds result in filter bubbles, with a direct impact on the state of democracy. These are just some of the issues this seminar will explore, as we examine how journalists and companies see their position and the role of social media and technology.

MisinfoCon: A Summit on Misinformation

Feb 24 - 27, 2017
Cambridge, MA

A summit to seek solutions - both social and technological - to the issue of misinformation. Hosted by The First Draft Coalition @firstdraftnews, The Nieman Foundation for Journalism @Niemanfdn and Hacks/Hackers @HacksHackers

Find out more.

Follow at @Misinfocon #misinfocon

MisinfoCon: Pre-event Reading & Creative Studio Resources

Combating Fake News: An Agenda for Research and Action

February 17, 2017 - 9:00 am - 5:00 pm
Harvard Law School
Wasserstein Hall 1585
Massachusetts Ave, Cambridge, MA 02138

Full programme. Follow #FakeNewsSci on Twitter.

Write up:

170217 - Medium
Countering Fake News

Media Learning Seminar 

February 13 - 14, 2017

What do informed and engaged communities look like today?

Find videos of the discussion here or access comments on Twitter via #infoneeds

The Future of News: Journalism in a Post-Truth Era

Tuesday, Jan 31, 2017

4:00 - 6:00 pm EST

Sanders Theatre, Harvard University

Co-sponsored by the Office of the President, the Nieman Foundation for Journalism, and the Shorenstein Center on Media, Politics, and Public Policy

Speakers include: Gerard Baker, editor-in-chief of The Wall Street Journal; Lydia Polgreen, editor-in-chief of The Huffington Post; and David Leonhardt, an op-ed columnist at The New York Times

Full Programme

Video coverage of the event is available.


170201 - NiemanLab
The boundaries of journalism — and who gets to make it, consume it, and criticize it — are expanding
Reporters and editors from prominent news organizations waded through the challenges (new and old) of reporting in the current political climate during a Harvard University event on Tuesday night.

Dear President: What you need to know about race 

Jan 27, 2017 - 2:30 pm – 4 pm

Newark Public Library, Newark, NJ.
Community conversation hosted by Free Press News Voices: New Jersey
Via Craig Aaron @notaaroncraig, President and CEO of Free Press.

Knight Foundation & Civic Hall Symposium on Tech, Politics, and the Media

Jan 18, 2017 - 8:30 am - 6:00 pm

New York Public Library

5th Ave at 42nd St, Salomon Room

Berkeley Institute for Data Science
UnFakingNews Working Group

Meeting Monday, January 9, 5-7pm -- 190 Doe Library

A group of computer scientists, librarians, and social scientists supporting an ecosystem of solutions to the problem of low quality information in media. For more information, contact nickbadams@berkeley.edu


Start of Document 

Basic concepts

  • Define concepts clearly. Is the “fake” / “true” dichotomy the best approach here? There can be multiple dimensions to help inform people: “serious / satire,” “factual / opinion,” “factually true / factually untrue,” “original source / meta-commentary on the source,” etc. +Kyuubi10

  • To expand on the previous concept… This is more of a question to keep in mind than a solution, but I believe it is important nonetheless: How should fact-checking be done? How can we confirm that the “powers that be” are not messing with the systems created? How do we avoid human bias in fact-checking? How do we prevent censorship efforts from using our systems to censor content, or our systems being used to promote propaganda? --Kyuubi10

  • Not only define or suggest terms, but perhaps try to express what people think is the problem. i.e. Is it that people are misled, that political outcomes are being undemocratically influenced, that civil discussion is being undermined and polarized? There may be many problems people in this discussion have in mind, implicitly or explicitly, and to discuss solutions it's important to agree on what is the problem (or aspect of it) being addressed. [@tmccormick / @DiffrMedia] +@IntugGB

  • Note [30 Nov 2016] @tmccormick: it seems there are a few overlapping problems, and varying definitions, in this discussion. “Fake news” is used variously to mean deliberate false news; false but not necessarily deliberate; propaganda, as in information created to influence, which may be false or not; or information that is biased or misleading. Also, in some cases we aren’t talking about ’news’ at all - for example false reviews, or false or disputed health/medical information (the anti-vaxxers issue, e.g.). - Note [15 Dec 2016] @linmart: Add false equivalencies: a debate where information that is 99% verified is given equal standing with a 1% view.

    Many parts of this discussion concern issues of confirmation bias and the polarization of opinion groups. This intersects with “fake news” topic because it’s one reason people create, share, and accept misinformation, but it is really a broader issue, in that it describes how we tend to form and maintain, narrow or broaden, our views in general.

    Related to polarization, there is another lens on this topic area, which is trust - trust in media institutions, or in civic institutions generally. Trustworthiness is not the same as truthfulness: we may have degrees of trust in opinion, analysis/interpretation, or prediction, none of which reduce to true or not. Trust is driven by many factors besides news truthfulness, and if the public does not trust or support media organizations, the truth of those organizations’ news doesn’t really matter. (This is generally the angle of the Trust Project, one of the biggest existing media collaborations in this field.)

    I note these differing lenses/problems because I am hoping this project will remain an open network for different issues and projects to intersect. The better it maps and organizes the core issues, considering these different points of view, the better it can be a useful hub for many interested contributors and organizations to learn from and complement each other’s work.

    << I agree. The Onion may be satire, but it communicates sharp societal critiques rather effectively/Same with Daily Show, et al. +Diane R

  • Backfire Effect.  There is little hope of fighting misinformation with simple corrective information.  According to a study by Nyhan & Reifler (Political Behavior, 2010, vol. 32; draft manuscript version here), “corrections frequently fail to reduce misperceptions of the targeted ideological group.”  In some cases there is a “backfire effect” (a.k.a. boomerang effect) in which corrections actually strengthen the mistaken beliefs.  This is especially true when the misinformation aligns with the ideological beliefs of the audience.  More here and here.  This suggests that corrective information strategies are only likely to be successful with audiences that are not already predisposed to believe the misinformation.

  • Differentiate between sharing ‘personal information’ and ‘news articles’ on social media - the current ‘share’ button for both is unhelpful.

  • Identify the emotional content of fake news. The public falls for fake news that validates their feelings (and their defenses against unwanted feelings). Respond to the emotional content of fake news.

  • Wondering if it might be necessary to open up a new area within this study: fake reviews. Even if they are just a joke, they can change public perception and who knows what else. Thinking companies, writers, politicians... State sponsored, click farms, comedy writers, no idea. - @linmart


Definition of Fake News


At this time, there does not appear to be a definition of “fake news” in this document.

  • If readers/contributors do a Ctrl+F for the words “define” and “definition,” they will not find a definition of “fake news” in this document. The phrase “fake news” is used over 150 times without a definition.  

  • It seems that a handful of clear cut examples of fake news have been recycled over and over again since the election, but examples are not definitions. Examples do not tell us what fake news is. Establishing a working definition might be preferable to proceeding with work with no definition.

I’d like to see the term, ‘Fake News’ retired. If it’s fake, it’s not news. It’s lies, inventions, falsehoods, fantasy or propaganda. A humorist would call it, “made up shit”. 


Compilation of basic terms

Possible areas of research and working definitions:

- fake / true
- fake vs. fraudulent
- factual / opinion
- factually true / factually untrue
- logically sound / flawed
- original source / meta-commentary on the source
- user generated content
- personal information
- news
- paid content
- commercial clickbait
- gaming the system purely for profit

Motive:
- prank / joke
- to drive followers/likes
- create panic
- brainwashing / programming / deprogramming
- state-sponsored (external / internal)
- propaganda
- pushing an agenda
- local ideology
- local norms and legislation - restrictions and censorship (i.e. Thailand, Singapore, China)

- fake accounts
- fake reviews
- fake followers
- click farms
- patterns
- satire
- bias
- misinformation
- disinformation
- libel
- organic / non organic
- viral


Further reference:

161128 - The New York Times 

News outlets rethink usage of the term ‘alt-right’ 

via Ned Resnikoff @resnikoff  Senior Editor, @thinkprogress 


161122 - Nieman Lab 
“No one ever corrected themselves on the basis of what we wrote”: A look at European fact-checking sites

161122 - Medium 
Fake news is not the only problem 

By @gilgul Chief Data Scientist @betaworks 



Classifying fake news, fake media & fake sources

Thread by @thomasoduffy

There are different kinds of “fake”, all of which need to be managed or mitigated.  Cumulatively, these fake signals find their way into information in all kinds of places and inform people.  We need to build a lexicon to classify and conceptualise these aspects so we can think about them clearly:

  • Fake article / story:  For example, a fabricated story, based on made-up information, presented as true.
  • Fake reference:  A not-intentionally-fake article that cites a fake source.
  • Fake meme:  A popular media type for viral syndication, usually comprising an image and a quote.  In this case, one that contains false/fake information.
  • Fake personality: A person controlling a social profile who pretends to be who they are not, unbeknownst to the public.  E.g. a troll pretending to be a celebrity
  • Fake representative: A person who falsely claims to represent an organisation, sometimes for the purposes of getting attention, sometimes for the purposes of discrediting that organisation.
  • Fake social page: A social page claiming to or portraying itself as officially representing a person/brand/organisation, without any basis.
  • Fake website: A whole website that purports to be what it is not, with content that might be cited in topics of interest.
  • Fake reviews:  Reviews, whether published online or within a review section on an ecommerce site, that are incentivised or intentionally biased - such that if an honest person understood how the review came to be written, that honest person would mind.  Arguably, this also applies to product placement or native advertising that is not disclosed clearly.
  • Fake portrayal:  As video becomes a primary way by which information is transmitted, any situation where a person is behaving as an actor, communicating something they don’t hold to be true, and not doing so purely for entertainment could be described as a “fake portrayal”.  For example, if a voice-over artist reads a script for a brand knowing it to be false but uses their skills to present it compellingly, the output is a kind of fake media.  Likewise, if a celebrity fitness model showcases a lifestyle using a product they don’t habitually consume, as may be inferred by an ordinary person watching the show or advert, this is a kind of fake media that ought to be limited.


To some extent, it is worth decoding the strategies used by lobbyists, spin doctors, marketing agencies and PR companies - and considering what measures could limit their ability to syndicate information of “warped accuracy” and counter intentionally fake news.

  • How would the above be “verified” as fakes? To avoid censorship, the process of verifying fakes must be much stricter than the one used to verify “facts”. I believe that bias and propaganda would easily fall within the realm of fake (or rather, should fall within it), but this also means the size of the structures you might be fighting against will make this a hard fight. -- Kyuubi10

  • Governments and businesses will use their wealth and power to push towards censorship, utilizing the method of “verifying” fakes in order to remove opposing content. +@IntugGB

    The same can be true about creating Verified Sources and Outlets, where they can be their own crowd-source to push their content to be verified. -- Kyuubi10


Considerations → Principles → The Institution of Socio-Economic Values

by: Timothy Holborn 

A Perspective by Eben Moglen[1] from re:publica 2012

The problem of ‘fake news’ may be solved in many ways.  One way involves mass censorship of articles that do not come from major sources, but that may not result in news that is any more ‘true’.  Another way may be to shift the way we use the web, but that may not help us be more connected. Machine-readable documents are changing our world.

It is important that we distill ‘human values’ in assembly with ‘means for commerce’. As we move from the former world of broadcast services, where the considerations of propaganda were far better understood, to modern services that serve not millions but billions of humans across the planet, the principles we forged as communities need to be re-established.  We have the precedents of Human Rights[2], but do not know how to apply them in a world where the ‘choice of law’[3] for the websites we use to communicate may deem us to be alien[4].  Traditionally these problems were solved via the application of the Liberal Arts[5]; with the advent of the web, the more modern context becomes that of Web Science[6], incorporating the role of ‘philosophical engineering’[7] (and therein the considerations of the liberal arts via computer scientists).


So what are our principles, what are our shared values? And how do we build a ‘web we want’ that makes our world a better place, both now and into the future?

It seems many throughout the world have suffered mental health issues[8] as a result of the recent election result in the USA - a moment in which billions of people simultaneously confronted an outcome, produced by a populace exercising its democratic rights, that came as a significant surprise with global implications.  So perhaps the baseline question becomes: how can our ‘world wide web’ better provide us humans with an accurate understanding of world events and the circumstances felt by other humans?

General Ideas 

Behavioral economics and other disciplines

  • Invest in Behavioral Economics, the application of psychological insights into human behavior to explain decision-making. Have standards to test the impact of any of these ideas on intended outcomes.

  • Invest in Social Cues. People create and share fake news because they cannot be held accountable for its content. Accountability cues for content should be part of any technological and behavioral solutions to the fake news problem.
  • A novel concept proposed in the following article: An algorithm that finds truth even if most people are wrong, Drazen Prelec, Massachusetts Institute of Technology Sloan School, Cambridge MA 02139 dprelec@mit.edu +alex@coinfund.io (see the sketch below)
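
A minimal sketch of the “surprisingly popular” criterion behind Prelec’s line of work: ask respondents both for their own answer and for a prediction of how others will answer, then favor the answer whose actual popularity most exceeds its predicted popularity. The function name and data layout here are illustrative, not from the paper:

    from collections import Counter

    def surprisingly_popular(answers, predictions):
        """Pick the answer whose actual frequency most exceeds its predicted frequency.

        answers     -- each respondent's own answer, e.g. ["yes", "no", ...]
        predictions -- per respondent, a dict mapping each answer to the fraction
                       of the crowd that respondent expects to choose it
        """
        n = len(answers)
        actual = {a: c / n for a, c in Counter(answers).items()}
        predicted = {a: sum(p.get(a, 0.0) for p in predictions) / len(predictions)
                     for a in actual}
        # The "surprisingly popular" answer beats the crowd's expectations the most.
        return max(actual, key=lambda a: actual[a] - predicted[a])

    # Toy example: "no" wins the vote 6-4, but everyone predicted a 90% "no"
    # landslide, so "yes" is surprisingly popular and is returned as the answer.
    votes = ["no"] * 6 + ["yes"] * 4
    preds = [{"no": 0.9, "yes": 0.1}] * 10
    print(surprisingly_popular(votes, preds))  # -> yes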





Human Editors



  • For more established outlets: consider immersing your team in alternate realities. There is always going to be a bias in human judgement, but having experienced being in someone else’s shoes might cut through those perceptions. -@linmart

  • With the speed of how fake facebook posts propagate, the news coverage might be similar to what unfolds in an emergency crisis. A special manual was set up by leading journalists from the BBC, Storyful, ABC, Digital First Media and other verification experts.  Described as “a groundbreaking resource for journalists and aid providers providing the tools, techniques and guidelines for how to deal with user-generated content (UGC) during emergencies”. - @JapanCrisis

  • Figure out how journalists find out about viral news, either fake or real.  Collecting data in this Google form to find out: Viral News Discovery for Journalists. If you have suggestions on how I can formulate the question better, or you think that something should be added, give a sign. - Florin

  • Hire more human editors to vet what makes it to trending topics

  • By then it’s too late, however. It needs to be stopped before it gets there. There are ways to detect if a story is approaching virality, and if human editors monitor what’s going viral and vet those articles, they can kill its distribution before millions see it (see the sketch after the related reading below). Transparency is key to trust (last time they had human editors, they were accused of being biased against conservative news outlets). Saying “this was removed because it was fabricated by Macedonian teens” is better than some vague message about violating TOS.  Eventually this data can train a machine-learned classifier.

Related:

Management Science
The structural virality of online diffusion - Vol. 62, No. 1, January 2016, pp. 180–196 
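
As a rough sketch of the “catch it before it goes viral” idea above: log shares per story in a sliding window and flag the story for human review once its share velocity crosses a threshold. The window and threshold values are invented for illustration:

    import time
    from collections import defaultdict, deque

    WINDOW_SECONDS = 3600      # count shares over the last hour (illustrative)
    VELOCITY_THRESHOLD = 1000  # shares/hour that triggers review (illustrative)

    share_log = defaultdict(deque)  # story_id -> timestamps of recent shares
    flagged = set()

    def record_share(story_id, now=None):
        """Log one share; return True when the story should go to a human editor."""
        now = now if now is not None else time.time()
        log = share_log[story_id]
        log.append(now)
        while log and log[0] < now - WINDOW_SECONDS:  # drop shares outside the window
            log.popleft()
        if len(log) >= VELOCITY_THRESHOLD and story_id not in flagged:
            flagged.add(story_id)
            return True  # approaching virality: vet before millions see it
        return False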

  • Facebook already has infrastructure for dealing with graphic content, and could easily employ something similar to mitigate fake news

  • Bring back human editors, but disclose to news outlets why specific articles are being punished. Transparency is the only way for news organizations to improve, instead of making them guess. -@advodude

  • Might have a bit too much overhead, but a thought: 1. Begin to fingerprint virally shared stories, as you likely already do in order to serve ads. 2. Have a human editor go through the resulting list and manually flag fake news. 3. Use these “verified fakes” to train an algo to recognize fake content and flag it for review. 4. Use these manual reviews to refine the algorithm. 5. Use user content flags as an additional signal. 6. A human is still required for the foreseeable future, but as the algo improves, the amount of work will decrease. -Steve

  • This could also be some sort of central-server (API) approach: a fact-checking server/API which any user or website can query with a given news URL. This fact-checking server/API then returns all the information it knows about the given news URL, including whether it assumes the news URL to be fake, reasons for that, and alternative (non-fake) news sources on the given topics (a sketch of such a lookup appears after this list). This still requires human involvement to check stories, but it could be run by a somewhat independent organization in which actual news outlets (also) invest. +@IntugGb

  • Perhaps look at how Artificial Intelligence (AI) is being applied across industries. Search through case studies involving lawyers or doctors where massive amounts of information need to be looked at but where, in the end, the final call is made by a human. +@linmart

  • We are a group of former journalists and executives from Canada’s public broadcaster (CBC) whose company, Vubble, has been working on a solution to solve fake news, pop filter bubbles and engineer serendipity into the digital distribution of content (our focus is video). We’ve created an efficient and scalable process that marries professional human editors with algorithm technology to filter and source quality content (and flag problematic/fake content -- we believe the audience needs more agency, not less). We are currently putting together the funding to build a machine learning layer that would produce responsive feeds, giving a user a quality experience of content ‘inside’ their comfort zone, but also deliberately popping in an occasional challenging piece to take her slightly outside her comfort zone on difficult subjects. We’re building this as an open platform, providing transparency on how these types of systems work, and routine auditing for bias within the code that drives it. (@TessaSproule) +Mathew Ingram +Eli
  • Create a news site that rewards honest reporting and penalizes dishonest reporting. - dvdgdnjsph
  • In order to post or vote on content, contributors must purchase site credit.
  • Reddit-style upvoting/downvoting determines the payment/penalty for contributions.
  • More details here 
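
Below is a minimal sketch of the fact-checking lookup floated in the central-server (API) bullet above. An in-memory dictionary stands in for the human-curated database, and the URL normalization and response shape are assumptions for illustration:

    from urllib.parse import urlparse

    # Human fact-checkers would populate this store; the entry is made up.
    FAKE_NEWS_DB = {
        "example-fake-site.com/pope-endorses": {
            "verdict": "fake",
            "reasons": ["fabricated quote", "no corroborating coverage"],
            "alternatives": ["https://www.reuters.com/"],
        },
    }

    def normalize(url):
        """Reduce a URL to host + path so trivial variations hit the same record."""
        parsed = urlparse(url if "://" in url else "http://" + url)
        host = parsed.netloc
        if host.startswith("www."):
            host = host[len("www."):]
        return (host + parsed.path).rstrip("/")

    def check_url(url):
        """Return everything the service knows about a given news URL."""
        record = FAKE_NEWS_DB.get(normalize(url))
        if record is None:
            return {"verdict": "unknown", "reasons": [], "alternatives": []}
        return record

    print(check_url("https://www.example-fake-site.com/pope-endorses/"))  # -> fake

Wrapping check_url behind an HTTP endpoint would give the queryable server/API the bullet describes.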

        



Under the Hood

  • Insert domain expertise back into the process of development: Engage media professionals and scholars in the development of changes to the Facebook algorithm, prior to the moment these changes are deployed. Allow a period of public comment where civil society organizations, representatives from major media outlets, scholars, and the public can begin to tease out the potential implications of these changes for media organizations - @RobynCaplan, @datasociety, @ClimateFdbk

>> I elaborated on this a bit here: - @msukmanowsky

161110 - Medium
Using quality to trump misinformation online

Using page and domain authority seems like a no brainer as a start. I advocated for adding this information to something like Common Crawl 


>> The problem with this approach is that fake news is not only generated by web domains but via UGC sites such as YouTube, Facebook and Twitter. - yonas


  • Use a source reliability algorithm to determine the general reliability of a source and how truthful the facts in a particular article are. This has the benefit that newer news sources still get a fair chance at showing their content. -- Micha

  • Look up the domain registration (WHOIS) record to see when the site was first registered - for example, washingtonpost.com was registered in 1995, while conservativestate.com was registered in Sept 2016 in Macedonia (+Daniel Mintz) (see the sketch after this list)

  • Track all news sources, to include when the website was first registered and any metadata suggesting links to fake/fraudulent activity, as part of an authenticity metric.

  • Provide strong disincentive for domains propagating false information, e.g. if a domain has demonstrably been the source of false information 10 times over the past year, dramatically decrease the probability that links pointing to it will be shown in users’ feed. -- Man


  • Genetic Algorithms using many of these ideas, which may be boiled down into discrete values. Spam filtering is very challenging due to the need to avoid false positives. Start with a seed of known false and true stories. Create genetic algorithms using several of these variables to compete over samples of these stories (need a large set, and rotate the sample to avoid overfitting). Once a satisfactory false positive rate is reached, keep test algorithms running in a non-production environment to look for improvements. -- Steve

  • Audit the algorithm regularly for misinformation--you are what you measure, and a lot of the effects are second-order effects of other choices. --z
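
A sketch of the domain-registration-age signal suggested in the list above, assuming the third-party python-whois package (pip install python-whois); the one-year cutoff is an arbitrary illustration:

    from datetime import datetime

    import whois  # assumption: the third-party python-whois package is installed

    def domain_age_years(domain):
        """Age of a domain in years, from its WHOIS registration (creation) date."""
        created = whois.whois(domain).creation_date
        if isinstance(created, list):  # some registrars return several dates
            created = min(created)
        if created is None:
            return None
        return (datetime.now() - created).days / 365.25

    def age_signal(domain, suspicious_under_years=1.0):
        """Crude authenticity signal: very young domains deserve extra scrutiny."""
        age = domain_age_years(domain)
        if age is None:
            return "unknown"
        return "suspicious" if age < suspicious_under_years else "established"

    print(age_signal("washingtonpost.com"))  # registered 1995 -> established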



Facebook 

  • Facebook polarizes discourse and thus becomes an unpleasant place to “be.” So many people are walking away from the platform as a result of this election. This is a problem in their best interest to solve, and the fake news problem is part of this. -@elirarey

Recommended

161120 - NPR
Post-election, overwhelmed Facebook users unfriend, cut back

  • Facebook needs to be more transparent about the incentives that are driving the changes to their algorithm at different points in time. This can help limit the potential for abuse by actors seeking to take advantage of that system of incentives. - @RobynCaplan, @datasociety +Eli +Anton

  • Differentiate between sharing ‘personal information’ and ‘news articles’ on social media - the current ‘share’ button for both is unhelpful.

    Social media sharing of news articles/opinion subtly shifts the ownership of the opinion from the author to the ‘sharer’.  It makes it personal and defensive: there is a difference between a comment on a shared article criticising the author and one criticising the ‘sharer’, as if they’d written it.  They may not agree with all of it.  They may be open-minded.  By shifting the conversation about the article to the third person, it starts in a much better place: ‘the author is wrong’ is less aggressive than ‘you are wrong’.  [Amanda Harris]

  • Implement a Time Delay on FB Re-shares: Political articles shared on Facebook could be subject to a time delay once they reach over 5,000 shares. Each time an article is re-shared, there is a one-hour time delay between when the article is shared and when it appears in the timeline. When the next person shares it, there is another one-hour delay, etc. This “cool down” effect will prevent false news from spreading rapidly. There could be an exponential filter: once an article reaches 20,000 shares, there could be a four-hour time delay, etc. A list of white-labelled, verified sites, such as the New York Times and Wall Street Journal, would be exempt from this delay (a sketch of the schedule follows below). - Peter@quill.org +BJ (This is a good idea!)

    >> This suggestion would apply only to FB? It would be of little use if, in parallel, a single post spreads like wildfire on Twitter. --@linmart
    Peter: Twitter could also implement this system, where political posts are delayed before appearing in the Timeline. Still, Facebook has far more traction than Twitter internationally, so it’s a better place to start.
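    A minimal sketch of the tiered cool-down described above; the thresholds, delays and whitelist are all illustrative, not a confirmed platform mechanism:

    # verified outlets exempt from the cool-down (illustrative whitelist)
    WHITELIST = {"nytimes.com", "wsj.com"}

    # (minimum shares, delay in hours), checked from the highest tier down
    DELAY_TIERS = [(20_000, 4), (5_000, 1)]

    def reshare_delay_hours(domain, share_count):
        """How long a re-share waits before appearing in the timeline."""
        if domain in WHITELIST:
            return 0
        for min_shares, delay in DELAY_TIERS:
            if share_count >= min_shares:
                return delay
        return 0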

  • In countries with strong public media ethos, Facebook should present users with a toggle option on the “trending” --  option #1 is as is, fully driven by Facebook’s algorithm, or option #2 is a dynamic, curated feed that is vetted by professional editors (independent of Facebook -- programmed by a third party, from the country’s public broadcaster or some other publicly-accountable media entity.) --@TessaSproule 


    Recommended:

    161120 - NYT
    How Fake Stories Go Viral

  • Facebook already knows when stories are fake

    When you click a “like” button on an article, it pops up a list of articles that people have also shared, and a lot of the time that list includes, for example, a link to a Snopes article debunking it. So they already know that people respond to fake news with comments linking to Snopes. They could add a “false” button or menu item to flag fake news. -Abby +Eli +@linmart

  • Pressure Facebook’s major advertisers to pressure Facebook over the legitimacy of news stories upon which their ads are being displayed.
  • Yes, I second this, but how??

  • This is showing up:

    [Image: FB filter.png - the Facebook message that now shows for the link provided by Snopes to the original source of the hoax.]

As reported in this story:

161123 - Medium
How I detect fake news 

by @timoreilly



 

Analysis of Headlines and Content

  • Sentiment analysis of headlines - I suspect most fake news outlets use clickbait headlines with extremely strong verbiage to accentuate the importance of the story. Many clickbait headlines will be from legitimate stories, so this is a signal, not a panacea. - Steve +BJ
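    A toy version of this signal; the cue words are illustrative stand-ins for a real sentiment model:

    import re

    STRONG_WORDS = {"shocking", "destroys", "slams", "unbelievable",
                    "bombshell", "furious", "epic", "insane"}

    def headline_intensity(headline):
        """Crude clickbait-intensity score: strong words, '!', ALL-CAPS words."""
        text = headline.lower()
        hits = sum(1 for w in STRONG_WORDS if w in text)
        exclamations = headline.count("!")
        caps_words = len(re.findall(r"\b[A-Z]{3,}\b", headline))
        return (hits + exclamations + caps_words) / max(len(headline.split()), 1)

    As noted, a high score is only a signal; legitimate outlets use clickbait headlines too.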

  • Search deep links for sources that are known to be legitimate - looking for sources within an article using textual analysis (such as “as reported by the AP” or “Fox News reports”), and checking the domains of said sources for a similar story (or checking the link to the source if provided), is a useful signal that a story is not fake. In case this gets gamed, a programmatic comparison of content between the source and the referring article may be useful - Steve +Kyuubi10 (This is a great idea!)
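    A rough sketch of that textual analysis - a few attribution patterns and the outlets they capture. The patterns are illustrative and would need far broader coverage in practice:

    import re

    ATTRIBUTION_PATTERNS = [
        r"as reported by ([\w\s&.]+?)[,.]",
        r"according to ([\w\s&.]+?)[,.]",
        r"([A-Z][\w\s&.]+?) reports?\b",
    ]

    def cited_outlets(article_text):
        """Return the set of outlets an article claims as its sources."""
        outlets = set()
        for pattern in ATTRIBUTION_PATTERNS:
            for match in re.finditer(pattern, article_text):
                outlets.add(match.group(1).strip())
        return outlets

    # cited_outlets("The raid happened Tuesday, as reported by the AP.")
    # -> {"the AP"}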


  • Cross-partisan index: Articles that people beyond a narrow subgroup are willing to share get more reach. -- Eli + Jesse + Amanda + Peter +CB +@IntugGB +Rushi +JS +BJ +NBA

  • Cross-partisan index II: Stories/claims that are covered by wide variety of publications (left-leaning, right-leaning) get higher Google ranking or more play on Facebook. --Tamar +1NBA
  • Cross-spectrum collaboration: Outlets perceived as left-leaning (eg NYT) partner on stories with those perceived a right-leaning (eg WSJ). -- Tamar +CB +@linmart

  • Compare content with partisan language databases. Some academic research on Wikipedia has assembled a database of partisan language (i.e. words more likely to be used by Republicans or Democrats) after analyzing the congressional record. Content could be referenced against this database to provide a measure of relative “bias.” It could then be augmented by machine learning so that it could continue to evolve. --@profkane
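    An illustrative sketch of that lookup; the tiny term lists here stand in for the real database built from the congressional record:

    # Stand-in term lists; a real system would load the research database
    REPUBLICAN_TERMS = {"death tax", "illegal aliens", "job creators"}
    DEMOCRAT_TERMS = {"estate tax", "undocumented immigrants", "working families"}

    def partisan_lean(text):
        """Score in [-1, 1]: negative leans Democratic, positive Republican."""
        t = text.lower()
        r = sum(t.count(term) for term in REPUBLICAN_TERMS)
        d = sum(t.count(term) for term in DEMOCRAT_TERMS)
        total = r + d
        return 0.0 if total == 0 else (r - d) / total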


(Related comments in the section on Surprising Validators -- @rreisman)



Reputation systems

  • Authority of the sharers: Articles posted by people who share articles known to be true get higher scores. -- Eli

  • The inverse -- depress / flag all articles (and sources) shared by people known to share content from false sources. -- John

  • Author authority, as well. Back in the day Google News used Google+ profiles to identify which authors were more legitimate than others and then factored that into their news algorithm. - Kramer +Eli +@linmart
  • I think authorship data might still be available in the form of “rich snippets” embedded in the articles. -Jesse
  • We’re looking at author bios tied back to a source like LinkedIn - @journethics +@linmart

  • There are fundamentally not that many fake sources - a small team of humans could monitor/manage them. Once you start flagging initial sources, the “magic algorithm” can take over: those who share flagged sources are themselves flagged; other items they share are flagged in turn; those sources are themselves flagged; and so on (see the sketch below). -- John (+Tamar)
  • Upvoting/downvoting - (Andrew) There are better approaches than simply counting the number of upvotes and downvotes. (+JonPincus)
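    A minimal sketch of John’s propagation idea, under illustrative assumptions about the data (a bipartite share graph of sources and the users who share them), with suspicion decaying at each hop:

    from collections import defaultdict

    DECAY = 0.5       # each hop carries half the suspicion
    THRESHOLD = 0.1   # stop propagating below this level

    def propagate_flags(seed_sources, shared_by, shares_from):
        """
        seed_sources: sources flagged by the human team
        shared_by[source] -> users who shared that source
        shares_from[user] -> sources that user shares
        Returns a suspicion score for every source and user reached.
        """
        suspicion = defaultdict(float)
        frontier = [(src, 1.0) for src in seed_sources]
        while frontier:
            node, weight = frontier.pop()
            if weight < THRESHOLD or weight <= suspicion[node]:
                continue
            suspicion[node] = weight
            neighbors = shared_by.get(node) or shares_from.get(node, [])
            for nxt in neighbors:
                frontier.append((nxt, weight * DECAY))
        return suspicion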

  • Reuters Tracer

    Fall/Winter 2016 - CJR
    The age of the cyborg [AI]
    Already, computers are watching social media with a breadth and speed no human could match, looking for breaking news. They are scanning data and documents to make connections on complex investigative projects. They are tracking the spread of falsehoods and evaluating the truth of statistical claims. And they are turning video scripts into instant rough cuts for human review...

  • Fake news is usually an editorial tactic/strategy. So it’s something that is planned and repeated, with specific individuals working on it. An open-standard reputation system, just like Alexa rank, will do the job. It will be crowd-populated at first. We at Figurit are currently working on implementing this internally to discover stories while eliminating fake ones. So instead of ongoing filtering/policing of the news, an open reputation system adopted by major social networks and aggregators will kill fake news websites.

    NOTE: must put in an exception for The Onion! ;]


  • Higher ranking for articles with verified authors. -- Eli

  • Ask select users -- perhaps verified accounts, publisher pages, etc -- when posting/sharing to affirm that the content is factual. Frontloading the pledge with a pop-up question (and asking those users to put their reputations at stake) should compel high-visibility users to consider the consequences of posting dubious content before it’s been shared, not after. (This is based on experiments that show people are more honest when they sign an integrity statement before completing a form than after.) -- Rohan +Manu

  • It’s a strange thing, what happened with Google+. When it started, the group was very select. Content was extraordinary - as were the conversations. Once they opened the floodgates, all hell broke loose and all sorts of ‘characters’ started taking over. Conversations went from being quite academic to… well, different. --@linmart

  • I maintain an (open-source, non-profit) website called lib.reviews, which is a generic review site for anything, including websites. It allows collaborators to form teams of reviewers with shared processes/rules. I run one such team, which reviews non-profit media sources (so far: TruthOut, Common Dreams, The Intercept, Democracy Now!, ThinkProgress, Mother Jones, ProPublica). I think this is essential so news sources in the margins don’t get drowned out by verification systems or efforts to discredit them. Here’s the list of reviews specifically of non-profit media:

    Reviews by Team: Non-profit media

    It’s easy to expand this concept in different ways. If you’re interested in collaborating on the tech behind it or on writing reviews of news sites, see the site itself, or drop me a note at <eloquence AT gmail DOT com>. See our FAQ for general issues w/ user reviews. --Erik Moeller @xirzon



A possible method of implementing reputation systems is to make the reputation calculation dynamic and system-based, mapping the reputation scores of sources onto a reverse sigmoid curve. The source scores would then be used to determine the visibility levels of a source’s articles on social media and search engines. This ensures that while credibility takes time to build, it can be lost very easily.

where:

    Ss = source score
    Sa = cumulative score of the source’s articles

However, this system needs to be dynamic and allow even newer publications a fair chance to get noticed. This can be done by monitoring the reputation of both the sources and the channels the articles pass through.

I have fleshed out the system in a bit more detail in the following open document, if anyone is interested in taking a look.

Concept System for Improved Propagation of Reliable Information via Source and Channel Reliability Identification [PDF] 

Anyone interested in collaborating on this can contact me at sid DOT sreekumar AT gmail
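Pending the details in the PDF, here is one rough reading of the curve described above, with illustrative constants: Sa is earned slowly article by article, Ss saturates only after a long good record, and bad articles are weighted more heavily so credibility drops fast:

    import math

    MIDPOINT = 50.0        # Sa at which a source earns half the maximum score
    STEEPNESS = 0.1
    LOSS_MULTIPLIER = 3.0  # bad articles count several times as much

    def source_score(sa):
        """Ss in (0, 1): slow to earn, quick to lose."""
        return 1.0 / (1.0 + math.exp(-STEEPNESS * (sa - MIDPOINT)))

    def update_cumulative(sa, article_score):
        """Add one article's score, penalizing negative scores more heavily."""
        if article_score < 0:
            article_score *= LOSS_MULTIPLIER
        return sa + article_score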



Build an algorithm that privileges authority over popularity. Create a ranking of authoritative sources, and score the link appropriately. I’m a Brit, so I’ll use British outlets as an example: privilege the FT with, say, The Times, along with the WSJ, the WashPo, the NYT, the BBC, ITN, Buzzfeed, Sky News, ahead of more overtly partisan outlets such as the Guardian, the Telegraph, which may count as quality publications but which are more inclined to post clickbaity, partisan bullshit. Privilege all of those ahead of the Mail, the Express, the Sun, the Mirror.

Also privilege news pieces above comment pieces; privilege authoritative and respected commentators above overtly partisan commentators. Privilege pieces with good outbound links - to, say, a report that’s being used as a source, rather than a link to a partisan piece elsewhere.

Privilege pieces from respected news outlets above rants on Medium or individual blogs. Privilege blogs with authoritative followers and commenters above low-grade ranting or aggregated like-farms. Use the algorithm to give a piece a clearly visible authority score and make sure the algorithm surfaces pieces with high scores in the way that it now surfaces stuff that’s popular.

Of course, those judges of authority will have to be humans; I’d suggest they’re pesky experts - senior journalists with long experience of assessing the quality of stories, their relative importance, etc. If Facebook can privilege the popular and drive purchasing decisions, I’m damn sure it can privilege authority and step up to its responsibilities to its audience as well as its responsibilities to its advertising customers. @katebevan



I doubt FB will get into the curating business, nor do they want to be accused of limiting free speech. The best solution will likely involve classifying content as Verified News, Non-Verified News, or Offensive News.

Offensive News should be discarded; that would likely include things that are highly racist, sexist, bigoted, etc. Non-Verified News should continue with a “Non-Verified” label and encompass blogs, satire, etc. Verified News should include major news outlets and others with a historical reputation for accuracy.

How? There are a variety of ML algorithms that can incorporate NLP, page links, and cross-references to other search sites to output the three classifications. Several startups use a similar algorithm of verified news sources and their impact for financial investing (Accern, for example).

We could set up a certification system for verified news outlets. Similar to Twitter, where there are a thousand Barack Obama accounts but only one ‘verified’ account. A certification requirement might include the following: +@IntugGB

Possible requirements

  • National outlet: Should have a paid employee in a minimum number of states.
  • International outlet: Paid employees in multiple locales.
  • Breadth of coverage: A solely focused political outlet should not be a certified outlet.
  • Minimum number of page views prior to certification

Time in existence and number of field reporters in each country/locale should also be required


Verified sites

  • Verified sites. Rather than get into the quagmire of trying to identify all "fake" news, there could be a process by which publishers could apply to be marked in the news feed as "verified." (Think: verified people on Twitter… but less arbitrary.)

    Example criteria: all published articles are linked to a real person, sources for stories are specifically cited, fact-checkers and high-authority sites link to the site as a reputable source, etc. Basically, a combination of factors listed in this doc.

    When you get accustomed to seeing stories in the News Feed marked with a badge identifying the source as verified, you’d automatically be a little skeptical of things not marked verified. And FB gets to avoid getting directly involved in that impossible “policing fake news” quagmire. --Sonders (+Andy) Nice (Andrew) +@linmart

  • This may potentially be enough. If FB doesn’t want to go all the way down this path of ‘verified’ sites, it may be possible to simply build in ‘speed bumps’ to slow down stories from suspect sources until they are caught by FB’s other methods (i.e. user reporting). The issue seems to be that things can go viral too fast to be caught under the current model --Cam (+Daniel Mintz)

  • Riffing on the speed bump idea that Cam wrote above, I’d just say that creating a very permissive whitelist for verified news and speed-bumping other “news” that isn’t on the whitelist seems like it would make a big dent with very little effort and next to no downsides. And when I say very permissive, I mean it. NYT would get through, but so would Daily Kos, Breitbart, The Blaze and Upworthy. The Macedonian sites and their ilk wouldn’t. It wouldn’t come close to solving the whole problem, but it would make a dent at very low cost.
  • This is essentially the model thetrustproject.org is using. Trust Indicators include author bio (ID); citations; labeling of news, analysis, opinion and sponsored content; original reporting; opportunities for the public to dispute, etc. @journethics

  • Thoughts/questions on a board to verify sources: 1) Though it should be a regulatory body, it absolutely must be independent of the government. 2) Standards shouldn't have to be money- or scope-based -- this limits the capacity of citizen journalists, smaller outlets, or independent news producers and freelancers. That's the beauty of the internet, but it's also the danger -- anyone can say anything. Why not use it as a soapbox for those folks who will provide a megaphone for real news and stories beyond the headlines and major outlets? 3) On that note, what kind of standards can journalists agree on? Credibility of sources? Journalistic policies? Ethics rules? 4) I don't know enough about the tech to say this definitively, but I'm not sure this is something you could accomplish with an algorithm at this point. I think this is a place for human editors/boards. For the sake of allowing this to be a tool to verify all outlets, from small citizen bloggers to the NY Times, it could be a peer review system -- volunteer journalists review the outlets/sites that apply for verification. You could have a higher board of paid editors (funded by some non-profit source or facebook/twitter/google) whose job it is to audit larger sources; but in general, is independence from tech firms (which are money-making entities to be covered themselves) something we should seek? 5) Enforcement: when I Google News search a topic, a range of articles comes up, not all of which are news -- some are fraudulent/fake, some are biased -- why is this being called news? Why should it get a checkbox when other, unverified sources shouldn't appear next to it in the first place? If anyone is interested in discussing the specifics of what I’m thinking, contact me -- alexleedsmatthews@gmail.com





Distribution - Social Graph

  • Do other outlets pick up the story? If not, it’s probably false. Could downrank domains on this basis over time -- it’s very unlikely that a site originates highly shareable, true stories that no one else picks up. --Eli +pvollebr +BJ
  • Google News did a “syndicated content” meta tag back in the day. It was used by news sites with original content to signal to the GNews algorithm to treat it differently. Any site using similar content would add weight to the original piece, pushing it higher in the rankings. - kramer
  • “Original reporting” is an indicator we’re working on, but it’s tricky. Use language analysis to ID derivative text? - @journethics
  • Related: “fake news” topics, and the wording thereof, probably exhibit vastly different clustering behavior relative to “real news” -- I can’t necessarily anticipate how, but the data’s there to figure it out. So, in short: we can train a classifier on the types of features already extracted by algorithms that perform automated “summarization” and other text-analysis tools --Andy

    Assuming everyone is in an echo chamber, there might be some value in injecting some form of alternate viewpoint. Verified sources, e.g. NYT, would signal “quality,” but NYT opinion is slanted; sometimes that may be a good suggestion, other times not. --ac
  • Can’t solve a technological problem with a technological solution. We need to invest in:
  • Is this a graph clustering problem? I.e. if you have a bunch of fake websites that primarily link to each other, you ought to be able to find this somehow. Spectral analysis of the graph matrix? -- N.
  • I agree with this, I think graph clustering may be a large issue here, but it’s just from personal experience, need more info -Cam
  • I use social graph analytics in my newsroom at Liquid Newsroom to analyze information markets. It is possible to detect such patterns and to build a neural network system trained to detect similar patterns. (Steffen)
  • (See related ideas under heading Surprising Validators -- @rreisman)
  • We have a recent research paper on trying to solve this problem from a graph perspective, on Twitter. The idea is to connect users with opposing viewpoints by generating content recommendations from the other side. More info here: https://northernbytes.co/2017/02/06/reducing-controversy-by-connecting-opposing-views/
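    A small sketch of the clustering intuition above, assuming the networkx library and a link graph whose nodes are domains: tightly interlinked clusters that almost nobody outside links into are candidates for coordinated rings:

    import networkx as nx

    def suspicious_clusters(link_graph, min_size=3, max_external_ratio=0.2):
        """link_graph: nx.DiGraph of hyperlinks between domains."""
        flagged = []
        for component in nx.connected_components(link_graph.to_undirected()):
            if len(component) < min_size:
                continue
            internal = link_graph.subgraph(component).number_of_edges()
            external = sum(1 for u, v in link_graph.in_edges(component)
                           if u not in component)
            # many internal links, few inbound links from the outside world
            if internal and external / internal <= max_external_ratio:
                flagged.append(component)
        return flagged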


  • Okay, here’s a stupid idea from that paper: textual analysis. Take a webcrawler, look at the “promoted” box, and assume that you have poor trust for any website that is in the same textual link area of the webpage as “one free wrinkle trick” or anything to do with local doctors being furious (say). Chances are, anything that is in there (“Crooked Hillary is Done!”) is probably also not trustworthy. -- N.



Fact-Checking

  • News organizations that have known fact checkers and issue corrections should have higher weight +pvollebr
  • Pair questionable news with fact-checking sites (and invest in fact-checking sites) - zeynep +Eli +@ClimateFdbk

  • Micro-bounties for fake news?  Maybe Facebook/ Twitter/ Google News -- or some outside philanthropist group -- could set up a small fund to reward non-affiliated users who identify fake news stories (thus incentivizing the exposure of fake news rather than the creation of it -- and crowdsourcing that hunt) +Kyuubi10
  • Create a scoring system: Use human editors, in conjunction with fact checking organizations, to score sites for the news they post. “Pants on Fire” scores a 5. “Mostly False” scores a 4 and so on. Once a site reaches a predetermined number of points, they get banned and removed from Facebook.

    Note: Facebook already has this system in place for individuals - if you violate the rules too often, you get banned. Do the same for pages.
  • Be careful with this. The banning systems are often gamed, in that they can be overloaded or ganged up on or have people with many multiple accounts to affect the results. This must be monitored and cleared by humans, which means fact checking by independent sources.

  • Full Fact is the UK’s independent fact checking charity.
  • Fact checking has to be completely transparent and auditable, with anyone being able to dispute any already-checked fact. --Kyuubi10



Creation of multiple, user-selected fact checkers
(no banning/ censorship/ editorial control/ coercion):
 

  • Social media platforms create an API for independent fact-checking organizations to plug into and provide verdicts for individual links, as well as for sites in general (based on statistics for that site). Verdicts must be restricted to one of a small number of well-designed categories, similar to what e.g. Snopes provides today, perhaps with an additional verdict for “satire”.
  • They invite organizations across the spectrum to participate and become a fact checker.
  • Then, they allow users of social media platforms to explicitly select one or more fact checkers for themselves.
  • The verdict of fact checkers on any given external link becomes an annotation (e.g. color-coded) in the platform’s presentation of the content. The annotation should link to the fact checker’s site.
    The annotation would also be presented in the platform’s “editor” as soon as a link is added to a post.

This may achieve the following:

  • Users might welcome this as a useful service, because it saves them the work of consulting their chosen fact checkers’ sites themselves and finding the correct article. Many have been embarrassed when they posted a provably false story, then got called out on it by others.
  • There is no censorship. In fact, it would get social media platforms out of the fray of editorship - a position they may prefer. Notably, in this option, there would continue to be no more restriction on the type of links shared than today.
  • It makes trust a first-class citizen on the platform. Being asked to select a fact checker from a wide spectrum makes it immediately clear to the user that there is no absolute truth in any of the external content they are presented with on social media, and that there is a significant amount of content on the web that is not based on proven facts. It clearly conveys that it is everyone’s responsibility to apply their own judgment here. The platform will assist with that process and make it easy.

    Choosing a fact-checker is quite obviously a much more consequential choice than clicking on an article, so it would likely, on average, be made more judiciously.

    If deemed acceptable or desired, the social media platform could give more reputable (or popular) fact checkers better placement in the fact checker choosing UI.

  • Many would select multiple fact checkers if it’s easy to do and unobtrusive, perhaps even across the spectrum because, in the end, everybody wants more information. Seeing the verdicts or even conflicts between fact checkers may give rise to critical thinking at the margin.
  • It raises the semantic level of the discourse. In short, a piece of web content is mostly measured by how entertaining it is; a fact checker is measured by how accurately it assesses the veracity of an article. Very different discussion.

Most importantly, it injects high-precision information into the system that is not easily obscured. Anybody can write and publish an article that contains lies or half-truths or dog whistles or satire. Fact checkers, however, must provide a clear verdict (through the API). They also cannot easily state that something is true when it is probably not and vice versa without losing some trust when other fact checkers’ verdicts are readily accessible. (+KP)
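A minimal sketch of the verdict objects such an API might exchange; the field names and verdict vocabulary here are illustrative, not a proposed standard:

    from dataclasses import dataclass
    from enum import Enum

    class Verdict(Enum):
        TRUE = "true"
        MOSTLY_TRUE = "mostly_true"
        MIXED = "mixed"
        MOSTLY_FALSE = "mostly_false"
        FALSE = "false"
        SATIRE = "satire"

    @dataclass
    class FactCheck:
        checker_id: str          # the registered fact-checking organization
        url: str                 # the external link being annotated
        verdict: Verdict
        explanation_url: str     # link back to the checker's full article
        site_wide: bool = False  # verdict covers the whole domain, not one link

    def annotations_for(url, user_checkers, verdict_db):
        """Verdicts from only the fact checkers this user has selected.
        verdict_db is a hypothetical store with a lookup(url) method."""
        return [fc for fc in verdict_db.lookup(url)
                if fc.checker_id in user_checkers]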


Special databases


Linking corrections to the fake news they correct:

  • High-quality fact checking is of little use if the misinformation has been duplicated 100 times across 100 blogs and spread throughout social media. There needs to be a way to associate the fact check with the misinformation, so that whenever the misinformation is shared, the fact check is presented alongside it. - @rbutrcom <<< Could you create a digital signature of the underlying misinformation that “matches” the 100 instances, and then associate the fact checking with the signature (class) rather than the URLs (instances)? (See the sketch after this list.) @alecramsay

  • Rbutr has been working on a solution to this for a while, storing URLs in a database where one URL is a critique or rebuttal of the other. Eg:

  • This lets rbutr tell people when the page they are viewing has been critiqued. Eg: Foodbabe article connected to Snopes via rbutr
  • Facebook, Twitter and Google could easily use this database as a way of identifying and displaying rebuttals for content shared in their feeds and search results too
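One way to build the “digital signature” floated above, so a fact check attaches to the claim rather than to a single URL: hash overlapping word shingles and match copies by overlap. The shingle size and threshold are illustrative:

    import hashlib
    import re

    def shingle_signature(text, k=8):
        """Set of hashes of overlapping k-word shingles."""
        words = re.findall(r"\w+", text.lower())
        shingles = (" ".join(words[i:i + k])
                    for i in range(max(len(words) - k + 1, 1)))
        return {int(hashlib.md5(s.encode()).hexdigest()[:8], 16)
                for s in shingles}

    def same_story(sig_a, sig_b, threshold=0.5):
        """Jaccard overlap between signatures: near-duplicates score high."""
        if not sig_a or not sig_b:
            return False
        return len(sig_a & sig_b) / len(sig_a | sig_b) >= threshold

Each rebuttal in a database like rbutr’s could then be keyed by signature, so all 100 copies match the same fact check.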

Recommended

161122 - Nieman Lab 
“No one ever corrected themselves on the basis of what we wrote”: A look at European fact-checking sites
Via Nieman Journalism Lab at Harvard @NiemanLab


Reuters Institute
Rise of the fact checker - A new democratic institution?
Via Nieman Journalism Lab at Harvard @NiemanLab


Interface - Design

  • Change the user interface so the credibility of the source is reflected: “Denver Guardian” doesn’t look the same as Washington Post. -z    I like this. It’s much like how media orgs (should) use visual design cues to distinguish sponsored content from independent journalism. You’re still in the same experience, but you get a visual cue that helps prioritize the solid information and hopefully turns on your BS meter for sketchy stuff. --mfuvio
  • Issue here could be distinguishing between falsity and satire. This solution should contemplate and allow both in the UI.  Austin @atchambers
  • Satire could be labeled. Colors could differentiate between Satire and Fake.

  • Make popularity vs. truth distinct measures from each other (often they are conflated) - Berkun + Srila +Peter +JS +NBA
  • Perhaps it’s as simple as shading stories from suspect or satirical sources a different color in the news feed. People could choose to turn this feature on or off, or limit certain sources --@citylifejc

  • Create an “editor” role - similar to likes, allow credible people to up- or down-vote whether or not a particular piece is fake. Opt in current editors (NOT journalists or reporters, but editors). Shift the question of accreditation from the article to people (as in how newsrooms themselves are built). Good independent content can still rise up. - Brandon (+Peter)

  • Create a “FAKE!” overlay that replaces the photo of any story/url proven to be false. Then people can’t keep spreading it on Facebook without a huge, visual warning to everyone that the story isn’t true. (Like Memebuster)
  • Take a look at all stories that are getting significant engagement: if they are outright fraudulent (“Pope endorsed Trump”), either severely downgrade them or present them with “debunked” UX if you must. (I don’t see why there is a free speech right to hoax people without being challenged or dampened.)
  • Add a “credible news” box, instead of just “trending”--if you are going to push news stories to people, might as well push good ones.

  • Flag FB-shared stories whose "original" publication date is way off from the current date. Many times I've been fooled/dismayed by stories that were favored in my TL, or in that egregious "People Also Shared" ribbon, because multiple friends shared them -- only to find when I click that the sensational headline is weeks or months old. Of course there are good/useful and bad/misleading examples of this (I belatedly learned that Bannon had possibly committed voter fraud in August; but I also clicked a headline from March about Trump that was initially relatively heartening but ultimately totally misleading because it was so old). Either way, date discrepancies should be flagged. @HolleyA


Flags

  • Petition Facebook to add a “fake news” flag, and when it gets to a high enough flags-vs-clicks ratio (10%?), display a “Users have flagged this link as containing potentially untruthful content” warning next to the shared post. Whitelist certain reputable publications vs. blog-sites. Petition Google to do the same. -@lpnotes +Eli

  • If news is flagged as fake, it needs to be replaced by real news from the same political viewpoint. There is a disproportionate amount of fake conservative news out there, but there are plenty of legitimate conservative news sites that can provide fact-based reporting. For example, flag an article about Clinton selling weapons to ISIS as fake news, and offer a legitimate critique of her approach to foreign policy. This might only be possible with human editors (see bullet point above), but it also addresses the issue of human editors being accused of bias. -@elirarey (+Tamar)


    >> But how can you be really sure that Clinton did not sell weapons to ISIS? People will be concerned if you make moves like this. Who is going to be the verifier of said claim? Clinton herself?
      +Karl Point taken :D
  • User flagging of false news, adjusted by ideology (e.g. a Dem flagging a site that is typically visited by more Dems gets more weight; see the weighting sketch after this list). -- Eli
  • Develop credibility scores for users who accurately flag fake / non-fact checked news. Send accelerating content to those trustworthy citizen fact checkers and ask: “would you share this?” -Rohan +Manu
  • Or do credibility scores on a site/article level. Allow users to downvote a la reddit/digg
  • Differentiate between ‘fake’--where information is fabricated; ’unverified’--where information cannot be confirmed as true or untrue because available data do not justify the conclusion; and then ‘biased’--where the authors of the story have made no attempt to represent both sides. Especially for the latter I think you’d need human editors, but it seems like it should be possible to train an algorithm to look for cases where quotes are one-sided (perhaps relying on a database like OpenSecrets that classifies organizations as predominantly liberal or conservative, although ideally this should apply to nonpartisan sides as well, e.g. in a police shooting situation).
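A toy version of the ideology-adjusted weighting in Eli’s bullet above; the [-1, 1] lean scale and the weight range are illustrative:

    def flag_weight(user_lean, site_audience_lean):
        """
        Leans in [-1, 1] (negative = left, positive = right). A flag from
        inside the site's usual audience counts more than one from an
        obvious opponent; weights land in [1, 3].
        """
        distance = abs(user_lean - site_audience_lean)  # 0 .. 2
        return 1.0 + (2.0 - distance)

    def flagged_score(flags):
        """flags: iterable of (user_lean, site_audience_lean) pairs."""
        return sum(flag_weight(u, s) for u, s in flags)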


Downrank, Suspension, Ban of accounts

  • Punish (downrank, suspend, ban) accounts that post fake news, thereby disincentivizing their spread - @Jesse_VFA
  • If there’s one thing we’ve learned (from Macedonian teens and this guy), it’s that this is largely an economic problem. Fake news is good business -- a lot of people posting fake news are doing it because it’s profitable.
  • In other words, if you made posting fake news an existential threat for high volume accounts (they could lose access to thousands/millions of subscribers), then you could stop the problem at its root.



Contrasting Narratives  

  • Improve suggestions for further reading on basis of quality, diversity rather than ‘echo’ - Austin @atchambers  +@IntugGB +Nic

    Suggestions based on viral articles from verified sources instead of just related articles? (Andy C)
    Yes, but it would also be good to provide a different viewpoint. Media literacy at large scale, so kids get good bullshit detection as early as the nth grade. -Ben @benrito --> (This isn’t the right place for this suggestion; this is derailing the issue. I’m all for more education, but this isn’t what the thread is about and I strongly resist making this about things that are not relevant. - zeynep) + Amanda (+Tamar)

  • Highlight stories that contradict one another on a particular event/claim, emphasize their differences, and provide analysis that either confirms or places claims into question (corollary: how to ensure whistleblowing news is not marked as ‘unusual’?) -@yelperalp

  • Pair stories, such as fake news, with stories that debunk them, directly within the feed, regardless of whether a user likes that publisher or not. That way users will be exposed to diverse viewpoints on an issue, on things that matter to them, irrespective of source. - @RobynCaplan, @datasociety +@linmart
  • View As feature to see what appears in others’ filter bubbles. - Nathan +nick

  • Create an excerpting function for cards that encourages readers to seek out and excerpt text from the article supporting the headline claim (or other focus), replacing the standard description of the story supplied by the site
  • Invert the current relationship of story source and “friend source”: put the story source prominently on top, with the friend and comment secondary

  • Counter-narrative the emotional content with memes, pictures and videos to assuage the targeted feelings. Otherwise rational people believe, or fail to question, fake news because it validates their existing feelings (and/or their defenses against unwanted feelings of loss, shame, fear, vulnerability, frustration, jealousy, inadequacy etc.). This would work in lieu of fact checking (which studies show actually reinforces prejudices).

Points, counterpoints and midpoints

Thread by @juliemaupin

  • Create FB Point, FB Counterpoint and FB Midpoint. Make FB default to showing three stories side-by-side whenever a news story appears in a user’s feed: -@juliemaupin
  • Story 1 (FB Point):  the story that was placed into your feed by one of FB’s standard mechanisms, with clear labels to indicate whether the story was:
  • Placed into your feed because it was shared by your friend, OR
  • Placed into your feed by FB’s automatic algorithm on the basis of your past likes, OR
  • Placed into your feed because someone paid FB to put it there.
  • Story 2 (FB Counterpoint):  the counterpoint story, which is the version of the same story (i.e. addressing the same underlying news item as story 1) currently receiving the most views from people who have been identified as holding views opposite to yours.  
  • Note this would require using the combined techniques of big data and psychographics which firms like Cambridge Analytica are currently using to influence political campaigns (OCEAN method + consumer data + political data, etc). In this case, however, the information would be used to identify the FB user’s psychographic profile only for the purpose of feeding him/her the story most likely to be viewed by his/her psychographic opposite.
  • The counterpoint story provides a perspective check for the user, so that s/he is confronted with the reality of which end of the spectrum the “shared” story comes from and how far out from the center that perspective lies.
  • Story 3 (FB Midpoint):  the version of story 1 that lies in the psychographic middle - i.e. the median - of all online social media activity.
  • Again this would require identifying the midpoint story using big data + psychographics techniques.
  • Note it’s important to come up with a method for identifying the median rather than mean, since users with certain types of profiles are more likely to be active sharers than users with other profile types (hence skewing the midpoint).
  • The Midpoint story could also be used to identify the degree of skew of the point story and the counterpoint story.  E.g. one could illustrate graphically right below the 3 side-by-side stories how far to one side or the other of the midpoint story the other two lie.
  • Other notes:
  • In principle, one could design the same kind of system for Twitter feeds & other social media platforms.
  • In principle, one could design a similar system for browser search results, e.g. browsers could be designed to tee up three sets of search results, perhaps separated by “tabs” within the search results page, showing:
  • Google you:  the ordinary search results you see when you allow google to learn your browsing habits and tailor the search results it shows you to your own past behavior
  • Using a name like “google you” makes it clear that this is not unbiased information you’re receiving in your results list.  It’s based on YOU!  Your preferences, your online behavior.  It’s a reminder that you are not representative of everyone.
  • Google counterpoint:  the search results google would tee up if someone with a psychographic profile on the opposite end of the spectrum from yours were to search the same terms you just searched
  • Google middle:  the search results google’s data indicates are the most popular across all users for the search terms you entered (the median results)
  • Visually, in order to be effective, the three stories should be placed side-by-side with the same amount of space, prominence etc given to each.
  • This proposal could also be combined with others like fact checking.  E.g. one could display the fact-checked ratings of each of the three side-by-side stories right below the stories themselves in the user’s feed.
  • Some (of many) difficult questions:
  • Should the counterpoints & midpoints be constructed using national or international data?  International provides more diverse perspectives to confront social media users with and makes more sense given the realities of today’s global information-sharing environment.  However, the data sets are currently much richer on a country basis, especially within the US.  This means the counterpoint and midpoint feeds would probably have lower statistical validity and/or lower relevance for non-US users, of which there are many.
  • Should digital/social media users have a right to interact with social media without having any psychographic data collected on them?  If so, how could that be facilitated while also allowing users who want to benefit from algorithmic feed methods to do so?  Note that allowing opt-outs creates selection bias problems for the remaining data sets.
  • What does academic research tell us about how far away we should expect a FB midpoint or Google Middle type result to lie from a fact-checked news story of the old-school “mainstream news media” variety?  No doubt there’s some literature on this.




Model the belief-in-true / belief-in-fake lifecycle

Thread by @thomasoduffy


We need to build narratives that help us clarify, in a crystal clear way, the whole life-cycle and “CX” along the journey of fake news. For example, how a person goes from not holding an opinion on an issue, to taking in fake news across single, multiple and/or compound reinforcing sources (like brand touchpoints), to believing fake news, holding that perception for a period of time, and later discovering it was not true (when and if that happens) in a way where they change their mind.

By focusing on people who have overcome belief in some fake-news source, we can ask how to get more people to go through a fake-news recovery process sooner.

Similarly, we need to contrast this with how people become resilient against fake news and know it’s not true, so we can nudge people vulnerable to fake news towards more resilience against it.  

The goal of this would be to reduce vulnerability to fake news, reduce the incidence of fake-news success conditions, and shorten the half-life of successful fake news.


Verified Pages - Bundled news 

  • Verify pages before letting them declare themselves a “News/media website” -- until then they must call themselves a community group. There must be some sort of newsgathering requirement
  • Facebook could play a role in helping people bundle subscriptions to news sites and publications so that publishers are not only driven by ad-revenue, which tends to exacerbate clickbait concerns regardless of authority of publication. Understand, however, that this further bottlenecks the media industry and makes it reliant on Facebook... @RobynCaplan, @datasociety

  • Add news center that aggregates quality, validated news that is shared across facebook rather than inserting into newsfeed or relying on trending sidebar. Plus if it includes metrics or data relating to the ‘why’ of quality -- Austin @atchambers
  • There are only so many actual news sources in the world. Instead of allowing everything and trying to filter out new trash that keeps spawning, have an exclusive news feed that local and national news sources need to apply to and be vetted for. By humans. Those sources can post links to actual news stories as long as they are on good behavior (posting real news)
  • This seems similar to the Apple news model… wonder what else they have been doing?
  • If this suggestion is headed for FB, be careful. They already tried this in India with specific websites that would be selected under their Free Basics initiative. The concerns and criticism the vetting process drew turned out to be quite a nightmare. - @IntugGB

    Background:

    160512 - The Guardian
    The Inside Story of Facebook’s Biggest Setback 

  • Facebook and other media companies need to have an in-house team that engages on issues about media ethics and the public interest, and that can help facilitate discussion between relevant stakeholders. This team must not be held to the same business incentives, i.e. increasing click-rates, as the other teams building the News Feed algorithm. This would be akin to technology companies having teams dedicated to issues such as accessibility concerns. This group can also provide a research basis for any authority score that different publications receive. Like PageRank did for ordering pages based on links driving traffic to the site, this can include a higher score for publications that deal directly with official communication coming from institutionalized sources, as well as for original reporting, such as speaking directly to concerned citizens or relevant actors --@RobynCaplan, @datasociety + @linmart



Viral and Trending Stories

  • Create a publicly visible feed of rapidly trending stories. Allow users to report in links to debunks and have a team of FB editors evaluating those and deciding whether to take action: slow the spread, attach a debunk story to suggested media, or remove the post from newsfeeds entirely --@eylerwerve
  • Imperfect but you could flag stories that spread virally within a narrow cohort, using the same tags you use to target advertising. Then have a human editor look at them. The trick is to eliminate the worst offenders, not the edge cases --@sbspalding

  • Slow down content’s velocity until it gets fact checked, either by a hired team or by crowdsourcing (either way there should be external oversight, because Facebook) --Rohan (+Peter) +Rushi

  • For users, differentiate between sharing the news and questioning the news. Create an “ask your friends” sharing feature that looks like a regular share but asks your facebook friends “is this real?” Many times people share content and remark “if this is true, then…” If those shares are treated differently than normal news, users can better understand their friends’ intentions, slowing down how quickly fake news moves --Rohan (+Tamar, Dave)
  • For users, pop up a warning before they share an article likely to be fake. "Are you sure you want to share this? Many users say this is inaccurate or a hoax." Or,  "Many users have reported that this source spreads inaccurate articles or hoaxes" --Dave

  • Incentivize FB users to report fake news and hoaxes. Give them a discount on boosting $$ for their own page, or a discount code for products/services, dinner with Zuckerberg, badges for top “reporters,” etc. --@janeeliz
  • I just have to throw this in: I would tell FB to blast every user about being skeptical and critical thinkers. Meanwhile, for sure mark the source of origin of stories if that source is unreliable -- even after a story is shared, once the source is vetted as bad. --MarkG
  • Use tools like NewsWhip to humanly curate stories that are spreading, to focus fact-checking approaches, with flags for mechanisms that limit syndication of viral fake-news stories. Connect with algorithms like those used within Mendeley to trace derivative stories. --@thomasoduffy



Patterns and other thoughts 

Thread by Heather

  • Prioritize recommended news based on verified news-outlet domains, not just domain authority. Add this into the logic, not just what is getting shared by friends. Verified news outlets should feel verified. If something is flagged as not true, or follows the characteristics of fake news, tell the end user somehow. --Heather
  • Allow flagging of suspected posts (would need to add this into prioritization logic in order for it to be effective so could be sketchy). --Heather
  • Learn patterns for fake news stories - I would imagine there must be some that could at least trigger a moderation / blacklist. URLs, the people who first share them, patterns in the content itself.

>> To build on what Heather said, there are specific “clickbait” patterns that a lot of these stories use. Perhaps use that as part of a signal to downrank stories or flag them for human review

  • Great idea!! Since clickbait titles are optimized to get maximum exposure, this would be a good countermeasure to disincentivize their effect. Even better: it would disincentivize real news from using clickbait formatting, so fake news -- which depends disproportionately on engagement from headlines alone and therefore can’t afford NOT to clickbait -- would stick out - nick

    >> Washington Post is doing it … (?)

    This researcher programmed bots to fight racism on Twitter. It worked.

  • To further add to what Heather said about ads - work with FB to create alternative ad structures (image only, no click-throughs, etc) so predatory companies can’t use FB to generate revenue from ads that are clearly designed to confuse and mislead --Emily @emilycr0w
  • One more pattern:

  • Quotes from sources

    Pay attention: This may run contrary to every SEO-minded person who just unlearned the text tricks to make a page rise in Google ranks.

    News should include source quotes. Not every news story will; however, the most effective articles will have direct quotes from sources. As articles are shared and reposted, those quotes largely keep their pattern.

    Fake news features entirely original quotes, too.

    Quotes form a mitochondrial DNA for any news thread and topic -- and computers know how to track large chunks of copied text.

    When it comes to identifying fake news, track the quotes.

    Are the quotes posted on other trusted sites?
    Are the quotes original? If so, is the post on a trusted source?
    In the entire news thread featuring a particular quote, has no single, trusted source posted it?

    Quotes may be the muddy footprints that lead to the solution.



Ad ecosystem

  • Make Facebook disclose numbers from its ad views more regularly. They should also include the manner in which they are calculating those numbers, i.e. x = users who watched a video for 10 seconds or more. Make these particular processes auditable by third parties. --@RobynCaplan, @datasociety
  • Create public database of fake news websites to help advertisers to prevent their ad from showing on sites that can damage their reputation. --@filip_struharik

  • The list is created by an evaluation committee consisting of teachers, lecturers, journalists and marketing specialists

  • Advertisers can use this list to exclude fake news websites from their Google AdWords campaigns

  • Serve up ads from verified professional news organizations that display factual stories on the same topic. (They can do it with my shoe shopping, why can’t they do it with my news content?) Google is making this effort. --@janeeliz

 

  • Only problem is that news organizations are broke as it is  

Related:

161117 - NBC News
Google Think Tank launches new weapon in fight against ISIS


From the article:

“In traditional targeted advertising, a new mother searching for information on Google about getting a baby to sleep might start seeing ads for blankets and white-noise machines in their feeds.

Through Redirect, someone searching for details about life as an ISIS fighter might be offered links to independently produced videos that detail hardships and dangers instead of the stirring Madison Avenue-style propaganda the terror group puts online.”

  • Analyze the quality of the ads on the news site and create a trust metric. May also be able to create an ideology metric. The wider the reach of the advertising, the higher the trust metric. Check for spam advertising. - @iomcoi



More ideas…

Please note we are tagging contributions with certain [keywords] at this moment. As the document evolves, these will be transferred to specific topics already in place - @IntugGB [16 Dec 16]

  • Check for domain/brand spoofing and flag/blacklist when discovered. Some of the worst fake news examples I’ve seen have been fake sites masquerading as credible news sites, including images of mainstream news organization logos, and domains like “washingtonpost.com.co” and “espn.com-magazine.online” (those examples are from a post by Ev Williams on Medium). It should be possible to detect logos, and then verify that links actually go to the sites identified by the logos. Another domain-based approach would be to compile a whitelist of credible news sites (hundreds, maybe low thousands), and then look for variations of those URLs used in news links which are, according to WHOIS, owned by someone else. For instance, with the aforementioned “washingtonpost.com.co,” though the domain is dead now, you can see from archive.org and reddit that it was full of conspiracy garbage and unaffiliated with the Washington Post. That could have been detected algorithmically based on the domain and registry info. Interestingly, that domain was transferred to the real Washington Post company on 11/18--I think their lawyers have been at work :-) --John McGrath (@wordie) (+Tamar) [fake domains]
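    A standard-library sketch of that detection (the whitelist is illustrative): pass through the genuine domain and its real subdomains, then flag hosts that bury a whitelisted brand or sit one typo away from it:

    import difflib
    from urllib.parse import urlparse

    WHITELIST = {"washingtonpost.com", "espn.com", "nytimes.com"}

    def looks_spoofed(url):
        host = urlparse(url).hostname or ""
        for real in WHITELIST:
            if host == real or host.endswith("." + real):
                return False   # the genuine site or a real subdomain
        for real in WHITELIST:
            brand = real.split(".")[0]
            # brand buried in a foreign domain ("washingtonpost.com.co"),
            # or a near-miss spelling of the whole domain
            if brand in host or difflib.SequenceMatcher(None, host, real).ratio() > 0.8:
                return True
        return False

    # looks_spoofed("http://washingtonpost.com.co/article")    -> True
    # looks_spoofed("https://www.washingtonpost.com/politics") -> False

    A WHOIS cross-check (as in the domain-age sketch earlier in this document) could then confirm whether the registrant matches the real outlet.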

  • Work with existing structures, inside and outside of Facebook. Use already existing algorithms to filter, team up with credible sites, newspapers, independent fact-checkers. It’s in everyone’s interest.

  • A good model for fighting fake news is the war on spam. Spamhaus is a nonprofit that maintains a large blacklist of spammer IP addresses, which is used by most ISPs to block spam. A similar nonprofit (fakehaus?) could create a shared resource--a database of known fake sites and spoofed URLs--that could be used by Facebook, Twitter, and others. In general the security world is good inspiration--both blacklists and whitelists are useful to have in the toolbox (@wordie) [SPAM]