Media ReDesign[a]: The New Realities

Before we start …        8

Stay informed        8

Know who the key players are        9

Create alliances and partnerships        9

'Truth in Media' Ecosystem        9

Keep a tally of solutions - and mess ups        9

Delve into the Underworld        10

Be prepared to unlearn everything you know        11

Stay connected        11

A final word        12

Stay Informed        13

Background        13

Initiatives        13

Robert Reich: Inequality Media        13

Dan Rather: On journalism & finding the truth in the news        14

Web Literacy for Student Fact-Checkers        14

Syllabus: Social Media Literacies        14

Bulletin Board        16

Upcoming Events        16

Ongoing Initiatives        17

High-Level Group on Fake News and online disinformation  NEW        17

The News Literacy Project        17

Stony Brook’s Center for News Literacy        17

Past Events [Sample]        19

The Future of News in an Interconnected World        20

MisinfoCon: A Summit on Misinformation        20

Combating Fake News: An Agenda for Research and Action        21

Media Learning Seminar        21

The Future of News: Journalism in a Post-Truth Era        21

Dear President: What you need to know about race        22

Knight Foundation & Civic Hall Symposium on Tech, Politics, and the Media        22

Berkeley Institute for Data Science - UnFakingNews Working Group        22

In the News        24

Essential reads        24

Featured        25

General        25

Trump Presidency        26

International        27

Press Room: Research and articles related to this project        28

About Eli Pariser        32

Conferences        33

Related articles        33

Start of Document        35

Basic concepts        35

Definition of Fake News        37

Compilation of basic terms        37

Classifying fake news, fake media & fake sources        40

Considerations → Principles → The Institution of Socio-Economic Values        46

General Ideas        47

Behavioral economics and other disciplines        47

Human Editors        49

Under the Hood        52

Facebook        54

Analysis of Headlines and Content        59

Reputation systems        60

Verified sites        64

Distribution - Social Graph        68

Fact-Checking        70

Special databases        73

Interface - Design        74

Flags        75

Downrank, Suspension, Ban of accounts        77

Contrasting Narratives        78

Points, counterpoints and midpoints        79

Model the belief-in-true / belief-in-fake lifecycle        82

Verified Pages - Bundled news        83

Viral and Trending Stories        84

Patterns and other thoughts        86

Ad ecosystem        88

More ideas…        91

Factmata        99

FiB extension        99

Taboola and Outbrain involvement        99

WikiTribune        102

Verified Trail + Trust Rating        105

Bias Dynamics & Mental Models        108

Neuroscience of philosophical contentions        109

Thread by @thomasoduffy        109

Pattern & Patch Vulnerabilities to Fake News        111

The problem with “Fake News”        112

Surprising Validators        114

[Update] “A Cognitive Immune System for Social Media” based on “Augmenting the Wisdom of Crowds”        117

Snopes        119

Ideas in Spanish - Case Study: Mexico        122

Not just for Facebook to Solve        124

A Citizen Science (Crowd Work) Approach        128

The Old School Approach - Pay to Support Real Journalism        133

Critical Thinking        134

Media Literacy        135

Programs        136

Suggestions from a Trump Supporter on the other side of the cultural divide        138

Ethics of Fake News        144

Your mind is your reality        146

Who decides what is fake?        146

Journalism in an era of private realities        147

Propaganda        147

Scientific Method        147

Alternate realities        148

Legal aspects        151

First Amendment/ Censorship Issues        151

Espionage Act        152

Copyright        153

Trademarks        153

Libel        153

Online Abuse        156

Harassment        156

Trolling and Targeted Attacks        156

Threats        156

Hate Speech        156

Specialized Agencies        158

Rating Agencies        158

Media Governing Body        160

Specialized Models        162

Create a SITA for news        162

Outlier analysis        163

Transparency at user engagement        163

“FourSquare” for News        164

Delay revenue realisation for unverified news sources        164

Other ideas        164

Linked-Data, Ontologies and Verifiable Claims        168

Reading Corner - Resources        172

Social Networks        172

Facebook        172

Twitter        173

Google        173

Filter Bubbles        173

Automated Systems        175

Algorithms        175

User Generated Content        175

Dynamics of Fake Stories        177

Joined-up Thinking - Groupthink        177

Manipulation and ‘Weaponization’ of Data        177

Click Farms - Targeted attacks        178

Propaganda        178

Viral content        179

Satire        179

Behavioral Economics        181

Political Tribalism - Partisanship        182

Ethical Design        182

Journalism in the age of Trump        183

Cultural Divide        184

Cybersecurity        184

Experiences from Abroad        184

Media Literacy        184

Resources list for startups in this space        186

Interested in founding        186

Interested in advising        188

Interested in partnering        188

Interested in investing/funding        190

A Special Thank You        192

ANNEX        195

Themes and keywords to look for        195

Hashtags        200

Fact Checking Guides        201

Selected reads        201

Russian interference        202


Note: Hi, I am Eli, I started this document. Add idea below after a bullet point, preferably with attribution. Add +[your name] (+Eli) if you like someone else’s note. Bold key phrases if you can. Feel free to crib with attribution.

A number of the ideas below have significant flaws. It’s not a simple problem to solve -- some of the things that would pull down false news would also pull down news in  general. But we’re in brainstorm mode.

  • November 17, 2016

 



This document is maintained by @Media_ReDesign, with updates from an extraordinary community of collaborators spanning many disciplines. Some topics are under the supervision of specific teams, as is the case with news updates and the Event Calendar (partly updated, linked as a reference, for ideas). All the same, please feel free to contribute your ideas as we expand and continue on this journey.




Pale Blue Dot - Carl Sagan

 

      “None of us is as smart as all of us” ― Kenneth H. Blanchard


Before we start …

 

Stay informed

Dozens of articles and studies are being published daily on the topic of ‘fake news’. The more we know about what is going on - all the different angles, implications, etc. - the better off we are.

Know who the key players are


Technologists, journalists, politicians, academics, think tanks, librarians, advocacy organizations and associations, regulatory agencies, corporations, cybersecurity experts, military, celebrities, regular folk... all have a vested interest in this topic. Each can give a different perspective.
 

Throughout the document, you will see some symbols, simply pointers alongside @names, to serve as guides:

verified account     key contact       collaborator  

Create alliances and partnerships

See what has been done or published that could serve as a blueprint going forward. Mentioned in this work, for example, is a special manual set up by leading journalists from the BBC, Storyful, ABC, Digital First Media and other verification experts - four contacts there alone who might be interested in this project.

Related research:

'Truth in Media' Ecosystem

Organizations and individuals important to the topic of fake news and its solutions
Work in progress - Contributions and suggestions welcome

Keep a tally of solutions - and mess ups

Aside from this, what else has been implemented by Google, Facebook, Twitter and other organizations? How have people reacted? What are they suggesting? How is the media covering this? Have there been any critical turning points? The bots -- so in the news nowadays -- how are they being analyzed, dealt with? What has been the experience with them… abroad?

So many questions...

Statement by Mark Zuckerberg 

November 19, 2016

Delve into the Underworld


One can’t assume that there is criminal intent behind every story but, when up against state actors, click farms and armies of workers hired for specific ‘gigs’, it helps to know exactly how they operate. In any realm - ISIS, prostitution networks, illegal drugs, etc. - these actors are experts on these platforms.

Recommended

Future Crimes  by Marc Goodman @FutureCrimes 

The Kremlin Handbook - October 2016
Understanding Russian Influence in Central and Eastern Europe

Cybersecurity Summit Stanford
Munich Security Conference -
Agenda

21 September 2016

Panel Discussion:
“Going Dark: Shedding light on terrorist and criminal use of the internet” [1:29:12]

Gregory Brower (Deputy General Counsel, FBI), Martin Hellman (Professor Emeritus of Electrical Engineering, Stanford University), Joëlle Jenny (Senior Advisor to the Secretary General, European External Action Service), Joseph P. McGee (Deputy Commander for Operations, United States Army Cyber Command), Peter Neumann (Director, International Centre for the Study of Radicalisation, King's College London), Frédérick Douzet (Professor, French Institute of Geopolitics, University of Paris 8; Chairwoman, Castex Chair in Cyber Strategy; mod.)

Related


170212 - Medium
The rise of the weaponized AI Propaganda machine  There’s a new automated propaganda machine driving global politics. How it works and what it will mean for the future of democracy.


Signal

Install it. Just because.


160622 - The Intercept
Battle of the secure messaging apps: How Signal beats WhatsApp

Be prepared to unlearn everything you know


For years, other countries have dealt with issues of censorship, propaganda, etc. It is useful to understand what has happened, to see what elements of their experience we can learn from. Case studies, debates, government interventions, reasoning, legislation, everything helps.

Essential here - insights from individuals who have experienced it and understand the local language and ideology.


Note. This also means learning from the “natives”, the ones born with - you know - a chip in their brain.

Stay connected

Please join us on Twitter at @Media_ReDesign and on Facebook for the latest news updates.

A Slack team (group messaging) pertaining to a number of related projects is available for those who wish to connect or do further research on the topic. You can sign up here. For those not familiar, two introductory videos are available, one showing how it can be used in teams, the other describing the platform itself.

Twelve channels [update pending] have been created so far. Click on the CHANNEL heading to expand the different categories and click on any you want to join. Clicking on DIRECT MESSAGES allows you to contact all members who are currently on there; the full “Team Directory” is accessible through the menu.

A final word

Before starting, go through the document quickly to get a sense of the many areas of discussion that are developing. Preliminary topics and suggestions are being put in place but many other great ideas appear further on, unclassified for the time being while the team gets to them.

This is a massive endeavour but well worth it. Godspeed.


Stay Informed  

Background

A superb summary dealing with fake news is being written up over at Wikipedia. With almost 185 sources to date [August 28, 2018], it gives an overview of the issues, starting with a detailed look at prominent sources, going on to impact by country, responses on the part of industry players and, finally, academic analysis.

As a starting point, and perhaps a guideline to better structure the document going forward, it is highly recommended. - @linmart  [26 DEC 16]

Initiatives

Robert Reich: Inequality Media


After years of collaboration, Jacob Kornbluth @JacobKornbluth worked with Robert Reich @RBReich to create the feature film Inequality for All. The film was released into 270 theaters in 2013 and won the U.S. Documentary Special Jury Award for Achievement in Filmmaking at the Sundance Film Festival.  Building off this momentum, Kornbluth and Reich founded Inequality Media in 2014 to continue the conversation about inequality with viewers.

Inequality for All: Website - FB /InequalityForAll - @InequalityFilm - Trailer

Inequality Media:  Website - FB /InequalityMedia - @InequalityMedia 

Robert Reich: LinkedIn -  FB /RBReich  FB Videos - @RBReich



Kickstarter Campaign
How does change happen? We go on a trip with Robert Reich outside the “bubble” to reach folks in the heartland of America to find out.

3,790 backers pledged $298,436 to help bring this project to life.

Saving Capitalism: For the Many, Not for the Few  
#SavingCapitalism @SavingCapitalism

Perhaps no one is better acquainted with the intersection of economics and politics than Robert B. Reich, and now he reveals how power and influence have created a new American oligarchy, a shrinking middle class, and the greatest income inequality and wealth disparity in eighty years. He makes clear how centrally problematic our veneration of the free market is, and how it has masked the power of moneyed interests to tilt the market to their benefit.

… Passionate yet practical, sweeping yet exactingly argued, Saving Capitalism is a revelatory indictment of our economic status quo and an empowering call to civic action.”


As featured in:

160120 - Inc
5 Books that billionaires don't want you to read



Dan Rather: On journalism & finding the truth in the news

Learn to ask the right questions & tell captivating stories. Practical advice for journalists & avid news consumers.

 



Web Literacy for Student Fact-Checkers 

by Mike Caulfield @holden

Work in progress but already excellent. Recommended by @DanGillmor

Syllabus: Social Media Literacies

Instructor: Howard Rheingold - Stanford Winter Quarter 2013


Andrew Wilson, Virtual Politics: Faking Democracy in a Post-Soviet World (Yale University Press, 2005)

This seminal work by one of the world’s leading scholars in the field of “political technology” is a must-read for anyone interested in how the world of Russian propaganda and the political technology industry works and how it impacts geopolitics. It has received wide critical acclaim and offers unparalleled insights. Many of the names seen in connection with both Trump’s business dealings and the Russian propaganda apparatus appear in Wilson’s work.


Bulletin Board

Upcoming Events

Event Calendar: Trust, verification, & beyond 
Full listing of events curated by the @MisinfoCon community

There are a lot of conversations happening right now about misinformation, disinformation, rumours, and so-called "fake news."

The event listing here is an attempt to catalogue when and where those conversations are happening, and to provide links to follow-up material from those conversations. You can help out by filling in the blanks: What's missing?



Latest event update: May 29, 2018


Ongoing Initiatives

European Commission - Digital Single Market

High-Level Group on Fake News and online disinformation  NEW

Colloquium on Fake News and Disinformation Online
27 February 2018
2nd Multistakeholder Meeting on Fake News
Webcast   RECORDED

The News Literacy Project 

@NewsLitProject

A national educational program that mobilizes seasoned journalists to help students sort fact from fiction in the digital age.

Stony Brook’s Center for News Literacy 

Hosted on the online learning platform Coursera, the course will help students develop the critical thinking skills needed to judge the reliability of information no matter where they find it — on social media, the internet, TV, radio and newspapers.

Each week will tackle a challenge unique to the digital era:

Week 1:        The power of information is now in the hands of consumers

Week 2:        What makes journalism different from other types of information

Week 3:        Where can we find trustworthy information

Week 4:        How to tell what’s fair and what’s biased

Week 5:        How to apply news literacy concepts in real life

Week 6:        Meeting the challenges of digital citizenship

The course is free, but people can opt to pay $49 and do the readings and quizzes (which are otherwise optional) and, if they pass muster, end up with a certificate.



Past Events [Sample]

The Center for Contemporary Critical Thought - Digital Initiative

Tracing Personal Data Use

April 13-14, 2017

Cambridge Analytica: Tracing Personal Data
(from ethical lapses to its use in electoral campaigns)

Thursday, April 13, 2017
11:00am
East Gallery, Maison Francaise
Columbia University

Speaker:        Paul-Olivier Dehaye @podehaye with Tamsin Shaw

Respondent:        Cathy O'Neil @mathbabedotorg
Moderated by:        Professor Michael Harris

Find out more

International Fact Checking Day

April 2nd, 2017

International Fact-Checking Day will be held on April 2, 2017, with the cooperation of dozens of fact-checking organizations around the world. Organized by the International Fact-Checking Network, it will be hosted digitally on www.factcheckingday.com. The main components of our initiative will be:

  1. A lesson plan on fact-checking for high school teachers.
  2. A factcheckathon exhorting readers to flag fake stories on Facebook.
  3. A “hoax-off” among top debunked claims.
  4. A map of global activities.

If you are interested in finding out more/participating, reach out to factchecknet@poynter.org


The Future of News in an Interconnected World

01 Mar 2017
12:30 - 15:00
European Parliament, Room P5B00


Independent journalism is under pressure as a result of financial constraints. Local media is barely surviving and free online content is sprawling. On social media platforms that are built for maximum profit, sensational stories easily go viral, even if they are not true. Propaganda is at an all-time high and personalised newsfeeds result in filter bubbles, which has a direct impact on the state of democracy. These are just some of the issues that will be examined in this seminar, as we explore how journalists and companies see their position and the role of social media and technology.

MisinfoCon: A Summit on Misinformation

Feb 24 - 27, 2017
Cambridge, MA

A summit to seek solutions - both social and technological - to the issue of misinformation. Hosted by The First Draft Coalition @firstdraftnews and the Nieman Foundation for Journalism.

Combating Fake News: An Agenda for Research and Action

February 17, 2017 - 9:00 am - 5:00 pm
Harvard Law School
Wasserstein Hall 1585
Massachusetts Ave, Cambridge, MA 02138

Full programme. Follow #FakeNewsSci on Twitter.

Write up:

170217 - Medium
Countering Fake News

Media Learning Seminar 

February 13 - 14, 2017

What do informed and engaged communities look like today?

Find videos of the discussion here or access comments on Twitter via #infoneeds

The Future of News: Journalism in a Post-Truth Era

Tuesday, Jan 31, 2017

4:00 - 6:00 pm EST

Sanders Theatre, Harvard University

Co-sponsored by the Office of the President, the Nieman Foundation for Journalism, and the Shorenstein Center on Media, Politics, and Public Policy

Speakers include: Gerard Baker, editor-in-chief of The Wall Street Journal; Lydia Polgreen, editor-in-chief of The Huffington Post; and David Leonhardt, an op-ed columnist at The New York Times

Full Programme

Video coverage of the event is available.


170201 - NiemanLab
The boundaries of journalism — and who gets to make it, consume it, and criticize it — are expanding
Reporters and editors from prominent news organizations waded through the challenges (new and old) of reporting in the current political climate during a Harvard University event on Tuesday night.

Dear President: What you need to know about race 

Jan 27, 2017 - 2:30 pm – 4 pm

Newark Public Library, Newark, NJ.
Community conversation hosted by Free Press News Voices: New Jersey
Via Craig Aaron @notaaroncraig, President and CEO of Free Press.

Knight Foundation & Civic Hall Symposium on Tech, Politics, and the Media

Jan 18, 2017 - 8:30 am - 6:00 pm

New York Public Library

5th Ave at 42nd St, Salomon Room

Berkeley Institute for Data Science
UnFakingNews Working Group

Meeting Monday, January 9, 5-7pm -- 190 Doe Library

A group of computer scientists, librarians, and social scientists supporting an ecosystem of solutions to the problem of low quality information in media. For more information, contact nickbadams@berkeley.edu



In the News


Essential reads


170212 - Medium
The rise of the weaponized AI Propaganda machine 

There’s a new automated propaganda machine driving global politics. How it works and what it will mean for the future of democracy.


170127 - IFLA
Alternative Facts and Fake News – Verifiability in the Information Society

161228 - MediaShift
How to fight fake news and misinformation? Research helps point the way

161122 - NYT Magazine

Is social media disconnecting us from the big picture? 
By Jenna Wortham @jennydeluxe 

161118 Nieman Lab
Obama: New media has created a world where “everything is true and nothing is true” 

By Joseph Lichterman @ylichterman √  

161118 - Medium
A call for cooperation against fake news 

by @JeffJarvis 
@BuzzMachine blogger and j-school prof; author of Public Parts, What Would Google Do?

161116 - CNET
Maybe Facebook, Google just need to stop calling fake news 'news' 
by Connie Guglielmo @techledes 
Commentary: The internet has a problem with fake news. Here's an easy fix.

Featured


Knight Foundation - Civic Hall Symposium on Tech, Politics and Media 
Agenda and Speakers
New York Public Library -
January 18, 2017

2017
Ethical Journalism Network Ethics in the News [PDF]
EJN Report on the challenges for journalism in the post-truth era

 

161219 - First Draft News
Creating a Trust Toolkit for journalism

Over the last decade newsrooms have spent a lot of time building their digital toolbox. But today we need a new toolbox for building trust

170114 - Huffington Post
Why do people believe in fake news?

160427 - Thrive Global
12 Ways to break your filter bubble

General

161211 - NPR
A finder's guide to facts

Behind the fake news crisis lies what's perhaps a larger problem: Many Americans doubt what governments or authorities tell them, and also dismiss real news from traditional sources. But we've got tips to sharpen our skepticism.


Web Literacy for Student Fact-Checkers by Mike Caulfield @holden

Work in progress but already excellent. Recommended by @DanGillmor

161209 - The Guardian
Opinion: Stop worrying about fake news. What comes next will be much worse

By Jonathan Albright @d1gi, professor at Elon University in North Carolina, expert in data journalism
In the not too distant future, technology giants will decide what news sources we are allowed to consult, and alternative voices will be silenced

161128 - Fortune
What a map of the fake-news ecosystem says about the problem

By Mathew Ingram @mathewi, Senior Writer at Fortune

Jonathan Albright’s work arguably provides a scientifically-based overview of the supply chain underneath that distribution system. That could help determine who the largest players are and what their purpose is.

161128 - Digiday
‘The underbelly of the internet’: How content ad networks fund fake news
Forces work in favor of sketchy sites. As ad buying has become more automated, with targeting based on audience over site environment, ads can end up in places the advertiser didn’t intend, even if they put safeguards in place.

161125 - BPS Research Digest
Why are some of us better at handling contradictory information than others?

Trump Presidency


'Alternative Facts': how do you cover powerful people who lie?
A collaborative initiative headed by Alan Rusbridger @arusbridger, ex-editor of The Guardian, Rasmus Kleis Nielsen @rasmus_kleis & Heidi T. Skjeseth @heidits. View only.

170216 - Politico
How a Politico reporter helped bring down Trump’s Labor Secretary pick

"This was the most challenging story I’ve ever done. But it taught me that with dedication and persistence, and trying every avenue no matter how unlikely, stories that seem impossible can be found in the strangest of ways." - Marianne LeVine


Reuters
Covering Trump the Reuters Way
Reuters Editor-in-Chief Steve Adler


170115 - Washington Post
A hellscape of lies and distorted reality awaits journalists covering President Trump
Journalists are in for the fight of their lives. They will need to work together, be prepared for legal persecution, toughen up for punishing attacks and figure out new ways to uncover and present the truth. Even so — if the past really is prologue — that may not be enough.


Dec 2016 - Nieman Lab
Feeling blue in a red state

I hope the left-leaning elements of journalism (of which I would be a card-carrying member if we actually printed cards) take a minute for reflection before moving onto blaming only fake news and Russian hacking for the rise of Trump.

161111 - Medium
What’s missing from the Trump Election equation? Let’s start with military-grade PsyOps
Too many post-election Trump think pieces are trying to look through the “Facebook filter” peephole, instead of the other way around. So, let’s turn the filter inside out and see what falls out.

161109 - NYMag
Donald Trump won because of Facebook

Social media overturned the political order, and this is only the beginning.

International

170113 - The Guardian
UK media chiefs called in by minister for talks on fake news

Matt Hancock, the minister of state for digital and culture policy, has asked UK newspaper industry representatives to join round-table discussions on the issue of fake news.

170107 - The Guardian
German police quash Breitbart story of mob setting fire to Dortmund church

170105 - Taylor Francis Online
Russia’s strategy for influence through public diplomacy and active measures: the Swedish case
Via Patrick Tucker @DefTechPat, tech editor at @DefenseOne

   161224 - The Times of Israel
Pakistan makes nuclear threat to Israel, in response to fake news


161215 - The Guardian
Opinion: Truth is a lost game in Turkey. Don’t let the same thing happen to you
We in Turkey found, as you in Europe and the US are now finding, that the new truth-building process does not require facts. But we learned it too late

161223 - The Wire
The risks of India ignoring the global fake news debate

A tectonic shift in the powers of the internet might be underway as you read this. 

161123 - Naked Security
Fake news still rattling cages, from Facebook to Google to China

Chinese political and business leaders speaking at the World Internet Conference last week used the spread of fake news, along with activists’ ability to organize online, as signs that cyberspace has become treacherous and needs to be controlled.



161220 - NYT
Russian hackers stole millions a day with bots and fake sites

A criminal ring is diverting as much as $5 million in advertising revenue a day in a scheme to show video ads to phantom internet users.


160418 - Politico
Putin's war of smoke and mirrors
We are sleepwalking through the end of our era of peace. It is time to wake up.

Press Room: Research and articles related to this project

'Alternative Facts': how do you cover powerful people who lie?      
A collaborative project headed by Alan Rusbridger @arusbridger, ex-editor of The Guardian, Rasmus Kleis Nielsen @rasmus_kleis & Heidi T. Skjeseth @heidits

170207 - Bill Moyers
Your guide to the sprawling new Anti-Trump Resistance Movement

170203 - Mashable
Google Docs: A modern tool of powerful resistance in Trump's America

How fake news sparked a political Google Doc movement

170108 - The Guardian
Eli Pariser: activist whose filter bubble warnings presaged Trump and Brexit

“The more you look at it, the more complicated it gets,” he says, when asked whether he thinks Facebook’s plan will solve the problem. “It’s a whole set of problems; things that are deliberately false designed for political ends, things that are very slanted and misleading but not false; memes that are neither false nor true per se, but create a negative or incorrect impression. A lot of content has no factual content you could check. It’s opinion presented as fact.”


Fake news has exposed a deeper problem – what Pariser calls a “crisis of authority”.

“For better and for worse, authority and the ability to publish or broadcast went hand in hand. Now we are moving into this world where in a way every Facebook link looks like every other Facebook link and every Twitter link looks like every other Twitter link, and the new platforms have not figured out what their theory of authority is.”


161223 - The Wire
The risks of India ignoring the global fake news debate

A tectonic shift in the powers of the internet might be underway as you read this. 

161215 - Washington Post
Fake news is sickening. But don’t make the cure worse than the disease.

161215 - USA Today
Fake-news fighters enter breach left by Facebook, Google
A cottage industry of fake-news fighters springs up as big platforms move slowly to roll out fixes.

161206 - Digital Trends
Forget Facebook and Google, burst your own filter bubble 

161130 - First Draft News
Timeline: Key moments in the fake news debate

161129 - The Guardian
How to solve Facebook's fake news problem: experts pitch their ideas

161127 - Forbes
Eli Pariser's Crowdsourced Brain Trust is tackling fake news 

Upworthy co-founder and hundreds of collaborators gather the big answers

161125 - Wired
Hive Mind Assemble 
by Matt Burgess @mattburgess1 
Upworthy co-founder Eli Pariser is leading a group of volunteers to try to find a way to determine whether news online is real or not

161119 - Quartz
Facebook’s moves to stamp out “fake news” will solve only a small part of the problem

161118 - CNET
The internet is crowdsourcing ways to drain the fake news swamp
Pundits and even President Obama are bemoaning fake news stories that appeared online leading up to the election. A solution might be found in an open Google Doc.

161116 - The Verge
The author of The Filter Bubble on how fake news is eroding trust in journalism

‘Grappling with what it means to look at the world through these lenses is really important to us as a society’

161115 - Digiday [Podcast 23:12] 
Nieman’s Joshua Benton: Facebook has ‘weaponized’ the filter bubble

161109 - Nieman Lab
The forces that drove this election’s media failure are likely to get worse

By Joshua Benton @jbenton 

Segregated social universes, an industry moving from red states to the coasts, and mass media’s revenue decline: The disconnect between two realities shows no sign of abating.

[Press Room - Full Archive]




About Eli Pariser


Eli is an early online organizer and the author of
The Filter Bubble, published by Penguin Press in May 2011.

Shortly after the September 11th terror attacks, Eli created a website calling for a multilateral approach to fighting terrorism. In the following weeks, over half a million people from 192 countries signed on, and Eli rather unexpectedly became an online organizer.

The website merged with MoveOn.org in November of 2001, and Eli -- then 20 years old -- joined the group to direct its foreign policy campaigns. He led what the New York Times Magazine termed the “mainstream arm of the peace movement,” tripling MoveOn’s member base in the process, demonstrating for the first time that large numbers of small donations could be mobilized through online engagement, and developing many of the practices that are now standard in the field of online organizing.

In 2004, Eli co-created the Bush in 30 Seconds online ad contest, the first of its kind, and became Executive Director of MoveOn. Under his leadership, MoveOn.org Political Action has grown to five million members and raised over $120 million from millions of small donors to support advocacy campaigns and political candidates, helping Democrats reclaim the House and Senate in 2006.

Eli focused MoveOn on online-to-offline organizing, developing phone-banking tools and precinct programs in 2004 and 2006 that laid the groundwork for Barack Obama’s remarkable campaign. MoveOn was one of the first major progressive organizations to endorse Obama for President in the presidential primary.

In 2008, Eli transitioned the Executive Director role at MoveOn to Justin Ruben and became President of MoveOn’s board.

Eli grew up in Lincolnville, Maine, and graduated summa cum laude in 2000 with a B.A. in Law, Politics, and Society from Bard College at Simon's Rock. He is currently serving as the CEO of Upworthy and lives in Brooklyn, NY.

Contact: @elipariser 

Conferences

Combating Fake News: An Agenda for Research and Action 

February 17, 2017 - 9:00 am - 5:00 pm
Full programme - #FakeNewsSci on Twitter


Related articles

170214 - Forbes
Political issues take center stage at SXSW

170205 - The College Reporter
Workshop provides students with knowledge pertaining to fake news



170207 - Backchannel
Politics have turned Facebook into a steaming cauldron of hate

170201 - Triple Pundit
Upworthy and GOOD announce merger, join forces to become the leader in Social Good Media


170127 - Observer
These books explain the media nightmare we are supposedly living in


170118 - OpenDemocracy.net
The internet can spread hate, but it can also help to tackle it

161216 - NPR TED Radio Hour
How can we look past (or see beyond) our digital filters?

161122 - NYT Magazine

Is social media disconnecting us from the big picture? 
By Jenna Wortham
@jennydeluxe 

161112 - Medium
How we broke democracy

Our technology has changed this election, and is now undermining our ability to empathize with each other

1108 - TED Talks
Eli Pariser: Beware online "Filter Bubbles"

110525 - Huffington Post
Facebook, Google giving us information junk food, Eli Pariser warns


0305 - Mother Jones
Virtual Peacenik


030309 - NYT Magazine
Smart-mobbing the war

[Eli Pariser - Full Archive]


Start of Document 

Basic concepts

  • Define concepts clearly. Is the “fake” / “true” dichotomy the best approach here? There can be multiple dimensions to help inform people: “serious / satire,” “factual / opinion,” “factually true / factually untrue,” “original source / meta-commentary on the source,” etc. +Kyuubi10

  • To expand on the previous concept… This is more of a question to keep in mind rather than a solution, but I believe it is important nonetheless:

    How should fact-checking be done? How can we confirm that the “powers that be” are not messing with the systems created? How do we avoid human bias in fact-checking? How do we avoid censorship efforts that would use our systems to censor content, or our systems being used to promote propaganda? --Kyuubi10 (a minimal aggregation sketch follows this list)

  • Not only define or suggest terms, but perhaps try to express what people think the problem is (i.e. Is it that people are misled, that political outcomes are being undemocratically influenced, that civil discussion is being undermined and polarized?) There may be many problems people in this discussion have in mind, implicitly or explicitly, and to discuss solutions it's important to agree on what is the problem (or aspect of it) being addressed. [@tmccormick / @DiffrMedia] +@IntugGB 

  • Note [30 Nov 2016] @tmccormick: it seems there are a few overlapping problems, and varying definitions, in this discussion. “Fake news” is used variously to mean deliberately false news; false but not necessarily deliberate; propaganda, as in information created to influence, which may be false or not; or information that is biased or misleading. Also, in some cases we aren’t talking about ‘news’ at all - for example false reviews, or false or disputed health/medical information (the anti-vaxxer issue, e.g.).  - Note [15 Dec 2016] @linmart: Add false equivalencies: a debate where information 99% verified is set up in equal standing with a 1% view.

    Many parts of this discussion concern issues of confirmation bias and the polarization of opinion groups. This intersects with “fake news” topic because it’s one reason people create, share, and accept misinformation, but it is really a broader issue, in that it describes how we tend to form and maintain, narrow or broaden, our views in general.

    Related to polarization, there is another lens on this topic area, which is trust - trust in media institutions, or in civic institutions generally. Trustworthiness is not the same as truthfulness: we may have degrees of trust in opinion, analysis/interpretation, or prediction, none of which reduce to true or not. Trust is driven by many factors besides news truthfulness, and if the public does not trust or support media organizations, the truth of those orgs’ news doesn’t really matter. (This is generally the angle of the Trust Project, one of the biggest existing media collaborations in this field.)

    I note these differing lenses/problems because I am hoping this project will remain an open network for different issues and projects to intersect. The better it maps and organizes the core issues, considering these different points of view, the better it can be a useful hub for many interested contributors and organizations to learn from and complement each other’s work.

    << I agree. The Onion may be satire, but it communicates sharp societal critiques rather effectively. Same with The Daily Show, et al. +Diane R

  • Backfire Effect.  There is little hope of fighting misinformation with simple corrective information.  According to a study by Nyhan & Reifler (Political Behavior, 2010, vol. 32; draft manuscript version here) “corrections frequently fail to reduce misperceptions of the targeted ideological group.”  In some cases there is a “backfire effect” (a.k.a. boomerang effect) in which corrections actually strengthen the mistaken beliefs.  This is especially true when the misinformation aligns with the ideological beliefs of the audience.  More here and here.  This suggests that corrective information strategies are only likely to be successful with audiences that are not already predisposed to believe the misinformation.

  • Differentiate between sharing ‘personal information’ and ‘news articles’ on social media - the current ‘share’ button for both is unhelpful.

  • Identify the emotional content of fake news. The public falls for fake news that validates their feelings (and their defenses against unwanted feelings). Respond to the emotional content of fake news.

  • Wondering if it might be necessary to open up a new area within this study: fake reviews. Even if they are just a joke, they can change public perception and who knows what else. Thinking companies, writers, politicians...

    State sponsored, click farms, comedy writers, no idea.  - @linmart
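
As a thought experiment on the fact-checking questions above: one way to blunt single-source bias is to act only when several independent fact-checking sources agree, and to surface disagreement instead of suppressing it. The minimal sketch below assumes hypothetical per-source verdicts and an arbitrary 75% quorum; the function name and labels are illustrative, not any real system’s API.

# Minimal sketch, assuming hypothetical per-source verdicts.
from collections import Counter

def aggregate_verdicts(verdicts, quorum=0.75):
    """verdicts: list of labels such as 'true', 'false', 'unverified',
    one per independent fact-checking source."""
    if not verdicts:
        return "unverified"
    counts = Counter(verdicts)
    label, votes = counts.most_common(1)[0]
    # Requiring a supermajority across independent sources makes it
    # harder for any single "power that be" to tip the result alone.
    return label if votes / len(verdicts) >= quorum else "disputed"

print(aggregate_verdicts(["false", "false", "false", "unverified"]))  # -> false
print(aggregate_verdicts(["false", "true", "unverified"]))            # -> disputed

The design choice matters: a story that fails to reach quorum is labelled "disputed" and shown as contested, rather than removed, which speaks to the censorship concern raised above.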


 Definition of Fake News


At this time, there does not appear to be a definition of “fake news” in this document.

  • If readers/contributors do a Ctrl+F for the words “define” and “definition,” they will not find a definition of “fake news” in this document. The phrase “fake news” is used over 150 times without a definition.  

  • It seems that a handful of clear cut examples of fake news have been recycled over and over again since the election, but examples are not definitions. Examples do not tell us what fake news is. Establishing a working definition might be preferable to proceeding with work with no definition.

I’d like to see the term, ‘Fake News’ retired. If it’s fake, it’s not news. It’s lies, inventions, falsehoods, fantasy or propaganda. A humorist would call it, “made up shit”. 

Compilation of basic terms

Possible areas of research and working definitions (an annotation sketch follows this list): 

- fake / true
- fake vs. fraudulent
- factual / opinion
- factually true / factually untrue

- logically sound / flawed
- original source / meta-commentary on the source

- user generated content

- personal information

- news

- paid content

- commercial clickbait
- gaming system purely for profit

Motive:
- prank / joke
- to drive followers/likes
- create panic
- brainwashing / programming / deprogramming
- state-sponsored (external / internal)
- propaganda
- pushing agenda

- money

- local ideology

- local norms and legislation - restrictions and censorship (i.e. Thailand, Singapore, China)


- fake accounts
- fake reviews
- fake followers
- click farms
- patterns

- satire

- bias
- misinformation
- disinformation

- libel


- organic / non organic

- viral
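
To make these working definitions concrete, one option is to treat them as independent axes of a single annotation rather than a lone fake/true flag. The sketch below is purely illustrative; every field name and value is an assumption, not an established standard.

# Illustrative sketch: the axes mirror the working definitions above.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class StoryAnnotation:
    veracity: Optional[str] = None       # "factually true" / "factually untrue"
    register: Optional[str] = None       # "serious" / "satire"
    mode: Optional[str] = None           # "factual" / "opinion" / "meta-commentary"
    motive: List[str] = field(default_factory=list)  # e.g. ["propaganda", "money"]
    amplification: Optional[str] = None  # "organic" / "non organic" (bots, click farms)

# A hoax pushed by a click farm scores differently on each axis:
example = StoryAnnotation(
    veracity="factually untrue",
    register="serious",
    mode="factual",
    motive=["state-sponsored", "propaganda"],
    amplification="non organic",
)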




Further reference:

161128 - The New York Times 

News outlets rethink usage of the term ‘alt-right’ 

via Ned Resnikoff @resnikoff  Senior Editor, @thinkprogress 


161122 - Nieman Lab 
“No one ever corrected themselves on the basis of what we wrote”: A look at European fact-checking sites

161122 - Medium 
Fake news is not the only problem 

By @gilgul Chief Data Scientist @betaworks 


Classifying fake news, fake media & fake sources

Thread by @thomasoduffy 

There are different kinds of “fake”, all of which need to be managed or mitigated.  Cumulatively, these fake signals find their way into information in all kinds of places and inform people.  We need to build a lexicon to classify and conceptualise these aspects so we can think about them clearly (a classification sketch follows this thread):

  • Fake article / story:  For example, a fabricated story, based on made-up information, presented as true.
  • Fake reference:  A not-intentionally fake article that cites a fake source
  • Fake meme:  A popular media-type for viral syndication, usually comprising an image and a quote.  In this case, one that contains false/fake information.
  • Fake personality: A person controlling a social profile who pretends to be who they are not, unbeknownst to the public.  E.g. a troll pretending to be a celebrity
  • Fake representative: A person who falsely claims to represent an organisation, sometimes for the purposes of getting attention, sometimes for the purposes of discrediting that organisation.
  • Fake social page: A social page claiming to or portraying itself as officially representing a person/brand/organisation that has no basis
  • Fake website: A whole website that purports to be what it is not, with content that might be cited in topics of interest.
  • Fake reviews:  Reviews, be they published online or within a review section on an ecommerce site, that are incentivised or intentionally biased - such that, if an honest person understood how the review came to be written, that person would mind.  Arguably, this also applies to product placement or native advertising that is not disclosed clearly.
  • Fake portrayal:  As video becomes a primary way by which information is transmitted, in any situation where a person is behaving as an actor, to communicate something they don’t hold to be true, and are not doing this purely for entertainment, this could be described as a “fake portrayal”.  For example, if a voice-over artist reads a script for a brand knowing it to be false but uses their skills to present that compellingly, the output is a kind of fake-media.  For example, if a celebrity fitness model showcases a lifestyle using a product they don’t habitually consume as may be inferred by an ordinary person watching the show or advert, this is a kind of fake-media that ought to be limited.

  • Half-Truth: This is most common in reporting. Half-truths are a mostly deliberate attempt to mislead an audience using the truth, and/or intentionally leaving out facts or part of the story, usually in an attempt to control a narrative. A more political yet practical example of a half-truth is when Bill Clinton claimed to “not have sexual relations with that woman”. He used a different definition of sexual relations so that, when the court did call him out, he could claim via a technicality that his statement was correct. Within the context of news, half-truths should be seen as ethically on par with, or possibly more severe than, deliberately fake news.


To some extent, it is worth decoding the strategies used by lobbyists, spin doctors, marketing agencies and PR companies - and considering what measures could limit their ability to syndicate information of “warped accuracy” and counter intentionally fake news.
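
A minimal sketch of the lexicon idea above, expressed as an enumeration. The category names simply paraphrase the bullets in this thread and carry no official status.

# Sketch only: names paraphrase the thread above.
from enum import Enum, auto

class FakeType(Enum):
    FAKE_ARTICLE = auto()         # fabricated story presented as true
    FAKE_REFERENCE = auto()       # honest article citing a fake source
    FAKE_MEME = auto()            # viral image + quote carrying false information
    FAKE_PERSONALITY = auto()     # profile impersonating someone else
    FAKE_REPRESENTATIVE = auto()  # falsely claims to speak for an organisation
    FAKE_SOCIAL_PAGE = auto()     # unofficial page posing as official
    FAKE_WEBSITE = auto()         # whole site purporting to be what it is not
    FAKE_REVIEW = auto()          # incentivised or intentionally biased review
    FAKE_PORTRAYAL = auto()       # actor presenting a claim they know is false
    HALF_TRUTH = auto()           # selective facts arranged to mislead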

  • How would the above be “verified” as fakes? In order to avoid censorship, the process of verifying fakes must be much stricter than the one used to verify “facts”.
    I believe that bias and propaganda would easily fall within the realm of fake (or rather, should fall within it), but this also means that the size of the structures you might be fighting against will make this a hard fight. -- Kyuubi10

  • Governments and businesses will use their wealth and power to create a force towards censorship by exploiting the method of “verifying” fakes in order to remove opposing content. +@IntugGB

    The same can be true of creating Verified Sources and Outlets, where they can act as their own crowd-source to push their content to be verified. -- Kyuubi10

Header

@Meighread Dandeneau, Comm Law and Ethics, 19 October 2017

Straight out of a modern dystopian novel comes the Orwellian headline “Obama: New media has created a world where ‘everything is true and nothing is true’” - or so we would think. Surprisingly, the very real article is less than a year old, and based entirely in nonfiction. In November of 2016, The Nieman Lab published the report after a recent Trump tweet claimed to have saved a Kentucky Ford automotive business from closing. The information was later proven to be false, but the damage had already been done. The post had been seen and shared by millions of active followers. Obama addressed the event in a press conference in Germany, saying, “If we are not serious about facts, and what's true and what's not … then we have problems.” The First Amendment protects our right to speak freely, but with fake news becoming more predominant in politics today, we have to ask ourselves - how far-reaching is the law?

Ethically, most people would say it is wrong to mislead or intentionally misinform another person. It’s dishonest, and from a young age, society instills in us the virtue not to lie. When slander is committed, the government has systems of handling it. When fraud is committed, punishment is a court case and conviction away from being enacted. But fake news has no such precedent. The rise of social media has aided and abetted the spread of such stories, and many companies profit from peddling the gossip.

To continue using the ‘Trump saving Ford automotive factory’ example, measurable impact followed when several media organizations picked up the story. Included in the mix were The New York Times, USA Today, and the Detroit Free Press, who all spread Trump’s claim unchecked. Further, these companies were unofficially endorsed by public figures who shared them, giving the story enough traction to appear on Google News. James Poniewozik, who condemned news organizations during the event, later tweeted, “Pushing back on fake news—some spread by the president—is going to become a bigger part of the media’s job.”

But what about alternative media companies, such as Facebook? Mark Zuckerberg deflects Facebook’s role in the deliberate spread of fake news, taking the role of “aggregators”. Even if Facebook tried to filter fake news, the process would be nearly impossible, Zuckerberg implies. He states, “While some hoaxes can be completely debunked, a greater amount of content, including from mainstream sources, often gets the basic idea right but some details wrong or omitted. An even greater volume of stories express an opinion that many will disagree with and flag as incorrect even when factual”. This has kept human moderators from examining news at scale. “There is always a risk that the accusation of being ‘fake’ will be abused to limit free speech. Not all ‘fake’ news is ‘real’, and in any case, one person’s fake news is another person’s opinion,” says blogger Karolina.

Many look at fake news today and consider its circulation just a part of modern media literacy. Others, such as Jonathan Albright, data scientist at Elon University, Samuel Woolley, Head of Research at Oxford University’s Computational Propaganda Project, and Martin Moore, Director of the Centre for the Study of Media, Communication and Power at Kings College, believe fake news has “a much bigger and darker” purpose. They agree, “By leveraging automated emotional manipulation...a company called Cambridge Analytica has activated an invisible machine that preys on the personalities of individual voters to create large shifts in public opinion”. Now they are equipped to be used as an “impenetrable voter manipulation machine” and change elections as we know them forever. Hints of this have already been seen in the most recent election, which is what sparked their research. They call it “The Weaponized AI Propaganda Machine” and it “has become the new prerequisite for political success in a world of polarization, isolation, trolls, and dark posts”.

One question remains: what can we do? The obvious remark is to hold liars accountable for their actions in cases where the fault is black and white. But in cases like Trump’s, some could argue that his tweet was hyperbole. The fault can then be found in the media companies who shared the post as if it were raw news. As of now, the speech is covered legally. However, as we move toward a future where companies like Cambridge Analytica can manipulate the public as a weapon, examination of our changing world and its technology is paramount.

The problem with “Fake News”

How many times over this past year have you heard the term “Fake news” tossed around?  Most likely an uncountable number of times and over a vast variety of subjects.  Fake news has become so prevalent in our society that many of us don’t go a single day without a piece of fake news sneaking its way onto our computers, our smartphones, or our televisions.  This creates both ethical and legal issues that ripple through our society and ultimately take away from the medium of journalism as a whole.  

The term “fake news” refers to the act of knowingly writing or distributing falsehoods that assert themselves as fact.  The easiest, and thus most popular, way to spread fake news is to share these misleading articles on Facebook, Twitter, or a plethora of other social media sites.  These articles spread quickly, and because many readers have no reason to believe the information may be false, they take every line as gospel.  So why does this matter?

According to usnews.com, team LEWIS (which defines fake news as an “ambiguous term for a type of journalism which fabricates materials to perpetuate a preferred narrative suitable for political, financial, or attention-seeking gain”) conducted a study over the past year that sought to measure the overall effect fake news had on people’s views of American news and brands.  The study found that only 42% of millennials checked the credibility of publications, and the figure sank to 25% for baby boomers. This means that 58% of millennials and 75% of baby boomers may be fed false information on an almost daily basis and accept it as truth.  This is a problem for a number of reasons.

Usnews.com states that one of the main reasons fake news is spread is to promote political ideologies, and this is where all of this seriously matters. Let’s say you get most of your news about a politician from Facebook, and let’s say that at least 60% of what you’re reading is either entirely false or misleading. This means you could potentially be voting for a candidate who totally appeals to you, when in reality they might be the exact opposite of what you’re looking for, because you were pushed and misled by the media. Now imagine that this didn’t only happen to you, but was the case for half, or even a quarter, of the people who also voted for this candidate. This is a direct consequence of fake news, and it is very scary.

Now, it’s not that we don’t have the tools to fact-check these articles ourselves; in fact, it’s really not very difficult to determine the credibility of an article if you know what to look for. The major problem is that many people have no reason to believe they’re being misled, especially the older generations. People tend to read an article, or even just its title, and have it stick with them; at some point they’ll share it in conversation with their friends, because it was so gripping that they wouldn’t want it to be fake in the first place, which deters them from even thinking to check.

The ethical problems that arise because of fake news are significant and have a real-life impact. The people who push these articles have very little to answer for, as it is almost impossible to police all media in an attempt to fight this sort of thing. The best defense against fake media is to check everything before preaching it as truth. Scrutinize your sources, and know that there is a chance that anything you read online is false. That’s all we’ve got until AI can start determining whether a post is fake news.

Fake News Problems

Fake news is everywhere. Many people are unaware of how much fake news is out there in the media, which includes television, radio, and the internet, including social media like Facebook or Twitter. There are many problems related to fake news. For people who use the media every day, two of the most pressing are that very few people know how to figure out if something is fake, and that it is unethical. Once these problems are brought to light, we, as a society, can try to make the issue of fake news known among those who use the media.

When it comes to detecting whether something is fake news, not many people know how to do it. The article Ten Questions for Fake News Detection lists ten ways people can find out if something is fake news. For example, it asks questions like “Does it use excessive punctuation(!!) or ALL CAPS for emphasis?” Certain answers are red flags, and the more red flags there are, the worse it looks. There are other things media users can look at, some obvious, like excessive punctuation, and some less so, such as whether “the ‘contact us’ section include[s] an email address that matches the domain (not a Gmail or Yahoo email address).” Ten Questions for Fake News Detection is just one of many articles people can access if they want to figure out whether something is really fake news. Legally, these websites or articles do not have to disclose that they are in fact fake news. The First Amendment protects these pieces from being censored or taken down: it grants freedom of speech, and those making articles online or talking about topics on television or radio are exercising that freedom. Therefore, nothing stops them from getting their material out into the media and the public’s eye. Still, there are things people can check that they wouldn’t have thought of unless they looked further into the matter.

Another main issue with fake news is the ethos aspect of it, ethos meaning the character and credibility of the “news” and the source it comes from. Many sources are considered fake news, and a large number of people would agree that it is unethical to publish something fake. It throws people off when they’re looking through the media and seeing these things. Yes, it does make viewers question a piece and whether or not it is actually true, but depending on the topic, it can affect them in a large way, mostly negatively. It can cause anger and rage if it says one thing and the viewer takes that as the truth. This being said, a lot of fake news can be considered unethical because of the pain and frustration it puts people through. People need to consider that something may be fake news, and if it is, ignore it, even though that may be easier said than done.

It’s crazy to think how popular fake news has become and how much society indulges in it in everyday life. Many of its problems become apparent once one considers how it affects people’s lives. Fake news is an issue people need to be aware of; they need to be able to determine whether something is fake and why, and to recognize that it is unethical. With the amount of media being used in today’s society, people should have a better understanding of fake news, especially when it is all over social media, which matters to thousands of people.

Considerations → Principles → The Institution of Socio - Economic Values

by: Timothy Holborn 

A Perspective by Eben Moglen[1] from re:publica 2012

The problem of ‘fake news’ may be solved in many ways.  One involves mass censorship of articles that do not come from major sources, but that may not result in news that is any more ‘true’.  Another may be to shift the way we use the web, but that may not help us be more connected. Machine-readable documents are changing our world.

It is important that we distill ‘human values’ in assembly with ‘means for commerce’. As we move from the former world of broadcast services, where the considerations of propaganda were far better understood, to more modern services that serve not millions but billions of humans across the planet, the principles we forged as communities need to be re-established.  We have the precedents of Human Rights[2], but do not know how to apply them in a world where the ‘choice of law’[3] for the websites we use to communicate may deem us to be alien[4].  Traditionally these problems were solved via the application of the Liberal Arts[5]; through the advent of the web, however, the more modern context becomes that of Web Science[6], incorporating the role of ‘philosophical engineering’[7] (and therein the considerations of the liberal arts via computer scientists).


So what are our principles, what are our shared values? And how do we build a ‘web we want’ that makes our world a better place, both now and into the future?

It seems many throughout the world have suffered mental-health issues[8] as a result of the recent election result in the USA: a moment in time when billions of people simultaneously confronted the fact that a populace exercising its democratic rights produced an outcome that came as a significant surprise, with global consequences.  So perhaps the baseline question becomes: how will our web better provide us (humans) with a more accurate understanding of world events and the circumstances felt by humans, via our ‘world wide web’?

  General Ideas 

Behavioral economics and other disciplines

  • Invest in Behavioral Economics, the application of psychological insights into human behavior to explain decision-making. Have standards to test the impact of any of these ideas on intended outcomes.

  • Invest in Social Cues. People create and share fake news because they cannot be held accountable for its content. Accountability cues for content should be part of any technological and behavioral solution to the fake news problem.
  • A novel concept is proposed in the following article: An algorithm that finds truth even if most people are wrong, Drazen Prelec, Massachusetts Institute of Technology Sloan School, Cambridge MA 02139, dprelec@mit.edu  +alex@coinfund.io





Human Editors



  • For more established outlets: consider immersing your team in alternate realities. There is always going to be bias in human judgment, but having experienced being in someone else’s shoes might cut through those perceptions. -@linmart

  • With the speed at which fake Facebook posts propagate, the news coverage might resemble what unfolds in an emergency crisis. A special manual was set up by leading journalists from the BBC, Storyful, ABC, Digital First Media and other verification experts. It is described as “a groundbreaking resource for journalists and aid providers providing the tools, techniques and guidelines for how to deal with user-generated content (UGC) during emergencies”. - @JapanCrisis

  • Hire more human editors to vet what makes it to trending topics

  • By then it’s too late, however. It needs to be stopped before it gets there. There are ways to detect if a story is approaching virality, and if human editors monitor what’s going viral and vet those articles, they can kill its distribution before millions see it. Transparency is key to trust (last time Facebook had human editors, they were accused of being biased against conservative news outlets). Saying “this was removed because it was fabricated by Macedonian teens” is better than some vague message about violating the TOS.  Eventually this data can train a machine-learned classifier.

Related:

Management Science
The structural virality of online diffusion - Vol. 62, No. 1, January 2016, pp. 180–196 

  • Facebook already has infrastructure for dealing with graphic content, and could easily employ something similar to mitigate fake news

  • Bring back human editors, but disclose to news outlets why specific articles are being punished. Transparency is the only way for news organizations to improve, instead of making them guess. -@advodude

  • Might have a bit too much overhead, but a thought: 1. Begin to fingerprint virally shared stories, as you likely already do in order to serve ads. 2. Have a human editor go through the resulting list and manually flag fake news. 3. Use these “verified fakes” to train an algo to recognize fake content and flag it for review. 4. Use these manual reviews to refine the algorithm. 5. Use user content flags as an additional signal. 6. A human is still required for the foreseeable future, but as the algo improves, the amount of work will decrease (a sketch of this loop follows below). -Steve
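
A minimal sketch of the loop Steve describes, assuming human editors have already produced a handful of “verified fake” labels; the story data is invented and scikit-learn stands in for whatever model a platform would actually run:

```python
# Sketch of the fingerprint -> human flag -> classifier loop (illustrative data).
import hashlib

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def fingerprint(story_text: str) -> str:
    """Step 1: fingerprint virally shared stories (here, a content hash)."""
    return hashlib.sha256(story_text.lower().encode("utf-8")).hexdigest()

# Steps 2-3: human editors flag fakes; the labels seed a classifier.
labeled = [
    ("Pope endorses candidate in shocking secret letter", 1),   # verified fake
    ("City council approves new transit budget", 0),            # legitimate
    ("Doctors HATE this one weird trick, media silent!!!", 1),  # verified fake
    ("Quarterly jobs report shows modest growth", 0),           # legitimate
]
texts, labels = zip(*labeled)

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# Steps 4-6: the model flags new virals for human review; each manual review
# and each user flag grows the training set and shrinks the human workload.
candidate = "Shocking: senator secretly endorses hoax, media silent"
prob_fake = model.predict_proba([candidate])[0][1]
print(fingerprint(candidate)[:12], f"p(fake)={prob_fake:.2f} -> route to human review")
```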

  • This could also be some sort of central-server (API) approach: a fact-checking server/API that any user or website can query with a given news URL. The server then returns all the information it knows about that URL, including whether it assumes the news URL to be fake, the reasons for that, and alternative (non-fake) news sources on the given topics (see the sketch below). This still requires human involvement to check stories, but it could be a somewhat independent organization in which actual news outlets (among others) invest. +@IntugGb
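
A sketch of what such a lookup service could return, with Flask as an arbitrary choice of web framework; the endpoint name, response fields, and stored data are all hypothetical:

```python
# Hypothetical central fact-checking API: clients query with a news URL.
from flask import Flask, jsonify, request

app = Flask(__name__)

# Toy in-memory store; a real service would sit on a human-maintained database.
KNOWN_URLS = {
    "http://conservativestate.com/pope-endorses": {
        "assumed_fake": True,
        "reasons": ["no corroborating outlets", "domain registered last month"],
        "alternatives": ["https://www.reuters.com/", "https://apnews.com/"],
    },
}

@app.route("/check")
def check():
    """Return everything the service knows about a given news URL."""
    url = request.args.get("url", "")
    record = KNOWN_URLS.get(url)
    if record is None:
        return jsonify({"url": url, "known": False})
    return jsonify({"url": url, "known": True, **record})

# Clients would then issue, e.g.:
#   GET /check?url=http://conservativestate.com/pope-endorses
if __name__ == "__main__":
    app.run(port=5000)
```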

 

  • Perhaps look at how Artificial Intelligence (AI) is being applied across industries. Search through case studies involving lawyers or doctors where massive amounts of information need to be looked at but where, in the end, the final call is made by a human. +@linmart

  • We are a group of former journalists and executives from Canada’s public broadcaster (CBC) whose company, Vubble, has been working on a solution to solve fake news, pop filter bubbles and engineer serendipity into the digital distribution of content (our focus is video). We’ve created an efficient and scalable process that marries professional human editors with algorithm technology to filter and source quality content (and flag problematic/fake content -- we believe the audience needs more agency, not less). We are currently putting together the funding to build a machine learning layer that would produce responsive feeds, giving a user a quality experience of content ‘inside’ their comfort zone, but also deliberately popping in an occasional challenging piece to take her slightly outside her comfort zone on difficult subjects. We’re building this as an open platform, providing transparency on how these types of systems work, and routine auditing for bias within the code that drives it. (@TessaSproule) +Mathew Ingram +Eli
  • Create a news site that rewards honest reporting and penalizes dishonest reporting[av]. - dvdgdnjsph
  • In order to post or vote on content, contributors must purchase site credit.
  • Reddit-style upvoting/downvoting determines the payment/penalty for contributions.
  • More details here 

        



Under the Hood

  • Insert domain expertise back into the process of development: engage media professionals and scholars in the development of changes to the Facebook algorithm, prior to the moment these changes are deployed. Allow a period of public comment where civil society organizations, representatives from major media outlets, scholars, and the public can begin to tease out the potential implications of these changes for media organizations. - @RobynCaplan, @datasociety, @ClimateFdbk

>> I elaborated on this a bit here: - @msukmanowsky

161110 - Medium
Using quality to trump misinformation online

Using page and domain authority seems like a no-brainer as a start. I advocated for adding this information to something like Common Crawl.

>> The problem with this approach is that fake news is not only generated by web domains but via UGC sites such as YouTube, Facebook and Twitter. - yonas

  • Use a source-reliability algorithm to determine the general reliability of a source and how truthful the facts in a particular article are. This has the benefit that newer news sources still get a fair chance at showing their content. -- Micha

  • Look up the DNS entry to see when the site was first registered. For example, washingtonpost.com was registered in 1995, while conservativestate.com was registered in Sept 2016 in Macedonia (see the sketch below). (+Daniel Mintz)
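
A sketch of that domain-age signal using the third-party python-whois package (pip install python-whois); registry responses vary in shape, and the one-year threshold is an invented cutoff:

```python
# Domain age as a crude authenticity signal (python-whois assumed installed).
from datetime import datetime

import whois  # provided by the python-whois package

def domain_age_days(domain: str) -> int:
    record = whois.whois(domain)
    created = record.creation_date
    if isinstance(created, list):  # some registries return several dates
        created = min(created)
    if created is None:            # registry hid the date: treat as unknown
        return -1
    return (datetime.now() - created).days

for domain in ("washingtonpost.com", "conservativestate.com"):
    age = domain_age_days(domain)
    if age < 0:
        print(f"{domain}: registration date unavailable")
    elif age < 365:
        print(f"{domain}: registered {age} days ago -> suspiciously young")
    else:
        print(f"{domain}: registered {age} days ago -> established")
```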

  • Track all news sources, to include when the website was first registered and any metadata suggesting links to fake/fraudulent activity, as part of an authenticity metric.

  • Provide strong disincentive for domains propagating false information, e.g. if a domain has demonstrably been the source of false information 10 times over the past year, dramatically decrease the probability that links pointing to it will be shown in users’ feed. -- Man
  • Genetic algorithms using many of these ideas, which may be boiled down into discrete values. Spam filtering is very challenging due to the need to avoid false positives. Start with a seed of known false and true stories. Create genetic algorithms using several of these variables to compete over samples of these stories (you need a large set, and to rotate the sample to avoid overfitting). Once a satisfactory false-positive rate is reached, keep test algorithms running in a non-production environment to look for improvements (a sketch follows below). -- Steve
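
A compact sketch of that genetic-algorithm idea: evolve feature weights that separate a seed set of known false and true stories, with a heavy penalty on false positives. The features, seed data, and all constants are illustrative:

```python
# Evolve feature weights over a seed set of labeled stories (toy data).
import random

random.seed(0)

# Each story: (feature vector, is_fake). Features might encode things like
# [clickbait headline, young domain, no corroboration, has citations].
SEED = [
    ([1, 1, 1, 0], 1), ([1, 0, 1, 0], 1), ([0, 1, 1, 0], 1),
    ([0, 0, 0, 1], 0), ([1, 0, 0, 1], 0), ([0, 0, 1, 1], 0),
]
N_FEATURES = 4

def fitness(weights):
    score = 0.0
    for features, is_fake in SEED:
        predicted_fake = sum(w * f for w, f in zip(weights, features)) > 1.0
        if predicted_fake == bool(is_fake):
            score += 1.0
        elif predicted_fake and not is_fake:
            score -= 5.0  # false positives are the costly failure mode
    return score

def mutate(weights):
    return [w + random.gauss(0, 0.3) for w in weights]

def crossover(a, b):
    return [random.choice(pair) for pair in zip(a, b)]

population = [[random.uniform(-1, 1) for _ in range(N_FEATURES)]
              for _ in range(30)]

for generation in range(50):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                       # selection
    children = [mutate(crossover(random.choice(parents),
                                 random.choice(parents)))
                for _ in range(20)]
    population = parents + children                 # next generation

best = max(population, key=fitness)
print("best weights:", [round(w, 2) for w in best], "fitness:", fitness(best))
```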

  • Audit the algorithm regularly for misinformation--you are what you measure, and a lot of the effects are second-order effects of other choices. --z



Facebook 

  • Facebook polarizes discourse and thus becomes an unpleasant place to “be.” So many people are walking away from the platform as a result of this election. This is a problem in their best interest to solve, and the fake news problem is part of this. -@elirarey

Recommended

161120 - NPR
Post-election, overwhelmed Facebook users unfriend, cut back

  • There are two main reasons that Facebook was such an easy place to spread fake news: it allows anyone to post whatever they want with little to no restriction, and even when posts do break regulations, violations most likely won’t be noticed for some time, as there are so many posts to monitor. It may even be impossible to completely stop fake news from being spread on Facebook, but there are a few attempts that could be made that would work both legally and ethically. Listed below are problems/reasons why fake news spreads so easily on Facebook and possible solutions to prevent or drastically reduce it.
  • One reason Facebook became an outlet for fake news is that, at least during the 2016 presidential election and any time prior to it, Facebook did not have a strong policy on fake news and did not crack down on it, because it was not receiving large amounts of negative feedback. After the election, citizens of the U.S. started calling Facebook out, as so many fake news articles had been shared around within three months of election day: the most-shared fake news articles ranged from close to 500,000 to nearly one million shares. Since then, Facebook has claimed it will start to punish and limit fake news more. This will be tricky, as the website allows free speech so long as it is not a threat toward anyone. Facebook has the right to remove any post it wants, as it is their website, but if it did this frequently, it would lose members, spark debates, and possibly be brought to court (even if it would win the case), all of which it would rather avoid. The plan for finding and removing fake news is to give Facebook users the option to mark a post as fake news. Then, Facebook checks the domain of the site, and lastly, the post is sent to a third-party investigation team responsible for fact-checking. It’s not an awful start, but it could have some downsides, such as:
  • How clear is it that people can now mark something as fake news? For instance, would it be clear to the average Facebook user that this is a tool? Most people probably wouldn’t recognize it unless it was obvious, as people tend to use Facebook to merely “swipe through” and see what friends are up to.
  • Would Facebook employees be able to tell what is meant to be fake news? What if a site like The Onion was spammed for being fake news? Clearly it is satire. The same could be said for any opinionated articles being shared.
  • Would sharing the article be halted during the review process? If so, what if it turned out to be legit? It could prevent real news from getting shared.

  • Another reason fake news is popular on Facebook is that most users go on Facebook to get a brief overview of what’s going on in the world, mainly within their social lives. The problem is precisely the briefness they want. For instance, a student on Facebook killing time before class wouldn’t want to spend the whole time reading an article. If he or she read a fake news headline that was shocking and engaging, as they usually are, and didn’t have the knowledge or perception that the site was fake news, he or she would most likely take it as true. Then, depending on how much emotion he or she had toward the piece, they might well share it, spreading the fake news. A solution to this scenario would be to require that, before a post linking to a page outside Facebook can be shared, the sharer must actually have clicked on the link. Although this would not stop fake news altogether, as people may not realize it is fake, it would cut it down. Another way would be to install a merit system, allowing pages to become verified, which would show that they are a trustworthy source.                - Alexander Rajotte

        

Facebook’s first attempt/plan of action to fight fake news.

Facebook’s fake news problem.

  • Facebook needs to be more transparent about the incentives that are driving the changes to their algorithm at different points in time. This can help limit the potential for abuse by actors seeking to take advantage of that system of incentives. - @RobynCaplan, @datasociety +Eli +Anton

  • Differentiate between sharing ‘personal information’ and ‘news articles’ on social media - the current ‘share’ button for both is unhelpful.  

    Social media sharing of news articles/opinion subtly shifts the ownership of the opinion from the author to the ‘sharer’.  It makes it personal and defensive: there is a difference between a comment on a shared article criticising the author and one criticising the ‘sharer’, as if they’d written it.  They may not agree with all of it.  They may be open-minded.  By shifting the conversation about the article to the third person, it starts in a much better place: ‘the author is wrong’ is less aggressive than ‘you are wrong’.  [Amanda Harris]

  • Implement a time delay on FB re-shares: political articles shared on Facebook could be subject to a time delay once they reach over 5,000 shares. Each time an article is re-shared, there is a one-hour delay between when the article is shared and when it appears in the timeline. When the next person shares it, there is another one-hour delay, and so on. This “cool down” effect will prevent false news from spreading rapidly. There could be an exponential filter: once an article reaches 20,000 shares, the delay could grow to four hours, etc. A list of white-labelled, verified sites, such as the New York Times and Wall Street Journal, would be exempt from this delay (see the sketch below). -  Peter@quill.org +BJ (This is a good idea!)

    >> This suggestion would apply only to FB? Of little use if parallel to that, one single post spreads like wildfire on Twitter. --
    @linmart  Peter: Twitter could also implement this system, where political posts are delayed before appearing in the Timeline. Still, Facebook has far more traction than Twitter internationally, so it’s a better place to start.
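
A minimal sketch of that cool-down rule; the thresholds mirror the numbers suggested above (5,000 shares for one hour, 20,000 for four), and the whitelist contents are illustrative:

```python
# Re-share cool-down: delay grows with share count; verified sites exempt.
WHITELIST = {"nytimes.com", "wsj.com"}

def reshare_delay_hours(domain: str, total_shares: int) -> int:
    if domain in WHITELIST:
        return 0                    # verified sites are exempt
    if total_shares >= 20_000:
        return 4                    # exponential "cool down" tier
    if total_shares >= 5_000:
        return 1
    return 0

print(reshare_delay_hours("conservativestate.com", 21_000))  # -> 4
print(reshare_delay_hours("nytimes.com", 21_000))            # -> 0
```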

  • In countries with strong public media ethos, Facebook should present users with a toggle option on the “trending” --  option #1 is as is, fully driven by Facebook’s algorithm, or option #2 is a dynamic, curated feed that is vetted by professional editors (independent of Facebook -- programmed by a third party, from the country’s public broadcaster or some other publicly-accountable media entity.) --@TessaSproule 


    Recommended:

    161120 - NYT
    How Fake Stories Go Viral

  • Facebook already knows when stories are fake

    When you click a “like” button on an article, it pops up a list of articles that people have also shared, and a lot of times that list includes, for example, a link to a Snopes article debunking it. So they already know that people respond to fake news with comments linking to Snopes. They could add a “false” button or menu item to flag fake news.  -Abby +Eli +@linmart

  • Pressure Facebook’s major advertisers to pressure Facebook over the legitimacy of news stories upon which their ads are being displayed.
  • Yes, I second this, but how?

  • This is showing up:

    [Screenshot “FB filter.png”: the Facebook message that now shows for the link provided by Snopes to the original source of the hoax.]

As reported in this story:

161123 - Medium
How I detect fake news 

by @timoreilly

  • Facebook already has a good source for verifying fake news that they abuse and need to use better.

A few important facts to consider first:

  • When a person ‘likes’ something they are:
  1. Saying to the poster, or site, that they like or approve
  2. Saying to Facebook this is what I like and want to see more of [1]
  3. Showing to their friends that they like this and approve
  • When sharing fake news, all of these factors help accidentally make it go viral. But #2 can be even more devastating. Consider Mat Honan’s experiment in his article “I Liked Everything I Saw on Facebook for Two Days. Here's What It Did to Me” [2]: when you like a fake news article, Facebook may feed you more from the site or more articles like it.
  • On the reverse side, if you got tired of seeing something, and it happened to be legitimate, like a presidential candidate’s official site, you might tell Facebook “I don’t want to see this” [3]. You would then stop seeing the legitimate news while still seeing the additional fake news that Facebook thinks you liked.

The smallest abuse here is that at the individual level a person can no longer trust their own friends to deliver real news. Then they have to go beyond Facebook to try to figure out what is real or fake.

What Facebook needs to do better is realize that trust among your family and actual friends (that you know outside of Facebook) is an invaluable tool for them and for Facebook.

Facebook needs to:

  • Post a verified page internally that educates users on how to verify fake vs. real news.
  • Keep the above page(s) visible, via a button or something similar, at all times while a person is reading their news feed.
  • Have a badge that a person can earn for knowledge on verifying fake news.
  • This badge should only be seen and utilized by a person’s trusted family and known friends, which they should be able to designate themselves.
  • This badge could allow a person to have a talking point with their friends when they recognize a friend has shared fake news.
  • The ‘offender’ can then be directed to the Facebook fake-news verification site, and could earn a badge to win back their friends’ confidence.
  • Badges could have a level to them that could be raised or lowered by a person’s trusted friends (not friends of friends, or the public).
  • In the process, Facebook could accumulate these reports of fake sites, verified at a personal level among trusted friends.


Important fact: You trust your actual family and friends, and Facebook needs to acknowledge this, respect it, and tap into it, to help make Facebook a more legitimate site for news sharing.

  1. Worley, Becky (08 May 2013). Facebook Scam Alert - What Really Happens When You “Like”. Yahoo! News. Retrieved: 18 Oct. 2017.
  2. Honan, Mat (14 August 2014). I Liked Everything I Saw on Facebook for Two Days. Here's What It Did to Me. Wired.com. Condé Nast. Retrieved: 18 Oct. 2017.
  3. How Do I Hide a Story That Appears in My News Feed? (n.d.). Facebook Help Center. Facebook. Retrieved: 18 Oct. 2017.


 

Analysis of Headlines and Content

  • Sentiment analysis of the headline: I suspect most fake news outlets use clickbait headlines with extremely strong verbiage to accentuate the importance of the story. Many clickbait headlines will be from legitimate stories, so this is a signal, not a panacea (a heuristic sketch follows below). - Steve +BJ
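
A heuristic sketch of that headline signal, folding in red flags from the Ten Questions for Fake News Detection article mentioned earlier (all caps, excessive punctuation, overheated verbiage); the word list and weights are invented:

```python
# Score a headline's "suspicion" from simple clickbait cues (toy weights).
import re

STRONG_WORDS = {"shocking", "destroyed", "furious", "secret", "banned"}

def headline_suspicion(headline: str) -> float:
    words = re.findall(r"[A-Za-z']+", headline)
    if not words:
        return 0.0
    caps = sum(1 for w in words if len(w) > 2 and w.isupper())   # ALL CAPS
    strong = sum(1 for w in words if w.lower() in STRONG_WORDS)  # hot verbiage
    bangs = headline.count("!")                                  # punctuation!!
    # Each signal contributes; none is decisive on its own.
    return min(1.0, 0.2 * bangs + 1.5 * caps / len(words) + 0.25 * strong)

print(headline_suspicion("SHOCKING: Senator DESTROYED by secret memo!!!"))  # high
print(headline_suspicion("Senate passes appropriations bill"))             # 0.0
```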

  • Search deep links for sources which are known to be legitimate: look for sources within an article using textual analysis (such as “as reported by the AP” or “Fox News reports”), and check the domains of said sources for a similar story (or check the link to the source if provided); this is a useful signal that a story is not fake. In case this gets gamed, a programmatic comparison of content between the source and the referring article may be useful. - Steve +Kyuubi10 (This is a great idea!)


  • Cross-partisan index: articles that people beyond a narrow subgroup are willing to share get more reach (a scoring sketch follows after these bullets). -- Eli + Jesse + Amanda + Peter +CB +@IntugGB +Rushi +JS +BJ +NBA

  • Cross-partisan index II: Stories/claims that are covered by wide variety of publications (left-leaning, right-leaning) get higher Google ranking or more play on Facebook. --Tamar +1NBA
  • Cross-spectrum collaboration: Outlets perceived as left-leaning (eg NYT) partner on stories with those perceived a right-leaning (eg WSJ). -- Tamar +CB +@linmart
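
A minimal sketch of the cross-partisan index above, measuring how evenly an article's sharers spread across ideological groups via normalized Shannon entropy; the group labels are assumed to come from some upstream audience model:

```python
# Cross-partisan reach score: 0 = shared inside one bubble, 1 = even spread.
from math import log

def cross_partisan_score(shares_by_group: dict[str, int]) -> float:
    total = sum(shares_by_group.values())
    active = [n for n in shares_by_group.values() if n > 0]
    if total == 0 or len(shares_by_group) < 2:
        return 0.0
    entropy = -sum((n / total) * log(n / total) for n in active)
    return entropy / log(len(shares_by_group))  # normalize to [0, 1]

print(cross_partisan_score({"left": 900, "center": 30, "right": 10}))    # low
print(cross_partisan_score({"left": 350, "center": 300, "right": 350}))  # high
```

Reach could then simply be multiplied by this score (or some function of it), so one-bubble virality is dampened.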

  • Compare content with partisan-language databases. Some academic research on Wikipedia has assembled a database of partisan language (i.e. words more likely to be used by Republicans or Democrats) after analyzing the congressional record. Content could be referenced against this database to provide a measure of relative “bias.” It could then be augmented by machine learning so that it could continue to evolve. --@profkane


(Related comments in section on
Surprising Validators -- @rreisman)


Reputation systems

  • Authority of the sharers: articles posted by people who share articles known to be true get higher scores. -- Eli

  • The inverse -- depress / flag all articles (and sources) shared by people known to share content from false sources. -- John

  • Author authority, as well. Back in the day Google News used Google+ profiles to identify which authors were more legitimate than others and then factored that into their news algorithm. - Kramer +Eli +@linmart
  • I think authorship data might still be available in the form of “rich snippets” embedded in the articles. -Jesse
  • We’re looking at author bios tied back to a source like LinkedIn - @journethics +@linmart

  • There are fundamentally not that many fake sources that a small team of humans could not monitor/manage. Once you start flagging initial sources then the “magic algorithm” can take over: those who share sources are themselves flagged; other items they share are flagged in turn; those sources are themselves flagged; and so on. -- John (+Tamar)
  • Upvoting/downvoting - (Andrew) There are better approaches than simply counting the number of upvotes and downvotes. (+JonPincus)

  • Reuters Tracer

    Fall/Winter 2016 - CJR
    The age of the cyborg [AI]
    Already, computers are watching social media with a breadth and speed no human could match, looking for breaking news. They are scanning data and documents to make connections on complex investigative projects. They are tracking the spread of falsehoods and evaluating the truth of statistical claims. And they are turning video scripts into instant rough cuts for human review...

  • Fake news is usually an editorial tactic/strategy, something that is planned, repeated, and worked on by specific individuals. An open-standard reputation system, just like Alexa rank, would do the job. It would first be crowd-populated. We at Figurit are currently working on implementing this internally to discover stories while eliminating fake ones. Instead of ongoing filtering/policing of the news, an open reputation system adopted by major social networks and aggregators would kill fake news websites.

    NOTE: must put in an exception for The Onion! ;)


  • Higher ranking for articles with verified authors. -- Eli

  • Ask select users -- perhaps verified accounts, publisher pages, etc. -- when posting/sharing to affirm that the content is factual. Frontloading the pledge with a pop-up question (and asking those users to put their reputations at stake) should compel high-visibility users to consider the consequences of posting dubious content before it’s been shared, not after. (This is based on experiments that show people are more honest when they sign an integrity statement before completing a form than after.) -- Rohan +Manu

  • It’s a strange thing, what happened with Google+. When it started, the group was very select. Content was extraordinary, as were the conversations. Once they opened the floodgates, all hell broke loose and all sorts of ‘characters’ started taking over. Conversations went from being quite academic to… well, different. --@linmart

  • I maintain an (open-source, non-profit) website called lib.reviews, which is a generic review site for anything, including websites. It allows collaborators to form teams of reviewers with shared processes/rules. I run one such team, which reviews non-profit media sources (so far: TruthOut, Common Dreams, The Intercept, Democracy Now!, ThinkProgress, Mother Jones, ProPublica). I think this is essential so news sources in the margins don’t get drowned out by verification systems or efforts to discredit them. Here’s the list of reviews specifically of non-profit media:

    Reviews by Team: Non-profit media

    It’s easy to expand this concept in different ways. If you’re interested in collaborating on the tech behind it or on writing reviews of news sites, see the site itself, or drop me a note at <eloquence AT gmail DOT com>. See our
    FAQ for general issues w/ user reviews.--Erik Moeller @xirzon 


A possible method of implementing reputation systems is to make the reputation calculation dynamic and system-based, mapping the reputation scores of sources onto a reverse sigmoid curve. The source scores are then used to determine the visibility levels of a source’s articles on social media and search engines. This ensures that while credibility takes time to build, it can be lost very easily.

where S_s denotes the source score and S_a the cumulative score of its articles.

However, this system needs to be dynamic and allow even newer publications a fair chance to get noticed. This needs to be done by monitoring the reputation of both the sources and the channels the articles pass through.

Have fleshed out the system in a bit more detail in the following open document if anyone is interested in taking a look.

Concept System for Improved Propagation of Reliable Information via Source and Channel Reliability Identification [PDF] 

Anyone interested in collaborating on this can contact me at sid DOT sreekumar AT gmail
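
A minimal sketch of that asymmetric dynamic, under assumed constants: S_s is a running aggregate of article scores S_a that rises slowly and falls fast, and visibility is read off a logistic curve:

```python
# Reputation that builds slowly and collapses quickly (illustrative constants).
from math import exp

GAIN, LOSS = 0.05, 0.40      # credibility is built slowly, lost easily

def update_source_score(s_s: float, s_a: float) -> float:
    """Move S_s toward the latest article score S_a, asymmetrically."""
    rate = GAIN if s_a > s_s else LOSS
    return s_s + rate * (s_a - s_s)

def visibility(s_s: float, midpoint: float = 0.5, steepness: float = 10) -> float:
    """Map the source score onto a logistic curve in [0, 1]."""
    return 1.0 / (1.0 + exp(-steepness * (s_s - midpoint)))

s = 0.5
for article_score in [0.9, 0.9, 0.9, 0.1]:   # three good articles, one hoax
    s = update_source_score(s, article_score)
    print(f"S_s={s:.3f}  visibility={visibility(s):.3f}")
```

The single hoax at the end erases most of the credibility the three good articles built up, which is the behavior the proposal asks for.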



Build an algorithm that privileges authority over popularity. Create a ranking of authoritative sources, and score the link appropriately. I’m a Brit, so I’ll use British outlets as an example: privilege the FT with, say, The Times, along with the WSJ, the WashPo, the NYT, the BBC, ITN, Buzzfeed, Sky News, ahead of more overtly partisan outlets such as the Guardian and the Telegraph, which may count as quality publications but are more inclined to post clickbaity, partisan bullshit. Privilege all of those ahead of the Mail, the Express, the Sun, the Mirror.

Also privilege news pieces above comment pieces; privilege authoritative and respected commentators above overtly partisan commentators. Privilege pieces with good outbound links - to, say, a report that’s being used as a source rather than a link to a partisan piece elsewhere.

Privilege pieces from respected news outlets above rants on Medium or individual blogs. Privilege blogs with authoritative followers and commenters above low-grade ranting or aggregated like-farms. Use the algorithm to give a piece a clearly visible authority score and make sure the algorithm surfaces pieces with high scores in the way that it now surfaces what is popular.

Of course, those judges of authority will have to be humans; I’d suggest they’re pesky experts, senior journalists with long experience of assessing the quality of stories, their relative importance, etc. If Facebook can privilege the popular and drive purchasing decisions, I’m damn sure it can privilege authority and step up to its responsibilities to its audience as well as its responsibilities to its advertising customers (a scoring sketch follows below). @katebevan
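
A sketch of what such an authority-first ranking could look like; the tier tables, weights, and domains are invented stand-ins for what human experts would actually maintain:

```python
# Authority-over-popularity ranking (all tables and weights illustrative).
AUTHORITY_TIER = {"ft.com": 1.0, "bbc.co.uk": 1.0, "wsj.com": 1.0,
                  "theguardian.com": 0.7, "telegraph.co.uk": 0.7,
                  "dailymail.co.uk": 0.4, "example-blog.medium.com": 0.2}
PIECE_TYPE = {"news": 1.0, "comment": 0.6, "blog": 0.4}

def authority_score(domain: str, piece_type: str, good_outbound_links: int) -> float:
    base = AUTHORITY_TIER.get(domain, 0.3)          # unknown sources start low
    bonus = min(0.2, 0.05 * good_outbound_links)    # links to primary sources
    return base * PIECE_TYPE.get(piece_type, 0.5) + bonus

# Rank a feed by authority first; popularity (share count) only breaks ties.
feed = [("dailymail.co.uk", "news", 0, 90_000),
        ("ft.com", "news", 3, 4_000),
        ("example-blog.medium.com", "comment", 0, 50_000)]
feed.sort(key=lambda x: (authority_score(*x[:3]), x[3]), reverse=True)
for item in feed:
    print(item[0], round(authority_score(*item[:3]), 2))
```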



I doubt FB will get into the curating business, nor does it want to be accused of limiting free speech.  The best solution will likely involve classifying Verified News, Non-Verified News, and Offensive News.

Offensive News should be discarded; that would likely include things that are highly racist, sexist, bigoted, etc.  Non-Verified News should continue with a “Non-Verified” label and encompass blogs, satire, etc.  Verified News should include major news outlets and others with a historical reputation for accuracy.

How?  There are a variety of ML algorithms incorporating NLP, page links, and cross-references of other search sites that can output the three classifications. Several startups use a similar algorithm of verified news sources and their impact for financial investing (Accern, for example).

We could set up a certification system for verified news outlets. Similar to Twitter, where there are a thousand Barack Obama accounts but only one ‘verified’ account. A certification requirement might include the following: +@IntugGB


Possible requirements

  • National outlet: Should have a paid employee in a minimum number of states.
  • International outlet: Paid employees in multiple locales.
  • Breadth of coverage: A solely focused political outlet should not be a certified outlet.
  • Minimum number of page views prior to certification

Time in existence and number of field reporters in each country/locale should also be required.

Verified sites

  • Verified sites: rather than get into the quagmire of trying to identify all "fake" news, there could be a process by which publishers apply to be marked in the news feed as "verified." (Think: verified people on Twitter… but less arbitrary.)

    Example criteria: all published articles are linked to a real person, sources for stories are specifically cited, fact-checkers, high-authority sites link to the site as a reputable source, etc. Basically, a combination of factors listed in this doc.

    When you get accustomed to seeing stories in the News Feed marked with a badge identifying the source as verified, you’d automatically be a little skeptical of things not marked verified. And FB gets to avoid getting directly involved in that impossible “policing fake news” quagmire. --Sonders (+Andy) Nice (Andrew) +@linmart

  • This may potentially be enough. If FB doesn’t want to go all the way down this path of ‘verified’ sites, it may be possible to simply build in ‘speed bumps’ to slow down stories from suspect sources until they are caught by FB’s other methods (i.e. user reporting). The issue seems to be that things can go viral too fast to be caught under the current model. --Cam (+Daniel Mintz)

  • Riffing on the speed bump idea that Cam wrote above, I’d just say that creating a very permissive whitelist for verified news and speed-bumping other “news” that isn’t on the white list seems like it would make a big dent with very little effort and next to no downsides. And  when I say very permissive, I mean it. NYT would get through, but so would Daily Kos, Breitbart, The Blaze and Upworthy. But the Macedonian sites and their ilk wouldn’t. Wouldn’t come close to solving the whole problem, but would make a dent at very low cost.
  • This is essentially the model thetrustproject.org is using. Trust Indicators include author bio (ID); citations; labeling of news, analysis, opinion & sponsored content; original reporting; opportunities for the public to dispute; etc. @journethics

  • Thoughts/questions on a board to verify sources:

    1) Though it should be a regulatory body, it absolutely must be independent of the government.

    2) Standards shouldn't have to be money or scope-based--this limits the capacity of citizen journalists, smaller outlets, or independent news producers and freelancers. That's the beauty of the internet, but it's also the danger--anyone can say anything. Why not use it as a soapbox for those folks who will provide a megaphone for real news and stories beyond the headlines and major outlets?

    3) On that note, what kind of standards can journalists agree on? Credibility of sources? Journalistic policies? Ethics rules?


    4) I don't know enough about the tech to say this definitively, but I'm not sure this is something you could accomplish with an algorithm at this point. I think this is a place for human editors/boards.

    For the sake of allowing this to be a tool to verify all outlets from small citizen bloggers to the NY Times, it could be a peer review system--volunteer journalists review the outlets/sites that apply for verification?  You could have a higher board of paid editors (funded by some non-profit source or Facebook/ Twitter/ Google) whose job it is to audit larger sources, but in general, is independence from tech firms (which are money-making entities to be covered themselves) something we should seek?
  • 5) We can do a lot of fact checking on our own without using a group of people to verify sources. Though we should have more people verifying sources and not publishing fake news, we should always be on the lookout ourselves. That means not just taking something that you read as is, but comparing it with other sources. That sometimes means just doing a quick google search to see what other sources have to say about it. We also should make sure that we are getting our news from the right sources. It is easy to check the about links on web sites and see if what it is saying comes from a reliable source.

6) Enforcement: when I Google News search a topic, a range of articles comes up, not all of which are news -- some are fraudulent/fake, some are biased. Why is this being called news? Why should it get a checkbox when other, unverified sources shouldn’t appear next to it in the first place?

If anyone is interested in discussing the specifics of what I’m thinking, contact me --
alexleedsmatthews@gmail.com




A broad architecture for reputation systems is outlined below in “A Cognitive Immune System for Social Media” based on “Augmenting the Wisdom of Crowds” - Richard Reisman @rreisman 


Distribution - Social Graph

  • Do other outlets pick up the story? If not, it’s probably false. Could downrank domains on this basis over time -- it’s very unlikely that a site originates highly shareable, true stories that no one else picks up. --Eli +pvollebr +BJ
  • Google News did a “syndicated content” meta tag back in the day. It was used by news sites with original content to signal the GNews algorithm to treat it differently. Any site using similar content would add weight to the original piece, pushing it higher in the rankings. - kramer
  • “Original reporting” is an indicator we’re working on, but it’s tricky. Use language analysis to ID derivative text? - @journethics
  • Related: “fake news” topics + wording thereof probably exhibit vastly different clustering behavior relative to “real news” -- I can’t necessarily anticipate how, but the data’s there to figure it out. So, in short: can train a classifier on the types of features already extracted by algorithms that perform automated “summarization” and other text analysis tools --Andy

    Assuming everyone is in an echo chamber, there might be some value in injecting some form of alternate viewpoint. Verified sources, e.g. NYT would signal “quality” but NYT opinion is slanted; sometimes that may be good suggestion, others not. --ac
  • Can’t solve a technological problem with a technological solution. We need to invest in:
  • Is this a graph clustering problem? I.e., if you have a bunch of fake websites that primarily link to each other, you ought to be able to find this somehow (see the sketch below). Spectral analysis of the graph matrix? -- N.
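
One simple way to chase that intuition, sketched with networkx on invented link data: mutually-linking rings show up as strongly connected components, and a ring with no inbound links from outside is the suspect pattern. (Spectral clustering would be a finer-grained alternative.)

```python
# Find rings of sites that mostly link to each other (toy link graph).
import networkx as nx

links = [("fake-a.com", "fake-b.com"), ("fake-b.com", "fake-c.com"),
         ("fake-c.com", "fake-a.com"), ("fake-a.com", "fake-c.com"),
         ("blog.example.com", "nytimes.com"), ("nytimes.com", "apnews.com")]

G = nx.DiGraph(links)

for component in nx.strongly_connected_components(G):
    if len(component) < 2:
        continue  # singletons are not rings
    internal = sum(1 for u, v in G.edges() if u in component and v in component)
    inbound = sum(1 for u, v in G.edges() if u not in component and v in component)
    verdict = "-> suspect ring" if inbound == 0 else ""
    print(sorted(component), f"internal={internal} inbound={inbound}", verdict)
```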


  • Okay, here’s a stupid idea from that paper: textual analysis. Take a webcrawler, look at the “promoted” box, and assume that you have poor trust for any website that is in the same textual link area of the webpage as “one free wrinkle trick” or anything to do with local doctors being furious (say). Chances are, anything that is in there (“Crooked Hillary is Done!”) is probably also not trustworthy. -- N.



Fact-Checking

  • News organizations that have known fact checkers and issue corrections should have higher weight +pvollebr

  • Link users to the original agreed upon factual sources in real time so that they can do the fact checking themselves rather than rely on someone else, and thus enhance statistical literacy. This is what we are building at Factmata.
  • Pair questionable news with fact-checking sites (and invest in fact-checking sites) - zeynep +Eli +@ClimateFdbk

  • Micro-bounties for fake news?  Maybe Facebook/ Twitter/ Google News -- or some outside philanthropist group -- could set up a small fund to reward non-affiliated users who identify fake news stories (thus incentivizing the exposure of fake news rather than the creation of it -- and crowdsourcing that hunt) +Kyuubi10
  • Create a scoring system: use human editors, in conjunction with fact-checking organizations, to score sites for the news they post. “Pants on Fire” scores a 5, “Mostly False” scores a 4, and so on. Once a site reaches a predetermined number of points, it gets banned and removed from Facebook (a sketch follows below).

    Note: Facebook already has this system in place for individuals - if you violate the rules too often, you get banned. Do the same for pages.
  • Be careful with this. Banning systems are often gamed: they can be overloaded, ganged up on, or skewed by people operating multiple accounts. This must be monitored and cleared by humans, which means fact-checking by independent sources.
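
A minimal sketch of the scoring-and-ban ledger described above; the point values follow the suggestion (“Pants on Fire” = 5, “Mostly False” = 4), while the intermediate value and ban threshold are invented:

```python
# Accumulate fact-check verdict points per page; ban past a threshold.
POINTS = {"pants on fire": 5, "false": 4.5, "mostly false": 4}  # 4.5 assumed
BAN_THRESHOLD = 20  # assumed cutoff

ledger: dict[str, float] = {}

def record_verdict(page: str, verdict: str) -> str:
    ledger[page] = ledger.get(page, 0) + POINTS.get(verdict.lower(), 0)
    return "BANNED" if ledger[page] >= BAN_THRESHOLD else f"{ledger[page]} points"

for v in ["pants on fire", "mostly false", "pants on fire", "pants on fire", "false"]:
    print(record_verdict("HoaxPage", v))   # fifth verdict tips it into a ban
```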

  • Full Fact is the UK’s independent fact checking charity.
  • Fact Checking has to be completely transparent and auditable, with anyone being able to dispute any ready checked fact. --Kyuubi10



Creation of multiple, user-selected fact checkers
(no banning/ censorship/ editorial control/ coercion):
 

  • Social media platforms create an API for independent fact checkers organizations to plug into and provide verdicts for individual links as well as sites in general (based on statistics for that site). Verdicts must be restricted to one out of a small number of well-designed categories, similar to what e.g. Snopes is providing today, perhaps with additional verdicts for “satire”.
  • They invite organizations across the spectrum to participate and become a fact checker.
  • Then, they allow users of social media platforms to explicitly select one or more fact checkers for themselves.
  • The verdict of fact checkers on any given external link becomes an annotation (e.g. color-coded) in the platform’s presentation of the content. The annotation should link to the fact checker’s site.
    The annotation would also be presented in the platform’s “editor” as soon as a link is added to a post.

This may achieve the following:

  • Users might welcome this as a useful service because it saves them the work of consulting their chosen fact checkers’ sites themselves and finding the correct article. Many have been embarrassed when they posted a provably false story, then got called out on it by others.
  • There is no censorship. In fact, it would get social media platforms out of the fray of editorship - a position they may prefer. Notably, in this option, there would continue to be no more restriction on the type of links shared than today.
  • It makes trust a first-class citizen on the platform. Being asked to select a fact checker from a wide spectrum makes it immediately clear to the user that there is no absolute truth in any of the external content they are presented with on social media, and that there is a significant amount of content on the web that is not based on proven facts. It clearly conveys that it is everyone’s responsibility to apply their own judgment here. The platform will assist with that process and make it easy.

    Choosing a fact-checker is quite obviously a much more consequential choice than clicking on an article, so it would likely, on average, be made more judiciously.

    If deemed acceptable or desired, the social media platform could give more reputable (or popular) fact checkers better placement in the fact checker choosing UI.

  • Many would select multiple fact checkers if it’s easy to do and unobtrusive, perhaps even across the spectrum because, in the end, everybody wants more information. Seeing the verdicts or even conflicts between fact checkers may give rise to critical thinking at the margin.
  • It raises the semantic level of the discourse. In short, a piece of web content is mostly measured by how entertaining it is; a fact checker is measured by how accurately it measures the veracity of an article. Very different discussion.

Most importantly, it injects high-precision information into the system that is not easily obscured. Anybody can write and publish an article that contains lies or half-truths or dog whistles or satire. Fact checkers, however, must provide a clear verdict (through the API). They also cannot easily state that something is true when it is probably not and vice versa without losing some trust when other fact checkers’ verdicts are readily accessible. (+KP)
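
A sketch of the data model this scheme implies: verdicts restricted to a small fixed vocabulary, users subscribed to checkers of their choice, and each link annotated with every subscribed verdict. All names, categories, and stored data are illustrative:

```python
# Hypothetical schema for user-selected fact checkers and their verdicts.
from dataclasses import dataclass
from enum import Enum

class Verdict(Enum):
    TRUE = "true"
    MOSTLY_TRUE = "mostly_true"
    MIXED = "mixed"
    FALSE = "false"
    SATIRE = "satire"

@dataclass
class Annotation:
    checker: str        # e.g. "snopes.com"
    verdict: Verdict
    detail_url: str     # the annotation links back to the checker's article

VERDICTS = {   # what each checker reported for a given shared link (toy data)
    "http://example.com/pope-endorses": [
        Annotation("snopes.com", Verdict.FALSE, "https://snopes.com/fact-check"),
        Annotation("checker-b.org", Verdict.FALSE, "https://checker-b.org/item"),
    ],
}

def annotations_for(url: str, subscribed: set[str]) -> list[Annotation]:
    """Only the user's chosen fact checkers contribute annotations."""
    return [a for a in VERDICTS.get(url, []) if a.checker in subscribed]

for a in annotations_for("http://example.com/pope-endorses", {"snopes.com"}):
    print(f"{a.checker}: {a.verdict.value} ({a.detail_url})")
```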


Special databases


Linking corrections to the fake news they correct:

  • High-quality fact-checking is of little use if the misinformation has been duplicated 100 times across 100 blogs and spread throughout social media. There needs to be a way to associate the fact check with the misinformation, so that whenever the misinformation is shared, the fact check is presented alongside it. - @rbutrcom <<< Could you create a digital signature of the underlying misinformation that “matches” the 100 instances, and then associate the fact-checking with the signature (class) vs. the URLs (instances)? (A sketch follows below.) @alecramsay

  • Rbutr has been working on a solution to this for a while, storing URLs in a database where one URL is a critique or rebuttal of the other. Eg:

  • This lets rbutr tell people when the page they are viewing has been critiqued. Eg: Foodbabe article connected to Snopes via rbutr
  • Facebook, Twitter and Google could easily use this database as a way of identifying and displaying rebuttals for content shared in their feeds and search results too
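
A minimal sketch of the “digital signature” idea above: shingle each story’s text and compare Jaccard similarity, so one fact check can attach to all near-copies of the same misinformation. The similarity threshold is a guess, and real systems would use something more scalable like MinHash or SimHash:

```python
# Match near-duplicate misinformation via word shingles (toy texts).
def shingles(text: str, k: int = 5) -> set:
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(max(1, len(words) - k + 1))}

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

original = "the pope has officially endorsed the candidate in a letter to the faithful"
copy = ("BREAKING the pope has officially endorsed the candidate "
        "in a letter to the faithful says source")
unrelated = "city council approves the new transit budget after a long debate"

fact_checked = {"pope-endorsement-hoax": shingles(original)}  # signature store

for label, text in [("copy", copy), ("unrelated", unrelated)]:
    for sig_id, sig in fact_checked.items():
        sim = jaccard(shingles(text), sig)
        if sim > 0.5:
            print(f"{label}: matches {sig_id} ({sim:.2f}) -> attach fact check")
        else:
            print(f"{label}: no match ({sim:.2f})")
```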

Recommended

161122 - Nieman Lab
“No one ever corrected themselves on the basis of what we wrote”: A look at European fact-checking sites
Via Nieman Journalism Lab at Harvard @NiemanLab


Reuters Institute
Rise of the fact checker - A new democratic institution?
Via Nieman Journalism Lab at Harvard @NiemanLab


Interface - Design

  • Change the user interface so the credibility of the source is reflected: “Denver Guardian” doesn’t look the same as Washington Post. -z    I like this. It’s much like how media orgs (should) use visual design cues to distinguish sponsored content from independent journalism. You’re still in the same experience, but you get a visual cue that helps prioritize the solid information and hopefully turns on your BS meter for sketchy stuff. --mfuvio
  • Issue here could be distinguishing between falsity and satire. This solution should contemplate and allow both in the UI.  Austin @atchambers
  • Satire could be labeled. Colors could differentiate between Satire and Fake.

  • Make popularity vs. truth distinct measures from each other (often they are conflated) - Berkun + Srila +Peter +JS +NBA
  • Perhaps it’s as simple as shading stories from suspect or satirical sources a different color in the news feed. People could choose to turn this feature on or off, or limit certain sources --@citylifejc

  • Create an “editor” role - similar to likes, allow credible people to vote on whether or not a particular piece is fake. Opt in current editors (NOT journalists or reporters, but editors). Shift the question of accreditation from the article to people (as in how newsrooms themselves are built). Good independent content can still rise up. - Brandon (+Peter)

  • Create a “FAKE!” overlay that replaces the photo of any story/URL proven to be false. Then people can’t keep spreading it on Facebook without a huge, visual warning to everyone that the story isn’t true. (Like Memebuster)
  • Take a look at all stories that are getting significant engagement: if they are outright fraudulent (“Pope endorsed Trump”), either severely downgrade them or present them with “debunked” UX if you must. (I don’t see why there is a free speech right to hoax people without being challenged or dampened.)
  • Add a “credible news” box, instead of just “trending”--if you are going to push news stories to people, might as well push good ones.

  • Flag FB-shared stories whose "original" publication date is way off from the current date. Many times I've been fooled/dismayed by stories that have been favored in my TL or in that egregious "People Also Shared" ribbon because multiple friends shared them, only to be annoyed that when I click, the sensational story is weeks or months old. Of course there are good/useful and bad/misleading examples of this (I belatedly learned that Bannon had possibly committed voter fraud in August, but I also clicked a headline from March about Trump that initially seemed relatively heartening but was ultimately totally misleading because it was so old). Either way, date discrepancies should still be flagged. @HolleyA
  • On Facebook, we should be able to have an option to flag a story and report it as false. There should also be a dedicated group of people who work for Facebook, verifying how many upvote or downvote an article. If a news article has too many verified down votes, it would be taken down because that many people said it was unreliable.

Flags

  • Petition Facebook to add a “fake news” flag, and when a link reaches a high enough flags-vs-clicks ratio (10%?), display a “Users have flagged this link as containing potentially untruthful content” warning next to the shared post. Whitelist certain reputable publications vs. blog-sites. Petition Google to do the same. -@lpnotes +Eli

  • If news is flagged as fake, it needs to be replaced by real news from the same political viewpoint. There is a disproportionate amount of fake conservative news out there, but there are plenty of legitimate conservative news sites that can provide fact-based reporting. For example, flag an article about Clinton selling weapons to ISIS as fake news, and offer a legitimate critique of her approach to foreign policy. This might only be possible with human editors (see bullet point above) but it also addressed the issue with human editors being accused of bias. -@elirarey (+Tamar)


    >> But how can you really be sure that Clinton did not sell weapons to ISIS? People will be concerned if you make moves like this. Who is going to be the verifier of said claim? Clinton herself?
      +Karl Point taken :D
  • User flagging of false news, adjusted by ideology. (e.g. a Dem flagging a site that is typically visited by more Dems gets more weight). -- Eli
  • Develop credibility scores for users who accurately flag fake / non-fact checked news. Send accelerating content to those trustworthy citizen fact checkers and ask: “would you share this?” -Rohan +Manu
  • Or do credibility scores on a site/article level. Allow users to downvote a la reddit/digg
  • Differentiate between ‘fake’--where information is fabricated; ’unverified’--where information cannot be confirmed as true or untrue because available data do not justify the conclusion; and then ‘biased’--where the authors of the story have made no attempt to represent both sides. Especially for the latter I think you’d need human editors, but it seems like it should be possible to train an algorithm to look for cases where quotes are one-sided (perhaps relying on a database like OpenSecrets that classifies organizations as predominantly liberal or conservative, although ideally this should apply to nonpartisan sides as well, e.g. in a police shooting situation).
  • There should be an option to report something as fake not only on Facebook but on certain news websites as well. When we do a Google search for news, before we click into an article it could show how many verified people reported it as fake. To separate fake from not-100-percent-proven, websites could also show votes on how many people claim a story to be true, false or unproven.
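A minimal sketch combining two ideas from this list - the flags-vs-clicks threshold and ideology/credibility-weighted flags. The 10% threshold, the weights and the single left-right axis are all illustrative assumptions:

    from dataclasses import dataclass

    FLAG_RATIO_THRESHOLD = 0.10  # hypothetical: warn at 10% weighted flags per click

    @dataclass
    class Flag:
        accuracy: float   # 0..1, how often this user's past flags were upheld
        ideology: float   # -1 (left) .. +1 (right)

    def weighted_flag_ratio(flags, clicks, audience_ideology):
        total = 0.0
        for f in flags:
            # A flag counts more when the flagger usually flags accurately and
            # is ideologically close to the link's typical audience (a Dem
            # flagging a Dem-leaning site gets more weight, per the note above).
            closeness = 1.0 - abs(f.ideology - audience_ideology) / 2.0
            total += f.accuracy * closeness
        return total / max(clicks, 1)

    def should_warn(flags, clicks, audience_ideology):
        return weighted_flag_ratio(flags, clicks, audience_ideology) >= FLAG_RATIO_THRESHOLD

    # Example: 30 mostly-accurate, same-side flags on 200 clicks triggers the warning.
    flags = [Flag(accuracy=0.9, ideology=-0.6)] * 30
    print(should_warn(flags, clicks=200, audience_ideology=-0.5))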

Downrank, Suspension, Ban of accounts

  • Punish (downrank, suspend, ban) accounts that post fake news, thereby disincentivizing their spread - @Jesse_VFA
  • If there’s one thing we’ve learned—from Macedonian teens and this guy—it’s that this is an economic problem, largely. Fake news is a good business -- A lot of people posting fake news are doing it because it’s profitable.
  • In other words, if you made posting fake news an existential threat for high volume accounts (they could lose access to thousands/millions of subscribers), then you could stop the problem at its root.
  • There should be people running these news websites who are in charge of making sure content is verifiable. If an article is not verifiable, it should clearly state that it is opinion, not proven fact. If an article claims to be true when it is not, the people in charge of these sites should be able to ban that person from posting. To make it fairer, multiple people should decide whether to flag an article, and if it is flagged enough, that person should be suspended or permanently banned.



Contrasting Narratives  

  • Improve suggestions for further reading on basis of quality, diversity rather than ‘echo’ - Austin @atchambers  +@IntugGB +Nic

    Suggestions based on viral articles from verified sources instead of just related articles? (Andy C)
    Yes, but it would also be good to provide different viewpoints. Media literacy at large scale so kids get good bullshit detection as early as the nth grade. -Ben @benrito --> (This isn’t the right place for this suggestion; this is derailing the issue. I’m all for more education, but this isn’t what the thread is about and I strongly resist making this about things that are not relevant. - zeynep) + Amanda (+Tamar)

  • Highlight stories that contradict one another on a particular event/claim, emphasize their differences, and provide analysis that either confirms or places claims into question (corollary: how to ensure whistleblowing news is not marked as ‘unusual’?) -@yelperalp

  • Pair stories, such as fake news, with stories that debunk them, directly within the feed, regardless of whether a user likes that publisher or not. That way users will be exposed to diverse viewpoints on an issue, on things that matter to them, irrespective of source. - @RobynCaplan, @datasociety +@linmart
  • View As feature to see what appears in others’ filter bubbles. - Nathan +nick

  • Create an excerpting function for cards that encourages readers to seek out and excerpt text from the article supporting the headline claim (or other focus), replacing the standard description of the story supplied by the site
  • Invert the current relationship of story source and “friend source”: put the story source prominently on top, with the friend’s share and comment below

  • Counter-narrate the emotional content with memes, pictures and videos to assuage the targeted feelings, in lieu of fact-checking (which studies show can actually reinforce prejudices). Otherwise rational people believe/fail to question fake news because it validates their existing feelings (and/or their defenses against unwanted feelings of loss, shame, fear, vulnerability, frustration, jealousy, inadequacy etc.)

Points, counterpoints and midpoints

Thread by @juliemaupin

  • Create FB Point, FB Counterpoint and FB Midpoint.  @juliemaupin. Make FB default to showing three stories side-by-side whenever a news story appears in a user’s feed:  
  • Story 1 (FB Point):  the story that was placed into your feed by one of FB’s standard mechanisms, with clear labels to indicate whether the story was:
  • Placed into your feed because it was shared by your friend, OR
  • Placed into your feed by FB’s automatic algorithm on the basis of your past likes, OR
  • Placed into your feed because someone paid FB to put it there.
  • Story 2 (FB Counterpoint):  the counterpoint story, which is the version of the same story (i.e. addressing the same underlying news item as story 1) currently receiving the most views from people who have been identified as holding views opposite to yours.  
  • Note this would require using the combined techniques of big data and psychographics which firms like Cambridge Analytica are currently using to influence political campaigns (OCEAN method + consumer data + political data, etc). In this case, however, the information would be used to identify the FB user’s psychographic profile only for the purpose of feeding him/her the story most likely to be viewed by his/her psychographic opposite.
  • The counterpoint story provides a perspective check for the user, so that s/he is confronted with the reality of which end of the spectrum the “shared” story comes from and how far out from the center that perspective lies.
  • Story 3 (FB Midpoint):  the version of story 1 that lies in the psychographic middle - i.e. the median - of all online social media activity.
  • Again this would require identifying the midpoint story using big data + psychographics techniques.
  • Note it’s important to come up with a method for identifying the median rather than the mean, since users with certain types of profiles are more likely to be active sharers than users with other profile types (hence skewing the midpoint). See the sketch at the end of this thread.
  • The Midpoint story could also be used to identify the degree of skew of the point story and the counterpoint story.  E.g. one could illustrate graphically right below the 3 side-by-side stories how far to one side or the other of the midpoint story the other two lie.
  • Other notes:
  • In principle, one could design the same kind of system for Twitter feeds & other social media platforms.
  • In principle, one could design a similar system for browser search results, e.g. browsers could be designed to tee up three sets of search results, perhaps separated by “tabs” within the search results page, showing:
  • Google you:  the ordinary search results you see when you allow google to learn your browsing habits and tailor the search results it shows you to your own past behavior
  • Using a name like “google you” makes it clear that this is not unbiased information you’re receiving in your results list.  It’s based on YOU!  Your preferences, your online behavior.  It’s a reminder that you are not representative of everyone.
  • Google counterpoint:  the search results google would tee up if someone with a psychographic profile on the opposite end of the spectrum from yours were to search the same terms you just searched
  • Google middle:  the search results google’s data indicates are the most popular across all users for the search terms you entered (the median results)
  • Visually, in order to be effective, the three stories should be placed side-by-side with the same amount of space, prominence etc given to each.
  • This proposal could also be combined with others like fact checking.  E.g. one could display the fact-checked ratings of each of the three side-by-side stories right below the stories themselves in the user’s feed.

  • Some (of many) difficult questions:
  • Should the counterpoints & midpoints be constructed using national or international data?  International provides more diverse perspectives to confront social media users with and makes more sense given the realities of today’s global information-sharing environment.  However, the data sets are currently much richer on a country basis, especially within the US.  This means the counterpoint and midpoint feeds would probably have lower statistical validity and/or lower relevance for non-US users, of which there are many.
  • Should digital/social media users have a right to interact with social media without having any psychographic data collected on them?  If so, how could that be facilitated while also allowing users who want to benefit from algorithmic feed methods to do so?  Note that allowing opt-outs creates selection bias problems for the remaining data sets.
  • What does academic research tell us about how far away we should expect a FB midpoint or Google Middle type result to lie from a fact-checked news story of the old-school “mainstream news media” variety?  No doubt there’s some literature on this.
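On the median-vs-mean question above, a minimal sketch, assuming each version of a story can be placed on a single psychographic axis and that sharing activity is what gets counted (the data is invented):

    import statistics

    # (story_id, axis_position, number_of_shares) -- hypothetical data; the
    # cohort at -0.9 shares far more actively than everyone else.
    stories = [("A", -0.9, 900), ("B", -0.1, 500), ("C", 0.0, 600), ("D", 0.2, 400)]

    # Expand positions by share counts, the way raw platform activity would.
    positions = [pos for _, pos, n in stories for _ in range(n)]

    mean_point = statistics.fmean(positions)     # pulled toward the hyperactive cohort
    median_point = statistics.median(positions)  # resistant to that skew

    # Pick the story closest to the median as "FB Midpoint".
    midpoint_story = min(stories, key=lambda s: abs(s[1] - median_point))
    print(mean_point, median_point, midpoint_story[0])  # -0.325 -0.1 B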




Model the belief-in-true / belief-in-fake lifecycle

Thread by @thomasoduffy


We need to build narratives that help us clarify, in a crystal clear way, the whole life-cycle and “CX” along the journey of fake news. For example, how a person goes from not holding an opinion on an issue, to inputting fake news across single, multiple and/or compound reinforcing sources (like brand touchpoints), to believing fake news, holding that perception for a period of time, to later discovering it was not true (when and if that happens) in a way where they change their mind.

By focusing on people who have overcome belief in some fake-news source, we can ask how to get more people through a fake-news recovery process sooner.

Similarly, we need to contrast this with how people become resilient against fake news and know it’s not true, so we can nudge people vulnerable to fake news towards more resilience against it.  

The goal would be to reduce vulnerability to fake news, reduce the incidence of its success conditions, and shorten the half-life of successful fake news.
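As a rough way to reason about that half-life, a minimal sketch assuming a simple compartmental model (unexposed -> believing -> recovered); the rates are invented, but it shows how raising the correction rate shortens how long belief in a fake story persists:

    def simulate(days, exposure_rate, correction_rate):
        unexposed, believing, recovered = 1.0, 0.0, 0.0
        history = []
        for _ in range(days):
            newly_believing = exposure_rate * unexposed
            newly_recovered = correction_rate * believing  # debunks, corrections
            unexposed -= newly_believing
            believing += newly_believing - newly_recovered
            recovered += newly_recovered
            history.append(believing)
        return history

    # Doubling the correction rate roughly halves how long belief persists.
    slow = simulate(60, exposure_rate=0.2, correction_rate=0.05)
    fast = simulate(60, exposure_rate=0.2, correction_rate=0.10)
    print(max(slow), slow[-1], max(fast), fast[-1])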

Another way to fix the cycle is to make sure that the sources that publish fake news come out right away and admit being wrong. The longer a news source keeps fake news up, the more people are tricked into believing the wrong thing. A source that publishes false news should take it down and admit the error as soon as possible, or be banned from the platform. It is very important that people get to know the truth and are not just left hanging with that one false story.


Verified Pages - Bundled news

  • Verify pages before letting them declare themselves a “News/media website” -- until then they must call themselves a community group. There must be some sort of actual newsgathering.
  • Facebook could play a role in helping people bundle subscriptions to news sites and publications so that publishers are not only driven by ad-revenue, which tends to exacerbate clickbait concerns regardless of authority of publication. Understand, however, that this further bottlenecks the media industry and makes it reliant on Facebook... @RobynCaplan, @datasociety

  • Add a news center that aggregates quality, validated news shared across Facebook, rather than inserting it into the newsfeed or relying on the trending sidebar. A plus if it includes metrics or data relating to the ‘why’ of quality -- Austin @atchambers
  • There are only so many actual news sources in the world. Instead of allowing everything and trying to filter out new trash that keeps spawning, have an exclusive news feed that local and national news sources need to apply to and be vetted for. By humans. Those sources can post links to actual news stories as long as they are on good behavior (posting real news)
  • This seems similar to the Apple news model… wonder what else they have been doing?
  • If this suggestion is headed for FB, be careful. They already tried this in India with specific websites that would be selected under their Free Basics initiative. The concerns and criticism drawn by how the vetting process was carried out turned into quite a nightmare. - @IntugGB

    Background:

    160512 - The Guardian
    The Inside Story of Facebook’s Biggest Setback 

  • Facebook and other media companies need to have an in-house team that engages on issues of media ethics and the public interest, and that can help facilitate discussion between relevant stakeholders. This team must not be held to the same business incentives, i.e. increasing click-rates, to which other teams building the News Feed algorithm are held. This would be akin to technology companies having teams dedicated to issues such as accessibility. This group can also provide a research basis for any authority score that different publications receive. Like PageRank did for ordering pages based on links driving traffic to a site, this can include a higher score for publications that deal directly with official communication coming from institutionalized sources, as well as for original reporting, such as speaking directly to concerned citizens or relevant actors --@RobynCaplan, @datasociety + @linmart

Viral and Trending Stories

  • Create a publicly visible feed of rapidly trending stories. Allow users to report in links to debunks and have a team of FB editors evaluating those and deciding whether to take action: slow the spread, attach a debunk story to suggested media, or remove the post from newsfeeds entirely --@eylerwerve
  • Imperfect but you could flag stories that spread virally within a narrow cohort, using the same tags you use to target advertising. Then have a human editor look at them. The trick is to eliminate the worst offenders, not the edge cases --@sbspalding

  • Slow down content’s velocity until it gets fact-checked, either by a hired team or by crowdsourcing (either way there should be external oversight, because Facebook) --Rohan (+Peter) +Rushi

  • For users, differentiate between sharing the news and questioning the news. Create an “ask your friends” sharing feature that looks like a regular share but asks your facebook friends “is this real?” Many times people share content and remark “if this is true, then…” If those shares are treated differently than normal news, users can better understand their friends’ intentions, slowing down how quickly fake news moves --Rohan (+Tamar, Dave)
  • For users, pop up a warning before they share an article likely to be fake. "Are you sure you want to share this? Many users say this is inaccurate or a hoax." Or, "Many users have reported that this source spreads inaccurate articles or hoaxes" --Dave

  • Incentivize FB users to report fake news and hoaxes. Give them a discount on boosting $$ for their own page, or a discount code for products/services, dinner with Zuckerberg, badges for top “reporters,” etc. --@janeeliz
  • I just have to throw this in: I would tell FB to blast every user about being skeptical and critical thinkers. Meanwhile, for sure mark the source of origin of stories if that source is unreliable, even after a story is shared and the source is vetted as bad. --MarkG
  • Use tools like NewsWhip to humanly curate stories that are spreading, to focus fact-checking approaches, with flags for mechanisms that limit syndication of viral fake-news stories. Connect with algorithms like those used within Mendeley to trace derivative stories. --@thomasoduffy

Patterns and other thoughts

Thread by Heather

  • Prioritize recommended news based on verified news outlet domains, not just domain authority. Add this into the logic, not just what is getting shared by friends. Verified news outlets should feel verified. If something is flagged as not true, or follows the characteristics of fake news, tell the end user somehow. --Heather
  • Allow flagging of suspected posts (would need to add this into prioritization logic in order for it to be effective so could be sketchy). --Heather
  • Learn patterns for fake news stories - I would imagine there must be some that could at least trigger a moderation / blacklist. URLs, the people who first share them, patterns in the content itself.

>> To build on what Heather said, there are specific “clickbait” patterns that a lot of these stories use. Perhaps use that as part of a signal to downrank stories or flag them for human review (see the sketch below)

  • Great Idea!! Since clickbait titles are optimized to get maximum exposure, this would be a good countermeasure to disincentivize their effect. Even better: it would disincentivize real news from using clickbait formatting, so fake news, which depends disproportionately on engagement from headlines alone and therefore can’t afford NOT to clickbait, would stick out - nick  

    >> Washington Post is doing it … (?)

    This researcher programmed bots to fight racism on Twitter. It worked.
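Back to the clickbait-pattern signal: a minimal sketch of scoring a headline against common clickbait constructions, to be used for downranking or queueing for human review. The pattern list and the scoring are illustrative, not a vetted classifier:

    import re

    CLICKBAIT_PATTERNS = [
        r"^\d+ (things|reasons|ways)\b",   # listicle openers
        r"\byou won'?t believe\b",
        r"\bwhat happen(s|ed) next\b",
        r"\bthis one (weird )?trick\b",
        r"\bshocking\b",
        r"!{2,}",                          # stacked exclamation marks
    ]

    def clickbait_score(headline):
        h = headline.lower()
        hits = sum(bool(re.search(p, h)) for p in CLICKBAIT_PATTERNS)
        return hits / len(CLICKBAIT_PATTERNS)

    for title in ["Pope Endorses Trump - You Won't Believe Why!!",
                  "Fed raises interest rates by 0.25 points"]:
        print(title, clickbait_score(title))  # ~0.33 vs 0.0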

  • To further add to what Heather said about ads - work with FB to create alternative ad structures (image only, no click-throughs, etc.) so predatory companies can’t use FB to generate revenue from ads that are clearly designed to confuse and mislead --Emily @emilycr0w
  • One more pattern:

  • Quotes from sources

    Pay attention: This may run contrary to every SEO-minded person who just unlearned the text tricks to make a page rise in Google ranks.

    News should include source quotes. Not every news story will; however, the most effective articles will have direct quotes from sources. As articles are shared and reposted, those quotes largely persist intact.

    Fake news features entirely original quotes, too.

    Quotes form a mitochondrial DNA for any news thread and topic -- and computers know how to track large chunks of copied text.

    When it comes to identifying fake news, track the quotes.

    Are the quotes posted on other trusted sites?
    Are the quotes original? If so, is the post on a trusted source?
    In the entire news thread featuring a particular quote, has no single, trusted source posted it?

    Quotes may be the muddy footprints that lead to the final solution.
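A minimal sketch of that quote tracking: pull quoted spans from an article and check whether any trusted outlet in a (hypothetical) corpus carries the same quote. A quote that appears nowhere trusted is a signal, not a verdict:

    import re

    def extract_quotes(text, min_words=5):
        """Return quoted spans of at least min_words words."""
        spans = re.findall(r'[“"]([^”"]+)[”"]', text)
        return [s.strip() for s in spans if len(s.split()) >= min_words]

    def quote_trail(article_text, trusted_corpus):
        """Map each quote to the trusted sources whose text also contains it."""
        report = {}
        for q in extract_quotes(article_text):
            report[q] = [src for src, body in trusted_corpus.items()
                         if q.lower() in body.lower()]
        return report

    # Hypothetical corpus keyed by outlet name.
    corpus = {"AP": "... he said the investigation will continue into next year ..."}
    article = 'Sources say "the investigation will continue into next year" and more.'
    print(quote_trail(article, corpus))  # the quote traces back to a trusted site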



Ad ecosystem

  • Make Facebook disclose numbers from its ad views more regularly. They should also include the manner in which they are calculating those numbers, i.e. x = users who watched a video for 10 seconds or more. Make these particular processes auditable by third parties. --@RobynCaplan, @datasociety
  • Create public database of fake news websites to help advertisers to prevent their ad from showing on sites that can damage their reputation. --@filip_struharik

  • The list is created by an evaluation committee consisting of teachers, lecturers, journalists and marketing specialists

  • Advertisers can use this list to exclude fake news websites from their Google AdWords campaigns

  • Serve up ads from verified professional news organizations that display factual stories on the same topic. (They can do it with my shoe shopping, why can’t they do it with my news content?) Google  is making this effort. --@janeeliz
  • Only problem is that news organizations are broke as it is  

Related:

161117 - NBC News
Google Think Tank launches new weapon in fight against ISIS


From the article:

“In traditional targeted advertising, a new mother searching for information on Google about getting a baby to sleep might start seeing ads for blankets and white-noise machines in their feeds.

Through Redirect, someone searching for details about life as an ISIS fighter might be offered links to independently produced videos that detail hardships and dangers instead of the stirring Madison Avenue-style propaganda the terror group puts online.”

  • Analyze the quality of the ads on the news site and create a trust metric. May also be able to create an ideology metric. The wider the reach of the advertising, the higher the trust metric. Check for spam advertising. - @iomcoi

  • The single best piece I've read this week on Fake News architecture and probable solution - lies with advertisers
    MT Emily Bell @emilybell, Director of Tow Centre for Digital Journalism at Columbia J School 


The issue of clickbaiting, a way of getting users to click on links that lead to other websites, has been a problem in America since the advancement of technology and the popularization of the internet. The ever-growing phenomenon of social media has not alleviated this issue; it has made it much worse. I am less likely to go on Facebook now because I will come into contact with ignorance spreading more ignorance. Relatives of mine and individuals who are older and do not know any better about technology are typically victims of fake news. But fake news has been around since before my relatives were born. It all comes from yellow journalism. Most stories written for these papers had almost no research done. Attention-grabbing headlines were the key to selling more papers. Now, using this same formula, similar “publishers” online are being funded by corporations like Google for putting out these false articles. If funding were to be cut, then fake news could ultimately be phased out.

There is plenty of evidence of individuals outside the United States publishing false articles about United States officials. These individuals cannot be charged with slander or libel, because their servers reside outside the reach of American law. They profit from the American population and any general form of web traffic. I believe these individuals are aware of the American societal push for new and innovative technologies. Most citizens are merely following the trend. Older individuals, typically baby boomers, consider themselves “computer illiterate” but are unwilling to learn how the internet and computers work. My mother is unfortunately one of these individuals, and shares fake news constantly. These “publishers” cannot be touched by American laws, and they are baiting the weak-minded over the internet to gain web traffic through their articles. This act of using others for profit is completely unethical. For a society to progress, it must be presented with the truth. As of yet, the government has no reason to pass laws on fake news, because there has not been an event where the government itself suffered from fake news.

The best way to take control over the amount of fake news being spread is in the hands of private companies like Facebook and Google. With their large number of users, and how widespread this issue has been, they would have the right to stop fake news sources that cause harm to their users. Journalism was meant to deliver factual news that served the public’s needs instead of potentially causing harm or discomfort. Most fake news sites ultimately cause harm overall, as readers blindly fund these “publishers” merely by clicking on their links. I believe that Facebook, as well as Google, should cut off funding from sites that are commonly reported to be spreading fake news. In addition, there should be stricter rules for signing up with AdSense. One criterion should be that the owner/publisher of a website only receives funding if they reside in the same country as the companies that pay out ad revenue. Companies like Google would then not only be able to phase out foreign servers, but could also track any server in America, which could then be charged with slander or libel, depending on the severity of the falsehoods.

When it comes down to it, fake news originates with unethical individuals milking the weak-minded for ad revenue. The companies that fund these sites need to step up to the plate for this issue to be addressed. If they just overlook it, it has the potential to grow into something much worse, causing more harm for anyone who reads these articles down the line. I believe that setting criteria for receiving ad revenue can help alleviate this issue, especially if the ad revenue is land-locked.



161123 - NPR [Podcast] 
We tracked down a fake-news creator in the suburbs. Here's what we learned



More ideas…

Please note we are tagging contributions with certain [keywords] at this moment. As the document evolves, these will be transferred to specific topics already in place - @Media_ReDesign [16 Dec 16]

  • Check for domain/brand spoofing and flag/blacklist when discovered. Some of the worst fake news examples I’ve seen have been fake sites masquerading as credible news sites, including images of mainstream news organization logos, and domains like “washingtonpost.com.co” and “espn.com-magazine.online” (those examples are from a post by Ev Williams on Medium). It should be possible to detect logos, and then verify that links actually go to the sites identified by the logos. Another domain-based approach would be to compile a whitelist of credible news sites (hundreds, maybe low thousands), and then look for variations of those URLs used in news links which are, according to WHOIS, owned by someone else. For instance, with the aforementioned “washingtonpost.com.co,” though the domain is dead now, you can see from archive.org and reddit that it was full of conspiracy garbage and unaffiliated with the Washington Post. That could have been detected algorithmically based on the domain and registry info. Interestingly, that domain was transferred to the real Washington Post company on 11/18--I think their lawyers have been at work :-) --John McGrath (@wordie) (+Tamar) [fake domains]
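A minimal sketch of that look-alike-domain check; the whitelist is illustrative, and a production version would also consult WHOIS ownership, as suggested:

    from urllib.parse import urlparse

    WHITELIST = {"washingtonpost.com", "espn.com", "abcnews.go.com"}

    def spoof_suspect(url):
        host = urlparse(url).hostname or ""
        for real in WHITELIST:
            name = real.split(".")[0]           # e.g. "washingtonpost"
            if name in host and not (host == real or host.endswith("." + real)):
                return real                     # looks like it, isn't it
        return None

    print(spoof_suspect("http://washingtonpost.com.co/fake-story"))   # flagged
    print(spoof_suspect("https://www.washingtonpost.com/politics/"))  # clean -> None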

  • Work with existing structures, inside and outside of Facebook. Use already existing algorithms to filter, team up with credible sites, newspapers, independent fact-checkers. It’s in everyone’s interest.

  • A good model for fighting fake news is the war on spam. Spamhaus is a nonprofit that maintains a large blacklist of spammer IP addresses, which is used by most ISPs to block spam. A similar nonprofit (fakehaus?) could create a shared resource--a database of known fake sites and spoofed URLs--that could be used by Facebook, Twitter, and others. In general the security world is good inspiration--both blacklists and whitelists are useful to have in the toolbox (@wordie) [SPAM]

    161216 - Medium
    Lessons from the spam wars: Facebook’s fight to kill Fake News is just starting

  • Look at spam and other ways in which low-quality content is dampened. -z [SPAM]

  • Work with Google to get the news API working again. First check the credible [API]
  • Use existing corrections and critiques against fake news and misinformation. Connect them together in a central database which maps all such relations, and make that database accessible to everyone through an API, so Facebook, Google and Twitter can check every link against the database and get a list of critical responses to the link. Prototype already up and running at http://rbutr.com. See some example claim-rebuttal pairs here. [API] [models] [Central data base]

  • This amplifies the impact of the journalistic efforts of organisations like Snopes and PolitiFact etc. by ensuring the corrections get delivered to the scene of the misinformation/fake news - especially if Facebook et al. integrate the system into their feeds. (example)

  • This approach has a long-term benefit of promoting critical thinking over dogmatic belief, and avoids the problem of deciding who gets to decide what is and isn’t true. [Critical thinking]
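A minimal sketch of the platform-side lookup; the endpoint and the JSON response shape are hypothetical, not rbutr's actual API:

    import json
    from urllib.request import urlopen
    from urllib.parse import quote

    REBUTTAL_API = "https://example.org/rebuttals?url="  # hypothetical endpoint

    def rebuttals_for(link):
        """Return URLs of known rebuttals for a shared link (assumed JSON list)."""
        with urlopen(REBUTTAL_API + quote(link, safe="")) as resp:
            return json.load(resp)

    # A platform would call this at share time and render the critical
    # responses alongside the post, e.g.:
    #   for r in rebuttals_for("http://example.com/pope-endorses-trump"):
    #       attach_to_post(r)   # hypothetical platform hook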

  • People are lazy. To make something sustainable, we need to educate them about what critical thinking is. If an article contains a logical fallacy, an error in argumentation, or a missing source, journalists and fact-checkers alike can highlight that part of the article and explain what the problem is. This can be done with a Google Chrome extension. Users need only install a simple extension, and they will be educated over time whenever they land on a fake or misleading article. - Florin [critical thinking]

  • If we live in a post-fact world, why do we assume presenting more factual news will solve anything? Any source can be and has been dismissed by people who disagree with the analysis. Journalists are humans who research and offer an interpretation. Perhaps we leave all reporting to machine learning and cold presentation of facts. Many news stories are already being written by bots/programs -- maybe they will be trusted more than human reporters to present facts in a non-emotional way? -- AWood [bias] [critical thinking]

  • I think there has to be an editor role or a gatekeeper or some kind. It’s naive to leave that job up to anyone who comes in. Look at the Wikipedia model for guidance on how to police content while maintaining a communal/democratic function for knowledge collection. (@wbratko) +@linmart, @IntugGB [Wikipedia]

  • Compare meta tag titles to headlines/page titles for jump-the-shark-y facebook headlines - @mlambright

  • We have to start with the assumption that most people don’t read much, period. Many people simply read the headline and don’t even look at the source, click through, or read beyond the first paragraph. (This is how people ingest information now.) Can we enforce some text-normalization scan / software requirement that reduces sensationalism in headlines? Example: our HR department forces every job description through software called Textio that finds gendered text and suggests changes. (A sketch follows the next bullet.) -- AWood +damienwillis [Headlines]

  • Somewhat related to the above, there is often a disconnect between the headline and the content of the article. Adding a flag, not just for “article is false,” but “headline doesn’t match content” could be useful. - @hannahkane +@linmart, @IntugGB [Headlines]
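A minimal sketch combining this bullet with the Textio-style idea above: a crude sensationalism score plus a word-overlap test between headline and body. The heuristics are invented; a real system would use proper semantic similarity, but the shape of the signal is the same:

    import re

    def sensationalism(headline):
        words = headline.split()
        caps = sum(w.isupper() and len(w) > 1 for w in words)  # SHOUTED words
        bangs = headline.count("!")
        return (caps + bangs) / max(len(words), 1)

    def headline_body_overlap(headline, body):
        tokenize = lambda t: set(re.findall(r"[a-z']+", t.lower()))
        h, b = tokenize(headline), tokenize(body)
        return len(h & b) / max(len(h), 1)

    headline = "SHOCKING: Senator CAUGHT in Huge Scandal!!"
    body = "The senator voted on a routine appropriations bill on Tuesday."
    # High sensationalism plus low overlap -> candidate for a
    # "headline doesn't match content" flag.
    print(sensationalism(headline), headline_body_overlap(headline, body))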

  • Give Facebook sole editing control over the headlines so when something has been flagged or discovered as fake they can change the headline to signify it. [Headlines]

  • Verified Sources. Allow sources that do their own reporting to carry a special denotation, with a statement that while the content is not guaranteed to be true, it has more validity than blogs or punditry. Posts that are blogs or punditry should be marked. +1 Kyuubi10 [first person accounts] [blogs]

  • This is the same way they mitigated cat videos, clickbait headlines and listicles, which once (like fake news) played a disproportionate role on news feed
  • If you change the incentives, you change the content

  • Change what you measure: engagement or time-spent on site are distorting measures; people will still log on to a higher-quality Facebook. Measure, for example, how misinformed people become after Facebook exposure. -z

  • Survey users on which of the sites they share they find credible. --Eli


  • Quick note here: ‘Dark Social’- i.e. stories shared on messaging apps like whatsapp, are a massively important conduit for fake news, particularly relevant in India. They’re also the most difficult design problem, since they’re the most difficult to track.  +Rushi 
  • The underlying problem is that Facebook is a monopoly, so it cannot be contradicted. The only working solution is to solve this. -pvollebr

    Possible options:
  • Split them up
  • Give other parties the right to spread information, and people can subscribe. This will be an API and more
  • Integrate solutions directly into browsers
  • What would Snopes do?
  • Use the current focus on this issue as a chance to launch competing startups
  • Create a list of banned/approved websites; a bipartisan panel determines who is on it. Could potentially be crowdsourced. -@itsjoekent

  • Acquire a news organization that has an independent editorial staff, and provide a permanent page-top link to it. Have a legit media arm that is not crowdsourced. -MAB
  • Create a link quality metric. Many of the fake news sites (as opposed to the partisan sites, which are fine) have only been online for short periods of time. If a site is “low-quality” and is being shared virally, flag it for review.
  • Stop prioritizing images/memes over articles! They are reductive and more difficult to assess because the author usually isn’t clear. -Rohan

  • Objectively false news articles lead to shadow bans of that outlet, handled by Facebook support. Puts the onus on outlets to report factual information or be removed from the platform. - kaiuhl
  • Model how known true stories spread along the graph vs false news, train an algo to flag the latter for human review and/or place a ‘speed-brake’ on them (if not being done already, hard to tell, it’s a bit opaque) - Cam  +1 
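A minimal sketch of the shape of that pipeline, assuming scikit-learn is available. The propagation features and the labelled history are invented; this is not a tuned model:

    from sklearn.linear_model import LogisticRegression

    # Features: [shares_per_hour, share_concentration_0to1, mean_sharer_account_age_days]
    X = [
        [900, 0.95, 40],    # burst inside one narrow cohort, young accounts
        [850, 0.90, 55],
        [120, 0.30, 900],   # slow, broad spread from older accounts
        [200, 0.25, 1200],
    ]
    y = [1, 1, 0, 0]        # 1 = known fake, 0 = known true (labelled history)

    model = LogisticRegression().fit(X, y)
    candidate = [[700, 0.88, 60]]
    # Probability the candidate spreads the way known fakes do; above some
    # threshold, route it to human review or apply the 'speed-brake'.
    print(model.predict_proba(candidate)[0][1])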

  • Host stories on Facebook’s own platform rather than on their home domains so that if something is proven to be fake, FB can break the link so it won’t work anymore or go to a page that says “This was fake news.”
  • Approve sites. Perhaps automatically. If they publish garbage, unapprove. - Adam
  • Link posts with blogs or punditry with valid sources underneath, such as Politifact or Snopes.

  • Ban the most blatant news imposters by algorithmically cross checking the self-reported name of the news organization with the domain name of the site. I.E. filter out a fake news site hosted on abcnews.com.co that claims to be ABC news.- @iomcoi
  • Invoke a code of conduct. Even if a news source is deemed false, people will still believe it’s true, and if you disagree with them, typically there is no way to reason with them. Compare this behavior with some other online communities (like Reddit, which has subreddits that enforce codes of conduct): controversial conversations are better handled on Reddit because offenders know that if they aren’t civil, they will be banned. - Olivia (@dothecodeolivia)/@lpnotes
  • Pay corporate taxes / invest in education and other social services which make fake news less of a threat +@linmart

  • A downvote button on Facebook - Olivia (@dothecodeolivia)
  • Many of the Facebook-specific solutions outlined in this document (e.g., automated fact checking integration, quality flagging, etc) could be applied more broadly at the web browser level. Why do we hold the News Feed and Search responsible, rather than Chrome, Firefox, Safari, etc? - @SeanPWojcik
  • Recommended content around politicized issues should present opposing views, ideally from verified, trusted sources. There may be unintended consequences, but research shows this reduces confirmation bias, evaluation bias, and polarization. This RC can be incorporated into Newsfeed, Search, browser plug-ins. - @SeanPWojcik

  • Currently, users can only customize their newsfeed by blocking publishers (which is inefficient) or by blocking friends (which just feels bad). If there were a mechanism to dial up or down specific genres of content (e.g., politics, friend updates, photos, etc), this would combat the over-saturation of political content generally, and by extension, the over-saturation of fake and hyper-partisan news. - @SeanPWojcik
  • Ben Hanowell has made some efforts on truth scoring, with Malark-O-Meter in the 2012 elections and now Soundchecks @SoundCheks:

    Google Knowledge Graph & Vault, and web-aided truth scoring  

    I sent him an invite to the doc. - @ilse_ackerman +@linmart
  • One part of this problem lies in complex media ecosystems where it’s not just falsified news accounts, but also falsified citizen journalism and UGC, and where part of the challenge is for verifiable UGC and citizen accounts to stand out and help protect themselves vs. being contested (an example right now would be the back-and-forth on social media posts around documentation of hate crimes). So part of the solution here is how we enable citizen journalists/witnesses to help others/news outlets assess the verifiability of their media. One part of this is skills (how to create verifiable media, which WITNESS and others work on), one part of this is tools (in the hands of people, and scaled on platforms like Facebook), and a final part is the platforms supporting these media literacies.

    As a concrete tech example of something we’ve worked on at WITNESS (witness.org): we’ve worked with Guardian Project to build a one-click way that people can choose to add rich metadata, cryptographically sign and hash a photo or video they share (the InformaCam/CameraV project; the latest is a proof-mode version). Then it’s easier to confirm provenance. Obviously, to scale this you need ways for people to do this type of ‘proof’ on any cellphone camera and run the check on the platforms they share on (or have the platforms help support this role) - so there’s a Facebook/mobile OS role here as well. - Sam Gregory (@samgregory)
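A minimal sketch of the provenance idea, hashing the media bytes together with capture metadata so any later copy can be checked against the original record. Real systems like InformaCam use proper key-based signatures; the plain SHA-256 here is a deliberate simplification:

    import hashlib
    import json

    def provenance_record(media_bytes, metadata):
        digest = hashlib.sha256(
            media_bytes + json.dumps(metadata, sort_keys=True).encode()
        ).hexdigest()
        return {"metadata": metadata, "sha256": digest}

    def verify(media_bytes, record):
        return provenance_record(media_bytes, record["metadata"])["sha256"] == record["sha256"]

    photo = b"\x89PNG...raw bytes..."   # stand-in for a captured photo
    record = provenance_record(photo, {"time": "2016-11-20T14:02Z", "gps": [40.7, -74.0]})
    print(verify(photo, record))        # True; any edit to the bytes breaks the match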


  • Fund lawsuits. Fund researchers to find high-profile examples of winnable lawsuits, even if the plaintiff is going to have to prove actual malice because they’re a full public figure. Use crowdsourced litigation financing if necessary (and legal). See Floyd Abrams in The Hollywood Reporter. - Andria Krewson (@underoak) (+1 @wordie)

  • Diminish the financial incentive to generate fake news by creating more nuanced modes of engagement. We currently measure engagement in clicks, likes and shares. A more powerful metric would measure what users think about particular elements of news content. This could be achieved by developing a debating platform that acts as an overlay to articles: users can highlight a sentence, agree or disagree with it, and be invited to debate someone who took the opposite position. The idea is to gamify discourse (and so to nudge people into having discussions with those who disagree with them), but also to create clear incentives to produce better quality content by having more fine-tuned metrics for engagement. -Pawel Wargan (@pawelwargan)

  • Traditional newspapers separate news and opinion pages and visibly mark analysis pieces that are written by established columnists. Contemporary online news sources frequently blend news reporting, analysis and opinion.  Readers and FB posters do not know where the lines fall.  People frequently say “I can’t trust any media these days” because they read pieces with opinions different than their own, and they discount any original reporting that site may have as well.
  • A call to online news publishers: Start tagging your “OPINION” and “ANALYSIS” pieces more prominently, and separate them from news reporting.
  • Media literacy is going to be a big topic in the coming months. This is small, actionable step in the right direction. Education is important, but content sources can be doing more to help the process. - Michelle Wu.

  • Librarians are a trusted profession, and many would welcome a chance to participate in the human work of vetting news sources (“information literacy” and “media literacy” are already functional and documented skill sets, totally in their wheelhouse). What about a crowd-sourced rating system by librarians? Not for individual articles, but for sources in general (e.g. CNN, NYTimes, Des Moines Register, Mashable). Use reasonably objective criteria (e.g. site regularly provides author names and bios, headlines match substance of articles, site provides editorial and publisher info, the 5 W’s and H are present in the lead and match article substance, source clearly distinguishes between editorial, breaking story, investigative reporting, etc.), and make those criteria transparent, so that the experience is both informative (this site is suspect) and educational (here are criteria you should bring to judging a site’s reliability). (@steverunge) (+1 @wordie)
  • Paralogism detection: use text analysis to detect common logical fallacies/paralogisms








Please, for all postings, follow the sequence of ideas while we continue to classify by topics. Thank you ---Adm





From our email conversation

Note: Not clear who the contributors are, perhaps important to include given the title

Factmata

“Using Factmata, you should be able to verify statements on social media, article comments, articles and any text on the internet about important economic issues. If someone makes a statement about the number of teenagers without jobs going up, you should be able to view the latest ONS youth unemployment data; if its a claim that economic growth is collapsing and the economy is a mess, we will link you to the official GDP growth figures. As individual users use the tool to help them verify statements (by receiving relevant data and stats), the system will be able to provide a “truth score” on statements, by aggregating users’ votes.”

Ref: https://medium.com/factmata/whats-next-for-factmata-2df231bd6fe9
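A minimal sketch of that vote-aggregation step; the Laplace smoothing and the function name are assumptions for illustration, not Factmata's actual method:

    def truth_score(supported_votes, refuted_votes):
        """Smoothed share of votes saying the statement matches the data."""
        return (supported_votes + 1) / (supported_votes + refuted_votes + 2)

    print(truth_score(48, 5))   # ~0.89: broadly supported by voters
    print(truth_score(2, 40))   # ~0.07: broadly refuted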

FiB extension


"De, with Anant Goel, a freshman at Purdue University, and Mark Craft and Qinglin Chen, sophomores at the University of Illinois at Urbana-Champaign, built a Chrome browser extension that tags links in Facebook feeds as verified or not verified by taking into account factors such as the source’s credibility and cross-checking the content with other news stories. Where a post appears to be false, the plug-in will provide a summary of more credible information on the topic online.


They’ve called it FiB.


Since the students developed it in only a day and a half (and have classes and schoolwork to worry about), they’ve released it as an “open-source project,” asking anyone with development experience to help them improve it. The plugin is available for download to the public, but the demand was so great that their limited operation couldn’t handle it."


161118 - The Washington Post
Fake news on Facebook is a real problem. These college students came up with a fix in 36 hours

Taboola and Outbrain involvement


Feedback on our Taboola article:


"Traditional ads tend to be intrusive. Thereby blocking a majority of them hinders that. Taboola and outbrain are unfortunately helping spread misinformative/non-factual and/or politically biased articles through their sponsored links, something that in my opinion is worse than commercials for the sake of products/services. Keep that in mind for your "Acceptable Ads" program moving forward."

From 

161129 - AdExchanger
Ad Blocking declined in Germany; Advertisers are concerned with Snapchat video averages

Revcontent and RevDefender both serve Taboola-style ads using web sockets and are really, really hard to block.

"Facebook is a prominent distributor of online news, including the fake variety. Analytics company Jumpshot tracked Facebook referrals from more than 20 news sites from September to November, finding 80% of unique visitors to hyperpartisan news sites came from the platform, Poynter reports. Fake news site abcnews.com.co  can attribute 60% of total visits to Facebook during that period. Compare that to The New York Times and CNN, which attribute 20% and 11% of site visits to Facebook, respectively. Even as they’re cut from Facebook and Google exchanges, fake news sites are finding new purchase in content recommendation widgets like Taboola and Revcontent, which manufacture scale through long-tail networks."

The link to "content recommendation widgets":

 161128 - Digiday
'The underbelly of the internet': How content ad networks fund fake news

Also, a story about how Le Monde is trying to automate hunting down fake news:

 161126 - Digiday
How Le Monde is taking on fake news

"The plan is to build a hoax-busting database, which incorporates information on which sites are fake and which are verified, trusted sources, and readers can access via Google and Firefox Chrome extensions. The idea is that once a user has downloaded the extension, when they come across articles online a red flag will appear if the site or news is deemed fake, yellow if the source is unreliable or green if it’s ok.... Laurent has plans to expand the hoax-busting database beyond France’s borders. “Our goal is to be open source, so that everyone can use it. And we hope to have a bigger database by sharing our database of news sites and other fake news from other countries,” he added."



WikiTribune

UPDATE - 180228 

“The news is broken but we figured out how to fix it.”

– Jimmy Wales, Founder of Wikipedia

 

Wikipedia has endlessly been ridiculed as the poster child of ‘unreliable resources’ on the Internet. Ironically enough, in this day and age, Wikipedia proves to be not only one of the more accurate and informative resources on the internet, but also one of the few remaining ‘real’ and ‘trusted’ news outlets on an ever-dwindling list.

Jimmy Wales compares fake news to the email spam situation that was much more prevalent a few years ago. Today, email spam is minuscule in comparison to how much of an actual issue it was at its prime. Wales predicts that fake news will be infiltrated and banished in the years to come. However, unlike email spam, fake news has serious capabilities of altering public opinion (so much so that it could, you know, potentially throw a presidential election). The impact of fake news will not even be quantifiable until years have passed and historians can look back on what actually happened. Mark Zuckerberg himself is reluctant to admit that the sharing of these articles may have had extreme effects on public opinion. Wales cites the whole situation as an “asleep at the switch” case that probably led to some unfavorable voter suppression.

 

Wales cites advertising and ‘click-bait’ driven journalism as the single most detrimental piece of the fake news crisis. The monetary payout tied to the number of views and clicks an article can accumulate is tainting the quality and authenticity of information being shared. The payout is probably the little voice in the back of the minds of platforms like Facebook and Twitter, which are well aware of the circulation of fake news yet have not taken any serious actions to combat it.

 

In the beginning, Wikipedia was considered the epitome of unreliable. However, Jimmy Wales created a community of rather insightful and well-informed individuals who only seem to be getting more precise and scrutinizing about the accuracy of their information. It is interesting that Wikipedia is a site completely built on inviting anyone to edit and contribute to its pages, yet Wikipedia was not attacked by the fake news epidemic in the slightest. Have you ever noticed that Wikipedia is an advertisement-free website?

 

In light of the growing dilemma of Fake News, Wikipedia’s founder Jimmy Wales offers a solution. Much like this doc itself, Wikipedia provides a platform for open editing that allows a constant flow of new and changing information to be checked and elaborated on by any number of contributors. Because Wikipedia has been built around the idea of providing accurate information and constantly reassuring the exactitude of concepts being shared, Wales has created an arsenal of professional fact-checkers. From this concept a brilliant idea was born.

 

People seek out “very basic, very straight presented facts”. This is what has made Wikipedia so successful, and why Wikipedia was not infiltrated by fake news reports. The community of Wikipedia is constantly checking its own information to make sure it is as straightforward and accurate as possible. People seek out information on Wikipedia, whereas on a platform like Facebook information is simply presented to you, and you more or less have to take it at face value (pun completely intended).

 

So how do we present news in a basic, straightforward manner? Jimmy Wales is in the process of creating a news empire called WikiTribune, built very much the way Wikipedia is constructed. Wales contends we start by completely stripping the advertisements from online journalism. If we take advertisements and click-bait out of the journalism equation altogether, we take away the monetary reward that is driving such ridiculous articles to transpire. He is also attempting to build the company from the grassroots, raising funds for the startup on a donation basis. This way, sponsors have no way of pushing their own agendas by looming their money over the heads of the company. The all-encompassing idea of this form of news delivery is “by the people, for the people”. Wales wants to partner professional journalists with the public to determine what news needs to be reported on, and then deliver that information to broadcasting stations. If we build an insightful, articulate community, we have the potential to reshape public opinion with truth.

Check out WikiTribune, which has been growing its platform since April 2017:

https://www.wikitribune.com/

Provided below are some videos of Jimmy Wales further explaining his mission:


Up Front (13 October 2017)
How do you fight back against fake news? AlJazeera. Retrieved: 27 February 2018

 


https://www.youtube.com/watch?v=nho5NaLzc5Q

 

https://www.youtube.com/watch?v=buO_lk0fHwM


Verified Trail + Trust Rating

Thread by Kyuubi10 (29/11/2016)

After posting my ramblings in “More ideas...” I decided to reorganise them in a more comprehensible format.


I’d like to expand on the notion of verifying content and provide an expandable structure which would be both efficient and cost-effective.


The common ideas provided so far covered automation, crowd-sourcing and machine learning very well. What I have done is simply pull from existing ideas and improve upon them.

  • Let’s begin with singling out the impossible. Verifying content is impossible. Too much content, from too many sources and not enough people, even if crowd-sourcing is considered. +@linmart

    While verifying content is not possible, content can receive a “rating” and category tags which will help other readers to identify the probability of the content being accurate or factual. This can be done both through my process, and through crowdsourcing and machine learning.
  • Second, it is important to identify what can be “verified”. Some ideas thrown around were websites, content, people etc…

    As defined in the first bullet point, content cannot be verified, which leaves us with websites and people. I believe that is also wrong, since they provide no simple structure for the flow of information, and therefore no simple way for automation to track who could be verified and who should be stripped of verified status.
  • Third, my idea is using the flow of information itself to give us clues into how to use the verification method effectively.
    Information/content will always have a source and an outlet. The source can be the origin of the information or an independent distributor... such as a news company, or a blogger etc… This creates the idea that you could be a trustworthy distributor of information without being a source yourself, while a source doesn’t have to distribute information but can provide NEW information.
  • Verification shouldn’t be limited to one type, but it should be compliant with the steps in the information trail. This would mean that while content wouldn’t be verified, news companies could become verified outlets, and certain people or websites could be classed as verified sources.

    This will avoid the monopoly on truth by news companies by not allowing them to be their own sources, and it will limit the amount of content having to be checked for facts to content released by sources only.

How to get verified?

  • Begin by verifying only sources. Teach news outlets how to use this new system, and to utilize sources appropriately.
    Sources will be verified by people and volunteers trained in fact checking content. After a source becomes verified, future fact checking can be fully crowd sourced.
  • Certain people/websites can be automatically classed as verified sources: spokespeople or websites used to announce new movies, books, music or products could be eligible to become a verified source straight away.
    The same applies to social media accounts.

  • For outlets to be verified they need to consistently distribute content from verified sources.
  • In order to automate the process of adding new sources, if multiple verified outlets publish content from the same source, that source will be considered as eligible to become a verified source.


How will the trust rating work?

  • The trust rating for content will be automatically calculated based on the trail of the information, therefore a verified trail will result in trustworthy information.
  • Trust Rating for sources will be based on how accurate their original content is, and how often the content is factual.
    This could be done in a 2 point system per content, if the content is factual it receives 1 point, if it is also accurate it receives another point. Content with 2 points is both factual and accurate. While content which is worth 1 point is either factual or accurate. Once a large enough number of content is fact-checked and awarded points, the totals can be averaged, and 2 = 100%.
  • Trust Rating for outlets will be based on the percentage of their content that has verified sources.
    Independent Journalists will also be classed as outlets.
  • Trust Rating for content will be divided 60:40 between source and outlet trust ratings, putting more weight on the source than the outlet. As long as a source is trustworthy, any outlet can distribute factual information. (See the sketch below.)
  • Content can be tagged by the source and/or outlet. If content is not tagged, or tagged incorrectly then the trust rating of the content and the originator of the tags will be penalised. Tags consist of things that describe what type of content the article is.
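A minimal sketch of the arithmetic above: sources are averaged on the 2-point (factual + accurate) scale with 2 = 100%, outlets on the share of their content with verified sources, and content is weighted 60:40 source:outlet:

    def source_rating(points_per_item):
        """Each fact-checked item earns 0-2 points; 2 points maps to 100%."""
        if not points_per_item:
            return 0.0
        return sum(points_per_item) / (2 * len(points_per_item))

    def outlet_rating(items_with_verified_sources, total_items):
        return items_with_verified_sources / max(total_items, 1)

    def content_rating(source, outlet):
        return 0.6 * source + 0.4 * outlet   # the 60:40 split described above

    src = source_rating([2, 2, 1, 2])   # mostly factual and accurate: 0.875
    out = outlet_rating(90, 100)        # 90% of content has verified sources
    print(content_rating(src, out))     # 0.885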


Another direction

I'm going to go out on a limb and say this is exactly the wrong direction to go in. The solution to a divided America lies in more politics and more debate, not figuring out how to design away what you disagree with.

Consider that the top "true" mainstream stories referenced in this often-cited BuzzFeed analysis all have a big partisan slant:

161116 - BuzzFeed
This analysis shows how fake election news stories outperformed real news on Facebook

The pro-Trump crowd has a point when they note that the vast, vast majority of the mainstream media voted against him and ask how it can be truly fair and objective.

Sure, that is not an excuse to draw an equivalence between the WaPo and some random tweet on the internet that gets picked up by infonews or whatever Pravda-esque news outlet the pro-Trump crowd uses today.

Neither should that be an excuse to shut down debate through well-intentioned though ultimately profoundly misguided design solutions.

Those of us that disagree with Trump need to win the war of ideas. Here is my two cents in that vein:

161124 - Calbuzz
Op Ed: Young man’s hope for spirit of California

- Patrick Atwater @patwater


Bias Dynamics & Mental Models

Thread by @thomasoduffy

  • Selective emphasis of some news (real or fake) distorts perception and introduces/amplifies biases as much as fake news.
  • The fake news problem must be addressed by considering “How to make people smarter” by helping them correct their biases and form higher quality mental models.  In theory, algorithms can help with this too.
  • Humans prioritise their attention, which influences the information they input, based on how their Reticular Activating System (the brain's auto-focus, auto-notice mechanism that, once primed by focused attention, highlights some things while ignoring others) is programmed and primed. Fake news works at an emotional level. To make people smarter, we need to codify how news primes people to input information. This may lead towards a superset of fake-news vulnerabilities that can be patched by the prioritised syndication of information that corrects their biases.
  • Fake news is only a problem if people don’t recognise it as “obviously fake”.  We need to upgrade people’s skills in understanding what is true or false.  This could be achieved by prioritising syndication of content that helps them learn how to know the difference.   On large platforms, people who seem to respond to and buy into fake news, could be tested for vulnerability to fake news, and then presented information that helps them learn to know the difference.
  • Facebook doesn’t have to be the “arbiter of truth” to be an “arbiter of intentionally-false or misleading”.  It should be possible to calculate what intentionally-false or misleading information people have bought into, initiated or amplified via Facebook, based upon their patterns of engagement.  This has to work by algorithmically calculating their mental-models and biases.
  • Behaviourally, we need to screen against people misclassifying false information as true or helping them correct their misunderstanding if they have.
  • Content that helps people form empirically proportionate views of a subject area or domain should be more visible alongside the news.  This gives people the chance to interpret the info with better deference to the big picture, and makes the big picture more visible.  This can limit skewed prioritisation of minor data points, by real or fake news.


Neuroscience of philosophical contentions

Thread by @thomasoduffy

It is important to recognise the biological and neurological factors that underlie a person’s ability to consider points of view that are different to their own.  

Biologically, when a person’s fight or flight system is activated, their brain seems to shut down their philosophical centers and render them more certain about what seems to be true according to their present mental model.  This seems to be an evolutionary protective function; under threat, deep uncertain thought is unsafe thus in fight or flight mode, a person’s brain optimises them to take rapid action to run away or fight, requiring certainty upon which action can be taken… where survival depends on winning or exiting, not deep thinking in that moment.  

Experientially, many stressed people get “stuck in mindsets” they can’t unravel or exit from through rational thought or even qualified information alone.  But later, on holiday, typically when sufficiently relaxed, their brain changes mode (increasing the depth of their parasympathetic access) which in turn allows them to feel safe and uncertain, the minimum viable brain state to contemplate views different to their own and to change their thinking.  When a person is relaxed and mindful, they have more neuroplasticity - the capacity to change their mind.

However, when a person is traumatised, i.e. exposed to extreme events, their neural pathways get phosphorylated, and this "stuck in mindset" factor can be amplified. Many people who are traumatised seem to carry a mindset through their life unless they successfully deprogramme that trauma, which is not trivial, nor does it seem to happen under ordinary life conditions.

The second part of this is to recognise that the whole media system makes money by triggering people's alert systems - i.e. tricking them into paying attention for their own safety. Thus the media keeps many people in various degrees of fight-or-flight / threat perception, which has the side effect of keeping them more stuck in mindsets and unable to autocorrect from fake news they have taken in. In order to think differently or correct their biases, a greater proportion of parasympathetic access (the nervous system configuration that is the opposite of fight or flight) and neuroplasticity is required, in tandem with educational journeys that guide people to think more intelligently, rationally and accurately.

(note: maybe a qualified neurobiologist can confirm my chemistry, where the principles are correct)

 


Pattern & Patch Vulnerabilities to Fake News

Thread by @thomasoduffy

We must be equally cognisant of how specific classes of people end up being exposed to, inputting, reacting to and re-transmitting fake news or derivations of fake news signals. We need to think about this in terms of different kinds of people and their relationship with the media they consume and the channels/platforms/device types from which they consume it. This can help us get our heads around the reality of the problem, beyond over-generalising from our own limited personal experience and behaviours.

We need simple rules-of-thumb that help bring clarity to the issue…
for example: Does an individual’s behaviour marginally increase or reduce the quantity of fake news information, authority, reach and transmission in the World?

Just as you can make pretty good choices about food using metrics like the ratio of calories to nutrients, we need design principles that scale up to increase the ratio and authority of accurate news, accurate thinking about a multiplicity of information sources, and the whole lifecycle of news-related behaviours.   ++Kyuubi10

  • We need to understand, for different news types, the ratios of headline visibility to partial-article completion to whole-article readership. For example, the ratio of headlines read (across different traffic channels) to excerpts read, part-articles read, and whole articles read. This can allow us to weigh the cost of fake news in a headline versus in a long-form article.

  • If we can model the permutations of news sources, for example, a specific subset of 20 year olds might average 60 minutes smartphone usage, 45 minutes TV including average exposure to 10 minutes of news, plus 10 minutes exposure to news headlines from print publications, for a specific age-class… we can start to compute the bias confirming or bias questioning signals.

  • We need to craft user narratives and day-in-the-life of different types of people including their passive and active exposure to news sources, and ratifying or conflicting interactions.  This can help us zone in on sets of people with specific vulnerabilities to specific kinds of fake news so we can focus attention to solve it for them.

  • As part of this, we need to get a gist of prominence and authority.  For example, certain classes of people may take stories reported on TV as being more prominent than stories on social media channels.  

    The outcome of this should be enough clarity that we can reasonably compute how to fool someone, according to their patterns of news consumption. If we can compute how to fool someone, we should be able to compute how to make it harder to fool them, or prioritise our attention so they are less misled, making the reduction of fake news a resource-efficient endeavour. (A toy model along these lines is sketched below.)
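As a toy illustration of the modelling idea in these bullets, the Python sketch below represents a person's daily news diet as minutes of exposure per channel, weights each channel by an assumed prominence factor, and scores how bias-confirming the overall diet is. Every field, weight and number here is an invented assumption, not a measured value.

from dataclasses import dataclass

# Assumed prominence weights: how authoritative each channel feels to
# this class of user (illustrative, not measured).
PROMINENCE = {"tv": 1.5, "print": 1.2, "social": 1.0}

@dataclass
class Exposure:
    channel: str       # "tv", "print" or "social"
    minutes: float     # daily minutes of news exposure on this channel
    confirming: float  # fraction of that exposure that confirms existing biases

def vulnerability(exposures: list[Exposure]) -> float:
    """Prominence-weighted share of bias-confirming exposure (0..1).
    Higher values suggest a consumption pattern that is easier to fool."""
    weighted = [(e.minutes * PROMINENCE[e.channel], e.confirming) for e in exposures]
    total = sum(w for w, _ in weighted)
    return sum(w * c for w, c in weighted) / total if total else 0.0

# The 20-year-old profile sketched above: 60 min smartphone, 10 min TV
# news, 10 min print headlines (confirming fractions are invented).
profile = [Exposure("social", 60, 0.8), Exposure("tv", 10, 0.5), Exposure("print", 10, 0.4)]
print(round(vulnerability(profile), 2))  # 0.69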

The problem with “Fake News”

        How many times over this past year have you heard the term “Fake news” tossed around?  Most likely an uncountable number of times and over a vast variety of subjects.  Fake news has become so prevalent in our society that many of us don’t go a single day without a piece of fake news sneaking its way onto our computers, our smartphones, or our televisions.  This creates both ethical and legal issues that ripple through our society and ultimately take away from the medium of journalism as a whole.  

        The term “Fake News” refers to the act of knowingly writing or distributing falsities that assert themselves as fact. The easiest, and thus most popular, way to spread fake news is by sharing these misleading articles on Facebook, Twitter, or a plethora of other social media sites. These articles spread quickly, and because many readers have no reason to believe the information may be false, they take every line as gospel. So why does this matter?

        According to usnews.com, over the past year team LEWIS (which defines fake news as an "ambiguous term for a type of journalism which fabricates materials to perpetuate a preferred narrative suitable for political, financial, or attention-seeking gain") conducted a study that sought to assess the overall effect fake news has had on people's views of American news and brands. The study found that only 42% of millennials checked the credibility of publications, a figure that sank to 25% for baby boomers. This means that 58% of millennials and 75% of baby boomers may be fed false information on an almost daily basis and accept it as truth. This is a problem for a number of reasons.

        Usnews.com states that one of the main reasons fake news is spread is to promote political ideologies, and this is where all of this seriously matters. Let's say you get most of your news about a politician from Facebook, and let's say that at least 60% of what you're reading is either entirely false or misleading. You could then be voting for a candidate who totally appeals to you when in reality they might be the exact opposite of what you're looking for, because you were pushed and misled by the media. Now imagine this didn't only happen to you, but to half, or even a quarter, of the people who voted for this candidate. This is a direct consequence of fake news, and it is very scary.

       Now it's not that we don't have the tools to fact-check these articles ourselves; in fact, it's really not very difficult to determine the credibility of an article if you know what to look for. The major problem is that many people have no reason to believe they're being misled, especially the older generations. People tend to read an article, or even just its title, and have it stick with them, and at some point they'll share it in conversation with friends, because it was so gripping that they wouldn't want it to be fake in the first place, which deters them from even thinking to check.

       The ethical problems that arise because of fake news are significant and have a real-life impact. The people who push these articles have very little to answer for, as it is almost impossible to police all media in an attempt to fight this sort of thing. The best defence against fake media is to check everything before preaching it as truth. Scrutinize your sources, and know that there is a chance that anything you read online is false. That's all we've got until AI can start determining whether a post is fake news.



Surprising Validators

Related to cross-partisan / cross-spectrum notes above - Richard Reisman (@rreisman)

See [Update] “A Cognitive Immune System for Social Media” based on “Augmenting the Wisdom of Crowds” below

This outlines some promising strategies for making the filter bubble more smartly permeable and making the echo chamber smarter about what it echoes. Summarizing from my 2012 blog post: Filtering for Serendipity — Extremism, “Filter Bubbles” and “Surprising Validators”:

  • Balanced information may actually inflame extreme views — that is the counter-intuitive suggestion in a NY Times op-ed by Cass Sunstein, “Breaking Up the Echo” (9/17/12). Sunstein is drawing on some very interesting research, and this points toward an important new direction for our media systems. Sunstein’s suggestion is that what we need are what he calls “surprising validators,” people one gives credence to who suggest one’s view might be wrong. While all media and public discourse can try to leverage this insight, an even greater opportunity is for electronic media services to exploit the insight that “what matters most may be not what is said, but who, exactly, is saying it.”

...Quoting Sunstein:

People tend to dismiss information that would falsify their convictions. But they may reconsider if the information comes from a source they cannot dismiss. People are most likely to find a source credible if they closely identify with it or begin in essential agreement with it. In such cases, their reaction is not, “how predictable and uninformative that someone like that would think something so evil and foolish,” but instead, “if someone like that disagrees with me, maybe I had better rethink.”

Our initial convictions are more apt to be shaken if it’s not easy to dismiss the source as biased, confused, self-interested or simply mistaken. This is one reason that seemingly irrelevant characteristics, like appearance, or taste in food and drink, can have a big impact on credibility. Such characteristics can suggest that the validators are in fact surprising — that they are “like” the people to whom they are speaking.

It follows that turncoats, real or apparent, can be immensely persuasive. If civil rights leaders oppose affirmative action, or if well-known climate change skeptics say that they were wrong, people are more likely to change their views.

Here, then, is a lesson for all those who provide information. What matters most may be not what is said, but who, exactly, is saying it.

… My post picked up on that:

This struck a chord with me, as something to build on. Applying the idea of “surprising validators” (people who can make us think again):

  • The media and social network systems that are personalized to serve each of us can understand who says what, who I identify and agree with in a given domain, and when a person I respect holds views that are different from views that I have expressed that I might be wrong about. Such people may be “friends” in my social network, or distant figures that I am known to consider wise. (Of course it is the friends I consider wise, not those I like but view as misguided, that need to be identified and leveraged.)
  • By alerting me that people I identify and agree with think differently on a given point, such systems can make me think again — if not to change my mind, at least to consider the idea that reasonable people can differ on this point.
  • Such an approach could build on the related efforts for systems that recognize disagreement and suggest balance noted above. …But as Sunstein suggests, the trick is to focus on the surprising validators.
  • Surprising validators can be identified in terms of a variety of dimensions of values, beliefs, tastes, and stature that can be sensed and algorithmically categorized (both overall and by subject domain). In this way the voices for balance who are most likely to be given credence by each individual can be selectively raised to their attention.
  • Such surprising validations (or reasons to re-think) might be flagged as such, to further aid people in being alert to the blinders of biased assimilation and to counter foolish polarization.
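As a minimal sketch of the selection logic described in these bullets (assuming per-person agreement scores have already been sensed and computed), the Python below surfaces voices the user broadly trusts who disagree with them on a specific claim. The data shapes, names and the 0.7 threshold are illustrative assumptions, not part of the original posts.

# Toy "surprising validators" selection: trusted voices who disagree.

def surprising_validators(user_stance: int,
                          affinity: dict[str, float],
                          stances: dict[str, int],
                          min_affinity: float = 0.7) -> list[str]:
    """Return people the user generally agrees with (high affinity)
    whose stance on this claim (+1/-1) differs from the user's."""
    return sorted(
        (person for person, stance in stances.items()
         if stance != user_stance and affinity.get(person, 0.0) >= min_affinity),
        key=lambda p: -affinity[p])

# Example: the user believes the claim (+1); Alice is highly trusted
# and disagrees, so she is the validator most likely to be heard.
affinity = {"alice": 0.9, "bob": 0.3, "carol": 0.8}
stances = {"alice": -1, "bob": -1, "carol": +1}
print(surprising_validators(+1, affinity, stances))  # ['alice']

Bob disagrees too, but with low affinity he is easy to dismiss; only the trusted dissenter is raised to the user's attention.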


This provides a specific, practical method for directly countering the worst aspects of the echo chambers and filter bubbles…


This offers a way to more intelligently shape the “wisdom of crowds,” a process that could become a powerful force for moderation, balance, and mutual understanding.
We need not just to make our “filter bubbles” more permeable, but much like a living cell, we need to engineer a semi-permeable membrane that is very smart about what it does or does not filter.


Applying this kind of strategy to conventional discourse would be complex and difficult to do without pervasive computer support, but within our electronic filters (topical news filters and recommenders, social network services, etc.) this is just another level of algorithm. Just as Google took old academic ideas about hubs and authority, and applied these seemingly subtle and insignificant signals to make search engines significantly more relevant, new kinds of filter services can use the subtle signals of surprising validators (and surprising combinators) to make our filters more wisely permeable.


(My original post also suggested broader strategies for managed serendipity: “with surprising validators we have a model that may be extended more broadly — focused not on disputes, but on crossing other kinds of boundaries — based on who else has made a similar crossing…”)

(Update: my 12/15/16 post adds broader and more current context to my original Surprising Validators post: 2016: Fake News, Echo Chambers, Filter Bubbles and the "De-Augmentation" of Our Intellect)



- Richard Reisman (@rreisman) (#SurprisingValidators)


[Update] “A Cognitive Immune System for Social Media” based on “Augmenting the Wisdom of Crowds”

(Expanding on Surprising Validators, above  -- added 11/1/18) -- Richard Reisman (@rreisman)

Here are links to work on a broad architecture for “Augmenting the Wisdom of Crowds” to create “A Cognitive Immune System for Social Media,” drawing on a nuanced view of “The Tao of Truth.” This draws on work detailed in 2002-3 and added to in the past few years (reinforced by recent work and meetings on “fake news” and disinformation).

It builds a reputation system that includes both external and algorithmically emergent authority for both news sources and those who disseminate and comment on them. Much like Google PageRank, it is recursively based on “Rate the Raters and Weight the Ratings”: raters and ratings with high imputed authority are weighted more heavily than those with lower imputed authority. Thus sources and raters have only as much weight as other sources and raters give them. (A toy power-iteration sketch follows.)
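A compact, hypothetical sketch of that recursive idea in Python: authority computed by power iteration over mutual ratings, much as PageRank iterates over links. The rating matrix, damping value and iteration count are illustrative assumptions, not part of the cited work.

# Toy "rate the raters and weight the ratings" fixed point.
# ratings[a][b] is how rater a scores source/rater b, in 0..1.

def imputed_authority(ratings: dict[str, dict[str, float]],
                      iterations: int = 50, damping: float = 0.85) -> dict[str, float]:
    nodes = list(ratings)
    auth = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iterations):
        new = {}
        for n in nodes:
            # A node's authority is the authority-weighted average of the
            # ratings it receives: high-authority raters count for more.
            received = [(auth[r], ratings[r].get(n, 0.0)) for r in nodes if r != n]
            total_w = sum(w for w, _ in received) or 1.0
            score = sum(w * v for w, v in received) / total_w
            new[n] = (1 - damping) / len(nodes) + damping * score
        norm = sum(new.values())
        auth = {n: v / norm for n, v in new.items()}
    return auth

ratings = {
    "wire_service": {"blog": 0.2, "fact_checker": 0.9},
    "fact_checker": {"wire_service": 0.9, "blog": 0.1},
    "blog":         {"wire_service": 0.4, "fact_checker": 0.3},
}
print(imputed_authority(ratings))  # the mutually-endorsed pair dominates

The point of the recursion is that a cluster of low-authority accounts rating each other highly cannot mint authority for itself: their ratings carry only the weight the rest of the network gives them.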

The main body of ideas on such an architecture is in these posts:

  • The Tao of Fake News – on the limits of experts, moderators, and rating agencies --and the need for augmented wisdom of the crowd as essential to maintaining our democratic/enlightenment values.

A separate but related body of innovative work is on a strategy for changing our social media business models -- to remove the inherent disincentives (the ad model) that have led the platforms to promote disinformation when they should be down-ranking it.  Introductions to that work are in this post and journal article:

(Currently a pro-bono effort.)

- Richard Reisman (@rreisman)

(#AugmWoC, #WisdomOfTheCrowd: #RateTheRaters #SurprisingValidators)


Snopes

  • Partner up with Snopes to see if a particular story/ part thereof has been debunked by Snopes. - Nitin

  • I agree with the Snopes idea (+Politifact etc). It's hard to rely on the (un-cultivated) wisdom of crowds for truth. The very fact that false news is widely shared shows that it isn't working. Sometimes it needs some proper legwork. But I would also be wary of censorship. If people want to share false news, I don't know that we should be stopping them. But if news came with 'traffic lights' (red for dismissed by fact-checking sites), that in itself would be a disincentive to share them. I'd also provide a feed to Snopes etc. of which stories were trending, allowing them to fact-check the potentially most dangerous stories quicker - Hopping_Rhino

    >> “Wisdom of the crowds”. Where would Quora come in?
  • There is benefit to removing false stories from Facebook. People ignore ads quite easily, and they will ignore 'red flags' or 'supporting Snopes links' just as easily and will continue sharing. I've seen it on WhatsApp. You tell someone in one group that the image they've shared is fake, and two minutes later they're sharing it elsewhere. So the better option is to verify veracity and remove - Nitin

  • @Nitin, It is my understanding that the aim is to inform rather than cater.
    The problem with removing content is that it can easily enable irreversible censorship, where not even crowdsourcing can double check content. --Kyuubi10

  • Unfortunately those who tend to fall for fake news believe Snopes is a biased outlet - [source?]

  • They don’t need to be told FB is tying up with Snopes. If they’re told, they’ll know Snopes is biased. FB doesn’t tell us 99% of things it does in the backend, this’ll be one of them - Nitin
  • Please, be careful of encouraging “Tailored” news, where an algorithm or person decides what we can or cannot see. It can quickly turn into spoonfeeding propaganda to the gullible masses. --Kyuubi10 (+KP)

  • There is a potential to make this less ‘binary’ - visually show how many sources mark the article as suspect. (@benjaminellis)
  • @benjaminellis, can you please explain this a little better? (I think I found the issue. A hyphen was missing there :D Maybe I should remove this comment now?) -
  • I agree with the Snopes idea. I would prefer color coding
    [4 OPTIONS]:
    1 - unverified
    2 - verified and true
    3 - verified and false
    4 - opinion
  • The distinction between facts and opinion is blurred on the news feed. FB needs to establish that distinction.

  • Trending news should never surface unverified or verified-and-false stories as top news. +Kyuubi10 +Manu

  • The Snopes idea is a great one, and also adding in other cross-referenced sources, if only to run a check prior to posting, saying: "Are you sure that you want to post this? According to XYZ sources, it looks like it might not be credible."

    This would still allow the user to post what they want, but might curb the overall spread of false information -- @mistrchristophr

  • Remove the filter bubble. My feed is littered with fake news right now from various groups. I should be able to choose who & what I want to see in my feed.
  • You more or less can. I have blocked a large percentage of the news feeds people share. Noise has no space on FB, unless you want noise.
  • If they share another page's post, you can hide that page from your feed, though on mobile that is problematic if the post is a share of another person sharing the fake news (desktop/web gives you a dropdown of which element/person you want to hide from your feed; mobile doesn't). More problematic: if someone posts a link to a fake news site as an original post (rather than sharing another post), there doesn't seem to be any convenient filter mechanism to blacklist links/sites in original posts.

  • Just as a meta point on the reliance on fact-checking for some of the possibilities mentioned in the pages above -- and this from someone in an organization that will participate in the effort -- not everyone agrees on what counts as a reliable organization for fact-checking a piece of news.

    This American Life’s “Seriously?” Episode (the Prologue and Part 1 are key for this) has an excerpt from Rush Limbaugh, for example, where he explains why his listeners should discount fact checking:

    599: Seriously?  [1:03:38] ++@IntugGB

    From the site:

    Every day until Election Day, we'll have a new Chris Ware & John Kuramoto animation.
    See them all.

    Watch the video of Leslie Odom, Jr. singing "Seriously," written by Sara Bareilles.

    The internet version of this episode contains curse words. If you prefer, here is a
    bleeped version.

    --Connie

Ideas in Spanish - Case Study: Mexico

I am adding some ideas in Spanish, but I do believe that there is a way to identify fake news by analyzing its patterns of propagation. I'll be glad if someone can translate. >> @LoQueSigue

  • Most fake news leaves a trace of how the information was shared. The sharing activity lets us examine how a story propagated and how genuinely interested people are in the topic. Fake news has a distinct propagation network, with unique qualities. For example: sometimes when an article is fake, nobody shares it except the people who were paid to spread it. In that case, the network traced by the shares contains just a few communities, in patterns with a distinct network shape. When people are genuinely interested in an article the network topology is different: there are a lot of communities supporting that idea.


  • If true, fake news can also be identified by checking the topology of the graph along which it was shared. There is another element that can improve this heuristic: the reactions. If we can combine the topologies of network graphs with the corresponding reactions, maybe we can design an algorithm that detects when a fake network is beginning. (A rough sketch follows below.)
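A rough sketch of the heuristic, under the strong simplifying assumption that "communities" can be approximated by connected components of the share graph; a real system would use proper community detection plus the reaction signals mentioned above. All names and the threshold are illustrative.

# Toy organic-vs-paid propagation check: an organically shared story
# tends to touch many independent clusters, a paid push stays in few.

from collections import defaultdict

def components(edges: list[tuple[str, str]]) -> list[set[str]]:
    """Connected components of an undirected share graph."""
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, comps = set(), []
    for node in list(adj):
        if node in seen:
            continue
        stack, comp = [node], set()
        while stack:
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            stack.extend(adj[n] - comp)
        seen |= comp
        comps.append(comp)
    return comps

def looks_organic(edges: list[tuple[str, str]], min_communities: int = 3) -> bool:
    """Toy rule: suspicious when sharing stays inside very few clusters."""
    return len(components(edges)) >= min_communities

paid_push = [("botA", "botB"), ("botB", "botC"), ("botA", "botC")]
print(looks_organic(paid_push))  # False: one tight cluster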

      Years ago I tried to develop software that could fight this battle on Twitter, and there are some ideas that could maybe also be useful on Facebook:

Súmate como socio para crear un medio 3.0 en México ("Join as a partner to create a 3.0 media outlet in Mexico") [Full screen]

In Mexico we live in a kind of future where fake news and armies of bots and trolls have been at work for some years now; maybe the knowledge developed here could help around the world.

Many fake news stories leave a propagation trail according to how they were shared. That is how I have been able to identify whether they were generated organically, that is, out of people's legitimate interest in the story, or by a team of people dedicated to spreading false information. Here are a couple of examples with Twitter data that could equally be used on Facebook. – @LoQueSigue_

More case examples  (in spanish) here:

El día en que la sociedad derrotó a los bots ("The day society defeated the bots"):
#EstamosHartosdeEPN vs #EstamosHartosCNTE

Así fue el ataque masivo de bots al @GIEIAYOTZINAPA ("This was the massive bot attack on @GIEIAYOTZINAPA"). Demonstration

Trace of a fake news story / "non-organic" trending topic

Case study on Twitter:
#LosSecretosDeAristegui.
People who were paid are spreading false information about a journalist.

Link to full-screen video:
Carmen Aristegui ¿verdad o manipulación? ("Carmen Aristegui: truth or manipulation?") - [Full screen]


Trace of a real, "organic" news story. Many people are sharing real information, connecting communities.



(Source? Because if true, hoo boy, we can do this with graph theory. -- N.) +@IntugGB

@LoQueSigue_ --- Those graphs are my own; I've just generated them to try to explain. The first one is about a trending topic generated yesterday spreading fake news about a popular Mexican journalist. The second one is about a Change.org campaign popular in Mexico.

You can find something about this idea on Wired: 

Pro-Government Twitter Bots Try to Hush Mexican Activists

Not just for Facebook to Solve

  • Facebook clearly has a problem, but it is not just Facebook’s problem. There are too many lies and too much misinformation in our public and political discourse. Also, the buzzing environment we live in (with our digital interfaces) probably makes it more difficult for us to think clearly.

    Society is beginning to break down and we need a MAJOR PUSH to move things in a more truthful direction.

  • In an article published by The Guardian, Tim Cook @tim_cook, CEO of Apple, states the following:

    “All of us technology companies need to create some tools that help diminish the volume of fake news. We must try to squeeze this without stepping on freedom of speech and of the press, but we must also help the reader. Too many of us are just in the ‘complain’ category right now and haven’t figured out what to do.

    We need the modern version of a public service announcement campaign. It can be done quickly, if there is a will. It has to be ingrained in the schools, it has to be ingrained in the public.

    We have to think through every demographic... It’s almost as if a new course is required for the modern kid, for the digital kid. In some ways, kids will be the easiest to educate. At least before a certain age, they are very much in listen and understand [mode], and they then push their parents to act. We saw this with environmental issues: kids learning at school and coming home and saying why do you have this plastic bottle? Why are you throwing it away?”
    -- @Media_ReDesign  [11 FEB 17]

  • I am in the process of starting a NONPROFIT to "make truth a virtue again." I see a need for a single, sustainable blast of resources to attack this problem. Judging from how this document is filling out with creative ideas, what's needed is to unleash this creativity and put it to work. It's great to get quick ideas, but real work, deep work, the building of real tools and practices, will require resources and resolve. +@linmart
  • The nonprofit’s name will be Truth For Democracy.
  • Soon, I will be starting a crowdfunding effort to raise startup funds (NONPROFIT STARTUP).
  • I am looking for help.
  • Check out my small beginnings here: http://MakeTruthGreatAgain.org


My name is Will Thalheimer. Not a member of the political caste. Just an American who is frightened. Twitter: @MakeTruthGreat

  • I agree with this - "Not just for Facebook to solve." The state of news has changed completely since it went digital. Before, when it was only distributed via print, it wouldn't have been plausible or appealing to start a fake news outlet, because the costs of getting any significant distribution were a major hurdle. Credibility used to matter, too. I think the problem comes before Facebook. I think the opportunity to conduct journalism as a business now needs to come with a greater commitment and promise to conduct ethical journalism.

    Just spit-balling here, but I imagine something like requiring a license to deliver these services, as the legal and medical professions require, and, for the internet, allowing only these licensed operations to register under an exclusive domain, like ".news", that makes them easy to identify. Nowadays, there's nothing stopping me from purchasing a .com myself, posing as a real organization and writing false stories. That has to change. The issue remains with or without Facebook. - @ashcraftjoseph

  • In general, these articles tip their hand with their headlines: "FBI just dropped a bombshell…". One could use a headline parser to identify clickbait headlines that rely heavily on immediacy (...just…) and prevent their wide circulation in news feeds. Simply lowering the heat of headlines would focus attention on the quality of the content, rather than the sensationalism of the piece. [Headlines] (A toy parser is sketched below.)
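A toy version of such a parser might look like the following; the cue list and threshold are invented for illustration, not a vetted lexicon.

# Sketch of a headline "heat" scorer for immediacy/sensationalism cues.

import re

IMMEDIACY_CUES = [r"\bjust\b", r"\bbreaking\b", r"\bbombshell\b",
                  r"\byou won'?t believe\b", r"\bshocking\b", r"!{2,}"]

def headline_heat(headline: str) -> int:
    """Count sensational/immediacy cues present in a headline."""
    text = headline.lower()
    return sum(1 for cue in IMMEDIACY_CUES if re.search(cue, text))

def should_downrank(headline: str, threshold: int = 2) -> bool:
    """Toy rule: limit circulation when enough cues stack up."""
    return headline_heat(headline) >= threshold

print(should_downrank("FBI just dropped a BOMBSHELL!!!"))  # True
print(should_downrank("Senate passes budget bill"))        # False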

  • It’s not just for social media platforms to solve. We need a multi-stakeholder coalition to implement, monitor, and improve these fixes.

    The technical/design fixes proposed throughout this doc are important, but carry a high risk of being circumvented in the future or becoming irrelevant in the face of new platforms or technologies.

    We need an institutional space where the stakeholders who are putting these solutions into place—social media platforms, news outlets (from multiple political perspectives), advertisers, academics, technologists, etc.—can come together, agree on changes, agree on standards, adapt their approaches, hold one another publicly accountable, etc. There might be a role for gov't too, though I'm a bit wary of their overt involvement here. There's much to learn (positive and negative) from efforts like EITI and OGP, though those provide only a general model. - Dave

    >> I am wondering how viable this is, starting from scratch. As it happens, we are designing a project for just such an organization, one that has decades of experience dealing with issues affecting ICT users in general. Perhaps it can be looked at as a blueprint, a working model, but keep in mind that in this case it needs to be very specific, meaning media/journalism.

    The potential is astounding, but you need to create extraordinary alliances to start off with, alongside experts in every one of the fields you mention. - @linmart, @IntugGB  Some background: The INTUG Story

  • Journalism / news programs once had the opportunity to be financed by the government, but chose independence through advertising revenues to keep them honest. Journalists and the truth should be the "fourth arm" of a democratic world: independent and disconnected from government, corporations and ideologies, determining the facts based on evidence. A non-profit supported by the public through donations and not influenced by multi-stakeholders. Civil society is the stakeholder; governments and corporations have central-power and bureaucracy issues that sway the truth. +MarkNelson

  • Emojis for newspaper links. Big papers could pay for distinctive links, while fake news sites (hopefully) don't have the money to pay for a special-looking link. This sets news apart from fake news for users without reducing freedoms - it just makes it easier to identify a real newspaper from a fake one via link aesthetics. - @ath2ad


I wrote up a couple of suggestions here:

A Suggestion on how Facebook could fix its Fake News problem


In short:

  • Algorithm- and human-driven trust rankings for articles - based on the reputation of the publisher, the reliability of previous articles, the number of corroborating sources or articles suggesting similar things, etc. Machine learning is very important here.

  • Presenting below every shared article a carousel of related articles on the same subject. This provides a win-win for consumers and Facebook: it allows contradicting or debunking articles to appear, gets people out of the filter bubble by providing contrasting views, and also gives FB an opportunity for even more engagement. (A tiny retrieval sketch follows below.)

Darrin Lim  @darrinlim
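A bare-bones sketch of the carousel idea: pick the most similar articles from other outlets, so debunks and contrasting coverage surface beneath a shared story. A real system would use embeddings or a knowledge graph; the word-overlap similarity and all data below are illustrative assumptions.

# Toy related-articles carousel using headline word overlap.

def similarity(a: str, b: str) -> float:
    """Jaccard overlap of headline words."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def carousel(shared: dict, pool: list[dict], k: int = 3) -> list[dict]:
    """Most similar articles from outlets other than the sharer's."""
    candidates = [art for art in pool if art["outlet"] != shared["outlet"]]
    return sorted(candidates,
                  key=lambda art: -similarity(shared["headline"], art["headline"]))[:k]

shared = {"outlet": "example-blog", "headline": "Pope endorses candidate"}
pool = [
    {"outlet": "fact-checker", "headline": "No, the Pope did not endorse any candidate"},
    {"outlet": "example-blog", "headline": "More on the Pope endorsement"},
    {"outlet": "wire", "headline": "Election day weather forecast"},
]
print([a["headline"] for a in carousel(shared, pool)])  # debunk ranks first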


  • Teach verification as part of the process of flagging something on social networks. If you want to flag something, you have to go through the steps the reporters themselves use to verify true/false information. -- Gabe
  • Maybe too simplistic, and obviously has shortcomings (will people trust it? could get political), but what about a LEED for news? An independent organization sets out standards for quality journalism, then ranks news orgs and sites accordingly. FB could display the ratings boldly next to headlines, so people know what they're reading or watching -- Jake (@hellerjake)

I think the most feasible and therefore most likely implemented solution will be one that:

  1. Utilizes a simple rating system, like others have suggested
  2. Is logistically easy for Facebook to implement
  3. Partners with existing platforms / harnesses existing resources such as factcheck.org
  4. If including member ratings at all, minimizes their impact to avoid a "feedback war"
  5. Has a two-pronged mechanism to efficiently evaluate existing "news" sources and accurately rate new ones.

— zach.fried@gmail


 A Citizen Science (Crowd Work) Approach

We are building citizen science software producing and displaying credible and legitimate ‘truthiness’ scores. Here’s how it works: every day, participating citizen scientists around the world are invited to read one or a few of the top trending articles in their language. These lightly-trained citizen scientists tag words and phrases in the articles that appear to commit common fallacies, like presenting a false dichotomy, mistaking correlation for causation, ignoring selection effects, and more. When many citizen scientists independently verify the fallacy, the news article text committing the fallacy is flagged so that news readers can recognize the potential mistake. (Journalists will be given an opportunity to correct, or contest, the mistake during a 72-hour window.) Journalists and their publishers will be (visually) rated by the number of fallacies they commit and propagate. These ground-up measures will be seen as more legitimate than holistic, more easily-biased metrics. When the project is operational, humans will be able to measure and visualize — regardless of their agreement with the content of a journalist or news source — the truth value and quality of published ideas. Over time, the project will advance public literacy and the quality of our discourse. (A sketch of the consensus step appears below.)

(for more information, email nick@goodlylabs.org)
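A minimal sketch of the consensus step described above: a fallacy flag goes live only once enough distinct citizen scientists have tagged the same span with the same fallacy. The threshold and data shapes are assumptions for illustration, not the project's actual design.

# Toy independent-verification step for crowd fallacy tags.

from collections import Counter

def consensus_flags(tags: list[tuple[str, tuple[int, int], str]],
                    min_taggers: int = 5) -> list[tuple[tuple[int, int], str]]:
    """tags are (tagger_id, (start, end), fallacy_name). A (span, fallacy)
    claim is flagged once min_taggers distinct taggers agree on it."""
    counts = Counter()
    seen = set()
    for tagger, span, fallacy in tags:
        key = (tagger, span, fallacy)
        if key in seen:          # count each tagger once per claim
            continue
        seen.add(key)
        counts[(span, fallacy)] += 1
    return [claim for claim, n in counts.items() if n >= min_taggers]

# Six independent taggers marking characters 120-180 as a false dichotomy:
example = [(f"user{i}", (120, 180), "false dichotomy") for i in range(6)]
print(consensus_flags(example))  # [((120, 180), 'false dichotomy')]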


Special Campaigns

Promote relevant social issues applying viral techniques. - @IntugGB

All too often, people are unaware of what is at stake, either because they do not understand or because it simply does not interest them.


Case Study: India

SaveTheInternet.in 

(Info taken from website)

The SaveTheInternet campaign is a volunteer-led effort to uphold a fair and open Internet. Over one million Indian citizens have participated at one point or another in the campaign, which was started by a group of like-minded individuals across India in May 2015.

Background

  • In 2014, Indian Redditors began discussing net neutrality when Airtel decided to charge extra for Voice over IP (VoIP) services like Skype. This discussion led to the launch of netneutrality.in, a website that explained the importance of net neutrality and Airtel's threat to it. After public outrage and discussion, Airtel withdrew its decision, marking the first time the principles of net neutrality gained mainstream media and public attention. It also raised the important question of India's lack of net neutrality policy.


Watch:

AIB: Save the Internet [Fullscreen] 

  • Save The Internet volunteers created an abridged version of the paper, and the campaign went into full swing when savetheinternet.in, a platform to submit comments on the TRAI consultation paper, was launched and pushed out by the comedy collective All India Bakchod.

  • Through various interventions, and with the support of the public, dozens of influencers, organisations, and experts, we won a large part of our net neutrality campaign in early 2016, when TRAI announced that differential pricing would be banned in India.




“Pooled” White House News Dashboard

Just as there is a White House Pool that covers the President, I would like to propose something similar: a new kind of aggregation site for specific topics, such as coverage of the President. In that case, the “page” might have the following sections and draw on the existing reporters and publications that are part of the White House Press Pool.

The site / page could include:

  • A list of articles by topic and in chronological order by reporters in the “pool.” (For example: One could look at news on Cabinet Appointments separate from business dealings. Each publication would have the last X number of their reports showing in each topic area, with publication name and date clearly shown. Clicking a link would take the user to the site of the publication that wrote the article.)  
  • A live updating blog section where pool reporters are able to comment on the articles and associated breaking developments -- (Some publications did something along these lines during the Presidential Debates, along with links to related articles / fact checking sources. This would allow for human intervention, commentary, perspective)
  • Alternatively, maybe this could be a widget displaying tweets by those pool reporters
  • Sections showing latest social media updates from key elected officials and the White House
  • Depending on news of the day, may be appropriate to have a map showing where key individuals are / where major events are happening
  • Links to video footage and transcripts where appropriate
  • Links to Reporters Bios and Profiles of the Publications involved
  • May be appropriate to have “bio” pages for key players in the Administration too. Page could have links to latest press releases from that person’s department and a collection of articles from publications participating on the site.
  • Counter for when last press conference was held
  • Some kind of interactive tag cloud / infographic / timeline with key themes of last 24 hours, week, month

While it may be difficult to encourage competing publications to collaborate on this initiative, it may help highlight the different and competing narratives that are circulating. It may also be a way of having a neutral place to debunk news articles. So, when a friend shares something on social media that is incorrect (example: Trump incorrectly stating how much he self-funded his campaign), you can go to this pooled site, look at relevant articles and suggest your friend go there to confirm.


This is not a perfect approach and doesn’t address all the issues recently raised (example: Separating fact from commentary). Curious to read comments from others on something like this.


The Old School Approach
Pay to Support Real Journalism

  • Love this project and the ideas that have been shared so far. While I understand that my suggestion may not entirely jibe with the spirit of the question, the simplest way you can fight fake news TODAY is to support real journalism.

    Pay for a subscription to a real, reputable newspaper that's doing the hard work of uncovering the truth. Digital subscriptions will usually come with a few extra user seats that you can give to friends and family. You should do just that for those that can't afford to subscribe themselves. While figuring out innovative ways to flag and suppress fake news is an important conversation, we shouldn't delay action to support the spread of truth by voting with our wallets today.

    Spreading that truth requires real journalism. If you value it. Pay for it. Old school.
    -- DBrow

  • The design of the solutions should follow as universal and holistic an approach as possible to reclaim accuracy, fairness, and high professional standards in news reporting. A global partnership of like-minded public service news media and relevant civil organisations could pursue this goal by organising a series of micro events (like this rapid brainstorming, hackathons, round tables, workshops, etc.) leading to a global congress with the goal of releasing version 0.1 of what might be called a "Multiplatform [SP] Prototype of Accurate News" -- @almel +@IntugGB



Thread


Critical Thinking

  • Communication--make stopping the spread of fake news a cause: We know that our attention spans are short and, although we can teach news literacy and critical thinking, the spread of fake news is probably more a combination of the proliferation of partisan news, the ease of publishing viral content, and social behavior (I like and share what my friends like and share). It’s much harder to change behavior. I think it’s important to think about how to turn stopping fake news into a cause for the good of our democracy, with all of the emotional urgency a call to action must entail, so we need a strong theory of change that speaks to people emotionally. There is hope for this when you look at the move by consumers to support news publishers through subscriptions. - @TracyViselli

  • Stop thinking about this as “teaching the kids.” Every citizen in this society can and must improve their critical thinking. That includes journalists. Fact-checking should involve readers in drawing conclusions, so everyone’s practicing these skills together. Journalists + libraries can partner to offer in-person events, online interactives. Someone smarter than me once suggested gamification - big up to whoever that was. (Tamar) +Robin +1NBA


161214 - The Journal Gazette
Bursting the Bubble: Absence of critical thinking among young has troubling implications for nation

150415 - Telegraph
Humans have shorter attention span than goldfish, thanks to smartphones


120620 - Telegraph
Children with short attention spans 'failing to read books'



Media Literacy 

  • The NY Times ran a curious piece explaining to readers how their comments are moderated. It includes an actual test where you are placed in 'charge' of approving or rejecting an opinion, with an explanation of the criteria used should you mess up. Interesting analysis, perhaps applicable to increasing media literacy - @linmart


Recommended


161116 - Medium
Facebook, Google, Twitter et al need to be champions for media literacy 
By @DanGillmor




Programs

There has been a *lot* of research in this area. I’m working on a PhD in this topic.

Broadly, there are three ways of looking at whether or not information is in fact trustworthy: credibility-based, computational, and crowdsourced.

All of them can be gamed and hacked, so FB does have a tough problem. I think there are patterns in the way users interact with information that differ according to whether they are trying to find an answer to a question or merely trying to support a bias. The trick is in teasing out how to reliably tell whether the user you're watching at the moment is doing one or the other. Aggregating the traces of the people who are looking hard for answers might wind up being very helpful.

A pop-up box that comes up when someone is about to share an article that's been frequently flagged as fake, saying simply: "Many Facebook users have reported this article contains false information - do you still want to share it?" This could slow the spread of bad information without allowing flagging-as-warfare to totally drown out opposition. (A sketch follows.)
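A sketch of how that interstitial could behave; the flag threshold and all names are illustrative assumptions. The key design property is that the user keeps the final say, so flags slow sharing without silencing it.

# Toy pre-share warning: warn above a flag threshold, never block.

def share(article_id: str, flag_count: int, confirm) -> bool:
    """confirm is a callable that shows the warning and returns True
    if the user still wants to share."""
    FLAG_THRESHOLD = 100   # assumed value; would need tuning against abuse
    if flag_count >= FLAG_THRESHOLD:
        return confirm("Many users have reported this article contains "
                       "false information - do you still want to share it?")
    return True            # lightly flagged content shares normally

# Example: a user who declines the warning does not share.
print(share("abc123", flag_count=250, confirm=lambda msg: False))  # False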

  • Require civics classes from a young age and include curriculum about distinguishing truth from fiction in the news - this is something future generations will have to understand

Evolve the platform’s objective:

  • First, acknowledge that your platform is being used as a news website. This idea that 'it's all about harmless connection' is a very false one. Be real about what the platform has evolved into. The philosophy on which you build and redesign your platform will inform its consequent programming and algorithms - as opposed to now, where the algorithms and programming are largely driven by commercial incentives. A traditional news organization does not allow its audience to determine what is or isn't news. It decides its broad philosophy and works forward on that basis.

  • In terms of short-term ideas, some sort of labelling is required. Nothing obtrusive or censor-heavy, but just as you would expect food or other items to be labelled in a grocery or department store, the same needs to be done when you are providing access to information. There are safety and regulatory standards that inform consumption choices. You can still have as much propaganda, or fake news, or whatever it is you are so concerned about not suppressing - as long as it is called what it is.

  • Industry leaders, or credible experts (meaning a proven track record within the field), can then grade information - as a collective - within their specific area of expertise. News within a specific subject can be graded a specific color: Biotechnology vs Popular Science, say. The standards for those two types of information are very different, and while tedious, some sort of signalling (maybe a specific color tab) will encourage an environment where information is identified as what it is, build a healthier ecosystem, and help people 'connect' better, which is what FB is apparently so concerned about.

  • Furthermore, there is a difference between being ‘fair’ and calling a spade a spade i.e. having integrity. If one really wanted to be fair, they can pull out fake articles from either side of the political aisle. So the question of being fair or not becomes moot.

  • The difference with FB is that an environment where opinion based rhetoric is conflated with factual information exists and without any labels. I have a choice to buy something unhealthy at the grocery store - but I can clearly identify and label that choice. And there is a large consensus that candy is different from broccoli.

  • If I were FB, I'd be less concerned about how fair I'm being to the availability of propaganda and lies that fuel xenophobic, racist rhetoric, especially when I'm earning money and running a company while enjoying the benefits of liberalism and liberal values. Decide what you are and don't be so terrified about who you are. Take a stand.

The biggest problem with fake news is that many of the links look like real news. Sure, there are the hyper-partisan sites that are clearly iffy, but I'm talking about ones like abcnews.com.co and the like. This needs to be rectified, and can be in a bunch of ways, mostly a sort of blacklist keyed on names from legitimate news sites, so that 'abcnews' can be keyed out of any other permissible link (a toy domain check is sketched below). Secondary to this, a flagging feature which allows users (through an algorithm) to flag a story as 'false, misleading, inaccurate', etc. Thirdly, FB/Twitter/et al. need to take it upon themselves to ban hate-speech posting/sharing.
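A toy sketch of such a keyed blacklist, which catches lookalikes such as abcnews.com.co. The whitelist here is a placeholder for illustration, not a real registry, and real matching would also need fuzzy checks (e.g. "abcnews-go.com").

# Toy lookalike-domain check: a known news brand appearing as a label
# on a domain we don't recognise is treated as a spoof.

LEGIT_DOMAINS = {"abcnews.go.com", "nytimes.com", "washingtonpost.com"}
LEGIT_BRANDS = {"abcnews", "nytimes", "washingtonpost"}

def is_spoof(hostname: str) -> bool:
    host = hostname.lower().removeprefix("www.")
    if host in LEGIT_DOMAINS:
        return False                      # the real outlet
    labels = host.split(".")
    return any(label in LEGIT_BRANDS for label in labels)

print(is_spoof("abcnews.com.co"))   # True: brand keyed out of a fake domain
print(is_spoof("abcnews.go.com"))   # False: the genuine site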

Second, there needs to be a human editorial staff, hands down. The problem with this is you have to hire a legit staff of non-partisan journalists who can curate the sidebar news section and deal with other flags that pop up.


Third, a 'trusted source' whitelist. National TV, national print, major local papers, etc. can be whitelisted and re-approved (annually, bi-annually, etc.) to post, and the number of flags on an article they post/share would have to be remarkably high to merit a review.


Lastly, there needs to be a clear divide between "news" and "opinion". TV news, especially Fox News, has taken the "news" and twisted it into "opinion-based consumption." News organizations need to be held accountable for clearly stating whether an article on FB/Twitter/etc. is NEWS or OPINION. Hard news, facts, fact checks, quotes, etc. that are said/done/reported as news should be posted and verified as that. If the NYT/WaPo/Fox/MSNBC/etc. posts an editorial or opinion piece, it should be labelled as what it is. -- Ian


Suggestions from a Trump Supporter on the other side of the cultural divide


Please note this section has a number of contributions with very valuable perspectives. We ask that the text be left as is, intact, with no modifications of their personal opinions. Thank you -@linmart
 


I’m not clear on whether you want comments from any member of the public or only full-time researchers and specialists that you know. But since I managed to find my way in here, I’ll go ahead and offer some good faith suggestions to help your project. I have a background in user interface design and software engineering, but I’m not here to comment as a specialist only as an interested observer from the other side of the cultural divide.

I think you can get buy-in from people on my side of the divide to stop circulating dubious news if you devise a process perceived to be fair and open.

 

Some Suggestions:

 

- Avoid devising processes or algorithms that rely on sites like Snopes or fact-check. The quality of the debunking work on those sites is uneven, excellent at times horrible at others. It is the reason many people on my side of the cultural divide do not consider those sites credible. Any debunking tied to snopes will be dismissed by many people for legitimate reasons.

 

- Use as much transparency as possible.

 

  • Let people see the code behind the algorithms that determine what trends

 

  • Write a plain language description of what the algorithm does so people without the right background can at least understand what the approach is

 

  • If humans are to be involved, make their procedures, guidelines and processes public. Show the world how people assigned to do this work are going to go about it.

  • Also, if humans are involved, maintain an audit trail of their activities. If a human spikes an item, or chooses to “dampen” an organic trend, that will affect what 100s of millions of people might see, he or she should be required to log a note in the system giving a rationale for the action (ex: fraudulent site registered in Macedonia). There should be a way of identifying the people who make these choices so they can be held accountable for a choice if necessary.

  • Maintain an open database of spiked news items and items that a human or an algorithm prevented or tried to prevent from organically going viral. Let people verify for themselves that only provable fakes are being spiked or prevented from trending.

- Ensure there is real and meaningful ideological diversity on human teams.

 

  • Facebook had a human team curating the trending news items seen by millions of people, and it appears that team abused their authority by suppressing news of interest to conservative readers. For this reason, there is now an issue of credibility.

 

  • In order mitigate potential conscious or unconscious bias by human curators, I suggest, again, that any human teams contain substantial ideological and other diversity, including representation for the political extremes as well as the political middle, people who prefer alternative to corporate news, and all different kinds of racial/ethnic, class, regional, cultural, religious and generational backgrounds.

 

  • This FB incident is an interesting example (to me anyway) of the unintended consequences of echo chambers on the culture. Reportedly, the people on that FB team went to elite schools, so, most likely, they were quite intelligent and capable people. Nevertheless, they were apparently so ensconced in their particular world view’s echo chamber they either didn’t know or didn’t care that the net effect of their actions was to suppress legitimate news of interest to conservative readers. There was, as a result, an understandable public uproar over that outcome, and FB turned over the news trend to an algorithm that appears to have made the problem of bringing dubious news to the forefront worse. If the team FB assigned those duties had been fair in curating the trend in the first place, rather than knowingly or unknowingly managing the trend based on their own perspective and bubble-reinforced biases, FB may not have had a reason to turn that job over to an algorithm that seems to have run wild before the election

Adding some observations to this, in the context of what you just mentioned (underlined text). All points I am identifying with my name, but please add any additional ones as needed. The comments function I am not using, as I need this to appear in the preview - @linmart, @IntugGB

  • This point you mention seems incredibly important, in the sense that all news has to be taken within a cultural context. When doing research on the ongoing #DeMonitisation crisis in India, we were up against a number of competing narratives. The NYT, the Guardian and others gave their accounts (rather late) of the situation, while the really in-depth coverage was found by searching among key members of the Indian community. Of major importance, especially when working from afar, is knowing who the credible sources are. They usually link to the best and most relevant content at astounding speed - @linmart, @IntugGB

  • Every situation needs to be looked at from many angles, perspectives. Those experiencing the full wrath are perhaps the best sources. Thinking of the RustBelt scenario (companies vs workers), issues with ad-blockers (advertisers vs consumers), pro-life, etc. @linmart, @IntugGB
  • In regards to specific case scenarios that come to mind, some of the best experience is gained "on the ground". Being thrown into the situation enables more neutral coverage. Of extreme importance in some situations is expert opinion (i.e. medical knowledge when describing abortions, the legal basis for remaining in the EU, as in #brexit) @linmart, @IntugGB


    Related

    170104 - Poynter
    Could changing the way bylines look help increase trust with readers?



Supplement

I wrote the above comments related to the cultural divide. Additional comments:


Definition of Fake News

At this time, there does not appear to be a definition of “fake news” in this document.

  • If readers/contributors do a Ctrl+F for the words “define” and “definition,” they will not find a definition of “fake news” in this document. The phrase “fake news” is used over 150 times without a definition.  

  • It seems that a handful of clear cut examples of fake news have been recycled over and over again since the election, but examples are not definitions. Examples do not tell us what fake news is. Establishing a working definition might be preferable to proceeding with work with no definition.


Moral authority of news gatekeepers

The global perspective that some researchers here appear to be bringing to this work is indispensable. This “perspective” clarification, however, is about the cultural climate in the United States specifically.

  • The community of millions of people in the United States that have revealed themselves as willing to politically coalesce around a president like Donald Trump despite his various issues will never accept that hyper-partisan academics, researchers and journalists should get the final word on what is and is not legitimate news in the United States.

  • More broadly, no cultural group is ever going to accept that hyper-partisans from a rival cultural group should decide what information is and is not legitimate for news coverage across the whole of society. It’s not a reasonable expectation. In an American context, making the fake news problem a partisan issue will only make the conflicts and cleavages in the United States worse.



The urgency of the ongoing political situation in the US, not only for the country but for the world, makes it imperative that the so-called "Fake News" issue be considered through this very specific lens, venturing perhaps even further to suggest that it be analysed from the perspective that at least part of it might be attributed to ongoing propaganda efforts by a foreign nation.

Incidentally, at @IntugGB, we are looking at this from a global perspective, examining how this problem ties in with such critical events as the #Aleppo crisis, with constant mentions throughout that the news coming from the combat areas is part of a Globalist agenda. “Fake”.

Going further, there seems to be potential for the creation of a multinational entity in charge of dealing with issues of this nature. As it stands, INTUG is the only organization that represents users’ rights in the field of ICT policy on a global scale. While ‘fake news’ as a topic is not exactly under its jurisdiction, the effect it has on users’ experience is of great concern.


These two links shed some light on work being carried out in the European Union:

161212 - Open Democracy
We need European regulation of Facebook and Google

GRIP Initiative

Hope this helps.  -  @linmart, @IntugGB [14 Dec 16]


Thread posted under The Old School Approach has been transferred to this section: - @linmart

It’s tough when media essentially sells their stories to viewers. You are always going to have those people trying to push out something that isn’t worth anything but is really flashy, just to turn a quick dollar. This is also the problem in most media today. The press should not collude with government officials. The press should report the facts about ALL officials. Media figureheads should not be pushing their opinions on the masses. I shouldn’t have to dig into the internet to find the truth about people. The press should be telling me personal histories, especially during elections.

I heard all of this negative press for Trump, what he did in the past and the mean things he says. But I never heard anything about Hillary’s past scandals. Let’s not forget that they DO exist. It’s not conspiracy or lies; it’s in the headlines of newspapers from decades ago. Yet I never heard an ounce of that. Why? I understand that Trump was not a great person morally. But most of us common men and women aren’t, and if you think that you are, then maybe you should try practicing some humility. Let’s not set these ideals that we should have some immaculately clean personas running for president. Clinton was just as bad as Trump in her own ways.

Let’s dial it back to JFK and his address to the press. This is what media should be like: reporting without bias, without opinions, telling facts to the American people. You can’t call someone a racist just because they say a stereotyped blurb about another race; that’s not racism. We need the media now more than ever to start reporting unbiased news: fair and balanced facts, not embellished with opinion or hearsay. It’s so dangerous to the people as a whole. Put away your personal feelings and report fact, and if you want to bash a candidate like Trump, you should dig some dirt up on his opponent as well. People didn’t vote for racism or bigotry or against gays or women. They voted against the governmental system, against a failing and biased press.

As soon as we stop selling these dramatic stories that ignorant Americans eat up, we will be able to distinguish fake news from real news. Embellishing truth and distorting words: these are tactics of a corrupt press. Yet they prevail in almost every media outlet in the nation. Why? Because it sells. People love drama in their dull lives, and media outlets know this. They need numbers and viewers. It makes money. The press isn’t in it to help the people out anymore. It’s just in it for the money, like the fake news sites.

So really, can they be stopped? No. Not in this day and age. Not until we unwind the current state of the media and transform it into something informative and real. Don’t speculate on someone’s personality. Report the facts, even if it’s something bad about our government. Go against the grain. Give us real stories, and keep your opinions out of it.

The masses love personalities in the media; that’s why we all know their names. But too often these personalities get in the way of separating what is real from what is the personality’s opinion. We identify with them. We emulate them. If they say “I don’t like Clinton” or “I don’t like Trump”, their audiences will mirror that. The media is in a state of distress. It’s no wonder they are falling to these barbaric hordes of Enquirer-type media, because people can’t distinguish real from fake.

The media is all hype and passion and opinion. Even when it is laced with fact, there is a heavily biased undertone to all of it. If you are intelligent enough to see this, you may be safe. If not, you are going to end up being the one who believes the fake news headlines. Please, please, please put away your hate and divisiveness for the other side of the opinion, and practice listening more than you speak. Listen and try to understand both sides. See the facts in both sides. Don’t dismiss what you don’t hear from your circle as BS, and if you hear something from the news that interests you, dig a little further and see how much truth there is to it before you plaster it all over the brains of your cohorts like it’s fact.

Stop labelling people too; it’s disgusting. In a world where we are so passionate about gender pronouns and not offending people, we need to look at ourselves. Are we labeling someone corrupt? Racist? Sick? A bigot? Stop. We honestly don’t know more about that person than what the press tells us, and that is no ground for sticking labels on people. Think about a bad thing that you said or did once, or maybe twice. Think about the press getting hold of that and blasting the world with labels for you based on that mistake. Is that who you are? No, you are human. Just like I expect our president to be. As soon as the press starts being compassionate to people as humans, we may be able to defeat fake news.


Related reads

161213 - Vox
This Trump voter didn't think Trump was serious about repealing her health insurance
Why would people vote for a presidential candidate who campaigned on taking away their health insurance? Last week, we went to Corbin, Kentucky, to try to answer that question. It’s a small city in southeastern Kentucky, an area of the country that has seen huge declines in its uninsured rate — but also voted overwhelmingly for Trump.



How about testing out everything we are discussing on the platform itself? Call it “Proof of concept”

161117 - The Washington Post
This researcher programmed bots to fight racism on Twitter. It worked.

Ethics of Fake News

By Ciara Allen

Looking back at the last couple of years, I believe we’ve come to realize that journalism (in America) has been influenced by competing political agendas. These political agendas have been sped up with the help of social media platforms, spreading lies and misinformation to most people in this nation. The 2016 election, where fake news seemed to circulate freely without proper fact-checking and spread through social media, called us to question once again our sources of journalism. While it seems few things are going to change regarding how to stop fake news right now, it is clear from things I have previously read in this Google Doc that there are plenty of people scratching their heads trying to figure out a sensible solution.

Perhaps the definition of fake news is still a bit ambiguous. A simple Google search shows the definition is still one based on opinion. I doubt it will enter our dictionaries any time soon, perhaps not just because it is opinion-based but also because it is self-explanatory. That being said, when reading an article on the subject of fake news, I think it should be necessary for the author to put down their own definition; this helps put the article into perspective. My definition of fake news would be: a pocket of misinformation that can be damaging to an agency, entity, or person. Thinking about it, we could run into the problem that credible sources could be labelled “fake news” by opposing views without any real evidence.

I wonder how the jilted American public can ever regain trust in the media, and how the qualities of journalism can ever be restored to it. Has the media realized how damaged its connection to the people has become? There is a lot of work to be done. If there were some way to start fresh with a new form of media, would that be easier than trying to fix our old platforms?

A lot of blame for the spread of these fake news articles has fallen on the more well-known social media sources such as Google, Facebook, and Twitter. That being said, our television news sources have been part of the political agendas for years. But the biggest question in all this fake news business is why a significant portion of the public did not seem to care about the deceit. A question I myself am curious about: if Americans knew that they were being lied to, what stopped them from fact-checking on the internet to expose these articles’ lies? I’m not sure myself, but it’s an interesting question to ask. Despite all the chaos that fake news brought the last election and continues to bring today, the biggest advantage we have right now is that people are discussing it. I think the conversation has been and will continue to be a productive one.



Your mind is your reality

I’d like to see the term ‘Fake News’ retired. If it’s fake, it’s not news. It’s lies, inventions, falsehoods, fantasy or propaganda. A humorist would call it “made up shit”.

Who decides what is fake?

Reliable Sources - Who decides what is fake?
By Brian Stelter & the CNNMoney Media team via Jay Rosen @jayrosen_nyu
18:48 UTC - 16 Dec 15 

[Image: BS Fake News 2 ed.jpg]

I'm going to chime in here and say this is the crux of the issue. We shouldn’t be deciding what is fake or isn’t - we should be helping people communicate these thoughts more clearly to each other.

If 10 of my friends think a news source is lying, it doesn’t matter to me what “Distinguished Institution X” thinks.  
Nobody is going to trust a source that claims to be authoritative if it disagrees with them.

If we enable people to share their dispositions with their friends in a formal manner, we can help fix the problem. Trying to start with some high-level, abstract, difficult-to-prove thing like climate change is never going to work. If we’re honest with ourselves, we have to admit that none of us (who aren’t researching climate change) have done any of those experiments. We believe it as an article of trust in the community of people we’ve interacted with. That's it. That’s why we believe in climate change. It sure as hell isn’t because “we’ve looked at the evidence and we know it’s there.” Almost nobody is in a position to read that evidence, or to analyze experimental designs, or to look at relationships between computer climate models and actual climate outcomes. We believe it because we trust the people saying it.

That doesn’t make it bad.  That doesn’t make the beliefs wrong. We just need to come at this problem from the right angle.

Instead of saying “news X is fake”, we should be showing “these are the people, institutions, and groups who think it’s fake”, and showing the viewer’s relation to them in the social graph.
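As a rough illustration of that last point - a minimal sketch only, with an invented toy friend graph rather than any real platform API - one could surface which of a viewer's own connections flagged a story, and how many hops away each one sits:

    from collections import deque

    # Toy undirected friend graph; every name here is invented.
    friends = {
        "you": {"ana", "bo"},
        "ana": {"you", "cal"},
        "bo":  {"you"},
        "cal": {"ana"},
    }
    flagged_by = {"bo", "cal"}  # users who marked the story as fake

    def flaggers_by_distance(viewer):
        """Breadth-first search from the viewer; returns {flagger: hops}."""
        seen, queue, found = {viewer}, deque([(viewer, 0)]), {}
        while queue:
            user, dist = queue.popleft()
            if user in flagged_by and user != viewer:
                found[user] = dist
            for friend in friends.get(user, ()):
                if friend not in seen:
                    seen.add(friend)
                    queue.append((friend, dist + 1))
        return found

    print(flaggers_by_distance("you"))  # e.g. {'bo': 1, 'cal': 2}

The point of the sketch: the interface would not pronounce a verdict; it would show “two people within two hops of you flagged this”, leaving the judgment to the reader.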

Journalism in an era of private realities


Dec 2016 - Nieman Lab
Public trust for private realities

Propaganda



170212 - Medium
The rise of the weaponized AI Propaganda machine 
There’s a new automated propaganda machine driving global politics. How it works and what it will mean for the future of democracy.

161216 - The Globe and Mail
Opinion: The fake war on fake news
By Sarah Kendzior ‏@sarahkendzior
Call "fake news" what it really is -- propaganda


Scientific Method

161208 - Quartz
If you want to save democracy, learn to think like a scientist

161130 - New Scientist
Seeing reason: How to change minds in a ‘post-fact’ world

Alternate realities        

(hillarybeattrump.org) - by Jackie Brown

Although the United States of America’s constitution and laws regulate the country’s population, US residents differ from one another in regards to ethical and moral codes. Typically, American individuals identify with the political party that stands for similar ethics. For example, a person who identifies with the conservative party more commonly stands against abortion rights, whereas a liberal person is more often a pro-choice advocate. This all links back to their personal ethical beliefs, which can be influenced by a number of things, like a religious practice or any significant property of their personal lifestyle. Morality will never override public policy, because law and ethics are two completely different concepts, even if they often have an influence on one another.

Long before fake news media was a universally recognized notion, there were issues with political controversy in media. Before expiring in 1801 for purportedly violating the First Amendment, the Sedition Act of 1798 made it illegal to write, print, utter, or publish anything that criticized the Congress or the US President. Although today this kind of criticism can be seen as an act of trolling, which often is considered to be bullying in other scenarios, federal laws cannot prevent or act on such behavior. Our society has come a long way in recognizing unethical bullying behavior, though it seems to be addressed more often in the physical world than in the virtual world.

This hateful behavior is continuously given opportunity to exist by technological growth and web-media channels. Moderation is nearly impossible with the prevalent anonymity of the internet, especially if we wanted to strictly regulate our own country’s internet activity. In efforts towards distinguishing the difference between fake and real news, one’s ethical code or moral beliefs has an impact on one’s perception of the media.

A satirical journalism site has gone viral for generating news that exists in an alternate universe where Hillary Clinton won the presidential election last November. This webpage’s header reads “News From The Real America, Where The Majority Rules” and can be retrieved at the URL (www.hillarybeattrump.org). Regarding names and online advertising or publication, there are currently no set standards. Therefore, legally, but not ethically, creators can use shrewd article titles that make the fake news appear real to the group of people they are writing for. This site is known for its dozens of fake news articles; however, someone who has some awareness of US politics and fake news can recognize the alternate reality of it. Someone who may be out of the knowledge circle (like a child, or a person from another country, etc.) may take this information seriously. The content addresses political figures, but figures of pop culture as well. Not only does this site carry a range of fictional information, it is clear that the satire can only be amusing to a person who does not support President Donald Trump.

Articles particularly attack persons of the conservative party, one example being Betsy DeVos, the conservative-leaning Secretary of Education. (HBT) In the site’s alternate reality, an article written about her is titled Betsy Devos Blames Campus Rape on Women’s Higher Education and incorporates many made-up quotations presented as statements by DeVos regarding the college rape epidemic. The unethical behavior behind these rumored quotes is simply unfair, and many lies are told about DeVos to make her seem unethical, especially to a liberal audience. Untrue statements are written as if they were made by DeVos herself, one example being: “Devos writes, Look: When women were barred from attending college 40 years ago, no female students were raped, because there were no female college students. Thus, educating women is dangerous. The numbers clearly show that female college students are at fault. If we're going to stop campus rape, the key is returning to all-male campuses. That’s just science!” (HBT)

In order to fully wrap my head around these lies, I needed to find evidence proving DeVos’s statements wrong. The New York Times writers released an article in September addressing Betsy DeVos and her stand on campus sex assault titled, Betsy DeVos Says She Will Rewrite Rules on Campus Sex Assault.  The article points out that “Ms. DeVos did not say what changes she had in mind. But in a strongly worded speech, she made clear she believed that in an effort to protect victims, the previous administration had gone too far and forced colleges to adopt procedures that sometimes deprived accused students of their rights.” (Saul and Goldstein)  This is the needed proof that DeVos is not an unethical sexist, and did not make such misogynistic statements.

The HBT site’s creator, who remained anonymous for her Washington Post interview, claims her intention was for the platform to be like “a joyful middle finger.” She continued by stating, “I didn’t want to wallow or argue with people who can’t be argued with. There’s something about humor as confrontation that I instinctively thought would work — like a good right jab that I could keep using.” Although she did not see harm in the creation of this fake media in an alternate reality, others do not share her view. The website itself has been bashed on multiple occasions, and it seems the anger is aimed mostly at the site’s anonymous creator. The platform is described as “seemingly designed to thrill liberals and progressives - and drive conservatives and Donald Trump supporters crazy” by Brett Barrouquere in his 2017 Chron article titled In alternate web universe, Hillary Clinton is President. Barrouquere then went on to criticize the creator, saying, “whoever is managing the page is having a grand time at trolling Trump and other Republicans.”

Peter Hasson, associate editor of The Daily Caller, points out in his politics article Fake News Site Lets Liberals Live In Alternate Reality Where Hillary Is President that “The site’s articles single out prominent Republicans like Texas Sen. Ted Cruz and White House press secretary Sean Spicer for mockery.” Hasson also mentions the site’s description, which reads: “In the midst of a Constitutional crisis, this is our response. Long live the true president, Hillary Rodham Clinton.”

Although The First Amendment sustains the existence of this website as lawful, it is positively unethical for a platform of cruel lies and insults towards a particular political party to exist. Unfortunately, it is unrealistic to please the entire nation’s standards when it comes to politics and morals. So until we come closer to equality and peace, there will continue to be arguments considering ethics and law in the media.

 

WORKS CITED:

1. Brett Barrouquere, “In alternate web universe, Hillary Clinton is President,” Chron:
http://www.chron.com/news/politics/us/article/In-alternate-web-universe-Hillary-Clinton-is-10959389.php

2. Peter Hasson, “Fake News Site Lets Liberals Live In Alternate Reality Where Hillary Is President,” The Daily Caller:
http://dailycaller.com/2017/02/20/fake-news-site-gives-liberals-alternate-reality-where-hillary-is-president/

3. HBT (hillarybeattrump.org):
http://www.hillarybeattrump.org/home/2017/7/30/betsy-devos-blames-campus-rape-on-womens-higher-education
http://www.hillarybeattrump.org/home/2017/7/19/tiffany-trump-files-papers-to-legally-change-her-name

4. Stephanie Saul and Dana Goldstein, “Betsy DeVos Says She Will Rewrite Rules on Campus Sex Assault,” The New York Times:
https://www.nytimes.com/2017/09/07/us/devos-campus-rape.html

 




Legal aspects

First Amendment/ Censorship Issues

While we are focused on rooting out fake news - and with legitimate concern - news, real or fake, is protected under the First Amendment. More information is considered the antidote to bad information. SCOTUS took that to extremes with its “more money = more speech” philosophy.

That all said, the road to hell is often paved with good intentions. While I support the endeavor to ensure quality information is distributed, we must also be cautious not to lead ourselves into the waters of censorship. What may start out as a framework for determining truth or credibility could easily - especially under this Administration - turn far more Orwellian. The tools and algorithms we develop for good could be used for straight-up censorship of the truth.

Regardless of what comes of the overall debate around fake news, we must build in transparency, accountability and a process that ensures that the purpose of the First Amendment - to engage and inform our citizenry with a marketplace of ideas - continues without censorship.  +Diane R  +@linmart


Recommended

161122 - NYT
Facebook said to create censorship tool to get back into China

The First Amendment’s right to free speech raises a lot of questions about whether limits should be placed on that speech. Consider hate speech and slander: laws are in place limiting hate speech and threats, and someone is able to sue another person for publishing fake news about them that could harm their reputation. However, these laws are usually applied on a case-by-case basis, because fake news has to be proven false in order to win the case in court. There are also instances involving public figures or people with high governmental or societal power. Should they be held accountable in the same way an average citizen would be with regard to First Amendment rights, or are their rights different because they have influential power and are in the public eye, able to spread their message to a larger and wider audience?

Jenna Ellis’s article titled “Trump is not threatening the First Amendment; Americans’ ignorance of what it means most definitely is” for Fox News accuses American citizens of not fully understanding the First Amendment when criticizing tweets and speeches given by President Trump. Ellis explains that citizens fear for their right to free speech, worrying that Trump will attempt to regulate the press and limit citizens’ First Amendment rights. However, according to Ellis, Trump is only concerned with limiting the ability to post ‘fake news’, much as existing laws already allow suits for libel and slander.

In the cases of fake news, the article explains that President Trump wants more accountability for the outlets that publish and post fake news. For example, he accuses NBC of posting fake news and expresses his opinion about holding them more accountable for what is written by their journalists or said by their news reporters. That raises the question of whether companies and publishers should be held accountable for fake news that they did not write themselves. One argument is that publication companies and news stations should check facts more carefully and ensure that they are aware of the truth behind each article they post. There is also the argument that the writer or broadcaster who created the story should be the one held accountable, since they are the ones who could be knowingly sharing fake news with their companies and audiences. Unfortunately, there are instances where journalists interview supposedly ‘reliable’ sources who give them incorrect information. The wording of an article can also be misinterpreted. If the article is about a certain individual, that individual could sue the journalist for falsely representing information pertaining to them, even if the information was based on truth but includes pieces that stretch the truth or misrepresent a situation.

With regard to Donald Trump’s Twitter account, some critics of the current president argue that some of his tweets contain incorrect information about the government or the United States. Some people also argue that it is hard to determine which tweets are facts and which are opinions given by the president on his personal account. This raises the question of whether Trump should be allowed to post information regarding the United States on his account, considering he could be misinformed and would then be announcing the misinformation as the leader of this country. In my own opinion, his tweets should be regulated, and he should be held accountable for the misinformation he posts on his Twitter account. For example, Ellis describes in her article how a tweet by Trump raised fear that he was trying to take away citizens’ rights to free speech, when it was only a vaguely expressed opinion about the frustration he felt toward fake news. As leader of the country, his opinions are taken by citizens to represent the way he wants to lead as president of the United States. He should be held more accountable when expressing opinions on any matter, because people could interpret those opinions as motivation for new laws and regulations. While there could be more laws governing what public figures are able to say with the power and authority that they have, such laws would also limit their free speech as citizens of this country. That raises the question of whether laws should be made so that public figures only have unrestricted free speech on private accounts, and not on verified social media accounts, in front of audiences, or on live broadcasts.

Espionage Act


An interesting and provocative take on the gray areas in the legal protections for the press, and fears in the Age of Trump -
FB Dan Rather @DanRather

161211 - Politico
Donald Trump’s real threat to the press

Copyright

Trademarks

Libel

One major legal issue that fake news brings to the forefront is libel. Libel is a published statement that is false and harms a person’s reputation. Libel has been an issue since the beginning of the United States of America, with roots dating back to the Sedition Act of 1798, in which the United States Congress made it a crime to write any “false, scandalous and malicious” statements about the President or Congress. Predictably, these same issues regarding false, scandalous and malicious statements are even more present today, as more people have access to a variety of ways to publish information.

There are multiple ways these fake news organizations attempt to avoid libel lawsuits. One of the most frequently used tactics is hosting the websites that publish inaccurate news about political happenings and people in the United States (often presidential candidates from the 2016 election) on servers outside of the United States. In December of 2016, NBC News ran a story about an anonymous teen in Macedonia who had been making thousands of dollars over the previous six months from publishing stories, most of which were shared on Facebook over and over, that were not true and often very damaging to the reputations of presidential candidates. Most of the time, these articles garnered clicks by targeting Hillary Clinton.

Most of these cases are textbook examples of what we in the United States call “libel”. Article titles ranged from somewhat believable scenarios such as "JUST IN: Obama Illegally Transferred DOJ Money To Clinton Campaign!", to clear cases of “clickbait” such as "BREAKING: Obama Confirms Refusal To Leave White House, He Will Stay In Power!". Rather than being punished for these articles, however, this teen has been living lavishly in his home country, preying on the curiosity of Trump supporters and Facebook users in general in the United States and apparently around the world.

Libel laws gained a lot of attention when, in an early 2016 speech, then presidential candidate Donald Trump said that if elected he would “open up” libel laws. Given the many dust-ups Donald Trump had already had with news organizations, this statement caused many people to raise their eyebrows, anticipating that it would lead to a large number of lawsuits involving the soon-to-be president-elect.

Trump’s presidency is sure to bring libel to the forefront quite a bit over his tenure in office. It is a perfect storm of factors that can lead to lawsuits and opinions from people on both sides about what constitutes libel and who should be disciplined for what they say. For one, President Trump is very outspoken. One example of this is his continued use of his personal Twitter account @realDonaldTrump. On his twitter account he has already used the term “fake news” many times, accusing many people and organizations of practicing fake news. In the modern United States, we have never seen a president be so outspoken in making accusations of people attempting to damage his reputation.

Trump himself has been no stranger to controversy. He has said many things that have caused outrage, both political and ethical in nature. These controversies have caused many media members to call him out, and when fake news is available to everybody in a matter of seconds, it is easy to assume that a lot of this fake news will be brought to the attention of Donald Trump himself, who, as noted earlier, has already been on the record strongly advocating against libel. This makes fake news all the more dangerous.


In the last few years, especially during the 2016 presidential election, fake news has been a widespread concern. First of all, the definition of “fake news” itself has seen some debate. When we hear fake news mentioned in the media (namely on the POTUS’s Twitter account, where he often uses the term) it is sometimes used to mean any news article that might not be sympathetic to the values of the poster, or in support of their ideals and goals, even when those reports are factually accurate. When we, students and editors of media, talk about fake news, we are referring to the publication of an actual false report that is intended to be taken as truth. Fake news attempts to pass off made-up stories as factual information. Doesn’t that make it libel or slander? Would that not make it extremely easy to prosecute legally? Not entirely.

Libel is defined as a published false statement that is damaging to the reputation of a person. So while fake news is definitely the publication of false statements, it is not always defamatory to the character of others. Oftentimes, though, it is. Why then are we not quick to take action against these false publications? The answer is complicated. The law itself is not exactly crystal clear on internet libel, it is often difficult to track down anonymous content posters, and people who aim to circulate fake news often choose to have their sites hosted in places they know are not traceable by the US government. Many times, as long as the website itself did not post the offending content, it cannot be held responsible for the user who did. This makes it exceedingly difficult to prosecute those who perpetuate fake news stories.

Another thing to be concerned with in regard to widespread fake news is how powerful it can be. If the general public is not careful to follow fact-checking steps when reading articles, they are contributing to the power fake news has over them. More and more often, fake news articles (attempting to pass as legitimate, truthful information) are shared around Facebook, Twitter, and other media outlets. They are created with attention-grabbing (often outlandish) headlines in the hopes of generating clicks, likes, and shares, which in turn helps the host site gain revenue from ads and other related sources. During the presidential election of 2016, headlines making absurd claims, like Pope Francis backing Donald Trump as a candidate or Hillary Clinton selling guns to ISIS, were seen all over the internet. These articles became so prevalent that many believe they actually helped sway the election in Trump’s favor. The general news reader needs to constantly be wary of the information presented to them as “news.”

That said, we need to be sure that we are being perfectly clear when referring to fake news. As mentioned earlier, the definition of fake news differs from person to person. When searching for the definition, a broad array of interpretations is available. For example, there is satirical news (e.g. The Onion), which makes no attempt to be harmful and is not anything more than a clever joke, but does offer a serious potential to fool a reader who was not aware that what they were reading was satire. This is why satirical websites should be required to state their nature clearly enough to be understood. On the other hand, there are also articles that are completely fabricated and designed with the intent to do harm. It is these types of fake news that can have the most negative impact, but we should also worry about the harm fake news as a whole has done to the public’s trust in the mass media.


Online Abuse

Harassment

161208 - The Washington Post
This is what happens when Donald Trump attacks a private citizen on Twitter

Via Farhad Manjoo @fmanjoo Tech writer for the NYT

 

Trolling and Targeted Attacks


161204 - The Guardian
The trolling of Elon Musk: how US conservatives are attacking green tech

Threats

161206 - The Guardian
Megyn Kelly accuses Trump social media director of inciting online abuse

The Fox News host says Dan Scavino is ‘a man who works for Donald Trump whose job it is to stir up’ nastiness and threats, and urges him to stop


Hate Speech



161213 - MIT Technology Review
If only AI could save us from ourselves

Google has an ambitious plan to use artificial intelligence to weed out abusive comments and defang online mobs. The technology isn’t up to that challenge—but it will help the Internet’s best-behaving communities function better.

170115 - BuzzFeed
How to use Facebook and fake news to get people to murder each other

In South Sudan, fake news and online hate speech have helped push the country toward genocide amid a three-year civil war, according to independent researchers and the United Nations. “Social media has been used by partisans on all sides, including some senior government officials, to exaggerate incidents, spread falsehoods and veiled threats or post outright messages of incitement,” a separate report by a UN panel of experts released in November reads.

170112 - VOA
Researchers create South Sudan hate speech lexicon

June 2016

European Court of Human Rights - Hate Speech


Specialized Agencies 

Rating Agencies 

Non-partisan News and Information Objectivity

Could major news producers, distributors and advertisers come together to fund a ratings agency system, organized to objectively analyze the spread of misinformation in society?


Working like financial credit agencies or the Better Business Bureau, could ratings agencies provide valuable services to both consumers and organizations?

  • Consumer Ratings App: news and information rating app – driven by big-data, analytics and crowdsourcing (a rough scoring sketch follows this list)
  • Citizen reports: could anyone submit a “potential misinformation report” for flagging and rating?
  • Organizational services: Big-data, analytics and other services to help advertisers sponsor platforms, channels, programs and presenters that align with their principles
  • Need for Scale. To succeed, would such an infrastructure need to operate at scale, supported by a critical mass of leading digital platform companies and news producers?
  • Next Steps. To further explore this idea, could a group of leaders convene a conversation?
  • Venture capital investment. Once an infrastructure approach is designed, could/would VCs invest in this new kind of infrastructure?
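As promised above, a minimal sketch - with invented numbers and a made-up scoring rule, purely to make the idea concrete - of how a ratings agency might fold confirmed citizen misinformation reports into a consumer-facing, five-star-style site score:

    # Hypothetical scoring rule: the score decays with the rate of
    # confirmed misinformation reports per article checked.
    def site_score(confirmed_reports, articles_checked):
        if articles_checked == 0:
            return None  # unrated until enough data exists
        report_rate = min(confirmed_reports / articles_checked, 1.0)
        return round(5.0 * (1.0 - report_rate), 1)

    print(site_score(3, 60))   # 4.8 - rarely flagged
    print(site_score(30, 60))  # 2.5 - flagged on half its checked articles

A real agency would need to weight reports by reviewer reliability and guard against brigading (the disagreement problem raised under “Website Ratings” below).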

  • Language flags rooted in a lexicon of qualifiers - Sam
  • Standardization of news articles, requiring that the facts preface the article itself in a tl;dr fashion, for ease of cross-referencing what is known to have happened against the reported implications, opinions, conjecture, etc. The facts would be linked to their sources. Viewer suggestions for additional sources (e.g., unpublished or unpopular video feed of the event) could be collected in comments and voted up or down (Reddit style), respectively increasing and decreasing the rank (viewability) of the source by fellow viewers. - Sam

  • User passage-flagging, with flags ranging from "biased language" to "specious" to "false." A flagged passage would be modified (by color-coded highlight or something similar), with a mouseover- or touch-retrieved report of the number of times it's been flagged as flag 1, flag 2, etc. Once a certain number of flags are generated, a fact-checker is sent the data to view and submit their findings, which might have its own color code (see the sketch after this list). - Sam

  • Facebook Pan-User BS-Detection Education (not so invasive if you consider an appearance like Facebook Memories, and this wouldn't be nearly as frequent, if more than a one-time appearance). This could take any number of forms, from critical-thinking tips that appear intermittently to a pan-user video. - Sam
  • Website Ratings: I like the idea of having a star rating for websites to help determine whether they are credible or not. The problem with that is that if people disagree with an article they will just give it a bad rating. There should be a system of people who can give a rating to an article and then write a review of why it is wrong and what it did not prove.
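A minimal sketch of the passage-flagging idea above - flag names from Sam's list, with an invented escalation threshold standing in for the “certain number of flags” that would page a fact-checker:

    from collections import Counter

    FLAG_TYPES = ("biased language", "specious", "false")
    ESCALATION_THRESHOLD = 3  # hypothetical; tune per platform

    class FlaggedPassage:
        def __init__(self, text):
            self.text = text
            self.flags = Counter()  # per-flag-type tallies for the mouseover report

        def add_flag(self, flag_type):
            """Record one reader flag; return True when escalation triggers."""
            if flag_type not in FLAG_TYPES:
                raise ValueError("unknown flag: " + flag_type)
            self.flags[flag_type] += 1
            return sum(self.flags.values()) == ESCALATION_THRESHOLD

    passage = FlaggedPassage("Sources say the vote was rigged.")
    for flag in ("specious", "false", "false"):
        if passage.add_flag(flag):
            print("send to fact-checker:", passage.text, dict(passage.flags))

The same tally structure could feed the color-coded highlight: the dominant flag type decides the color, and the counts populate the mouseover report.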

Media Governing Body

In order for society to progress in the best possible direction we must seek the truth.  Covering up, misinforming, or selectively informing the public may be better for the short term, but for best long term results the full truth must be known.  

How does the public know what the truth is?  The sole purpose of “the media” or an individual journalist is to inform the public.  This brings up the question “who gets to decide what counts as ‘pertinent information?’” Zero bias is not possible, but the best we can hope for is to acknowledge this and get as close as possible to unbiased.

Members of the media need to get together and create some sort of national or even international governing body. Similar to how practicing law is regulated by the Bar Association, I would argue that we must demand equal integrity of, and scrutiny over, our media members. This can never be controlled by the government, and should only be geared towards regulating integrity issues. It could not make it illegal to “practice journalism,” but would be more of a stamp of approval. There should be no restriction on freedom of speech.

As previously mentioned, the government cannot be involved in this.  The members of the media need to compete amongst each other to gain the viewer’s trust.  Therefore, journalists from a variety of political viewpoints need to be at the head of an organization that does the fact checking and investigates misconduct.  There will be no restriction of what is allowed to be reported, but simply a removal of this approval stamp due to misconduct.

Similar to other organizations (MLB, NBA, NRA, etc.), the governing body would need to have elections for some sort of panel to ensure every flavor of perspective (bias) is represented.  Subject matter experts need to be consulted, and ideas put to a vote on what counts as misconduct, and how exactly to deal with suspected individuals or incidents.  Just like how there are ethics involved in practicing law, enforcing the law, military ethics, etc., this panel needs to agree on journalism ethics.  I would also suggest that it should be made transparent to the public who contributes money to the news source, or possibly even restrict who is allowed to contribute financially to a news source just like we wouldn’t want shady contributions given to a judge or police officer.  

In short, the media needs to control itself.  This is not a Facebook issue, or a Twitter issue, or even a fake news issue.  This is a leadership issue.  Although clickbait, fake news, and other forms of misinformation are a problem, they are not the root of the problem.  + @IntugGB


Specialized Models 


Keywords:

Blockchain
Smart Contracts
Crowd Knowledge
Machine Learning
 


Below are some ideas we have been working on for a while: @nickhencher

Create a SITA for news 


Without collective skin in the game, it is unlikely this one initiative will work - this initiative should offer bottom-line benefit. Fake news is not the only problem.

“SITA or Société Internationale de Télécommunications Aéronautiques, was founded in February 1949 by 11 airlines in order to bring about shared infrastructure cost efficiency by combining their communications networks.

A shared cooperative initiative for news would seem to offer wide and far reaching benefits”

Knowledgeable, validated but anonymous fact-checking and validation by the crowd

News articles are graphed and outliers are … (?)

Users’ (checkers’) graphs are created, scoring and matching against the articles being questioned: interests, location, knowledge.

Use BlockOneId (Reuters is open to the use of this technology, contact: Ash) to ensure anonymity of fact checkers and ensure collusion is prevented.

Fact checkers have to be rewarded - blockchain and smart contracts can facilitate this.

Outlier analysis

Leverage machine learning to graph stories. Look for stories that are outliers, flag them as exceptional and begin further validation checks; these can be both automated and human (crowd). Reward checkers.
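A minimal sketch of that outlier step, assuming a corpus of story texts and using off-the-shelf scikit-learn pieces (the tiny corpus and the contamination setting are invented for illustration):

    from sklearn.ensemble import IsolationForest
    from sklearn.feature_extraction.text import TfidfVectorizer

    # Invented mini-corpus; in practice this would be the day's story stream.
    stories = [
        "Senate passes budget bill after long debate",
        "Budget bill passes Senate following debate",
        "Senate budget vote ends weeks of negotiation",
        "Celebrity secretly replaced by body double, insiders say",
    ]

    # Represent each story as TF-IDF features, then fit an outlier detector.
    features = TfidfVectorizer().fit_transform(stories).toarray()
    detector = IsolationForest(contamination=0.25, random_state=0).fit(features)

    for story, label in zip(stories, detector.predict(features)):
        if label == -1:  # -1 marks an outlier
            print("flag for validation, route to (rewarded) checkers:", story)

Real systems would work on richer embeddings and propagation signals, but the shape is the same: model the normal story stream, surface what sits far outside it, and hand those cases to the validation pipeline.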

Transparency at user engagement

Fact checking is not simple and is time consuming - assuming readers will do this is not a solution.

Assume that Fake News is going to become more sophisticated and weaponised.  At the moment this initiative seems to be focused on static event news - this is going to move to live events and there will be consequences - fake news consumed at an event can quickly turn a demo into a riot. Witness accounts and location data will become essential when validating eyewitness accounts.

“FourSquare” for News

Hand in hand with the above, it should be possible to allow positive validation of an article. If someone reads an article, they can check in as having read it; this engagement can then be taken down multiple paths.

Delay revenue realisation for unverified news sources

Turn the model on its head and give the fact-checkers the revenue from the fake news stories. One reason (excluding state-sponsored actors) these stories are being produced is money - delay the money and change the terms.

(similar to the YouTube model with pirated content)

Wikipedia: How to Identify Reliable Sources (Mel @mkramer)




Other ideas


  • Another concept, somewhat related to what Connie mentions [Wikipedia for news sources]. - @IntugGB

    It deals with an ad-blocker, but the premise behind it is anything but what you would expect. D&AD designed a tool that, when you access YouTube, shows an ad before every single video. The ‘catch’ is that only the absolute best are shown; the work is so beautiful, you really do not mind watching the whole thing (some are even small films). Search deeper and you realize that D&AD is behind an award recognized globally as the ultimate creative accolade. Members are part of a vibrant community, inspired by their world-class training program.

    An ad they ran, for example, brilliant in its simplicity and perfect for this project:  

    Mute [Full Screen version]

    Read the message. Let it sink in.

  • FB and Twitter - Fake news would not be that big of a problem if not for social networks like Facebook and Twitter, where powerful people retweet dumb and false things and then the media picks them up. We need to devise a hack or fix for this.

Ideas:

  • Drown them out. Make it such that their tweets are not heard.

  • Have articles marked in Twitter as false so people see a retweet that the article is false.

  • Fake news looks too much like real news so it needs to be clearly marked and labeled as false news. Perhaps a ratings system on Twitter or confidence meter in facts.

  • It is also, in a way, for Twitter to decide: are they an atomic bomb devoid of morality? Do they let their platform be used for evil or for good? A knife is just a tool, but someone who uses a knife faces repercussions. As with speech causing harm in a crowded movie theater, with a powerful tool like Twitter, where words can lead to violence, there ought to be some understanding that bodily harm may come to people, and a responsibility to do something about that and hold people accountable for the harm they do with a mass communication tool like Twitter.

  • A Wikipedia-style hosted-life-bits [data via self-talk.. so a wikipedia-like gathering of everyone's insight on some blockchain-type host.. captured/sourced per topic.. like billions of wikipedia editors via mechanism facilitating that chaos].. [w/in gershenfeld sel.. so less fake happening on purpose - huge - perhaps what we’re missing most] -@monk51295
  • This may exist elsewhere in this document, but I haven’t come across it: public education / courses about media and manipulation of fact. Carl Bergstrom and Jevin West, two college professors in Seattle, have proposed a course for the University of Washington called “Calling Bullshit”. There’s some good thinking on their site. -- @joshua_berger



Solutions for Fake News

Fake news is becoming an epidemic, as online media makes it increasingly easy to create propaganda that misleads readers away from the truth. Finding solutions for handling this outbreak is proving troublesome: sifting through what is fake news and what is real news is much like sifting through the sand in a desert. Understanding the reasons for the creation of fake news might play a key role in discouraging it. Another problem is that people are not informed enough about fake news or how to avoid falling for it, whether through a lack of knowledge or a lack of concern.

Generating revenue seems to be one of the leading reasons that fake news has become so popular. Many of the “click-bait” sites are made to generate revenue from ads or endorse products by tricking people into visiting them with false headlines. The more popular a site becomes, the more incentive its creators have to flood the internet with more websites to increase profit.

 

One solution to ending fake news would be to mandate that sites without properly certified or researched information place disclaimers before a visitor enters a website that produces articles, such as blogs. The problem with this solution is that it would require domain hosts to perform checks on websites that register as blog sites. This would allow free speech to continue on the internet, and creators could still prosper from ad revenue, while the person is informed that they are visiting an unregistered site. Larger sites would have to step up and have a method of alerting users to external links that may be false or are not certified as non-“click-bait” sites.

Fake news is the newest form of shanghaiing people, or tricking them into doing something that they do not wish to do, such as reading false information on something they believe to be true. The most efficient way to stop this epidemic would be to inform and teach people what to look out for, such as big warning signs like false titles. Some attempts to battle this have already been made, as seen in a series of seminars by Dr. Kyle Moody, Assistant Professor of Communications Media at Fitchburg State University, held in various towns in Massachusetts earlier in 2017. Fake news preys on people seeking information of interest, but without being educated on today’s media it’s like shooting arrows in the dark.


Linked-Data, Ontologies and Verifiable Claims

By:  @Ubiquitous 


Linked-Data

Linked-Data[9] is a technology that produces machine- and human-readable information that is embedded in webpages. Linked-Data powers many of the online experiences we use today, with a vast array of the web made available in these machine-readable formats. The scope of linked-data use, even within the public sphere, is rather enormous[10].

Right now, most websites are using ‘linked data’ to ensure their news is being presented correctly on Facebook[11] and via search, which is primarily supported via Schema.org[12] [13].

The first problem is that these ontologies do not support concepts such as genre[14]. This means, in turn, that rather than ‘news’ being classified[15], as it would be in any ordinary library or newspaper, the way in which ‘news’ is presented in a machine-readable format is particularly narrow and without (machine-readable) context.

This means, in-turn, that the ability for content publishers to self-identify whether their article is an ‘advertorial’, ‘factual’, ‘satire’, ‘entertainment’ or other form of creative work - is not currently available in a machine-readable context.

This is similar to the lack of ‘emotions’ provided by ‘social network silos’[16] for understanding ‘sentiment analysis’[17][18] through semantic tooling that offers the means to profile environments[19] and provides tooling for organisations. Whilst Facebook offers the means to moderate particular words for its Pages product[20], this functionality is not currently available to humans (account holders).

The mixture of a lack of available markup language for classifying posts, alongside the technical capabilities available to ‘persona ficta’ in a manner that is not similarly available to Humans, contributes towards the lack of ‘human centric’ functionality these platforms currently exhibit.
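To make the missing-markup point concrete, here is a minimal sketch of what publisher self-classification could look like as schema.org-style JSON-LD. The exact property is illustrative (a hypothetical genre-like field), not a statement of what current consuming platforms recognise:

    import json

    # Hypothetical self-identified classification embedded by the publisher.
    article_markup = {
        "@context": "http://schema.org",
        "@type": "NewsArticle",
        "headline": "Example headline",
        "datePublished": "2017-03-01",
        "publisher": {"@type": "Organization", "name": "Example Publisher"},
        "genre": "satire",  # could equally be "advertorial", "factual", ...
    }

    # This dictionary would ship inside the page as:
    # <script type="application/ld+json"> ... </script>
    print(json.dumps(article_markup, indent=2))

The argument above is precisely that this kind of self-declared, machine-readable classification is not honoured end-to-end today: even where such a property can be written, platforms do not surface it to readers.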

Bad Actors and Fact-Checking


In dealing with the second problem (in association with the use of Linked-Data), the means by which to verify claims is available through the application of ‘credentials’[21] or Verifiable Claims[22], which in turn relates to the Open Badges Spec[23].

These solutions allow an actor to gain verification from 3rd parties, providing their audience greater confidence in the claims represented by their articles. Whether it is the means to “fact check” words, ensure images have not been ‘photoshopped’[24], or other ‘verification tasks’, one or more reputable sources could use verifiable claims to in turn help the end-user (reader/human) gain confidence in what has been published. Pragmatically, this can either be done locally or via the web through 3rd parties through the use of Linked-Data. For more information, get involved in W3C[25]; you’ll find almost every significant organisation involved with Web Technology debating how to build standards to define the web we want[26].
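A toy sketch of the claim-verification exchange described above. Real Verifiable Claims use public-key signatures over JSON-LD documents; the HMAC here is a deliberately simplified stand-in, just to show the shape of issue-then-verify:

    import hashlib
    import hmac

    CHECKER_KEY = b"demo-secret"  # stand-in for the checker's signing key

    def issue_claim(article_url, verdict):
        """A 3rd-party checker asserts a verdict about an article."""
        payload = (article_url + "|" + verdict).encode()
        signature = hmac.new(CHECKER_KEY, payload, hashlib.sha256).hexdigest()
        return {"article": article_url, "verdict": verdict, "signature": signature}

    def verify_claim(claim):
        """A reader's client checks the claim really came from the checker."""
        payload = (claim["article"] + "|" + claim["verdict"]).encode()
        expected = hmac.new(CHECKER_KEY, payload, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, claim["signature"])

    claim = issue_claim("https://example.com/story", "images-not-photoshopped")
    print(verify_claim(claim))  # True; tampering with the verdict would fail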


General (re: Linked Data)

If you would like to review the machine-readable markup embedded in the web you enjoy today, one of the means to do so is via the OpenLink Data Sniffer[27]. An innovative concept for representing information was produced by Ted Nelson[28] via his Xanadu Concept[29].

Advancements in Computing Technology may make it difficult to trust media-sources[30] in an environment that seemingly has difficulty understanding the human-centric foundations to our world; and, where the issues highlighted by many, including Eben Moglen[31], continue to grow.  Regardless of the technical means we have to analyse content[32], it will always be important that we consider virtues such as kindness[33]; and, it is important that those who represent us, put these sorts of issues[34][35] on the agenda in which “fake news” has become yet another example (or symptom) of a much broader problem (imho).

A simple (additional) example of how a ‘graph database’ works is illustrated by this DbPedia example[36]. The production of “web 3.0”[37] is remarkably different from former versions due to the volume of pre-existing web-users. Whilst studies have shown that humans are not really that different[38], the challenge becomes how to fund the development of works that are not commercially focused (ie: not in the interests of ‘persona ficta’[39]) in the short term, and how to address issues such as ‘fake news’ or, indeed, even how to find a ‘toilet’[40]. As ‘human centric’ needs continue to be unsupported via the web, and indeed by the emerging intelligent assistants[41] working upon the same datasets, the problem technologists have broadly produced becomes that of a world built for things that ‘sell’, without support for things we value: support for helping vulnerable people, receipts that don’t fade (ie: not thermal, but machine-readable), civic services, the means to use data to uphold the ‘rule of law’, to vote and participate in civics, or the array of other examples in which we have the technology, but not the accessible application of that technology to social/human needs.
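For readers who want to poke at the DbPedia-style graph example themselves, a small sketch using the public DBpedia SPARQL endpoint (the SPARQLWrapper package and the endpoint are real; the particular query is just an illustration, and the endpoint's availability is not guaranteed):

    from SPARQLWrapper import SPARQLWrapper, JSON  # pip install sparqlwrapper

    sparql = SPARQLWrapper("https://dbpedia.org/sparql")
    sparql.setQuery("""
        PREFIX dbo: <http://dbpedia.org/ontology/>
        SELECT ?abstract WHERE {
          <http://dbpedia.org/resource/Fake_news> dbo:abstract ?abstract .
          FILTER (lang(?abstract) = "en")
        }
    """)
    sparql.setReturnFormat(JSON)

    # Each binding is one row of the graph query result.
    results = sparql.query().convert()
    for row in results["results"]["bindings"]:
        print(row["abstract"]["value"][:200])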

Indeed the works we produce and contribute on the web are for the most-part provided not simply freely, but at our own cost.   The things that are ‘human’ are less important and indeed, poorly supported.

This is the bigger issue.  We need to define means to distil the concept of ‘dignity’ on the web. Apps such as Facebook often have GPS history from our phones; does that mean the world should use that data to identify who broke into a house? If it is said you broke a speed limit in your vehicle when the GPS records show you were somewhere else, how should that help you?  



Reading Corner - Resources


Developing stories, research, ongoing initiatives, tools, etc. all related with the project.

Please post links with a reference to who posts. MT indicates the article has been brought in from a post on Twitter. Some will include a description of the source, along with areas of expertise.

< verified account  << key contact  <<< collaborator



Social Networks

Facebook

161125         The Guardian - Facebook doesn't need to ban fake news to fight it via @charlesarthur Freelance tech journalist; The Guardian's Technology editor 2009-14 <<

161123         MIT Review - Facebook’s content blocking sends some very mixed messages via @techreview <

161122        Reuters - Facebook builds censorship tool to attain China re-entry MT @dillonmann Communications Director @webfoundation 

161122         NYT - Facebook said to create censorship tool to get back into China - MT @lhfang Investigative Journalist. @theintercept lee.fang@theintercept.com

161119        Recode - Here’s how Facebook plans to fix its fake-news problem - Steffen Konrath @LiquidNewsroom <

160520         Guardian - The inside story of Facebook’s biggest setback - MT @GrahamBM << Founder of Learning Without Frontiers (LWF) 

Twitter 

161123        VB - Twitter Cortex team loses some AI researchers MT @LiquidNewsroom <

161107        The Washington Post - This researcher programmed bots to fight racism on Twitter. It worked. MT @mstrohm


Google

161013         Google - Journalism & News: Labeling fact-check articles in Google News By Richard Gingras, Head of News, Google

Google Support : What does each source label (e.g., “blog”) mean?


Source labels are a set of predefined, generally understood terms that describe the content of your news site and serve as hints to Google News to help classify and show your content.


Filter Bubbles

170113         The Guardian - Self-segregation: how a personalized world is dividing Americans

161122        NYT Magazine - Is social media disconnecting us from the big picture? - MT Howard Riefs @hriefs Director, Corporate Communications @SearsHoldings  

161120        NPR - Post-election, overwhelmed Facebook users unfriend, cut back - MT @newsalliance <

161112         Medium - How we broke democracy

161116        Tiro al aire: Romper la burbuja [Shot in the air: Bursting the bubble] - by @noalsilencio

WSJ - Blue Feed Red Feed

The Filter Bubble: What the Internet is hiding from you  Slide presentation by @EliPariser <<< MT @noalsilencio


Automated Systems

Algorithms

170222        Seeker - How a Twitter algorithm could bring Democrats and Republicans closer together By Kiran Garimella Via @gvrkiran

161123        Medium - Detecting fake viral stories before they become viral using FB API by @baditaflorin <<  Data Scientists at Organised Crime and Corruption Reporting Project (OCCRP) 

161123 - Medium - How I detect fake news by @timoreilly << Founder and CEO of O’Reilly Media @OReillyMedia

170317 - Could an auto logic checker be the solution to the fake news problem? @CrispinCooper (apologies for self promotion)


Common Crawl - @msukmanowsky 
Page Rank - authority of web domain - @timoreilly via @elipariser
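
As a rough illustration of the “page authority” idea mentioned just above, here is a toy PageRank computed by power iteration over a tiny, made-up link graph; real authority scores are computed over web-scale crawls such as Common Crawl.

    # Toy PageRank by power iteration over a tiny, hypothetical link graph.
    links = {
        "a.example": ["b.example", "c.example"],
        "b.example": ["c.example"],
        "c.example": ["a.example"],
    }

    damping = 0.85
    n = len(links)
    rank = {page: 1.0 / n for page in links}  # start with uniform scores

    for _ in range(50):  # iterate until the scores settle
        new_rank = {page: (1 - damping) / n for page in links}
        for page, outlinks in links.items():
            share = damping * rank[page] / len(outlinks)
            for target in outlinks:  # each page passes authority to the pages it links to
                new_rank[target] += share
        rank = new_rank

    for page, score in sorted(rank.items(), key=lambda kv: -kv[1]):
        print(page, round(score, 3))

The intuition for fake-news work: a domain that nobody reputable links to accumulates little authority, no matter how widely its stories are shared.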


User Generated Content 

161121 - Slate - Countries don't control the Internet. Companies do.

141023 - Wired - The laborers who keep dick pics and beheadings out of your Facebook feed 


Firstdraftnews.com

A site with readings and resources about verifying information that circulates via social media

Verification Handbook [PDF] - @Storify

Wikipedia: Identifying reliable sources



Dynamics of Fake Stories

161123        NPR - We tracked down a fake news creator in the suburbs. Here's what we learned 

161123         Medium - Fixing fake news: Treat the problem not just the symptom

161122         Medium - Fake news is not the only problem by @gilgul Chief Data Scientist @betaworks, co-founder @scalemodel | Adjunct Professor @NYU | @globalvoices

161120        NYT - How fake stories go viral

161111         Medium - How we broke democracy MT @TobiasRose 

150215        Tow Center for Digital Journalism - Lies, damn lies and viral content - @TowCenter via Steve Runge

Joined-up Thinking - Groupthink

161118         Medium - Does fake news on Facebook make smart people stupid?

Manipulation and ‘Weaponization’ of Data

170212         Medium - The rise of the weaponized AI Propaganda machine 
There’s a new automated propaganda machine driving global politics. How it works and what it will mean for the future of democracy.

170130         The myth that British data scientists won the election for Trump 
A piece of data science mythology has been floating around the internet for several weeks now. It surfaced most recently in Vice, and it tells the story of a firm, Cambridge Analytica, that was supposedly instrumental in Donald Trump’s campaign.
MT Jonathan Albright @d1gi1

170128         Motherboard - The data that turned the world upside down

161118 -         Medium - How the Trump campaign built an identity database and used Facebook ads to win the election  Joel Wilson @MedicalReport Consumer protection litigator. Former deputy attorney general in Trenton. Via @WolfieChristl Researcher, activist

16117        Medium - What’s missing from the Trump election equation? Let’s start with military-grade psyOps by Jonathan Albright @d1gi
Too many post-election Trump think pieces are trying to look through the “Facebook filter” peephole, instead of the other way around. So, let’s turn the filter inside out and see what falls out.


160201        The Guardian - Ted Cruz erased Trump's Iowa lead by spending millions on voter targeting


Click Farms - Targeted attacks

161118        The Drive - These are the lobbyists behind the site attacking Elon Musk and Tesla via @ElonMusk Tesla, SpaceX, SolarCity, PayPal & OpenAI

161120         Never mind the algorithms: The role of click farms and exploited digital labor in Trump's election - MT @FrankPasquale < Author The Black Box Society: The Secret Algorithms Behind Money & Information 

Propaganda

160912          Autostraddle - This is how Fox News brainwashes its viewers: Our in-depth investigation of the propaganda cycle

161124        Washington Post - Russian propaganda effort helped spread ‘fake news’ during election, experts say 
MT @jonathanweisman Deputy Washington Editor, The New York Times via Centre for International Governance Innovation @CIGIonline 

161002         RT - Russian server co. head on DNC hack: ‘No idea’ why FBI still has not contacted us


Key takeaway - Take note of verification procedures:

[Vladimir Fomenko, owner of the Russian server company implicated in the DNC hack told RT] he was as surprised to learn from US media that his company was somehow implicated. He also believes that the only connection to Russia the Americans really have is the servers being from there.

“Thinking that the criminals must likewise also be from Russia is just absurd,” he says. “No one blames Mark Zuckerberg when criminals use Facebook for their own ends? … As soon as we learnt our servers were involved, we disconnected the perpetrators from our equipment. And conducted our own investigation. We have learnt certain things and are ready to share it with special services at their first call.”

Viral content

161123 - Digiday - 'It was a fad': Many once-hot viral publishers have cooled off - via  @Digiday  

161111 - The Verge - Understanding how news goes viral: Facebook buys CrowdTangle, the tool publishers use to win the internet - MT @betaworks

Lies, damn lies, and viral content [PDF - 168 pages] by Craig Silverman, Tow Center for Digital Journalism

Satire

170106          The Guardian Will satire save us in the age of Trump?

161123         The Media Briefing - Could satire get caught in the crossfire of the fake news wars? - MT @JeffJarvis


Behavioral Economics

La démocratie des crédules [Democracy of the Credulous] by Gérald Bronner

Excerpt from the book [translation]:


Why do conspiracy myths invade the minds of our contemporaries? Why does the coverage of politics tend to turn into celebrity gossip? Why are people always suspicious of science? How could a young man claiming to be the son of Michael Jackson, and to have been raped by Nicolas Sarkozy, be interviewed on a major evening television news broadcast? How, more generally, do imaginary or invented facts, even patently false ones, succeed in spreading, attracting public support, influencing policy decisions, in short, shaping part of the world in which we live? Was it not reasonable to hope that, with the free flow of information and rising levels of education, democratic societies would tend towards a form of collective wisdom?


This invigorating essay proposes, by marshalling numerous examples, to answer all these questions, showing how the conditions of contemporary life have allied with the intimate workings of our brain to make us dupes. It is urgent that we understand this. If you have only one book to read, this is it.


Interview [Audio in French]: Les Matins de France Culture - Gérald Bronner

Predictably Irrational: The hidden forces that shape our decisions by Dan Ariely @danariely << Professor of Psychology and Behavioral Economics

Thinking, Fast and Slow by Daniel Kahneman Professor of Psychology and Public Affairs

Engaging the reader in a lively conversation about how we think, Kahneman reveals where we can and cannot trust our intuitions and how we can tap into the benefits of slow thinking. He offers practical and enlightening insights into how choices are made in both our business and our personal lives―and how we can use different techniques to guard against the mental glitches that often get us into trouble.


http://www.huffingtonpost.co.uk/entry/what-is-fake-news_uk_5878e135e4b04a8bfe6a612e

Political Tribalism - Partisanship

170113         Fortune - What’s driving fake news is an increase in political tribalism by Mathew Ingram @mathewi

“Researchers argue that this powerful desire to be seen as a member of a specific group or tribe influences the way we behave online in a variety of ways, including the news we share on social networks like Facebook. In many cases it's a way to signify membership in a group, rather than a desire to share information.”

170111         NYT - The real story about fake news is partisanship

“Today, political parties are no longer just the people who are supposed to govern the way you want. They are a team to support, and a tribe to feel a part of. And the public’s view of politics is becoming more and more zero-sum: It’s about helping their team win, and making sure the other team loses.

… Partisan bias fuels fake news because people of all partisan stripes are generally quite bad at figuring out what news stories to believe. Instead, they use trust as a shortcut. Rather than evaluate a story directly, people look to see if someone credible believes it, and rely on that person’s judgment to fill in the gaps in their knowledge.”

161114        Medium - I’m sorry Mr. Zuckerberg, but you are wrong - MT Danah Boyd @zephoria <<

Ethical Design

160816        Nieman Lab Designing news products with empathy: How to plan for individual users’ needs and stresses

160518 - How technology hijacks people’s minds — from a magician and Google’s design ethicist by Tristan Harris @tristanharris < Ex-Design Ethicist @Google <<

161122        Medium - An open letter to my boss, IBM CEO Ms. Ginni Rometty MT @katecrawford Expertise: machine learning, AI, power and ethics  

161113         Medium - The code I’m still ashamed of

Journalism in the age of Trump

170214          Politico - Have TV media had their fill of Kellyanne? Via Poynter Institute @Poynter

170214         Vogue - CNN’s Jake Tapper on emerging the journalistic hero—and Internet sensation—of the Trump era

161205        MSNBC - Trump allies defend his election lie as ‘refreshing’

161202        Los Angeles Times - Trump talks to the public through Twitter. Here's what happens when your next president blocks you

161201         Bill Moyers - Trump’s seven techniques to control the media via Robert Reich @RBReich

161122        Medium What journalism needs to do post-election  by @Brizzyc Social Journalism Director at CUNY MT @jeffjarvis

161122         CJR Maneuvering a new reality for US journalism MT @astroehlein European Media Director, @HRW

161122         The Washington Post - What TV journalists did wrong — and the New York Times did right — in meeting with Trump - MT @JayRosen << Professor of Journalism at @NYUniversity

161121         In Trump territory, local press tries to distance itself from national media

161109         NYT - A ‘Dewey defeats Truman’ lesson for the digital age MT Karen Rundlet @kbmiami Journalism Program Officer @KnightFdn

171016        Fox News - Trump is not threatening the First Amendment; Americans' ignorance of what it means most definitely is
http://www.foxnews.com/opinion/2017/10/16/trump-is-not-threatening-first-amendment-americans-ignorance-what-it-means-most-definitely-is.html

Cultural Divide

161116        Vox - For years, I've been watching anti-elite fury build in Wisconsin. Then came Trump.

161115         CJR - Q&A: Chris Arnade on his year embedded with Trump supporters - MT @mlcalderone Senior media reporter, @HuffingtonPost; Adjunct, @nyu_journalism <<

161123        CNN - What about the black working class? - MT @tanzinavega CNN National reporter race/inequality

Cybersecurity

161123         Washington Post - Journalists report Google warnings about ‘government-backed attackers’

Experiences from Abroad

161122 - CJR - Maneuvering a new reality for US journalism by  @NicDawes Media at Human Rights Watch (@hrw)  MT @astroehlein European Media Director, @HRW <<

161123 - [IND] The Times of India - Ban on misleading posts: Collector served legal notice- MT @jackerhack  Co-founder @internetfreedom <

Media Literacy

161122 - Quartz - Stanford researchers say young Americans have no idea what’s news - MT @MatthewCooney Principal @DellEMC

160829 - Common Sense Media - How to raise a good human in a Digital World - MT @CooneyCenter 



Resources list for startups in this space

Everyone is interested in this right now, so I think it’d be useful to have a list of those wanting to actively work on it, and any other resources that could be applied to the task.

Interested in founding

  • Craig Ambrose: Senior developer / social entrepreneur from Enspiral. Some ideas here. Particularly interested in platform cooperative approaches.

  • Lina Rodriguez: Special advisor on community informatics. Currently serving as Liaison to the Board of Directors for INTUG. Founding member of SENATUS, a private network based in Singapore, similar to LinkedIn but with more advanced features (in stealth mode). Lead Strategy for @Media_ReDesign, administrator of this project.
  • Tim McCormick:  Working on Diffr: research + ideas collector, and solutions incubator, for issues of filter bubble and media diversification. [update 29 Nov: discussing with Craig Ambrose (see above) a project FilterBurst to explore alternate soc-media models addressing these issues.]   Twitter: @tmccormick;  tmccormick@gmail.

  • Pawel Wargan: Currently working as a lawyer. Interested in building a cross-partisan news website featuring an online debating platform that replaces traditional comments. Users highlight a segment of text, agree or disagree with its content, and have the option to debate someone who took the opposite position. The idea is to gamify public discourse, but also to create more fine-tuned metrics for engagement and financial incentives for moving beyond mere likes, shares or comments. Please get in touch for more details.

  • James Slezak: Working on how we measure the persuasive impact / cross-over appeal of media content
  • Peter Gault: Developing a K-12 tool that now teaches writing, and we will be launching tools in the near future that teach critical thinking and debate. Looking for people interested in design and building educational tools that build logical reasoning skills - peter@quill.org 

  • Shane Greenup: I’ve been working on rbutr since 2012. Rbutr is a memetic immune system for the web, allowing corrections and rebuttals to be connected to the webpages they correct.
  • Florin Badita: OCCRP data journalist, activist. Working on a platform that will detect fake viral news before it goes viral. Also working on an open graph press standard that would allow better data analysis of an article's content by letting us trace it back to the original source (see the sketch after this list).
  • Raymond Zhong: Product/ design/ engineering at Predata, a predictive analytics startup that tracks online conversation metadata.

  • Aaron Goldzimer:  Former international environmental and human rights advocate (Environmental Defense Fund) who, for the past several years, has been focused on U.S. political dysfunction and polarization, including at Yale Law School and Stanford’s Graduate School of Business.  Working on several ideas in this space, including a potential advertiser boycott.
  • Kjetil Kjernsmo (email): Ph.D. in Informatics, University of Oslo, Norway. Started working with disinformation on the Web in 1996; got industry experience developing social media. Working to establish a startup and/or research project with collaborators in Oslo, Norway on decentralised social media. I think the main technical contribution towards this problem is platform disruption and enabling dialogue; however, most of the problems are not technical.

  • Dan Whaley (email):  Founder of the non-profit Hypothes.is.  We’re funded through major grants from Sloan, Mellon, Knight, Omidyar and Helmsley Foundations, have a full time team of 14 and are building infrastructure to focus on this problem at web scale.  We helped launch the Web Annotation Working Group under the W3C, which we believe is an important aspect of the necessary architecture.  Here’s an example of what we’re up to:  NY Times article.

  • Nick Adams (email), Ph.D: Sociologist/ Data Scientist @ Berkeley Institute for Data Science. Leads BIDS working group on “UnFaking the News” and an expanding software development team (including luminary Berkeley professors) that is currently prototyping citizen science software (described above) that could contribute to an ecosystem of solutions.

  • Aviv Ovadya (email): Building an org to provide visibility and data about the credibility of most popular media consumed online. Building impartiality and trust in the credibility rating process is a critical initial focus. Product designer and software engineer (working with journalists across the political spectrum). Aiming to have a public pilot by end of January, and interested in additional resources and collaborators to make that happen in time.
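
Referenced from Florin Badita's entry above: a minimal sketch of what an "open graph press standard" might look like. The og:* properties exist today; the article:original_source property is invented here purely for illustration and is not part of any current standard.

    # Sketch of meta tags an "open graph press standard" might require.
    # The og:* properties exist today; article:original_source is a hypothetical
    # extension, invented here to illustrate tracing a story to its origin.
    tags = {
        "og:type": "article",
        "og:title": "Example syndicated story",
        "og:url": "https://aggregator.example.com/copy-of-story",          # this copy
        "article:original_source": "https://paper.example.org/the-story",  # claimed origin
    }

    for prop, content in tags.items():
        print('<meta property="%s" content="%s" />' % (prop, content))

A crawler could follow article:original_source hop by hop until it reaches a page that declares none, reconstructing the chain back to the first publisher and exposing circular sourcing along the way.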

Interested in advising



Interested in partnering

  • Civic Hall, a collaborative work and event community based in New York City, founded by Andrew Rasiej and Micah Sifry, who also run the annual non-partisan Personal Democracy Forum

  • Newsvoice - A crowdsourced news aggregator app, available on App Store and Google Play. Breaks filter bubbles by showing more perspectives to each story. Planning to also aggregate fact checks. Looking for volunteers, investors, and partners. Check the video or contact viktor@newsvoice.com.

  • Tribeworthy is dedicated to connecting people to the most ‘Trusted’ and ‘Newsworthy’ online articles. To discover which articles those are, we’ve empowered the crowd to critically review any online article. The reviews create a trust rating for each article, author, and publisher, giving news consumers helpful feedback when deciding where to get their news. This is what we’re calling Crowd Contested Media (see the sketch after this list).

                Please email jared@tribeworthy.com to get in touch.

  • Factmata - We are building an annotation tool that focuses specifically on user-generated, collaborative fake-news detection and fact checking. The aim of the tool is to help journalists, fact checkers and citizens stamp out misleading content on the web in real time. We are looking to partner with journalists, investigative reporters, fact-checking communities and more, to get feedback and improve the tool and make the process of verification easier! Email info@factmata.com for more details.

  • Global Voices, an international citizen media organization, has partnered with Media Cloud (MIT Media Lab/ Berkman Center) to build/launch something we’re calling the “NewsFrames Initiative” (for now these posts have some details).  We’re tackling the issue of the relationship of the media/audience through data, and also working with our fellow partners at FirstDraftNews.com on the issue of fake news (stay tuned to that site for more hopefully soon!) -- yes, FB and Google Labs are partners.  Anyone interested in that media profile thing mentioned above,  please keep in touch!  That’s part of our design too. -- Connie Moon Sehat

  • First Draft We provide practical and ethical guidance in how to find, verify and publish content sourced from the social web. The focus of our work includes:
  • Misattributed and manipulated images that circulate widely online
  • Eyewitness photographs and videos captured at the scene of a news event
  • Claims and content shared on social media and on private messaging apps
  • Hoaxes and fake stories generated for financial or political gain

First Draft formed as a nonprofit coalition in June 2015 to raise awareness and address challenges relating to trust and truth in the digital age. These challenges are common to newsrooms, human rights organizations and social technology companies and also their audiences, communities and users. We offer quick reference resources, case studies and best practice recommendations on firstdraftnews.com.

In September 2016 we launched the First Draft Partner Network, the first of its kind to bring together the largest social platforms with global newsrooms, human rights organizations and other fact-checking and verification projects around the world. The Partner Network is based on the idea that the scale of the challenges that society faces around filtering factual information and authentic content can only be tackled via a global collaboration of organizations working together to find solutions. Our partners are journalism, human rights and technology organisations that have an international remit and work at the intersection of information distribution and social media --@cward1e

  • detHOAXicate, an early-stage open source project for tracking article sources, detecting circular dependencies and crowdsourcing fact-checking. The mid-term purpose is to create a social platform that can score the credibility of a news source and also trace, in a fully transparent way, every single fact that contributes to that score.

  • UpRead We are building citizen science software that will allow students, newsreaders, and crowds to label words and sentences by various epistemologically-defensible metrics of truthiness. These metrics will aggregate into journalists’ and publishers’ reputations (that they could turn into profit by keeping the fishiness out of their reporting and op/eds). (And it’s W3C compliant.) We are open to collaborating, and hoping to have a working prototype by inauguration day. We would love to have developers join us and others in this ecosystem.

  • Explaain We’re building an overlay of facts to sit on top of articles, and signing up news sites as partners to use our technology. The result will be a common standard of facts across our network of news websites, and crucially we’re bringing the facts to the readers on the news sites they already visit, rather than expecting them to come to us. We’re very keen to work with established fact-checking organisations and other potential partners to make sure our content is as reliable and up-to-date as possible. Say hi to @JeremyNEvans!

  • Fiskkit - Fiskkit is a news discussion platform which favors facts, logic and civility in ways that Facebook, Twitter and Reddit don’t. By putting civic discourse into structured data, we can use analytics to enable new insights and tools for readers. At scale, Fiskkit has the potential to quality-check all online news in real time. We are currently working on adoption and fundraising to achieve critical mass. See this onstage demo, for which Fiskkit received the Social Impact Award at the LAUNCH Festival in SF [watch video]. Say hi to founder @johngpettus!

  • Phittle - We create flexible paywall and lead-generation software for publishers, integrated with a news management portal for readers. We are incorporating a fake-news detection and rating system into our software. Since we are open source and offer plugin APIs, we welcome any and all collaboration. We also run a news group on Facebook called News Addicts, which everyone thoughtful and informed, or wanting to be informed, is welcome to join. We are happy to work with and assist all efforts at creating a better-informed public and strengthening the American press and journalism. @ThePhittle
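
Referenced from the Tribeworthy entry above: to make the trust-rating idea concrete, here is a minimal sketch of how crowd reviews might roll up into article, author and publisher scores. The data and the plain averaging are assumptions for illustration, not Tribeworthy's actual method.

    from collections import defaultdict
    from statistics import mean

    # Hypothetical crowd reviews: (article_id, author, publisher, score 1-5).
    reviews = [
        ("art1", "jane", "Daily Planet", 4),
        ("art1", "jane", "Daily Planet", 5),
        ("art2", "jane", "Daily Planet", 2),
        ("art3", "joe",  "The Bugle",    1),
    ]

    by_article = defaultdict(list)
    by_author = defaultdict(list)
    by_publisher = defaultdict(list)

    for article, author, publisher, score in reviews:
        by_article[article].append(score)
        by_author[author].append(score)
        by_publisher[publisher].append(score)

    # Plain means per level; a production system would weight by reviewer
    # reputation and review volume to resist spam and brigading.
    for level, buckets in [("article", by_article), ("author", by_author),
                           ("publisher", by_publisher)]:
        for key, scores in buckets.items():
            print(f"{level} {key}: trust {mean(scores):.2f} ({len(scores)} reviews)")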

Interested in investing/funding

  • Omidyar Network
  • They have become more interested in this space since the election. Fiskkit did not qualify for seed funding when we spoke, because we did not have enough traction yet, so their Governance and Civic Engagement team functions as a regular VC operation. They did put $4m into Civic Hall recently. - @johngpettus
  • ON also has a grant-making component called the Democracy Fund, which is run by Tom Glaisyer.

  • Craig Newmark / CraigConnects
  • The guy behind Craigslist is very interested in this problem. He is on the board of Poynter and Columbia Journalism Review (I think). - @johngpettus

  • Knight Foundation
  • Grants for journalism projects. I believe they funded hypothes.is. - @johngpettus

  • Democracy Defense Fund
  • New foundation making grants of up to $20,000 to defend democratic participation
  • Contact nlalwani218@gmail.com 



A Special Thank You


To all of you who are making this possible. None of us is smarter than all of us.

And for being our inspiration and allowing this dream to spread far and wide:

Eli Pariser @elipariser √

CEO and Founder
Upworthy
New York

Andrew Rasiej @Rasiej
Co-Founder Civic Hall
Personal Democracy Forum
Senior Advisor
Sunlight Foundation

New York

Micah L. Sifry @Mlsif
Co-Founder Civic Hall

Personal Democracy Forum

New York

Lenny Mendonca @Lenny_Mendonca
Director Emeritus
McKinsey & Company
San Francisco

Raju Narisetti @raju 

CEO
Gizmodo Media Group

New York

Craig Newmark @craignewmark 
Founder of Craigslist and Craigconnects
San Francisco

Jonathan Albright @d1gi

Assistant Professor (Media Analytics) at Elon University in North Carolina

Expert in data journalism

Mike Butcher @mikebutcher 

Editor-at-large TechCrunch

Zeynep Tufekci @zeynep
Sociology Associate Professor University of North Carolina @UNC

Contributing Op-Ed Writer New York Times @NYTimes 
Former Fellow Berkman Klein Center  @BKCHarvard
Author of forthcoming book on Networked Social Movements

Kent Grayson @KentGrayson
Associate Professor of Marketing - Kellogg School of Management @Kellogg
Northwestern University
Faculty Coordinator
TrustProject.org

Taylor Owen @Taylor_Owen
Assistant Professor of Digital Media & Global Affairs - University of British Columbia @UBC 
Senior Fellow Tow Center for Digital Journalism @towcenter 
Founder and Editor in Chief of OpenCanada.org  @OpenCanada 

Author of Disruptive Power: The Crisis of the State in the Digital Age


Connie Moon Sehat
News Frames Director
Global Voices Online
 
Global Voices @GlobalVoices


Sameer Padania @sdp
External Assessor for the Google Digital News Initiative's Innovation Fund
Program Officer Program on Independent Journalism - Open Society Foundations (OSF) @OpenSociety 

Christoph Schlemmer @schlemmer
Journalist
Fellow
Reuters Institute for the Study of Journalism (RISJ) @risj_oxford
University of Oxford
Business Reporter Austrian Press Agency

Denice W. Ross @denicewross

Public Interest Technology fellow at New America

Co-founder #PoliceData Initiative

Sally Lehrman @JournEthics

Fellow TrustProject.org



[so many more still pending]


ANNEX

Themes and keywords to look for

[letter] links to related articles - @Media_ReDesign

Ad Ecosystem

Ad-buying system

Metrics

Online ad exchanges

AdScience
AppNexus

DoubleVerify

Moat

Algorithm

Artificial Intelligence (AI)
Common Crawl
DNS Entry
Domain-expertise

Domain Tools

Open Graph Standard
Page authority

Page domain


Web domain

Web traffic

Metadata

Social graph
User Generated Content - UGC

Context

Information ecosystem [16 FEB 17]

Disinformation

Misinformation [05 FEB 17]

‘Crust of lies’ [20 JAN 17]

Deception

Falsehoods

Fiction

Lies


Obedience

Cynicism


Alternate facts

Alternate reality
Gaslighting [27 JAN 17]

Reality-based community

Backfire effect
Behavioral Economics

Cognitive bias
Cognitive dissonance

Confirmation bias
Contact Hypothesis


Echo-chamber (media)
Filter bubble
Framing [18 FEB 17]

Groupthink
Ideological bubble

Ideological frames

Information Ghetto
Opinion corridor (Sweden)

Perspective

Preconceptions

Selective exposure theory

Self-segregation
Spiral of silence
Splinternet - cyberbalkanization

Tribes


Censorship

Opacity

Surveillance

Hannah Arendt
Vaclav Havel
George Orwell


Equal Time Rule
Partisan divide
Partisan refraction

Partisan tribalism
Partisanship
Polarization

Political extremism

Political framing

Political tribalism
Post-truth politics

Autocracy

Dictatorship

Denunciation
Hate speech
Harassment
Libel
Targeted attacks

Threats


Culture clash

Cultural divide

Islamophobia
Jewish
Muslim

Racialization [15 OCT 11]
Racism

White Supremacy

War on Terror

Civil discourse

Community

Compromise
Consensus
Empathy
Perspective

Trust - trustworthiness (media, institutions)

Understanding

Worldview

Media Literacy

News Literacy
Media trust
Public service announcements

First Amendment

Freedom of Information Act
Freedom of speech

Public’s right-to-know

Design Ethics
Journalism

Media Design
Media Structure

Regulations

Virality

Data analytics

Cambridge Analytica [16 FEB 17] 

Media manipulation

Weaponization of data

Social bot

Click Farms

Macedonian teens  [15 FEB 17]  [03 NOV 16]  [24 AUG 16] 

Propaganda

Radicalization

Counterintelligence

CRAP Test

Contextual understanding

Critical Thinking

Skepticism

Accountability

Corroboration

Credibility (source)

Fact Check

Primary sources


False Equivalencies
Spam filtering

Veracity

Reputation Systems

Scientific Method

Social Networks


Under the Hood (technical aspects)


Hashtags

#1lib1ref                                         Wikipedia ‘One Librarian, One Reference Campaign’

#infopros

#lisjs

#checkyoursources
#FactsMatter

#edchat

#edtechteam

#informationliteracy 

#medialiteracy

#datarefuge

#datarescue


Fact Checking Guides

170215 - University of Oregon School of Journalism
SOJC faculty's tips for spotting fake news

161211 - NPR
A Finder's Guide To Facts

161118 - CNN
Here's how to outsmart fake news in your Facebook feed

Boston College
News Know How: Pause before you click

Digital Polarization Initiative (AASCU)

Web Literacy for Student Fact-Checkers

Selected reads

Via @toddmilbourn

Vaclav Havel, The Power of the Powerless

Hannah Arendt, The Origins of Totalitarianism

Harry Frankfurt, On Bullshit

George Orwell, Politics and the English Language



Russian interference


The following was a test done on Dec 10 to see how information could be transferred to this document, linking directly to our feed over at @IntugGB. Interesting experiment but one impossible to repeat; the platform is clearly not designed for this purpose.

Re-upping this thread by former CIA analyst/chief targeter @nadabakos

MT @yashar

23:51 UTC - 10 Dec 2016


#1 For the collective good of the American people a thorough Intel Community assessment or NIE should be drafted immediately

Thread
MT Nada Bakos @nadabakos

03:52 UTC - 10 Dec 2016

I repeat: I laid out what was going on with Russian campaign, based on leaks from European intel, before election:

MT Kurt Eichenwald @kurteichenwald, Senior writer Newsweek

23:46 UTC - 10 Dec 2016

161104 - Newsweek
Why Vladimir Putin's Russia is backing Donald Trump


The intel didn't state that Iraq had WMDs. The Bush-Cheney WH made that misrepresentation.

MT Nancy Pelosi @NancyPelosi

22:21 UTC - 10 Dec 2016

Guys, let's give Trump a chance.
He deserves a chance to hand the country over to Russia, Goldman and Exxon while we sit around and watch.

MT Judd Legum @JuddLegum

22:07 UTC - 10 Dec 2016

How can American history pivot so radically in the course of a few weeks? It really is awesome in its scope, and historic in its scale.

MT Eric Lipton @EricLiptonNYT

21:46 UTC - 10 Dec 2016

This critical moment for deterrence. Trump isn't just refusing to condemn Russian interference. He is committed to visibly rewarding it.

MT Susan Hennessey @Susan_Hennessey

21:33 UTC - 10 Dec 2016

I want to be magnanimous, even in what may be a rigged defeat, but Trump & GOP appear bent on destroying the Future of Life on Earth.

MT John Perry Barlow @JPBarlow

21:30 UTC - 10 Dec 2016

As US faces down crisis over Russia relations, Trump forgoes reconciliation, twists the knife by selecting Tillerson *for* Kremlin ties.

MT Susan Hennessey @Susan_Hennessey

21:29 UTC - 10 Dec 2016


Asked why Tillerson is qualified to be SecState, Trump cites his "massive deals w/Russia," he "knows the players"
MT Judd Legum @JuddLegum

21:24 UTC - 10 Dec 2016

Fox News

A preview of tomorrow's exclusive interview with President-elect Donald Trump. As questions swirl as to who Trump will pick for Secretary of State, he comments on leading candidate Rex Tillerson, CEO of ExxonMobil, saying he's "much more than a business executive, I mean he’s a world class player."

20:58 UTC - 10 Dec 2016



Really NYT? You're seriously going to frame up "both sides" reporting on the CIA's Russian hacking report?
MT Matt McDermott @mattmfm
20:36 UTC - 10 Dec 2016

[Image: NYT coverage]

Tillerson as Secretary of State would signify the greatest discontinuity in US foreign policy since the end of the Cold War.
MT Dmitri Trenin @DmitriTrenin

18:53 UTC - 10 Dec 2016

Let's also revisit this by @AliWatkins in September: The White House asked Congress to keep quiet on Russian Hacking
MT Miriam Elder @MiriamElder
17:53 UTC - 10 Dec 2016


161228 - BuzzFeed
The White House asked Congress to keep quiet on Russian Hacking

From July 2016, on Trump's broader links to Russia, and what they mean for Europe
MT Anne Applebaum @anneapplebaum
17:41 UTC - 10 Dec 2016

160721 - The Washington Post
Opinion: How a Trump presidency could destabilize Europe

Good time to (re)read @sheeraf: Meet Fancy Bear, the Russian group hacking the US Election

MT Miriam Elder @MiriamElder

17:40 UTC - 10 Dec 2016

161015 - BuzzFeed
Meet Fancy Bear, the Russian group hacking the US Election

For the first time in history, Washington has accused a foreign government of trying to influence the US election. Sheera Frenkel investigates the Russian group accused of hacking the US election — and finds they’ve been practicing for this moment for a long time.

One more time: It is totally plausible that Russia did what is being charged by anonymous sources. Still need to see actual evidence. Now.
MT  Dan Gillmor ‏@dangillmor
19:36 UTC - 10 Dec 2016

Harry Reid flagged the issue for Comey. But for Comey, only Hillary's emails mattered. Comey sat on this. Shameful.
MT George Takei @GeorgeTakei

19:27 UTC - 10 Dec 2016

https://assets.documentcloud.org/documents/3035844/Reid-Letter-to-Comey.pdf

Donald Trump and the GOP are going to have to face questions of treason like few other incoming admins ever have. Buckle up.
MT Isaac Saul @Ike_Saul
19:15 UTC - 10 Dec 2016

If you're looking for Tillerson's thoughts on stuff that might matter to a Secretary of State, try here first.
MT Daniel W. Drezner @dandrezner

19:13 UTC - 10 Dec 2016

120627 - CFR
The New North American Energy Paradigm: Reshaping the Future
Video - Full Screen


If you're convinced that Hillary is corrupt, flawed, unlikable, dishonest, that was EXACTLY the goal of Russian tampering. Congratulations.

MT Peter Daou @peterdaou
19:07 UTC - 10 Dec 2016

“Tillerson will be paired with former U.N. Ambassador John Bolton as his deputy secretary of state.”
MT Max Abrahms @MaxAbrahms
18:59 UTC - 10 Dec 2016


A lot of the negative/shocked reactions to Rex Tillerson as SecState seem to come from people w/limited understanding of private sector
Developing thread

MT Suzanne Maloney @MaloneySuzanne via Jake Tapper @jaketapper
18:59 UTC - 10 Dec 2016

161210 - NBC News
Rex Tillerson of Exxon Mobil expected to be named Trump's Secretary of State: Sources

I'm not challenging the outcome of the election, but very concerned about Russian interference/ actions at home & throughout the world.
MT Lindsey Graham @LindseyGrahamSC

18:55 UTC - 10 Dec 2016

Russia is trying to break the backs of democracies – and democratic movements – all over the world.
Lindsey Graham @LindseyGrahamSC

Hard to imagine Tillerson getting confirmed, but 2016 has made clear that just bc you can’t imagine something doesn’t mean it can’t happen.

MT Nicole Hemmer @pastpunditry

18:43 UTC - 10 Dec 2016

"We stand now at the most dangerous moment for liberal democracy since the end of World War II."
MT Howard Wolfson @howiewolf
18:39 UTC - 10 Dec 2016

161209 The Atlantic
Russia and the Threat to Liberal Democracy: How Vladimir Putin is making the world safe for autocracy


I never agree with @chuckschumer but he's correct on this one. Where is his Republican colleague joining his call?

MT Joe Walsh @WalshFreedom

18:38 UTC - 10 Dec 2016

161210 Politico
Schumer demands congressional inquiry on Russian meddling

When I wrote this on July 5, people said I was a paranoid red-baiter.
MT Franklin Foer @FranklinFoer
18:32 UTC - 10 Dec 2016

160704 - Slate
Putin's Puppet
If the Russian president could design a candidate to undermine American interests—and advance his own—he’d look a lot like Donald Trump.


As expected and on cue...  


So, I was getting troll army onslaughts so I installed Block Together, but I fear it has blocked real people who were just new. Apologies.

MT Summer Brennan @summerbrennan
18:32 UTC - 10 Dec 2016


Today the Trump team attempted to discredit the CIA as the people who falsely said Saddam had weapons of mass destruction. That is a lie.

Developing thread
MT Mark Harris @MarkHarrisNYC

17:55 UTC - 10 Dec 2016


150320 Business Insider
Here's the full version of the CIA's 2002 intelligence assessment on WMD in Iraq

161020 - Esquire
How Russia pulled off the biggest election hack in US history

Via @michikokakutani @Fahrenthold

17:37 UTC - 10 Dec 2016

An investigation must begin as soon as possible on any evidence Russia actively worked to hijack our election & elect Donald Trump.

MT Senator Patty Murray (D-WA) @PattyMurray

17:32 UTC - 10 Dec 2016


161210 - Teen Vogue
Donald Trump is gaslighting America and deliberately undermining the very foundation of our freedom

By Lauren Duca @laurenduca

With Tillerson as possible Sec of State, book we should be reading. Exxon has its own foreign policy.
MT Eric Lipton @EricLiptonNYT

17:22 UTC - 10 Dec 2016

120608 - NYT
Well-Oiled Machine: ‘Private Empire,’ Steve Coll’s book about Exxon Mobil

Tricky thing is, how does Obama respond to this mess without war powers, since we cannot declare war on Russia?  https://www.law.cornell.edu/uscode/text/47/606
MT Summer Brennan @summerbrennan,  Author, journalist,  former UN Disarmament & International Security
17:01 UTC - 10 Dec 2016


161210 - NYMag
Trump, McConnell, Putin, and the triumph of the Will to Power

16:07 UTC - 10 Dec 2016

TRUTH: Hillary's "unlikability" WAS the Russian strategy. Make her toxic with fake news, trolling, hacking. 66 million didn't fall for it.

MT Peter Daou @peterdaou
16:04 UTC - 10 Dec 2016

We are in an unprecedented situation: a president that 54% of voters opposed elected with the help of a Russian intelligence operation.
MT Ryan Lizza @RyanLizza

15:39 UTC - 10 Dec 2016


[The New York Times] once sat on facts that could change the 2004 election result, citing "fairness." Now CIA/FBI do the same.

MT Edward Snowden @Snowden

15:14 UTC - 10 Dec 2016

060813 - NYT Opinion Pages
Eavesdropping and the election: An answer on the question of timing

Neither the article featured nor the White House statement refers to Wikileaks at all. Just Russia. Something you want to tell us Julian?
MT Summer Brennan @summerbrennan
15:11 UTC - 10 Dec 2016

161209 - CNN
Obama orders review of Russian election-related hacking

The last time we declared war was in 1941, but of course we've fought nearly constant wars since then. But this is a cyber Pearl Harbor.

MT Summer Brennan @summerbrennan

15:03 UTC - 10 Dec 2016



If only we had been warned the cull of whistleblowers could result in voters lacking access to vital information.

MT Edward Snowden @Snowden
14:41 UTC - 10 Dec 2016

160406 - The Intercept
Obama's gift to Donald Trump: A policy of cracking down on journalists and their sources




161210 - The Intercept
Anonymous leaks to the Washington Post about the CIA’s Russia beliefs are no substitute for evidence
MT Glenn Greenwald @ggreenwald
12:12 UTC - 10 Dec 2016

The intel report on Russia's role in the 2016 election must be available for all electors before the electoral college meets Dec. 19

MT John Dean @JohnWDean

06:40 UTC - 10 Dec 2016 


Don't just read the headline and lede on this one. Keep going, paying particular attention to the White House/Gang of 12 meeting.
MT Steve Benen @stevebenen, Producer for the Rachel Maddow Show
01:09 UTC - 10 Dec 2016

A reminder to take every claim made by unnamed US officials about intelligence conclusions with healthy skepticism.
MT Christopher Hayes @chrislhayes, Editor at large, The Nation

00:54 UTC - 10 Dec 2016


'Friends and associates said few U.S. citizens are closer to Mr. Putin than Mr. Tillerson'
MT Casey Michel @cjcmichel Nationalism, extremism, post-Sovietism. Formerly @HarrimanInst, @CrisisGroup, @TPM, @PeaceCorps
00:27 UTC - 10 Dec 2016

161206 - WSJ
Rex Tillerson, a candidate for Secretary of State, has ties to Vladimir Putin



Here is a thread with links to stories about the US, Russia, the cyber war, and its role in our election. Work in progress. Begin here.
Summer Brennan @summerbrennan
00:06 UTC - 26 Nov 2016

161210 - The Guardian
Russian involvement in US vote raises fears for European elections

CIA investigation may have implications for upcoming French and German polls, even raising doubts over integrity of Brexit vote

Via Lukasz Olejnik @lukOlejnik,  Internet, Web Security & Privacy, Research and Engineering


161209 - The Washington Post
Secret CIA assessment says Russia was trying to help Trump win White House
The CIA has concluded in a secret assessment that Russia intervened in the 2016 election to help Donald Trump win the presidency, rather than just to undermine confidence in the U.S. electoral system, according to officials briefed on the matter.

… The CIA shared its latest assessment with key senators in a closed-door briefing on Capitol Hill last week, in which agency officials cited a growing body of intelligence from multiple sources. Agency briefers told the senators it was now “quite clear” that electing Trump was Russia’s goal…

Seven Democratic senators last week asked Obama to declassify details about the intrusions and why officials believe that the Kremlin was behind the operation. Officials said Friday that the senators specifically were asking the White House to release portions of the CIA’s presentation.


… [in reference to September pre-election proceedings] In a secure room in the Capitol used for briefings involving classified information, administration officials broadly laid out the evidence U.S. spy agencies had collected, showing Russia’s role in cyber-intrusions in at least two states and in hacking the emails of the Democratic organizations and individuals.

And they made a case for a united, bipartisan front in response to what one official described as “the threat posed by unprecedented meddling by a foreign power in our election process.”

161209 - NYT
Russian hackers acted to aid Trump in election, U.S. says


161209 - The Guardian

US election hacking: Obama orders 'full review' of Russia interference


161125 - The Washington Post
Americans keep looking away from the election’s most alarming story

By Eric Chenoweth @EricDChenoweth Co-director of the Institute for Democracy in Eastern Europe


END OF TWITTER NEWSFEED


Answering the question:

But how does one or many curtail this massive violation of our trust and common morality?

MT Jacob Harris @harrisj writes - Mission Accomplished:

160712 - The New Yorker 

The real paranoia-inducing purpose of Russian hacks

When I began researching the story, I assumed that paid trolls worked by relentlessly spreading their message and thus indoctrinating Russian Internet users. But, after speaking with Russian journalists and opposition members, I quickly learned that pro-government trolling operations were not very effective at pushing a specific pro-Kremlin message—say, that the murdered opposition leader Boris Nemtsov was actually killed by his allies, in order to garner sympathy. The trolls were too obvious, too nasty, and too coördinated to maintain the illusion that these were everyday Russians. Everyone knew that the Web was crawling with trolls, and comment threads would often devolve into troll and counter-troll debates.

The real effect, the Russian activists told me, was not to brainwash readers but to overwhelm social media with a flood of fake content, seeding doubt and paranoia, and destroying the possibility of using the Internet as a democratic space. One activist recalled that a favorite tactic of the opposition was to make anti-Putin hashtags trend on Twitter. Then Kremlin trolls discovered how to make pro-Putin hashtags trend, and the symbolic nature of the action was killed. “The point is to spoil it, to create the atmosphere of hate, to make it so stinky that normal people won’t want to touch it,” the opposition activist Leonid Volkov told me.

Updates

161120 The Intercept @theintercept
Some fake news publishers just happen to be Donald Trump’s cronies

161120 
Never mind the algorithms: The role of click farms and exploited digital labor in Trump's election



Case Study: The Washington Post

Interesting to see how this particular story evolves with time

New report provides startling evidence of a massive Russian propaganda operation using Facebook and fraud sites

MT @profcarroll Associate Professor of Media Design @parsonsdesign The New School @thenewschool

161124 - Washington Post
Russian propaganda effort helped spread ‘fake news’ during election, experts say
MT @jonathanweisman Deputy Washington Editor, The New York Times via Centre for International Governance Innovation @CIGIonline 

Critique: 

This article is based on some fairly shaky ground IMO, which I wrote about here:
Mathew Ingram @mathewi

160925 - Fortune 
No, Russian agents are not behind every piece of fake news you see

@mathewi - This has been looked at in so much depth. But think of it this way. We are bringing everything to the table. If it is the Washington Post, fine. We are ready. Start checking…


Crucial to read this article in the context of Russian psyop allegations. FB weaponized targeting by susceptibilities - MT @profcarroll

161119 - NYT
Opinion: The secret agenda of a Facebook quiz



It's now imperative that the Electoral College get schooled on how Facebook may have been used to swing the election - @profcarroll

Also consider the 98 personal data points that attackers can use on victims for propaganda campaigns on Facebook - @profcarroll

160819 - The Washington Post
98 personal data points that Facebook uses to target ads to you

Unfortunately, I fear media will be reluctant to poke at the underlying ad-targeting issues here because their revenue still depends on it. But given the national emergency of a propaganda revelation, journos must whittle this down to data privacy, because that's the security solution here.

Precision-targeted ads based on a duopoly’s concentrated user data profile are a national security risk for propaganda attacks. Now we know. - @profcarroll

Good thing Germans believe in defending their privacy knowing why it must be cherished and always defended: @profcarroll

161124 - Reuters 
Merkel fears social bots may manipulate German election



We at PropOrNot.com are integrating automated and manual approaches to identifying Russian propaganda, which is what much of this misinformation is: while some fake news is just commercial clickbait, and some is satire, much is state-sponsored, and echoes official and semi-official Russian propaganda in numerous ways. We’re building a browser toolbar to help identify it. This is a guide to manually identifying it, and this is an example of how that works in practice.
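
For a sense of how the automated half of such an effort (or a browser plug-in like the BS Detector mentioned in the comments below) might work at its simplest, here is a sketch that checks an article's domain against a curated watch-list. The entries are placeholders; in practice, who maintains the list and by what criteria is the hard, contested part, as the debate around PropOrNot itself shows.

    from urllib.parse import urlparse

    # Placeholder watch-list; real projects maintain curated, annotated
    # domain databases (e.g. opensources.co). Inclusion criteria matter.
    WATCHLIST = {
        "fakenews.example.com": "fabricated-story mill (hypothetical)",
        "propaganda.example.net": "echoes state-sponsored talking points (hypothetical)",
    }

    def check_url(url: str) -> str:
        """Return a warning label if the URL's domain is on the watch-list."""
        domain = urlparse(url).netloc.lower()
        if domain.startswith("www."):
            domain = domain[4:]
        return WATCHLIST.get(domain, "no flag: domain not on the watch-list")

    print(check_url("http://fakenews.example.com/shocking-story"))
    print(check_url("https://www.reuters.com/article/some-story"))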

 



END OF DOCUMENT



[a]This is a comment

[b]This should include becoming familiar with those working in countries as dissidents against a propaganda machine. They often use the Internet as bloggers because they cannot publish openly in their own countries without great personal risk, but are a font of knowledge when dealing with repressive regimes. Who are they? Find them and pick their brains. They are your brothers in arms.

[c]Great comment Alex!

[d]is this account deactivated?

[e]Thanks for noticing.  @TheNewsLP was updated with their new Twitter handle: @NewsLitProject

- Kate B.

[f]_Marked as resolved_

[g]_Re-opened_

[h]this is a key issue. At a minimum, there should be a working definition of what is meant by "fake news" (in this project, by others), and examination of how it relates to other, perhaps better-defined concepts like propaganda, misinformation, disinformation.  It seems clear that in current discussion and this document, people are using "fake news" to mean various things.

[i]I've been using the term "fraudulent" instead of fake news, as fraudulent better describes the intention to deceive people for clicks/ad dollars.

Many people are having a hard time differentiating between "fake news" sites that are pushing an agenda (Breitbart) and those that are gaming the system purely for profit, with no political leanings (Macedonian sites). We need to do a better job of separating the two categories. Both are problematic, but  we need to have different approaches to solving each.

[j]This goes back to the notion of a satire symbol. We can't trust the "masses" to be quick enough to recognize satire. But let's also realize there is a big difference between satire and FAKE news.

[k]+1. Real satire sites should have no problem identifying themselves as such.

[l]There are more-subtle and pernicious judgements to be made as well. Framing and omission can easily create filter bubbles as much as overt misinformation can. See +giladlotan@gmail.com's piece on Medium, here: https://points.datasociety.net/fake-news-is-not-the-problem-f00ec8cdfcb#.d0x5oyhxp

[m]I believe that, unfortunately, the responsibility of fact checking falls solely on the reader. It's economically unfeasible to have an independent body fact check all articles before publication, thus leading publishers to employ their own fact-checkers but the question "Who watches the watchmen?" rings true. A responsible reader should verify sources as well as statements on their own in order to be certain that what's being said is true.

[n]I'd say this is the simplest and most encompassing definition of the *problem* with "fake news" and other forms of misinformation

[o]Who gets to define that?  Such a broad definition is rife for potential censorship...

[p]Best to use metrics that all can agree are valid, and measure at the most granular level: to the word.

[q]Taking notes. That is the idea :)

[r]Recent study suggests backfire effect may actually be rare: http://www.poynter.org/2016/fact-checking-doesnt-backfire-new-study-suggests/436983/

[s]Boomerang effect (same thing) is well documented in persuasion research though https://en.wikipedia.org/wiki/Boomerang_effect_(psychology)

[t]Differentiate between sharing ‘personal information’ and ‘news articles’ on social media - the current ‘share’ button for both is unhelpful.  Social media sharing of news articles/opinion subtly shifts the ownership of the opinion from the author to the ‘sharer’.  It makes it personal and defensive: there is a difference between a comment on a shared article criticising the author and criticising the ‘sharer’, as if they’d written it.  They may not agree with all of it.  They  may be open-minded.  By shifting the conversation about the article to the third person, it starts in a much better place: ‘the author is wrong’ is less aggressive than ‘you are wrong’.  [Amanda Harris]

[u]Your post is being moved to the topic Facebook, with the main idea under Basic Concepts. What a great suggestion, thanks.

[v]How do you define "emotional"? Most commentary is nothing but emotional, so if the definition is vague, 100% of commentary will end up being flagged.

[w]Kate -

Insert Header for this section

[x]Dan Ariely at Duke should be able to help answer the question "Do people lie to pollsters, and if so, why?"

[y]Deindividuation theory

[z]Sorry, not super familiar with the structure of this document. Please feel free to move to a more appropriate section (I couldn't find one)

[aa]The doc is in constant evolution, Aleksandr. Let me go through the research first, to see where it can be added.

A special section for keywords is being worked on at the very end; that should help us get a general sense of the many topics and suggestions emerging as we start to sharpen the focus.

Many thanks,

- Lina

[ab]N.B. This all seems to be about social networks (where trending news is encouraged). I feel there can also be diversity in the number of news sites - and these are run by human editors. However, there would need to be a solid plan about how to communicate the knowledge of these sites to the public. So I expanded on the title.

[ac]Do NOT propose hiring editors as a solution to any problem. There are editors at all the tabloids, like The Sun and The Daily Mirror; they can be useless and thus do not solve anything in the real world. Bad idea!

[ad]Looks like Mark Zuckerberg might have heard you, Mathew. But instead of hiring them (in fact they sacked their staff), they prefer outsourcing the human work now https://twitter.com/LiquidNewsroom/status/799880674719203328 reported by Kurt Wagner, recode.

[ae]This gives rise to a dilemma. If you hire human editors to filter fake news, their decisions will disproportionately affect conservative websites (see Buzzfeed analysis). This will expose them to accusations of bias, and might exacerbate the fundamental problem that eliminating fake news is trying to solve: reducing polarisation.

[af]Great idea, but will it happen? I'm doubtful.

[ag]And who will pay for their time? Where will the money come from?

[ah]I think there's plenty of good media out there by nuanced and moderate writers. The main problem of media in the age of information isn't the creation of good content, but its monetization.

As long as people live in polarized filter bubbles, the actual quality of journalism available is irrelevant; clickbait hyperpartisan websites are going to dominate.

What needs to change is not the journalists themselves but the structure of the medium in which it's disseminated. How do we widen people's filter bubbles?

[ai]Great, except when they had human editors vetting trending topics, they got vilified for that too.

[aj]Had it right the first time

[ak]Yuuuuup.

[al]In practice, there are probably tens of thousands of unique articles going viral at any moment on Facebook. You'd need a smart algorithm to surface "probable hoaxes" to editors

[am]This is what I am working on now: an algorithm that will find viral news before it goes viral.

[an]Just read your Medium post, Badita. Looks interesting. Here's a paper with a great overview from Duncan Watts and Sharad Goel discussing mechanisms for virality: https://5harad.com/papers/twiral.pdf
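A minimal sketch of how such triage might work, assuming access to per-story share timestamps (the data and URLs below are made up): rank stories by how fast their share rate is accelerating, and send the top of the list to human reviewers first.

```python
# Toy early-virality triage: compare shares in the most recent window
# against the window before it; fast acceleration => review first.
def share_acceleration(timestamps, window=3600):
    """Ratio of shares in the latest window to the window before it."""
    if not timestamps:
        return 0.0
    latest = max(timestamps)
    recent = sum(1 for t in timestamps if latest - window < t <= latest)
    prior = sum(1 for t in timestamps if latest - 2 * window < t <= latest - window)
    return recent / max(prior, 1)

# shares_by_url would come from the platform's share logs (assumption).
shares_by_url = {
    "example.com/story-a": [100, 200, 3500, 3600, 3700, 3800],
    "example.com/story-b": [100, 3600, 7100],
}
review_queue = sorted(shares_by_url,
                      key=lambda u: share_acceleration(shares_by_url[u]),
                      reverse=True)
print(review_queue)  # fastest-accelerating stories first
```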

[ao]I am developing an app to incentivize real news that addresses this issue. If anyone could take a look and offer comments/suggestions, it would be helpful:

https://github.com/qualisign/torrential-times/blob/master/fake-news.md

[ap]David - Is there another link? We are getting a 404 on that one.

[aq]Yes -- the Readme is a broad overview, and I am close to finishing a demo

that I'll link to soon.

https://github.com/qualisign/torrential-times

[ar]I suspect this is what FB has already done. I think the problem shows up in defining where the line is for "human no longer required." 99% accuracy? 99.9%? I'd assert it should be four or five nines, but I don't believe we can reach that level of accuracy with current machine learning techniques or data availability. How, then, do we convince FB that they need to employ a significant number of actual humans for the next 5-10 years?

[as]Boy, that won't get hacked in two minutes. Good idea, but federation is better.

[at]This is exactly the architecture of Web Annotation (w3.org/annotation).  The goal is federation, and an ability to provide insight or meta-information about resources that you do not control.  In other words, communities could run annotation servers, or moderate groups on larger annotation servers.  At scale, with 100s or 1000s of expert communities leveraging their expertise in areas in which they are familiar, this can be part of an overall solution.  Also, machine algorithms rather than humans can inhabit layers/groups of their own.  It's clear that combining machine analysis and human critique / curation is an important dimension of the problem.  An example of such an expert community is ClimateFeedback.org.  At Hypothes.is, we plan to begin working with expert communities on a much larger basis in 2017 to achieve exactly this.  Article: http://dotearth.blogs.nytimes.com/2016/05/03/scientists-build-a-hype-detector-for-online-climate-news-and-commentary/

[au]This sounds like what is happening at http://www.opensources.co/.  They are building a database of sources and tagging them.  There is an API (I think) that is being used by the folks at BS Detector (http://bsdetector.tech/) which is a browser plug-in.

[av]This rhetoric may be problematic, depending on the personas of the contributors. I like the idea of the consumer audience being anyone who seeks real news. In contrast, allowing any "Joe Schmo" to have ownership of site credit and control of up-voting and down-voting opens doors to spammers. I think sooner rather than later this idea would lose its efficiency, unless contributors are tested to determine whether they qualify under certain standards, so we can trust that they can detect false content.

[aw]Great idea, but why would Facebook who is already trying to cozy up to China with a censoring tool be open to this?

[ax]Sadly this is flawed since there are plenty of fake or pseudo-news sites that have high domain authority due to the very fact that a bunch of sites link to them as sources of "news."

[ay]Google has found lots of ways to deal with this--collaborate?

[az]They've figured out ways to handle it in a system where people are actively seeking specific information. Most of those programmatic solutions don't apply to FB. One of their *non*-programmatic solutions is getting publishers to manually apply and manually get approved if they want to show up in Google News results, but that's also the antithesis of the way the FB platform works. (It reeks of censorship/favoritism/non-neutrality/call-it-what-you-will and FB will always shy away from that.)

[ba]What about piggybacking on it: check whether something appears on Google News and mark it as such in the FB feed. Just let people know it's a verified source.

[bb]Those sites with high domain authority can just be blacklisted. It'd be easy to identify which news sources have a high rate of fake news. You could also use clustering and ML by using fake news sites and users that share them to identify other fake news sources.
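As a rough illustration of the clustering idea (not anything Facebook is known to do), one could score unknown domains by how much their sharer audience overlaps with domains already known to publish fake news; the data and domain names below are invented.

```python
# Audience-overlap scoring: domains shared by the same users as known
# fake-news domains get flagged for human review.
shares = [  # (user_id, domain) pairs, e.g. from share logs (assumption)
    ("u1", "knownfake.example"), ("u1", "mystery.example"),
    ("u2", "knownfake.example"), ("u2", "mystery.example"),
    ("u3", "reputable.example"), ("u3", "mystery.example"),
]
KNOWN_FAKE = {"knownfake.example"}

audience = {}
for user, domain in shares:
    audience.setdefault(domain, set()).add(user)

def jaccard(a, b):
    return len(a & b) / len(a | b)

for domain, users in audience.items():
    if domain not in KNOWN_FAKE:
        score = max(jaccard(users, audience[f]) for f in KNOWN_FAKE)
        print(domain, round(score, 2))  # high overlap => review candidate
```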

[bc]Google also got rid of PageRank for the most part.

[bd]Seems cool, but it touches a relatively small part of the area covered by fake news.

[be]What would this be for climate change?

[bf]climatefeedback.org

[bg]Old domains are readily available for sale on the secondary markets.

[bh]Looking at age is a dangerous precedent because of this exact issue. I like the idea of using this as a signal but it would warrant further discussion.

[bi]Yeah, there's something here.

[bj]Agree

[bk]I really like this idea.  I wonder if adding a "Do you want to read this before you share it?" step would be useful.

[bl]I LOVE your idea, Hannah! So good!

[bm]Could also weight shares more if the article was read to the bottom, or use time spent on the page to vet actual time spent reviewing or reading.

[bn]With increased transparency, though, wouldn't it increase the potential of abuse? If a malfeasant knows how FB ranks content, then content can be created that games the system.

[bo]+1 - what media company doesn't want to know this to help boost its own stories?

[br]How would this be defined?

[bs]This idea has merit, but seems to be only a partial step. It would be tricky to first determine which stories get speed-braked and which do not. If speed brakes are applied globally, to all articles, it limits the possibility of any story going viral, which has negative commercial implications that make it a non-starter.

[bt]Pre-conceived notion of which outlets are trustworthy might not be justified. For instance, WSJ is not a credible source when it comes to climate science topics, see http://climatefeedback.org/outlet/the-wall-street-journal/

[bu]I understand the idea, but this will be perceived as a form of censorship: why is this newspaper on the list and another not, etc.

[bv]+1

[bw]The New York Times and Wall Street Journal are "white labeled, verified sites"? Since when? They publish the same biased crap as The Onion or World News Daily.

[bx]I like the idea of using institutional credibility to verify, but it runs the risk of reinforcing the dominance of a few media voices at a time when we should value adherence to journalistic principles above name recognition. What about an independent journalistic standards board that provides a sort of 'stamp of approval' for any outlet seeking it? The approval would be based on journalistic practice, rather than name recognition, funding or anything else. This probably goes beyond the point we've reached with algorithms, but you could use volunteer editors--like a sort of peer review for news.

[by]We are left, however, assuming that the NY Times is a gatekeeper of quality. Yet as we know, there are times when the Times is itself not able to get past an institutional bias toward being "an institution" and all that comes with that: access instead of toughness, credulousness when a source represents power, a belief that with power necessarily comes the need for respect, etc. All of this militates toward "fake" news in a way this document, and this whole meme, tend to avoid, because Facebook fake news stories are an easier target.

[bz]Exactly. But how would this tag be visible? This is the problem that Web Annotation is intended to solve (w3.org/annotation / hypothes.is)

[ca]That doesn't mean "Facebook knows the story is fake"; when someone shares an accurate article, Facebook also suggests links to "fake news" to provide balance.

[cb]Agree with Manu. Correlation does not imply causation. A Snopes (or any particular) article being recommended could simply be a factor of other people sharing that link in comments to the original post. Also, a Snopes article could be related that verifies the original article's truthfulness instead of falsehood.

[cc]The Berkman Center teamed up with Google for what's called StopBadware. The effort identified sites propagating malware, etc., and then notified users before they clicked through. Perhaps there's a way of evolving the concept for these purposes --

[cd]Perhaps, but malware is much easier to define in code than truthfulness.

[ce]Somewhat also a response to the idea of a delay up there (in that if one is willing to change or obstruct the user's ability to share whatever they want this much, something less aggressive might be better, like what's suggested here).

Truthfulness is hard to check. But _suspiciousness_ maybe not as much, since there are some obvious tell-tale signs of suspicious/fake news stories after all. So once a story starts gaining crazy momentum, maybe questions like these are worth asking:

Does a quick Google search of a whole sentence in the article spawn a huge number of copies of the same article on different sites? Are articles almost exactly (or completely exactly) like this one found in different time periods (i.e., is it recycled, maybe with a few changes here and there)? Is the author a known fake-news writer (not the same as satire), or someone known to produce fake content that people mistake for real (see Paul Horner)? (I feel like bots and crawlers should be able to check this kind of simple stuff, right?)

If yes, mark that story as "suspicious" somehow, and if a user tries to share it, warn them before letting that share go through (maybe linking to a small explanatory page about what a content mill is, or something else to educate the user, would be cool). A sketch of these checks follows this comment.

Have in mind that a lot of people question even Snopes (if you haven't ever seen that, maybe that's the fault of your echo chamber's size), so while I personally think that "is this debunked on Snopes?" is a good indicator, I wouldn't go as far as to forbid the user from sharing content based on it; rather, a warning would be good enough.

EDIT: Actually, these ideas are somewhat repeated in bullet points a few pages below...
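A sketch of the checks described above, assuming (hypothetically) an exact-phrase search index and an editor-maintained list of known fake-news authors; none of these helpers exist on any real platform as written.

```python
# Toy "suspiciousness" scorer combining the tell-tale signs listed above.
KNOWN_FAKE_AUTHORS = {"paul horner"}  # illustrative, editor-maintained

def count_exact_copies(sentence):
    """Stand-in for an exact-phrase search across other sites
    (assumption: such an index exists and is queryable)."""
    return 0  # stub

def suspicion_score(article):
    score = 0
    if article["author"].lower() in KNOWN_FAKE_AUTHORS:
        score += 2  # known producer of fake content
    if count_exact_copies(article["sample_sentence"]) > 10:
        score += 1  # same text mass-republished on many sites
    if article.get("recycled_from_year"):
        score += 1  # near-identical article found in an earlier period
    return score

article = {"author": "Paul Horner", "sample_sentence": "...",
           "recycled_from_year": 2014}
if suspicion_score(article) >= 2:
    print("warn before sharing; link to an explainer on content mills")
```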

[cf]Rather than sentiment analysis of a headline - why not apply machine learning to the articles and remove the baiting entirely?

[cg]While I like the idea of easy identification, any content-based approach can ultimately be bypassed, as fake-news outlets are capable of adapting to such circumstances. It's a bit like a hidden search-result algorithm and the whole SEO industry.

[ch]Agreed. Whatever the solution, content analysis will not achieve it by itself - it is one component, and it needs to be agile.

[ci]Sentiment analysis is quite biased and intrinsically flawed... I would stay away from it for this particular application.

[cj]Also, perhaps headline writers would simply adjust

[ck]This!

[cl]+1. Eminently doable with current data and without human intervention. But may not be effective, as people from various parts of the political spectrum may be more or less open-minded.

[cm]Added reference. Hello from Wikimedia UK. Wikipedia has a big role to play in preventing fake news.

[cn]Yes, indeed, machine learning can identify partisan bias. Then the reader can be offered an article on the same topic - with the opposing bias. Ask questions at the end to see whether the reader understood it. An educational charity fund could be started to pay readers for answering those questions. This should help counteract the vicious cycle of everyone reading only the same view.
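A minimal sketch of such a classifier, using off-the-shelf scikit-learn and four toy documents; a real system would need a large labeled corpus and a way to match articles by topic.

```python
# Toy partisan-lean classifier: TF-IDF features + logistic regression,
# then recommend an article with the opposing lean.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["job-killing regulations strangle growth",
         "tax cuts for the wealthy gut public services",
         "secure the border now",
         "expand healthcare for all"]
labels = ["right", "left", "right", "left"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

lean = model.predict(["new regulations hurt small business"])[0]
opposing = "left" if lean == "right" else "right"
print(f"detected lean: {lean}; offer a {opposing}-leaning piece on the topic")
```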

[co]Back in 2008, we launched a private network that used this verification system for every single profile. All profiles linked directly to three default platforms: FB, Twitter, LinkedIn. Medium could be that fourth reference.

[cp]I see a few problems with this. Does it mean that you will stop seeing stuff from your friends or people you're following if they prove "harmful" to you? Also, does it mean that we will start policing breaking stories that don't have all the information? And while written news stories can be edited, videos cannot, so do we penalize these news organizations for incomplete information that is later corrected?

[cq]I guess that this can be manipulated as well. It's just a matter of time before workarounds make it unusable again.

[cr]Generally, satirical news sites as a category for any kind of machine analysis are something that needs to be thought through carefully (point made by our partners at FirstDraftNews.com).

[cs]How to avoid having this fall prey to the same problems as "real names" policies?  On Twitter, men are substantially more likely to have verified accounts than women; there are likely to be similar patterns with other dimensions of diversity.  See http://geekfeminism.wikia.com/wiki/Who_is_harmed_by_a_%22Real_Names%22_policy%3F for more

[ct]I would actually suggest that it should be a different name and/or system from the one already in place on Facebook, as verified users on Facebook serve more than one function. To use Twitter as an example, the verified check also stops people from faking that person's identity.

[cu]Yes! +Publishers should put their reputation at stake, establish an “ethics code” and accountability process, eg http://thetrustproject.org/

[cv]I like this idea, it removes opportunity for spammers.

[cw]What makes this difficult is deciding what "privilege" translates to, algorithmically. FB may or may not be doing this already - would we know?

Virality as a driving factor for FB posts/news means that visibility follows a power law. Does "privileged" mean authoritative sources are 10% more likely to be seen? What does that actually look like inside the algorithm?

Perhaps "privileged" means there are reserved Trending slots for such sources?

[cx]+1. this privilege will be the first thing attacked by, let's call them 'the Enemies of Truth'.

[cy]I also agree. What is to stop Facebook from granting this privilege to news sites that give Facebook money? Because Facebook isn't a governmental institution, there is no incentive not to pull a move on that level.

[cz]This solution is highly problematic, as what is deemed "offensive" could change over time, will likely reflect existing power structures, and in the worst case could become a tool of state propaganda. For example, an oppressive state may require Facebook to deem negative pieces about itself offensive (as China is already doing).

Furthermore, the appetite for fake news is as much of a problem as fake news itself. This could backfire and result in non-verified and offensive news being viewed as the anti-establishment go-to. Even now, Breitbart is widely viewed by many as an offensive, non-verified news outlet, and many readers read it despite knowing that it is viewed this way.

[da]This might vary from country to country but I am thinking of how a Chamber of Commerce works.

[db]We do it for everything else. Anybody can do your taxes, if you want a real answer, go to a CPA. Why not news?

Also, anyone could feasibly sign up for a twitter account as The Rock, but there's only one that's for real.

[dc]Yes, censorship. Unfortunately, there needs to be someone, or just other algorithms, to decide which site is biased. Now look around the world and try to find a country without a bias in censoring content.

[df]Except other outlets *do* pick up fake stories (assuming "outlets" is broadly defined as "any other websites"), whether to propagate a false story or to debunk it. (See: reason why using domain authority/backlinks isn't ideal.)

[dg]Yeah, and this could be gamed, right?

[dh]Bingo.

[di]Or just propagated (without any intent to game the system) by publishers suffering from confirmation bias.

[dj]Not just biases, but they could be propagated knowingly by low quality sites fishing for ad revenue, so this might cause the opposite of the intended effect.

Low quality content, if not outright fake content, is sometimes sold, or at the very least cloned and recycled (ever google'd a headline only to find results of the same headline but from years ago?) a lot in many different places: This is a relevant story: http://arstechnica.com/information-technology/2015/07/inside-an-online-content-mill-or-writing-4156-words-a-day-just-to-earn-lunch-money/

[dk]Sadly, biases and agendas are not the only reasons for lack of quality; sometimes it's pure lack of journalistic standards (i.e. the CNN porn story from last weekend, which was picked up by virtually all outlets based on two tweets from the same person and nothing else). Measuring engagement and verifying whether things get picked up is not a good marker, at least as news unfolds in real time.

[dl]Note that Google now--for the purposes of consideration in RankBrain and other ranking factors--ignores all meta tags except for the title tag. What's left over in the analysis are backlinks and the content itself. As mentioned above, backlinks alone are an imperfect predictor of content quality.

[dm]But there is a constellation of other factors, including domain age, how often it has been flagged in the past, etc., that could go into a "ranking" metric.

I think the end game will always be getting it reviewed by a human, but rank *could* be a decent initial filter.

[dn]You say things that are true. If I had been thorough enough to complete my thought in the first place: self-tagging is probably a no-go, but using a bunch of different signals (as Google does to judge content) seems like a healthy start.

[do]Absolutely. In my newsroom at Liquid Newsroom we use social graph analytics to get insights about the DNA of information markets.

[dp]That method works really well for spam! http://paulgraham.com/spam.html

[dq]Not really ... statistical content filtering is a small part of how spam control works ... more here on the relationship https://medium.com/@SunilPaul/lessons-the-spam-wars-facebooks-fight-to-kill-fake-news-is-just-starting-2aaefb0a389b#.nrbl3142l
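For reference, here is a toy version of the Graham-style statistical filter mentioned above, retargeted from spam to hoax headlines; the token counts are invented, and, per the reply, this would only ever be one signal among many.

```python
# Naive Bayesian token scoring a la Paul Graham's "A Plan for Spam".
import math

fake_counts = {"shocking": 50, "miracle": 40, "reveals": 30}  # toy corpus
real_counts = {"reports": 60, "announces": 40, "reveals": 20}
FAKE_TOTAL, REAL_TOTAL = 200, 200

def token_fakeness(tok):
    f = fake_counts.get(tok, 0) / FAKE_TOTAL
    r = real_counts.get(tok, 0) / REAL_TOTAL
    return 0.5 if f + r == 0 else f / (f + r)  # unknown tokens are neutral

def headline_fakeness(headline):
    # combine per-token probabilities naively (independence assumption)
    probs = [token_fakeness(t) for t in headline.lower().split()]
    fake = math.prod(probs)
    real = math.prod(1 - p for p in probs)
    return fake / (fake + real)

print(round(headline_fakeness("miracle cure reveals"), 2))
```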

[dr]Let's hope that we will see more news publishers and organizations still invest in fact-checking. And let's hope that the general public really wants to read fact-checked articles. As it looks now, more and more people seem to prefer echo chambers.

[ds]This totally fits with my idea (inspired by Eli's idea) for "Verified Sites" in the NewsFeed. (I.e., stories from manually-verified sites that meet a list of criteria get a "verified" badge when they appear in the NewsFeed.)

[dt]Yes -- separate these two, then create a dashboard for when they are most out of whack (popular and false) for targeted action.

[du]Yes, exactly!

[dv]Great idea. See "Multiple user-selected fact checkers" section below that expands on it and sidesteps much of the concerns about other ideas.

[dw]Or even just a "disputed" tag

[dx]This does exist in this form: https://memebuster.checkdesk.org/ . How would you like to see it improved upon / developed?

[dy]"Unverified" would work and is a less loaded term.

In my personal conversations, I've intentionally been saying "fraudulent news" instead of "fake." Fraudulent implies intent. Fraudulent means being deceptive for profit, which is what the worst offenders do.

[dz]This is a brilliant idea.

[ea]Real-time embedded corrections can produce reactance, causing people to resist the correction. Corrections presented after a slight delay tend to do better for those inclined to resist the correction. See:

http://wp.comm.ohio-state.edu/misperceptions/wp-content/uploads/2012/07/GarrettWeeks-PromisePeril-CSCW-final.pdf

[eb]For the suggested ideas aimed at preventing false information from spreading, how will we measure whether or not they are working? (This gets at the comment above about a slight delay actually changing minds - which reframes the problem as being not about the spread, but about the correcting.)

[ec]Need to cite the fact check in the UX in some way. Otherwise, complaints of bias will overrule the fact check.

[ed]How does this example differ from the multitude of "questionable" stories relating to sports (transfer speculation and claims) that are published by traditional media sites?

Rumour - seems to be a catch all, cover all - http://www.footballtransferleague.co.uk/football_rumours.aspx

[ee]Simple, brilliant ideas. There is a ton of fake old news that would quickly become apparent with the date shown.

[ef]Couldn't this then be abused by larger groups to classify real news as fake?

[eg]I would expect this to simply convince people that the whitelist is biased. Especially since even trustworthy sources sometimes pick up fake information.

[eh]This exists already, right? It's just not very user-friendly (and doesn't have the ideology adjustment, but that would be hard to accomplish given that not everyone selects an ideology on FB). On the first half, though, Tom Trewinnard of Meedan has analyzed well that the UX has flaws that could be fixed: https://firstdraftnews.com/is-facebook-is-losing-its-war-against-fake-news/

[ei]You wouldn't have to worry about the ideology listed on FB, though; Facebook uses your actions on the site to guess your ideology (you can actually see it in your ad preferences).

[ej]Good point (though oof at the thought of those being used)

[ek]http://www.allsides.com/ does something sort of like this: users' bias ratings are weighted based on a self-reported survey (partially developed by Pew) of political affiliation.

[el]How does this not get abused? Need to make sure there is a failsafe to prevent false flags to remove content.

[em]Agreed, as a very similar user-flagging program was almost implemented on YouTube. If anyone remembers, the YouTube Heroes program created an incentive for flagging videos that broke community guidelines, which included but wasn't limited to fake news. It created a hierarchical reward tier based on how much you did, with regulations relaxing in the higher tiers along with easier ways to take down videos. So, applying this to Facebook, how would we be able to regulate literally everyone on Facebook?

[en]Shutting down discussion about the relationship between social and technical responses to an urgent social problem may prevent important strategies from emerging.

[eo]I tend to agree with Kelly. Zeynep, I'm confused - how not relevant? Is it because we're looking for ways to improve the situation NOW, not when these kids grow up?

[ep]Should not encourage false dichotomies, however (see climate change). If 99% of stories report one thing and 1% report another, that 1% shouldn't be given equal weight.

[eq]This happens already in some cases. You can see snopes links in suggested reads when an article is clicked.

[er]This, I think, is the big challenge. How do we incentivize this behaviour? There's some empirical evidence on the relationship between network diversity and idea generation. Maybe it's a starting point? http://sloanreview.mit.edu/article/how-twitter-users-can-generate-better-ideas/

[es]https://motherboard.vice.com/en_us/article/how-our-likes-helped-trump-win This and tools like it continue to be the most influential ones changing politics everywhere. I thought it might become a category.

[et]_Marked as resolved_

[eu]_Re-opened_

[ev]A section was just added under the heading "Manipulation and ‘weaponization’ of data" at the very end of the document. A number of articles and reports have been included.

Should suggestions and ideas come up, they can be added later within the text. For now, it seems like quite a bit of research to be looked at and analyzed.

- Lina

[ew]This might work for something like Apple News, which is designed specifically for news consumption, but people share much more than news on Facebook. You still want people to be able to share random personal blogs and whatnot.

[ex]Facebook has a "News Feed" tab that should probably be kept free of personal opinion pages. It would make sense to only allow real news in the News Feed tab.

[ey]CrowdTangle does this. It surfaces stories that perform better relative to how they should be performing, based on past performance of page stories. Facebook just acquired them.

[ez]I like this idea a lot. Of course - you'd have to figure out how to automatically identify "questionable content" - but if we could, slowing potential for spread is an interesting intervention.

This is what the Chinese gov't is doing on Weibo - instead of taking down content, limiting spread. Which raises issues around transparency - what does FB need to do to make sure there's enough information given to users, but not enough to make this whole thing game-able.

[fa]Seems like FB would be VERY resistant to things like this, as they would essentially open their news articles up to being "scooped" by social media sites that don't have this practice (like Twitter). Still an interesting idea though.

[fb]I think FB would be okay with it. You could add a velocity filter only on posts that contain political language (articles that use the words Trump, Hillary, etc.). You could also exempt whitelisted sites, such as the New York Times.
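A sketch of that velocity filter under the stated assumptions; the keyword list, whitelist, and threshold are all illustrative, not anything Facebook actually uses.

```python
# Throttle re-shares of fast-moving political posts from non-whitelisted
# domains; everything else passes through untouched.
POLITICAL_TERMS = {"trump", "hillary", "election"}
WHITELIST = {"nytimes.com", "bbc.co.uk"}
MAX_SHARES_PER_HOUR = 1000

def allow_share(post, shares_last_hour):
    if post["domain"] in WHITELIST:
        return True  # whitelisted sites are exempt
    if not POLITICAL_TERMS & set(post["text"].lower().split()):
        return True  # the filter applies to political language only
    return shares_last_hour < MAX_SHARES_PER_HOUR

post = {"domain": "viralnews.example", "text": "breaking: trump scandal"}
print(allow_share(post, shares_last_hour=2500))  # False -> slow it down
```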

[fc]Craig Silverman covers pretty well how even mainstream newsrooms have been failing to incorporate sufficient fact-checking into their workflows in this white paper: http://towcenter.org/wp-content/uploads/2015/02/LiesDamnLies_Silverman_TowCenter.pdf

[fd]How can you say that the New York Times is "white listed" when its news is as biased as something from The Onion or World News Daily?

Who can say that CNN is more "white listed" than Fox? Or that the BBC is better than RT?

[fe]Facebook is already doing this in certain countries.

[ff]Skepticism of legitimate information is encouraged in certain instances (climate change). This has to be carefully handled.

[fg]This could be the most important part of creating a cause campaign--targeting ad ecosystem players who serve ads on fake news sites a la Color of Change campaign against ALEC and Glenn Beck.

[fh]From an anti-fake-news standpoint, this sounds relieving. But for this to be ethical, it could not rest on the claim that the general user's decision-making process is problematic. It may be said that the ability to distinguish fake news from the truth is a human right. Of course, this contradictory standpoint would not exist if the declaration were provoked by the user.

[fi]People are also busy and overwhelmed and have different priorities in life.

[fj]No, we just need design to support critical thinking. If your premise is people are lazy, then you need to get them to stop being lazy and educate them on why they need to stop being lazy.  If your premise is less judgmental, and assume that everyone has a wide range of naturally occurring human biases, and that people are busy and have competing priorities, then you might design for presenting options that enhance critical thinking.

[fk]Hear, hear. We can work with designers and technologists and journalists to design with 'laziness' in mind too. It takes extra work to create tools and media that encourage critical thinking; we can advocate to creators that it's worth the time.

[fl]There will always be people who dismiss difficult truths, but we don't need to give their denial an easy platform. I believe the point of this document, and indeed the larger effort to boost signal and squelch noise in modern tech, is to give users who aren't willfully ignorant a better chance to be informed by fact.

There are certainly many points between "willfully ignorant" and "perfectly informed." By fighting false and misleading news, we can move individuals along that continuum. There is value in that transition.

[fm]The use of drones to document visuals of happenings without biased narration, or narration at all, may eliminate the need for multiple variations of biased news reports. This allows the user to have complete control of news interpretation without a provided opinion.

[fn]This is really interesting. Headlines and images are a huge part of the problem. I imagine relatively few people even read the articles.

[fo]I agree; something like more than 50% of people don't read the articles they share... (I can't find the link to the study.) So it means that content needs to be hidden directly on FB/Twitter, not blocked on the website.

[fp]All of this reminds me of the long debated desire that some have had to create some kind of formal certification for SEO and Digital Marketers to encourage ethical practices. ...The closest we've come to achieving that has been through collections of persons who ascribe to the same or similar value systems. ...Perhaps this is for the best because if you think about it - all we can really do is promise our readers that we will practice due diligence...

[fq]There might be a problem with this. I saw a brilliant physicist's contribution to Wikipedia be deleted due to lack of references. His research was all original, incredibly groundbreaking.

[fr]Scientific journals could be classed as verified sources.

People could be classed as verified sources.

etc...

Once the algorithms are trained, these problems will be easily avoided.

[fs]It's not just profitable for the fake news accounts; it's also profitable for Facebook. A friend who is a lawyer mentioned last night that the shareholders could hold Zuckerberg, et al., responsible for abrogating his fiduciary obligation to them if he were to cut anything that harms profits (like curtailing fake news posts that otherwise get a ton of engagement). I got a B- in a college business law class, so I'm not one to comment on that, but I welcome any thoughts from people who know the law better.

[ft]I'm not a lawyer either, but that argument doesn't seem like it would hold up all that long -- Zuck could argue that fake news represents a long-term threat to profit/revenue in that it could undermine credibility/user trust in the platform.

[fu]I do wonder if users even care. I am assuming a lot just log on to unwind or just have a good time. A cool viral story fits the bill, not necessarily something you would 'experience' if reading an in depth analysis of tax evasion schemes. Personal opinion, of course :/

[fv]FB doesn't have an incentive to do this, so how does this get implemented?

[fw]Man, I think this is huge— the "authoritative source" will vary widely based on your politics.  I'm always wanting to ask people who share "Hillary is a satanist"-type stories: What would convince you that this story is false? It certainly wouldn't be a Washington Post debunking.

[fx]Why not convince people to stop using Facebook? It worked with MySpace, LOL.

[fy]People must be able to use more sources. Removing Facebook will just open the possibility for something worse. I dislike the American "nipple" moral attitude (while endorsing violence), but any news outlet will have a certain set of beliefs.

[fz]Facebook having a monopoly on its news algorithm is undesirable, but would the situation be improved by fragmenting control? There are such heavy incentives for promoting junk/fake news.

While some of the fragments would likely be an improvement, selecting a quality source among them would seem to be just as beyond the average/casual FB user as effectively vetting posts is today.

Seems likely we'd just see the schism increase - tech-savvy/engaged users would find and use good sources, regular users would continue to be at the mercy of misleading orgs.

[ga]The problem is that the network principle will lead to one company (a monopoly) having all the news. Technical solutions alone will not prevent this from happening. So apart from the technical solutions here, we need antitrust law, which we should also enforce better against companies.

[gb]Recently been seeing people trying to build extensions like this. Example: https://devpost.com/software/fib#updates

[gc]Not sure if that specific extension works, btw, haven't tried it out yet, but shows that people are thinking about this.

[gd]Creating a new website is far too easy. If we shut down sfglobe.com (a top AdX partner), they would just open sfglobal.com.
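One cheap countermeasure for that pattern (a sketch; the blocklist entry is taken from the comment above purely as an illustration): compare newly seen domains against the blocklist by string similarity, so near-identical re-registrations get routed into the same review pipeline.

```python
# Catch lookalike re-registrations of banned domains.
from difflib import SequenceMatcher

BLOCKED = {"sfglobe.com"}

def lookalike(domain, threshold=0.85):
    return any(SequenceMatcher(None, domain, b).ratio() >= threshold
               for b in BLOCKED)

print(lookalike("sfglobal.com"))  # True -> flag for review
```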

[ge]Who is doing the prioritizing?

[gf]this feels like a different, but worthy, topic. could there be a separate discussion of this challenge?

[gg]Wouldn't this mostly lead to a shift in the language of "false news", creating an arms race? It also opens the door to tons of tricky questions.

E.g., are ridiculous questions ("Did Trump steal the Pope's shoes?") -  intentionally created to imply a false thing - themselves objectively false? Are articles that primarily deal in such questions considered false news? Would FB users be better-informed if this sort of article replaced the bulk of clearly false news? Does the questioning tone prompt users to actually engage and think about the link's truthfulness?

[gh]Would require a better incentive system for publishers on the Instant Articles platform than they are currently getting. Publishers are reluctant to use the platform because it reduces the clicks articles would otherwise send directly to the publisher's own site.

[gi]Impossible to put in place; Facebook can't force publishers to put content on Facebook and block links from being shared.

[gj]This is nice. Would also be very informative.

[gk]But this can be done even if it's not hosted on FB. Any erring link can be replaced with a "This was fake news" 404...

[gl]Yep, for sure. Triggering this would help stop the spread of this news off platform / offline.

[gm]I like this quite a bit.

[gn]There's a challenge with copyright there?

[go]Or, primarily, a challenge with getting publishers to go along with it. (This already exists as Instant Articles.)

[gp]Also, I think the easier bit of the problem is 'what to do if a story is fake': block, alert, flag, demote, etc. Isn't the harder part figuring out by algorithm whether it is true?

[gq]Ultimately, a human needs to be a part of that process -- at least in the beginning. The last thing you want is for people to just come up with more subtle fakes to skirt the algorithm.

Algorithms, in this case, should serve as providing initial filters for human beings, in my opinion.

[gr]Also, when an organization or celebrity is unwittingly used in fake news (ie, the untrue story that Clint Eastwood rejected a Medal of Freedom from Obama, saying he "wasn't his President"), they have a strong incentive to protect their brands and names. Bringing abusive stories to their attention, and explaining their legal options, could be enough to prompt them to take action on their own.

[gs]I wonder if this is at all possible. If might change politics discourse forever ;)

[gt]I wouldn't say it is impossible. Reputable news sites such as the New York Times provide verification of their site and use journalistic standards. If a site is reputable, it provides verification. If a site doesn't provide any kind of verification, standards, etc., then it should be considered fake until it does.

[gu]Likely relevant: A pre-social media work, but one that has interesting analysis of bias and reputation in media:

http://www.nber.org/papers/w11664

[gv]IMO, we must address the fake news problem by considering the architecture of perception formation, because it is possible to use algorithms to make people smarter.

[gw]+1

[gx]+1 An important insight.

[gy]:)

[gz]See comments in section below "Suggestions from a Trump Supporter" for this perspective

[ha]The way news propagates leaves a footprint or pattern, and an algorithm can be developed to detect when such a pattern is forming. I have been working on this on Twitter in Mexico for years; it could possibly work on Facebook or other networks.

[hb]It might be interesting to explore how the experience of Third World countries could be applied to this initiative. Users in the US are not accustomed to propaganda tactics, nor have they experienced governments forced to shut down social networks. A lot of related topics to develop here: dictatorships, media censorship, etc.

[hd]Original title edited for clarity

[he]This is an example of people fighting in real time against fake news on Twitter

[hf]"non organic" es acerca de cómo se genera la noticia y el tipo de redes que se crean al propagar una noticia, generalmente si la noticia es falsa y la intención es difundirla masivamente se recurrirá a equipos de "troll centers" o "celebrities" para difundirla y este tipo de difusión dejan rastros.

[hg]"Non Organic" deals with how the news is generated and the types of networks that are used to spread the news. In general terms, if the news is fake and the intention is to spread it massively, this is will happen in troll centers or with celebrities. These tipo of share leaves traces.

[hh]"organic" = orgánico en español. Cuando existe un interés legítimo en la noticia y es util o verdad o se puede comprobar este noticias genera redes y va conectando comunidades. Este tipo de redes se pueden detectar, crear una base de datos que se puede acceder en tiempo real para detectar futuros casos

[hi]Organic - when there is a genuine interest in the news and it is both useful and true, generating communities around it. These type of networks are easy to spot, enabling a Data Base to be set up real time to trace future cases.

[hj]+1

[hk]This is called information overload, see https://en.wikipedia.org/wiki/Information_overload

[hl]yes -- truth has to be restored as a primary value (as in "the true, the good, and the beautiful"). this is a permanent cultural battle, especially in the face of powerfully persuasive media techniques in the service of a marketing ideology.

[hm]I love the concept... Just perhaps want to suggest you be ready with some real pushback to the notion that the American public and political discourse was ever virtuously truthful. ...We've had a long history of hiding truth and painting it how we wish to see it.

[hn]The same could be said about all professions that participate in the information game. What does it take to be a capital L Librarian or a capital J Journalist?

[ho]The First Draft Partner Network was set up in September 2016, and includes the major social networks, 100 newsrooms, human rights orgs, academic institutions, and professional journalism associations. We are dedicated to improving skills and standards in the reporting and sharing of information that emerges online - exactly the problems being discussed here. See more here https://firstdraftnews.com/about/

[hp]Interesting. Curious whether any of the partners are bringing the news consumers' voices into the conversation? Separately from this point about a coalition, I've been mulling whether something like an audience union would be useful for improving coverage (and whether such a thing already exists).

[hq]Hi Dave - yes we're trying to run a number of town halls, as well as undertaking audience research to ensure audiences are included in these conversations. Some news orgs like the BBC have audience councils so it would be interesting to connect with them on this issue.

[hr]Totally agree we need a multi-disciplinary / multi-stakeholder group to address issues of accurate, balanced news coverage on Facebook and other platforms. I'd include UED experts and data scientists to the technologists list.

[hs]some did go this way: CPB/NPR. good counterweight potential, but also subject to pressure.

[ht]As it appeared on April 8, 2015. This website - and all its complexity - is explained in the video below.

[hu]I don’t know how the White House Pool Reporters are selected, but it seems to include a variety of media outlets.

[hv]Trump has his official Transition 2017 news page.

https://www.youtube.com/channel/UC_NRgn1L4zVWPOEI5Mt5Tog

[hw]This reminded me of another issue: attention spans. They're increasingly on the decline. "Humans have a shorter attention span than goldfish, thanks to smartphones" - http://www.telegraph.co.uk/science/2016/03/12/humans-have-shorter-attention-span-than-goldfish-thanks-to-smart/ This has big implications for how people consume news - they rarely make it past the headline. I think even if we solve the fake news problem, there's another, much bigger hurdle of getting people to actually read and understand stories, rather than jumping to conclusions based on headlines. People are simply losing the patience to read. I fear that it's something we can't solve.

[hx]Libraries, which have done so much to promote digital literacy, can be great partners for media/info literacy. There's a lively conversation going on on one of the ALA listservs about fake news and the librarian's role.

[hy]Yes! From an early age.

[hz]Nice simple nudge

[ia]Agreed

[ib]+1

What's described here is largely how climatefeedback.org works for climate science issues. Could be adapted to other scientific fields

[ic]Careful. I think we need to offer solution frameworks that are not draped in political ideologies.

[id]thanks to whoever put up these comments, appreciated

[ie]No problem. Glad to offer help. I hope my suggestions are accepted in the spirit in which they're offered. Someone sent plausible but fake news to my wife through FB at the height of the election frenzy. The fake info was edited into a graphic. I knew right away it was fake because I live this stuff. She did not know it was fake, however, nor did whoever sent it to her. So this is a real problem.

The contributors here are clearly very advanced in their learning. The more transparent and open you can be with the solution, and the more safeguards you can include to ensure flushing the fakes does not turn into viewpoint suppression (consciously or unconsciously), the more likely it will work in terms of legitimate debunkings being accepted by your target readers.

Let's be honest. Though not personal, there is a lot of mutual distrust and, probably, realistically, a lot of ill will in this cultural divide.

What would people say if I proposed we turn over the fake news problem to a team of right-wing researchers and developers from the South who feel strongly that the nation is now headed in the right direction with Donald Trump as president? Yes, we are all white men from the South, and we are very eager to see Trump implement his plans, but we're well credentialed and technically qualified for the work so -- trust us! -- we will fix it. No need to worry about how our biases may come into play. Sound ridiculous? Well, in effect, that's the arrangement people on my side of this divide are being asked to accept.

Without radical transparency and meaningful ideological diversity on the teams, people on my side of the divide, I can assure you, will assume, fair or not, that you're not doing enough to compensate for the impact on the solution of your own strong ideological commitments and cognitive biases and/or that this is all a pretext and a lot of acrobatics to justify suppressing conservative and pro-Trump views on social media.

[if]Thank you so much for this perspective. Please note that I edited the title and removed "Good Faith Comments/" before "Suggestions" but only to allow for a clearer title in the left hand column of the document.

[ig]In response to a previous comment:

"What would people say if I proposed we turn over the fake news problem to a team of right-wing researchers and developers from the South who feel strongly that the nation is now headed in the right direction with Donald Trump as president?"

Some input, because this is exactly what we are up against in regard to regulatory matters dealing with net neutrality, tariffs, etc. Incredibly powerful corporations are seeking 'equal footing' alongside users now that they are up against the so-called 'Titans': Google, Uber, Amazon, Netflix, etc.

Whereas before they could be considered natural 'enemies', the whole scenario has changed. All view points need to be taken into account.

Regulatory agencies caught in the bind are trying to adapt as best they can. Some examples:

1) sending forth special consultations to the general public and key industry players (TRAI in India in reference to the Free Basics initiative)

2) 'closing off their ears' (European Commission - when up against Brexit propaganda - did not confront it)

3) remaining silent (FCC postponing all critical legislation until new Adm comes in)

4) accepting they do not know and exploring all the different options and views, gathered from the experience of others (CRT in Colombia)

[ih]Agreed

[ii]Assume that people will be trying to beat the algorithms, so ensure flexibility is built in.

[ij]Eli - Very light editing has been done but just for formatting purposes.

[ik]The best version of this is the new website Tribeworthy (https://tribeworthy.com/). Their platform empowers news consumers to critically review any online article. The reviews create a trust rating for each article, author, and outlet, providing news consumers with helpful feedback when deciding where to get their news. This is what they're calling Crowd Contested Media, and it's really cool and really good at holding online media accountable.

[il]You can likely do this already by pulling data from Wikidata or Wikipedia. If the publication exists on there, the info should be available through the API.
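A small sketch of that lookup against the public Wikidata API (a real endpoint; the property chosen here, P571 "inception", is just one example of the structured data a publication's entry might carry):

```python
# Find a publication on Wikidata and pull its structured claims.
import requests

API = "https://www.wikidata.org/w/api.php"

def wikidata_entity(name):
    hits = requests.get(API, params={
        "action": "wbsearchentities", "search": name,
        "language": "en", "format": "json"}).json()["search"]
    if not hits:
        return None
    qid = hits[0]["id"]
    return requests.get(API, params={
        "action": "wbgetentities", "ids": qid,
        "props": "claims", "format": "json"}).json()["entities"][qid]

entity = wikidata_entity("The New York Times")
if entity:
    # P571 is Wikidata's "inception" property (founding date), if recorded
    print("has inception (P571) claim:", "P571" in entity["claims"])
```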

[im](I work there.) Happy to help connect you to folks who may be interested in this. mkramer@wikimedia.org :)

[in]Thanks! That's definitely the place to start.

[io]I also think you need to account for small newspapers that are very local and very good. I live in a town of 19,000 people in North Carolina. The student newspaper the next town over does an amazing job of covering local news/issues. Its audience/circulation #s will never be high, but the quality of its journalism is quite good.

[ip]Publications and journalists often don't make the relevant information available. It would take some reporting. I've found that journalists who have published books, and whose press agencies create websites for them, are often the only ones on Wikipedia. One rarely knows who is editing an article. Circulation numbers are rarely updated. Wikipedia is a prime source, but I'd estimate only 25% of the information you really want exists.

[iq]I would love to get in touch with the person who wrote this bit about Linked Data. Please ping me via email.

[ir]https://www.linkedin.com/in/ubiquitous

[is]Thanks for reaching out, Tim. I would like to learn about Linked Data and Verifiable Claims and, if possible, collaborate on pushing these ideas in front of as large an audience as possible, with the stated aim of getting the attention of the people of the web, the social media companies, the CMS software providers, and the popular blogging platforms, to encourage them to adopt these standards. I would also like to explore building various tools on the Linked Data and Verifiable Claims platform/spec/infrastructure as a means of forcing the hand of the other players.

Pinged you on linkedin. I am https://www.linkedin.com/in/gautampriya on linkedin.

[it]https://www.w3.org/DesignIssues/ is a good starting point, but I'll put you in touch with some people who work at Oracle on linked-data-related stuff...

[iu]see also; https://docs.google.com/presentation/d/1pFGC1G7CbizUuvbmjECfnNRL4fZk9QLxG8d3nehgwNU/edit#slide=id.p

And: https://www.w3.org/community/credentials

[iv]Thanks.

[iw]Ping to note my interest in this topic. I just linked an article I wrote elsewhere in this mega doc

https://theconversation.com/could-an-auto-logic-checker-be-the-solution-to-the-fake-news-problem-73223

[ix]Hyperlinks with the date (yymmdd), source, title are being used for the time being but the format can be adapted for more formal research.

[iy]Not sure where this should go, but Craig Silverman's work at the Columbia Journalism School's Tow Center is a really good internal (journalist/newsroom) look at where they go wrong, and how newsrooms end up participating in fake/bad stories going viral. Link to a 168-page white paper based on his research using a rumor-tracking tool called Emergent: http://towcenter.org/research/lies-damn-lies-and-viral-content/

[iz]Posted it under Resources - Dynamics of Fake Stories, where you suggested. Excellent read, thanks :)

[ja]Really important point. Could an algorithm ever differentiate between witty satire and deliberately misleading fake news? A satire icon/feature on satirical articles/posts could help but might that spoil the fun when satire is accidentally taken as real news? (Like when Jack Warner, a former vice president of world soccer’s governing body, FIFA, defended himself against corruption charges by citing an article from The Onion).

[jb]A brilliant essay by a French cognitive sociologist that studies how our Internet is so well suited for the creation, spreading and consolidation of rumors, modern mythologies and beliefs by revealing and amplifying very primitive aspects of our human condition. Not available in English AFAIK unfortunately.

[jc]Here is an introduction to this book, in English. http://www.angie.fr/en/caractere/gerald-bronner/

[jd]https://tribeworthy.com does a lot of what you're proposing. Tribeworthy is being called the Yelp for news consumers, so bipartisan public discourse around news is its specialty. Check it out!

[je]So I wrote an app where people can share links to pages e.g. from news sites, highlight text and comment on the fly side-by-side. But being a developer type, I'm not sure what to do with it now it's built. Maybe it (and I) could be of help somehow, have a look: http://thin.glass (be patient on first load, it's on a rather slow server atm).

[jf]Sounds a lot like this https://hypothes.is/

[jg]Consider.it sort of does this: http://www.poynter.org/2016/here-are-27-ways-to-think-about-comments/401728/

[jh]Very cool. Testing it here and there.

[ji]This is the goal of the web annotation architecture (w3.org/annotation). Hypothes.is enables this for communities like ClimateFeedback.org, as well as individuals.  I encourage folks that want to move in this direction to pursue interoperable approaches, so that implementations work together.

[jj]Perhaps the most important website on the internet right now. I'm so proud to be a part of the Crowd Contested Media movement.

[jk]Hey +john@fiskkit.com  glad to see you here!

[jl]You too +litvins@gmail.com ! We're trying to increase coordination among the various communities working on this right now.

[jm]Awesome, we're working on a fake news detector in our software as well. What we really need to do is find a way to stop Trump's tweets!!

[jn]This is very alarming: both the WaPo piece and the original reports are very biased and bear some signs of state actors.

[jo]But how does one or many curtail this massive violation of our trust and common morality?

[jp]According to @AdrianChen:

"When I began researching the story, I assumed that paid trolls worked by relentlessly spreading their message and thus indoctrinating Russian Internet users. But, after speaking with Russian journalists and opposition members, I quickly learned that pro-government trolling operations were not very effective at pushing a specific pro-Kremlin message—say, that the murdered opposition leader Boris Nemtsov was actually killed by his allies, in order to garner sympathy. The trolls were too obvious, too nasty, and too coordinated to maintain the illusion that these were everyday Russians. Everyone knew that the Web was crawling with trolls, and comment threads would often devolve into troll and counter-troll debates.

The real effect, the Russian activists told me, was not to brainwash readers but to overwhelm social media with a flood of fake content, seeding doubt and paranoia, and destroying the possibility of using the Internet as a democratic space. One activist recalled that a favorite tactic of the opposition was to make anti-Putin hashtags trend on Twitter. Then Kremlin trolls discovered how to make pro-Putin hashtags trend, and the symbolic nature of the action was killed. “The point is to spoil it, to create the atmosphere of hate, to make it so stinky that normal people won’t want to touch it,” the opposition activist Leonid Volkov told me.”

http://www.newyorker.com/news/news-desk/the-real-paranoia-inducing-purpose-of-russian-hacks

[jq]Kate - We need to see what do with this. Just let it flow with the rest until we get to the final copy. *IF* we ever reach that stage :D

[jr]This is great - how will we solve for the lack of faith problem? Lack of faith in the source, the truth? How will we build confidence in this kind of a tool?

[js]Honestly, how do we know we can trust you? I mean seriously, some of your targets are legitimate sites, just more liberal. They might be more socialist-leaning, but that doesn't always equate to Russian propaganda, or any propaganda.

[jt]Good piece on this: https://theintercept.com/2016/11/26/washington-post-disgracefully-promotes-a-mccarthyite-blacklist-from-a-new-hidden-and-very-shady-group/

[ju]Of Note: There's some (in my view, legitimate) concern about the methods of this group. More here: https://theintercept.com/2016/11/26/washington-post-disgracefully-promotes-a-mccarthyite-blacklist-from-a-new-hidden-and-very-shady-group/

[jv]Browser extensions are great. But time spent on mobile, in closed apps, is the bigger problem, particularly sharing on "dark social", i.e. messaging apps. It's a particularly insidious problem in Asian markets.