Media ReDesign: The New Realities

Before we start …
Stay informed
Know who the key players are
Create alliances and partnerships
'Truth in Media' Ecosystem
Keep a tally of solutions - and mess ups
Delve into the Underworld
Be prepared to unlearn everything you know
Stay connected
A final word
Stay Informed
Background
Initiatives
Robert Reich: Inequality Media
Dan Rather: On journalism & finding the truth in the news
Web Literacy for Student Fact-Checkers
Syllabus: Social Media Literacies
Bulletin Board
Upcoming Events
Ongoing Initiatives
High-Level Group on Fake News and online disinformation  NEW
The News Literacy Project
Stony Brook’s Center for News Literacy
Past Events [Sample]
The Future of News in an Interconnected World
MisinfoCon: A Summit on Misinformation
Combating Fake News: An Agenda for Research and Action
Media Learning Seminar
The Future of News: Journalism in a Post-Truth Era
Dear President: What you need to know about race
Knight Foundation & Civic Hall Symposium on Tech, Politics, and the Media
Berkeley Institute for Data Science - UnFakingNews Working Group
In the News
Essential reads
Featured
General
Trump Presidency
International
Press Room: Research and articles related to this project
About Eli Pariser
Conferences
Related articles
Start of Document
Basic concepts
Definition of Fake News
Compilation of basic terms
Classifying fake news, fake media & fake sources
Considerations → Principles → The Institution of Socio-Economic Values
General Ideas
Behavioral economics and other disciplines
Human Editors
Under the Hood
Facebook
Analysis of Headlines and Content
Reputation systems
Verified sites
Distribution - Social Graph
Fact-Checking
Special databases
Interface - Design
Flags
Downrank, Suspension, Ban of accounts
Contrasting Narratives
Points, counterpoints and midpoints
Model the belief-in-true / belief-in-fake lifecycle
Verified Pages - Bundled news
Viral and Trending Stories
Patterns and other thoughts
Ad ecosystem
More ideas…
Factmata
FiB extension
Taboola and Outbrain involvement
WikiTribune
Verified Trail + Trust Rating
Bias Dynamics & Mental Models
Neuroscience of philosophical contentions
Thread by @thomasoduffy
Pattern & Patch Vulnerabilities to Fake News
The problem with “Fake News”
Surprising Validators
[Update] “A Cognitive Immune System for Social Media” based on “Augmenting the Wisdom of Crowds”
Snopes
Ideas in Spanish - Case Study: Mexico
Not just for Facebook to Solve
A Citizen Science (Crowd Work) Approach
The Old School Approach - Pay to Support Real Journalism
Critical Thinking
Media Literacy
Programs
Suggestions from a Trump Supporter on the other side of the cultural divide
Ethics of Fake News
Your mind is your reality
Who decides what is fake?
Journalism in an era of private realities
Propaganda
Scientific Method
Alternate realities
Legal aspects
First Amendment / Censorship Issues
Espionage Act
Copyright
Trademarks
Libel
Online Abuse
Harassment
Trolling and Targeted Attacks
Threats
Hate Speech
Specialized Agencies
Rating Agencies
Media Governing Body
Specialized Models
Create a SITA for news
Outlier analysis
Transparency at user engagement
“FourSquare” for News
Delay revenue realisation for unverified news sources
Other ideas
Linked-Data, Ontologies and Verifiable Claims
Reading Corner - Resources
Social Networks
Facebook
Twitter
Google
Filter Bubbles
Automated Systems
Algorithms
User Generated Content
Dynamics of Fake Stories
Joined-up Thinking - Groupthink
Manipulation and ‘Weaponization’ of Data
Click Farms - Targeted attacks
Propaganda
Viral content
Satire
Behavioral Economics
Political Tribalism - Partisanship
Ethical Design
Journalism in the age of Trump
Cultural Divide
Cybersecurity
Experiences from Abroad
Media Literacy
Resources list for startups in this space
Interested in founding
Interested in advising
Interested in partnering
Interested in investing/funding
A Special Thank You
ANNEX
Themes and keywords to look for
Hashtags
Fact Checking Guides
Selected reads
Russian interference


Note: Hi, I am Eli, I started this document. Add ideas below after a bullet point, preferably with attribution. Add +[your name] (+Eli) if you like someone else’s note. Bold key phrases if you can. Feel free to crib with attribution.

A number of the ideas below have significant flaws. It’s not a simple problem to solve -- some of the things that would pull down false news would also pull down news in general. But we’re in brainstorm mode.

  • November 17, 2016

 



This document is maintained by @Media_ReDesign, with updates from an extraordinary community of collaborators spanning many disciplines. Some topics are under the supervision of specific teams, as is the case with news updates and the Event Calendar (partly updated, linked as a reference, for ideas). All the same, please feel free to contribute your ideas as we expand and continue on this journey.




Pale Blue Dot - Carl Sagan

 

      “None of us is as smart as all of us” ― Kenneth H. Blanchard

banner 2.jpg - Flickr

Before we start …

 

Stay informed

Dozens of articles and studies are being published daily on the topic of ‘fake news’. The more we know about what is going on - the different angles, the implications, and so on - the better off we are.

Know who the key players are


Technologists, journalists, politicians, academics, think tanks, librarians, advocacy organizations and associations, regulatory agencies, corporations, cybersecurity experts, military, celebrities, regular folk... all have a vested interest in this topic. Each can give a different perspective.
 

Throughout the document, you will see some symbols, simply pointers alongside @names, to serve as guides:

verified account     key contact       collaborator  

Create alliances and partnerships

See what has been done or published that could serve as a blueprint going forward. Mentioned in this work, for example, is a special manual compiled by leading journalists from the BBC, Storyful, ABC, Digital First Media and other verification experts. Four contacts there alone might be interested in this project.

Related research:

'Truth in Media' Ecosystem

Organizations and individuals important to the topic of fake news and its solutions
Work in progress - Contributions and suggestions welcome

Keep a tally of solutions - and mess ups

Aside from this, what else has been implemented by Google, Facebook, Twitter and other organizations? How have people reacted? What are they suggesting? How is the media covering this? Have there been any critical turning points? And the bots, so much in the news nowadays: how are they being analyzed and dealt with? What has been the experience with them… abroad?

So many questions...

Statement by Mark Zuckerberg 

November 19, 2016

Delve into the Underworld

FB Fake.jpg

One can’t assume that there is criminal intent behind every story but, when up against state actors, click farms and armies of workers hired for specific ‘gigs’, it helps to know exactly how they operate. In any realm, be it ISIS, prostitution networks or illegal drugs, these actors are experts at using the platforms.

Recommended

Future Crimes  by Marc Goodman @FutureCrimes 

The Kremlin Handbook - October 2016
Understanding Russian Influence in Central and Eastern Europe

Cybersecurity Summit Stanford
Munich Security Conference - Agenda

21 September 2016

Panel Discussion:
“Going Dark: Shedding light on terrorist and criminal use of the internet” [1:29:12]

Gregory Brower (Deputy General Counsel, FBI), Martin Hellman (Professor Emeritus of Electrical Engineering, Stanford University), Joëlle Jenny (Senior Advisor to the Secretary General, European External Action Service), Joseph P. McGee (Deputy Commander for Operations, United States Army Cyber Command), Peter Neumann (Director, International Centre for the Study of Radicalisation, King's College London), Frédérick Douzet (Professor, French Institute of Geopolitics, University of Paris 8; Chairwoman, Castex Chair in Cyber Strategy; mod.)

Related


170212 - Medium
The rise of the weaponized AI Propaganda machine  There’s a new automated propaganda machine driving global politics. How it works and what it will mean for the future of democracy.


Signal

Install it. Just because.


160622 - The Intercept
Battle of the secure messaging apps: How Signal beats WhatsApp

Be prepared to unlearn everything you know


For years, other countries have dealt with issues of censorship, propaganda, etc. It is useful to understand what has happened and to see what elements of their experience we can learn from. Case studies, debates, government interventions, reasoning, legislation: everything helps.

Essential here - insights from individuals who have experienced it and understand the local language and ideology.


Note. This also means learning from the “natives”, the ones born with - you know - a chip in their brain.

Stay connected

Please join us on Twitter at @Media_ReDesign and on Facebook for the latest news updates.

A Slack team (group messaging) pertaining to a number of related projects is available for those who wish to connect or do further research on the topic. You can sign up here. For those not familiar, two introductory videos are available, one showing how it can be used in teams, the other describing the platform itself.

Twelve channels [update pending] have been created so far. Click on the CHANNEL heading to expand the different categories and click on any you want to join. Clicking on DIRECT MESSAGES allows you to contact all members who are currently on there; the full “Team Directory” is accessible through the menu.

A final word

Before starting, go through the document quickly to get a sense of the many areas of discussion that are developing. Preliminary topics and suggestions are being put in place but many other great ideas appear further on, unclassified for the time being while the team gets to them.

This is a massive endeavour but well worth it. Godspeed.


Stay Informed  

Background

A superb summary dealing with fake news is being written up over at Wikipedia. With almost 185 sources to date [August 28, 2018], it gives an overview of the issues, starting with a detailed look at prominent sources, going on to impact by country and responses on the part of industry players, and finishing with academic analysis.

As a starting point and perhaps a guideline to better structure the document going forward, it is highly recommended. - @linmart [26 DEC 16]

Initiatives

Robert Reich: Inequality Media


After years of collaboration, Jacob Kornbluth @JacobKornbluth worked with Robert Reich @RBReich to create the feature film Inequality for All. The film was released in 270 theaters in 2013 and won the U.S. Documentary Special Jury Award for Achievement in Filmmaking at the Sundance Film Festival. Building on this momentum, Kornbluth and Reich founded Inequality Media in 2014 to continue the conversation about inequality with viewers.

Inequality for All: Website - FB /InequalityForAll - @InequalityFilm - Trailer

Inequality Media:  Website - FB /InequalityMedia - @InequalityMedia 

Robert Reich: LinkedIn -  FB /RBReich  FB Videos - @RBReich

Saving Capitalism.jpg


Kickstarter Campaign
How does change happen? We go on a trip with Robert Reich outside the “bubble” to reach folks in the heartland of America to find out.

3,790 backers pledged $298,436 to help bring this project to life.

Saving Capitalism: For the Many, Not for the Few  
#SavingCapitalism @SavingCapitalism

Perhaps no one is better acquainted with the intersection of economics and politics than Robert B. Reich, and now he reveals how power and influence have created a new American oligarchy, a shrinking middle class, and the greatest income inequality and wealth disparity in eighty years. He makes clear how centrally problematic our veneration of the free market is, and how it has masked the power of moneyed interests to tilt the market to their benefit.

… Passionate yet practical, sweeping yet exactingly argued, Saving Capitalism is a revelatory indictment of our economic status quo and an empowering call to civic action.”


As featured in:

160120 - Inc
5 Books that billionaires don't want you to read



Dan Rather: On journalism & finding the truth in the news

Learn to ask the right questions & tell captivating stories. Practical advice for journalists & avid news consumers.

 


Web Literacy for Student Fact-Checkers 

by Mike Caulfield @holden

Work in progress but already excellent. Recommended by @DanGillmor

Syllabus: Social Media Literacies

Instructor: Howard Rheingold - Stanford Winter Quarter 2013


Andrew Wilson, Virtual Politics: Faking Democracy in a Post-Soviet World (Yale University Press, 2005)

This seminal work by one of the world’s leading scholars in the field of “political technology” is a must-read for anyone interested in how the world of Russian propaganda and the political technology industry works, and how it impacts geopolitics. It has received wide critical acclaim and offers unparalleled insights. Many of the names seen in connection with both Trump’s business dealings and the Russian propaganda apparatus appear in Wilson’s work.


Bulletin Board

Upcoming Events

Event Calendar: Trust, verification, & beyond
Full listing of events curated by the @MisinfoCon community

There are a lot of conversations happening right now about misinformation, disinformation, rumours, and so-called "fake news."

The event listing here is an attempt to catalogue when and where those conversations are happening, and to provide links to follow-up material from those conversations. You can help out by filling in the blanks: What's missing?


01 misinfo.jpg

Latest event update: May 29, 2018


Ongoing Initiatives

European Commission - Digital Single Market

High-Level Group on Fake News and online disinformation  NEW

Colloquium on Fake News and Disinformation Online
27 February 2018
2nd Multistakeholder Meeting on Fake News
Webcast   RECORDED

The News Literacy Project 

@NewsLitProject

A national educational program that mobilizes seasoned journalists to help students sort fact from fiction in the digital age.

Stony Brook’s Center for News Literacy 

Hosted on the online learning platform Coursera, the course will help students develop the critical thinking skills needed to judge the reliability of information no matter where they find it — on social media, the internet, TV, radio and newspapers.

Each week will tackle a challenge unique to the digital era:

Week 1:        The power of information is now in the hands of consumers
Week 2:        What makes journalism different from other types of information
Week 3:        Where we can find trustworthy information
Week 4:        How to tell what’s fair and what’s biased
Week 5:        How to apply news literacy concepts in real life
Week 6:        Meeting the challenges of digital citizenship

The course is free, but participants can opt to pay $49 to complete the readings and quizzes (which are otherwise optional) and, if they pass muster, end up with a certificate.


Conference.jpg - Flickr

Past Events [Sample]

The Center for Contemporary Critical Thought - Digital Initiative

Tracing Personal Data Use

April 13-14, 2017

Cambridge Analytica: Tracing Personal Data
(from ethical lapses to its use in electoral campaigns)

Thursday, April 13, 2017
11:00am
East Gallery, Maison Francaise
Columbia University

Speaker:        Paul-Olivier Dehaye @podehaye, with Tamsin Shaw
Respondent:        Cathy O'Neil @mathbabedotorg
Moderated by:        Professor Michael Harris

Find out more

International Fact Checking Day

April 2nd, 2017

International Fact-Checking Day will be held on April 2, 2017, with the cooperation of dozens of fact-checking organizations around the world. Organized by the International Fact-Checking Network, it will be hosted digitally on www.factcheckingday.com. The main components of our initiative will be:

  1. A lesson plan on fact-checking for high school teachers.
  2. A factcheckathon exhorting readers to flag fake stories on Facebook.
  3. A “hoax-off” among top debunked claims.
  4. A map of global activities.

If you are interested in finding out more/participating, reach out to factchecknet@poynter.org


The Future of News in an Interconnected World

01 Mar 2017
12:30 - 15:00
European Parliament, Room P5B00


Independent journalism is under pressure as a result of financial constraints. Local media is barely surviving and free online content is sprawling. On social media platforms that are built for maximum profit, sensational stories easily go viral, even if they are not true. Propaganda is at an all-time high, and personalised newsfeeds result in filter bubbles, which has a direct impact on the state of democracy. These are just some of the issues that will be explored in this seminar, which will also examine how journalists and companies see their position and the role of social media and technology.

MisinfoCon: A Summit on Misinformation

Feb 24 - 27, 2017
Cambridge, MA

A summit to seek solutions - both social and technological - to the issue of misinformation. Hosted by The First Draft Coalition @firstdraftnews, The Nieman Foundation for Journalism @Niemanfdn and Hacks/Hackers @HacksHackers

Find out more.

Follow at
@Misinfocon  #misinfocon

MisinfoCon: Pre-event Reading & Creative Studio Resources

Combating Fake News: An Agenda for Research and Action

February 17, 2017 - 9:00 am - 5:00 pm
Harvard Law School
Wasserstein Hall 1585
Massachusetts Ave, Cambridge, MA 02138

Full programme. Follow #FakeNewsSci on Twitter.

Write up:

170217 - Medium
Countering Fake News

Media Learning Seminar 

February 13 - 14, 2017

What do informed and engaged communities look like today?

Find videos of the discussion here, or follow the comments on Twitter via #infoneeds

The Future of News: Journalism in a Post-Truth Era

Tuesday, Jan 31, 2017

4:00 - 6:00 pm EST

Sanders Theatre, Harvard University

Co-sponsored by the Office of the President, the Nieman Foundation for Journalism, and the Shorenstein Center on Media, Politics, and Public Policy

Speakers include: Gerard Baker, editor-in-chief of The Wall Street Journal; Lydia Polgreen, editor-in-chief of The Huffington Post; and David Leonhardt, an op-ed columnist at The New York Times

Full Programme

Video coverage of the event is available.


170201 - NiemanLab
The boundaries of journalism — and who gets to make it, consume it, and criticize it — are expanding
Reporters and editors from prominent news organizations waded through the challenges (new and old) of reporting in the current political climate during a Harvard University event on Tuesday night.

Dear President: What you need to know about race 

Jan 27, 2017 - 2:30 pm – 4 pm

Newark Public Library, Newark, NJ.
Community conversation hosted by Free Press News Voices: New Jersey
Via Craig Aaron @notaaroncraig, President and CEO of Free Press.

Knight Foundation & Civic Hall Symposium on Tech, Politics, and the Media

Jan 18, 2017 - 8:30 am - 6:00 pm

New York Public Library

5th Ave at 42nd St, Salomon Room

Berkeley Institute for Data Science
UnFakingNews Working Group

Meeting Monday, January 9, 5-7pm -- 190 Doe Library

A group of computer scientists, librarians, and social scientists supporting an ecosystem of solutions to the problem of low quality information in media. For more information, contact nickbadams@berkeley.edu



In the News


Essential reads


170212 - Medium
The rise of the weaponized AI Propaganda machine 

There’s a new automated propaganda machine driving global politics. How it works and what it will mean for the future of democracy.


170127 - IFLA
Alternative Facts and Fake News – Verifiability in the Information Society

161228 - MediaShift
How to fight fake news and misinformation? Research helps point the way

161122 - NYT Magazine

Is social media disconnecting us from the big picture? 
By Jenna Wortham
@jennydeluxe 

161118 Nieman Lab
Obama: New media has created a world where “everything is true and nothing is true” 

By Joseph Lichterman @ylichterman √  

161118 - Medium
A call for cooperation against fake news 

by @JeffJarvis 
@BuzzMachine blogger and j-school prof; author of Public Parts, What Would Google Do?

161116 - CNET
Maybe Facebook, Google just need to stop calling fake news 'news'
by Connie Guglielmo @techledes
Commentary: The internet has a problem with fake news. Here's an easy fix.

Featured


Knight Foundation - Civic Hall Symposium on Tech, Politics and Media
Agenda and Speakers
New York Public Library - January 18, 2017

2017 - Ethical Journalism Network: Ethics in the News [PDF]
EJN report on the challenges for journalism in the post-truth era

 

161219 - First Draft News
Creating a Trust Toolkit for journalism

Over the last decade newsrooms have spent a lot of time building their digital toolbox. But today we need a new toolbox for building trust

170114 - Huffington Post
Why do people believe in fake news?

160427 - Thrive Global
12 Ways to break your filter bubble

General

161211 - NPR
A finder's guide to facts

Behind the fake news crisis lies what's perhaps a larger problem: Many Americans doubt what governments or authorities tell them, and also dismiss real news from traditional sources. But we've got tips to sharpen our skepticism.



161209 - The Guardian
Opinion: Stop worrying about fake news. What comes next will be much worse

By Jonathan Albright @d1gi, professor at Elon University in North Carolina, expert in data journalism
In the not too distant future, technology giants will decide what news sources we are allowed to consult, and alternative voices will be silenced

161128 - Fortune
What a map of the fake-news ecosystem says about the problem

By Mathew Ingram @mathewi, Senior Writer at Fortune

Jonathan Albright’s work arguably provides a scientifically-based overview of the supply chain underneath that distribution system. That could help determine who the largest players are and what their purpose is.

161128 - Digiday
‘The underbelly of the internet’: How content ad networks fund fake news
Forces work in favor of sketchy sites. As ad buying has become more automated, with targeting based on audience over site environment, ads can end up in places the advertiser didn’t intend, even if they put safeguards in place.

161125 - BPS Research Digest
Why are some of us better at handling contradictory information than others?

Trump Presidency


'Alternative Facts': how do you cover powerful people who lie?
A collaborative initiative headed by Alan Rusbridger @arusbridger, ex-editor of The Guardian, Rasmus Kleis Nielsen @rasmus_kleis and Heidi T. Skjeseth @heidits. View only.

170216 - Politico
How a Politico reporter helped bring down Trump’s Labor Secretary pick

"This was the most challenging story I’ve ever done. But it taught me that with dedication and persistence, and trying every avenue no matter how unlikely, stories that seem impossible can be found in the strangest of ways." - Marianne LeVine


Reuters
Covering Trump the Reuters Way
Reuters Editor-in-Chief Steve Adler


170115 - Washington Post
A hellscape of lies and distorted reality awaits journalists covering President Trump
Journalists are in for the fight of their lives. They will need to work together, be prepared for legal persecution, toughen up for punishing attacks and figure out new ways to uncover and present the truth. Even so — if the past really is prologue — that may not be enough.


Dec 2016 - Nieman Lab
Feeling blue in a red state

I hope the left-leaning elements of journalism (of which I would be a card-carrying member if we actually printed cards) take a minute for reflection before moving onto blaming only fake news and Russian hacking for the rise of Trump.

161111 - Medium
What’s missing from the Trump Election equation? Let’s start with military-grade PsyOps
Too many post-election Trump think pieces are trying to look through the “Facebook filter” peephole, instead of the other way around. So, let’s turn the filter inside out and see what falls out.

161109 - NYMag
Donald Trump won because of Facebook

Social media overturned the political order, and this is only the beginning.

International

170113 - The Guardian
UK media chiefs called in by minister for talks on fake news

Matt Hancock, the minister of state for digital and culture policy, has asked UK newspaper industry representatives to join round-table discussions on the issue of fake news.

170107 - The Guardian
German police quash Breitbart story of mob setting fire to Dortmund church

170105 - Taylor Francis Online
Russia’s strategy for influence through public diplomacy and active measures: the Swedish case
Via Patrick Tucker @DefTechPat Tech editor at @DefenseOne

161224 - The Times of Israel
Pakistan makes nuclear threat to Israel, in response to fake news


161215 - The Guardian
Opinion: Truth is a lost game in Turkey. Don’t let the same thing happen to you
We in Turkey found, as you in Europe and the US are now finding, that the new truth-building process does not require facts. But we learned it too late

161223 - The Wire
The risks of India ignoring the global fake news debate

A tectonic shift in the powers of the internet might be underway as you read this. 

161123 - Naked Security
Fake news still rattling cages, from Facebook to Google to China

Chinese political and business leaders speaking at the World Internet Conference last week used the spread of fake news, along with activists’ ability to organize online, as signs that cyberspace has become treacherous and needs to be controlled.



161220 - NYT
Russian hackers stole millions a day with bots and fake sites

A criminal ring is diverting as much as $5 million in advertising revenue a day in a scheme to show video ads to phantom internet users.


160418 - Politico
Putin's war of smoke and mirrors
We are sleepwalking through the end of our era of peace. It is time to wake up.

Press Room: Research and articles related with this project

'Alternative Facts': how do you cover powerful people who lie?      
A collaborative project headed by Alan Rusbridger, ex-editor of The Guardian, Rasmus Kleis Nielsen @rasmus_kleis @arusbridger & Heidi T. Skjeseth @heidits

170207 - Bill Moyers
Your guide to the sprawling new Anti-Trump Resistance Movement

170203 - Mashable
Google Docs: A modern tool of powerful resistance in Trump's America

How fake news sparked a political Google Doc movement

170108 - The Guardian
Eli Pariser: activist whose filter bubble warnings presaged Trump and Brexit

“The more you look at it, the more complicated it gets,” he says, when asked whether he thinks Facebook’s plan will solve the problem. “It’s a whole set of problems; things that are deliberately false designed for political ends, things that are very slanted and misleading but not false; memes that are neither false nor true per se, but create a negative or incorrect impression. A lot of content has no factual content you could check. It’s opinion presented as fact.”


Fake news has exposed a deeper problem – what Pariser calls a “crisis of authority”.

“For better and for worse, authority and the ability to publish or broadcast went hand in hand. Now we are moving into this world where in a way every Facebook link looks like every other Facebook link and every Twitter link looks like every other Twitter link and the new platforms have not figured out what their theory of authority is.



161215 - Washington Post
Fake news is sickening. But don’t make the cure worse than the disease.

161215 - USA Today
Fake-news fighters enter breach left by Facebook, Google
A cottage industry of fake-news fighters springs up as big platforms move slowly to roll out fixes.

161206 - Digital Trends
Forget Facebook and Google, burst your own filter bubble 

161130 - First Draft News
Timeline: Key moments in the fake news debate

161129 - The Guardian
How to solve Facebook's fake news problem: experts pitch their ideas

161127 - Forbes
Eli Pariser's Crowdsourced Brain Trust is tackling fake news 

Upworthy co-founder and hundreds of collaborators gather the big answers

161125 Wired
Hive Mind Assemble 
by Matt Burgess @mattburgess1 
Upworthy co-founder Eli Pariser is leading a group of volunteers to try to find a way to determine whether news online is real or not

161119 - Quartz
Facebook’s moves to stamp out “fake news” will solve only a small part of the problem

161118 - CNET
The internet is crowdsourcing ways to drain the fake news swamp
Pundits and even President Obama are bemoaning fake news stories that appeared online leading up to the election. A solution might be found in an open Google Doc.

161116 - The Verge
The author of The Filter Bubble on how fake news is eroding trust in journalism

‘Grappling with what it means to look at the world through these lenses is really important to us as a society’

161115 - Digiday [Podcast 23:12] 
Nieman’s Joshua Benton: Facebook has ‘weaponized’ the filter bubble

161109 - Nieman Lab
The forces that drove this election’s media failure are likely to get worse

By Joshua Benton @jbenton 

Segregated social universes, an industry moving from red states to the coasts, and mass media’s revenue decline: The disconnect between two realities shows no sign of abating.

[Press Room - Full Archive]




About Eli Pariser


Eli is an early online organizer and the author of
The Filter Bubble, published by Penguin Press in May 2011.

Shortly after the September 11th terror attacks, Eli created a website calling for a multilateral approach to fighting terrorism. In the following weeks, over half a million people from 192 countries signed on, and Eli rather unexpectedly became an online organizer.

The website merged with MoveOn.org in November of 2001, and Eli, then 20 years old, joined the group to direct its foreign policy campaigns. He led what the New York Times Magazine termed the “mainstream arm of the peace movement,” tripling MoveOn’s member base in the process, demonstrating for the first time that large numbers of small donations could be mobilized through online engagement, and developing many of the practices that are now standard in the field of online organizing.

In 2004, Eli co-created the
 Bush in 30 Seconds online ad contest, the first of its kind, and became Executive Director of MoveOn. Under his leadership, MoveOn.org Political Action has grown to five million members and raised over $120 million from millions of small donors to support advocacy campaigns and political candidates, helping Democrats reclaim the House and Senate in 2006.

Eli focused MoveOn on online-to-offline organizing, developing phone-banking tools and precinct programs in 2004 and 2006 that laid the groundwork for Barack Obama’s remarkable campaign. MoveOn was one of the first major progressive organizations to endorse Obama for President in the presidential primary.

In 2008, Eli transitioned the Executive Director role at MoveOn to
Justin Ruben and became President of MoveOn’s board.

Eli grew up in Lincolnville, Maine, and graduated summa cum laude in 2000 with a B.A. in Law, Politics, and Society from Bard College at
Simon's Rock. He is currently serving as the CEO of Upworthy and lives in Brooklyn, NY.

Contact: @elipariser 

Conferences

Combating Fake News: An Agenda for Research and Action 

February 17, 2017 - 9:00 am - 5:00 pm
Full programme - #FakeNewsSci on Twitter


Related articles

170214 - Forbes
Political issues take center stage at SXSW

170205 - The College Reporter
Workshop provides students with knowledge pertaining to fake news



170207 - Backchannel
Politics have turned Facebook into a steaming cauldron of hate

170201 - Triple Pundit
Upworthy and GOOD announce merger, join forces to become the leader in Social Good Media


170127 - Observer
These books explain the media nightmare we are supposedly living in


170118 - OpenDemocracy.net
The internet can spread hate, but it can also help to tackle it

161216 - NPR TED Radio Hour
How can we look past (or see beyond) our digital filters?

161122 - NYT Magazine

Is social media disconnecting us from the big picture? 
By Jenna Wortham
@jennydeluxe 

161112 - Medium
How we broke democracy

Our technology has changed this election, and is now undermining our ability to empathize with each other

1108 - TED Talks
Eli Pariser: Beware online "Filter Bubbles"

110525 - Huffington Post
Facebook, Google giving us information junk food, Eli Pariser warns


0305 - Mother Jones
Virtual Peacenik


030309 - NYT Magazine
Smart-mobbing the war

[Eli Pariser - Full Archive]


Start of Document 

Basic concepts

  • Define concepts clearly. Is the “fake” / “true” dichotomy the best approach here? There can be multiple dimensions to help inform people: “serious / satire,” “factual / opinion,” “factually true / factually untrue,” “original source / meta-commentary on the source,” etc. +Kyuubi10

  • To expand on the previous concept… This is more of a question to keep in mind rather than a solution, but I believe important nonetheless:

    How should fact-checking be done? How can we confirm that the “powers that be” are not messing with the systems created? How do we avoid human bias in fact-checking? How do we avoid censorship efforts that use our systems to censor content? Or avoid our systems being used to promote propaganda? --Kyuubi10

  • Not only define or suggest terms, but perhaps try to express what people think is the problem (i.e. is it that people are misled, that political outcomes are being undemocratically influenced, or that civil discussion is being undermined and polarized?) There may be many problems people in this discussion have in mind, implicitly or explicitly, and to discuss solutions it is important to agree on which problem (or aspect of it) is being addressed. [@tmccormick / @DiffrMedia] +@IntugGB 

  • Note [30 Nov 2016] @tmccormick: it seems there are a few overlapping problems, and varying definitions, in this discussion. “Fake news” is used variously to mean deliberately false news; false but not necessarily deliberate news; propaganda, as in information created to influence, which may be false or not; or information that is biased or misleading. Also, in some cases we aren’t talking about ‘news’ at all, for example false reviews, or false or disputed health/medical information (e.g. the anti-vaxxer issue). - Note [15 Dec 2016] @linmart: Add false equivalencies: a debate where information that is 99% verified is set up in equal standing with a 1% view.

    Many parts of this discussion concern issues of confirmation bias and the polarization of opinion groups. This intersects with “fake news” topic because it’s one reason people create, share, and accept misinformation, but it is really a broader issue, in that it describes how we tend to form and maintain, narrow or broaden, our views in general.

    Related to polarization, there is another lens on this topic area, which is trust: trust in media institutions, or in civic institutions generally. If the public does not trust or support media organizations, the truth of those orgs’ news doesn’t really matter. (This is generally the angle of the
    Trust Project, one of the biggest existing media collaborations in this field.) Trustworthiness is not the same as truthfulness; for example, we may have degrees of trust in opinion, analysis/interpretation, or prediction, none of which reduce to true or not. Trust is driven by many factors besides news truthfulness.

    I note these differing lenses/problems because I am hoping this project will remain an open network for different issues and projects to intersect. The better it maps and organizes the core issues, considering these different points of view, the better it can be a useful hub for many interested contributors and organizations to learn from and complement each other’s work.

    << I agree. The Onion may be satire, but it communicates sharp societal critiques rather effectively/Same with Daily Show, et al. +Diane R

  • Backfire Effect. There is little hope of fighting misinformation with simple corrective information. According to a study by Nyhan & Reifler (Political Behavior, 2010, vol. 32; draft manuscript version here), “corrections frequently fail to reduce misperceptions of the targeted ideological group.” In some cases there is a “backfire effect” (a.k.a. boomerang effect) in which corrections actually strengthen the mistaken beliefs. This is especially true when the misinformation aligns with the ideological beliefs of the audience. More here and here. This suggests that corrective-information strategies are only likely to be successful with audiences that are not already predisposed to believe the misinformation.

  • Differentiate between sharing ‘personal information’ and ‘news articles’ on social media - the current single ‘share’ button for both is unhelpful.

  • Identify the emotional content of fake news. The public falls for fake news that validates their feelings (and their defenses against unwanted feelings). Respond to the emotional content of fake news.

  • Wondering if it might be necessary to open up a new area within this study: fake reviews. Even if they are just a joke, they can change public perception and who knows what else. Thinking companies, writers, politicians...

    State sponsored, click farms, comedy writers, no idea. - @linmart
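The multi-dimensional idea in the first bullet above can be sketched as a small data model - a hypothetical illustration (names and values are ours, not a proposed standard) of labelling content along several independent axes instead of a single fake/true flag:

```python
from dataclasses import dataclass

@dataclass
class ContentLabel:
    """One label per dimension suggested above, instead of fake/true."""
    intent: str       # "serious" or "satire"
    mode: str         # "factual" or "opinion"
    accuracy: str     # "factually true", "factually untrue", "unverifiable"
    provenance: str   # "original source" or "meta-commentary on the source"

# An Onion piece is satire with untrue content, not simply "fake":
onion_piece = ContentLabel(intent="satire", mode="opinion",
                           accuracy="factually untrue",
                           provenance="original source")
```

Keeping the axes separate lets satire, slanted-but-true reporting, and outright fabrication receive different labels rather than one undifferentiated “fake” tag.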


Definition of Fake News


At this time, there does not appear to be a definition of “fake news” in this document.

  • If readers/contributors do a Ctrl+F for the words “define” and “definition,” they will not find a definition of “fake news” in this document. The phrase “fake news” is used over 150 times without a definition.  

  • It seems that a handful of clear-cut examples of fake news have been recycled over and over since the election, but examples are not definitions. Examples do not tell us what fake news is. Establishing a working definition would be preferable to proceeding without one.  

I’d like to see the term ‘Fake News’ retired. If it’s fake, it’s not news. It’s lies, inventions, falsehoods, fantasy or propaganda. A humorist would call it “made up shit”. 

Compilation of basic terms

Possible areas of research and working definitions: 

- fake / true
- fake vs. fraudulent
- factual / opinion
- factually true / factually untrue

- logically sound / flawed
- original source / meta-commentary on the source

- user generated content

- personal information

- news

- paid content

- commercial clickbait
- gaming system purely for profit

Motive:
- prank / joke
- to drive followers/likes
- create panic
- brainwashing / programming / deprogramming
- state-sponsored (external / internal)
- propaganda
- pushing agenda

- money

- local ideology

- local norms and legislation - restrictions and censorship (i.e. Thailand, Singapore, China)


- fake accounts
- fake reviews
- fake followers
- click farms
- patterns

- satire

- bias
- misinformation
- disinformation

- libel


- organic / non organic

- viral




Further reference:

161128 - The New York Times 

News outlets rethink usage of the term ‘alt-right’ 

via Ned Resnikoff @resnikoff  Senior Editor, @thinkprogress 


161122 - Nieman Lab 
“No one ever corrected themselves on the basis of what we wrote”: A look at European fact-checking sites

161122 - Medium 
Fake news is not the only problem 

By @gilgul Chief Data Scientist @betaworks 


Classifying fake news, fake media & fake sources

Thread by @thomasoduffy

There are different kinds of “fake”, all of which need to be managed or mitigated. Cumulatively, these fake signals find their way into information in all kinds of places and inform people. We need to build a vocabulary to classify and conceptualise these aspects so we can think about them clearly:

  • Fake article / story:  For example, a fabricated story, based on made-up information, presented as true.
  • Fake reference:  A not-intentionally fake article that cites a fake source
  • Fake meme:  A popular media type for viral syndication, usually comprising an image and a quote. In this case, one that contains false/fake information.
  • Fake personality: A person controlling a social profile who pretends to be who they are not, unbeknownst to the public.  E.g. a troll pretending to be a celebrity
  • Fake representative: A person who falsely claims to represent an organisation, sometimes for the purposes of getting attention, sometimes for the purposes of discrediting that organisation.
  • Fake social page: A social page claiming to or portraying itself as officially representing a person/brand/organisation that has no basis
  • Fake website: A whole website that purports to be what it is not, with content that might be cited in topics of interest.
  • Fake reviews:  Reviews, whether published online or within a review section on an ecommerce site, that are incentivised or intentionally biased - such that, if an honest person understood the approach taken to writing the review, that honest person would mind. Arguably, this also applies to product placement or native advertising that is not disclosed clearly.
  • Fake portrayal:  As video becomes a primary way by which information is transmitted, any situation where a person behaves as an actor to communicate something they don’t hold to be true, and is not doing so purely for entertainment, could be described as a “fake portrayal”. For example, if a voice-over artist knowingly reads a false script for a brand but uses their skills to present it compellingly, the output is a kind of fake media. Likewise, if a celebrity fitness model showcases a lifestyle using a product they don’t habitually consume, contrary to what an ordinary person watching the show or advert would infer, this is a kind of fake media that ought to be limited.

  • Half-truth: This is most common in reporting. Half-truths are a mostly deliberate attempt to mislead an audience while using the truth, and/or by intentionally leaving out facts or part of the story, usually in an attempt to control a narrative. A political yet practical example is when Bill Clinton claimed he “did not have sexual relations with that woman”: he used a different definition of sexual relations so that, when the court called him out, he could claim on a technicality that his statement was correct. Within the context of news, half-truths should arguably be seen as ethically on par with, or even more severe than, deliberately fake news.


To some extent, it is worth decoding the strategies used by lobbyists, spin doctors, marketing agencies and PR companies - and considering what measures could limit their ability to syndicate information of “warped accuracy” - as a way to counter intentionally fake news.
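The categories in this thread could be captured as a small controlled vocabulary. A minimal sketch in Python (the identifier names are illustrative, not a proposed standard):

```python
from enum import Enum, auto

class FakeType(Enum):
    """The kinds of 'fake' enumerated above, one label per category."""
    FAKE_ARTICLE = auto()         # fabricated story presented as true
    FAKE_REFERENCE = auto()       # genuine article citing a fake source
    FAKE_MEME = auto()            # viral image/quote carrying false information
    FAKE_PERSONALITY = auto()     # profile impersonating someone else
    FAKE_REPRESENTATIVE = auto()  # false claim to speak for an organisation
    FAKE_SOCIAL_PAGE = auto()     # unofficial page posing as official
    FAKE_WEBSITE = auto()         # whole site purporting to be what it is not
    FAKE_REVIEW = auto()          # incentivised or intentionally biased review
    FAKE_PORTRAYAL = auto()       # actor presenting a falsehood as genuine
    HALF_TRUTH = auto()           # true facts selectively framed to mislead
```

A flagged item could then carry one or more of these labels, which keeps “a troll impersonating a celebrity” and “a half-truth in mainstream reporting” from collapsing into the same undifferentiated bucket.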

  • How would the above be “verified” as fakes? To avoid censorship, the process of verifying fakes must be much stricter than the one used to verify “facts”. I believe that bias and propaganda would easily fall within the realm of fake (or rather, should fall within it), but this means the size of the structures you might be fighting against will make this a hard fight. --Kyuubi10

  • Governments and businesses will use their wealth and power to push towards censorship, by exploiting the method of “verifying” fakes in order to remove opposing content. +@IntugGB

    The same can be true of creating Verified Sources and Outlets, where they can act as their own crowd-source to push their content to be verified. --Kyuubi10

Header

@Meighread Dandeneau, Comm Law and Ethics, 19 October 2017

Straight out of a modern dystopian novel comes the Orwellian headline “Obama: New media has created a world where ‘everything is true and nothing is true’” - or so we would think. Surprisingly, the very real article is less than a year old and based entirely in nonfiction. In November of 2016, Nieman Lab published the report after a Trump tweet claimed credit for saving a Kentucky Ford automotive business from closing. The information was later proven to be false, but the damage had already been done: the post had been seen and shared by millions of active followers. Obama addressed the event in a press conference in Germany, saying, “If we are not serious about facts, and what’s true and what’s not … then we have problems.” The First Amendment protects our right to speak freely, but with fake news becoming more prevalent in politics today, we have to ask ourselves - how far-reaching is the law?

Ethically, most people would say it is wrong to mislead or intentionally misinform another person. It’s dishonest, and from a young age, society instills in us the virtue not to lie. When slander is committed, the government has systems of handling it. When fraud is committed, punishment is a court case and conviction away from being enacted. But fake news has no such precedent. The rise of social media has aided and abetted the spread of such stories, and many companies profit from peddling the gossip.

To continue with the ‘Trump saving the Ford plant’ example, measurable impact followed when several media organizations picked up the story. Included in the mix were The New York Times, USA Today, and the Detroit Free Press, which all spread Trump’s claim unchecked. Further, these stories were unofficially endorsed by public figures who shared them, giving the story enough traction to appear on Google News. James Poniewozik, who condemned news organizations during the event, later tweeted, “Pushing back on fake news—some spread by the president—is going to become a bigger part of the media’s job.”

But what about alternative media companies, such as Facebook? Mark Zuckerberg deflects Facebook’s role in the deliberate spread of fake news, casting the platform in the role of an “aggregator”. Even if Facebook tried to filter fake news, Zuckerberg implies, the process would be nearly impossible. He states, “While some hoaxes can be completely debunked, a greater amount of content, including from mainstream sources, often gets the basic idea right but some details wrong or omitted. An even greater volume of stories express an opinion that many will disagree with and flag as incorrect even when factual”. This, he suggests, has kept human moderators from examining news at scale. “There is always a risk that the accusation of being ‘fake’ will be abused to limit free speech. Not all ‘fake’ news is ‘real’, and in any case, one person’s fake news is another person’s opinion,” says blogger Karolina.

Many look at fake news today and consider its circulation just a part of modern media literacy. Others, such as Jonathan Albright, data scientist at Elon University, Samuel Woolley, Head of Research at Oxford University’s Computational Propaganda Project, and Martin Moore, Director of the Centre for the Study of Media, Communication and Power at Kings College, believe fake news has “a much bigger and darker” purpose. They agree, “By leveraging automated emotional manipulation...a company called Cambridge Analytica has activated an invisible machine that preys on the personalities of individual voters to create large shifts in public opinion”. Now they are equipped to be used as an “impenetrable voter manipulation machine” and change elections as we know them forever. Hints of this have already been seen in the most recent election, which is what sparked their research. They call it “The Weaponized AI Propaganda Machine” and it “has become the new prerequisite for political success in a world of polarization, isolation, trolls, and dark posts”.

One question remains: what can we do? The obvious answer is to hold liars accountable for their actions in cases where the fault is black and white. But in cases like Trump’s, some could argue that his tweet was hyperbole. The fault can then be found in the media companies that shared the post as if it were raw news. As of now, such speech is covered legally. However, as we move toward a future where companies like Cambridge Analytica can manipulate the public as a weapon, examination of our changing world and its technology is paramount.

The problem with “Fake News”

How many times over this past year have you heard the term “Fake news” tossed around?  Most likely an uncountable number of times and over a vast variety of subjects.  Fake news has become so prevalent in our society that many of us don’t go a single day without a piece of fake news sneaking its way onto our computers, our smartphones, or our televisions.  This creates both ethical and legal issues that ripple through our society and ultimately take away from the medium of journalism as a whole.  

The term “Fake News” refers to the act of knowingly writing or distributing falsities that assert themselves as fact. The easiest, and thus most popular, way to spread fake news is by sharing misleading articles on Facebook, Twitter, or a plethora of other social media sites. These articles spread quickly, and because many readers have no reason to believe that the information may be false, they take every line as gospel. So why does this matter?

According to usnews.com, over the past year team LEWIS (which defines fake news as an “ambiguous term for a type of journalism which fabricates materials to perpetuate a preferred narrative suitable for political, financial, or attention-seeking gain”) conducted a study that sought to measure the overall effect fake news has had on people’s views of American news and brands. The study found that only 42% of millennials checked the credibility of publications, a figure that fell to 25% for baby boomers. This means that 58% of millennials and 75% of baby boomers may be fed false information on an almost daily basis and accept it as truth. This is a problem for a number of reasons.  

Usnews.com states that one of the main reasons fake news is spread is to promote political ideologies, and this is where all of this seriously matters. Let’s say you get most of your news about a politician from Facebook, and let’s say that at least 60% of what you’re reading is either entirely false or misleading. This means that you could potentially be voting for a candidate who totally appeals to you, when in reality they might be the exact opposite of what you’re looking for, and you were simply pushed and misled by the media. Now imagine that this didn’t only happen to you, but was the case for half, or even a quarter, of the people who also voted for this candidate. This is a direct consequence of fake news, and it is very scary.

Now, it’s not that we don’t have the tools to fact-check these articles ourselves; in fact, it’s really not very difficult to determine the credibility of an article if you know what to look for. The major problem is that many people have no reason to believe they’re being misled, especially the older generations. People tend to read an article, or even just its title, and have it stick with them, and at some point they’ll share it in conversation with their friends. It was so gripping that they wouldn’t want it to be fake in the first place, which deters them from even having the thought to check.

The ethical problems that arise because of fake news are significant and have a real-life impact. The people who push these articles have very little to answer for, as it is almost impossible to police all media in an attempt to fight this sort of thing. The best defense against fake media is to check everything before repeating it as truth. Scrutinize your sources, and know that there is a chance that anything you read online is false. That’s all we’ve got until AI can start determining whether a post is fake news.

Fake News Problems

Fake news is everywhere. Many people are unaware of the amount of fake news that is out there in the media, which includes television, radio and the internet, including social media like Facebook or Twitter. There are many problems related to fake news. For people who use different forms of media every day, two of the main problems are that very few know how to figure out whether something is fake, and that fake news is unethical. Once these problems are brought to light, we, as a society, can try to make the issue of fake news known among those who use the media.

When it comes to detecting whether something is classified as fake news, not many people know how to do it. The article “Ten Questions for Fake News Detection” shows ten different ways people can find out if something is considered fake news. Within the article, it asks questions like “Does it use excessive punctuation(!!) or ALL CAPS for emphasis?” Answers given a particular way count as red flags, and the more red flags there are, the worse it looks. There are other things media users can look at to determine whether something is fake news. Some are obvious, like excessive punctuation, and some are not, such as whether “the ‘contact us’ section includes an email address that matches the domain (not a Gmail or Yahoo email address).” The “Ten Questions for Fake News Detection” article is just one of many articles people can access if they want to figure out whether something is really fake news. Legally, these websites or articles do not have to disclose whether they are in fact fake news. The First Amendment protects these pieces from being taken down: it grants freedom of speech, and those who make articles online or speak on television or radio are exercising that freedom. Therefore, it won’t stop anyone from getting such material out into the media and the public’s eye. There are things people can look into that one wouldn’t have even thought of unless they looked further into the matter.
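A few of the red flags described above lend themselves to simple automated screening. As a rough sketch (a hypothetical helper, not the article’s actual checklist), excessive punctuation, ALL-CAPS emphasis, and a contact email that doesn’t match the site’s domain could be checked like this:

```python
import re

def red_flags(headline: str, contact_email: str, domain: str) -> list[str]:
    """Flag a few of the warning signs described above. Illustrative
    only: the real checklist has ten questions, each needing human
    judgement."""
    flags = []
    # Excessive punctuation, e.g. "!!" or "?!", in the headline
    if re.search(r"[!?]{2,}", headline):
        flags.append("excessive punctuation")
    # ALL CAPS used for emphasis (ignore short acronyms)
    if any(w.isupper() and len(w) > 3 for w in headline.split()):
        flags.append("ALL CAPS for emphasis")
    # The "contact us" email should match the site's domain
    if not contact_email.endswith("@" + domain):
        flags.append("contact email does not match domain")
    return flags

print(red_flags("You WON'T Believe This!!", "tips@gmail.com", "example-news.com"))
# -> ['excessive punctuation', 'ALL CAPS for emphasis',
#     'contact email does not match domain']
```

Heuristics like these can only rank suspicion; the harder questions on the list (sourcing, author identity, corroboration) still require a human reader.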

Another main issue with fake news is the ethos aspect of it - ethos meaning the character and credibility of the “news” and the source it comes from. Many sources are considered fake news, and a large number of people would agree that it is unethical to publish something fake. It throws people off when they’re looking through the media and seeing these things. Yes, it makes the viewer question the piece and whether it is actually true, but depending on the topic, it can affect the viewer in a large way, mostly negatively. It can cause anger and rage if it says one thing and the viewer takes that as the truth. This being said, a lot of fake news can be considered unethical because of the pain and frustration it puts people through. People need to take into consideration that something may be fake news, and if it is, they need to ignore it, even though that may be easier said than done.

It’s crazy to think how popular fake news has become and how much society indulges in it in everyday life. There are many problems related to fake news that become clear once one thinks about how it affects people’s lives. Fake news is an issue people need to be aware of: they need to be able to determine whether something is fake and why, and to recognize that it is unethical. With the amount of media used in today’s society, people should have a better understanding of fake news, especially when it’s all over social media, which is extremely important to thousands of people.

Considerations → Principles → The Institution of Socio - Economic Values

by: Timothy Holborn 

A Perspective by Eben Moglen[1] from re:publica 2012

The problem of ‘fake news’ may be solved in many ways.  One way involves mass censorship of articles that do not come from major sources, but may not result in news that is any more ‘true’.  Another way may be to shift the way we use the web, but that may not help us be more connected. Machine-readable documents are changing our world.

It is important that we distill ‘human values’ alongside the ‘means for commerce’. As we move from the former world of broadcast services, where the considerations of propaganda were far better understood, to modern services that serve not millions but billions of humans across the planet, the principles we forged as communities need to be re-established. We have the precedents of Human Rights[2], but do not know how to apply them in a world where the ‘choice of law’[3] for the websites we use to communicate may deem us to be alien[4]. Traditionally these problems were addressed through the Liberal Arts[5]; with the advent of the web, the more modern context becomes that of Web Science[6], incorporating the role of ‘philosophical engineering’[7] (and therein the considerations of the liberal arts by computer scientists).


So what are our principles, what are our shared values? And how do we build a ‘web we want’ that makes our world a better place, both now and into the future?

It seems many throughout the world have suffered mental-health effects[8] as a result of the recent election result in the USA: a moment in which billions of people simultaneously confronted a populace exercising its democratic rights and producing an outcome that came as a significant surprise, with global consequences. So perhaps the baseline question becomes: how can our ‘world wide web’ better provide us (humans) with a more accurate understanding of world events and of the circumstances felt by other humans?

  General Ideas 

Behavioral economics and other disciplines

  • Invest in Behavioral Economics, the application of psychological insights into human behavior to explain decision-making. Have standards to test the impact of any of these ideas on intended outcomes.

  • Invest in Social Cues. People create and share fake news because they cannot be held accountable for its content. Accountability cues for content should be part of any technological and behavioral solutions to the fake news problem.
  • A novel concept proposed in the following article: An algorithm that finds truth even if most people are wrong, Drazen Prelec, Massachusetts Institute of Technology Sloan School, Cambridge MA 02139, dprelec@mit.edu +alex@coinfund.io
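On a simplified reading of Prelec’s idea (the “surprisingly popular” answer, stripped of the paper’s full Bayesian machinery), respondents give both their own answer and a prediction of how others will answer; the algorithm picks the answer whose actual support most exceeds its predicted support:

```python
from collections import Counter

def surprisingly_popular(votes, predictions):
    """votes:       each respondent's own answer
    predictions: per-respondent dicts mapping answer -> predicted share
    Returns the answer whose actual share most exceeds its predicted share."""
    n = len(votes)
    actual = {a: c / n for a, c in Counter(votes).items()}
    # Average predicted share for each answer across respondents.
    predicted = {
        a: sum(p.get(a, 0.0) for p in predictions) / len(predictions)
        for a in actual
    }
    return max(actual, key=lambda a: actual[a] - predicted[a])
```

In the classic example (“Is Philadelphia the capital of Pennsylvania?”), most people wrongly answer yes, but nearly everyone, including the informed minority, predicts that yes will dominate; “no” is then more popular than predicted and wins.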





Human Editors



  • For more established outlets: consider immersing your team in alternate realities. There is always going to be a bias in human judgement, but having experienced being in someone else’s shoes might cut through those perceptions. -@linmart

  • Given the speed with which fake Facebook posts propagate, covering them may resemble covering an emergency crisis. A special manual was produced by leading journalists from the BBC, Storyful, ABC, Digital First Media and other verification experts, described as “a groundbreaking resource for journalists and aid providers providing the tools, techniques and guidelines for how to deal with user-generated content (UGC) during emergencies”. - @JapanCrisis

  • Hire more human editors to vet what makes it to trending topics

  • By then it’s too late, however. It needs to be stopped before it gets there. There are ways to detect if a story is approaching virality, and if human editors monitor what’s going viral and vet those articles, they can kill its distribution before millions see it. Transparency is key to trust (last time Facebook had human editors, they were accused of being biased against conservative news outlets). Saying “this was removed because it was fabricated by Macedonian teens” is better than some vague message about violating the TOS. Eventually this data can train a machine-learned classifier.

Related:

Management Science
The structural virality of online diffusion - Vol. 62, No. 1, January 2016, pp. 180–196 

  • Facebook already has infrastructure for dealing with graphic content, and could easily employ something similar to mitigate fake news

  • Bring back human editors, but disclose to news outlets why specific articles are being punished. Transparency is the only way for news organizations to improve, instead of making them guess. -@advodude

  • Might have a bit too much overhead, but a thought: 1. Begin to fingerprint virally shared stories, as you likely already do in order to serve ads. 2. Have a human editor go through the resulting list and manually flag fake news. 3. Use these “verified fakes” to train an algo to recognize fake content and flag it for review. 4. Use these manual reviews to refine the algorithm. 5. Use user content flags as an additional signal. 6. A human is still required for the foreseeable future, but as the algo improves, the amount of work will decrease. -Steve
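Step 1 of this pipeline, fingerprinting virally shared stories, might be sketched as hashing a normalized copy of the text. This is a toy approach (an exact hash only catches verbatim copies; a production system would use shingling/MinHash for near-duplicates):

```python
import hashlib
import re

def story_fingerprint(title, body):
    """Normalize a story's text and hash it, so re-posts of the same
    story map to the same fingerprint despite casing/punctuation."""
    text = (title + " " + body).lower()
    text = re.sub(r"[^a-z0-9 ]+", " ", text)   # strip punctuation
    text = re.sub(r"\s+", " ", text).strip()   # collapse whitespace
    return hashlib.sha256(text.encode()).hexdigest()
```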

  • This could also be some sort of central-server (API) approach: a fact-checking server/API which any user or website can query with a given news URL. The server then returns all the information it knows about that URL, including whether it assumes the news URL to be fake, reasons for that, and alternative (non-fake) news sources on the given topics. This still requires human involvement to check stories, but it could be a somewhat independent organization in which actual news outlets (also) invest. +@IntugGb
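What such an endpoint might return can be sketched with a mock in-memory database; the field names are invented for illustration, not any real service’s schema:

```python
def check_url(url, known_fakes):
    """Look a news URL up in a (mock, in-memory) fake-news database and
    return an assessment plus reasons and alternative sources."""
    entry = known_fakes.get(url)
    return {
        "url": url,
        "assessment": "fake" if entry else "unknown",
        "reasons": entry["reasons"] if entry else [],
        "alternatives": entry["alternatives"] if entry else [],
    }
```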

 

  • Perhaps look at how Artificial Intelligence (AI) is being applied across industries. Search through case studies involving lawyers or doctors where massive amounts of information need to be looked at but where, in the end, the final call is made by a human. +@linmart

  • We are a group of former journalists and executives from Canada’s public broadcaster (CBC) whose company, Vubble, has been working on a solution to solve fake news, pop filter bubbles and engineer serendipity into the digital distribution of content (our focus is video). We’ve created an efficient and scalable process that marries professional human editors with algorithmic technology to filter and source quality content (and flag problematic/fake content -- we believe the audience needs more agency, not less). We are currently putting together the funding to build a machine learning layer that would produce responsive feeds, giving a user a quality experience of content ‘inside’ their comfort zone, but also deliberately popping in an occasional challenging piece to take her slightly outside her comfort zone on difficult subjects. We’re building this as an open platform, providing transparency on how these types of systems work, and routine auditing for bias within the code that drives it. (@TessaSproule) +Mathew Ingram +Eli
  • Create a news site that rewards honest reporting and penalizes dishonest reporting. - dvdgdnjsph
  • In order to post or vote on content, contributors must purchase site credit.
  • Reddit-style upvoting/downvoting determines the payment/penalty for contributions.
  • More details here 

        



Under the Hood

  • Insert domain expertise back into the process of development: Engage media professionals and scholars in the development of changes to the Facebook algorithm, before those changes are deployed. Allow a period of public comment where civil society organizations, representatives from major media outlets, scholars, and the public can begin to tease out the potential implications of these changes for media organizations - @RobynCaplan, @datasociety, @ClimateFdbk

>> I elaborated on this a bit here: - @msukmanowsky

161110 - Medium
Using quality to trump misinformation online

Using page and domain authority seems like a no-brainer as a start. I advocated for adding this information to something like Common Crawl

>> The problem with this approach is that fake news is not only generated by web domains but via UGC sites such as YouTube, Facebook and Twitter. - yonas

  • Use a source reliability algorithm to determine general reliability of a source and how truthful the facts in a particular article are. This has the benefit that newer news sources still get a fair chance at showing their content. -- Micha

  • Look up the DNS entry to see when the site was first registered - For example, washingtonpost.com was registered in 1995, while conservativestate.com was registered in Sept 2016 in Macedonia (+Daniel Mintz)

  • Track all news sources, to include when the website was first registered and any metadata suggesting links to fake/fraudulent activity, as part of an authenticity metric.

  • Provide strong disincentive for domains propagating false information, e.g. if a domain has demonstrably been the source of false information 10 times over the past year, dramatically decrease the probability that links pointing to it will be shown in users’ feed. -- Man
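The ideas above, discounting newly registered domains and penalizing repeat offenders, could combine into one down-weighting factor. A minimal sketch under invented parameters (the two-year trust ramp and the halving-per-strike are illustrative tuning choices, not part of the proposals):

```python
from datetime import date

def feed_weight(registered_on, strikes, today=date(2017, 1, 1)):
    """registered_on: WHOIS registration date of the domain
    strikes:       times the domain demonstrably spread false info
    Returns a multiplier in [0, 1] for how often its links are shown."""
    age_years = (today - registered_on).days / 365.0
    age_factor = min(age_years / 2.0, 1.0)   # full trust after ~2 years
    strike_factor = 0.5 ** strikes           # halve visibility per strike
    return age_factor * strike_factor
```

A domain registered in 1995 with no strikes keeps full weight, while one registered in September 2016 with three strikes is shown only a small fraction of the time.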
  • Genetic algorithms using many of these ideas, boiled down into discrete values. Spam filtering is very challenging due to the need to avoid false positives. Start with a seed of known false and true stories. Create genetic algorithms using several of these variables to compete over samples of these stories (a large set is needed, with rotating samples, to avoid overfitting). Once a satisfactory false-positive rate is reached, keep test algorithms running in a non-production environment to look for improvements. -- Steve
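A toy version of the genetic-algorithm idea: evolve feature weights against a labeled seed of stories, keeping the fittest half each generation and mutating it. The feature values and all parameters here are invented for illustration:

```python
import random

random.seed(0)  # deterministic for the example

def fitness(weights, stories):
    """Count stories classified correctly by thresholding a weighted sum."""
    correct = 0
    for features, is_fake in stories:
        score = sum(w * f for w, f in zip(weights, features))
        correct += (score > 0.5) == is_fake
    return correct

def evolve(stories, n_features, generations=30, pop_size=20):
    """Evolve weight vectors: keep the fittest half, add mutated copies."""
    pop = [[random.random() for _ in range(n_features)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda w: fitness(w, stories), reverse=True)
        parents = pop[: pop_size // 2]
        children = [[w + random.gauss(0, 0.1) for w in p] for p in parents]
        pop = parents + children
    return max(pop, key=lambda w: fitness(w, stories))
```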

  • Audit the algorithm regularly for misinformation--you are what you measure, and a lot of the effects are second-order effects of other choices. --z



Facebook 

  • Facebook polarizes discourse and thus becomes an unpleasant place to “be.” So many people are walking away from the platform as a result of this election. This is a problem in their best interest to solve, and the fake news problem is part of this. -@elirarey

Recommended

161120 - NPR
Post-election, overwhelmed Facebook users unfriend, cut back

  • There are two main reasons Facebook made it so easy to spread fake news: it allows anyone to post almost anything with little to no restriction, and even when posts do break regulations, the sheer volume to monitor means violations can go unnoticed for some time. It may be impossible to completely stop fake news from spreading on Facebook, but some measures could work both legally and ethically. Listed below are reasons why fake news spreads so easily on Facebook, along with possible solutions to prevent or drastically reduce it.
  • One reason Facebook became an outlet for fake news is that, at least through the 2016 presidential election and before it, Facebook did not have a strong policy on fake news and did not crack down on it, since it was not receiving large amounts of negative feedback. After the election, Americans began calling Facebook out, as so many fake news articles had been shared in the three months before election day; the most-shared fake articles ranged from close to 500,000 to nearly one million shares. Since then, Facebook has claimed it will start to punish and limit fake news. This will be tricky, as the site allows free speech short of threats. Facebook has the right to remove any post it wants, but doing so frequently would cost it members, spark debates, and possibly land it in court (even in cases it would win), all of which it would rather avoid. Its plan for finding and removing fake news is to give users the option to mark a post as fake news; Facebook will then check the domain of the site, and lastly the post will be sent to a third-party team responsible for fact-checking. It’s not an awful start, but it could have some downsides, such as:
  • How clear is it that people can now mark something as fake news? For instance, would it be clear to the average Facebook user that this is a tool? Most people probably wouldn’t recognize it unless it was obvious, as people tend to use Facebook to merely “swipe through” and see what friends are up to.
  • Would Facebook employees be able to tell what is meant to be fake news? What if a site like The Onion were spammed with fake-news reports? It is clearly satire, and the same could be said of any opinion piece being shared.
  • Would sharing the article be halted during the review process? If so, what if it turned out to be legitimate? It could prevent real news from getting shared.

  • Another reason fake news is popular on Facebook is that most users go there for a brief overview of what’s going on in the world, mainly within their social lives. The problem is precisely that briefness. A student killing time on Facebook before class doesn’t want to spend the whole time reading an article; if he or she sees a shocking, engaging fake headline, as they usually are, and has no knowledge or perception of the site being fake, he or she will most likely take it as true. Then, depending on how much emotion the piece stirs, he or she might well share it, spreading the fake news. A solution would be to require that a user actually click a link that leads outside Facebook before being able to share it. Although this would not stop fake news altogether, as people may still not realize it is fake, it would cut it down. Another way would be to install a merit system allowing pages to become verified, which would show that they are a trustworthy source.                - Alexander Rajotte

        

Facebook’s first attempt/plan of action to fight fake news.

Facebook’s fake news problem.

  • Facebook needs to be more transparent about the incentives that are driving the changes to their algorithm at different points in time. This can help limit the potential for abuse by actors seeking to take advantage of that system of incentives. - @RobynCaplan, @datasociety +Eli +Anton

  • Differentiate between sharing ‘personal information’ and ‘news articles’ on social media - the current ‘share’ button for both is unhelpful.  

    Social media sharing of news articles/opinion subtly shifts the ownership of the opinion from the author to the ‘sharer’. It makes it personal and defensive: there is a difference between a comment on a shared article criticising the author and one criticising the ‘sharer’, as if they’d written it. The sharer may not agree with all of it; they may be open-minded. By shifting the conversation about the article to the third person, it starts in a much better place: ‘the author is wrong’ is less aggressive than ‘you are wrong’. [Amanda Harris]

  • Implement a Time Delay on FB Re-shares: Political articles shared on Facebook could be subject to a time delay once they reach over 5,000 shares. Each time an article is re-shared, there is a one-hour delay between when the article is shared and when it appears in the timeline. When the next person shares it, there is another one-hour delay, and so on. This “cool down” effect would prevent false news from spreading rapidly. There could be an exponential filter: once an article reaches 20,000 shares, there could be a four-hour delay, etc. A list of white-labelled, verified sites, such as the New York Times and Wall Street Journal, would be exempt from this delay. - Peter@quill.org +BJ (This is a good idea!)

    >> This suggestion would apply only to FB? Of little use if, parallel to that, one single post spreads like wildfire on Twitter. -- @linmart

    Peter: Twitter could also implement this system, where political posts are delayed before appearing in the timeline. Still, Facebook has far more traction than Twitter internationally, so it’s a better place to start.
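The thresholds in this proposal translate directly into a small lookup; the 5,000- and 20,000-share cutoffs and the whitelist exemption come from the text above:

```python
def reshare_delay_hours(shares, verified=False):
    """Hours to hold a political article's re-share out of the timeline."""
    if verified:          # white-labelled outlets are exempt
        return 0
    if shares >= 20_000:  # exponential "cool down"
        return 4
    if shares >= 5_000:
        return 1
    return 0
```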

  • In countries with strong public media ethos, Facebook should present users with a toggle option on the “trending” --  option #1 is as is, fully driven by Facebook’s algorithm, or option #2 is a dynamic, curated feed that is vetted by professional editors (independent of Facebook -- programmed by a third party, from the country’s public broadcaster or some other publicly-accountable media entity.) --@TessaSproule 


    Recommended:

    161120 - NYT
    How Fake Stories Go Viral

  • Facebook already knows when stories are fake

    When you click a “like” button on an article, a list pops up of articles that people have also shared, and often that list includes, for example, a link to a Snopes article debunking it. So Facebook already knows that people respond to fake news with comments linking to Snopes. They could add a “false” button or menu item to flag fake news. -Abby +Eli +@linmart

  • Pressure Facebook’s major advertisers to pressure Facebook over the legitimacy of news stories upon which their ads are being displayed.
  • Yes, I second this, but how?

  • This is showing up:

[Image: FB filter.png]

Facebook message that now shows for the link provided by Snopes to the original source of the hoax.

As reported in this story:

161123 - Medium
How I detect fake news 

by @timoreilly

  • Facebook already has a good source for verifying fake news that it misuses and needs to use better.

A few important facts to consider first:

  • When a person ‘likes’ something they are:
  1. Saying to the poster, or site, that they like or approve
  2. Saying to Facebook this is what I like and want to see more of [1]
  3. Showing to their friends that they like this and approve
  • When sharing fake news, all of these factors help accidentally make it go viral, but #2 can be even more devastating. Consider Mat Honan’s experiment in his article “I Liked Everything I Saw on Facebook for Two Days. Here's What It Did to Me” [2]: when you like a fake news article, Facebook may feed you more from the site, or more articles like it.
  • On the reverse side, if you got tired of seeing something that happened to be legitimate, like a presidential candidate's official site, you might tell Facebook “I don’t want to see this” [3]. You would then stop seeing the legitimate news while still seeing the additional fake news that Facebook thinks you liked.

The smallest abuse here is that at the individual level a person can no longer trust their own friends to deliver real news. Then they have to go beyond Facebook to try to figure out what is real or fake.

What Facebook needs to do better is realize that trust among your family and actual friends (that you know outside of Facebook) is an invaluable tool for them and for Facebook.

Facebook needs to:

  • Post a verified site internally that educates users how to verify Fake or real news.
  • Have the above page(s), visible by a button or something similar, at all times while a person is in their news feed reading.
  • Have a badge that a person can earn for knowledge on verifying fake news.
  • This badge should only be seen and utilized by a person’s trusted family and known friends, which they should be able to designate themselves.
  • This badge could allow a person to have a talking point with their friends when they recognize a friend has shared fake news.
  • The ‘offender’ can then be directed to the Facebook Fake news verification site, and could earn a badge to get their friends confidence back.
  • Badges could have a level to them that could be raised or lowered by a person’s trusted friends (not friends of friends, or the public)
  • In the process, Facebook could accumulate these reports of fake sites, verified at a personal level among trusted friends.


Important fact: You trust your actual family and friends, and Facebook needs to acknowledge this, respect it, and tap into it, to help make Facebook a more legitimate site for news sharing.

  1. Worley, Becky (8 May 2013). Facebook Scam Alert - What Really Happens When You "Like". Yahoo! News. Retrieved 18 October 2017.
  2. Honan, Mat (14 August 2014). I Liked Everything I Saw on Facebook for Two Days. Here's What It Did to Me. Wired. Condé Nast. Retrieved 18 October 2017.
  3. "How do I hide a story that appears in my News Feed?" (n.d.). Facebook Help Center. Facebook. Retrieved 18 October 2017.


 

Analysis of Headlines and Content

  • Sentiment Analysis of headline - I suspect most fake news outlets use click-bait headlines with extremely strong verbiage to accentuate the importance of the story. Many clickbait headlines will be from legitimate stories so this is a signal, not a panacea. - Steve +BJ
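A crude version of this headline signal, counting shouty features; a real system would use a trained sentiment/style model, and the patterns below are invented examples:

```python
import re

def clickbait_score(headline):
    """Heuristic headline signal: higher means more clickbait-like."""
    score = headline.count("!")                  # excessive punctuation
    words = re.findall(r"[A-Za-z']+", headline)
    caps = [w for w in words if len(w) > 2 and w.isupper()]
    score += 2 * len(caps)                       # ALL-CAPS words
    if re.search(r"you won't believe|shocking|destroys", headline, re.I):
        score += 3                               # strong verbiage
    return score
```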

  • Search deep links for sources known to be legitimate - looking for sources within an article using textual analysis (such as “as reported by the AP” or “Fox News reports”), and checking the domains of those sources for a similar story (or checking the link to the source if provided), is a useful signal that a story is not fake. In case this gets gamed, a programmatic comparison of content between the source and the referring article may be useful - Steve +Kyuubi10 (This is a great idea!)


  • Cross-partisan index: Articles that people beyond a narrow subgroup are willing to share get more reach. -- Eli + Jesse + Amanda + Peter +CB +@IntugGB +Rushi +JS +BJ +NBA

  • Cross-partisan index II: Stories/claims that are covered by wide variety of publications (left-leaning, right-leaning) get higher Google ranking or more play on Facebook. --Tamar +1NBA
  • Cross-spectrum collaboration: Outlets perceived as left-leaning (eg NYT) partner on stories with those perceived as right-leaning (eg WSJ). -- Tamar +CB +@linmart

  • Compare Content with partisan language databases. Some academic research on Wikipedia has assembled a database of partisan language (i.e. words more likely to be used by Republicans or Democrats) after analyzing the congressional record. Content could be referenced against this database to provide a measure of relative “bias.” It could then be augmented by machine learning so that it could continue to evolve. --@profkane
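A minimal sketch of the comparison, with tiny invented word lists standing in for the congressional-record database the research assembled:

```python
# Hypothetical partisan word lists (the real database is far larger).
LEFT = {"inequality", "climate", "healthcare"}
RIGHT = {"taxes", "regulation", "border"}

def partisan_lean(text):
    """Score in [-1, 1]: negative leans left, positive leans right,
    0.0 when no partisan terms appear."""
    words = set(text.lower().split())
    left_hits = len(words & LEFT)
    right_hits = len(words & RIGHT)
    total = left_hits + right_hits
    return 0.0 if total == 0 else (right_hits - left_hits) / total
```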


(Related comments in section on
Surprising Validators -- @rreisman)


Reputation systems

  • Authority of the sharers: Articles posted by people who share articles known to be true get higher scores. -- Eli

  • The inverse -- depress / flag all articles (and sources) shared by people known to share content from false sources. -- John

  • Author authority, as well. Back in the day Google News used Google+ profiles to identify which authors were more legitimate than others and then factored that into their news algorithm. - Kramer +Eli +@linmart
  • I think authorship data might still be available in the form of “rich snippets” embedded in the articles. -Jesse
  • We’re looking at author bios tied back to a source like LinkedIn - @journethics +@linmart

  • There are fundamentally not that many fake sources that a small team of humans could not monitor/manage. Once you start flagging initial sources then the “magic algorithm” can take over: those who share sources are themselves flagged; other items they share are flagged in turn; those sources are themselves flagged; and so on. -- John (+Tamar)
  • Upvoting/downvoting - (Andrew) There are better approaches than simply counting the number of upvotes and downvotes. (+JonPincus)

  • Reuters Tracer

    Fall/Winter 2016 - CJR
    The age of the cyborg [AI]
    Already, computers are watching social media with a breadth and speed no human could match, looking for breaking news. They are scanning data and documents to make connections on complex investigative projects. They are tracking the spread of falsehoods and evaluating the truth of statistical claims. And they are turning video scripts into instant rough cuts for human review...

  • Fake news is usually an editorial tactic/strategy. So it’s something that is planned, repeated and with specific individuals working on it. An open standard reputation system just like Alexa rank will do the job. It will be first crowd-populated. We at Figurit are currently working on implementing this internally to discover stories while eliminating fake ones. So instead of ongoing filtering/policing of the news, an open reputation system adopted by major social networks and aggregators will kill fake news websites.

    NOTE: must put exception for The Onion! ;]


  • Higher ranking for articles with verified authors. -- Eli

  • Ask select users -- perhaps verified accounts, publisher pages, etc -- when posting/ sharing to affirm that the content is factual. Frontloading the pledge with a pop-up question (and asking those users to put their reputations at stake) should compel high-visibility users to consider the consequences of posting dubious content before it’s been shared, not after. (This is based on experiments that show people are more honest when they sign an integrity statement before completing a form than after.) -- Rohan +Manu

  • It’s a strange thing what happened with Google+. When it started, the group was very select. Content was extraordinary - as were the conversations. Once they opened the floodgates, all hell broke loose and all sorts of ‘characters’ started taking over. Conversations went from being quite academic to … well, different. --@linmart

  • I maintain an (open-source, non-profit) website called lib.reviews, which is a generic review site for anything, including websites. It allows collaborators to form teams of reviewers with shared processes/rules. I run one such team, which reviews non-profit media sources (so far: TruthOut, Common Dreams, The Intercept, Democracy Now!, ThinkProgress, Mother Jones, ProPublica). I think this is essential so news sources in the margins don’t get drowned out by verification systems or efforts to discredit them. Here’s the list of reviews specifically of non-profit media:

    Reviews by Team: Non-profit media

    It’s easy to expand this concept in different ways. If you’re interested in collaborating on the tech behind it or on writing reviews of news sites, see the site itself, or drop me a note at <eloquence AT gmail DOT com>. See our
    FAQ for general issues w/ user reviews.--Erik Moeller @xirzon 


A possible method of implementing reputation systems is to make the reputation calculation dynamic and system-based, mapping the reputation scores of sources onto a reverse sigmoid curve. The source scores are then used to determine the visibility of a source's articles on social media and in search engines. This ensures that while credibility takes time to build, it can be lost very easily.

Where:

Ss → Source Score

Sa → Cumulative score of its articles

However, this system needs to be dynamic and allow even newer publications a fair chance to get noticed. This needs to be done by monitoring the reputation of both the sources and the channels the articles pass through.

Have fleshed out the system in a bit more detail in the following open document if anyone is interested in taking a look.

Concept System for Improved Propagation of Reliable Information via Source and Channel Reliability Identification [PDF] 

Anyone interested in collaborating on this can contact me at sid DOT sreekumar AT gmail
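One way to read the “builds slowly, lost easily” asymmetry in code: damp positive updates to the cumulative article score Sa, apply negative ones at full weight, and map Sa to a visibility multiplier Ss via a logistic curve. All rates here are illustrative, not taken from the linked document:

```python
import math

def update_reputation(sa, article_score, gain_rate=0.1, loss_rate=1.0):
    """Update cumulative score Sa: gains are damped so credibility builds
    slowly; losses land at full weight so it can be lost quickly."""
    rate = gain_rate if article_score >= 0 else loss_rate
    return sa + rate * article_score

def visibility(sa):
    """Map cumulative score Sa to a visibility multiplier Ss in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-sa))
```

Ten good articles (score +1 each) lift Sa to 1.0, but a single bad one (score -2) drops it below zero, cutting visibility sharply.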



Build an algorithm that privileges authority over popularity. Create a ranking of authoritative sources, and score the link appropriately. I’m a Brit, so I’ll use British outlets as an example: privilege the FT with, say, The Times, along with the WSJ, the WashPo, the NYT, the BBC, ITN, Buzzfeed and Sky News, ahead of more overtly partisan outlets such as the Guardian and the Telegraph, which may count as quality publications but which are more inclined to post clickbaity, partisan bullshit. Privilege all of those ahead of the Mail, the Express, the Sun, the Mirror.

Also privilege news pieces above comment pieces; privilege authoritative and respected commentators above overtly partisan commentators. Privilege pieces with good outbound links - to, say, a report that’s being used as a source rather than a link to a partisan piece elsewhere.

Privilege pieces from respected news outlets above rants on Medium or individual blogs. Privilege blogs with authoritative followers and commenters above low-grade ranting or aggregated like farms. Use the algorithm to give a piece a clearly visible authority score and make sure the algorithm surfaces pieces with high scores in the way that it now surfaces stuff that’s popular.

Of course, those judges of authority will have to be humans; I’d suggest they’re pesky experts, senior journalists with long experience of assessing the quality of stories, their relative importance, etc. If Facebook can privilege the popular and drive purchasing decisions, I’m damn sure it can privilege authority and step up to its responsibilities to its audience as well as its responsibilities to its advertising customers. @katebevan
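A sketch of how such tiering might combine with the news-vs-comment and outbound-link signals. The tier assignments simply mirror the examples above, and the weights are invented:

```python
# Hypothetical authority tiers (3 = most authoritative), per the examples.
AUTHORITY_TIER = {
    "ft.com": 3, "wsj.com": 3, "nytimes.com": 3, "bbc.co.uk": 3,
    "theguardian.com": 2, "telegraph.co.uk": 2,
    "dailymail.co.uk": 1, "express.co.uk": 1,
}

def authority_score(domain, is_news=True, outbound_tiers=()):
    """Score a piece by source tier, news-vs-comment, and the average
    authority of the sources it links out to."""
    score = 2 * AUTHORITY_TIER.get(domain, 0)
    if is_news:
        score += 1  # news pieces rank above comment pieces
    if outbound_tiers:
        score += sum(outbound_tiers) / len(outbound_tiers)
    return score
```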



I doubt FB will get into the curating business, nor does it want to be accused of limiting free speech. The best solution will likely involve classifying Verified News, Non-Verified News, and Offensive News.

Offensive News should be discarded, and that would likely include things that are highly racist, sexist, bigoted, etc. Non-Verified News should continue with a “Non-Verified” label and encompass blogs, satire, etc. Verified News should include major news outlets and others with a historical reputation for accuracy.

How? There are a variety of ML algorithms incorporating NLP, page links, and cross-references of other search sites that can output the three classifications. Several startups use a similar algorithm of verified news sources and their impact for financial investing (Accern, for example).

We could set up a certification system for verified news outlets. Similar to Twitter, where there are a thousand Barack Obama accounts but only one ‘verified’ account. A certification requirement might include the following: +@IntugGB


Possible requirements

  • National outlet: Should have a paid employee in a minimum number of states.
  • International outlet: Paid employees in multiple locales.
  • Breadth of coverage: A solely focused political outlet should not be a certified outlet.
  • Minimum number of page views prior to certification

Time in existence and number of field reporters in each country/locale should be required

Verified sites

  • Verified sites. Rather than try to get into the quagmire of trying to identify all "fake" news, there could be a process by which publishers could apply to be marked in the news feed as "verified." (Think: Verified people on Twitter… but less arbitrary.)  

    Example criteria: all published articles are linked to a real person, sources for stories are specifically cited, fact-checkers, high-authority sites link to the site as a reputable source, etc. Basically, a combination of factors listed in this doc.

    Once you get accustomed to seeing stories in the News Feed marked with a badge identifying the source as verified, you’d automatically be a little skeptical of anything not marked verified. And FB gets to avoid getting directly involved in that impossible “policing fake news” quagmire. --Sonders (+Andy) Nice (Andrew) +@linmart

  • This may potentially be enough. If FB doesn’t want to go all the way down the path of ‘verified’ sites, it may be possible to simply build in ‘speed bumps’ to slow down stories from suspect sources until they are caught by FB’s other methods (i.e. user reporting). The issue seems to be that things can go viral too fast to be caught under the current model. --Cam (+Daniel Mintz)

  • Riffing on the speed bump idea that Cam wrote above: creating a very permissive whitelist for verified news and speed-bumping any “news” that isn’t on the whitelist seems like it would make a big dent with very little effort and next to no downsides. And when I say very permissive, I mean it. NYT would get through, but so would Daily Kos, Breitbart, The Blaze and Upworthy. The Macedonian sites and their ilk wouldn’t. It wouldn’t come close to solving the whole problem, but it would make a dent at very low cost.
  • This is essentially the model thetrustproject.org is using. Trust Indicators include author bio (ID); citations; labeling of news, analysis, opinion & sponsored content; original reporting; opportunities for the public to dispute; etc. @journethics

  • Thoughts/questions on a board to verify sources:

    1) Though it should be a regulatory body, it absolutely must be independent of the government.

    2) Standards shouldn't have to be money- or scope-based -- that limits the capacity of citizen journalists, smaller outlets, or independent news producers and freelancers. That's the beauty of the internet, but it's also the danger -- anyone can say anything. Why not use verification as a megaphone for those folks who provide real news and stories beyond the headlines and major outlets?

    3) On that note, what kind of standards can journalists agree on? Credibility of sources? Journalistic policies? Ethics rules?


    4) I don't know enough about the tech to say this definitively, but I'm not sure this is something you could accomplish with an algorithm at this point. I think this is a place for human editors/boards.

    For the sake of allowing this to be a tool that can verify all outlets, from small citizen bloggers to the NY Times, it could be a peer review system -- volunteer journalists review the outlets/sites that apply for verification. You could have a higher board of paid editors (funded by some non-profit source, or by Facebook/Twitter/Google) whose job it is to audit larger sources. But in general, is independence from tech firms (which are money-making entities to be covered themselves) something we should seek?
    5) We can do a lot of fact checking on our own without relying on a group of people to verify sources. Though we should have more people verifying sources and not publishing fake news, we should always be on the lookout ourselves. That means not taking something you read as is, but comparing it with other sources -- sometimes just a quick Google search to see what other sources have to say about it. We should also make sure we are getting our news from the right sources. It is easy to check the about links on websites and see whether what they say comes from a reliable source.

    6) Enforcement: when I search a topic on Google News, a range of articles comes up, not all of which are news -- some are fraudulent/fake, some are biased. Why is this being called news? Why should verified sources get a checkbox when unverified sources shouldn't appear next to them in the first place?

If anyone is interested in discussing the specifics of what I’m thinking, contact me --
alexleedsmatthews@gmail.com
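The whitelist-plus-speed-bump idea from the bullets above can be sketched in a few lines. The domains and the delay value are illustrative assumptions, not a proposed configuration:

```python
# Sketch of the permissive-whitelist / speed-bump approach: whitelisted
# domains distribute immediately; everything else is held briefly so that
# other mechanisms (e.g. user reporting) have time to catch viral fakes.
WHITELIST = {"nytimes.com", "dailykos.com", "breitbart.com", "upworthy.com"}
SPEED_BUMP_SECONDS = 3600  # assumed delay; a real system would tune this

def share_delay(domain: str) -> int:
    """Seconds to hold a story before it enters feed distribution."""
    return 0 if domain in WHITELIST else SPEED_BUMP_SECONDS
```

The point of the permissiveness is that the delay only needs to slow the long tail of unknown sources, not adjudicate the politics of known ones.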




A broad architecture for reputation systems is outlined below in “A Cognitive Immune System for Social Media,” based on “Augmenting the Wisdom of Crowds” - Richard Reisman @rreisman


Distribution - Social Graph

  • Do other outlets pick up the story? If not, it’s probably false. Could downrank domains on this basis over time -- it’s very unlikely that a site originates highly shareable, true stories that no one else picks up. --Eli +pvollebr +BJ
  • Google News had a “syndicated content” meta tag back in the day. News sites with original content used it to signal the GNews algorithm to treat that content differently: any site republishing similar content would add weight to the original piece, pushing it higher in the rankings. - kramer
  • “Original reporting” is an indicator we’re working on, but it’s tricky. Use language analysis to ID derivative text? - @journethics
  • Related: “fake news” topics, and the wording thereof, probably exhibit vastly different clustering behavior relative to “real news” -- I can’t necessarily anticipate how, but the data’s there to figure it out. So, in short: we could train a classifier on the types of features already extracted by automated summarization algorithms and other text analysis tools. --Andy

  • Assuming everyone is in an echo chamber, there might be some value in injecting some form of alternate viewpoint. Verified sources, e.g. NYT, would signal “quality,” but NYT opinion is slanted; sometimes that may be a good suggestion, sometimes not. --ac
  • You can’t solve a sociological problem with a purely technological solution. We need to invest in:
  • Is this a graph clustering problem? i.e. if you have a bunch of fake websites that primarily link to each other, you ought to be able to find this somehow. Spectral analysis of the graph’s adjacency matrix? -- N.


  • Okay, here’s a stupid idea from that paper: textual analysis. Take a webcrawler, look at the “promoted” box, and assign poor trust to any website that appears in the same textual link area of the page as “one free wrinkle trick” or anything to do with local doctors being furious. Chances are, anything in there (“Crooked Hillary is Done!”) is probably also not trustworthy. -- N.
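The link-graph clustering idea above can be approximated without full spectral analysis: sites in a fake-news ring mostly link to each other, so a candidate cluster with a very high internal-link fraction is suspicious. The domains and the 0.75 threshold below are made up for the example:

```python
# Toy version of the link-graph heuristic: measure what fraction of a
# cluster's outbound links stay inside the cluster. A dense, self-linking
# ring scores near 1.0; a normal site linking out broadly scores low.
def internal_link_fraction(links, cluster):
    """links: iterable of (src, dst) domain pairs; cluster: set of domains."""
    inside = outgoing = 0
    for src, dst in links:
        if src in cluster:
            if dst in cluster:
                inside += 1
            else:
                outgoing += 1
    total = inside + outgoing
    return inside / total if total else 0.0

# Hypothetical crawl data: three sites linking mostly to each other.
links = [
    ("fakeA.mk", "fakeB.mk"), ("fakeB.mk", "fakeA.mk"),
    ("fakeA.mk", "fakeC.mk"), ("fakeC.mk", "fakeB.mk"),
    ("nytimes.com", "bbc.co.uk"), ("fakeC.mk", "nytimes.com"),
]
ring = {"fakeA.mk", "fakeB.mk", "fakeC.mk"}
suspicious = internal_link_fraction(links, ring) > 0.75  # assumed threshold
```

A production system would first have to *find* candidate clusters (e.g. via community detection or the spectral methods mentioned above); this only scores a cluster once proposed.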



Fact-Checking

  • News organizations that have known fact checkers and issue corrections should have higher weight +pvollebr

  • Link users to the original, agreed-upon factual sources in real time so that they can do the fact checking themselves rather than rely on someone else, thus enhancing statistical literacy. This is what we are building at Factmata.
  • Pair questionable news with fact-checking sites (and invest in fact-checking sites) - zeynep +Eli +@ClimateFdbk

  • Micro-bounties for fake news? Maybe Facebook, Twitter, or Google News -- or some outside philanthropist group -- could set up a small fund to reward non-affiliated users who identify fake news stories (thus incentivizing the exposure of fake news rather than its creation -- and crowdsourcing that hunt). +Kyuubi10
  • Create a scoring system: use human editors, in conjunction with fact-checking organizations, to score sites for the news they post. “Pants on Fire” scores a 5, “Mostly False” scores a 4, and so on. Once a site reaches a predetermined number of points, it gets banned and removed from Facebook.

    Note: Facebook already has this system in place for individuals - if you violate the rules too often, you get banned. Do the same for pages.
  • Be careful with this. Banning systems are often gamed: they can be overloaded, ganged up on, or skewed by people running many multiple accounts. This must be monitored and cleared by humans, which means fact checking by independent sources.

  • Full Fact is the UK’s independent fact checking charity.
  • Fact checking has to be completely transparent and auditable, with anyone able to dispute any already-checked fact. --Kyuubi10
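The scoring-and-ban idea above can be sketched as a running tally per page. The point values and the 20-point threshold are assumptions for the example, not Facebook policy:

```python
# Minimal sketch of the fact-check scoring system proposed above:
# each ruling adds points, and a page crossing the threshold is banned.
RULING_POINTS = {"Pants on Fire": 5, "Mostly False": 4, "Half True": 2}
BAN_THRESHOLD = 20  # assumed; the doc says "a predetermined number of points"

class PageScore:
    def __init__(self):
        self.points = 0

    def record(self, ruling: str) -> bool:
        """Add points for a fact-check ruling; return True if the page is now banned."""
        self.points += RULING_POINTS.get(ruling, 0)
        return self.points >= BAN_THRESHOLD
```

Per the caution above, any such tally would need human review before an actual ban, since automated thresholds invite gaming.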

