Media ReDesign: The New Realities
Know who the key players are
Create alliances and partnerships
Keep a tally of solutions - and mess ups
Be prepared to unlearn everything you know
Robert Reich: Inequality Media
Dan Rather: On journalism & finding the truth in the news
Web Literacy for Student Fact-Checkers
Syllabus: Social Media Literacies
High-Level Group on Fake News and online disinformation NEW
Stony Brook’s Center for News Literacy
The Future of News in an Interconnected World
MisinfoCon: A Summit on Misinformation
Combating Fake News: An Agenda for Research and Action
The Future of News: Journalism in a Post-Truth Era
Dear President: What you need to know about race
Knight Foundation & Civic Hall Symposium on Tech, Politics, and the Media
Berkeley Institute for Data Science - UnFakingNews Working Group
Press Room: Research and articles related to this project
Classifying fake news, fake media & fake sources
Considerations → Principles → The Institution of Socio-Economic Values
Behavioral economics and other disciplines
Analysis of Headlines and Content
Distribution - Social Graph
Downrank, Suspension, Ban of accounts
Points, counterpoints and midpoints
Model the belief-in-true / belief-in-fake lifecycle
Verified Pages - Bundled news
Patterns and other thoughts
Taboola and Outbrain involvement
Verified Trail + Trust Rating
Bias Dynamics & Mental Models
Neuroscience of philosophical contentions
Pattern & Patch Vulnerabilities to Fake News
The problem with “Fake News”
[Update] “A Cognitive Immune System for Social Media” based on “Augmenting the Wisdom of Crowds”
Ideas in Spanish - Case Study: Mexico
Not just for Facebook to Solve
A Citizen Science (Crowd Work) Approach
The Old School Approach
Pay to Support Real Journalism
Suggestions from a Trump Supporter on the other side of the cultural divide
Journalism in an era of private realities
First Amendment / Censorship Issues
Trolling and Targeted Attacks
Transparency at user engagement
Delay revenue realisation for unverified news sources
Linked-Data, Ontologies and Verifiable Claims
Reading Corner - Resources
Joined-up Thinking - Groupthink
Manipulation and ‘Weaponization’ of Data
Click Farms - Targeted attacks
Political Tribalism - Partisanship
Journalism in the age of Trump
Resources list for startups in this space
Interested in investing/funding
Themes and keywords to look for
Note: Hi, I am Eli, I started this document. Add idea below after a bullet point, preferably with attribution. Add +[your name] (+Eli) if you like someone else’s note. Bold key phrases if you can. Feel free to crib with attribution.
A number of the ideas below have significant flaws. It’s not a simple problem to solve -- some of the things that would pull down false news would also pull down news in general. But we’re in brainstorm mode.
This document is maintained by @Media_ReDesign, with updates from an extraordinary community of collaborators spanning across many disciplines. Some topics are under the supervision of specific teams, as is the case of news updates, and the Event Calendar (partly updated, linked as a reference, for ideas). All the same, please feel free to contribute with your ideas, as we expand and continue on this journey.
“None of us is as smart as all of us” ― Kenneth H. Blanchard
Dozens of articles and studies are being published daily on the topic of ‘fake news’. The more we know about what is going on - all the different angles, implications, etc. - the better off we are.
Technologists, journalists, politicians, academics, think tanks, librarians, advocacy organizations and associations, regulatory agencies, corporations, cybersecurity experts, military, celebrities, regular folk... all have vested interest in this topic. Each can give a different perspective.
Throughout the document, you will see some symbols, simply pointers alongside @names, to serve as guides:
√ verified account √ key contact √ collaborator
See what has been done or published that could serve as a blueprint going forward. Mentioned in this work, for example, is a special manual set up by leading journalists from the BBC, Storyful, ABC, Digital First Media and other verification experts - four contacts there alone who might be interested in this project.
Related research:
Organizations and individuals important to the topic of fake news and its solutions
Work in progress - Contributions and suggestions welcome
Aside from this, what else has been implemented by Google, Facebook, Twitter and other organizations? How have people reacted? What are they suggesting? How is the media covering this? Have there been any critical turning points? The bots -- so in the news nowadays -- how are they being analyzed, dealt with? What has been the experience with them… abroad?
So many questions...
November 19, 2016
One can’t assume that there is criminal intent behind every story but, when up against state actors, click farms and armies of workers hired for specific ‘gigs’, it helps to know exactly how they operate. In any realm - ISIS, prostitution networks, illegal drugs, etc. - these operators are experts on these platforms.
Recommended
Future Crimes by Marc Goodman @FutureCrimes
The Kremlin Handbook - October 2016
Understanding Russian Influence in Central and Eastern Europe
Cybersecurity Summit Stanford
Munich Security Conference - Agenda
21 September 2016
Panel Discussion:
“Going Dark: Shedding light on terrorist and criminal use of the internet” [1:29:12]
Gregory Brower (Deputy General Counsel, FBI), Martin Hellman (Professor Emeritus of Electrical Engineering, Stanford University), Joëlle Jenny (Senior Advisor to the Secretary General, European External Action Service), Joseph P. McGee (Deputy Commander for Operations, United States Army Cyber Command), Peter Neumann (Director, International Centre for the Study of Radicalisation, King's College London), Frédérick Douzet (Professor, French Institute of Geopolitics, University of Paris 8; Chairwoman, Castex Chair in Cyber Strategy; mod.)
Related
170212 - Medium
The rise of the weaponized AI Propaganda machine
There’s a new automated propaganda machine driving global politics. How it works and what it will mean for the future of democracy.
Signal
Install it. Just because.
160622 - The Intercept
Battle of the secure messaging apps: How Signal beats WhatsApp
For years, other countries have dealt with issues of censorship, propaganda, etc. It is useful to understand what has happened, to see what elements of their experience we can learn from. Case studies, debates, government interventions, reasoning, legislation, everything helps.
Essential here - insights from individuals who have experienced it and understand the local language and ideology.
Note. This also means learning from the “natives”, the ones born with - you know - a chip in their brain.
Please join us on Twitter at @Media_ReDesign and on Facebook for the latest news updates.
A Slack team (group messaging) pertaining to a number of related projects is available for those who wish to connect or do further research on the topic. You can sign up here. For those not familiar, two introductory videos are available, one showing how it can be used in teams, the other describing the platform itself.
Twelve channels [update pending] have been created so far. Click on the CHANNEL heading to expand the different categories and click on any you want to join. Clicking on DIRECT MESSAGES allows you to contact all members who are currently on there; the full “Team Directory” is accessible through the menu.
Before starting, go through the document quickly to get a sense of the many areas of discussion that are developing. Preliminary topics and suggestions are being put in place but many other great ideas appear further on, unclassified for the time being while the team gets to them.
This is a massive endeavour but well worth it. Godspeed.
A superb summary dealing with fake news is being written up over at Wikipedia. With almost 185 sources to date [August 28, 2018], it gives an overview of much of what the issues are, starting with a detailed look at prominent sources, going on to impact by country and responses on the part of industry players and, finally, academic analysis.
As a starting point and perhaps a guideline to better structure the document going forward, it is highly recommended. - @linmart [26 DEC 16]
After years of collaboration, Jacob Kornbluth @JacobKornbluth worked with Robert Reich @RBReich to create the feature film Inequality for All. The film was released into 270 theaters in 2013 and won the U.S. Documentary Special Jury Award for Achievement in Filmmaking at the Sundance Film Festival. Building off this momentum, Kornbluth and Reich founded Inequality Media in 2014 to continue the conversation about inequality with viewers.
Inequality for All: Website - FB /InequalityForAll - @InequalityFilm - Trailer
Inequality Media: Website - FB /InequalityMedia - @InequalityMedia
Robert Reich: LinkedIn - FB /RBReich FB Videos - @RBReich
Kickstarter Campaign
How does change happen? We go on a trip with Robert Reich outside the “bubble” to reach folks in the heartland of America to find out.
3,790 backers pledged $298,436 to help bring this project to life.
Saving Capitalism: For the Many, Not for the Few
#SavingCapitalism @SavingCapitalism
“Perhaps no one is better acquainted with the intersection of economics and politics than Robert B. Reich, and now he reveals how power and influence have created a new American oligarchy, a shrinking middle class, and the greatest income inequality and wealth disparity in eighty years. He makes clear how centrally problematic our veneration of the free market is, and how it has masked the power of moneyed interests to tilt the market to their benefit.
… Passionate yet practical, sweeping yet exactingly argued, Saving Capitalism is a revelatory indictment of our economic status quo and an empowering call to civic action.”
As featured in:
160120 - Inc
5 Books that billionaires don't want you to read
Learn to ask the right questions & tell captivating stories. Practical advice for journalists & avid news consumers.
by Mike Caulfield @holden
Work in progress but already excellent. Recommended by @DanGillmor
Instructor: Howard Rheingold - Stanford Winter Quarter 2013
Andrew Wilson, Virtual Politics: Faking Democracy in a Post-Soviet World (Yale University Press, 2005)
This seminal work by one of the world’s leading scholars in the field of “political technology” is a must-read for anyone interested in how the world of Russian propaganda and the political technology industry works and how it impacts geopolitics. It has received wide critical acclaim and offers unparalleled insights. Many of the names seen in connection with both Trump’s business dealings and the Russian propaganda apparatus appear in Wilson’s work.
Event Calendar: Trust, verification, & beyond
Full listing of events curated by the @MisinfoCon community
There are a lot of conversations happening right now about misinformation, disinformation, rumours, and so-called "fake news."
The event listing here is an attempt to catalogue when and where those conversations are happening, and to provide links to follow-up material from those conversations. You can help out by filling in the blanks: What's missing?
Latest event update: May 29, 2018
European Commission - Digital Single Market
Colloquium on Fake News and Disinformation Online
27 February 2018
2nd Multistakeholder Meeting on Fake News
Webcast RECORDED
A national educational program that mobilizes seasoned journalists to help students sort fact from fiction in the digital age.
Hosted on the online learning platform Coursera, the course will help students develop the critical thinking skills needed to judge the reliability of information no matter where they find it — on social media, the internet, TV, radio and newspapers.
Each week will tackle a challenge unique to the digital era:
Week 1:
The power of information is now in the hands of consumers
Week 2:
What makes journalism different from other types of information
Week 3:
Where can we find trustworthy information
Week 4:
How to tell what’s fair and what’s biased
Week 5:
How to apply news literacy concepts in real life
Week 6:
Meeting the challenges of digital citizenship
The course is free, but people can opt to pay $49 to do the readings and quizzes (which are otherwise optional) and, if they pass muster, end up with a certificate.
The Center for Contemporary Critical Thought - Digital Initiative
April 13-14, 2017
Cambridge Analytica: Tracing Personal Data
(from ethical lapses to its use in electoral campaigns)
Thursday, April 13, 2017
11:00am
East Gallery, Maison Francaise
Columbia University
Speaker: Paul-Olivier Dehaye @podehaye with Tamsin Shaw
Respondent: Cathy O'Neil @mathbabedotorg
Moderated by: Professor Michael Harris
Find out more
International Fact Checking Day
April 2nd, 2017
International Fact-Checking Day will be held on April 2, 2017, with the cooperation of dozens of fact-checking organizations around the world. Organized by the International Fact-Checking Network, it will be hosted digitally on www.factcheckingday.com. The main components of the initiative will be:
If you are interested in finding out more/participating, reach out to factchecknet@poynter.org
01 Mar 2017
12:30 - 15:00
European Parliament, Room P5B00
Independent journalism is under pressure as a result of financial constraints. Local media is hardly surviving and free online content is sprawling. On social media platforms that are built for maximum profit, sensational stories easily go viral, even if they are not true. Propaganda is at an all-time high and personalised newsfeeds result in filter bubbles, which has a direct impact on the state of democracy. Just some of the issues that will be explored in this seminar, as we explore how journalists and companies see their position and the role of social media and technology.
Feb 24 - 27, 2017
Cambridge, MA
A summit to seek solutions - both social and technological - to the issue of misinformation. Hosted by The First Draft Coalition @firstdraftnews and the Nieman Foundation for Journalism.
Combating Fake News: An Agenda for Research and Action
February 17, 2017 - 9:00 am - 5:00 pm
Harvard Law School
Wasserstein Hall 1585
Massachusetts Ave, Cambridge, MA 02138
Full programme. Follow #FakeNewsSci on Twitter.
Write up:
170217 - Medium
Countering Fake News
February 13 - 14, 2017
What do informed and engaged communities look like today?
Find videos of the discussion here or access comments Twitter via #infoneeds
Tuesday, Jan 31, 2017
4:00 - 6:00 pm EST
Sanders Theatre, Harvard University
Co-sponsored by the Office of the President, the Nieman Foundation for Journalism, and the Shorenstein Center on Media, Politics, and Public Policy
Speakers include: Gerard Baker, editor-in-chief of The Wall Street Journal; Lydia Polgreen, editor-in-chief of The Huffington Post; and David Leonhardt, an op-ed columnist at The New York Times
Video coverage of the event is available.
170201 - NiemanLab
The boundaries of journalism — and who gets to make it, consume it, and criticize it — are expanding
Reporters and editors from prominent news organizations waded through the challenges (new and old) of reporting in the current political climate during a Harvard University event on Tuesday night.
Jan 27, 2017 - 2:30 pm – 4 pm
Newark Public Library, Newark, NJ.
Community conversation hosted by Free Press News Voices: New Jersey
Via Craig Aaron @notaaroncraig, President and CEO of Free Press.
Jan 18, 2017 - 8:30 am - 6:00 pm
New York Public Library
5th Ave at 42nd St, Salomon Room
Meeting Monday, January 9, 5-7pm -- 190 Doe Library
A group of computer scientists, librarians, and social scientists supporting an ecosystem of solutions to the problem of low quality information in media. For more information, contact nickbadams@berkeley.edu
170212 - Medium
The rise of the weaponized AI Propaganda machine
There’s a new automated propaganda machine driving global politics. How it works and what it will mean for the future of democracy.
170127 - IFLA
Alternative Facts and Fake News – Verifiability in the Information Society
161228 - MediaShift
How to fight fake news and misinformation? Research helps point the way
Is social media disconnecting us from the big picture?
By Jenna Wortham @jennydeluxe √
161118 Nieman Lab
Obama: New media has created a world where “everything is true and nothing is true”
By Joseph Lichterman @ylichterman √
161118 - Medium
A call for cooperation against fake news
by @JeffJarvis √
@BuzzMachine blogger and j-school prof; author of Public Parts, What Would Google Do?
161116 - CNET
Maybe Facebook, Google just need to stop calling fake news 'news'
by Connie Guglielmo @techledes √
Commentary: The internet has a problem with fake news. Here's an easy fix.
Knight Foundation - Civic Hall Symposium on Tech, Politics and Media
Agenda and Speakers
New York Public Library - January 18, 2017
2017
Ethical Journalism Network: Ethics in the News [PDF]
EJN Report on the challenges for journalism in the post-truth era
161219 - First Draft News
Creating a Trust Toolkit for journalism
Over the last decade newsrooms have spent a lot of time building their digital toolbox. But today we need a new toolbox for building trust
170114 - Huffington Post
Why do people believe in fake news?
160427 - Thrive Global
12 Ways to break your filter bubble
161211 - NPR
A finder's guide to facts
Behind the fake news crisis lies what's perhaps a larger problem: Many Americans doubt what governments or authorities tell them, and also dismiss real news from traditional sources. But we've got tips to sharpen our skepticism.
Web Literacy for Student Fact-Checkers by Mike Caulfield @holden
Work in progress but already excellent. Recommended by @DanGillmor
161209 - The Guardian
Opinion: Stop worrying about fake news. What comes next will be much worse
By Jonathan Albright @d1gi, professor at Elon University in North Carolina, expert in data journalism
In the not too distant future, technology giants will decide what news sources we are allowed to consult, and alternative voices will be silenced
161128 - Fortune
What a map of the fake-news ecosystem says about the problem
By Mathew Ingram @mathewi, Senior Writer at Fortune
Jonathan Albright’s work arguably provides a scientifically-based overview of the supply chain underneath that distribution system. That could help determine who the largest players are and what their purpose is.
161128 - Digiday
‘The underbelly of the internet’: How content ad networks fund fake news
Forces work in favor of sketchy sites. As ad buying has become more automated, with targeting based on audience over site environment, ads can end up in places the advertiser didn’t intend, even if they put safeguards in place.
161125 - BPS Research Digest
Why are some of us better at handling contradictory information than others?
'Alternative Facts': how do you cover powerful people who lie?
A collaborative initiative headed by Alan Rusbridger, ex-editor of The Guardian, Rasmus Kleis Nielsen @rasmus_kleis @arusbridger & Heidi T. Skjeseth @heidits. View only.
170216 - Politico
How a Politico reporter helped bring down Trump’s Labor Secretary pick
"This was the most challenging story I’ve ever done. But it taught me that with dedication and persistence, and trying every avenue no matter how unlikely, stories that seem impossible can be found in the strangest of ways." - Marianne LeVine
Reuters
Covering Trump the Reuters Way
Reuters Editor-in-Chief Steve Adler
170115 - Washington Post
A hellscape of lies and distorted reality awaits journalists covering President Trump
Journalists are in for the fight of their lives. They will need to work together, be prepared for legal persecution, toughen up for punishing attacks and figure out new ways to uncover and present the truth. Even so — if the past really is prologue — that may not be enough.
Dec 2016 - Nieman Lab
Feeling blue in a red state
I hope the left-leaning elements of journalism (of which I would be a card-carrying member if we actually printed cards) take a minute for reflection before moving onto blaming only fake news and Russian hacking for the rise of Trump.
161111 - Medium
What’s missing from the Trump Election equation? Let’s start with military-grade PsyOps
Too many post-election Trump think pieces are trying to look through the “Facebook filter” peephole, instead of the other way around. So, let’s turn the filter inside out and see what falls out.
161109 - NYMag
Donald Trump won because of Facebook
Social media overturned the political order, and this is only the beginning.
170113 - The Guardian
UK media chiefs called in by minister for talks on fake news
Matt Hancock, the minister of state for digital and culture policy, has asked UK newspaper industry representatives to join round-table discussions on the issue of fake news.
170107 - The Guardian
German police quash Breitbart story of mob setting fire to Dortmund church
170105 - Taylor Francis Online
Russia’s strategy for influence through public diplomacy and active measures: the Swedish case
Via Patrick Tucker @DefTechPat Tech editor at @DefenseOne
161224 - The Times of Israel
Pakistan makes nuclear threat to Israel, in response to fake news
161215 - The Guardian
Opinion: Truth is a lost game in Turkey. Don’t let the same thing happen to you
We in Turkey found, as you in Europe and the US are now finding, that the new truth-building process does not require facts. But we learned it too late
161223 - The Wire
The risks of India ignoring the global fake news debate
A tectonic shift in the powers of the internet might be underway as you read this.
161123 - Naked Security
Fake news still rattling cages, from Facebook to Google to China
Chinese political and business leaders speaking at the World Internet Conference last week used the spread of fake news, along with activists’ ability to organize online, as signs that cyberspace has become treacherous and needs to be controlled.
161220 - NYT
Russian hackers stole millions a day with bots and fake sites
A criminal ring is diverting as much as $5 million in advertising revenue a day in a scheme to show video ads to phantom internet users.
160418 - Politico
Putin's war of smoke and mirrors
We are sleepwalking through the end of our era of peace. It is time to wake up.
'Alternative Facts': how do you cover powerful people who lie?
A collaborative project headed by Alan Rusbridger, ex-editor of The Guardian, Rasmus Kleis Nielsen @rasmus_kleis @arusbridger & Heidi T. Skjeseth @heidits
170207 - Bill Moyers
Your guide to the sprawling new Anti-Trump Resistance Movement
170203 - Mashable
Google Docs: A modern tool of powerful resistance in Trump's America
How fake news sparked a political Google Doc movement
170108 - The Guardian
Eli Pariser: activist whose filter bubble warnings presaged Trump and Brexit
“The more you look at it, the more complicated it gets,” he says, when asked whether he thinks Facebook’s plan will solve the problem. “It’s a whole set of problems; things that are deliberately false designed for political ends, things that are very slanted and misleading but not false; memes that are neither false nor true per se, but create a negative or incorrect impression. A lot of content has no factual content you could check. It’s opinion presented as fact.”
Fake news has exposed a deeper problem – what Pariser calls a “crisis of authority”.
“For better and for worse, authority and the ability to publish or broadcast went hand in hand. Now we are moving into this world where in a way every Facebook link looks like every other Facebook link and every Twitter link looks like every other Twitter link, and the new platforms have not figured out what their theory of authority is.”
161215 - Washington Post
Fake news is sickening. But don’t make the cure worse than the disease.
161215 - USA Today
Fake-news fighters enter breach left by Facebook, Google
A cottage industry of fake-news fighters springs up as big platforms move slowly to roll out fixes.
161206 - Digital Trends
Forget Facebook and Google, burst your own filter bubble
161130 - First Draft News
Timeline: Key moments in the fake news debate
161129 - The Guardian
How to solve Facebook's fake news problem: experts pitch their ideas
161127 - Forbes
Eli Pariser's Crowdsourced Brain Trust is tackling fake news
Upworthy co-founder and hundreds of collaborators gather the big answers
161125 Wired
Hive Mind Assemble
by Matt Burgess @mattburgess1 √
Upworthy co-founder Eli Pariser is leading a group of volunteers to try to find a way to determine whether news online is real or not
161119 - Quartz
Facebook’s moves to stamp out “fake news” will solve only a small part of the problem
161118 - CNET
The internet is crowdsourcing ways to drain the fake news swamp
Pundits and even President Obama are bemoaning fake news stories that appeared online leading up to the election. A solution might be found in an open Google Doc.
161116 - The Verge
The author of The Filter Bubble on how fake news is eroding trust in journalism
‘Grappling with what it means to look at the world through these lenses is really important to us as a society’
161115 - Digiday [Podcast 23:12]
Nieman’s Joshua Benton: Facebook has ‘weaponized’ the filter bubble
161109 - Nieman Lab
The forces that drove this election’s media failure are likely to get worse
By Joshua Benton @jbenton
Segregated social universes, an industry moving from red states to the coasts, and mass media’s revenue decline: The disconnect between two realities shows no sign of abating.
Eli is an early online organizer and the author of The Filter Bubble, published by Penguin Press in May 2011.
Shortly after the September 11th terror attacks, Eli created a website calling for a multilateral approach to fighting terrorism. In the following weeks, over half a million people from 192 countries signed on, and Eli rather unexpectedly became an online organizer.
The website merged with MoveOn.org in November of 2001, and Eli - then 20 years old - joined the group to direct its foreign policy campaigns. He led what the New York Times Magazine termed the “mainstream arm of the peace movement” - tripling MoveOn’s member base in the process, demonstrating for the first time that large numbers of small donations could be mobilized through online engagement, and developing many of the practices that are now standard in the field of online organizing.
In 2004, Eli co-created the Bush in 30 Seconds online ad contest, the first of its kind, and became Executive Director of MoveOn. Under his leadership, MoveOn.org Political Action has grown to five million members and raised over $120 million from millions of small donors to support advocacy campaigns and political candidates, helping Democrats reclaim the House and Senate in 2006.
Eli focused MoveOn on online-to-offline organizing, developing phone-banking tools and precinct programs in 2004 and 2006 that laid the groundwork for Barack Obama’s remarkable campaign. MoveOn was one of the first major progressive organizations to endorse Obama for President in the presidential primary.
In 2008, Eli transitioned the Executive Director role at MoveOn to Justin Ruben and became President of MoveOn’s board.
Eli grew up in Lincolnville, Maine, and graduated summa cum laude in 2000 with a B.A. in Law, Politics, and Society from Bard College at Simon's Rock. He is currently serving as the CEO of Upworthy and lives in Brooklyn, NY.
Contact: @elipariser
Combating Fake News: An Agenda for Research and Action
February 17, 2017 - 9:00 am - 5:00 pm
Full programme - #FakeNewsSci on Twitter
170214 - Forbes
Political issues take center stage at SXSW
170205 - The College Reporter
Workshop provides students with knowledge pertaining to fake news
170207 - Backchannel
Politics have turned Facebook into a steaming cauldron of hate
170201 - Triple Pundit
Upworthy and GOOD announce merger, join forces to become the leader in Social Good Media
170127 - Observer
These books explain the media nightmare we are supposedly living in
170118 - OpenDemocracy.net
The internet can spread hate, but it can also help to tackle it
161216 - NPR TED Radio Hour
How can we look past (or see beyond) our digital filters?
Is social media disconnecting us from the big picture?
By Jenna Wortham @jennydeluxe √
161112 - Medium
How we broke democracy
Our technology has changed this election, and is now undermining our ability to empathize with each other
1108 Ted Talks
Eli Pariser: Beware online "Filter Bubbles"
110525 - Huffington Post
Facebook, Google giving us information junk food, Eli Pariser warns
0305 - Mother Jones
Virtual Peacenik
030309 - NYT Magazine
Smart-mobbing the war
At this time, there does not appear to be a definition of “fake news” in this document.
I’d like to see the term, ‘Fake News’ retired. If it’s fake, it’s not news. It’s lies, inventions, falsehoods, fantasy or propaganda. A humorist would call it, “made up shit”.
Possible areas of research and working definitions:
- fake / true
- fake vs. fraudulent
- factual / opinion
- factually true / factually untrue
- logically sound / flawed
- original source / meta-commentary on the source
- user generated content
- personal information
- news
- paid content
- commercial clickbait
- gaming the system purely for profit
Motive:
- prank / joke
- to drive followers/likes
- create panic
- brainwashing / programming / deprogramming
- state-sponsored (external / internal)
- propaganda
- pushing agenda
- money
- local ideology
- local norms and legislation - restrictions and censorship (i.e. Thailand, Singapore, China)
- fake accounts
- fake reviews
- fake followers
- click farms
- patterns
- satire
- bias
- misinformation
- disinformation
- libel
- organic / non organic
- viral
Further reference:
161128 - The New York Times
News outlets rethink usage of the term ‘alt-right’
via Ned Resnikoff @resnikoff √ Senior Editor, @thinkprogress √
161122 - Nieman Lab
“No one ever corrected themselves on the basis of what we wrote”: A look at European fact-checking sites
161122 - Medium
Fake news is not the only problem
By @gilgul Chief Data Scientist @betaworks √√
Thread by @thomasoduffy
There are different kinds of “fake”, all of which need to be managed or mitigated. Cumulatively, these fake signals find their way into information in all kinds of places and inform people. We need to build a lexicon to classify and conceptualise these aspects so we can think about them clearly:
To some extent, it is worth decoding the strategies used by lobbyists, spin doctors, marketing agencies and PR companies - and considering what measures could limit their ability to syndicate information of “warped accuracy” and counter intentionally fake news.
@Meighread Dandeneau, Comm Law and Ethics, 19 October 2017
Straight out of a modern dystopian novel comes the Orwellian headline “Obama: New media has created a world where ‘everything is true and nothing is true’” - or so we would think. Surprisingly, the very real article is less than a year old, and based entirely in nonfiction. In November of 2016, The Nieman Lab published the report after a recent Trump tweet claimed to have saved a Kentucky Ford automotive business from closing. The information was later proven to be false, but the damage had already been done. The post had been seen and shared by millions of active followers. Obama addressed the event in a press conference in Germany, saying, "If we are not serious about facts, and what's true and what's not … then we have problems." The First Amendment protects our right to speak freely, but with fake news becoming more predominant in politics today, we have to ask ourselves - how far-reaching is the law?
Ethically, most people would say it is wrong to mislead or intentionally misinform another person. It’s dishonest, and from a young age, society instills in us the virtue not to lie. When slander is committed, the government has systems of handling it. When fraud is committed, punishment is a court case and conviction away from being enacted. But fake news has no such precedent. The rise of social media has aided and abetted the spread of such stories, and many companies profit from peddling the gossip.
To continue using the ‘Trump saving Ford automotive factory’ example, measurable impact followed when several media organizations picked up the story. Included in the mix were The New York Times, USA Today, and the Detroit Free Press, who all spread Trump’s claim unchecked. Further, these companies were unofficially endorsed by public figures who shared them, giving the story enough traction to appear on Google News. James Poniewozik, who condemned news organizations during the event, later tweeted, “Pushing back on fake news—some spread by the president—is going to become a bigger part of the media’s job.”
But what about alternative media companies, such as Facebook? Mark Zuckerberg deflects Facebook’s role in the deliberate spread of fake news, casting the company as an “aggregator”. Even if Facebook tried to filter fake news, the process would be nearly impossible, Zuckerberg implies. He states, “While some hoaxes can be completely debunked, a greater amount of content, including from mainstream sources, often gets the basic idea right but some details wrong or omitted. An even greater volume of stories express an opinion that many will disagree with and flag as incorrect even when factual”. This has made it impractical for human moderators to examine the news. “There is always a risk that the accusation of being ‘fake’ will be abused to limit free speech. Not all ‘fake’ news is ‘real’, and in any case, one person’s fake news is another person’s opinion,” says blogger Karolina.
Many look at fake news today and consider its circulation just a part of modern media literacy. Others, such as Jonathan Albright, data scientist at Elon University, Samuel Woolley, Head of Research at Oxford University’s Computational Propaganda Project, and Martin Moore, Director of the Centre for the Study of Media, Communication and Power at King's College, believe fake news has “a much bigger and darker” purpose. They agree, “By leveraging automated emotional manipulation...a company called Cambridge Analytica has activated an invisible machine that preys on the personalities of individual voters to create large shifts in public opinion”. This machine is now equipped to be used as an “impenetrable voter manipulation machine” and to change elections as we know them forever. Hints of this have already been seen in the most recent election, which is what sparked their research. They call it “The Weaponized AI Propaganda Machine” and it “has become the new prerequisite for political success in a world of polarization, isolation, trolls, and dark posts”.
One question remains: what can we do? The obvious remark is to hold liars accountable for their actions in cases where the fault is black and white. But in cases like Trump’s, some could argue that his tweet was hyperbole. The fault can then be found in the media companies who shared the post as if it were raw news. As of now, the speech is covered legally. However, as we move toward a future where companies like Cambridge Analytica manipulate the public as a weapon, examination into our changing world and its technology is paramount.
The problem with “Fake News”
How many times over this past year have you heard the term “Fake news” tossed around? Most likely an uncountable number of times and over a vast variety of subjects. Fake news has become so prevalent in our society that many of us don’t go a single day without a piece of fake news sneaking its way onto our computers, our smartphones, or our televisions. This creates both ethical and legal issues that ripple through our society and ultimately take away from the medium of journalism as a whole.
The term “fake news” refers to the act of knowingly writing or distributing falsities that assert themselves as fact. The easiest, and thus most popular, way to spread fake news is by sharing these misleading articles on Facebook, Twitter, or a plethora of other social media sites. These articles spread quickly, and because many people have no reason to believe that the information may be false, they take every line as gospel. So why does this matter?
According to usnews.com, over the past year team LEWIS (which defines fake news as an “ambiguous term for a type of journalism which fabricates materials to perpetuate a preferred narrative suitable for political, financial, or attention-seeking gain”) conducted a study which sought to measure the overall effect that fake news has had on people's views of American news and brands. The study found that only 42% of millennials checked the credibility of publications, and this sank to 25% for baby boomers. This means that there is a chance that 58% of millennials, and 75% of baby boomers, may be fed false information on an almost daily basis and accept it as truth. This is a problem for a number of reasons.
Usnews.com states that one of the main reasons fake news is spread is to promote political ideologies, and this is where all of this seriously matters. Let’s say you get most of your news about a politician from Facebook, and let’s say that at least 60% of what you’re reading is either entirely false or misleading. This means that you could potentially be voting for a candidate who totally appeals to you, when in reality they might be the exact opposite of what you’re looking for, and you were simply pushed and misled by the media. Now imagine that this didn’t only happen to you, but was the case for half, or even a quarter, of the people who also voted for this candidate. This is a direct consequence of fake news, and it is very scary.
Now, it’s not that we don’t have the tools to fact-check these articles ourselves; in fact, it’s really not very difficult to determine the credibility of an article if you know what to look for. The major problem here is that many people don’t have any reason to believe that they’re being misled, especially the older generations. People tend to read an article, or even just its title, and have it stick with them, and at some point they’ll share it in conversation with their friends, because a story so gripping is one they wouldn’t want to be fake in the first place, which deters them from even having the thought to check.
The ethical problems that arise because of fake news are significant and have a real-life impact. The people who push these articles have very little to answer for, as it is almost impossible to police all media in an attempt to fight this sort of thing. The best defence against fake media is to check everything before preaching it as truth. Scrutinize your sources, and know that there is a chance that anything you are reading online is false. That’s all we’ve got until AI can start determining whether a post is considered fake news.
Fake News Problems
Fake news is everywhere. Many people are unaware of the amount of fake news that is out there in the media - television, radio and the internet, including social media like Facebook or Twitter. There are many problems related to fake news. For people who use these media every day, two stand out: very few people know how to figure out if something is fake, and the practice itself is unethical. Once these problems are brought to light, we, as a society, can try to make the issue of fake news known among those who use the media.
When it comes to detecting whether something is classified as fake news, not many people know how to do it. The article Ten Questions for Fake News Detection shows 10 different ways people can find out if something is considered fake news. It asks questions like “Does it use excessive punctuation(!!) or ALL CAPS for emphasis?” There are red flags for answers that go a particular way; the more red flags there are, the worse it looks. There are other things media users can look at as well - some obvious, like excessive punctuation, and some not as obvious, such as whether “the ‘contact us’ section include[s] an email address that matches the domain (not a Gmail or Yahoo email address).” The Ten Questions for Fake News Detection article is just one of many articles people can access if they want to figure out whether something is really fake news. Legally, these websites or articles do not have to disclose whether they are in fact fake news. The First Amendment protects these pieces from being destroyed or put away: it grants freedom of speech, and those making the articles online, or talking about them on television or radio, are expressing their freedom of speech. Therefore, it won’t stop anyone from getting that material out in the media and in the public’s eye. There are things people can look into that one wouldn’t have even thought of unless they looked further into the matter.
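The checklist approach described above also lends itself to simple automation. Below is a minimal sketch in Python of how a few of the red flags mentioned - excessive punctuation, ALL-CAPS emphasis, and a contact address that does not match the site's domain - could be scored; the thresholds are illustrative assumptions, not values taken from the Ten Questions article.

import re

def red_flag_score(headline: str, site_domain: str, contact_email: str) -> int:
    """Count simple red flags from the checklist; higher means more suspect.

    The thresholds here are illustrative assumptions.
    """
    flags = 0
    # Red flag: excessive punctuation, e.g. "SHOCKING!!!"
    if re.search(r"[!?]{2,}", headline):
        flags += 1
    # Red flag: ALL CAPS used for emphasis (short acronyms ignored)
    if any(w.isupper() and len(w) > 3 for w in headline.split()):
        flags += 1
    # Red flag: "contact us" email does not match the site's domain
    # (a Gmail or Yahoo address on a supposed news site is suspicious)
    if contact_email.rsplit("@", 1)[-1].lower() != site_domain.lower():
        flags += 1
    return flags

print(red_flag_score("BREAKING!!! You WON'T believe this", "realnews.example", "tips@gmail.com"))  # 3

As with the article's questions, the point is not a verdict but a prompt: the more flags, the more reason to check a story before sharing it.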
Another main issue with fake news is the ethos aspect of it - ethos meaning the character and credibility of the “news” and the source it comes from. There are many sources that are considered fake news, and a large number of people would agree that it is unethical to publish something that is fake. It throws people off when they’re looking through the media and seeing these things. Yes, it does make viewers question the piece and whether or not it is actually true, but depending on the topic, it can affect them in a large way, mostly negative. It can cause anger and rage if it says one thing and the viewer takes that as the truth. This being said, a lot of fake news can be considered unethical because of the pain and frustration it puts people through. People need to take into consideration that maybe it is fake news, and if it is, they need to ignore it, even though that may be easier said than done.
It’s crazy to think how popular fake news has become and how much society indulges in it every day. The many problems that relate to fake news become apparent once one thinks about how it is affecting people’s lives. Fake news is an issue people need to be aware of: they need to be able to determine whether something is fake and why, and to recognize that it is unethical. With the amount of media being used in today’s society, people should have a better understanding of fake news, especially when it’s all over social media, which is extremely important to thousands of people.
by: Timothy Holborn
A Perspective by Eben Moglen[1] from re:publica 2012
The problem of ‘fake news’ may be solved in many ways. One way involves mass censorship of articles that do not come from major sources, but may not result in news that is any more ‘true’. Another way may be to shift the way we use the web, but that may not help us be more connected. Machine-readable documents are changing our world.
It is important that we distill ‘human values’ in assembly with ‘means for commerce’. We are leaving the former world of broadcast services, where the considerations of propaganda were far better understood, for more modern services that serve not millions but billions of humans across the planet, and the principles we forged as communities seem to need to be re-established. We have the precedents of Human Rights[2], but do not know how to apply them in a world where the ‘choice of law’[3] for the websites we use to communicate may deem us to be alien[4]. Traditionally these problems were solved via the application of the Liberal Arts[5]; however, through the advent of the web, the more modern context becomes that of Web Science[6], incorporating the role of ‘philosophical engineering’[7] (and therein the considerations of liberal arts via computer scientists).
So what are our principles, what are our shared values? And how do we build a ‘web we want’ that makes our world a better place, both now and into the future?
It seems many throughout the world have suffered mental health issues[8] as a result of the recent election result in the USA - a moment in time when seemingly billions of people simultaneously highlighted a perceived issue: a populace exercising its democratic rights produced an outcome that came as a significant surprise, with global repercussions. So perhaps the baseline question becomes: how will our web better provide the means for us (humans) to form a more accurate understanding of world events and the circumstances felt by humans, via our ‘world wide web’?
Related:
Management Science
The structural virality of online diffusion - Vol. 62, No. 1, January 2016, pp. 180–196
>> I elaborated on this a bit here: - @msukmanowsky
161110 - Medium
Using quality to trump misinformation online
Using page and domain authority seems like a no-brainer as a start. I advocated for adding this information to something like Common Crawl
>> The problem with this approach is that fake news is not only generated by web domains but via UGC sites such as YouTube, Facebook and Twitter. - yonas
Recommended
161120 - NPR
Post-election, overwhelmed Facebook users unfriend, cut back
Facebook’s first attempt/plan of action to fight fake news.
Facebook message that now shows for the link provided by Snopes to the original source of the hoax.
As reported in this story:
161123 - Medium
How I detect fake news
by @timoreilly
A few important facts to consider first:
The smallest abuse here is that at the individual level a person can no longer trust their own friends to deliver real news. Then they have to go beyond Facebook to try to figure out what is real or fake.
What Facebook needs to do better is realize that trust among your family and actual friends (that you know outside of Facebook) is an invaluable tool for them and for Facebook.
Facebook needs to:
Important fact: You trust your actual family and friends, and Facebook needs to acknowledge this, respect it, and tap into it, to help make Facebook a more legitimate site for news sharing.
(Related comments in section on Surprising Validators -- @rreisman)
A possible method of implementing reputation systems is to make the reputation calculation dynamic and system-based, mapping the reputation scores of sources onto a reverse sigmoid curve. The source scores would then be used to determine the visibility levels of their articles on social media and search engines. This ensures that while credibility takes time to build, it can be lost very easily.
where:
Ss → Source Score
Sa → Cumulative score of the source’s articles
However, this system needs to be dynamic and allow even newer publications a fair chance to get noticed. This needs to be done by monitoring the reputation of both the sources and the channels the articles pass through.
The system is fleshed out in a bit more detail in the following open document, if anyone is interested in taking a look.
Concept System for Improved Propagation of Reliable Information via Source and Channel Reliability Identification [PDF]
Anyone interested in collaborating on this can contact me at sid DOT sreekumar AT gmail
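For illustration, here is a minimal sketch of the reverse-sigmoid mapping described above, assuming a logistic curve; the midpoint and steepness constants are invented for the example and are not part of the linked proposal.

import math

def source_score(sa: float, midpoint: float = 50.0, steepness: float = 0.15) -> float:
    """Map a source's cumulative article score Sa to a source score Ss in (0, 1).

    midpoint and steepness are illustrative assumptions, not proposal values.
    """
    return 1.0 / (1.0 + math.exp(-steepness * (sa - midpoint)))

def article_visibility(base_rank: float, ss: float) -> float:
    """Scale an article's baseline rank by its source's score."""
    return base_rank * ss

print(round(source_score(10), 3))  # ~0.002: a new source starts nearly invisible
print(round(source_score(60), 3))  # ~0.818: an established source

Because a new source sits on the flat left tail of the curve, its early articles earn little visibility, while a debunked story that cuts Sa drops an established source back down the steep middle of the curve quickly - the 'built slowly, lost easily' property described above.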
Build an algorithm that privileges authority over popularity. Create a ranking of authoritative sources, and score the link appropriately. I’m a Brit, so I’ll use British outlets as an example: privilege the FT with, say, The Times, along with the WSJ, the WashPo, the NYT, the BBC, ITN, Buzzfeed, Sky News, ahead of more overtly partisan outlets such as the Guardian and the Telegraph, which may count as quality publications but which are more inclined to post clickbaity, partisan bullshit. Privilege all of those ahead of the Mail, the Express, the Sun, the Mirror.
Also privilege news pieces above comment pieces; privilege authoritative and respected commentators above overtly partisan commentators. Privilege pieces with good outbound links - to, say, a report that’s being used as a source rather than a link to a partisan piece elsewhere.
Privilege pieces from respected news outlets above rants on Medium or individual blogs. Privilege blogs with authoritative followers and commenters above low-grade ranting or aggregated like farms. Use the algorithm to give a piece a clearly visible authority score and make sure the algorithm surfaces pieces with high scores in the way that it now surfaces stuff that’s popular.
Of course, those judges of authority will have to be humans; I’d suggest they’re pesky experts, senior journalists with long experience of assessing the quality of stories, their relative importance, etc. If Facebook can privilege popular and drive purchasing decisions, I’m damn sure it can privilege authority and step up to its responsibilities to its audience as well as its responsibilities to its advertising customers. @katebevan
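A hedged sketch of how such an authority-first ranking might work; the outlet scores, the 70/30 weighting and the 0-10 scales are all invented for illustration, since the proposal leaves the actual judgments to human experts.

# Hypothetical, human-curated authority scores (0-10); values are illustrative only.
AUTHORITY = {
    "ft.com": 9.5, "bbc.co.uk": 9.5, "nytimes.com": 9.0,
    "theguardian.com": 7.5, "dailymail.co.uk": 4.0,
    "medium.com": 3.0,  # individual blogs rank below established outlets
}

def feed_score(domain: str, popularity: float, authority_weight: float = 0.7) -> float:
    """Blend authority and popularity, privileging authority.

    popularity is assumed to be normalised to 0-10; unknown domains get a
    low default rather than zero, so new outlets can still surface.
    """
    return authority_weight * AUTHORITY.get(domain, 2.0) + (1 - authority_weight) * popularity

stories = [("dailymail.co.uk", 9.8), ("ft.com", 4.2), ("medium.com", 8.7)]
for domain, pop in sorted(stories, key=lambda s: feed_score(*s), reverse=True):
    print(domain, round(feed_score(domain, pop), 2))
# ft.com ranks first despite far lower engagement

The clearly visible authority score suggested above could simply be this blended value surfaced in the UI next to each piece.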
I doubt FB will get into the curating business nor do they want to be accused of limiting free speech. The best solution will likely involve classifying Verified News, Non-Verified News, Offensive News.
Offensive News should be discarded, and that would likely include things that are highly racist, sexist, bigoted, etc. Non-Verified News should continue with a “Non-Verified” label and encompass blogs, satire, etc. Verified News should include major news outlets and others with a historical reputation for accuracy.
How? There are a variety of ML algorithms that can incorporate NLP, page links, and cross-references of other search sites to output the three classifications. Several startups use a similar algorithm of verified news sources and their impact for financial investing (Accern, for example).
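As a sketch of that three-way classification using a basic NLP pipeline; the labels, toy training examples and model choice are assumptions for illustration, not how Accern or any platform actually works. A real system would add page-link and cross-reference features and train on many thousands of labelled articles.

# pip install scikit-learn
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny illustrative training set for the three proposed classes.
texts = [
    "Central bank raises interest rates by a quarter point",
    "Senate committee publishes annual budget report",
    "Miracle cure THEY don't want you to know about!!!",
    "You won't believe what this celebrity did next",
    "All members of group X are criminals and should be banned",
    "Vile screed attacking an entire religion",
]
labels = ["Verified", "Verified", "Non-Verified", "Non-Verified", "Offensive", "Offensive"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression(max_iter=1000))
model.fit(texts, labels)
print(model.predict(["Shocking secret the government is hiding!!!"]))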
We could set up a certification system for verified news outlets - similar to Twitter, where there are a thousand Barack Obama accounts but only one ‘verified’ account. A certification requirement might include the following: +@IntugGB
Possible requirements
Time in existence and the number of field reporters in each country/locale should be required
6) Enforcement: when I search a topic on Google News, a range of articles comes up, not all of which are news -- some are fraudulent/fake, some are biased. Why is this being called news? And why should verified news merely get a checkmark, when unverified sources shouldn’t be appearing next to it in the first place?
If anyone is interested in discussing the specifics of what I’m thinking, contact me -- alexleedsmatthews@gmail.com
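For illustration, a minimal sketch of the Verified / Non-Verified / Offensive triage described above, assuming a curated domain list for the “Verified” tier and a toy text classifier standing in for the ML/NLP models mentioned (this is not any startup’s actual system):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

VERIFIED_DOMAINS = {"bbc.co.uk", "nytimes.com", "reuters.com"}  # assumption

# Toy placeholder training data; a real system would train on NLP
# features, page links and cross-references, as described above.
texts = ["shocking slur against group x", "you won't believe this trick",
         "officials confirmed the budget figures today",
         "my hot take on last night's game"]
labels = ["Offensive", "Non-Verified", "Non-Verified", "Non-Verified"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

def classify(domain, text):
    if domain in VERIFIED_DOMAINS:
        return "Verified"            # historical reputation for accuracy
    return model.predict([text])[0]  # a label shown to users, not a ban

print(classify("example-blog.net", "you won't believe what happened"))
```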
A broad architecture for reputation systems is outlined below in “A Cognitive Immune System for Social Media” based on “Augmenting the Wisdom of Crowds” - Richard Reisman @rreisman
Creation of multiple, user-selected fact checkers
(no banning/ censorship/ editorial control/ coercion):
This may achieve the following:
Most importantly, it injects high-precision information into the system that is not easily obscured. Anybody can write and publish an article that contains lies or half-truths or dog whistles or satire. Fact checkers, however, must provide a clear verdict (through the API). They also cannot easily state that something is true when it is probably not and vice versa without losing some trust when other fact checkers’ verdicts are readily accessible. (+KP)
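A minimal sketch of how verdicts from multiple user-selected fact checkers might be aggregated over such an API; the verdict scale, checker names and trust-update rule are all assumptions for illustration:

```python
from dataclasses import dataclass

@dataclass
class Verdict:
    checker: str     # fact-checking organisation the user subscribed to
    value: float     # -1.0 = false ... +1.0 = true (assumed scale)

def consensus(verdicts, trust):
    # Trust-weighted average verdict for one claim.
    total = sum(trust[v.checker] for v in verdicts)
    return sum(trust[v.checker] * v.value for v in verdicts) / total

def update_trust(verdicts, trust, rate=0.1):
    # Checkers far from the consensus lose a little trust, as the text
    # suggests: you cannot call a probable falsehood true for free.
    c = consensus(verdicts, trust)
    for v in verdicts:
        trust[v.checker] = max(0.01, trust[v.checker] - rate * abs(v.value - c))
    return trust

trust = {"CheckerA": 1.0, "CheckerB": 1.0, "CheckerC": 1.0}
claim = [Verdict("CheckerA", -0.9), Verdict("CheckerB", -0.8),
         Verdict("CheckerC", 0.9)]   # CheckerC is the outlier
print(consensus(claim, trust))       # ~ -0.27
print(update_trust(claim, trust))    # CheckerC loses the most trust
```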
Linking corrections to the fake news they correct:
Recommended
161122 - Nieman Lab
“No one ever corrected themselves on the basis of what we wrote”: A look at European fact-checking sites
Via Nieman Journalism Lab at Harvard @NiemanLab
Reuters Institute
Rise of the fact checker - A new democratic institution?
Via Nieman Journalism Lab at Harvard @NiemanLab
Thread by @juliemaupin
Thread by @thomasoduffy
We need to build narratives that clarify, in a crystal-clear way, the whole life-cycle and “CX” of the journey through fake news: how a person goes from not holding an opinion on an issue, to taking in fake news from single, multiple and/or compounding reinforcing sources (like brand touchpoints), to believing it and holding that perception for a period of time, to later discovering it was not true (when and if that happens) in a way where they change their mind.
By focusing on people who have overcome a belief in some fake-news source, we can ask how to get more people through that fake-news recovery process sooner.
Similarly, we need to contrast this with how people become resilient against fake news and know it’s not true, so we can nudge people vulnerable to fake news towards more resilience against it.
The goal of this would be to reduce the half-life of fake news, reduce vulnerability to fake news, reduce the incidence of fake news success conditions, and shorten the half-life of successful fake news.
Another way to fix the cycle is to make sure that sources that publish fake news come out right away and admit it was wrong. The longer a news source keeps fake news out there, the more people are tricked into believing the wrong thing. A source that publishes false news should take it down and admit it was wrong as soon as possible, or be banned from the platform. It is very important that people get to know the truth and are not just left hanging with that one false story.
Thread by Heather
>> To build on what Heather said, there are specific “clickbait” patterns that a lot of these stories use. Perhaps use that as part of a signal to downrank stories or flag them for human review
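A rough sketch of what such a clickbait-pattern signal could look like; the patterns and weights below are illustrative guesses, not a validated list:

```python
import re

CLICKBAIT_PATTERNS = [
    (re.compile(r"you won'?t believe", re.I), 0.4),
    (re.compile(r"\bnumber \d+ will\b", re.I), 0.4),
    (re.compile(r"^\d+ (things|reasons|ways)\b", re.I), 0.3),
    (re.compile(r"(shocking|jaw-dropping|mind-blowing)", re.I), 0.3),
    (re.compile(r"[A-Z]{5,}"), 0.2),      # SHOUTING in the headline
    (re.compile(r"!{2,}|\?{2,}"), 0.2),   # !!! or ???
]

def clickbait_score(headline):
    # Sum the weights of every pattern that fires, capped at 1.0.
    return min(1.0, sum(w for rx, w in CLICKBAIT_PATTERNS if rx.search(headline)))

def route(headline, threshold=0.5):
    # One signal among many: downrank or send to human review, never ban.
    s = clickbait_score(headline)
    return "flag for human review" if s >= threshold else "normal ranking"

print(route("You won't believe what this senator said!!!"))  # flagged
print(route("Senate passes budget resolution"))              # normal
```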
Related:
161117 - NBC News
Google Think Tank launches new weapon in fight against ISIS
From the article:
“In traditional targeted advertising, a new mother searching for information on Google about getting a baby to sleep might start seeing ads for blankets and white-noise machines in their feeds.
Through Redirect, someone searching for details about life as an ISIS fighter might be offered links to independently produced videos that detail hardships and dangers instead of the stirring Madison Avenue-style propaganda the terror group puts online.”
Clickbaiting, a way of getting users to click on links that lead to other websites, has been a problem in America since the spread of new technologies and the popularization of the internet. The ever-growing phenomenon of social media has not alleviated this issue; it has made it much worse. I am less likely to go on Facebook now because I will come into contact with ignorance spreading more ignorance. Relatives of mine, and older individuals who do not know any better about technology, are typical victims of fake news. But fake news has been around since before my relatives were born: it all comes from yellow journalism. Most stories written for those papers had almost no research behind them; attention-grabbing headlines were the key to selling more papers. Now, using this same formula, similar “publishers” online are being funded by corporations like Google for putting out false articles. If that funding were cut, fake news could ultimately be phased out.
There is plenty of evidence of individuals outside the United States publishing false articles about United States officials. These individuals cannot be charged with slander or libel, because their servers reside outside the reach of American law. They profit from the American population and any general form of web traffic. I believe these individuals are aware of the American societal push for new and innovative technologies; most citizens are merely following the trend. Older individuals, typically baby boomers, consider themselves “computer illiterate,” but refuse to learn how the internet and computers work. My mother is unfortunately one of these individuals, and shares fake news constantly. These “publishers” cannot be touched by American laws, and they are baiting the weak-minded over the internet to gain web traffic through their articles. This act of using others for their own profit is completely unethical. For a society to progress, it must be presented with the truth. As of yet, the government has had little reason to legislate on fake news, because there has not been an event where the government itself suffered from fake news.
The best way to take control over the amount of fake news being spread lies with private companies like Facebook and Google. With their large number of users, and given how widespread this issue has become, they would be within their rights to stop fake news sources that cause harm to their users. Journalism was meant to deliver factual news that served the public’s needs rather than causing harm or discomfort. Most fake news sites ultimately cause harm overall, as readers blindly fund these “publishers” merely by clicking on their links. I believe that Facebook, as well as Google, should cut off funding from sites that are commonly reported to be spreading fake news. In addition, there should be stricter rules for signing up with AdSense. One criterion could be that the owner/publisher of a website only receives funding if they reside in the same country as the companies paying out the ad revenue. Companies like Google would then not only be able to phase out foreign servers, but could also track any server in America and charge it with slander or libel, depending on the severity of the falsehoods.
When it comes down to it, fake news originates from unethical individuals milking the weak-minded for ad revenue. The companies that fund these sites need to step up to the plate for this issue to be addressed. If they overlook it, it has the potential to grow into something much worse, causing more harm for anyone who reads these articles down the line. I believe that setting criteria for receiving ad revenue can help alleviate this issue, especially if the ad revenue is land-locked.
161123 - NPR [Podcast]
We tracked down a fake-news creator in the suburbs. Here's what we learned
Please note we are tagging contributions with certain [keywords] at this moment. As the document evolves, these will be transferred to the specific topics already in place - @Media_ReDesign [16 Dec 16]
From our email conversation
Note: Not clear who the contributors are, perhaps important to include given the title
“Using Factmata, you should be able to verify statements on social media, article comments, articles and any text on the internet about important economic issues. If someone makes a statement about the number of teenagers without jobs going up, you should be able to view the latest ONS youth unemployment data; if its a claim that economic growth is collapsing and the economy is a mess, we will link you to the official GDP growth figures. As individual users use the tool to help them verify statements (by receiving relevant data and stats), the system will be able to provide a “truth score” on statements, by aggregating users’ votes.”
Ref: https://medium.com/factmata/whats-next-for-factmata-2df231bd6fe9
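As a toy illustration of the vote-aggregation idea in that quote (emphatically not Factmata’s actual system), a smoothed “truth score” might look like this:

```python
def truth_score(votes_true, votes_false, prior=1):
    # Aggregate user verifications into a score in [0, 1], with Laplace
    # smoothing so that a statement with only two votes is not scored
    # as certainly true or certainly false.
    return (votes_true + prior) / (votes_true + votes_false + 2 * prior)

# e.g. "youth unemployment is rising", checked by users against ONS data:
print(truth_score(votes_true=2, votes_false=10))   # ~0.21, likely false
print(truth_score(votes_true=11, votes_false=1))   # ~0.86, likely true
```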
"De, with Anant Goel, a freshman at Purdue University, and Mark Craft and Qinglin Chen, sophomores at the University of Illinois at Urbana-Champaign, built a Chrome browser extension that tags links in Facebook feeds as verified or not verified by taking into account factors such as the source’s credibility and cross-checking the content with other news stories. Where a post appears to be false, the plug-in will provide a summary of more credible information on the topic online.
They’ve called it FiB.
Since the students developed it in only a day and a half (and have classes and schoolwork to worry about), they’ve released it as an “open-source project,” asking anyone with development experience to help them improve it. The plugin is available for download to the public, but the demand was so great that their limited operation couldn’t handle it."
161118 - The Washington Post
Fake news on Facebook is a real problem. These college students came up with a fix in 36 hours
Feedback on our Taboola article:
"Traditional ads tend to be intrusive. Thereby blocking a majority of them hinders that. Taboola and outbrain are unfortunately helping spread misinformative/non-factual and/or politically biased articles through their sponsored links, something that in my opinion is worse than commercials for the sake of products/services. Keep that in mind for your "Acceptable Ads" program moving forward."
From
161129 - AdExchanger
Ad Blocking declined in Germany; Advertisers are concerned with Snapchat video averages
Revcontent and RevDefender both serve Taboola-style ads using web sockets and are really, really hard to block.
"Facebook is a prominent distributor of online news, including the fake variety. Analytics company Jumpshot tracked Facebook referrals from more than 20 news sites from September to November, finding 80% of unique visitors to hyperpartisan news sites came from the platform, Poynter reports. Fake news site abcnews.com.co can attribute 60% of total visits to Facebook during that period. Compare that to The New York Times and CNN, which attribute 20% and 11% of site visits to Facebook, respectively. Even as they’re cut from Facebook and Google exchanges, fake news sites are finding new purchase in content recommendation widgets like Taboola and Revcontent, which manufacture scale through long-tail networks."
The link to "content recommendation widgets":
161128 - Digiday
‘The underbelly of the internet’: How content ad networks fund fake news
Also, a story about how Le Monde is trying to automate hunting down fake news:
How Le Monde is taking on fake news
"The plan is to build a hoax-busting database, which incorporates information on which sites are fake and which are verified, trusted sources, and readers can access via Google and Firefox Chrome extensions. The idea is that once a user has downloaded the extension, when they come across articles online a red flag will appear if the site or news is deemed fake, yellow if the source is unreliable or green if it’s ok.... Laurent has plans to expand the hoax-busting database beyond France’s borders. “Our goal is to be open source, so that everyone can use it. And we hope to have a bigger database by sharing our database of news sites and other fake news from other countries,” he added."
UPDATE - 180228
“The news is broken but we figured out how to fix it.”
– Jimmy Wales, Founder of Wikipedia
Wikipedia has endlessly been ridiculed as the poster child of ‘unreliable sources’ on the Internet. Ironically enough, in this day and age, Wikipedia proves to be not only one of the more accurate and informative resources on the internet, but also one of the few remaining ‘real’ and ‘trusted’ news outlets.
Jimmy Wales compares fake news to the email spam situation that was much more prevalent a few years ago. Today, email spam is minuscule in comparison to the actual issue it was at its prime, and Wales predicts that fake news will likewise be infiltrated and banished in the years to come. However, unlike email spam, fake news has serious capabilities of altering public opinion (so much so that it could, you know, potentially throw a presidential election). The impact of fake news will not even be quantifiable until years have passed and historians can look back on what actually happened. Mark Zuckerberg himself is reluctant to admit that the sharing of these articles may have had extreme effects on public opinion. Wales cites the whole situation as an “asleep at the switch” case that probably led to some unfavorable voter suppression.
Wales cites advertising and ‘click-bait’-driven journalism as the single most detrimental piece of the fake news crisis. The monetary payout tied to the number of views and clicks an article can accumulate is tainting the quality and authenticity of the information being shared. That payout is probably the little voice in the back of the minds of platforms like Facebook and Twitter, which are well aware of the circulation of fake news yet have not taken any serious action to combat it.
In the beginning, Wikipedia was considered the epitome of unreliability. However, Jimmy Wales created a community of rather insightful and well-informed individuals who only seem to be getting more precise and scrutinizing about the accuracy of their information. It is interesting that Wikipedia is a site completely built on inviting anyone to edit and contribute to its pages, yet Wikipedia was not attacked by the fake news epidemic in the slightest. Have you ever noticed that Wikipedia is an advertisement-free website?
In light of the growing dilemma of fake news, Wikipedia’s founder Jimmy Wales offers a solution. Much like this doc itself, Wikipedia provides a platform for open editing that allows a constant flow of new and changing information to be checked and elaborated on by any number of contributors. Because Wikipedia has been built around the idea of providing accurate information and constantly reaffirming the exactitude of the concepts being shared, Wales has assembled an arsenal of professional fact-checkers. From this concept a brilliant idea was born.
People seek out “very basic, very straight presented facts”. This is what has made Wikipedia so successful, and it is why Wikipedia was not infiltrated by fake news reports: the community is constantly checking its own information to make sure it is as straightforward and accurate as possible. People seek out information on Wikipedia, whereas on a platform like Facebook, information is simply presented to you, and you more or less have to take it at face value (pun completely intended).
So how do we present news in a basic, straightforward manner? Jimmy Wales is in the process of creating a news venture called Wiki Tribune, built very much the way Wikipedia is constructed. Wales contends we should start by completely stripping the advertisements from online journalism. If we take the advertisements and click-bait out of the journalism equation altogether, we take away the monetary reward that is driving such ridiculous articles. He is also attempting to build the company up from the grassroots: Wales wants to raise funds for the startup on a donation basis, so that sponsors have no way of pushing their own agendas by holding their money over the company’s head. The all-encompassing idea of this form of news delivery is “by the people, for the people”. Wales wants to partner professional journalists with the public to determine what news needs to be reported on, and then deliver that information to broadcasting stations. If we build an insightful, articulate community, we have the potential to reshape public opinion with truth.
Check out Wiki Tribune, which has been growing its platform since April 2017:
Provided below are some videos of Jimmy Wales further explaining his mission:
Up Front (13 October 2017) How do you fight back against fake news? AlJazeera. Retrieved: 27 February 2018
https://www.youtube.com/watch?v=nho5NaLzc5Q
https://www.youtube.com/watch?v=buO_lk0fHwM
Thread by Kyuubi10 (29/11/2016)
After posting my ramblings in “More ideas...” I decided to reorganise the ideas in a more comprehensible format.
I’d like to expand on the notion of verifying content and provide an expandable structure which would be both efficient and cost effective.
The common ideas provided so far covered automation, crowd-sourcing and machine learning very well. What I have done is simply pull from existing ideas and improve upon them.
How to get verified?
How will the trust rating work?
Another direction
I'm going to go out on a limb and say this is exactly the wrong direction to go in. The solution to a divided America lies in more politics and more debate, not figuring out how to design away what you disagree with.
Consider the top "true" mainstream stories referenced in this often cited BuzzFeed analysis all have a big partisan slant:
161116 - BuzzFeed
This analysis shows how fake election news stories outperformed real news on Facebook
The pro-Trump crowd has a point when they note that the vast, vast majority of the mainstream media voted against him and ask how it can be truly fair and objective.
Sure, that is not an excuse to draw an equivalence between the WaPo and some random tweet on the internet that gets picked up by infonews or whatever Pravda-esque news outlet the pro-Trump crowd uses today.
Neither should that be an excuse to shut down debate through well-intentioned though ultimately profoundly misguided design solutions.
Those of us that disagree with Trump need to win the war of ideas. Here is my two cents in that vein:
161124 - Calbuzz
Op Ed: Young man’s hope for spirit of California
- Patrick Atwater @patwater
Thread by @thomasoduffy
It is important to recognise the biological and neurological factors that underlie a person’s ability to consider points of view that are different to their own.
Biologically, when a person’s fight-or-flight system is activated, their brain seems to shut down its philosophical centers and render them more certain about what seems to be true according to their present mental model. This appears to be an evolutionary protective function: under threat, deep, uncertain thought is unsafe, so in fight-or-flight mode a person’s brain optimises them to take rapid action to run away or fight, requiring certainty upon which action can be taken… survival depends on winning or exiting, not on deep thinking in that moment.
Experientially, many stressed people get “stuck in mindsets” they can’t unravel or exit from through rational thought or even qualified information alone. But later, on holiday, typically when sufficiently relaxed, their brain changes mode (increasing the depth of their parasympathetic access) which in turn allows them to feel safe and uncertain, the minimum viable brain state to contemplate views different to their own and to change their thinking. When a person is relaxed and mindful, they have more neuroplasticity - the capacity to change their mind.
However, when a person is traumatised, i.e. is exposed to extreme events, their neural pathways get phosphorylated, and this “stuck in mindset” factor can be amplified. Many people who are traumatised seem to carry a mindset through their life unless they successfully deprogramme that same trauma which is not trivial, nor seems to happen based upon ordinary life conditions.
The second part of this is to recognise that the whole media system makes money by triggering people’s alert systems - i.e. tricking them into paying attention for their own safety. Thus the media keeps many people in various degrees of fight-or-flight / threat perception, which has the side effect of keeping them more stuck in mindsets and unable to autocorrect from fake news they have taken in. In order to think differently or correct their biases, a greater proportion of parasympathetic access (the nervous-system configuration that is the opposite of fight or flight) and neuroplasticity is required, in tandem with educational journeys that guide people to think more intelligently, rationally and accurately.
(Note: maybe a qualified neurobiologist can confirm my chemistry, and whether the principles are correct.)
Thread by @thomasoduffy
We must be equally cognisant of how specific classes of people end up being exposed to, inputting, reacting to and re-transmitting fake news or derivations of fake news signals. We need to think about this in terms of different kinds of people and their relationship with the media they consume and the channels/platforms/device types from which they consume it. This can help us get our head around the reality of the problem, beyond over-generalising from our own limited personal experience and behaviours.
We need simple rules-of-thumb that help bring clarity to the issue… for example: Does an individual’s behaviour marginally increase or reduce the quantity of fake news information, authority, reach and transmission in the World?
Just like you can make pretty good choices about food using metrics like ratio of calories: nutrients, we need design principles that scale up to increase the ratio and authority of accurate news, accurate thinking about a multiplicity of information sources, and the whole lifecycle of news related behaviours. ++Kyuubi10
How many times over this past year have you heard the term “Fake news” tossed around? Most likely an uncountable number of times and over a vast variety of subjects. Fake news has become so prevalent in our society that many of us don’t go a single day without a piece of fake news sneaking its way onto our computers, our smartphones, or our televisions. This creates both ethical and legal issues that ripple through our society and ultimately take away from the medium of journalism as a whole.
The term “fake news” refers to the act of knowingly writing or distributing falsities that assert themselves as fact. The easiest, and thus most popular, way to spread fake news is by sharing misleading articles on Facebook, Twitter, or a plethora of other social media sites. These articles spread quickly, and because many people have no reason to believe the information may be false, they take every line as gospel. So why does this matter?
According to usnews.com, over the past year team LEWIS (which defines fake news as an “ambiguous term for a type of journalism which fabricates materials to perpetuate a preferred narrative suitable for political, financial, or attention-seeking gain”) conducted a study that sought to measure the overall effect fake news has had on people’s views of American news and brands. The study found that only 42% of millennials checked the credibility of publications, and this sank to 25% for baby boomers. This means that 58% of millennials, and 75% of baby boomers, may be fed false information on an almost daily basis and accept it as truth. This is a problem for a number of reasons.
Usnews.com states that one of the main reasons fake news is spread is to promote political ideologies, and this is where all of this seriously matters. Let’s say you get most of your news about a politician from Facebook, and let’s say that at least 60% of what you’re reading is either entirely false or misleading. This means you could potentially be voting for a candidate who totally appeals to you, when in reality they might be the exact opposite of what you’re looking for, and you were simply pushed and misled by the media. Now imagine that this didn’t only happen to you, but was the case for half, or even a quarter, of the people who also voted for this candidate. This is a direct consequence of fake news, and it is very scary.
Now, it’s not that we don’t have the tools to fact-check these articles ourselves; in fact, it’s really not very difficult to determine the credibility of an article if you know what to look for. The major problem is that many people have no reason to believe they’re being misled, especially the older generations. People tend to read an article, or even just its title, and have it stick with them, and at some point they’ll share it in conversation with their friends, because it was so gripping that they wouldn’t want it to be fake in the first place, which deters them from even having the thought to check.
The ethical problems that arise because of fake news are significant and have a real-life impact. The people who push these articles have very little to answer for, as it is almost impossible to police all media in an attempt to fight this sort of thing. The best defense against fake media is to check everything before preaching it as truth. Scrutinize your sources, and know that there is a chance that anything you read online is false. That’s all we’ve got until AI can start determining whether a post is fake news.
Related to cross-partisan / cross-spectrum notes above - Richard Reisman (@rreisman)
See [Update] “A Cognitive Immune System for Social Media” based on “Augmenting the Wisdom of Crowds” below
This outlines some promising strategies for making the filter bubble more smartly permeable and making the echo chamber smarter about what it echos. Summarizing from my 2012 blog post: Filtering for Serendipity — Extremism, “Filter Bubbles” and “Surprising Validators”:
...Quoting Sunstein:
People tend to dismiss information that would falsify their convictions. But they may reconsider if the information comes from a source they cannot dismiss. People are most likely to find a source credible if they closely identify with it or begin in essential agreement with it. In such cases, their reaction is not, “how predictable and uninformative that someone like that would think something so evil and foolish,” but instead, “if someone like that disagrees with me, maybe I had better rethink.”
Our initial convictions are more apt to be shaken if it’s not easy to dismiss the source as biased, confused, self-interested or simply mistaken. This is one reason that seemingly irrelevant characteristics, like appearance, or taste in food and drink, can have a big impact on credibility. Such characteristics can suggest that the validators are in fact surprising — that they are “like” the people to whom they are speaking.
It follows that turncoats, real or apparent, can be immensely persuasive. If civil rights leaders oppose affirmative action, or if well-known climate change skeptics say that they were wrong, people are more likely to change their views.
Here, then, is a lesson for all those who provide information. What matters most may be not what is said, but who, exactly, is saying it.
… My post picked up on that:
This struck a chord with me, as something to build on. Applying the idea of “surprising validators” (people who can make us think again):
This provides a specific, practical method for directly countering the worst aspects of the echo chambers and filter bubbles…
This offers a way to more intelligently shape the “wisdom of crowds,” a process that could become a powerful force for moderation, balance, and mutual understanding. We need not just to make our “filter bubbles” more permeable, but much like a living cell, we need to engineer a semi-permeable membrane that is very smart about what it does or does not filter.
Applying this kind of strategy to conventional discourse would be complex and difficult to do without pervasive computer support, but within our electronic filters (topical news filters and recommenders, social network services, etc.) this is just another level of algorithm. Just as Google took old academic ideas about hubs and authority, and applied these seemingly subtle and insignificant signals to make search engines significantly more relevant, new kinds of filter services can use the subtle signals of surprising validators (and surprising combinators) to make our filters more wisely permeable.
(My original post also suggested broader strategies for managed serendipity: “with surprising validators we have a model that may be extended more broadly — focused not on disputes, but on crossing other kinds of boundaries — based on who else has made a similar crossing…”)
(Update: my 12/15/16 post adds broader and more current context to my original Surprising Validators post: 2016: Fake News, Echo Chambers, Filter Bubbles and the "De-Augmentation" of Our Intellect)
- Richard Reisman (@rreisman) (#SurprisingValidators)
(Expanding on Surprising Validators, above -- added 11/1/18) -- Richard Reisman (@rreisman)
Here are links to work on a broad architecture for “Augmenting the Wisdom of Crowds” to create “A Cognitive Immune System for Social Media” drawing a nuanced view of “The Tao of Truth.” This draws on work detailed in 2002-3 and added to in the past few years (reinforced by recent work and meetings on “fake news” and disinformation).
It builds a reputation system that includes both external and algorithmically emergent authority of both news sources and those who disseminate and comment on them. Much like Google PageRank it is recursively based on “Rate the Raters and Weight the Ratings,” where raters/ratings with high imputed authority are weighted higher than those with lower imputed authority. Thus sources and raters have only as much weight as other sources and raters give them.
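A minimal sketch of one way such a “rate the raters” recursion could be computed as a fixed-point iteration; the update rules here are illustrative assumptions, not the author’s actual algorithm:

```python
ratings = {  # rater -> {source: rating in [0, 1]}
    "alice":   {"src1": 0.9, "src2": 0.1},
    "bob":     {"src1": 0.8, "src2": 0.2},
    "mallory": {"src1": 0.1, "src2": 0.9},  # rates against the consensus
}
rater_weight = {r: 1.0 for r in ratings}
source_score = {}

for _ in range(20):
    # Source authority = ratings weighted by each rater's imputed authority.
    for src in ("src1", "src2"):
        total = sum(rater_weight.values())
        source_score[src] = sum(
            rater_weight[r] * ratings[r][src] for r in ratings) / total
    # Raters earn weight by agreeing with the emergent consensus, so
    # raters have only as much weight as others (indirectly) give them.
    for r in ratings:
        err = sum(abs(ratings[r][s] - source_score[s])
                  for s in source_score) / len(source_score)
        rater_weight[r] = max(0.01, 1.0 - err)

print(source_score)   # src1 ends high, src2 low
print(rater_weight)   # mallory's imputed authority decays
```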
The main body of ideas on such an architecture is in these posts:
A separate but related body of innovative work is on a strategy for changing our social media business models -- to remove the inherent disincentives (the ad model) that have led the platforms to promote disinformation when they should be down-ranking it. Introductions to that work are in this post and journal article:
(Currently a pro-bono effort.)
- Richard Reisman (@rreisman)
(#AugmWoC, #WisdomOfTheCrowd: #RateTheRaters #SurprisingValidators)
I am adding some ideas in Spanish, but I do believe there is a way to identify fake news by analyzing its patterns of propagation. I’ll be glad if someone can translate. >> @LoQueSigue
In Mexico we are in a kind of future where fake news and armies of bots and trolls have already been at work for some years, and maybe the knowledge developed here could help around the world.
Years ago I tried to develop software that could fight this battle on Twitter, and there are some ideas that could maybe also be useful on Facebook:
Súmate como socio para crear un medio 3.0 en México (Join as a partner to create a 3.0 media outlet in Mexico) [Full screen]
Many fake news stories leave a propagation trail according to how they have been shared. This has allowed me to identify whether they were generated organically, that is, out of people’s legitimate interest in the story, or by a team of people dedicated to spreading false information. Here are a couple of examples with Twitter data that could just as well be applied to Facebook. – @LoQueSigue_
More case examples (in Spanish) here:
El día en que la sociedad derrotó a los bots (The day society defeated the bots):
#EstamosHartosdeEPN vs #EstamosHartosCNTE:
Así fue el ataque masivo de bots al @GIEIAYOTZINAPA. Demostración (This is how the massive bot attack on @GIEIAYOTZINAPA happened. Demonstration)
Trace of a fake news story / “non-organic” trending topic
Case study on Twitter: #LosSecretosDeAristegui.
People were paid to spread false information about a journalist.
Link to full screen video:
Carmen Aristegui ¿verdad o manipulación? (Carmen Aristegui: truth or manipulation?) - [Full screen]
Trace of a real “organic” news story: a lot of people sharing real information, connecting communities
(Source? Because if true, hoo boy we can do this with graph theory. -- N.) +@IntugGB
@LoQueSigue_ --- Those graphs are my own; I generated them to try to explain. The first one is about a trending topic generated yesterday, spreading fake news about a popular Mexican journalist. The second one is about a Change.org campaign popular in Mexico.
You can find something about this idea on Wired:
Pro-Government Twitter Bots Try to Hush Mexican Activists
My name is Will Thalheimer. Not a member of the political caste. Just an American who is frightened. Twitter: @MakeTruthGreat
I wrote up a couple of suggestions here:
A Suggestion on how Facebook could fix its Fake News problem
In short:
I think the most feasible and therefore most likely implemented solution will be one that:
— zach.fried@gmail
We are building citizen science software producing and displaying credible and legitimate ‘truthiness’ scores. Here’s how it works: every day, participating citizen scientists around the world are invited to read one or a few of the top trending articles in their language. These lightly-trained citizen scientists will tag words and phrases in the articles that appear to commit common fallacies like presenting a false dichotomy, mistaking correlation for causation, ignoring selection effects, and more. When many citizen scientists independently verify the fallacy, the news article text committing the fallacy is flagged so that news readers can recognize the potential mistake. (Journalists will be given an opportunity to correct (or contest) the mistake during a 72 hour window.) Journalists and their publishers will be (visually) rated by the number of fallacies they commit and propagate. These ground-up measures will be seen as more legitimate than holistic, more easily-biased metrics. When the project is operational, humans will be able to measure and visualize — regardless of their agreement with the content of a journalist or news source — the truth value and quality of published ideas. Over time, the project will advance public literacy and the quality of our discourse.
(for more information, email nick@goodlylabs.org )
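A minimal sketch of the agreement-threshold and contest-window logic described above; the thresholds and data shapes are assumptions, not the actual Goodly Labs design:

```python
from datetime import datetime, timedelta

MIN_INDEPENDENT_TAGS = 5             # assumed agreement threshold
CONTEST_WINDOW = timedelta(hours=72)

def flagged_spans(tags, now):
    # tags: list of (span_text, fallacy, tagger_id, tagged_at).
    # A span is shown to readers once enough distinct taggers agree on
    # the same fallacy and the journalist's contest window has elapsed.
    taggers, first_seen = {}, {}
    for span, fallacy, tagger, at in tags:
        key = (span, fallacy)
        taggers.setdefault(key, set()).add(tagger)
        first_seen[key] = min(first_seen.get(key, at), at)
    return [key for key, who in taggers.items()
            if len(who) >= MIN_INDEPENDENT_TAGS
            and now - first_seen[key] >= CONTEST_WINDOW]

demo = [("ice cream sales cause drownings", "correlation-as-causation",
         f"citizen{i}", datetime(2016, 12, 16)) for i in range(5)]
print(flagged_spans(demo, datetime(2016, 12, 20)))
```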
Special Campaigns
Promote relevant social issues applying viral techniques. - @IntugGB
All too often, people are unaware of what is at stake, either because they do not understand or because it simply does not interest them.
Background
Watch:
AIB: Save the Internet [Fullscreen]
“Pooled” White House News Dashboard
Just as there is a White House Pool that covers the President, I would like to propose something similar: a new kind of aggregation site for specific topics, such as covering the President. In that case, the “page” might have the following sections and draw on the existing reporters and publications that are part of the White House Press Pool.
The site / page could include:
While it may be difficult to encourage competing publications to collaborate on this initiative, it may help highlight the different and competing narratives that are circulating. It may also be a way of having a neutral place to debunk news articles. So, when a friend shares something on social media that is incorrect (example: Trump incorrectly stating how much he self-funded his campaign), you can go to this pooled site, look at relevant articles and suggest your friend go there to confirm.
This is not a perfect approach and doesn’t address all the issues recently raised (example: Separating fact from commentary). Curious to read comments from others on something like this.
Thread
161214 - The Journal Gazette
Bursting the Bubble: Absence of critical thinking among young has troubling implications for nation
150415 - Telegraph
Humans have shorter attention span than goldfish, thanks to smartphones
120620 - Telegraph
Children with short attention spans 'failing to read books'
Recommended
161116 - Medium
Facebook, Google, Twitter et al need to be champions for media literacy
By @DanGillmor
There has been a *lot* of research in this area. I’m working on a PhD in this topic.
Broadly, there are three ways of assessing whether information is in fact trustworthy: credibility-based, computational, and crowdsourced.
All of them can be gamed and hacked, so FB does have a tough problem. I think there are patterns in how users interact with information that differ depending on whether they are trying to find an answer to a question or merely trying to support a bias. The trick is in teasing out how to reliably tell whether the user you’re watching at the moment is doing one or the other. Aggregating the traces of the people who are looking hard for answers might wind up being very helpful.
A pop-up box that appears when someone is about to share an article that’s been frequently flagged as fake, saying simply: “Many Facebook users have reported this article contains false information - do you still want to share it?” This could slow the spread of bad information but would not allow flagging-as-warfare to totally drown out the opposition.
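A minimal sketch of that share-time interstitial; the threshold is an assumption, and ask_user stands in for the platform’s real confirmation dialog:

```python
FLAG_THRESHOLD = 100   # assumed: reports needed before the prompt appears
flag_counts = {"http://abcnews.com.co/story": 4231}

def ask_user(prompt):
    # Placeholder for the platform's real confirmation dialog.
    return input(prompt + " [y/N] ").strip().lower() == "y"

def share(url, post):
    if flag_counts.get(url, 0) >= FLAG_THRESHOLD:
        ok = ask_user("Many Facebook users have reported this article "
                      "contains false information - do you still want "
                      "to share it?")
        if not ok:
            return "share cancelled"   # friction, not censorship
    return f"shared: {post}"
```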
Evolve the platform’s objective:
The biggest problem with fake news is that many of the links looked like real news. Sure, there were the hyper-partisan sites that were clearly iffy, but I’m talking about ones like abcnews.com.co and the like. This needs to be rectified, and can be in a bunch of ways, most simply with a sort of blacklist built from keywords of legit news sites, so that ‘abcnews’ can be keyed out of any other impermissible link (see the sketch after this list). Secondary to this, a flagging feature which allows users (through an algorithm) to flag a story as ‘false, misleading, inaccurate’, etc. Thirdly, FB/Twitter/et al. need to take it upon themselves to ban hate-speech posting/sharing.
Second, there needs to be a human editorial staff, hands down. The problem with this is that you have to hire a legit staff of non-partisan journalists who can curate the sidebar news section and deal with other flags that pop up.
Third, a ‘trusted source’ whitelist. National TV, national print, major local papers, etc. could be whitelisted and re-approved (annually, bi-annually, etc.) to post, and the ‘flagging’ count would have to be remarkably high to merit a review of the article posted/shared.
Lastly, there needs to be a clear divide between “news” and “opinion”. TV news, especially Fox News, has taken the “news” and twisted it into opinion-based consumption. News organizations need to be held accountable for clearly stating whether an article on FB/Twitter/etc. is NEWS or OPINION. Hard news, facts, fact checks, quotes, etc. that are said/done/reported as news should be posted and verified as that. If the NYT/WaPo/Fox/MSNBC/etc. posts an editorial or opinion piece, note that that’s what it is. -- Ian
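As a sketch of the lookalike-domain blacklist in Ian’s first point above, assuming an illustrative list of legitimate outlets:

```python
from urllib.parse import urlparse

LEGIT = {"abcnews": "abcnews.go.com", "nytimes": "nytimes.com",
         "washingtonpost": "washingtonpost.com"}

def lookalike(url):
    # Catch domains that embed a legitimate outlet's name without being
    # that outlet, e.g. abcnews.com.co impersonating abcnews.go.com.
    host = urlparse(url).hostname or ""
    for keyword, real_domain in LEGIT.items():
        if keyword in host and not (
                host == real_domain or host.endswith("." + real_domain)):
            return f"blocked: impersonates {real_domain}"
    return "allowed"

print(lookalike("http://abcnews.com.co/trump-story"))  # blocked
print(lookalike("http://abcnews.go.com/politics"))     # allowed
```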
Please note this section has a number of contributions with very valuable perspectives. We ask that the text be left as is, intact, with no modifications of their personal opinions. Thank you -@linmart
I’m not clear on whether you want comments from any member of the public or only full-time researchers and specialists that you know. But since I managed to find my way in here, I’ll go ahead and offer some good faith suggestions to help your project. I have a background in user interface design and software engineering, but I’m not here to comment as a specialist only as an interested observer from the other side of the cultural divide.
I think you can get buy-in from people on my side of the divide to stop circulating dubious news if you devise a process perceived to be fair and open.
Some Suggestions:
- Avoid devising processes or algorithms that rely on sites like Snopes or fact-check. The quality of the debunking work on those sites is uneven, excellent at times horrible at others. It is the reason many people on my side of the cultural divide do not consider those sites credible. Any debunking tied to snopes will be dismissed by many people for legitimate reasons.
- Use as much transparency as possible.
- Ensure there is real and meaningful ideological diversity on human teams.
← Adding some observations to this, in the context of what you just mentioned (underlined text). All points I am identifying with my name, but please add any additional ones as needed. The comments function I am not using, as I need this to appear in the preview - @linmart, @IntugGB
Supplement
I wrote the above comments related to the cultural divide. Additional comments:
Definition of Fake News
At this time, there does not appear to be a definition of “fake news” in this document.
Moral authority of news gatekeepers
The global perspective that some researchers here appear to be bringing to this work is indispensable. This “perspective” clarification, however, is about the cultural climate in the United States specifically.
The urgency of the ongoing political situation in the US, not only for the country but for the world, makes it imperative that the so called “Fake News” issue be considered through this very specific lens, venturing perhaps even further to suggest that it be analysed from the perspective that at least part might be attributed to ongoing propaganda efforts by a foreign nation.
Incidentally, at @IntugGB, we are looking at this from a global perspective, examining how this problem ties in with such critical events as the #Aleppo crisis, with constant mentions throughout that the news coming from the combat areas is part of a Globalist agenda. “Fake”.
Going further, there seems to be potential for the creation of a multinational entity in charge of dealing with issues of this nature. As it stands, INTUG is the only organization that represents users’ rights in the field of ICT policy -- on a global scale. While ‘fake news’ as a topic is not exactly under its jurisdiction, the effect it has on users’ experience is of great concern.
These two links shed some light on work being carried out in the European Union:
161212 - Open Democracy
We need European regulation of Facebook and Google
GRIP Initiative
Hope this helps. - @linmart, @IntugGB [14 Dec 16]
Thread posted under The Old School Approach has been transferred to this section: - @linmart
It’s tough when media essentially sells their stories to viewers. You are always going to have those people trying to push out something that isnt worth anything but is really flashy just to turn a quick dollar. This is also the problem in most media today. The press should not collude with gov officials. The press should report the facts against ALL officials. Media figure heads should not be pushing their opinions on the masses. I shouldn’t have to dig into the internet to find the truth about people. The press should be telling me personal histories. Especially during the elections.
I heard all of this negative press for trump and what he did in the past and the mean things he says. But I never heard anything about hillarys past scandals. Lets not forget that they DO exist. It's not conspiracy or lies. It's in the headlines of newspapers from decades ago. Yet I never heard an ounce of that. Why? I understand that trump was not a great person morally. But most of us common men and women aren’t, and if you think that you are, then maybe you should try practicing some humility. Let's not set these ideals that we should have some immaculately clean personas running for president. Clinton was just as bad as trump in her own ways.
Lets dial it back to JFK and his address to the press. This is what media should be like. Reporting without bias. Without opinions. Telling facts to the American people. You can't call someone a racist just because they say a stereotyped blurb about another race. That’s not racism. We need the media now more than ever to start reporting unbiased news. Facts, fair and balanced facts and not embellish them with opinion or hearsay. It's so dangerous to the people as a whole. Put away your personal feelings and report fact, and if you want you bash a candidate like trump you should dig some dirt up on his opponent as well. People didn’t vote for racism or bigotry or against gays or women. They voted against the governmental system, against a failing and biased press.
As soon as we stop selling these dramatic stories that ignorant americans eat up, then we will be able to distinguish fake news from real news. Embellishing truth distorting words, these are tactics of a corrupt press. Yet these prevail in almost every media outlet in the nation. Why? Because it sells. People love drama in their dull lives and media outlets know this. They need numbers and viewers. It makes money. The press isn’t in it to help the people out anymore. It's just in it for the money. Just like the fake news sites.
So really, can they be stopped? No. Not in this day and age. Not until we unwind the current state of the media and transform it to something informative and real. Not speculate on someone’s personality. Report the facts even if it's something bad about our government. Go against the grain. Give us real stories, keep your opinions out of it.
The masses love personalities in the media, that’s why we all know their names, but too often these personalities get in the way of reporting what’s real and what's the personalities opinion. We identify with them. We emulate them. If they say I don’t like Clinton or I don’t like trump, then they will mirror that. The media is in a state of distress. It’s no wonder they are falling to these barbaric hordes of Inquirer type medias. Because people can't distinguish real from fake.
The media is all hype and passion and opinion. Even if it is laced with fact, there is a heavily biased undertone to all of it. If you are intelligent enough to see this, you may be safe. If not, you are going to end up being the one that believes the fake news headlines. Please please please put away your hate and divisiveness for the other side of the opinion, practice listening more than you do speaking. Listen and try to understand both sides. See the facts in both sides, don’t dismiss what you don’t hear from your circle as BS and if you hear something from the news that you seem interested in, dig a little further and see how much truth there is to it before you plaster it all over the brains of your cohorts like it's fact.
Stop labelling people too, it's disgusting. In a world where we are so passionate about gender pronouns and offending people, we need to look at ourselves. Are we labeling someone Corrupt? Racist? Sick? Bigot? Stop. We honestly don’t know more about that person than what the press tells us and that is no ground to stick labels on people. Think about a bad thing that you said or did once or maybe twice. Think about if the press got a hold of that and blasted the world with labels for you based off of that mistake. Is that who you are? No you are human. Just like I expect our president to be. As soon as the press starts being compassionate to people as humans, we may be able to defeat fake news.
Related reads
161213 - Vox
This Trump voter didn't think Trump was serious about repealing her health insurance
Why would people vote for a presidential candidate who campaigned on taking away their health insurance? Last week, we went to Corbin, Kentucky, to try to answer that question. It’s a small city in southeastern Kentucky, an area of the country that has seen huge declines in its uninsured rate — but also voted overwhelmingly for Trump.
How about testing out everything we are discussing on the platform itself? Call it “Proof of concept”
161117 - The Washington Post
This researcher programmed bots to fight racism on Twitter. It worked.
By Ciara Allen
Looking back at the last couple of years, I believe we’ve come to realize that journalism (in America) has been influenced by competing political agendas. These political agendas have been sped up with the help of social media platforms, spreading lies and misinformation to most people in this nation. The 2016 election, where fake news seemed to circulate freely without proper fact-checking and spread through social media, made us question our sources of journalism once again. While it seems few things are going to change regarding how to stop fake news right now, it is clear from things I have previously read in this Google Doc that there are plenty of people scratching their heads trying to figure out a sensible solution.
Perhaps the definition of fake news is still a bit ambiguous. A simple Google search and you can see the definition is still one based on opinion. I doubt it will enter our dictionaries any time soon, perhaps not just because it’s opinion-based but also because it’s self-explanatory. That being said, when reading an article on the subject of fake news, I think it should be necessary for the author to put down their own definition; this helps put the article into perspective. My definition of fake news would be: a pocket of misinformation that can be damaging to an agency, entity, or person. Thinking about it, we could run into the problem that credible sources could be labelled “fake news” without any real evidence by opposing views.
I wonder how the jilted American public can ever regain trust in the media, and how the qualities of journalism can ever be returned to it. Has the media realized how damaged its connection to the people has become? There is a lot of work to be done. If there were perhaps some way to create a new form of media, maybe we could start fresh. Would that be easier than trying to fix our old platforms?
A lot of blame for the spread of these fake news articles has fallen on the more well-known social media sources such as Google, Facebook, and Twitter. That being said, our television news sources have been part of these political agendas for years. But the biggest question in all this fake news business is why a significant portion of the public did not seem to care about the deceit. A question I myself am curious about: if Americans knew they were being lied to, what stopped them from fact-checking on the internet to expose these articles’ lies? I’m not sure myself, but it’s an interesting question to ask. Despite all the chaos that fake news brought to the last election and continues to bring today, the biggest advantage we have right now is that people are discussing it. I think the conversation has been and will continue to be a productive one.
I’d like to see the term, ‘Fake News’ retired. If it’s fake, it’s not news. It’s lies, inventions, falsehoods, fantasy or propaganda. A humorist would call it, “made up shit”.
Reliable Sources - Who decides what is fake?
By Brian Stelter & the CNNMoney Media team via Jay Rosen @jayrosen_nyu
18:48 UTC - 16 Dec 15
I'm going to chime in here and say this is the crux of the issue. We shouldn’t be deciding what is fake or isn’t - we should be helping people communicate these thoughts more clearly to each other.
If 10 of my friends think a news source is lying, it doesn’t matter to me what “Distinguished Institution X” thinks. Nobody is going to trust a source that claims to be authoritative if it disagrees with them.
If we enable people to share their dispositions with their friends in a formal manner, we can help fix the problem. Trying to start with some high level, abstract, difficult-to-prove thing like climate change is never going to work. If we’re honest with ourselves, we have to admit that none of us (who aren’t researching climate change) have done any of those experiments. We believe it as an article of trust in the community of people we’ve interacted with. That's it. That’s why we believe in climate change. It sure as hell isn’t because “we’ve looked at the evidence and we know it’s there.” Almost nobody is in a position to read that evidence or to analyze experimental designs or to look at relationships between computer climate models and actual climate outcomes. We believe it because we trust the people saying it.
That doesn’t make it bad. That doesn’t make the beliefs wrong. We just need to come at this problem from the right angle.
Instead of saying “news X is fake”, we should be showing “these are the people, institutions, and groups who think it’s fake”, and showing the viewer’s relation to them in the social graph.
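A minimal sketch of surfacing that social-graph context; the graph structure and data shapes are illustrative assumptions:

```python
friends = {"me": {"ana", "ben", "cho"}, "ana": {"me", "dia"}}
disputed_by = {"story-42": {"ana", "dia", "SomeFactChecker"}}

def dispute_context(viewer, story):
    # Partition the people who dispute a story by their relation to the
    # viewer, instead of issuing one global "fake" verdict.
    disputers = disputed_by.get(story, set())
    direct = disputers & friends.get(viewer, set())
    second = ({f for fr in friends.get(viewer, set())
               for f in friends.get(fr, set())} & disputers) - direct
    return {"friends": direct,
            "friends of friends": second - {viewer},
            "others": disputers - direct - second}

print(dispute_context("me", "story-42"))
```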
Dec 2016 - Nieman Lab
Public trust for private realities
170212 - Medium
The rise of the weaponized AI Propaganda machine
There’s a new automated propaganda machine driving global politics. How it works and what it will mean for the future of democracy.
161216 - The Globe and Mail
Opinion: The fake war on fake news
By Sarah Kendzior @sarahkendzior
Call "fake news" what it really is -- propaganda
161208 - Quartz
If you want to save democracy, learn to think like a scientist
161130 - New Scientist
Seeing reason: How to change minds in a ‘post-fact’ world
(hillarybeattrump.org) by Jackie Brown
Although the United States of America’s constitution and laws regulate the country’s population, US residents differ from one another in regards to ethical and moral codes. Typically, American individuals identify with the political party that stands for similar ethics. For example, a person who identifies with the conservative party more commonly stands against abortion rights, whereas a liberal person is more often a pro-choice advocate. This all links back to their personal ethical beliefs, which can be influenced by a number of things, like a religious practice or any significant property of their personal lifestyle. Morality will never override public policy, because law and ethics are two completely different concepts, even if they often influence one another.
Long before fake news media was a universally recognized notion, there were issues with political controversy in media. Before expiring in 1801 for purportedly violating the First Amendment, the Sedition Act of 1798 made it illegal to write, print, utter, or publish anything that criticized the Congress or the US President. Although today this kind of criticism can be seen as an act of trolling, which is often considered bullying in other scenarios, federal laws cannot prevent or act on such behavior. Our society has come a long way in recognizing unethical bullying behavior, though it seems to be addressed more often in the physical world than in the virtual world.
This hateful behavior is continually given room to exist by technological growth and web-media channels. Moderation is nearly impossible given the prevalent anonymity of the internet, especially if we wanted to strictly regulate our own country’s internet activity. In efforts to distinguish fake from real news, one’s ethical code and moral beliefs shape one’s perception of the media.
A satirical journalist site has gone viral for generating news that exists in an alternate universe where Hillary Clinton won the presidential election last November. This webpage’s header reads “News From The Real America, Where The Majority Rules” and can be retrieved with the URL (www.hillarybeattrump.org). Regarding names and online advertising or publication, there are currently no set standards. Therefore, legally, but not ethically, creators can use shrewd article titles that can make the fake news appear real to the group of people they are writing to. This site is known for their dozens of fake news articles, however someone who has some awareness of US politics and fake news can distinguish the alternate reality of it. Someone who may be out of the knowledge circle (like a child, or a person from another country, etc.) may take this information seriously. The content addresses political figures, but figures of pop culture as well. Not only does this site carry a range of fictional information, it is clear that the satire can only be amusing to a person who does not support President Donald Trump.
Articles particularly attack conservatives, one example being Betsy DeVos, the conservative-leaning Secretary of Education. (HBT) In the site’s alternate reality, an article about her is titled “Betsy DeVos Blames Campus Rape on Women’s Higher Education” and incorporates many made-up quotations presented as DeVos’s statements on the college rape epidemic. The unethical behavior behind these invented quotes is simply unfair, and many lies are told about DeVos to make her seem unethical, especially to a liberal audience. Untrue statements are written as if DeVos made them herself, for example: “Devos writes, Look: When women were barred from attending college 40 years ago, no female students were raped, because there were no female college students. Thus, educating women is dangerous. The numbers clearly show that female college students are at fault. If we're going to stop campus rape, the key is returning to all-male campuses. That’s just science!” (HBT)
In order to fully wrap my head around these lies, I needed to find evidence proving the statements attributed to DeVos wrong. The New York Times published an article in September addressing Betsy DeVos and her stance on campus sexual assault, titled “Betsy DeVos Says She Will Rewrite Rules on Campus Sex Assault.” The article points out that “Ms. DeVos did not say what changes she had in mind. But in a strongly worded speech, she made clear she believed that in an effort to protect victims, the previous administration had gone too far and forced colleges to adopt procedures that sometimes deprived accused students of their rights.” (Saul and Goldstein) This is the needed proof that DeVos is not an unethical sexist and did not make such misogynistic statements.
The HBT site’s creator, who remained anonymous for her Washington Post interview, claims her intention was for the platform to be like “a joyful middle finger.” She continued: “I didn’t want to wallow or argue with people who can’t be argued with. There’s something about humor as confrontation that I instinctively thought would work — like a good right jab that I could keep using.” Although she saw no harm in creating this fake media in an alternate reality, others do not share her view. The website has been bashed on multiple occasions, and the anger seems aimed mostly at its anonymous creator. The platform is described as “seemingly designed to thrill liberals and progressives - and drive conservatives and Donald Trump supporters crazy” by Brett Barrouquere in his 2017 Chron article “In alternate web universe, Hillary Clinton is President.” Barrouquere goes on to criticize the creator, saying “whoever is managing the page is having a grand time at trolling Trump and other Republicans.”
Peter Hasson, associate editor of The Daily Caller, points out in his politics article “Fake News Site Lets Liberals Live In Alternate Reality Where Hillary Is President” that “the site’s articles single out prominent Republicans like Texas Sen. Ted Cruz and White House press secretary Sean Spicer for mockery.” Hasson also quotes the site’s own description: “In the midst of a Constitutional crisis, this is our response. Long live the true president, Hillary Rodham Clinton.”
Although the First Amendment sustains this website as lawful, it is plainly unethical for a platform of cruel lies and insults aimed at a particular political party to exist. Unfortunately, it is unrealistic to satisfy the entire nation’s standards when it comes to politics and morals. So until we come closer to equality and peace, there will continue to be arguments over ethics and law in the media.
WORKS CITED:
1. Brett Barrouquere, “In alternate web universe, Hillary Clinton is President,” Chron, 2017.
2. Peter Hasson, “Fake News Site Lets Liberals Live In Alternate Reality Where Hillary Is President,” The Daily Caller.
3. HBT - hillarybeattrump.org
http://www.hillarybeattrump.org/home/2017/7/19/tiffany-trump-files-papers-to-legally-change-her-name
4. Stephanie Saul and Dana Goldstein, “Betsy DeVos Says She Will Rewrite Rules on Campus Sex Assault,” The New York Times.
https://www.nytimes.com/2017/09/07/us/devos-campus-rape.html
While we are focused on rooting out fake news - with legitimate concern - news, real or fake, is protected under the First Amendment. More information is considered the antidote to bad information. SCOTUS took that to extremes with its “more money = more speech” philosophy.
That all said, the road to hell is often paved with good intentions. While I support the endeavor to ensure quality information is distributed, we must also be cautious not to lead ourselves into the waters of censorship. What may start out as a framework for determining truth or credibility could easily - especially under this Administration - turn far more Orwellian. The tools and algorithms we develop for good could be used for straight-up censorship OF the truth.
Regardless of what comes of the overall debate around fake news, we must build in transparency, accountability and a process that ensures that the purpose of the First Amendment - to engage and inform our citizenry with a marketplace of ideas - continues without censorship. +Diane R +@linmart
Recommended
161122 - NYT
Facebook said to create censorship tool to get back into China
The First Amendment’s right to free speech raises many questions about whether limits should be placed on that speech. Consider hate speech and slander: laws exist limiting threats and hate speech, and someone can sue another person for publishing fake news about them that could harm their reputation. However, such cases usually proceed case by case, because the news has to be proven false in order to win in court. There are also the instances involving public figures or people with significant governmental or societal power. Should they be held accountable in the same way as an average citizen with regard to First Amendment rights, or are their rights different because they hold influential power and are in the public eye, able to spread their message to a far larger and wider audience?
Jenna Ellis’s article for Fox News, “Trump is not threatening the First Amendment; Americans’ ignorance of what it means most definitely is,” accuses American citizens of not fully understanding the First Amendment when criticizing tweets and speeches by President Trump. Ellis explains that citizens fear for their right to free speech, worrying that Trump is attempting to regulate the press and limit citizens’ First Amendment rights. According to Ellis, however, Trump is only concerned with limiting the ability to post ‘fake news,’ much like the laws already in place against libel and slander.
In the case of fake news, the article explains that President Trump wants more accountability from the outlets that publish and post it. For example, he accuses NBC of posting fake news and wants to hold the network more accountable for what its journalists write and its reporters say on air. That raises the question of whether companies and publishers should be held accountable for fake news they did not write themselves. One argument is that publication companies and news stations should check facts more rigorously and ensure they know the truth behind each article they post. Another is that the writer or broadcaster who created the story should be held accountable, since they are the ones who could knowingly be sharing fake news with their companies and audiences. Unfortunately, there are instances where journalists interview supposedly ‘reliable’ sources who give them incorrect information, and the wording of an article can also be misinterpreted. If the article is about a certain individual, that individual could sue the journalist for falsely representing information about them, even if the piece was based on truth but included parts that stretch the truth or misrepresent a situation.
With regard to Donald Trump’s Twitter account, some critics argue that some of his tweets contain incorrect information about the government or the United States. Others note that it is hard to tell which tweets are facts and which are opinions given by the president on his personal account. This raises the question of whether Trump should post information about the United States on that account at all, since he could be misinformed and would then be announcing misinformation as the leader of this country. In my own opinion, his tweets should be regulated, and he should be held accountable for the misinformation he posts. For example, Ellis describes how one Trump tweet raised fears that he was trying to take away citizens’ right to free speech, when it was only a vaguely expressed opinion about his frustration with fake news. As leader of the country, his opinions are taken by citizens to represent how he wants to govern, so he should be held more accountable when expressing opinions on any matter; people can read those opinions as motivation for new laws and regulations. While more rules for public figures and what they may say with the power and authority they hold might help, such rules would also limit their free speech as citizens. That raises a further question: should public figures have unrestricted speech only on private accounts, and not on verified social media accounts, in front of audiences, or on live broadcasts?
An interesting and provocative take over the gray areas in the legal protections for the press and fears in the Age of Trump - FB Dan Rather @DanRather
161211 - Politico
Donald Trump’s real threat to the press
One major legal issue that fake news brings to the forefront is libel: a published statement that is false and harms a person’s reputation. Libel has been an issue since the beginning of the United States, with roots dating back to the Sedition Act of 1798, in which Congress made it a crime to write any “false, scandalous and malicious” statements about the President or Congress. Predictably, these same issues are even more present today, as more people have access to a variety of ways to publish information.
Fake news organizations attempt to avoid libel lawsuits in several ways. One of the most frequently used tactics is to host the websites that publish inaccurate news about political events and people in the United States (often presidential candidates from the 2016 election) on servers outside the United States. In December 2016, NBC News ran a story about an anonymous teen in Macedonia who had made thousands of dollars over the previous six months by publishing stories, most of them shared on Facebook over and over, that were untrue and often very damaging to the reputations of presidential candidates. Most of the time, these articles garnered clicks by targeting Hillary Clinton.
Most of these cases are textbook examples of what we in the United States call “libel”. Article titles ranged from somewhat believable scenarios such as "JUST IN: Obama Illegally Transferred DOJ Money To Clinton Campaign!" to clear cases of “clickbait” such as "BREAKING: Obama Confirms Refusal To Leave White House, He Will Stay In Power!". Rather than being punished for these articles, however, this teen has been living lavishly in his home country, preying on the curiosity of Trump supporters and Facebook users in the United States and, apparently, around the world.
Libel laws gained a lot of attention when, in an early 2016 speech, then presidential candidate Donald Trump said that if elected he would “open up” libel laws. Given the many dust-ups Trump had already had with news organizations, this statement predictably raised eyebrows, suggesting a coming wave of lawsuits involving the soon-to-be president-elect.
Trump’s presidency is sure to bring libel to the forefront repeatedly over his tenure in office. It is a perfect storm of factors that can lead to lawsuits and to opinions from people on both sides about what constitutes libel and who should be disciplined for what they say. For one, President Trump is very outspoken, notably through his continued use of his personal Twitter account @realDonaldTrump, where he has already used the term “fake news” many times, accusing many people and organizations of practicing it. In the modern United States, we have never seen a president so outspoken in accusing people of attempting to damage his reputation.
Trump himself has been no stranger to controversy. He has said many things that have caused outrage, both political and ethical in nature. These controversies have prompted many media members to call him out, and when fake news is available to everybody in a matter of seconds, it is easy to assume that much of it will be brought to the attention of Trump himself, who, as noted earlier, is already on record strongly advocating against libel. This makes fake news all the more dangerous.
In the last few years, and especially during the 2016 presidential election, fake news has been a widespread concern. First of all, the definition of “fake news” itself is debated. When we hear it mentioned in the media (namely on the POTUS’s Twitter account, where the term is used often), it sometimes means any news article unsympathetic to the values, ideals, or goals of the poster, even when the reporting is factually accurate. When we, students and editors of media, talk about fake news, we mean the publication of an actually false report intended to be taken as truth. Fake news tries to pass off made-up stories as factual information. Doesn’t that make it libel or slander? Wouldn’t that make it extremely easy to prosecute? Not entirely.
Libel is defined as a published false statement that damages a person’s reputation. So while fake news is definitely the publication of false statements, it is not always defamatory to the character of others. Often, though, it is. Why then are we not quick to take action against these false publications? The answer is complicated. The law is not exactly crystal clear on internet libel; it is often difficult to track down anonymous content posters, and people who aim to circulate fake news often have their sites hosted in places they know the US government cannot trace. Often, as long as the website itself did not post the offending content, it cannot be held responsible for the user who did. This makes it exceedingly difficult to prosecute those who perpetuate fake news stories.
Another concern with widespread fake news is how powerful it can be. If the general public does not follow fact-checking steps when reading articles, they contribute to the power fake news has over them. More and more often, fake news articles (posing as legitimate, truthful information) are shared on Facebook, Twitter, and other media outlets. They are created with attention-grabbing, often outlandish headlines in the hopes of generating the clicks, likes, and shares that earn the host site revenue from ads and related sources. During the 2016 presidential election, headlines making absurd claims, such as Pope Francis backing Donald Trump as a candidate or Hillary Clinton selling guns to ISIS, were seen all over the internet. These articles became so prevalent that many believe they actually helped sway the election in Trump’s favor. The general news reader needs to be constantly wary of the information presented to them as “news.”
That said, we need to be perfectly clear when referring to fake news. As mentioned earlier, its definition differs from person to person, and searching for one turns up a broad array of interpretations. For example, there is satirical news (e.g. The Onion), which makes no attempt to be harmful and is nothing more than a clever joke, but which can genuinely fool a reader who does not realize they are reading satire. This is why satirical websites should be required to state their nature clearly enough to be understood. On the other hand, there are articles that are completely fabricated and designed with the intent to do harm. These are the types of fake news that can have the most negative impact, but we should also worry about the harm fake news as a whole has done to the public’s trust in the mass media.
161208 - The Washington Post
This is what happens when Donald Trump attacks a private citizen on Twitter
Via Farhad Manjoo @fmanjoo Tech writer for the NYT
161204 - The Guardian
The trolling of Elon Musk: how US conservatives are attacking green tech
161206 - The Guardian
Megyn Kelly accuses Trump social media director of inciting online abuse
The Fox News host says Dan Scavino is ‘a man who works for Donald Trump whose job it is to stir up’ nastiness and threats, and urges him to stop
161213 - MIT Technology Review
If only AI could save us from ourselves
Google has an ambitious plan to use artificial intelligence to weed out abusive comments and defang online mobs. The technology isn’t up to that challenge—but it will help the Internet’s best-behaving communities function better.
170115 - BuzzFeed
How to use Facebook and fake news to get people to murder each other
In South Sudan, fake news and online hate speech have helped push the country toward genocide amid a three-year civil war, according to independent researchers and the United Nations. “Social media has been used by partisans on all sides, including some senior government officials, to exaggerate incidents, spread falsehoods and veiled threats or post outright messages of incitement,” reads a separate report by a UN panel of experts released in November.
170112 - VOA
Researchers create South Sudan hate speech lexicon
European Court of Human Rights - Hate Speech
Non-partisan News and Information Objectivity
Could major news producers, distributors and advertisers come together to fund a ratings agency system, organized to objectively analyze the spread of misinformation in society?
Working like financial credit agencies or the Better Business Bureau, could ratings agencies provide valuable services to both consumers and organizations?
In order for society to progress in the best possible direction we must seek the truth. Covering up, misinforming, or selectively informing the public may be better for the short term, but for best long term results the full truth must be known.
How does the public know what the truth is? The sole purpose of “the media” or an individual journalist is to inform the public. This brings up the question “who gets to decide what counts as ‘pertinent information?’” Zero bias is not possible, but the best we can hope for is to acknowledge this and get as close as possible to unbiased.
Members of the media need to get together and create some sort of national or even international governing body. Similar to how the practice of law is regulated by bar associations, I would argue that we must demand equal integrity and scrutiny of our media members. This can never be controlled by the government, and should be geared only toward regulating integrity issues. It cannot make it illegal to “practice journalism”; it would be more of a stamp of approval. There should be no restriction on freedom of speech.
As previously mentioned, the government cannot be involved in this. The members of the media need to compete amongst each other to gain the viewer’s trust. Therefore, journalists from a variety of political viewpoints need to be at the head of an organization that does the fact checking and investigates misconduct. There will be no restriction of what is allowed to be reported, but simply a removal of this approval stamp due to misconduct.
Similar to other organizations (MLB, NBA, NRA, etc.), the governing body would need to have elections for some sort of panel to ensure every flavor of perspective (bias) is represented. Subject matter experts need to be consulted, and ideas put to a vote on what counts as misconduct, and how exactly to deal with suspected individuals or incidents. Just like how there are ethics involved in practicing law, enforcing the law, military ethics, etc., this panel needs to agree on journalism ethics. I would also suggest that it should be made transparent to the public who contributes money to the news source, or possibly even restrict who is allowed to contribute financially to a news source just like we wouldn’t want shady contributions given to a judge or police officer.
In short, the media needs to control itself. This is not a Facebook issue, or a Twitter issue, or even a fake news issue. This is a leadership issue. Although clickbait, fake news, and other forms of misinformation are a problem, they are not the root of the problem. + @IntugGB
Keywords:
Blockchain
Smart Contracts
Crowd Knowledge
Machine Learning
Below are some ideas we have been working on for a while: @nickhencher
Without collective skin in the game it is unlikely this one initiative will work - this initiative should offer bottom line benefit. Fake news is not the only problem.
“SITA or Société Internationale de Télécommunications Aéronautiques, was founded in February 1949 by 11 airlines in order to bring about shared infrastructure cost efficiency by combining their communications networks.
A shared cooperative initiative for news would seem to offer wide and far reaching benefits”
Knowledgeable, validated but anonymous validation and fact checking by the crowd
News articles are graphed and outliers are … (?)
Users (checkers) graphs are created - scoring and matching against articles being questioned. Interests. Location. Knowledge.
Use BlockOneId (Reuters is open to the use of this technology, contact: Ash) to ensure anonymity of fact checkers and ensure collusion is prevented.
Fact checkers have to be rewarded - blockchain and smart contracts can facilitate this.
Leverage machine learning to graph stories. Look for stories that are outliers, flag them as exceptional, and begin further validation checks; these can be both automated and human (crowd). Reward checkers. (A minimal sketch of the outlier-flagging step follows this list.)
Fact checking is not simple and is time consuming - assuming readers will do this is not a solution.
Assume that fake news is going to become more sophisticated and weaponised. At the moment this initiative seems focused on static event news; this is going to move to live events, and there will be consequences - fake news consumed at an event can quickly turn a demo into a riot. Location data will become essential when validating eyewitness accounts.
Hand in hand with the above, it should be possible to allow positive validation of an article. If someone reads an article they can check in as having read it, and this engagement can then be taken down multiple paths.
Turn the model on its head and give fact checkers the revenue from the fake news stories. One reason (state-sponsored operations excluded) these stories are produced is money: delay the money and change the terms
(similar to Youtube model with pirated content)
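A minimal sketch, in Python, of the outlier-flagging step suggested above. This is an illustration only, not part of any existing system: it assumes stories arrive as plain text, uses TF-IDF vectors as a crude stand-in for a proper story graph, and all names are hypothetical. A real version would need the richer features mentioned above (location, interests, checker knowledge).

# Sketch: flag outlier stories for further (automated + crowd) validation.
# Assumes scikit-learn is installed; TF-IDF stands in for real story features.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import IsolationForest

def flag_exceptional_stories(story_texts, contamination=0.05):
    """Return indices of stories that look like outliers in the corpus."""
    vectors = TfidfVectorizer(max_features=5000).fit_transform(story_texts)
    # IsolationForest scores points by how easily they are isolated;
    # fit_predict returns -1 for outliers and 1 for inliers.
    model = IsolationForest(contamination=contamination, random_state=0)
    labels = model.fit_predict(vectors)
    return [i for i, label in enumerate(labels) if label == -1]

# Flagged stories would then be routed to rewarded human fact checkers,
# e.g. via the blockchain / smart-contract mechanism suggested above.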
Wikipedia: How to Identify Reliable Sources (Mel @mkramer)
Ideas:
Solutions for Fake News
Fake news is becoming an epidemic, as online media makes it ever easier to create propaganda that misleads readers about the truth. Finding solutions is proving troublesome: sifting fake news from real news is much like sifting sand in a desert. Understanding the reasons fake news is created might play a key role in discouraging it. Another problem is that people are not well enough informed about fake news or how to avoid falling for it, whether through a lack of knowledge or a lack of concern.
Generating revenue seems to be one of the leading reasons fake news has become so popular. Many “click-bait” sites are built to generate revenue from ads or product endorsements by tricking people into visiting with false headlines. The more popular a site becomes, the more incentive its creators have to flood the internet with more such websites to increase profit.
One solution would be to mandate that sites without properly certified or researched information, such as blogs, place disclaimers before a visitor enters. This would require domain hosts to perform checks on websites that register as blogs. It would allow free speech to continue on the internet, and creators could still prosper from ad revenue, while visitors would at least be informed that they were entering an unregistered site. Larger sites would have to step up with a method of alerting users to external links that may be false or are not certified as non-“click-bait” sites.
Fake news is the newest form of shanghaiing people: tricking them into doing something they do not wish to do, such as reading false information on something they believe to be true. The most efficient way to stop this epidemic is to teach people what to look out for, starting with big warning signs like false titles. Some attempts at this have already been made, as seen in a series of seminars by Dr. Kyle Moody, Assistant Professor of Communications Media at Fitchburg State University, held in various Massachusetts towns earlier in 2017. Fake news preys on people seeking information of interest; without being educated on today’s media, they are shooting arrows in the dark.
By: @Ubiquitous
Linked-Data[9] is a technology that produces machine and human readable information that is embedded in webpages. Linked-Data powers many of the online experiences we use today, with a vast array of the web made available in these machine-readable formats. The scope of linked-data use, even within the public sphere, is rather enormous[10].
Right now, most websites are using ‘linked data’ to ensure their news is being presented correctly on Facebook[11] and via search, which is primarily supported via Schema.org[12] [13].
The first problem is that these ontologies do not support concepts such as genre[14]. This means in turn that rather than ‘news’ being classified[15], as it would be in any ordinary library or newspaper, the way ‘news’ is presented in a machine-readable format is particularly narrow and without (machine-readable) context.
This means, in-turn, that the ability for content publishers to self-identify whether their article is an ‘advertorial’, ‘factual’, ‘satire’, ‘entertainment’ or other form of creative work - is not currently available in a machine-readable context.
This is similar to the lack of ‘emotions’ provided by ‘social network silos’[16] for understanding ‘sentiment analysis’[17][18] through semantic tooling that offers the means to profile environments[19] and provides tooling for organisations. Whilst Facebook offers the means to moderate particular words for its pages product[20], this functionality is not currently available to humans (account holders).
The mixture of a lack of available markup language for classifying posts, alongside the technical capabilities available to ‘persona ficta’ in a manner that is not similarly available to Humans, contributes towards the lack of ‘human centric’ functionality these platforms currently exhibit.
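Returning to the genre problem above: a minimal sketch, in Python, of what self-identified genre could look like in a page’s embedded JSON-LD. Schema.org does define a generic ‘genre’ property on CreativeWork, but, as argued above, there is no agreed machine-readable vocabulary for news genres such as ‘advertorial’ or ‘satire’; the values below are purely illustrative.

# Sketch: schema.org NewsArticle markup with a self-identified genre.
# The "genre" value is illustrative; no standard vocabulary exists for it.
import json

article_markup = {
    "@context": "http://schema.org",
    "@type": "NewsArticle",
    "headline": "Example headline",
    "publisher": {"@type": "Organization", "name": "Example Publisher"},
    "genre": "satire",  # hypothetical self-classification by the publisher
}

# This JSON-LD would be embedded in the page inside a
# <script type="application/ld+json"> tag for machines to read.
print(json.dumps(article_markup, indent=2))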
Bad Actors and Fact-Checking
In dealing with the second problem (In association to the use of Linked-Data), the means in which to verify claims is available through the application of ‘credentials’[21] or Verifiable Claims[22] which in-turn relates to the Open Badges Spec[23].
These solutions allow an actor to obtain verification from third parties, giving their audience greater confidence in the claims represented by their articles. Whether the task is to “fact check” words, ensure images have not been ‘photoshopped’[24], or other ‘verification tasks’, one or more reputable sources could use verifiable claims to help the end-user (reader / human) gain confidence in what has been published. Pragmatically, this can be done either locally or via the web through third parties using Linked-Data. For more information, get involved in W3C[25]; you’ll find almost every significant organisation involved with Web Technology debating how to build standards to define the web we want[26].
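A loose sketch, in Python, of what such a third-party claim might look like, shaped after the W3C Verifiable Claims / Verifiable Credentials data model[22]. Every identifier and value below is a hypothetical placeholder, not a real issuer, article, or rating scheme.

# Sketch: a machine-readable claim by a fact-checking organisation about an
# article. Shape loosely follows the W3C Verifiable Credentials data model;
# all identifiers are hypothetical and the signature is elided.
fact_check_credential = {
    "@context": "https://www.w3.org/2018/credentials/v1",
    "type": ["VerifiableCredential"],
    "issuer": "https://factchecker.example/orgs/42",
    "issuanceDate": "2017-03-01T00:00:00Z",
    "credentialSubject": {
        "id": "https://news.example/articles/123",  # the article being checked
        "claimReviewed": "Images in the article are unaltered",
        "reviewRating": "accurate",  # illustrative rating value
    },
    "proof": {
        "type": "RsaSignature2018",
        "jws": "...",  # elided: issuer's cryptographic signature
    },
}

# A reader's client could fetch and verify credentials like this one to gain
# confidence in what has been published, locally or via third parties.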
General (re: Linked Data)
If you would like to review the machine-readable markup embedded in the web you enjoy today, one of the means to do so is via the OpenLink Data Sniffer[27]. An innovative concept for representing information was produced by Ted Nelson[28] via his Xanadu concept[29].
Advancements in computing technology may make it difficult to trust media sources[30] in an environment that seemingly has difficulty understanding the human-centric foundations of our world, and where the issues highlighted by many, including Eben Moglen[31], continue to grow. Regardless of the technical means we have to analyse content[32], it will always be important that we consider virtues such as kindness[33]; and it is important that those who represent us put these sorts of issues[34][35] on the agenda, of which “fake news” has become yet another example (or symptom) of a much broader problem (imho).
A simple (additional) example of how a ‘graph database’ works is illustrated by this DbPedia example[36]. The production of “web 3.0”[37] is remarkably different from former versions due to the volume of pre-existing web users. Whilst studies have shown that humans are not really that different[38], the challenge becomes how to fund the development costs of works that are not commercially focused (ie: in the interests of ‘persona ficta’[39]) in the short term, and to tackle issues such as ‘fake news’ or even how to find a toilet[40]. As ‘human centric’ needs continue to be unsupported via the web, and likewise by the emerging intelligent assistants[41] working on the same datasets, the problem technologists have broadly produced becomes that of a world built for things that ‘sell’, without support for things we value: whether it be support for how to help vulnerable people, receipts that don’t fade (ie: machine-readable rather than thermal), civic services, the means to use data to uphold ‘rule of law’, to vote and participate in civics, or the array of other examples where we have the technology but not the accessible application with which to apply it to social/human needs.
Indeed the works we produce and contribute on the web are for the most-part provided not simply freely, but at our own cost. The things that are ‘human’ are less important and indeed, poorly supported.
This is the bigger issue. We need to define means to distil the concept of ‘dignity’ on the web. Apps such as Facebook often have GPS history from our phones; does that mean the world should use that data to identify who broke into a house? If it is said you broke a speed limit in your vehicle when the GPS records show you were somewhere else, how should that help you?
Developing stories, research, ongoing initiatives, tools, etc. all related with the project.
Please post links with a reference to who posts. MT indicates the article has been brought in from a post on Twitter. Some entries include a description of the source, along with areas of expertise.
< verified account < key contact < collaborator
161125 The Guardian - Facebook doesn't need to ban fake news to fight it via @charlesarthur Freelance tech journalist; The Guardian's Technology editor 2009-14 <<
161123 MIT Review - Facebook’s content blocking sends some very mixed messages via @techreview <
161122 Reuters - Facebook builds censorship tool to attain China re-entry MT @dillonmann Communications Director @webfoundation
161122 NYT - Facebook said to create censorship tool to get back into China - MT @lhfang Investigative Journalist. @theintercept lee.fang@theintercept.com
161119 Recode - Here’s how Facebook plans to fix its fake-news problem - Steffen Konrath @LiquidNewsroom <
160520 Guardian - The inside story of Facebook’s biggest setback - MT @GrahamBM << Founder of Learning Without Frontiers (LWF)
161123 VB - Twitter Cortex team loses some AI researchers MT @LiquidNewsroom <
161107 The Washington Post - This researcher programmed bots to fight racism on Twitter. It worked. MT @mstrohm
161013 Google - Journalism & News: Labeling fact-check articles in Google News By Richard Gingras, Head of News, Google
Google Support : What does each source label (e.g., “blog”) mean?
Source labels are a set of predefined, generally understood terms that describe the content of your news site and serve as hints to Google News to help classify and show your content.
170113 The Guardian - Self-segregation: how a personalized world is dividing Americans
161122 NYT Magazine - Is social media disconnecting us from the big picture? - MT Howard Riefs @hriefs Director, Corporate Communications @SearsHoldings
161120 NPR - Post-election, overwhelmed Facebook users unfriend, cut back - MT @newsalliance <
161112 Medium - How we broke democracy
161116 Tiro al aire: Romper la burbuja [Shot in the air: Bursting the bubble] - by @noalsilencio
The Filter Bubble: What the Internet is hiding from you - Slide presentation by @EliPariser <<< MT @noalsilencio
170222 Seeker How a Twitter algorithm could bring Democrats and Republicans closer
together By Kiran Garimella
Via @gvrkiran
161123 Medium - Detecting fake viral stories before they become viral using FB API by @baditaflorin << Data Scientists at Organised Crime and Corruption Reporting Project (OCCRP)
161123 - Medium - How I detect fake news by @timoreilly << Founder and CEO of O’Reilly Media @OReillyMedia
170317 - Could an auto logic checker be the solution to the fake news problem? @CrispinCooper (apologies for self promotion)
Common Crawl - @msukmanowsky
Page Rank - authority of web domain - @timoreilly via @elipariser
161121 - Slate - Countries don't control the Internet. Companies do.
141023 - Wired - The laborers who keep dick pics and beheadings out of your Facebook feed
A site with readings and resources about verifying information that circulates via social media
Verification Handbook [PDF] - @Storify
Wikipedia: Identifying reliable sources
161123 NPR - We tracked down a fake news creator in the suburbs. Here's what we learned
161123 Medium - Fixing fake news: Treat the problem not just the symptom
161122 Medium - Fake news is not the only problem by @gilgul Chief Data Scientist @betaworks, co-founder @scalemodel | Adjunct Professor @NYU | @globalvoices
161120 NYT - How fake stories go viral
161111 Medium - How we broke democracy MT @TobiasRose
150215 Tow Center for Digital Journalism - Lies, damn lies and viral content - @TowCenter via Steve Runge
161118 Medium - Does fake news on Facebook make smart people stupid?
170212 Medium - The rise of the weaponized AI Propaganda machine
There’s a new automated propaganda machine driving global politics. How it works and what it will mean for the future of democracy.
170130 The myth that British data scientists won the election for Trump
A piece of data science mythology has been floating around the internet for several weeks now. It surfaced most recently in Vice, and it tells the story of a firm, Cambridge Analytica, that was supposedly instrumental in Donald Trump’s campaign.
MT Jonathan Albright @d1gi
170128 Motherboard - The data that turned the world upside down
161118 - Medium - How the Trump campaign built an identity database and used Facebook ads to win the election Joel Wilson @MedicalReport Consumer protection litigator. Former deputy attorney general in Trenton. Via @WolfieChristl Researcher, activist
161117 Medium - What’s missing from the Trump election equation? Let’s start with military-grade psyOps by Jonathan Albright @d1gi
Too many post-election Trump think pieces are trying to look through the “Facebook filter” peephole, instead of the other way around. So, let’s turn the filter inside out and see what falls out.
160201 The Guardian - Ted Cruz erased Trump's Iowa lead by spending millions on voter targeting
161118 The Drive - These are the lobbyists behind the site attacking Elon Musk and Tesla via @ElonMusk Tesla, SpaceX, SolarCity, PayPal & OpenAI
161120 Never mind the algorithms: The role of click farms and exploited digital labor in Trump's election - MT @FrankPasquale < Author The Black Box Society: The Secret Algorithms Behind Money & Information
161124 Washington Post - Russian propaganda effort helped spread ‘fake news’ during election, experts say
MT @jonathanweisman Deputy Washington Editor, The New York Times via Centre for International Governance Innovation @CIGIonline
161002 RT - Russian server co. head on DNC hack: ‘No idea’ why FBI still has not contacted us
Key takeaway - Take note of verification procedures:
[Vladimir Fomenko, owner of the Russian server company implicated in the DNC hack told RT] he was as surprised to learn from US media that his company was somehow implicated. He also believes that the only connection to Russia the Americans really have is the servers being from there.
“Thinking that the criminals must likewise also be from Russia is just absurd,” he says. “No one blames Mark Zuckerberg when criminals use Facebook for their own ends? … As soon as we learnt our servers were involved, we disconnected the perpetrators from our equipment. And conducted our own investigation. We have learnt certain things and are ready to share it with special services at their first call.”
161123 - Digiday - 'It was a fad': Many once-hot viral publishers have cooled off - via @Digiday
161111 - The Verge - Understanding how news goes viral: Facebook buys CrowdTangle, the tool publishers use to win the internet - MT @betaworks
Lies, damn lies, and viral content [PDF - 168 pages] by Craig Silverman, Tow Center for Digital Journalism
170106 The Guardian - Will satire save us in the age of Trump?
161123 The Media Briefing - Could satire get caught in the crossfire of the fake news wars? - MT @JeffJarvis
La démocratie des crédules [The Democracy of the Credulous] by Gérald Bronner
Excerpt from the book [translation]:
Why do conspiracy myths invade the minds of our contemporaries? Why does political coverage tend to turn into celebrity coverage? Why are people so persistently suspicious of science? How could a young man claiming to be the son of Michael Jackson, and to have been raped by Nicolas Sarkozy, be interviewed on a major 8 p.m. television news program? How, generally speaking, do imaginary or invented facts, even outright false ones, succeed in spreading, attracting public support, influencing policy decisions - in short, shaping part of the world in which we live? Was it not reasonable to hope that, with the free flow of information and rising levels of education, democratic societies would tend toward a form of collective wisdom?
This invigorating essay draws on numerous examples to answer all these questions, showing how the conditions of contemporary life have allied with the intimate workings of our brains to make us dupes. It is urgent that we understand this. If you have only one book to read, this is it.
Interview [Audio in French]: Les Matins de France Culture - Gérald Bronner
Predictably Irrational: The hidden forces that shape our decisions by Dan Ariely @danariely << Professor of Psychology and Behavioral Economics
Thinking, Fast and Slow by Daniel Kahneman Professor of Psychology and Public Affairs
Engaging the reader in a lively conversation about how we think, Kahneman reveals where we can and cannot trust our intuitions and how we can tap into the benefits of slow thinking. He offers practical and enlightening insights into how choices are made in both our business and our personal lives―and how we can use different techniques to guard against the mental glitches that often get us into trouble.
170113 Fortune - What’s driving fake news is an increase in political tribalism by Mathew Ingram @mathewi
“Researchers argue that this powerful desire to be seen as a member of a specific group or tribe influences the way we behave online in a variety of ways, including the news we share on social networks like Facebook. In many cases it's a way to signify membership in a group, rather than a desire to share information.”
170111 NYT - The real story about fake news is partisanship
“Today, political parties are no longer just the people who are supposed to govern the way you want. They are a team to support, and a tribe to feel a part of. And the public’s view of politics is becoming more and more zero-sum: It’s about helping their team win, and making sure the other team loses.
… Partisan bias fuels fake news because people of all partisan stripes are generally quite bad at figuring out what news stories to believe. Instead, they use trust as a shortcut. Rather than evaluate a story directly, people look to see if someone credible believes it, and rely on that person’s judgment to fill in the gaps in their knowledge.”
161114 Medium - I’m sorry Mr. Zuckerberg, but you are wrong - MT Danah Boyd @zephoria <<
160816 Nieman Lab Designing news products with empathy: How to plan for individual users’ needs and stresses
160518 - How technology hijacks people’s minds — from a magician and Google’s design ethicist by Tristan Harris @tristanharris < Ex-Design Ethicist @Google <<
161122 Medium - An open letter to my boss, IBM CEO Ms. Ginni Rometty MT @katecrawford Expertise: machine learning, AI, power and ethics
161113 Medium - The code I’m still ashamed of
170214 Politico - Have TV media had their fill of Kellyanne? Via Poynter Institute @Poynter
161205 MSNBC - Trump allies defend his election lie as ‘refreshing’
161201 Bill Moyers - Trump’s seven techniques to control the media via Robert Reich @RBReich
161122 Medium - What journalism needs to do post-election by @Brizzyc Social Journalism Director at CUNY MT @jeffjarvis
161122 The Washington Post - What TV journalists did wrong — and the New York Times did right — in meeting with Trump - MT @JayRosen << Professor of Journalism at @NYUniversity
161121 In Trump territory, local press tries to distance itself from national media
161109 NYT - A ‘Dewey defeats Truman’ lesson for the digital age MT Karen Rundlet @kbmiami Journalism Program Officer @KnightFdn
161116 Vox - For years, I've been watching anti-elite fury build in Wisconsin. Then came Trump.
161115 CJR - Q&A: Chris Arnade on his year embedded with Trump supporters - MT @mlcalderone Senior media reporter, @HuffingtonPost; Adjunct, @nyu_journalism <<
161123 CNN - What about the black working class? - MT @tanzinavega CNN National reporter race/inequality
161123 Washington Post - Journalists report Google warnings about ‘government-backed attackers’
161122 - CJR - Maneuvering a new reality for US journalism by @NicDawes Media at Human Rights Watch (@hrw) MT @astroehlein European Media Director, @HRW <<
161123 - [IND] The Times of India - Ban on misleading posts: Collector served legal notice- MT @jackerhack Co-founder @internetfreedom <
161122 - Quartz - Stanford researchers say young Americans have no idea what’s news - MT @MatthewCooney Principal @DellEMC
160829 - Common Sense Media - How to raise a good human in a Digital World - MT @CooneyCenter
Everyone is interested in this right now, so I think it’d be useful to have a list of those wanting to actively work on it, and any other resources that could be applied to the task.
Please email jared@tribeworthy.com to get in touch.
First Draft formed as a nonprofit coalition in June 2015 to raise awareness and address challenges relating to trust and truth in the digital age. These challenges are common to newsrooms, human rights organizations and social technology companies and also their audiences, communities and users. We offer quick reference resources, case studies and best practice recommendations on firstdraftnews.com.
In September 2016 we launched the First Draft Partner Network, the first of its kind to bring together the largest social platforms with global newsrooms, human rights organizations and other fact-checking and verification projects around the world. The Partner Network is based on the idea that the scale of the challenges that society faces around filtering factual information and authentic content can only be tackled via a global collaboration of organizations working together to find solutions. Our partners are journalism, human rights and technology organisations that have an international remit and work at the intersection of information distribution and social media --@cward1e
To all of you who are making this possible. None of us is smarter than all of us.
And for being our inspiration and allowing this dream to spread far and wide:
Eli Pariser @elipariser √
CEO and Founder
Upworthy
New York
Andrew Rasiej @Rasiej
Co-Founder Civic Hall
Personal Democracy Forum
Senior Advisor Sunlight Foundation
New York
Micah L. Sifry @Mlsif
Co-Founder Civic Hall
New York
Lenny Mendonca @Lenny_Mendonca
Director Emeritus
McKinsey & Company
San Francisco
New York
Craig Newmark @craignewmark √
Founder of Craigslist and Craigconnects
San Francisco
Assistant Professor (Media Analytics) at Elon University in North Carolina
Expert in data journalism
Editor-at-large TechCrunch
Zeynep Tufekci @zeynep
Sociology Associate Professor University of North Carolina @UNC
Contributing Op-Ed Writer New York Times @NYTimes
Former Fellow Berkman Klein Center @BKCHarvard
Author of forthcoming book on Networked Social Movements
Kent Grayson @KentGrayson
Associate Professor of Marketing - Kellogg School of Management @Kellogg
Northwestern University
Faculty Coordinator TrustProject.org
Taylor Owen @Taylor_Owen
Assistant Professor of Digital Media & Global Affairs - University of British Columbia @UBC
Senior Fellow Tow Center for Digital Journalism @towcenter
Founder and Editor in Chief of OpenCanada.org @OpenCanada
Author of Disruptive Power: The Crisis of the State in the Digital Age
Connie Moon Sehat
News Frames Director
Global Voices Online
Global Voices @GlobalVoices
Sameer Padania @sdp
External Assessor for the Google Digital News Initiative's Innovation Fund
Program Officer Program on Independent Journalism - Open Society Foundations (OSF) @OpenSociety
Christoph Schlemmer @schlemmer
Journalist
Fellow Reuters Institute for the Study of Journalism (RISJ) @risj_oxford
University of Oxford
Business Reporter Austrian Press Agency
Denice W. Ross @denicewross
Public Interest Technology fellow at New America
Co-founder #PoliceData Initiative
Sally Lehrman @JournEthics
Fellow TrustProject.org
[so many more still pending]
[letter] links to related articles - @Media_ReDesign
Ad Ecosystem
Ad-buying system
Metrics
Online ad exchanges
AdScience
AppNexus
DoubleVerify
Moat
Algorithm
Artificial Intelligence (AI)
Common Crawl
DNS Entry
Domain-expertise
Open Graph Standard
Page authority
Page domain
Web domain
Web traffic
Metadata
Social graph
User Generated Content - UGC
Context
Information ecosystem [16 FEB 17]
Disinformation
‘Crust of lies’ [20 JAN 17]
Deception
Falsehoods
Fiction
Lies
Obedience
Cynicism
Alternate reality
Gaslighting [27 JAN 17]
Backfire effect
Behavioral Economics
Cognitive bias
Cognitive dissonance
Confirmation bias
Contact Hypothesis
Echo-chamber (media)
Filter bubble
Framing [18 FEB 17]
Groupthink
Ideological bubble
Ideological frames
Information Ghetto
Opinion corridor (Sweden)
Perspective
Preconceptions
Self-segregation
Spiral of silence
Splinternet - cyberbalkanization
Tribes
Censorship
Opacity
Surveillance
Hannah Arendt
Vaclav Havel
George Orwell
Equal Time Rule
Partisan divide
Partisan refraction
Partisan tribalism
Partisanship
Polarization
Political extremism
Political framing
Political tribalism
Post-truth politics
Autocracy
Dictatorship
Denunciation
Hate speech
Harassment
Libel
Targeted attacks
Threats
Culture clash
Cultural divide
Islamophobia
Jewish
Muslim
Racialization [15 OCT 11]
Racism
White Supremacy
War on Terror
Civil discourse
Community
Compromise
Consensus
Empathy
Perspective
Trust - trustworthiness (media, institutions)
Understanding
Worldview
Media Literacy
News Literacy
Media trust
Public service announcements
First Amendment
Freedom of Information Act
Freedom of speech
Public’s right-to-know
Design Ethics
Journalism
Media Design
Media Structure
Regulations
Virality
Data analytics
Cambridge Analytica [16 FEB 17]
Weaponization of data
Click Farms
Macedonian teens [15 FEB 17] [03 NOV 16] [24 AUG 16]
Propaganda
Radicalization
Counterintelligence
CRAP Test
Contextual understanding
Critical Thinking
Skepticism
Accountability
Corroboration
Fact Check
Primary sources
False Equivalencies
Spam filtering
Veracity
Reputation Systems
Scientific Method
Social Networks
Under the Hood (technical aspects)
#1lib1ref Wikipedia ‘One Librarian, One Reference Campaign’
#lisjs
#checkyoursources
#FactsMatter
170215 - University of Oregon School of Journalism
SOJC faculty's tips for spotting fake news
161211 - NPR
A Finder's Guide To Facts
161118 - CNN
Here's how to outsmart fake news in your Facebook feed
Boston College
News Know How: Pause before you click
Digital Polarization Initiative (AASCU)
Web Literacy for Student Fact-Checkers
Via @toddmilbourn
Vaclav Havel, Power of the Powerless
Hannah Arendt, Origins of Totalitarianism
George Orwell, Politics and English Language
The following was a test done on Dec 10 to see how information could be transferred to this document, linking directly to our feed over at @IntugGB. Interesting experiment but one impossible to repeat; the platform is clearly not designed for this purpose.
Re-upping this thread by former CIA analyst/chief targeter @nadabakos
MT @yashar
23:51 UTC - 10 Dec 2016
#1 For the collective good of the American people a thorough Intel Community assessment or NIE should be drafted immediately
Thread
MT Nada Bakos @nadabakos
03:52 UTC - 10 Dec 2016
I repeat: I laid out what was going on with Russian campaign, based on leaks from European intel, before election:
MT Kurt Eichenwald @kurteichenwald, Senior writer Newsweek
23:46 UTC - 10 Dec 2016
161104 - Newsweek
Why Vladimir Putin's Russia is backing Donald Trump
The intel didn't state that Iraq had WMDs. The Bush-Cheney WH made that misrepresentation.
MT Nancy Pelosi @NancyPelosi
22:21 UTC - 10 Dec 2016
Guys, let's give Trump a chance.
He deserves a chance to hand the country over to Russia, Goldman and Exxon while we sit around and watch.
MT Judd Legum @JuddLegum
22:07 UTC - 10 Dec 2016
How can American history pivot so radically in the course of a few weeks? It really is awesome in its scope, and historic in its scale.
MT Eric Lipton @EricLiptonNYT
21:46 UTC - 10 Dec 2016
This critical moment for deterrence. Trump isn't just refusing to condemn Russian interference. He is committed to visibly rewarding it.
MT Susan Hennessey @Susan_Hennessey
21:33 UTC - 10 Dec 2016
I want to be magnanimous, even in what may be a rigged defeat, but Trump & GOP appear bent on destroying the Future of Life on Earth.
MT John Perry Barlow @JPBarlow
21:30 UTC - 10 Dec 2016
As US faces down crisis over Russia relations, Trump forgoes reconciliation, twists the knife by selecting Tillerson *for* Kremlin ties.
MT Susan Hennessey @Susan_Hennessey
21:29 UTC - 10 Dec 2016
Asked why Tillerson is qualified to be SecState, Trump cites his "massive deals w/Russia," he "knows the players"
MT Judd Legum @JuddLegum
21:24 UTC - 10 Dec 2016
Fox News
A preview of tomorrow's exclusive interview with President-elect Donald Trump. As questions swirl as to who Trump will pick for Secretary of State, he comments on leading candidate Rex Tillerson, CEO of ExxonMobil, saying he's "much more than a business executive, I mean he’s a world class player."
20:58 UTC - 10 Dec 2016
Really NYT? You're seriously going to frame up "both sides" reporting on the CIA's Russian hacking report?
MT Matt McDermott @mattmfm
20:36 UTC - 10 Dec 2016
Tillerson as Secretary of State would signify the greatest discontinuity in US foreign policy since the end of the Cold War.
MT Dimitri Trenin @DmitriTrenin
18:53 UTC - 10 Dec 2016
Let's also revisit this by @AliWatkins in September: The White House asked Congress to keep quiet on Russian Hacking
MT Miriam Elder @MiriamElder
17:53 UTC - 10 Dec 2016
161228 - BuzzFeed
The White House asked Congress to keep quiet on Russian Hacking
From July 2016, on Trump's broader links to Russia, and what they mean for Europe
MT Anne Applebaum @anneapplebaum
17:41 UTC - 10 Dec 2016
160721 - The Washington Post
Opinion: How a Trump presidency could destabilize Europe
Good time to (re)read @sheeraf: Meet Fancy Bear, the Russian group hacking the US Election
MT Miriam Elder @MiriamElder
17:40 UTC - 10 Dec 2016
161015 - BuzzFeed
Meet Fancy Bear, the Russian group hacking the US Election
For the first time in history, Washington has accused a foreign government of trying to influence the US election. Sheera Frenkel investigates the Russian group accused of hacking the US election — and finds they’ve been practicing for this moment for a long time.
One more time: It is totally plausible that Russia did what is being charged by anonymous sources. Still need to see actual evidence. Now.
MT Dan Gillmor @dangillmor
19:36 UTC - 10 Dec 2016
Harry Reid flagged the issue for Comey. But for Comey, only Hillary's emails mattered. Comey sat on this. Shameful.
MT George Takei @GeorgeTakei
19:27 UTC - 10 Dec 2016
https://assets.documentcloud.org/documents/3035844/Reid-Letter-to-Comey.pdf
Donald Trump and the GOP are going to have to face questions of treason like few other incoming admins ever have. Buckle up.
MT Isaac Saul @Ike_Saul
19:15 UTC - 10 Dec 2016
If you're looking for Tillerson's thoughts on stuff that might matter to a Secretary of State, try here first.
MT Daniel W. Drezner @dandrezner
19:13 UTC - 10 Dec 2016
120627 - CFR
The New North American Energy Paradigm: Reshaping the Future
Video - Full Screen
If you're convinced that Hillary is corrupt, flawed, unlikable, dishonest, that was EXACTLY the goal of Russian tampering. Congratulations.
MT Peter Daou @peterdaou
19:07 UTC - 10 Dec 2016
“Tillerson will be paired with former U.N. Ambassador John Bolton as his deputy secretary of state.”
MT Max Abrahms @MaxAbrahms
18:59 UTC - 10 Dec 2016
A lot of the negative/shocked reactions to Rex Tillerson as SecState seem to come from people w/limited understanding of private sector
Developing thread
MT Suzanne Maloney @MaloneySuzanne via Jake Tapper @jaketapper
18:59 UTC - 10 Dec 2016
161210 - NBC News
Rex Tillerson of Exxon Mobil expected to be named Trump's Secretary of State: Sources
I'm not challenging the outcome of the election, but very concerned about Russian interference/ actions at home & throughout the world.
MT Lindsey Graham @LindseyGrahamSC
18:55 UTC - 10 Dec 2016
Russia is trying to break the backs of democracies – and democratic movements – all over the world.
Lindsey Graham @LindseyGrahamSC
Hard to imagine Tillerson getting confirmed, but 2016 has made clear that just bc you can’t imagine something doesn’t mean it can’t happen.
MT Nicole Hemmer @pastpunditry
18:43 UTC - 10 Dec 2016
"We stand now at the most dangerous moment for liberal democracy since the end of World War II."
MT Howard Wolfson @howiewolf
18:39 UTC - 10 Dec 2016
161209 The Atlantic
Russia and the Threat to Liberal Democracy: How Vladimir Putin is making the world safe for autocracy
I never agree with @chuckschumer but he's correct on this one. Where is his Republican colleague joining his call?
MT Joe Walsh @WalshFreedom
18:38 UTC - 10 Dec 2016
161210 Politico
Schumer demands congressional inquiry on Russian meddling
When I wrote this on July 5, people said I was a paranoid red-baiter.
MT Franklin Foer @FranklinFoer
18:32 UTC - 10 Dec 2016
160704 - Slate
Putin's Puppet
If the Russian president could design a candidate to undermine American interests—and advance his own—he’d look a lot like Donald Trump.
As expected and on cue...
So, I was getting troll army onslaughts so I installed Block Together, but I fear it has blocked real people who were just new. Apologies.
MT Summer Brennan @summerbrennan
18:32 UTC - 10 Dec 2016
Today the Trump team attempted to discredit the CIA as the people who falsely said Saddam had weapons of mass destruction. That is a lie.
Developing thread
MT Mark Harris @MarkHarrisNYC
17:55 UTC - 10 Dec 2016
150320 Business Insider
Here's the full version of the CIA's 2002 intelligence assessment on WMD in Iraq
161020 - Esquire
How Russia pulled off the biggest election hack in US history
Via @michikokakutani @Fahrenthold
17:37 UTC - 10 Dec 2016
An investigation must begin as soon as possible on any evidence Russia actively worked to hijack our election & elect Donald Trump.
MT Senator Patty Murray (D-WA) @PattyMurray
17:32 UTC - 10 Dec 2016
161210 - Teen Vogue
Donald Trump is gaslighting America and deliberately undermining the very foundation of our freedom
By Lauren Duca @laurenduca
With Tillerson as possible Sec of State, book we should be reading. Exxon has its own foreign policy.
MT Eric Lipton @EricLiptonNYT
17:22 UTC - 10 Dec 2016
120608 - NYT
Well-Oiled Machine ‘Private Empire,’ Steve Coll’s book about Exxon Mobil
Tricky thing is, how does Obama respond to this mess without war powers, since we cannot declare war on Russia? https://www.law.cornell.edu/uscode/text/47/606
MT Summer Brennan @summerbrennan, Author, journalist, former UN Disarmament & International Security
17:01 UTC - 10 Dec 2016
161210 - NYMag
Trump, McConnell, Putin, and the triumph of the Will to Power
16:07 UTC - 10 Dec 2016
TRUTH: Hillary's "unlikability" WAS the Russian strategy. Make her toxic with fake news, trolling, hacking. 66 million didn't fall for it.
MT Peter Daou @peterdaou
16:04 UTC - 10 Dec 2016
We are in an unprecedented situation: a president that 54% of voters opposed elected with the help of a Russian intelligence operation.
MT Ryan Lizza @RyanLizza
15:39 UTC - 10 Dec 2016
[The New York Times] once sat on facts that could change the 2004 election result, citing "fairness." Now CIA/FBI do the same.
15:14 UTC - 10 Dec 2016
060813 - NYT Opinion Pages
Eavesdropping and the election: An answer on the question of timing
Neither the article featured nor the White House statement refers to Wikileaks at all. Just Russia. Something you want to tell us Julian?
MT Summer Brennan @summerbrennan
15:11 UTC - 10 Dec 2016
161209 - CNN
Obama orders review of Russian election-related hacking
The last time we declared war was in 1941, but of course we've fought nearly constant wars since then. But this is a cyber Pearl Harbor.
MT Summer Brennan @summerbrennan
15:03 UTC - 10 Dec 2016
If only we had been warned the cull of whistleblowers could result in voters lacking access to vital information.
MT Edward Snowden @Snowden
14:41 UTC - 10 Dec 2016
160406 - The Intercept
Obama's gift to Donald Trump: A policy of cracking down on journalists and their sources
161210 - The Intercept
Anonymous leaks to the Washington Post about the CIA’s Russia beliefs are no substitute for evidence
MT Glenn Greenwald @ggreenwald
12:12 UTC - 10 Dec 2016
The intel report on Russia's role in the 2016 election must be available for all electors before the electoral college meets Dec. 19
MT John Dean @JohnWDean
06:40 UTC - 10 Dec 2016
Don't just read the headline and lede on this one. Keep going, paying particular attention to the White House/Gang of 12 meeting.
MT Steve Benen @stevebenen, Producer for the Rachel Maddow Show
01:09 UTC - 10 Dec 2016
A reminder to take every claim made by unnamed US officials about intelligence conclusions with healthy skepticism.
MT Christopher Hayes @chrislhayes, Editor at large, The Nation
00:54 UTC - 10 Dec 2016
'Friends and associates said few U.S. citizens are closer to Mr. Putin than Mr. Tillerson'
MT Casey Michel @cjcmichel Nationalism, extremism, post-Sovietism
Formerly @HarrimanInst, @CrisisGroup, @TPM, @PeaceCorps
00:27 UTC - 10 Dec 2016
161206 - WSJ
Rex Tillerson, a candidate for Secretary of State, has ties to Vladimir Putin
Here is a thread with links to stories about the US, Russia, the cyber war, and its role in our election. Work in progress. Begin here.
Summer Brennan @summerbrennan
00:06 UTC - 26 Nov 2016
161210 - The Guardian
Russian involvement in US vote raises fears for European elections
CIA investigation may have implications for upcoming French and German polls, even raising doubts over integrity of Brexit vote
Via Lukasz Olejnik @lukOlejnik, Internet, Web Security & Privacy, Research and Engineering
161209 - The Washington Post
Secret CIA assessment says Russia was trying to help Trump win White House
The CIA has concluded in a secret assessment that Russia intervened in the 2016 election to help Donald Trump win the presidency, rather than just to undermine confidence in the U.S. electoral system, according to officials briefed on the matter.
… The CIA shared its latest assessment with key senators in a closed-door briefing on Capitol Hill last week, in which agency officials cited a growing body of intelligence from multiple sources. Agency briefers told the senators it was now “quite clear” that electing Trump was Russia’s goal…
… Seven Democratic senators last week asked Obama to declassify details about the intrusions and why officials believe that the Kremlin was behind the operation. Officials said Friday that the senators specifically were asking the White House to release portions of the CIA’s presentation.
… [in reference to September pre-election proceedings] In a secure room in the Capitol used for briefings involving classified information, administration officials broadly laid out the evidence U.S. spy agencies had collected, showing Russia’s role in cyber-intrusions in at least two states and in hacking the emails of the Democratic organizations and individuals.
And they made a case for a united, bipartisan front in response to what one official described as “the threat posed by unprecedented meddling by a foreign power in our election process.”
161209 - NYT
Russian hackers acted to aid Trump in election, U.S. says
US election hacking: Obama orders 'full review' of Russia interference
161125 - The Washington Post
Americans keep looking away from the election’s most alarming story
By Eric Chenoweth @EricDChenoweth Co-director of the Institute for Democracy in Eastern Europe
END OF TWITTER NEWSFEED
Answering the question:
But how does one or many curtail this massive violation of our trust and common morality?
MT Jacob Harris @harrisj writes - Mission Accomplished:
160712 - The New Yorker
The real paranoia-inducing purpose of Russian hacks
“When I began researching the story, I assumed that paid trolls worked by relentlessly spreading their message and thus indoctrinating Russian Internet users. But, after speaking with Russian journalists and opposition members, I quickly learned that pro-government trolling operations were not very effective at pushing a specific pro-Kremlin message—say, that the murdered opposition leader Boris Nemtsov was actually killed by his allies, in order to garner sympathy. The trolls were too obvious, too nasty, and too coördinated to maintain the illusion that these were everyday Russians. Everyone knew that the Web was crawling with trolls, and comment threads would often devolve into troll and counter-troll debates.
The real effect, the Russian activists told me, was not to brainwash readers but to overwhelm social media with a flood of fake content, seeding doubt and paranoia, and destroying the possibility of using the Internet as a democratic space. One activist recalled that a favorite tactic of the opposition was to make anti-Putin hashtags trend on Twitter. Then Kremlin trolls discovered how to make pro-Putin hashtags trend, and the symbolic nature of the action was killed. “The point is to spoil it, to create the atmosphere of hate, to make it so stinky that normal people won’t want to touch it,” the opposition activist Leonid Volkov told me.”
Updates
161120 The Intercept @theintercept
Some fake news publishers just happen to be Donald Trump’s cronies
161120
Never mind the Algorithms: The role of click farms and exploited digital labor in Trump's selection
Case Study: The Washington Post
Interesting to see how this particular story evolves with time
New report provides startling evidence of a massive Russian propaganda operation using Facebook and fraud sites
MT @profcarroll Associate Professor of Media Design @parsonsdesign The New School @thenewschool
161124 - Washington Post
Russian propaganda effort helped spread ‘fake news’ during election, experts say
MT @jonathanweisman Deputy Washington Editor, The New York Times via Centre for International Governance Innovation @CIGIonline
Critique:
This article is based on some fairly shaky ground IMO, which I wrote about here:
Mathew Ingram @mathewi
160925 - Fortune
No, Russian agents are not behind every piece of fake news you see
@mathewi - This has been looked at in so much depth. But think of it this way. We are bringing everything to the table. If it is the Washington Post, fine. We are ready. Start checking…
Crucial to read this article in context of Russian psyop allegations. FB weaponized targeting by susceptibilities - MT @profcarroll
161119 - NYT
Opinion: The secret agenda of a Facebook quiz
It's now imperative that the Electoral College get schooled on how Facebook may have been used to swing the election - @profcaroll
Also consider the 98 personal data points that attackers can use on victims for propaganda campaigns on Facebook - @profcarroll
160819 - The Washington Post
98 personal data points that Facebook uses to target ads to you
Unfortunately, I fear media will be reluctant to poke at the underlying ad-targeting issues here because their revenue still depends on it. But given the national emergency of a propaganda revelation, journos must whittle this down to data privacy, because that's the security solution here.
Precision-targeted ads based on a duopoly’s concentrated user data profiles are a national security risk for propaganda attacks. Now we know. - @profcarroll
Good thing Germans believe in defending their privacy knowing why it must be cherished and always defended: @profcarroll
161124 - Reuters
Merkel fears social bots may manipulate German election
We at PropOrNot.com are integrating automated and manual approaches to identifying Russian propaganda, which is what much of this misinformation is: While some fake news is just commercial clickbait, and some is satire, much is state-sponsored, and echoes official and semi-official Russian propaganda in numerous ways. We’re building a browser toolbar to help identify it. This is a guide to manually identifying it, and this is an example of how that works in practice.
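For illustration, a minimal Python sketch of the kind of curated-list check such a toolbar could automate (matching a page's host, including subdomains, against a hand-maintained list). The domain names are placeholders; nothing here reflects PropOrNot's actual implementation:

# Hypothetical sketch: match the current page's domain (including
# subdomains) against a manually curated list. List contents are placeholders.
from urllib.parse import urlparse

CURATED_LIST = {"propaganda.example", "fakenews.example"}   # hand-maintained

def is_listed(url):
    """True if the URL's host is a curated domain or a subdomain of one."""
    host = urlparse(url).hostname or ""
    parts = host.split(".")
    # Check the host itself and every parent domain (a.b.c -> b.c -> c).
    return any(".".join(parts[i:]) in CURATED_LIST for i in range(len(parts)))

# is_listed("http://news.propaganda.example/story") -> True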
END OF DOCUMENT
[a]This is a comment
[b]This should include becoming familiar with those working in countries as dissidents against a propaganda machine. They often use the Internet as bloggers because they cannot publish openly in their own countries without great personal risk, but are a font of knowledge when dealing with repressive regimes. Who are they? Find them and pick their brains. They are your brothers in arms.
[c]Great comment Alex!
[d]is this account deactivated?
[e]Thanks for noticing. @TheNewsLP was updated with their new Twitter handle: @NewsLitProject
- Kate B.
[f]_Marked as resolved_
[g]_Re-opened_
[h]this is a key issue. At a minimum, there should be a working definition of what is meant by "fake news" (in this project, by others), and examination of how it relates to other, perhaps better-defined concepts like propaganda, misinformation, disinformation. It seems clear that in current discussion and this document, people are using "fake news" to mean various things.
[i]I've been using the term "fraudulent" instead of fake news, as fraudulent better describes the intention to deceive people for clicks/ad dollars.
Many people are having a hard time differentiating between "fake news" sites that are pushing an agenda (Breitbart) and those that are gaming the system purely for profit, with no political leanings (Macedonian sites). We need to do a better job of separating the two categories. Both are problematic, but we need to have different approaches to solving each.
[j]This goes back to the notion of a satire symbol. We can't trust the "masses" to be quick enough to recognize satire. But let's also realize there is a big difference between satire and FAKE news.
[k]+1. Real satire sites should have no problem identifying themselves as such.
[l]There are more-subtle and pernicious judgements to be made as well. Framing and omission can easily create filter bubbles as much as overt misinformation can. See +giladlotan@gmail.com's piece on Medium, here: https://points.datasociety.net/fake-news-is-not-the-problem-f00ec8cdfcb#.d0x5oyhxp
[m]I believe that, unfortunately, the responsibility of fact-checking falls solely on the reader. It's economically unfeasible to have an independent body fact-check all articles before publication; this leads publishers to employ their own fact-checkers, but the question "Who watches the watchmen?" rings true. A responsible reader should verify sources as well as statements on their own in order to be certain that what's being said is true.
[n]I'd say this is the simplest and most encompassing definition of the *problem* with "fake news" and other forms of misinformation
[o]Who gets to define that? Such a broad definition is rife for potential censorship...
[p]Best to use metrics that all can agree are valid, and measure at the most granular level: to the word.
[q]Taking notes. That is the idea :)
[r]Recent study suggests backfire effect may actually be rare: http://www.poynter.org/2016/fact-checking-doesnt-backfire-new-study-suggests/436983/
[s]Boomerang effect (same thing) is well documented in persuasion research though https://en.wikipedia.org/wiki/Boomerang_effect_(psychology)
[t]Differentiate between sharing ‘personal information’ and ‘news articles’ on social media - the current ‘share’ button for both is unhelpful. Social media sharing of news articles/opinion subtly shifts the ownership of the opinion from the author to the ‘sharer’. It makes it personal and defensive: there is a difference between a comment on a shared article criticising the author and criticising the ‘sharer’, as if they’d written it. They may not agree with all of it. They may be open-minded. By shifting the conversation about the article to the third person, it starts in a much better place: ‘the author is wrong’ is less aggressive than ‘you are wrong’. [Amanda Harris]
[u]Your post is being moved to the topic Facebook, with the main idea under Basic Concepts. What a great suggestion, thanks.
[v]How do you define "emotional"? Most commentary is nothing but emotional, so if the definition is vague, 100% of commentary will end up being flagged.
[x]Dan Ariely at Duke should be able to help answer the question "Do people lie to pollsters, and if so, why?"
[y]Deindividuation theory
[z]Sorry, not super familiar with the structure of this document. Please feel free to move to a more appropriate section (I couldn't find one)
[aa]The doc is in constant evolution, Aleksandr. Let me go through the research first, to see where it can be added.
A special section for keywords is being worked on at the very end; that should help us get a general sense of the many topics and suggestions emerging as we start to sharpen the focus.
Many thanks,
- Lina
[ab]N.B. This all seems to be about social networks (where trending news is encouraged). I feel there can also be diversity in the number of news sites - and these are run by human editors. However, there would need to be a solid plan about how to communicate the knowledge of these sites to the public. So I expanded on the title.
[ac]Do NOT propose hiring editors as a solution to any problem. There are editors at all the tabloids, like The Sun and The Daily Mirror; they can be useless, and thus this does not solve anything in the real world. Bad Idea!
[ad]Looks like Mark Zuckerberg might have heard you, Mathew. But instead of hiring them (in fact they sacked their staff), they prefer outsourcing the human work now https://twitter.com/LiquidNewsroom/status/799880674719203328 reported by Kurt Wagner, recode.
[ae]This gives rise to a dilemma. If you hire human editors to filter fake news, their decisions will disproportionately affect conservative websites (see Buzzfeed analysis). This will expose them to accusations of bias, and might exacerbate the fundamental problem that eliminating fake news is trying to solve: reducing polarisation.
[af]Great idea, but will it happen? I'm doubtful.
[ag]And who will pay for their time? Where will the money come from?
[ah]I think there's plenty of good media out there by nuanced and moderate writers. The main problem of media in the age of information isn't the creation of good content but its monetization.
As long as people live in polarized filter bubbles, the actual quality of journalism available is irrelevant; clickbait hyperpartisan websites are going to dominate.
What needs to change is not the journalists themselves but the structure of the medium in which it's disseminated. How do we widen people's filter bubbles?
[ai]great except when they had human editors vetting trending topics they got vilified for that too
[aj]Had it right the first time
[ak]Yuuuuup.
[al]In practice, there are probably tens of thousands of unique articles going viral at any moment on Facebook. You'd need a smart algorithm to surface "probable hoaxes" to editors
[am]this is what I am working on now: an algorithm that will find viral news before it goes viral (see the sketch after this thread)
[an]Just read your medium post Badita. Looks interesting. here's a paper with a great overview from Duncan Watts and Sharad Goel discussing mechanisms for virality https://5harad.com/papers/twiral.pdf
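For illustration, a minimal sketch of the kind of share-velocity heuristic discussed in this thread, which surfaces fast-spreading articles to human editors. All names and thresholds are invented assumptions, not any platform's real system:

# Sliding-window share counter that flags articles for editorial review
# once their share rate exceeds a threshold. WINDOW_SECONDS and
# REVIEW_THRESHOLD are illustrative numbers only.
from collections import defaultdict, deque
import time

WINDOW_SECONDS = 3600            # look at shares in the last hour
REVIEW_THRESHOLD = 500           # shares/hour that triggers human review
share_log = defaultdict(deque)   # article_url -> timestamps of recent shares

def record_share(article_url, now=None):
    """Record one share; return True if the article should be queued for review."""
    now = time.time() if now is None else now
    log = share_log[article_url]
    log.append(now)
    while log and log[0] < now - WINDOW_SECONDS:   # drop shares outside the window
        log.popleft()
    return len(log) >= REVIEW_THRESHOLD

A filter like this is only a triage step; the threshold would need tuning so reviewers aren't flooded with ordinary viral content.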
[ao]I am developing an app to incentivize real news that addresses this issue. If anyone could take a look and offer comments/suggestions, it would be helpful:
https://github.com/qualisign/torrential-times/blob/master/fake-news.md
[ap]David - Is there another link? We are getting a 404 on that one.
[aq]Yes -- the Readme is a broad overview, and I am close to finishing a demo that I'll link to soon.
https://github.com/qualisign/torrential-times
[ar]I suspect this is what FB has already done. I think the problem shows up in defining where the line is for "human no longer required." 99% accuracy? 99.9%? I'd assert it should be 4- or 5- nines, but I don't believe we can reach that level of accuracy with current machine learning techniques or data availability. How then do we convince FB that they need to employ a significant number of actual humans for the next 5-10 years?
[as]Boy, that won't get hacked in two minutes. Good idea, but federation is better.
[at]This is exactly the architecture of Web Annotation (w3.org/annotation). The goal is federation, and an ability to provide insight or meta-information about resources that you do not control. In other words, communities could run annotation servers, or moderate groups on larger annotation servers. At scale, with 100s or 1000s of expert communities leveraging their expertise in areas in which they are familiar, this can be part of an overall solution. Also, machine algorithms rather than humans can inhabit layers/groups of their own. It's clear that combining machine analysis and human critique / curation is an important dimension of the problem. An example of such an expert community is ClimateFeedback.org. At Hypothes.is, we plan to begin working with expert communities on a much larger basis in 2017 to achieve exactly this. Article: http://dotearth.blogs.nytimes.com/2016/05/03/scientists-build-a-hype-detector-for-online-climate-news-and-commentary/
[au]This sounds like what is happening at http://www.opensources.co/. They are building a database of sources and tagging them. There is an API (I think) that is being used by the folks at BS Detector (http://bsdetector.tech/) which is a browser plug-in.
[av]This rhetoric may be problematic, depending on the personas of the contributors. I like the idea of the consumer audience being anyone that seeks real news. In contrast, allowing any "Joe Shmo" to have ownership of site credit and control of up-voting and down-voting is opening doors to spammers. I think sooner than later, this idea would lose its efficiency. Unless the contributors are tested to determine if they qualify under certain standards in order to trust that they can detect false content.
[aw]Great idea, but why would Facebook who is already trying to cozy up to China with a censoring tool be open to this?
[ax]Sadly this is flawed since there are plenty of fake or pseudo-news sites that have high domain authority due to the very fact that a bunch of sites link to them as sources of "news."
[ay]Google has found lots of ways to deal with this--collaborate?
[az]They've figured out ways to handle it in a system where people are actively seeking specific information. Most of those programmatic solutions don't apply to FB. One of their *non*-programmatic solutions is getting publishers to manually apply and manually get approved if they want to show up in Google News results, but that's also the antithesis of the way the FB platform works. (It reeks of censorship/favoritism/non-neutrality/call-it-what-you-will and FB will always shy away from that.)
[ba]what about piggybacking on it and checking if something appears on Google News and mark it as such on the FB feed. Just let people know it's a verified source
[bb]Those sites with high domain authority can just be blacklisted. It'd be easy to identify which news sources have a high rate of fake news. You could also use clustering and ML by using fake news sites and users that share them to identify other fake news sources.
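A minimal sketch of the clustering idea in the comment above: score unlabeled sites by how much their sharer base overlaps with the sharers of known fake sources. The data, domain names, and "fraction of sharers" scoring are illustrative assumptions; a real system would use proper graph-based label propagation at scale:

# One-step co-sharing score over a bipartite user/site graph.
from collections import defaultdict

shares = [                       # (user, site) pairs observed in the feed
    ("u1", "fakenews.example"), ("u1", "unknown.example"),
    ("u2", "fakenews.example"), ("u2", "unknown.example"),
    ("u3", "reputable.example"),
]
known_fake = {"fakenews.example"}

def suspicion_scores(shares, known_fake):
    """Score each unlabeled site by the fraction of its sharers who also
    share known fake sources."""
    sharers = defaultdict(set)
    for user, site in shares:
        sharers[site].add(user)
    fake_sharers = set().union(*(sharers[s] for s in known_fake))
    return {
        site: len(users & fake_sharers) / len(users)
        for site, users in sharers.items() if site not in known_fake
    }

# suspicion_scores(shares, known_fake)
# -> {"unknown.example": 1.0, "reputable.example": 0.0}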
[bc]Google also got rid of Page Rank for the most part.
[bd]Seems cool but like it touches a relatively small part of the area covered by fake news.
[be]What would this be for climate change?
[bf]climatefeedback.org
[bg]Old domains are readily for sale on the secondary markets.
[bh]Looking at age is a dangerous precedent because of this exact issue. I like the idea of using this as a signal but it would warrant further discussion.
[bi]Yeah, there's something here.
[bj]Agree
[bk]I really like this idea. I wonder if adding a "Do you want to read this before you share it?" step would be useful.
[bl]I LOVE your idea, Hannah! So good!
[bm]could also weight more if read to the bottom, or time spent on page to vet actual time-spent reviewing or reading
[bn]With increased transparency, though, wouldn't it increase the potential of abuse? If a malfeasant knows how FB ranks content, then content can be created that games the system.
[bo]+1 - what media company doesn't want to know this to help boost its own stories?
[br]How would this be defined?
[bs]This idea has merit, but seems to only be a partial step. It would be tricky to first determine which stories get speed-braked and which do not. If speed breaks are applied globally, to all articles, it limits the possibility of any story going viral, which has negative commercial implications that make it a non-starter.
[bt]Pre-conceived notion of which outlets are trustworthy might not be justified. For instance, WSJ is not a credible source when it comes to climate science topics, see http://climatefeedback.org/outlet/the-wall-street-journal/
[bu]I understand the idea, but this will be perceived as a form of censorship: why is this newspaper on the list and another is not, etc.
[bv]+1
[bw]The New York Times and Wall Street Journal are "white labeled, verified sites"? Since when? They publish the same biased crap as The Onion or World News Daily.
[bx]I like the idea of using institutional credibility to verify, but it runs the risk reinforcing the dominance of few media voices at a time when we should value adherence to journalistic principles above name recognition. What about an independent journalistic standards board that provides a sort of 'stamp of approval' for any outlet seeking it. The approval would be based on journalistic practice, rather than name recognition, funding or anything else. This probably goes beyond the point we've reached with algorithms, but you could use a volunteer editors--like a sort of peer review for news.
[by]We are left, however, assuming that the NY Times is a gatekeeper of quality. Yet as we know, there are times when the Times is itself not able to get past an institutional bias to be "an institution" and all that comes with that: access instead of toughness, credulousness when a source represents power, a belief that with power necessarily comes the need for respect, etc. All of this militates TOWARDS "fake" news in a way this document, and this whole meme, tend to avoid, because Facebook fake news stories are an easier target.
[bz]Exactly. But how would this tag be visible? This is the problem that Web Annotation is intended to solve (w3.org/annotation / hypothes.is)
[ca]that doesn't mean "Facebook knows the story is fake"; when someone shares an accurate article, Facebook also suggests links to "fake news" to make the balance
[cb]Agree with Manu. Correlation does not imply causation. A Snopes (or any particular) article being recommended could simply be a factor of other people sharing that link in comments to the original post. Also, a Snopes article could be related that verifies the original article's truthfulness instead of falsehood.
[cc]The Berkman Center teamed up with Google for what's called StopBadware. The effort identified sites propagating malware, etc., and then notified users before they clicked through. Perhaps there's a way of evolving the concept for these purposes --
[cd]Perhaps but malware is much easier to define in code than truthfulness
[ce]Somewhat also as a response to the idea of a delay up there (in that if one is willing to change or obstruct the user's ability to share whatever they want this much, something less aggressive might be better, like what's suggested here)
Truthfulness is hard to check. But _suspiciousness_ maybe not as much, since there are some obvious tell-tale signs of suspicious/fake news stories after all. So once a story starts gaining crazy momentum, maybe questions like these are worth asking:
Does a quick Google search of a whole sentence in the article turn up a lot of the same article on different sites? Are articles almost exactly (or completely exactly) like this one found in different time periods (ergo, is it recycled, maybe with a few changes here and there)? Is the author a known fake news writer (not the same as satire), or someone known to produce fake content that people mistake for real (see Paul Horner)? (I feel like bots and crawlers should be able to check this kind of simple stuff, right?)
If yes, mark that story as "suspicious" somehow, and if a new user tries to share it, warn them before letting that share go through (maybe linking to a small explanatory page about what a content mill is, or something else to educate the user, would be cool).
Keep in mind that a lot of people question even Snopes (if you haven't seen that ever, maybe that's the fault of your echo chamber's size), so while I personally think that "is this debunked in Snopes?" is a good indicator, I wouldn't go as far as to forbid the user from sharing content based on that; rather, a warning would be good enough.
EDIT: Actually, these ideas are somewhat repeated in bullet points a few pages below...
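A minimal sketch of the "same sentence, many sites" check described above, as word-shingle overlap between two article bodies. The k=8 shingle size and the 0.6 cutoff are assumptions; a real system would query a search index rather than compare documents pairwise:

# Near-duplicate detection via word-shingle Jaccard similarity.

def shingles(text, k=8):
    """Set of k-word shingles of the text."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a, b):
    """Jaccard similarity between the shingle sets of two articles."""
    sa, sb = shingles(a), shingles(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

# A high score (say > 0.6, an assumed cutoff) suggests the article is
# recycled from elsewhere and worth marking "suspicious" before sharing.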
[cf]Rather than sentiment analysis of a headline - why not machine-learn the articles and remove the baiting entirely? (See the sketch after this thread.)
[cg]While I like the idea of easy identification, any content-based approach can ultimately be bypassed, as fake-news outlets are capable of adapting to such circumstances. It's a bit like a hidden search-result algorithm and the whole SEO industry.
[ch]Agree. Whatever the solution, content alone will not achieve it by itself; it is one component, and it needs to be agile.
[ci]Sentiment analysis is quite biased and intrinsically flawed... I would stay away from it for this particular application.
[cj]Also, perhaps headline writers would simply adjust
[ck]This!
[cl]+1. Eminently doable with current data and without human intervention. But may not be effective, as people from various parts of the political spectrum may be more or less open-minded.
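For illustration, a minimal sketch of the headline-classifier idea from this thread (TF-IDF features plus logistic regression; the tiny labeled set is a stand-in for the thousands of labeled headlines a usable model would need). As noted above, any such content-based model is one signal among many and can be gamed:

# Toy supervised classifier for bait-style headlines.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

headlines = [
    "You won't BELIEVE what this senator just did",
    "Doctors HATE this one weird trick",
    "Senate passes appropriations bill 61-38",
    "Fed raises interest rates by a quarter point",
]
labels = [1, 1, 0, 0]            # 1 = bait, 0 = straight news

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(headlines, labels)
print(model.predict(["This one trick will change everything"]))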
[cm]Added reference. Hello from Wikimedia UK. Wikipedia has a big role to play in preventing fake news.
[cn]Yes, indeed, machine learning can identify partisan bias. Then the reader can be offered an article on the same topic with the opposing bias. Ask questions at the end to see if the reader understood it. An educational charity fund could be started to pay readers for answering those questions. This should help counteract the vicious cycle of everyone reading only the same view.
[co]Back in 2008, we launched a private network that used this verification system for every single profile. All linked directly to four default platforms: FB, Twitter, LinkedIn. Medium could be that fourth reference.
[cp]I see a few problems with this. Does this mean that you will stop seeing stuff from your friends or people you're following if they prove "harmful" to you? Also, does this mean we will start policing breaking stories that don't have all the information? And while written news stories can be edited, videos cannot; so do we penalize these news organizations for incomplete information that is later corrected?
[cq]I guess that this can be manipulated as well. It's just a matter of time before workarounds make it unusable again.
[cr]generally, satirical news sites as category for any kind of machine analysis are something that need to be thought through carefully (point made by our partners at FirstDraftNews.com)
[cs]How to avoid having this fall prey to the same problems as "real names" policies? On Twitter, men are substantially more likely to have verified accounts than women; there are likely to be similar patterns with other dimensions of diversity. See http://geekfeminism.wikia.com/wiki/Who_is_harmed_by_a_%22Real_Names%22_policy%3F for more
[ct]I would actually suggest that it should be a different name and/or system than the one already in place on Facebook, as verified user status on Facebook serves more than one function. To use Twitter as an example, the verified check also stops people from faking that person's identity.
[cu]Yes! +Publishers should put their reputation at stake, establish an “ethics code” and accountability process, eg http://thetrustproject.org/
[cv]I like this idea, it removes opportunity for spammers.
[cw]What makes this difficult is deciding what "privilege" translates to, algorithmically. FB may or may not be doing this already - would we know?
Virality as a driving factor for FB posts/news means that visibility follows a power law. Does "privileged" mean authoritative sources are 10% more likely to be seen? What does that actually look like inside the algorithm?
Perhaps "privileged" means there are reserved Trending slots for such sources?
[cx]+1. this privilege will be the first thing attacked by, let's call them 'the Enemies of Truth'.
[cy]I also agree. What is to stop Facebook from granting this privilege to news sites that give Facebook money? Because Facebook isn't a governmental institution, there is no incentive not to pull a move on that level.
[cz]This solution is highly problematic, as what is deemed "offensive" could change over time, will likely reflect existing power structures, and in the worst case could become a tool of state propaganda. For example, an oppressive state may require Facebook to deem negative pieces about itself offensive (as China is already doing).
Furthermore, the appetite for fake news is as much of a problem as fake news itself. This could backfire and result in non-verified and "offensive" news being viewed as the anti-establishment go-to. Even now, Breitbart is widely viewed by many as an offensive, non-verified news outlet, and many readers read it despite knowing that it is viewed this way.
[da]This might vary from country to country but I am thinking of how a Chamber of Commerce works.
[db]We do it for everything else. Anybody can do your taxes, if you want a real answer, go to a CPA. Why not news?
Also, anyone could feasibly sign up for a twitter account as The Rock, but there's only one that's for real.
[dc]Yes, censorship. Unfortunately, there needs to be someone, or just other algorithms, to decide which site is biased. Now look around the world and try to find a country without bias in its censoring of content.
[df]Except other outlets *do* pick up fake stories (assuming "outlets" is broadly defined as "any other websites"), whether to propagate a false story or to debunk it. (See: reason why using domain authority/backlinks isn't ideal.)
[dg]Yeah, and this could be gamed, right?
[dh]Bingo.
[di]Or just propagated (without any intent to game the system) by publishers suffering from confirmation bias.
[dj]Not just biases, but they could be propagated knowingly by low quality sites fishing for ad revenue, so this might cause the opposite of the intended effect.
Low quality content, if not outright fake content, is sometimes sold, or at the very least cloned and recycled (ever google'd a headline only to find results of the same headline but from years ago?) a lot in many different places: This is a relevant story: http://arstechnica.com/information-technology/2015/07/inside-an-online-content-mill-or-writing-4156-words-a-day-just-to-earn-lunch-money/
[dk]Sadly, biases and agendas are not the only reasons for lack of quality, sometimes it's pure lack of journalistic standards (i.e. the CNN porn story from last weekend, which was picked up by virtually all outlets based on two tweets from the same person and nothing else). Measuring engagement and verifying whether things get picked up is not a good marker, at least as news unfold in real time.
[dl]Note that Google now--for the purposes of consideration in RankBrain and other ranking factors--ignores all meta tags except for the title tag. What's leftover in the analysis are backlinks and the content itself. As mentioned above, backlinks alone are an imperfect predictor of content quality.
[dm]But there are a constellation of other factors including: domain age, how often it has been flagged in the past, etc... that could go into a "ranking" metric.
I think the end game will always be getting it reviewed by a human, but rank *could* be a decent initial filter.
[dn]You say things that are true. If I had been thorough enough to complete my thought in the first place: self-tagging is probably a no-go, but using a bunch of different signals (as Google does to judge content) seems like a healthy start.
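A minimal sketch of combining such a "constellation of factors" into one initial-filter score. The weights and caps are invented for illustration; in practice they would be learned from labeled data, and the score would only route items to human review:

# Crude linear trust score over several signals; higher = more likely trustworthy.

def initial_trust_score(domain_age_days, past_flag_count, backlink_count):
    """First-pass filter before human review; not a verdict on its own."""
    score = 0.0
    score += min(domain_age_days / 3650.0, 1.0) * 0.4   # domain age, capped at 10 years
    score += min(backlink_count / 1000.0, 1.0) * 0.3    # inbound links
    score -= min(past_flag_count / 10.0, 1.0) * 0.7     # prior flags weigh heavily
    return score

# e.g. a 2-year-old domain with 50 backlinks and 4 prior flags:
# initial_trust_score(730, 4, 50) = 0.08 + 0.015 - 0.28 ≈ -0.19 -> route to a human reviewer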
[do]Absolutely. In my newsroom at Liquid Newsroom we use social graph analytics to get insights about the DNA of information markets.
[dp]That method works really well for spam! http://paulgraham.com/spam.html
[dq]Not really ... statistical content filtering is a small part of how spam control works ... more here on the relationship https://medium.com/@SunilPaul/lessons-the-spam-wars-facebooks-fight-to-kill-fake-news-is-just-starting-2aaefb0a389b#.nrbl3142l
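For reference, the statistical content-filtering component this exchange refers to looks roughly like the following naive Bayes sketch (in the spirit of Graham's spam filter; the sample texts and labels are invented, and, as noted above, this is only one small part of a spam-style defense):

# Toy naive Bayes text filter over article snippets.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

docs = [
    "shocking secret the media will not show you share before deleted",
    "miracle cure they do not want you to know about",
    "city council approves budget after public hearing",
    "quarterly earnings report shows modest growth",
]
labels = [1, 1, 0, 0]            # 1 = junk/fake-style, 0 = normal

clf = make_pipeline(CountVectorizer(), MultinomialNB())
clf.fit(docs, labels)
print(clf.predict_proba(["secret cure the media will not show you"]))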
[dr]Let's hope that we will see more news publishers and organizations still invest in fact-checking. And let's hope that the general public really wants to read fact-checked articles. As it looks now, more and more people seem to prefer echo chambers.
[ds]This totally fits with my idea (inspired by Eli's idea) for "Verified Sites" in the NewsFeed. (I.e., stories from manually-verified sites that meet a list of criteria get a "verified" badge when they appear in the NewsFeed.)
[dt]yes -- separate these 2, then create dashboard for when they are most out of whack: popular and false, for targeted action
[du]Yes, exactly!
[dv]Great idea. See "Multiple user-selected fact checkers" section below that expands on it and sidesteps much of the concerns about other ideas.
[dw]Or even just a "disputed" tag
[dx]This does exist in this form: https://memebuster.checkdesk.org/ . How would you like to see it improved upon / developed?
[dy]"Unverified" would work and is a less loaded term.
In my personal conversations, I've intentionally been saying "fraudulent news" instead of fake. Fraudulent implies intent. Fraudulent means being deceptive for profit, which the worst offenders do.
[dz]This is a brilliant idea.
[ea]Real-time embedded corrections can produce reactance, causing people to resist the correction. Corrections presented after a slight delay tend to do better for those inclined to resist the correction. See:
http://wp.comm.ohio-state.edu/misperceptions/wp-content/uploads/2012/07/GarrettWeeks-PromisePeril-CSCW-final.pdf
[eb]In the suggested ideas for preventing false information from spreading, how will we measure whether or not they are working? (This gets at the comment above about how a slight delay actually changes minds, which reframes the problem as being not about the spread but about the correcting.)
[ec]Need to cite the fact check in the UX in some way. Otherwise complaints of bias will overrule the fact check
[ed]How does this example differ from the multitude of "questionable" stories relating to sports (transfer speculation and claims) that are published by traditional media sites?
Rumour - seems to be a catch all, cover all - http://www.footballtransferleague.co.uk/football_rumours.aspx
[ee]Simple, brilliant ideas. There is a ton of fake old news that would become apparent quickly with the date.
[ef]Couldn't this then be abused by larger groups to classify real news as fake?
[eg]I would expect this to simply convince people that the whitelist is biased. Especially since even trustworthy sources sometimes pick up fake information.
[eh]this exists already, right? Its just not very user-friendly (and doesn't have the ideology adjustment, but that would be hard to accomplish given that not everyone selects an ideology on fb). On the first half, though, Tom Trewinnard of Meedan has analyzed well that the UX has flaws that could be fixed https://firstdraftnews.com/is-facebook-is-losing-its-war-against-fake-news/
[ei]You wouldn't have to worry about the ideology listed on fb though, Facebook uses your actions on the site to create a guess at your ideology (you can actually see it in your ad preferences)
[ej]Good point (though oof at the thought of those being used)
[ek]http://www.allsides.com/ does something sort of like this: users' bias ratings are weighted based on a self-reported survey (partially developed by Pew) of political affiliation.
[el]How does this not get abused? Need to make sure there is a failsafe to prevent false flags to remove content.
[em]Agreed, as a very similar user-flagging program was almost implemented on YouTube. If anyone remembers, the YouTube Heroes program created an incentive for flagging videos that broke community guidelines, which included but wasn't limited to fake news. This created a hierarchical reward tier based on how much you did, with regulations relaxing in the higher tiers along with easier ways to take down videos. So, as this applies to Facebook: how would we be able to regulate literally everyone on Facebook?
[en]Shutting down discussion about the relationship between social and technical responses to an urgent social problem may prevent important strategies from emerging.
[eo]I tend to agree with Kelly. Zeynep, I'm confused - how not relevant? Is it because we're looking for ways to improve the situation NOW, not when these kids grow up?
[ep]Should not encourage false dichotomies, however (see climate change). If 99% of stories report one thing and 1% report another, that 1% shouldn't be given equal weight.
[eq]This happens already in some cases. You can see snopes links in suggested reads when an article is clicked.
[er]This, I think, is the big challenge. How do we incentivize this behaviour? There's some empirical evidence on the relationship between network diversity and idea generation. Maybe it's a starting point? http://sloanreview.mit.edu/article/how-twitter-users-can-generate-better-ideas/
[es]https://motherboard.vice.com/en_us/article/how-our-likes-helped-trump-win This and tools like it continue to be the most influential ones changing politics everywhere. I thought it might become a category.
[et]_Marked as resolved_
[eu]_Re-opened_
[ev]A section was just added under the heading "Manipulation and ‘weaponization’ of data" at the very end of the document. A number of articles and reports have been included.
Should suggestions and ideas come up, they can be added later within the text. For now, it seems like quite a bit of research to be looked at and analyzed.
- Lina
[ew]This might work for something like Apple News, which is designed specifically for news consumption, but people share much more than news on Facebook. You still want people to be able to share random personal blogs and whatnot.
[ex]Facebook has a "News Feed" tab that should probably be kept free of personal opinion pages. It would make sense to only allow real news in the News Feed tab.
[ey]CrowdTangle does this. It surfaces stories that perform better relative to how they should be performing, based on past performance of page stories. Facebook just acquired them.
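A minimal sketch of that "overperforming" signal: compare a post's engagement to the historical baseline of its page. The z-score framing is an assumption about how such a signal could be computed, not CrowdTangle's actual method:

# Flag posts whose engagement sits far above their page's usual numbers.
import statistics

def overperformance(post_engagement, past_engagements):
    """Standard deviations above the page's usual engagement; large values
    mark candidates for review/fact-checking."""
    mean = statistics.mean(past_engagements)
    stdev = statistics.pstdev(past_engagements) or 1.0   # guard zero spread
    return (post_engagement - mean) / stdev

# overperformance(12000, [200, 350, 180, 400, 260]) ≈ 138 standard deviations:
# the story is spreading far outside the page's normal reach.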
[ez]I like this idea a lot. Of course - you'd have to figure out how to automatically identify "questionable content" - but if we could, slowing potential for spread is an interesting intervention.
This is what the Chinese gov't is doing on Weibo - instead of taking down content, limiting spread. Which raises issues around transparency - what does FB need to do to make sure there's enough information given to users, but not enough to make this whole thing game-able.
[fa]Seems like FB would be VERY resistant to things like this, as they would essentially open their news articles up to being "scooped" by social media sites that don't have this practice (like Twitter). Still an interesting idea though.
[fb]I think FB would be okay with it. You could add a velocity filter only on posts that contain political language (articles that use the words Trump, Hillary, etc.). You could also exempt whitelisted sites, such as the New York Times.
[fc]Craig Silverman covers pretty well how even mainstream newsrooms have been failing to incorporate sufficient fact-checking into their workflows in this white paper: http://towcenter.org/wp-content/uploads/2015/02/LiesDamnLies_Silverman_TowCenter.pdf
[fd]How can you say that the New York Times is "white listed" if their news is biased information, as is some Onion or World News Daily news?
Who can say that CNN is more "white listed" than Fox? Or that BBC is better than RT?
[fe]Facebook is already doing this in certain countries.
[ff]Skepticism of legitimate information is encouraged in certain instances (climate change). This has to be carefully handled.
[fg]This could be the most important part of creating a cause campaign--targeting ad ecosystem players who serve ads on fake news sites a la Color of Change campaign against ALEC and Glenn Beck.
[fh]With having an anti-fake news standpoint, this sounds relieving. Although in order for this to be ethical, it could not make the claim that the general user's decision-making process is problematic. It may be said that this ability to distinguish fake news from the truth is a human right. Of course, this contradictory standpoint would not exist if the declaration was provoked from the user.
[fi]People are also busy and overwhelmed and have different priorities in life.
[fj]No, we just need design to support critical thinking. If your premise is people are lazy, then you need to get them to stop being lazy and educate them on why they need to stop being lazy. If your premise is less judgmental, and assume that everyone has a wide range of naturally occurring human biases, and that people are busy and have competing priorities, then you might design for presenting options that enhance critical thinking.
[fk]Hear, hear. We can work with designers and technologists and journalists to design with 'laziness' in mind too. It takes extra work to create tools and media that encourage critical thinking; we can advocate to creators that it's worth the time.
[fl]There will always be people who dismiss difficult truths, but we don't need to give their denial an easy platform. I believe the point of this document, and indeed the larger effort to boost signal and squelch noise in modern tech, is to give users who aren't willfully ignorant a better chance to be informed by fact.
There are certainly many points between "willfully ignorant" and "perfectly informed." By fighting false and misleading news, we can move individuals along that continuum. There is value in that transition.
[fm]The use of drones to document visuals of happenings without biased narration, or narration at all, may eliminate the need for multiple variations of biased news reports. This allows the user to have complete control of news interpretation without a provided opinion.
[fn]This is really interesting. Headlines and images are a huge part of the problem. I imagine relatively few people even read the articles.
[fo]I agree; something like more than 50% of people don't read the articles they share... (I can't find the link to the study.) It means that content needs to be hidden directly on FB/Twitter, not blocked on the website.
[fp]All of this reminds me of the long debated desire that some have had to create some kind of formal certification for SEO and Digital Marketers to encourage ethical practices. ...The closest we've come to achieving that has been through collections of persons who ascribe to the same or similar value systems. ...Perhaps this is for the best because if you think about it - all we can really do is promise our readers that we will practice due diligence...
[fq]There might be a problem with this. I saw a brilliant physicist's contribution to Wikipedia be deleted due to lack of references. His research was all original, incredibly groundbreaking.
[fr]Scientific journals could be classed as a verified source.
People can be classed as verified source.
etc...
Once the algorithms are trained these problems will be easily avoided.
[fs]It's not just profitable for the fake news accounts, it's also profitable for Facebook. A friend who is a lawyer mentioned last night that the shareholders could hold Zuckerberg, et al., responsible for abrogating his fiduciary obligation to them if he were to cut anything that harms profits (like curtailing fake news posts that otherwise get a ton of engagement). I got a B- in a college business law class, so I'm not one to comment on that, but I welcome any thoughts from people who know the law better.
[ft]I'm not a lawyer either, but that argument doesn't seem like it would hold up all that long -- Zuck could argue that fake news represents a long-term threat to profit/revenue in that it could undermine credibility/user trust in the platform.
[fu]I do wonder if users even care. I am assuming a lot just log on to unwind or just have a good time. A cool viral story fits the bill, not necessarily something you would 'experience' if reading an in depth analysis of tax evasion schemes. Personal opinion, of course :/
[fv]FB doesn't have an incentive to do this, so how does this get implemented?
[fw]Man, I think this is huge - the "authoritative source" will vary widely based on your politics. I'm always wanting to ask people who share "Hillary is a satanist"-type stories: What would convince you that this story is false? It certainly wouldn't be a Washington Post debunking.
[fx]Why not convince people to stop using Facebook? It worked with MySpace, LOL.
[fy]People must be able to use more sources. Removing Facebook will just open the possibility for something worse. I dislike the American "nipple" moral attitude (while endorsing violence), but any news outlet will have a certain set of beliefs.
[fz]Facebook having a monopoly on its news algorithm is undesirable, but would the situation be improved by fragmenting control? There are such heavy incentives for promoting junk/fake news.
While some of the fragments would likely be an improvement, selecting a quality source among them would seem to be just as beyond the average/casual FB user as effectively vetting posts is today.
Seems likely we'd just see the schism increase - tech-savvy/engaged users would find and use good sources, regular users would continue to be at the mercy of misleading orgs.
[ga]The problem is that the network principle will lead to one company (a monopoly) having all the news. All the technical solutions will not prevent this from happening. So apart from the technical solutions here, we need anti-trust law, just as we should enforce it better for companies.
[gb]Recently been seeing people trying to build extensions like this. Example: https://devpost.com/software/fib#updates
[gc]Not sure if that specific extension works, btw, haven't tried it out yet, but shows that people are thinking about this.
[gd]Creating a new website is far too easy. If we shut down sfglobe.com (a top AdX partner), they would just open sfglobal.com.
[ge]Who is doing the prioritizing?
[gf]this feels like a different, but worthy, topic. could there be a separate discussion of this challenge?
[gg]Wouldn't this mostly lead to a shift in the language of "false news", create an arms race. Also opens the door to tons of tricky questions.
E.g., are ridiculous questions ("Did Trump steal the Pope's shoes?") - intentionally created to imply a false thing - themselves objectively false? Are articles that primarily deal in such questions considered false news? Would FB users be better-informed if this sort of article replaced the bulk of clearly false news? Does the questioning tone prompt users to actually engage and think about the link's truthfulness?
[gh]Would require a better incentive system for publishers on the Instant Articles platform than they are currently getting. Publishers are reluctant to use this platform because it decreases the number of article clicks that go directly to the publisher's site.
[gi]Impossible to put in place; Facebook can't force publishers to put content on Facebook and block links from being shared.
[gj]This is nice. Would also be very informative.
[gk]But this can be done even if it's not hosted on FB. Any erring link can be replaced with a "This was fake news" 404...
[gl]Yep, for sure. Triggering this would help stop the spread of this news off platform / offline.
[gm]I like this quite a bit.
[gn]There's a challenge with copyright there?
[go]Or, primarily, a challenge with getting publishers to go along with it. (This already exists as Instant Articles.)
[gp]Also, I think easier bit of the problem is 'what to do if story is fake' - block, alert, flag, demote etc. Isn't the harder part how to figure out by algorithm 'is this true'
[gq]Ultimately, a human needs to be a part of that process -- at least in the beginning. The last thing you want is for people to just come up with more subtle fakes to skirt the algorithm.
Algorithms, in this case, should serve as providing initial filters for human beings, in my opinion.
[gr]Also, when an organization or celebrity is unwittingly used in fake news (ie, the untrue story that Clint Eastwood rejected a Medal of Freedom from Obama, saying he "wasn't his President"), they have a strong incentive to protect their brands and names. Bringing abusive stories to their attention, and explaining their legal options, could be enough to prompt them to take action on their own.
[gs]I wonder if this is at all possible. If might change politics discourse forever ;)
[gt]I wouldn't say it is impossible. Reputable news sites such as the New York Times provide verification of their site, and use journalistic standards. If a site is reputable it provides verification. If a site doesn't provide any kind of verification, standards, etc. Then they should be considered fake, until they do.
[gu]Likely relevant: A pre-social media work, but one that has interesting analysis of bias and reputation in media:
http://www.nber.org/papers/w11664
[gv]IMO, we must address the fake news problem by considering the architecture of perception formation, because it is possible to use algorithms to make people smarter.
[gw]+1
[gx]+1 An important insight.
[gy]:)
[gz]See comments in section below "Suggestions from a Trump Supporter" for this perspective
[ha]The way news propagates leaves a footprint or a pattern; an algorithm can be developed to detect when a given pattern is taking shape. I have been working on this on Twitter in Mexico for years, and it may well work on Facebook or other networks.
[hb]Pienso que puede ser interesante ver como se puede aplicar la experience del Tercer Mundo a este caso. En EEUU no están acostumbrados a la Propaganda ni tampoco a situaciones donde los gobiernos se ven forzados a cortar las redes. Muchas temas a desarrollar: dictaduras, censura de los medios...
[hc]It might be interesting to explore how the experience in Third World countries could be applied to this initiative. Users in the US are not aware of Propaganda tactics nor have they experienced govs shutting down social networks. A lot of related topics here: dictatorships, censorship, etc.
[hd]Original title edited for clarity
[he]This is an example of people fighting in real time against fake news on Twitter
[hf]"non organic" es acerca de cómo se genera la noticia y el tipo de redes que se crean al propagar una noticia, generalmente si la noticia es falsa y la intención es difundirla masivamente se recurrirá a equipos de "troll centers" o "celebrities" para difundirla y este tipo de difusión dejan rastros.
[hg]"Non Organic" deals with how the news is generated and the types of networks that are used to spread the news. In general terms, if the news is fake and the intention is to spread it massively, this is will happen in troll centers or with celebrities. These tipo of share leaves traces.
[hh]"organic" = orgánico en español. Cuando existe un interés legítimo en la noticia y es util o verdad o se puede comprobar este noticias genera redes y va conectando comunidades. Este tipo de redes se pueden detectar, crear una base de datos que se puede acceder en tiempo real para detectar futuros casos
[hi]Organic - when there is genuine interest in a news item and it is useful, true, or verifiable, it generates networks and connects communities. These types of networks can be detected, and a database can be built and queried in real time to trace future cases.
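A rough sketch of how those traces might be scored. The two features and both thresholds are illustrative assumptions only; a real detector would use many more signals.
```python
# Sketch: score a story's spread as "organic" vs. coordinated using the
# traces described above. Features and thresholds are assumptions only.
from datetime import datetime, timedelta

def looks_coordinated(share_times, account_ages_days) -> bool:
    """Heuristic: a large burst of shares in the first hour, mostly from
    young accounts, resembles a troll-center push rather than a community
    genuinely interested in the story."""
    if not share_times or not account_ages_days:
        return False
    start = min(share_times)
    in_first_hour = sum(t - start <= timedelta(hours=1) for t in share_times)
    burstiness = in_first_hour / len(share_times)
    young = sum(a < 30 for a in account_ages_days) / len(account_ages_days)
    return burstiness > 0.8 and young > 0.5
```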
[hj]+1
[hk]This is called information overload, see https://en.wikipedia.org/wiki/Information_overload
[hl]yes -- truth has to be restored as a primary value (as in "the true, the good, and the beautiful"). this is a permanent cultural battle, especially in the face of powerfully persuasive media techniques in the service of a marketing ideology.
[hm]I love the concept... Just perhaps want to suggest you be ready with some real pushback to the notion that the American public and political discourse was ever virtuously truthful. ...We've had a long history of hiding truth and painting it how we wish to see it.
[hn]The same could be said about all professions that participate in the information game. What does it take to be a capital L Librarian or a capital J Journalist?
[ho]The First Draft Partner Network was set up in September 2016, and includes the major social networks, 100 newsrooms, human rights orgs, academic institutions, and professional journalism associations. We are dedicated to improving skills and standards in the reporting and sharing of information that emerges online - exactly the problems being discussed here. See more here https://firstdraftnews.com/about/
[hp]Interesting. Curious whether any of the partners are bringing the news consumers' voices into the conversation? Separately from this point about a coalition, I've been mulling whether something like an audience union would be useful for improving coverage (and whether such a thing already exists).
[hq]Hi Dave - yes we're trying to run a number of town halls, as well as undertaking audience research to ensure audiences are included in these conversations. Some news orgs like the BBC have audience councils so it would be interesting to connect with them on this issue.
[hr]Totally agree we need a multi-disciplinary / multi-stakeholder group to address issues of accurate, balanced news coverage on Facebook and other platforms. I'd add UED experts and data scientists to the technologists list.
[hs]some did go this way: CPB/NPR. good counterweight potential, but also subject to pressure.
[ht]As it appeared on April 8, 2015. This website - and all its complexity - is explained in the video below.
[hu]I don’t know how the White House Pool Reporters are selected, but it seems to include a variety of media outlets.
[hv]Trump has his official Transition 2017 news page.
https://www.youtube.com/channel/UC_NRgn1L4zVWPOEI5Mt5Tog
[hw]This reminded me of another issue: attention spans. They're increasingly on the decline. "Humans have a shorter attention span than goldfish, thanks to smartphones" - http://www.telegraph.co.uk/science/2016/03/12/humans-have-shorter-attention-span-than-goldfish-thanks-to-smart/ This has big implications for how people consume news - they rarely make it past the headline. I think even if we solve the fake news problem, there's another, much bigger hurdle of getting people to actually read and understand stories, rather than jumping to conclusions based on headlines. People are simply losing the patience to read. I fear that it's something we can't solve.
[hx]Libraries, which have done so much to promote digital literacy, can be great partners for media/info literacy. There's a lively conversation going on on one of the ALA listservs about fake news and the librarian's role.
[hy]Yes! From an early age.
[hz]Nice simple nudge
[ia]Agreed
[ib]+1
What's described here is largely how climatefeedback.org works for climate science issues. Could be adapted to other scientific fields
[ic]Careful. I think we need to offer solution frameworks that are not draped in political ideologies.
[id]thanks to whoever put up these comments, appreciated
[ie]No problem. Glad to offer help. I hope my suggestions are accepted in the spirit in which they're offered. Someone sent plausible but fake news to my wife through FB at the height of the election frenzy. The fake info was edited into a graphic. I knew right away it was fake because I live this stuff. She did not know it was fake, however, nor did whoever sent it to her. So this is a real problem.
The contributors here are clearly very advanced in their learning. The more transparent and open you can be with the solution, and the more safeguards you can include to ensure flushing the fakes does not turn into viewpoint suppression (consciously or unconsciously), the more likely it will work in terms of legitimate debunkings being accepted by your target readers.
Let's be honest. Though not personal, there is a lot of mutual distrust and, probably, realistically, a lot of ill will in this cultural divide.
What would people say if I proposed we turn over the fake news problem to a team of right-wing researchers and developers from the South who feel strongly that the nation is now headed in the right direction with Donald Trump as president? Yes, we are all white men from the South, and we are very eager to see Trump implement his plans, but we're well credentialed and technically qualified for the work so -- trust us! -- we will fix it. No need to worry about how our biases may come into play. Sound ridiculous? Well, in effect, that's the arrangement people on my side of this divide are being asked to accept.
Without radical transparency and meaningful ideological diversity on the teams, people on my side of the divide, I can assure you, will assume, fair or not, that you're not doing enough to compensate for the impact on the solution of your own strong ideological commitments and cognitive biases and/or that this is all a pretext and a lot of acrobatics to justify suppressing conservative and pro-Trump views on social media.
[if]Thank you so much for this perspective. Please note that I edited the title and removed "Good Faith Comments/" before "Suggestions" but only to allow for a clearer title in the left hand column of the document.
[ig]In response to a previous comment:
"What would people say if I proposed we turn over the fake news problem to a team of right-wing researchers and developers from the South who feel strongly that the nation is now headed in the right direction with Donald Trump as president?"
Some input, because this is exactly what we are up against in regards to regulatory matters dealing with Net Neutrality, tariffs, etc. Incredibly powerful corporations are seeking 'equal footing' alongside users now that they are up against the so-called 'Titans': Google, Uber, Amazon, Netflix, etc.
Whereas before they could be considered natural 'enemies', the whole scenario has changed. All viewpoints need to be taken into account.
Regulatory agencies caught in the bind are trying to adapt as best they can. Some examples:
1) sending forth special consultations to the general public and key industry players (TRAI in India in reference to the Free Basics initiative)
2) 'closing off their ears' (European Commission - when up against Brexit propaganda - did not confront it)
3) remaining silent (FCC postponing all critical legislation until new Adm comes in)
4) accepting they do not know and exploring all the different options and views, gathered from the experience of others (CRT in Colombia)
[ih]Agreed
[ii]Assume that people will be trying to beat the algorithms, so ensure flexibility is built in.
[ij]Eli - Very light editing has been done but just for formatting purposes.
[ik]The best version of this is the new website Tribeworthy (https://tribeworthy.com/). Their platform empowers news consumers to critically review any online article. The reviews create a trust rating for each article, author, and outlet, providing news consumers with helpful feedback when deciding where to get their news. This is what they're calling Crowd Contested Media, and it's really cool and really good at holding online media accountable.
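For anyone wondering how crowd reviews could become a robust trust rating without tiny samples dominating (this is not necessarily Tribeworthy's method, just one standard approach): use the lower bound of the Wilson score interval, so an article with 3 of 3 positive reviews doesn't outrank one with 90 of 100.
```python
# Sketch: turn (positive, total) review counts into a trust rating using
# the Wilson score interval's lower bound. One standard approach; not
# necessarily what any particular site actually uses.
import math

def trust_rating(positive: int, total: int, z: float = 1.96) -> float:
    """Pessimistic estimate of the true 'trustworthy' rate at ~95% conf."""
    if total == 0:
        return 0.0
    phat = positive / total
    denom = 1 + z * z / total
    centre = phat + z * z / (2 * total)
    margin = z * math.sqrt((phat * (1 - phat) + z * z / (4 * total)) / total)
    return (centre - margin) / denom

print(round(trust_rating(3, 3), 2))     # -> 0.44 (too few reviews to trust)
print(round(trust_rating(90, 100), 2))  # -> 0.83
```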
[il]You can likely do this by pulling data from Wikidata or Wikipedia already. If the publication exists on there, the info should be available through the API (a sketch of such a lookup follows after this thread).
[im](I work there.) Happy to help connect you to folks who may be interested in this. mkramer@wikimedia.org :)
[in]Thanks! That's definitely the place to start.
[io]I also think you need to account for small newspapers that are very local and very good. I live in a town of 19,000 people in North Carolina. The student newspaper the next town over does an amazing job of covering local news/issues. Its audience/circulation numbers will never be high, but the quality of its journalism is quite good.
[ip]Publications and journalists often don't make the relevant information available. It would take some reporting. I've found that the only journalists on Wikipedia are often those who have published books or whose press agencies create websites for them. One rarely knows who is editing an article. Circulation numbers are rarely updated. Wikipedia is a prime source, but I'd estimate only 25% of the information you really want exists there.
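For anyone who wants to try the Wikidata route from this thread, a minimal sketch against Wikidata's public API. The endpoints are real; which properties actually exist for a given publication varies, as the comment above notes.
```python
# Sketch: resolve a publication name to its Wikidata entity. Endpoints are
# Wikidata's real public API; no error handling, and the top search hit is
# assumed to be the right one.
import requests

API = "https://www.wikidata.org/w/api.php"

def wikidata_entity(name: str) -> dict:
    """Search for `name`, then fetch the full entity (labels, claims...)."""
    search = requests.get(API, params={
        "action": "wbsearchentities", "search": name,
        "language": "en", "format": "json"}).json()
    qid = search["search"][0]["id"]  # top hit only, in this sketch
    entities = requests.get(API, params={
        "action": "wbgetentities", "ids": qid, "format": "json"}).json()
    return entities["entities"][qid]

paper = wikidata_entity("The New York Times")
print(paper["labels"]["en"]["value"], "-", len(paper["claims"]), "claims")
```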
[iq]I would love to get in touch with the person who wrote this bit about Linked Data. Please ping me via email.
[ir]https://www.linkedin.com/in/ubiquitous
[is]Thanks for reaching out, Tim. I would like to learn about Linked Data and Verifiable Claims and, if possible, collaborate on putting these ideas in front of as large an audience as possible, with the stated aim of getting the attention of the people of the web, the social media companies, the CMS software providers, and the popular blogging platforms, to encourage them to adopt these standards - and on exploring various tools built on the Linked Data and Verifiable Claims platform/spec/infrastructure as a means of forcing the hand of the other players.
Pinged you on linkedin. I am https://www.linkedin.com/in/gautampriya on linkedin.
[it]https://www.w3.org/DesignIssues/ is a good starting point, but I'll put you in touch with some folks who work at Oracle on Linked Data-related stuff...
[iu]see also; https://docs.google.com/presentation/d/1pFGC1G7CbizUuvbmjECfnNRL4fZk9QLxG8d3nehgwNU/edit#slide=id.p
And: https://www.w3.org/community/credentials
[iv]Thanks.
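For readers who want to see what a Verifiable Claim actually looks like before diving into the specs linked above, a minimal sketch of one asserting a publisher's identity, following the shape of the W3C data model; the issuer, DIDs, and claim fields are all hypothetical.
```python
# Sketch: the shape of a W3C-style Verifiable Credential asserting that a
# news site is a vetted publisher. All identifiers below are hypothetical.
publisher_credential = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiableCredential", "PublisherCredential"],
    "issuer": "did:example:press-accreditation-body",  # hypothetical issuer
    "issuanceDate": "2017-03-01T00:00:00Z",
    "credentialSubject": {
        "id": "did:example:example-daily-news",
        "name": "Example Daily News",
        "url": "https://news.example.com",
        "editorialStandards": "https://news.example.com/standards"
    },
    # The proof block is what makes the claim verifiable: it is signed by
    # the issuer, so anyone can check it wasn't forged by the publisher.
    "proof": {
        "type": "RsaSignature2018",
        "created": "2017-03-01T00:00:00Z",
        "proofValue": "..."  # elided; a real credential carries a signature
    }
}
```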
[iw]Ping to note my interest in this topic. I just linked an article I wrote elsewhere in this mega doc
https://theconversation.com/could-an-auto-logic-checker-be-the-solution-to-the-fake-news-problem-73223
[ix]Hyperlinks with the date (yymmdd), source, title are being used for the time being but the format can be adapted for more formal research.
[iy]Not sure where this should go, but Craig Silverman's work at the Columbia Journalism School's Tow Center is a really good internal (journalist/newsroom) look at where they go wrong, and how newsrooms end up participating in fake/bad stories going viral. Link to a 168-page white paper based on his research using a rumor-tracking tool called Emergent. http://towcenter.org/research/lies-damn-lies-and-viral-content/
[iz]Posted it under Resources - Dynamics of Fake Stories, where you suggested. Excellent read, thanks :)
[ja]Really important point. Could an algorithm ever differentiate between witty satire and deliberately misleading fake news? A satire icon/feature on satirical articles/posts could help but might that spoil the fun when satire is accidentally taken as real news? (Like when Jack Warner, a former vice president of world soccer’s governing body, FIFA, defended himself against corruption charges by citing an article from The Onion).
[jb]A brilliant essay by a French cognitive sociologist who studies how the Internet is so well suited to the creation, spreading, and consolidation of rumors, modern mythologies, and beliefs, by revealing and amplifying very primitive aspects of the human condition. Not available in English AFAIK, unfortunately.
[jc]Here is an introduction to this book, in English. http://www.angie.fr/en/caractere/gerald-bronner/
[jd]https://tribeworthy.com does a lot of what you're proposing. Tribeworthy is being called the Yelp for news consumers, so bipartisan public discourse around news is its specialty. Check it out!
[je]So I wrote an app where people can share links to pages e.g. from news sites, highlight text and comment on the fly side-by-side. But being a developer type, I'm not sure what to do with it now it's built. Maybe it (and I) could be of help somehow, have a look: http://thin.glass (be patient on first load, it's on a rather slow server atm).
[jf]Sounds a lot like this https://hypothes.is/
[jg]Consider.it sort of does this: http://www.poynter.org/2016/here-are-27-ways-to-think-about-comments/401728/
[jh]Very cool. Testing it here and there.
[ji]This is the goal of the web annotation architecture (w3.org/annotation). Hypothes.is enables this for communities like ClimateFeedback.org, as well as individuals. I encourage folks that want to move in this direction to pursue interoperable approaches, so that implementations work together.
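To make the interoperability point concrete, a minimal sketch of an annotation under the W3C model, attaching a fact-check note to an exact quote in an article; the URL and text are made up.
```python
# Sketch: a W3C Web Annotation (www.w3.org/ns/anno.jsonld data model)
# pinning a fact-check comment to a quoted passage. Example values only.
annotation = {
    "@context": "http://www.w3.org/ns/anno.jsonld",
    "type": "Annotation",
    "body": {
        "type": "TextualBody",
        "purpose": "commenting",
        "value": "This figure contradicts the official statistics."
    },
    "target": {
        "source": "https://news.example.com/article-123",
        # TextQuoteSelector anchors the note to the exact quoted words,
        # so any conforming client can resolve it on the same page.
        "selector": {
            "type": "TextQuoteSelector",
            "exact": "crime has tripled since 2010",
            "prefix": "statistics show that ",
            "suffix": ", according to"
        }
    }
}
```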
[jj]Perhaps the most important website on the internet right now. I'm so proud to be a part of the Crowd Contested Media movement.
[jk]Hey +john@fiskkit.com glad to see you here!
[jl]You too +litvins@gmail.com ! We're trying to increase coordination among the various communities working on this right now.
[jm]Awesome, we're working on a fake news detector in our software as well. What we really need to do is find a way to stop Trump's tweets!!
[jn]This is very alarming: both the WaPo piece and the original reports are very biased and bear some signs of state actors.
[jo]But how does one or many curtail this massive violation of our trust and common morality?
[jp]According to @AdrianChen:
"When I began researching the story, I assumed that paid trolls worked by relentlessly spreading their message and thus indoctrinating Russian Internet users. But, after speaking with Russian journalists and opposition members, I quickly learned that pro-government trolling operations were not very effective at pushing a specific pro-Kremlin message—say, that the murdered opposition leader Boris Nemtsov was actually killed by his allies, in order to garner sympathy. The trolls were too obvious, too nasty, and too coordinated to maintain the illusion that these were everyday Russians. Everyone knew that the Web was crawling with trolls, and comment threads would often devolve into troll and counter-troll debates.
The real effect, the Russian activists told me, was not to brainwash readers but to overwhelm social media with a flood of fake content, seeding doubt and paranoia, and destroying the possibility of using the Internet as a democratic space. One activist recalled that a favorite tactic of the opposition was to make anti-Putin hashtags trend on Twitter. Then Kremlin trolls discovered how to make pro-Putin hashtags trend, and the symbolic nature of the action was killed. “The point is to spoil it, to create the atmosphere of hate, to make it so stinky that normal people won’t want to touch it,” the opposition activist Leonid Volkov told me.”
http://www.newyorker.com/news/news-desk/the-real-paranoia-inducing-purpose-of-russian-hacks
[jq]Kate - We need to see what to do with this. Just let it flow with the rest until we get to the final copy. *IF* we ever reach that stage :D
[jr]This is great - how will we solve for the lack of faith problem? Lack of faith in the source, the truth? How will we build confidence in this kind of a tool?
[js]Honestly, how do we know we can trust you? I mean seriously, some of your targets are legitimate sites, just more liberal. They might be more socialist-leaning, but that doesn't always equate to Russian propaganda, or any propaganda.
[jt]Good piece on this: https://theintercept.com/2016/11/26/washington-post-disgracefully-promotes-a-mccarthyite-blacklist-from-a-new-hidden-and-very-shady-group/
[ju]Of Note: There's some (in my view, legitimate) concern about the methods of this group. More here: https://theintercept.com/2016/11/26/washington-post-disgracefully-promotes-a-mccarthyite-blacklist-from-a-new-hidden-and-very-shady-group/
[jv]Browser extensions are great. But time spent on mobile, in closed apps, is the bigger problem - particularly sharing on "dark social", i.e. messaging apps. A particularly insidious problem for Asian markets.