Regulate the web
a presentation by Stephanie Rieger
Free expression, harmful speech, and the future of the internet
Some rights reserved - CC-BY-NC - hello@yiibu.com
Comments? Questions?
Leave them on a slide, and I will do my best to respond :)
The future will bear witness to these extraordinary times through the voices of hope, despair, anger, joy, creativity, and collaboration captured by billions of us on the internet.
Source: u/CanadianCola
That’s a big responsibility for a space that (for all the good it brings) can so often feel incredibly broken.
This presentation is as much a story of how we got to this point, as where we might be headed next.
Because if you think today’s internet has problems—the future that’s currently taking shape risks not only doing little to solve them, but creating entirely new ones.
Source: Jeenah Moon
To better understand what’s wrong with the current path (and how we might re-imagine things for the better) let’s start at the very beginning…
Today’s internet is a far cry from what its creators envisioned…
a space that was technologically rooted in open and decentralised protocols, and philosophically rooted in the ideals of free speech, and open and democratic access to information.
“I thought all I had to do was keep it, just keep it free and open and people will do wonderful things…If you’d asked me 10 years ago I would have said humanity is going to do a good job with this…if we connect all these people together, they are such wonderful people they will get along. I was wrong”
In retrospect, this was a tad naive…
We now realise that open protocols and ideals don’t guarantee an open web…
…that features designed to give everyone a voice can also be weaponized to cause unspeakable harm…
…to spread hate and division, or exploit fear and weakness during difficult times…
UNESCO research has identified nine key themes in the Covid-19 disinfodemic. So many false narratives, and a wide range of goals for spreading them.
…or sow chaos by undermining the values and institutions we all rely on.
Source: The Guardian
“Welcome to one of the central challenges of our time. How can we maintain an internet with freedom of expression at the core, while also ensuring that the content being disseminated doesn't cause irreparable harm to our communities, our democracies and our physical and mental wellbeing.”
When there are problems with the web, the reflex is often to look for solutions within the technologies and standards that underpin it.
And although tech will have a role to play in solving these challenges, it can only be part of the solution.
To better understand why this is the case, let’s explore a very different yet no less critical driver of the web’s evolution—regulation.
Law is code*
Innovation, experimentation, and freedom of expression
*riffing off Lawrence Lessig
PART 1
If you publish a newspaper, you are considered the ‘publisher’ and are liable for the content within it.
This is the opposite of the approach the law takes for phone networks, who aren’t liable for the things you might say on a call.
In the internet’s early years, web sites were considered just another kind of publisher. This was OK during the brief period when the internet was small, and mostly consisted of academics and researchers.
Pretty soon however, people started objecting to the content found on some services, and the authorities started coming after the entities that hosted that content.
*In solidarity, other French internet providers even agreed to cut their customers’ access to Usenet for a week, as a form of ‘electronic strike’ :)
1991
CompuServe defamation case
1994
Prodigy defamation case
1995
Bridgesoft v. Lenoir
Copyright infringement
1996
Francenet & Worldnet servers confiscated for hosting porn*
1997
CompuServe’s GM prosecuted for hosting violent content
In the United States the outcome of these cases was particularly troubling, as it left websites with one of two extreme choices…
Sites are treated as mere ‘distributors’ of content, so are safe from litigation.
Sites are treated as the ‘publisher’ so can be punished for content they don’t take down.
This was referred to as the “moderators’ dilemma”
(A third option could’ve been not to host third party content at all…or maybe host it but review everything ahead of time, and be super conservative about it, but even then still risk litigation)
“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider”
— Section 230, Communications Decency Act (1996)
Then, in 1996, came 26 words that would go on to define the internet as we know it.
Four years later, the EU would provide its own version of intermediary liability protection through the 2000 E-Commerce Directive
The goal of both of these laws was to protect ‘internet intermediaries’ from litigation.
Intermediary
Services and platforms (such as Facebook, Dropbox, and Wikipedia) that host, give access to, index, or facilitate the transmission and sharing of content created by others.
The hope was that this would encourage both freedom of expression and innovation using this emerging medium.
While Section 230 and its EU counterpart aren’t identical, they share two key component parts—often referred to as the sword and the shield.
Disclaimer:
Simplification follows. I’ve read a lot about this, but I am not a lawyer
The information contained in this deck is provided for informational purposes only, and should not be construed as legal advice on any subject matter.
This part of the law effectively states that internet companies are not liable for their users’ content.
The shield
This means that…
…can’t be sued if someone streams a violent video.
This means that…
…isn’t liable for potentially defamatory articles.
This means that…
…can’t be sued because of a comment left on their blog.
317,000 status updates
400 new users
147,000 photos uploaded
54,000 shared links
The shield’s provision has a few limitations*, but in practice means that companies can experiment with all kinds of services that include user-generated content, without the need to pre-screen every single piece of content to ensure nothing ‘bad’ ends up online.
(Which is particularly handy given the astronomical amounts of content we now post)
every 60 seconds…
*e.g. in the US, the shield doesn’t apply to copyright violations, federal criminal law violations, and (since 2018) sex trafficking content.
If the world were nothing but harmless content, the shield might have sufficed—but I don’t think any of us want to imagine platforms like Facebook and YouTube without some form of content moderation.
Which brings us to …
This part of Section 230 says that while companies aren’t forced to moderate the content they host—they can (in good faith) do so to remove material they believe to be offensive, disturbing, or otherwise user-alienating…and *all* websites do.
(Here U.S. and EU liability laws differ. In the U.S., sites aren’t liable for what they choose to take down or leave up. In the EU, sites can lose liability protection if they are made aware of potentially illegal content—and do nothing about it.)
The sword
To set the parameters that will guide content moderation, each company devises a set of 'house rules’ that is representative of the site's values*, and the kind of environment they wish to enable…
Sites then create processes to enforce the rules. These vary from site to site, but typically include a mix of automation, user-initiated flagging, and review by human moderators.
Internal flowchart Facebook created to explain how hate speech regarding migrants should be actioned.
[Flowchart: a chain of yes/no checks. Does the post mention a protected characteristic (PC)? Or a quasi-PC, such as migrants? Is the PC (or quasi-PC) mentioned along with other details, and are ALL of those details also protected? Is the PC/QPC or subset being attacked? Each branch ends either in DELETE or in “consider other policies”.]
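Reduced to code, the flowchart is essentially a short chain of conditionals. A minimal sketch of that logic (the type and field names are mine, purely illustrative—an approximation of the published flowchart, not Facebook’s actual code):

```typescript
// Illustrative only — an approximation of the flowchart's logic, not Facebook's code.
type Decision = "DELETE" | "CONSIDER_OTHER_POLICIES";

interface Post {
  mentionsPC: boolean;                       // protected characteristic (e.g. religion, nationality)
  mentionsQuasiPC: boolean;                  // quasi-protected characteristic (e.g. migrants)
  otherDetails: { isProtected: boolean }[];  // other attributes mentioned alongside the PC
  isAttacked: boolean;                       // is the PC/QPC (or subset) being attacked?
}

function actionHateSpeech(post: Post): Decision {
  if (!post.mentionsPC && !post.mentionsQuasiPC) return "CONSIDER_OTHER_POLICIES";

  // If the PC is mentioned along with other details, ALL of those details
  // must also be protected for the hate-speech policy to apply to the combination.
  const allDetailsProtected = post.otherDetails.every(d => d.isProtected);
  if (post.otherDetails.length > 0 && !allDetailsProtected) return "CONSIDER_OTHER_POLICIES";

  // Finally: only delete if the protected group is actually being attacked.
  return post.isAttacked ? "DELETE" : "CONSIDER_OTHER_POLICIES";
}
```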
As it happens, the most popular services that host third-party content are based in the U.S. Their rules are therefore not only legally subject to the provisions of Section 230…
Number of people using social media platforms (MAU, 2005-2018) via Our World In Data
…but often culturally influenced by America’s tradition of robust free expression, born of and protected by the 1st Amendment to the U.S. Constitution.
“Generally, we remain neutral as to the content because our general counsel and CEO like to say that we are the free speech wing of the free speech party”
— Tony Wang, Twitter GM UK (in 2012)
It’s difficult to overstate how important the very simple yet powerful legal protections provided by the sword and shield have been for freedom of expression, and the growth of the internet as we know it. These laws not only enabled the rise of the services that billions of us use to share ideas, information, and opinions…
…they also enabled the very existence of hundreds of millions of personal blogs, non-profit websites, and open source projects that would never have thrived if subjected to a constant risk of lawsuits.
“The Wikipedia we know today simply would not exist without Section 230.”
— Leighanna Mixter, Wikimedia Foundation
All things considered, this arrangement seemed to work well enough until online communication went mobile, began to centralise, and a few of these sites got really, really big.
With billions of users and millions of posts every minute, content moderation becomes *much* harder.
527,760 photos shared
4,146,600 videos watched
456,000 tweets shared
46,740 photos posted
As your audience is now global, your single set of made-in-California ‘house rules’ regularly clashes with local laws, cultures, religions, political norms, and attitudes towards everything from free speech to nudity.
People should be able to make statements that ________ publicly.
Source: Pew Research, Spring 2015 Global Attitudes survey
Your giant platform is now also a giant target.
Bad actors flood your site with ‘coordinated inauthentic behaviour’…
…and post things your tech and moderators hadn't realised they should be looking for.
“The video of the attack in Christchurch did not prompt our automatic detection systems because we did not have enough content depicting first-person footage of violent events to effectively train our machine learning technology.”
— Facebook press release, 2019
To make matters worse, the algorithms your site relies on to help people make sense of their humongous feeds, grow ‘engagement’, and sell ads rarely discriminate.
If it’s popular—they'll make sure *even more people* see it.
“There’s a lot of bad things that are good for engagement… conspiracies, divisiveness, radicalisation, terrorism. All these things seem pretty terrible, but for the metric of engagement, they’re amazing.”
— Guillaume Chaslot, The Toxic Potential of YouTube’s Feedback Loop
And what algorithms don’t spread, humans likely will— compelled as we are by our very nature to share things we love or hate.
“The New Zealand massacre was livestreamed on Facebook, announced on 8chan, reposted on YouTube, commented about on Reddit, and mirrored around the world before the tech companies could even react.”
— Drew Harwell (@drewharwell) March 15, 2019
This might all still be OK, *if* we had a far wider range of meaningful alternatives to the use of giant platforms whose size and influence merely exacerbate all of these problems.
“Even in normal circumstances, the scale of a few dominant platforms leaves ordinary people with little choice but to use them. Now, the COVID-19 crisis has propelled us to the point where these platforms are embedded even more deeply in our lives, and further entrenched in our social and political structures.”
— CIGI, Rethinking digital platforms for the post Covid-19 era
Which brings us to where we are today…
People are angry, governments want solutions, and they are choosing to act through new regulations aimed at the two most obvious targets—the intermediaries that host all of this content, and the liability laws that protect them*.
Intermediaries
+
*and us…although it may not always seem that way, these laws protect our freedom of expression, and our experience when we use these services!
“Whether it’s in India, the United States, or the European Union itself, lawmakers are grappling with what is ultimately a really hard problem—removing 'bad' content at scale without impacting ‘good’ content, AND in ways that work for different types of internet services AND don’t radically change the open character of the internet.”
— Mozilla
Law is code
Consolidation and censorship
PART 2
Designing regulation is a bit like coding. You build tiny modules and string them together to make a larger thing. Some of these you invent, and some you borrow from places that you trust, or where they’ve been proven to work.
(Why is this important? Because the EU is currently revising many of its already substantial internet regulations, and countries around the world—including many where hosting speech is currently poorly protected—are starting to copy the parts that best suit their tech regulation goals.)
Let’s look at three emerging approaches that aim to regulate what content can appear online, and seem likely to be widely emulated.
1. STRICT REMOVAL TIMEFRAMES
Companies must act very quickly to assess and remove certain types of ‘bad’ content.
Network Enforcement Act (NetzDG), 2017
Sites must investigate and take down “obviously illegal” speech such as incitements to hatred, or terrorist propaganda within 24 hrs of it being reported.
AN EXAMPLE
See also recent violent material laws in Australia, Pakistan, Nigeria, Vietnam, Ethiopia, Austria, 1-hr removal in France, and the EU Terrorist Directive.
Most content that people report to platforms is what lawyers refer to as “awful but lawful”.
It may be upsetting or insulting, it may make you sad or angry, it may cause you to spend less time on that site, or censor your own speech, but it would not necessarily be considered illegal.
Assessing the lawfulness of content is also greatly dependent on context.
A photo depicting graphic violence posted by a terror organisation to glorify the event, or incite similar ones…
…would in most cases be found unlawful
The very same photo posted by…
…would in most cases NOT be found unlawful
The analysis required to make this distinction can take time, may not be straightforward, and in the ‘real world’ requires lawyers and the courts…who may often disagree.
Yet companies are now being asked to play the role of the courts, and pressured to not only make 'the right decision'—but do so very quickly—and with no public record of how the decision was made, and often no way to appeal.
lawyers can tell it’s unlawful
lawyers can tell it’s lawful
lawyers probably disagree about it
Diagram inspired by Intermediary Liability 101, Daphne Keller
What’s more, when faced with this choice, companies have little (legal) incentive to err on the side of freedom of expression…
End result—even the most well meaning site may choose to be overly cautious, prioritise speed, and delete large amounts of content*—even stuff that’s perfectly lawful.
*Over-removal is sadly more likely to happen with smaller sites who won't have large, well trained teams ready to review content 24/7, so may simply apply an “if in doubt, just take it down” approach.
2. MODERATOR’S DILEMMA 2.0
Companies can be held liable for simply knowing illegal content exists.
FOSTA/SESTA, 2018
Sites can now be sued (or owners jailed for 10 years) if they “promote or facilitate prostitution” or “knowingly assist, facilitate, or support sex trafficking”
AN EXAMPLE
*See also the proposed EARN IT Act in the US.
The moderator’s dilemma is back. In the U.S. (in the specific context of sex trafficking material) sites can now be liable if they moderate, but miss something, or make the ‘wrong’ decision about the content they leave up.
“[Given our scale] if we are 99.999% accurate we are still making 1.5 million mistakes a month”
— Del Harvey, Twitter (in 2013!)
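A quick back-of-the-envelope shows the scale that quote implies (the monthly volume below is derived from the quote’s own numbers, not a published Twitter figure):

```typescript
// If 99.999% accuracy still produces 1.5 million mistakes a month, the platform
// must be making on the order of 150 billion moderation-relevant decisions a month.
const errorRate = 1 - 0.99999;                   // 0.001% of decisions go wrong
const mistakesPerMonth = 1_500_000;
const decisionsPerMonth = mistakesPerMonth / errorRate;
console.log(decisionsPerMonth.toLocaleString()); // ≈ 150,000,000,000 — roughly 5 billion a day
```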
Terrified of hosting any content that could be *related to* sex trafficking, many sites have opted for the simplest option: expanding their 'house rules’ to proactively ban broad categories of content that might cause them risk.
This caused the removal of perfectly lawful and valuable content such as sex education forums, sex worker support groups, LGBT resources, and niche online dating services.
A few small sites also chose to simply shut down.
Within months, many sites banned most (remaining) forms of nudity or ‘adult' conversation*.
*Some companies admitted this was directly caused by FOSTA/SESTA, while others simply called it a ‘policy change’. Some larger sites were (thanks to their more sophisticated moderation capabilities) able to make exceptions for art, medical photos, erotica etc.
3. ACTIVE MONITORING
Laws that mandate companies ‘actively monitor’ special categories of content.
Article 17, EU Copyright Directive
Makes companies liable for content used without permission as soon as it’s uploaded.
AN EXAMPLE
*More on the EU Copyright Directive. Also see the EU Terrorist Directive , and a recent bill in New Zealand.
The only way to check every single piece of content against potential copyrighted works (and this applies whether you’re Facebook, or Tumblr, or Wikipedia, or simply have a personal blog) is through entirely automated tech that compares vast databases of rights-holder-verified content to the things people want to post.
There is nothing nuanced about an automated upload filter.
Today’s content matching technologies can’t yet recognise critical contexts such as 'fair use’, parody, or permitted citations—and the ability to do so isn’t coming anytime soon. They are therefore likely to flag and remove large amounts of perfectly lawful content.
Source: Trump UK visit memes
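Under the hood, these filters largely reduce to a fingerprint lookup: fingerprint the upload and check it against a database supplied by rights holders. A deliberately naive sketch (the database and hashing choice are placeholders) that makes the core problem visible—nothing in this code can ask *why* the content is being posted:

```typescript
import { createHash } from "node:crypto";

// Placeholder database of fingerprints supplied by rights holders.
const rightsHolderFingerprints = new Set<string>();

function fingerprint(content: Buffer): string {
  // Real filters use perceptual/audio fingerprints that survive cropping and
  // re-encoding; a cryptographic hash is used here only to keep the sketch short.
  return createHash("sha256").update(content).digest("hex");
}

function allowUpload(content: Buffer): boolean {
  // Match = block. Parody, citation, and fair use never enter the decision.
  return !rightsHolderFingerprints.has(fingerprint(content));
}
```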
A widespread need for upload filters also risks further entrenching the power of large players.
If you’re a non-profit with a small operating budget, or a startup trying to grow a new service, or even a mid-sized established service like Wordpress.com, and you now have to provide this kind of filtering for your user base—will you have the necessary skill or resources to do so?
$100 million (USD)
Cost to develop ContentID, YouTube’s filter designed to identify copyrighted video, audio, and melodies.
€900/mth
Cost to filter 5000 audio files using Audible Magic, a 3rd party content monitoring service.
And if you think potentially losing a few Baby Yoda memes to copyright filters isn’t so bad, consider that upload filters are also suggested in the EU’s upcoming Terrorist Directive.
This raises the question: Would the terabytes of footage of police violence, bloodied protestors, and tear gas emanating from the US (or Hong Kong and Belarus, or countless other places where the web has enabled us to bear witness to injustice) have survived a world of widespread content filters?
“Regrettably, despite the fact that many great minds in government, academia, and civil society are working on this hard problem, online content regulation remains stuck in a paradigm that undermines users’ rights and the health of the internet ecosystem, without really improving users’ internet experience.”
— Mozilla
The current wave of regulation isn’t all bad*, but it seems to presume that because we have a giant-tech-platform-shaped-problem, we need an equally giant-tech-platform-shaped-solution.
*Germany’s NetzDG for example includes some widely welcomed reporting and transparency provisions.
“So long as we are wedded to the idea that a few large companies will set the rules for speech and discussion online, we will constrain the solution-space of possible interventions.”
— Ethan Zuckerman, The case for digital public infrastructure
It feels as if we’ve reached a critical fork in the road.
A time where we can either choose to ‘fix’ the web by restoring the openness, diversity, and public stewardship of its early years—or accept a future where its largest players are legally compelled to define and manage both its shape, and value to society.
“…the monopolist’s top preference is not to be regulated but their second preference is to be regulated in a way that only they can possibly comply with.”
Code is law?
Protocols…not platforms
PART 3
One company…
and so on…
Most internet services we use today are centralized.
As these companies scale, they are faced with a near impossible problem: how to create and apply policies that do justice to the sheer diversity in human speech and culture, and the myriad reasons we choose to create, share, and spend time on the web.
“We need the flexibility to build a wide diversity of tools, for a wide variety of purposes. What we have right now with Facebook is one room—and we try to use it as a church, we try to use it as a lecture hall, we try to use it as a bar, we try to use it as a hotel. It doesn’t work because it’s the same damn room…we might want to have different architectures and different ways of building for different systems and different purposes.”
— Ethan Zuckerman, Fixing social media (40:00)
An alternative idea to enable this diversity, would be to refocus online social spaces away from a primarily centralised model—towards a new generation of open protocols that shift both the architecture, and the incentives of network participants.
READ THE ESSAY
Altering the internet's economic and digital infrastructure to promote free speech. By Mike Masnick
Source: Midem
Let's see what such a protocols might enable using a real-world example: a decentralised social app i’m currently using called Planetary.
Don't you deserve a better social network?
Wouldn’t you rather have a social network that respected your privacy, resisted abuse and harassment, rewarded content creators and was open by default? We would too, that’s why we’re building the world’s first mainstream client for a truly distributed social network.
Planetary is built using an open protocol called Secure Scuttlebutt, which enables the creation of different types of decentralized applications.
“You can think of a protocol as a set of rules and practices and behaviours that let different apps and services run by different people talk to each other.”
FYI - The Scuttlebutt reference implementation is written in JavaScript with Node.js. There are also active implementation efforts in Go, Python, and Rust.
Decentralized apps don't have to talk to a single master server (or to anyone's servers) to work. They can talk directly to each other, and to apps or servers created and run by other people.
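What makes this possible in Scuttlebutt’s case is that each identity is just a keypair, and each identity publishes an append-only log of signed messages that any peer can copy and verify. A rough sketch of one log entry (simplified; the field names approximate, rather than exactly mirror, the real protocol):

```typescript
// Simplified sketch of a Scuttlebutt-style log entry — not the exact wire format.
interface FeedMessage {
  previous: string | null;  // hash of the preceding message in this feed (null for the first)
  author: string;           // the author's public key — their identity
  sequence: number;         // 1, 2, 3, … strictly append-only
  timestamp: number;
  content: { type: string; [key: string]: unknown };  // e.g. "post", "contact", "about"
  signature: string;        // signed with the author's private key
}

// Because any copy of a feed can be verified against its author's key,
// it doesn't matter which peer you fetched it from — there is no master server.
function isValidAppend(prev: FeedMessage | null, next: FeedMessage): boolean {
  if (prev === null) return next.sequence === 1 && next.previous === null;
  return next.author === prev.author && next.sequence === prev.sequence + 1;
  // (A real implementation would also check next.previous and next.signature.)
}
```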
Many decentralised apps, and the protocols that underpin them, in some way aim to re-shape the relationship between people, their data, and the services they use.
Let’s see how Scuttlebutt’s values are reflected in code, and go on to shape the behaviours and incentives the protocol creates.
Scuttlebutt principles stack
PULL VS PUSH
Scuttlebutt is a ‘pull based’ protocol—
a bit like RSS. You simply point it at the people you want to follow.
This means that when you first join an app there’s nothing to see (and no one can see you)…
…until you start to make connections by following one of your friends.
(a friend 'inviting you in' with a link or QR code you scan on their phone is a common way to join)
Scuttlebutt is designed to mimic a real community. With each new person you follow, it’s as if you’re shining a light in a corner containing potential new friends.
And while this doesn’t prevent you from making friends in other communities, you aren’t immediately subjected to the content, replies, and conversations of total strangers.
You have to seek them out…
1 hop: user implicitly followed
2 hops: visible in the app
3 hops: fetched and stored, but user must seek out
Diagram source
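In code, what an app shows you is just a function of hop distance in the follow graph. A minimal sketch of that idea (the thresholds mirror the hop list above; the function names are mine):

```typescript
// Breadth-first walk of the follow graph, recording how many hops away each account is.
type FollowGraph = Map<string, string[]>;  // account -> accounts they follow

function hopDistances(me: string, follows: FollowGraph, maxHops = 3): Map<string, number> {
  const distance = new Map<string, number>([[me, 0]]);
  let frontier = [me];
  for (let hop = 1; hop <= maxHops; hop++) {
    const next: string[] = [];
    for (const account of frontier) {
      for (const followed of follows.get(account) ?? []) {
        if (!distance.has(followed)) {
          distance.set(followed, hop);
          next.push(followed);
        }
      }
    }
    frontier = next;
  }
  return distance;
  // 1 hop: implicitly followed · 2 hops: visible in the app
  // 3 hops: fetched and stored, but only surfaced if you seek them out
}
```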
ABUSE AND HARASSMENT RESISTANT
This architecture has some interesting properties.
Because you only download messages from your friends and their friends—Scuttlebutt networks tend to be more resistant to abuse, harassment, and spam.
Part of this is also social.
Friends, neighbours, or other groups that create a community around a common purpose are less likely to spam or harass each other, are more likely to work together to create rules that suit their specific context, and more likely to retain a shared understanding of the rules.
It’s also architectural. Scuttlebutt isn’t so much a network, as a network of networks. And while some may intersect, others can happily live entirely on their own.
(For this reason, it’s hard to know how many Scuttlebutt users there actually are).
Couldn’t this just create more echo chambers? Sure—but we have those already—primarily fuelled by opaque, mostly profit-driven algorithms that we can’t inspect or control. Certain communities choosing to self-isolate may be the lesser of those evils*.
(*Let's also not forget that for marginalised communities, such as the U.S. sex-workers who lost their community and support sites due to FOSTA—there would be significant benefits in a private community built through common goals and needs, that no one can arbitrarily shut down, and where they can ultimately set the rules. In fact, such a community currently exists on Mastodon, which is based on another decentralized protocol called ActivityPub.)
LOCAL FIRST
Planetary is local-first. This means that, instead of hosting every user’s content on a centralised server, each person hosts their own posts, and those of their friends.
Whenever you’re online, you grab whatever new content your friends have fetched, and send over whatever they don’t yet have.
When people follow you, they copy your storage container, and then sync it whenever you’re both online.
When you write a post it’s stored in your local storage.
[Diagram: “I have written a thing!”—the post goes into your own “stuff I wrote” store, which your friends then copy.]
You can even use Scuttlebutt without an internet connection.
Sync data with nearby friends over Bluetooth, then grab updates from more distant networks by syncing with people who’ve recently been online.
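The sync step itself is conceptually simple: when two peers meet—over the internet, Bluetooth, or anything else—each announces how far along it is on every feed it cares about, and each sends the other only the messages it’s missing. A rough sketch (the store interface is mine, for illustration only):

```typescript
// Illustrative gossip-style sync: exchange "how far have you got?" per feed,
// then fill in each other's gaps. The transport doesn't matter to the logic.
type FeedId = string;
interface Msg { sequence: number; content: unknown }

interface PeerStore {
  followedFeeds(): FeedId[];
  latestSequence(feed: FeedId): number;             // 0 if the feed is unknown
  messagesAfter(feed: FeedId, seq: number): Msg[];  // ordered by sequence
  append(feed: FeedId, msgs: Msg[]): void;
}

function syncWith(local: PeerStore, remote: PeerStore): void {
  for (const feed of local.followedFeeds()) {
    const mine = local.latestSequence(feed);
    const theirs = remote.latestSequence(feed);
    if (theirs > mine) local.append(feed, remote.messagesAfter(feed, mine));
    if (mine > theirs) remote.append(feed, local.messagesAfter(feed, theirs));
  }
}
```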
INTEROPERABILITY
Planetary has its own set of ‘house rules’, and plans to moderate content to facilitate a better experience, but because of the underlying protocol—if I’m not happy with the rules (or anything else about the app)…
…I’m free to take my identity, all my posts, and my friends to another compatible app* with different rules, interfaces, or features above and beyond what’s built into the protocol.
*Each Scuttlebutt app can write to several different content types, and each ‘reader’ can decide which ones they read.
Two scuttlebutt apps with different goals, features, and approaches
Planetary
Manyverse
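One way to picture that interoperability: every message carries a content ‘type’, and each app simply declares which types it knows how to render, ignoring the rest. A hypothetical sketch:

```typescript
// Hypothetical: two apps reading the same underlying feed through different lenses.
interface Message { type: string; body: unknown }

function view(feed: Message[], supportedTypes: Set<string>): Message[] {
  return feed.filter(m => supportedTypes.has(m.type));
}

const sharedFeed: Message[] = [
  { type: "post", body: "hello from my old app" },
  { type: "audio-clip", body: "…" },   // a type only some apps understand
  { type: "contact", body: "…" },      // follow/unfollow metadata
];

// Same data, same identity, same social graph — just rendered differently.
const appA = view(sharedFeed, new Set(["post", "contact"]));
const appB = view(sharedFeed, new Set(["post", "audio-clip", "contact"]));
```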
SO…HOW MIGHT A PROTOCOL-BASED FUTURE LOOK?
In this model, interoperability unlocks the possibility for a very different future. One where a small number of centralised platforms are replaced by a larger, and far more diverse decentralized ecosystem…
“Rather than relying on a few giant platforms to police speech online, there could be widespread competition, in which anyone could design their own interfaces, filters, and additional services, allowing whichever ones work best to succeed, without having to resort to outright censorship for certain voices…”
— Mike Masnick, Protocols not platforms
A centralized platform has sole control over…
data hosting
business model
trust & safety*
UI/UX/features
APIs
access rights
values/rules
moderation/appeals operations
*Trust & Safety (n): profession that develops and enforces principles and policies that define acceptable behavior and content online.
In this future, any person, group, or company could create a compatible app—either to form a new community, or better serve an existing one—and in doing so provide new functionality, or point a different lens at the content and conversations powered by the protocol.
With a multitude of apps able to ‘read’ the same content, users could throughout their lives dip in and out, experimenting with a range of communities—each providing different opportunities, and requiring different levels of commitment and responsibility from their users.
funding: subscription; purpose: discussion/tools/live gaming; content rules/values: community derived; content moderation: 3rd-party; access: open

“Local” — funding: taxation/fee; purpose: community news/discussion; rules/values: “British culture”; content moderation: in-house; access: UK citizens/residents

funding: donation; purpose: open knowledge; rules/values: “verifiable facts”; content moderation: community; access: open
Above examples are hypothetical :) For added perspective, see Toby Shorin’s excellent paid community ideas.
Other apps might instead exclusively focus on functionality, offering ‘last-mile’ customizations applicable to the content from any app to reduce cognitive load, uncover more diverse viewpoints, or enable users to tune the experience to suit their personal use case or tolerance-level for certain kinds of speech.
(Right: just a few of the “rules” in Gobo, an experimental social browser “that gives you control and transparency over what you see”. Gobo is open source, and was built by a small team at MIT Media Lab's Center for Civic Media.)
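Conceptually, these ‘last-mile’ customizations are just user-chosen, composable filters applied on the client, after content arrives but before it’s shown. A hypothetical sketch (the rules below are illustrative, not Gobo’s actual implementation):

```typescript
// User-controlled feed rules: unlike a platform's ranking algorithm,
// the user can see, toggle, and reorder every rule in the stack.
interface Post { author: string; text: string; likes: number }
type Rule = (post: Post) => boolean;   // true = keep, false = hide

const hideViral: Rule = p => p.likes < 10_000;   // mute what's already everywhere
const muteKeywords = (words: string[]): Rule =>
  p => !words.some(w => p.text.toLowerCase().includes(w));

function applyRules(feed: Post[], rules: Rule[]): Post[] {
  return feed.filter(post => rules.every(rule => rule(post)));
}

// Each person assembles their own stack:
const myView = (feed: Post[]) => applyRules(feed, [hideViral, muteKeywords(["giveaway"])]);
```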
By freeing each app from the automatic need to scale, interoperability could usher in a new era of socially owned applications.
This might include state-owned and tax-funded apps, usage-fee-funded ‘public utilities’, or even cooperatives, funded by, accountable to, and serving the needs of a neighbourhood, a family, or some other non-profit-seeking group with a common purpose.
Photo: Wikipedia
A final step might be to encourage even greater decentralization. Third party services that further broaden choice for users, and assist smaller apps with the complexities of facilitating (what may still be) highly diverse and fast-evolving global speech.
These services could themselves be built upon an open distributed protocol, or simply consist of trusted providers that compete to best exemplify and enable the forward-thinking values and functionality that governments, civil society, and global internet users demand.
Trust and safety as a [trusted] service
Some apps might even delegate choice of provider to their users.
Data intermediaries (trusts, co-ops, distributed storage*)
Services
Safeguards (NOTE: THESE MOSTLY DON’T YET EXIST!)
*Read Mozilla’s new report that explores seven potential data governance approaches, including data trusts and fiduciaries.
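In practice, ‘trust and safety as a [trusted] service’ could look like a small app calling out to a moderation provider its users (or their community) have chosen, instead of building that capacity in-house. A purely hypothetical sketch—as noted above, these safeguards and APIs mostly don’t yet exist:

```typescript
// Purely hypothetical — no such standard API exists today.
interface Verdict { action: "allow" | "warn" | "remove"; reason?: string }

interface TrustAndSafetyProvider {
  name: string;
  review(content: string, context: { community: string }): Promise<Verdict>;
}

// The app enforces the verdict, but the policy lives with a provider the user
// trusts — and that provider can be swapped without leaving the app.
async function publish(
  content: string,
  community: string,
  provider: TrustAndSafetyProvider,
): Promise<Verdict> {
  return provider.review(content, { community });
}
```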
A few final words…and a request
*except maybe for the part about big tech paying local taxes.
When I think of the challenges with today’s internet, I’m often reminded of the climate crisis—a collection of large, complex, multi-layered, and often interrelated problems that impact all of us—but disproportionately harm the most vulnerable amongst us.
A problem with no single solution, and no easy answers*.
A problem that we will only solve by leveraging all the tools at our disposal—technology, design, values, regulation—and ensuring each of these works hand-in-hand to reduce risk, protect others…and give us the necessary tools to protect ourselves.
Technology
Design
Values
Regulation
For this to happen, we will need to have brutally frank discussions about the kind of future we want, and the trade-offs we’re willing to accept to get there.
Neutral moderation
Reliable
Harassment free
No data leakage
No real-world harm
Trustworthy information
Free
Portable data
Minimal data collection
Censorship resistant
Surveillance resistant
Anonymous
Trade-off diagram courtesy of Alex Stamos in “The platform challenge: balancing safety, privacy, and freedom”.
We desperately need to move on from a “just fix all the bad things” attitude—be it aimed at our governments, the tech industry, or each other (e.g. when we suggest we just walk away from these spaces)—to a space where trade-offs become a core part of the conversation.
“Content moderation at scale is impossible to perform perfectly— platforms have to make millions of decisions a day and cannot get it right in every instance. Because error is inevitable, content moderation system design requires choosing which kinds of errors the system will err on the side of making.”
— Evelyn Douek, Covid-19 and social media content moderation
Trade-offs such as deciding which lesser harms we may be willing to trade for increased likelihood that the very worst harms will be held at bay.
(Merely defining “the very worst harms”…while accounting for local norms and cultures remains a huge challenge, and one more reason we need a basic architecture that allows for greater diversity.)
We will also need to consider which problems demand centralized action, and which will demand a more local, distributed, and community-specific solution.
Setting aside for a moment how you may feel about the companies on this list, would you be willing to trade the future potential for highly-targeted mass coordinated action (e.g. election integrity, CSAM monitoring, climate emergency coordination) for a primarily decentralized environment?
And if not—how might these very different models usefully, equitably, and architecturally co-exist?
Finding a useful, equitable, and long-lasting answer to the challenge of managing internet speech may be one of the most important decisions we make—and it’s one we must make together.
So to close, I have two things to ask of you…
March 2, 2019 demonstration in Berlin against Article 17 (ex-13) of the new EU Copyright Directive. Photo by Tim Lüddemann
1. Help advocate for more thoughtful regulation
Wherever you live in the world, there’s an organisation that you can follow to keep abreast of issues around digital rights and regulation, and lend a hand when action is needed*. Here are some of the groups I’ve found most useful…
Electronic Frontier Foundation (US/Global)
Internet Society (Global)
Mozilla Policy Blog (Global)
Access Now (Global)
*And there is a lot going on…FOSTA is being challenged for endangering sex workers and hindering law enforcement. Poland is challenging Article 17. The French constitutional court just struck down the 24-hr takedown stipulation in their new terror law, Austria wants its own NetzDG, and Trump and others are threatening to repeal Section 230.
Suggest other orgs by leaving a comment on this slide!
2. Help imagine better futures
There’s never been a better time to take a stand about the future of the internet. And we all have something to contribute.
Pick a topic you’re most interested in (or feel most able to contribute to) and start challenging the assumption that the future of our online spaces can only consist of a more centralized, optimized, and sanitized version of our present.
“This is a good time to question absolutely everything.�You want to go back to the same old system?”�— John Boyega
Source: u/CanadianCola
“The problems we’ll face in this century are going to need everyone’s attention and contributions—not just that of our leaders and policymakers and journalists and thought leaders.
They’ll need help from people we love and people we hate, from you and from me.”
— Mike Godwin, Did the early internet activists blow it?
THANK YOU FOR YOUR TIME :)
hello@yiibu.com