1 of 111

Regulate the web

a presentation by Stephanie Rieger

Free expression, harmful speech, and the future of the internet

Some rights reserved - CC-BY-NC - hello@yiibu.com

Comments? Questions?

Leave them on a slide, and I will do my best to respond :)

2 of 111

The future will bear witness to these extraordinary times through the voices of hope, despair, anger, joy, creativity, and collaboration captured by billions of us on the internet.

3 of 111

That’s a big responsibility for a space that (for all the good it brings) can so often feel incredibly broken.

4 of 111

This presentation is as much a story of how we got to this point, as where we might be headed next.

Because if you think today’s internet has problems—the future that’s currently taking shape risks not only doing little to solve them, but creating entirely new ones.

Source: Jeenah Moon

5 of 111

To better understand what’s wrong with the current path (and how we might re-imagine things for the better) let’s start at the very beginning…

6 of 111

Today’s internet is a far cry from what its creators envisioned…

a space that was technologically rooted in open and decentralised protocols, and philosophically rooted in the ideals of free speech, and open and democratic access to information.

7 of 111

“I thought all I had to do was keep it, just keep it free and open and people will do wonderful things…If you’d asked me 10 years ago I would have said humanity is going to do a good job with this…if we connect all these people together, they are such wonderful people they will get along. I was wrong.”

Tim Berners-Lee

In retrospect, this was a tad naive…

8 of 111

We now realise that open protocols and ideals don’t guarantee an open web

9 of 111

…that features designed to give everyone a voice can also be weaponized to cause unspeakable harm…

10 of 111

…to spread hate and division, or exploit fear and weakness during difficult times…

UNESCO research has identified nine key themes in the Covid-19 disinfodemic. So many false narratives, and a wide range of goals for spreading them.

11 of 111

…or sow chaos by undermining the values and institutions we all rely on.

Source: The Guardian

12 of 111

“Welcome to one of the central challenges of our time. How can we maintain an internet with freedom of expression at the core, while also ensuring that the content being disseminated doesn't cause irreparable harm to our communities, our democracies and our physical and mental wellbeing.”

Claire Wardle

13 of 111

When there are problems with the web, the reflex is often to look for solutions within the technologies and standards that underpin it.

14 of 111

And although tech will have a role to play in solving these challenges, it can only be part of the solution.

To better understand why this is the case, let’s explore a very different yet no less critical driver of the web’s evolution—regulation.

15 of 111

Law is code*

Innovation, experimentation, and freedom of expression

*riffing off Lawrence Lessig

PART 1

16 of 111

If you publish a newspaper, you are considered the ‘publisher’ and are liable for the content within it.

This is the opposite of the approach the law takes for phone networks, who aren’t liable for the things you might say on a call.

17 of 111

In the internet’s early years, web sites were considered just another kind of publisher. This was OK during the brief period when the internet was small, and mostly consisted of academics and researchers.

18 of 111

Pretty soon however, people started objecting to the content found on some services, and the authorities started coming after the entities that hosted that content.

*In solidarity, other French internet providers even agreed to cut their customers’ access to Usenet for a week, as a form of ‘electronic strike’ :)

1991: CompuServe defamation case

1994: Prodigy defamation case

1995: Bridgesoft v. Lenoir (copyright infringement)

1996: Francenet & Worldnet servers confiscated for hosting porn*

1997: CompuServe’s GM prosecuted for hosting violent content

19 of 111

In the United States the outcome of these cases was particularly troubling, as it left websites with one of two extreme choices…

  1. Turn a blind eye and don’t try to remove ‘bad’ content. Sites are treated as mere ‘distributors’ of content, so are safe from litigation.
  2. Look for and remove ‘bad’ content. Sites are treated as the ‘publisher’, so can be punished for content they don’t take down.

This was referred to as the “moderator’s dilemma”

(A third option could’ve been not to host third party content at all…or maybe host it but review everything ahead of time, and be super conservative about it, but even then still risk litigation)

20 of 111

“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider”

— Section 230, Communications Decency Act (1996)

Then, in 1996, came the 26 words that would go on to define the internet as we know it.

Four years later, the EU would provide its own version of intermediary liability protection through the 2000 E-Commerce Directive.

21 of 111

The goal of both of these laws was to protect ‘internet intermediaries’ from litigation.

Intermediary: services and platforms (such as Facebook, Dropbox, and Wikipedia) that host, give access to, index, or facilitate the transmission and sharing of content created by others.

The hope was that this would encourage both freedom of expression, and innovation using this emerging medium.

22 of 111

While Section 230 and its EU counterpart aren’t identical, they share two key components—often referred to as the sword and the shield.

23 of 111

Disclaimer:

Simplification follows. I’ve read a lot about this, but I am not a lawyer.

The information contained in this deck is provided for informational purposes only, and should not be construed as legal advice on any subject matter.

24 of 111

This part of the law effectively states that internet companies are not liable for their users’ content.

The shield

25 of 111

This means that…

…can’t be sued if someone streams a violent video.

26 of 111

This means that…

…isn’t liable for potentially defamatory articles.

27 of 111

This means that…

…can’t be sued because of a comment left on their blog.

28 of 111

The shield’s protection has a few limitations*, but in practice it means that companies can experiment with all kinds of services that include user-generated content, without the need to pre-screen every single piece of content to ensure nothing ‘bad’ ends up online.

(Which is particularly handy given the astronomical amounts of content we now post…every 60 seconds: 317,000 status updates, 400 new users, 147,000 photos uploaded, 54,000 shared links.)

*e.g. in the US, the shield doesn’t apply to copyright violations, federal criminal law violations, and (since 2018) sex trafficking content.

29 of 111

If the world were nothing but the … might have sufficed—but I don’t think any of us want to imagine platforms like Facebook and YouTube without some form of content moderation.

Which brings us to …

30 of 111

This part of Section 230 says that while companies aren’t forced to moderate the content they host—they can (in good faith) do so to remove material they believe to be offensive, disturbing, or otherwise user-alienating…and *all* websites do.

(Here U.S. and EU liability laws differ. In the U.S., sites aren’t liable for what they choose to take down or leave up. In the EU, sites can lose liability protection if they are made aware of potentially illegal content—and do nothing about it.)

The sword

31 of 111

To set the parameters that will guide content moderation, each company devises a set of 'house rules’ that is representative of the site's values*, and the kind of environment they wish to enable…

*…cultural, political, legal, media, market, as well as internal and external stakeholder pressures are also heavily reflected in these rules!

32 of 111

Sites then create processes to enforce the rules. These vary from site to site, but typically include a mix of automation, user-initiated flagging, and review by human moderators.

Internal flowchart Facebook created to explain how hate speech regarding migrants should be actioned.

START: Is there a mention of a protected characteristic (PC)?

  • Gender
  • Sexual orientation
  • Gender identity
  • Race
  • Ethnicity
  • Religious affiliation
  • National origin
  • Serious disease or disability

NO → Is there a mention of a quasi-PC (migrants)? If not, consider other policies.

PC path → Is the PC mentioned along with other details? If so, are ALL the details also protected? If not, consider other policies.

Is the PC/subset being attacked by:

  • Encouraging violence
  • Calling for segregation
  • Calling for exclusion
  • Degrading generalizations
  • Dismissing the PC
  • Cursing the PC
  • Using slurs

YES → DELETE. Otherwise, consider other policies.

Quasi-PC path → Is the quasi-PC mentioned along with other details? If so, are ALL the details also protected? If not, consider other policies.

Is the QPC/subset being attacked by:

  • Encouraging violence
  • Comparing them to animals, disease, or filth
  • Considering them subhuman
  • Considering them sex offenders, murderers, or terrorists

YES → DELETE. Otherwise, consider other policies.
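To make the shape of such rules concrete, here is a minimal sketch of the flowchart’s branching logic in TypeScript. The `Post` shape, field names, and labels are hypothetical, invented purely for illustration; this is not Facebook’s actual data model or policy engine.

```typescript
// Illustrative only: a simplified encoding of the flowchart above.
type Attack =
  | "violence" | "segregation" | "exclusion" | "degrading-generalization"
  | "dismissal" | "cursing" | "slur"                        // attacks on a PC
  | "animal-disease-filth" | "subhuman" | "criminal-label"; // attacks on a quasi-PC

interface Post {
  mentionsPC: boolean;              // gender, race, religion, national origin, etc.
  mentionsQuasiPC: boolean;         // e.g. migrants
  hasOtherDetails: boolean;         // is the group qualified by extra details?
  allDetailsAlsoProtected: boolean; // are ALL those details also protected?
  attacks: Attack[];
}

const PC_ATTACKS: Attack[] = [
  "violence", "segregation", "exclusion",
  "degrading-generalization", "dismissal", "cursing", "slur",
];
const QPC_ATTACKS: Attack[] = ["violence", "animal-disease-filth", "subhuman", "criminal-label"];

function decide(post: Post): "DELETE" | "CONSIDER_OTHER_POLICIES" {
  const attacked = (list: Attack[]) => post.attacks.some(a => list.includes(a));

  if (post.mentionsPC) {
    // A PC qualified by details that aren't all protected falls outside this policy.
    if (post.hasOtherDetails && !post.allDetailsAlsoProtected) return "CONSIDER_OTHER_POLICIES";
    return attacked(PC_ATTACKS) ? "DELETE" : "CONSIDER_OTHER_POLICIES";
  }
  if (post.mentionsQuasiPC) {
    if (post.hasOtherDetails && !post.allDetailsAlsoProtected) return "CONSIDER_OTHER_POLICIES";
    return attacked(QPC_ATTACKS) ? "DELETE" : "CONSIDER_OTHER_POLICIES";
  }
  return "CONSIDER_OTHER_POLICIES";
}
```

Even in this toy form, the code makes the point: the hard part isn’t the branching, it’s deciding (at enormous scale) what counts as a mention, a detail, or an attack.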

33 of 111

As it happens, the most popular services that host third-party content are based in the U.S. Their rules are therefore not only legally subject to the provisions of Section 230…

Number of people using social media platforms (MAU, 2005-2018) via Our World In Data

34 of 111

…but often culturally influenced by America’s tradition of robust free expression, born of and protected by the 1st Amendment to the U.S. Constitution.

“Generally, we remain neutral as to the content because our general counsel and CEO like to say that we are the free speech wing of the free speech party.”

Tony Wang, Twitter GM UK (in 2012)

35 of 111

It’s difficult to overstate how important the very simple yet powerful legal protections provided by the sword and shield have been for freedom of expression, and the growth of the internet as we know it. These laws not only enabled the rise of the services that billions of us use to share ideas, information, and opinions…

36 of 111

…they also enabled the very existence of hundreds of millions of personal blogs, non-profit websites, and open source projects that would never have thrived if subjected to a constant risk of lawsuits.

“The Wikipedia we know today simply would not exist without Section 230.”


— Leighanna Mixter, Wikimedia Foundation

37 of 111

All things considered, this arrangement seemed to work well enough until online communication went mobile, began to centralise, and a few of these sites got really, really big.

38 of 111

With billions of users and millions of posts per second, content moderation becomes *much* harder.

527,760 photos shared

4,146,600 videos watched

456,000 tweets shared

46,740 photos posted

39 of 111

As your audience is now global, your single set of made in California ‘house rules’ regularly clashes with local laws, cultures, religions, political norms, and attitudes towards everything from free speech to nudity.

People should be able to make statements that ________ publicly.

40 of 111

Your giant platform is now also a giant target.

41 of 111

Bad actors flood your site with ‘coordinated inauthentic behaviour’

42 of 111

…and post things your tech and moderators hadn't realised they should be looking for.

“The video of the attack in Christchurch did not prompt our automatic detection systems because we did not have enough content depicting first-person footage of violent events to effectively train our machine learning technology.”

Facebook press release, 2019

43 of 111

To make matters worse, the algorithms your site relies on to help people make sense of their humongous feeds, grow ‘engagement’, and sell ads rarely discriminate.

If it’s popular—they’ll make sure *even more people* see it.

“There’s a lot of bad things that are good for engagement… conspiracies, divisiveness, radicalisation, terrorism. All these things seem pretty terrible, but for the metric of engagement, they’re amazing.”

Guillaume Chaslot, The Toxic Potential of YouTube’s Feedback Loop

44 of 111

And what algorithms don’t spread, humans likely will— compelled as we are by our very nature to share things we love or hate.

“The New Zealand massacre was livestreamed on Facebook, announced on 8chan, reposted on YouTube, commented about on Reddit, and mirrored around the world before the tech companies could even react.”

— Drew Harwell (@drewharwell) March 15, 2019

45 of 111

This might all still be OK, *if* we had a far wider range of meaningful alternatives to the use of giant platforms whose size and influence merely exacerbate all of these problems.

“Even in normal circumstances, the scale of a few dominant platforms leaves ordinary people with little choice but to use them. Now, the COVID-19 crisis has propelled us to the point where these platforms are embedded even more deeply in our lives, and further entrenched in our social and political structures.”

— CIGI, Rethinking digital platforms for the post Covid-19 era

46 of 111

Which brings us to where we are today…

47 of 111

People are angry, governments want solutions, and they are choosing to act through new regulations aimed at the two most obvious targets—the intermediaries that host all of this content, and the liability laws that protect them*.


*and us…although it may not always seem that way, these laws protect our freedom of expression, and our experience when we use these services!

48 of 111

“Whether it’s in India, the United States, or the European Union itself, lawmakers are grappling with what is ultimately a really hard problem—removing ‘bad’ content at scale without impacting ‘good’ content, AND in ways that work for different types of internet services AND don’t radically change the open character of the internet.”

Mozilla

49 of 111

Law is code

Consolidation and censorship

PART 2

50 of 111

Designing regulation is a bit like coding. You build tiny modules and string them together to make a larger thing. Some of these you invent, and some you borrow from places that you trust, or where they’ve been proven to work.

(Why is this important? Because the EU is currently revising many of its already substantial internet regulations, and countries around the world—including many where hosting speech is currently poorly protected—are starting to copy the parts that best suit their tech regulation goals.)

51 of 111

Let’s look at three emerging approaches that aim to regulate what content can appear online, and seem likely to be widely emulated.

52 of 111

1. STRICT REMOVAL TIMEFRAMES

Companies must act very quickly to assess and remove certain types of ‘bad’ content.

Network Enforcement Act (NetzDG), 2017

Sites must investigate and take down “obviously illegal” speech, such as incitements to hatred or terrorist propaganda, within 24 hrs of it being reported.

AN EXAMPLE

53 of 111

Most content that people report to platforms is what lawyers refer to as “awful but lawful”.

It may be upsetting or insulting, it may make you sad or angry, it may cause you to spend less time on that site, or censor your own speech, but it would not necessarily be considered illegal.

54 of 111

Assessing the lawfulness of content is also greatly dependent on context.

A photo depicting graphic violence posted by a terror organisation to glorify the event, or incite similar ones…

…would in most cases be found unlawful

55 of 111

The very same photo posted by…

  • The BBC reporting on the tragedy,
  • A researcher publishing an analysis,
  • Wikipedia as record of the event, or
  • Amnesty International condemning it.

…would in most cases NOT be found unlawful

Assessing the lawfulness of content is also greatly dependent on context.

56 of 111

The analysis required to make this distinction can take time, may not be straightforward, and in the ‘real world’ requires lawyers and the courts…who may often disagree.

Yet companies are now being asked to play the role of the courts, and pressured to not only make 'the right decision'—but do so very quickly—and with no public record of how the decision was made, and often no way to appeal.

lawyers can tell it’s unlawful

lawyers can tell it’s lawful

lawyers probably disagree about it

Diagram inspired by Intermediary Liability 101, Daphne Keller

57 of 111

What’s more, when faced with this choice, companies have little (legal) incentive to err on the side of freedom of expression

58 of 111

End result—even the most well meaning site may choose to be overly cautious, prioritise speed, and delete large amounts of content*—even stuff that’s perfectly lawful.

*Over-removal is sadly more likely to happen with smaller sites, which won’t have large, well-trained teams ready to review content 24/7, so may simply apply an “if in doubt, just take it down” approach.

59 of 111

2. MODERATOR’S DILEMMA 2.0

Companies can be held liable for simply knowing illegal content exists.

FOSTA/SESTA, 2018

Sites can now be sued (or owners jailed for 10 years) if they “promote or facilitate prostitution” or “knowingly assist, facilitate, or support sex trafficking”

AN EXAMPLE

*See also the proposed EARN IT Act in the US.

60 of 111

The moderator’s dilemma is back. In the U.S. (in the specific context of sex trafficking material) sites can now be liable if they moderate, but miss something, or make the ‘wrong’ decision about the content they leave up.

“[Given our scale] if we are 99.999% accurate we are still making 1.5 million mistakes a month”

Del Harvey, Twitter (in 2013!)

61 of 111

Terrified of hosting any content that could be *related to* sex trafficking, many sites have opted for the simplest option: expanding their 'house rules’ to proactively ban broad categories of content that might cause them risk.

This caused the removal of perfectly lawful and valuable content such as sex education forums, sex worker support groups, LGBT resources, and niche online dating services.

A few small sites also chose to simply shut down.

Within months, many sites banned most (remaining) forms of nudity or ‘adult' conversation*.

*Some companies admitted this was directly caused by FOSTA/SESTA, while others simply called it a ‘policy change’. Some larger sites were (thanks to their more sophisticated moderation capabilities) able to make exceptions for art, medical photos, erotica etc.

62 of 111

3. ACTIVE MONITORING

Laws that mandate companies ‘actively monitor’ special categories of content.

Article 17, EU Copyright Directive

Makes companies liable for content used without permission as soon as it’s uploaded.

AN EXAMPLE

*More on the EU Copyright Directive. Also see the EU Terrorist Directive, and a recent bill in New Zealand.

63 of 111

The only way to check every single piece of content against potential copyrighted works (and this applies whether you’re Facebook, or Tumblr, or Wikipedia, or simply have a personal blog) is through entirely automated tech that compares vast databases of rights-holder-verified content to the things people want to post.
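As a toy illustration of why such filters are blunt instruments, here is a minimal sketch of matching uploads against a database of rights-holder fingerprints. Real systems such as ContentID use perceptual audio/video fingerprints that survive re-encoding and cropping; the exact-hash approach, function names, and variables below are invented purely for illustration.

```typescript
import { createHash } from "node:crypto";

// Toy sketch: screen uploads against fingerprints of rights-holder-verified works.
const rightsHolderDb = new Set<string>();

function fingerprint(data: string | Uint8Array): string {
  return createHash("sha256").update(data).digest("hex");
}

function registerWork(data: string | Uint8Array): void {
  rightsHolderDb.add(fingerprint(data));
}

function screenUpload(data: string | Uint8Array): "block" | "allow" {
  // The filter only knows "matches a registered work" or "doesn't";
  // the context of use (news, critique, permitted citation) is invisible to it.
  return rightsHolderDb.has(fingerprint(data)) ? "block" : "allow";
}
```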

64 of 111

There is nothing nuanced about an automated upload filter.

Today’s content matching technologies can’t yet recognise critical contexts such as 'fair use’, parody, or permitted citations—and the ability to do so isn’t coming anytime soon. They are therefore likely to flag and remove large amounts of perfectly lawful content.

65 of 111

A widespread need for upload filters also risks further entrenching the power of large players.

If you’re a non-profit with a small operating budget, or a startup trying to grow a new service, or even a mid-sized established service like Wordpress.com, and you now have to provide this service to your user base—will you have the necessary skill or resources to do so?

$100 million (USD)

Cost to develop ContentID, YouTube’s filter designed to identify copyrighted video, audio, and melodies.

€900/mth

Cost to filter 5000 audio files using Audible Magic, a 3rd party content monitoring service.

66 of 111

And if you think potentially losing a few Baby Yoda memes to copyright filters isn’t so bad, consider that upload filters are also suggested in the EU’s upcoming Terrorist Directive.

This raises the question: would the terabytes of footage of police violence, bloodied protestors, and tear gas emanating from the US (or Hong Kong and Belarus, or countless other places where the web has enabled us to bear witness to injustice) have survived a world of widespread content filters?

67 of 111

“Regrettably, despite the fact that many great minds in government, academia, and civil society are working on this hard problem, online content regulation remains stuck in a paradigm that undermines users’ rights and the health of the internet ecosystem, without really improving users’ internet experience.”

Mozilla

68 of 111

The current wave of regulation isn’t all bad*, but it seems to presume that because we have a giant-tech-platform-shaped-problem, we need an equally giant-tech-platform-shaped-solution.

*Germany’s NetzDG for example includes some widely welcomed reporting and transparency provisions.

“So long as we are wedded to the idea that a few large companies will set the rules for speech and discussion online, we will constrain the solution-space of possible interventions.”

— Ethan Zuckerman, The case for digital public infrastructure

69 of 111

It feels as if we’ve reached a critical fork in the road.

A time where we can either choose to ‘fix’ the web by restoring the openness, diversity, and public stewardship of its early years—or accept a future where its largest players are legally compelled to define and manage both its shape, and value to society.

“…the monopolist’s top preference is not to be regulated but their second preference is to be regulated in a way that only they can possibly comply with.”

Cory Doctorow

70 of 111

Code is law?

Protocols…not platforms

PART 3

71 of 111

One company…

  • creates the interface,
  • hosts the data,
  • crafts the algorithms,
  • decides who can and can't use the app,
  • creates ‘house rules’ for content,
  • creates processes to implement those rules,
  • determines if/when you can appeal
  • provides (or doesn’t) APIs for 3rd parties,

and so on…

Most internet services we use today are centralized.

72 of 111

As these companies scale, they are faced with a near impossible problem: how to create and apply policies that do justice to the sheer diversity in human speech and culture, and the myriad reasons we choose to create, share, and spend time on the web.

“We need the flexibility to build a wide diversity of tools, for a wide variety of purposes. What we have right now with Facebook is one room—and we try to use it as a church, we try to use it as a lecture hall, we try to use it as a bar, we try to use it as a hotel. It doesn’t work because it’s the same damn room…we might want to have different architectures and different ways of building for different systems and different purposes.”

— Ethan Zuckerman, Fixing social media (40:00)

73 of 111

An alternative idea to enable this diversity would be to refocus online social spaces away from a primarily centralised model—towards a new generation of open protocols that shift both the architecture, and the incentives of network participants.

READ THE ESSAY

PROTOCOLS NOT PLATFORMS

Altering the internet's economic and digital infrastructure to promote free speech. By Mike Masnick

Source: Midem

74 of 111

Let’s see what such a protocol might enable using a real-world example: a decentralised social app I’m currently using called Planetary.

Don't you deserve a better social network?

Wouldn’t you rather have a social network that respected your privacy, resisted abuse and harassment, rewarded content creators and was open by default? We would too, that’s why we’re building the world’s first mainstream client for a truly distributed social network.

75 of 111

Planetary is built using an open protocol called Secure Scuttlebutt, which enables the creation of different types of decentralized applications.

“You can think of a protocol as a set of rules and practices and behaviours that let different apps and services run by different people talk to each other.”

Planetary App FAQ

FYI - The Scuttlebutt reference implementation is written in JavaScript with Node.js. There are also active implementation efforts in Go, Python, and Rust.
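To make “a set of rules that lets different apps talk to each other” a bit more tangible, here is a simplified sketch of the kind of signed, append-only feed message a Scuttlebutt-style protocol passes around. The field names loosely follow SSB’s message schema, but this is illustrative, not the exact wire format.

```typescript
// A simplified sketch of a message in a Scuttlebutt-style signed, append-only feed.
interface FeedMessage {
  previous: string | null; // hash of the previous message in this feed (null for the first)
  author: string;          // the feed's public key: identity belongs to the user, not a server
  sequence: number;        // position in the author's append-only log
  timestamp: number;       // claimed by the author; there is no central clock
  content: {
    type: string;          // e.g. "post", "contact" (follow/block), "about" (profile)
    [key: string]: unknown;
  };
  signature: string;       // signed with the author's key, so any peer can verify it
}
```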

76 of 111

Decentralized apps don't have to talk to a single master server�(or to anyone's servers) to work. They can talk directly to each other, and to apps or servers created and run by other people.

77 of 111

Many decentralised apps, and the protocols that underpin them, in some way aim to re-shape the relationship between people, their data, and the services they use.

Let’s see how Scuttlebutt’s values are reflected in code, and go on to shape the behaviours and incentives the protocol creates.

Scuttlebutt principles stack

78 of 111

PULL VS PUSH

79 of 111

Scuttlebutt is a ‘pull-based’ protocol, a bit like RSS. You simply point it at the people you want to follow.

80 of 111

This means that when you first join an app there’s nothing to see (and no one can see you)…

81 of 111

…until you start to make connections by following one of your friends.

(a friend 'inviting you in' with a link or QR code you scan on their phone is a common way to join)

82 of 111

Scuttlebutt is designed to mimic a real community. With each new person you follow, it’s as if you’re shining a light in a corner containing potential new friends.

83 of 111

And while this doesn’t prevent you from making friends in other communities, you aren’t immediately subjected to the content, replies, and conversations of total strangers.

You have to seek them out…

  • 1 hop: user implicitly followed
  • 2 hops: visible in the app
  • 3 hops: fetched and stored, but user must seek out

Diagram source
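Here is a rough sketch of how an app might compute that visibility from the follow graph, using a breadth-first walk out from your own feed. The hop thresholds mirror the diagram above; the graph shape, names, and labels are otherwise invented for illustration.

```typescript
// Sketch: derive per-feed visibility from hop distance in the follow graph.
type FollowGraph = Map<string, Set<string>>; // feed id -> feeds it follows

function hopDistances(graph: FollowGraph, me: string, maxHops = 3): Map<string, number> {
  const dist = new Map<string, number>([[me, 0]]);
  let frontier = [me];
  for (let hop = 1; hop <= maxHops; hop++) {
    const next: string[] = [];
    for (const id of frontier) {
      for (const followed of graph.get(id) ?? new Set<string>()) {
        if (!dist.has(followed)) {
          dist.set(followed, hop);
          next.push(followed);
        }
      }
    }
    frontier = next;
  }
  return dist;
}

function visibility(hops: number | undefined): string {
  if (hops === 0) return "this is you";
  if (hops === 1) return "implicitly followed (shown to you)";
  if (hops === 2) return "visible in the app";
  if (hops === 3) return "fetched and stored, but you must seek them out";
  return "not replicated at all";
}
```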

84 of 111

ABUSE AND HARASSMENT RESISTANT

85 of 111

This architecture has some interesting properties.

Because you only download messages from your friends and their friends—Scuttlebutt networks tend to be more resistant to abuse, harassment, and spam.

86 of 111

Part of this is also social.

Friends, neighbours, or other groups that create a community around a common purpose are less likely to spam or harass each other, are more likely to work together to create rules that suit their specific context, and more likely to retain a shared understanding of the rules.

87 of 111

It’s also architectural. Scuttlebutt isn’t so much a network, as a network of networks. And while some may intersect, others can happily live entirely on their own.

(For this reason, it’s hard to know how many Scuttlebutt users there actually are).

88 of 111

Sure—but we have those already—primarily fuelled by opaque, mostly profit-driven algorithms that we can’t inspect or control. Certain communities choosing to self-isolate may be the lesser of those evils*.

(*Let's also not forget that for marginalised communities, such as the U.S. sex-workers who lost their community and support sites due to FOSTA—there would be significant benefits in a private community built through common goals and needs, that no one can arbitrarily shut down, and where they can ultimately set the rules. In fact, such a community currently exists on Mastodon, which is based on another decentralized protocol called ActivityPub.)

89 of 111

LOCAL FIRST

90 of 111

Planetary is local-first. This means that, instead of hosting every user’s content on a centralised server, each person hosts their own posts, and those of their friends.

When you write a post, it’s stored in your local storage.

When people follow you, they copy that storage container, and then sync it whenever you’re both online.

Whenever you’re online, you grab whatever new content your friends have fetched, and send over whatever they don’t yet have.
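A minimal sketch of that sync step, assuming each feed is an append-only log keyed by author and peers exchange only what the other is missing. The types and class names are invented for illustration; real Scuttlebutt replication negotiates this over its own RPC protocol.

```typescript
// Sketch: local-first replication of append-only logs between two peers.
interface Msg { author: string; sequence: number; text: string }

class Peer {
  logs = new Map<string, Msg[]>(); // author -> that author's log, stored locally

  latestSeq(author: string): number {
    return this.logs.get(author)?.length ?? 0;
  }

  // Messages the other peer hasn't seen yet, given the sequence numbers they report.
  missingFor(theirSeqs: Map<string, number>): Msg[] {
    const out: Msg[] = [];
    for (const [author, seq] of theirSeqs) {
      out.push(...(this.logs.get(author) ?? []).slice(seq));
    }
    return out;
  }

  ingest(messages: Msg[]): void {
    for (const msg of messages) {
      const log = this.logs.get(msg.author) ?? [];
      if (msg.sequence === log.length + 1) log.push(msg); // append-only, in order
      this.logs.set(msg.author, log);
    }
  }
}

// One sync round for the feeds both peers care about.
function sync(a: Peer, b: Peer, authors: string[]): void {
  const seqsOf = (p: Peer) => {
    const m = new Map<string, number>();
    for (const id of authors) m.set(id, p.latestSeq(id));
    return m;
  };
  a.ingest(b.missingFor(seqsOf(a)));
  b.ingest(a.missingFor(seqsOf(b)));
}
```

The same exchange works over Bluetooth with a nearby friend, which is what makes the offline behaviour on the next slide possible.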

91 of 111

You can even use Scuttlebutt without an internet connection.

Synching data with nearby friends over bluetooth, then grabbing updates from more distant networks by synching with people who’ve recently been online.

92 of 111

INTEROPERABILITY

93 of 111

Planetary has its own set of ‘house rules’, and plans to moderate content to facilitate a better experience, but because of the underlying protocol—if I’m not happy with the rules (or anything else about the app)…

94 of 111

…I’m free to take my identity, all my posts, and my friends to another compatible app* with different rules, interfaces, or features above and beyond what’s built into the protocol.

*Each Scuttlebutt app can write to several different content types, and each ‘reader’ can decide which ones they read.
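A small sketch of what that footnote means in practice: two hypothetical readers rendering different subsets of the same underlying messages. The content types and app lists below are invented for illustration.

```typescript
// Sketch: the same logs, read through two apps that care about different content types.
interface TypedMsg { author: string; content: { type: string; [key: string]: unknown } }

const MAINSTREAM_APP_TYPES = new Set(["post", "about", "contact"]); // hypothetical
const RECIPE_APP_TYPES = new Set(["recipe", "post"]);               // hypothetical

function view(log: TypedMsg[], readable: Set<string>): TypedMsg[] {
  // Each 'reader' simply ignores content types it doesn't understand.
  return log.filter(msg => readable.has(msg.content.type));
}
```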

Two scuttlebutt apps with different goals, features, and approaches

Planetary

  • Target audience: mainstream users
  • Discover new people in a user directory
  • Subscription feeds (coming soon!)
  • ‘House rules’, centralized moderation
  • Funded through features/private investment

Manyverse

  • Target audience: early adopters
  • Added privacy through invite-based discovery
  • Tools to fine tune friend-of-a-friend visibility
  • Community moderation only
  • Community-funded by donation

95 of 111

SO…HOW MIGHT A PROTOCOL BASED FUTURE LOOK?

96 of 111

In this model, interoperability unlocks the possibility for a very different future. One where a small number of centralised platforms are replaced by a larger, and far more diverse decentralized ecosystem…

“Rather than relying on a few giant platforms to police speech online, there could be widespread competition, in which anyone could design their own interfaces, filters, and additional services, allowing whichever ones work best to succeed, without having to resort to outright censorship for certain voices…”

— Mike Masnick, Protocols not platforms

centralized platform with sole control over…

  • data hosting
  • business model
  • trust & safety*
  • UI/UX/features
  • APIs
  • access rights
  • values/rules
  • moderation/appeals operations

*Trust & Safety (n): profession that develops and enforces principles and policies that define acceptable behavior and content online.

97 of 111

In this future, any person, group, or company could create a compatible app—either to form a new community, or better serve an existing one—and in doing so provide new functionality, or point a different lens at the content and conversations powered by the protocol.

98 of 111

With a multitude of apps able to ‘read’ the same content, users could throughout their lives dip in and out, experimenting with a range of communities—each providing different opportunities, and requiring different levels of commitment and responsibility from their users.

  • funding: subscription
  • purpose: discussion/tools/live gaming
  • content rules/values: community derived
  • content moderation: 3rd-party
  • access: open

“Local”

  • funding: taxation/fee
  • purpose: community news/discussion
  • rules/values: “British culture”
  • content moderation: in-house
  • access: UK citizens/residents

  • funding: donation
  • purpose: open knowledge
  • rules/values: “verifiable facts”
  • content moderation: community
  • access: open

Above examples are hypothetical :) For added perspective, see Toby Shorin’s excellent paid community ideas.

99 of 111

Other apps might instead exclusively focus on functionality, offering ‘last-mile’ customizations applicable to the content from any app to reduce cognitive load, uncover more diverse viewpoints, or enable users to tune the experience to suit their personal use case or tolerance-level for certain kinds of speech.

(Right: just a few of the “rules” in Gobo, an experimental social browser “that gives you control and transparency over what you see”. Gobo is open source, and was built by a small team at MIT Media Lab's Center for Civic Media.)

100 of 111

READ THE ESSAY

THE CASE FOR DIGITAL PUBLIC INFRASTRUCTURE

By Ethan Zuckerman

By freeing each app from the automatic need to scale, interoperability could usher in a new era of socially owned applications.

This might include state-owned and tax funded apps, usage-fee funded ‘public utilities’, or even cooperatives, funded by, accountable to, and serving the needs of a neighbourhood, a family, or some other non-profit-seeking group with a common purpose.

Photo: Wikipedia

101 of 111

A final step might be to encourage even greater decentralization. Third party services that further broaden choice for users, and assist smaller apps with the complexities of facilitating (what may still be) highly diverse and fast-evolving global speech.

These services could themselves be built upon an open distributed protocol, or simply consist of trusted providers that compete to best exemplify and enable the forward-thinking values and functionality that governments, civil society, and global internet users demand.

Trust and safety as a [trusted] service

Some apps might even delegate choice of provider to their users.

Data intermediaries (trusts, co-ops, distributed storage*)

Services

  • support developing content rules
  • turn-key moderation/appeals support
  • data forensics (e.g. spam, fake account detection)
  • cross-site coordination (e.g. CSAM, coordinated inauthentic behaviour, foreign influence ops)

Safeguards (NOTE: THESE MOSTLY DON’T YET EXIST!)

  • in-house global + domain-matter expertise
  • transparent, externally-audited
  • unionised (remuneration, training, health benefits)
  • professional association certified

*Read Mozilla’s new report that explores seven potential data governance approaches, including data trusts and fiduciaries.

102 of 111

A few final words…and a request

103 of 111

*except maybe for the part about big tech paying local taxes.

When I think of the challenges with today’s internet, i’m often reminded of the climate crisis—a collection of large, complex, multi-layered, and often interrelated problems that impact all of us—but disproportionately harm the most vulnerable amongst us.

A problem with no lone solution, and no easy answers*.

104 of 111

A problem that we will only solve by leveraging all the tools at our disposal—technology, design, values, regulation—and ensuring each of these work hand-in-hand to reduce risk, protect others…and give us the necessary tools to protect ourselves.

Technology

Design

Values

Regulation

105 of 111

For this to happen, we will need to have brutally frank discussions about the kind of future we want, and the trade-offs we’re willing to accept to get there.

Neutral moderation

Reliable

Harassment free

No data leakage

No real-world harm

Trustworthy information

Free

Portable data

Minimal data collection

Censorship resistant

Surveillance resistant

Anonymous

Trade-off diagram courtesy of Alex Stamos in “The platform challenge: balancing safety, privacy, and freedom”.

We desperately need to move on from a “just fix all the bad things” attitude—be it aimed at our governments, the tech industry, or each other (e.g. when we suggest we just walk away from these spaces)—to a space where trade-offs become a core part of the conversation.

106 of 111

“Content moderation at scale is impossible to perform perfectly— platforms have to make millions of decisions a day and cannot get it right in every instance. Because error is inevitable, content moderation system design requires choosing which kinds of errors the system will err on the side of making.”

— Evelyn Douek, Covid-19 and social media content moderation

Trade-offs such as deciding which lesser harms we may be willing to trade for increased likelihood that the very worst harms will be held at bay.

(Merely defining “the very worst harms”…while accounting for local norms and cultures remains a huge challenge, and one more reason we need a basic architecture that allows for greater diversity.)

107 of 111

We will also need to consider which problems demand centralized action, and which will demand a more local, distributed, and community-specific solution.

Setting aside for a moment how you may feel about the companies on this list, would you be willing to trade the future potential for highly-targeted mass coordinated action (e.g. election integrity, CSAM monitoring, climate emergency coordination) for a primarily decentralized environment?

And if not—how might these very different models usefully, equitably, and architecturally co-exist?

108 of 111

Finding a useful, equitable, and long-lasting answer to the challenge of managing internet speech may be one of the most important decisions we make—and it’s one we must make together.

So to close, I have two things to ask of you…

March 2, 2019 demonstration in Berlin against Article 17 (ex-13) of the new EU Copyright Directive. Photo by Tim Lüddemann

109 of 111

1. Help advocate for more thoughtful regulation

Wherever you live in the world, there’s an organisation that you can follow to keep abreast of issues around digital rights and regulation, and lend a hand when action is needed*. Here are some of the groups I’ve found most useful…

Access Now (Global)

*And there is a lot going on…FOSTA is being challenged for endangering sex workers and hindering law enforcement. Poland is challenging Article 17. The French constitutional court just struck down the 24-hr takedown stipulation in their new terror law. Austria wants its own NetzDG, and Trump and others are threatening to repeal Section 230.

Suggest other orgs by leaving a comment on this slide!

110 of 111

2. Help imagine better futures

There’s never been a better time to take a stand about the future of the internet. And we all have something to contribute.

Pick a topic you’re most interested in (or feel most able to contribute to) and start challenging the assumption that the future of our online spaces can only consist of a more centralized, optimized, and sanitized version of our present.

“This is a good time to question absolutely everything. You want to go back to the same old system?”

John Boyega

111 of 111

“The problems we’ll face in this century are going to need everyone’s attention and contributions—not just that of our leaders and policymakers and journalists and thought leaders.

They’ll need help from people we love and people we hate, from you and from me.”

— Mike Godwin, Did the early internet activists blow it?

THANK YOU FOR YOUR TIME :)

hello@yiibu.com