CLSI 2019 Session Descriptions
Facilitator: Maristela Miranda
Duration: 1 hour
With the growing number of jurisdictions with data protection laws, more attention should be given to how the provisions of these policies are enforced, such as those pertaining to the rights of individuals in relation to their personal data and the processing thereof. In the Philippines, for instance, the law provides data subjects with a number of rights relative to their personal data, including the right to access certain details about the processing thereof, namely:
1. contents of one’s personal data that were processed;
2. sources from which personal data were obtained;
3. names and addresses of recipients of the personal data;
4. manner by which one’s personal data were processed;
5. reasons for the disclosure of one’s personal data to recipients;
6. information on automated decision-making, if any;
7. date when one’s personal data were last accessed and modified; and
8. designation or name or identity and address of the personal information controller.
Relative thereto, there should also be a continuing discussion about mechanisms through which data subject rights can be enforced. Citizen Lab’s AccessMyInfo platform vis-à-vis the data subject right to information is a good example to highlight and build on when working on a similar mechanism in other jurisdictions. Other examples and best practices may also be shared and taken up.
The session should provide a venue for discussing data subject rights, with particular emphasis on the right to be informed about the processing activities of data controllers (or personal information controllers, in the case of the Philippines). Specific aims could include:
1. comparing data protection laws in different jurisdictions and their treatment of data subject rights.
2. discussing the enforcement of these rights, particularly the right of an individual to information about the processing of his or her personal data.
3. presenting mechanisms like AccessMyInfo and their potential or effectiveness in supporting and/or promoting data subject rights.
1. Short introduction on the topic by the facilitator
2. Small group workshop or discussion (identification of challenges, successes and best practices of different organizations from different countries)
3. Brainstorming (recommendations)
An analysis (which can be used to develop a short paper) of the current state of accessibility of information to data subjects across jurisdictions, its impact on the exercise of the other data subject rights (right to correction, etc.), and recommendations on how organizations can uphold these rights.
1. Experience as a member of the data protection authority
2. Experience in handling data subject requests (as a data controller or data protection officer)
3. Experience as a data subject requesting access to information, or making any other related requests (e.g., erasure of personal data)
4. Knowledge on data protection
1. THE PITFALLS OF SUBJECT ACCESS REQUESTS
2. CITIZEN LAB'S ACCESS MY INFO PLATFORM
3. ICO GUIDE ON THE RIGHT OF ACCESS
4. ICO GUIDE ON RIGHT TO BE INFORMED
5. GUIDE FOR DATA SUBJECTS
Facilitator: Lisa Garbe, Tina Freyburg, Veronique Wavre
Duration: 1 hour
We seek to shed light on the role of commercial internet service providers (ISPs) in facilitating or hampering the implementation of internet shutdowns. Our session focuses on the measurement of state-ordered internet disruptions at the level of autonomous systems (AS) operating in Africa. In contrast to existing data collections based on media reports or active probing, we want to develop an accurate technical measurement of internet shutdowns that allows for analyses at different levels (e.g., companies or sub-national geographic entities). We plan to use real-time internet activity data (BGP and VPN usage) provided by sources such as the RIPE registry, the research center CAIDA, and the company Psiphon. We are interested in better understanding the technical particularities of working with this kind of data. Our overarching goal is to determine which ASes block access to (specific) internet services and to examine any differences in compliance with governmental requests across ASes, motivated by properties such as owner identity or the location of their headquarters. We will bring to the session material that allows participants to work on the questions below in a focused manner.
Among the questions we would like to tackle in the proposed session are:
(1) When is an irregularity in internet activity substantial enough to call it a shutdown? -- the question of thresholds.
(2) How can we distinguish patterns of disruption that are politically motivated from disruptions caused by technical failure (e.g., in response to a natural disaster)? -- the question of patterns of inaccessibility that are clustered spatially or by AS.
(3) How can we determine variation in AS compliance with governmental requests? -- start/end of disruption; technique (e.g., VPN allowed; throttling).
(4) How can we aggregate BGP data to a level that can be used for research at the company-year level?
(5) Can we identify the blocking of specific internet services, notably social media websites (packet filtering), based on irregularities in internet activity (BGP data)?
(6) Is it possible (and how) to use the size of the IP address space allocated at the AS level as a proxy for market share?
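As a sketch of the thresholding question (1), one simple approach is to flag an AS as potentially disrupted whenever its count of globally visible prefixes falls far below a rolling baseline. The time series and the 50% drop threshold below are illustrative assumptions, not part of the project's actual methodology; real counts would come from a BGP data source such as RIPE RIS or CAIDA:

```python
# Sketch: flag shutdown-like events from per-AS BGP visibility counts.
# The series and threshold are synthetic/illustrative assumptions.
from statistics import median

def flag_disruptions(visible_prefixes, window=7, drop_ratio=0.5):
    """Return indices where the visible-prefix count falls below
    drop_ratio times the rolling median of the preceding `window` samples."""
    flags = []
    for i in range(window, len(visible_prefixes)):
        baseline = median(visible_prefixes[i - window:i])
        if baseline > 0 and visible_prefixes[i] < drop_ratio * baseline:
            flags.append(i)
    return flags

# Synthetic daily prefix counts for one AS: stable around 120, then a sharp drop.
counts = [120, 118, 121, 119, 120, 122, 117, 119, 40, 5, 4, 118, 120]
print(flag_disruptions(counts))  # -> [8, 9, 10]
```

Choosing `window` and `drop_ratio` is exactly the open question: too loose a threshold misses throttling, too tight a one mislabels routine routing churn as a shutdown.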
We will provide participants with an overview of our research project and the different data sets we would like to use. We will then present the cases we want to zoom into during the summer institute.
The session will give participants an overview of how different forms of internet activity data can be used to identify internet shutdowns, and of the challenges of working with such data for scholars without an IT background. We hope to join forces with both IT scholars and others to further work on transforming/aggregating this data to allow for analyses at the company/country level.
This session would greatly benefit from people with an IT background and/or experience in working with BGP data.
The following blog post summarizes our paper "Blocking the Bottleneck," in which we motivate our data collection on ISPs and their shareholders in Africa and explain how ISP ownership is potentially linked to ISP involvement in internet shutdowns: https://www.opentech.fund/news/research-corner-blocking-the-bottleneck-internet-shutdowns-and-ownership-at-election-times-in-sub-saharan-africa/
Facilitator: Gennie Gebhart
Duration: 2 hours
From Snapchat stories to Facebook’s “pivot to privacy,” ephemeral messaging features are increasingly taking center stage in secure messaging discussions. But not all "ephemeral" messengers offer the same protections, and their varying strengths and weaknesses are often not clear to the users who need them.
In this session, we will aim to start building technical and policy frameworks for evaluating ephemeral messaging features, and identify areas for further collaboration and work.
The session will be divided roughly into two, somewhat overlapping parts: how we evaluate ephemeral messengers, and how we communicate that evaluation. First, we'll look at a baseline list of messengers that offer ephemeral features (provided by the facilitator) and discuss: What technical standards does an ephemeral messenger need to meet? What policy choices can reinforce or undermine those technical standards? Are there any meaningful patterns among the different names for this general functionality ("disappearing," "exploding," "self-destructing," "unsend-able"), or are they just confusing advertising? Second, we'll talk about best practices for recommending or even ranking ephemeral features in particular and secure messengers in general. What situations might require a user to seek ephemeral features (e.g., domestic abuse, employer surveillance, border search)? What considerations do users in those situations have, outside of security and privacy features, when choosing how to communicate? How can our community work to ensure that users have up-to-date information about this constantly changing area of technology? Finally, before we conclude, we'll summarize our discussion so far and identify potential next steps.
At the end of this session, we will have: (1) a preliminary list of technical and policy requirements for ephemeral messengers, (2) examples of how technology and policy can interact, reinforce each other, and undermine each other in this area, (3) descriptions of user groups and threat models that might be especially concerned about ephemeral messaging, (4) strategies and best practices for communicating secure messaging information to those groups, and (5) potential next steps and who will be responsible for them.
Secure messaging experience (as a user, engineer, policy manager, or otherwise!), familiarity with threat models that might motivate users to seek ephemeral messaging
Facilitator: William J. Tolley
Duration: 1 hour
Virtual Private Networks (VPNs) have become essential tools for enhancing privacy and allowing individuals in restrictive regions to evade censorship. As their popularity grows, researchers and activists have continued to invest more time into verifying the security claims made by commercial VPN providers.
My research focuses on ways that information about the individual might be leaked to malicious actors even if the VPN they are using is following all the guidelines and advice they have received from security researchers.
I have developed an attack on VPNs to make inferences about the connections the individual is making through their VPN, and potentially hijack active connections. This attack also affects other privacy enhancement and censorship circumvention tools such as Lantern, Orbot, Psiphon, and WireGuard.
In this session, I would like to demonstrate how the providers of these tools, cybersecurity researchers, and technologists have been unintentionally providing potentially dangerous advice to the people who rely on these technologies to work as described, and learn effective strategies for combating this lack of understanding and reaching the people who most need to know about these risks.
The primary objective of this session is to address the potential risks involved in using VPNs as a primary method of privacy enhancement. This will include: (1) clearing up some common misunderstandings about how VPNs work; (2) examining their effectiveness as a security tool for vulnerable populations; (3) exploring ways to improve VPNs and communicate their shortcomings to the people that use them.
Additionally, we are interested in learning more about the people that rely on VPNs for private communication, and would like to know (4) if they are aware of any information leakage or have taken steps to mitigate it; (5) a survey of the devices and privacy tools that are being used; (6) effective methods for teaching people about the risks involved in using VPNs and how to mitigate them, or replace/supplement them with other privacy tools.
I will give a basic description of how a VPN works with enough detail to understand the types of vulnerabilities we are concerned about and then open the floor for discussions about their seriousness, how to prevent them, and most importantly, how to communicate them to users and developers alike.
Hopefully, those in attendance will gain a better understanding of how VPNs work and of why promoting them as an off-the-shelf solution for vulnerable communities is potentially dangerous. For me personally, I hope to gain more information about how VPNs are being used, how knowledgeable their users are, and effective strategies for communicating my findings to other researchers and VPN users.
Useful Skills: Experience using a VPN is recommended, but not necessary.
Our project is still under a responsible disclosure embargo, but there are other papers that address some of the problems with VPNs that will be discussed.
Facilitator: Khairil Yusof, Maria Xynou
Duration: 2 hours
Citizen Lab URL Test Lists provide an extensive global list of URLs with common categories and metadata that allow app developers and researchers to build apps and write reports that are broadly consistent and comparable internationally. By capturing the needs of more users, we could improve the lists for wider adoption (including by human users), which would lead to their being better maintained, more up to date, and more complete.
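To make the shared format concrete: the test lists are CSV files with common columns (url, category_code, category_description, date_added, source, notes). A minimal sketch of loading a list and counting URLs per category, using a made-up three-row sample rather than a real country list:

```python
# Sketch: parse a Citizen Lab-style test list and count URLs per category.
# The sample rows are invented for illustration; real lists follow the same
# column layout but contain curated, country-specific URLs.
import csv
import io
from collections import Counter

SAMPLE = """url,category_code,category_description,date_added,source,notes
https://example.org/news,NEWS,News Media,2019-01-01,example,
https://example.org/circumvent,ANON,Anonymization and circumvention tools,2019-01-01,example,
https://example.net/blog,NEWS,News Media,2019-02-01,example,
"""

def urls_by_category(csv_text):
    """Count test-list entries per category_code."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return Counter(row["category_code"] for row in reader)

print(urls_by_category(SAMPLE))  # Counter({'NEWS': 2, 'ANON': 1})
```

The shared category codes are what make measurements comparable across countries; the session's metadata discussion is essentially about what further columns like these should carry.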
The session aims to improve the test lists for wider adoption by sharing the current and possible uses of the test lists with users, both technical users and researchers/writers; capturing participants' current and possible future uses; and identifying what types of improvements would be needed for the lists to be more useful for their needs.
- Brief introduction to test-lists
- Current uses of test-lists and some issues
- Collect from participants how they are using, or could be using, the test lists and what could be improved
- map and identify additional metadata for test-lists, for adoption by a wider/more diverse range of researchers and partners
- improve existing categories/standards if needed
- identify the documentation and organization needed to increase the number of maintainers and improve quality review
- update current test lists based on inputs from participants
- identify existing (or new) communication channels for current and future maintainers and contributors
None. Any knowledge of online resources used by diverse local communities will be helpful for this session.
- Contributing to Test Lists https://ooni.io/get-involved/contribute-test-lists/
- "Sourcing and extending metadata of Citizen Lab test-lists for dashboards": https://sinarproject.org/digital-rights/updates/test-lists
- "Joined-up-data for Network Interference Measurement and Reporting with Citizenlab tests-lists": https://sinarproject.org/digital-rights/human-rights-internet-censorship-dashboards/joined-up-data-test-lists
- "Human Rights Censorship Dashboards": https://sinarproject.org/digital-rights/human-rights-internet-censorship-dashboards/initial-human-rights-censorship-dashboards
Facilitator: Kalyani Menon Sen and David Kelleher
Duration: 2 hours
The session will introduce the campaign against India's biometric id scheme, mounted by an alliance of techies, digital rights and human rights activists, women's rights groups and legal activists. Participants will be invited to suggest strategies to explain the risks and dangers of the scheme to non-technical and "apolitical" members of the general public.
1. To inform participants about the risks and dangers of the Aadhaar scheme.
2. To get some out-of-the-box ideas for strengthening the "Say No To Aadhaar" campaign.
3. To create a network of people who will amplify the voice of the campaign.
10-15 mins: Quick intro: Participants share their names and affiliations. (Time will depend on number of participants).
5 mins: Intro to session objectives, structure (one slide).
15-20 mins. Intro to the Aadhaar project and our concerns - privacy violations; denial of entitlements; undermining of women's rights; establishment of surveillance regime. (Five slides)
10 mins: Intro to World Cafe exercise.
20 mins + 20 mins: World Cafe in two rounds to brainstorm ideas around specific elements of the scheme. Trigger questions for each round will be provided to each group.
10 mins: Participant reflections (round robin).
We hope the session will lead to a circle of supporters and collaborators for the campaign.
We hope participants will be people who have experience in interrogating digital technologies through the lens of human rights and accountability of the state to its citizens.
The campaign page at <https://rethinkaadhaar.in/> is enough for an overview of the campaign and also has links to all our cases in the Supreme Court, research reports, news reports and popular writing.
Facilitator: Vasilis Ververis and Valentin Weber
Duration: 1 hour
This session highlights preliminary findings of a project that focuses on measuring the international presence of surveillance middleboxes. The preliminary findings include Huawei, Barracuda Networks, Apache, Netsweeper, Blue Coat Systems, Cisco, and Antlabs middleboxes in various countries. Several data measurements will be presented during the session.
The objective of the session is to raise and discuss the following questions with the participants: What are the best ways of gathering middlebox fingerprints? How can we cope with header obfuscation? How should we handle personally identifiable information (PII) gathered during measurements? How can we cross-verify measurements?
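To illustrate the fingerprinting question, middleboxes are often identified by distinctive strings they inject into HTTP responses (headers or block pages). A minimal sketch of matching a raw response against a signature set; the fingerprints below are invented placeholders, not verified signatures of the vendors named above:

```python
# Sketch: match an HTTP response against known middlebox fingerprints.
# The signature strings are illustrative placeholders only.
FINGERPRINTS = {
    "vendor-a": ["Server: VendorA-Proxy"],
    "vendor-b": ["X-Blocked-By: VendorB", "blockpage.vendorb.example"],
}

def identify_middlebox(response_text):
    """Return names of all fingerprints whose patterns appear in the response."""
    return sorted(
        name
        for name, patterns in FINGERPRINTS.items()
        if any(pattern in response_text for pattern in patterns)
    )

response = "HTTP/1.1 403 Forbidden\r\nServer: VendorA-Proxy\r\n\r\nAccess denied"
print(identify_middlebox(response))  # -> ['vendor-a']
```

Header obfuscation is precisely what breaks this naive substring matching, which is why the session's questions about obfuscation and cross-verification matter.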
20 minutes presentation of the findings
20 minutes discussion of questions
20 minutes further discussion
Possible collaborations on the subject matter and exchange of knowledge.
Technical skills e.g. network measurement
Facilitator: Elizabeth Anne Watkins
Duration: 1 hour
Critiques of algorithmic decision-making have begun to coalesce along several streams, with the brightest lights shining on fairness, accountability, transparency, and bias. Embedded in such framings are potentially dangerous presumptions of what it means for a technological tool to work “correctly,” that the greatest distances to be crossed are merely those of functionality, comprehension, and due process.
The objective of this session is to surface a research agenda concerned not with the characteristics of a facial recognition algorithm (“is it fair?” or “is it biased?”) but rather with the webs of significance and power in which it is embedded, including the organizational, the social, and the political. An increasing number of critics have begun to flesh out how this might look [1,2,3,4]. In this session, doctoral student Elizabeth Anne Watkins will present early findings from her work studying how Uber drivers engage with the implementation of facial recognition in their work. The floor will then open to discussion among the participants.
Key questions for participants are:
What are current research agendas on facial recognition missing? Where are the gaps?
How can we ask the right questions of these gaps?
What are potential data-gathering methods to match our questions?
Who should be collaborating to produce multi-disciplinary scholarship?
How can we involve communities where these technologies are being integrated?
Sample questions in this research agenda might include: How does facial recognition create or reify infrastructures of value extraction? How can we make sure the gains from machine learning (and the reams of public data on which models are trained) are shared among those from whom the data was harvested? How does the implementation of facial recognition in domestic and civic spaces create novel threat surfaces? What histories of categorization and institutionalization does a recognition algorithm entrench? How are communities of “users,” whether they are subjects or agents of being “seen” by such technologies, conceptualized by researchers? How do these communities, even as subjects of surveillance, realize their own agency in the creation of cultures, beliefs, and practices of facial recognition?
The specific objectives of this session are to 1) Bring together a set of researchers, advocates, and policy actors concerned about this intersection of machine learning and security, 2) identify the most salient and pressing questions lying outside of the normal purview of fairness, accountability, and transparency, 3) identify potential research methods, communities, and sites of integration.
For the first 15 minutes the session organizer will present her field work and early findings on the topic, after which the floor will open to discussion around a set of key questions.
The session outcomes include identifying a community of potential collaborators, outlining robust research questions and methods, and locating potential communities and sites for future study.
An interest in facial recognition and knowledge of how to effectively collaborate on putting together a robust research agenda.
Resources: Will follow up with the slide deck for the presentation.
Facilitator: Kemly Camacho, Jeannette Torrez
Duration: 1 hour
In this session we will share the results of research on the situation of women in IT careers (informatics, electronics, etc.) in Costa Rica and Argentina. We will also share the results of research on the online risks and threats faced by women environmental activists in Central America.
We will present the path we have taken to transform research results into advocacy and, later, into policies at the national level.
We will also share how the research experience allowed us to gather evidence and foster new communities of women in STEM.
10 minute presentation on the situation in Costa Rica, 10 minutes presentation on the situation in Argentina, 40 minutes of discussion.
To address the challenges facing women in STEM areas and the IT industry in Latin America
Interest in the situation of women in STEM areas
Facilitator: Alia Yofira (Researcher at ELSAM)
Duration: 1 hour
The adoption of a comprehensive personal data protection regulation is pivotal to safeguarding the fundamental right to privacy in the digital age. However, policy-makers are oftentimes ill-equipped and insufficiently informed to formulate strong, comprehensive personal data protection laws. As representatives of the voices of the people, civil society organizations (CSOs) play an important role in policy development processes on data protection in their respective countries. Since 1993, the Institute for Policy Research and Advocacy (ELSAM) has been advocating for the integration of human rights principles in the policy-making process in Indonesia. This session will further discuss ELSAM's advocacy strategies and the biggest hurdles we have faced in advocating for the passage of the Personal Data Protection Bill in Indonesia.
This session aims to brainstorm policy advocacy strategies (best practices and success stories) and seek feedback on ELSAM’s advocacy strategies (what can we improve?)
15 minutes short introduction to privacy issues in Indonesia and ELSAM’s advocacy strategies on advocating Personal Data Protection Bill
45 minutes discussion and Q&A (feedback on ELSAM’s advocacy strategies, success story of data protection regulation advocacy by participants)
The desired outcome of this session is to identify practical strategies for improved personal data protection regulation advocacy
Useful Skills: Policy Advocacy
Facilitator: Joss Wright
Duration: 1 hour
This session is focused around gathering data from platforms for information controls research.
We will first look at the implications for information controls research of the growing centralization of the internet towards platforms such as Facebook and WeChat, and at their dual role as gatekeepers for information and as actors asked to carry out detection, moderation, and removal of content. Getting community input on the legal and ethical implications of gathering data on these platforms will be a key part of the session.
More practically, this session will aim to discuss practical means to gather data from platforms, particularly data from mobile apps such as WhatsApp or WeChat, which are increasingly important sources of information. Are network-based approaches, such as proxies, most practical, or would it be better to hook directly into applications on devices or via emulators?
Given the limited time available, this session will hope to lay the groundwork for developing an agreed set of techniques to gather platform- and app-based data most effectively.
The session will aim to set out the legal and ethical constraints around gathering data from closed platforms, in order to maintain a view of information controls as they move from network-based censorship to platform-based censorship.
From a technical perspective, the planned outcome would be to lay the groundwork for the development of tools, or a set of approaches, aimed at expanding our view of the increasing amount of data available only inside online platforms, as a complement to traditional network measurement approaches.
10 minutes -- introduction to problem and discussion of key platforms.
20 minutes -- legal and ethical implications of accessing platform data without explicit API access.
30 minutes -- brainstorming approaches, tools, and techniques to access key platforms.
- An initial set of ethical principles, or questions, surrounding access to closed platforms such as Facebook.
- An identified set of techniques, tools, and approaches most appropriate for gathering data from different key existing platforms. (Preliminary focus on WhatsApp and WeChat.)
If more time is available these could be extended. Follow-up or breakout sessions are possible.
Mobile app development. Network security, particularly SSL certificate pinning. Experience with regional platforms of interest. (WeChat and Soroush are potentials.) Ethics of network measurement.
Menlo Report: https://www.caida.org/publications/papers/2012/menlo_report/
(Technical details platform-specific.)
Facilitator: Moses Karanja
Duration: 1 hour
While reasonable privacy intrusion is expected in and around airports, communication surveillance at most airports is unrestrained. This session brings together researchers interested in the role of international organisations in regulating privacy practices in transnational spaces like airports. Specifically, it focuses on the role of Airports Council International (ACI), a trade association of airports that develops standards and recommended practices aimed at providing the public with a safe and efficient air transport system. With membership drawn from 176 countries in 2019, ACI has experience initiating cooperation in the aviation industry, an example being airport greenhouse gas monitoring. Privacy practices in airports present a possible issue area for ACI to build international cooperation on.
1) sharing current landscape analysis on airports and privacy practices (with a focus on specific privacy practices on WiFi access and safety using network measurement data);
2) eliciting feedback from participants on the appropriateness of focusing on ACI as an international agency primed to advance traveling publics' privacy interests internationally; and
3) a discussion on what processes and research-based evidence might make ACI (or other potential bodies) prioritise such privacy concerns.
Since the third objective concerns advocacy strategies for a very niche sector, I expect and welcome non-specific advice from participants drawn from their diverse areas of expertise.
Introduction and sharing current landscape analysis on airports and privacy practices: 15 minutes
Feedback from participants on the appropriateness of focusing on ACI: 20 minutes
Discussion on processes and research-based advocacy strategy: 20 minutes
Summary and way forward: 5 minutes
1) An improved landscape analysis document on airports and privacy practices.
2) A possible advocacy brief from the discussion on how ACI (or other potential bodies) can prioritise such privacy concerns.
Network monitoring, especially on mobile devices.
Link to ACI website: https://aci.aero/about-aci/priorities/
Interesting Reddit AMA on airport culture:
Facilitator: Matthew Braga
Duration: 2 hours
At the end of 2017, Citizen Lab released Security Planner, an online resource that provides simple recommendations for lower-risk users interested in improving the privacy and security of their digital lives. When someone visits the Security Planner website, we ask them a few questions about their devices, habits, and concerns, and then present them with next steps — a privacy setting to change, perhaps, or an app to download — tailored to their needs. However, Security Planner is only as good as the recommendations it provides, and we want to make sure we’re giving our users the best possible advice.
One way we do this is with the help of our peer review committee, drawn from experts in the wider security and privacy community. Members help vet recommendations before they appear on the site for accuracy, relevance, timeliness, and ease of use for a general audience.
Another way is with sessions like this one. The goal of this session is to solicit constructive feedback on the usefulness of Security Planner’s current recommendations from a wide range of CLSI attendees, and brainstorm potential new recommendations and features that could enhance the Security Planner experience in the months and years to come.
During this session attendees will participate in a handful of brainstorming and feedback-driven exercises. Attendees will…
>>> Go through the process of using Security Planner to generate their own tailored recommendations, and then provide feedback on the experience.
>>> Review the content of select Security Planner recommendations for qualities such as accuracy, relevance, timeliness, legibility, and simplicity, using a provided evaluation rubric.
>>> Brainstorm new potential recommendations, as well as opportunities to add new features and other overall improvements to the Security Planner experience.
This session will run for two hours. The following is a rough schedule:
- 15 minutes: A brief introduction to Security Planner and the road ahead
- 15 minutes: You try it: A Security Planner dry-run with feedback
- 60 minutes: Content review
- 30 minutes: Future recommendations and feature brainstorm
The near-term outcome of this session will be the development of new, updated, and future recommendations that will make Security Planner even more reliable and trustworthy for our users. Longer term, the hope is that we will be able to use feedback from participants to improve the overall design and functionality of the Security Planner experience.
None required! In fact, the wider the range of skillsets, backgrounds, perspectives, and expertise the better. This session will benefit equally from the presence of technical and non-technical attendees who can draw on their own experiences when reviewing recommendations and providing feedback on the Security Planner experience overall.
- Evaluation rubric for Security Planner recommendations (to come)
- Brainstorming document for new ideas (to come)
Facilitator: Adrian Shahbaz; Allie Funk
Duration: 1 hour
Tech companies are having an increasing impact on internet freedom and digital rights worldwide. In light of this, and other evolving internet freedom issues, the Freedom on the Net (FOTN) project is undergoing a methodology review. During the initial review process, the FOTN team is eager to hear from the internet freedom community on how to more effectively assess the role that tech companies have on free expression and privacy in the 65 countries the report covers.
This session will engage the internet freedom research community on how to best reflect the actions of tech companies on free expression and privacy in countries covered by the FOTN report, including what emerging issues lie within the report’s scope, how the actions of private actors may be weighed against those committed by the state, and what additional indicators can be added to the report’s checklist of questions, which will be circulated ahead of time for participants’ preparation.
10min: Introduce FOTN's methodology and which indicators currently take into account the actions of tech companies
20min: Discuss possible indicators on the role of companies in restricting online content
20min: Discuss possible indicators on the role of companies in surveillance and data collection
10min: Take stock of discussion outcomes and elaborate next steps
The session will produce a set of possible indicators relating to the role of companies in (a) restricting online content and (b) privacy, surveillance, and data collection, to be incorporated in future editions of the Freedom on the Net methodology.
People who have developed qualitative and quantitative research methodologies would be of particular help for this session, as well as those with knowledge of companies’ content moderation and data collection policies.
A printed copy of the FOTN 2019 checklist of questions will be provided. Broader information about the report methodology may be found here: https://freedomhouse.org/report/freedom-net-methodology
Facilitator: Jon Penney (Citizen Lab / Harvard Law), Joss Wright (Oxford Internet Institute)
Duration: 2 hours
How can private sector companies, particularly those in the technology sector, be better held accountable for human rights abuses they commit while operating abroad?
This question is a persistent challenge for the Citizen Lab and other researchers and activists working at the intersection of ICT, human rights, and global security, where victims of international corporate human rights abuses, such as victims of internet censorship or targeted surveillance online, often have no recourse in their own countries due to corruption, a lack of sufficient laws or independent courts, or the fact that the government itself is involved in the abuses.
This session will continue the conversation that began at last year's CLSI on this topic. However, if you did not attend last year, this session will still lay a foundation so all attendees can participate. We will consider questions such as:
• What are the international human rights obligations of businesses operating abroad?
• What are the responsibilities of states to regulate the activities of businesses, domiciled or resident within their jurisdictions, while operating abroad?
• What research is being done, or could be done, to better track the impact of these businesses?
• Are there any current/new developments in the law? (e.g., what is the status of the talks concerning a new UN treaty?)
• Do international or national level solutions/regulatory frameworks provide better means of accountability?
• What are the existing mechanisms/frameworks for accountability and their advantages/disadvantages? (e.g., has the new Canadian Ombudsperson for Responsible Enterprise (CORE) been a success?)
• What are some new / emerging / potential mechanisms/solutions/reforms for this accountability problem?
This session will build upon the discussion last year, with an aim to (1) Lay a common legal, policy, and ethical foundation for session participants to understand the rights/obligations of states, companies, and citizens in this context; (2) formulate a list of concrete proposals, solutions, and policies, to recommend to international bodies and national governments providing practical and effective means for holding private sector companies, businesses, and organizations accountable for human rights abuses committed abroad; (3) suggest research methods and means of measurement, to probe/test the effectiveness of proposed solutions/recommendations.
This session will begin laying the foundation for the broader conversation with a discussion of some examples of businesses operating internationally, outside normal laws and rules, and abusing human rights, so that all participants are on the same footing. We will then, as a group, consider the different questions we'd like to address during the session (with those in the session description as a starting point) and proceed to address each question in turn. The discussion should inform participants about existing mechanisms of corporate accountability (for digital activities but also more generally) as well as their advantages and disadvantages. It will also work to build consensus in the group around the most effective reforms, ideas, and solutions, as well as concrete ideas for reforms and new research/projects to hold these businesses accountable.
The two core intended outcomes: (1) the session will provide the foundation for a report or paper analyzing the issue in depth and proposing various policy solutions specific to digital activities; and (2) it will lay the foundation for new research/projects/studies to evaluate and analyze existing accountability measures.
We need people working on tracking business/corporate human rights abuses both in their home countries and internationally or people who have experienced such abuses, or worked to mitigate/prevent them. We need people with research skills with ideas on how to track/hold them accountable. We need people with law/policy/international organization backgrounds with ideas for solutions/reforms to hold them accountable. But everyone is welcome!
- UN Guiding Principles on Businesses and Human Rights
- UN Treaty Talks and Human Rights Accountability for Corporate Digital Activities
Facilitator: Maria Xynou
Duration: 1 hour
Censorship events emerge around the world on an ongoing basis, often during political events. How can we improve how we track and respond to them?
The Open Observatory of Network Interference (OONI) proposes a decentralized approach: censorship measurement campaigns led by local communities. Through the use of OONI Probe software, you can measure networks to examine how internet censorship is implemented and to collect network data that can potentially serve as evidence of censorship.
This hands-on session will explain how you can use OONI Probe to measure various forms of internet censorship, and how to use OONI Run to coordinate censorship measurement testing in your community (for example, leading up to elections). We will also discuss how to interpret OONI’s censorship measurement data, and how to coordinate follow-up testing based on results.
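As a taste of the data-interpretation step described above, here is a minimal sketch that tallies suspected blocking types per tested URL from a batch of measurement dicts. It assumes the measurement JSON exposes a `test_keys.blocking` field (as OONI web_connectivity results do, where `blocking` is `False` for no anomaly or names the suspected interference type); the sample data is illustrative, not real measurements.

```python
from collections import Counter

def summarize_anomalies(measurements):
    """Tally suspected blocking types per tested URL from a list of
    web_connectivity-style measurement dicts."""
    tally = Counter()
    for m in measurements:
        blocking = m.get("test_keys", {}).get("blocking")
        # `blocking` is False when no anomaly was found; otherwise it
        # names the suspected interference type (e.g. "dns", "http-diff").
        if blocking:
            tally[(m.get("input"), blocking)] += 1
    return tally

# Illustrative sample: two DNS-level anomalies for one URL, one clean result.
sample = [
    {"input": "https://example.org/", "test_keys": {"blocking": "dns"}},
    {"input": "https://example.org/", "test_keys": {"blocking": "dns"}},
    {"input": "https://example.com/", "test_keys": {"blocking": False}},
]
print(summarize_anomalies(sample))
```

A repeated anomaly for the same URL across vantage points and time, as tallied here, is the kind of signal that warrants the coordinated follow-up testing the session covers.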
Community members with experience leading local censorship measurement campaigns are encouraged to share their knowledge, any challenges they encountered, and lessons learned. This session will involve a brainstorm on best practices when engaging local communities with censorship measurement campaigns.
The objectives of this session include:
* Share knowledge and skills on how to coordinate censorship measurement campaigns
* Discuss best practices when engaging local communities with censorship measurement research
* Better understand community needs and challenges when participating in censorship measurement campaigns
* Collect feedback and ideas on how to support censorship measurement campaigns
* Introduction to OONI & censorship measurements campaigns
* Using OONI Probe
* Generating & sharing OONI Run links: Coordinating censorship measurements
* Exploring & interpreting OONI data
* Discussion of best practices
Participants will learn how to use OONI Probe and OONI Run to coordinate censorship measurement campaigns. This will enable them to coordinate censorship measurement testing worldwide, particularly in response to emergent censorship events and/or leading up to political events (such as elections).
Participants will also learn how to interpret OONI data in order to use it as part of their research and advocacy efforts.
The feedback that participants will provide will help inform the development and improvement of OONI tools and methodologies.
Previous experience with censorship measurement research is helpful, but not required.
OONI Probe: https://ooni.io/install/
OONI Run: https://run.ooni.io/
OONI data: https://ooni.io/data/
Facilitator: Mahsa Alimardani
Duration: 1 hour
This session will take a peek into my OTF Fellowship project at the OII, which studies national and foreign platforms in Iran in the wake of censorship. The project is investigating the use of Telegram and its government-backed alternative Soroush after the censorship of Telegram (the nation's most widely used messaging and social media application). While there were some challenges with Soroush, the project has made great headway delving into Telegram and answering some important research questions about communication technologies in Iran's censorship environment.
Workshop early findings from my project, as well as ideas on how to work with these massive data sets in the future.
About 10 minutes to introduce the topic and the field, 10 minutes to go over my methodology, 10 minutes to go over findings, and 10 minutes for discussion.
To discuss potential ways to study massive data sets such as Telegram's, and to discuss new ways to situate platform and censorship studies.
Useful Skills: Data science skills
A book chapter orienting Telegram's importance: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2976414
API code for Soroush: https://github.com/maasalan/soroush-messenger-apis
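As a starting point for the "massive data sets" discussion, a minimal sketch of streaming a message dump one record at a time rather than loading it whole into memory. The JSON-lines format and the `channel` field are hypothetical stand-ins for whatever export format the project actually uses.

```python
import json

def iter_messages(lines):
    """Stream messages from a JSON-lines export one at a time, so the
    full data set never has to fit in memory (hypothetical format)."""
    for line in lines:
        if line.strip():
            yield json.loads(line)

def count_by_channel(messages):
    """Tally message volume per channel: a typical first aggregation
    before any deeper platform or censorship analysis."""
    counts = {}
    for msg in messages:
        ch = msg.get("channel", "unknown")
        counts[ch] = counts.get(ch, 0) + 1
    return counts

# Illustrative in-memory sample; in practice `lines` would be a file handle.
sample_lines = [
    '{"channel": "news_ir", "text": "..."}',
    '{"channel": "news_ir", "text": "..."}',
    '{"channel": "soroush_test", "text": "..."}',
]
print(count_by_channel(iter_messages(sample_lines)))
```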
Facilitator: Gabby and Joey Shea
Duration: 2 hours
The ubiquitous news coverage and language of security that surrounds unwanted, false, and malicious content has put increasing pressure on politicians, members of civil society, and private companies to produce solutions. Yet proposed and enacted policies and efforts are left wanting, often ineffective, and at worst ripe for abuse. This session will therefore cover four broad categories of solutions aimed at mitigating the spread and impact of disinformation, and their associated opportunities, challenges, and vulnerabilities. It will also consider the viability and effectiveness of each of these solutions in different political contexts. The four categories of efforts to curb disinformation are: 1) technical; 2) educational; 3) legislative or regulatory; and 4) industry-led policies.
1) Identify global efforts to mitigate disinformation; 2) For each policy or solution, identify who the stakeholders are, and what powers, responsibilities, and accountability are afforded to them; 3) Identify vulnerabilities in each of these policies or solutions and how they can be exploited (and by whom); and 4) Consider the feasibility of these solutions in illiberal and authoritarian regimes and liberal democratic states.
The session will begin with an overview of the four broad categories of disinformation solutions. We will have a seed list to begin with and then elicit additional solutions and policies from the participants. We will then break out into groups representing the four categories. Each group will then pick one or two solutions to “hack”. After the breakout session, we will reconvene to discuss our findings and how these disinformation efforts can be improved.
A global list of counter-disinformation efforts; relevant stakeholders and their accorded responsibilities and powers; context-specific strategies; and potential risks to civil liberties and human rights.
Mind mapping; policy analysis; security studies; media studies; online marketing. Honestly, disinformation is such a ridiculously broad category of problematic information that all backgrounds and skills are probably useful to the session.
“A guide to anti-misinformation actions around the world” https://www.poynter.org/ifcn/anti-misinformation-actions/
“2018 thematic report to the Human Rights Council on content regulation” https://www.ohchr.org/EN/Issues/FreedomOpinion/Pages/ContentRegulation.aspx
“The MisinfoSec Framework Takes Shape: Misinformation, Stages, Techniques and Responses” https://medium.com/@credibilitycoalition/misinfosec-framework-99e3bff5935d
Facilitator: Alexei Abrahams
Duration: 1 hour
Disinformation and propaganda on Twitter in the MENA region tend to center around certain ‘filters’ (hashtags, keywords) that delimit a topic of conversation (#jamal_khashoggi, #sudan, Vision2030, etc.). In this interactive session, we will share Twitter data streamed on various filters of interest, and help participants perform some preliminary analysis to identify influential accounts and (time permitting) inauthentic ‘bot’ accounts.
(1) Download and share data on topics of interest to participants.* (2) Provide a demonstration of some preliminary analysis to identify influencers and bots operating on these topics.
15 minutes for introductions and sharing data. 45 minutes for analysis
Data shared with participants. Topic influencers and possibly some bots identified.
Familiarity with MENA region. Familiarity with Python/R, data wrangling
Participants should contact Alexei so that he can download Twitter data on topics of interest to them.
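As a preview of the kind of preliminary analysis described above, a minimal sketch that ranks accounts by how often others retweet them within a filtered topic, a simple first-pass proxy for influence. The edge list here is illustrative, not real data.

```python
from collections import Counter

# Hypothetical stream of (retweeting_account, retweeted_account) pairs,
# e.g. extracted from tweets matching a filter like "#sudan".
retweets = [
    ("user_a", "influencer_1"),
    ("user_b", "influencer_1"),
    ("user_c", "influencer_2"),
    ("user_a", "influencer_1"),
]

def top_influencers(edges, n=10):
    """Rank accounts by how often they are retweeted by others."""
    counts = Counter(target for _, target in edges)
    return counts.most_common(n)

print(top_influencers(retweets))
```

In-degree on the retweet graph is only a starting point; the session's bot-identification step would layer on further signals (account age, posting cadence, near-duplicate content) beyond this sketch.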
Facilitator: Oarabile Mudongo
Duration: 2 hours
This session seeks to share stories and address issues of internet shutdowns across the globe. In addition, it will offer participants an interactive session to discuss the impacts of internet shutdowns, often ordered by governments to disrupt communication and access to information, on people's social, political, and economic lives.
By combining personal narratives with numbers on economic loss, this session seeks to produce a policy brief and a set of recommendations grounded in the vast and deep effects of shutdowns on the people living in blackouts or otherwise disconnected.
Objectives: This practicum session is part of a larger conversation on network disruptions aiming at extending the effort of the #KeepItOn Coalitions, amplifying the voice of internet shutdown victims, and substantiating useful solutions for multiple actors, including circumvention technology and policy advocacy.
1: We will have an informing and sharing session (group discussions) in addition to the workshop. The workshop will have a practical hands-on component, referred to as a 'Practicum', which will run throughout the session. As an outcome, these discussions will produce a policy brief with a set of recommendations for state actors (African Union).
2: Beyond this session, we hope to take the opportunity at this program to find new partners, identify new trends and opportunities, and share positive policy progress and advocacy tactics back to the community.
3: In this session, we will discuss and develop a concrete roadmap for introducing economic arguments into day-to-day campaigning and policy work using COST (https://netblocks.org/projects/cost), a new data-driven policy tool that will automate the task of economic estimation. This workshop brings together experts from legal, technology, and policy backgrounds and invites active participation from the audience to better understand how a next-generation policy tool can impact the internet freedom and digital rights community. How can we make policy work more visible to under-represented communities? How can we build advocacy tools that empower the general public?
Communications Policy; Research; Design thinking
Facilitator: Amalia Toledo & María Juliana Soto
Duration: 2 hours
Technological changes and our close relationship with technology and the Internet have made us believe that we live in a world full of possibilities that make our lives easier. But the reality is different, particularly if you belong to groups that have historically suffered systemic violence and discrimination, such as women and gender dissidents. In fact, digital technologies, those that we carry in our pockets, that we install in our residences and workplaces, that private companies so enthusiastically sell us and governments adopt, have become weapons of patriarchy. These new tools silence, monitor, harass, assault, and control women and gender dissidents. The landscape, for the moment, is bleak.
In spite of this, feminist digital storytelling is gaining strength every day as a form of resistance, of summoning us to strategize against technological patriarchy. And that is what we want to explore in this session: how these narratives can be used to fight back against the multiple forms of violence faced by women and gender dissidents. To this end, we will present two projects, "Alerta Machitroll" and "Tour Delirio: Salsa and Surveillance", that use alternative narratives to draw attention to the technology-mediated ways in which women and gender dissidents are silenced, surveilled, or controlled. Then, in small groups, we will think about situations of violence that we know of, have witnessed, or have lived through, and answer the following questions: what happened, how we/they responded or would have liked to respond, and what reflections we had after the event and during the group discussion.
Once in plenary, we will present one case per group and collectively build an ingenious response to the situation: a cultural metaphor (e.g., a movie, a song, a fable, or a folktale) that helps explain where the problem is and why we must fight it.
This exercise will hopefully inspire participants to think about narratives that strengthen feminist movements through the construction of alternative, attractive and witty counter-speeches closer to pop culture, with the ultimate aim of reaching more audiences, of expanding resistance.
Objectives: Strengthen digital feminist narrative or storytelling to deal with the multiple forms of tech-mediated, gender-based violence through witty, alternative counter-speeches closer to pop culture, allowing us to broaden our audience and expand our resistance against patriarchy.
-Brief presentation of "Alerta Machitroll" and "Tour Delirio" projects.
-Break-out groups to identify cases of gender-based violence that show the silencing, surveillance or control that technology can exert over women and gender dissidents.
-Collective discussion of what ingenious responses or cultural metaphors can be used to fight back such violence and/or create awareness among people.
From the session we will come up with ideas to create alternative and witty feminist storytellings/narratives for participants to take back to their contexts and work on.
There is no skill set that is necessary for this workshop beyond the innate creativity of human beings. However, extensive knowledge of pop culture is highly welcome.
Facilitator: Jonathan Rozen & Avi Asher-Schapiro
Duration: 1 hour
As a global press freedom organization, The Committee to Protect Journalists (CPJ) is grappling with how surveillance poses a threat to journalists’ safety and press freedom around the world. Our research model is built around documenting discrete cases of physical and legal attacks against journalists, as well as providing journalists with safety information based on our research. Our session will present several questions about how best to enhance our research on surveillance and open up the floor for others to contribute ways we may strengthen our work, as CPJ and as a global community.
The session will elicit feedback on the following questions:
-How do we use/mobilize our networks of journalists around the world to alert us to, and help us document instances of surveillance? What might be ethical considerations for this?
-How do we train staff, build capacity to be prepared to take on these issues, especially when our core capacity is angled towards physical and legal threats?
-How can we as privacy advocates encourage journalists to advocate for improved policy and reduced abuse of surveillance tools? In other words, how can we encourage journalists to see themselves as constituents in the fight against abuses of surveillance technology?
-What are ways our research could assist your research and vice versa? Identify pathways for collaboration, information sharing, etc.
Loose time-frame for each section of the discussion:
-Minutes 1-10: Avi and Jonathan outline CPJ’s current research
-Minutes 10-25: How do we use/mobilize our networks of journalists around the world to alert us to, and help us document instances of surveillance? What might be ethical considerations for this?
-Minutes 25-40: How do we train staff, build capacity to be prepared to take on these issues, especially when our core capacity is angled towards physical and legal threats?
-Minutes 40-55: How can we as privacy advocates encourage journalists to advocate for improved policy and reduced abuse of surveillance tools? In other words, how can we encourage journalists to see themselves as constituents in the fight against abuses of surveillance technology?
-Minutes 55-60: What are ways our research could assist your research and vice versa? Identify pathways for collaboration, information sharing, etc.
After the workshop, CPJ will produce a memo drawn from the dialogue that will be distributed among participants. Hopefully we will also establish new partnerships for future research.
A wide variety of skills are welcome: technical, research, legal, etc.
It would be helpful for participants to read CPJ research published on our site that involves surveillance; you can find it at this link:
Facilitator: David Kelleher
Duration: 1 hour
This session will take advantage of the presence at CLSI of many experts in this area to map the current work in the field of digital security and gender equality and think together about emerging directions for research and advocacy.
1. Map the work being done by participants and the work they are aware of
2. Analyze the pattern that is emerging
3. Identify possible directions for research and advocacy
Intros and session outline
Hear from participants as to the work they are engaged in
Use an analytic tool to classify and sort
Ask what are promising directions for innovation
1. Blog post
2. Increased understanding among participants as to the current state of the field
Experience in gender and cyber security/policy areas.
Facilitator: King-wa Fu
Duration: 1 hour
Much has been said about China's intention to exert outward influence and manipulate public opinion in Taiwan, Hong Kong, and elsewhere via the media and the Internet. This can operate through injecting pro-China speech into another region's media system, or through attacking or silencing people or websites that hold critical views of China. Misinformation/disinformation campaigns can be, but are not necessarily, part of this strategy. The problem is that this type of campaign is very difficult to identify. It spans multiple platforms (offline and online media; from BBS and websites to social media; from private to public messages) and multiple data sources (company registries, domain name registries, abnormal Internet traffic, cyberattack reports, etc.). It also requires collective knowledge to analyze Internet access data as well as content data.
1) To identify all possible strategies China may be using;
2) To take stock of the resources, skillsets, and technology we have so far, and the difficulties we face;
3) To brainstorm project ideas
Develop an action plan
Facilitator: Maya Ganesh
Duration: 2 hours
As technologies evolve and develop, so do ways to exploit them to abuse people, and women are usually the first casualties. People who are the targets of tech-mediated violence and online abuse can only prevent or manage their situations by managing their own relationships, networks, and communications. Women have to be constantly vigilant and on guard, and this has a chilling effect on their participation online. Social media companies do little of substance to mitigate the effects of abuse and support victims; when they do act, the results can be poorly conceived, like Facebook's experiment to hash personal images to prevent their unauthorized sharing. Complaints to Twitter to erase death and rape threats in private messages can take weeks to process.
This session asks how and if civil society could develop guidelines for building tools and technologies to resist, prevent or mitigate particular kinds of online abuse. Online abuse is a social, cultural and political problem, not something that can be techno-fixed. However, like Stalker Buster and Foxy Doxxing, how can we leverage what we know about patterns of online abuse, the interaction of the social and the technical, and the lived experience of women to interrupt some of the computational dimensions of abuse? This session was proposed as an imaginative and creative space, and not just to list all the dangers associated with computational/tech solutionism. Is it possible to give designers and programmers a perspective on how to avoid bad technofixes? What might creative, ethnographically-informed and sensitive computational tactics to manage and prevent online abuse look like? Can we develop preliminary specifications and guidelines from a civil society on this? This session will focus on the personal and professional contexts of human rights defenders, journalists and activists.
1. To sketch out both the opportunities of and limits to how computational tactics and tools can address online abuse and harassment.
2. To establish baseline norms and non-negotiables for computational tactics and tweaks at the level of systems requirements and engineering, data architectures, or UX/UI.
3. To identify the different social, community-based, policy, and technical and business actors who need to be involved in such discussions.
1. Introductions; framing the question/problem; clarifying how the session will run, goals and objectives (15-20 mins)
2. Individual work: Creative brainstorming about a dream/wish list of how tech could work to mitigate or prevent online abuse. 10 mins
3. Small group discussion: Should or could we use tech tools and computational tactics to address (particular kinds of) abuse? Why or why not? What would be the limits and opportunities? Who should be part of this process? Each group might work on two or three different kinds of abuse (non consensual image sharing, stalking, or verbal abuse - TBD by the group) 40 mins + 20 mins discussion time (with breaks)
4. Small group work: Based on the previous discussion, drafting a set of guidelines that can become the basis for a collaboratively authored paper/post. 20 mins.
A document that will become a whitepaper.
I would really like this to be a collaborative and cross-disciplinary session. So, I am hoping to see people with a variety of backgrounds and skills: - Familiarity with systems architectures and engineering (mobiles esp) and UI/UX design and development - Legal and policy perspectives on online abuse, data protection and privacy - Awareness of how different kinds of online abuse occur in interpersonal and different socio-cultural contexts
https://lab.witness.org/projects/synthetic-media-and-deep-fakes/
https://github.com/DeepLab/FoxyDoxxing
https://stalkerbuster.de/
https://www.wired.co.uk/article/facebook-revenge-porn-tools
https://ourdataourselves.tacticaltech.org/posts/empowering-abuse
https://www.genderit.org/feminist-talk/architectures-online-harassment-part-1
https://ourdataourselves.tacticaltech.org/posts/30-on-weaponised-design/
Facilitator: Peter Micek
Duration: 2 hours
There are a number of censorship measurement tools that are part of the #KeepItOn community. As our community grows, we want to help the measurement community understand the risks associated with measuring internet shutdowns and censorship.
At this session, we will kick-start our work on understanding the different methodologies the measurement community uses to measure internet shutdowns and censorship, and the safety issues associated with them.
Map different internet measurement tools, user safety concerns, best practices for protecting the safety of users, and minimum standards for user safety.
We have two major outcomes: we want to develop best practices for measuring internet shutdowns ethically, and minimum standards a measurement tool must meet to be recommended by the #KeepItOn community.
Research in internet shutdowns, methodology development, open source documentation
Facilitator: Olga Paz, Jason Li, Helen Nyinakiiza
Duration: 1.5 hours
A fast-paced, interactive session for everyone to share experiences and build a design framework for digital safety trainings. Facilitators will draw on their experiences in the prevention of violence against women in digital spaces and in general trainings with communities in Africa, Asia and South America. Past training processes, games and materials will be shared by the facilitators to stimulate discussion.
- To train ourselves to think intentionally and critically about planning and creating trainings
- To share experiences and resources with one another
- To create a checklist of design requirements
Share & discuss within groups (handouts, games, comics, snacks will be provided):
- How do we get people to understand that digital security is important for everybody?
- How do we figure out who we are training and what their needs are?
- Considerations when creating curricula and training materials
Followed by game time❗️🎲
- Compiled case studies and best practices
- Framework and minimum requirements checklist for designing trainings
- Knowledge sharing from communities around the world
Experience in attending/organizing digital security trainings
Facilitator: Arturo Filastò
Duration: 1 hour
The OONI team has been working on revamping OONI Explorer, the platform used to present network measurements to end users. We have also made some progress in how we make the data available to end users depending on their analysis needs and technical skill.
The proposed session is going to be about analyzing network measurements at scale and presenting the findings to end users in a meaningful and understandable way.
We will discuss some of the workflows that we go through at OONI to process and analyze network measurement data, as well as the recent work done on revamping OONI Explorer.
We will be speaking about the iterative process we go through when doing data analysis for research reports and how we extended this semi-manual process to something that can be done automatically to create meaningful visualisations in OONI Explorer.
The session will try to be as practical as possible, giving participants an opportunity to try ideas out live and get input on how to work with OONI data.
We will also discuss the challenges that arise from presenting automatic analysis of OONI data inside of OONI Explorer, in particular what we did in order to try and minimise the chance of misinterpretation.
Participants will also be able to give feedback and discuss features that could be part of future OONI Explorer versions.
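The aggregate-then-visualize workflow described above can be sketched minimally as computing per-country anomaly rates from a batch of measurements: the kind of summary an Explorer-style interface might chart. The field names and flat `anomaly` flag here are illustrative simplifications, not OONI's actual pipeline.

```python
from collections import defaultdict

def anomaly_rates(measurements):
    """Compute the share of anomalous measurements per country code.
    Field names (`probe_cc`, `anomaly`) are illustrative."""
    totals = defaultdict(int)
    anomalies = defaultdict(int)
    for m in measurements:
        cc = m.get("probe_cc", "??")
        totals[cc] += 1
        if m.get("anomaly"):
            anomalies[cc] += 1
    # A rate alone invites misinterpretation: pairing it with the
    # measurement count (totals) helps flag thin evidence.
    return {cc: anomalies[cc] / totals[cc] for cc in totals}

sample = [
    {"probe_cc": "IR", "anomaly": True},
    {"probe_cc": "IR", "anomaly": False},
    {"probe_cc": "US", "anomaly": False},
]
print(anomaly_rates(sample))
```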
• Have people learn how to use the OONI data as part of their work
• Demo and collect feedback on the revamped OONI Explorer
• Brainstorm ideas on how to present complex technical data with minimal misinterpretation
• Brief introduction of OONI network measurement data and the audiences that need to access the data
• Overview of the new ways by which OONI data can be used
• Demo of the revamped OONI Explorer
• Discussion of the workflow used by OONI analysts when doing investigations and interactive analysis
• Brainstorm and discussion about challenges other projects have faced in using OONI data and presenting their own technical data
• Sharing of knowledge on how to use OONI data
• Collection of feedback to inform the development of our tools
• Better understanding of the audiences that may be interested in using this data
Some prior experience working with OONI data or network measurement data in general.
Facilitator: Marie A. Yeh & Lisa Yeo
Duration: 2 hours
The purpose of this session is to further understanding of those who engage in online harassment. Our session seeks to build on existing literature that implicates online disinhibition, anonymity, and other factors, to explicate the psychosocial motivations of those who engage in online harassment. This project is in its preliminary stages and hopes to bring together researchers to develop a research agenda and future research projects: specifically, research ideas for enhancing understanding of those who harass, in order to inform the development of interventions by organizations, bystanders, and individuals targeted by harassment to prevent, attenuate, and/or de-escalate harassing behaviors online. In particular, this session will work to elucidate the role of gender. While men are more likely to experience any form of harassing behavior, women, especially young women, encounter sexualized forms of abuse at much higher rates, with 21% of women ages 18 to 29 reporting being sexually harassed online, more than double the share of men experiencing this form of harassment (Duggan 2017). Women also view online harassment as more of a major problem than men do (Duggan 2017). In addition, gender is often a major focus of harassing behavior (Duggan 2017).
1. Develop a research agenda and future research projects
2. Identify data sources for collecting meaningful data from social media and other online sources for qualitative and quantitative content analysis to inform the issue
3. Identify interdisciplinary research questions and approaches to tackling online harassment
1. Brief overview of the problem of online harassment
2. Discussion regarding participants’ interests and expertise
3. Brainstorming activities regarding:
a. data sources and data-gathering methodologies for qualitative and quantitative content analysis
b. identification of research questions and approaches
Development of 2 to 3 projects that have research questions and identified methodologies to tackle those questions with concrete next steps for proceeding post session.
- Programming skills that provide the ability to retrieve data from social media and websites
- Different perspectives and expertise that could inform understanding of those who commit online harassment, as well as of its victims.
IODA: A platform to detect and analyse Internet outages
Facilitator: Ramakrishna Padmanabhan, Alberto Dainotti
Duration: 1 hour
The session is about Internet connectivity shutdowns. We'd like to ask people to look together for past/current episodes of Internet blackouts using our IODA (ioda.caida.org) platform. In this session, we will get people familiar with this platform/data and collect feedback about directions to improve the interface usability, APIs, documentation material, and inference methods. We would also like to discuss how IODA and other tools/projects can complement each other.
For people who would like to get a sense for the kinds of events that IODA detects, and would like to get a feel for its interface, here are a few example events (click the links to see these events in IODA's interface):
Shutdown in Ethiopia coinciding with an attempted coup in June 2019:
Shutdowns in Syria during national exams in June 2019: https://ioda.caida.org/ioda/dashboard#lastView=overview&view=inspect&entity=country/SY&from=1559841600&until=1560322500
Shutdown in China in May 2019:
Shutdown in Benin during elections in April 2019:
We want to engage with researchers and civil society in order to promote and facilitate their use of IODA to detect, understand, and document episodes of Internet connectivity shutdowns. More broadly, we want to advance this field together, understanding which tools/data are currently available and in which direction we should focus our efforts.
- We will begin with a demo of our tool, IODA, that detects Internet connectivity shutdowns and show how it enables the observation of their characteristics (timing, geographic extent, etc.).
- We will dive into some examples and discuss them in detail.
- We will engage with the audience and explore recent events that audience members are interested in.
We are targeting the following outcomes:
- An increase in IODA's user base resulting from audience members' increased familiarity with the tool.
- Improved usability, accessibility, and functionality for IODA's web interface as a result of feedback from the community.
- Increased collaboration with other people working in this area to identify gaps in Internet shutdown detection and solutions to address these gaps.
Our session is intended to be accessible to a diverse audience. Technical knowledge about the Internet (e.g.: how Internet routing works) can help members understand IODA's methodology in depth but such knowledge is not a prerequisite.
Project website: https://ioda.caida.org/
IODA dashboard: https://ioda.caida.org/ioda/dashboard
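As a rough illustration of the kind of inference IODA performs, a minimal drop detector over a connectivity signal might look like the sketch below. This is a discussion aid, not IODA's actual inference method, and the signal values are invented.

```python
# A crude stand-in for outage inference: flag points where the signal
# collapses relative to a trailing baseline. Not IODA's actual method.
import statistics

def detect_drops(series, window=5, drop_ratio=0.5):
    """Flag indices where the value falls below drop_ratio times the
    median of the preceding window of observations."""
    events = []
    for i in range(window, len(series)):
        baseline = statistics.median(series[i - window:i])
        if baseline > 0 and series[i] < drop_ratio * baseline:
            events.append(i)
    return events

# An invented connectivity signal with a two-interval blackout.
signal = [100, 98, 102, 99, 101, 100, 20, 18, 95, 100]
print(detect_drops(signal))  # → [6, 7]
```

Real systems must additionally distinguish measurement artifacts from genuine outages, which is exactly the kind of inference-method feedback the session hopes to collect.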
Duration: 1 hour
Magma provides a research framework to people working on information controls and network measurements, especially in authoritarian contexts and high-risk areas. This framework is meant to enable them to properly structure an activity plan, to make informed choices regarding the required tools (including ethical and security aspects), and to analyze the data produced by such tools. The work will also seek to understand what further improvements could be implemented in existing network tools so as to increase their effectiveness in light of measurement needs.
The proposal intends to address a crucial lacuna by creating a dedicated guide for network measurements in risky environments, which will be made available in a range of different formats for easy online readability and web
In particular, the guide is conceived as an open-licensed and collaborative repository that systematizes and makes available network measurement and internet censorship testing methodology by drawing on the experiences and feedback from previous, current, and upcoming work in high-risk areas.
The three main components of the project are:
1. Testing methodology
2. Data analysis
3. Research ethics
2. Project description
3. Future work and collaborations
Evaluation of a guide to research on topics of internet censorship, information controls and surveillance.
Facilitator: Sylvia Kanari
Duration: 1 hour
My session will map censorship in Tanzania as a way of identifying trends and developments in this area. It will look at the events and times when censorship and surveillance are at their height in the country.
To identify trends in censorship in Tanzania
To identify events that lead to censorship
To identify ways in which censorship is carried out in Tanzania
Sharing some of my research findings on Information Controls in Tanzania.
Having a discussion that will look at different methods of measuring censorship and what is most applicable in Tanzania.
Participants will have a better understanding of how censorship is carried out in Tanzania.
Participants will share different methodologies of measuring censorship that can be applied in information controls research.
Facilitator: Juliana Guerra
Duration: 1 hour
A safer internet begins with the infrastructure. Although we don't realize it, when we navigate the internet, we do it over protocols that run over other protocols which are far from neutral. Who designed them? How? Where? This session is not only a brief look at the way internet protocols are designed but an invitation to imagine and create new protocols, based on the Feminist Principles of the Internet. What do we need to fix? To change? To create? Let's do it together!
At the Internet Research Task Force (IRTF) we are writing a draft to describe, from an intersectional feminist perspective, how internet standards, protocols, and their implementations may impact diverse, traditionally marginalized groups. For this purpose, we have identified some principles and protocols, directly and indirectly related to feminist claims on the internet such as access, movements, economy, expression, and embodiment.
In this session, we will play like an algorithm: the input will be a set of words from internet protocols, feminist principles, and our different backgrounds on gender and technology. Based on some guidelines and questions, we will process this material to shape other possible protocols for a more inclusive, safe, and fair internet.
To move forward on the methodology chapter of draft-guerra-feminism, which consists of identifying and collecting use cases of how internet protocols can enable or restrict the possibility for women and queer people to enjoy universal, acceptable, affordable, unconditional, open, meaningful, and equal access to the internet.
1. Based on my research and experience in the Human Rights Protocol Considerations Research Group (HRPC-RG) at the Internet Research Task Force (IRTF/IETF), to make visible the way in which internet protocols are designed and defined, their biases and limitations, and the need for more diverse participation in the design processes.
2. Through a cut-up and montage methodology, to identify a common language to speak about internet protocols with end-users.
Input (15'): An introduction: what are the IETF and the HRPC-RG? Who is part of that community, and what are their goals? What do we want?
Processing (30'): A cut-up and montage session, to recreate texts with a set of words present in the IETF archive and the Feminist Principles of the Internet.
Output (15'): Conclusions. All participants will visit and read each of the products and share some ideas and opinions.
Depending on the number of participants, each team will create at least three paragraphs, each articulating one internet protocol with one Feminist Principle of the Internet. These texts will contain an understanding of how a protocol and related services on the internet (e.g., email or video and audio streaming) can be articulated to specific Feminist Principles of the Internet. Hopefully, some of these paragraphs will refer to use cases where the protocol, or related services, can enable or restrict the safe and full participation of marginalized groups such as women and queer people, among others like rural communities, indigenous people, or people of color.
It is ideal to involve people with different abilities, interested in the intersection between gender and technology. It will be very useful to involve people with technical skills (both in software development and in network design and maintenance) and people who understand the needs and claims regarding gender and women's rights on the Internet.
- draft-guerra-feminism https://datatracker.ietf.org/doc/draft-guerra-feminism/
- feminist principles of the internet https://feministinternet.org/en/principles
Facilitator: Bex Hong Hurwitz and Cynthia El Khoury
Duration: 1.5 hours
We find ourselves engaging in incident documentation and feeling its impacts. As feminist activists and feminist digital security and wellness practitioners, we will facilitate a space for discussions about our best practices, current questions, and challenges around how to practice incident documentation in a feminist and trauma-informed way.
Participants will engage in discussions sharing current practices of incident documentation, feminist and trauma-informed approaches to documentation and use of documentation, and barriers and opportunities in sharing documentation across movements.
i. Somatic Experience work
We will begin our session together with sharing our somatic experiences of documentation. What drives us to document; who do we document for; where does it land in our bodies?
ii. Breakouts to discuss and share practices
Breakout 1 - Using shared documentation for incident response and broader change
- Examples of how we have used documentation
- How can we reduce harm after an incident occurs when we are reusing the documentation?
- Barriers / opportunities to using in incident response, in structural / campaign work, in other areas
Breakout 2 - Feminist and trauma-informed incident documentation and use
- What are impacts of incident experience and documentation / support work? Who stewards the documentation?
- How is “healing” informing our documentation?
- How are we storing and sharing the documentation?
- How can we reduce harm after an incident occurs when we are reusing the documentation? How do we ensure informed consent for unknown futures?
- How can we know that care is embedded within documentation?
iii. Politics of Care in Documentation
Acknowledging work that exists and the effect that it can have on the mind, body, emotions and spirituality. We will share and document how we care for ourselves and each other while documenting.
Recognition of somatic experiences of documentation; sharing practices in collective documentation and feminist trauma-informed approaches; identification of challenges and questions and spaces for continuing to share and work through challenges and questions
Digital security incident response experience; trauma-informed facilitation
Facilitator: Christopher Parsons and Cynthia Khoo
Duration: 1.5 hours
Stalkerware, a type of malware that is used to facilitate intimate partner violence, abuse, and harassment, is a pressing problem. In this session, we examine how recently published stalkerware research can be mobilized by security researchers, legal and policy advocates, and victim/survivor support workers in their efforts to mitigate and prevent gender-based violence, abuse, and harassment.
This session will continue from last year’s discussion about stalkerware. It aims to: (1) engage in a discussion about the potential for Citizen Lab research methods to be adopted by parties in non-Canadian jurisdictions; (2) evaluate ways to mobilize research to effect meaningful change; and (3) leverage frontline and related experts to identify novel lines of stalkerware-related research.
This session is divided into two parts. First, Citizen Lab researchers will present a brief summary and breakdown of the Lab’s recently published stalkerware research. This will serve to provide a roadmap for replicating the methods and modes of analysis we used for those who are interested in engaging in stalkerware-related research in other jurisdictions.
The other half of the session will focus on how to mobilize completed research among those who work on the frontlines of the stalkerware issue, from a programming and design, security research and malware detection, policy and politics, or victim/survivor support and advocacy perspective. Specifically, during this half of the session we engage in either small breakout or general group discussions, to engage with the following questions:
(a) What can researchers do in designing projects to help advocacy groups, frontline workers, researchers from related fields, and other interested experts take up the output products? What allocations of resources are most likely to result in inhibiting or remedying the development, availability, purchase, or use of stalkerware?
(b) How can we encourage researchers and lawyers from other jurisdictions to conduct stalkerware-related research? What guidance or resources do they need?
(c) What activities should private companies, such as mobile OS developers, app developers, payment processors, advertisers, anti-virus companies, and third-party hosting companies undertake to mitigate, prevent, or counter the availability, spread, utility, or functionality of stalkerware?
(d) How do we navigate ethical issues and draw lines in the case of "dual-use" applications (e.g., those used for child or employee monitoring, or tracking lost devices)? How can scrutiny of potential harms be integrated into the creation and sales processes of these applications?
(e) How should anti-virus engines and anti-malware detection systems address stalkerware apps?
The outcomes of this session involve, first, engaging in a multistakeholder and multinational discussion of the issue of stalkerware. Second, the breakout or general group discussions will yield a better appreciation of what research questions and projects are appropriate going forward, as well as means of mobilizing existing and future research through advocacy, law, and technological interventions.
Policy, legal, technical, advocacy, frontline support worker, or activism background
(a) The Predator in Your Pocket: A Multidisciplinary Assessment of the Stalkerware Application Industry at https://citizenlab.ca/2019/06/the-predator-in-your-pocket-a-multidisciplinary-assessment-of-the-stalkerware-application-industry/
(b) Installing Fear: A Canadian Legal and Policy Analysis of Using, Developing, and Selling Smartphone Spyware and Stalkerware Applications at https://citizenlab.ca/2019/06/installing-fear-a-canadian-legal-and-policy-analysis-of-using-developing-and-selling-smartphone-spyware-and-stalkerware-applications/
(c) Abuse of Power: An Introduction to Spy Apps & Stalkerware (comic) at https://netalert.me/stalkerware.html
(d) Know Your Rights: Spy Apps & Stalkerware (comic) at https://netalert.me/know-your-rights.html
Facilitator: Brenda McPhail
Duration: 2 hours
My project (which is at the formative stage) is inspired by the Oakland, California City Council ordinance establishing rules for the city’s acquisition and use of surveillance equipment, and San Francisco’s facial recognition ban. One of the primary concerns around state uses of surveillance tools such as facial recognition, automatic license plate recognition (ALPR), and IMSI catchers (cell site simulators, “Stingrays”) is often a lack of accountability or transparency regarding their acquisition and use. Some communities, particularly in the US, have fought back against such secrecy via regulatory action. This session at the CLSI will help to crowdsource answers to some basic questions we would need to address to pursue such a strategy in Toronto/Canada, while learning from people who may have experience with similar regulations in their home city or region, or may know of other such initiatives. What principles would we want a Canadian version of such regulation to protect, what differences are there between the Canadian and other contexts that would need to be accounted for, and could we pull together a group interested in carrying such an idea forward?
1. An engaged discussion regarding the benefits, risks, and hurdles for regulation of surveillance technology for public safety purposes, or public implementation more generally.
2. Co-development of a list of principles to be transformed into a concrete draft regulation.
3. Suggestions to inform a campaign plan to promote the draft regulation to local governments (potentially in various communities depending on interest and engagement).
4. Potentially identifying participants willing to work in coalition towards promoting the idea of local regulation of accountable procurement, use, and reporting structures for surveillance technologies in our communities.
20 mins introduction and background
30 minutes facilitated discussion, including pros and cons of a regulatory approach, and sharing experiences.
10 minute bio break
30 minutes small group breakout (if session size permits) to discuss/develop principles to inform regulation and a supportive campaign strategy
20 minutes report back
10 minutes next steps and wrap up
1. Sounding out the feasibility of, and degree of consensus around the utility of, a regulatory approach to local use of surveillance technologies
2. Co-creating a set of core principles to be included in draft regulatory instruments for accountable acquisition, use, and reporting policies for surveillance technologies
3. Identifying a path forward for this work, including, if there is interest, establishing a group to work together to create drafts and promote them in our local communities.
Technology, policy, legal and community advocacy perspectives will all be helpful in this session.
We can build around this sample bill (US model, from ACLU): https://www.aclu.org/other/community-control-over-police-surveillance-militarization-ccopsm-model-bill.
Facilitator: Chris Parsons and Ron Deibert
Duration: 3 hours
Description: This session will continue from discussions that began at the Canadian Cyber Dialogue in December of 2018. The session aims to: (1) engage in a policy discussion concerning a set of threats and challenges facing Canadians; (2) leverage expertise to better advance the discussions concerning Canadian cybersecurity.
Session Objectives: The session is divided into three parts. The first part will focus on the challenges posed by influence operations and disinformation, and engage with questions such as:
(1) What private or public measures can be taken to ensure that efforts aimed at countering the spread of disinformation will not infringe on information access, government accountability, and freedom of expression?
(2) How much can or should Canada engage with major technology companies located outside its jurisdiction?
(3) To what extent do we expect that state-driven defensive efforts such as media literacy will deter future influence operations?
(4) Given that foreign disinformation and IO campaigns often rely on domestic stakeholders to intake and distribute the campaign materials, how should we engage with domestic stakeholders who are ostensibly exercising their Charter-protected rights of speech?
(5) What aspects of traditional military deterrence (fail to) carry over to deterrence of influence operations?
(6) How do we evaluate the impact of influence operations in order to calibrate retaliatory action? Can this be done in advance of IOs, or must calibration take place after the campaign has begun, or concluded?
(7) To what degree is the success of influence operations a supply-side story (ingenuity and sophistication of the attacker) versus a demand-side story (susceptibility or predisposition of the target to accept the attacker’s narrative)?
The second part of the session homes in on the broad challenges facing Canada, as a wealthy country that is highly dependent on digital systems to maintain our way of life, from a supply chain and critical infrastructure perspective. Specific questions that will drive this part of the session may include:
(1) To what extent does Canada’s critical infrastructure require modernization and, moreover, can existing or future infrastructure function in a secure and resilient way in the face of growing nation-state surveillance, espionage, and intrusion efforts?
(2) What models can, or should, Canada look to domestically or internationally to better enable stakeholders to ameliorate challenges associated with digital infrastructure or associated critical infrastructures?
(3) What is the state of preparedness and what are the contemporary and near-future threats? Are current activities sufficient or must more be done?
(4) Should there be a heightened focus on inspection and control of the supply chain, or resilience in the face of potential exploit?
(5) How can infrastructure be secured given its privatized nature? What best practices have we developed, and where can lessons be drawn from our Western allies?
(6) What activities can be, and reasonably could be, undertaken with available stakeholder resources to explain the challenges associated with threats to the supply chain and critical infrastructure? Are additional resources, tactics, or strategies required to adequately express the current state of affairs to the public, industry, policy makers, and politicians?
The third and final part of the session will, to varying extents, evaluate the effectiveness of current transparency and accountability measures, clarify what should be monitored among the measures soon to emerge as C-59 fully comes into force, and identify what best practices should be adopted either from existing organizations such as SIRC or the OCSEC, or from Canada’s international allies. Questions taken up during this session may include:
(1) What metrics should be considered to assess the newly instituted or forthcoming accountability and transparency mechanisms?
(2) What accountability measures are being used, or have been abandoned by, our Western allies, and how should Canada learn from those experiences?
(3) What, if any, deficits exist in the forthcoming accountability and transparency institutions? How should any such deficits be corrected, either by policy or legislation?
(4) What best practices have the OCSEC and SIRC presented, over their tenure, that absolutely must be retained moving forward? Are there any practices which should be avoided? What lessons have been learned that should be translated into NSIRA and the IC?
(5) What activities should non-governmental parties, in industry, civil society, and academia, undertake in the course of assessing and evaluating the government’s new and forthcoming accountability and transparency mechanisms?
Session Outcomes: The discussions that are undertaken during this session will be used in shaping the next Canadian Cyber Dialogue events, which are planned to take place in the fall and early winter.
Useful Skills: Policy, legal, government, national security, or regulatory background
Resources: Short issue briefs, to be circulated ahead of the session
Facilitator: Joana Varon and Pablo Aguilera
Duration: 1.5 hours
It is becoming astonishingly frequent that women, queer, and non-binary people who speak up on social media get targeted with hate, censorship, and other forms of technology-enabled violence. The scenario is even worse for activists for sexual and reproductive rights, and for artists or (trans)feminist activists who use their bodies as battlefields or explore different instances of pleasure as artistic tactics, either to challenge gender roles or to question other aspects of patriarchy. Besides facing censorship from states and hate and other coordinated attacks by sexist individuals, these colleagues are also frequently censored by the very same online platforms that sell their services as promoters of freedom of expression.
This is because their content either gets de-prioritized by the algorithms that run these social media platforms or gets completely erased, or networks are interfered with and filtered to prevent women, queer, and non-binary people from accessing information which, in contexts of violence, can be life-saving. Posts, pages, and even complete artistic portfolios are deleted daily by discriminatory acts of these platforms, based on terms of service and moral standards conceived only according to their business models, not to human rights or other feminist values. The recent changes to Tumblr’s terms of service banning nudity, as well as Facebook’s recurrent censorship of parts of female bodies, are just a few examples of how the space for publishing feminist creative expression on the Internet is shrinking. Furthermore, telecom companies also seem to have been playing a role: recently, our measurements with the OONI observatory have indicated possible censorship, at the DNS level, of websites providing information about abortion in Brazil.
Re-writing biased historical understandings and documenting other approaches to female, queer, trans, and non-binary bodies and pleasure is one of the core challenges of the feminist agenda in the digital realm. The different layers and expressions of surveillance and censorship mentioned above have severe impacts on debates surrounding gender equality and the guarantee of sexual and reproductive rights, among others. This session has the goal of discussing different methods to produce concrete evidence of these practices and to measure their impacts.
- Case lead presentation of the problem by the moderators (10 min)
- Presentation of the goals (5min)
- Collective Mapping of other Practices/Cases of Surveillance and Censorship of feminist discourses (private sector, State actors and by peers) (30min)
- Collective categorization of such practices and ways to document them (state surveillance? algorithm manipulation? restrictions in terms of service? other categories?) (15min)
- Collective brainstorming of the consequences of such practices to the protection of sexual and reproductive rights and other debates pertaining transfeminist agendas (30min)
Broader mapping of cases and actors
Collective brainstorming of ways to document them
Raw mapping of possible consequences of such practices to the promotion of sexual and reproductive rights and other debates pertaining transfeminist agendas
Research skills on gender studies and sexual and reproductive rights; documentation skills on practices of internet censorship and surveillance
Facilitator: Keith McManamen, Cecylia Bocovich
Duration: 1 hour
Given the number of users who use Psiphon and Tor to circumvent Internet censorship, we are in a unique position to observe and respond to censorship events in real time, as they interact with our currently deployed circumvention infrastructure. The technical measurements available from these networks constitute a unique snapshot of Internet censorship, circumvention tool performance, and Internet usage on the scale of ISPs, countries, or regions. Crucially, data in itself, no matter how clean or comprehensive, does not inherently possess explanatory power. Drawing informed conclusions on how the social and political dynamics of information controls affect the network environment requires additional interpretation and contextualization from a constellation of technical and non-technical data sources across a number of stakeholder groups. In this session, we will show case studies where network anomalies were observed in response to censorship events, and the cross-sectoral collaborations used to contextualize them. Additionally, we identify related work, including previous attempts at anomaly detection on circumvention data to detect censorship events. We aim to create a formalized, reproducible methodology that can be applied more widely to country case studies, censorship events, and other instances of network interference. Our goal is to be able to rapidly respond to censorship events by making modifications to our circumvention techniques in an informed way. Putting this data into action can contribute to a more complete picture of the censorship event that occurred and can be applied to formulate more effective responses to information controls by civil society, media organizations, the technical community, advocates, and researchers.
We'll start by presenting the methodology and reviewing case studies of recent major censorship events, and how these have been observed from different vantage points such as OONI, Tor, and Psiphon.
We would like to work towards formalizing the methodology used for this kind of analysis so it can be reproducible and applied to future censorship events and country case studies by other researchers.
Network measurement, circumvention technology, data analysis, social sciences would all be relevant domains
You can see an earlier draft with case examples at: https://thinkmind.org/download.php?articleid=icn_2019_7_10_38005
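One simple, reproducible building block for such a methodology is flagging days when usage deviates sharply from a trailing baseline. The sketch below, a z-score detector over hypothetical daily connection counts, is illustrative only and is not the method Psiphon or Tor actually use.

```python
import statistics

def usage_anomalies(counts, window=7, z_thresh=3.0):
    """Flag days whose count deviates sharply from the trailing window.
    Surges often accompany new blocking events (users seeking circumvention);
    drops may indicate shutdowns. Illustrative sketch only."""
    flagged = []
    for i in range(window, len(counts)):
        prev = counts[i - window:i]
        mu = statistics.mean(prev)
        sigma = statistics.pstdev(prev) or 1.0  # guard against zero variance
        z = (counts[i] - mu) / sigma
        if abs(z) >= z_thresh:
            flagged.append((i, round(z, 1)))
    return flagged

# Hypothetical daily connection counts with a sudden surge on day 7.
daily = [1000, 1020, 990, 1010, 1005, 995, 1015, 5000, 5200, 980]
print(usage_anomalies(daily))  # → [(7, 399.5)]
```

Note that once the anomalous days enter the trailing window, the inflated variance suppresses further alerts; handling that, and separating organic growth from censorship-driven surges, is where the contextualization discussed above comes in.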
Facilitator: Marcus Michaelsen
Duration: 1 hour
Session Description: This session builds on my OTF research project, investigating digital threats against exiled and diaspora activists from Syria, Egypt and Iran. Digital technologies have enabled authoritarian regimes to monitor and repress dissidents outside their territory. The research explores the methods, motivations and capabilities of state actors targeting human rights defenders and journalists beyond borders. It also examines the impact of these threats on the targeted communities and their practices of digital security and resilience.
The session presents the project’s key findings to gather feedback and input before completing my final research report. I am also looking for ideas on how to disseminate findings and make them useful to different key audiences (advocacy, digital security trainers, activists on the ground etc.). Finally, we will discuss possible follow-up questions, for instance for research into the legal/human rights dimensions of transnational repression or deeper investigations of threat perceptions and security behavior among human rights defenders in exiled and diaspora communities.
Useful skills: All perspectives are welcome: tech/digital security expertise, human rights advocacy, journalism, social science research
Resources: Iran's Exiled Activists and the Authoritarian State: https://doi.org/10.1080/14747731.2016.1263078
Transnational Activism, Digital Surveillance and Authoritarian Control in Iran: https://ojs.library.queensu.ca/index.php/surveillance-and-society/article/view/6635
Facilitator: 'Gbenga Sesan
Duration: 1 hour
Over the past 3 years, Paradigm Initiative has been working with other partners to respond to network disruptions and advocate against repeated incidents. I think it's time to be proactive and predict when shutdowns are likely to happen. I have been able to predict a few, from Gambia to Cameroon, using history and certain parameters, but would love to create an effective model around that (similar to what TripIt has done for their "Neighbourhood Safety Score" concept), looking at laws, events (especially elections), what regional leaders do, history of rights violations, political landscape, national infrastructure (exchange points), licensing, and ISP/telco relationships with government. This tool will help reduce the resources wasted on rushed advocacy around network disruptions, resources that could instead be ploughed into strategic advocacy based on a model that predicts interference with accuracy. Also, the tool can be used in pre-shutdown litigation, by showing patterns and parameters that point to the likelihood of interference.
1. To review successful predictions of Internet shutdowns in Gambia, Cameroon, etc, in order to extract conditions that precede shutdowns, such as existing laws, events (especially elections), what regional leaders do, history of rights violations, political landscape, national infrastructure (exchange points), licensing and ISP/telco relationship with government
2. To develop Version 0 of a model for Internet shutdown prediction, to allow for focused advocacy towards prevention
3. To develop a framework and timeline for work towards the possible launch of the prediction tool by March 2020, ahead of a public presentation at the Digital Rights and Inclusion Forum
1. Background discussion on network disruption predictions (15 minutes)
2. Prediction tool idea description (15 minutes)
3. Brainstorming (using break-out groups) on developing Version 0 (45 minutes)
4. Group discussion of break-out ideas (30 minutes)
5. Framework and timeline development (15 minutes)
1. Version 0 of a model for Internet shutdown prediction
2. Framework and timeline for work towards tool development
1. Strategic/Proactive Advocacy
2. Process Mapping
3. Design Thinking
4. Network Interference Research
Proposed model is similar to the Neighbourhood Safety Score tool at https://www.tripit.com/blog/2018/12/neighborhood-safety-scores-and-international-travel-tools.html
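As a discussion aid for what "Version 0" might look like, here is a minimal weighted-score sketch built from the parameters named above. All factor names, weights, and the example profile are hypothetical placeholders for the brainstorming session, not a validated model.

```python
# Illustrative "Version 0" shutdown-risk sketch: a weighted sum over the
# parameters listed in the session description. Weights are arbitrary
# placeholders chosen for illustration; they sum to 1.0.
SHUTDOWN_RISK_FACTORS = {
    "restrictive_laws": 0.15,           # laws enabling interference
    "upcoming_election": 0.25,          # events, especially elections
    "regional_precedent": 0.10,         # what regional leaders do
    "rights_violation_history": 0.15,   # history of rights violations
    "political_instability": 0.10,      # political landscape
    "centralized_infrastructure": 0.10, # few exchange points / chokepoints
    "state_licensing_leverage": 0.15,   # licensing and ISP/telco-government ties
}

def shutdown_risk(country_profile: dict) -> float:
    """Score from 0.0 to 1.0, given per-factor assessments (each 0.0-1.0)."""
    return sum(weight * country_profile.get(factor, 0.0)
               for factor, weight in SHUTDOWN_RISK_FACTORS.items())

# Hypothetical example: imminent election, restrictive laws, centralized
# infrastructure, and a history of violations produce an elevated score.
profile = {
    "restrictive_laws": 1.0,
    "upcoming_election": 1.0,
    "rights_violation_history": 0.8,
    "centralized_infrastructure": 0.9,
}
print(round(shutdown_risk(profile), 3))
```

The break-out groups could refine which factors belong in the model, how each is measured, and whether a simple weighted sum is sufficient or a trained classifier over historical shutdown data is needed.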
Duration: 1 hour
Censorship and propaganda are both practices of information controls. While observing which content is suppressed on the internet is becoming increasingly difficult due to factors such as account limitations, observing which content is promoted and encouraged offers new insight into the information controls system in China.
I am particularly interested in studying both state-led and voluntary cybernationalism. Studies of Chinese cybernationalism focus primarily on internal-facing propaganda on domestic platforms such as Tianya, university forums, and Weibo. With the dominance of WeChat among Chinese users at home and abroad, expressions of nationalism now reach a much bigger audience beyond the PRC. At the same time, there seems to be an increasing presence of cybernationalism on overseas platforms (e.g., Twitter, Facebook, Instagram, Reddit), even though some of them are blocked in China.
Familiarity with the literature on propaganda/nationalism. Programming skills and advice on data collection from online platforms, including WeChat, Twitter, Reddit, and Facebook.
Some notable figures/platforms related to the topic:
@zlj517 Lijian Zhao 赵立坚 https://twitter.com/zlj517
Facilitator: Joss Wright
Duration: 1 hour
Internet shutdowns and blackouts have become increasingly common in recent years, reflecting the growing difficulty of more fine-grained forms of filtering on the wire. Despite this, internet shutdowns and blackouts are not a single phenomenon, but reflect a range of different practices and techniques.
This session will begin to look at the different means used to cause shutdowns and blackouts, as a first step to understanding the tools being used, the spread of the practice, and the detection and analysis of blackouts as used in the real world.
The objective is to enumerate common techniques for causing an internet blackout, as a first step towards developing methods to detect and differentiate blackouts and shutdowns when they are observed.
10m: Definition of the Problem
20m: Discussion of different potential means of causing a blackout.
30m: Discussion of means by which different shutdowns and blackouts can be detected and differentiated.
The session's outcomes should be:
- A proposal for rigorous terminology around 'shutdowns' and 'blackouts' based on existing cases.
- An initial list of technical means by which blackouts and shutdowns can occur
- A set of proposed techniques by which different blackouts and shutdowns can be differentiated.
Network measurement, statistical inference.
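One way the differentiation discussed above might be sketched is as a decision rule over coarse reachability signals. The signal names and the mapping below are illustrative assumptions for discussion, not an operational detector; the session itself is meant to produce the real taxonomy.

```python
# Hypothetical sketch: map coarse measurement signals to a candidate
# blackout mechanism. The categories and decision order are assumptions
# chosen for illustration.
def classify_outage(bgp_routes_visible: bool,
                    packets_answered: bool,
                    dns_resolves: bool) -> str:
    """Return a candidate mechanism label for an observed outage."""
    if not bgp_routes_visible:
        return "routing withdrawal (prefixes removed from BGP)"
    if not packets_answered:
        return "packet filtering (routes present, traffic dropped)"
    if not dns_resolves:
        return "DNS-based disruption (transport reachable, names fail)"
    return "no blackout signature observed"

# A total outage where BGP routes have disappeared:
print(classify_outage(bgp_routes_visible=False,
                      packets_answered=False,
                      dns_resolves=False))
```

In practice the signals would come from vantage points such as BGP route collectors, active probing, and DNS measurement, and a real detector would need to handle partial and regional outages rather than clean booleans.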
Facilitator: Ramakrishnan Sundara Raman, Reethika Ramesh
Duration: 1 hour
Censored Planet is a global censorship observatory that continuously measures network interference. In this session, we will discuss Censored Planet's ability to rapidly focus on censorship events in close to real time. Specifically, we will discuss how we used our data to infer and analyze the recent deployment of MitM interception of TLS connections in and out of Kazakhstan. We have an extensive report (https://censoredplanet.org/kazakhstan) and real-time monitoring of the deployment in Kazakhstan (https://censoredplanet.org/kazakhstan/live), but further work is required to dissect all the characteristics of this network interference event. We will be using data and tools from Censored Planet (https://censoredplanet.org/data) in this session.
We hope to foster discussions on how best this rich dataset could be used in other such contexts. We want to encourage further, continuous analysis of the evolving HTTPS interception attack in Kazakhstan. Since we collect an extensive, biweekly network interference dataset, we want to encourage the use of our data across various disciplines. We will familiarize people with the data and the tools to facilitate this collaboration.
- We will start with an overview of the techniques that Censored Planet uses to detect interference on different layers of the network.
- Then, we will delve into the Kazakhstan MitM interception as a case study and demonstrate how the data pointed us to discovering this event.
- We will provide a description of how this data can be digested to enable further studies.
- Finally, we will end with soliciting feedback and suggestions on the best use for data of this scale.
We hope to foster further work similar to the Kazakhstan case study using Censored Planet data. We also hope to learn how the community approaches such events.
Familiarity with Internet measurement would help but is not strictly necessary.
https://www.censoredplanet.org, Quack (https://benvds.com/papers/quack-security18.pdf, https://censoredplanet.org/projects/quack)
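To illustrate the kind of analysis the Kazakhstan case study involves, here is a hedged sketch that flags certificate records whose issuer matches the publicly reported Kazakh interception root ("Qaznet Trust Network"). The record shape (dicts with "host" and "issuer" strings) is an assumption made for illustration, not Censored Planet's actual data schema.

```python
# Hedged sketch: filter TLS observations for the known Kazakh MitM root.
# The issuer string matches public reporting on the interception CA;
# the record format is a hypothetical simplification.
INTERCEPTION_ISSUER = "Qaznet Trust Network"

def flag_interception(records: list) -> list:
    """Return records whose certificate issuer contains the MitM root name."""
    return [r for r in records
            if INTERCEPTION_ISSUER.lower() in r.get("issuer", "").lower()]

records = [
    {"host": "example.kz", "issuer": "CN=Qaznet Trust Network"},
    {"host": "example.com", "issuer": "CN=DigiCert Global Root CA"},
]
for r in flag_interception(records):
    print(r["host"])
```

A real analysis over the Censored Planet dataset would additionally consider where on the path the interception occurs, which domains are targeted, and how the deployment changes over time.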
OONI is a popular tool for measuring internet censorship, but its own key workflows can be interrupted by censorship. We will share real-life experiences of circumventing such interference and discuss ideas on how to keep OONI working.
Share experiences and new ideas on how to work with OONI if it is itself blocked, and come up with suggestions for the OONI team.
- Brief discussion of OONI and some of the use cases
- Discussion of possible points of censorship affecting OONI
- Examples of this from the frontlines
- Discussion of other approaches and recommendations to the OONI team
Ideas to prepare for a possible block of OONI itself, and ideas to make common workflows more resilient
Basic knowledge of OONI and network blocking
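One generic resilience pattern that could feed into the discussion is endpoint fallback: trying a list of alternatives in order and using the first that responds. The sketch below is not OONI's actual mechanism, and the endpoint URLs are hypothetical placeholders; the probe function is injected so the pattern stays testable without a network.

```python
# Sketch of a fallback pattern for censorship-resilient workflows.
# Endpoint URLs are hypothetical placeholders, not real infrastructure.
def first_reachable(endpoints, probe):
    """Return the first endpoint for which probe(endpoint) succeeds."""
    for url in endpoints:
        try:
            if probe(url):
                return url
        except OSError:
            continue  # treat network errors as "blocked" and try the next
    return None

# Usage with an injected probe (a stub standing in for a real HTTPS
# reachability check):
endpoints = [
    "https://collector.example.org",  # hypothetical primary
    "https://fallback.example.net",   # hypothetical mirror
]
reachable = {"https://fallback.example.net"}
print(first_reachable(endpoints, lambda u: u in reachable))
```

In practice, fallbacks might also include domain fronting or bundled bridge addresses, which is exactly the kind of trade-off the session could weigh.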