1 | From the Ethical Tools Report | Who added this | (Who is coding this) | Author | Title | Year | Price | To pay | Type | Type | Internal comment | Description: this is what you're supposed to be doing in here | Problem: what is the problem they are trying to address? | Ethical theory/idea/definition | Values: which values does their tool express? | Conflict mediation: how do they deal with conflict? Value conflict in particular? | Interaction: expected to be used as a group/individual/mixed/facilitators/self-administered | For whom: who is the imagined audience? | Goal: what is the goal they are trying to achieve? Why would this be used? | When would this be used? | Expected length of engagement | Assessment: self-assessment v. peer-review v. paid assessment | Format | Notes | Relevance | |
2 | 0 | AB | FUS | StudioDott collab with Know Cards | IOT Ideation Cards | 2016 | $149 | Toolkit / Cards | http://studiodott.be/en/2017/01/iot-ideation-op-thingscon-amsterdam/ | The cards are designed with complexity in mind, and that is interesting. However, it is unclear how the whole process makes the products more 'ethical' or 'responsible'. I feel like they are equating 'humane' products with designing with many people for many people, without acknowledging the potential inequalities of expression, access and (systems) knowledge. | These cards are created to construct, explore and synthesise a visual overview of an IoT system concept. This is achieved by linking people, objects and environment cards together with interaction cards. | The messiness of designing an IoT solution | No specific ethical outlook. But their goal is contributing towards more "meaningful connected products". So instead of a tech-driven perspective, they support a more humane and context-driven one. | flexibility, humane, context-driven, inclusivity (allowing more people to be involved in defining IoT products that people want to use in their day-to-day lives. They assume it is possible to define all system components and know which component influences the system at any time. Complexity thinking beyond 'if this then that' paradigm) | No conflict is foreseen, no advice on conflict management. | The cards make sense in experienced teams but can also be used in an open workshop setting. | developers, designers | To design more humane products that take into account complexity and shift the 'tech-driven' perspective of IoT | They can be used both to understand existing products and to develop new ones. It says on their website that a full 'ideation' session would include "you start(ing) the session without a plan or goal and end with at least one (but usually several more) Internet of Things products." | Self-assessment or they can ask the designers of the cards to come help out in the form of a workshop. | By providing a way to ideate internet of things products without primarily focussing on the technological possibility or feasibility, our goal is to contribute towards more meaningful products. | |||||
3 | 1 | AB | AB | StudioDott collab with Futurice | IOT Service Kit | 2015 | free | Games | http://iotservicekit.com/ | The IoT Service Kit is a board game that brings domain experts out of their silos to co-create user-centric IoT experiences. | Communication (and miscommunication) between stakeholders | No specific ethical outlook. But because the kit enforces that you think through different user journeys and data mapping, you may be forced to think outside of your usual perspective on the product and consider repercussions you may not have thought of. | user-centric | They let each stakeholder have a part in the discussion (physically in the pieces of the kit). "The IoT Service Kit tears down the walls between participants so they can iterate rapidly to level everyone up to the same mindset to avoid blocking great ideas." | Group of mixed stakeholders / for ex: UX designer + tech engineer | developers, designers | That "people from different backgrounds and different goals can get along and create cool and realistic IoT services." | Ideation phase, but also to map out already-conceived products | Self-assessment | Physical | Co-create user-centric IoT experiences. | |||||
4 | 2 | AB | AB | Simone Mora | Tiles Toolkit | 2018 | free | Toolkit / Cards | http://tilestoolkit.io/ | Learn and invent for the IoT with no previous knowledge | Product development for the Internet of Things (IoT) is pushed by advances in technology rather than human needs. They want to keep human drives at the center of IoT development by involving end-users in creative ideation and prototyping of novel products. | No specific ethical outlook. But a presumption that people know what is best for themselves so we should just ask them. | user-centric, human needs first not advanced technology first | The workshop includes several stages that would potentially unlock conflict, specifically in the criteria phase: "Participants are then asked to collaboratively reflect and improve their concepts using CRITERIA cards and to prepare an elevator pitch to present their concept to the audience (other participants, mentors, investors)." | Card kit: individual or mixed; Workshop: facilitated; IOT Stickers: prototyping interactions | students | Letting everyday people design for IoT with users / human needs as a primary driving factor | Ideation | Self-assessment and peer-review | Physical | Generate ideas and make great IoT inventions | |||||
5 | 3 | AB | AB | MIT | Moral Machine | 2016 | Website | http://moralmachine.mit.edu/ | The 'design your own scenario' feature is neat | Give your human opinion on how machines should make decisions when faced with moral dilemmas, and contribute to a crowd-sourced assembly and discussion of potential scenarios of moral consequence | Measurable morality. Gain a clearer understanding of how humans perceive machine intelligence making such choices. | Utilitarianism, deontology, consequentialism | Expected to be used as an individual | public | platform for 1) building a crowd-sourced picture of human opinion on how machines should make decisions when faced with moral dilemmas, and 2) crowd-sourcing assembly and discussion of potential scenarios of moral consequence. | Your answers are evaluated against other users of the website in the end-survey | Online | General ethics - not specific to IOT | ||||||||
6 | 4 | AB | FUS | Vi Hart + Nicky Case | Parable of the Polygons | 2014 | free | Games | http://www.gamesforchange.org/game/parable-of-the-polygons/ | Interesting tool for visualising diversity | Game about visualising diversity choices in design | Seemingly harmless design decisions can have harmful effects: segregation in this case | No specific ethical outlook. Focuses on diversity. Perhaps Care? | Diversity | Segregation is seen as 'the conflict' to be solved, and the tool advises that there won't be a moment of zero bias and that reducing bias alone won't solve anything, so when making decisions we need to have a holistic view. | Individual | public | To learn to demand diversity | 10 minutes | self-assessment | Explore how harmless choices can make a harmful world in this ‘playable-post’ | Interesting example but a bit too simplistic. Not all diversity issues are two-dimensional, and it completely overlooks intersectionality effects. | ||||
7 | 5 | AB | FUS | Simply Secure | Trustworthy IOT | 2016 | free | Worksheets | https://github.com/simplysecure/resources/tree/master/Trustworthy_IOT | I found this tool really individualistic | Work with a list of worksheets and cards | self-reflection | No specific ethical outlook, but in general looks/feels consequentialist and individualist | Security, Privacy, Transparency | They seem to really care about the 'personal values' of the individuals and what kind of harms they are worried about. | Not clear - could be individuals or groups | developers, endusers | Develop insights for trustworthy IoT | n/a - the tool is really general and broad | 30 minutes | self-assessment | I found the questions in the worksheet too focused on the single individual - what are their technology nightmares and inspirations etc. | so-so | |||
8 | 6 | AB | FUS | Mixed Reality Laboratory and Horizon Digital Economy Research The University of Nottingham Microsoft Research | Privacy Ideation Cards | 2016 | Toolkit / Cards | https://www.nottingham.ac.uk/research/groups/mixedrealitylab/projects/information-privacy-by-design-cards.aspx | Not clear how the cards can be obtained now that the Horizon Research is over and Lachlan (the main researcher behind the cards) has moved institutions. Most links on the website are broken. | Learn how to apply privacy by design in practice. | Difficulties of navigating through laws for non-lawyers and also designing with privacy in mind | No specific ethical outlook. Focuses on diversity. Perhaps Care? | Privacy | not clear | group or self-administered | developers, designers | learn to implement privacy by design | not clear, but probably at the initial design stage | self-assessment | There are only bits and pieces of information online about these tools - short summaries and blog posts but none of them are very detailed. Now that the project is over, it is unclear how the tools would be sustained or where one can obtain the cards. | very relevant | |||||
9 | 7 | AB/IS | JR | Maheen Sohail | Practice Ethical Design | 2017 | free | Framework | https://medium.muz.li/how-to-practice-ethical-design-d8a6a8dcf4b0 | Value alignment incorporated into the design process. Through a series of activities a team explores their own and their users' values - using a closed list - to see where they fit together, and then tries to incorporate these shared values into their design goal. | Designers want to be ethical but need a methodology to do it properly. | Human Values is the central theme. Incorporates: Friedman's Value Sensitive Design, IDEO's Human-Centered Design and Time Well Spent | It explicitly uses the 12 values in Friedman and Kahn https://vsdesign.org/publications/pdf/friedman03humanvalues.pdf The tool encodes accountability and fairness. Asks participants to adopt a personal lens to spot biases and to express and explore their own values. | Very light on conflict. The methodology requires narrowing down from 12 values to 5, then 2 or 3, and expects conflict, but offers no methodology to deal with it, simply asking workshop participants to "find balance". | The methodology consists of 3 workshops facilitated by a designer with the help of the supporting materials. | designers | The goal is to be able to design products that incorporate human values. The outcome of the process is a set of values and an improved design concept that can be used as a reference to guide further activities. The author recommends putting the values on the wall. | Initial stages: research or ideation | 45 minutes | self-assessment. There is no paid service. | ||||||
10 | 8 | AB/IS | IS | Dorian Peters & Rafael Calvo for the Positive Computing Lab | Tools for Positive Computing | 2014-2017??? | free | Worksheets | http://www.positivecomputing.org/p/were-pleased-to-share-some-of-tools-and.html | It is not clear when these were designed and I can't find publications to go with these, but they do have a book out in a zillion languages. The reference sheet is the only one that is clearly developed in 2014 (there is an extended abstracts reference) | Cards are used for brainstorming determinants of well-being; the poster is used to inspire and remind which determinants to think about in design; the worksheet is used for individuals to think about the goals of their design projects; and the reference sheet is a discussion tool throughout the design process | Designing for well-being requires thinking about what we mean by well-being, and this set of tools handily transcribes what they term "determinants" of well-being, providing a handy operationalization of what designers should think about in design as things that promote well-being. Clearly the idea is that designers need a better definition of well-being grounded in psychological literature, and this is what these tools are supposed to provide. As an aside, these are also deemed ethical determinants because if well-being is supported then it is a good thing. | Self-determination theory - this is the Deci & Ryan thing that basically assumes that people have intrinsic motivations that are positive and need to be supported (very virtuous, they are Catholics). However, SDT has three components that they claim influence intrinsic motivation - three determinants of well-being: autonomy, relatedness and competence. These are very similar to the virtue/capabilities/care framework that we have developed, but from a psychological point of view and very focused on the individual. In this there is an ethical component - violation of these three determinants of well-being is clearly unethical because it undermines well-being. | Cards: individual empowerment with no regard to outside demands and needs - techno-deterministic; Poster: traditional list of six well-being determinants (competence, autonomy, meaning, positive emotions, engagement (flow), relatedness); Reference sheet: reframing for UX - an attempt to shift focus to broader issues than just the interface; Worksheet: layering of concerns for UX in practice. | None - everything is very "positive" - the assumption seems to be that conflict is resolved through focus on the six determinants, which work together and are not in conflict. | Independent internal use - group and individual. The materials are intended to be self-explanatory and come with a minimum of instruction | designers | The goal is to enable development of technologies that support well-being and human flourishing more broadly. They are trying to "inspire" design by bridging across theoretical frameworks and actionable design practices. | Throughout the process, but especially important in ideation and UX processes | ?? varies | self-assessment and peer discussions. Researchers probably do some consulting but this is not clear or evident from the website | There is a lot of commonality here but since they are psychologists, their view on the whole thing is really circumscribed epistemologically | ||||
11 | 9 | AB | IS | Cloud Security Alliance | Futureproofing the Connected World | 2016 | free | Guideline | https://downloads.cloudsecurityalliance.org/assets/research/internet-of-things/future-proofing-the-connected-world.pdf | A 76-page door stopper in PDF form. This is produced by CSA - an American organization that is a certification alliance. They provide certifications and run a global consultancy program by recruiting consulting partners. | Includes step-by-step guidance on how to integrate security into IoT products. There is a checklist for security engineers to follow during the development process | This document is meant to provide developers of IoT devices with an understanding of the security threats faced by their products. We also talk about the tools and processes that can be used to help safeguard against those threats. Although IoT systems are complex, encompassing devices, gateways, mobile applications, appliances, web services, datastores, analytics systems and more, this guidance focuses mainly on the ‘devices’ (e.g., the Things). Their main concern is the lack of security in IoT and the impact of IoT devices on cloud security in general. | Fundamentally concerned with security vulnerabilities and data exposure given increasing network capabilities of other devices that can be used as attack vectors. Full information security approach - individualized, secure lock-down, data minimization logic. | New capabilities are driving the human out of the decision-making loop in many instances, and as we rely on IoT products to do the basic thinking for us, we will need to make sure that those products and their associated services and interconnection points are each developed as securely as possible. The onus of moral decision making with respect to security is on the developers, and this document is here to help. Main value: security | They acknowledge tradeoffs, especially resource limitations, but point to ways that may enable people to still achieve greater security despite these. | Checklist | engineers, management | Better and more secure IoT | Every stage of the process | self-assessment and peer discussions. CSA of course offers consulting services to improve security in every system | ||||||
12 | 10 | * | AB | IS | Markkula Center for Applied Ethics | Making a Difficult Decision | 2015 | free | Web app/Mobile app | https://www.scu.edu/ethics-app/ | Bring a decision you are about to make (this is focused on A decision) and write in all the stakeholders you can think of that might be affected. | Decision making and the ability to consider different modes of evaluating the decision given a variety of ethical frameworks | This is a multiple-theory approach. In particular: Utilitarianism; Moral rights; Fairness and justice; Common good; Virtue. To some extent, aside from utilitarianism, the rest of the approaches are treated here largely from the Aristotelian point of view, but they do address different nuances. This is actually very well constructed to push people away from utilitarianism as the primary ethical decision-making approach. | Support for individual decision-making is paramount - it's a virtue orientation in some ways, but it also does some number-crunching here to demonstrate and weight different ways of thinking (similar to the moral algorithms that Annelie used in later workshops). Carefully reasoned decision making is the goal here and that is valued in itself. | Their mode of addressing this is through the moral algorithm, where they allow you to weight different approaches in the decision negotiation. BUT since every decision is taken by itself, the inherent value conflicts are never acknowledged. | Used by individuals, who are supposed to have a decision to think about - identify the stakeholders, get the facts and then use this app to think through the questions and use the slider to mark the effects of a particular decision (more good/harm; more/less respect for rights; more/less just; increases/decreases common good; more/less virtuous). | decision makers | To help people both recognize the ethical nature of the decisions they are making and to expand the scope of their considerations by giving them different ways of thinking about the same potential outcomes (or considering broader outcomes or effects). | any time you have to make a decision | depends on how long people have to think through each point of view - from a few minutes to perhaps an hour. | self-assessment | Online | The whole thing is interestingly general, which makes it hard to pin down, so I went and looked at the app reviews for the iOS and Android apps. The major criticism is that the whole thing is very generic: it does try to quantify ethics, but there are five sliders for five major theories (a slider per theory) and you are supposed to think about everyone at once - all stakeholders, all outcomes, all potential issues - and then make a determination wholesale. Some people are annoyed by it, but many people are generically happy, with higher ratings (there aren't that many in the four years it's been on the app store) and statements such as "nice!" or "good effort!" | decision-making, multiple ethical perspectives | ||
13 | 11 | * | AB | FUS | Trend Micro | Data Center Attack | 2018 | free | consultancy | Web - video - CYOA | http://datacenterattacks.trendmicro.com/ | Role-playing game: you play the role of a freshly hired data security specialist in a hospital. You make decisions and your decisions have a certain outcome. Generally the game is set up in a way that you're bound to fail multiple times, and the failure has drastic consequences (i.e. doctors cannot operate machines in an emergency situation, etc.). The Trend Micro security specialist then appears on the screen to tell you why you basically need to hire them. | Decision making and the ability to consider different consequences of decisions | Consequentialist/Utilitarian | Security, Privacy | They acknowledge tradeoffs, especially resource limitations, but point to ways that may enable people to still achieve greater security despite these. | Mix. Individuals can self-administer quizzes and read the text on the website; facilitated workshops can also be organised. | engineers, developers, management | More secure data infrastructure | This is not an ethical tool per se. It is a game to demonstrate the importance of hiring security professionals. | Self-assessment | Online | In Data Center Attack: The Game, put yourself in the shoes of a CISO at a hospital to see if you can go back in time to prevent a data center attack from holding critical patient data hostage. You’ll be prompted to make decisions that will impact your security posture. Wrong choices could result in ransomware hijacking your patient data and putting lives at risk. Right choices will show you what happens when DevOps and IT work together, will allow doctors to see patient data, and the hospital will run as expected. See if you have the knowledge it takes to stop a data center attack, and if not, learn what defenses you need to prevent one. | decision making : data | ||
14 | 12 | AB | FUS | Data Privacy Project | Mapping Data Flows | 2015 | free | Worksheets | https://dataprivacyproject.org/ | train library staff to support patron needs for data privacy in libraries | Protecting patron privacy in libraries | No specific ethical outlook. Mainly data privacy focus | Privacy, Security, data protection, digital inclusion | No conflict is foreseen. | Used by individuals at the libraries to learn about data privacy needs of patrons | Libraries protecting their patrons against digital discrimination, profiling, etc. | Throughout the process | varies | self-assessment and facilitation | abstract activity meant to generate conversations and debate | data ethics, knowledge of data flows | |||||
15 | 13 | AB | FUS | Artefact Group | Tarot Cards of Tech | 2018 | $45 | Toolkit / Cards | http://tarotcardsoftech.artefactgroup.com/ | Really nice self-reflection cards for humanity-centred design. Some questions were similar to those we used in our workshops | Gaze into the future of your product and consider the impact of what you create | No single problem/issue, but self-reflection on potential issues that might arise as a result of your product | Not specifically identified, but reads consequentialist with an undertone of deontological. | No mediation - conflicts are suggested to be solved by the individual | Individual | developers, designers | Humanity-Centered Design. The goal of the cards is also to inspire potential clients for future consultancy from the company | Not clear, but probably ideation stage | 30 min | self-assessment | The cards are super cool and very well designed, but the problem is the individual still does not have a clear idea what to do after finding out about her 'tension' spots and shortcomings. | very relevant | ||||
16 | 14 | AB | FUS | Andrew Lovett-Barron | Decay of Digital Things | 2014 | Toolkit / Cards | http://cards.decay.io/ | help brainstorm scenarios / projects http://andrewlb.com/2016/03/e-180-the-decay-of-digital-things-the-d-school/ | started life as a series of essays exploring the role that time and decay play in networked objects. What happens when the networks and businesses that support connected devices shut their doors? How do we design for both our users’ death, and the death of the system? | Designing for death (both of users and businesses) | No specific ethical outlook, but concerned with starting a conversation on digital decay | collaboration, co-creation, speculative design | No Conflict is foreseen. | Speculative design | Design with decay of digital things/persons in mind | artistic and speculative exploration. No specific use. | This is not an ethical tool per se, but a series of essays and installations to explore the topic of digital decay | Physical | |||||||
17 | 15 | AB | FUS | Humane By Design | Principles | 2019 | free | Posters | https://humanebydesign.com/ | Tangentially relevant | Find out about values and what they mean for the digital world. | Designing ethically humane digital products | Deontological ethics | empowering, designing for finite resources, inclusive, respectful, thoughtful, transparent | No conflict is foreseen. | not clear - could be individuals or groups | designers | design humane products | ||||||||
18 | 16 | AB | FUS | List of tools | List of tools | 2019 | free | tools paid | Toolkit / Cards | https://ethical.net/resources/?resource-category=tools | Not many here for now: but Website Carbon, Plnktn, Disconnect and Referenda are examples | Mainly sustainability concerns, but also humane products | No specific ethical outlook. | Sustainability, human needs first | No conflict is foreseen | design sustainable and humane products | ||||||||||
19 | 17 | AB | FUS | Amber Case | Calm Scorecard | 2018 | free | Q+A | https://medium.com/@caseorganic/is-your-product-designed-to-be-calm-cdde5039cca5 | Tangentially relevant | Assess 'calmness' of products: seamless integration to person's daily life | No specific ethical outlook. | seamless, unobtrusive and integrate with person's everyday life and habits, human-centred | No conflict is foreseen. | individual | designers | design humane products | throughout the process | 30 min | self-assessment | It feels like a Cosmo test. | |||||
20 | 18 | AB | FUS | Projects By If | New Digital Rights | 2016 | free | Digital mockups | https://newdigitalrights.projectsbyif.com/ | Tangentially relevant | Speculative design- what would digital rights look like in the future? | Designing with new rights in mind (i.e. GDPR) | No specific ethical outlook | privacy, security, data minimisation, transparency, data portability, protecting digital identity | No conflict is foreseen | individual | designers | privacy-by-design | artistic and speculative exploration. No specific use. | not a specific tool per se - but speculative design to generate conversation and discussion | ||||||
21 | 19 | AB | IS | Tactical Tech + Mozilla | Data Detox | 2016 | free | Website | https://datadetox.myshadow.org/en/home | User-oriented not developer oriented | this is intended for users to consider their data footprint and to manage it. I don't think this falls within our ethical tools purview | endusers | self-assessment | online and can print | irrelevant | |||||||||||
22 | 20 | AB | IS | DOWSE | DOWSE | 2016 | free | Interactive Object | http://dowse.equipment/ | Tools that make you question how you are designing an IOT device | Download the software, stick it onto a Raspberry Pi or the like, run it on your home network (or internal network of any kind). | Current IoT devices lack an OFF button for the network connectivity - to disconnect them from the Internet when you want to. So if you have smart objects, they must connect to work - Dowse allows you to manage that to some extent. Makes visible home network traffic and prevents your ISP from being able to monitor what kinds of devices you have on your network. | virtue ethics (enlightenment) - you ought to know and to have personal control over network traffic. | Transparency (to the end-user), privacy, control, data protection, dignity (the idea of a curtain or a door), safety & security | They acknowledge that in communal living situations making internal network traffic visible may lead to invasions of privacy, but basically say: when you choose to live with people you give up some privacy, figure it out yourselves. | Individual, but group workshops are available. Build a small computer out of a Raspberry Pi, put the software on the computer, integrate with existing tech in the home - expectation of significant technical expertise | endusers | trying to make it possible for people to see and control their personal network traffic or any network traffic on a network that they administer. | Post-hoc protection after other IoT devices are deployed | however long it takes you to build and deploy the damn thing :) very hands on | self or peer - own network oriented | Physical | This one is difficult because it is not like an ethics tool really - it is for end-users but it does have very clear goals | sort of | ||
23 | 21 | AB | FUS | Bjorn Karmann + Tor Knudsen | Project Alias | 2018 | Interactive Object | http://bjoernkarmann.dk/project_alias | Tools that make you question how you are designing an IOT device | Prevent your home IoT from spying on you | IoT devices listen in without you knowing. Alias protects you against it. | No specific ethical outlook, but generally hacker-maker ethics | Privacy, hacker-maker | No conflict is foreseen | individual | privacy-by-design, maker | at all times when an end-user uses home IoT | Physical | ||||||||
24 | 22 | AB | FUS | Mozilla | Hacker cards | 2017 | Website | https://thimbleprojects.org/mozillalearning/308795/ | Tools that make you question how you are designing an IOT device. As the user, you understand better how your data is being used | play a card game against a hacker to assess your online behaviour | Online security and privacy for safety | No specific ethical outlook. | Privacy, Security, Digital Safety | No conflict is foreseen | individual | endusers | online privacy and security | self-assessment | online | ||||||
25 | 23 | AB | FUS | Open IOT Studio | Privacy Machines | 2016 | free | Concept | https://github.com/openiotstudio/privacy-machines | Makes you question how you are designing an IOT device | There are three privacy machines: 1) the way-back machine brings you back in time to a period by dimming, filtering or disconnecting communication channels, 2) the teleport machine modifies the user's experience by switching their IP address to other countries, 3) the ghost machine creates fake energy consumption profiles to confuse commercial tracking algorithms | No specific ethical outlook. | Privacy, security, hacker-maker | No conflict is foreseen | individual | endusers | privacy-by-design, maker | prototype | ||||||||
26 | 24 | AB | IS | Peter Bihr | Privacy Dimmer | 2016 | free | Concept | http://thegoodhome.org/projects/privacy-dimmer/ | This seems end-user oriented conceptual exercise on how you might design something more privacy sensitive. The "us" here refers to end-users and home residents NOT designers. I do not think this is relevant | end-users ONLY | |||||||||||||||
27 | 25 | AB | FUS | Tega Brain | The New Organs | 2018 | Website | https://neworgans.net/ | Makes you question how you are designing an IOT device | Assess how you're targeted in surveillance capitalism | Escaping surveillance capitalism | No specific ethical outlook. | Privacy | No conflict is foreseen | individual | endusers | privacy-by-design | Online | This one is not an ethics tool per se - but more like people sharing their surveillance stories online. | ||||||
28 | 26 | * | FUS | JR | Utrecht University Data School collab with data analysts from the City of Utrecht | Data Ethics Decision Aid Utrecht | 2017 | free | workshop | Toolkit / Cards | https://bit.ly/2VlMTuB | We can interview the authors if we wish. Javier has met them and talked about VIRT-EU. | The toolkit centres around a questionnaire that helps brainstorm potential problems and issues with data in projects. The questionnaire is meant to be used in a structured workshop and the toolkit includes auxiliary materials. | The core problem DEDA tries to solve is the discovery of ethical issues in data projects. The secondary stated objectives are to educate and increase accountability by documenting issues and the discussions around them, but there are no clear mechanisms to work on the potential solutions. | The tool is very explicit in the final series of questions: Which outcome is the best for the most involved subjects, the city and its residents? (utilitarianism) What would a person you want to be do in this situation? (virtue ethics) Does your approach respect the autonomy of all subjects who are involved? (Kantianism) What are problems particular to this project? (moral particularism) | The tool has a list of explicit values to be "respected": Freedom of choice • Freedom of speech • Mutual respect • Trust • Diversity • Creativity • Peace and the good life. The tool also embodies other implicit values: transparency, in helping uncover the inner workings of the projects; fairness, in a well-developed section on bias. There is some participation in doing workshops, and it has some questions, but it is not a tool to be used externally. | They appoint a team member as devil's advocate to promote some conflict in the discussions, but the tool does not have any way to deal with conflicts, particularly of values. | It is designed to be used in a workshop of up to 25 people with a trained facilitator. | policymakers, data analysts | The tool would allow people working on a project to explore the ethical issues together. The stated outcomes: "After the workshop you will: Know about ethical issues in your everyday data projects. Know how to constructively solve practical cases with the help of ethical theory. Understand the societal relevance of data ethics. Start preparing for the General Data Protection Regulation of the EU that will be enforced in 2018." They would not get specific solutions. | It is not completely clear, but it seems that it would make sense to do it in a "second half" of a project. There are no discovery or creativity tools; it is a series of questions about what is in place or has been considered. Too early, it would be difficult to answer the questions. | The workshop is expected to last 3 hours, but it seems that you would need to do some preparation, and the facilitators have to be trained. | The model is a paid facilitated workshop. The basic questionnaire is available online, but not the full workshop manual. There is a booklet of case studies. There is no assessment as such but an exploratory process. The participants can use the tool to partially self-assess for GDPR. There is no peer review. | In the report I wrote: The questions are grouped in various areas, but the structure for these is not apparent. For example, the opening questions on algorithms and explainability feel too abrupt coming before some basic data mapping and setting the sources. Policy questions on access, reuse, responsibilities, bias and transparency sit next to a section on visualisation of the data. It is possible that some of these questionnaires try to cover too broad a range of use cases, and analysts and policy makers may require different tools. The privacy questions are too general to be truly useful, e.g. “Does the dataset allow insight into the personal communication of citizens?”. Many developers may answer no, thinking that they do not monitor text or calls, but under the new e-privacy legislation in Europe, access to sensor data in devices may well be classed as such. The questionnaire also makes the incorrect claim that Privacy Impact Assessments are mandatory “when you work with personal data”, when this is only the case when there is a high risk to the rights of data subjects. | high | |
29 | 27 | * | FUS | JR | Jet Gispen (Delft Technical University) | Delft TU Ethics for Designers Toolkit | 2017 | free | Toolkit / Cards | https://bit.ly/2UfSCWq | It is actually quite good | The toolkit provides 6 completely different tools for distinct activities. There is: 1 a card game for aligning values with design ideas, 2 a large printout worksheet for uncovering assumptions in existing designs, 3 a canvas for stakeholder value discovery and alignment, 4 a canvas for ideation to discover unethical situations, 5 a canvas to learn about some ethical approaches, 6 a canvas for ethical negotiation | Developing ethical skills in designers through solving practical problems. | The tools are very eclectic, but one tool explicitly educates about virtue ethics, consequentialism and deontology. | Responsibility and accountability (ethical disclaimer), transparency (description). The Moral Values game includes cards for a long list of values, including tranquility, curiosity and pleasure. | The Contract canvas is designed to negotiate conflict. There is no overarching principle. | Each tool is designed for a self-organised workshop setting. | designers | Each tool is very clear and explicit on outcomes, but the overall theme is developing skills. | Each tool has an indicative stage: most are for framing and validating, but also envisioning and realising. | Each takes between 30 and 60 minutes. | self-assessment | Physical | In the report I wrote: The templates include simple forms, such as an ethical disclaimer to foresee unethical situations and responsibilities, and more complex ones such as a drafting tool for ethics contracts or a chart to compare different ethics approaches (virtue, deontology and consequentialism). The toolkit itself is presented in a value-free and pragmatic manner with little discussion as to why those skills are important. One interesting tool is the Moral Agent, which is a game with cards around a challenge to make an ethical design. Players get cards with moral values, and then have to write down related ideas without revealing their encoded moral value, e.g. Inclusivity. These ideas are then auctioned, and points allocated depending on the moral values. | ||
30 | 28 | * | FUS | FUS | Tech for Good Global | Tech for Good | 2009 | free | Principles | https://bit.ly/2FMmRtS | Much of tech is for mindless growth; this one is for building responsible tech with purpose. | Mix of consequentialism and deontological ethics. | affordable, accessible, usable, addresses a real human need, responsibility, agency, inclusivity | No conflict is foreseen | developers, designers | technology with social purpose | online | |||||||||
31 | 29 | * | FUS | FUS | Ben Zevenbergen | Networked Systems Ethics | 2017 | free | Guideline | https://bit.ly/2TW8CHM | It is like a wiki page for ethics | maximise benefit, reduce harm | Developing ethical skills in stakeholders | Consequentialist/Utilitarian | security, privacy, human welfare, freedom from bias, universal usability, trust, autonomy, informed consent, accountability, courtesy, identity, calmness, environmental sustainability. | Conflict is foreseen. Discussion and mediation are suggested. | Both Individual and Mixed | developers | responsible tech | throughout the process | self-assessment | Online | ||||
32 | 30 | * | FUS | FUS | JustPeace Labs | Ethical Guidelines for PeaceTech | 2017 | free | Guideline | https://bit.ly/2FZAUO4 | Ethical guidelines for post-conflict countries | Using ICT in post-conflict zones | Equipping PeaceTech practitioners with ethical guidelines | Consequentialist/Utilitarian | Privacy, security, data protection, trust, reliability, inclusivity, respect, non-discrimination | No conflict is foreseen | Designed for a self-organised workshop setting. | PeaceTech practitioners | It is meant to be a practical tool that provides guidance on questions and issues to consider, as well as valuable resources for diving deeper into particular issues. | throughout the process | self-assessment | |||||
33 | 31 | * | FUS | FUS | Digital Analytics Association | The Web Analyst's Code of Ethics | free | Code of Practice | https://bit.ly/2TW7e8i | A code of practice to which you add your name as a pledge; 400+ supporters so far | read and sign | protection of digital data | No specific ethical outlook. | privacy, transparency, consumer control, education of clients and users of potential risks, accountability | No conflict is foreseen | individual | web analysts | education of web analysts | self-assessment | online | ||||||
34 | 32 | * | FUS | FUS | The British Computer Society | DIODE Ethical Technology Assessment | 2011 | paid | Technology Assessment | https://bit.ly/2UyHQJS | It is behind a paywall - I do not know how much it costs or what it involves. | It is designed to help diverse organisations and individuals conduct ethical assessments of new and emerging technologies. | training and guidance through a practical meta-methodology | No specific ethical outlook. | not clear | |||||||||||
35 | 33 | * | FUS | FUS | SATORI Project | SATORI Framework for Ethical Impact Assessment | 2017 | free | Framework | https://bit.ly/2K1jXGW | The report is very good to base our report on. | It is designed to give a background of why ethical assessment is important in the EU and what it entails for specific industries - research, business, tech, etc. | How to conduct ethical impact assessment | Mix of consequentialism and deontological ethics. | • Objectivity & impartiality • Truthfulness & transparency • Honesty & openness • Respect & fairness • Conformity to regulation, guidelines and good practices • Integrity in international cooperation • Social responsibility • Honesty & integrity • Accuracy & rigour • Holding paramount safety, health and welfare of the public • Objectivity, impartiality and verifiability • Transparency & fairness • Promoting collaboration • Promoting engagement with the public and social responsibility • Continuing learning and professional development • Conformity to regulations and good practices | Conflict of interest is foreseen - but the recommendation is just to 'avoid it' | individual | developers, designers, engineers | establish ethical assessment throughout all R&I | throughout the process | n/a | self-assessment but other kinds of assessments are also suggested (e.g. civil society, research ethics boards, etc.) | This one feels more like for ethical assessment of EU Research. | |||
36 | 34 | * | FUS | FUS | Uppsala University | EthXpert | 2011 | free | Computer Aided Tools | https://bit.ly/2YH7vPU | It is a basic piece of software where you enter a problem and then work through decisions | Coding values into the dev process | Consequentialist/Utilitarian | It does not specify values per se; rather it provides beta-version software in which coders can find ways to 'code in' their values. | Conflict is foreseen between stakeholder interests and "target" stakeholders; the decision is left to developers to code in their interests. | Individual | developers | code ethical decisions | self-assessment | code | ||||||
37 | 35 | * | FUS | FUS | BAE Systems | BAE Systems | 2017 | free | consultancy | Scenarios | https://bit.ly/2FZgy7p | Managers look through several scenarios and organise workshops around them. | How to conduct ethical business | No specific ethical outlook. Generally consequentialist/utilitarian | data protection, workplace security, workplace safety, inclusivity, respect, business security, business sustainability | Conflict is foreseen and the scenarios include discussion questions on potential moments of conflict. However, the sessions are led by managers and the power implications of this are not taken into consideration. There is a scenario on retaliating against managers and receiving poor performance evaluations in return, but no reflection on how the same dynamics might play out in the discussion sessions too. | Mixed groups, facilitation | everyone in the company | ethical business | throughout the process | self-assessment | Physical | It is unclear what "ethical business" means, and the stress seems to be on business rather than ethics. | |||
38 | 36 | * | FUS | FUS | Engineering Council | Statement of Ethical Principles | 2017 | free | Principles | https://bit.ly/2UhPp8u | Pledge to the Principles of the Engineering Council | establish codes of conduct for engineering professionals | Deontological ethics, Utilitarian, and Care | Honesty, integrity, respect for life, respect for law, sustainability, public good, care, diversity, inclusion | Conflict of interest is foreseen - but no specific recommendations | Individual & Institutional | engineers | engineering to be seen and recognised by the public as a trusted and ethical profession. | online | |||||||
39 | 37 | * | FUS | FUS | Royal Academy of Engineering | Engineering Ethics in Practice | 2011 | free | Guideline | https://bit.ly/2WFM67O | Pledge to the Principles of the Engineering Council | establish codes of conduct for engineering professionals | Deontological ethics, Utilitarian, and Care | • Accuracy and rigour • Honesty and integrity • Respect for life, law and the public good, and • Responsible leadership: listening and informing | Conflict of interest is foreseen - but no specific recommendations | Individual & Institutional | engineers | engineering to be seen and recognised by the public as a trusted and ethical profession. | online | |||||||
40 | 38 | * | FUS | FUS | Centre for Democracy and Technology | CDT DDTOOL for Algorithm Design | 2017 | free | Visual Aid | https://cdt.info/ddtool/ | To build ethical algorithms | No specific problem, general guidance questions | No specific ethical outlook. | security, privacy, inclusivity, fairness, trust, accountability | No conflict is foreseen | Individual | engineers, designers, developers, testers | to build ethical algorithms | throughout the process | self-assessment | online | It is more of a tick-box kind of list of questions you are navigated through. | ||||
41 | 39 | * | FUS | FUS | ADAPT Centre & Trinity College Dublin | Ethics Canvas | 2017 | free | Visual Aid | https://ethicscanvas.org/ | Brainstorm in a group about the ethical implications of a project. | No specific problem, individuals are expected to express their own. | Consequentialist/Utilitarian | privacy, inclusivity, responsibility | Conflict is not specifically foreseen, but relations are taken into account | Mixed groups | managers, developers | structure ideas about the ethical implications of the project | during a project | self-assessment | Physical | No facilitation is provided. Individuals are expected to set their values and concerns and come up with solutions for the best possible outcome. | ||||
42 | 40 | * | FUS | FUS | Open Data Institute | ODI Data Ethics Canvas | 2017 | free | consultancy, training | Visual Aid | https://bit.ly/2uM2tE9 | Provides a framework to develop ethical guidance that suits any context, whatever the project’s size or scope | ensure data ethics | Consequentialist/Utilitarian | openness, transparency, trust, responsibility, inclusivity, fairness, security, privacy | No conflict is foreseen | Individuals & groups | companies | Identify and manage ethical issues – at the start of a project that uses data, and throughout. | throughout the process | self-assessment | Physical | Paid facilitation and consultancy are offered. | |||
43 | 41 | * | FUS | FUS | Austrian digital rights activists | Data dealer | 2013 | free | Games | https://bit.ly/2FN1RTQ | An online game about collecting and selling personal data - full of irony and gleeful sarcasm. It's a browser/serious/edu/impact game about digital culture and surveillance and aims to raise awareness about online privacy in a new and fun way. | Escaping surveillance capitalism | No specific ethical outlook | privacy, data protection | Conflict is foreseen between stakeholders, citizens, and governmental agencies. Advice is not clear. | Individual | companies | Spread awareness about how some companies gather personal data and why others want to purchase it | online game | |||||||
44 | 42 | * | FUS | FUS | Aral Balkan | INDIE ethical design | 2015 | free | Icons and Badges | https://bit.ly/1kz1K1F | Pledge to the Ethical Design Manifesto and Demonstrate it with the logo | Building responsible tech | deontological ethics | Decentralised, private, open, interoperable, accessible, secure, sustainable, functional, convenient, reliable | No conflict is foreseen | Individual | designers | pledge to the principles of ethical design | self-assessment | online | ||||||
45 | 43 | * | FUS | FUS | Usman Haque, Alexandra Deschamps Sonsino | Better IoT (IoTMark) | 2017 | free | Trustmark | https://bit.ly/2KeYoTE | Making good design actionable | to build a free, accessible, open assessment tool aimed at startups and SMEs to help them design better connected products (internet of things). | Consequentialist/Utilitarian and deontological ethics | privacy, openness, interoperability, lifecycle, permissions, transparency, data governance, security. | No conflict is foreseen | Individual | developers, designers | Making good design actionable | self-assessment | online | ||||||
46 | 44 | FUS | FUS | Peter Bihr and Mozilla | Trustabletech | 2018 | free | Trustmark | https://bit.ly/2YXsi1Y | Certification for trustable tech | Building responsible tech | Consequentialist/Utilitarian and deontological ethics | privacy, transparency, security, repairability, openness | No conflict is foreseen | companies | companies | certifying trustable tech companies | when a company has a working product | 30 min. | self-assessment | online | |||||
47 | 45 | FUS | FUS | Embedded Microprocessor Benchmark Consortium | IoTMark-BLE | 2018 | paid | Trustmark | https://bit.ly/2IhjcXE | develops performance benchmarks for the hardware and software used in autonomous driving, mobile imaging, the Internet of Things, mobile devices, and many other applications. | performance evaluation of IoT devices | No specific ethical outlook | security, energy consumption, performance | No conflict is foreseen | companies | companies | developing clearly defined standards for measuring the performance and energy efficiency of embedded processor implementations, from IoT edge nodes to next-generation advanced driver-assistance systems. | paid assessment | by application | |||||||
48 | 46 | * | FUS | FUS | The Free Software Foundation | FSF Respects Your Freedom | 2012 | free | Trustmark | https://bit.ly/2FMPfvN | hardware certification | hardware that respects user freedom | Deontological ethics | openness, transparency, trust, responsibility, inclusivity, fairness, security, privacy | No conflict is foreseen | companies | companies | hardware that ensures user freedom | when a company has a working product | certification by an organization | by application | |||||
49 | 47 | * | FUS | FUS | B Lab | B Corp Impact Assessment | 2019 | free | Certification | https://bit.ly/2ybL0JA | companies to measure their impact on workers, community, environment, and customers. | to build ethical businesses | No specific ethical outlook | sustainability, inclusivity | No conflict is foreseen | companies | companies | impact assessment for companies | 90 minutes for short report, 3 hours for long report | self-assessment | online | The questions are more about corporate social responsibility than ethical tech. | ||||
50 | 48 | * | FUS | FUS | Profit Through Ethics Ltd | Responsible 100 | 2018 | free | Quantitative Assessment | https://bit.ly/2OmtkmO | The Full Assessment is an in-depth examination of an organisation's policies and practices on the wide range of social, environmental and governance issues that Responsible 100 covers. | environmental crisis | Deontological ethics | sustainability, inclusivity | No conflict is foreseen | companies | companies | lowering environmental impact of companies and rebooting capitalism | self-assessment and certification by the Responsible 100 | online | ||||||
51 | 49 | * | FUS | FUS | Platoniq | Moving Communities Canvas | 2017 | free | Worksheets | https://bit.ly/2UBPbcx | To support Idea Makers in transforming their idea into a viable and effective project. | Consequentialist/Utilitarian | collectivity, sustainability, inclusivity | No conflict is foreseen | Mixed groups | developers | More inclusive and communitarian design for agile development | self-assessment | Physical | |||||||
52 | 50 | * | FUS | FUS | Never Again Initiative | Never Again Tech | 2016 | free | Pledge | https://bit.ly/2hsmVot | A pledge that refuses to build a database of people based on their Constitutionally-protected religious beliefs, and to facilitate mass deportations of people the government believes to be undesirable. | collection and retention of data that would facilitate ethnic or religious targeting. | Deontological ethics | inclusivity, anti-discrimination, anti-bias | Individual | engineers, designers, business executives | To refuse to build a database of people based on their Constitutionally-protected religious beliefs and refuse to facilitate mass deportations of people the government believes to be undesirable. | |||||||||
53 | 51 | * | FUS | FUS | ACM | ACM/IEEE-CS Software Engineering Code | 2018 | free | Code of Practice | https://bit.ly/2IkZmvu | Code of ethics for technology professionals | To build ethical tech | Deontological ethics | human welfare, avoid harm, honesty, trust, fairness, anti-discrimination, respectfulness, privacy, confidentiality, public good, care. | Individual | engineers | pledge to principles of ethical tech development | self-assessment | online | |||||||
54 | 52 | * | FUS | FUS | ADS | Aerospace Defence Security Ethics Toolkit | 2015 | free | Scenarios | https://bit.ly/2vfcxpt | Focuses on compliance with the UK Bribery Act 2010. Interesting, but I am not sure it is directly related. It says it is a toolkit but it is more like principles and worksheets. | to prevent corruption and bribery in the ADS sector | corruption and bribery | Deontological ethics | transparency, trust, compliance, business sustainability | companies | companies | work through scenarios to identify corruption early | self-assessment | Physical | ||||||
55 | 53 | * | FUS | FUS | The American Society of Mechanical Engineers | Code of Ethics of Engineers | 2012 | free | Code of Ethics | https://bit.ly/2XEofpU | code of ethics for members of the ASME | uphold and advance the integrity, honor and dignity of the engineering profession | Deontological ethics | human welfare, integrity, honour, dignity, safety, public good, environmental sustainability, anti-corruption | Individual | engineers | pledge to code of ethics | self-assessment | online | |||||||
56 | 54 | * | FUS | FUS | The Critical Engineering Working Group Berlin | Critical Engineer Manifesto | 2011 | free | Manifesto | https://bit.ly/1IbQJxN | manifesto to establish critical engineering | to raise awareness of the critical role of engineering in our times | No specific ethical outlook | not clear | Individual | engineers | pledge to the principles of the manifesto | self-assessment | online | |||||||
57 | 55 | * | FUS | FUS | ALLEA - All European Academies (EU FP7) | RRI Self-Reflection Tool | 2017 | free | Toolkit / Cards | https://bit.ly/2viTtXu | This one was developed by a large EU funded consortium, so would be good to include in our analysis. | Help practitioners reflect offline on RRI principles by providing questions organised according to RRI Policy Agendas: Ethics, Gender Equality, Governance, Open Access, Public Engagement and Science Education. | establishing research integrity | Consequentialist/Utilitarian | integrity, public good, sustainability, responsibility, openness | Individual | everyone involved in research | To facilitate ethical reflection | self-assessment | online | ||||||
58 | 56 | * | FUS | FUS | Res-AGorA Project (EU FP7) | Res-AGorA Responsibility Navigator | 2016 | free | facilitation | Co-Construction Method | https://bit.ly/2PmLJww | This one was developed by a large EU funded consortium, so would be good to include in our analysis. | supports the identification, development and implementation of measures and procedures that can transform research and innovation in such a way that responsibility becomes an institutionalised ambition. | re-inventing more responsive and responsible ways of dealing with current research and innovation programs (e.g. ICT in Medicare, synthetic biology, GMO, shale gas fracturing, nuclear energy) and of revising established research and innovation agendas which still actively shape our commons (e.g. energy renovation, extraction of fossil energies). | No specific ethical outlook | not clear | No conflict is foreseen | Mixed groups | everyone involved in research | To facilitate ethical reflection | 3 days | self-assessment and facilitated assessment | Physical | |||
59 | 57 | * | FUS | FUS | British Standards Institute BS8611 | British Standards Institute BS8611: Guide to the ethical design and application of robots and robotic systems | 2016 | £176 | Guideline | https://bit.ly/2KU8461 | downloading the full file is £176 | BS 8611 gives guidelines for the identification of potential ethical harm arising from the growing number of robots and autonomous systems being used in everyday life. The standard also provides additional guidelines to eliminate or reduce the risks associated with these ethical hazards to an acceptable level. The standard covers safe design, protective measures and information for the design and application of robots. | ethical and physical hazards associated with robots and robotic systems | Consequentialist/Utilitarian | no harm | No conflict is foreseen | Companies | Companies, developers | reduce ethical and physical harm from robots and robotic technologies | Physical | ||||||
60 | 58 | * | FUS | FUS | Norton Rose Fulbright | Norton Rose Fulbright AI Ethics Toolkit | paid | Toolkit / Cards | https://bit.ly/2Dv4jhh | behind a paywall, no detailed information available. | ethical risks of AI | fairness, data, transparency, accountability | Companies | companies | ethical AI | Physical | ||||||||||
61 | 59 | * | FUS | FUS | Future of Life Institute | The Asilomar Principles | 2017 | free | Principles | https://bit.ly/2jPLY2V | Principles for AI Research and Development | ethical risks of AI | Deontological ethics | safety, transparency, responsibility, privacy, shared prosperity, liberty, public good, respect for human values, against lethal armament | No conflict is foreseen | Individuals | everyone in AI research and development | ethical AI | online | |||||||
62 | 60 | * | FUS | FUS | Doug Wallace and Jon Pekel | Ten Step Method | 2006 | free | Checklist | https://bit.ly/2ICavII | A checklist for ethical reflection | decision-making in today's highly competitive, rapidly changing, global business environment | Consequentialist/Utilitarian and deontological ethics | not clear | Conflict is foreseen, but is assumed to be resolved through discussion with all stakeholders | Companies | companies | ensuring ethics throughout product development | self-assessment | Physical | Ethics is used as a general buzzword, it is not explained or substantiated. | |||||
63 | 61 | * | FUS | FUS | Ethics and Policy Integration Centre | Organization and Business Ethics Toolkit | free | Toolkit / Cards | https://bit.ly/2XCOGMl | A research tool for conscientious creatives | ensure business ethics | Consequentialist/Utilitarian and deontological ethics | business security, trust, compliance, efficiency, effectiveness, responsibility | Conflict is foreseen, but no clear guidance is given | Individuals, companies | companies | ensuring compliance and business ethics | online | ||||||||
64 | 62 | * | FUS | FUS | Airbnb and NewsDeeply | Airbnb Another Lens | 2017 | free | Toolkit / Cards | https://bit.ly/2ukFYTT | Designing for everyone without understanding the full picture; developed by Airbnb | A research tool for conscientious creatives | identify, examine and reflect on our biases | Consequentialist/Utilitarian and deontological ethics | anti-discrimination, anti-bias, inclusivity | No conflict is foreseen | Individuals | designers, creatives | to rise above bias | self-assessment | online | |||||
65 | 63 | * | FUS | FUS | Microsoft | Microsoft Inclusive Design Toolkit | 2018 | free | Toolkit / Cards | https://bit.ly/1NuHOsk | a methodology, born out of digital environments, that enables and draws on the full range of human diversity. Most importantly, this means including and learning from people with a range of perspectives. | ensuring inclusivity and diversity in design | Consequentialist/Utilitarian and deontological ethics | anti-discrimination, anti-bias, inclusivity | No conflict is foreseen | Individuals, mixed groups | designers, engineers, developers | to rise above bias, ensure inclusivity in design | self-assessment | Physical | ||||||
66 | 64 | * | FUS | FUS | ThingsCon | Trustmark | 2018 | free | Trustmark | https://bit.ly/2GCU50t | for building consumer trust in the IoT | identifying IoT products that are responsible, trustable and ethical | Consequentialist/Utilitarian and deontological ethics | agency, trust, responsibility, security, openness, repairability, consumer protection | No conflict is foreseen | Companies | Companies | Verifying and promoting consumer trust | self-assessment | Physical | ||||||
67 | 65 | * | FUS | FUS | International consultation, international agencies, NGOs and governments | Principles for Digital Development | 2015 | free | Principles | https://bit.ly/2rDbRLo | Principles for digital development for international development organisations | establishing common standards for digital development for international development organisations | Deontological ethics | user-centred design, sensitivity, scalability, sustainability, openness, privacy, security, reusability, collaboration | No conflict is foreseen | Individuals, mixed groups | International development agencies | ensuring common standards in digital development in international development organisations | self-assessment | online | ||||||
68 | 66 | * | FUS | FUS | International development NGOs + rights groups | Responsible Data Handbook | 2018 | free | Toolkit / Cards | https://bit.ly/2VhirVJ | List of various tools, essays, handbook etc. | Resources for digital development for international development organisations | responsible data usage in international development | Deontological ethics | responsibility, accountability, practicality, rights-based-approach | No conflict is foreseen | Individuals, mixed groups | International development agencies | ensuring common standards in digital development in international development organisations | self-assessment | online | |||||
69 | 67 | FUS | FUS | Jo Edelman | Helpful Practices for Strong Communities and for Three Types of Activism | free | Worksheets | https://bit.ly/2VVzVIp | I see only one worksheet - are there more? | Worksheet for identifying values and norms that guide action. | aligning action with personal values | Consequentialist/Utilitarian | empathy, alternative thinking, public good | No conflict is foreseen | Individuals, mixed groups | developers, designers, engineers | aligning personal values with actions | self-assessment | Physical | I do not think this link gives the full list of worksheets and tools. I could not find it online, please share if you have more. | ||||||
70 | 68 | FUS | FUS | Katherine Zhou | Design Ethically | 2019 | free | Toolkit / Cards | https://bit.ly/2YHmngk | They have many interesting tools. Definitely worth a look. | various tools to guide designers through ethical design | ensuring ethical design amidst uncertainty and unknown consequences | Consequentialist/Utilitarian | not clear | No conflict is foreseen | Mixed groups | designers, developers, creatives | ensuring ethics throughout product development | varies | self-assessment | Physical | |||||
71 | 69 | JR | UK Government | Data Ethics Workbook | free | Principles and questionnaire | https://www.gov.uk/government/publications/data-ethics-workbook | Questionnaire to implement data Ethics Framework | ||||||||||||||||||
72 | 70 | JR | National Society of Professional Engineers | NSPE Code of Ethics for Engineers | 2019 | free | Code | https://www.nspe.org/resources/ethics/code-ethics | ||||||||||||||||||
73 | 71 | JR | Industrial Designers Society Of America | IDSA Code of Ethics | free | Code | https://www.idsa.org/code-ethics | |||||||||||||||||||
74 | 72 | JR | The French Designers Alliance – AFD | AFD Code of Ethics | 2012 | free | Code | http://www.alliance-francaise-des-designers.org/code-of-ethics-for-professional-designer.html | ||||||||||||||||||
75 | 73 | JR | International Council of Design | ico-D Model Code of Professional Conduct for Designers | 2011 | free | Code | https://www.ico-d.org/database/files/library/icoD_BP_CodeofConduct.pdf | ||||||||||||||||||
76 | 74 | JR | Markkula Center for Applied Ethics | Ethical Toolkit for Engineering/Design Practice | 2018 | free | Toolkit / Cards | https://www.scu.edu/ethics-in-technology-practice/ethical-toolkit/ | ||||||||||||||||||
77 | 75 | JR | Institute for the Future and Omidyar Network | EthicalOS Toolkit | 2018 | free | Toolkit / Cards | https://ethicalos.org/ | ||||||||||||||||||
78 | 76 | JR | DrivenData | Deon | free | Checklist | http://deon.drivendata.org/ | Deon is a command line tool that allows you to easily add an ethics checklist to your data science projects. We support creating a new, standalone checklist file or appending a checklist to an existing analysis in many common formats. | ||||||||||||||||||
79 | 77 | JR | Google | AI Principles | 2018 | free | Principles | https://www.blog.google/technology/ai/ai-principles/ ||||||||||||||||||
80 | 78 | JR | Responsible Innovation COMPASS | Method kit | 2019 | free | Toolkit / Cards | https://innovation-compass.eu/method-kit/ | ||||||||||||||||||
81 | 79 | JR | Batya Friedman and David Hendry | Envisioning Cards | 2012 | free | Toolkit / Cards | https://www.envisioningcards.com/ | Explanation here https://vsdesign.org/publications/pdf/p1145-friedman.pdf | |||||||||||||||||
82 | 80 | FUS | Association of Internet Researchers | Ethical Guidelines/Questions | 2012 | free | Guideline | https://aoir.org/ethics/ | added 20/10/2019 | |||||||||||||||||
83 | 81 | FUS | Fairness Measures | Algorithmic Discrimination | 2017 | free | Open source API and definitions | http://www.fairness-measures.org/ | added 20/10/2019 | |||||||||||||||||
84 | 82 | FUS | EU | Ethics Guideline for Trustworthy AI | 2018 | free | Guideline | https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai | added 20/10/2019 | |||||||||||||||||
85 | 83 | FUS | Data Science for Social Good | Data Ethics Checklist | 2013 | free | Checklist | http://www.dssgfellowship.org//2015/09/18/an-ethical-checklist-for-data-science/ | added 20/10/2019 | |||||||||||||||||
86 | 84 | FUS | Accenture | Data Ethics Insights | 2016 | free | Guideline | https://www.accenture.com/us-en/insight-data-ethics | added 20/10/2019 | |||||||||||||||||
87 | 85 | FUS | PwC | Responsible AI Toolkit | 2019 | free | Toolkit / Cards | https://www.pwc.com/gx/en/issues/data-and-analytics/artificial-intelligence/what-is-responsible-ai.html | added 20/10/2019 | |||||||||||||||||
88 | 86 | FUS | University of Michigan | Data Science Ethics | 2019 | free | Course | https://www.coursera.org/learn/data-science-ethics | added 20/10/2019 | |||||||||||||||||
89 | 87 | FUS | O'Reilly | Ethics and Data Science | 2018 | free | Book | https://www.oreilly.com/library/view/ethics-and-data/9781492043898/ | added 20/10/2019 |||||||||||||||||
90 | 88 | FUS | Ethics in CSS | Tech Ethics Curricula | ongoing | free | Curriculum list | https://docs.google.com/spreadsheets/d/1jWIrA8jHz5fYAW4h9CkUD8gKS5V98PDJDymRf8d9vKI/edit#gid=0 | added 20/10/2019 |||||||||||||||||
91 | 89 | * | FUS | FUS | Carsten Maple and Hugh Boyes (Uni Cardiff) and EDF Energy | Home Area Network Code of Practice | 2014 | free | Code of Practice | https://bit.ly/1eoqbem | UK Government Consultation | |||||||||||||||