Step 1 - Determine whether the company needs a Data and AI DD (it does if the answer to at least one of the following questions is "yes")

Does the company develop AI? Y/N
Does the company use AI? Y/N
Does the company process large amounts of data, sensitive data, or data collected from end-users? Y/N
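The screening rule above is a simple "any yes" check. A minimal sketch, assuming the three answers are recorded as booleans (the function and parameter names are illustrative, not part of the guidebook):

```python
def needs_data_ai_dd(develops_ai: bool, uses_ai: bool,
                     processes_large_or_sensitive_data: bool) -> bool:
    # A Data and AI DD is required if at least one answer is "yes".
    return develops_ai or uses_ai or processes_large_or_sensitive_data
```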
Step 2 - Determine the company's level of AI risk.
What is the risk that the application will conflict with AI regulation? Extreme / High / Moderate / Minimal

Extreme risk = AI applications that are or are likely to become illegal. For example, applications that are prohibited by the EU AI Act.
High risk = AI applications that are or are likely to become heavily regulated. For example, applications that are on the EU AI Act's "high risk" list.
Moderate risk = AI applications that are or are likely to become lightly regulated. For example, applications that are on the EU AI Act's "transparency obligations" list.
Minimal risk = AI applications that are not expected to be regulated.

An overview of the EU AI Act is available in the Responsible Investing in AI Guidebook (link to the Guidebook).
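One way to record this rubric is as a mapping from the EU AI Act category assigned to the application during screening to the regulatory risk level. The category labels and helper below are illustrative assumptions, not terms from the Act or the guidebook:

```python
REGULATORY_RISK_BY_EU_AI_ACT_CATEGORY = {
    "prohibited": "Extreme",                 # applications banned by the EU AI Act
    "high_risk": "High",                     # applications on the Act's high-risk list
    "transparency_obligations": "Moderate",  # applications with transparency duties only
    "unregulated": "Minimal",                # applications not expected to be regulated
}

def regulatory_risk(eu_ai_act_category: str) -> str:
    # Default to "Minimal" when the application falls outside the listed categories.
    return REGULATORY_RISK_BY_EU_AI_ACT_CATEGORY.get(eu_ai_act_category, "Minimal")
```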
What is the risk that the application type will conflict with your values? Extreme / High / Moderate / Minimal

Extreme risk = AI applications that are excluded by your values. For example, weapons or gambling.
High risk = AI applications that are at high risk of conflicting with your values. For example, facial recognition may threaten human rights.
Moderate risk = AI applications that are at moderate risk of conflicting with your values.
Minimal risk = AI applications that are aligned with your values.

An overview of AI-related harms is available in the Responsible Investing in AI Guidebook (link to the Guidebook).
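The values question can be sketched the same way, against exclusion and watch lists that each investor defines for itself. The example lists below simply reuse the examples from the rubric and are illustrative only:

```python
EXCLUDED_APPLICATIONS = {"weapons", "gambling"}          # excluded outright by the investor's values
HIGH_VALUES_RISK_APPLICATIONS = {"facial recognition"}   # high risk of conflicting with the investor's values

def values_risk(application_type: str, aligned_with_values: bool = False) -> str:
    # Classify the application type against the investor-defined lists above.
    if application_type in EXCLUDED_APPLICATIONS:
        return "Extreme"
    if application_type in HIGH_VALUES_RISK_APPLICATIONS:
        return "High"
    return "Minimal" if aligned_with_values else "Moderate"
```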
Step 3 - Determine the company's data and AI ethics maturity level

With respect to core concepts of AI ethics, including fairness, data rights, transparency and explainability, and human control:
Knowledge - To what extent does the company understand prominent AI ethics themes? Low/Medium/High

Guiding questions:
Risk Articulation - How well can the company articulate how prominent AI ethics risks relate to it?
Diverse Input Collection - To what extent does the company collect diverse input about the prominent AI risks its technology poses?
Employee Education - How extensively does the company educate its employees about prominent AI ethics risks?
Workflow - To what extent do the company's workflows mitigate the risk of conflicts with prominent AI ethics themes? Low/Medium/High

Guiding questions:
Strategy and Measures - Does the company have an AI ethics strategy, including clear metrics and standards?
Implemented Procedures - To what extent do the company's workflows include AI ethics practices across all stages of the development life cycle?
Incentives - To what extent do the company's internal incentive structures support the execution of its AI ethics strategy and procedures?
Oversight - To what extent does the company's oversight support compliance with prominent AI ethics themes? Low/Medium/High

Guiding questions:
Internal Reporting - To what extent does the company report on its AI ethics progress to internal stakeholders, such as a senior AI ethics owner?
External Reporting - To what extent does the company report on its AI ethics progress to external stakeholders, such as its board?
Periodic External Audits - How regularly and extensively does the company undergo external audits?
What is the company's overall responsible AI maturity? Beginner/Intermediate/Advanced

Advanced = At least: High knowledge, High workflow, Medium oversight
Intermediate = At least: High knowledge, Medium workflow, Low oversight
Beginner = No minimum requirements
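Because the overall rating follows directly from the three dimension scores, it can be expressed as a small threshold check. A sketch assuming the scores are recorded with the Low/Medium/High labels used above (the function and names are illustrative):

```python
LEVEL_ORDER = {"Low": 0, "Medium": 1, "High": 2}

def overall_maturity(knowledge: str, workflow: str, oversight: str) -> str:
    def at_least(score: str, minimum: str) -> bool:
        return LEVEL_ORDER[score] >= LEVEL_ORDER[minimum]

    # Advanced = at least High knowledge, High workflow, Medium oversight
    if at_least(knowledge, "High") and at_least(workflow, "High") and at_least(oversight, "Medium"):
        return "Advanced"
    # Intermediate = at least High knowledge, Medium workflow, Low oversight
    if at_least(knowledge, "High") and at_least(workflow, "Medium") and at_least(oversight, "Low"):
        return "Intermediate"
    # Beginner = no minimum requirements
    return "Beginner"
```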