1 of 15

Re-viewing evidence

as a call to action

AI Assurance and Inclusion for AI Action

Dr Susan Oman

Senior Lecturer, Data AI & Society

AI & in/Equality Lead, Centre for Machine Intelligence

Dr Sara Cannizzaro

Post-doctoral researcher

Ensuring public voices are front and centre in AI research, development and policy

PublicVoicesinAI@digitalgood.net

2 of 15

Outputs

Framework for Including People in AI & related rationale

Resources for AI stakeholders to improve public involvement

Numerous reports on

public attitudes survey (n = 3,513)

expert survey (n = 4,260)

qualitative, participatory research, including projects funded by the Public Voices in AI Fund and led by community organisations

evidence review of differences in research on how different publics experience AI

Webtoon about the evidence review

3 of 15

Target beneficiaries

  • Stakeholders committed to responsible AI, including AI public participation organisations

  • AI researchers, developers & policy community not yet committed to responsible AI

  • Publics, especially groups most negatively affected by / underrepresented in AI research, policy & development

4 of 15

Why an evidence review? Quality and assurance

Research exists on what the public think about AI, but not on the differences across publics

Contention across methods, frameworks, theories on approaches to public opinion/attitudes/ experiences - and politics of participation/inclusion

Not all research outputs good quality / useful evidence for policy-makers or developers/deployers

Not all evidence assures inclusion of diverse or marginalised populations

AI researchers claim they want AI to reflect human values, but do not pay attention to social science research

5 of 15

Some of the questions we’re interested in:

  1. Who is doing this research, how is it done, why?
  2. Which research makes good evidence and how does that affect what we know?
  3. What are the gaps in the research? - who is being asked questions, & about what? (e.g. AI in general, or healthcare)
  4. How are inequalities incorporated into the way a research project is designed?
  5. How successful are these inclusion strategies?
  6. What does research that is good at including often-overlooked people and provides trustworthy evidence look like?

6 of 15

Focus: around 350 pieces of evidence

public experiences, understanding and perceptions of, attitudes towards and feelings about AI

Categorised as follows:

WHO (researchers) say WHAT, in what WAY, HOW, WHY and WHERE (their motivations, money, research design, data, findings, conclusions and recommendations)

And

WHO (people/publics) say WHAT about what, in what WAY, HOW, WHY and WHERE (how people feel about which AI, how they were asked, where they are from, who they are (demographics), and why they were asked)

7 of 15

Research claiming people are not concerned is rare, but it exists…

A 2022 European survey (n = 15,000): “the majority of respondents were not concerned with AI at all”,

“Ethical issues are voiced only by a small subset of citizens with fairness, accountability, and transparency being the least mentioned ones.”

8 of 15

Concern as a headline / foregrounded finding: less common than expected

  • a) awareness reported neutrally

  • b) ambivalence: “Americans remain split on whether AI will have mostly positive or mixed effects on society”

  • c) AI ranked below cybersecurity, child sexual abuse online, and misinformation as a priority

  • d) privileging the positive aspects of AI in reporting: “There are relatively high levels of support from the Australian public for the development of AI”

  • e) privileging white, male, middle-class perspectives over others, which plays down concern / discomfort.

9 of 15

[Diagram: spectrum of approaches to public experiences, understanding and perceptions of, attitudes towards and feelings about AI]

  • Concerned critics ↔ Opportunity advocates
  • Methods range from technology acceptance models and UX testing to social sciences and participatory approaches

  • BUT, what about these WHO questions?

10 of 15

“Demographic traits explained the most variance in comfort with AI revealing that men and those with higher perceived technology competence were more comfortable with AI” (US, 2024)

  • BUT, what about these WHO questions?

Are we represented? Are we misrepresented?

  • inclusion as instrumental targeting of specific marginalised groups to suggest acceptance
  • inclusion as a boost to sampling / demographic data, but not genuinely inclusive research design / analysis
  • overall, little evidence about ‘diverse populations’

11 of 15

Re-viewing evidence - a call to action

EU AI Act: 2 February. Article 4 addresses the ‘providers and deployers of AI systems’:

they must ensure staff understanding of ‘the context the AI systems are to be used in, & considering the persons or groups of persons on whom the AI systems are to be used’

We must ensure:

1. all voices, especially those most affected by AI systems, reach those working in AI tech/policy

2. voices are not misrepresented or missing

3. AI literacy in developers, deployers and policy meaningfully includes diverse publics’ perspectives


12 of 15

Thank you for listening!

13 of 15

Further information slides at the end…

Dr Susan Oman

Senior Lecturer, Data AI & Society

AI & in/Equality Lead, Centre for Machine Intelligence

Dr Sara Cannizzaro

Post-doctoral researcher

Ensuring public voices are front and centre in AI research, development and policy

PublicVoicesinAI@digitalgood.net

14 of 15

Intended outcomes

  1. Increased understanding of the value of meaningful inclusion of public voice & increased methodological capacity to do so. STEM partners are engaging more systematically with public voice research.
  2. Increased understanding of public views & experiences of AI, especially of how underrepresented groups are differentially impacted by AI & the subsequent need for equity-driven approaches. 
  3. PVAI has demonstrated good practice in how to engage underrepresented communities in developing, co-designing & producing AI public voice research.
  4. People from underrepresented groups have participated in AI & in shaping PVAI.
  5. Public voice informs AI. There is more engagement with people affected by technologies, and more public voice evidence cited in policy documentation and industry codes of practice.

Target beneficiaries

  • Stakeholders committed to responsible AI, including AI public participation organisations

  • AI researchers, developers & policy community not yet committed to responsible AI

  • Members of the public, especially from groups most negatively affected by / underrepresented in AI research, policy & development.

15 of 15

Next steps

Events

  • At the Paris AI Summit, on Participatory AI Governance
  • Weds 26th March, House of Commons, for policymakers
  • Fri 28th March, Sheffield, for researchers
  • Webinar on public attitudes survey, April
  • Events organised by Flexible Fund recipients
  • Other proposed dissemination events
  • RAi UK all hands meeting, Sept

Possible impact & engagement work

  • with Cabinet Office, Welsh Government, other national policymakers & local government

PublicVoicesinAI@digitalgood.net