Re-viewing evidence as a call to action
AI Assurance and Inclusion for AI Action
Dr Susan Oman
Senior Lecturer in Data, AI & Society
AI & in/Equality Lead, Centre for Machine Intelligence
Dr Sara Cannizzaro
Postdoctoral Researcher
Ensuring public voices are front and centre in AI research, development and policy
PublicVoicesinAI@digitalgood.net
Outputs
Framework for Including People in AI & related rationale
Resources for AI stakeholders to improve public involvement
Numerous reports on:
public attitudes survey (n = 3,513)
expert survey (n = 4,260)
qualitative, participatory research, including projects funded by the Public Voices in AI Fund and led by community organisations
evidence review of differences in research on how different publics experience AI
Webtoon about the evidence review
Target beneficiaries
Stakeholders committed to responsible AI, including AI public participation organisations
AI researchers, developers & the policy community not yet committed to responsible AI
Publics, especially groups most negatively affected by or underrepresented in AI research, policy & development
Why an evidence review? Quality and assurance
Research exists on what the public think about AI, but not on differences across publics
Contention across methods, frameworks and theories on approaches to public opinion/attitudes/experiences, and on the politics of participation/inclusion
Not all research outputs are good-quality or useful evidence for policy-makers or developers/deployers
Not all evidence assures the inclusion of diverse or marginalised populations
AI researchers claim they want AI to reflect human values, but pay little attention to social science research
Some of the questions we’re interested in:
Focus: around 350 pieces of evidence on public experiences, understanding and perceptions of, attitudes towards, and feelings about AI
Categorised as follows:
WHO (researchers) say WHAT, in what WAY, HOW, WHY and WHERE (their motivations, money, research design, data, findings, conclusions and recommendations)
And
WHO (people/publics) say WHAT about what, in what WAY, HOW, WHY and WHERE (how people feel about which AI, how they were asked, where they are from, who they are (demographics), and why they were asked)
Research claiming people are not concerned is rare, but it is there…
2022 survey (Europe, n = 15,000): “the majority of respondents were not concerned with AI at all”
“Ethical issues are voiced only by a small subset of citizens with fairness, accountability, and transparency being the least mentioned ones.”
Concern - as headline finding / foregrounded - less than expected
[Diagram: approaches to researching public experiences, understanding and perceptions of, attitudes towards, and feelings about AI, ranging from concerned critics and opportunity advocates to technology acceptance models, UX testing, the social sciences and participatory approaches]
“Demographic traits explained the most variance in comfort with AI revealing that men and those with higher perceived technology competence were more comfortable with AI” (US, 2024)
Are we represented?
Are we misrepresented?
Re-viewing evidence - a call to action
EU AI Act: Article 4, applicable from 2 February 2025, addresses ‘providers and deployers of AI systems’
They must ensure staff AI literacy, taking into account ‘the context the AI systems are to be used in, and considering the persons or groups of persons on whom the AI systems are to be used’
Call to action: we must ensure that
1. all voices, especially those most affected by AI systems, reach those working in AI technology and policy
2. voices are not misrepresented or missing
3. AI literacy among developers, deployers and policy-makers meaningfully includes diverse publics’ perspectives
Thank you for listening!
Further information slides at the end…
PublicVoicesinAI@digitalgood.net
Intended outcomes
Target beneficiaries
Next steps
Events
Possible impact & engagement work