1 of 10

Social Impact and Inclusivity

CS470- Artificial Intelligence

Team 2

Svetozar Draganitchki, Woodrow Reese, Dante Barton, Aditya Shriyan, William Ding, Naomi Adebo-young, Emilia Morgan

2 of 10

3 of 10

AI Replacing Jobs

  • Duolingo fired employees and replaced them with AI-led translations

https://tech.co/news/duolingo-ai-layoffs

  • Social: as AI keeps replacing jobs, fewer and fewer jobs are left for human workers
  • Ethical: Is it morally right for companies to keep all the profit from AI?
  • We could put social structures in place so that everyone benefits from AI, not just a select few

Svetozar Draganitchki

4 of 10

Emilia Morgan - Bias in AI algorithms

5 of 10

Transparency and Explainability

  • 3rd party auditing
  • Documentation & FAQ
  • Engage with stakeholders and act on their feedback
  • Transparency reports: acknowledge possible biases, limitations, and optimal use cases
    • How do you source/handle data?
      • Security
      • 3rd party buyers

Woodrow Reese

6 of 10

How can AI address diversity and inclusivity?

The Organisation for Economic Co-operation and Development (OECD) lists these key principles for regulating the impact of AI solutions:

  1. "AI should benefit people and the planet by driving inclusive growth, sustainable development and well-being."
  2. "AI systems should be designed in a way that respects the rule of law, human rights, democratic values and diversity, and they should include appropriate safeguards — for example, enabling human intervention where necessary — to ensure a fair and just society."

Source: https://www.forbes.com/sites/forbestechcouncil/2021/08/12/artificial-intelligence-for-social-inclusion-technologies-and-necessary-steps/?sh=4f80c9ed61ec

https://www.oecd.org/digital/artificial-intelligence/

Dante Barton

7 of 10

Role of Public Institutions in Auditing AI Algorithms (FRTE)

The National Institute of Standards and Technology (NIST) publishes the Face Recognition Technology Evaluation (FRTE), an ongoing report that measures the accuracy of face recognition algorithms submitted by industry and academic developers; it is updated regularly as new algorithms are submitted.

  • Among U.S.-developed algorithms, higher rates of false positives in one-to-one matching for Asian, African American, and Native American faces relative to images of Caucasians.
  • A notable exception: some algorithms developed in Asian countries did not show this differential.
  • For one-to-many matching, the report found higher rates of false positives for African American women.

Reports like this are important for recognizing the possible adverse social impact such algorithms could have in real deployments (e.g., law enforcement).
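To make the evaluation concrete, below is a minimal sketch (not NIST's actual methodology or code) of how a per-group false match rate could be computed from one-to-one comparison results; the data layout and field names are assumptions for illustration only.

```python
from collections import defaultdict

def false_match_rate_by_group(comparisons):
    """Compute the false match rate (FMR) per demographic group.

    `comparisons` is an assumed list of dicts, one per one-to-one trial:
      {"group": str, "same_person": bool, "predicted_match": bool}
    FMR = fraction of different-person pairs the algorithm wrongly accepts.
    """
    impostor_trials = defaultdict(int)  # different-person pairs seen per group
    false_matches = defaultdict(int)    # wrongly accepted pairs per group

    for trial in comparisons:
        if not trial["same_person"]:          # only impostor (different-person) pairs count
            impostor_trials[trial["group"]] += 1
            if trial["predicted_match"]:      # algorithm said "match": a false positive
                false_matches[trial["group"]] += 1

    return {g: false_matches[g] / n for g, n in impostor_trials.items() if n > 0}

# A higher FMR for one group than another is the kind of demographic
# differential the FRTE report describes.
demo = [
    {"group": "A", "same_person": False, "predicted_match": True},
    {"group": "A", "same_person": False, "predicted_match": False},
    {"group": "B", "same_person": False, "predicted_match": False},
    {"group": "B", "same_person": False, "predicted_match": False},
]
print(false_match_rate_by_group(demo))  # {'A': 0.5, 'B': 0.0}
```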

Aditya Shriyan

Photo Courtesy: Sony

8 of 10

False Accusations

Robert Julian-Borchak Williams, a Michigan resident, was falsely accused of theft in 2020 after police ran facial recognition on grainy security camera footage.

Case later dismissed without prejudice

Incidents involving misrecognition still occur, calling the reliability of facial recognition into question. Is facial recognition reliable enough on its own, or do we need humans to verify the results of a facial recognition algorithm?
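As one illustration of what human verification could look like in practice, the sketch below routes facial recognition hits to a human examiner instead of treating them as identifications; the threshold and names are assumptions, not any real system's policy or API.

```python
REVIEW_THRESHOLD = 0.99  # assumed cutoff; a real deployment would set this from audited error rates

def triage_match(candidate_id: str, similarity_score: float) -> str:
    """Decide how to treat a single facial recognition hit.

    A hit is never treated as an identification on its own: a strong hit
    goes to a human examiner, and a weak hit is discarded rather than
    surfaced as a suspect.
    """
    if similarity_score >= REVIEW_THRESHOLD:
        return f"Send {candidate_id} to a human examiner for verification"
    return f"Discard {candidate_id}: score too low to be more than a weak lead"

print(triage_match("candidate_0042", 0.995))
print(triage_match("candidate_0137", 0.81))
```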

9 of 10

Reinforcement of Inclusivity through Ethics

  • The emerging ethical challenges in AI call for legal boundaries and enforceable accountability for AI design.

  • Researchers and scholars in AI have concluded that building and using ethically designed AI without bias requires coordinated effort across several areas:

    • Interdisciplinary collaboration
    • Ethical impact assessments
    • Ethical review
    • Transparency and explainability
    • Ethical AI literacy
    • Fairness and bias mitigation
    • Ethical regulations and policies

William Ding

10 of 10

Fairness: Ensuring inclusivity

  • Building data sets that represent a diverse range of people (see the sketch after this list).

  • See Something, Say Something: speak up or report whenever we notice something is off.

  • Educating the population on the basics of AI at the primary and secondary levels as well as in higher education.

  • Creating a future where the developers of these applications reflect their users.
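As a minimal sketch of the first point, checking that a data set actually covers a diverse range of people, the code below counts group representation and flags groups that fall below an assumed share; the labels and threshold are illustrative and not taken from the cited source.

```python
from collections import Counter

def flag_underrepresented(group_labels, min_share=0.10):
    """Return each group whose share of the data set falls below `min_share`."""
    counts = Counter(group_labels)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items() if n / total < min_share}

# Illustrative labels only; a real audit would use the data set's own demographic metadata.
labels = ["group_a"] * 70 + ["group_b"] * 25 + ["group_c"] * 5
print(flag_underrepresented(labels))  # {'group_c': 0.05}
```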

Sources: Kleinman, Zoe. “Artificial Intelligence: How to Avoid Racist Algorithms.” BBC News, BBC, 13 Apr. 2017, www.bbc.com/news/technology-39533308.

Naomi Adebo-young