1 of 27

AI and DEIA: A Starting Point 

Laura Nagel, Reference and Instruction Librarian

Marisol Moreno Ortiz, Reference and Instruction Librarian

2 of 27

Agenda

  • Introductions
  • This Space
  • Definitions
    • DEIA = Diversity, Equity, Inclusion, Antiracism
    • AI = Artificial Intelligence
    • Systemically Non-Dominant (Jenkins 1995-present)
  • How is AI currently being used?
  • Game: AI or Human?
  • DEIA concerns and opportunities with AI
  • Discussion: Equitable Ways to Use AI
  • What Next?
  • Questions

3 of 27

This Space

  • Use “I” statements.
  • Listen to understand rather than reply.
  • Approach the workshop with an open mind and a growth mindset.
  • Be open to sharing and being vulnerable.
  • Others?

4 of 27

What is Artificial Intelligence?

5 of 27

What is Artificial Intelligence?

Not Skynet...Yet?

6 of 27

SO...What is Artificial Intelligence?

7 of 27

Let's define it!

Artificial Intelligence (AI) 

AI is a branch of computer science. AI systems use hardware, algorithms, and data to create “intelligence” that can make decisions, discover patterns, and perform actions. AI is a general term, and the field uses many more specific terms. AI systems can be built in different ways; two of the primary ones are (1) rule-based systems, which follow rules provided by a human, and (2) systems built with machine learning algorithms. Many newer AI systems use machine learning.

Machine Learning (ML)

Machine learning is a field of study with a range of approaches to developing algorithms that can be used in AI systems. In ML, an algorithm identifies rules and patterns in the data without a human specifying those rules and patterns. These algorithms build a model for decision making as they work through the data. (You will sometimes hear the term machine learning model.) Because they discover their own rules in the data they are given, ML systems can perpetuate biases. Machine learning algorithms require massive amounts of training data to learn to make decisions.
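The idea that an algorithm discovers its own rule from data, rather than a human writing the rule, can be shown with a toy sketch. Everything here (the data, the pass/fail scenario, the function names) is illustrative, not from any real AI system:

```python
# Toy sketch of machine learning: the program finds its own decision rule
# (an hours-studied cutoff) from labeled examples, instead of a human
# hard-coding "pass if hours >= X". All data is made up for illustration.

# Training data: (hours_studied, passed_exam)
data = [(1, False), (2, False), (3, False), (5, True), (6, True), (8, True)]

def train_threshold(examples):
    """Try each candidate cutoff and keep the one that best separates
    passing from failing examples. The chosen cutoff is the 'model'."""
    candidates = sorted(h for h, _ in examples)
    best_cutoff, best_correct = None, -1
    for cutoff in candidates:
        correct = sum((h >= cutoff) == passed for h, passed in examples)
        if correct > best_correct:
            best_cutoff, best_correct = cutoff, correct
    return best_cutoff

cutoff = train_threshold(data)   # the learned rule: pass if hours >= 5

def predict(hours):
    """Make a decision with the learned model."""
    return hours >= cutoff
```

Note the bias risk the slide describes: if the training examples were skewed, the learned cutoff would be skewed too, with no human ever writing a biased rule.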

8 of 27

Let's define it!

Large Language Models (LLMs)

Large language models form the foundation for generative AI (GenAI) systems, which include some chatbots and other tools. LLMs are artificial neural networks. At a very basic level, during training an LLM detects statistical relationships in text: how likely a word is to appear after the words that come before it. As they answer questions or write text, LLMs use this model of word likelihood to predict the next word to generate. LLMs are a type of foundation model, pre-trained with deep learning techniques on massive data sets of text documents. Sometimes, companies include data sets of text without the creators’ consent.
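The next-word prediction idea above can be sketched in a few lines. This is a deliberately tiny word-pair (bigram) counter, not how a real LLM is built (real models use neural networks over vastly more data), and the corpus is made up:

```python
# Toy sketch of next-word prediction: count which word follows which in a
# tiny "training" corpus, then predict the most frequent follower.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept".split()

# Record the statistical relationship: how often each word follows another.
follows = defaultdict(Counter)
for prev_word, next_word in zip(corpus, corpus[1:]):
    follows[prev_word][next_word] += 1

def predict_next(word):
    """Return the word most likely to follow `word` in the training text."""
    return follows[word].most_common(1)[0][0]
```

Here "the" is followed by "cat" twice and "mat" once, so the model predicts "cat"; whatever patterns (or biases) are in the training text are exactly what the model reproduces.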

Generative AI (GenAI)

A type of machine learning that generates content, currently including text, images, music, and video; some systems can even create 3D models from 2D input.

9 of 27

Major Players

  • OpenAI: Ilya Sutskever (Russian), Greg Brockman (American), Trevor Blackwell (Canadian-American), Vicki Cheung (BIPOC), Andrej Karpathy (Slovak-Canadian), Durk Kingma, Jessica Livingston (white American), John Schulman, Pamela Vagata (BIPOC), and Wojciech Zaremba (Polish)
  • Anthropic: Dario Amodei and Daniela Amodei (brother and sister, Italian-American)
  • Microsoft: Bill Gates and Paul Allen (both white American)
  • Google: Larry Page (Jewish American) and Sergey Brin (Russian)
  • Stability: Emad Mostaque (BIPOC)

10 of 27

How is AI being used?

11 of 27

AI or human?

What do you think?

12 of 27

2 sentences about your teaching philosophy/approach as a librarian

1. In my teaching philosophy, I adhere to the belief that "Information literacy empowers people in all walks of life to seek, evaluate, use, and create information effectively to achieve their personal, social, occupational, and educational goals" (American Library Association, 2017). By embodying this philosophy, I cultivate a learning environment where critical thinking and lifelong learning are fostered, enabling individuals to navigate the complexities of the information landscape with confidence and discernment.

2. I tell students that it is my job to work with them to find, evaluate, and use information to “ask and answer questions that matter to them and to the world around them” (Elmborg, 2006). I want students to approach research with curiosity, using their existing experience and expertise and connecting it to what they're learning to shape and re-shape their worldview.

3. In my philosophy of teaching, I draw inspiration from Socrates' famous dictum "I know that I know nothing," fostering humility among my students as we embark on the journey of intellectual exploration together. Guided by Aristotle's insight that "The more you know, the more you realize you don't know," I encourage a lifelong commitment to critical inquiry and self-reflection in our pursuit of knowledge.

13 of 27

ChatGPT screenshot

NWREC 2024


14 of 27

DEIA Concerns and Opportunities with AI

  • (Student) privacy
  • Increasing the digital divide
  • Replicating human bias
    • Who creates AI?
    • What content is AI trained on?
  • Unequal impact on systemically non-dominant people
  • Surveillance
    • How effective is AI in distinguishing between actions and the spectrum of human bodies?
    • Domestic Violence
    • Profiling
      • Facial recognition 
  • Accessibility?
  • Advocacy?
  • Reducing bias?

15 of 27

Student privacy, increasing the digital divide, AI replicating human bias

(Student) Privacy

  • We don't know how AI is storing/reusing our inputs
  • Activity monitoring (Laird et al., Brown, Meaker)
    • Microsoft's Recall will store encrypted snapshots locally on your computer with Copilot+ PCs (Rahman-Jones)
  • Data breaches (Laird et al.)
  • Google building its Gemini Nano AI model into Chrome desktop (Lardinois)

Increasing the digital divide

  • Malicious ads or code being shared by chatbots and on social media (Segura, Wakefield, Newsroom)
  • Access to technology, skills, and support (Trucano)

AI replicating human bias

  • “A researcher typed sentences like 'Black African doctors providing care for white suffering children' into an artificial intelligence program designed to generate photo-like images....Despite the specifications, the AI program always depicted the children as Black.  And in 22 of over 350 images, the doctors were white.” (Drahl)
  • Left vs right leaning news content based on what’s available, which AI model you ask (Bowens)
  • Gender bias in recommendation letters (Bowens)
  • “The model's dialogue nature can reinforce a user's biases over the course of interaction. For example, the model may agree with a user's strong opinion on a political issue, reinforcing their belief.” (Bowens)
  • Reducing visual culture to stereotypes (Turk)

16 of 27

Impact on systemically non-dominant people and surveillance

Impact on systemically non-dominant people

  • As of May 2023, 54% of men were using AI compared to 35% of women (Howington)
    • Women are underrepresented in STEM fields
    • AI may still feel like science fiction (Costa)
    • Confidence gap (Costa)
  • Lack of representation in generated images (Turk)
  • Blaming AI for creating sexist images (Turnbull)
  • Mis-gendering with facial recognition, mislabeling LGBTQ+ content as "adult" (McAra-Hunter)
  • Worse recognition by emotion AI and voice assistants (Adair & Harts)

  • Chilling free speech; barriers to employment, housing (Yan)
  • "Donating" our data and personal knowledge and having that commodified, often into tools we have to buy back (Boughida et al.)
  • Use of BIPOC creative work and scholarship without compensation or recognition

Surveillance

  • Domestic violence/stalking
    • PimEyes (Allyn)
  • Profiling and facial recognition
    • Predicting health via social media (Ploug, NIDA)
    • 99% accuracy for white men, 35% for Black women (Hassanin)

17 of 27

Accessibility, other concerns/opportunities

  • Creating synthetic voices (Veltman)
  • Text to speech/speech to text (Boughida et al.)
  • Summarizing (Boughida et al.)
  • "Blank page anxiety" (Boughida et al.)
  • Automate routine tasks (Gala)
  • Language caption and translation (Gala)
  • But...
    • Not always accurate (Milne)
    • Can perpetuate ableist biases (Milne)
    • May reinforce expectation to conform to existing structures, rather than challenging them
  • Access to tools and tool creation

Other concerns or opportunities you can think of or have experienced?

Accessibility

Advocacy

  • Re-creating voices of victims of gun violence to lobby Congress (Moore)
  • Help close digital divide?
  • Digital companionship

Reducing bias?

  • Using well-trained AI to reduce human bias in evaluation of college essays (Lira et al.)
    • Employment, housing, etc.?

18 of 27

Discussion: Equitable ways to use AI

How is AI being used in your work?

19 of 27

How can you take action for yourself or the people you work with to prevent AI being used inequitably?

20 of 27

Take action

  • Understand and ask how AI tools are trained
  • Transparency/giving credit when AI is used
    • Phones adding metadata when an image is altered (Waters)
    • Facebook and Instagram to label photos
    • Big tech vows action on 'deceptive' AI in elections
  • Services like AntiFake that scramble your voice so it can’t be deepfaked (Veltman)
  • Deepfake detection technologies (Veltman)
  • Use very specific prompts
  • Evaluate AI content like we evaluate anything else
  • Increase media/digital/information literacy throughout education and in the workplace
  • Get involved in the technical and ethical sides of AI (Boughida et al.)
  • Provide opportunities for employees and community to explore
  • Use existing ethical frameworks
  • When output has bias, provide feedback to the developer
  • Participate in/support organizations working on guidance for AI governance that will benefit the public interest (Yan, FACT SHEET)
  • Think about each tool's risk vs. reward
    • Check the settings!
  • Know, understand, and accept that the conversation around AI keeps changing

Next Step: Set an action item for yourself!

21 of 27

Thank You!

Questions?

Laura: lnagel@clark.edu

Marisol: mmorenoortiz@clark.edu 

22 of 27

Questions?...We Mean It...Really!

23 of 27

Additional Resource List: More AI and DEIA

BOOKS

  • Algorithms of Oppression: How Search Engines Reinforce Racism (2018) by Safiya Umoja Noble
  • Race After Technology: Abolitionist Tools for the New Jim Code (2019) by Ruha Benjamin
  • AI Ethics and Higher Education: Good Practice and Guidance for Educators, Learners, and Institutions
  • Algorithms for the People: Democracy in the Age of AI by Josh Simons
  • Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence by Kate Crawford
  • Unmasking AI: My Mission To Protect What Is Human In A World Of Machines by Joy Buolamwini
  • Sentencing and Artificial Intelligence edited by Jesper Ryberg and Julian V. Roberts
  • Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy by Cathy O’Neil 
  • Technically Wrong: Sexist Apps, Biased Algorithms, and Other Threats of Toxic Tech by Sara Wachter-Boettcher 

ARTICLES

24 of 27

Additional Resource List: More AI and DEIA

ORGANIZATIONS

TOOLKITS

EXTRA

WEBSITES

25 of 27

Image Credits 

References 
