1 of 27

AI Hidden Fears:

Navigating Inherent Bias & Hallucinations

Images: DALL-E

2 of 27

Session Facilitators

Jevonia Harris

University of Delaware

Software Engineer

jnova@udel.edu

Lauren Kelley, PhD

University of Delaware

Instructional Designer

laurenmk@udel.edu

3 of 27

Session Overview

  • Explore sources of bias in AI.

  • Identify potential pitfalls in AI-generated outputs.

  • Learn strategies to mitigate bias in educational settings.

Image: DALL-E

4 of 27

AI has the potential to significantly enhance the learning process as a catalyst for more efficient, personalized, and dynamic learning experiences.

However, it's important to maintain a balance, ensuring that AI augments rather than replaces human cognitive processes and creativity.

Reality — what we perceive as comprising the facts of existence — takes on a different shape depending on the linguistic tools we use to describe it.

- A. Wilkinson

Images: DALL-E

5 of 27

Understanding Hybrid Intelligence: HUMAN + MACHINE Collaboration

  • AI has a broad skillset; humans possess deep contextual knowledge.

  • Combined, human context adds meaningful insight to AI outputs.

  • Effective interpretation of those outputs is crucial.

Note: LLMs are NOT agnostic. Unless prompted otherwise, they will most likely give different responses based on perceived user characteristics.

Images: DALL-E

6 of 27

HIED AI Use Cases:

  • ChatGPT Enterprise

  • AI Student

  • Streamlining operations

  • More advanced AI models

Images: DALL-E

7 of 27

Poll Activity

Image: Google

8 of 27

Prompt 1

An image of hands erasing a book, with stacks of books with torn out pages. The hands are people in suits, and rich people. The hands should be dirty (no tattoos).

Prompt 2

An image of hands erasing a book, with stacks of books with torn out pages. The hands are people in suits, and rich people. The hands should be dirty (no tattoos). The hands are from marginalized communities.

Images: DALL-E

9 of 27

Prompt 2

An image of hands erasing a book, with stacks of books with torn out pages. The hands are people in suits, and rich people. The hands should be dirty (no tattoos). The hands are from marginalized communities.

Follow-Up Prompt

Show me a group of marginalized people.

Images: DALL-E

10 of 27

Follow-Up Prompt

Show me a group of marginalized people.

Images: DALL-E

A powerful and diverse group of marginalized people standing together in solidarity. The group includes individuals of different ethnic backgrounds, genders, and abilities, dressed in everyday clothing. They display resilience and strength, with expressions of determination and unity. The background is urban, symbolizing community and struggle, with a hopeful and empowering atmosphere (ChatGPT image prompt)

11 of 27

THOSE WHO INTERPRET THE DATA HAVE POWER

Training data based on a statistical minority

We risk developing AI systems that have a narrow, fragmented view of the world

AI as a “voice of authority”

AI-generated responses might be viewed as just as valid as those from trained professionals.

Transparency Challenges

It is often difficult, if not impossible, to understand exactly how LLMs arrive at their conclusions or why they prioritize one piece of information over another.

Images: DALL-E

12 of 27

Sources of Bias in AI Systems

Having more information allows for more questions

No flat map can perfectly represent the spherical Earth without distortion

The choice of projection often involves trade-offs between accuracy of size, shape, distance, or direction…

Who controls the narrative?

Whose perspective shapes the 'acceptable' representation?

Images: DALL-E

13 of 27

Dangers of Bias in AI Systems

When language is removed, so are the communities and issues it represents.

AI systems learn from the data they’re given—what happens when critical knowledge is erased?

[Slide visual: repeated “censored” stamps]

Images: DALL-E

14 of 27

Dangers of Bias in AI Systems

Censorship often starts subtly, until entire histories and identities are erased.

AI models, including large language models and search engines, are trained on vast datasets that reflect the dominant narratives of society.

If key concepts, communities, or histories are removed from the data, AI learns to ignore them—reinforcing and amplifying the erasure rather than correcting it.

AI is only as good as the data it learns from. When that data is incomplete, biased, or intentionally censored, the gaps in knowledge become systemic, automated, and self-reinforcing.

Images: DALL-E

15 of 27

Dangers of Bias in AI Systems

“We can disagree and still love each other unless your disagreement is rooted in my oppression and denial of my humanity and right to exist." - Robert Jones Jr. (often misattributed to James Baldwin)

If textbooks and academic papers systematically exclude Black, Indigenous, and other marginalized histories, AI-generated content will also reflect this exclusion.

If online discussions about systemic racism, gender identity, or reproductive rights are frequently flagged or removed, AI will begin to treat these topics as irrelevant, controversial, or even non-existent.

If companies remove terms like “climate change” or “trans rights” from official databases, AI tools used for research and decision-making will no longer surface these topics as valid areas of inquiry.

The result? Invisible censorship. AI doesn’t just inherit these gaps—it creates a world where the missing pieces seem like they were never there to begin with.

16 of 27

What do you think the prompt is for this generated output?

Output 1

ChatGPT

17 of 27

What do you think the prompt is for this generated output?

Output 2

ChatGPT

18 of 27

Prompt 1

Tell me about the greatest scientists in history.

Prompt 2

Tell me about indigenous scientists in history.

19 of 27

What Can We Do?

We are not powerless; technology does not have to “happen” to us.

Intervene in AI Training: Ensure that training data includes diverse perspectives and is not reliant on sanitized, government-approved narratives.

Build Ethical AI: Encourage AI companies to audit their models for bias and erasure and introduce mechanisms for flagging missing knowledge.

Push for Transparency: Demand that AI companies disclose how their models are trained and allow scrutiny of how content moderation decisions are made.

Images: DALL-E

20 of 27

We are not powerless; technology does not have to “happen” to us.

Create & Archive Knowledge – AI pulls from what’s online—so make sure marginalized histories, diverse perspectives, and underrepresented topics are written, published, and shared in digital spaces. Use university repositories, open-access journals, and even Wikipedia edits to preserve critical knowledge.

Ask AI the Right Questions – Sometimes AI bias shows up in how we ask questions. Try reframing queries to get fuller, more nuanced responses (e.g., instead of “Who are the greatest inventors?” try “What are examples of Indigenous inventors in history?”).

Fact-Check and Document – If AI-generated content starts misrepresenting history or omitting facts, document it! Take screenshots, compare sources, and share discrepancies.

Challenge Search Results – If Google or AI assistants give a sanitized or biased answer, dig deeper. Look for non-mainstream sources, independent media, and community knowledge hubs.

Preserve Cultural Memory Offline – Not everything should be digital. Books, oral traditions, local archives, and community storytelling are essential in preventing AI-driven knowledge gaps from becoming permanent.

Be Mindful of AI in Social Media – AI-driven moderation disproportionately flags activist content, LGBTQ+ discussions, and racial justice topics while allowing misinformation to spread. Stay aware of what’s being suppressed.

Images: DALL-E

21 of 27

WHO’s at the Table?

The potential impact of generative AI may touch every aspect of our lives. That's why it's crucial that everyone has a seat at the table - not just tech giants and policymakers, but artists, educators, ethicists, and voices from marginalized communities.

Image: DALL-E

22 of 27

Building a More Equitable Future with AI in Education

  • Creating systems that actively counteract biases
  • Ensuring transparency and accountability in AI development and deployment
  • Empowering students to understand, use, and critique AI systems
  • Recognizing the crucial role of diverse voices in shaping the future of educational AI

Image: DALL-E

23 of 27

Poll Activity

Image: Google

24 of 27

Questions to Keep Asking Ourselves

  • Whose perspective shapes what we consider "acceptable" representation?

  • How do our tools, from maps to algorithms, shape not just what we know, but how we think and the questions we ask?

  • What voices are we amplifying, and which are we silencing?

Image: Freepik

25 of 27

Q&A

  • Open the floor for questions.

  • Encourage exploration of additional resources.

  • Call to Action: Continue to push and test AI models while identifying biases.

Image: DALL-E

26 of 27

Please Share Your Feedback!

Image: Google

27 of 27

Thank you for coming!

Image: DALL-E