1 of 23

Introduction to AI Ethics

& Responsible Use

2 of 23

OVERVIEW

Today's Agenda

01

What is AI?

Defining artificial intelligence and its core technologies

02

What is AI Ethics?

Principles and frameworks for responsible development

03

Why Ethics Matters

Professional responsibility for AI engineers

04

AI Misuse in Academia

Academic integrity in the age of AI

05

Class Debate

"Is using ChatGPT cheating?"

06

Assignment

AI Usage Audit: 2-week monitoring project

3 of 23

01

Understanding

Artificial Intelligence

Defining AI and Its Core Technologies

4 of 23

FOUNDATIONS

What is Artificial Intelligence?

Core Definition

Artificial Intelligence (AI) refers to computer systems capable of performing tasks that typically require human intelligence. These include understanding natural language, recognizing patterns, making decisions, and learning from experience.

Key Characteristics

Learning: Improves performance through experience and data

Reasoning: Draws inferences and applies logic to solve problems

Perception: Interprets sensory information (vision, speech, text)

Language: Understands and generates human language

The AI Hierarchy

AI encompasses Machine Learning (ML), which uses algorithms to learn from data. ML includes Deep Learning (DL), which employs neural networks with multiple layers to process complex patterns. DL powers Generative AI like ChatGPT.

Types of AI

1

Narrow AI (Weak AI)

Designed for specific tasks (e.g., chess, translation, image recognition). All current AI systems fall into this category.

2

General AI (Strong AI)

Hypothetical AI with human-like general intelligence across all domains. Not yet achieved.

3

Super AI

Theoretical AI surpassing human intelligence in all aspects. Subject of ongoing debate and research.

Key Insight

While we often use "AI" to describe today's tools, most are sophisticated forms of machine learning—not truly "intelligent" in the human sense, but powerful pattern recognition systems.

5 of 23

REAL-WORLD APPLICATIONS

AI in Everyday Life

Virtual Assistants

Siri, Alexa, Google Assistant use natural language processing to understand voice commands, answer questions, and control smart home devices.

Technology: NLP, Speech Recognition

Recommendation Systems

Netflix, Spotify, YouTube analyze your preferences and behavior to suggest movies, music, and videos tailored to your tastes.

Technology: Collaborative Filtering, ML

Autonomous Vehicles

Self-driving cars use computer vision, sensors, and deep learning to navigate roads, detect obstacles, and make real-time driving decisions.

Technology: Computer Vision, Deep Learning

Medical Diagnosis

AI systems analyze medical images, detect diseases, and assist doctors in diagnosing conditions like cancer, eye diseases, and heart conditions.

Technology: Image Recognition, Neural Networks

Language Translation

Google Translate, DeepL use neural machine translation to convert text between languages with increasing accuracy and fluency.

Technology: NLP, Transformers

Generative AI

ChatGPT, DALL-E, Midjourney generate human-like text, images, and creative content based on user prompts and instructions.

Technology: Large Language Models, GANs
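Of the technologies above, collaborative filtering is simple enough to illustrate in a few lines. The sketch below is a minimal, hypothetical user-based version, not any production recommender: ratings are toy data, 0 means "unrated," and the function names are illustrative.

```python
import math

def cosine(a, b):
    """Cosine similarity between two users' rating vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def recommend(ratings, user, k_items=1):
    """User-based collaborative filtering (toy sketch): score each item the
    target user hasn't rated using similarity-weighted ratings from others."""
    target = ratings[user]
    scores = {}
    for item in range(len(target)):
        if target[item] != 0:
            continue  # user already rated this item
        num = den = 0.0
        for other, vec in ratings.items():
            if other == user or vec[item] == 0:
                continue
            sim = cosine(target, vec)
            num += sim * vec[item]
            den += abs(sim)
        if den:
            scores[item] = num / den
    return sorted(scores, key=scores.get, reverse=True)[:k_items]
```

Real systems at Netflix or Spotify scale combine many such signals with learned models, but the core idea, "users who rated similarly in the past predict each other's future ratings," is the same.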

Important Note: AI is already deeply integrated into our daily lives, often in ways we don't even realize. Understanding how these systems work—and their ethical implications—is essential for informed citizenship and responsible engineering.

6 of 23

02

Foundations of

AI Ethics

Principles and Frameworks for Responsible Development

7 of 23

ETHICAL FOUNDATIONS

What is AI Ethics?

Definition

AI Ethics is the study of moral issues and societal impacts arising from the development and deployment of artificial intelligence systems. It examines how AI technologies affect human values, rights, and wellbeing, and establishes guidelines for responsible innovation.

Why AI Ethics Matters

High Stakes: AI systems make consequential decisions affecting millions of lives in healthcare, criminal justice, finance, and employment

Rapid Innovation: Technology evolves faster than regulations, creating gaps that ethical frameworks must fill

Power Imbalances: AI can amplify existing inequalities and create new forms of discrimination

Accountability Gaps: When AI makes mistakes, determining responsibility is complex

"Innovation in AI consistently outpaces government regulation, creating an urgent need for internal ethical leadership."

— Digital Ethics Research, 2024

Core Ethical Principles

1

Fairness & Non-Discrimination

AI systems should treat all individuals equitably, without amplifying societal biases or creating discriminatory outcomes

2

Transparency & Explainability

AI decision-making processes should be understandable to stakeholders, enabling scrutiny and informed consent

3

Accountability & Responsibility

Clear lines of responsibility must exist for AI system outcomes, with mechanisms for redress when harms occur

4

Privacy & Data Protection

Personal data must be protected throughout the AI lifecycle, with individuals maintaining control over their information

Additional Principles

Beneficence: Promoting good

Justice: Fair distribution

Autonomy: Respecting choice

8 of 23

GLOBAL FRAMEWORK

UNESCO's 10 Core Principles for AI Ethics

1

Proportionality & Do No Harm

AI use must not exceed what's necessary to achieve legitimate aims. Risk assessments should prevent harms to individuals and society.

2

Safety & Security

AI systems must avoid unwanted harms (safety risks) and vulnerabilities to attacks (security risks) through robust design and testing.

3

Right to Privacy & Data Protection

Privacy must be protected throughout the AI lifecycle. Adequate data protection frameworks should be established and enforced.

4

Multi-stakeholder Governance

International law and national sovereignty must be respected. Diverse stakeholder participation is necessary for inclusive AI governance.

5

Responsibility & Accountability

AI systems should be auditable and traceable. Oversight mechanisms must avoid conflicts with human rights and environmental wellbeing.

6

Transparency & Explainability

Ethical AI deployment depends on transparency. The level should be appropriate to context, balancing with privacy and security needs.

7

Human Oversight & Determination

AI systems must not displace ultimate human responsibility. Humans must remain in control of critical decisions affecting people's lives.

8

Sustainability

AI technologies should be assessed against sustainability goals, including environmental impact and alignment with UN Sustainable Development Goals.

9

Awareness & Literacy

Public understanding of AI should be promoted through education, civic engagement, digital skills training, and AI ethics literacy programs.

10

Fairness & Non-Discrimination

AI actors should promote social justice and fairness, taking an inclusive approach to ensure AI's benefits are accessible to all.

Global Context: These principles, adopted by UNESCO member states in 2021, represent the first global standard on AI ethics. They provide a human-rights centered approach that all AI practitioners should understand and apply.

9 of 23

03

Ethics for

AI Engineers

Why Professional Responsibility Matters

10 of 23

PROFESSIONAL RESPONSIBILITY

Why Ethics Matters for AI Engineers

The Stakes Have Never Been Higher

AI systems now power critical decisions in healthcare, criminal justice, finance, and employment. As an AI engineer, your code can affect millions of lives—determining who gets a loan, who receives medical treatment, or who is flagged as a security risk.

Critical Statistic: Over 85% of AI projects were projected to deliver erroneous outcomes by 2024 due to ethical oversights, highlighting the urgent need for ethical frameworks in development.

The Innovation-Regulation Gap

AI technology evolves exponentially faster than laws and regulations can adapt. This creates a dangerous gap where harmful practices can become entrenched before safeguards are implemented.

Government regulation typically lags 5-10 years behind technology

Industry self-regulation is often insufficient or nonexistent

Engineers must proactively consider ethical implications

Engineer's Ethical Responsibilities

Consider Societal Impact

Think beyond technical requirements to how your system affects communities and individuals

Include Diverse Perspectives

Ensure development teams represent diverse backgrounds to identify blind spots

Test for Bias

Regularly audit algorithms for discriminatory outcomes across different groups

Document Decisions

Maintain clear records of design choices and ethical considerations

Speak Up

Report ethical concerns even when it's uncomfortable or unpopular
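The "test for bias" responsibility above can start very simply: compare positive-outcome rates across demographic groups. The sketch below is a minimal illustration with hypothetical function names; the "four-fifths rule" threshold of 0.8 is a common screening heuristic, not a complete fairness audit.

```python
def selection_rates(outcomes, groups):
    """Positive-outcome rate per demographic group.
    outcomes: list of 0/1 decisions; groups: parallel list of group labels."""
    totals, positives = {}, {}
    for y, g in zip(outcomes, groups):
        totals[g] = totals.get(g, 0) + 1
        positives[g] = positives.get(g, 0) + (1 if y else 0)
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(outcomes, groups):
    """Lowest group rate divided by highest group rate.
    Values below 0.8 are commonly flagged for further review."""
    rates = selection_rates(outcomes, groups)
    return min(rates.values()) / max(rates.values())
```

For example, if one group is approved 75% of the time and another 25%, the ratio is 0.33, well below the 0.8 screening threshold, and the system would warrant a deeper audit.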

Key Insight

Ethical AI development isn't about preventing innovation—it's about ensuring that innovation serves humanity's best interests and doesn't cause unintended harm.

11 of 23

CASE STUDIES

Real-World Consequences of Unethical AI

Biased Hiring Algorithms

Amazon developed an AI recruiting tool that was trained on 10 years of hiring data. The system learned to penalize resumes containing the word "women's" (as in "women's chess club captain") and favored male candidates for technical positions.

Impact: The system perpetuated gender discrimination at scale. Amazon eventually scrapped the project, but not before it had influenced hiring decisions.

Facial Recognition Bias

Multiple studies found that facial recognition systems from major tech companies had significantly higher error rates for darker-skinned women (up to 34.7%) compared to lighter-skinned men (0.8%).

Impact: These systems, deployed in law enforcement and security contexts, led to false arrests and surveillance disparities affecting marginalized communities.

Criminal Justice Algorithms

The COMPAS recidivism prediction tool, used in courtrooms across the U.S., was found to falsely flag Black defendants as future criminals at twice the rate of white defendants.

Impact: Judges used these biased risk scores to determine sentencing and parole decisions, perpetuating systemic racism in the criminal justice system.

Privacy Violations

AI systems trained on vast datasets often contain sensitive personal information. Cambridge Analytica harvested data from 87 million Facebook users without consent to build psychological profiles for political targeting.

Impact: Personal data was weaponized to manipulate elections and undermine democratic processes, violating fundamental privacy rights.

Critical Lesson: These aren't just technical failures—they're ethical failures with real human consequences. Each case demonstrates why AI engineers must prioritize fairness, transparency, and accountability from the earliest stages of development.

12 of 23

04

AI Misuse in

Academia

Understanding Academic Integrity in the Age of AI

13 of 23

BY THE NUMBERS

The Rise of AI in Academic Settings

Student AI Usage Statistics

43% of college students have used ChatGPT or similar AI tools

89% of AI users used it for homework assistance

53% of AI users used it for writing essays

48% of AI users used it for at-home tests

The Cheating Epidemic

UK

Nearly 7,000 UK university students were formally caught cheating with AI tools in the 2023-24 academic year

This represents 5.1 cases per 1,000 students—triple the rate from the previous year

K-12

26% of K-12 teachers have caught a student cheating with ChatGPT

50% of teachers know at least one student who faced consequences for AI misuse

HS

6.4% to 24.1% of high school students admitted to using AI to cheat, varying by school type

Charter schools: 24.1% | Public schools: 15.2% | Private schools: 6.4%

Student Perspectives

51% think using ChatGPT is cheating

22% still use it despite believing it's cheating

60% believe AI should explain concepts

The Nuanced Reality

While many students recognize ethical concerns, they also see legitimate educational uses for AI. The challenge is distinguishing between appropriate assistance and academic dishonesty—a distinction that varies by context and assignment.

14 of 23

REAL-WORLD CASE

Case Study: The Grammarly Girl Incident

The Incident

In October 2023, Marley Stevens, a student at the University of North Georgia (UNG), received a zero on a paper and was accused of using AI to cheat. Her offense? Using Grammarly—an AI-powered grammar and spell-checking tool—to proofread her work.

The Irony: Grammarly was listed as a recommended resource on UNG's own website for improving grammar and style.

The professor's syllabus prohibited AI use, but many students understood this to mean generative AI like ChatGPT, not grammar checkers they'd been using for years.

The Consequences

Zero on the paper affecting her GPA

Academic probation until February 2025

Lost scholarship and financial aid

Required to attend academic integrity workshops

Six-month appeals process with no ability to further appeal

The Detection Problem

Turnitin's AI detection software flagged Stevens' paper as AI-generated. However, AI detectors are known to be highly unreliable:

University of Pennsylvania study: detectors easily fooled by spelling variations

Stanford study: biased against non-native English speakers

OpenAI disabled their own detection tool due to low accuracy

University of Reading: 94% of AI-written submissions went undetected

The Aftermath

Stevens took her story to TikTok, where it gained widespread attention. This public pressure prompted the university to address the case, but the damage was already done.

Positive Outcome: Grammarly developed "Authorship"—a tool to track text sources and AI modifications—in response to Stevens' case. She was invited to speak at Educause about her experience.

Discussion Questions

  1. Was using Grammarly "cheating" in this context?
  2. How should universities clarify AI policies?
  3. What are the risks of relying solely on AI detectors?

15 of 23

TECHNICAL CHALLENGES

The AI Detection Problem

How AI Detectors Work

AI detection tools analyze text for patterns like "burstiness" (variation in sentence structure) and "perplexity" (unpredictability of word choices). They assume human writing is more varied and creative than AI-generated text.

Key Difference: Unlike plagiarism detectors that compare text to databases, AI detectors look for statistical patterns—which can be highly unreliable.
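These two signals can be approximated crudely in code. The sketch below is illustrative only: it measures burstiness as relative variation in sentence length, and it uses the text's own word frequencies as a stand-in for perplexity (real detectors score words with a full language model, not unigram counts).

```python
import math
import re

def burstiness(text):
    """Relative variation in sentence length: std dev / mean of
    words per sentence. Higher values = more 'bursty' writing."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    mean = sum(lengths) / len(lengths)
    var = sum((n - mean) ** 2 for n in lengths) / len(lengths)
    return math.sqrt(var) / mean

def unigram_perplexity(text):
    """Crude perplexity proxy: how 'surprising' each word is under the
    text's own unigram frequencies. All-distinct words -> high value;
    heavy repetition -> value near 1."""
    words = text.lower().split()
    counts = {}
    for w in words:
        counts[w] = counts.get(w, 0) + 1
    n = len(words)
    log_prob = sum(math.log(counts[w] / n) for w in words)
    return math.exp(-log_prob / n)
```

Even this toy version hints at the core weakness: both numbers depend heavily on topic, genre, and individual style, so thresholds that separate "human" from "AI" text are inherently unstable.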

Major Detection Failures

Racial & Linguistic Bias

Stanford study found detectors misclassified over 50% of non-native English writing as AI-generated, while native speaker accuracy was nearly perfect. This creates discriminatory outcomes.

High False Negative Rate

University of Reading test: 94% of AI-written submissions went undetected. Most cheaters aren't being caught, while some innocent students are falsely accused.

OpenAI Abandoned Detection

ChatGPT's own creator disabled their AI detection platform due to low accuracy rates, acknowledging the fundamental limitations of current detection methods.

Why Detection is Fundamentally Flawed

1

Arms Race Dynamics

As AI models improve, they produce more human-like text, making detection increasingly difficult

2

Human Editing Blurs Lines

Students can edit AI-generated text, making it statistically indistinguishable from human writing

3

Writing Style Variation

Human writing varies enormously by individual, context, and purpose—no universal "human pattern" exists

4

False Positives Harm Innocents

Wrongful accusations damage students' academic records, mental health, and future opportunities

Better Alternatives

Process-Oriented Assessment

Require drafts, outlines, and revision histories to track student work over time

Oral Exams & Presentations

Have students explain their thinking in person, demonstrating genuine understanding

In-Class Assignments

Design assessments that occur in controlled environments where AI use is limited

Clear AI Policies

Establish explicit guidelines about when and how AI tools can be used in coursework

Expert Consensus: Leading researchers and even AI companies agree that current detection tools are not reliable enough for high-stakes academic decisions. Relying on them risks harming innocent students while failing to catch actual cheaters.

16 of 23

05

Class

Debate

Is Using ChatGPT Cheating?

17 of 23

POSITION: YES

Arguments FOR: Using ChatGPT is Cheating

Academic Integrity Violation

Passing off AI-generated content as your own original work violates the fundamental principles of academic honesty. When you submit work that wasn't created by you, you're misrepresenting your knowledge, skills, and effort.

Key Point: Academic integrity requires that submitted work be your own authentic creation, produced through your own intellectual effort.

Skill Development Prevention

Using ChatGPT to complete assignments prevents you from developing essential skills: critical thinking, problem-solving, writing ability, and research skills. These competencies are the entire point of education—not just the final product.

Key Point: The learning happens in the struggle, the drafting, the revision—not in receiving a finished product from AI.

Unfair Advantage

Students who use AI to complete assignments gain an unfair advantage over those who do the work honestly. This creates an uneven playing field and devalues the achievements of students who put in genuine effort.

Key Point: If some students use AI while others don't, grades no longer reflect actual learning or ability.

Assessment Invalidation

Assignments are designed to assess your understanding and skills. If AI completes the work, the assessment becomes meaningless—neither you nor your instructor can accurately gauge what you've actually learned.

Key Point: Education relies on accurate assessment. AI use makes it impossible to evaluate genuine learning.

Student Opinion

51% of students believe using ChatGPT is cheating, and 95% of private high school students say AI should never be allowed to write an entire paper. Many students themselves recognize the ethical concerns.

18 of 23

POSITION: NO

Arguments AGAINST: Using ChatGPT is NOT Cheating

AI as a Learning Tool

ChatGPT is just another tool in the learning process, similar to calculators, spell-checkers, grammar tools, or search engines. These technologies were once controversial but are now accepted as legitimate educational aids.

Key Point: Tools don't cheat—people do. The ethical line depends on how you use the tool, not the tool itself.

Legitimate Educational Uses

Using AI for brainstorming, understanding complex concepts, getting explanations, or checking grammar enhances learning rather than replacing it. These uses are similar to asking a tutor or using educational resources.

Key Point: 46-60% of students believe AI should always be allowed for explaining concepts—this is educational support, not cheating.

Context Matters

Whether AI use is appropriate depends entirely on context: the assignment's learning objectives, instructor guidelines, and institutional policies. Blanket bans ignore the nuanced reality of different educational scenarios.

Key Point: Using AI for a creative writing assignment differs from using it on a final exam. Context determines appropriateness.

Professional Preparation

In professional settings, using AI tools is becoming standard practice. Learning to use AI ethically and effectively is a valuable skill that prepares students for the modern workforce where AI assistance is the norm.

Key Point: The future workplace will require AI literacy. Banning AI in education may leave students unprepared.

The Transparency Solution

Many argue that the solution isn't banning AI, but requiring transparency and disclosure. If students clearly indicate how they used AI tools, they maintain academic integrity while benefiting from technological assistance—similar to citing sources or acknowledging collaborators.

19 of 23

NUANCED PERSPECTIVE

Finding the Middle Ground

The Ethical Use Spectrum

The ethical use of ChatGPT depends on a combination of factors: context, assignment guidelines, transparency, and learning objectives. Rather than a simple yes/no answer, we must consider where on the spectrum a particular use falls.

Ethical AI Use = Context + Transparency + Intent

✓ Generally Appropriate Uses

Brainstorming & Idea Generation

Getting initial ideas for essays, projects, or creative work

Concept Explanation

Asking AI to explain difficult topics in different ways

Grammar & Style Checking

Proofreading your own work (with disclosure if required)

Learning Coding Concepts

Understanding how code works, not copying solutions

Research Assistance

Finding sources and getting overviews (verifying accuracy)

✗ Generally Inappropriate Uses

Submitting AI-Generated Work as Original

Passing off AI-written essays, code, or assignments as your own

Using AI on Individual Assessments

Tests, exams, or assignments meant to evaluate your knowledge alone

Failing to Disclose When Required

Not acknowledging AI assistance when policies require transparency

Replacing Core Learning Activities

Using AI to skip essential practice that builds skills and understanding

Critical Thinking Framework

Before using AI for academic work, ask yourself:

1. What are the assignment's learning objectives?

2. Does my instructor allow AI use for this task?

3. Am I using AI to enhance learning or avoid it?

4. Can I explain and defend my work if asked?

5. Am I being transparent about AI assistance?

The Bottom Line: The ethical use of AI in education isn't about finding loopholes or pushing boundaries—it's about using technology to enhance your learning while maintaining integrity. When in doubt, ask your instructor and err on the side of transparency.

20 of 23

06

Your

Assignment

AI Usage Audit: 2-Week Monitoring Project

21 of 23

PRACTICAL EXERCISE

Assignment: AI Usage Audit

Assignment Overview

For the next two weeks, you will conduct a comprehensive audit of your AI tool usage. This exercise is designed to help you develop awareness of when, how, and why you use AI tools—and to reflect on the ethical implications of your choices.

Goal: Build self-awareness and ethical judgment about AI use, not to judge or penalize you. There are no "right" or "wrong" answers—only honest reflection.

What to Track

1

Every AI Tool Instance

Record each time you use ChatGPT, Grammarly, GitHub Copilot, Midjourney, or any other AI-powered tool

2

Purpose & Context

Note what you were trying to accomplish (homework help, coding, brainstorming, grammar check, etc.)

3

Course/Assignment

Identify which class or assignment you were working on (if applicable)

4

Time Spent

Estimate how long you used the AI tool for that task

5

Ethical Assessment

Reflect on whether this use felt appropriate, questionable, or inappropriate

Sample Tracking Template

Date  | Tool      | Purpose           | Assessment
03/04 | ChatGPT   | Explain recursion | ✓ OK
03/05 | Grammarly | Proofread essay   | ✓ OK
03/06 | ChatGPT   | Debug code        | ? Maybe
03/07 | ChatGPT   | Write conclusion  | ✗ No
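If you would rather keep your log in a file than on paper, a small script works too. The sketch below is one possible approach, not a required format; the field names are just a suggestion, and each call appends one entry to a CSV file you can submit directly.

```python
import csv
import os
from datetime import date

LOG_FIELDS = ["date", "tool", "purpose", "minutes", "assessment"]

def log_usage(path, tool, purpose, minutes, assessment):
    """Append one AI-usage entry to a CSV log, writing the header
    row first if the file does not exist yet."""
    new_file = not os.path.exists(path)
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        if new_file:
            writer.writerow(LOG_FIELDS)
        writer.writerow(
            [date.today().isoformat(), tool, purpose, minutes, assessment]
        )
```

For example, log_usage("audit.csv", "ChatGPT", "Explain recursion", 15, "OK") records one entry; run it each time you use an AI tool during the two weeks.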

Reflection Questions

At the end of two weeks, answer these questions in a 1-2 page reflection:

1. What patterns did you notice in your AI usage?

2. Which uses felt most/least ethically justified? Why?

3. How did AI use impact your learning and skill development?

4. What would you do differently? What will you continue?

5. How has this audit changed your perspective on AI ethics?

Submission Details

Due Date: Two weeks from today. Submit your tracking log and reflection paper via email. This assignment will be evaluated on completeness, thoughtfulness, and honest self-reflection—not on how much or how little AI you used.

22 of 23

SUMMARY

Key Takeaways & Best Practices

Understanding AI

AI is a powerful tool with both benefits and risks

Machine learning and deep learning are subsets of AI

AI is already deeply integrated into daily life

Current AI is "Narrow AI"—not general intelligence

AI Ethics

Core principles: Fairness, Transparency, Accountability, Privacy

UNESCO provides 10 core principles for ethical AI

Ethics is about societal impact, not just technical performance

Principles are interconnected and must be balanced

Engineer's Role

AI engineers have special responsibility to society

Consider societal impact from the earliest design stages

Test for bias and include diverse perspectives

Document decisions and speak up about concerns

Academic Integrity

AI misuse is a growing problem in education

AI detection tools are unreliable and biased

False positives can harm innocent students

Clear policies and transparency are essential

The ChatGPT Debate

Context determines whether AI use is appropriate

Transparency and disclosure are key ethical practices

AI can enhance learning when used appropriately

Submitting AI work as original is academic dishonesty

Best Practices

Always check your instructor's AI policy first

Be transparent about AI assistance when required

Use AI to enhance learning, not replace it

Develop critical thinking alongside technical skills

Remember: Ethics is not a one-time decision but an ongoing practice. As AI continues to evolve, so too must our ethical frameworks and personal guidelines. The goal is not perfection, but thoughtful consideration of how our actions affect ourselves, our communities, and society at large.

23 of 23

Questions for Reflection

How will you use AI tools ethically in your academic journey?

What responsibilities do AI engineers have to society?

How can we balance innovation with ethical considerations?

"The real question is not whether machines think, but whether humans do."

— B.F. Skinner