1 of 46

AI and Ethics (CS60016)

This course is sponsored by Google

Are intelligent machines friend or foe?

2 of 46

Course Team

  • Instructors: Animesh Mukherjee, Invited guests
  • Teaching Assistants:
    • Punyajoy Saha (works in hate speech detection and mitigation)
    • Sayantan Adak (works on historical text processing)
    • Siddharth Jaiswal (works on bias and fairness in user-facing systems)
    • Contact: aiethicsta@googlegroups.com

3 of 46

Course Details

  • Regular lectures
  • Continuous evaluation (60%)
  • Term projects (40%)
  • Term projects: Groups of 4-5 students. Each group will be given one problem statement and the necessary dataset (if any), materials, and know-how for the project.
    • One presentation before midsem
    • One presentation before endsem

4 of 46

Attendance Rules

  • This is an experimental course
  • Assumption: Students come by their own choice
  • No attendance will be taken
  • But that means … you might also miss interesting guest lectures!

5 of 46

Books

  • No regular textbooks
  • The Ethical Algorithm: The Science of Socially Aware Algorithm Design by Aaron Roth and Michael Kearns
  • Human Compatible: AI and the Problem of Control by Stuart J. Russell
  • Towards a Code of Ethics for Artificial Intelligence by Paula Boddington
  • Moral Machines: Teaching Robots Right from Wrong by Wendell Wallach and Colin Allen
  • Rebooting AI: Building Artificial Intelligence We Can Trust by Ernest Davis and Gary Marcus

6 of 46

Artificial Intelligence

When a machine mimics "cognitive" functions that humans associate with other human minds, such as "learning" and "problem solving"

7 of 46

AI IS ALREADY EVERYWHERE, EVERYDAY

8 of 46

You live in the age of data-driven algorithms

Decisions that affect your life are being made by mathematical models.

9 of 46

Why the rush to AI?

  • Cheaper computing
  • More data
  • Better algorithms

…it's because we can

10 of 46

Why the rush to AI?

  • Decision automation is now an inevitable economic imperative
  • Driven by a faster-paced, micro-managed, interconnected, automated, and optimized world
  • Never-asleep autonomous decision making: it is here now

11 of 46

Why the rush to AI?

  • Decisions are made in view of assessed positive and negative projected outcomes
  • Positive and negative are merely derived (learned) weights
  • Relative to some system of value
  • Moving toward or away from objectives & problems


12 of 46

Why the rush to AI?

  • Weights are encoded intent
  • Based on some worldview, culture, rule of law, economic goal and philosophical perspective
  • So autonomous systems are encoded with intent
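The point above can be made concrete. Below is a minimal sketch (my illustration, not material from the course) of how learned weights encode intent: the same decision procedure, applied to the same projected outcomes, reaches opposite conclusions under two different systems of value.

```python
def decide(outcomes, weights):
    """Score projected outcomes against a system of value; act if the net is positive."""
    score = sum(weights[name] * magnitude for name, magnitude in outcomes.items())
    return "move toward" if score > 0 else "move away"

# Projected outcomes of one candidate action (hypothetical numbers).
outcomes = {"profit": 3.0, "privacy_loss": 2.0}

# Two worldviews, encoded as weights.
profit_first = {"profit": +1.0, "privacy_loss": -0.2}
privacy_first = {"profit": +0.3, "privacy_loss": -1.0}

print(decide(outcomes, profit_first))   # the same action, judged positive...
print(decide(outcomes, privacy_first))  # ...or negative, depending on encoded intent
```

Nothing in the code is "ethical" or "unethical" on its face; the intent lives entirely in the weight vector, which is exactly why autonomous systems inherit the worldview of whoever chose the weights.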

13 of 46

Why the rush to AI?

  • A linked chain from software to intent
  • How can we impose systems that bend code-creating and learning systems toward positive intent for our friends and potentially negative intent for the evil-doers?

14 of 46

The good

15 of 46

More precision

16 of 46

Better reliability

17 of 46

Increased savings

18 of 46

Better safety

19 of 46

More speed

20 of 46

21 of 46

22 of 46

23 of 46

Ray Kurzweil

We have the opportunity in the decades ahead to make major strides in addressing the grand challenges of humanity. AI will be the pivotal technology in achieving this progress.

24 of 46

The bad

25 of 46

26 of 46

Stephen Hawking

“Success in creating AI would be the biggest event in human history,…”

“Unfortunately, it might also be the last, unless we learn how to avoid the risks. In the near term, world militaries are considering autonomous-weapon systems that can choose and eliminate targets.”

“…humans, limited by slow biological evolution, couldn’t compete and would be superseded by A.I.”

27 of 46

Bill Gates

“I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.”

28 of 46

Elon Musk

AI is “our greatest existential threat…”

“I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish.”

“I think there is potentially a dangerous outcome there.”

29 of 46

When really smart people get worried

We need to make it a habit to pay attention!

30 of 46

More than 16,000 researchers and thought leaders have signed an open letter to the United Nations calling for the body to ban the creation of autonomous and semi-autonomous weapons.

31 of 46

32 of 46

“…it’s all changing so fast…”

33 of 46

No one before has seen the change you have seen

It is nothing compared to the change that is coming

34 of 46

The ugly

Another fatal Tesla crash reportedly on Autopilot emerges: Model S hits a street-sweeper truck, caught on dashcam

35 of 46

Remember I, Robot & Asimov’s Three Laws

A robot may not injure a human being or, through inaction, allow a human being to come to harm.

A robot must obey orders given to it by human beings except where such orders would conflict with the First Law.

A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

The Laws are all about how to distribute “the good”. But how do we decide how to distribute “the harm” when harm is inevitable?
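The Three Laws can be read as lexically ordered constraints: Law 1 filters first, then Law 2, then Law 3. A toy sketch (my formalization, not Asimov's or the course's) makes the slide's question visible in code, because when every action harms someone, the top filter leaves nothing to choose from.

```python
def permissible(actions):
    """Filter candidate actions by the Three Laws, in strict priority order."""
    # Law 1: never injure a human, or allow harm through inaction.
    survivors = [a for a in actions if not a["harms_human"]]
    # Law 2: obey human orders, unless that conflicts with Law 1.
    obeying = [a for a in survivors if a["obeys_order"]]
    if obeying:
        survivors = obeying
    # Law 3: protect own existence, unless that conflicts with Laws 1 or 2.
    return [a for a in survivors if not a["destroys_self"]] or survivors

actions = [
    {"name": "push bystander", "harms_human": True,  "obeys_order": True,  "destroys_self": False},
    {"name": "refuse order",   "harms_human": False, "obeys_order": False, "destroys_self": False},
    {"name": "shield human",   "harms_human": False, "obeys_order": True,  "destroys_self": True},
]
print([a["name"] for a in permissible(actions)])  # → ['shield human']
```

If every candidate action has `harms_human=True`, the function returns an empty list: the Laws rank the goods but give no rule for distributing inevitable harm, which is precisely the gap the slide points at.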

36 of 46

The ugly (autonomous cars & the trolley predicament)

Ethical questions arise when programming cars to act in situations in which human injury or death is inevitable, especially when there are split-second choices to be made about whom to put at risk.

37 of 46

The ugly (Facial recognition systems)

38 of 46

The ugly (Facial recognition systems)

Ethical questions arise when detecting gender and age from images using AI-based facial recognition software
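One standard way such concerns are quantified is to break a classifier's accuracy down by demographic group and look at the gap. A minimal sketch (illustrative numbers, not real benchmark results):

```python
def group_accuracy(records):
    """Return classification accuracy per demographic group."""
    stats = {}
    for group, correct in records:
        hits, total = stats.get(group, (0, 0))
        stats[group] = (hits + correct, total + 1)
    return {g: hits / total for g, (hits, total) in stats.items()}

# (group, 1 if the prediction was correct else 0): hypothetical audit data.
records = [("A", 1), ("A", 1), ("A", 1), ("A", 0),
           ("B", 1), ("B", 0), ("B", 0), ("B", 0)]

acc = group_accuracy(records)
gap = max(acc.values()) - min(acc.values())
print(acc, "disparity:", gap)  # a large gap flags a fairness problem
```

A system that is 95% accurate overall can still be far less accurate on one group; the per-group breakdown, not the aggregate number, is what exposes the bias.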

39 of 46

The ugly (gap-filling non-human care providers)

AI-based applications could improve health outcomes and quality of life for millions of people in the coming years, but only if they gain the trust of doctors, nurses, and patients.

40 of 46

The ugly (non-human directed education)

Though quality education will always require active engagement by human teachers, AI promises to enhance education at all levels, especially by providing personalization at scale.

41 of 46

The ugly (lights-out economy)

The whole idea is to do something no other human—and no other machine—is doing.

If we all die, it would keep trading!

42 of 46

The ugly (no work for you – reskill?)

In the first machine age the vast majority of Americans worked in agriculture. Now it's less than two percent. These people didn't simply become unemployed; they reskilled.

One of the best ideas that America had was mass primary education. That's one of the reasons it became an economic leader and other countries also adopted this model of mass education, where people paid not only for their own children but other people's children to go to school.

43 of 46

  • Safe exploration: can agents learn about their environment without executing catastrophic actions?
  • Robustness: can we build machine learning systems that are robust to changes in the data distribution, or at least fail gracefully?

44 of 46

  • Avoiding negative side effects: can agents avoid undesired effects on the environment?
  • Avoiding “reward hacking”: can we prevent agents from “gaming” their reward functions?

45 of 46

  • Scalable oversight: can agents efficiently achieve goals for which feedback is very expensive?
  • For example, can we build an agent that tries to clean a room in the way the user would be happiest with, even if feedback from the user is very rare?
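Reward hacking, in particular, is easy to demonstrate. A toy sketch (my illustration, not from the course): an agent rewarded per piece of litter deposited in the bin scores higher by retrieving and re-depositing the same piece than by actually cleaning the room.

```python
def proxy_reward(actions):
    """Proxy reward = number of 'deposit' actions. The intended goal: a clean room."""
    return sum(1 for a in actions if a == "deposit")

honest = ["pick", "deposit", "pick", "deposit"]             # cleans 2 pieces of litter
hacker = ["pick", "deposit", "retrieve", "deposit",
          "retrieve", "deposit"]                            # cleans 1 piece, repeatedly

print(proxy_reward(honest), proxy_reward(hacker))  # hacker scores higher: 2 vs 3
```

The proxy (deposits counted) diverges from the intent (a clean room), and an optimizer will exploit exactly that divergence; this is why reward design is an ethics problem, not just an engineering detail.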

46 of 46

…and so

  • AI adoption and sophistication are speeding up
  • It is an economic imperative outpacing constraints
  • Decision making is being coded into every system and product
  • Decision making overlaps ethics and will be autonomous
  • Forward thinkers are CONCERNED and starting to work this problem

Carbon-based work-units unite!