
Artificial Intelligence: Designing for the Long-term

Chinmay Kulkarni

Fall 2020, MW 09:50-11:10 AM, Remote [Zoom Link]

Office hours by appointment.

Other links:

Sign up as a discussant: Open for dates till Oct 19.

Class committee discussion: google doc. Please feel free to take the discussion elsewhere.

Many courses teach you how to solve a problem with AI (e.g., speech recognition, language “understanding”, or robotic manipulation). This class is different. It focuses on *what* problems AI should solve, and on what the characteristics of a “good” solution are. By taking a “long-term” perspective, it forces us to consider not only what implications a solution might have right away, but also what it might do decades from now.

This course is centered around three themes: 1) how AI shapes us, 2) how AI shapes society, and 3) how to shape AI. We will cover a number of topics across these themes: agency and initiative, AI and power (and, relatedly, bias, fairness, and justice), the nature of infrastructure, co-adaptation, privacy, and more. These topics will be explored with real-life examples from computer vision, automatic speech recognition, data science, recommender systems, social networks, online dating, UI personalization, and others. Finally, we will also consider algorithmic and design tools to improve long-term effects, such as parsimonious machine learning models, design futuring techniques, and even machine learning models that decline to work when it is responsible to do so!

What students do in class

You will:

  1. Complete individual bi-weekly mini-projects that ask you to reflect on your AI-augmented life across a wide variety of domains.
  2. Read many scholarly papers (as well as blog articles and other readings).
  3. Turn in one paragraph about each reading, and participate in class discussions about the readings.
  4. Complete a three-week final project with a partner. Projects accommodate a wide range of technical and design abilities.

On average, this class (including class time) should take about 10 hours/week. This is the first time I am teaching this class, so I don’t have an estimate of which weeks will be particularly difficult or busy. However, I will be happy to take your feedback into account and make changes to the course as we go along.

Week-by-week Schedule

This schedule details what we will do each week. It is designed to improve continuity from week to week, take holidays into account, etc. If you want an overview of the topics we will discuss, see the Topics of Discussion section.

Week 1: Introduction [Theme: how AI shapes us]

Monday Aug 31: No readings due today.

Introduction to class [slides]

Introduction to classmates [slides]

Sign up as a discussant: Open for dates till Sept 16.

Class committee discussion: google doc. Please feel free to take the discussion elsewhere.

Wednesday Sept 2: Reading reflection due today:

  1. Do artifacts have politics? by Langdon Winner

Reminder: write one paragraph about this article. In your writing, you could reflect on whether this article’s argument applies to your technology choice in Assignment # 0.

Turn in your reflection on this Google Doc.

Week 2: Visions of the Future [Theme: How AI shapes us]

Monday Sept 7: Labor Day, no classes. Nothing is due. I recommend you start early on the readings due Wednesday; together they are long.

Wednesday Sept 9:

  1. Utopia? Artificial Intelligence and Life in 2030 [Introduction only. It ends at p. 11; you do not need to read the entire report.]
  2. Dystopia? Anthropological/Artificial Intelligence & the HAI
  3. Myopia? Fairness and Abstraction in Sociotechnical Systems (a somewhat dense reading; please consider it carefully when you write your reflection)

Please post your reflection to Canvas.

In class: We’re building a food recommendation system!

Week 3: Case study: recommender systems [Theme: How AI shapes us, how to shape AI]

Monday Sept 14: Reading reflections due today:

  1. Recommender Systems by Paul Resnick and Hal Varian, March 1997. [Fun fact: Hal Varian also led the early development of Google’s ad-auction mechanisms]
  2. “People who like lattes also like…” Why Do Liberals Drink Lattes? 

Suggestion: In your reflection, consider the effects of using collaborative-filtering-style algorithms for recommendation.
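
If you have not seen collaborative filtering before, here is a minimal, hedged sketch of the neighborhood (“users like you also liked…”) flavor. All names, ratings, and function names are invented for illustration; this is not any particular company’s algorithm.

    # Toy user-based collaborative filtering. All data is invented.
    from math import sqrt

    ratings = {
        "ana":  {"latte": 5, "volvo": 4, "pickup": 1},
        "ben":  {"latte": 4, "volvo": 5},
        "cara": {"pickup": 5, "latte": 1},
    }

    def cosine(u, v):
        """Cosine similarity between two users' rating dictionaries."""
        shared = set(u) & set(v)
        if not shared:
            return 0.0
        num = sum(u[i] * v[i] for i in shared)
        den = sqrt(sum(x * x for x in u.values())) * sqrt(sum(x * x for x in v.values()))
        return num / den

    def recommend(user):
        """Score items the user has not rated, weighted by user similarity."""
        scores = {}
        for other, theirs in ratings.items():
            if other == user:
                continue
            sim = cosine(ratings[user], theirs)
            for item, rating in theirs.items():
                if item not in ratings[user]:
                    scores[item] = scores.get(item, 0.0) + sim * rating
        return max(scores, key=scores.get)

    print(recommend("cara"))  # cara is shown whatever her "taste neighbors" liked

Notice that nothing in this code knows what a latte or a pickup truck is; correlations in past behavior do all the work, which is exactly where the “people who like lattes also like…” bundling in the second reading comes from.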

Sidenote: Lattes are a hot topic. A follow-up discussion is here: The Real Reason Liberals Drink Lattes [This reading is not required]

Wednesday Sept 16: Reading reflections due today:

  1. The Making of a YouTube Radical in the New York Times. Note: this article mentions rape and violence, and contains offensive racial and other slurs. If you are uncomfortable, you may skip this article.
  2. Recommending What Video to Watch Next: A Multitask Ranking System by many authors at Google. [Google owns YouTube]

Suggestion: In your reflection, you could discuss whether YouTube’s new system would have made a difference to the protagonist of the New York Times article.

Mini-project 1, due Sept 20 (extended from Sept 16): Follow a YouTube trail. Start at a video and repeatedly click the “most recommended” video. Where do you end up after 20 videos?

Details: Open YouTube while logged out of your Google account (e.g., in incognito mode). You can start at any video, either from the homepage or by searching for a topic. Watch at least 30 seconds of each video, then click the top recommended video (the first video under “Up Next”). Write two or three paragraphs about the trajectory of your videos: Was there a common theme? Did the videos get more “niche” over time (you can gauge this by the number of views)? Did you see older content over time? What about the ads?
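
The write-up can be done entirely by hand. Purely as an optional aid, here is a small sketch, with invented titles and numbers, of one way to tabulate the trail you record and check whether view counts shrink step by step (one rough proxy for the videos getting more “niche”):

    # Optional helper for Mini-project 1: summarize a hand-recorded trail.
    # Each entry is (step, video title, view count); the data is invented.
    trail = [
        (1, "Intro to sourdough", 2_400_000),
        (2, "Wild yeast, explained", 310_000),
        (3, "Fermentation deep dive", 45_000),
        # ...continue up to step 20 with what you actually observed
    ]

    for step, title, views in trail:
        print(f"{step:>2}  {views:>12,}  {title}")

    # How often did the view count drop from one video to the next?
    drops = sum(1 for (_, _, a), (_, _, b) in zip(trail, trail[1:]) if b < a)
    print(f"View count dropped on {drops} of {len(trail) - 1} transitions")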

Week 4: Privacy [Theme: How AI shapes society, How to shape AI]

Monday Sept 21: Readings discussed in class today:

  1. Protecting Civil Liberties During a Public Health Crisis 
  2. Big data meets Big Brother as China moves to rate its citizens from Wired Magazine

Wednesday Sept 23: Readings discussed in class today:

  1. The Politics of Privacy Theories: Moving from Norms to Vulnerabilities
  2. (optional) Obscurity: A Better Way to Think About Your Data Than 'Privacy' 

Week 5: Power [Theme: How AI shapes society]

Monday Sept 28: Readings discussed in class today (everyday powerlessness):

  1. British Grading Debacle Shows Pitfalls of Automating Government
  2. Dark Patterns after GDPR

Wednesday Sept 30: Readings discussed in class today (what do we do now?):

  1. Don't ask if artificial intelligence is good or fair, ask how it shifts power 
  2. Costanza-Chock (2020) Design Justice, Chapter: Design Practices: “Nothing about Us without Us”

NOT required reading: Epistemic Injustice (you might find Chapter 1 interesting).

Week 6: Infrastructure [Theme: How AI shapes society]

Monday October 5: Readings discussed in class today:

The Infrastructure of Experience and the Experience of Infrastructure: Meaning and Structure in Everyday Encounters with Space

Wednesday October 7: Readings discussed in class today (how infrastructure affects us):

  1. Access Denied: Faulty Automated Background Checks Freeze Out Renters 
  2. Shopping for Sharpies in Seattle: Mundane Infrastructures of Transnational Design

Mini-project due Sunday Oct 11 (changed from the original): Write about one particular AI system/algorithm you encounter every day (or at least frequently), and how your experience with it would differ if you had less privilege or different physical infrastructure (e.g., if you did not speak English, had a vision impairment, or had fewer hours of electricity). The goal is to sketch how physical infrastructure, identity, and privilege interact with AI systems.

Grading for this assignment:

  1. 20 points: Choice of algorithm. You get more credit for choosing a more commonly used AI system/algorithm.
  2. 40 points: Your analysis of how physical infrastructure and digital infrastructure interact (e.g., what physical/societal infrastructure does your AI algorithm need? Map apps/optimized routing work better when you have reliable addresses, for instance).
  3. 40 points: Your analysis of how infrastructure (physical and digital) interacts with privilege, identity, or ability.

The expectation is that you’ll turn in about one page. Please do not write more than two pages (i.e., no more than 1,000 words).

Week 7: Jobs, Jobs, Jobs! [Theme: How AI shapes us, How AI shapes society]

Monday October 12: Readings discussed in class today:

  1. Rosy: The Jobs That Artificial Intelligence Will Create 
  2. Not so rosy: The Humans Working Behind the AI Curtain 

Guiding questions (based on a suggestion by Frank): As you read these readings, consider:

Wednesday October 14: Readings discussed in class today:

  1. Hazy: The future of work: Why are there still so many jobs?

Anuprita says, “This is a great project”: https://100jobsofthefuture.com/

 

Note: Oct 16: Midterm grades only include materials turned in before or on this date.

PART 2: So what should we do?

Week 8: Design methods

Monday October 19: Readings discussed in class today:

  1. Expanding Modes of Reflection in Design Futuring
  2. Optional reading (no reflection required): Designing for Slowness, Anticipation and Re-visitation: A Long Term Field Study of the Photobox

Wednesday October 21: Final project activity. No readings today.

Mini-project (NOTE: slight change based on feedback): For one day, note down all the things you use your smartphone for (or, alternatively, any personal computing device). What “jobs” does this device do for you? What “jobs” do you do for the device (e.g., labeling, explaining, sustaining)? Which parts of these jobs replace human jobs? Which might be enriched by greater human involvement?

Rubric:

  1. 20 points: Credit for demonstrating that you observed a whole day.
  2. 40 points: Description of the jobs done by the device and the jobs you do for the device.
  3. 40 points: Description of how these interact with human jobs: replacement, enrichment, or new human jobs.

The expectation is that you’ll turn in about one page. Please do not write more than two pages (i.e., no more than 1,000 words).

Week 9: Interaction models

Monday Oct 26: Humans-in-the-loop?

  1. Scaling B12 Recommendations with a human-in-the-loop recommender system
  2. Evorus: A Crowd-powered Conversational Assistant Built to Automate Itself Over Time 

Wednesday Oct 28: Machines in the loop?

  1. A Case for Backward Compatibility for Human-AI Teams
  2. "Hello AI": Uncovering the Onboarding Needs of Medical Practitioners for Human-AI Collaborative Decision-Making

Week 10: Algorithmic issues, part 1

Monday November 2:

  1. How to recognize AI snake oil
  2. Can we just wait a minute? Predict Responsibly: Improving Fairness and Accuracy by Learning to Defer [arXiv:1711.06664] (a toy sketch of the “decline to predict” idea appears just below)
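
The “Predict Responsibly” paper trains a model to learn when to defer to a downstream (human) decision-maker. As a much simpler stand-in for that idea, here is a hedged sketch of a classifier that simply declines to predict below a fixed confidence threshold; the function name, data, and threshold are all invented for illustration, not taken from the paper.

    # A toy "decline to predict" rule: abstain when the model is unsure.
    # This is the simple reject-option idea, not the joint training scheme
    # in "Predict Responsibly"; names, data, and threshold are invented.
    def predict_or_defer(prob_positive, threshold=0.8):
        """Return 0/1 when confident, or "DEFER TO HUMAN" otherwise."""
        decisions = []
        for p in prob_positive:          # p = model's probability of class 1
            confidence = max(p, 1 - p)
            if confidence >= threshold:
                decisions.append(int(p >= 0.5))
            else:
                decisions.append("DEFER TO HUMAN")
        return decisions

    print(predict_or_defer([0.97, 0.55, 0.08, 0.71]))
    # -> [1, 'DEFER TO HUMAN', 0, 'DEFER TO HUMAN']

Even this crude rule raises the questions the paper takes seriously: who handles the deferred cases, and is the threshold itself fair across different groups of people?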

Wednesday November 4: Project activity; no readings. 4-minute presentations of each proposal + 4 minutes of feedback per team.

Project slide presentation format suggestion:

Slide 1: What are your goals? E.g. “We want to build a proof-of-concept voice-based assistant that embodies feminist values” Why is it a good goal? E.g. “Feminist voice-assistants may help reduce societal biases and unequal pay”

Slide 2: How are you going to do this? What is the plan? E.g. build it with Voiceflow, based on readings on feminist design by X, Y, Z. Based on reading by A et al in class, we will ensure that.... Etc. Form of solution: It will be a website. We will use PHP to build the backend.

Slide 3: Has anyone else tried something similar before (for your goals in Slide 1)? What have they done? How well does it work? How will your approach in Slide 2 improve on this?

Slide 4: How do you know you’ve succeeded? E.g. “we will try it with three people, and conduct interviews with them to see how their notions of gender-normed household work have changed. We expect that participants....”

Note that what you put here, you need to do! For the final project submission, I expect that you’ll have something quite similar to what you propose (but improved with class feedback). This is particularly important for Slides 2 and 4 -- don’t overpromise!

Week 11: Algorithmic issues, part 2

Monday Nov 9: Readings discussed in class today:

  1. Matthew Kay, Tara Kola, Jessica R. Hullman, and Sean Munson (2016). When (ish) is My Bus? User-centered Visualizations of Uncertainty in Everyday, Mobile Predictive Systems

Wednesday Nov 11: Project group work -- please have a concrete plan of what you want to accomplish during class time!

Optional reading (no reflection!): The Principles and Limits of Algorithm-in-the-Loop Decision Making

Week 12: Algorithms and opacity

Monday Nov 16: Readings:

  1. The Illusion of Control: Placebo Effects of Control Settings 

Wednesday Nov 18: Guest lecture by Motahhare Eslami.

Reading: User attitudes towards algorithmic opacity and transparency in online reviewing platforms

Week 13: Organizational issues

Monday Nov 23:

Readings:

  1. Co-Designing Checklists to Understand Organizational Challenges and Opportunities around Fairness in AI 

Wed Nov 25 is a holiday!!

Week 14: Organizational issues, cont’d

Monday Nov 30: Project work day

Wed Dec 2: Guest lecture by Ken Holstein

Reading:

  1. Improving Fairness in Machine Learning Systems: What Do Industry Practitioners Need? 

Week 15: Final stretch!

Monday Dec 7: Paper reading:

“This Place Does What It Was Built For”: Designing Digital Institutions for Participatory Change

Wednesday Dec 9: Final project presentations

Each team presents its project; 7 minutes total per presentation (5-minute presentation + 2 minutes for questions).

Presentation should include at minimum:

Your final project deliverables (presentation file, code as a zip file, Figma link, etc.) are due Dec 13 (Sunday). You may make changes to your project between the presentation and the final deliverable.

Other dates in the semester

Oct 16: Midterm grades only include materials turned in before or on this date.

Nov 27: Thanksgiving holiday; no classes

Dec 9: Last day of classes; office hours available throughout the semester by appointment.

Topics of Discussion

Here is a starting point for discussion topics; they are not in any particular order. We will collaboratively add to this list so that it is of interest to the class. See below about course committees.

Theme: How AI shapes us

Recommendations, Filters, Radicalization

  1. How recommendation systems work (e.g., Netflix, Amazon, YouTube)
  2. Filter bubbles and the curation of taste
  3. Radicalization -- the case of YouTube

Co-adaptation: Humans adapt to technology over time and change their behavior because of its existence.

  1. What is co-adaptation? How to design systems for co-adaptation
  2. Co-adaptation of autonomy: How managers adopted Slack because it freed them from their desks, and how Slack chained them to their phones instead
  3. Co-adaptation of Culture: The Case of OKCupid and Tinder

Privacy & Obscurity: What is private anymore? Why should you care if you have done nothing wrong? Is privacy different from obscurity? As long as you consent before your private information is shared, it’s OK, right?

  1. Privacy and consent: the long-term consequences of being able to consent to making private information public.
  2. Algorithms for protecting sensitive information: benefits and limitations (a small sketch follows this list)
  3. Algorithms that don’t need private information: are they any good? And isn’t more data always better?
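
For the second item above, one family of such algorithms releases only noisy statistics rather than raw records, which is the intuition behind differential privacy. A minimal, hypothetical sketch (the epsilon value is arbitrary, and real deployments need far more careful analysis):

    # Toy noisy-count release in the spirit of differential privacy.
    # The epsilon value is arbitrary; real deployments need careful analysis.
    import random

    def noisy_count(true_count, epsilon=0.5):
        """Add Laplace(1/epsilon) noise, built from two exponential draws."""
        noise = random.expovariate(epsilon) - random.expovariate(epsilon)
        return true_count + noise

    # e.g., "how many people in this dataset watched video X?"
    print(round(noisy_count(true_count=412)))  # close to 412, but randomized

The benefit is that no single person’s data can move the published number by much; the limitation is that the noise costs accuracy and does not, by itself, address every privacy harm we will discuss.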

Theme: How AI shapes society

AI & Power

  1. What is power? How does it affect justice?
  2. How does AI wield power?
  3. When we say AI wields power, do we just mean “the creators of AI wield power”, or do we mean something else?

Infrastructure: Physical infrastructure is stuff like roads, electricity, and water. What is AI infrastructure?

  1. Why does infrastructure matter? Is there “infrastructure for good” and “infrastructure for evil”?
  2. The Case of Google Cloud AI: rhetoric and incentives

Politics of artifacts:

  1. Do AI systems have their own politics? (i.e. do certain systems always lead to certain societies?)
  2. The Case of Facial Recognition: Does it necessarily lead to a surveillance state?

Jobs & Inequality

  1. Why do we still have jobs anyway? Will we have jobs in the future?
  2. As AI advances, are jobs that need less training the ones that will disappear? (Spoiler alert: no.)

Theme: How to shape AI

Algorithmic concerns:

  1. Long-term effects and accuracy: is it always better to have algorithms that are more accurate? Might more accurate algorithms be more harmful?
  2. Algorithms and secrecy: Many companies say that if they told us exactly how their algorithms work, they would be gamed and stop working. Is that true?
  3. The Case of Simple Scores: can simply adding numbers be better in the long run than famously complex algorithms? (A tiny illustration follows this list.)
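
To make the “simple scores” question concrete, here is an entirely invented illustration of the kind of add-the-points rule the topic refers to; its appeal is not raw accuracy but that anyone affected can read and audit it line by line.

    # A deliberately simple, auditable points-based score. The rules and
    # weights are invented for illustration; this is not a real instrument.
    def simple_score(applicant):
        points = 0
        if applicant["on_time_payments"] >= 12:
            points += 2
        if applicant["years_at_current_address"] >= 2:
            points += 1
        if applicant["recent_defaults"] > 0:
            points -= 2
        return points

    applicant = {"on_time_payments": 14,
                 "years_at_current_address": 1,
                 "recent_defaults": 0}
    print(simple_score(applicant))  # 2 -- and you can explain exactly why

Whether rules this simple hold up better over decades than complex learned models is exactly the long-term question this topic asks.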

The loss function as a design material

  1. Why designers struggle with AI and what we can do about it.
  2. The case of deep semantic vectors: creating a computational design material.
  3. Prototyping for AI: practices in the real world (and how academic concerns miss the mark)

What we must never do / what we must always do

  1. Codes of Conduct: their charms, and their limitations
  2. The Case of ACM and Facial Recognition
  3. Design futuring: basic techniques

Prerequisites

There are no formal prerequisites. However, you should NOT enroll in this course if you do not want to participate in class discussions. (Seriously, you will not like it!) You should also be ready to read a fair amount -- think at least 30 pages for every class.

Mini-Projects/Assignments

Week 1: Add a slide to Assignment # 0. This is a pass/fail assignment.

Week 3: Follow a YouTube trail. Start at a video and repeatedly click the “most recommended” video. Where do you end up after 20 videos?

Week 5: Notice the algorithms you encounter -- for one day, write down every AI system you use or that has an effect on your life. How many do you count?

Week 7: Live without the “smart” parts of your cellphone for a day. No computers either. How did it go?

Week 9: Use all the data that is available to you (e.g., Gmail, Facebook, etc.) to create as detailed an account as possible of what you did on a particular day (e.g., August 17, 2012 -- the exact date will be announced in class). How detailed can you be? What stood out to you about this process?

Final project

Choose a partner and create a project on one of the following three themes:

  1. Create a (very simple) AI system that will be “better” in the long-term than an existing system. You must be concrete about what the system does, and very specific about how it is “better”; what harms does it avoid? What benefit does it offer?
  2. Create a teach-with-demonstration system to teach future students in this class about one particular long-term harm of an AI system.
  3. Create a tool to help non-technical designers/policy makers/everyday people to reason about the long-term effects of a proposed new AI tool (e.g. someone proposes to use always-on geolocation tracking to recommend restaurants you should try). Be clear about who the tool is meant for (e.g. designers).  

End of semester polling

A few questions we might want to poll the class about:

  1. Is designing the “right” AI/technology really about selecting the right designers and architects to build it? (inspired by Beccy Zheng)

Grading

This is a seminar-style class, focused on readings and a project. Here is how you’re graded:

  1. Readings: 30%
     - Reflections on readings: 15%
     - Leading a class discussion: 15% [you must do this once during the semester]
  2. Mini-projects/assignments: 30%
     - Each assignment is worth 6%
  3. Project: 30%
     - There are two milestones: the project proposal (10%) and the final project (20%)
  4. Participation in class discussions: 10%

There is no midterm and no final examination. No assignments/projects are timed (i.e., you don’t have to sit down and finish them in an hour). Assignments/projects are not proctored but *are* checked for plagiarism.

Plagiarism policy: Any material that you adapt, quote, etc. must be attributed, even if you significantly modify it. Use without attribution will be considered plagiarism. To be fair to your classmates, I will report every incident of plagiarism to the university.

Late-day and attendance policy: Reading reflections and discussions do not qualify for late days; mini-projects and the final project do. You are limited to three late days over the semester under normal circumstances. However, I have always made reasonable exceptions; if you need additional time, please talk to me. Caregiving for close family, partners, or roommates recovering from COVID-19 is one such valid exception.

Unless you are sick, attendance and active participation are required. “Preparing for a job interview” is not an excused absence; however, if you have a job interview that overlaps with class time, the absence will be excused.

A few other things…

Course committees: I would like the class to elect or nominate two undergraduate students and one graduate student to act as representatives on a course committee. I will meet with these students every week to seek feedback on how to improve the remainder of the course. We will elect new representatives after the midterm.

(Also, not all suggestions your committee makes can be integrated into the class, given the limited time there is to make revisions. When that happens, it’s my fault, not theirs!)

Staying well: For many years, I worked without enough sleep. First it was because college was hectic, then because my job was hectic, then because grad school was hectic. I thought I was doing great, until someone finally told me I was not. I was stressed, I was unhappy, and I was working all the time but not getting any work done.

This isn’t just a personal story; there is a lot of scientific evidence that sleep is essential to learning. Please heed the evidence -- get enough sleep if you can! That is hard to do during the current pandemic, and with bad time zones for some of you, so if I can help, I would love to.

More generally… take care of yourself. Do your best to maintain a healthy lifestyle this semester by eating well, exercising, avoiding drugs and alcohol, getting enough sleep and taking some time to relax. This will help you achieve your goals and cope with stress.

All of us benefit from support during times of struggle. There are many helpful resources available on campus and an important part of the college experience is learning how to ask for help. Asking for support sooner rather than later is almost always helpful.

If you or anyone you know experiences any academic stress, difficult life events, or feelings like anxiety or depression, we strongly encourage you to seek support. Counseling and Psychological Services (CaPS) is here to help: call 412-268-2922 and visit their website at http://www.cmu.edu/counseling/. Consider reaching out to a friend, faculty member, or family member you trust for help getting connected to the support that can help.

If you or someone you know is feeling suicidal or in danger of self-harm, call someone immediately, day or night:

CaPS: 412-268-2922

Re:solve Crisis Network: 888-796-8226

If the situation is life-threatening, call the police:

On campus: CMU Police: 412-268-2323

Off campus: 911

An imperfect world: We live in a flawed world, and it’s up to us to make it better. For example, CMU needs to improve along many dimensions, including gender diversity and, especially, racial diversity.

Many, many topics in this course, such as bias, fairness, justice, and representation, will hopefully help you design a better world. But that world isn’t here yet. If you find minority authors I have overlooked, researchers I’ve omitted, or simply topics I haven’t thought to include, please tell me. I will work to include them, if not this year, then the next time this course is taught.