Human Computer Interaction in AI

University of Zürich

Fall 2023

Where/When

  • Binzmühlestrasse 14 – Room: BIN-2.A.10
  • Time: Tue & Thu, 8:00 AM - 9:45 AM, September 19 - December 21, 2023
  • My office: BIN-2.B.07

Staff

  • Daniel M. Russell, Instructor
  • Office Hours: set up an appointment by email, or catch me immediately after class. I’ll be around until noon each Tuesday and Thursday for conversations and discussions.
  • Course Assistant:  Ibrahim Al Hazwani   alhazwani@ifi.uzh.ch

Syllabus

Outline of the course - HCIAI - Zürich Fall 2023

This document:  bit.ly/UZHHCIAI 


Amershi, S., Weld, D., Vorvoreanu, M., Fourney, A., Nushi, B., Collisson, P., Suh, J., Iqbal, S. T., Bennett, P., Inkpen, K., Teevan, J., Kikin-Gil, R., and Horvitz, E. (2019) Guidelines for Human-AI Interaction (CHI 2019). { General background on HAI guidelines from a few years ago.  Worth reading as it’s a common reference in the HAI field. }

2023 AI Index Report, Stanford (April 2023). { Read the TOP TAKEAWAYS from the first page. }

ASSIGNMENT #1  handed out (due next week: Sep 28)

Readings:

Ben Shneiderman and Pattie Maes. Direct Manipulation vs. Interface Agents. CHI 1997 (a famous debate about the role of AI agents vs. human-directed control)

Eric Horvitz. Reflections on Challenges and Promises of Mixed-Initiative Interaction. AI Magazine 28(2), Special Issue on Mixed-Initiative Assistants (2007) (what will work in designing interactions with AI agents using interleaved actions by computers and people)


Google Handbook on Mental Models { fairly short, easy read that defines mental models in the context of AI systems. }


        
ASSIGNMENT #1  due before class today


ASSIGNMENT #2 handed out today; due Oct 5

Readings:

Unpredictable Black Boxes are Terrible Interfaces, by Maneesh Agrawala


ASSIGNMENT #2  DUE today

ASSIGNMENT #3  Wizard of Oz assignment handed out (group project), due Oct 19

Readings:

Lazar, S. and Nelson, A. AI Safety on whose terms? Science, vol. 381, no. 6654.


Kocielnik, R., Amershi, S., and Bennett, P. Will You Accept an Imperfect AI? Exploring Designs for Adjusting End-User Expectations of AI Systems (CHI 2019).

 

Cai, C., et al. "Hello AI": Uncovering the Onboarding Needs of Medical Practitioners for Human-AI Collaborative Decision-Making. CSCW (2019).

 

Transit blog (Canada): Can we make Montreal’s buses more predictable? No. But machines can. A case study of Montreal transit using ML to improve predictions of bus arrival times.

Pappas, S. Data Fail! How Google Flu Trends Fell Way Short (LiveScience.com, 2014)



  • How to create an explanation, and why we need to do so.


   Pre-recorded class lecture (additional reading):

  • Collaboration and Explanation in AI (Carrie Cai)

   Three short videos from CHI:

  • Evaluating Large Language Models (10 mins)
  • User Perceptions about Biases for Human-Centered XAI (8 mins)
  • "Help Me Help the AI": Understanding Explainability (8 mins)

Readings:

Millecamp et al. (2019) To explain or not to explain: the effects of personal characteristics when explaining music recommendations. IUI 2019: 397-407

Jonathan Dodge, Q. Vera Liao, Yunfeng Zhang, Rachel K. E. Bellamy, and Casey Dugan. 2019. Explaining models: an empirical study of how explanations impact fairness judgment. In Proceedings of the 24th International Conference on Intelligent User Interfaces (IUI ’19), 275–285. [Read blog posts]

 


Building handoffs into AI systems: fast, intermediate, slow

Readings:

Green, B., & Chen, Y. (2019). The Principles and Limits of Algorithm-in-the-Loop Decision Making. Proceedings of the ACM on Human-Computer Interaction, 3(CSCW), 1-24.

Shagun Jhaver, Iris Birman, Eric Gilbert, and Amy Bruckman. 2019. Human-Machine Collaboration for Content Regulation: The Case of Reddit Automoderator. ACM Transactions on Computer-Human Interaction (TOCHI) 26, 5: 31:1–31:35. [Read blog posts from other students commenting on this]

Landolfi, Nicholas C. and Anca D. Dragan. “Social Cohesion in Autonomous Driving.” 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (2018): 8118-8125.

 

ASSIGNMENT #3 DUE today



 


  • Tues Nov 7: VUIs Video (à la James Giangola) (part 2) - discuss Assignment #4

ASSIGNMENT #4  DUE today

Readings: Bender, Emily M., et al. On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency. 2021.


                (with Steven Johnson, recorded)

FINAL PROJECT ASSIGNMENT handed out today


  • Note that your Final Project Team is due today in the spreadsheet

  • Thu Nov 23: Creativity and Ethics Video (with Kelly Moran), 15 mins TRT

     Creativity and AI (with Ed Feigenbaum), 75 mins TRT, but only watch from 5:20 to 46:00 (40:40 of watch time, though you can speed it up to 1.25X realtime, making it around 30 minutes).

     [Dan away this day]

Reading: Spellburst, an LLM-driven interactive canvas for visual artists (T. Angert, et al.). Summary

 

More reading, if you’re curious:

Deep Dream GitHub (IPython Notebook)

Hayes, B. Computer vision and computer hallucinations. American Scientist.

Field Guide for Making AI Art Responsibly, Medium post by Claire Leibowicz and Emily Saltz. Points to the Field Guide.

Google AI Turns Text Into Images (PetaPixel) - overview

Imagen Outperforms DALL-E 2 (Medium post by Teemu Määttä)

AI-Designed Drugs, Financial Times article.

Google’s Imagen (Google’s website). Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding, technical paper by Chitwan Saharia, et al. DrawBench spreadsheet (prompts for images).

Copyright issues caused by Stable Diffusion algorithms? Medium post by Aaron Brand.

Stable Diffusion (Hackaday article, Wikipedia on Stable Diffusion; Tweet about SD for clothing and animation)

Reading: Sparks of Artificial General Intelligence: Early experiments with GPT-4, Sébastien Bubeck, et al. (March 2023, arXiv). YouTube short summary.

        Is AGI real?  What would it take for it to be really real?  


Extra lecture if you’re interested: Accessibility and AI  (Merrie Morris, Google AI)  

 


See the Final Project Spreadsheet and the Feedback form (aka bit.ly/UZH-HCIAI-form).

Assignments

Assignment #1: assigned Sep 19, due Sep 28  (1.5 weeks)

Listen to a podcast and write an essay on a prompt

Assignment #2: assigned Sep 28, due Oct 5  (1 week)

Generative AI

Assignment #3: assigned Oct 5, due Oct 19 (2 weeks)  

Interactive AI

Assignment #4:  assigned Oct 24, due Nov 7 (2 weeks)

        Data visualization for AI

Final project: assigned Nov 14, due Dec 19/21 (5 weeks)

        Your choice of project (but check with me first)  

Prerequisites for the course: Basic HCI knowledge, including some background about what a usability study is and how to identify implicit user needs. You don’t need to know how to program, but it would be very helpful. (And you should have a willingness to explore low-code implementations.)

4 assignments and a final project:  

Assignment 1: 15% - analysis task

Assignment 2: 15% - simple implementation of AI diagramming task

Assignment 3: 15% - needs analysis of systems with AI components

Assignment 4: 15% - a field study of AI needs / system

Final project: 40% - small team (2-3 people) building or analyzing an AI system, delivered as a final presentation (5-10 mins each, depending on schedule). Final dates: Dec 19 and 21, 2023, in class. (Half of the class will present on the 19th; the other half on the 21st.)

Learning goals for the class

  • Know the language and vocabulary used around AI and ML
  • Understand what human-centered AI (HAI) is and what values it entails
  • Be able to design, prototype, and research interactions with AI systems
  • Be a better AI practitioner: know how to apply human-centered values in your own work, understand the responsibility that comes with building ML systems, and be able to evaluate datasets, systems, and products
  • Challenge the perception and practice of AI and recognize the importance of human-centered design for the future of AI

Flow of each week’s class

  • Reflection on last week's class: “What were your takeaways?” - 10 min
  • Lecture (with lots of discussion and feedback) - 50 min
  • Break - 5 min
  • Activity (interspersed with lecture) - 30 min
  • Debrief form on the way out of class (a form asking a basic question from the lecture) - 5 min

Post-class survey

QR code for the post-class survey