1 of 45

Human-Computer Interaction

saadh.info/hci

Week 13 (Tuesday): Cognitive Walkthrough & User Testing

1

2 of 45

Attendance and Agenda

  1. Cognitive Walkthrough
  2. User Testing


3 of 45

Announcements

  • Assignment 3 is overdue
  • Continue working on Milestone 2! Due December 6
    • Assignment 4 (due December 2) is a heuristic evaluation of one screen flow
    • Discuss dividing the work among team members
  • Test 2 on Nov 21
    • Same format as Test 1
    • If you can, read chapters 4-7 of Human-Computer Interaction: An Empirical Research Perspective by I. Scott MacKenzie


4 of 45

Test 2 Next Thursday


5 of 45

1. Perception and Cognition


6 of 45

2. User Research Methods & Qualitative Analysis


Interviews

Contextual Inquiry

Think-Aloud

7 of 45

3. Experimental Research in HCI

Error bars show ±1 standard deviation

8 of 45

4. Analytical Evaluations

Next week!

Last Week

9 of 45

5. Modeling Interactions

Fitts’ Law

Hick-Hyman Law: RT = a + b log2(n + 1), where the log term is the information content of the choice (units: bits)
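As a rough illustration, the Hick-Hyman prediction can be computed directly. The constants a and b below are made up for the example; in practice they are fit to reaction-time data:

```python
import math

# Hick-Hyman law: reaction time grows with the information content
# (in bits) of a choice among n equally likely alternatives.
def hick_hyman_rt(n, a=0.2, b=0.15):
    """Predicted reaction time in seconds for a choice among n options.

    a and b are illustrative constants, not empirically fitted values.
    """
    return a + b * math.log2(n + 1)

print(round(hick_hyman_rt(7), 2))  # log2(8) = 3 bits -> 0.2 + 0.15*3 = 0.65
```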

10 of 45

6. Human-AI Interaction

11 of 45

4. Analytical Evaluations

Today!

Last Week

12 of 45

Postscript


  • Models presented here were of two types:
    • Descriptive models
    • Predictive models
  • The “space” for modeling is likely richer and more diverse
  • Below is a Model Continuum Model (MCM)

13 of 45

Analytical Evaluations


14 of 45

Analytical Evaluations


Heuristic Evaluations

Cognitive Walkthroughs

Claim Analysis

Cognitive Modeling

15 of 45

Cognitive Walkthroughs

  • Walkthroughs are methods where an expert (that would be you, the designer) defines tasks. Then, rather than testing those tasks with real people, you walk through each step of the task and verify that a user would:
    • know to do the step,
    • know how to do the step,
    • successfully do the step, and
    • understand the feedback the design provides.
  • If you go through every step and check these four things, you’ll find most problems with a design.

Polson, P. G., Lewis, C., Rieman, J., & Wharton, C. (1992). Cognitive walkthroughs: a method for theory-based evaluation of user interfaces. International Journal of Man-Machine Studies.


16 of 45

Performing Cognitive Walkthrough

  • Select a task to evaluate (probably a frequently performed, important task that is central to the user interface’s value). Identify every individual action a user must perform to accomplish the task with the interface.
  • Obtain a prototype of all of the states necessary to perform the task. This could be anything from a low-fidelity paper prototype showing each change along the series of actions to a fully functioning implementation.
  • Develop or obtain personas of representative users of the system. You’ll use these to help speculate about user knowledge and behavior.


17 of 45

Identifying Design Flaws using Walkthrough

  • Will the user try to achieve the right effect?
    • Would the user even know that this is the goal they should have?
  • Will the user notice that the correct action is available?
    • If they wouldn’t notice, you have a design flaw.
  • Will the user associate the correct action with the effect that the user is trying to achieve?
    • Even if they notice that the action is available, they may not know it has the effect they want.
  • If the correct action is performed, will the user see that progress is being made toward the solution of the task?
    • Is there feedback that confirms the desired effect has occurred?
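These four questions can be treated as a per-step checklist. The sketch below is a hypothetical way to record them; the function and step names are invented, and any step where the evaluator answers "no" is flagged as a potential design flaw:

```python
# The four cognitive-walkthrough questions, asked at every step of the task.
QUESTIONS = (
    "Will the user try to achieve the right effect?",
    "Will the user notice that the correct action is available?",
    "Will the user associate the action with the desired effect?",
    "Will the user see that progress is being made?",
)

def find_flaws(steps):
    """steps: list of (step_name, four booleans, one per question).

    Returns (step, question) pairs where the evaluator answered 'no'.
    """
    return [
        (step, q)
        for step, answers in steps
        for q, ok in zip(QUESTIONS, answers)
        if not ok
    ]

# Hypothetical walkthrough of one step: the user would not notice the action.
flaws = find_flaws([("Tap the gear icon to open settings", (True, False, True, True))])
print(flaws)  # the second question is flagged for this step
```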


18 of 45

Performing Cognitive Walkthrough


19 of 45

Pros and Cons

  • Pro: Systematic and granular!
  • Pro: Considers just one persona at a time
  • Con: Can surface problems that are not real problems, while overlooking serious issues you believed were not problems


20 of 45

GenderMag Walkthrough

  • Five customizable personas that cover:
    • A user’s motivations for using the software
    • A user’s information processing style (top-down, gathering information comprehensively before acting, vs. bottom-up, pursuing information more selectively)
    • A user’s computer self-efficacy (their belief that they can succeed at computer tasks)
    • A user’s stance toward risk-taking in software use
    • A user’s strategy for learning new technology


21 of 45

GenderMag Walkthrough


22 of 45

Analytical vs. Empirical Evaluations


Usability Testing

23 of 45

Is a usability test “empirical research”?

  • Answer is, it depends
  • Based on classical scientific empirical approach
    • Hypothesis
    • Random sample, randomly assigned, sufficient size
    • Control variables. Definition of independent, dependent variables
    • Control groups
  • But, in practice, this tends to be conducted less formally... It must fit the practical requirements of a business/design setting, which limits the time and resources available


24 of 45

When?

  • Usability tests differ in their goals and in when they are carried out:
  • Formative
    • Carried out during the iterative design process; intended to influence development
    • Exploratory: Evaluate preliminary design, verify assumptions about users, guide design process
  • Summative
    • Carried out on the finished product
    • How good is it? (compared to the initial usability goals; compared to competitors)


25 of 45

Where?


Usability Lab

Online

26 of 45

Input

  • Tools
    • Screen recording, with or without special software running on the test system (e.g., Morae), that records…
    • all mouse interactions (movement and clicks),
    • all keyboard interactions, and
    • custom events
  • Optionally: eye movements
  • Webcams or other video or audio recording devices
  • Remote observation stations
  • Paper, pencil, clipboard


27 of 45

Output

  • Formal report
  • Formal presentation with summary of findings
  • Highlight reel
    • Like sports highlights
    • Short clips of key findings
    • Fair and balanced: show both positive and negative findings


28 of 45

Usability Evaluation Steps

  1. Plan and prepare
  2. Conduct the test
  3. Collect data
  4. Analyze data
  5. Draw conclusions
  6. Document results
  7. Repeat from step 1


29 of 45

Step 1: Plan and Prepare

  • Will the focus be on ease of learning or ease of use? Which users and tasks will the test focus on?
  • Design the tasks
    • Identify usability requirement attributes and levels
    • Develop benchmark and representative tasks
    • Should be quantifiable (measurable)


30 of 45

Step 1: Plan and Prepare

  • Design the test and develop test materials
    • Protocols and procedures
    • Scripts, instruments, props
  • Design and assemble the test environment
    • Recruit/schedule pilot test users (1 - 2)


31 of 45

Step 1: Plan and Prepare

  • Types of data you will collect:
    • Frequency of request for online assistance
      • What did people ask for help with?
    • Frequency of use of different parts of the system
      • Why are parts of system unused?
    • Number of errors and where they occurred
      • Why does an error occur repeatedly?
    • Time it takes to complete some operation
      • What tasks take longer than expected?
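A minimal logging sketch shows how counts and timings like these might be collected during a session. The class and event names here are hypothetical, not part of any particular tool:

```python
import time

class SessionLog:
    """Hypothetical sketch: record timestamped events during a usability session."""

    def __init__(self):
        self.events = []  # list of (timestamp, kind, detail)

    def record(self, kind, detail=""):
        self.events.append((time.monotonic(), kind, detail))

    def count(self, kind):
        """Frequency of a given event kind, e.g., help requests or errors."""
        return sum(1 for _, k, _ in self.events if k == kind)

    def task_duration(self):
        """Seconds between the first and last recorded event."""
        return self.events[-1][0] - self.events[0][0]

log = SessionLog()
log.record("task_start", "checkout")
log.record("help_request", "Where is the cart?")
log.record("error", "clicked wrong button")
log.record("task_end", "checkout")
print(log.count("error"), log.count("help_request"))  # 1 1
```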


32 of 45

Step 2: Conduct the test

  • The experimenter refrains from interfering with the user
    • Don’t make comments while watching the user
    • Refrain from helping the user; be cautious
    • You want to observe the thought process and path of discovery.


33 of 45

Step 2: Conduct the test

  • Types of data:

Data Type    | Objective                                                     | Subjective
Quantitative | Task times, number of steps                                   | Questionnaire scores, user perceptions (e.g., SUS, NASA TLX)
Qualitative  | Verbal protocols, critical incidents, steps done out of order | Open-ended opinions (“What do you think?”)

34 of 45

Step 2: Conduct the test (Picking a Scale)


NASA TLX Scale

System Usability Scale
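The SUS score, for instance, is computed mechanically from ten responses on a 1-5 scale: odd-numbered (positively worded) items contribute (response - 1), even-numbered items contribute (5 - response), and the sum is scaled by 2.5 to give a score from 0 to 100:

```python
def sus_score(responses):
    """System Usability Scale: ten responses on a 1-5 scale -> score from 0 to 100."""
    if len(responses) != 10:
        raise ValueError("SUS has exactly 10 items")
    total = sum(
        (r - 1) if i % 2 == 1 else (5 - r)  # odd items positive, even items negative
        for i, r in enumerate(responses, start=1)
    )
    return total * 2.5

# A participant who fully agrees with every positive item and fully
# disagrees with every negative item gets the maximum score:
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # 100.0
```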

35 of 45

Step 3: Collect the data

  • Quantitative data
    • Benchmark tests
    • Time to complete tasks
    • Number of tasks completed in a set time
    • Number of errors made during task completion
    • User preference scales, rankings, ratings
  • Qualitative
    • See listing of methodologies on next slides.


36 of 45

Step 3: Collect the data (Picking a Procedure)

  • Concurrent Verbal Protocol (“Think-aloud protocol”): Users asked to verbalize their thoughts while working
  • Constructive Interaction (“co-discovery learning”): Two test subjects collaborating in trying to solve tasks while using a computer system
    • Often used with children who might find it difficult to follow instructions
  • Retrospective protocol: Testing and then debriefing, perhaps focusing on critical incidents (positive or negative)


37 of 45

Step 3: Collect the data (More Procedures)

  • Shadowing: Expert user in the task domain explains test user’s behavior
  • Coaching: Test administrator as coach – Good for novice users
  • Question-asking protocol: Test administrator asks questions
  • Teaching: Test user interacts & learns system, then teaches it to a novice


38 of 45

Step 3: Collect the data

  • Interview participants or give them a questionnaire about their impressions of the system.
  • Preplanned questions
    • What did you like best?
    • What would you change?
  • Collection techniques
    • Note taking
    • Collection sheets
    • Video, audio
    • Built-in instrumentation


39 of 45

Step 4: Analyze the data

  • Summarize data
    • Summarize comments
    • Summarize timings
  • Interpret data
    • What does a high number of errors on a task mean?
    • What does a user clicking on the wrong buttons mean?
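Summarizing timings is usually a matter of simple descriptive statistics per task. The task names and times below are invented for illustration:

```python
from statistics import mean, stdev

# Hypothetical task-completion times in seconds, one value per participant.
task_times = {
    "create account": [42.1, 55.3, 48.7, 61.0],
    "checkout": [95.2, 130.4, 110.8, 102.5],
}

# One summary line per task: sample size, mean, and standard deviation.
for task, times in task_times.items():
    print(f"{task}: n={len(times)}, mean={mean(times):.1f}s, sd={stdev(times):.1f}s")
```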


40 of 45

Steps 5, 6, 7

5. Draw conclusions and formulate recommended design changes

6. Document/present results

7. Repeat from step 1

    • For usability test #2, #3, #4...



44 of 45

How to be a good usability study participant?

  • Be HONEST!
  • There is no such thing as a “stupid user” in a usability test.
  • Users let you know what is good or bad about the device
  • If you have ANY usability or other issues, let the team know, even if you think it is minor.


45 of 45

Attendance & Next Time

  • Usability Testing
