
PTA130 Canvas Course

Usability Test Plan

Version 1.0

Jeff Cohen, Colleen Kelemen

January 2023


Table of Contents

Usability Test Plan

Table of Contents

Document Overview

Executive Summary

Methodology

Participants

Training

Procedure

Roles

Trainer (This should be one person)

Facilitator (Moderator)

Data Logger (This should be one person)

Test Participants

Ethics

Usability Tasks

Usability Metrics

Scenario Completion

Critical Instances

Non-critical Instances

Subjective Evaluations

Scenario Completion Time (time on task)

Usability Goals

Completion Rate

Error-free rate

Time on Task (TOT)

Subjective Measures

Problem Severity

Evaluation

References


Document Overview

This document describes a test plan for conducting a usability test during the redesign of the course PTA130: Diseases of the Human Body. This usability test focuses on the redesign of the PTA130 Canvas course shell. The goals of the usability test include establishing a baseline of user performance within Canvas, identifying potential navigation and page design concerns, and identifying issues for specific performance tasks. The results of these tasks will aid the eLearning team in addressing course design concerns and in identifying potential solutions to improve user performance of specific Canvas features.

The usability test objectives are:

Establish a baseline of user performance within Canvas.

Identify potential navigation and page design concerns.

Identify issues related to specific performance tasks.

The research team will conduct two rounds of testing between January and February of 2023. The first round of testing will include three instructor participants, with two facilitators/moderators administering the testing. The second round will include three to five student participants, again with two facilitators/moderators administering the testing.

The primary usability test participant for the first round will be the lead instructor for the PTA130 course. The other participants include the PTA department head and an additional PTA professor. Testing will take place in either the instructor’s main office or in a PTA classroom, whichever is most convenient for the participant and research team.

For the second round of testing, the usability test participants will be students selected from the PTA program. Recruitment will rely on contact information provided by the PTA department head. The testing team will reach out to students through email to gain volunteers for the test. If responses are limited, other recruitment methods will include having PTA instructors post details and contact information from the original recruitment email within their courses, with a set deadline to respond. In addition, students will be offered P.I.T. swag and a gift card to encourage participation. Testing will take place in the third-floor PTA computer lab.

Executive Summary

This usability test plan focuses on specific tasks and ease of navigation for each role. Tasks for the instructor will include finding specific links and pages, creating assignments in Canvas, and grading student work using SpeedGrader. Navigation functions for the instructor will be evaluated for the ease of finding specific pages, the organization of subpages, and editing modules independently. Tasks for the student will include finding specific links and pages, working through an assignment, and opening sections of a specific module. The plan's usability goals emphasize establishing a baseline of user performance within Canvas, identifying potential navigation and page design concerns, and identifying issues related to specific performance tasks.

Methodology

There will be two rounds of usability testing. Round one will require three instructor participants. Testing will take place in either the instructor’s office or PTA classroom on the third floor of the college. The participant will interact with the Canvas prototype using either a laptop or PC. The prototype will be loaded on a Google Chrome browser for testing.

Round two of testing will require three to five student participants. Participants are required to be part of the college’s PTA program. Testing will take place in a PTA classroom on the third floor. The participants will interact with the Canvas prototype using a Windows platform. The prototype will be loaded on a Google Chrome browser for testing.

Outside of identifying potential issues and concerns through observation, the eLearning team will gather user satisfaction data. Participants will be provided with a demographic and post-study system usability questionnaire (PSSUQ). A basic demographic survey will be developed and administered by the eLearning team. The survey will include the participant’s name, age, gender identification, length of time at P.I.T., and school email address.

The PSSUQ was developed by IBM. It asks the participant 19 questions regarding overall satisfaction, system usefulness, information quality, and interface quality. It uses a 7-point Likert scale ranging from strongly agree to strongly disagree to gauge the responses. Lastly, it provides a comments section beneath each question for participants to express their impressions about a given topic (eHealth Observatory, 2011). Questions may be omitted or modified to reflect tasks completed during testing.
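As an illustration of how PSSUQ responses can be summarized, the sketch below computes subscale means using the standard Version 3 item groupings (items 1–8 system usefulness, 9–15 information quality, 16–18 interface quality, 1–19 overall). These groupings are an assumption from the published instrument and would need to be adjusted if the eLearning team omits or modifies questions.

```python
# Sketch of PSSUQ scoring under the assumed standard Version 3 item groupings.
# Responses use a 7-point Likert scale (1 = strongly agree ... 7 = strongly
# disagree), so lower subscale means indicate higher satisfaction.

def pssuq_scores(responses):
    """responses: list of 19 ratings (1-7), in item order 1-19."""
    def mean(items):
        vals = [responses[i - 1] for i in items]  # items are 1-indexed
        return sum(vals) / len(vals)

    return {
        "system_usefulness": mean(range(1, 9)),     # items 1-8
        "information_quality": mean(range(9, 16)),  # items 9-15
        "interface_quality": mean(range(16, 19)),   # items 16-18
        "overall": mean(range(1, 20)),              # items 1-19
    }

# Example: a participant who mostly agrees (low scores = satisfied).
scores = pssuq_scores([2, 1, 2, 3, 2, 2, 1, 2, 3, 3, 2, 4, 3, 2, 3, 2, 1, 2, 2])
```

A spreadsheet would serve equally well; the point is that each subscale is a simple mean, so partial administration only changes which items feed each group.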

Participants

Round one will include three participants. The main eligibility requirement is that the participants are instructors in the PTA program. Participants should have minimal to moderate experience using the learning management system Canvas.

Round two will include three to five student participants. Participants will be recruited through emails, in-class announcements by the instructor, and announcements in all PTA Canvas courses. To be eligible, participants must currently be part of P.I.T.'s PTA program, be able to operate a PC independently, be proficient in English, be available during the testing period, and be able to attend the testing location in person.

Participants for each round will complete tasks that focus on navigation and completing scenarios related to their role as either an instructor or student. Participants will be encouraged to provide honest, verbal feedback while completing the tasks. In addition, participants will be directed to participate in the post-test questionnaire.

To determine eligibility, participants will fill out a short form that includes qualifying questions and contact information. It is expected that participants have a basic knowledge of Canvas because it is the main learning management system of the college. Since Canvas is used in other courses, student participants should have experience completing student-related tasks like turning in assignments and finding the modules page.

Training

The participants will receive an overview of the equipment and testing procedures from the moderator, who will follow a usability test script. As of this writing, all parts of the test environment and simulation are expected to be functional.

Procedure

Participants will take part in the usability test in an unused PTA classroom on the third floor of the P.I.T. campus. A laptop or PC with the prototype Canvas course open will be used in a classroom setting. Participant interactions will be monitored by an eLearning team member seated in the same room. Notes will be taken by the same team member while the participant's actions are recorded through screen-capture video software (e.g., Zoom, Loom, Snagit, or Camtasia), in addition to a camera recording in the room.

The moderator will follow a set script that briefly explains the usability testing process and the concept of concurrent think aloud. Participants will sign a consent form that confirms they understand that: participation is voluntary, the participant can stop at any time, the participant may decide that their results will not be used, the session will be recorded, and that their privacy and identity will be protected.

The moderator will ask the participants if they have any questions before starting the tasks. Participants will review demographic data obtained during the initial recruitment phase and fill out a background information questionnaire. The moderator will explain that the time to complete each task will be measured and that the participant should focus on completing all tasks presented. Exploration outside of tasks should be avoided. At the start of each task, the participant will read its description aloud from a printed copy or note card, then begin the task. For complex, multi-step tasks, the participant will be able to keep and read the note card for reference. Measurement of time on task will begin once the participant starts the given task.

The moderator will prompt the participant to think aloud (concurrent think aloud) to ensure verbal records of their reactions and interactions to the prototype are recorded. In addition, the moderator will observe and keep notes of participant interactions in the data logging sheet.

After finishing the tasks, the participant will complete the PSSUQ. Once the questionnaire is complete, the participant will be offered P.I.T. swag or a $5-10 gift card for their time and feedback testing the prototype. Receipts will be completed in duplicate: one for the participant and one for the eLearning team.

Roles

The roles involved in a usability test are as follows. An individual may play multiple roles, and tests may not require all roles.

Trainer (This should be one person)

Facilitator (Moderator)

Data Logger (This should be one person)

Test Participants

Ethics

All persons involved with the usability test are required to adhere to the following ethical guidelines:

Participation is voluntary, and participants may stop at any time.

Participants may decide that their results will not be used.

Sessions will be recorded only with the participant's signed consent.

Participant privacy and identity will be protected at all times.

Usability Tasks

Tasks for the usability test were created based on needs that were identified after a needs analysis was completed for the course PTA130: Diseases of the Human Body. The usability tasks are functions and actions that are most commonly performed within Canvas. Usability tasks will be different based on the participant’s role in Canvas. Instructor and student roles have different functions available to them within Canvas.

All usability testing will be completed in a prototype environment that is independent of other courses within Canvas. No concurrent activities will affect the usability testing environment or tasks. Data from all usability tests will be used to finalize designs, navigation, and functions of the course shell before being applied in a live course on Canvas.

The order of tasks and their descriptions will be approved by the eLearning and Digital Instructor Coordinator to ensure that the format and content are representative of a real use environment and evaluate the effectiveness of the prototype. Tasks and participants should be approved prior to the start of usability testing.

Usability Metrics

Usability metrics refer to user performance measured against specific performance goals necessary to satisfy usability requirements. Task completion success rates, adherence to dialog scripts, navigation with the fewest steps or keystrokes, error rates, time and ease of recovery from errors, and subjective evaluations will be used. Time-to-completion of scenarios will also be collected.

Scenario Completion

Each task will request that the participant navigate to specific pages or complete common functions associated with their role in Canvas. Tasks will be viewed as completed when the participant verbalizes or indicates they have completed the task. Success or failure will be determined by the moderator and noted in the data log. The moderator will not give any guidance outside of reiterating the task since these tasks should be completed independently. Failure to complete a task will be seen as a critical error and addressed in the prototype after the round of usability testing is completed.

Critical Instances

Critical instances are deviations from the completion of scenario targets. Obtaining or reporting the wrong data value due to participant workflow is a critical error. Participants may or may not be aware that the task goal is incorrect or incomplete.

Independent completion of the scenario is a universal goal; help obtained from the other usability test roles is cause to score the scenario a critical error.  Critical errors can also be assigned when the participant initiates (or attempts to initiate) an action that will result in the goal state becoming unobtainable.  In general, critical errors are unresolved errors during the process of completing the task or errors that produce an incorrect outcome.

Non-critical Instances

Non-critical instances are errors that are recovered from by the participant or, if not detected, do not result in processing problems or unexpected results.  Although non-critical instances can be undetected by the participant, when they are detected, they are generally frustrating to the participant.

These errors may be procedural, in which the participant does not complete a scenario by the most optimal means (e.g., excessive steps and keystrokes). These errors may also be errors of confusion (e.g., initially selecting the wrong function, or using a user-interface control incorrectly, such as attempting to edit an un-editable field).

Non-critical instances can always be recovered from during the process of completing the scenario. Exploratory behavior, such as opening the wrong menu while searching for a function, will be coded as a non-critical instance.

Subjective Evaluations

Subjective evaluations regarding ease of use and satisfaction will be collected via questionnaires, and during debriefing at the conclusion of the session.  The questionnaires will utilize free-form responses and rating scales.

Scenario Completion Time (time on task)

The time to complete each scenario, not including subjective evaluation durations, will be recorded.

Usability Goals

The following sections describe the usability goals for the PTA130 course prototype in Canvas.

Completion Rate

Completion rate is the percentage of test participants who successfully complete the task without critical errors. In other words, the completion rate represents the percentage of participants who, when they are finished with the specified task, have an "output" that is correct. Note: if a participant requires assistance in order to achieve a correct output, then the task will be scored as a critical error and the overall completion rate for the task will be affected.

A completion rate of 100% is the goal for each task in this usability test.

Error-free rate

Error-free rate is the percentage of test participants who complete the task without any errors (critical or non-critical). A non-critical error is an error that would not have an impact on the final output of the task but would result in the task being completed less efficiently.

An error-free rate of 80% is the goal for each task in this usability test.

Time on Task (TOT)

The time to complete a scenario is referred to as "time on task." It is measured from the time the participant begins the scenario to the time they signal completion.
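The three goals above can be tallied directly from the data log once each session is scored. The sketch below assumes one record per participant per task; the field names (critical, noncritical, seconds) are hypothetical, not part of this test plan.

```python
# Minimal sketch of tallying completion rate, error-free rate, and mean time
# on task from a data log. Record field names are hypothetical placeholders.

def summarize_task(records):
    """records: list of dicts, one per participant attempt at a single task."""
    n = len(records)
    completed = sum(1 for r in records if r["critical"] == 0)
    error_free = sum(1 for r in records if r["critical"] == 0 and r["noncritical"] == 0)
    return {
        "completion_rate": 100.0 * completed / n,   # goal: 100%
        "error_free_rate": 100.0 * error_free / n,  # goal: 80%
        "mean_time_on_task": sum(r["seconds"] for r in records) / n,
    }

# Example log for one task across three participants.
log = [
    {"critical": 0, "noncritical": 0, "seconds": 45},
    {"critical": 0, "noncritical": 2, "seconds": 80},
    {"critical": 1, "noncritical": 1, "seconds": 120},
]
summary = summarize_task(log)
```

Note that, per the definitions above, any critical error counts against both rates, while non-critical errors affect only the error-free rate.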

Subjective Measures

Subjective opinions about specific tasks, time to perform each task, features, and functionality will be surveyed. At the end of the test, participants will rate their satisfaction with the overall system. Combined with the interview/debriefing session, these data are used to assess the attitudes of the participants.

Problem Severity

The severity of a usability problem is a combination of three factors:

The frequency with which the problem occurs: is it common or rare?

The impact of the problem if it occurs: will it be easy or difficult for users to overcome?

The persistence of the problem: is it a one-time problem that users can overcome once they know about it, or will users repeatedly be bothered by it?

Finally, one needs to assess the impact of problems that have the potential to affect multiple departments within the college. Certain usability problems can severely disrupt a course and create a backlog for departments that must find solutions to errors that are generally time sensitive. This, in turn, may discourage instructors from using Canvas. Even though severity has several components, it is common to combine all aspects of severity into a single severity rating as an overall assessment of each usability problem, in order to facilitate prioritizing and decision-making.

The following 0 to 4 rating scale can be used to rate the severity of usability problems:

0 = I don't agree that this is a usability problem at all

1 = Cosmetic problem only: need not be fixed unless extra time is available on project

2 = Minor usability problem: fixing this should be given low priority

3 = Major usability problem: important to fix, so should be given high priority

4 = Usability catastrophe: imperative to fix this before product can be published

(Nielsen, 1994)
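When the consolidated problem list is assembled after each round, the Nielsen scale above gives a natural sort key for deciding what to fix first. The sketch below is a hypothetical illustration; the problem names are invented examples, not findings.

```python
# Hypothetical sketch: rank consolidated usability problems by Nielsen's
# 0-4 severity scale, highest severity first, for prioritization.

SEVERITY_LABELS = {
    0: "Not a usability problem",
    1: "Cosmetic problem only",
    2: "Minor usability problem",
    3: "Major usability problem",
    4: "Usability catastrophe",
}

def prioritize(problems):
    """problems: list of (name, severity) tuples; returns fix-first order."""
    ranked = sorted(problems, key=lambda p: p[1], reverse=True)
    return [(name, sev, SEVERITY_LABELS[sev]) for name, sev in ranked]

# Invented example problems for illustration only.
problems = [
    ("Module subpage hard to locate", 3),
    ("Inconsistent button labels", 2),
    ("SpeedGrader link broken", 4),
]
ranked = prioritize(problems)
```

In practice the single rating per problem would come from averaging or reconciling the evaluators' individual ratings before ranking.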

Evaluation

The UX Test Notes and Post-test Questionnaires completed by the participants will be evaluated using formative qualitative data analysis. Hartson and Pyla (2012) state that "formative analysis of qualitative data is the bread and butter of UX evaluation. The goal of formative data analysis is to identify UX problems and causes (design flaws) so that they can be fixed, thereby improving product user experience."

First and foremost, all the data should be set up in a way that displays problem instances or critical instances. A problem instance is defined as “…a single occurrence of an encounter with a given problem by a given user, inspector, or participant” (Hartson and Pyla, 2012).  A critical instance is an “…event that occurs during user task performance or other user interaction, observed by the facilitator or other observers or sometimes expressed by the user participant that indicates a possible UX problem” (Hartson and Pyla, 2012). The critical instance is the most important aspect of formative qualitative analysis.

It is essential to isolate critical instances and break up large problems into a set of smaller individual instances. The opposite can also be true where one problem may be seen in several instances by more than one participant in a study. The process continues by consolidating all the instances under the one observed problem (Hartson and Pyla, 2012).
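The isolate-then-consolidate step described above amounts to grouping logged instances by the underlying problem they share. The sketch below illustrates that grouping; the instance tuples and problem names are hypothetical examples.

```python
from collections import defaultdict

# Hypothetical sketch: consolidate individual problem instances under the
# single observed problem they share, as described in the analysis above.

def consolidate(instances):
    """instances: list of (problem_name, participant, note) tuples."""
    grouped = defaultdict(list)
    for problem, participant, note in instances:
        grouped[problem].append((participant, note))
    return dict(grouped)

# Invented example instances for illustration only.
instances = [
    ("Assignments link hidden", "P1", "scrolled past it twice"),
    ("Assignments link hidden", "P3", "used search instead"),
    ("Rubric not visible", "P2", "asked where the rubric was"),
]
grouped = consolidate(instances)
```

Grouping this way makes the frequency component of severity visible immediately: the number of instances filed under a problem is the count of participants it affected.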

Problems should be given names in order to expedite conversations about them. It is imperative to understand each problem for what it is, gain insight into its causes (and possible solutions), and be aware of its relationship to other potential problems (Hartson and Pyla, 2012).

References

Assistant Secretary for Public Affairs. (2020). Templates & downloadable documents. Usability.gov. Retrieved January 6, 2023, from www.usability.gov/how-to-and-tools/resources/templates.html

eHealth Observatory. (2011). Part 2: Post Study System Usability Questionnaire (PSSUQ) (3rd ed., pp. 3-8). University of Victoria.

Hartson, R., & Pyla, P. S. (2012). The UX book: Process and guidelines for ensuring a quality user experience. Morgan Kaufmann Publishers Inc.

Nielsen, Jakob. “Severity Ratings for Usability Problems.” Nielsen Norman Group, 1 Nov. 1994, www.nngroup.com/articles/how-to-rate-the-severity-of-usability-problems/. Accessed 6 Jan. 2023.