1 of 36

KUDO Interpreter Assistant (Beta)
Usability Discovery Findings – Phase 1

APRIL 2021

2 of 36

USABILITY DISCOVERY PURPOSE

By bringing our users to the center of the design process we can:

• Give all teams a collective understanding of user pain points

• Perform unbiased research

• Engage and learn from actual users

• Provide results quickly – at a fraction of the cost of building first / finding issues later

• Deliver actionable recommendations

• Discover new ideas for features–or products!

Simply put, we’re capturing the voice of the user!

3 of 36

BACKGROUND AND OBJECTIVES

OBJECTIVES

  • Evaluate the creation and modification of a glossary through the lens of ease of use and understandability for users

  • Determine the difficulties users encounter when creating a glossary and updating custom terms

  • Understand users’ expectations when creating a glossary

BACKGROUND

KUDO has developed a new platform to quickly generate custom glossaries using Machine Learning and Natural Language Processing, saving interpreters valuable preparation time. The application is in Beta, and the team wants feedback on the overall usability of the application before releasing it to a wider audience.

4 of 36

FINDINGS OVERVIEW

• Conducted task-based moderated user testing to find big gaps in usability

• Most of the insights here are UI-focused, not about utility or the quality of the AI / ML translation.

• Should be reexamined broadly:

• Form Fields

• Glossary Table and UI

• Color Contrast to meet AA (minimum) WCAG compliance

• Hit the usability-findings saturation point with about 3 internal KUDO employee users (we found most of the big usability themes after 3 sessions; however, a pilot test and extra participants are always helpful to plan for – we can always cancel if we overbook)

• Even at twice the number of test users needed, twice the amount of time needed, first-time testing at KUDO, and COVID mixed in, we gained a lot of insights with no development or design costs

5 of 36

TEST PLAN RECAP

OBJECTIVE: Uncover any large usability issues with uploading PDFs and adding custom terms

METHOD: Think Aloud Protocol / Ease-of-Use Survey

Qualitative Moderated Remote Study / 5 participants (+/- 1)

Internal KUDO Interpreters (we may have some test bias, as some know about this application)

Phase 1 - 3 Tasks

• Create glossary / Upload one source document

• Upload second source document (skipped this task)

• Add custom term

ANALYZE SESSIONS

DESIGN RECOMMENDATIONS

SHARE OUT

6 of 36

DISCOVERY FINDINGS

Task 1 – Create a Glossary

7 of 36

Task 1 – Create a Glossary – Opportunities for Improvement

KEY FINDINGS

1. "Project" was treated as synonymous with "Glossary" by most users.

Recommendation: replace "project" with "glossary" to minimize confusion.

2. "Select Project Type" gave most users pause, but knowing they were uploading a document let them dismiss the other labels.

Recommendation: reexamine the need for this step, allow one glossary to have multiple media sources, and finesse the label so it speaks to "media" rather than a "project". Also add assistive text to these options.

3. "Blank Project" created the most confusion of all the "Project Types", and "Keywords" was unclear but quickly dismissed by most users.

Recommendation: guide users with more assistive text.

8 of 36

Task 1 – Create a Glossary – Opportunities for Improvement

KEY FINDINGS

1. "Project Language" and "Translation Language" either stumped users or prompted them to ask us what the labels referenced.

Recommendation: interpreters confidently told us "Source Language" and "Target Language" are part of their daily vernacular, so we should use their wording here.

2. Assistive text next to form labels helped users understand what to enter or what a field is used for. When it was absent, some looked for it or remarked on how other assistive text had aided them.

Recommendation: guide users with more assistive text near form labels.

9 of 36

Task 1 – Create a Glossary – Opportunities for Improvement

KEY FINDINGS

1. 3 of 6 users typed "EN > FR" by hand in the description.

Recommendation: perhaps our Source > Target Language selector should follow the visual structure of "EN > FR" to mirror interpreters' real-world usage.

2. Participants in a test situation often fill out everything. In a real-world scenario, people tend to fill out as little as possible, or only what is needed at the time.

Recommendation: mark optional fields as optional.

10 of 36

Task 1 – Create a Glossary – Opportunities for Improvement

KEY FINDINGS

1. Almost all users stuttered with the file drop zone; it gives very little indication that a file is waiting to be dropped.

Recommendation: enhance the hover effect with more pronounced visual indicators, radically change the language, and show the "to be released" file names in the drop zone (a minimal sketch follows).

2. Some participants missed the uploaded files because the icons appeared below the browser edge, where they had not yet scrolled.

Recommendation: move the visual indicators into the drop area or give a notification of success.
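A minimal sketch of that feedback, assuming a plain HTML drop zone; the element ID, class name, and the uploadFile() hand-off are hypothetical placeholders, not the actual KUDO markup or API. Note that browsers only expose the number of dragged files while hovering; file names become available on drop.

```typescript
// Sketch: make the drop zone react while files hover over it and confirm success in place.
const dropZone = document.getElementById("glossary-drop-zone") as HTMLElement;

dropZone.addEventListener("dragover", (event: DragEvent) => {
  event.preventDefault(); // required so the browser allows the drop
  dropZone.classList.add("drop-zone--active"); // pronounced visual state, not a subtle hover
  const count = event.dataTransfer?.items.length ?? 0;
  dropZone.textContent = `Release to upload ${count} file(s)`;
});

dropZone.addEventListener("dragleave", () => {
  dropZone.classList.remove("drop-zone--active");
  dropZone.textContent = "Drag source documents here";
});

dropZone.addEventListener("drop", (event: DragEvent) => {
  event.preventDefault();
  dropZone.classList.remove("drop-zone--active");
  const files = Array.from(event.dataTransfer?.files ?? []);
  // Confirm success inside the drop area itself, so users don't have to scroll to find it.
  dropZone.textContent = `Uploaded: ${files.map((f) => f.name).join(", ")}`;
  // files.forEach(uploadFile); // hypothetical hand-off to the real upload call
});
```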

11 of 36

Task 1 – Create a Glossary – Opportunities for Improvement

KEY FINDINGS

“I don’t recognize the icon as a PDF icon, so I’m not sure if it uploaded or what.”

Participant 4

1. Some users had high expectations for visual feedback or icons for the uploaded documents.

Recommendation: use the PDF icon we use on the glossary page once the system recognizes the file as a PDF.

“Oh it’s down here? I’d expect a popup window and be able to drop it in like 95% of websites out there.”

Participant 4

2. Some users were surprised that the drop zone sits at the bottom of the page instead of next to the "upload documents" radio button, and they had higher expectations for the drop zone UI.

Recommendation: relocate the drop area / URL entry within the form flow. Use the PDF icon once the file is recognized as a PDF by the system.

"I'd expect the drop area to be higher up by the 'upload documents' selection."

Participant 6

12 of 36

Task 1 – Create a Glossary – Opportunities for Improvement

KEY FINDINGS

• Once the form was complete, most users had to hunt for the next step to finish the page. We had to tell some users to scroll up to save, because the button is hidden above the fold due to the length of the form.

Recommendation: place a purposefully redundant "Save" option at the bottom of the form page (and keep the current button), since that is where the user ends up after completing the form.

“I’m at the bottom of the page, why would I look again at the top of the page?”

Participant 3

13 of 36

Task 1 – Create a Glossary – Opportunities for Improvement

KEY FINDINGS

• This wasn't commented on directly in tests, but there are accessibility issues that don't pass the minimum ratio for WCAG color contrast.

Recommendation: this has already been ticketed for adjustment / review.

"Many things are hard to read, and very light – and I have good eyes."

Participant 6

• Some users were uncertain whether elements were interactive because the color difference between the inactive state and the hover state was so slight.

Recommendation: we should use the contrast difference in a system like Google Material as a benchmark for hover difference and adjust it for our color scheme (a small contrast-check sketch follows). We should also consider that many users will be on inexpensive company laptops with a narrow color gamut (unlike a Mac Retina display), often under harsh fluorescent lighting.

Inactive button color

Hover button color
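For reference, WCAG derives its contrast ratio from relative luminance; a small check like the one below can flag combinations that miss the AA minimums (4.5:1 for normal text, 3:1 for large text and non-text UI components). The hex values are placeholders, not the product's actual palette.

```typescript
// Sketch: compute the WCAG contrast ratio between two colors (hex values are placeholders).
function srgbChannel(value: number): number {
  const s = value / 255;
  return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
}

function relativeLuminance(hex: string): number {
  const [r, g, b] = [1, 3, 5].map((i) => parseInt(hex.slice(i, i + 2), 16));
  return 0.2126 * srgbChannel(r) + 0.7152 * srgbChannel(g) + 0.0722 * srgbChannel(b);
}

function contrastRatio(hexA: string, hexB: string): number {
  const [lighter, darker] = [relativeLuminance(hexA), relativeLuminance(hexB)].sort((a, b) => b - a);
  return (lighter + 0.05) / (darker + 0.05);
}

// Example: light gray label text on a white background fails AA for normal text.
const ratio = contrastRatio("#9b9b9b", "#ffffff");
console.log(ratio.toFixed(2), ratio >= 4.5 ? "passes AA (normal text)" : "fails AA (normal text)");
```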

14 of 36

Task 1 – Create a Glossary – Opportunities for Improvement

KEY FINDINGS

1. The "Generate Glossary" step was unexpected for 6 of 7 users: the act of uploading, or the "creating project" spinner, already seemed to be processing something that users considered "glossary creation."

Recommendation: have "create project" trigger the "generate glossary" action, and remove the separate "generate glossary" trigger for the user.

2. In cases where the "Creating Project" spinner ran longer than about 5 seconds, users expected feedback.

Recommendation: we should show a progress bar, progress update, or similar for longer wait times, so the user knows the system is operating (a minimal sketch follows). (Dan's note: I have global guidelines if we want to systematize user wait messages.)
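As a rough sketch of escalating wait feedback, assuming a simple status element; the element ID, timings, message copy, and the createProject() call are hypothetical placeholders, not the real KUDO API.

```typescript
// Sketch: escalate feedback the longer an operation runs (names and copy are placeholders).
const status = document.getElementById("creation-status") as HTMLElement;

async function createProjectWithFeedback(createProject: () => Promise<void>): Promise<void> {
  status.textContent = "Creating project…";
  // After ~5 s of spinner, reassure the user that the system is still working.
  const stillWorking = window.setTimeout(() => {
    status.textContent = "Still working – extracting terms from your documents…";
  }, 5000);
  // After ~20 s, set an expectation about how long it may take.
  const takingLonger = window.setTimeout(() => {
    status.textContent = "This can take up to a minute for large documents…";
  }, 20000);

  try {
    await createProject(); // the real back-end call would go here
    status.textContent = "Glossary generated.";
  } catch {
    status.textContent = "Something went wrong – please check your connection and try again.";
  } finally {
    clearTimeout(stillWorking);
    clearTimeout(takingLonger);
  }
}
```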

“I wasn’t sure if I lost connection or what…”

Participant 1

"I was thinking it would generate the glossary when I hit 'save project'…"

Participant 2

"Once I tap on 'upload document,' I expect upload activities to happen…"

Participant 4

"It would be good to know how long it will take."

Participant 5

15 of 36

Task 1 – Create a Glossary – Opportunities for Improvement

KEY FINDINGS

1. "Processing Glossary" also leaves the user uninformed about the variable length of the process.

Recommendation: we should have a progress bar, progress update or similar for longer wait times, so user knows system is operating.

“I would expect 20 seconds. This is already a bit long…”

Participant 2

"I should be able to add my own as an option. I am trying to type and it won't let me."

Participant 2

2. Some users expected to be able to select something like "Add New" from the droplist.

Recommendation: adding "Add New" as a persistent option at the top of the list will drive engagement. (Dan's note: I did learn, after 20+ uses, that you can type text directly into the field, but that is a very uncommon interaction for a dropdown menu.)

“It’s definitely my wi-fi…”

Participant 5

(Dan’s note: it wasn’t)

16 of 36

DISCOVERY FINDINGS

Task 3 – Add a Custom Term To The Glossary Table

17 of 36

Task 3 – Add a Custom Term

KEY FINDINGS

• Our Glossary Table got a lot of passionate feedback, and we didn't really even test the UI's functionality.

Recommendation: we should revisit this functionality broadly, with a robust understanding of 1) curation, 2) training, 3) in-session use, and 4) machine-learning expectations.

"I would expect a bigger visual distinction of the columns… italic, color" – Participant 3

"I don't understand 'edit mode.'" – Participant 1

"I missed the 'Add term' button" – Participant 2

"I'm clicking 'Add Term' and nothing is happening" – Participant 2

"It takes a lot of concentration to look at" – Participant 3

"I might want French in the left column." – Participant 3

"Can I see how each term was translated?" – Participant 3

"I did not see the delete button here." – Participant 3

"I did not see Edit Mode." – Participant 2

"What translation engine was used for this?" – Participant 2

"I want to be able to quickly delete terms I know" – Participant 6

18 of 36

Task 3 – Add a Custom Term to Glossary UI

KEY FINDINGS

• The actual Edit Mode toggle was missed by 6 of 7 users.

Recommendation: we should give feedback when a user clicks something that depends on another function happening first (sketched after the quote below).

• Edit Mode's value was viewed as low, if it was recognized at all, since curation is an ongoing task / artform.

Recommendation: we should have a tertiary-level action of "lock this glossary", or just remove "Edit Mode" for the time being. It's an obstacle, not a feature, for creating or curating a glossary. We need to increase flexibility overall for removal, editing, and renaming.

• The Add Term button was either completely missed or found only after a significant delay by 6 of 7 users.

Recommendation: we should, at minimum, label this button.

“We are always editing”

Participant 3
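A rough sketch of the "explain a blocked action" idea; the element IDs and message copy are hypothetical placeholders. When Edit Mode is off, a click on the table says what to enable instead of silently doing nothing.

```typescript
// Sketch: explain a blocked action instead of ignoring the click (names are placeholders).
const table = document.getElementById("glossary-table") as HTMLElement;
const editToggle = document.getElementById("edit-mode-toggle") as HTMLInputElement;
const hint = document.getElementById("glossary-hint") as HTMLElement;

table.addEventListener("click", () => {
  if (!editToggle.checked) {
    // The user tried to edit while Edit Mode is off: say so, and point at the prerequisite.
    hint.textContent = "Turn on Edit Mode to change or add terms.";
    editToggle.focus(); // draw the eye to the toggle that most users missed
  }
});
```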

19 of 36

Task 3 – Add a Custom Term to Glossary UI

KEY FINDINGS

  • Some users didn’t notice when an empty space appeared in the glossary table.

Recommendation: we should have all fields open awaiting user entry

"I would expect my cursor to be here, blinking, waiting for my input"

Participant 3

  • 5 of 6 users entered a term only to have the system immediately delete it when they didn't hit "Enter." Some users entered the sample text 3 to 4 times before understanding the system's limitation.

Recommendation: save the term as it's typed (at least on the front end – this is a common expectation), and allow both "Enter" and clicking out of the field to save the content. Tabbing to the next field input would also be useful (a minimal sketch follows).
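A minimal sketch of that commit behavior, assuming a plain text input per cell; the function name and the saveTerm() callback are hypothetical placeholders rather than the real KUDO code.

```typescript
// Sketch: keep the typed term instead of discarding it (names are placeholders).
function attachTermInput(input: HTMLInputElement, saveTerm: (value: string) => void): void {
  let draft = input.value;

  // Track a draft as the user types, so nothing is silently lost.
  input.addEventListener("input", () => {
    draft = input.value;
  });

  // Commit on Enter…
  input.addEventListener("keydown", (event: KeyboardEvent) => {
    if (event.key === "Enter" && draft.trim() !== "") {
      saveTerm(draft.trim());
    }
    // Tab is deliberately left alone so it moves focus to the next field, as users expect.
  });

  // …and also when the user clicks away, instead of deleting the entry.
  input.addEventListener("blur", () => {
    if (draft.trim() !== "") {
      saveTerm(draft.trim());
    }
  });
}
```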

20 of 36

Task 3 – Add a Custom Term to Glossary UI

KEY FINDINGS

  • While opening this accordion was not part of the study, entry fields should be open and ready for input.

Recommendation: we should have all fields open awaiting user entry

  • Understanding the amount of term-level curation interpreters do will be helpful. Interpreters expect this table to be as fluid as an Excel document.

Recommendation: learn more about glossary curation from interpreters

21 of 36

DISCOVERY FINDINGS

Surprises, or the users’ “unmet needs”

AKA (Where Innovation Happens)

22 of 36

BIG SURPRISES

Participant 1 expected to be able to parse or select areas of the provided source documents, and not import the whole document

Opportunities for Improvement

“I was first focusing on what to extract out of the source document. I don’t want to take everything into my glossary…”

Participant 1

Recommendation: we need to learn more about source documents – their forms, formats, and languages, whether they are single- or dual-language, and whether word order / the chronology of material presentation is important.

This comment may be a result of the test PDF's information being very broad and not "pre-curated" by the client.

23 of 36

BIG SURPRISES

Participant 2 expected that a “project” might be a topic “portfolio” containing multiple glossaries with similar topics or for a similar event

Opportunities for Improvement

"A project or an event might have multiple glossaries, right?"

Participant 2

"If I have a master glossary, how do I use that to reference in a meeting-specific glossary?"

Participant 5

Recommendation: exploring interpreters' glossary structure would aid our designs, for example:

Master Glossary > Topic > Subtopic > Event > Meeting

and how terms can trickle down so effort isn't duplicated (a hypothetical data sketch follows).
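Purely as an illustration of how such a hierarchy might be modeled so terms trickle down without duplication; the type and field names are hypothetical, not an existing KUDO structure.

```typescript
// Sketch: a glossary node inherits terms from its ancestors (all names are hypothetical).
interface GlossaryNode {
  name: string;                // e.g. "Master Glossary", "Topic", "Event", "Meeting"
  terms: Map<string, string>;  // source term -> target term defined at this level
  children: GlossaryNode[];
}

// Terms defined closer to the meeting override broader ones, so effort isn't duplicated.
function effectiveTerms(path: GlossaryNode[]): Map<string, string> {
  const merged = new Map<string, string>();
  for (const node of path) {   // path runs Master Glossary > Topic > ... > Meeting
    node.terms.forEach((target, source) => merged.set(source, target));
  }
  return merged;
}
```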

24 of 36

BIG SURPRISES

Participant 2 was interested in which machine learning protocols or translation engines we use, as accuracy and rework are concerns for interpreters.

Opportunities for Improvement

“I would want to know what computer assisted translation tools were used to create the glossary… if it was Microsoft or Google Translate… and to show how the terms are generated… or if it was parallel text. ”

Participant 2

"If we fed the machine parallel text, would the system align their terms?"

Participant 2

Recommendation: educate the users on what methods and technologies are used, and ideally allow expert or more knowledgeable users to switch protocols, engines, order of operation or remedy their own race conditions.

25 of 36

BIG SURPRISES

Participant 3 said they often get documents in both the source language and the target language (about "95% of the time" from institutions like the UN) and would want to pull those in and compare them or "mark as accurate" – for example, "Translation Memory."

Opportunities for Improvement

"If the client is an institution, like the UN, it's about 95% of the time we'll get both documents translated beforehand."

Participant 3

“Lets first look and see if this material is already cross-referenced–and available or approved by the client in a different language.”

Participant 3

“It will be a lot more accurate, efficient, less time consuming and more reliable to compare the two documents— and then we can go to machine translation for the rest.”

Participant 3

"We get documents that are in both French and English from Canada. Uploading both these would be wonderful! YES!"

Participant 5

Recommendation: allow side-by-side reading, highlighting, or extraction from parallel documents, and allow expert or more knowledgeable users to switch protocols, ML engines, or order of operations, or remedy their own race conditions.

26 of 36

BIG WINS

"Using the search bar in the session, just keying in a few letters and getting a result… that's how I like it!"

Participant 5

“The tool is amazing and will help me tremendously”

Participant 1

While some of the usability was challenging, the utility (or features) of Interpreter Assist was clearly understood by all users.

Useful = Utility + Usability

Utility – what we build (the features)

Usability – how we build (ease and clarity of use)

27 of 36

SATISFACTION SURVEY

[Survey charts: three 1–7 rating scales – Difficult to Easy, Not Satisfying to Very Satisfying, Not Confident to Very Confident]

Task 1 – Create a project, upload source material, create glossary

Overall, users found the task relatively easy to complete and satisfying, and were confident in their answers / feelings.

28 of 36

SATISFACTION SURVEY

We skipped this task due to session time

Task 2 – Upload second source material item, update glossary

29 of 36

SATISFACTION SURVEY

[Survey charts: three 1–7 rating scales – Difficult to Easy, Not Satisfying to Very Satisfying, Not Confident to Very Confident]

Task 3 – Add a custom term to Glossary Table

Overall, users did not have strong opinions on the task’s ease or satisfaction, and were confident in their answers / feelings.

30 of 36

NEXT STEPS

Further understanding is proposed of:

• How event attributes can affect glossary-curation methods (region, register, audience)

• Form Fields and their assistive text

• The scenarios of source documents (one, two, or three source languages, chronological meetings, parallel lists / fuzzy matching, an existing master glossary)

• Machine Learning Engines and what they mean to interpreters

• How the Glossary Table is used in curation and within the session

Findings report and videos will be uploaded to UX Discovery link #####

For further questions or discovery projects, contact:

Dan Benner, Product Designer

dan@kudoway.com

31 of 36

SESSION VIDEOS

Links to usability sessions

32 of 36

FUTURE STUDY IDEAS

Using the Glossary Table UI (ADDING, CLEANSING, LEARNING, IN SITU/EVENT)

(this is the primary piece to focus on for Usability, after some basics are understood)

Compact / Normal / Roomy Versions of the UI

How users prepare beforehand (their methods outside the system)

How customers use system during sessions (hardware, software, when they get stuck)

Manual Term Extractor

Competitive Usability Baseline

(InterpretBank, Interpreter's Help, Interplex, Interprefy, Interactio, Word – from Nancy's Next Gen doc)

33 of 36

FUTURE STUDY IDEAS

Learn how interpreters might manage glossaries from one location (nesting, sharing)

How authentication works with existing KCP credentials

Adding metadata to glossaries

Glossary sharing workflows

Merging or leveraging master glossaries

34 of 36

APPENDIX

35 of 36

ISSUE LEGEND

Useful = Utility + Usability

Utility – whether it provides the features you need

Usability – how easy and pleasant those features are to use

Teams Required: Design / UX, Tech / Dev, Multi-Discipline, Unknown

Urgency: Bug, Minor, Major, Hot Fix

Estimated Tech Impact / Debt: Low, Medium, High
(This is estimated by a non-engineer and is only an approximation for conversation / project-assignment purposes.)

Estimated Positive User Impact: Low, Medium, High
(This is estimated from best practices and research across applications worldwide, outside of this project. Big issues will be found by user testing.)

In-Page Widget example – Teams Required: Multi / Est. Tech Impact: Low / Est. User Impact: High / Urgency: Major

This can be used for Heuristics and Usability Testing documentation.

36 of 36

https://www.nngroup.com/articles/ten-usability-heuristics/

REFERENCE

Version 5.2.8.14

https://kai-beta.meetkudo.com/home

APPLIED HEURISTICS / BEST PRACTICES

  • Visibility of System Status
  • Match System and the Real World
  • User Control and Freedom
  • Consistency and Standards
  • Error Prevention
  • Flexibility and Efficiency of Use
  • "Recognition" Rather Than "Recall"
  • Aesthetic and Minimalist Design
  • Recognize, Diagnose and Recover from Errors
  • Give Help and Documentation