1 of 15

Clarity: Using a Teachable AI tool to help low-vision people locate lost glasses

By: Saumya Verma, Hitarthi Bhinde, Nikita Khowala

2 of 15

Understanding the Problem: User Interview

3 of 15

About Our Participant

Our 26-year-old participant has low vision and faces three recurring challenges:

difficulty reading small or low-contrast digital text,

glasses fogging up when moving between environments,

frequently misplacing their glasses at home or in their workspace.

Image 1: Photo of the low-vision participant.

4 of 15

An overload of insights

#1: My issues with reading intensify at night, when I am stressed, or after long work sessions.

#2: Fogging is fast, unpredictable, and socially awkward.

#3: I often absentmindedly put my glasses in bags, on tables, or in random corners, and then spend time searching for them.

#4: I rely on my phone for daily activities but am open to wearable solutions.

Image 2: Collaborative co-design session.

5 of 15

How might we support a young adult who has low vision by reducing visual strain, preventing fog-related interruptions, and simplifying the process of finding misplaced glasses?

OUR INITIAL GOAL

6 of 15

BRAINSTORMING SOLUTIONS: FLESHING OUT TWO IDEAS

  1. Smart Adaptive Glasses featuring an anti-fog activation button, a Bluetooth tracker embedded in the frame, and a slide-in button for enhanced screen text readability.

  2. Assistive App with an adaptive display widget, environmental alerts for fog prevention, and a Find My Glasses feature with sound and AR directions.

7 of 15

Why We Pivoted

What we learned

Fogging interruptions were frustrating but required specialized hardware to solve

Losing glasses happened almost every day, causing stress and delays

The participant relies heavily on their phone, making mobile solutions more adoptable

Why we shifted focus

We focused on solving the most frequent and fixable challenge: locating misplaced glasses. A Teachable AI tool offered fast, independent support using the participant’s existing phone.

8 of 15

How might we support a young adult who has low vision by simplifying the process of finding misplaced glasses?

OUR FINAL GOAL

9 of 15

Early Exploration

Option A: Use a pre-trained model (COCO SSD) that detects generic glasses

Drawback: Poor detection in cluttered backgrounds.

Option B: Google Teachable Machine

Drawback: Requires large, diverse training data; accuracy depends heavily on user-collected images.

Option C: Bluetooth/BLE signals for location tracking

Drawback: Needs hardware attached to the glasses; added weight, cost, and adoption barriers.

Option D: Marker/QR approach

Drawback: User must print and attach a visible marker themselves; not aesthetic and fails in low light.

10 of 15

Exploring Teachable AI

We experimented with Teachable Machine to see how well a camera-based model could detect glasses in real environments

Trained an initial model with images from different angles, lighting conditions, and backgrounds

Early tests showed limitations with contrast, distance, and misclassifications, guiding our next iteration

Image 3: Different background photos teach the model to recognize the glasses anywhere.

Image 4: Early testing revealed detection issues.
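Our early tests boiled down to thresholding the per-class confidence scores the model returns for each camera frame. The sketch below is illustrative only: the class names and the 0.85 threshold are assumptions for this example, not values from our actual Teachable Machine export.

```python
# Minimal sketch of a per-frame detection check; class names and the
# 0.85 threshold are illustrative assumptions, not our real settings.

def glasses_detected(predictions, threshold=0.85):
    """predictions: list of (class_name, probability) pairs for one frame."""
    best_class, best_prob = max(predictions, key=lambda p: p[1])
    return best_class == "glasses" and best_prob >= threshold

# A confident frame triggers a detection...
print(glasses_detected([("glasses", 0.93), ("background", 0.07)]))  # True
# ...while an ambiguous, cluttered frame is rejected.
print(glasses_detected([("glasses", 0.55), ("background", 0.45)]))  # False
```

Raising the threshold trades missed detections for fewer false alarms, which is why cluttered backgrounds and low contrast pushed us toward collecting more varied training photos.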

11 of 15

Our Solution: Clarity

An assistive AI tool that identifies a user’s glasses and guides the user to them with simple visual prompts and audio cues. The experience is designed to be fast, accessible, and supportive for people with low vision.

12 of 15

Our Solution: Clarity

Custom-trained AI model that recognizes the user's specific glasses

Simple, high-contrast interface with one large action button

Audio guidance to help users train the model

A clear sound cue when the glasses are detected
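The audio guidance above can be sketched as a simple mapping from where the detection box sits in the frame to a spoken cue. Everything here is a hypothetical illustration: the 0.4/0.6 direction bands and the 0.05 area cutoff are assumed values, not measured from the prototype.

```python
# Hypothetical sketch of turning a detection box into audio cues;
# all thresholds are assumptions, not values from the Clarity prototype.

def direction_cue(x_center):
    """x_center: horizontal center of the detection box, normalized to 0..1."""
    if x_center < 0.4:
        return "glasses to your left"
    if x_center > 0.6:
        return "glasses to your right"
    return "glasses straight ahead"

def distance_cue(box_area):
    """box_area: detection box area as a fraction of the frame (0..1)."""
    return "move the phone closer" if box_area < 0.05 else "you are close"

print(direction_cue(0.2))   # glasses to your left
print(distance_cue(0.01))   # move the phone closer
```

Speaking coarse bands like "left / ahead / right" rather than precise coordinates keeps the cues short enough to follow while sweeping a phone around a room.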

13 of 15

Image 5: A step-by-step flow showing how users train and use Clarity to locate their glasses.

(a) User begins by adding a new item to train.

(b) User names the object

(c) Instruction to place glasses

(d) User takes the first centered photo

(e) Instruction to place glasses

(f) User takes a second photo.

(g) Instruction to place glasses

(h) User takes a third photo

(i) Instruction to place glasses

(j) User takes final training photo

(k) Training Complete

(l) Start Scanning

(m) The app scans the room for the glasses

(n) App indicates the glasses are in the left corner

(o) User is guided to bring the phone closer

(p) Detection circle helps guide alignment

(q) Glasses successfully located in the space
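The locating flow in the panels above can be read as a small state machine: scan until the glasses appear, announce the direction, then guide the user closer until the detection fills enough of the frame. The states, transition conditions, and 0.1 area threshold below are illustrative assumptions, not the prototype's actual implementation.

```python
# The scan-and-approach flow sketched as a tiny state machine.
# State names and the 0.1 area threshold are illustrative assumptions.

def next_state(state, detection):
    """detection: dict with 'found' (bool) and 'box_area' (fraction of frame)."""
    if state == "scanning":
        return "detected" if detection["found"] else "scanning"
    if state == "detected":
        return "approaching"   # announce direction, then guide closer
    if state == "approaching":
        return "located" if detection["box_area"] > 0.1 else "approaching"
    return "located"

# Simulated camera frames: nothing, then a small distant box, then a big one.
state = "scanning"
for frame in [{"found": False, "box_area": 0.0},
              {"found": True, "box_area": 0.02},
              {"found": True, "box_area": 0.04},
              {"found": True, "box_area": 0.15}]:
    state = next_state(state, frame)
print(state)  # located
```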

14 of 15

Prototype Demo

15 of 15