1 of 19

Adapting Online Customer Reviews for Blind Users: A Case Study of Restaurant Reviews

@accessodu @WebSciDL

05/07/2025

Mohan Sunkara, Akshay Kolgar Nayak, Sandeep Kalari, Yash Prakash, Sampath Jayarathna, Hae-Na Lee, Vikas Ashok

Department of Computer Science,  

Old Dominion University, Norfolk, VA

2025 Web Science & Digital Libraries Research Group Expo

2 of 19

Importance of Online Reviews

  • Online reviews influence business reputation, customer trust, and profits.
  • Especially important for restaurants, hotels, and e-commerce.
  • Reviews act as digital word-of-mouth, sharing first-hand customer experiences.
  • Consumers often trust peer reviews more than advertisements.
  • Positive reviews can boost sales significantly; negative reviews can severely impact brand image.

3 of 19

Interaction with Online Review Systems

  • Interaction Challenges:
    • Large volume: Too many reviews.
    • Redundancy: Repeating content.
    • Irrelevance: Unhelpful or outdated reviews.
  • Current Solutions:
    • AI-based summarization.
    • Most frequent topics.
    • Search and sort options.
    • Question-answering support.

4 of 19

Limited Support for Blind Users

  • Almost all interface features are designed for sighted users.
    • They help sighted users skim quickly and prioritize important information.
  • Support for blind users, who predominantly rely on audio-based screen readers, is limited.
    • Audio interaction is slower, sequential, and causes listening fatigue.
    • Blind users cannot quickly skim through reviews.
    • Existing AI summaries provide only generic “high-level” information.

5 of 19

Screen Readers Demo

6 of 19

In-depth Investigation of Interaction Issues/Needs

  • Interview study:
    • 30 blind participants (Age range: 22–63 years, Gender distribution: 13 males, 17 females).
    • All participants proficient in web screen reading.
    • Varying employment background: Students, teachers, social workers, freelancers.
    • Most preferred the JAWS screen reader; only a few used NVDA.
    • Participants recruited through email lists and word-of-mouth referrals.

7 of 19

Interview Study – Design

  • Semi-structured interviews to facilitate deep discussions.
  • Seed questions included:
      • What challenges do you face while browsing reviews?
      • What information do you typically search for in reviews?
      • How can your experience with online reviews be made better?
  • Interviews were conducted remotely via Zoom or phone calls.
  • Average interview duration: ~45 minutes per participant.
  • Feedback analyzed using open and axial coding to identify common themes across participants.

8 of 19

Interview Study – Findings

  • Interaction challenges:
      • Information overload and listening fatigue.
      • Content redundancy.
      • Outdated information and reviews.
  • User-interface needs:
      • Aspect-based organization (Food, Service, Pricing, Ambiance, Hygiene).
      • Sentiment-specific summaries (Positive, Negative).
  • Based on these findings, we designed and developed the QuickCue browser extension.

9 of 19

QuickCue Browser Extension

10 of 19

QuickCue: Workflow Schematic

11 of 19

QuickCue: Joint Classification

  • Joint classification:
    • Given a review, find all relevant aspect-sentiment pairs.
    • One review can have information pertaining to multiple aspects and sentiments.
  • Using LLM: GPT-4
  • Clue and Reasoning Prompting (CARP):
    • Instruct the LLM to look for 'clues' and use them in its reasoning process.
    • Clues: Keyword, phrase, contextual element that provides evidence for classification.
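A CARP-style prompt for this task can be sketched as below. This is a minimal illustration, not the actual QuickCue prompt: the aspect and sentiment labels are taken from the interview-study findings (Food, Service, Pricing, Ambiance, Hygiene; Positive, Negative), while the exact wording and the `build_carp_prompt` helper are illustrative assumptions.

```python
# Illustrative Clue and Reasoning Prompting (CARP) sketch for joint
# aspect-sentiment classification of a restaurant review. The label sets
# come from the slides; the prompt wording is an assumption.

ASPECTS = ["Food", "Service", "Pricing", "Ambiance", "Hygiene"]
SENTIMENTS = ["Positive", "Negative"]

def build_carp_prompt(review: str) -> str:
    """Build a CARP-style prompt: clues first, then reasoning, then answer."""
    labels = ", ".join(f"({a}, {s})" for a in ASPECTS for s in SENTIMENTS)
    return (
        "You are classifying a restaurant review.\n"
        f"Candidate labels: {labels}.\n\n"
        f"Review: \"{review}\"\n\n"
        "Step 1 (Clues): List the keywords, phrases, or contextual "
        "elements that provide evidence for any label.\n"
        "Step 2 (Reasoning): Based on those clues, explain which labels "
        "apply. A review may match multiple aspect-sentiment pairs.\n"
        "Step 3 (Answer): Output every applicable (aspect, sentiment) pair."
    )

prompt = build_carp_prompt("Great pasta, but the waiter was rude.")
```

The resulting prompt string would be sent to GPT-4; the staged structure nudges the model to ground its final labels in explicitly surfaced evidence rather than jumping straight to an answer.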

12 of 19

Joint Classification Performance

  • Evaluation:
    • Dataset:
      • 50 manually annotated restaurant reviews sampled from diverse cuisines and locations.
    • Metrics:
      • Standard Precision, Recall, and F1-Score.
    • Results:
      • Few-shot CARP prompting achieved 0.8001 Precision, 0.8201 Recall, 0.8099 F1-Score.
      • Zero-shot CARP prompting achieved 0.615 Precision, 0.625 Recall, 0.6199 F1-Score.
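The slides report precision, recall, and F1 but do not state the averaging scheme; a minimal sketch assuming micro-averaging over predicted (aspect, sentiment) pairs, which is one common choice for multi-label evaluation, would be:

```python
# Micro-averaged precision/recall/F1 for multi-label aspect-sentiment
# classification. Each element of `gold`/`pred` is the set of
# (aspect, sentiment) pairs for one review.

def micro_prf(gold: list, pred: list) -> tuple:
    tp = sum(len(g & p) for g, p in zip(gold, pred))  # correctly predicted pairs
    fp = sum(len(p - g) for g, p in zip(gold, pred))  # spurious pairs
    fn = sum(len(g - p) for g, p in zip(gold, pred))  # missed pairs
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

gold = [{("Food", "Positive"), ("Service", "Negative")},
        {("Pricing", "Negative")}]
pred = [{("Food", "Positive")},
        {("Pricing", "Negative"), ("Hygiene", "Positive")}]
p, r, f = micro_prf(gold, pred)  # 2 TP, 1 FP, 1 FN → all three equal 2/3
```

Micro-averaging pools counts across all reviews, so frequent aspects weigh more heavily than rare ones; macro-averaging per label would be the natural alternative.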

13 of 19

QuickCue: Focused Summarization

  • Focused summarization:
    • Summary pertaining to a specific aspect-sentiment pair.
    • Reviews usually contain additional information beyond the target aspect-sentiment pair.
  • Directional Stimulus Prompting (DSP):
    • Providing keywords as directional stimuli to tailor the summarization process via selective information prioritization.
    • Helps avoid unrelated or redundant information.
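A DSP-style prompt can be sketched as follows. This is an illustrative assumption, not the actual QuickCue prompt: the `build_dsp_prompt` helper and its wording are hypothetical, showing only how keyword hints act as the directional stimulus for one aspect-sentiment pair.

```python
# Illustrative Directional Stimulus Prompting (DSP) sketch for focused
# summarization: keywords for a target aspect-sentiment pair are supplied
# as hints that steer the summary toward that pair.

def build_dsp_prompt(reviews: list, aspect: str, sentiment: str,
                     hint_keywords: list) -> str:
    joined = "\n".join(f"- {r}" for r in reviews)
    hints = "; ".join(hint_keywords)
    return (
        f"Summarize the {sentiment.lower()} feedback about {aspect.lower()} "
        "in the reviews below.\n"
        f"Hint (directional stimulus): {hints}\n"
        "Prioritize sentences matching the hint keywords; omit unrelated "
        "or redundant information.\n\n"
        f"Reviews:\n{joined}"
    )

prompt = build_dsp_prompt(
    ["The tacos were fresh and flavorful.", "Parking was hard to find."],
    aspect="Food", sentiment="Positive",
    hint_keywords=["fresh", "flavorful", "tacos"],
)
```

Because the stimulus names concrete evidence from the reviews, the summarizer is biased toward sentences about the target pair and away from off-topic content such as the parking remark above.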

14 of 19

Focused Summarization Performance

  • Evaluation:
    • Dataset:
      • 50 examples (5 examples for each of the 10 aspect-sentiment pairs). Ground-truth summaries manually handcrafted and verified for factuality.
    • Metrics:
      • Factuality Score (1–10 scale): Measures accuracy of summaries.
      • Noisiness Score (1–10 scale): Measures absence of irrelevant information.
    • Results:
      • Average Factuality Score: 7.9
      • Average Noisiness Score: 8.3

(Higher scores are better.)

15 of 19

QuickCue Evaluation

  • User study:
    • 10 blind participants (4 females, 6 males), aged 22–43 years.
    • All participants familiar with web screen reading using JAWS.
    • No overlap with interview study participants.
    • Within-subject experimental design:
      • Conditions: Screen Reader baseline vs. QuickCue augmentation.
      • Task: Compare two restaurants' reviews and make a dining decision.
      • Platform: Google Maps.
    • Different restaurants in different conditions to avoid familiarity bias.

16 of 19

Results

17 of 19

Qualitative Feedback

  • Simple design and ease of use:
    • Access via basic keyboard shortcuts.
    • “listen less and learn more”.
    • “If I have these summaries, I will not at all listen to the reviews. It is extremely frustrating to listen to a lot of irrelevant and repeated feedback that barely tells me anything about what I would like to know about the food and the experience.”
  • Extension of QuickCue to other review platforms:
    • E-commerce websites like Amazon.

18 of 19

Discussion

  • Limitations:
    • Small datasets for evaluating joint classification and focused summarization performance.
    • Focus on JAWS + Chrome + Google Maps desktop platform.
    • Support for only English restaurant reviews.
    • Extraction of reviews based on handcrafted XPath rules.
  • Future studies with larger participant groups:
    • Quantitative metrics.
    • Beyond Google Maps and restaurant domain.
    • Smartphone interaction with customer reviews.

19 of 19

Concluding remarks

  • QuickCue enables blind users to quickly access desired information in restaurant reviews.
  • LLMs can be instructed via adapted CARP and DSP prompts to analyze and generate customized summaries from customer reviews.
  • Customized summaries facilitate more-informed decision-making.
  • QuickCue opens pathways for broader adoption in other domains beyond restaurant reviews.
