1 of 15

The Distributed Sensemaking Worksystem Screening Tool: A Questionnaire for Formatively Evaluating Collaborative Sensemaking Environments (ICCRTS Paper 68)

Simon Attfield,  Andrew Leggatt, Huw Gibson, George Raywood-Burke, Nicola Turner

© Copyright Trimetis Ltd. 2023. All Rights Reserved.

2 of 15

Introduction

What is Sensemaking?

  • The deliberate effort to understand events. (Klein et al., 2007)
  • A process of reasoning about information to construct a view, belief, or understanding of a situation. (Attfield, Fields & Baber, 2018)
  • Relevant to military and security settings, including command and control (C2).
    • Seeking and gathering information.
    • Organising information to aid interpretation.
    • Interpreting information (framing and reframing).
    • Collaborating, communicating and negotiating meaning.


3 of 15

Introduction

What is Sensemaking?

  • Sensemaking is best understood as an embodied, situated activity in which people utilise their environments to enhance and amplify the thinking process.


4 of 15

Introduction

The Distributed Sensemaking Worksystem Screening Tool (DSM WST)

  • There are few tailored Human Factors (HF) methods for studying Sensemaking.
  • The Distributed Sensemaking Worksystem Screening Tool (DSM WST) addresses this by enabling HF practitioners to:
    • Quickly assess sensemaking work settings.
    • Identify issues that may limit the quality and outcomes of sensemaking and that can be addressed through technology or team design.


5 of 15

Nine Principles for Distributed Sensemaking

  1. Provide sufficient cues for sufficient sensemaking.
  2. Support low-cost information workflows.
  3. Represent information quality and provenance.
  4. Promote expertise and associated domain knowledge.
  5. Allow time to acquire information to build an evidence-based and coordinated situation picture.

  6. Use strategies for the negotiation of sense.
  7. Where appropriate, use strategies for frame enumeration and elimination.
  8. Provide explanatory context for actions, orders and requests.
  9. Minimise the costs of achieving and maintaining common ground.

Attfield, Minocha, Elliott, Fields, Baber, Hutton, Leggatt, Harryman (2021). Nine Principles for Supporting Distributed Sensemaking, Naturalistic Decision Making and Resilience Engineering Symposium 2021, Toulouse, France.

Elliott, Attfield, Minocha, Fields, Hutton, Baber (2020) Enhancing Sensemaking: Supporting Distributed Groups in the Future Operating Environment. 25th International Command and Control Research & Technology Symposium (ICCRTS).


6 of 15

Item Generation and Refinement

1. Initial set of 106 items (questions) based on capabilities referred to in the nine principles.

e.g. Principle 1 - Provide sufficient cues for sufficient sensemaking.

    • During the task, I was able to find cues that helped me to make sense of things.
    • During the task, I was able to find clues that helped me to make sense of things.
    • During the task, I was able to find signs that helped me to make sense of things.
    • During the task, I was able to find indicators that helped me to make sense of things.

2. Item reduction by research team to maximise coverage and minimise duplication.

3. Qualitative feedback from three military and three non-military professional reviewers.

    • One-to-one sessions over MS Teams.
    • Briefing on context and purpose, items read one by one, comments on readability noted and adjustments made.


7 of 15

Rating and Weighting


Principles may not apply universally to all sensemaking settings and tasks, so item ratings are weighted to reflect their applicability to the setting under evaluation.


8 of 15

DSM WST

Three Parts:

  1. Instructions for HF Practitioners
  2. Final Questionnaire: 46 items in 9 sections (1 section per principle)
  3. Scoring method (rating x weighting)

  1. Cues and Clues
  2. Working with and manipulating information
  3. Information reliability and sources
  4. Background knowledge
  5. Time to think and respond
  6. Discussing what things mean
  7. Considering alternatives
  8. Giving explanations
  9. Common Ground
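The rating × weighting scoring can be sketched in a few lines. The 1–5 rating scale, 0–1 weight range, and item identifiers below are illustrative assumptions, not the tool's actual values; the real scoring scheme is given in the paper.

```python
# Hypothetical sketch of the DSM WST scoring step: each item response
# (rating) is multiplied by an analyst-assigned weight reflecting how
# applicable the underlying principle is to the work setting.

def weighted_item_scores(ratings, weights):
    """Multiply each item's rating by its applicability weight."""
    return {item: ratings[item] * weights[item] for item in ratings}

def aggregate_score(ratings, weights):
    """Weighted mean across items: sum(rating * weight) / sum(weight)."""
    total_weight = sum(weights[item] for item in ratings)
    weighted_sum = sum(ratings[item] * weights[item] for item in ratings)
    return weighted_sum / total_weight

# Example: three hypothetical items from section 1 ("Cues and Clues"),
# where item 1.2 is judged only partially applicable to the setting.
ratings = {"1.1": 4, "1.2": 5, "1.3": 2}
weights = {"1.1": 1.0, "1.2": 0.5, "1.3": 1.0}
print(aggregate_score(ratings, weights))  # (4 + 2.5 + 2) / 2.5 = 3.4
```

Aggregating by section (principle) rather than overall would follow the same pattern, restricted to that section's items.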


9 of 15

Example Application

  • Experiment to assess ways of enhancing Common Ground in a DSM task.
  • Herb Clark’s Common Ground refers to the shared knowledge, beliefs and assumptions needed for successful communication.
  • Aimed to manipulate:
    • Situational Common Ground = common ground held by virtue of a shared situation or context.
    • Cultural Common Ground = common ground held by virtue of a shared cultural group.
  • Collaborative task using C3Fire: a simulated world in which individuals work together to extinguish forest fires over an area of around 100 square km.
  • Participants control firefighting assets.


10 of 15

Experimental Design

  • Thirty-six participants (27 male, 9 female)
  • Worked in teams of three people.
  • Participants worked at separate workstations and communicated using simulated radio.
  • Each group performed three experimental runs.
  • Individually completed the DSM WST after each run.


11 of 15

Situational Common Ground Manipulation (within participants)


  • Shared Unit: see locations of all units, plus any fire occurring within one grid square of all units.
  • God’s Eye: see locations of all units, plus all fire.
  • Unit Only: see locations of own units only, plus fire occurring within one grid square of own units.


12 of 15

Cultural Common Ground Manipulation (between participants)

  • Half of the groups used a structured debrief method between runs for discussing, sharing, and co-developing their mental models, goals or conceptual frameworks relating to the task.
  • They wrote their insights for planning and acting on Post-it notes, then ranked, discussed, and re-ranked them as a group (the Post-it Group). The control group engaged in open, unstructured discussion (the No Post-it Group).
  • The Situational Common Ground intervention had a significant effect on an overall measure of the quality of Distributed Sensemaking; this was not the case for the Cultural Common Ground intervention.


13 of 15

Summative Analysis

A mixed ANOVA showed:

  • A highly significant main effect of Interface View (F = 23.216, p < 0.001).
  • No significant main effect of Post-it task.
  • A significant interaction, F(2, 34) = 3.84, p = .026.


Mean aggregated, weighted DSM WST ratings are shown for the three levels of Interface View (God’s Eye vs. Shared Unit vs. Unit Only) and the two levels of Post-it task (Post-it vs. No Post-it).


14 of 15

Analysis by Weighted Item

  • Horizontal colour banding indicates consistency across conditions.
  • An effect of the Interface View manipulation is indicated in responses to items 9.2, 9.4, and 9.11.
  • Recording and organising information (items 2.4 and 2.6) presented some difficulty.


15 of 15

Conclusion

  • DSM WST is tailored for formative evaluation of sensemaking work environments.
  • Provides systematic assessment of adherence to a set of DSM principles (informed by sensemaking theory and evidence).
  • Sensitive to relevant manipulations.
  • Reliable across similar conditions.
  • Can form part of analysis of sensemaking environments to support socio-technical redesign.
  • The tool is in the paper!
