
The Sensemaking Assessment Questionnaire: A Tool for Conducting Summative Assessment of Collaborative Sensemaking Environments

Nicola Turner, Trimetis - nicola.turner@trimetis.co.uk

Andrew Leggatt, Trimetis - andrew.leggatt@trimetis.co.uk

W. Huw Gibson, Trimetis - huw.gibson@trimetis.co.uk

George Raywood-Burke, Trimetis - george.raywood-burke@trimetis.co.uk

Simon Attfield, Trimetis - simon.attfield@trimetis.co.uk

© Copyright Trimetis Ltd. 2024. All Rights Reserved.


Overview

  • Sensemaking is a crucial concept in the context of command and control (C2).
  • It is important to study sensemaking within its natural environment to:

(a) understand how it is achieved and the factors that affect it; and

(b) identify what design changes improve sensemaking outcomes in any given context.

  • Quantitative evaluation relies on having suitable measures.
  • There are limited quantitative evaluation measures of distributed sensemaking (DSM).
  • The aim of this work was to create an instrument to support the measurement of distributed sensemaking in complex situations.
  • Items were generated and data were collected to explore the factor structure to form the Sensemaking Assessment Questionnaire (SMAQ).
  • Reliability and validity assessments were conducted and the items were reviewed by a panel of ex-military and civilian staff.
  • There is a full version of the instrument with 22 items and a shortened version with 12.
  • SMAQ has since been applied to measure distributed sensemaking quality in a C2 experiment and was sensitive enough to detect a difference in sensemaking quality as teams practiced the task.


Background

Sensemaking is “a motivated, continuous effort to understand connections (which can be among people, places and events) in order to anticipate their trajectories and act effectively”

- Klein, et al., 2007

[Diagram: C2 domains (military, emergency response, security operations centres, rescue, fire services, medical, police) mapped to sensemaking activities: seeking and gathering information; organising information; interpreting information; collaboration, communication, and negotiation; creating meaningful narratives.]


Background

Typically explored through qualitative evaluation:

    • How is sensemaking achieved?
    • What factors affect it?

But how can we find out which changes improve sensemaking?

    • How can we measure this?

Aims:

  • To create an instrument where the factor structure matched a theoretical model;
  • To measure the quality of sensemaking in both individual and distributed tasks.

The Individual Sensemaking Questionnaire (ISMQ)

- Alsufiani, Attfield & Zhang, 2017

  • Based on multiple theories of sensemaking;
  • Treats sensemaking as an individual activity;
  • Designed to have 6 factors but loads onto a single factor.

ISMQ factors and items:

  • Comprehension and gaining insight
    • Q1: Gain insight from the available information.
    • Q2: Construct an understanding from the available information.
    • Q3: Make sense of the available information.
  • Drawing on prior knowledge
    • Q4: Draw a link between the available information and things you were aware of already.
    • Q5: Draw a link between information you encountered and your prior knowledge.
  • Structuring
    • Q6: Develop a coherent view of the information.
    • Q7: Find structure in the information.
    • Q8: Find a way to (mentally or otherwise) organise the information.
  • Understanding connections
    • Q9: Understand connections between things.
  • Gap discovering and bridging
    • Q10: Discover where the gaps are in how you understand a situation.
    • Q11: Bridge gaps in your understanding of a situation.
  • Reducing confusion and ambiguity
    • Q12: Reduce any confusion.
    • Q13: Reduce any ambiguity.


Item development – theoretical underpinning

Sensemaking activities from Klein et al. (2007), p. 133.

Klein et al.’s (2007) Data-Frame Theory of Sensemaking was selected as a single theory because it:

  • Describes the process of sensemaking in great detail;
  • Is based on, and applicable to, a wide variety of situations;
  • Applies to novices as well as experts.

The theory’s nine assertions were applied as a guide:

  1. Sensemaking is the process of fitting data into a frame and fitting a frame around the data.
  2. The “data” are inferred, using the frame, rather than being perceptual primitives.
  3. The frame is inferred from a few key anchors.
  4. The inferences used in sensemaking rely on abductive reasoning as well as logical deduction.
  5. Sensemaking usually ceases when the data and frame are brought into congruence.
  6. Experts reason the same way as novices but have a richer repertoire of frames.
  7. Sensemaking is used to achieve a functional understanding.
  8. People primarily rely on just-in-time mental models.
  9. Sensemaking takes different forms, each with its own dynamics.


Item development – proposed theoretical model

  • 45 items were generated to measure individual sensemaking.
    • Including 6 of the original ISMQ items
  • Each factor had between 7 and 12 items.
  • Existing ISMQ items were coded under the factor where they appeared to fit best.
  • A theoretical model of individual sensemaking was created describing the predicted relationships between proposed factors.

Proposed theoretical model of factor structure.


Scale development - method

  • 242 participants recruited through Prolific.
  • Participants completed a sensemaking task that involved reading one of 6 narratives with a time limit of 5 minutes:
    • Narratives were challenging to interpret in different ways,
    • The intention was that participants’ ability to make sense of them would vary depending on which one they received.
  • Participants then responded to 5 multiple-choice questions to test their understanding (performance), all 45 individual SMAQ items on a 5-point Likert scale, and the PANAS-SF.
  • Data collated and screened for missing values, unengaged responses, outliers, skewness and kurtosis.
    • All SMAQ response options were used at least once.
    • No items were removed for skew or kurtosis.
  • EFA was used to analyse the latent structure of the items and to reduce the item set.
  • Inter-item correlations and item-total correlations were calculated and checked, and items were correlated with task performance.
  • EFAs were conducted on variations of the item set to produce a clean pattern matrix with acceptable levels of the KMO measure of sampling adequacy, communalities, and factor loadings that were supported by the underpinning theory.
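The item-screening step above can be sketched in a few lines of numpy. This is a generic illustration of corrected item-total correlations (each item correlated with the total of the remaining items), using made-up Likert data, not the study's dataset:

```python
import numpy as np

def corrected_item_total(responses: np.ndarray) -> np.ndarray:
    """Corrected item-total correlation for each column (item).

    `responses` is an (n_participants, n_items) matrix of Likert scores.
    Each item is correlated with the sum of the *other* items so the
    item does not inflate its own correlation.
    """
    n_items = responses.shape[1]
    totals = responses.sum(axis=1)
    r = np.empty(n_items)
    for j in range(n_items):
        rest = totals - responses[:, j]  # total score excluding item j
        r[j] = np.corrcoef(responses[:, j], rest)[0, 1]
    return r

# Hypothetical data: 6 respondents x 3 items on a 5-point scale.
data = np.array([
    [1, 2, 1],
    [2, 2, 2],
    [3, 3, 3],
    [4, 4, 3],
    [5, 5, 5],
    [4, 5, 4],
], dtype=float)

r = corrected_item_total(data)
# All three items track each other here, so every correlation is positive;
# in practice, items with low values would be candidates for removal.
```

In a full analysis the EFA itself would typically be run with a dedicated package (e.g. the third-party `factor_analyzer` library); the sketch above covers only the correlation checks.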


Factor model

  • Constructing frames: The ability to create, adjust, and explore different explanations as new information is found. This factor incorporates items relating to elaborating, questioning, comparing, and discarding frames.

  • Applying existing frames: The application of existing knowledge and previous experience of similar situations to support understanding of the current scenario.

  • Achieving functional understanding: Gaining sufficient understanding of the situation to be able to make connections, for the information to fit, and to be able to complete the set task.

Theoretical model of factor structure.

The final factor model consisted of the three factors above (not including the distributed sensemaking items).


Factor loadings – full version

Item loadings and communalities by factor (pattern matrix):

Factor 1 – Achieving functional understanding (eigenvalue 5.02; 34.98% of variance)

  • I had sufficient understanding to [answer the questions] (loading 0.862; communality 0.606)
  • I felt confident that I could give adequate [answers to the questions] (loading 0.821; communality 0.583)
  • My understanding of the scenario fit with the information provided (loading 0.619; communality 0.492)
  • I made sense of the available information (loading 0.645; communality 0.538)
  • I understood connections between things (loading 0.515; communality 0.431)

Factor 2 – Applying existing frames (eigenvalue 1.81; 10.36% of variance)

  • I drew links between information provided and my prior knowledge (loading 0.813; communality 0.677)
  • I applied existing knowledge to help me understand the scenario (loading 0.709; communality 0.533)
  • I applied existing understanding of similar situations (loading 0.773; communality 0.599)
  • I speculated on explanations based on previous similar experience (loading 0.628; communality 0.379)

Factor 3 – Constructing frames (eigenvalue 1.33; 6.57% of variance)

  • I created explanations based on key pieces of information (loading 0.613; communality 0.504)
  • I adjusted my understanding based on key pieces of information (loading 0.811; communality 0.611)
  • I identified alternative explanations as I encountered new information (loading 0.567; communality 0.372)
  • I explored different explanations for the information provided (loading 0.686; communality 0.423)


Factor loadings – short version

Item loadings and communalities by factor (pattern matrix):

Factor 1 – Constructing frames (eigenvalue 2.95; 25.87% of variance)

  • I created explanations based on key pieces of information (loading 0.680; communality 0.533)
  • I adjusted my understanding based on key pieces of information (loading 0.795; communality 0.608)
  • I explored different explanations for the information provided (loading 0.622; communality 0.394)

Factor 2 – Applying existing frames (eigenvalue 1.29; 22.12% of variance)

  • I drew links between information provided and my prior knowledge (loading 0.567; communality 0.459)
  • I applied existing understanding of similar situations (loading 1.019; communality 0.985)

Factor 3 – Achieving functional understanding (eigenvalue 1.05; 12.45% of variance)

  • I had sufficient understanding to [answer the questions] (loading 0.754; communality 0.570)
  • I felt confident that I could give adequate [answers to the questions] (loading 0.825; communality 0.681)


Scale evaluation

Reliability

Internal reliability was assessed with Cronbach’s alpha:

Validity

Criterion validity was examined by exploring the relationship that both SMAQ scale versions (full; short) had with task performance, positive affect, and negative affect. Both versions of the scale showed small but significant correlations:

  • A positive correlation with task performance (r(232) = .263, p < .001; r(232) = .256, p < .001);
  • A positive correlation with positive affect (r(232) = .337, p < .001; r(232) = .325, p < .001); and
  • A negative correlation with negative affect (r(232) = -.156, p = .017; r(232) = -.143, p = .029).
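As a minimal sketch of this kind of criterion-validity check, a Pearson correlation between questionnaire totals and a performance score can be computed with numpy. The data below are synthetic and purely illustrative (the sample size merely mirrors the reported degrees of freedom, r(232)):

```python
import numpy as np

# Synthetic, illustrative data for 234 "participants"; not the study's data.
rng = np.random.default_rng(42)
performance = rng.normal(size=234)
smaq_total = 0.3 * performance + rng.normal(size=234)  # weak positive link

r = np.corrcoef(smaq_total, performance)[0, 1]
# r comes out small but positive; obtaining p-values like those reported
# would additionally require a significance test, e.g. scipy.stats.pearsonr.
```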

Quantitative assessment

Cronbach’s alpha (full version; *short version):

  • SMAQ overall: α = .861; *α = .762
  • Constructing frames: α = .767; *α = .736
  • Applying existing frames: α = .821; *α = .777
  • Achieving functional understanding: α = .831; *α = .762
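Cronbach’s alpha values like these can be computed for any score matrix with the standard formula; this is a generic implementation with made-up data for illustration, not the study’s dataset:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_participants, n_items) score matrix.

    alpha = k / (k - 1) * (1 - sum(item variances) / variance(total score))
    """
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical data: three items that move together -> high alpha.
data = np.array([
    [1, 2, 1],
    [2, 2, 2],
    [3, 3, 3],
    [4, 4, 3],
    [5, 5, 5],
    [4, 5, 4],
], dtype=float)

alpha = cronbach_alpha(data)  # close to 1 for this internally consistent set
```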


Scale evaluation

Qualitative assessment

  • Diverse mixture of reviewers
    • Officer ranks.
    • Non-commissioned officer ranks.
    • Civilians.
  • Shown a list of the items and asked to comment on readability, ease of understanding for a non-technical audience, and suggest potential improvements.
  • Changes were made where feedback was consistent and the change could be made without altering the meaning behind the items, e.g.
    • Where items started with “Team members…” this was changed to “Our team…”;
    • Individual words were altered, such as changing “speculated” to “considered” and “articulated” to “communicated”;
    • One item was re-phrased completely from “My understanding of the scenario fits with the information provided” to the negatively coded “There were pieces of information that did not fit with my understanding”.


Final items and factors

#    Item wording                                                                      Factor

1*   I formed explanations based on key pieces of information.                         Constructing frames
2*   I adjusted my understanding based on key pieces of information.                   Constructing frames
3    I identified alternative explanations as I encountered new information.           Constructing frames
4*   I considered alternative explanations for the information provided.               Constructing frames
5*   I drew links between the information provided and my prior knowledge/experience.  Applying existing frames
6    I applied existing knowledge to help me understand the scenario.                  Applying existing frames
7*   I knew what to do from previous similar situations.                               Applying existing frames
8    I considered explanations based on previous similar experiences.                  Applying existing frames
9*   I had sufficient understanding to complete the task.                              Achieving functional understanding
10*  I felt confident that I could adequately complete the task.                       Achieving functional understanding
11   (-) There were pieces of information that did not fit with my understanding.      Achieving functional understanding
12   I made sense of the available information.                                        Achieving functional understanding
13   I understood connections between things.                                          Achieving functional understanding
14*  Our team had valuable knowledge for making sense of the situation.                Team sensemaking
15*  Our team communicated clearly to increase our shared understanding.               Team sensemaking
16*  Our team explained their thinking.                                                Team sensemaking
17   Our team shared the right information at the right time.                          Team sensemaking
18   Our team anticipated the information needs of others.                             Team sensemaking
19   Our team communicated their information needs.                                    Team sensemaking
20*  Our team worked effectively together to grow shared understanding.                Team sensemaking
21   Our team contributed to gaining a collective understanding.                       Team sensemaking
22*  Our team shared the same understanding by the end of the task.                    Team sensemaking

*Short version

(-) Reverse coded
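Scoring responses to items like these can be sketched as follows, assuming answers are coded 1–5 on the Likert scale described earlier. The reverse-coding rule (6 − x) and the use of a simple mean are illustrative assumptions, not an official scoring manual:

```python
# Scoring sketch for a SMAQ-style response set (5-point Likert, 1-5).
REVERSE_CODED = {11}  # item 11 is negatively worded

def score(responses: dict[int, int]) -> float:
    """Mean score after flipping reverse-coded items (6 - x on a 1-5 scale)."""
    adjusted = [
        6 - v if item in REVERSE_CODED else v
        for item, v in responses.items()
    ]
    return sum(adjusted) / len(adjusted)

# One respondent answering items 9-13 (achieving functional understanding):
answers = {9: 4, 10: 5, 11: 2, 12: 4, 13: 4}
print(score(answers))  # item 11 contributes 6 - 2 = 4, so the mean is 4.2
```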


Pilot testing

  • Experiment involving computer simulation with small teams communicating and coordinating to achieve specific objectives (e.g. control the spread of fires to save lives, properties, and land).
  • 36 participants, working as 12 teams of 3.
  • Teams were given a training round, followed by three further rounds.
  • SMAQ was used to measure sensemaking quality under manipulations within a C2 task, and to test whether it was sensitive to manipulations that affect performance.
  • There was a significant main effect of trial run number (F(3, 34) = 42.142, p < .001): mean SMAQ ratings increased as teams gained more practice at using the simulation.
  • SMAQ scores followed similar trends to other performance scores, i.e. as sensemaking quality increased, the “number of casualties” decreased.

Figure: Mean SMAQ score by common-ground condition across each trial run, presented in chronological order. Error bars indicate ±1 standard error.


Discussion

  • SMAQ enables sensemaking quality to be assessed and compared across tasks and scenarios, and for complex team tasks.
  • Should data from team sensemaking activities be aggregated?
    • Individual perspectives within a large team vs. team differences.
  • Promising results from pilot study with intended target audience.
  • Limitations
    • Relatively small sample size (n = 242) in initial item reduction analysis.
    • The need for a “sensemaking task” to provide frame of reference
      • Would benefit from a more granular performance measure.
      • Difficulty collecting large sample sizes of team sensemaking data.
  • Further validation activities should be undertaken, along with CFA and SEM to explore relationships between constructs, and application to an operational environment.


Conclusion & next steps

  • SMAQ measures the quality of sensemaking; it is related to, but distinct from, performance measures.
  • Easy to administer self-report tool.
  • Can be used in collaborative, complex, dynamic, and uncertain environments.
  • Useful for comparison and evaluation.
  • Some validation already undertaken but more would be desirable.
    • Testing in more sensemaking research.
    • Confirmatory factor analysis to test the factor structure.


Questions?
