1 of 17

Data quality and analysts’ role in AI-enhanced C2

Erik Bjurström, Mälardalen University, Sweden

Martin Schüler, University West, Sweden

Anette Strömberg, Mälardalen University, Sweden

Git Roxström, Swedish Defence Research Agency (FOI)

2 of 17

We need to discuss … with regard to AI in C2

  • The very quality of data
  • Operators’ awareness of the validity of predictions with regard to any specific situation
  • Urgent debate given that full transparency may be impossible
  • Underpinning data may be based on exercises, simulations, real-time data … and a mix thereof
  • How data of different origins and quality can be managed and communicated to allow for operators to assess the basis for predictions
  • Classification of different kinds of uncertainties
  • Look at other fields for how to highlight what data that is at hand
  • How AI may change the role of analysts

3 of 17

Decision-making is at the heart of the military profession

  • ’Less-than-AI’ capabilities have long been part of C2
  • AI gained public awareness through ChatGPT (autumn 2022)
  • Raising all kinds of questions about this ’new technology’
  • Calls for a ban on AI
  • AI def. = a technology doing what hitherto only humans could do
  • A shock that civilian applications are ahead (as the PlayStation 2 was in 1999)
  • AI has more far-reaching consequences for the military than the PS2, as it touches on the notion of excellence in the military profession itself
  • ChatGPT suits the insistence on speed in military decision-making

4 of 17

AI and heuristics

  • Speed – an argument for human heuristics (rules of thumb)
  • Simple ranking may be superior to optimization in the face of uncertainty, where speed and simplicity matter
  • However, AI may provide both speed and accuracy, coordination as well as direction (i.e. C2)
  • It is predictable that AI will be an integral part of C2 systems
  • C2 as a matter of design – a matter of viewing human and ’non-human partners’ as integral components of the C2 system of a software-based defence
  • Whose judgment should be trusted: AI-driven probabilities or human heuristics?
  • Only a matter of practicality, or also of social/ethical dimensions?
  • Pragmatist validity (including values) vs. positivist ’culture of objectivity’?
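The claim that simple ranking can beat optimization under uncertainty can be sketched in code. This is a toy illustration, not the authors’ model: the cue weights, noise level, and binary-cue setup are all illustrative assumptions. A ’take-the-best’ heuristic uses only the validity ranking of cues, while the optimizing rival must estimate the weights themselves, and those estimates are noisy under uncertainty.

```python
import random

def take_the_best(a, b, cue_order):
    """Decide on the first cue, in validity order, where the options differ."""
    for i in cue_order:
        if a[i] != b[i]:
            return 0 if a[i] > b[i] else 1
    return 0  # no cue discriminates: default to the first option

def weighted_sum(a, b, weights):
    """Pick the option with the higher weighted cue sum."""
    sa = sum(w * x for w, x in zip(weights, a))
    sb = sum(w * x for w, x in zip(weights, b))
    return 0 if sa >= sb else 1

random.seed(1)
TRUE_W = [0.8, 0.15, 0.05]   # assumed non-compensatory true weights
CUE_ORDER = [0, 1, 2]        # take-the-best needs only this ranking

def trial(noise):
    a = [random.randint(0, 1) for _ in TRUE_W]
    b = [random.randint(0, 1) for _ in TRUE_W]
    truth = weighted_sum(a, b, TRUE_W)                     # ground-truth winner
    noisy = [w + random.gauss(0, noise) for w in TRUE_W]   # uncertain estimates
    return (take_the_best(a, b, CUE_ORDER) == truth,
            weighted_sum(a, b, noisy) == truth)

results = [trial(noise=0.5) for _ in range(10_000)]
ttb_acc = sum(r[0] for r in results) / len(results)
opt_acc = sum(r[1] for r in results) / len(results)
print(f"take-the-best: {ttb_acc:.1%}   noisy optimization: {opt_acc:.1%}")
```

With non-compensatory weights, the ranking alone reproduces the optimal decision, while the optimizer pays for its noisy weight estimates; speed and simplicity come at no cost in accuracy here.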

5 of 17

We need to talk …

  • Risks and uncertainties (possible overconfidence in quantification)
  • How they are perceived and treated
  • Possible threat to the core of the military profession’s task as decision-maker
  • Professional ethos, identity and ethics

6 of 17

Classification of uncertainties

7 of 17

Attitudes towards uncertainty

Accept

Control

Predict

3 Dimensions:

- Intellectual (how we think about it)

- Practical (how we actually treat it)

- Emotional (how we feel about it)

8 of 17

Qualitative vs. Quantitative uncertainties

  • Quantitative uncertainties – probabilistic (legitimate when future events can be expected to obey the same fundamental logic)
    • A closed probability space
    • Historical frequencies
    • Bayesian approaches, including subjective probabilities, follow the same basic logic
  • Qualitative uncertainties – non-probabilistic (legitimate when alternative future events obey structurally different logics)
    • Lack knowledge not only about frequency, but also about what may happen
    • i.e. no closed probability space
  • Notions of quantitative vs. qualitative uncertainty will be fundamental for an operator’s assessment of whom to trust in the face of uncertainty
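A minimal sketch of the distinction above (all numbers and hypothesis names are illustrative): Bayesian updating is well defined only over a closed probability space, i.e. an exhaustive hypothesis set. If the true process lies outside that set, the posterior still sums to 1, confidently allocated among the wrong candidates — the qualitative-uncertainty failure mode.

```python
def bayes_update(prior, likelihoods):
    """Posterior over a fixed hypothesis set, given P(evidence | H)."""
    joint = {h: prior[h] * likelihoods[h] for h in prior}
    z = sum(joint.values())
    return {h: p / z for h, p in joint.items()}

# Closed space: two candidate adversary postures (hypothetical).
prior = {"attack_north": 0.5, "attack_south": 0.5}
# The observed signal is likelier under a northern attack.
posterior = bayes_update(prior, {"attack_north": 0.8, "attack_south": 0.2})
print(posterior)  # mass shifts toward attack_north

# But if the adversary's actual option ("feint, then withdraw") was never
# in the hypothesis set, no amount of updating can surface it: the posterior
# still sums to 1 over the candidates we happened to enumerate.
assert abs(sum(posterior.values()) - 1.0) < 1e-9
```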

9 of 17

Intentional vs. Stochastic uncertainties

- What are we up against?

  • Stochastic uncertainty – random distribution (fluctuations, logistics)
  • Intentional uncertainty – typically (still) human actors
  • ’Autonomous’ robots, vehicles etc. – not truly autonomous in the sense of the intentionality of human action, in line with Bruner’s (1986) ’narrative mode of cognition’
  • Intuitive heuristics are closer to interpretive research than to quantitative analysis: both turn on empathic interpretation of what is going on in an adversary’s mind – what story is guiding his intentions and actions – which is fundamental to warfare’s longstanding practices of deception

10 of 17

Dynamic vs. Static uncertainties

  • Static uncertainties – the decision-maker is forced to make a high-stakes decision under uncertainty, without room for later adjustment (prospect theory; the soccer penalty kick)
    • Warning time < Reaction time
  • Dynamic uncertainties – a situation which allows for probing actions (ice-hockey penalty)
    • Warning time > Reaction time
  • ’Dynamic decision-making’ (Brehmer)
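The warning-time/reaction-time rule above can be stated as a one-line classifier (a sketch; the function name and the illustrative times are assumptions, not measured values).

```python
def uncertainty_mode(warning_time_s, reaction_time_s):
    """Classify per the slide's rule: probing is possible only when the
    decision-maker can react within the available warning time."""
    return "dynamic" if warning_time_s > reaction_time_s else "static"

# Soccer penalty: the ball arrives before a goalkeeper can react.
print(uncertainty_mode(warning_time_s=0.4, reaction_time_s=0.6))   # static
# Ice-hockey penalty shot: the approach leaves room to probe and adjust.
print(uncertainty_mode(warning_time_s=4.0, reaction_time_s=0.6))   # dynamic
```

The same rule explains the next slide’s point: shortening one’s own reaction time flips the classification from static to dynamic without any change in warning time.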

11 of 17

- Maneuver warfare in an era of AI based on real-time data?

    • Shortening reaction-time may convert static uncertainty into dynamic uncertainty.
    • In the face of AI systems, this core of military common sense about ’getting inside the adversary’s decision loop’ is severely challenged; maneuver warfare may become nearly irrelevant for any actor that has not integrated non-human AI partners into the design of its C2 system
    • Remaining remedies may be massive force, attrition at high cost, and exploitation of intentionality through deception

12 of 17

Classification of data feeding the AI

13 of 17

Beyond (un)supervised learning: What data?

  • What information has shaped the AI?
  • Military research and military training and exercises in splendid isolation from each other – only a fraction of data is transferred between these spheres
  • Research programs in splendid isolation from military practice
  • Hoffman et al (2022) 10 recommendations for AI-development:
    • Small-scale experiments
    • Real-world operators, actually having knowledge about their task and equipment
    • Identify any gains before scaling up

14 of 17

Neither ’hard’ nor ’soft’ OR interested?

  • ’hard’ (American) OR – quantitative, modelling & simulation (AI)
  • ’soft’ (British) OR – qualitative, decision-making
  • While this divide has received attention, practice itself has not
  • Hence a call for more practice-based exploration of the possibilities for AI enhancement – a call that has not been met
  • From an operator’s perspective what fed the AI makes a world of difference:
    • Assumptions-based simulations
    • One-sided exercise data
    • Double-sided exercise data
    • Real-time combat data
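One way to keep the four data origins above visible to an operator is to tag every training sample with its provenance and report the mix. This is a hypothetical sketch (the class names, fields, and sample batch are invented for illustration), showing the idea of communicating what actually fed the AI rather than a fielded design.

```python
from dataclasses import dataclass
from enum import Enum

class DataOrigin(Enum):
    """Provenance labels mirroring the four origins listed above."""
    SIMULATION = "assumptions-based simulation"
    ONE_SIDED_EXERCISE = "one-sided exercise"
    TWO_SIDED_EXERCISE = "double-sided exercise"
    REAL_COMBAT = "real-time combat"

@dataclass(frozen=True)
class TrainingSample:
    features: tuple
    label: str
    origin: DataOrigin   # provenance travels with the data

def origin_profile(samples):
    """Share of each origin, so an operator can assess the basis for predictions."""
    counts = {o: 0 for o in DataOrigin}
    for s in samples:
        counts[s.origin] += 1
    total = len(samples) or 1
    return {o.value: counts[o] / total for o in DataOrigin}

batch = [
    TrainingSample((1.0, 0.2), "threat", DataOrigin.SIMULATION),
    TrainingSample((0.3, 0.9), "no_threat", DataOrigin.SIMULATION),
    TrainingSample((0.7, 0.5), "threat", DataOrigin.TWO_SIDED_EXERCISE),
    TrainingSample((0.9, 0.1), "threat", DataOrigin.REAL_COMBAT),
]
print(origin_profile(batch))  # e.g. half the batch is simulation-derived
```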

15 of 17

The professional ethos

  • Leaning towards the emotional aspects of uncertainty – accepting it rather than predicting and controlling it
  • Emphasizing the social and ethical – if not holy – aspects of risking one’s life as well as the life of others by accepting the responsibility of taking the decision
  • The severity of the trade-off between rationality and the military ethos also makes it important to know the very quality of the data feeding – and thereby forming – the ’non-human partner’
  • … thus taking us back to the core of the most human of uncertainties; that of intentionality
  • i.e. the gold-standard for data quality
  • If military ethos and professional norms are going to be challenged by ’non-human partners’ – this had better be good

16 of 17

Looking at other fields

17 of 17

Who else is handling a mess of different data for the sake of practicality?

  • Public health research and practice
  • Has to deal with the entire breadth of human behavior – far beyond the laboratory or clinical practice
  • Estimating which measures would be adequate and which recommendations would actually be followed …
  • … an insight that the ones carrying the ultimate risk should also have a say
  • Topological data analysis – providing an understanding of the basis for analysis (rather than explaining the analysis)
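The simplest instance of topological data analysis is 0-dimensional persistence: it reports at which distance scales clusters of data points merge, describing the shape of the underlying data rather than explaining any analysis run on it. A minimal sketch (pure standard library; the point cloud is invented for illustration):

```python
from itertools import combinations
import math

def h0_barcode(points):
    """Death scales of connected components (single-linkage merge distances),
    i.e. the finite bars of the dimension-0 Vietoris-Rips persistence diagram."""
    edges = sorted(
        (math.dist(points[i], points[j]), i, j)
        for i, j in combinations(range(len(points)), 2)
    )
    parent = list(range(len(points)))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    deaths = []
    for d, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:             # two components merge at scale d
            parent[ri] = rj
            deaths.append(d)
    return deaths                # the one infinite bar (the whole cloud) is omitted

# Two well-separated clusters: three small merge scales within clusters,
# one large scale where the clusters finally join.
cloud = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (5.0, 5.0), (5.1, 5.0)]
print(h0_barcode(cloud))
```

The long bar exposes the two-cluster structure of the data feeding an analysis — an understanding of the basis for the analysis, in the slide’s sense, without explaining the analysis itself.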