1 of 12

Loyal Servant or Loose Cannon? Capabilities and Concerns with Autonomous AI for C2 and Crisis Management

Bjørn T. Bakken, Inger Lund-Kordahl, Kristoffer Lie Eide

University of Inland Norway (INN)

2 of 12

Research Problem

  • Autonomous AI has the potential to revolutionize C2 and crisis management as the pivotal capability supporting the decision-making cycle in conflict, crisis, and warfare.
  • However, adding an autonomous capability to generative AI systems carries an inherent risk: such systems cannot be fully tested against unforeseen, hybrid threats.

Research Question

  • How can autonomous, generative AI be used efficiently, effectively, and safely within civilian and military (C2) crisis management?

3 of 12

Automation vs. Autonomy in AI

Automation: Cognitive Effort

“When the system functions with no/little human operator involvement; however, the system performance is limited to the specific actions it has been designed to do.”

Autonomy: Cognitive Control

“The degree to which the system has the capability to achieve mission goals independently, performing well under significant uncertainties, for extended periods of time, with limited or non-existent communication, and with the ability to compensate for system failures, all without external intervention.”

4 of 12

Traditional vs. Generative AI

Traditional AI: Rule-based

  • Explicit instructions
  • Predefined rules
  • Specific tasks
  • Expert knowledge

Generative AI: Data-driven

  • Machine learning
  • (Deep) neural networks
  • Patterns and structures
  • Generates new content

5 of 12

Roles and Functions for AI in C2

  • Risk Assessment
    • Mapping critical events by probability and consequence
    • Qualitative judgements of threats and vulnerabilities
    • Identifying dependencies and cascade effects
  • Preparedness Planning
    • Proposing and analyzing measures for prevention and mitigation
    • Analyzing demand for resources and capabilities relative to availability
    • Computing optimal strategies and courses of action (COA)
  • Training and Exercises
    • Assessment of training needs in relation to requirements
    • Exercise program design and scheduling
    • Scenario generation and event management
    • Evaluative feedback: lessons identified, learned, and implemented
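As an illustration of the risk-assessment role above, the sketch below scores candidate events on probability and consequence and bins them into a simple risk matrix. It is a minimal, hypothetical sketch: the events, scales, and thresholds are assumptions made for illustration, not taken from the study.

    # Minimal risk-matrix sketch: score events by probability and consequence,
    # then bin them into Low / Medium / High risk.
    # All events, scales, and thresholds below are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class Event:
        name: str
        probability: int   # 1 (rare) .. 5 (almost certain)
        consequence: int   # 1 (negligible) .. 5 (catastrophic)

        @property
        def risk_score(self) -> int:
            return self.probability * self.consequence

    def risk_level(score: int) -> str:
        # Thresholds chosen only to illustrate the binning step.
        if score >= 15:
            return "High"
        if score >= 8:
            return "Medium"
        return "Low"

    events = [
        Event("Power-grid outage", probability=3, consequence=4),
        Event("Cyber attack on C2 network", probability=4, consequence=5),
        Event("Localized flooding", probability=2, consequence=3),
    ]

    for e in sorted(events, key=lambda e: e.risk_score, reverse=True):
        print(f"{e.name:28s} P={e.probability} C={e.consequence} "
              f"score={e.risk_score:2d} -> {risk_level(e.risk_score)}")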

6 of 12

Hypotheses

  • H1.1: A decision-maker will tend to use more supervised AI when time pressure (TP) is low compared to when TP is high.
  • H1.2: A decision-maker will tend to use more autonomous AI when TP is high compared to when TP is low.
  • H2.1: A decision-maker will tend to use more classic AI when VUCA is low compared to when VUCA is high.
  • H2.2: A decision-maker will tend to use more generative AI when VUCA is high compared to when VUCA is low.

Applications of AI across the CM context

                        Wickedness (VUCA)
  Time pressure     Low                        High
  Low               Classic AI, supervised     Generative AI, supervised
  High              Classic AI, autonomous     Generative AI, autonomous
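Read as a decision aid, the 2×2 matrix above maps context to a hypothesized best-fit AI mode. The snippet below simply encodes that mapping as a lookup table; it is a sketch of the hypotheses, not an implemented system, and the function and variable names are illustrative.

    # Sketch: the hypothesized context-to-AI-mode mapping from the 2x2 matrix above.
    # Keys are (time_pressure, vuca); values are (AI family, supervision mode).
    AI_MODE = {
        ("low",  "low"):  ("classic AI",    "supervised"),
        ("low",  "high"): ("generative AI", "supervised"),
        ("high", "low"):  ("classic AI",    "autonomous"),
        ("high", "high"): ("generative AI", "autonomous"),
    }

    def recommend(time_pressure: str, vuca: str) -> str:
        family, mode = AI_MODE[(time_pressure, vuca)]
        return f"{mode} {family}"

    print(recommend("high", "high"))  # -> autonomous generative AI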

7 of 12

Research Model, Experimental Method

[Model diagram: the Context / Scenario, characterized by Complexity (VUCA) and Time Pressure, shapes AI Usage for Decision Support, which in turn drives Performance & Outcomes and Learning (I & II).]
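A minimal analysis sketch for this 2×2 design is shown below. It assumes VUCA complexity and time pressure are manipulated across scenarios and that AI usage and performance are measured per trial; the variable names, simulated data, and model formulas are illustrative assumptions, not the paper's actual analysis.

    # Sketch of how the 2x2 (VUCA x time pressure) design could be analyzed.
    # Simulated data and model formulas are illustrative assumptions only.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 40  # simulated trials
    df = pd.DataFrame({
        "vuca":          rng.choice(["low", "high"], n),
        "time_pressure": rng.choice(["low", "high"], n),
        "ai_usage":      rng.uniform(0, 1, n),   # share of decisions delegated to AI
    })
    df["performance"] = 0.6 + 0.1 * df["ai_usage"] + rng.normal(0, 0.05, n)

    # H1/H2: does AI usage depend on the context factors?
    usage_model = smf.ols("ai_usage ~ C(vuca) * C(time_pressure)", data=df).fit()
    # Do the context factors moderate the effect of AI usage on performance?
    perf_model = smf.ols("performance ~ ai_usage * C(vuca) * C(time_pressure)", data=df).fit()
    print(usage_model.summary())
    print(perf_model.params)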

8 of 12

Thanks for your attention!

This study is partially financed by the EU/Interreg Sweden-Norway programme (SWE: 20363842, NOR: IRSN2023-0026).

References in original paper

9 of 12

Concerns with Autonomous, Generative AI

Prominently:

  • Automation Paradox
  • Alignment Problem
  • Automation Bias

Lack of:

  • Transparency
  • Explainability
  • Accountability
  • Objectivity

=> Can Generative AI be Trusted?

10 of 12

Theoretical/Contextual Landscape

System & context
    • Stable → VUCA*
    • Vulnerable → Resilient

Strategy & decision-making
    • Reactive → Proactive
    • Routines → Improvisation

Management & organization
    • Centralized → Distributed
    • Rigid → Flexible

* VUCA = volatility, uncertainty, complexity, and ambiguity

11 of 12

AI Innovations: The Transition from Automation to Autonomy – from mechanisation of cognitive effort to transfer of cognitive control

[Figure: Automation and Autonomy plotted against Relative Pace of AI Innovations (low to high), Environment Complexity (low / medium / high), and Time Horizon (past / present / future), with autonomy at the high end of each scale. Captions from Endsley (2015, p. 4): “extension of automation”, “intermediate levels of autonomy”, “becomes more capable over time”, “a gradual evolution of system control”.]

IBM.com, «intelligent automation»: a self-improving and self-correcting production process control system.

12 of 12

AI and VUCA – a «fitness» hypothesis

[Figure: demand for and supply of AI capability mapped against TIME, VUCA, and RESOURCES.]

Demand (crisis-management contexts):

  • Incident handling: linear, stable; direct control; routine and procedure; tactical leverage
  • Crisis management: complex, volatile; indirect control; resilience, agility; operational leverage
  • Chaos «piloting»: disruptive, chaotic; loss of control; improvisation; strategic leverage

Supply (from traditional to generative AI): rules & procedures; static, pre-defined knowledge base; supervised learning; neural networks; unsupervised learning (self-learning); deep neural networks