1 of 29

Natalie Nakhla, M.A.Sc., Ph.D.,

Mackenzie Tummers, Susan Watson, and Jean-Pierre Sabbagh El-Rami

Cyber Operations and Resilient Communications (CORC) section, Ottawa Research Center

Department of National Defence

Cyber effects analysis and characterization

DGRDPA/DRDCCS

ICCRTS’24 (Paper 58)

2 of 29


Outline

  • Problem/introduction
  • Research questions and applications
  • Proposed approach and methodology overview
  • Results and analysis
  • Discussion and lessons learned
  • Summary and future work

3 of 29


Problem

  • Calculating metrics for cyber operations is far less studied than calculating metrics for kinetic operations
  • Factors:
    • limited options for generating cyber capabilities/effects
    • lack of explicit quantitative observations (to assess the accuracy and effectiveness of a cyber action)
      • kinetic actions are characterized and bounded by physical properties
    • challenges with obtaining direct feedback and situational awareness (SA) as to whether a cyber action was successful

4 of 29


What do we mean by “cyber effects analysis”?

Effect: A change in the state of a target or a system resulting from an event or combination of events in the operating environment [1]

Cyber effects: Interruption, modification, degradation, fabrication, or interception of the ITI (or the information that resides within it) 🡪 achieve military effects: deny, degrade, destroy, disrupt [2]

Often also used to imply the means/capability of achieving the goal

Cyber effects analysis: Metrics, assessments, and characterizations of cyber effects to determine the probability of success, how the effect performs under various conditions, etc.

[1] DND Defence Terminology Bank (terminology.mil.ca)

[2] Bernier, M. (2013), Military Activities and Cyber Effects (MACE) Taxonomy, DRDC CORA, DRDC-CORA-TM-2013-226

5 of 29

Research Questions


- How can we characterize cyber effects in terms of their attributes and measures of performance (MoPs)?

- How do we select which cyber actions/attacks to employ from a set of options? What are the criteria and trade-offs?

- How can we conduct CEE? Estimate the propagation of an attack? Higher-order effects? Collateral damage?

- …etc. See [3] for the full list

[3] Nakhla, N., Dondo, M., and Watson, S., Metrics and measures for cyber effects analysis - decision support for cyber operations, (PA), DRDC-RDDC-2023-L086 to D Cyber Ops FD, March 2023, PROTECTED A.

Sample MoPs:

  • Covertness/stealthiness
  • Attribution
  • Probability of success and reliability
  • Propagation
  • Potential for collateral damage
  • etc…[3]

Goal: Trade-off analysis

6 of 29

Application to Mission Planning and the Joint Targeting Cycle


  • Stage 3 - Capabilities analysis [4]:
    • Capabilities assignment
    • Feasibility assessment
    • Effect estimate
    • Vulnerability assessment
      • Detectability risk
      • Attribution risk
      • Co-opting risk
      • Misuse risk
      • Security vulnerability risk
  • Stage 5 - Execution [4] 🡪 dynamic targeting
  • Stage 6 - Assessment [4] 🡪 more metrics/characterizations!

[4] Targeting Staff Handbook v. 1.9, 2023

7 of 29


Outline

  • Problem/introduction
  • Research questions and applications
  • Proposed approach and methodology overview
  • Results and analysis
  • Discussion and lessons learned
  • Summary and future work

8 of 29

Proposed approach and overview


  • Developed methodology and metrics analysis approach for effects analysis in an enterprise network
  • Focused on probability of success and reliability metrics

  • Two main capabilities:

1. Caldera adversary emulation platform

2. Cyber Gym for Intelligent Learning (CyGIL) environment

9 of 29

Proposed approach and overview


  • Two main capabilities:

1. Caldera adversary emulation platform

2. Cyber Gym for Intelligent Learning (CyGIL) environment

  • Caldera: Automated adversary emulation platform developed by MITRE
  • Network defence hardening/red teaming
  • Operations based on the MITRE ATT&CK* framework

* https://attack.mitre.org/

10 of 29

Proposed approach and overview


  • Two main capabilities:

1. Caldera adversary emulation platform

2. Cyber Gym for Intelligent Learning (CyGIL) environment

CyGIL environment [5]

  • Trained agents (reinforcement learning) implement cyber operations on test networks
  • Agents are trained to execute attack sequences in order to reach a certain goal
  • Agents identify optimal paths to reach the goal
  • Once trained, attacks are run on the network using the evaluation sequence (see the sketch below)

[5] Li, L. et al., “Building artificial intelligence agents for cyber operations using deep reinforcement learning – A sim-to-real agent training environment”, DRDC-RDDC-2022-R160, Oct 2022
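As a rough illustration of the evaluation step, here is a minimal sketch (not the CyGIL implementation; the Gym-style environment/policy interface and names are assumptions): a trained policy is rolled out greedily and the actions it takes are recorded as the attack sequence.

```python
# Minimal sketch (hypothetical interfaces): roll out a trained RL policy
# greedily to extract the ordered attack sequence it has learned.
def extract_attack_sequence(env, policy, max_steps=50):
    """Record the actions a trained agent takes until the goal is reached."""
    sequence = []
    obs = env.reset()
    for _ in range(max_steps):
        action = policy.best_action(obs)        # greedy, no exploration
        sequence.append(action)
        obs, reward, done, info = env.step(action)
        if done:                                # goal reached (e.g., elevated privileges)
            break
    return sequence
```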

11 of 29

Proposed approach and overview


[Architecture diagram - test environment: a CyGIL VM (with proxy) and a Caldera server drive the network VMs; a sequence agent issues actions; the observation space, logs, and other metrics flow to an analysis block; inputs include network traffic generation, manual configurations, and network perturbations.]

  • The test environment includes the CyGIL VM, the virtual scenario network, and the Caldera server
  • After CyGIL training is complete for a scenario and goal (before our analysis), reinforcement learning agents identify optimal attack paths
  • The sequence agent sends explicit attack paths to the CyGIL VM for execution; results are forwarded to the analysis block (a sketch follows below)
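To make the data flow concrete, here is a minimal sketch of the sequence-agent loop (hypothetical interfaces, not the actual CyGIL/Caldera APIs; execute_action is assumed to dispatch one action via the test environment and report whether it succeeded):

```python
# Sketch only: replay an explicit attack path and collect per-action results
# for the analysis block. Interfaces and names are hypothetical.
from dataclasses import dataclass

@dataclass
class ActionResult:
    action_id: str
    attempts: int        # tries used (up to max_trials)
    succeeded: bool

def run_sequence(attack_path, execute_action, max_trials=10):
    results = []
    for action_id in attack_path:
        attempts, succeeded = 0, False
        while attempts < max_trials and not succeeded:
            attempts += 1
            succeeded = execute_action(action_id)
        results.append(ActionResult(action_id, attempts, succeeded))
        if not succeeded:
            break  # dependent actions are not dispatched (see metrics analysis)
    return results
```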

12 of 29

Metrics analysis

  • Actions are executed repeatedly until they succeed or the maximum number of trials is reached
  • Probability of success, Pj, for action j (Pj = 0 if the maximum number of trials is reached without success)
  • The Pj metric worsens as more attempts are required 🡪 overt, could be flagged by blue defences
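A minimal sketch of how Pj can be computed from the trial data (our assumption, not a formula quoted from the paper: Pj as the ratio of successful executions to total attempts, which is consistent with the baseline value reported later, 0.58 ≈ 1/1.72 average attempts):

```python
# Sketch only (assumed formula): P_j = successes / total attempts across all
# iterations of the sequence, with P_j = 0 when the action never succeeds
# within the maximum number of trials.
def probability_of_success(results_for_action):
    """results_for_action: list of (attempts, succeeded) over all iterations."""
    successes = sum(1 for _, ok in results_for_action if ok)
    if successes == 0:
        return 0.0                           # max trials reached every time
    total_attempts = sum(attempts for attempts, _ in results_for_action)
    return successes / total_attempts        # e.g., 50 / (50 * 1.72) ≈ 0.58
```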

13 of 29

Metrics analysis


  • Factors:
    • Action dependency: if an action fails, subsequent dependent actions are not even dispatched 🡪 analogous to cutting the attack graph closer to the attacker
    • Probability of success - granularity:
      • E.g., action-level vs. objective-level 🡪 in our case, the success of individual actions in the context of the entire attack path (a toy combination of the two levels is sketched below)
      • Which actions are more “difficult” than others?
      • Repeatability, i.e., is the sequence of steps successful every time? If not, why, and which factors contribute?
      • Covertness
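For the action-level vs. objective-level distinction, a toy illustration (our simplifying assumption, not a result from the paper): if action successes along a path were treated as independent, an objective-level estimate would simply be the product of the action-level Pj values.

```python
# Toy illustration only: independence between actions is assumed here,
# whereas the action dependencies above make this only a rough estimate.
from math import prod

def objective_level_estimate(action_level_probs):
    """Combine per-action P_j values along a single attack path."""
    return prod(action_level_probs)

# e.g., a path where every action has P_j = 1 except one with P_j = 0.58
# gives an objective-level estimate of 0.58.
```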

14 of 29


Outline

  • Problem/introduction
  • Research questions and applications
  • Proposed approach and methodology overview
  • Results and analysis
  • Discussion and lessons learned
  • Summary and future work

15 of 29

Results and analysis - Scenario


[Scenario diagram: the attacker starts from an initial foothold, moves through the critical VMs in the attack chain, and targets the AD server (goal).]

  • Goal: obtain elevated privileges on the AD server
  • CyGIL agents identified optimal paths (RL)

16 of 29

Results and analysis


Baseline environment:

  • Experimental setup remained the same as for the original CyGIL training scenario
  • No perturbations
  • The sequence agent ran 50 iterations of the optimal path, with Trialmax = 10

🡪 All Pj = 1 (succeeded on the first try)

🡪 Except one action with Pj = 0.58 (1.72 average attempts to succeed)

17 of 29

Results and analysis


Perturbation #1: Varying reachability

  • Reachability (ability to send/receive packets) was limited between attack-critical machines in the experimental scenario
  • Mimics real-world networks where connectivity can be unpredictable
  • Actions that required connectivity failed
  • Varying the reachability of non-critical machines did not affect Pj
  • Pj worsens as actions get closer to the target 🡪 dependencies between actions

[Figure: Pj per action, ordered as actions move closer to the target]

18 of 29

Results and analysis


Perturbation #2: Varying traffic and bandwidth

  • Generated network traffic via large file transfers
  • Limited the bandwidth of the machines’ Ethernet connections (a throttling sketch follows below)
  • Actions related to critical machines failed if bandwidth was limited
    • E.g., Action 14 (discovery tactic, reverse nslookup) failed because VM-2’s bandwidth was throttled; subsequent dependent actions also failed
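The slides do not state how bandwidth was limited; one common way to do this on a Linux VM (an assumption for illustration only) is a token-bucket filter applied with tc, wrapped here in Python for consistency with the other sketches:

```python
# Illustration only: throttle/restore egress bandwidth on a Linux interface
# with `tc` (requires root). Interface name and rate are example values.
import subprocess

def throttle_bandwidth(interface="eth0", rate="1mbit"):
    subprocess.run(
        ["tc", "qdisc", "add", "dev", interface, "root",
         "tbf", "rate", rate, "burst", "32kbit", "latency", "400ms"],
        check=True,
    )

def clear_throttle(interface="eth0"):
    subprocess.run(["tc", "qdisc", "del", "dev", interface, "root"], check=True)
```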

 

19 of 29


Outline

  • Problem/introduction
  • Research questions and applications
  • Proposed approach and methodology overview
  • Results and analysis
  • Discussion and lessons learned
  • Summary and future work

20 of 29

Discussion/Lessons learned


  • Results were as expected for perturbations to reachability, traffic, etc.
  • Certain actions failed if they could not connect to other machines for required operations

🡪 mimics real-world network/environmental conditions

  • Perturbations to machines that were non-critical (w.r.t. the attack) did not affect the success metric
  • Baseline environment: with all conditions in place, some actions still did not succeed on the first trial

🡪 inherent to cyber operations

  • Action dependency: if an action failed, subsequent dependent actions also failed

🡪 Systems/operators should be able to pivot to other actions; more covert (fewer attempts of the same action)

  • Filter out results caused by ‘experiment-isms’ and the underlying architecture, e.g., an action failing because VMs needed to be reset 🡪 these pollute the metrics results

21 of 29

Discussion/Lessons learned


  • Reachability and traffic results highlighted the importance of knowing, where possible, the pattern of activity of the target network 🡪 when the network is busiest, who its users are, etc. 🡪 should be considered in the planning process

  • Network visibility: we used an emulated sample network 🡪 full visibility
    • This is not always the case; how do we model with limited visibility into the target network?
    • Need knowledge of the dependencies and relationships between assets, services, capabilities, and missions in order to estimate higher-order effects and collateral damage (an ongoing challenge that feeds into INT requirements; a toy dependency-graph sketch follows below)
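A toy illustration of that dependency idea (hypothetical data, our sketch only): once asset-service-mission dependencies are known, the higher-order effects of disrupting one asset can be estimated by following the dependency edges.

```python
# Toy example (hypothetical assets/services/missions): estimate which
# downstream services and missions are affected when one asset is disrupted.
from collections import deque

# X -> list of things that depend on X
DEPENDANTS = {
    "fileserver-vm": ["payroll-service"],
    "payroll-service": ["hr-mission"],
}

def downstream_effects(disrupted_asset):
    affected, queue = set(), deque([disrupted_asset])
    while queue:
        node = queue.popleft()
        for dep in DEPENDANTS.get(node, []):
            if dep not in affected:
                affected.add(dep)
                queue.append(dep)
    return affected

# downstream_effects("fileserver-vm") -> {"payroll-service", "hr-mission"}
```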

22 of 29


Summary

  • Analysis of cyber operations is more difficult than analysis of kinetic operations
  • Characterizing cyber effects is valuable for capabilities analysis, weaponeering, and the joint targeting cycle 🡪 attack planning
  • In this work, developed a methodology and metrics analysis for actions in an enterprise network, with a focus on the probability-of-success metric
  • Results showed that perturbations to reachability and connectivity for attack-critical machines affected action success
  • Future work: apply the methodology to characterize effects for operations in the RF domain
    • Wireless/IoT testbed: deliver the effect, analyze metrics for baseline/perturbed environments
      • Additional metrics: covertness, repeatability, etc.

23 of 29


Thank you!

24 of 29


Extra slides

25 of 29


Effects analysis framework

1. Data Collection

Collect data:

  • Intelligence data
  • Adversarial network facts
  • Vulnerability data

2. Exploit management

  • Identify exploits for target environment
  • Modify/improve existing exploits
  • Generate exploits if needed
  • Evaluate exploits - measure effect attributes in a simulation of the target environment

3. Capabilities analysis

  • Use analysis techniques (e.g., MADM, attack graphs + RL) to support COA selection and decision making (a weighted-sum sketch follows below)
  • Estimate the likelihood of successfully achieving desired effects
  • Conduct collateral effects estimation, trade-off analysis, etc.
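To illustrate the MADM step, a minimal weighted-sum sketch (the attribute names, weights, and scores are hypothetical examples, not values from the framework):

```python
# Illustrative weighted-sum MADM ranking of courses of action (COAs).
def rank_coas(coas, weights):
    """coas: {name: {attribute: score in [0, 1]}}; weights: {attribute: weight}."""
    scores = {
        name: sum(weights[attr] * attrs.get(attr, 0.0) for attr in weights)
        for name, attrs in coas.items()
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical example: favour probability of success, then covertness,
# then low collateral-damage risk.
coas = {
    "COA-A": {"p_success": 0.9, "covertness": 0.4, "low_collateral": 0.8},
    "COA-B": {"p_success": 0.6, "covertness": 0.9, "low_collateral": 0.9},
}
weights = {"p_success": 0.5, "covertness": 0.3, "low_collateral": 0.2}
print(rank_coas(coas, weights))   # COA-A: 0.73, COA-B: 0.75 -> COA-B ranks first
```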

4. Effect deployment

5. Post-deployment analysis

Conduct:

  • Battle damage assessment
  • Weapons-effect assessment
  • Re-attack assessment


26 of 29


27 of 29


Mapping of military to cyber effects with examples

28 of 29


29 of 29

Challenges and final thoughts


  • Limited visibility into the attacker’s network – how do we model based on limited intelligence data?
  • Need knowledge of the dependencies and relationships between assets, services, capabilities, and missions in order to estimate higher-order effects and collateral damage 🡪 feeds into INT requirements
  • We are not hackers 🡪 leverage existing expertise and platforms
    • Collaborative Security Test Environment (CSTE):
      • Access to training, cyber range
      • Observe/participate in exercises to understand the hacker’s process and further develop/define metrics, e.g.:
        • How do they know if they have succeeded?
        • What indicators do they look for?
