
Human-Machine Interactions as Partnership

Dr. Shelley P. Gallup, Dr. Mark E. Nissen

Naval Postgraduate School

Information Sciences Department


Team and Partnership: BLUF

  • Human-Machine Teams: We find that the idea of human-machine teaming is only partially adequate as a description of future systems.
    • To argue that people and autonomous systems (AS) represent teammates is a bit like saying that a driver “teams” with his or her automobile when engaging cruise control.
  • Human-Machine Partnerships: “we create our world through language, an observation that has important consequences for design.” (Winograd and Flores)
  • Understanding the essence of biological systems through autopoiesis: “An autopoietic system is a network of processes of production (transformation and destruction) of components that (1) through their interactions and transformations continuously regenerate the network of processes (relations) that produced them, and (2) constitute it (the system) as a concrete unity in space.” (Maturana and Varela)


Awareness: what is it, really?

  • In human-machine teaming, the concept of awareness remains undefined, especially on the machine side of the “team.” Instead, what passes for a “team” reduces to pre-designed algorithms serving as decision aids to human decision-makers.


The Logic of Distinction

  • Philip Herbst: distinction as a non-severable triad of noticing, from the universe of possibilities, an “inside” (the circle), an “outside” (everything outside the circle), and a “crossing” (how one notices the distinction). (What Happens When We Make a Distinction: An Elementary Introduction to Co-Genetic Logic, 1993.) A minimal data-structure sketch follows this list.
    • It is co-genetic: the three elements come into being together
    • It is non-separable
    • It is non-reducible (there cannot be fewer than three components)
    • It is contextual
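
A minimal sketch of the co-genetic triad as a data structure, in Python. All names here (Distinction, draw) are illustrative, not from Herbst; the point is that the frozen dataclass makes the three elements come into being together and forbids separating them once the distinction is drawn.

```python
from dataclasses import dataclass
from typing import FrozenSet

# Hypothetical model of Herbst's triad: inside, outside, and crossing
# exist only together; the frozen dataclass forbids separating or
# mutating them after the distinction is drawn.
@dataclass(frozen=True)
class Distinction:
    inside: FrozenSet[str]    # what the circle encloses
    outside: FrozenSet[str]   # everything outside the circle
    crossing: str             # how the observer notices the boundary

    def __post_init__(self):
        # Co-genetic and non-reducible: all three elements must exist.
        if not self.inside or not self.outside or not self.crossing:
            raise ValueError("a distinction requires all three elements")
        # Contextual: inside and outside partition the chosen universe.
        if self.inside & self.outside:
            raise ValueError("inside and outside cannot overlap")

def draw(universe: FrozenSet[str], circle: FrozenSet[str], crossing: str) -> Distinction:
    """Drawing the circle brings all three elements into being at once."""
    return Distinction(inside=circle, outside=universe - circle, crossing=crossing)
```

For example, draw(frozenset({"friend", "foe", "neutral"}), frozenset({"foe"}), "radar contact") yields inside, outside, and crossing in a single act; no element can be created or removed alone.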


Update to OODA loop


… and now including Distinction


Intention

  • In-tention: to bring into view what ought to be; the navigation toward an outcome.
  • In the form of a distinction.


Competent Dialogue

  • The trading of distinctions between two entities, arriving at common ground.

    • Human—recognizes context
    • Machine—resolves data

    • Dialogue: bringing both together to create a new thought or action (a sketch follows below).
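
A hedged sketch of competent dialogue as a simple protocol, in Python. Everything here (competent_dialogue, distinguish, revise) is a hypothetical interface, not an existing API: human and machine each offer a distinction, and the exchange repeats until the two distinctions converge on common ground.

```python
# Hypothetical protocol sketch: the human recognizes context, the
# machine resolves data, and they trade distinctions until they agree.
def competent_dialogue(human, machine, situation, max_rounds=10):
    """Trade distinctions until both parties draw the same boundary."""
    for _ in range(max_rounds):
        h = human.distinguish(situation)    # human: recognizes context
        m = machine.distinguish(situation)  # machine: resolves data
        if h == m:
            return h                        # common ground: a new thought or action
        # Each side revises its view in light of the other's distinction.
        situation = human.revise(situation, m)
        situation = machine.revise(situation, h)
    return None  # no common ground reached: escalate to the human's last word
```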


Anticipation

  • Create a dialogic machine that can represent data and its view of the way forward.
  • Create the means for the human to trade distinctions with a machine of enhanced autonomy.
  • Future research: following slides


Use of the Human Autonomic Nervous System (ANS) as an exemplar

  • Human beings have autonomy “baked in” via the amygdala and the 10th cranial nerve.

  • This mechanism is described by polyvagal theory


Adaptation of a Polyvagal Theory Analog

[Figure: Polyvagal analog. Arousal spans three bands: hyperarousal (fight or flight) above, a central window of tolerance in which requisite variety and learning operate, and hypoarousal (fawn or freeze) below.]

The human 10th cranial nerve innervates all organs of the body, producing reactions through hormone response.

The human-machine partnership follows the same development cycle as an infant growing into an individuated adult.


Extending the analogy to systems of systems

[Figure: Extending the analog to systems of systems. The window of tolerance is dynamic, shifting with the increasing danger of combat.]

  • Requisite variety must expand to meet an increase in context factors, or contract to focus combat power in a smaller context.
  • Commander’s intent and ROE bias the system toward fight or toward retreat; self-defense applies if attacked.
  • Context outside the WOT requires feedback to the systems of systems to increase the factors improving requisite variety: second-order cybernetics.
  • Dynamic contexts, unknown unknowns, and triggers demand learning to adjust to risk through dialogue with automation.


Window of Control (or Window of Tolerance) vs. Requisite Variety

[Figure: As the dynamic window of tolerance increases, the requisite variety needed for control must increase with it; adaptation comes through the human-machine partnership.]


Discussion: Main Points and Example

Although this is a biological analog offered as an example, it closely follows the cybernetic control models of W. Ross Ashby. In cybernetics the control function keeps the “system” within tolerances so that it can perform its function; this is first-order control. Occasionally the system exceeds a threshold and requires a second-order response. The controller’s ability to bring the system back into tolerance is a function of the controller’s requisite variety for dealing with exceptions.
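
For reference, Ashby’s law of requisite variety in its standard textbook form (stated here from the cybernetics literature, not quoted from this deck): with V(D) the variety of disturbances, V(R) the variety of the controller’s responses, and V(E) the variety of outcomes that escape regulation,

\[
V(E) \;\ge\; \frac{V(D)}{V(R)}
\qquad\text{equivalently}\qquad
H(E) \;\ge\; H(D) - H(R).
\]

Outcomes stay within tolerance only if the controller’s variety matches or exceeds the variety of the disturbances it must absorb: only variety can destroy variety.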

In the first slide this is operationalized by creating a “window of tolerance.” This is first-order control, and our autonomic nervous system takes care of the ups and downs within this window. When exceptional triggers are discerned in the outside environment, the autonomic nervous system reacts through the sympathetic (action-inducing) and parasympathetic (quieting) nervous systems.
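
A minimal first-order controller sketch in Python, under assumed bounds and step sizes (all values illustrative): the state is nudged back toward a fixed window of tolerance, the way the autonomic nervous system handles routine ups and downs.

```python
LOW, HIGH = 0.3, 0.7  # fixed window of tolerance (assumed bounds)

def first_order_control(arousal: float) -> float:
    """First-order control: nudge the state back toward the window.
    No learning and no adaptation; the window itself never moves."""
    if arousal > HIGH:
        return arousal - 0.1  # parasympathetic analog: quieting
    if arousal < LOW:
        return arousal + 0.1  # sympathetic analog: action-inducing
    return arousal            # within tolerance: nothing to do
```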

In the next slide, however, the window of tolerance is NOT fixed. It is dynamic, determined by context, commander’s intent, being targeted unexpectedly, and so on. Many external conditions can affect the WOT. How the system of systems reacts, and how quickly, is a function of a human-machine partnership, each party doing what it is best at, with the human having the last word. This partnership results from a kind of “dialog” between the humans and the machine partner interacting with a time-critical controller. A Monterey Phoenix simulation is included here. (add MP)
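
A hedged sketch of the second-order case, in Python. The interfaces (propose_response, approve_or_override) are hypothetical stand-ins, not real APIs: context first reshapes the window itself, and an exceptional trigger opens a dialogue in which the machine proposes and the human has the last word.

```python
# Second-order control sketch (illustrative): the window of tolerance
# is itself adjusted by context before any response is chosen.
def second_order_control(state, window, context, human, machine):
    low, high = window
    if context.get("targeted_unexpectedly"):
        low, high = low + 0.05, high - 0.05  # condense the window: more triggers
    if context.get("commanders_intent") == "restraint":
        low, high = low - 0.05, high + 0.05  # expand the window: tolerate ambiguity
    if low <= state <= high:
        return state, (low, high)            # first-order control suffices
    # Exceptional trigger: the machine proposes, the human disposes.
    proposal = machine.propose_response(state, (low, high))
    action = human.approve_or_override(proposal)  # the human's last word
    return action.apply(state), (low, high)
```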


Relationship to AI development for war at sea

No doubt “AI” (I prefer “advanced autonomy”) has made strides. But these systems are simplistic, built on the theory that if we feed the machine enough examples, it will “learn” what is correct. So far such systems do NOT include adequate requisite variety to serve as a “controller” in fully autonomous modes with weapons-release authority. The concept of a human on another platform acting as the ultimate controller for weapons on an autonomous platform is rife with difficulties, not the least of which is the assumption of clear and consistent communications.

In a new concept, the “machine” is not “educated” to recognize and properly react to all contexts; it will not have the requisite variety needed.

Instead, what is needed is an understanding of what constitutes a “trigger” in dynamic contexts, adjusting the window of tolerance and thereby adjusting the response in dialog with the human. What changes the window of tolerance? Commander’s intent, effects (kinetic and non-kinetic), weapons availability (for each ship and across the force), tactics, willingness to take casualties, the influence of those effects on the larger strategic picture, and whether the setting is deterrence or hot war. Many factors expand or condense the WOT. When it condenses, there are more triggers, and more second-order actions by forces to bring the war back into a window of tolerance or to expand the window. A toy sketch of these factors follows.
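
A toy weighting of the factors just listed, in Python. The factor names and weights are assumptions made up for illustration, not doctrine; the point is only that condensing factors shrink the window, producing more triggers.

```python
# Illustrative factor weights (assumed, not doctrinal): positive values
# expand the window of tolerance, negative values condense it.
FACTORS = {
    "commanders_intent_aggressive": -0.2,  # condenses the window
    "hot_war": -0.3,                       # condenses it further
    "low_casualty_tolerance": -0.1,        # condenses it
    "weapons_available_force_wide": +0.1,  # expands it
    "deterrence_posture": +0.2,            # expands it
}

def window_width(baseline: float, active: set) -> float:
    """A condensed window means more triggers and more second-order actions."""
    return max(0.0, baseline + sum(FACTORS[f] for f in active & FACTORS.keys()))

# Example: a hot war with aggressive intent condenses the window sharply.
print(window_width(1.0, {"hot_war", "commanders_intent_aggressive"}))  # 0.5
```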


Words are important

There is a problem with our use of the term “artificial intelligence.” What is meant by artificial? By intelligence? These terms are problematic by themselves, and together they only confuse what we are really talking about.

We can create systems with a great deal of ability to recognize a “thing.” However, there is no attachment at the system level that a human would recognize as an emotional connection. That is why a dialogic approach is also needed. Anthropomorphism is important for giving the actors a sense that they are understood, and that each cares about the other’s intentions and reactions.

War is a set of initial conditions, shifts in those conditions, and a complex set of unknowns and unexpected emergent behaviors.

All AI is autonomy. It can be complex autonomy, but it is still based on human development, and it includes biases and assumptions. Triggers do not include these. Triggers create defense mechanisms (to borrow from psychology and Dr. Wilhelm Reich); how those defense patterns are used lies in the human domain.