
Autonomous Material Handling for CMU MFI’s Lego Assembly/Disassembly Testbed

Fall Test Plan

Team Name

Dock, Dock, Go! (Team H)

Team Members

Sergi Widjaja

Siddhant Wadhwa

Soham Bhave

Sushanth Jayanth

Vineet Tambe

Sponsor

Dr. Gary Fedder, CMU Manufacturing Futures Institute (MFI)

Additional Stakeholders

Rod Heiple

Dr. Ji Zhang

Dr. Oliver Kroemer

Dr. David Bourne

Dr. Dimi Apostolopoulos

Date

September 20th, 2023

Project Title

Autonomous Material Handling for MFI’s Lego Assembly/Disassembly Testbed


Table of Contents

Introduction

Logistics
  Location
  Personnel
  Equipment
  Schedule

Tests
  Test 1: Robot Teleoperation Test
  Test 2: Multi-robot Planner Waypoint Following Test
  Test 3: Centralized Planner Test
  Test 4: Docking State Machine Transition Test
  Test 5: Localization and Perception Integration Validation
  Test 6: Model Predictive Path Integral Controller Test
  Test 7: Waypoint Following with Obstacle Avoidance
  Test 8: Low-level Perception Unit Test
  Test 9: Testbed Emulator (Offboard symbolic assembly planner) Validation 1/3
  Test 10: Testbed Emulator (Offboard symbolic assembly planner) Validation 2/3
  Test 11: Testbed Emulator (Offboard symbolic assembly planner) Validation 3/3
  Test 12: Fleet infrastructure (Offboard) REST API Validation
  Test 13: Offboard Mission Controller
  Test 14: Fall Validation Demonstration

Appendix: List of requirements

Introduction

This document provides a structured overview of the tests Team DockDockGo plans to carry out during the Fall 2023 semester. Our aim is to ensure that our system meets the defined performance standards. Alongside the test descriptions, we have included a schedule that sets out the expected completion dates for the various milestones and their corresponding tests, helping the team monitor progress and stay aligned with our goals.

Logistics

Location

In collaboration with the Manufacturing Futures Institute (MFI), we have secured the MFI Mezzanine area for testing our fleet of Neobotix MP400 wheeled ground robots. Beyond routine testing, we also have permission to conduct demonstrations in the Mezzanine, showcasing our robots’ capabilities and advancements.

Personnel

Beyond the five members of Team H, no additional personnel are needed for routine testing and demonstration. Sushanth Jayanth will be the main person in charge of operating the robot. Vineet Tambe will be in charge of planner integration testing. Soham Bhave is in charge of obstacle avoidance tests and robot software bring-up. Siddhant Wadhwa is in charge of the offboard stack tests. Sergi Widjaja is in charge of overseeing the overall testing procedure.

Equipment

Two Neobotix MP400 Autonomous Mobile Robots (AMRs), a sensor suite, batteries and charging infrastructure, an offboard server, and a human-robot interface (HRI) setup comprise all the equipment required for the demonstration. The test environment has already been set up, with most equipment, excluding the two AMRs, already present.

Schedule

Date: Sep 27th, 2023 | PR: 8
Milestone:
  • Ability to send velocity commands to the robot.
  • Ability to receive wheel odometry information coming from the robot.
  • Sensor pod installation.
Tests: 1
Requirements: M.P.03, M.P.06

Date: Oct 11th, 2023 | PR: 9
Milestone:
  • The robot is able to localize on the constructed map.
  • The robot is able to perceive obstacles and represent them on the map.
  • Ability to send and execute waypoints on the map.
  • The state of the robot is tracked appropriately.
  • Offboard infrastructure is realized to accommodate a single lego assembly.
Tests: 2, 4, 5, 8, 9, 10
Requirements: M.P.01, M.P.02, M.P.03, M.F.02, M.F.03

Date: Nov 1st, 2023 | PR: 10
Milestone:
  • Ability for the robot fleet to centrally plan and execute a set of non-conflicting paths.
  • Ability for the robot fleet to execute paths using an MPPI-based controller.
  • Ability to execute a path while avoiding dynamic obstacles not present on the map.
  • Offboard infrastructure is realized to accommodate multiple concurrent end-to-end lego assemblies.
  • Ability to publish real-time task and system status.
Tests: 3, 6, 7, 11, 12, 13
Requirements: M.P.03, M.P.04, M.P.05, M.F.04, M.F.05, M.F.06, M.F.08, M.F.09

Date: Nov 15th, 2023 | PR: 11
Milestone:
  • Multiple task requests coming from the user can be realized by the system, with known failure cases.

Date: Nov 20th, 2023 | PR: 12
Milestone:
  • Multiple task requests coming from the user can be realized by the system with no failure cases.
Tests: 14
Requirements: All

Tests

Test 1: Robot Teleoperation Test

Objective

Validate that the fleet of mobile robots can receive and execute velocity commands and publish wheel odometry information

Equipment

Neobotix MP400, centralized workstation

Elements

Robot infrastructure, Neobotix off-the-shelf stack, localization stack

Personnel

Sergi, Vineet, Sushanth, Soham

Location

Manufacturing Futures Institute, Mezzanine Area

Procedure

  • Boot up the robot while making sure that the communication interfaces between the robot’s computer and the NVIDIA Orin embedded compute unit are established.
  • Ensure that the sensor pod is physically installed on top of the robot.
  • Send velocity commands through the NVIDIA Orin embedded computer.
  • Ensure that velocity commands are appropriately executed.
  • Ensure that we can read wheel odometry information coming from the robot’s computer.
  • Throughout the process, keep track of the wheel odometry information coming from the robot’s computer and compare it with the velocity commands (a minimal command/odometry sketch follows this list).
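
For reference, the command-and-odometry check in the procedure above can be scripted. The sketch below is a minimal, hedged example assuming a standard ROS 2 setup with /cmd_vel and /odom topics; the exact topic names and rates on the MP400/Orin setup may differ.

# Minimal sketch (assumed topic names /cmd_vel and /odom; not the exact MP400 interface).
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import Twist
from nav_msgs.msg import Odometry


class TeleopCheck(Node):
    def __init__(self):
        super().__init__('teleop_check')
        self.cmd_pub = self.create_publisher(Twist, '/cmd_vel', 10)
        self.odom_sub = self.create_subscription(Odometry, '/odom', self.odom_cb, 10)
        self.timer = self.create_timer(0.1, self.send_cmd)  # 10 Hz command loop

    def send_cmd(self):
        cmd = Twist()
        cmd.linear.x = 0.1  # slow forward crawl for the test
        self.cmd_pub.publish(cmd)

    def odom_cb(self, msg: Odometry):
        # Compare commanded vs. measured forward velocity.
        self.get_logger().info(f'wheel odom v_x = {msg.twist.twist.linear.x:.3f} m/s')


def main():
    rclpy.init()
    rclpy.spin(TeleopCheck())


if __name__ == '__main__':
    main()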

Validation

  • User-input velocity commands should be propagated from the NVIDIA Orin embedded compute unit to the robot’s onboard compute.
  • User-input velocity commands should be realized in the ODD.
  • Wheel odometry information should remain consistent with the velocity commands with an acceptable latency.
  • The installed sensor pod on top of the robot should remain stable throughout the operation.

Test 2: Multi-robot Planner Waypoint Following Test

Objective

Validate that the robots can navigate to a valid specified location in the testbed environment.

Equipment

Neobotix MP400, centralized workstation

Elements

Global Planner, Localization, (Neo) Local Controller

Personnel

Vineet, Sergi, Sushanth

Location

Manufacturing Futures Institute, Mezzanine Area

Procedure

  • Initialize the system consisting of the centralized workstation and the fleet of mobile robots.
  • Ensure that the NVIDIA Orin embedded compute unit and the centralized workstation are in the same network.
  • A graphical user interface depicting the robot’s location on the map will be initialized.
  • The robot should localize within the map appropriately.
  • Set a target location in the ODD and the robot should navigate to this target pose.

Validation

  • The robot should follow the path in the ODD as computed by the planner.
  • The robot should not delocalize while following the path in the ODD.
  • The planned paths in the ODD should not intersect with the static obstacles as depicted in the map
  • The robots should reach the goal location.
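
As a supporting illustration of the waypoint-sending step in the procedure above, the sketch below submits a single goal pose through the standard Nav2 NavigateToPose action from Python. The action name, frame, and coordinates are assumptions for illustration and may differ from the interface actually deployed on the fleet.

# Hedged sketch: assumes the standard Nav2 'navigate_to_pose' action is available.
import rclpy
from rclpy.action import ActionClient
from rclpy.node import Node
from nav2_msgs.action import NavigateToPose


class WaypointSender(Node):
    def __init__(self):
        super().__init__('waypoint_sender')
        self.client = ActionClient(self, NavigateToPose, 'navigate_to_pose')

    def send_goal(self, x: float, y: float):
        goal = NavigateToPose.Goal()
        goal.pose.header.frame_id = 'map'  # assumed global frame
        goal.pose.pose.position.x = x
        goal.pose.pose.position.y = y
        goal.pose.pose.orientation.w = 1.0
        self.client.wait_for_server()
        return self.client.send_goal_async(goal)


def main():
    rclpy.init()
    node = WaypointSender()
    node.send_goal(2.0, 1.5)  # example target in the ODD
    rclpy.spin(node)


if __name__ == '__main__':
    main()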

Test 3: Centralized Planner Test

Objective

Validate that the centralized planner generates non-conflicting paths

Equipment

Neobotix MP400, centralized workstation

Elements

CBS Global Planner, Localization, (Neo) Local Controller

Personnel

Vineet

Location

Manufacturing Futures Institute, Mezzanine Area

Procedure

  • Initialize the system consisting of the centralized workstation and the fleet of mobile robots.
  • Set up a known scenario where a decentralized planner fails, such as a symmetric scene or a narrow-corridor scenario.
  • The centralized planner will come up with a set of waypoints per robot depicting the planned path to follow.
  • The fleet of robots should be able to trace the assigned waypoints and execute the plan.

Validation

  • The planner successfully generates non-conflicting paths
  • The paths are sent to the robots and the robots execute the paths successfully.
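
One way to verify the “non-conflicting paths” criterion offline is to compare the time-indexed waypoints returned by the centralized (CBS) planner and flag any pair of robots that come closer than a safety radius at the same timestep. The sketch below is illustrative only; the waypoint format and the 0.6 m safety radius are assumptions, not the planner’s actual output format.

# Illustrative conflict check on time-indexed plans: plans[robot] = [(x, y), ...] per timestep.
from itertools import combinations
from math import hypot

SAFETY_RADIUS_M = 0.6  # assumed minimum separation between robot centers


def find_conflicts(plans: dict[str, list[tuple[float, float]]]):
    conflicts = []
    horizon = max(len(p) for p in plans.values())
    for t in range(horizon):
        for a, b in combinations(plans, 2):
            # If a plan is shorter, the robot is assumed to wait at its last waypoint.
            xa, ya = plans[a][min(t, len(plans[a]) - 1)]
            xb, yb = plans[b][min(t, len(plans[b]) - 1)]
            if hypot(xa - xb, ya - yb) < SAFETY_RADIUS_M:
                conflicts.append((t, a, b))
    return conflicts


if __name__ == '__main__':
    plans = {
        'amr1': [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)],
        'amr2': [(2.0, 0.1), (1.0, 0.1), (0.0, 0.1)],  # crosses amr1 head-on
    }
    print(find_conflicts(plans))  # expect a conflict near t = 1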

Test 4: Docking State Machine Transition Test

Objective

Validate switching between LIDAR-based localization and fiducial marker-based localization

Equipment

Neobotix MP400, Fiducial Markers

Elements

Localization, Fiducial Marker Detection, State Machine Node

Personnel

Sushanth

Location

Manufacturing Futures Institute, Mezzanine Area

Procedure

  • Initialize the robot with the fiducial marker, LIDAR localization, state machine, and all navigation-dependent nodes active. Ensure that the robot starts in the LIDAR localization state.
  • Autonomously navigate the robot to the vicinity of the fiducial marker.
  • The state machine node should automatically switch the localization source from LIDAR to the fiducial marker, handling the robot-to-map and robot-to-fiducial_marker transformations (a minimal sketch of this switching logic follows this list).
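
As a rough, hedged illustration of the switching logic above (frame names and the 1.5 m trigger distance are assumptions, not the actual node implementation), the state machine could monitor the distance to the detected fiducial frame through TF and flip the localization source once the robot is within range:

# Hedged sketch of the LIDAR-to-fiducial localization switch; frame names and the
# 1.5 m trigger distance are assumptions for illustration only.
import math
import rclpy
from rclpy.node import Node
from rclpy.time import Time
from tf2_ros import TransformException
from tf2_ros.buffer import Buffer
from tf2_ros.transform_listener import TransformListener


class LocalizationSwitch(Node):
    def __init__(self):
        super().__init__('localization_switch')
        self.state = 'LIDAR_LOCALIZATION'
        self.tf_buffer = Buffer()
        self.tf_listener = TransformListener(self.tf_buffer, self)
        self.create_timer(0.2, self.update)  # check at 5 Hz

    def update(self):
        try:
            t = self.tf_buffer.lookup_transform('base_link', 'fiducial_marker', Time())
        except TransformException:
            return  # marker not visible yet; keep the current localization source
        d = math.hypot(t.transform.translation.x, t.transform.translation.y)
        if self.state == 'LIDAR_LOCALIZATION' and d < 1.5:
            self.state = 'FIDUCIAL_LOCALIZATION'
            self.get_logger().info('Switching localization source to the fiducial marker')


def main():
    rclpy.init()
    rclpy.spin(LocalizationSwitch())


if __name__ == '__main__':
    main()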

Validation

  • The coordinate frames of the map, robot, and fiducial marker should be stable throughout the transition process
  • The robot must start planning to move toward the fiducial marker
  • The map server must remain stable through transition (the planner will depend on the map server’s information)

Test 5:  Localization and Perception Integration Validation

Objective

Validate that the obstacles detected by the perception subsystem are accurately localized within the global map

Equipment

Sensor Pod, dummy obstacles in the ODD, Neobotix MP400, centralized workstation

Elements

Localization, Perception

Personnel

Soham, Sergi, Sushanth

Location

Manufacturing Futures Institute, Mezzanine Area

Procedure

  • Position the robot/sensor pod at a predefined location
  • Arrange various obstacle types (such as humans, chairs, etc.) around the robot
  • Calculate the error between the obstacle positions in the costmap and the ground truth
  • Repeat the experiment varying the sensor pod's locations and the placement of obstacles.
  • Report error statistics between reported location of the obstacles and the ground truth.

Validation

  • The error between positions in the cost map and the ground truth should be < 35 cm
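
To make the 35 cm criterion concrete, the error statistics can be computed by matching each ground-truth obstacle to its nearest reported obstacle in the map frame and summarizing the residuals. The sketch below assumes both sets are already expressed as (x, y) map coordinates; the example numbers are purely illustrative.

# Sketch of the error statistics computation (assumes obstacle centers in map coordinates).
import numpy as np


def localization_errors(ground_truth: np.ndarray, detected: np.ndarray) -> np.ndarray:
    """For each ground-truth obstacle, distance to the nearest detected obstacle (meters)."""
    # Pairwise distances: rows = ground truth, cols = detections.
    diffs = ground_truth[:, None, :] - detected[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    return dists.min(axis=1)


if __name__ == '__main__':
    gt = np.array([[1.0, 2.0], [3.5, 0.5]])
    det = np.array([[1.1, 2.2], [3.3, 0.4]])
    err = localization_errors(gt, det)
    print(f'mean = {err.mean():.3f} m, max = {err.max():.3f} m, pass = {bool((err < 0.35).all())}')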

Test 6: Model Predictive Path Integral Controller Test

Objective

Validate that the MPPI-based local controller follows planned paths

Equipment

Neobotix MP400, centralized workstation

Elements

CBS Global Planner, Localization, MPPI Local Controller

Personnel

Vineet

Location

Manufacturing Futures Institute, Mezzanine Area

Procedure

  • Initialize the system consisting of the centralized workstation and the fleet of mobile robots.
  • A graphical user interface depicting the robot’s location on the map will be initialized.
  • The fleet of robots should be localized within the map appropriately.
  • Select a robot in the fleet to execute a waypoint.
  • Set a target location in the ODD and the selected robot should navigate to this target pose using velocity commands coming from the local MPPI-based controller.
  • Select another robot and repeat the procedure

Validation

  • The local controller should generate appropriate velocity commands to reach the specified waypoint
  • The selected robot moves to the specified location and stops.
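
For reference, the core of an MPPI-style controller is a sampled rollout plus a softmin-weighted average over perturbed control sequences. The sketch below is a simplified, illustrative version using a unicycle model and a cost that tracks a single target point; it is not the controller deployed on the robots, and all parameters are assumptions.

# Simplified, illustrative MPPI step for a unicycle robot tracking a reference point.
import numpy as np

DT, HORIZON, SAMPLES, LAMBDA = 0.1, 20, 256, 1.0
SIGMA = np.array([0.2, 0.5])  # std-dev of (v, w) perturbations


def rollout_cost(state, controls, target):
    """Accumulate distance-to-target cost along one rollout."""
    x, y, th = state
    cost = 0.0
    for v, w in controls:
        x += v * np.cos(th) * DT
        y += v * np.sin(th) * DT
        th += w * DT
        cost += np.hypot(target[0] - x, target[1] - y)
    return cost


def mppi_step(state, nominal, target, rng=np.random.default_rng(0)):
    """One MPPI update: returns an improved (HORIZON, 2) control sequence."""
    noise = rng.normal(0.0, SIGMA, size=(SAMPLES, HORIZON, 2))
    costs = np.array([rollout_cost(state, nominal + n, target) for n in noise])
    weights = np.exp(-(costs - costs.min()) / LAMBDA)
    weights /= weights.sum()
    return nominal + np.tensordot(weights, noise, axes=1)


if __name__ == '__main__':
    seq = mppi_step(state=(0.0, 0.0, 0.0), nominal=np.zeros((HORIZON, 2)), target=(2.0, 1.0))
    print('first command (v, w):', seq[0])  # would be sent as the next velocity command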

Test 7: Waypoint Following with Obstacle Avoidance

Objective

Validate that the robot can reach waypoints while avoiding collision with all obstacles

Equipment

AMR, dummy obstacles

Elements

Localization, planner, and perception subsystems

Personnel

Soham, Sergi, Sushanth, Vineet

Location

Manufacturing Futures Institute, Mezzanine Area

Procedure

  • Position the robot/sensor pod at a predefined location.
  • Arrange various obstacle types (such as humans, chairs, etc.) around the robot such that the robot has to avoid them to reach its goal.
  • Send a waypoint command to the robot.
  • Measure the time taken by the robot to reach the waypoint and compute average speed.
  • Repeat the experiment varying the sensor pod's locations, the placement of obstacles, and goal waypoints.
  • Report performance statistics.

Validation

  • The robot can reach waypoints at an average speed exceeding 0.25 m/s while avoiding collisions with all obstacles.
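
The average-speed figure for this test can be computed directly from a timestamped pose log: total distance traveled along the recorded path divided by the elapsed time. A minimal sketch, assuming a list of (t, x, y) samples from odometry or localization:

# Average speed from a timestamped (t, x, y) pose log.
from math import hypot


def average_speed(log: list[tuple[float, float, float]]) -> float:
    dist = sum(hypot(x2 - x1, y2 - y1)
               for (_, x1, y1), (_, x2, y2) in zip(log, log[1:]))
    elapsed = log[-1][0] - log[0][0]
    return dist / elapsed


if __name__ == '__main__':
    log = [(0.0, 0.0, 0.0), (4.0, 1.0, 0.0), (8.0, 1.0, 1.5)]
    print(f'{average_speed(log):.2f} m/s')  # 2.5 m over 8 s = 0.31 m/s (> 0.25 m/s)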

Test 8:  Low-level Perception Unit Test

Objective

Validate the performance of the low-level perception module

Equipment

Sensor pod, dummy obstacles

Elements

Perception subsystem

Personnel

Soham

Location

Manufacturing Futures Institute, Mezzanine Area

Procedure

  • Position the robot/sensor pod at a predefined location
  • Arrange various obstacle types (such as humans, chairs, etc.) around the robot
  • Evaluate the detection recall for these obstacle identifications
  • Repeat the experiment varying the sensor pod's locations and the placement of obstacles

Validation

  • Recall of detected obstacles > 90%
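
Recall here is the fraction of ground-truth obstacles matched by at least one detection within a tolerance radius. The sketch below shows one straightforward way to compute it; the 0.5 m matching radius is an assumption for illustration, not a value taken from the perception stack.

# Detection recall with nearest-neighbor matching inside a tolerance radius.
import numpy as np

MATCH_RADIUS_M = 0.5  # assumed tolerance for counting a detection as a true positive


def recall(ground_truth: np.ndarray, detections: np.ndarray) -> float:
    if len(ground_truth) == 0:
        return 1.0
    if len(detections) == 0:
        return 0.0
    dists = np.linalg.norm(ground_truth[:, None, :] - detections[None, :, :], axis=-1)
    matched = (dists.min(axis=1) <= MATCH_RADIUS_M).sum()
    return matched / len(ground_truth)


if __name__ == '__main__':
    gt = np.array([[1.0, 0.0], [2.0, 2.0], [4.0, 1.0]])
    det = np.array([[1.1, 0.1], [2.2, 1.9]])
    print(f'recall = {recall(gt, det):.2f}')  # 2 of 3 matched -> 0.67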

Test 9: Testbed Emulator (Offboard symbolic assembly planner) Validation 1/3

Objective

To confirm the ability of the testbed emulator to synchronously coordinate one stage of the lego model assembly through a limited subset of the testbed work-cells using human operators with no pre-requisite training.

Equipment

Developer workstation, Chosen Human-Robot Interface (HRI) platform (iPad), LAN

Elements

Testbed Emulator (subset of the offboard infrastructure subsystem)

Personnel

Siddhant Wadhwa

Location

Location Agnostic

Procedure

  1. End user presses a button on the HRI terminal’s “Request New Assembly” screen
  2. Test supervisor approves the newly enqueued assembly on the control panel HRI screen
  3. The test supervisor uses a mock endpoint to indicate that an AMR has arrived at the stock room.
  4. The human operator presses a button on the stock room interface indicating that the payload has been placed on the AMR.
  5. Test supervisor approves progression of the DAG to allow AMR#1 to travel to the kitting station on the control panel HRI screen
  6. The test supervisor uses a mock endpoint to indicate that an AMR has arrived at the kitting station.
  7. The human operator at the kitting station presses a button on the HRI interface indicating that the payload has been picked up on the AMR.

Validation

  • Step 1 of the procedure results in a new assembly DAG being enqueued on the Testbed Emulator ‘Control Panel’ (a minimal sketch of such a precondition-checked DAG follows this list).
  • Step 2 of the procedure results in an AMRTransport task being created and sent to the AMR Fleet Manager via REST API, with the destination set to the Stock Room.
  • Step 2 of the procedure results in the HRI terminal at the stock room displaying instructions to the human operator about which parts bins to load onto an AMR when it arrives.
  • Step 3 of the procedure updates the internal state of the stock room work cell dock to “occupied” by “AMR#1”. (Using AMR#1 as a placeholder name for a particular AMR in our fleet.)
  • Step 3 of the procedure updates the assembly DAG to progress to waiting for completion of the HumanOperatorTask to place parts bins on AMR#1.
  • Step 4 of the procedure updates the assembly DAG to progress to waiting for completion of the AMRTransportTask to travel to the kitting station.
  • Step 4 of the procedure results in an AMRTransport task being created and sent to the AMR Fleet Manager via REST API specifically for AMR#1, with the destination set to the Kitting Station.
  • Step 4 of the procedure updates the internal state of the stock room work cell dock to “unoccupied”.
  • Step 6 of the procedure updates the internal state of the kitting station work cell dock to “occupied” by AMR#1.
  • Step 6 of the procedure updates the assembly DAG to progress to waiting for completion of the HumanOperatorTask to pick up the payload from AMR#1.
  • Step 7 of the procedure updates the assembly DAG to progress to waiting for completion of the HumanOperatorTask to compile a kit using the transported parts bins.
  • Step 7 of the procedure results in the HRI terminal at the kitting station displaying instructions to the human operator about the quantity and types of parts to compile into a kit.
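
To make the DAG progression and precondition checks above concrete, here is a minimal, hypothetical sketch of an assembly task graph in which each HRI or fleet transaction is accepted only when its predecessor tasks are complete. The task names mirror the steps above, but the data structures are illustrative and not the emulator’s actual implementation.

# Hypothetical sketch of a precondition-checked assembly DAG (names are illustrative).
class AssemblyDAG:
    def __init__(self):
        # task -> list of tasks that must be done before it may be completed
        self.preconditions = {
            'amr_to_stock_room': [],
            'load_parts_bins': ['amr_to_stock_room'],
            'amr_to_kitting_station': ['load_parts_bins'],
            'unload_parts_bins': ['amr_to_kitting_station'],
            'compile_kit': ['unload_parts_bins'],
        }
        self.done: set[str] = set()

    def complete(self, task: str) -> bool:
        """Accept an HRI/fleet transaction only if its preconditions are satisfied."""
        if any(dep not in self.done for dep in self.preconditions[task]):
            return False  # out-of-order transaction: declined
        self.done.add(task)
        return True


if __name__ == '__main__':
    dag = AssemblyDAG()
    print(dag.complete('load_parts_bins'))    # False: AMR has not arrived yet
    print(dag.complete('amr_to_stock_room'))  # True
    print(dag.complete('load_parts_bins'))    # True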

Test 10: Testbed Emulator (Offboard symbolic assembly planner) Validation 2/3

Objective

To confirm the ability of the testbed emulator to synchronously coordinate one end-to-end lego model assembly through the full range of the testbed work-cells using human operators with no pre-requisite training.

Equipment

Developer workstation, Chosen Human-Robot Interface (HRI) platform (iPad), LAN

Elements

Testbed Emulator (part of the offboard infrastructure subsystem)

Personnel

Siddhant Wadhwa

Location

Location Agnostic

Procedure

  1. Extends the procedure from Test 9 [single delivery from one work cell (stock room) to another (kitting station)] by extending the HRI and REST API transactions to the assembly work-cell and the final display station after the kitting station.

Validation

  • Each HRI transaction results in appropriate updates to the state of the assembly DAG, state of occupancy of relevant docks and relevant AMRTransportTasks being sent to the AMR fleet manager via REST API
  • Out of order HRI transactions are declined based on precondition checks.

Test 11: Testbed Emulator (Offboard symbolic assembly planner) Validation 3/3

Objective

To confirm the ability of the testbed emulator to synchronously coordinate multiple concurrent end-to-end lego model assemblies through the full range of the testbed work-cells using human operators with no pre-requisite training.

Equipment

Developer workstation, Chosen Human-Robot Interface (HRI) platform (iPad), LAN

Elements

Testbed Emulator (part of the offboard infrastructure subsystem)

Personnel

Siddhant Wadhwa

Location

Location Agnostic

Procedure

  1. Extends the procedure from Test 10 [single end-to-end lego model assembly] by requesting multiple lego model assemblies concurrently.

Validation

  • Each HRI transaction results in appropriate updates to the state of the assembly DAG, state of occupancy of relevant docks and relevant AMRTransportTasks being sent to the AMR fleet manager via REST API
  • Out of order HRI transactions are declined based on precondition checks.
  • Conflicts amongst AMRs in docking are avoided
  • Tasks at work cells are appropriately enqueued when the work cell is not idle

Test 12: Fleet infrastructure (Offboard) REST API Validation

Objective

To confirm the ability of the fleet infrastructure to accept, validate, and respond to AMRTransportTask REST API requests, queue them using naive load balancing, and send task completion notifications to the task requester.

Equipment

Developer workstation, LAN

Elements

Fleet infrastructure (part of the offboard infrastructure subsystem)

Personnel

Siddhant Wadhwa

Location

Location Agnostic

Procedure

  1. Test supervisor uses Postman to send a valid AMRTransportTask request to the fleet infrastructure.

Validation

  • Step 1 of the procedure results in the AMRTransportTask being added to the pending task queue (an illustrative request example follows this list).
  • Step 1 of the procedure results in a status OK response back to the test supervisor, confirming the validity and receipt of the task request.
  • Step 1 of the procedure results in a query for idle AMRs to service the task request.
  • Based on engineered test conditions, if any AMR in the fleet is idle, the AMRTransportTask is allocated to that AMR, whose state in the fleet infrastructure is updated to ‘busy’, i.e. not idle.
  • Notification of task allocation to the AMR is sent to the testbed user via REST API, indicating that the AMRTransportTask is active.
  • Notification of task completion by the AMR is sent to the testbed user via REST API, indicating that the AMRTransportTask is completed.
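
For illustration, a Postman request like the one in the procedure could equivalently be issued from Python. The endpoint URL and JSON fields below are assumptions meant to convey the shape of an AMRTransportTask request; they are not the fleet infrastructure’s actual schema.

# Hedged example of submitting an AMRTransportTask; URL and fields are assumed, not the real schema.
import requests

FLEET_API = 'http://localhost:8000/api/amr_transport_tasks'  # hypothetical endpoint

task = {
    'task_id': 'demo-001',
    'destination': 'stock_room',
    'requester': 'testbed_emulator',
}

resp = requests.post(FLEET_API, json=task, timeout=5.0)
resp.raise_for_status()        # expect a status OK response on a valid request
print('queued:', resp.json())  # e.g. assigned AMR and task state, if returned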




Test 13: Offboard Mission Controller

Objective

Test data transfer and task handling between two AMRs and the offboard infrastructure using the Mission Controller

Equipment

2x Neobotix MP400, Offboard Server, Global Planner

Elements

Fleet Management, State Machine Node, Localization (LIDAR or fiducial)

Personnel

Sushanth

Location

Manufacturing Futures Institute, Mezzanine Area

Procedure

  • Set up the Mission Controller Node as an interface between the AMRs and the Offboard Infrastructure.
  • The mission controller should accept task requests from the offboard infrastructure, request waypoints from the global planner, and relay the waypoints to the respective AMRs.
  • The mission controller should retrieve each AMR’s localization pose and navigation state (docking, undocking, etc.) as requested by the offboard server.

Validation

  • The mission controller must accept per-AMR task requests and show streamlined communication with global planner and two AMRs
  • The mission controller must get robot pose and navigation state upon request
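
As a rough sketch of the relay behavior being validated (the interfaces here are hypothetical placeholders; the real node talks to the offboard REST API, the global planner, and the AMRs over ROS 2), the mission controller’s core loop can be thought of as: accept a task, request waypoints for the assigned AMR from the planner, and forward them.

# Hypothetical sketch of the mission-controller relay; planner and AMR interfaces are placeholders.
from typing import Callable

Waypoint = tuple[float, float]


class MissionController:
    def __init__(self,
                 plan_fn: Callable[[str, Waypoint], list[Waypoint]],
                 send_fn: Callable[[str, list[Waypoint]], None]):
        self.plan_fn = plan_fn  # e.g. wraps the CBS global planner service
        self.send_fn = send_fn  # e.g. forwards waypoints to the AMR's navigation stack
        self.amr_state: dict[str, str] = {}

    def handle_task(self, amr_id: str, goal: Waypoint) -> None:
        waypoints = self.plan_fn(amr_id, goal)  # request a path from the global planner
        self.send_fn(amr_id, waypoints)         # relay waypoints to the selected AMR
        self.amr_state[amr_id] = 'navigating'

    def get_state(self, amr_id: str) -> str:
        # Returned to the offboard server on request (pose lookup omitted in this sketch).
        return self.amr_state.get(amr_id, 'idle')


if __name__ == '__main__':
    mc = MissionController(plan_fn=lambda amr, goal: [(0.0, 0.0), goal],
                           send_fn=lambda amr, wps: print(amr, 'waypoints:', wps))
    mc.handle_task('amr1', (2.0, 1.0))
    print(mc.get_state('amr1'))  # navigating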

Test 14: Fall Validation Demonstration

Objective

To demonstrate two concurrent lego model assemblies at the MFI testbed, with our fleet of AMRs responsible for all transportation of material in the assembly workflow while human operators perform manipulation operations based on instructions provided through human-robot interface (HRI) terminals.

Equipment

MFI server machine, 2 AMRs with sensor suites integrated, HRI terminals at all testbed work-cells

Elements

Full AMR system + testbed emulator including HRI terminals at each of the following work-cells:

  • 1x Display station (with the HRI terminal to request new lego model assemblies, and receive final assembly)
  • 1x stock room
  • 1x kitting station
  • 2x assembly work-cells

Personnel

Full MRSD team

Location

Manufacturing Futures Institute, Mezzanine Area

Timeline view of procedure and success criteria (time progresses downwards)

The procedure below corresponds to one lego model assembly. During the demo, we will execute two model assemblies concurrently, challenging the system’s concurrency and the multi-agent planner’s ability to plan non-conflicting paths for the fleet of AMRs. To emulate this, the following procedure will be run twice concurrently, with the second model assembly queued a short time after the first. Each step lists the operator/user action (Procedure) followed by the system behavior that must be observed (Success criteria).

Step 1
Procedure: The demo user at the HRI terminal (iPad) displaying the “Request New Lego Assembly” page, with a list of available assemblies, selects a lego model assembly to execute on the testbed.
Success criteria:
  • The HRI terminal at the stock room displays instructions for the human operator at that work cell to fetch the parts bins from stock that will be required to complete the assembly.

Step 2
Procedure: The human operator servicing the stock room follows the instructions to fetch the correct parts bins required for the requested assembly.
Success criteria:
  • An idle AMR from the fleet is dispatched to the stock room to collect the parts bins required for the assembly.
  • Once the AMR arrives at the stock room, the human operator at that work cell is instructed, via the HRI terminal, to place the fetched parts bins on the payload surface of the AMR and then press a button on the interface indicating that the payload has been placed on the AMR.

Step 3
Procedure: The human operator servicing the stock room follows the instructions and places the parts bins on the AMR, then presses a button on the interface indicating that the payload has been placed on the AMR.
Success criteria:
  • The AMR departs from the stock room, navigates to and docks at the kitting station.
  • The HRI terminal at the kitting station displays instructions for the human operator to pick up the parts bins from the AMR, then press a button indicating that the payload has been picked up.

Step 4
Procedure: The human operator at the kitting station picks up the parts bins from the payload surface of the AMR, then presses a button on the interface indicating that the payload has been picked up.
Success criteria:
  • The AMR is free to service other task requests (if any) in the task queue. If none, the AMR navigates to and docks at a parking location (bonus task: auto-charging dock).
  • The HRI terminal at the kitting station displays instructions for the human operator to compile a ‘kit’ by picking the precise quantity and types of parts required for the model assembly from the supplied parts bins. Once the kit is ready, the operator is asked to press a button on the interface indicating operator task completion.

Step 5
Procedure: The human operator at the kitting station compiles the kit by following the on-screen instructions, then presses a button indicating that the kit is ready for pickup.
Success criteria:
  • The next available AMR from the fleet is dispatched to the kitting station to pick up the compiled kit. The AMR navigates to and docks at the kitting station.
  • The HRI terminal at the kitting station instructs the operator to place the kit on the payload surface of the docked AMR, then press a button indicating that the payload has been securely placed on the AMR.

Step 6
Procedure: The human operator at the kitting station places the kit on the docked AMR following the on-screen instructions, then presses a button indicating that the AMR is loaded and can depart.
Success criteria:
  • The AMR at the kitting station departs for the next available assembly work cell (when it becomes available), navigating to it and docking safely.
  • The HRI terminal at the assembly work cell displays instructions for the human operator to pick up the kit payload from the AMR, then press a button indicating that the payload has been picked up.

Step 7
Procedure: The human operator at the assembly work cell picks up the kit payload from the AMR, then presses a button on the interface indicating that the payload has been picked up.
Success criteria:
  • The AMR is free to service other task requests (if any) in the task queue. If none, the AMR navigates to and docks at a parking location (bonus task: auto-charging dock).
  • The HRI terminal at the relevant assembly work cell displays step-by-step instructions for the human operator to assemble the requested model using the pre-fetched parts in the supplied kit. Once the model is ready, the operator is asked to press a button on the interface indicating operator task completion.

Step 8
Procedure: The human operator at the assembly work cell assembles the requested model following the on-screen step-by-step instructions using the parts supplied in the kit, then presses a button indicating that the model is ready for pickup.
Success criteria:
  • The next available AMR from the fleet is dispatched to the assembly work cell to pick up the assembled model. The AMR navigates to and docks at the assembly work cell.
  • The HRI terminal at the assembly work cell instructs the operator to place the assembled model on the payload surface of the docked AMR, then press a button indicating that the payload has been securely placed on the AMR.

Step 9
Procedure: The human operator at the assembly work cell places the assembled model on the docked AMR following the on-screen instructions, then presses a button indicating that the AMR is loaded and can depart.
Success criteria:
  • The AMR at the assembly work cell departs for the display station, navigating to it and docking safely.
  • The HRI terminal at the display station indicates mission success and allows the demo user to take a picture of the assembled model and add it to the model gallery. :)

At all times throughout the demo:

  • No path conflicts amongst the fleet of AMRs.
  • No collisions with static or dynamic obstacles in the testbed work area.
  • If a dock is occupied by an AMR, other AMRs needing to use the same dock wait for the dock to be cleared before attempting to dock.

















Appendix: List of requirements

Functional requirements (“The system shall ...”)

Requirement ID | Requirement Description
M.F.01 | Create a map of the testbed environment
M.F.02 | Localize in the map of the testbed environment
M.F.03 | Perceive obstacles
M.F.04 | Plan paths to navigate the testbed environment
M.F.05 | Execute planned paths while avoiding obstacles
M.F.06 | Receive material handling requests
M.F.07 | Pick up and drop off payloads at payload docks [DEPRECATED]
M.F.08 | Publish real-time task and system status
M.F.09 | Coordinate interactions with human operators

Performance requirements (“The system shall ...”)

Requirement Description | Mandatory threshold | Mandatory ID | Desired threshold | Desired ID
Localize in the map of the testbed environment with a maximum error of n cm | n = 25 | M.P.01 | n = 10 | D.P.01
Perceive and avoid obstacles within a radius of n meters of the AMR with a minimum recall of r and precision of p | n = 5, r = 70%, p = 70% | M.P.02 | n = 5, r = 90%, p = 90% | D.P.02
Execute planned paths from source to destination in the testbed environment with an average speed > n m/s | n = 0.25 | M.P.03 | N/A | N/A
Execute pick-up and drop-off of payload within n seconds of reaching the dock’s vicinity [DEPRECATED] | n = 60 | M.P.04 | N/A | N/A
Dock within a radius of n cm, and within m degrees of the desired docking pose | n = 10, m = 10 | M.P.05 | n = 10, m = 5 | D.P.05
Handle payloads with a size of (l, w, h) cm and weight up to m kg | l = 50, w = 50, h = 20, m = 5 | M.P.06 | N/A | N/A

Non-functional requirements (“The system will be ...”)

Requirement Description | Mandatory threshold | Mandatory ID | Desired threshold | Desired ID
Open-sourced | N/A | M.N.01 | N/A | N/A
Reliable: the system shall execute tasks with at least n% probability of success | n = 80 | M.N.02 | n = 95 | D.N.02
Scalable to a fleet of AMRs | N/A | M.N.03 | N/A | N/A
Independently demonstrable from the state of the testbed project | N/A | M.N.04 | N/A | N/A