An Autonomous UAV-UGV Collaborative Framework for Search and Rescue Operations

Arushi Khokhar

Introduction

  • Thousands of lives are lost to natural and man-made disasters every year.
  • Search and rescue operations in areas hit by tsunamis, earthquakes, avalanches, etc. are difficult and dangerous for human rescue teams.
  • Follow-up disasters, such as fire outbreaks, collapsing buildings and gas leaks, add further risk.
  • Man-made disasters, such as chemical and nuclear accidents, are extremely dangerous for human rescue teams and can be fatal.
  • To overcome these challenges, robots are included in rescue missions.
  • In the most grievous incidents, such as the 9/11 attacks, it has been reported that many victims lost their lives because assistance arrived too late.

Literature Survey

  • Using UAVs with multi-modal sensors, such as high-quality cameras and gas detectors, helps to minimize search time [1], [2].

  • For subterranean environments, UGVs have been used for navigation. It is difficult to use UAVs and UGVs individually in such unstructured and unknown environments, especially in GPS-denied areas [3], [4], [5].

  • A LIDAR sensor is mounted on the UGV to map the environment.

  • It uses the simultaneous localisation and mapping (SLAM) [6], [7] technique to generate point clouds of the environment [8]–[10].

  • However, introducing a LIDAR sensor into an unknown, subterranean, potentially hostile environment can be risky to the rescue operation.

Why collaboration?

  • Recently, there has been an increase in the use of aerial-ground robot teams for search and rescue, military operations, etc.

  • The main idea behind a UAV-UGV team is to decentralise the operation, gathering more information about the environment and allowing greater flexibility in the tasks performed during the rescue mission.

  • Ground robots have a limited field of view and require more time to map an unknown environment. However, they can be used to detect survivors and potentially hazardous material, deliver medical supplies, search through rubble, etc.

  • Aerial vehicles provide a bird's-eye view of the environment, and with state-of-the-art technologies they can help create an accurate 3D map of it.

  • Thus, the ground robot (UGV) and the aerial vehicle (UAV) complement each other and speed up the operation.

Problem Statement

  • The goal is to create an autonomous multi-robot system capable of exploring an unstructured and unknown environment, including GPS-denied areas.

  • The system should perform mission-specific tasks such as looking for survivors, distributing medical supplies, detecting explosives, etc.

  • A robust mechanism, taking sensor failure into account, should aid complex search and rescue operations and ensure minimum loss of life among rescuers and potential survivors by providing timely assistance.

Proposed Solution

  1. Mapping the environment (UAV’s Role)
  2. Passing environment information to UGV and trajectory planning
  3. UGV’s role

Fig. 1: Visualisation of proposed solution

Mapping the Environment

  • The task of mapping the environment will be done by the UAV using Visual Simultaneous Localisation and Mapping (VSLAM) [11]
  • Visual SLAM revolves around establishing a robot's location and creating a representation of the explored zone using images as the only source of external input.
  • The reason for proposing a camera-based localisation and mapping system instead of a LIDAR-based SLAM technique is that a camera can obtain more information about the environment, such as appearance and terrain type.
  • Furthermore, cameras have a low power consumption and are lightweight. This makes them better-suited to be mounted on a UAV.
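As a rough illustration of how the UAV's visual map could be handed on to a planner, the sketch below rasterises SLAM landmark points into a 2D occupancy grid. The grid dimensions, resolution and point format are illustrative assumptions, not part of the cited VSLAM pipeline.

```python
# Minimal occupancy-grid sketch: landmark points recovered by the UAV's
# visual SLAM front end are rasterised into a 2D grid the UGV can plan on.
# Grid size, resolution and the (x, y) point format are assumptions.

def build_occupancy_grid(points, width=20, height=20, resolution=0.5):
    """Mark each (x, y) landmark as an occupied cell; 0 = free, 1 = occupied."""
    grid = [[0] * width for _ in range(height)]
    for x, y in points:
        col = int(x / resolution)
        row = int(y / resolution)
        if 0 <= row < height and 0 <= col < width:
            grid[row][col] = 1
    return grid

# Example: three obstacle landmarks reported by the UAV
landmarks = [(1.0, 1.0), (1.2, 1.0), (5.0, 3.5)]
grid = build_occupancy_grid(landmarks)
```

In a real system the grid would be rebuilt incrementally as the UAV streams new landmarks, rather than from a fixed list.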

Passing environment information to UGV and trajectory planning

  • A 2.4 GHz Wi-Fi connection has been proposed as the communication channel between the UAV and the UGV.
  • As soon as the UAV starts gathering information about the environment, including the UGV's location, it sends that data to the UGV.
  • As a result, the UGV knows its own location in the environment and the locations of nearby obstacles.
  • Using this information, the UGV can start planning its trajectory using a path-planning algorithm such as A* or RRT [12], [13], and begin navigating through the environment.
  • The UGV continues navigating as it receives more information from the UAV.
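A* is one of the planners named above; the sketch below is a minimal, self-contained version on a small occupancy grid. It is a simplified stand-in, with the grid hard-coded where the real planner would consume the map streamed from the UAV.

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected grid; cells with value 1 are obstacles.
    Returns the list of cells from start to goal, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_set = [(h(start), 0, start, [start])]  # (f, g, node, path-so-far)
    best_g = {start: 0}
    while open_set:
        _, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (node[0] + dr, node[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                ng = g + 1
                if ng < best_g.get(nxt, float("inf")):
                    best_g[nxt] = ng
                    heapq.heappush(open_set, (ng + h(nxt), ng, nxt, path + [nxt]))
    return None

# Example: plan around a single obstacle cell in the middle of a 3x3 grid
grid = [[0, 0, 0],
        [0, 1, 0],
        [0, 0, 0]]
path = astar(grid, (0, 0), (2, 2))
```

The Manhattan heuristic is admissible on a 4-connected grid with unit step costs, so the returned path is optimal.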

UGV’s role

  • The UGV also receives the locations of any humans in the environment. It can use that information to deliver medical supplies and verify whether a human is alive using an onboard CO2 sensor.
  • In addition, the UGV has its own camera to detect humans as it navigates through the environment.
  • This two-factor approach (detecting humans with both the UGV and the UAV) is proposed to maximise the number of survivors found.
  • The UGV can also detect fire as it travels: an on-board IR sensor monitors temperature changes, and in the event of an abnormally high temperature the UGV is triggered to extinguish the fire.
  • Even if the sensors on the UGV fail, it will still be able to navigate through the environment using the UAV's map, avoiding obstacles and sending a live close-up feed, so the navigation system itself is never put at risk.
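The UGV behaviours above can be sketched as a simple decision rule. The threshold values, reading names and action labels below are illustrative assumptions, not calibrated figures from the proposal.

```python
# Hedged sketch of the UGV's on-board checks. Thresholds are assumed
# placeholder values; real deployments would calibrate them per sensor.

FIRE_TEMP_C = 60.0      # assumed threshold for an abnormally high IR temperature
CO2_ALIVE_PPM = 1000.0  # assumed elevated-CO2 level near a breathing survivor

def decide_action(ir_temp_c, co2_ppm, sensors_ok=True):
    """Map sensor readings to the UGV behaviours described above."""
    if not sensors_ok:
        # Sensor failure: fall back to navigating on the UAV's map
        # while streaming live video only.
        return "navigate_and_stream"
    if ir_temp_c is not None and ir_temp_c > FIRE_TEMP_C:
        return "extinguish_fire"
    if co2_ppm is not None and co2_ppm > CO2_ALIVE_PPM:
        return "deliver_medical_supplies"
    return "continue_search"
```

The explicit `sensors_ok` fallback mirrors the robustness requirement: a failed payload sensor degrades the mission, but never the navigation.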

Fig. 1: Visualisation of proposed solution

Workflow

  • The entire project will be implemented using ROS (Robot Operating System).

  • A simulation of a restrictive, hostile environment will be created using Gazebo.

  • A UGV and a UAV model capable of navigating the environment will be developed.

  • Multiple simulations of different hostile environments will be considered in order to generalise the solution.

UGV’s motion planning

Flowchart: Start → Find Goal Point → Start DWA Node → “Goal reached?” (No: keep running the DWA node; Yes: proceed) → “Task executed?” (No: WAIT for execution; Yes: Stop).
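The DWA node in the loop above can be sketched as a single planning step: sample velocity commands inside the dynamic window, forward-simulate each one, discard unsafe rollouts, and keep the command whose end pose is closest to the goal. All limits, the horizon and the clearance radius below are illustrative assumptions, not the tuned parameters of the ROS DWA planner.

```python
import math

def dwa_step(pose, goal, obstacles,
             v_range=(0.0, 1.0), w_range=(-1.0, 1.0),
             samples=5, dt=0.1, horizon=10, clearance=0.3):
    """One DWA-style planning step: return the best (v, w) command, or None."""
    x0, y0, th0 = pose
    best, best_cost = None, float("inf")
    for i in range(samples):
        v = v_range[0] + i * (v_range[1] - v_range[0]) / (samples - 1)
        for j in range(samples):
            w = w_range[0] + j * (w_range[1] - w_range[0]) / (samples - 1)
            x, y, th = x0, y0, th0
            safe = True
            for _ in range(horizon):  # forward-simulate this velocity command
                th += w * dt
                x += v * math.cos(th) * dt
                y += v * math.sin(th) * dt
                if any(math.hypot(x - ox, y - oy) < clearance for ox, oy in obstacles):
                    safe = False
                    break
            if not safe:
                continue
            cost = math.hypot(goal[0] - x, goal[1] - y)  # distance-to-goal cost
            if cost < best_cost:
                best, best_cost = (v, w), cost
    return best

# Example: goal straight ahead, one obstacle well off to the side
cmd = dwa_step(pose=(0.0, 0.0, 0.0), goal=(2.0, 0.0), obstacles=[(0.5, 1.0)])
```

A production DWA cost function would also weight heading alignment and velocity, but the sample-simulate-score loop is the same shape.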

Fig. 2: Parrot drone simulation

Expected Results

  • The proposed framework is expected to ensure minimum loss of life in search and rescue operations by providing timely assistance.
  • It also aims to ensure that the maximum number of people trapped in a post-disaster environment are tracked and brought to safety.
  • The mechanism will ensure robustness so that it can be deployed in extremely harsh, GPS-denied and hostile environments, taking sensor failure into consideration.
  • Since navigation is done with a camera instead of a LIDAR sensor, less power will be consumed and the operation can be carried out more efficiently.
  • In addition, an accurate 3D map of the environment is obtained thanks to the RGB-D camera, instead of depending on images only as in prior works.

References

  • [1] I. Martinez-Alpiste, G. Golcarenarenji, Q. Wang, and J. M. Alcaraz-Calero, “Search and rescue operation using UAVs: A case study,” Expert Syst. Appl., vol. 178, p. 114937, 2021, doi: 10.1016/j.eswa.2021.114937.
  • [2] E. T. Alotaibi, S. S. Alqefari, and A. Koubaa, “LSAR: Multi-UAV Collaboration for Search and Rescue Missions,” IEEE Access, vol. 7, pp. 55817–55832, 2019, doi: 10.1109/ACCESS.2019.2912306.
  • [3] J. Ni, J. Hu, and C. Xiang, “A review for design and dynamics control of unmanned ground vehicle,” Proc. Inst. Mech. Eng. Part D J. Automob. Eng., vol. 235, no. 4, pp. 1084–1100, Apr. 2020, doi: 10.1177/0954407020912097.
  • [4] N. Srilatha, T. Malyadri, V. V. D. Sahithi, and P. Morampudi, “Design of unmanned guided vehicle for rescue missions,” Mater. Today Proc., vol. 45, pp. 3177–3185, 2021, doi: 10.1016/j.matpr.2020.12.361.
  • [5] G. Quaglia and P. Cavallone, “Rese_Q: UGV for Rescue Tasks Functional Design,” ASME 2018 International Mechanical Engineering Congress and Exposition. Nov. 09, 2018, doi: 10.1115/IMECE2018-86395.
  • [6] M. Pierzchała, P. Giguère, and R. Astrup, “Mapping forests using an unmanned ground vehicle with 3D LiDAR and graph-SLAM,” Comput. Electron. Agric., vol. 145, pp. 217–225, 2018, doi: 10.1016/j.compag.2017.12.034.

  • [7] C. Stachniss, J. J. Leonard, and S. Thrun, “Simultaneous localization and mapping,” Springer Handb. Robot., pp. 1153–1175, 2016, doi: 10.1007/978-3-319-32552-1_46.
  • [8] P. Kim, J. Park, and Y. K. Cho, “As-is geometric data collection and 3D visualization through the collaboration between UAV and UGV,” Proc. 36th Int. Symp. Autom. Robot. Constr. (ISARC 2019), pp. 544–551, 2019, doi: 10.22260/isarc2019/0073.
  • [9] G. Christie, A. Shoemaker, K. Kochersberger, P. Tokekar, L. McLean, and A. Leonessa, “Radiation search operations using scene understanding with autonomous UAV and UGV,” J. F. Robot., vol. 34, no. 8, pp. 1450–1468, 2017, doi: 10.1002/rob.21723.
  • [10] H. Qin et al., “Autonomous Exploration and Mapping System Using Heterogeneous UAVs and UGVs in GPS-Denied Environments,” IEEE Trans. Veh. Technol., vol. 68, no. 2, pp. 1339–1350, 2019, doi: 10.1109/TVT.2018.2890416.
  • [11] J. Fuentes-Pacheco, J. Ruiz-Ascencio, and J. M. Rendón-Mancha, “Visual simultaneous localization and mapping: a survey,” Artif. Intell. Rev., vol. 43, no. 1, pp. 55–81, 2015, doi: 10.1007/s10462-012-9365-8.
  • [12] I. Noreen, A. Khan, and Z. Habib, “A Comparison of RRT, RRT* and RRT*-Smart Path Planning Algorithms,” IJCSNS Int. J. Comput. Sci. Netw. Secur., vol. 16, no. 10, pp. 20–27, 2016.
  • [13] A. K. Guruji, H. Agarwal, and D. K. Parsediya, “Time-efficient A* Algorithm for Robot Path Planning,” Procedia Technol., vol. 23, pp. 144–149, 2016, doi: 10.1016/j.protcy.2016.03.010.