Learning Heuristics for Efficient Environment Exploration Using Graph Neural Networks

Herrera-Alarcon E. P. (Writing – Original Draft Preparation); Baris G. (Writing – Review & Editing); Satler M. (Writing – Review & Editing); Avizzano C. A. (Writing – Review & Editing); Loianno G. (Supervision)

Date: 2023-01-01

Abstract

The robot exploration problem focuses on maximizing the volumetric map of a previously unknown environment. This problem is relevant in several applications, such as search and rescue and monitoring, which require autonomous robots to examine their surroundings efficiently. Graph-based planning approaches embed the exploration information into a graph describing the global map, which the robot builds incrementally. However, even though graph-based representations are computationally and memory efficient, the complexity of the exploration decision-making problem increases with the size of the graph, which grows at each iteration. In this paper, we propose a novel Graph Neural Network (GNN) approach trained with Reinforcement Learning (RL) that solves the decision-making problem for autonomous exploration. The learned policy represents the exploration expansion criterion, solving the decision-making problem efficiently and generalizing across different graph topologies and, consequently, environments. We validate the proposed approach with an aerial robot equipped with a depth camera in a benchmark exploration scenario, using a high-performance physics engine for environment rendering. We compare the results against a state-of-the-art exploration planning algorithm, showing that the proposed approach matches its performance in terms of explored mapped volume. Additionally, our approach maintains its performance consistently regardless of the objective function used to explore the environment.
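For context, the following is a minimal, illustrative sketch (not the authors' released code) of how a GNN policy of this kind might score graph nodes as expansion candidates. It assumes PyTorch Geometric; the node feature layout, the two-layer GCN, and the greedy selection are assumptions made for illustration, and the RL training loop (e.g., a policy-gradient update on the returned distribution) is omitted.

```python
# Illustrative sketch only: a GNN policy that scores exploration-graph
# nodes for expansion. Feature layout and architecture are assumptions,
# not the paper's method. Requires torch and torch_geometric.
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv
from torch_geometric.data import Data

class ExplorationPolicy(torch.nn.Module):
    def __init__(self, in_dim=4, hidden_dim=64):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden_dim)      # 1-hop message passing
        self.conv2 = GCNConv(hidden_dim, hidden_dim)  # 2-hop message passing
        self.head = torch.nn.Linear(hidden_dim, 1)    # per-node expansion score

    def forward(self, data):
        h = F.relu(self.conv1(data.x, data.edge_index))
        h = F.relu(self.conv2(h, data.edge_index))
        logits = self.head(h).squeeze(-1)             # one logit per node
        return torch.softmax(logits, dim=0)           # distribution over nodes

# Toy exploration graph: 4 nodes with hypothetical features
# [information gain, distance, visited flag, frontier flag].
x = torch.tensor([[0.0, 0.0, 1.0, 0.0],
                  [0.8, 1.0, 0.0, 1.0],
                  [0.3, 2.0, 0.0, 1.0],
                  [0.0, 1.5, 1.0, 0.0]])
edge_index = torch.tensor([[0, 1, 0, 2, 1, 3],
                           [1, 0, 2, 0, 3, 1]])      # undirected edges
probs = ExplorationPolicy()(Data(x=x, edge_index=edge_index))
next_node = torch.argmax(probs).item()               # greedy expansion choice
```

Because the policy is defined per node via message passing rather than over a fixed-size input, the same weights can be applied to graphs of any size, which is what allows this style of policy to generalize across topologies as the abstract describes.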
Year: 2023
ISBN: 979-8-3503-4229-1
Files in this record:
File: Learning_Heuristics_for_Efficient_Environment_Exploration_Using_Graph_Neural_Networks.pdf
Access: open access
Type: Publisher PDF
License: Publisher copyright
Size: 1.42 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11382/564074
Warning: the displayed data have not been validated by the university.

Citations
  • Scopus: 0