
A Reinforcement Learning Decentralized Multi-Agent Control Approach exploiting Cognitive Cooperation on Continuous Environments

D'Avella S.; Avizzano C. A.; Tripicchio P.
2022-01-01

Abstract

Multi-agent system control is a research topic with broad applications, ranging from multi-robot cooperation to distributed sensor networks. Reinforcement learning has been shown to be a promising control strategy in cases where the dynamics of the agents are non-linear, complex, and highly uncertain, since it can learn policies from samples without requiring much model information. This manuscript proposes a decentralized multi-agent control approach based on a new multi-agent reinforcement learning setting in which two virtual agents, sharing the same environment, control a single avatar but have access to complementary information necessary to complete the task. Each agent is responsible for solving a portion of the problem, and to solve the problem efficiently, collaboration must emerge between the virtual agents: rather than competing, they must focus on the final goal. Each virtual agent, acting individually, is not fully autonomous, since it does not have a complete view of the scene and needs the other agent to properly command the avatar. The proposed approach proved able to efficiently solve constrained navigation problems in two different simulated setups. An actor-critic architecture with the Proximal Policy Optimization (PPO) algorithm was employed over continuous action and state spaces. Training and testing were carried out in a maze-like environment built with the StarCraft II Learning Environment.
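As a purely illustrative sketch, not the authors' implementation, the Python snippet below shows one way two decentralized PPO-style Gaussian actors, each receiving a complementary partial observation, could jointly command a single avatar with a continuous action. PPO would train each actor with the standard clipped surrogate objective, L(theta) = E_t[min(r_t(theta) A_t, clip(r_t(theta), 1 - eps, 1 + eps) A_t)]; only the cooperative action-composition step is sketched here, and all class names, dimensions, and the split of observations and action components are assumptions.

    # Illustrative sketch (not the paper's code): two decentralized actors,
    # each seeing a complementary slice of the state, jointly command one avatar.
    import torch
    import torch.nn as nn
    from torch.distributions import Normal

    class GaussianActor(nn.Module):
        """Actor producing a diagonal Gaussian over continuous actions."""
        def __init__(self, obs_dim: int, act_dim: int, hidden: int = 64):
            super().__init__()
            self.body = nn.Sequential(
                nn.Linear(obs_dim, hidden), nn.Tanh(),
                nn.Linear(hidden, hidden), nn.Tanh(),
            )
            self.mu = nn.Linear(hidden, act_dim)
            self.log_std = nn.Parameter(torch.zeros(act_dim))

        def forward(self, obs: torch.Tensor) -> Normal:
            h = self.body(obs)
            return Normal(self.mu(h), self.log_std.exp())

    # Hypothetical decomposition: agent A observes goal-related features,
    # agent B observes obstacle-related features; each contributes one
    # component of the avatar's 2-D velocity command, so neither agent
    # alone can steer the avatar through the maze.
    obs_dim_a, obs_dim_b, act_dim = 8, 8, 1
    actor_a = GaussianActor(obs_dim_a, act_dim)
    actor_b = GaussianActor(obs_dim_b, act_dim)

    obs_a, obs_b = torch.randn(obs_dim_a), torch.randn(obs_dim_b)
    dist_a, dist_b = actor_a(obs_a), actor_b(obs_b)
    action = torch.cat([dist_a.sample(), dist_b.sample()])  # joint avatar command

In this sketch the shared environment reward would drive both actors' PPO updates, so cooperative behavior emerges from the common objective rather than from explicit communication; whether this matches the paper's exact action-composition scheme is an assumption.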
Year: 2022
ISBN: 978-1-6654-9042-9

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11382/551153

Citations
  • Scopus: 0