
On Multi-Agent Cognitive Cooperation: Can virtual agents behave like humans?

D'Avella S.; Camacho Gonzalez G.; Tripicchio P.
2022-01-01

Abstract

Individuals tend to cooperate or collaborate to reach a common goal when the going gets tough, creating a common frame of reference, that is, a shared mental representation of the situation. Information exchange among people is fundamental for building a shared strategy through the grounding process, which exploits different communication channels such as vision, haptics, or voice; indeed, human perception is typically multi-modal. This work proposes a two-fold study investigating the cognitive collaboration process both among humans and among the virtual agents of a multi-agent reinforcement learning (MARL) system. The human experiment takes place in an interactive shared virtual environment that uses multi-modal channels (visual and haptic) as interaction cues. Haptic feedback is fundamental for a good sense of presence and for improving performance in completing a task. The experiment consists of escaping a virtual maze while trying to obtain the best possible score; it is meant to be performed in pairs, with the perceptual information split between the participants. A custom haptic interface was used for the interaction with the virtual environment. The machine learning case, instead, proposes two virtual agents implemented with a tabular Q-learning paradigm that control a single avatar in a 2D labyrinth, introducing a new form of MARL setting. It is well known that people who have never used haptic devices find it difficult to become familiar with them, and that feedback that is not properly transmitted yields no improvement in the cognitive workflow. Nevertheless, the main finding of the proposed work is that haptic-driven multi-modal feedback is a valuable means of collaboration, since it allows the two participants to establish a common frame of reference. The machine learning experiments show that even independent agents, given properly designed rewards, can learn the intentions of the other participant in the same environment and collaborate to accomplish a common task.
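The abstract describes two independent tabular Q-learning agents that jointly control a single avatar in a 2D labyrinth. The sketch below illustrates that idea under stated assumptions; the maze layout, reward values, and the split of the action space between the two agents are illustrative choices, not the authors' implementation.

```python
# Minimal sketch (assumptions, not the paper's code): two independent tabular
# Q-learning agents steer one shared avatar in a small 2D maze. Agent 0
# proposes horizontal moves, agent 1 vertical moves; each keeps its own
# Q-table and learns only from the shared reward, so any cooperation must
# emerge from the reward design.
import random
from collections import defaultdict

MAZE = [
    "S..#",
    ".#..",
    "..#.",
    "#..G",
]
ROWS, COLS = len(MAZE), len(MAZE[0])
ACTIONS = {0: [(0, -1), (0, 0), (0, 1)],   # agent 0: left / stay / right
           1: [(-1, 0), (0, 0), (1, 0)]}   # agent 1: up   / stay / down

ALPHA, GAMMA, EPSILON, EPISODES = 0.1, 0.95, 0.1, 5000

def step(pos, a0, a1):
    """Apply both agents' moves to the shared avatar; walls block movement."""
    dr = ACTIONS[0][a0][0] + ACTIONS[1][a1][0]
    dc = ACTIONS[0][a0][1] + ACTIONS[1][a1][1]
    r, c = pos[0] + dr, pos[1] + dc
    if not (0 <= r < ROWS and 0 <= c < COLS) or MAZE[r][c] == "#":
        r, c = pos                                  # bump into a wall: stay put
    reward = 10.0 if MAZE[r][c] == "G" else -0.1    # shared reward for both agents
    return (r, c), reward, MAZE[r][c] == "G"

Q = [defaultdict(lambda: [0.0] * 3) for _ in range(2)]  # one Q-table per agent

def choose(agent, state):
    """Epsilon-greedy action selection from the agent's own Q-table."""
    if random.random() < EPSILON:
        return random.randrange(3)
    values = Q[agent][state]
    return values.index(max(values))

for _ in range(EPISODES):
    state, done = (0, 0), False          # 'S' is the top-left cell
    for _ in range(100):                 # cap episode length
        a0, a1 = choose(0, state), choose(1, state)
        nxt, reward, done = step(state, a0, a1)
        # Each agent updates its own table with the shared reward.
        for agent, act in ((0, a0), (1, a1)):
            best_next = max(Q[agent][nxt])
            td_target = reward + GAMMA * best_next * (not done)
            Q[agent][state][act] += ALPHA * (td_target - Q[agent][state][act])
        state = nxt
        if done:
            break

print("Q-values at the start state:", [q[(0, 0)] for q in Q])
```

With a goal reward and a small per-step penalty, both agents converge on move sequences that reach the exit quickly even though neither observes the other's policy, which mirrors the abstract's claim that independent agents with properly designed rewards can cooperate on a common task.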

Use this identifier to cite or link to this document: https://hdl.handle.net/11382/543974