Machine Learning for Predicting User Satisfaction in Human–Robot Interaction (HRI) Teleoperation Tasks
Di Tecco, Antonio; Frisoli, Antonio; Loconsole, Claudio
2025-01-01
Abstract
Predicting user satisfaction in Human-Robot Interaction (HRI) tasks is essential to enhancing system adaptability and user experience. This study addresses this challenge in the context of teleoperated Autonomous Ground Vehicles (AGVs) by introducing the Robot Motion Dataset (RMD). This multimodal dataset integrates motion, control, and haptic feedback signals with user satisfaction scores. Data were collected from 30 participants performing navigation tasks with and without a haptic-actuated glove interface. A set of statistical features was extracted from inertial measurements, control commands, and haptic feedback signals, and the most informative features were selected through a sequential forward selection process. Several machine learning algorithms were trained to classify satisfaction levels, with evaluation performed on both internal and participant-independent external test sets. The best performance was achieved by a Weighted k-Nearest Neighbors classifier, reaching an accuracy above 80% in both experimental conditions. The results demonstrate the feasibility of predicting user satisfaction from multimodal sensor data in real time, highlighting the potential of the proposed framework for adaptive HRI systems.
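
The abstract outlines a pipeline of statistical feature extraction, sequential forward feature selection, and a Weighted k-Nearest Neighbors classifier evaluated on a participant-independent hold-out set. The snippet below is a minimal illustrative sketch of that kind of pipeline using scikit-learn; the placeholder data, feature layout, number of satisfaction classes, and all hyperparameters (k, number of selected features) are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch only: statistical features -> sequential forward selection
# -> distance-weighted k-NN, with a participant-independent external test split.
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import GroupShuffleSplit
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Placeholder data: one row per trial, columns = statistical features computed
# from inertial, control-command, and haptic-feedback signals (hypothetical).
n_trials, n_features = 300, 24
X = rng.normal(size=(n_trials, n_features))
y = rng.integers(0, 3, size=n_trials)               # satisfaction labels (3 classes assumed)
participants = rng.integers(0, 30, size=n_trials)   # 30 participants, used for grouping

# Participant-independent external test set: no subject appears in both splits.
splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
train_idx, test_idx = next(splitter.split(X, y, groups=participants))

# Weighted k-NN = neighbors weighted by inverse distance; k is an assumed value.
knn = KNeighborsClassifier(n_neighbors=10, weights="distance")

# Sequential forward selection keeps an assumed-size subset of informative features.
selector = SequentialFeatureSelector(knn, n_features_to_select=8,
                                     direction="forward", cv=5)

model = make_pipeline(StandardScaler(), selector, knn)
model.fit(X[train_idx], y[train_idx])

acc = accuracy_score(y[test_idx], model.predict(X[test_idx]))
print(f"Participant-independent accuracy: {acc:.2f}")
```

In this sketch the grouped split stands in for the paper's participant-independent external test set, so reported accuracy reflects generalization to unseen users rather than unseen trials from known users.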
| File | Description | Type | License | Size | Format | Access |
|---|---|---|---|---|---|---|
| Machine_Learning_for_Predicting_User_Satisfaction_in_HumanRobot_Interaction_HRI_Teleoperation_Tasks.pdf | Article | Publisher's PDF | Publisher's copyright | 2.23 MB | Adobe PDF | Not available (request a copy) |
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

