This paper presents a multimodal system that understands and corrects the movements of Tai-Chi students in real time through the integration of audio, visual, and tactile technologies. The platform acts as a virtual teacher that transfers the knowledge of five Tai-Chi movements, using feedback stimuli to compensate for the errors a user commits while performing a gesture. The fundamental components of this multimodal interface are the gesture recognition system (based on k-means clustering, Probabilistic Neural Networks (PNN), and Finite State Machines (FSM)) and the real-time motion descriptor, which computes and qualifies the movements performed by the student with respect to those performed by the master, generating several feedback signals and compensating the movement in real time by varying the audio, visual, and tactile parameters of different devices. Experiments with this multimodal platform have confirmed that the quality of the movements performed by the students improves significantly.
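The recognition pipeline named in the abstract (k-means clustering of pose features followed by PNN classification) can be sketched as follows. This is a minimal illustration under assumed inputs, not the authors' implementation: the feature vectors, cluster count, and kernel width `sigma` are all hypothetical placeholders.

```python
import numpy as np

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means: cluster pose feature vectors into k prototypes."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), k, replace=False)].astype(float)
    for _ in range(iters):
        # Assign each point to its nearest center, then recompute centers.
        d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = np.argmin(d2, axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return centers, labels

def pnn_classify(x, train_x, train_y, sigma=0.5):
    """Probabilistic Neural Network: per-class Parzen (Gaussian kernel)
    density estimate; return the class with the highest average kernel
    response at the query point x."""
    scores = {}
    for c in np.unique(train_y):
        d2 = ((train_x[train_y == c] - x) ** 2).sum(axis=1)
        scores[c] = np.exp(-d2 / (2.0 * sigma ** 2)).mean()
    return max(scores, key=scores.get)
```

In a full system of this kind, the per-frame class labels emitted by the PNN would then drive a finite state machine that tracks progress through the phases of each Tai-Chi gesture.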
|Title:||Real-Time Gesture Recognition, Evaluation and Feed-Forward Correction of a Multimodal Tai-Chi Platform|
|Publication date:||2008|
|Appears in categories:||4.1 Conference Proceedings Contribution / Full-Length Article|