
ID: 10670/1.aoaubh · DOI: 10.26226/morressier.5cf632baaf72dec2b0554665

Creating a Framework for Whole-Body Emotional Expression to Improve the Social Communication Abilities of Artificial Agents

Abstract

A sophisticated framework for detecting emotion from facial expressions is beginning to emerge, but until we develop a clear understanding of how affective information is conveyed through whole-body movements, our appreciation of human social signalling remains incomplete. Some forms of classical dance involve the performance of abstract whole-body movements designed to induce an emotional state in the observer; dance therefore provides an ideal tool for exploring how the human body in motion transmits affective-valence information. Previous research shows that dancers demonstrate superior accuracy in recognising emotion from the body, and that dancers' expertise-driven visual-fixation patterns differ from those of non-experts. These outcomes may be related, but this has never been empirically tested. This investigation therefore aims to identify how different body regions communicate expressed emotionality to an observer. Our study bridges this gap by exploring the relationship between emotion recognition and visual-fixation patterns across three groups: professional dancers (n = 7), amateur dancers (n = 16), and non-dancer controls (n = 16). Participants watched a series of 5–6 s whole-body dance sequences and completed a binary-choice affective-valence decision task while their eye movements were recorded. Visual attention to four body regions of interest (head, arms, torso, and legs) was examined. Dancers (experts and amateurs) were significantly more accurate than non-dancer controls in identifying movements with a positive affective valence. The eye-tracking results did not support the hypothesis that divergent visual-processing strategies underlie this difference: no significant group differences emerged in visual attention toward any of the target features, nor was there any relationship between emotion-recognition accuracy and the location of visual attention. Examining perceived affect against quantitative movement parameters, we found that slow, fluid motions with an indirect trajectory significantly predicted perceptions of negative valence.

This study forms part of an ambitious three-year project (funded through the ESRC Artificial Intelligence remit) that aims to benefit the technology industry through the creation of a framework for characterising affective expression in human movement. The ultimate aim is for this resource to be used by engineers and computer scientists to improve the social-communication abilities of artificial agents (including robots and avatars) by refining their motion profiles.
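To make the movement-parameter finding concrete, here is a minimal illustrative sketch, not the study's actual analysis pipeline: the feature names (speed, fluidity, directness), the simulated data, and the choice of logistic regression are all assumptions, chosen only to mirror the reported pattern that slow, fluid, indirect movement predicts negative-valence judgements.

```python
# Minimal sketch (hypothetical, not the authors' code): a logistic
# regression relating assumed per-clip movement parameters to a binary
# perceived-valence response (1 = "negative").
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_trials = 200

# Hypothetical z-scored movement parameters for each dance clip:
# mean speed, fluidity (e.g. inverse jerk), and trajectory directness.
X = rng.standard_normal((n_trials, 3))
speed, fluidity, directness = X.T

# Simulate the reported pattern: slower (negative weight on speed),
# more fluid (positive weight), less direct (negative weight on
# directness) movement raises the odds of a negative-valence response.
logit = -1.2 * speed + 0.8 * fluidity - 1.0 * directness
y = rng.random(n_trials) < 1.0 / (1.0 + np.exp(-logit))

model = LogisticRegression().fit(X, y)
for name, coef in zip(["speed", "fluidity", "directness"], model.coef_[0]):
    print(f"{name}: {coef:+.2f}")  # fitted signs should echo the simulated effects
```

In this toy setup, the sign and magnitude of each fitted coefficient indicate how that movement parameter shifts the predicted probability of a negative-valence judgement, which is the general shape of the relationship the abstract reports.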
