SealMates

2024
Social
Human Augmentation

The limited nonverbal cues and spatially distributed nature of remote communication make it challenging for unacquainted members to be expressive during social interactions over video conferencing. Although video conferencing lets users see one another's facial expressions, this visual feedback can instead lead to unexpected self-focus, causing users to miss cues that would invite others to engage equally in the conversation. To support expressive communication and equal participation among unacquainted counterparts, we propose SealMates, a behavior-driven avatar that infers the group's engagement level from collective gaze and speech patterns and then moves across interlocutors' windows in the video conferencing interface. In a controlled experiment with 15 triads, we found that the avatar's movement encouraged more self-disclosure and led participants to perceive that everyone was more equally engaged in the conversation than when no behavior-driven avatar was present. We discuss how a behavior-driven avatar influences distributed members' perceptions and the implications of avatar-mediated communication for future platforms.
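To make the core idea concrete, the engagement-driven movement described above could be sketched as follows. This is a hypothetical illustration only: the paper's actual model is not specified here, and the feature names, weighting, and target-selection rule below are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of behavior-driven avatar movement (NOT the
# authors' implementation): combine each member's share of received
# gaze and speaking time into an engagement score, then move the
# avatar to the least-engaged member's window.

def engagement_scores(gaze_share, speech_share, gaze_weight=0.5):
    """Blend per-member gaze share and speech share (each summing to
    ~1 across the group) into a simple engagement score in [0, 1]."""
    return [
        gaze_weight * g + (1.0 - gaze_weight) * s
        for g, s in zip(gaze_share, speech_share)
    ]

def avatar_target(gaze_share, speech_share):
    """Return the index of the member whose window the avatar should
    move to: the one with the lowest engagement score."""
    scores = engagement_scores(gaze_share, speech_share)
    return min(range(len(scores)), key=scores.__getitem__)

# Example triad: member 2 receives little gaze and speaks least,
# so the avatar moves to their window to draw the group's attention.
print(avatar_target([0.5, 0.4, 0.1], [0.6, 0.3, 0.1]))  # -> 2
```

In practice such a rule would be applied over a sliding time window of gaze and speech measurements rather than static shares, but the selection logic would be the same.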

Contributors

MARK ARMSTRONG, Keio University, Graduate School of Media Design, Japan
CHI-LAN YANG, The University of Tokyo, Graduate School of Interdisciplinary Information Studies, Japan
KINGA SKIERS, Keio University, Graduate School of Media Design, Japan
MENGZHEN LIM, Meiji University, Graduate School of Arts and Letters, Japan
TAMILSELVAN GUNASEKARAN, The University of Auckland, Empathic Computing Lab, New Zealand
ZIYUE WANG, Keio University, Graduate School of Media Design, Japan
TAKUJI NARUMI, The University of Tokyo, Japan
KOUTA MINAMIZAWA, Keio University, Graduate School of Media Design, Japan
YUN SUEN PAI, Keio University, Graduate School of Media Design, Japan
