
Face-to-face communication is the most natural and efficient form of communication we know. The analysis, modeling and simulation of human face-to-face interaction behavior is a central research theme in the SRA environment "Human in the loop". The ultimate goal of this line of research is to make communication between humans and computers or robots as fluent as that between humans. To accomplish this, we have to be able to understand and model an array of interaction phenomena that are conveyed via speech, gaze, facial expressions and body gestures.

Jonas Beskow, Professor at the Department of Speech, Music and Hearing at KTH.

However, it is not sufficient to study these modalities in isolation. We need a framework where interaction technologies can be implemented and integrated into a comprehensive multimodal, situated, conversational interaction system. Multimodal, because many modalities (speech, gaze, motion, etc.) have to be studied simultaneously. Situated, because real interaction takes place in a shared space of people and artifacts, so the system needs an embodiment in this space and the ability to perceive and act in it (such as looking at objects or people as it speaks). And conversational, because spoken interaction is our primary communication channel.

The Furhat robot provides this framework. Furhat is a robot head with an animated face that is human-like in anatomy. The facial animation allows accurate lip movements synchronized with speech, as well as the control and generation of non-verbal gestures, eye movements and facial expressions in a way that would have been next to impossible to realize with conventional robotics technology. In Furhat, the animated face is back-projected onto a translucent mask. It has a movable neck that allows head movements with three degrees of freedom.
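As a rough illustration of how such lip synchronization can be driven, the sketch below (in Python, with an assumed phoneme-to-viseme table and keyframe format chosen purely for illustration, not the actual Furhat implementation) turns a time-aligned phoneme sequence into viseme keyframes for the animated face.

# Minimal sketch of speech-synchronized lip animation.
# The viseme table and keyframe format are illustrative assumptions,
# not Furhat's actual animation pipeline.

PHONEME_TO_VISEME = {
    "p": "bilabial", "b": "bilabial", "m": "bilabial",
    "f": "labiodental", "v": "labiodental",
    "aa": "open", "iy": "spread", "uw": "rounded",
    "sil": "closed",
}

def phonemes_to_keyframes(phonemes):
    """Convert (phoneme, start_s, end_s) tuples into viseme keyframes.

    Each keyframe is (time_s, viseme), placed at the phoneme midpoint so the
    mouth reaches the target shape roughly when the sound is produced.
    """
    keyframes = []
    for phoneme, start, end in phonemes:
        viseme = PHONEME_TO_VISEME.get(phoneme, "neutral")
        midpoint = (start + end) / 2.0
        keyframes.append((midpoint, viseme))
    return keyframes

if __name__ == "__main__":
    # Time-aligned phonemes, e.g. as delivered by a speech synthesizer.
    utterance = [("sil", 0.0, 0.1), ("h", 0.1, 0.2), ("aa", 0.2, 0.45), ("iy", 0.45, 0.6)]
    for t, viseme in phonemes_to_keyframes(utterance):
        print(f"{t:.2f}s -> {viseme}")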

Furhat, a social robot and a platform for interaction research.

Furhat is built to study, implement and validate patterns and models of situated, multiparty, multimodal human-human and human-machine communication, a line of study that demands the co-presence of the talking head in the interaction environment. To date it has been used in a large number of studies of human verbal and non-verbal communication, investigating for example gaze behaviour, lip-reading and turn-taking.

But Furhat is also a social robot. We have built a fully autonomous system for multi-party social spoken dialogue. Currently the domain is general chit-chat and quiz games, but it can easily be extended to encompass other tasks. The system uses speech recognition, face tracking and modelling of attention based on multiple sources of information.
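As a rough illustration of how attention can be modelled from multiple sources of information, the sketch below (in Python, with cue names, weights and scoring chosen purely for illustration, not taken from the deployed system) fuses voice activity and face-tracking cues into a per-user attention score and selects whom the robot should attend to.

# Sketch of multi-source attention modelling for multiparty dialogue.
# Cues and weights are illustrative assumptions, not the deployed system.

from dataclasses import dataclass

@dataclass
class UserState:
    name: str
    is_speaking: bool               # from voice activity / speech recognition
    facing_robot: bool              # from face tracking: head oriented toward the robot
    seconds_since_addressed: float  # time since the robot last attended to this user

def attention_score(user: UserState) -> float:
    """Combine cues into a single attention score (higher = more salient)."""
    score = 0.0
    if user.is_speaking:
        score += 2.0   # the current speaker is usually the focus of attention
    if user.facing_robot:
        score += 1.0   # mutual gaze suggests the robot is being addressed
    # Gradually re-engage users who have been ignored for a while.
    score += min(user.seconds_since_addressed, 10.0) * 0.05
    return score

def choose_attention_target(users):
    return max(users, key=attention_score)

if __name__ == "__main__":
    users = [
        UserState("left",  is_speaking=False, facing_robot=True,  seconds_since_addressed=8.0),
        UserState("right", is_speaking=True,  facing_robot=True,  seconds_since_addressed=2.0),
    ]
    print("Attend to:", choose_attention_target(users).name)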

The Furhat system has been exhibited at the London Science Museum (2011) and at scientific conferences such as Interspeech 2013, IVA 2013 and ICMI 2012, where it received the best demo award. In 2013 the Furhat team was awarded the Robotdalen innovation award.
