One vision for the future is that social robots will appear in supermarkets, schools, manufacturing industry, and people's homes. The success of this development will depend on how well humans can communicate with these robots, and the most natural way of interacting with them is likely to be spoken face-to-face interaction.
In this project, we explore the use of Augmented Reality (AR) to create virtual replicas of social robots. AR is a technology that overlays computer graphics onto the real world. Since physical robot platforms are very expensive to build, alter, and maintain, we aim to develop a platform where we can conduct experiments on human-robot interaction without the need for physical robots. Such a platform would allow us to study the impact of a robot's design on the interaction before the robot is manufactured.
The project will focus on investigating phenomena such as engagement, turn-taking, feedback, and joint attention in human-robot interaction, and how these behaviours are coordinated using multimodal cues in the face, body, and voice of both the user and the robot.