
Meta Communication Lab.
Director: Prof. Katsuhiko Shirai
We are closely studying methods for communicating smoothly with robots, as well as for recognizing human faces under various conditions. We analyzed and evaluated methods for controlling a robot so that it expresses non-verbal information. As a first step toward recognizing the motion of a human face, we attempted to extract human faces under various conditions and to detect facial poses and eye gazes.
Outputs of non-verbal information with a CG robot
Non-verbal Communication
We focused on the use of non-verbal information in human-robot dialogue to realize communication as natural as that between human beings. First, we analyzed the output timing of non-verbal information during interactive dialogues between human beings. We then analyzed the influence of output timing by controlling dialogues with a CG robot. As a result, we clarified how the appropriate output timing differs among the various types of non-verbal information. Experiments on communication between a human and the robot confirmed that producing non-verbal information with appropriate timing at the beginning and end of utterances makes speaker changes smooth, just as in human-human dialogues.
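The idea of timing non-verbal outputs at the beginning and end of utterances can be sketched as a simple event scheduler. This is only an illustrative assumption of how such timing might be organized, not the lab's implementation; the function name, event labels, and the lead/lag offsets are all hypothetical.

```python
# Hypothetical sketch: schedule a non-verbal output (here, a nod) slightly
# before each robot utterance begins and just after it ends, following the
# finding that such timing smooths speaker change. Offsets are illustrative.

def schedule_nonverbal(utterances, lead=0.2, lag=0.1):
    """For each (start, end) utterance interval in seconds, emit a
    'take turn' nod shortly before the start and a 'yield turn' nod
    shortly after the end, returned in time order."""
    events = []
    for start, end in utterances:
        events.append((max(0.0, start - lead), "nod: take turn"))
        events.append((end + lag, "nod: yield turn"))
    return sorted(events)

for t, action in schedule_nonverbal([(1.0, 2.5), (4.0, 6.0)]):
    print(f"{t:.1f}s {action}")
```

A real system would drive these events from a speech synthesizer's utterance boundaries rather than fixed intervals, but the boundary-anchored scheduling is the point being illustrated.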
Results of Pose and Eye-Gaze Detection
Facial Pose and Eye-Gaze Detection
When a human gives instructions to a robot or works cooperatively with one, the human's intentions and interests must be conveyed to the robot. The motion of a human's face and gaze is important non-verbal information that indicates those intentions and interests, so recognizing such motion can lead to smoother communication with robots. We are therefore researching facial pose and eye-gaze detection.
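One common simplification of eye-gaze estimation is to classify the horizontal gaze direction from the pupil's position between the detected eye corners. The sketch below shows only that geometric idea; it is an assumption for illustration, not the lab's method, and the function name and threshold are hypothetical.

```python
# Hypothetical sketch: classify horizontal gaze direction from the pupil's
# normalized position within the eye region (0.0 = left corner, 1.0 = right).
# The margin around the midpoint that still counts as "center" is assumed.

def gaze_direction(left_corner_x, right_corner_x, pupil_x, margin=0.15):
    """Return 'left', 'right', or 'center' for the given pupil position."""
    width = right_corner_x - left_corner_x
    if width <= 0:
        raise ValueError("eye corners must satisfy left < right")
    ratio = (pupil_x - left_corner_x) / width
    if ratio < 0.5 - margin:
        return "left"
    if ratio > 0.5 + margin:
        return "right"
    return "center"

print(gaze_direction(100, 140, 120))  # pupil centered in the eye region
print(gaze_direction(100, 140, 106))  # pupil near the left corner
```

In practice the corner and pupil coordinates would come from a face/eye detector, and a full system would also combine this with head-pose estimation, since gaze relative to the camera depends on both.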
Copyright by Humanoid Robotics Institute, Waseda University. All rights reserved.