
Perceptual Computing Lab.
Director: Prof. Tetsunori Kobayashi
We are developing a man-machine communication interface for humanoid robots, aimed especially at communication in daily life. The main part of our study is the modeling and development of a multi-modal interface, which is needed to handle the communication styles of everyday life.
ROBITA (Real-world Oriented BI-modal Talking Agent)
Speech & Image Processing for Man-Machine Communication
We are developing a large-vocabulary continuous speech recognizer based on hidden Markov models (HMMs). Natural language processing techniques are used to understand the meaning of the spoken words. For image processing, the robot recognizes faces and face directions using statistical methods (PCA, ICA), and gestures are recognized by processing the video stream.
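
The following is a minimal Python sketch of the PCA ("eigenface"-style) face-direction recognition mentioned above: a PCA subspace is built from training images and a new image is classified by its nearest neighbour in that subspace. The image sizes, labels, and nearest-neighbour classifier are illustrative assumptions, not the lab's actual system.

# PCA face-direction sketch with synthetic data standing in for face images.
import numpy as np

rng = np.random.default_rng(0)

# Pretend training set: 40 flattened 32x32 face images, each labelled with a
# coarse face direction.
train_images = rng.random((40, 32 * 32))
train_labels = np.array(["left", "front", "right", "front"] * 10)

# 1. Centre the data and compute the principal components (eigenfaces).
mean_face = train_images.mean(axis=0)
centered = train_images - mean_face
_, _, vt = np.linalg.svd(centered, full_matrices=False)
components = vt[:16]                      # keep the top 16 eigenfaces

# 2. Project every training image into the low-dimensional eigenspace.
train_proj = centered @ components.T

def classify_direction(image):
    """Project a new image and return the label of the nearest training sample."""
    proj = (image - mean_face) @ components.T
    distances = np.linalg.norm(train_proj - proj, axis=1)
    return train_labels[np.argmin(distances)]

print(classify_direction(rng.random(32 * 32)))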
Multi-modal Interaction
Humans tend to use non-verbal forms of expression (for example, saying "Take that box!" while pointing), rather than giving a complete set of instructions in words alone. ROBITA can understand human pointing gestures through image processing and can convey its own intentions with gestures of its arms.
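
As a rough illustration of how a deictic utterance such as "Take that box!" could be combined with a pointing gesture, the sketch below takes the referent to be the known object whose direction best matches the estimated pointing ray. The object names, coordinates, and angular threshold are hypothetical and do not describe ROBITA's actual implementation.

# Resolving "that" by combining speech with an estimated pointing direction.
import numpy as np

# Known object positions in the robot's world frame (metres), assumed given.
objects = {"red box": np.array([1.0, 0.5, 0.0]),
           "blue box": np.array([0.2, 1.5, 0.0])}

def resolve_pointing(hand_pos, hand_dir, max_angle_deg=20.0):
    """Return the object whose direction from the hand best matches the
    pointing direction, or None if nothing lies within the threshold."""
    hand_dir = hand_dir / np.linalg.norm(hand_dir)
    best_name, best_angle = None, np.radians(max_angle_deg)
    for name, pos in objects.items():
        to_obj = pos - hand_pos
        to_obj = to_obj / np.linalg.norm(to_obj)
        angle = np.arccos(np.clip(hand_dir @ to_obj, -1.0, 1.0))
        if angle < best_angle:
            best_name, best_angle = name, angle
    return best_name

# Speech supplies the verb and the deictic pronoun; vision supplies the gesture.
target = resolve_pointing(hand_pos=np.array([0.0, 0.3, 0.8]),
                          hand_dir=np.array([1.0, 0.2, -0.8]))
print("Take", target)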