Harmonized Human Machine Environment Lab

Director: Prof. Shuji Hashimoto

Our laboratory is responsible for research on the vision system and the KANSEI information system for humanoid robots. The research topics span various fields, such as environment recognition, human recognition, and KANSEI interfaces. Through this work, we aim to realize a harmonized human-machine environment.

Survival robot and a population of robots

Analysis of KANSEI with neural network

Physical model of a human face based on anatomy

Artificial Creatures

By constructing robots, we intend to understand the meaning of learning, evolution, and the emergence of emotion. Several projects are ongoing in this direction, including an autonomous survival robot and the creation of a population of robots. We are also actively involved in research on neural networks and genetic algorithms. Our research interests further include the "Virtual Humanoid", for which a physical model of a human face was built based on human anatomy, and the analysis of KANSEI information with artificial neural networks.
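As an illustration of the evolutionary computation mentioned above, a minimal genetic algorithm over bit-string genomes might look like the following sketch. The parameters and the toy fitness task are assumptions for demonstration, not the lab's actual implementation.

```python
import random

def evolve(fitness, pop_size=30, genome_len=16, generations=60, p_mut=0.05, seed=0):
    """Minimal genetic algorithm: tournament selection, single-point
    crossover, and per-bit mutation over bit-string genomes."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        nxt = []
        for _ in range(pop_size):
            p1 = max(rng.sample(pop, 2), key=fitness)   # tournament of two
            p2 = max(rng.sample(pop, 2), key=fitness)
            cut = rng.randrange(1, genome_len)          # single-point crossover
            child = p1[:cut] + p2[cut:]
            child = [g ^ 1 if rng.random() < p_mut else g for g in child]  # mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# Toy task ("OneMax"): fitness is simply the number of 1 bits in the genome,
# so evolution should drive the population toward the all-ones string.
best = evolve(sum)
```

The same loop structure carries over to robot controllers by replacing the bit string with controller parameters and the fitness function with a survival or task score.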

Robot vision system of Hadaly based on a three dimensional environmental model

Active Vision System used to understand the environment

Robust and active face tracking system using distributed cooperative processing

Auditory & Visual Scene Understanding

We consider it important for robots not only to work actively in the human environment, but also to provide an interface that is flexible and safe without specific training or tuning. So far, we have developed a model-based recognition system that automatically understands the environment and an active vision system that builds a 3D environmental map. We have also realized an active human face tracking system that utilizes both audio and visual features. Studies concerning auditory information, such as sound localization and sound source separation, remain future research topics.
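Sound localization with a pair of microphones, for instance, can be approached by estimating the time difference of arrival between the two channels. The following is a minimal cross-correlation sketch over short hypothetical signals, not the system described above:

```python
def tdoa(left, right, max_lag):
    """Estimate how many samples `right` lags behind `left` by searching
    for the lag that maximizes the cross-correlation of the two channels."""
    best_lag, best_corr = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        corr = sum(left[i] * right[i + lag]
                   for i in range(len(left)) if 0 <= i + lag < len(right))
        if corr > best_corr:
            best_lag, best_corr = lag, corr
    return best_lag

# The right channel is the left signal delayed by two samples,
# so the estimated lag should be 2.
left  = [0, 0, 1, 2, 3, 2, 1, 0, 0, 0]
right = [0, 0, 0, 0, 1, 2, 3, 2, 1, 0]
lag = tdoa(left, right, 4)  # → 2
```

Converting such a lag into a direction-of-arrival angle additionally requires the sampling rate, the microphone spacing, and the speed of sound.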

(A) Tele-handshaking system through a telephone line
(B) Haptic and tactile display for human tele-communication

(C) Hand-shaped force interface for a human-cooperative mobile robot
(D) Motion interface by weight shifting for an omni-directional vehicle
(E) Music generation driven by a grasping motion

Advanced Interface

Humanoid robots require a multimodal interface in order to use visual, auditory, and other sensory communication channels and to share action and information space with humans. We have developed several advanced interfaces that allow humans to operate machines intuitively, such as a haptic and tactile interface, a motion interface, and a grasping interface. These are applied to robot control, music generation, and human communication as a kind of PostGUI device.
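As a hypothetical illustration of how a grasping interface could drive music generation, a grip-force reading might be mapped linearly onto a MIDI note number. The force range and note range below are made-up assumptions, not the lab's actual mapping:

```python
def force_to_note(force, f_min=0.0, f_max=10.0, note_lo=48, note_hi=72):
    """Clamp a grip-force reading to [f_min, f_max] and map it linearly
    onto a MIDI note number in [note_lo, note_hi] (C3..C5)."""
    force = max(f_min, min(f_max, force))
    frac = (force - f_min) / (f_max - f_min)
    return round(note_lo + frac * (note_hi - note_lo))

# A gentle grip yields a low note, a firm grip a high one.
print(force_to_note(0.0), force_to_note(5.0), force_to_note(10.0))  # → 48 60 72
```

The same pattern generalizes to the other interfaces: a continuous sensor stream (weight shift, hand pose) is normalized and mapped to a control parameter of the target machine or instrument.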

Music creation from moving images and auditory information, and a dance system with a multimodal mobile robot

Four-dimensional virtual space display

Media Fusion

It is important to build multimedia interfaces in which sound, image, and haptic information can supplement each other. Using a large display and a motion capture system, we have attempted to display objects in a four-dimensional space. An interactive system that creates music from moving images and environmental sound has been developed. A mobile platform for dancing and musical performance has also been proposed.
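Displaying objects from a four-dimensional space requires a projection step down to the three dimensions of the display. One common approach, sketched here with an assumed viewpoint on the w axis (this is a generic technique, not necessarily the lab's method), is a perspective projection from 4D to 3D:

```python
from itertools import product

def project_4d_to_3d(point, d=3.0):
    """Perspective-project a 4D point (x, y, z, w) into 3D: the viewpoint
    sits at w = d on the w axis, so points with larger w appear larger."""
    x, y, z, w = point
    s = 1.0 / (d - w)
    return (x * s, y * s, z * s)

# Project the 16 vertices of a tesseract (4D hypercube with coordinates ±1);
# the "near" cube (w = 1) comes out larger than the "far" cube (w = -1).
vertices = [project_4d_to_3d(v) for v in product((-1.0, 1.0), repeat=4)]
```

A conventional 3D-to-2D graphics pipeline then renders the result, and rotating the object in the x-w or y-w plane before projection produces the characteristic turning-inside-out motion of 4D visualizations.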


Copyright by Humanoid Robotics Institute, Waseda University. All rights reserved.