“Today, character costumes can be seen in many places, such as amusement facilities and sports stadiums. They perform comical and funny body actions for us. In general, performers cannot control their costume’s facial expression. We developed ‘mimicat’, which can synchronize the performer’s facial actions with the costume’s. A character costume performer can give a more comical performance by using mimicat. Our original motivation was to combine animatronics with face and expression recognition.”
Researchers Rika Shoji, Toshiki Yoshiike, Yuya Kikukawa, Tadahiro Nishikawa, Taigetsu Saori, Suketomo Ayaka, Tetsuaki Baba, and Kumiko Kushiyama of the Graduate School of System Design, Tokyo Metropolitan University, Tokyo, Japan, presented their project ‘mimicat: Face input interface supporting animatronics costume performer’s facial expression’ at SIGGRAPH 2012, Los Angeles, California, August 5–9, 2012.
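The pipeline the researchers describe, recognizing the performer's expression and synchronizing the costume's animatronic face with it, could be sketched roughly as follows. This is a hypothetical illustration only: the function names, servo names, and pose values here are invented for the sketch and do not come from the paper.

```python
# Hypothetical sketch of an expression-to-actuator mapping, loosely
# inspired by the mimicat concept; none of these names are from the paper.

# Servo targets (degrees) for each recognized expression.
# A real system would derive these from its own actuator layout.
EXPRESSION_POSES = {
    "neutral":  {"mouth": 0,  "left_ear": 0,  "right_ear": 0},
    "smile":    {"mouth": 40, "left_ear": 20, "right_ear": 20},
    "surprise": {"mouth": 70, "left_ear": 45, "right_ear": 45},
}

def pose_for(expression: str) -> dict:
    """Map a recognized facial expression to servo angles,
    falling back to neutral for unknown labels."""
    return EXPRESSION_POSES.get(expression, EXPRESSION_POSES["neutral"])

def step_towards(current: dict, target: dict, max_delta: float = 5.0) -> dict:
    """Move each servo at most max_delta degrees per control tick,
    so the costume's face animates smoothly rather than snapping."""
    out = {}
    for name, cur in current.items():
        diff = target[name] - cur
        diff = max(-max_delta, min(max_delta, diff))
        out[name] = cur + diff
    return out

if __name__ == "__main__":
    pose = dict(EXPRESSION_POSES["neutral"])
    target = pose_for("smile")   # e.g. the recognizer reports a smile
    for _ in range(10):          # simulate ten control-loop ticks
        pose = step_towards(pose, target)
    print(pose)                  # servos have eased toward the smile pose
```

The rate-limiting step is one plausible design choice for driving physical servos; the paper itself does not specify how mimicat interpolates between expressions.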
Click the cat’s head, or here, to read the paper in full.
Although mimicat adequately met its requirements for synchronizing the performer’s and costume’s facial expressions, the research team has already identified possible improvements:
“On the other hand, our actuators are relatively few and simple. In future work we will install additional actuators and try to express more subtle facial expressions, for example an air actuator to express the puff of the user’s cheek.”