
Laughing Researchers in The Netherlands (and elsewhere)

[Photo-compilation: Laugh_Detection_Multi_Modal]

Dr. Khiet Truong, a post-doctoral researcher in the Human Media Interaction (HMI) group of the University of Twente, The Netherlands, is not only one of the co-authors of the ‘Laughing Mirror’ paper featured here recently, but is also a key figure in a number of other laughter-centric research projects. See, for example: ‘On the acoustics of overlapping laughter in conversational speech’ (in: Proceedings of Interspeech 2012, pp. 851–854).

“The social nature of laughter invites people to laugh together. This joint vocal action often results in overlapping laughter. In this paper, we show that the acoustics of overlapping laughs are different from non-overlapping laughs… people appear to join laughter simultaneously at a delay of approximately 500ms.”

The HMI group has also created and evaluated (at least) two software-based laughter-detection systems. See: ‘Automatic Detection of Laughter’ (in: Proceedings of Interspeech 2005, Lisbon, Portugal, pp. 485–488).

“We have shown that it is possible to automatically distinguish human laughter from speech. Laughter is only one example of paralinguistic information that can be extracted from the speech signal. In the future, we hope to use similar methods as described in this paper for automatic detection of other paralinguistic events to make classification of emotion in speech possible.”
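
For readers curious what such a detector might look like in practice, here is a minimal, purely illustrative Python sketch. It assumes MFCC features (via librosa) and an SVM classifier (scikit-learn); the 2005 paper evaluated its own feature sets and classifiers, so this is not the authors’ pipeline, just the general shape of one.

    # Hypothetical audio-only laughter-vs-speech classifier.
    # Assumption: MFCC clip summaries + SVM, not the HMI group's exact method.
    import numpy as np
    import librosa
    from sklearn.svm import SVC
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    def clip_features(path, sr=16000):
        """Summarise an audio clip as the mean and std of its MFCCs."""
        y, sr = librosa.load(path, sr=sr)
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
        return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

    def train_detector(laugh_paths, speech_paths):
        """Train on placeholder lists of labelled clips (1 = laughter, 0 = speech)."""
        X = np.array([clip_features(p) for p in laugh_paths + speech_paths])
        y = np.array([1] * len(laugh_paths) + [0] * len(speech_paths))
        clf = make_pipeline(StandardScaler(), SVC(probability=True))
        clf.fit(X, y)
        return clf

    def is_laughter(clf, path, threshold=0.5):
        """Return (decision, probability) for a new clip."""
        prob = clf.predict_proba([clip_features(path)])[0, 1]
        return prob >= threshold, prob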

There is also a more advanced bi-modal laughter detector, which uses video as well as audio data (and from which the photo-compilation above is taken): ‘Decision-level Fusion for Audio-Visual Laughter Detection’.

“Laughter is a highly variable signal, which can be caused by a spectrum of emotions. This makes the automatic detection of laughter a challenging, but interesting task. We perform automatic laughter detection using audio-visual data from the AMI Meeting Corpus. Audio-visual laughter detection is performed by fusing the results of separate audio and video classifiers on the decision level. This results in laughter detection with a significantly higher AUC-ROC than single-modality classification”

(In: Proceedings of the 5th Joint Workshop on Machine Learning and Multimodal Interaction, Utrecht, The Netherlands, pp. 137–148.)
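
“Decision-level” fusion means each modality keeps its own classifier and only the classifiers’ output scores are combined. A hypothetical sketch of the idea, assuming a simple weighted sum of per-segment laughter probabilities (the paper compares several fusion rules; the one below is just a common choice, not necessarily theirs):

    # Hypothetical decision-level fusion of audio and video classifiers.
    import numpy as np

    def fuse_decisions(p_audio, p_video, w_audio=0.5, threshold=0.5):
        """Combine per-segment P(laughter) from two independent classifiers.

        p_audio, p_video : laughter probabilities from the audio and video
                           classifiers for the same segments.
        """
        p_audio, p_video = np.asarray(p_audio), np.asarray(p_video)
        fused = w_audio * p_audio + (1.0 - w_audio) * p_video
        return fused >= threshold, fused

    # Example with made-up probabilities for three segments:
    labels, scores = fuse_decisions([0.9, 0.2, 0.6], [0.7, 0.1, 0.3])
    print(labels)   # [ True False False]
    print(scores)   # [0.8  0.15 0.45]

Because only the final scores are fused, the audio and video classifiers can be trained and tuned separately; sweeping the threshold over the fused scores is what yields the AUC-ROC figures the quote refers to.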

BONUS: Sony’s US patent ‘Laugh detector and system and method for tracking an emotional response to a media presentation’, granted Feb. 2011.
