Cerebella automates the generation of physical behaviors for virtual humans, including nonverbal behaviors accompanying the virtual human's dialog, responses to perceptual events, and listening behaviors. Modular processing pipelines transform the input into behavior schedules, written in the Behavior Markup Language (BML), that are then passed to a character animation system.
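As an illustration, a BML behavior schedule pairs spoken text with synchronized nonverbal behaviors. The fragment below is a minimal sketch in the style of BML 1.0, not actual Cerebella output; the character name, ids, and sync points are hypothetical:

```xml
<bml xmlns="http://www.bml-initiative.org/bml/bml-1.0"
     id="bml1" character="Brad">
  <!-- The utterance, with a sync point marking a word of interest -->
  <speech id="s1">
    <text>That is a <sync id="tm1"/>really interesting idea.</text>
  </speech>
  <!-- A beat gesture whose stroke aligns with the sync point -->
  <gesture id="g1" lexeme="BEAT" stroke="s1:tm1"/>
  <!-- A head nod starting at the same moment -->
  <head id="h1" lexeme="NOD" start="s1:tm1"/>
</bml>
```

The animation system resolves the sync references (e.g., `s1:tm1`) against the speech timing so that gesture strokes and head movements land on the intended words.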

Designed as a highly flexible and extensible component, Cerebella realizes a robust process that supports a variety of use patterns. For example, to generate a character's nonverbal behavior for an utterance, Cerebella can take as input detailed information about the character's mental state (e.g., emotion, attitude) and communicative intent. In the absence of such information, Cerebella will instead analyze the utterance text and prosody to infer it. Cerebella can be used online to generate behavior in real time, or offline to generate behavior schedules that are cached for later use. Offline use has allowed Cerebella to be incorporated into behavior editors that support mixed-initiative, iterative design of behavior schedules with a human author: Cerebella and the author iterate over a cycle of Cerebella generating a behavior schedule and the author modifying it.

The above clip shows Cerebella generating nonverbal and listening behaviors using only analysis of the utterance text and audio.


Stacy Marsella, PhD
Teresa Dey, MFA
Margot Lhommet, PhD

Former Personnel:
Jina Lee, PhD
Yuyu Xu, PhD