An overview of the Gesticulator architecture.

Our work on automatic gesture generation was accepted at ICMI 2020. We present a model designed to produce both beat gestures and semantic gestures. The deep learning-based model takes both acoustic and semantic representations of speech as input and generates gestures as a sequence of joint angle rotations as output. The resulting gestures can be applied to both virtual agents and humanoid robots. Both subjective and objective evaluations support the effectiveness of the approach.
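For concreteness, here is a minimal sketch of what such a two-modality interface could look like in PyTorch. The class name, layer choices, and all dimensions below are illustrative assumptions for exposition, not the paper's actual architecture (the published model, for instance, also conditions autoregressively on previously generated poses).

```python
import torch
import torch.nn as nn

class SpeechToGestureModel(nn.Module):
    """Illustrative sketch of a speech-driven gesture model: per-frame
    acoustic and semantic features go in, per-frame joint angle
    rotations come out. All dimensions here are assumptions."""

    def __init__(self, audio_dim=26, text_dim=768, pose_dim=45, hidden_dim=256):
        super().__init__()
        # Separate encoders for the two input modalities.
        self.audio_encoder = nn.Sequential(nn.Linear(audio_dim, hidden_dim), nn.ReLU())
        self.text_encoder = nn.Sequential(nn.Linear(text_dim, hidden_dim), nn.ReLU())
        # Recurrent core models temporal structure over the fused features.
        self.core = nn.GRU(2 * hidden_dim, hidden_dim, batch_first=True)
        # Decoder maps hidden states to joint angle rotations per frame.
        self.decoder = nn.Linear(hidden_dim, pose_dim)

    def forward(self, audio_feats, text_feats):
        # audio_feats: (batch, frames, audio_dim), e.g. MFCC frames
        # text_feats:  (batch, frames, text_dim), e.g. frame-aligned BERT embeddings
        fused = torch.cat([self.audio_encoder(audio_feats),
                           self.text_encoder(text_feats)], dim=-1)
        hidden, _ = self.core(fused)
        return self.decoder(hidden)  # (batch, frames, pose_dim)

# Usage: 64 frames of speech features produce 64 frames of joint rotations.
model = SpeechToGestureModel()
audio = torch.randn(2, 64, 26)   # hypothetical acoustic features
text = torch.randn(2, 64, 768)   # hypothetical semantic embeddings
poses = model(audio, text)
print(poses.shape)  # torch.Size([2, 64, 45])
```

The output sequence of joint rotations can then be retargeted to the skeleton of a virtual agent or humanoid robot.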

@inproceedings{kucherenko2020gesticulator,
  title={Gesticulator: A framework for semantically-aware speech-driven gesture generation},
  author={Kucherenko, Taras and Jonell, Patrik and van Waveren, Sanne and Henter, Gustav Eje and Alexandersson, Simon and Leite, Iolanda and Kjellstr{\"o}m, Hedvig},
  booktitle={Proceedings of the 2020 International Conference on Multimodal Interaction},
  pages={242--250},
  year={2020}
}