A Generic Framework for Editing and Synthesizing Multimodal Data with Relative Emotion Strength

Abstract

Emotion is considered a core element in performances [1]. In computer animation, body motions and facial expressions are two popular mediums through which a character expresses emotion. However, there has been limited research on how to effectively synthesize these two types of character movement with intuitive control over different levels of emotion strength, which is difficult to model effectively. In this work, we explore a common model that can represent emotion in both body motion and facial expression synthesis. Unlike previous work, which encodes emotions as discrete motion style descriptors, we propose a continuous control indicator called emotion strength, and we present a data-driven approach that synthesizes motions with fine control over emotion by adjusting this indicator. Rather than interpolating motion features to synthesize new motion as in existing work, our method explicitly learns a model that maps low-level motion features to emotion strength. Since the synthesis model is learned in the training stage, the computation required to synthesize motions at run-time is very low. We further demonstrate the generality of the proposed framework by editing 2D face images using relative emotion strength. As a result, our method can be applied to interactive applications such as computer games, image editing tools, and virtual reality applications, as well as offline applications such as animation and movie production.
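To make the data-driven mapping described above concrete, the following is a minimal sketch in Python of learning a continuous mapping from low-level motion features to an emotion strength value. It is an illustration, not the paper's implementation: the feature dimensionality, the ridge-regression model, and the names fit_strength_model and predict_strength are all assumptions.

import numpy as np

def fit_strength_model(features, strengths, reg=1e-3):
    # features:  (n_samples, n_features) low-level motion features,
    #            e.g. joint velocities or facial landmark offsets (assumed).
    # strengths: (n_samples,) annotated relative emotion strengths in [0, 1].
    # reg:       ridge regularization weight (assumed hyperparameter).
    X = np.hstack([features, np.ones((features.shape[0], 1))])  # append bias term
    A = X.T @ X + reg * np.eye(X.shape[1])                      # ridge normal equations
    return np.linalg.solve(A, X.T @ strengths)                  # closed-form weights

def predict_strength(features, w):
    # Evaluate the learned linear map on new motion feature vectors.
    X = np.hstack([features, np.ones((features.shape[0], 1))])
    return X @ w

# Toy usage with synthetic data: 100 samples of 12-dimensional features.
rng = np.random.default_rng(0)
feats = rng.normal(size=(100, 12))
labels = rng.uniform(0.0, 1.0, size=100)
w = fit_strength_model(feats, labels)
print(predict_strength(feats[:3], w))

Because the weights are computed once during training, evaluating emotion strength at run-time reduces to a single matrix-vector product, which is consistent with the low run-time cost noted in the abstract.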

Publication

Jacky C. P. Chan, Hubert P. H. Shum, He Wang, Li Yi, Wei Wei and Edmond S. L. Ho,
"A Generic Framework for Editing and Synthesizing Multimodal Data with Relative Emotion Strength",
Computer Animation and Virtual Worlds (CAVW), 2019


Links and Downloads

Paper
Video
DOI - Publisher's Page

YouTube

References

BibTeX

@article{chan19generic,
 author={Chan, Jacky C. P. and Shum, Hubert P. H. and Wang, He and Yi, Li and Wei, Wei and Ho, Edmond S. L.},
 journal={Computer Animation and Virtual Worlds},
 title={A Generic Framework for Editing and Synthesizing Multimodal Data with Relative Emotion Strength},
 year={2019},
 volume={30},
 number={6},
 pages={e1871},
 numpages={20},
 doi={10.1002/cav.1871},
 publisher={John Wiley and Sons Ltd.},
 address={Chichester, UK},
}

EndNote/RefMan

TY  - JOUR
AU  - Chan, Jacky C. P.
AU  - Shum, Hubert P. H.
AU  - Wang, He
AU  - Yi, Li
AU  - Wei, Wei
AU  - Ho, Edmond S. L.
T2  - Computer Animation and Virtual Worlds
TI  - A Generic Framework for Editing and Synthesizing Multimodal Data with Relative Emotion Strength
PY  - 2019
VL  - 30
IS  - 6
SP  - e1871
EP  - e1871
DO  - 10.1002/cav.1871
PB  - John Wiley and Sons Ltd.
ER  - 

Plain Text

Jacky C. P. Chan, Hubert P. H. Shum, He Wang, Li Yi, Wei Wei and Edmond S. L. Ho, "A Generic Framework for Editing and Synthesizing Multimodal Data with Relative Emotion Strength," Computer Animation and Virtual Worlds, vol. 30, no. 6, pp. e1871, John Wiley and Sons Ltd., 2019.
