StyleCap: Styling virtual character animation with sparse motion capture data and natural language prompts

Project partners: University of Edinburgh, Retinize

This project will investigate the use of natural language prompts to style the movements of virtual characters animated through sparse motion capture data in real time, increasing access and creative freedom through responsive control and experimentation.

3D character animation is incredibly expensive, often accounting for up to one third of the total budget in games, 3D films and TV. Because it is so time-consuming, it is hard for creatives to iterate, and the resulting long workflows can hinder creativity.

Efficient styling of motion capture data has the potential to solve this problem, increasing both creativity and cost effectiveness.

Researchers at the University of Edinburgh will work with experts from immersive technology studio Retinize to develop, test and integrate new machine learning models within a 3D animation production pipeline app, Animotive.

As developers of the app, Retinize have captured 20 hours of high-quality full-body motion capture and are applying machine learning to drive full-body animation from just three tracking points: the headset and two hand-held controllers. While this produces life-like animation, real creative control requires the ability to further tweak the generated movements to maximise expressiveness and style.
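To make the idea concrete, the sketch below shows the general shape of such a model: a network that regresses a full-body pose from the three tracked devices. The skeleton size, feature dimensions and plain PyTorch MLP are illustrative assumptions, not Retinize's actual system.

```python
# A minimal sketch (not Retinize's model) of sparse-to-full-body regression:
# map the three tracked inputs (headset + two controllers) to a full-body pose.
import torch
import torch.nn as nn

NUM_TRACKERS = 3        # headset + two hand-held controllers
TRACKER_FEATURES = 9    # assumed: 3D position + 6D rotation per tracker
NUM_JOINTS = 24         # assumed skeleton size
JOINT_FEATURES = 6      # assumed: 6D rotation per joint

class SparseToFullBody(nn.Module):
    """Regresses full-body joint rotations from sparse tracker poses."""
    def __init__(self, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NUM_TRACKERS * TRACKER_FEATURES, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, NUM_JOINTS * JOINT_FEATURES),
        )

    def forward(self, trackers: torch.Tensor) -> torch.Tensor:
        # trackers: (batch, NUM_TRACKERS * TRACKER_FEATURES)
        return self.net(trackers).view(-1, NUM_JOINTS, JOINT_FEATURES)

model = SparseToFullBody()
pose = model(torch.randn(1, NUM_TRACKERS * TRACKER_FEATURES))
print(pose.shape)  # torch.Size([1, 24, 6])
```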

The project team will focus on using natural language to adjust style in real time: for example, being able to say “make the character more exuberant” rather than providing explicit instructions for each joint or adjusting keyframes manually.
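One plausible way to connect free-form prompts to a small set of labelled styles, sketched below purely for illustration, is to embed the prompt with an off-the-shelf sentence encoder and pick the nearest style; the sentence-transformers model and style vocabulary are assumptions, not the project's stated design.

```python
# A minimal sketch: map a free-form prompt onto a labelled style vocabulary
# using an off-the-shelf sentence encoder. Model and vocabulary are assumed.
from sentence_transformers import SentenceTransformer, util

STYLES = ["sad", "happy", "worried", "light", "bouncy", "energetic"]

encoder = SentenceTransformer("all-MiniLM-L6-v2")
style_vectors = encoder.encode(STYLES, convert_to_tensor=True)

prompt = "make the character more exuberant"
prompt_vector = encoder.encode(prompt, convert_to_tensor=True)

# Choose the labelled style closest to the prompt in embedding space.
scores = util.cos_sim(prompt_vector, style_vectors)[0]
print(STYLES[int(scores.argmax())])  # nearest labelled style to the prompt
```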

Researchers will investigate which supervised machine learning methods and data labelling strategies are effective in producing adjustments to motion capture data that convey qualitative characteristics in 3D characters (for example: sad, happy, worried, lightness, bounce, energy).
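A minimal sketch of one possible supervised formulation follows: a model that predicts a style-conditioned residual adjustment on top of the captured pose, trained against clips labelled with the target characteristic. The skeleton representation, label set and architecture are illustrative assumptions rather than the project's chosen design.

```python
# A minimal sketch: learn a residual "style adjustment" to a captured pose,
# conditioned on a labelled style. All sizes and labels are assumptions.
import torch
import torch.nn as nn

STYLES = ["sad", "happy", "worried", "light", "bouncy", "energetic"]
NUM_JOINTS, JOINT_FEATURES = 24, 6   # assumed skeleton representation

class StyleResidual(nn.Module):
    """Predicts a per-joint adjustment for a given style label."""
    def __init__(self, embed_dim: int = 32, hidden: int = 256):
        super().__init__()
        self.style_embedding = nn.Embedding(len(STYLES), embed_dim)
        self.net = nn.Sequential(
            nn.Linear(NUM_JOINTS * JOINT_FEATURES + embed_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, NUM_JOINTS * JOINT_FEATURES),
        )

    def forward(self, pose: torch.Tensor, style_id: torch.Tensor) -> torch.Tensor:
        # pose: (batch, NUM_JOINTS, JOINT_FEATURES); style_id: (batch,)
        flat = pose.flatten(start_dim=1)
        style = self.style_embedding(style_id)
        residual = self.net(torch.cat([flat, style], dim=-1))
        # Residual formulation: the styled pose stays close to the captured one.
        return pose + residual.view_as(pose)

model = StyleResidual()
pose = torch.randn(1, NUM_JOINTS, JOINT_FEATURES)
styled = model(pose, torch.tensor([STYLES.index("happy")]))
```

In such a setup, training could minimise the difference between the adjusted pose and a performance captured in the labelled style, which is one way the labelling strategies mentioned above would feed directly into supervision.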

The team will undertake user testing to explore how animators perceive the results of these methods. The collaboration will also examine how the models developed might be implemented in a real-time animation system to increase experimentation and produce a more satisfying creative experience.

This project is one of seven supported by the XR Network+ Embedded R&D (round two) funding call, with grants of up to £60,000 awarded to researchers at UK universities to explore the transfer of knowledge between academia and industry in areas aligned with Virtual Production.
