StyleCap: Styling virtual character animation with sparse motion capture data and natural language prompts

Project partners: University of Edinburgh, Retinize

XR Network+ has funded a partnership between the University of Edinburgh and immersive technology studio Retinize, which developed a machine learning model that styles the movements of virtual characters in real time using natural language prompts.

3D character animation is expensive, often accounting for up to one third of the total budget in games, 3D films and TV. Because the process is so time-consuming, iteration is difficult, leading to long workflows that can hinder creativity. Efficient styling of motion capture data is a potential solution that could significantly increase creativity and cost-effectiveness in the field.

The project team developed a dataset and machine learning model that can create full-body animation from three data points and a style label. The new workflow enables users to say “make the character more exuberant”, rather than manually adjusting each joint of a keyframe.
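To make this concrete, the sketch below shows, in PyTorch, the kind of interface such a model might expose: three tracked points (for example, a head and two hands) plus a style label, mapped to a full-body pose. The names, dimensions and architecture here are illustrative assumptions, not the project's actual model.

```python
# Minimal sketch of sparse-input, style-conditioned full-body pose prediction.
# All names (SparseToFullBody, STYLE_VOCAB), sizes and the architecture are
# hypothetical; the project's real model and API are not published here.

import torch
import torch.nn as nn

STYLE_VOCAB = ["neutral", "exuberant", "tired", "sneaky"]  # illustrative labels only
NUM_TRACKERS = 3     # e.g. head plus two hand controllers
TRACKER_DIM = 7      # 3D position + quaternion per tracker
NUM_JOINTS = 24      # assumed skeleton size
STYLE_DIM = 16

class SparseToFullBody(nn.Module):
    """Maps sparse tracker data plus a style embedding to full-body joint rotations."""
    def __init__(self):
        super().__init__()
        self.style_embedding = nn.Embedding(len(STYLE_VOCAB), STYLE_DIM)
        self.net = nn.Sequential(
            nn.Linear(NUM_TRACKERS * TRACKER_DIM + STYLE_DIM, 256),
            nn.ReLU(),
            nn.Linear(256, 256),
            nn.ReLU(),
            nn.Linear(256, NUM_JOINTS * 4),  # one quaternion per joint
        )

    def forward(self, trackers: torch.Tensor, style_id: torch.Tensor) -> torch.Tensor:
        # trackers: (batch, NUM_TRACKERS * TRACKER_DIM); style_id: (batch,)
        style = self.style_embedding(style_id)
        out = self.net(torch.cat([trackers, style], dim=-1))
        quats = out.view(-1, NUM_JOINTS, 4)
        return nn.functional.normalize(quats, dim=-1)  # unit quaternions per joint

if __name__ == "__main__":
    model = SparseToFullBody()
    trackers = torch.randn(1, NUM_TRACKERS * TRACKER_DIM)
    style_id = torch.tensor([STYLE_VOCAB.index("exuberant")])
    pose = model(trackers, style_id)
    print(pose.shape)  # torch.Size([1, 24, 4])
```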

Researchers worked with members of the animation community, conducting semi-structured interviews to explore what animators valued in their work and which terms they used most frequently. Using the findings from this qualitative research, the team created a database of the most frequently used animation terms, along with metadata categorising these terms and their alignment with Disney’s Twelve Principles of Animation, one of the most referenced frameworks for analysing animation style.
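As an illustration of how such a term database might be structured, the sketch below stores each term alongside hypothetical metadata fields (frequency, category) and its alignment with Disney's Twelve Principles of Animation. The schema and example entries are assumptions for demonstration only, not the project's actual data.

```python
# Hypothetical schema for the animation-term database described above.
# Field names, frequencies and example entries are illustrative, not real data.

from dataclasses import dataclass, field

DISNEY_PRINCIPLES = [
    "Squash and stretch", "Anticipation", "Staging",
    "Straight ahead action and pose to pose",
    "Follow through and overlapping action", "Slow in and slow out", "Arcs",
    "Secondary action", "Timing", "Exaggeration", "Solid drawing", "Appeal",
]

@dataclass
class StyleTerm:
    term: str                        # word animators use, e.g. "snappy"
    frequency: int                   # how often it appeared in interviews (illustrative)
    category: str                    # coarse grouping, e.g. "timing", "energy"
    principles: list = field(default_factory=list)  # related Disney principles

TERMS = [
    StyleTerm("snappy", frequency=12, category="timing",
              principles=["Timing", "Slow in and slow out"]),
    StyleTerm("exuberant", frequency=9, category="energy",
              principles=["Exaggeration", "Appeal"]),
]

# Example lookup: which terms align with the "Exaggeration" principle?
exaggerated = [t.term for t in TERMS if "Exaggeration" in t.principles]
print(exaggerated)  # ['exuberant']
```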

In parallel, the project team employed actors to perform the different style labels used in the dataset during a motion capture session, capturing animations relevant to each intended style.

The model developed as part of the project will eventually be integrated into a 3D animation production pipeline app, Animotive. 

Whilst the aim of the collaboration was to develop tools that enhance the work of animators and enable them to work more freely, research activities highlighted negative sentiment towards artificial intelligence in the animation and wider creative communities.

The work has emphasised the need for UK research priorities, particularly those involving machine learning and artificial intelligence, to demonstrate that they reflect the values of those affected by the research and to address concerns about issues including intellectual property (IP) and automation.

This collaboration is one of seven projects supported by the Embedded R&D (round two) funding call, with grants of up to £60,000 awarded to researchers at UK universities to explore the transfer of knowledge between academia and industry in areas aligned with Virtual Production. The projects took place over a six-month period, commencing in September 2024.
