Themes

The XR Network+ project is framed by six themes spanning virtual production (VP)-related content creation and consumption.

VP integration of virtual game worlds and physical content

At the centre of VP is the role of the game engine: a real-time 3D interactive software design environment that provides offline/real-time graphics rendering, collision detection/interaction, audio, scripting/coding, animation, AI, etc. As game engines become a key design tool, from product design to automotive engineering, the challenge arises of how to combine digital assets created within 3D game worlds with live content from traditional production/broadcast practice. Challenges include building virtual cinematography production tools in the game engine; applying interactive game-based world-building contexts in a production environment; improving the latency of in-camera VP stages; extending reflection modelling to include multi-camera viewpoints; supporting multi-person volumetric/motion capture and collaboration in shared spaces; establishing bandwidth efficiencies and perceptual thresholds for live experiences; and developing future platforms for next-generation content makers.
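
To make the interplay between tracked physical cameras, latency, and the real-time render loop concrete, the sketch below gives a minimal, engine-agnostic Python example: a hypothetical tracker pose is linearly extrapolated to the render timestamp, a crude form of the latency compensation an in-camera VP stage requires. All names and fields are illustrative assumptions rather than a specific engine API or tracking protocol.

```python
from dataclasses import dataclass

# Hypothetical pose record streamed from an on-set camera tracker
# (names and fields are illustrative, not a specific tracking protocol).
@dataclass
class TrackedPose:
    t: float                 # capture timestamp (seconds)
    position: tuple          # (x, y, z) in stage/world units
    rotation: tuple          # (pitch, yaw, roll) in degrees
    focal_length_mm: float   # lens metadata, used to match the virtual FOV


def predict_pose(prev: TrackedPose, curr: TrackedPose, render_time: float) -> TrackedPose:
    """Linearly extrapolate the tracked pose to the render timestamp.

    A crude form of latency compensation for in-camera VFX: the LED-wall
    frustum is rendered for where the camera *will* be, not where it was
    when the tracker sample arrived.
    """
    dt = curr.t - prev.t
    if dt <= 0:
        return curr
    a = (render_time - curr.t) / dt
    lerp = lambda p, c: tuple(cv + a * (cv - pv) for pv, cv in zip(p, c))
    return TrackedPose(render_time, lerp(prev.position, curr.position),
                       lerp(prev.rotation, curr.rotation), curr.focal_length_mm)


if __name__ == "__main__":
    p0 = TrackedPose(0.000, (0.0, 1.6, 4.0), (0.0, 90.0, 0.0), 35.0)
    p1 = TrackedPose(0.020, (0.1, 1.6, 3.9), (0.0, 92.0, 0.0), 35.0)
    # Predict 30 ms ahead to cover tracker + render + LED-wall latency.
    print(predict_pose(p0, p1, render_time=0.050))
```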

Sound Design in VP

Traditional film/TV sound production is based around on-set dialogue capture in quiet, controlled environments. Music and sound effects are added in post-production, with production dialogue replaced as needed. Game audio uses an object-based approach, in which contextual, behavioural, or conditional metadata can be attached to sound-generating elements. VP stages, especially those based on large volumes and large, highly reflective LED projection walls, are acoustically challenging and bring further complexities for production sound and post-production integration. New blended VP sound design workflows need to be defined, and solutions are needed to deliver high-quality, intelligent audio content, for instance using image processing to add metadata to sound objects in production environments.
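
As an illustration of the object-based approach, the sketch below shows a minimal Python representation of a sound object with attached metadata, plus a hypothetical step that derives a pan position from an on-screen bounding box (e.g. from a face-tracking pass over the picture). The field names and the image-derived metadata are assumptions for illustration, not a specific broadcast metadata standard.

```python
from dataclasses import dataclass, field

# Minimal object-based audio representation: an audio asset plus attached metadata
# (field names are illustrative, not a specific broadcast metadata standard).
@dataclass
class SoundObject:
    name: str
    audio_file: str
    metadata: dict = field(default_factory=dict)


def attach_speaker_position(obj: SoundObject, bbox, frame_width: int) -> SoundObject:
    """Attach a coarse azimuth to a dialogue object from an on-screen bounding box.

    `bbox` is (x_min, x_max) in pixels for the detected speaker; the azimuth is
    normalised to [-1, 1] (screen left to screen right) so a downstream renderer
    could pan the object accordingly.
    """
    x_min, x_max = bbox
    centre = 0.5 * (x_min + x_max)
    obj.metadata["azimuth"] = 2.0 * centre / frame_width - 1.0
    obj.metadata["source"] = "image-derived"
    return obj


if __name__ == "__main__":
    dialogue = SoundObject("actor_1_dialogue", "takes/scene12_take3.wav",
                           {"type": "dialogue", "priority": "high"})
    # Bounding box from a (hypothetical) face detector run over the programme picture.
    print(attach_speaker_position(dialogue, bbox=(1200, 1400), frame_width=1920))
```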

Environments, Characters and Objects

VP aims to simulate the real world and to create fictional worlds that allow audiences to experience believable content quickly, easily, and immersively. Fundamental to this is the concept of digital twinning and the development and application of computer-generated imagery to capture, create, and alter digitally produced environments, characters, and objects. This starts with capture to build environments, for example using LiDAR/SLAM, live motion capture for character animation, and facial capture for expression, communication, and interaction. Clothing/textiles present unique challenges, including the development of new light transport models, reflectance models, and sampling techniques that improve textile appearance while decreasing render times. Translating VP workflow contexts to textile designers and artists will also be key.
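
As a toy illustration of a cloth-oriented reflectance model, the sketch below combines a Lambertian diffuse term with a Schlick-weighted sheen lobe of the kind found in several production shading models. It is illustrative only, under assumed parameters, and is not a measured or production textile BRDF.

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def fabric_brdf(n, l, v, albedo, sheen_colour, sheen_strength=1.0):
    """Toy fabric reflectance: Lambertian diffuse plus a grazing-angle sheen lobe.

    The sheen term uses a Schlick-style (1 - cos)^5 weight on the angle between
    the half vector and the light direction, lifting reflectance in edge-lit,
    grazing configurations to give a soft velvet-like rim.
    """
    n, l, v = normalize(n), normalize(l), normalize(v)
    h = normalize(l + v)                       # half vector between light and view
    cos_d = np.clip(np.dot(l, h), 0.0, 1.0)
    diffuse = np.asarray(albedo) / np.pi
    sheen = sheen_strength * np.asarray(sheen_colour) * (1.0 - cos_d) ** 5
    # Cosine-weighted contribution for a unit-radiance directional light.
    return (diffuse + sheen) * max(np.dot(n, l), 0.0)

if __name__ == "__main__":
    n = np.array([0.0, 0.0, 1.0])       # surface normal
    l = np.array([0.3, 0.0, 1.0])       # light direction
    v = np.array([-0.9, 0.0, 0.2])      # grazing view direction
    print(fabric_brdf(n, l, v, albedo=(0.5, 0.1, 0.1), sheen_colour=(1.0, 1.0, 1.0)))
```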

Digital Assets, Data, Ethics, and IP

VP has the potential to extract value from existing digital assets, e.g. digitised video of historical events, objects, places, and people. Image processing can be used to extract 3D information from linear 2D content, allowing period settings to be reconstructed from existing historical material and generating “new” digital settings for VP. This will transform how digitised archives might be used, but it has implications for the creative industries, the digital preservation of such media, and the custodians of this content. There are EDI, ethical, procedural, and legal issues, including but not limited to copyright, IPR, GDPR, and data protection, and a growing understanding of the ethical implications of reusing historical information. AI enhancement of existing material, for instance upscaling to modern standards, adds further complexity. The role of the content consumer is also relevant, especially in live, data-rich contexts where shared personalised data gives rise to ethical and design questions for future media-rich interactive experiences.
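
As a minimal illustration of lifting 2D content towards 3D, the sketch below back-projects a per-pixel depth map into a point cloud using pinhole camera intrinsics. The depth map itself is assumed to come from an upstream estimation step (e.g. monocular depth inference on digitised footage), which is outside this sketch; the function name and parameters are illustrative.

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a per-pixel depth map into a 3D point cloud.

    Assumes a simple pinhole camera with focal lengths (fx, fy) and principal
    point (cx, cy) in pixels. Each pixel (u, v) with depth z maps to the 3D
    point ((u - cx) * z / fx, (v - cy) * z / fy, z) in camera coordinates.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

if __name__ == "__main__":
    # Toy 4x4 "depth map" standing in for an estimated depth image.
    depth = np.full((4, 4), 2.0)
    pts = depth_to_points(depth, fx=500.0, fy=500.0, cx=2.0, cy=2.0)
    print(pts.shape)   # (16, 3): one 3D point per pixel
```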

AI and data-driven automation

The growth of VP will democratise creativity, lower barriers to entry, enable smaller, more agile production teams, and make relevant VP/XR technologies more accessible and commonplace. VP workflows are complex and labour-intensive, with huge potential for AI and generative algorithms to facilitate the cost-effective production of virtual studio segments [ProMod], including procedural set creation and virtual camera operation, automation, and integration with physical camera positions. There are also important questions to be addressed on the balance between creative control and procedural generation.
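
As a small illustration of procedural set creation under designer control, the sketch below scatters set-dressing props inside a stage footprint using a seeded, rejection-sampled placement; the seed and spacing parameters act as the "creative controls", and all names and values are illustrative assumptions rather than a specific toolchain.

```python
import random

def scatter_props(prop_names, count, area=(10.0, 10.0), min_spacing=1.0, seed=0):
    """Procedurally place set-dressing props inside a rectangular stage footprint.

    A seeded, rejection-sampled scatter: the seed makes layouts reproducible, so
    a designer can review a layout, adjust parameters, and regenerate, keeping
    creative control over the procedural step.
    """
    rng = random.Random(seed)
    placed, attempts = [], 0
    while len(placed) < count and attempts < 1000 * count:
        attempts += 1
        x, y = rng.uniform(0, area[0]), rng.uniform(0, area[1])
        # Reject candidates that crowd an already-placed prop.
        if all((x - px) ** 2 + (y - py) ** 2 >= min_spacing ** 2 for _, px, py in placed):
            placed.append((rng.choice(prop_names), x, y))
    return placed

if __name__ == "__main__":
    for prop, x, y in scatter_props(["crate", "barrel", "lamp"], count=5, seed=42):
        print(f"{prop:6s} at ({x:4.1f}, {y:4.1f})")
```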

Translation and Impact in the Digital Economy

VP is currently being driven by the needs of the creative/screen industries, but the technologies, workflows, and platforms that emerge, and that drive its innovation potential more widely, will have applications across and beyond the digital economy. XR Network+ will build collaborations beyond the creative sector, translating research in fields including generative design, real-time 3D technology, and audience/client experience design to industries such as manufacturing, automotive, healthcare, and culture and heritage. These activities will also support the creative sector to continue to innovate through engagement with this wider user base in its own VP-based R&D activities.