The Problem With Microsoft Mixed Reality Capture Studio

Microsoft Mixed Reality Capture Studio Review

You won't be seeing armies of volumetric Orcs or legions of stormtroopers coming out of Microsoft's Mixed Reality Capture Studio, not yet anyway.

Microsoft are on a push with their Mixed Reality platform and their Mixed Reality Capture Studio solution, but there's a funky problem, call it a pain point if you like: the system relies heavily on infrared sensors, and those sensors are fussy about the materials they see. There's a workaround, but it's an expensive one.

What is Volumetric Video Capture?

Volumetric capture promises to record bodies, faces and heads in detail with depth data, overlaid with optical information showing colour and texture like footage shot on a traditional camera. Don't get us wrong, we all love this idea, but Microsoft have rushed to sell what we'd call vapourware.

Shot in a volume, volumetric video promises 3D animation-ready VR/AR characters in the geometry-based .MP4 format used for games and interactive experiences. The system overlays and merges optical, depth and animation data so that you can import 3D animated characters directly into your Unity or Unreal Engine project. But in the industry we do all this already with mo-cap and plenty of hard work. The benefit of volumetric video capture should be speed, but that's not the case so far.

The Microsoft Mixed Reality Capture Studio.

With the Microsoft Mixed Reality Capture Studio system, 190 cameras are arranged around a 360-degree studio. Each camera is equipped with an optical sensor and, in Microsoft's case, an infrared sensor that detects depth and movement and records them as .FBX data. That's all fine and dandy, but it's the IR sensor part that seems to be the cause of so many problems.

The problem with volumetric IR depth sensors.

When shooting with IR, as in Microsoft's Mixed Reality Capture Studio, it's essential that you take the production's wardrobe into consideration, because it could make or break your volumetric video production budget. We are not saying this makes the system a total waste of time, but in the interest of avoiding disappointed faces on set, you need to know this.

Volumetric costume design.

Wardrobe and Production Design are Key.

Wardrobe and design are so important because infrared cameras struggle with many different materials, making the uninitiated stylist's job a potential hell.

Infrared cameras have trouble with:

  • Leathers  
  • Glass  
  • Jewellery
  • Dark colours (especially BLACK!) 
  • Shiny metals
  • Plastics
  • Stitching
  • Patterns

Having said all that, in tests one pair of black jeans vanished while another brand of black jeans worked perfectly fine. So what's going on?

These restrictions (or maybe not restrictions, depending on the unique properties of each item) may not seem like a massive problem, but these are just some of the materials that have caused problems when shooting volumetric video.

You have to take into consideration variables like density, chemical make-up, weave pattern, fabric texture, even the physical qualities and colour of any makeup applied. These restrictions may limit a lot of creative decisions for directors, stylists and production designers.

The reason the infrared doesn't pick up or detect these materials is that (here comes the science) when infrared waves are sent towards an object, they do one of three things, depending on the properties of the object and the surface they hit.

Infrared wavelengths in volumetric video capture.

IR light waves are either reflected, absorbed or transmitted. When shooting volumetric video with Microsoft's Mixed Reality Capture system, the beams have to be reflected back to the camera to relay the needed data.
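To make the physics a little more concrete, here's a rough Python sketch of the idea: only the fraction of IR energy that bounces back to the sensor is usable, so a fabric that absorbs or transmits most of the pulse simply has no depth data to give. The threshold figure is an illustrative assumption on our part, not a published Microsoft spec.

```python
# Conceptual sketch only, not Microsoft's algorithm. Whatever share of the IR
# pulse is absorbed or transmitted never comes back, so only the reflected
# share can carry depth information. The sensor threshold is a made-up number.

def depth_readable(reflected, absorbed, transmitted, sensor_threshold=0.2):
    """Return True if enough IR energy bounces back for a usable depth sample."""
    total = reflected + absorbed + transmitted
    assert abs(total - 1.0) < 1e-6, "the three fractions must account for all the light"
    return reflected >= sensor_threshold

# A matte cotton shirt vs. an IR-absorbing black denim weave (illustrative values):
print(depth_readable(reflected=0.60, absorbed=0.30, transmitted=0.10))  # True
print(depth_readable(reflected=0.05, absorbed=0.90, transmitted=0.05))  # False
```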

How can this be helped? 

Certain aspects of the problem can be reduced. Metals, for example, can be sprayed with dulling spray to bring down the amount of reflection and give more control over the IR waves. There are other exceptions, but you need to think like a chemist and a physicist to know why.

As we said, some clothes, black jeans for example, shouldn't work, but when tested some worked fine due to their different weave, dye and chemical make-up. Turning trousers inside out and shooting the opposite side of the fabric solved one shoot. Some stitching scans but then stands out above the fabric; it seems to come down to the actual physical and chemical properties as well as colour and reflectivity.

So what do you do?

The only real way to know if a costume is going to work is to go to a studio and do physical tests with the system, because, as reported, there are sometimes exceptions, and unless you test you'll never know, you'll just be guessing.

Is this just a problem with Microsoft's volumetric capture solution? We'd say it's a problem for any IR-based depth and motion sensors. Testing costumes and props before shooting means more time in the volumetric volume, and that means more cost.

With the cost of hiring a volumetric stage running the Microsoft Mixed Reality Capture Studio software currently around $20,000 a day, at least half of that day is spent tweaking and testing costumes and props to get a good result from the system's IR sensors.

Testing volumetric video.

Some volume operators are also charging additional facility fees for each costume change, as the system needs to be tested again, and the costume possibly altered on set, for every actor.

Imagine having to scan 20 different types of game character and test all the armour, weapons and kit, only to find that they don't 'read' in the volumetric volume. VFX supervisors need to budget for testing, re-testing, shooting, cleaning up the data and, if it doesn't hold up, shooting again.
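For a rough sense of the arithmetic, here's a quick back-of-the-envelope sketch in Python. Only the roughly $20,000-a-day stage rate and the half-a-day-lost-to-testing estimate come from above; the number of shoot days, costume changes and per-change facility fee are placeholder guesses for illustration.

```python
# Back-of-the-envelope only. DAY_RATE and TEST_FRACTION echo the figures quoted
# above; shoot_days, costume_changes and the facility fee are hypothetical.

DAY_RATE = 20_000      # approximate daily hire for a Mixed Reality Capture stage
TEST_FRACTION = 0.5    # at least half the day goes on IR costume/prop testing

def estimate(shoot_days, costume_changes, facility_fee_per_change):
    """Return (total spend, days of actual usable capture time)."""
    stage_cost = shoot_days * DAY_RATE
    change_fees = costume_changes * facility_fee_per_change
    usable_days = shoot_days * (1 - TEST_FRACTION)
    return stage_cost + change_fees, usable_days

total, usable = estimate(shoot_days=2, costume_changes=20, facility_fee_per_change=500)
print(f"Roughly ${total:,} spent for about {usable:g} days of real capture time")
```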

Volumetric video capture solutions… next!

Unreal Engine Plugin Market Opens Up

Take a look at what's possible: Theia Interactive built Optim as a plug-in for Datasmith, thanks to the way Epic Games' Unreal Engine lets Python developers get under the engine's hood and build custom plugins for all kinds of functionality. This is another example of how Epic Games is inventing new markets within its own ecosystem.

Optim 

With Epic recently making the editor 100% scriptable with Python, developers have total freedom to write simple code that automates everyday actions like import processes or deleting and generating UVs. Simple tasks made automatic save developers time and effort and speed up production.

Optim uses Datasmith's 20+ format support to get high-fidelity data out of core design applications. Epic has focused its technical effort on accurately bringing over metadata, converting materials to Unreal Engine and dealing with a wide range of data-prep issues. Optim builds on that base to simplify the optimization process.

Some examples of things that you could automate with Optim are: 

  • Skip the import of any meshes with names containing a certain string 
  • Create LODs for any mesh larger than a given number of triangles 
  • Instance any mesh with a particular name 
  • Merge everything with a particular property under a single group 
  • Replace all materials by name with existing materials from the Content Browser 

Python Scripts 

Unlike Blueprints, the Python environment is only available in the Unreal Editor, not when your project is running in the Unreal Engine. That means you can use Python freely for scripting and automating the Editor or building asset production pipelines, but you cannot currently use it as a gameplay scripting language. If you want a gameplay scripting language, Blueprints are the best option for now.
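As a rough illustration of the sort of editor automation listed above (the kind of thing Optim packages up more elegantly), here's a minimal sketch using the editor's built-in `unreal` Python module. The /Game/Imported folder and the "_proxy" naming rule are invented for the example; they aren't part of Optim or Datasmith.

```python
# A minimal sketch of editor-side automation, run from the Unreal Editor's
# Python console (the `unreal` module only exists inside the editor).
import unreal

SEARCH_PATH = "/Game/Imported"   # hypothetical folder holding imported assets
UNWANTED_TAG = "_proxy"          # hypothetical substring marking meshes to drop

def remove_unwanted_meshes():
    """Delete any imported Static Mesh whose name contains the unwanted tag."""
    asset_paths = unreal.EditorAssetLibrary.list_assets(SEARCH_PATH, recursive=True)
    for path in asset_paths:
        asset = unreal.EditorAssetLibrary.load_asset(path)
        if isinstance(asset, unreal.StaticMesh) and UNWANTED_TAG in asset.get_name():
            unreal.log("Removing {}".format(path))
            unreal.EditorAssetLibrary.delete_asset(path)

remove_unwanted_meshes()
```

The same pattern, loop over assets and apply a rule, is what sits behind most of the bulk-cleanup examples in the list above.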

Python coding in Unreal Engine

New Markets? 

Optim shows the new possibilities opened up by Python integration combined with Datasmith: together they let developers make the Unreal Engine their own and customize every aspect of it. Optim will be sold as a subscription package in early 2019. Could this be the start of a new market? Epic has developed a design engine that is now stable enough to host a totally new market, one where third-party teams like Theia Interactive can take advantage of the Unreal Engine to create services and content, with no barriers to entering the market and profit to be made.

AI for VFX and what it means for you.

AI VFX on-set

Artificial intelligence is set to change the way VFX is approached and produced. Some of the biggest names in the industry, like Digital Domain, have been discussing the various forms of AI and their applications to VFX, and asking how the two can be integrated.


Last year at SIGGRAPH there was a series of key talks and panels discussing deep learning, with examples of convolutional neural networks, generative adversarial networks and autoencoders. The panels discussed how deep learning and convolutional neural networks can benefit the VFX and 3D design industries, with everything from face and fluid simulation to image denoising, character animation, facial animation and texture creation.

The panel focused mainly on how these new tools will change VFX pipelines. Doug Roble said the technologies are "scary tools when seen for the first time", although he went on to claim "you can use these to do visual effects in a completely brand-new way", pointing to the capabilities of the new machine learning technologies.

However, with these changes in technology there's no denying that jobs will be displaced. But there will also be openings, and a shift of industry jobs towards the visual effects and programming sector.

The use of machine learning within VFX raises the possibility of making models without manual texturing, lighting and rendering, because the computer learns how to do these steps itself. This would fundamentally change the way the VFX pipeline works and massively decrease post-production times on films.

We're closer to these opportunities because of the data-driven approach, rather than the previous mathematical methods of programming and hand-tuned algorithms.
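To show how small the core idea can be, here's a minimal sketch of a convolutional denoising autoencoder of the kind the SIGGRAPH panels were talking about, written with PyTorch. The architecture, image sizes and random training data are purely illustrative assumptions, not any studio's production setup.

```python
# A toy denoising autoencoder: learns clean frames from noisy/clean pairs.
import torch
import torch.nn as nn

class DenoisingAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: compress a noisy RGB frame into a smaller feature map.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder: reconstruct a clean frame from the compressed features.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, kernel_size=4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, noisy):
        return self.decoder(self.encoder(noisy))

model = DenoisingAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

clean = torch.rand(8, 3, 64, 64)               # stand-in for clean rendered frames
noisy = clean + 0.1 * torch.randn_like(clean)  # synthetic render/sensor noise

for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(noisy), clean)
    loss.backward()
    optimizer.step()
```

The point is the data-driven shift described above: instead of hand-tuning a denoising algorithm, the network learns the mapping from noisy to clean frames from example pairs.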

Motion control robot for previsualization.

What’s it mean for us? 

With the abilities of machine learning constantly developing, it won't be long before AI totally changes the VFX pipeline and possibly shifts it from post to pre-production, much like the shift towards virtual production methods and on-set graphics systems. If AI can edit a scene by adding VFX, we will be able to make the entire process real-time and cut out a large portion of post-production, cutting production times and costs.

There'd still be a lot of work within VFX, but it'd be done before a shoot instead of afterwards, cutting times. Actors would benefit because they wouldn't have to imagine the VFX; they could simply see them on a monitor and react appropriately. Producers would enjoy faster production times and lower costs. Everyone involved in the production would benefit.

 

Octane Render Plugin for Unreal Engine News 2019

"It's happening, just waiting on 2019.1 features in core for wider beta release" was the response of OTOY CEO Jules Urbach in a comment in the OctaneRender Facebook Group, when OSF pushed for release-date news of the new OctaneRender UE4 plugin and integration.

Earlier this year OTOY and Epic Games announced that OctaneRender and Unreal Engine will be integrated in the first half of 2019. The two have been designed to work together to bring AI-accelerated GPU path tracing and light field baking to Unreal-powered games, films, VFX, arch-viz, mixed reality and virtual production applications.

The new abilities and features will be included in OctaneRender's $20/month baseline subscription package. The subscription puts 20 GPUs, network rendering and over 20 DCC integrations at your fingertips.


Earlier this year at SIGGRAPH, an Unreal Engine integration demo showed the ability to:

  • Rapidly and automatically convert Unreal Engine scenes and materials into OctaneRender.
  • Run the Brigade Engine, an OctaneRender tool powered by the Unreal Engine, to path trace games and interactive content in real time.
  • Apply AI Light, AI Scene, AI Spectral and Volumetric Denoising, along with Out-of-Core Geometry, UDIM support and Light Linking, for production-ready final rendering.
  • Use Volumetric Geometry and Lighting for infinite detail and granular lighting control.
  • Support more than 20 of the industry's leading DCC tools, including Cinema 4D, Autodesk Maya and 3ds Max, via OTOY's ORBIX scene format, so artists can create scenes wherever they prefer and easily drop them into the Unreal Engine with the content remaining fully responsive.
  • Run on OctaneRender's optimized backends: NVIDIA RTX ray tracing, Vulkan, DXR, CUDA and Metal (iOS/macOS), built on OTOY's cross-platform RNDR framework.

Above: OSF virtual sets rendered in Octane, baked in Unreal Engine.

This isn't the first collaboration between OTOY and Epic Games; it expands on the earlier release of Paragon assets to all Unreal Engine developers, which included OTOY's LightStage facial scanning technology and gave developers a path to new levels of photorealism. The integration of OctaneRender 2019 into Unreal Engine will give developers access to the RNDR SDK, which will serve as the portal for OTOY's end-to-end holographic mixed reality.