You won't be seeing armies of volumetric Orcs or legions of stormtroopers coming out of Microsoft's Mixed Reality Capture Studio, not yet anyway.
Microsoft are on a push with their Mixed Reality platform and their Mixed Reality Capture Studio solution, but there's a funky problem, call it a pain point if you like. The Microsoft system relies heavily on infrared sensors, and the trouble is how those sensors see materials. There's a workaround, but it's an expensive one.
What is Volumetric Video Capture?
Volumetric capture promises to record bodies, faces and heads in detail with depth data, overlaid with optical information showing colour and texture like that shot from a traditional camera. Don't get us wrong, we all love this idea, but Microsoft have rushed to sell what we'd call out as vapourware.
Shot in a volume, volumetric video promises 3D animation-ready VR/AR characters in the geometry-based .MP4 format used for games and interactive experiences. The system overlays and merges optical, depth and animation data so that you can import 3D animated characters directly into your Unity or Unreal Engine game dev project. But in the industry we do all this already with mo-cap and plenty of hard work. The benefit of volumetric video capture should be speed, but that's not the case so far.
With the Microsoft Mixed Reality Studio system, 190 cameras are used in a 360-degree studio. Each camera is equipped with one optical sensor and, in the case of Microsoft, one infrared sensor, which detects depth and movement and records them as .FBX data. That's all fine and dandy, but it's the IR sensors that seem to be the cause of so many problems.
The problem with volumetric IR depth sensors.
When shooting with IR, as in Microsoft's Mixed Reality Capture Studio, it's essential that you take the production's wardrobe into consideration, because it could make or break your volumetric video production budget. We are not saying this makes the system a total waste of time, but in the interest of not seeing disappointed faces on set, you need to know this.
Wardrobe and Production Design are Key.
Wardrobe and design are so important because infrared cameras struggle with many different materials, making the uninitiated stylist's job a potential hell.
Infrared cameras have trouble with:
- Dark colours (especially BLACK!)
- Shiny metals
But then, in saying all that, in tests one pair of black jeans vanished while another brand of black jeans worked perfectly fine. So what's going on?
These restrictions (or maybe not restrictions, depending on the unique properties of each material) may not seem like a massive problem, but these are just some of the materials that have caused problems when shooting volumetric video.
You have to take into consideration variables like density, chemical make-up, weave pattern and fabric texture, even the physical qualities and colour of any makeup applied. These restrictions may limit a lot of creative decisions for directors, stylists and production designers.
The reason the infrared doesn't pick up or detect these materials is that (here comes the science) when infrared waves (light) are sent towards an object, they do one of three things, depending on the properties of the object and the surface they hit.
IR light waves are either reflected, absorbed or transmitted. When shooting volumetric video with Microsoft Mixed Reality Capture, the beams have to be reflected back to the camera to relay the needed data.
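That three-way split can be sketched as a toy model. This is purely illustrative, not Microsoft's actual pipeline: the function name, the fractions and the sensitivity threshold are all made up for the example, but the underlying rule is real — reflected, absorbed and transmitted fractions always sum to one, and only the reflected portion ever makes it back to the sensor.

```python
def ir_depth_readable(reflected: float, absorbed: float, transmitted: float,
                      min_return: float = 0.2) -> bool:
    """Return True if enough IR light comes back for a depth reading.

    The three fractions must sum to 1; min_return is a made-up
    threshold standing in for the sensor's sensitivity floor.
    """
    total = reflected + absorbed + transmitted
    if abs(total - 1.0) > 1e-6:
        raise ValueError("fractions must sum to 1")
    return reflected >= min_return

# Hypothetical material properties, for illustration only:
print(ir_depth_readable(0.60, 0.30, 0.10))  # light cotton: True, scans fine
print(ir_depth_readable(0.05, 0.90, 0.05))  # IR-absorbing black denim: False
```

This is why two visually identical pairs of black jeans can behave differently: the dye and weave shift the reflected/absorbed split, even when the colour looks the same to an optical camera.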
How can this be helped?
Certain aspects of the problem can be reduced. For example, metals can be sprayed with dulling spray to cut down reflections and give more control over the IR waves. There are other exceptions, but you need to think like a chemist and a physicist to know why.
As we said, some clothes, for example black jeans, shouldn't work, yet when tested some worked fine due to their different weave, dye and chemical make-up. Turning trousers inside out and shooting the opposite side of the fabric solved one shoot. Some stitching scans but then stands out above the fabric; it seems to come down to the actual physical and chemical properties as well as colour and reflectivity.
So what do you do?
The only real way to know if a costume is going to work is to go to a studio and do physical tests with the system, because, as reported, there are sometimes exceptions. Unless you test, you will never know; you'll just be guessing.
Is this just a problem with Microsoft's volumetric capture solution? We'd say it's a problem for any IR-based depth and motion sensors. Testing costumes and props before shooting means more time in the volumetric volume, and that means more cost.
With the cost to hire a volumetric stage using the Microsoft Mixed Reality Capture Studio software currently set at around $20,000 a day, at least half of the day is spent tweaking and testing costumes and props to get a good result from the system's IR sensors.
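The back-of-envelope maths is worth spelling out. Using the $20,000 day rate quoted above and the "at least half the day" testing overhead, the effective cost of each usable shooting hour doubles. The hours-per-day figure below is our assumption, not a quoted number:

```python
DAY_RATE = 20_000          # USD, the quoted stage hire rate
HOURS_PER_DAY = 10         # assumed length of a working day
TESTING_FRACTION = 0.5     # "at least half of the day" lost to testing

usable_hours = HOURS_PER_DAY * (1 - TESTING_FRACTION)
nominal_cost_per_hour = DAY_RATE / HOURS_PER_DAY
cost_per_usable_hour = DAY_RATE / usable_hours

print(nominal_cost_per_hour)   # 2000.0 — what the rate card implies
print(cost_per_usable_hour)    # 4000.0 — what you actually pay per shot hour
```

Whatever day length you assume, a 50% testing overhead always doubles the per-hour cost of the footage you keep.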
Some volume operators are also charging additional facility fees for each costume change, as the system needs to be tested repeatedly and the costume possibly altered on set for every actor.
Imagine having to scan 20 different types of game characters, testing all the armour, weapons and kit, only to find that they don't 'read' in the volumetric volume. VFX Supervisors need to budget for testing, re-testing, shooting, cleaning up the data, and then possibly re-shooting.
Volumetric video capture solutions… next!