The Problem With Microsoft Mixed Reality Capture Studio

Microsoft Mixed Reality Studio Review

You won't be seeing armies of volumetric Orcs or legions of stormtroopers coming out of a Microsoft Mixed Reality Capture Studio, not yet anyway.

Microsoft are on a push with their Mixed Reality platform and their Mixed Reality Capture Studio solutions, but there's a funky problem, call it a pain-point if you like: the Microsoft system relies heavily on infrared sensors, and how those sensors see materials matters. There's a workaround, but it's an expensive one.

What is Volumetric Video Capture?

Volumetric capture promises to record bodies, faces and heads in detail with depth data, as well as overlaid optical information showing colour and texture like that shot from a traditional camera. Don't get us wrong, we all love this idea, but Microsoft have rushed to sell what we'd call out as vapourware.

Shot in a volume, volumetric video promises 3D animation-ready VR/AR characters in the geometry-based .MP4 format used for games and interactive experiences. The system overlays and merges optical, depth and animation data so that you can import 3D animated characters directly into your Unity or Unreal Engine game dev. But in the industry we do all this already with mo-cap and plenty of hard work. The benefit of volumetric video capture should be speed, but that's not the case so far.


With the Microsoft Mixed Reality Studio system, 190 cameras are used in a 360-degree studio. Each camera is equipped with one optical sensor and, in Microsoft's case, one infrared sensor, which detects depth and movement and records them as .FBX data. That's all fine and dandy, but it's the IR sensors that seem to be the cause of so many problems.

The problem with volumetric IR depth sensors.

When shooting with IR, as in Microsoft's Mixed Reality Capture Studio, it's essential that you take the production's wardrobe into consideration, because it could make or break your volumetric video production budget. We are not saying this makes the system a total waste of time, but in the interest of not seeing disappointed faces on set, you need to know this.

Volumetric costume design.

Wardrobe and Production Design are Key.

The reason wardrobe and design are so important is that infrared cameras struggle with many different materials, making the uninitiated stylist's job a potential hell.

Infrared cameras have trouble with:

  • Leathers  
  • Glass  
  • Jewellery
  • Dark colours (especially BLACK!) 
  • Shiny metals
  • Plastics
  • Stitching
  • Patterns

But then, in saying all that, in tests one pair of black jeans vanished while another brand of black jeans worked perfectly fine. So what's going on?

These restrictions (or maybe not restrictions, depending on the unique properties of each) may not seem like a massive problem, but these are just some of the materials that have caused problems in shooting volumetric video.

You have to take into consideration variables like density, chemical make-up, the weave pattern, fabric texture, even the physical qualities of any makeup applied and its colour. These restrictions may limit a lot of creative decisions for directors, stylists and production designers.

The reason the infrared doesn't pick up or detect these materials is that (here comes the science) when infrared waves (light) are sent towards an object, they do one of three things, depending on the properties of the object and the surface they hit.

Infrared wavelengths in volumetric video capture.

IR light waves are either reflected, absorbed or transmitted. When shooting volumetric video with Microsoft's Mixed Reality Capture system, the beams have to be reflected back to the camera to relay the needed data.
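
As a rough sketch of the physics (a simplification that ignores sub-surface scattering), those three outcomes have to account for all of the incident IR energy, and only the reflected share, bounced diffusely back towards the sensor, is usable:

```latex
% Energy balance for IR light striking a surface
% (R = reflectance, A = absorptance, T = transmittance):
R + A + T = 1
```

Black dyes push A up (the signal gets swallowed), glass pushes T up (the signal passes straight through), and shiny metals reflect specularly, firing the beam off at an angle rather than back at the camera. Hence the trouble list above.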

How can this be helped? 

Certain aspects of the problem can be reduced. For example, metals can be sprayed with dulling spray to bring down the amount of reflection and give more control over the IR waves. There are other exceptions, but you need to think like a chemist and a physicist to know why.

As we said, some clothes, for example black jeans, shouldn't work, but when tested some worked fine due to their different weave, dye and chemical make-up. Turning trousers inside out and shooting the opposite side of the fabric saved one shoot. Some stitching scans but then stands out above the fabric; it seems to come down to the actual physical and chemical properties as well as colour and reflectivity.

So what do you do?

The only real way to know if a costume is going to work is to go to a studio and do physical tests with the system, because, as reported, there are sometimes exceptions, and unless you test you will never know; you'll just be guessing.

Is this just a problem with Microsoft's volumetric capture solution? We'd say it's a problem for any IR-based depth and motion sensors. Testing costumes and props before shooting means more time in the volumetric volume, and that means more cost.

With the cost of hiring a volumetric stage using the Microsoft Mixed Reality Capture Studio software currently set at around $20,000 a day, at least half of the day is spent tweaking and testing costumes and props to get a good result from the system's IR sensors.


Some volume operators are also charging additional facility fees for each costume change, as the system needs to be re-tested, and possibly the costume altered on set, for every actor.

Imagine having to scan 20 different types of game characters, testing all the armour, weapons and kit, only to find that they don't 'read' in the volumetric volume. VFX supervisors need to budget for testing, re-testing, shooting, cleaning up the data and then, quite possibly, re-shooting.

Volumetric video capture solutions… next!

Octane Render Plugin for Unreal Engine News 2019

“It’s happening, just waiting on 2019.1 features in core for wider beta release” was the response of OTOY CEO Jules Urbach in a comment in the OctaneRender Facebook Group, when OSF pushed for release-date news of the new OctaneRender UE4 plugin and integration.

Earlier this year OTOY and Epic Games announced that OctaneRender and Unreal Engine will be integrated in the first half of 2019. The two have been designed to work together to improve AI-accelerated GPU path tracing and light field baking for Unreal-powered games, films, VFX, arch-viz, mixed reality and virtual production applications.

The new abilities and features will be included in OctaneRender's $20/month baseline subscription package. This subscription gives you 20 GPUs, network rendering and over 20 DCC integrations, all at your fingertips.


Earlier this year at SIGGRAPH, an Unreal Engine integration demo showed the ability to:

  • Rapidly and automatically convert Unreal Engine scenes and materials into OctaneRender.
  • Run the Brigade Engine, an OctaneRender tool demonstrated during the display that path traces games and interactive content in realtime, powered by Unreal Engine.
  • Use AI Light, AI Scene, AI Spectral and Volumetric Denoising, plus Out-of-Core Geometry, UDIM support and Light Linking for production-ready final rendering.
  • Use Volumetric Geometry and Lighting for infinite detail and granular lighting control.
  • Support, via OTOY's ORBX scene format, more than 20 of the industry's leading DCC tools, including Cinema 4D, Autodesk Maya and 3ds Max. This enables artists to create scenes wherever they prefer, easily drop them into Unreal Engine, and have the content remain fully responsive.
  • Run OctaneRender optimized to support NVIDIA RTX ray tracing, Vulkan, DXR, CUDA and Metal (iOS/macOS) backends built on OTOY's cross-platform RNDR framework.

Above: OSF virtual sets rendered in Octane, baked in Unreal Engine.

This isn't the first collaboration between OTOY and Epic Games; it expands on the previous release of the Paragon assets, which were given to all Unreal Engine developers and included scans made with OTOY's LightStage facial scanning technology, helping developers reach new levels of photorealism. The integration of OctaneRender 2019 into Unreal Engine will give developers access to the RNDR SDK, which will serve as the portal for OTOY's end-to-end holographic mixed reality.

 

Mixed Reality Tools: Intel RealSense Depth Cameras

Deep compositing with no green screen requires a volumetric, depth view of the scene. The Intel RealSense range of cameras is a consumer-level, mass-market product that allows creators to combine VR sets with real-world images based on depth, so that elements like background and foreground can be easily separated from the camera image, or, more precisely, layered in the correct order so that your picture makes sense.
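
As a minimal sketch of how that depth keying works (assuming Intel's official pyrealsense2 Python wrapper and a depth-capable camera such as a D415/D435; the resolutions and the 1.5 m threshold are illustrative, not prescriptive), you can grab aligned colour and depth frames and separate foreground from background by thresholding depth:

```python
# Minimal depth-keying sketch using Intel's pyrealsense2 wrapper.
# Assumes a depth-capable RealSense camera (e.g. D415/D435) is connected.
import numpy as np
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
profile = pipeline.start(config)

# Align depth pixels to the colour sensor so the two images line up.
align = rs.align(rs.stream.color)
depth_scale = profile.get_device().first_depth_sensor().get_depth_scale()

try:
    frames = align.process(pipeline.wait_for_frames())
    depth = np.asanyarray(frames.get_depth_frame().get_data())
    color = np.asanyarray(frames.get_color_frame().get_data())

    # Keep anything closer than ~1.5 m as foreground; zero depth = no data.
    distance_m = depth * depth_scale
    foreground = (distance_m > 0) & (distance_m < 1.5)

    # Layer the live foreground over a stand-in for the rendered VR set.
    background = np.zeros_like(color)
    composite = np.where(foreground[..., None], color, background)
finally:
    pipeline.stop()
```

The depth threshold does the job a chroma key would normally do, with no green screen in sight.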

Best Computers for VR Production


Most people don't realise that VR games require around seven times the graphics power of normal 3D games. This is because the graphics card has to deliver a different high-resolution image to each eye at 90 frames per second.
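
That "seven times" figure traces back to NVIDIA's oft-quoted pixel-throughput estimate. A worked version, assuming their numbers (a 2160×1200 headset panel, a roughly 1.4× render-target scale per axis to compensate for lens distortion, 90 fps, compared against 1080p at 30 fps):

```latex
% VR: (2160 * 1.4) x (1200 * 1.4) pixels at 90 fps
(2160 \cdot 1.4)(1200 \cdot 1.4) \cdot 90 \approx 457\,\text{Mpix/s}
% Traditional 1080p gaming at 30 fps
1920 \cdot 1080 \cdot 30 \approx 62\,\text{Mpix/s}
% Ratio: 457 / 62 \approx 7.3
```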

Want to build the metaverse? In this post we are going to take a look at the best specifications for VR development workstations and what you'll need.

The Best GPU for VR Production:

 

Boost Realtime VFX in Unreal Engine (UE4) with multiple GPUs using SLI

GPU Render Performance

Scalable Link Interface (SLI) is a multi-GPU configuration that offers increased rendering performance by dividing the workload across multiple GPUs.

Since UE4.15, Unreal Engine has been able to take advantage of machines and servers with multiple GPUs, so long as the GPUs and system are compatible with SLI functionality.

Above: Realtime VFX ray tracing demo at GDC 2018 by Unreal Engine, ILMxLAB and Nvidia.

To take advantage of SLI, the system must use an SLI-certified motherboard. Such motherboards have multiple PCI-Express x16 slots and are specifically engineered for SLI configurations.

Building Multiple GPU machines

To create a multi-GPU SLI configuration, [NVIDIA] GPUs must be attached to at least two of these slots, and then these GPUs must be linked using external SLI bridge connectors.

Once the hardware is configured for SLI, and the driver is properly installed for all the GPUs, SLI rendering must be enabled in the NVIDIA control panel. At this point, the driver can treat both GPUs as one logical device, and divide rendering workload automatically depending on the selected mode.

There are five SLI rendering modes available:

  • Alternate Frame Rendering (AFR)
  • Split Frame Rendering (SFR)
  • Boost Performance Hybrid SLI
  • SLIAA
  • Compatibility mode

If you are building a multiple-GPU system with GPUs of different capabilities, say a Titan X plus a couple of Quadros, you can utilise SLI Compatibility mode. This mode enables UE4 to push rendering tasks to the most suitable GPU in your setup: hard tasks go to the more powerful card, while the less powerful GPUs in your rig handle lighter, more appropriate tasks. If you are interested in understanding more about SLI, take a look at the following page on the Nvidia website.
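
Before heading into the NVIDIA control panel, it's worth confirming the OS and driver can actually see every card. A quick sketch (assuming the standard nvidia-smi utility that ships with NVIDIA's drivers is on your PATH):

```python
# Quick check that the NVIDIA driver sees all installed GPUs.
# Assumes nvidia-smi (bundled with the NVIDIA drivers) is on the PATH.
import subprocess

result = subprocess.run(
    ["nvidia-smi", "--query-gpu=index,name,memory.total", "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
)

gpus = [line.strip() for line in result.stdout.splitlines() if line.strip()]
for gpu in gpus:
    print(gpu)  # e.g. "0, TITAN X (Pascal), 12288 MiB"
print(f"{len(gpus)} GPU(s) visible to the driver")
```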

UPDATE 26/03/2018: after posting this to the Octane Render group on Facebook, a few interesting comments came up that we thought we'd add to this post.

James Hibbert said, “Just for clarification, this article is talking a lot about SLI, and using SLI bridges; you do not need any of that for rendering with Octane using multiple GPUs.” But then added, “If you are using UE4, then yes, you will probably want SLI if, in the context of your project, it actually gives you some benefit. That is not always a given with raster rendering. However, with Octane your speed scales 1:1 with the number of GPUs you have.

James Hibbert, just an aside: every PC tech guru seems to agree on one thing. For games, at least the vast majority of them, a gamer is better off getting the fastest single GPU they can afford, rather than getting two slower/cheaper cards and running them in SLI/Crossfire. For Octane and Redshift, you simply need as many GPUs as you can afford.

Just remember, multi-GPU and SLI are not the same thing. SLI is a specific technology from Nvidia. Octane does not use SLI; Octane uses multi-GPU (not sure exactly which flavour there is, but your motherboard does it on its own with the help of the OS).

There is a difference.

Now there is another form of multi-GPU from Nvidia called NVLink. NVLink is similar to SLI but allows you to do things like stack GPU memory, so if you have 4 GPUs with 11GB of VRAM each, you will have a total of 44GB of VRAM, whereas all other forms would still leave you with the original 11GB. Keep in mind that NVLink is not available on consumer GPUs; you need to use Quadro or Tesla cards to use it.

Hopefully that will change with the next line of consumer GPUs from Nvidia. SLI support from Nvidia has dropped off quite a bit, to the point where they only officially support 2-way SLI. I kinda suspect that they will either drop SLI altogether or migrate everything to NVLink in future products. Because of the ray-traced UE4 demo, UE4 will feature support for NVLink in a future build, because they had to link multiple GPUs to get it to run in real-time.”
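
To put Hibbert's 1:1 scaling point into numbers, here's a back-of-envelope sketch (assuming ideal linear scaling; real rigs lose a little to overhead):

```python
# Back-of-envelope: Octane path tracing scales roughly 1:1 with GPU count.
def estimated_render_minutes(single_gpu_minutes: float, gpu_count: int) -> float:
    """Ideal linear scaling; real-world overhead shaves a little off."""
    return single_gpu_minutes / gpu_count

# A frame that takes 60 minutes on one GPU should take ~15 on four.
print(estimated_render_minutes(60.0, 4))  # -> 15.0
```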

A look at Facebook’s New 360 Volumetric Video Cameras


Facebook unveils two new volumetric video 'Surround360' cameras, coming later this year. I'm going off to figure out why they qualify as 'volumetric'; I'll be right back.

OK, I got the answer, they are volumetric, here’s why.

Facebook today announced two new Surround360 cameras. These hardware initiatives are poised to make Facebook 360 videos more immersive. And they are volumetric because they can see depth!

Unveiled at the company’s yearly developer conference, F8, the so-called x24 and x6 cameras are said to capture 360 videos with added “depth information” giving captured video six degrees of freedom (6DoF).

This means you can not only look around (pitch, yaw and roll) like before, but now move your vantage point up/down, left/right and forwards/backwards while in a 360 video.

Even the best stereoscopic 360 video cannot move backward and forward, so the idea of small, robust cameras that can record volumetric data is exciting, especially when you're in the immersive worlds of the positionally tracked Oculus Rift, HTC Vive or PSVR.
