
Owlchemy Labs Teases New In-Engine Mixed Reality Tech

Owlchemy Labs, the studio known for the genre-defying game Job Simulator, has cooked up a new way of doing mixed reality that not only promises to be more realistic, but is also sure to grab the attention of VR streamers and content creators alike. The studio is calling it ‘Depth-based Realtime In-app Mixed Reality Compositing’. It sounds complex, but it appears to simplify the entire production pipeline.

Green screen VR setups have littered expos ever since Northway Games teased mixed reality integration in Fantastic Contraption earlier this year. The setup requires little more than a green sheet, an external camera, and a few other bits and bobs (Northway published a step-by-step guide), and the results are easy to see:

The video above, however, is the result of extensive polishing and post-production effects like rotoscoping to correctly occlude objects, making it appear that the player is in 3D space instead of flatly sandwiched between the foreground (the contraption) and the background (the virtual environment).

Image courtesy Owlchemy Labs

Owlchemy Labs recently teased a new in-engine method of putting you in the middle of the action, correctly occluded, that promises to eliminate post-production tools like Adobe After Effects and compositing software like OBS from the equation.

They do it by using a stereo depth camera, recording video and depth data simultaneously. The stereo data is then fed in real time into Unity using a custom plugin and a custom shader to cut out and depth-sort the user directly in the engine renderer. This method requires you to replace your simple webcam with a 3D camera like the ZED 2K stereo cam, a $500 dual RGB camera setup that importantly doesn’t use infrared sensors (like Kinect) which can screw with VR positional tracking. But if you’re pumping out mixed reality VR footage on the daily, then the time savings (and admittedly awesome-looking results) may be worth the initial investment.
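To make the core idea concrete: depth compositing boils down to a per-pixel comparison between the depth measured by the camera and the depth the engine rendered, keeping whichever surface is closer. The sketch below is not Owlchemy’s shader or plugin, just a minimal illustration of that comparison; the function name, array layout, and the assumption that the camera frame has already been reprojected and segmented are all hypothetical.

```python
import numpy as np

def composite_depth(scene_rgb, scene_depth, camera_rgb, camera_depth, player_mask):
    """Per-pixel depth compositing: the camera's cut-out of the player is drawn
    wherever it is closer to the viewer than the rendered scene geometry.

    scene_rgb    -- (H, W, 3) frame rendered by the engine
    scene_depth  -- (H, W) linear depth of that frame, in meters
    camera_rgb   -- (H, W, 3) video frame from the stereo camera, reprojected
                    into the engine camera's view
    camera_depth -- (H, W) depth measured by the stereo camera, in meters
    player_mask  -- (H, W) bool, True where the player was segmented out
    """
    # Use a camera pixel only where the player exists (mask) and is nearer
    # than whatever the engine drew at that pixel.
    player_in_front = player_mask & (camera_depth < scene_depth)
    out = scene_rgb.copy()
    out[player_in_front] = camera_rgb[player_in_front]
    return out

# Toy usage: a 2x2 "frame" where the player sits in front of the scene in one pixel.
scene_rgb = np.zeros((2, 2, 3), dtype=np.uint8)
scene_depth = np.full((2, 2), 2.0)
camera_rgb = np.full((2, 2, 3), 255, dtype=np.uint8)
camera_depth = np.array([[1.0, 3.0], [1.5, 3.0]])
player_mask = np.array([[True, True], [False, False]])
print(composite_depth(scene_rgb, scene_depth, camera_rgb, camera_depth, player_mask))
```

Doing this inside the engine renderer is exactly what removes the need for rotoscoping: occlusion falls out of the depth comparison instead of being painted in afterwards.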

[gfycat data_id="HotHardtofindArmyant"]

Owlchemy says you’ll be able to capture footage with either static or full-motion tracked cameras, and do it all from a single computer. Because the method doesn’t actually require a VR headset or controllers, you can technically capture a VR scene with multiple, non-tracked users.

“Developing this pipeline was a large technical challenge as we encountered many potentially show-stopping problems, such as wrangling the process of getting 1080p video with depth data into Unity at 30fps without impacting performance such that the user in VR can still hit 90FPS in their HMD,” writes Owlchemy. “Additionally, calibrating the camera/video was a deeply complicated issue, as was syncing the depth feed and the engine renderer such that they align properly for the final result. After significant research and engineering we were able to solve these problems and the result is definitely worth the deep dive.”
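The performance problem Owlchemy describes is essentially a producer/consumer mismatch: the camera delivers frames at 30fps while the headset must render at 90fps, so the video ingest can’t be allowed to block the render thread. The sketch below is a generic version of that decoupling, not Owlchemy’s actual plugin; `grab_frame` and `composite` are placeholder callables standing in for the camera SDK and the in-engine compositing step.

```python
import threading
import time

class LatestFrameSlot:
    """Single-slot buffer: the capture thread overwrites it at camera rate,
    and the render loop reads whatever is newest without ever waiting.
    Stale frames are simply reused, since the HMD renders roughly three
    times faster than the 30fps camera feed."""

    def __init__(self):
        self._lock = threading.Lock()
        self._frame = None

    def push(self, frame):
        with self._lock:
            self._frame = frame

    def latest(self):
        with self._lock:
            return self._frame

def capture_loop(slot, grab_frame, fps=30, stop=lambda: False):
    # Camera thread: grab_frame is a stand-in for the stereo camera SDK call
    # that returns an RGB + depth pair.
    while not stop():
        slot.push(grab_frame())
        time.sleep(1.0 / fps)

def render_loop(slot, composite, fps=90, stop=lambda: False):
    # Render thread: never blocks on the camera; composites with whichever
    # frame arrived most recently (or skips the video layer if none yet).
    while not stop():
        composite(slot.latest())
        time.sleep(1.0 / fps)
```

The calibration and synchronization issues Owlchemy mentions sit on top of this: even with the feeds decoupled, the video and the engine render still have to be aligned in space and time before the depth comparison produces a clean result.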

The studio says it still needs more time to complete the project, but they “have plans in the works to be able to eventually share some of our tech outside the walls of Owlchemy Labs.” We’ll be following the studio’s progress to see just how far-reaching it becomes.
