Google's newly announced Seurat rendering tech purportedly makes use of 'surface light-fields' to turn high-quality CGI film assets into detailed virtual environments that can run on mobile VR hardware. The company gave Seurat to ILMxLab, the immersive entertainment division of Industrial Light & Magic, to see what they could do with it using assets directly from Star Wars.

Google just announced Seurat this week, a new rendering technology which could be a graphical breakthrough for mobile VR. Here's what we know about how it works so far: Google says Seurat makes use of something called surface light-fields, a process which involves taking the original ultra-high quality assets, defining a viewing area for the player, then sampling possible perspectives within that area to determine everything that could possibly be seen from within it. The high-quality assets are then reduced to a significantly smaller number of polygons—few enough that the scene can run on mobile VR hardware—while maintaining the look of the original assets, including perspective-correct specular lighting.

As a proof of concept, Google teamed with ILMxLab to show what Seurat could do. In the video above, xLab says they took their cinema-quality CGI renders—those which would normally take a long time to render for each individual frame of final movie output—and ran them through Seurat so they could play back in real-time on Google's mobile VR hardware. You can see the teaser video heading this article.

https://gfycat.com/InfatuatedBruisedJohndory

"When xLab was approached by Google, they said that they could take our ILM renders and make them run in real-time on the VR phone... turns out it's true," said Lewey Geselowitz, Senior UX Engineer at ILM.

Star Wars Seurat Preview

I got to see the Star Wars Seurat-rendered experience teased in the video above for myself, running on a prototype version of Google's standalone Daydream headset. When I put on the headset I was dropped into the same hangar scene shown in the video. And while there's no replacing the true high-quality ray-traced output that comes from the cinematic rendering process (which can take hours for each frame), this was certainly some of the best graphics I've ever seen running on mobile VR hardware. In addition to sharp, highly detailed models, the floor had dynamic specular reflections, evoking the same sort of lighting you would expect from some of the best real-time visuals running on high-end PC headsets.

https://gfycat.com/HomelyMajesticGroundbeetle

What's particularly magical about Seurat is that—unlike a simple 360 video render—the scene you're looking at is truly volumetric, and properly stereoscopic no matter where you look. That means that when you move your head back and forth, you'll get proper positional tracking and see parallax, just as you'd expect from high-end desktop VR content. And because Google's standalone headset has inside-out tracking, I was literally able to walk around the scene in a room-scale area, with a properly viewable area that extended all the way from the floor to above my head. I've seen a number of other light-field approaches running on VR hardware, and typically the actual viewing area is much smaller, often just a small box around your head (and when you exit that area the scene is no longer rendered correctly).
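Google hasn't published the details of how its bake works, but the basic idea it describes (sampling many candidate viewpoints inside a defined viewing volume, then keeping only the detail visible from those viewpoints) can be sketched roughly as below. To be clear, this is an illustrative guess rather than Seurat's actual pipeline, and every function name and parameter in it is hypothetical.

```python
# Hypothetical illustration of the viewpoint-sampling idea behind a
# "surface light-field" style bake. This is NOT Google's Seurat code;
# all names and parameters are invented for illustration.
import itertools

def sample_viewpoints(center, half_extent, samples_per_axis=4):
    """Return a grid of candidate camera positions inside a box-shaped
    viewing volume centered at `center`, extending `half_extent` meters
    in each direction."""
    offsets = [-half_extent + 2.0 * half_extent * i / (samples_per_axis - 1)
               for i in range(samples_per_axis)]
    return [(center[0] + dx, center[1] + dy, center[2] + dz)
            for dx, dy, dz in itertools.product(offsets, repeat=3)]

# Example: a roughly room-scale viewing box (2 m across) at head height.
viewpoints = sample_viewpoints(center=(0.0, 1.6, 0.0), half_extent=1.0)
print(len(viewpoints))  # 64 candidate viewpoints to render against the full-quality assets

# In a bake like the one the article describes, each viewpoint would be
# rendered against the original film-quality assets, and only surfaces
# visible from at least one viewpoint would be kept and simplified into
# the low-polygon output scene. A larger viewing volume needs more samples
# and more storage, which is one reason baked viewing areas tend to be small.
```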
Those small viewing areas are typically constrained for two reasons: first, it can take a long time to render large areas; and second, large areas create huge file sizes that are difficult to manage and often impractical to distribute.

Google says that Seurat scenes, on the other hand, result in much smaller file sizes than other light-field techniques. So small, in fact, that the company says a mobile VR experience with many individual room-scale viewing areas could be distributed at a size similar to a typical mobile app.

Combining Real-time Elements for Interactive Gameplay

Of course, one challenge with synthetic light-field rendering approaches is that they're typically static (if you wanted to generate a series of light-field 'frames' you'd again run into issues with file size). One way around that is to combine the light-field scene with traditional real-time assets.

https://gfycat.com/NecessaryBelovedArieltoucan

The Star Wars Seurat scene demonstrated just that, using a real-time model of the Rogue One droid character K-2SO who was strolling about the scene; his real-time reflection was seamlessly composited against the shiny floor that was part of the Seurat scene. Because the model is running in real-time, I could see that he didn't look quite as shiny and high-quality as the environment around him (real-time elements are constrained by the usual limits of mobile graphics hardware). But the real-time nature means he can be interactive like in a traditional game, opening the door to real gameplay within these highly detailed Seurat environments.

Interestingly, Google also told me that the Seurat-generated environment consists of real geometry. That makes it easier for developers to integrate the environment with real-time assets. For instance, if a developer wanted to let me shoot the robot and send sparks flying off of him in the process, those sparks could correctly bounce off the floor, and indeed, K-2SO's robot corpse could correctly fall onto it.

Adjustable Fidelity

Google has offered only a limited technical explanation of how Seurat actually works, and hasn't said if or when it will offer the tool publicly, but it appears to be a rapidly maturing tool. Google's VP of VR and AR, Clay Bavor, told me that the output can range in visual fidelity depending upon how much overhead the developer wants to retain for real-time interactive elements. That option, he said, is about as simple as a set of sliders. So if a developer simply wants to offer a pretty environment for the player to look around, they would tell Seurat to render at the maximum fidelity the mobile VR device can handle. But if the developer wanted to retain 50% of the processing power for real-time assets so that the environment can contain interactive characters and other dynamic content, they can notch down the Seurat output quality to leave headroom for those extra elements.

Although Seurat requires developers to define specific viewing areas—unlike a normal real-time environment where the viewer could choose to go anywhere—there's a category of games that could make excellent use of Seurat to bring desktop-quality graphics to mobile VR: narrative and character-focused pieces, turret/static/wave shooters, point-and-click adventures, and more.
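To make that workflow a little more concrete, here's a purely speculative sketch of what the authoring input for a Seurat-style bake might boil down to: a handful of designer-defined viewing areas plus a fidelity budget that leaves headroom for real-time elements. Google hasn't published Seurat's actual interface, so every name and field below is invented.

```python
# Speculative sketch of authoring input for a Seurat-style bake.
# All field names are invented; Google has not published Seurat's interface.
from dataclasses import dataclass, field

@dataclass
class ViewingArea:
    name: str
    center: tuple        # meters, in scene coordinates
    half_extent: float   # half-width of the box the player can move within

@dataclass
class BakeSettings:
    viewing_areas: list
    # Fraction of the frame budget reserved for real-time content
    # (characters, effects); the rest goes to baked environment detail.
    realtime_budget: float = 0.5

settings = BakeSettings(
    viewing_areas=[
        ViewingArea("hangar_floor", center=(0.0, 1.6, 0.0), half_extent=1.0),
        ViewingArea("catwalk",      center=(4.0, 4.5, 2.0), half_extent=1.0),
    ],
    realtime_budget=0.5,  # retain ~50% of GPU time for interactive elements
)
```

Whatever the real tool looks like, the point is that the two authoring decisions described above (where the player can stand, and how much processing power to reserve for real-time elements) are small, declarative inputs rather than a rework of the original assets.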
We'll have to wait and see what developers can do with Seurat, and what challenges have yet to surface, but for now the technology's potential seems very high, and I'm excited to see how it can be used to improve the mobile VR experience.