Preview: ILM Uses ‘Star Wars’ Assets to Show Potential of Google’s ‘Seurat’ VR Rendering Technology

Combining Real-time Elements for Interactive Gameplay

Of course, one challenge with synthetic light-field rendering approaches is that they’re typically static (because if you wanted to generate a series of light-field ‘frames’ you’d again run into issues with file size). One way to get around that limitation is to combine the light-field scene with traditional real-time assets.
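
To make the idea concrete, here’s a minimal sketch (plain C++, not Google’s actual tooling or API; every type and function name below is hypothetical) of how a frame loop might treat the two kinds of content: the baked environment is simply replayed each frame with its lighting already computed offline, while dynamic actors pay the full real-time cost.

    #include <cstdio>

    // Hypothetical sketch (not Seurat's actual API): a baked environment is
    // replayed as-is each frame, while dynamic actors are shaded in real time.
    struct BakedEnvironment {
        // Geometry and textures with lighting already baked in; nothing to compute.
        void draw() const { std::printf("draw pre-baked environment (no runtime lighting)\n"); }
    };

    struct DynamicActor {
        float x, y, z;
        void update(float dt) { x += 0.5f * dt; }   // e.g. a character walking
        void draw() const {
            std::printf("shade actor in real time at (%.2f, %.2f, %.2f)\n", x, y, z);
        }
    };

    int main() {
        BakedEnvironment scene;                 // static, high-detail, cheap to display
        DynamicActor droid{0.0f, 0.0f, 0.0f};   // interactive, pays full real-time cost

        const float dt = 1.0f / 60.0f;
        for (int frame = 0; frame < 3; ++frame) {
            droid.update(dt);
            scene.draw();   // nearly free: lighting was computed offline
            droid.draw();   // constrained by the mobile GPU budget
        }
        return 0;
    }

The point of the sketch is the asymmetry of cost: the environment’s expense was paid ahead of time, so only the dynamic elements compete for the mobile GPU at runtime.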

The Star Wars Seurat scene demonstrated just that, using a real-time model of the Rogue One droid K-2SO, who strolled about the scene while his real-time reflection was seamlessly composited against the shiny floor that was part of the Seurat scene. Because the model runs in real time, and real-time elements are constrained by the usual limits of mobile graphics hardware, he didn’t look quite as shiny and high-quality as the environment around him. But that real-time nature means he can be interactive, just like in a traditional game, opening the door to creating real gameplay within these highly detailed Seurat environments.
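
The article doesn’t detail how ILM composited K-2SO’s reflection, but a common technique for a flat, shiny floor is to mirror the dynamic object across the floor plane and render the mirrored copy blended against the floor surface. A rough sketch of that idea (the planar-mirror approach and the names here are assumptions, not a description of ILM’s implementation):

    #include <cstdio>

    struct Vec3 { float x, y, z; };

    // Reflect a point across the horizontal plane y = floorHeight.
    Vec3 reflectAcrossFloor(Vec3 p, float floorHeight) {
        return { p.x, 2.0f * floorHeight - p.y, p.z };
    }

    int main() {
        const float floorHeight = 0.0f;
        Vec3 droidHead{1.0f, 1.8f, 2.0f};   // a point on the real-time model
        Vec3 mirrored = reflectAcrossFloor(droidHead, floorHeight);
        // The mirrored copy would be drawn and blended against the baked floor.
        std::printf("original (%.1f, %.1f, %.1f) -> reflection (%.1f, %.1f, %.1f)\n",
                    droidHead.x, droidHead.y, droidHead.z,
                    mirrored.x, mirrored.y, mirrored.z);
        return 0;
    }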

Interestingly, Google also told me that the Seurat process outputs real geometry, which makes it easier for developers to integrate real-time assets with the environment. For instance, if a developer wanted to let me shoot the robot and make sparks fly off of him in the process, those sparks could correctly bounce off the floor, and K-2SO’s robot corpse could correctly fall onto it.
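
As an illustration of why real geometry matters, the sketch below (hypothetical names and numbers, not Google’s tooling) bounces a spark particle off a floor plane with a standard collision response; with actual exported geometry, the real-time physics system has a surface to test against.

    #include <cstdio>

    struct Vec3 { float x, y, z; };

    int main() {
        const float floorY = 0.0f;
        const float restitution = 0.4f;     // sparks lose energy on each impact
        const float gravity = -9.8f;
        const float dt = 1.0f / 60.0f;

        Vec3 pos{0.0f, 0.5f, 0.0f};
        Vec3 vel{1.0f, 2.0f, 0.0f};

        for (int step = 0; step < 120; ++step) {
            vel.y += gravity * dt;
            pos.x += vel.x * dt;
            pos.y += vel.y * dt;
            pos.z += vel.z * dt;

            // Collision against the environment's real floor geometry.
            if (pos.y < floorY && vel.y < 0.0f) {
                pos.y = floorY;
                vel.y = -vel.y * restitution;   // reflect velocity about the floor normal
            }
        }
        std::printf("spark settled near (%.2f, %.2f, %.2f)\n", pos.x, pos.y, pos.z);
        return 0;
    }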

Adjustable Fidelity

Although Google has offered only a limited technical explanation of how Seurat actually works, and hasn’t said if or when it will offer the tool publicly, Seurat seems to be maturing rapidly. Google’s VP of VR and AR, Clay Bavor, told me that the output can range in visual fidelity depending upon how much overhead the developer wants to retain for real-time interactive elements. That choice, he said, is about as simple as adjusting a set of sliders.

So if a developer simply wants to offer a pretty environment for the player to look around, they would tell Seurat to render at the maximum fidelity the mobile VR device can handle. But if the developer wants to retain 50% of the processing power for real-time assets, so that the environment can contain interactive characters and other dynamic content, they can notch down the Seurat output quality to leave headroom for those extra elements.
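
A toy illustration of that trade-off (the tiers, numbers, and function here are invented for the example, not part of Seurat): reserve a share of the frame budget for real-time content and pick an environment fidelity tier from what remains.

    #include <cstdio>
    #include <string>

    // Hypothetical: the more frame budget reserved for real-time gameplay
    // elements, the lower the fidelity target chosen for the baked environment.
    std::string pickEnvironmentTier(float frameBudgetMs, float realtimeShare) {
        float environmentBudgetMs = frameBudgetMs * (1.0f - realtimeShare);
        if (environmentBudgetMs > 9.0f) return "maximum fidelity";
        if (environmentBudgetMs > 5.0f) return "medium fidelity";
        return "reduced fidelity";
    }

    int main() {
        const float frameBudgetMs = 1000.0f / 72.0f;   // e.g. a 72 Hz mobile headset

        // A look-around-only experience: nearly everything goes to the environment.
        std::printf("5%% real-time  -> %s\n",
                    pickEnvironmentTier(frameBudgetMs, 0.05f).c_str());

        // Interactive characters and effects: hold back half the budget, as in the article.
        std::printf("50%% real-time -> %s\n",
                    pickEnvironmentTier(frameBudgetMs, 0.50f).c_str());
        return 0;
    }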

– – — – –

Although Seurat requires developers to define specific viewing areas—unlike a normal real-time environment, where the viewer could choose to go anywhere—it seems like there’s a category of games that could make excellent use of Seurat to bring desktop-quality graphics to mobile VR: narrative/character-focused pieces, turret/static/wave shooters, point-and-click adventures, and more. We’ll have to wait and see what developers can do with Seurat, and what challenges have yet to be identified, but for now the potential of the technology seems very high and I’m excited to see how it can be used to improve the mobile VR experience.

  • Mermado 1936

    What I don’t understand is why this is not possible in a shooter game… does someone know?

    • J.C.

      By the time a dev lowered the environment visuals enough to allow multiple targets, YOUR weaponry, and all interactions/effects, it probably wouldn’t look much better.

      Probably. I have no doubt someone WILL make a shooter with this tech, so I guess we’ll just have to wait and see.

      • Lucidfeuer

        This. Also, this is effectively the new equivalent of the 3D isometric graphics of the first FPS games.

    • Buddydudeguy

      What isn’t possible? The question doesn’t make sense.

    • kool

      It only renders what the camera can see. You can’t see the other side of the objects or move too far off the axis.

  • Foreign Devil

    As a graphics snob… this is very hopeful for mobile VR. I thought mobile would be relegated to blocky, decade-old graphics.

    • Mei Ling

      It goes two ways really: processing power, or funky new paradigm shifts in software engineering.

  • Lucidfeuer

    Still waiting for an actual explanation.

    How is this different from Otoy’s ORBX? If I understand correctly, rather than producing assets from every possible angle, it simply adjusts the angle of the assets while texture maps do the light/reflection work?

    I’ve been waiting for something like this mixed with real-time objects, and while I understand how it can work for flat surfaces, I don’t understand how it can work for round/convoluted objects in the space, like those barrels and the bridge in the dock scene, unless they’re 3D objects too.

    • yexi

      As I understand it, you need to define a camera path, and all the assets’ lighting will be precalculated for that particular angle, and for other key angles near it.

      Then an algorithm will be able to adapt a little (but not a lot), so you can surely walk a little off the path, but not more than a couple of steps… that’s why you can’t make an FPS level with this. Only cinematic/panorama experiences/…

      In short, it’s like a dynamic 3D 360 video in perfect quality.

      • Lucidfeuer

        Yes, it’s like a light-field stereocube, but they managed to do it in an optimised, lightweight way, which means you can further extend the zone(s) in which you can move.

        Technically you can indeed choose to work with a whole-room free-roam zone, although that would be pointless for narrative and cinematographic intent.

        The real question is what tooling they plan to provide so it doesn’t end up as demoware (a politer term for vaporware, where only a handful of companies with direct access get to use it for demonstration purposes before it falls into oblivion because nobody else can actually use it).

  • yexi

    In fact, you can still make an unconventional FPS using this.
    For example, you can make some hiding spots, and you can only teleport from one to another, so you can pre-render each spot using this technology.

    Add some enemy waves (Space Pirate, Serious Sam, Holopoint, …), and you have the most beautiful FPS in the world.

  • Um… sounds like they are baking their lighting and reflections, which is something you can already do in Unreal Engine 4. Actually, it’s something you HAVE to do, as nice reflections won’t work any other way on mobile.

    • Joel Wilkinson

      I could be wrong, but it seems like they’re baking a lot more than just lighting and reflections. They’re baking the whole geometry. Their other article about this, here: http://www.roadtovr.com/googles-seurat-surface-light-field-tech-graphical-breakthrough-mobile-vr/, has a gif where it takes a perspective and bakes only the geometry it needs, making the end result look like a facade. Makes it seem like this technique is probably only useful for backdrops.