Google announced Seurat at last year’s I/O developer conference, offering a brief glimpse of the new rendering technology designed to reduce the complexity of ultra high-quality CGI assets so they can run in real time on mobile processors. Now the company is open-sourcing Seurat so developers can customize the tool and use it for their own mobile VR projects.

“Seurat works by taking advantage of the fact that VR scenes are typically viewed from within a limited viewing region, and leverages this to optimize the geometry and textures in your scene,” Google Software Engineer Manfred Ernst explains in a developer blogpost. “It takes RGBD images (color and depth) as input and generates a textured mesh, targeting a configurable number of triangles, texture size, and fill rate, to simplify scenes beyond what traditional methods can achieve.”
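
To make that description concrete, here is a minimal, purely illustrative sketch of the input/output contract the quote describes: color/depth captures paired with camera positions go in, along with a budget for the simplified output. The field names, types, and manifest layout below are hypothetical and are not Seurat’s actual file format or API; see the project’s documentation for the real thing.

```python
import json
from dataclasses import dataclass, asdict
from typing import List

# Purely illustrative data structures for the workflow described above: RGBD
# (color + depth) captures in, a triangle/texture/fill-rate budget for the
# simplified output. These names are NOT Seurat's real schema or API; consult
# the project's documentation on GitHub for the actual input format.

@dataclass
class RgbdCapture:
    color_image: str               # path to a rendered RGB color image
    depth_image: str               # path to the matching depth render
    eye_position: List[float]      # where inside the viewing region this sample was taken

@dataclass
class OutputBudget:
    triangle_count: int            # target triangle count for the simplified mesh
    texture_size: int              # output texture atlas dimension, in pixels
    fill_rate: float               # rough bound on per-pixel overdraw the mesh may cost

def write_manifest(captures, budget, path="seurat_input_example.json"):
    """Bundle the captures and the budget into one JSON file (illustration only)."""
    with open(path, "w") as f:
        json.dump({"captures": [asdict(c) for c in captures],
                   "budget": asdict(budget)}, f, indent=2)

if __name__ == "__main__":
    captures = [RgbdCapture("color_000.png", "depth_000.png", [0.0, 1.6, 0.0])]
    budget = OutputBudget(triangle_count=100_000, texture_size=4096, fill_rate=4.0)  # example values
    write_manifest(captures, budget)
```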

Blade Runner: Revelations, which launched last week alongside the Lenovo Mirage Solo, Google’s first 6DOF Daydream headset, puts Seurat to pretty impressive effect. Developer studio Seismic Games used the rendering tech to bring a scene of 46.6 million triangles down to just 307,000, “improving performance by more than 100x with almost no loss in visual quality,” Google says.
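
For a sense of scale, that triangle reduction works out to roughly 150x, consistent with the “more than 100x” claim; the GPU frame times quoted by commenters below (3.9 ms versus 475.8 ms) convert to frame rates the same way:

```python
# Quick arithmetic on the figures quoted above and in the comments below.
original_triangles = 46_600_000
seurat_triangles = 307_000
print(f"triangle reduction: {original_triangles / seurat_triangles:.0f}x")   # ~152x

# GPU frame times quoted by a commenter: Seurat vs. the original scene.
for label, frame_ms in (("Seurat", 3.9), ("original", 475.8)):
    print(f"{label}: {frame_ms} ms/frame -> {1000 / frame_ms:.1f} FPS")      # ~256 vs ~2.1
```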

Here’s a quick clip of the finished scene:

To accomplish this, Seurat uses what the company calls ‘surface light-fields’, a process which involves taking the original ultra high-quality assets, defining a viewing area for the player, and then sampling possible perspectives within that area to determine everything that could possibly be seen from inside it.
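
A minimal sketch of that sampling idea, under the assumption of a simple box-shaped viewing region (conceptual only, not Seurat’s actual sampler): scatter candidate eye positions through the defined area, and whatever could be seen from any of them must survive into the simplified scene.

```python
import random

# Conceptual sketch of the sampling step described above: pick many candidate eye
# positions inside the limited viewing region (a 'headbox'), and keep whatever
# geometry is visible from at least one of them. This is not Seurat's actual
# algorithm, just the idea in miniature.

def sample_headbox(center, size=1.0, count=16, seed=42):
    """Return `count` candidate eye positions inside a cube of edge `size` around `center`."""
    rng = random.Random(seed)
    half = size / 2.0
    return [tuple(c + rng.uniform(-half, half) for c in center) for _ in range(count)]

if __name__ == "__main__":
    # e.g. roughly a one-cubic-meter viewing area centered at seated head height
    for eye in sample_headbox(center=(0.0, 1.6, 0.0), size=1.0, count=4):
        print("candidate viewpoint:", tuple(round(v, 3) for v in eye))
```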

This is largely useful for developers looking to create 6DOF experiences on mobile hardware, since the user can view the scene from multiple perspectives. Another major benefit, the company said last year, is the ability to add perspective-correct specular lighting, which adds a level of realism usually considered impossible within a mobile processor’s modest compute budget.

Google has now released Seurat on GitHub, including documentation and source code for prospective developers.

Below you can see an image of the scene with and without Seurat:

  • Mei Ling

    The second image is somewhat sharper and cleaner than the one with the technology applied.

    • Frank Taylor

      Right, it appears they sacrifice anti-aliasing on a large percentage of the higher polygon models in the scene.

    • Lucidfeuer

      Or they inverted the two. The point of Seurat is to get the second result. I think OP or the person at Google made a mistake.

      • B

        No, the point is to increase frame rate on low end hardware. Second result takes around 150ms per frame, first takes 1.7ms. That’s actually seriously impressive for only a small drop in visual quality…

        • Lucidfeuer

          Never seen the point of Seurat being stated as “reducing framerate”. In fact the visual results from the 1st and 2nd shot are so far apart that it’d be a moot comparison, like “so yeah it’s crappier, more pixelated, less qualitative, but it runs faster…” duh.

          • Andrew Jakobs

            255 fps ‘low quality’ vs 2 fps ‘high quality’, so 60/70 fps would be a bit above medium quality and alleviate most of the pixelation… quite a feat IMHO if that’s what it takes to get decent VR to mobile.

        • Timothy Bank

          I remember playing Descent on a fast PC back in 1996. High frame rate even at low quality made for a really good experience. The quality loss is minimal and the performance gain is huge. This would really change things.

        • Marcus

          1.7 ms vs 152 ms was CPU. For GPU it is 3.9 ms vs 475.8 ms, resulting in 255.9 frames per second vs 2.1 FPS in the original. Which is even more impressive.

      • Nope, and actually this is not a good scene to apply Seurat to, with such luminescent color contrasts. The ILXM demo is a much better example. It should also be noted that the process is only good for capturing static objects, so the videos showing animated signs had to be added afterwards.

    • Personally I am impressed, though this to me was not the best example to apply the process to. However, I think it is an ingenious method: using something like photogrammetry to rebuild the scene into a low-polygon model with a UV-mapped texture. Or at least that is what I am getting out of the process using the tools now available from Github. I can see its benefits in desktop use as well, especially with large land masses.

  • Till Eulenspiegel

    This looks like a new type of Projection Mapping.

  • Ian Shook

    Does this surface light field allow for reflective and refractive changes in the scene? Otherwise it just seems like a light-baked asset.

  • Can anyone confirm if this is useless for VR experiences that allow any form of locomotion? If the tech leverages the fact that “scenes are typically viewed from within a limited viewing region” then it wouldn’t allow for any player exploration?

    • Marcus

      I guess it’s just more triangles (and thus fewer FPS) for more freedom. And maybe you need a special app/game design to switch between scenes without the user/player noticing (like shared areas with small complexity or the need for teleportation between scenes like Budget Cuts does AFAIK).

      But probably it’s best for games without artificial locomotion like Superhot …

      • Thanks. That makes sense. Isn’t that always the case with VR development? It’s all about trade-offs :)

  • Jerald Doerr

    Lower polygons on the 1st image = screwed-up shadows, but at 100× faster I could work with that! It will be interesting to see how well it works and how much perspective you can use, or if it’s just used for pushing in and out??

  • DaKangaroo

    Congrats, you invented polygon culling.

    • Mac

      And a nice automated tool to create projected textures on parallaxing planes. Looks useful for certain types of projects.

    • It’s a bit more involved than culling polygons. It takes cubemap renders as input (RGB + depth). No geometry goes into the system, but it spits out low-poly geometry and projective textures to map onto that geometry. When viewed from a specific 1 cubic meter area the scene looks just about exactly as it would’ve if rendered using the original geometry/material assets, but in <2ms on a mobile GPU, making it usable for mobile VR. Again, the limitation is the view-area. You could conceivably chain together a bunch of these 'headboxes' and transition between them to allow a viewer to navigate around a scene somewhat (a rough sketch of that idea follows the thread below), but who knows how much memory it would take to store all the different geometry and textures.

  • Sadly the tools on Github were not tested thoroughly on multiple platforms: you can’t even build the main command-line tool on Windows, and the Unreal plug-in is source code only, no binary. Even one person who was able to successfully build it on Linux (Ubuntu) was not able to get it to run. However, it seems the Unity tool is an actual project, so you will be able to perform step 1 (create RGBD cubemaps) and step 3 (import the Seurat mesh/texture files from step 2) of the three-step process. Sadly the main pipeline (step 2) is a problem. I plan to pull out my MINT laptop and see if I can run it there. Won’t do me much good, because my VASMRE app “Breath of a Forest” was done in Unreal, and I would have liked to stay in Unreal instead of doing a mass export and recreating a ton of materials and shaders for Unity. :( https://uploads.disquscdn.com/images/618ea0d5cbb1f34c3a749f50b9f0902af9a635de64d19b4b25962e22dc026cb0.png

  • I guess there are a lot of guesses about what is actually being done, but from my limited time with the tools: you capture a number of random RGBD (color & depth map image) cubemaps in your scene and then process them to create a new low-poly mesh based on these camera shots, as well as a new UV-mapped texture to wrap your new mesh. It seems more like a photogrammetry technique than projection mapping. Sadly, I am only going by the Github instructions and have yet to build the main pipeline tool to try it myself or find out how long it takes to process. It is also sad that the plug-in for Unreal is useless unless you compile it yourself, so I can’t even test the capture process in Unreal immediately (or find out how long it takes). There are Python scripts for Maya, but I don’t use Maya, and most of the look of my project was done in Unreal. It should also be noted that many of the global effects need to be turned off, since you can only capture one ray per camera. But there is no reason you can’t apply them once you re-import the Seurat mesh/texture asset. Hopefully I will be able to actually do steps 1, 2, and 3 by Friday morning for a “3D Tech Closet” segment.

  • beestee

    Google should partner with Epic to bring the Showdown demo to mobile with this. Epic could also use it as training material for Unreal Engine. It is hard to understand what the benefits and limitations are without being able to do in-VR side-by-side comparisons of the end results.
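
Picking up on the ‘headbox chaining’ idea raised in the thread above, here is a rough sketch of how an application might pick which pre-baked Seurat mesh to display based on the viewer’s current position. The headbox names, positions, and helper function are hypothetical; Seurat itself does not ship such a runtime, and a real app would also have to handle loading the per-box geometry and textures the commenters mention.

```python
import math

# Hypothetical runtime helper for the 'chained headboxes' idea from the thread:
# each headbox has its own pre-baked Seurat mesh, and the app displays whichever
# box the viewer is currently closest to. Names and positions are made up.

HEADBOXES = {
    "lobby":   (0.0, 1.6, 0.0),
    "hallway": (4.0, 1.6, 0.0),
    "office":  (8.0, 1.6, 2.0),
}

def active_headbox(viewer_position):
    """Return the name of the headbox whose center is nearest to the viewer."""
    return min(HEADBOXES, key=lambda name: math.dist(viewer_position, HEADBOXES[name]))

if __name__ == "__main__":
    for position in [(0.2, 1.6, 0.1), (3.5, 1.7, 0.3), (7.0, 1.6, 1.5)]:
        print(position, "->", active_headbox(position))
```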