Serious Simulations, a provider of virtual reality military training solutions, say they may have an answer to the tethered desktop VR conundrum. The company claims their wireless streaming solution can deliver video at a latency low enough to be suitable for VR.

Serious Simulations are a young company, like many now entering the VR space. Started some 18 months ago by founder and CEO Chris Chambers, the company aims to capture a slice of the professional VR market with bespoke immersive systems for clients such as the military. In doing so, however, they found that the available high-end VR systems relied on a tethered-to-PC cable, a constraint that didn’t fit their requirements.

So they went away and designed a system which can stream HD video to a VR display at latencies they claim are low enough to replace the (currently unavoidable) cabling.

The prototype system Serious was demonstrating at the Vision VR/AR Summit last month uses the company’s proprietary video reformatting process to deliver the first pixel of every new frame from computer to HMD in 17 microseconds over a 60GHz wireless link. The solution requires an additional unit, worn on the user’s person in close proximity to the HMD. This extra box receives the image data wirelessly, then ‘reformats’ the image for delivery over a local HDMI/MIPI interface direct to the HMD’s display.

Serious claim that other wireless video transmission solutions utilising the new 60GHz IEEE 802.11ad protocol require full-frame buffering on the HMD side, adding 17-22 milliseconds of delay. According to Chambers “until we invented this video formatting procedure and associated hardware, there was at least a 17-22 MILLISECOND PENALTY for manufacturers to go from wires to wireless for their VR displays. That is additional latency on top of their existing pipeline. This explains why no manufacturers are spending resources on true wireless video solutions, since their latency pipelines are already too long.”

By contrast, Serious Simulations’ claimed 17 microseconds for a single 1080p 60Hz feed led them to dub their system ‘zero frame’ latency.
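
As a rough sanity check of those numbers (our arithmetic, not the company’s): at 60Hz a full frame takes about 16.7 milliseconds to scan out, which lines up with the lower end of the quoted 17-22ms full-frame buffering penalty, while a single row of a 1080-line frame accounts for roughly 15 microseconds, in the same ballpark as the claimed 17 microsecond figure.

```python
# Back-of-the-envelope check of the latency figures above.
# Assumptions (ours, not Serious Simulations'): buffering delay scales with
# scanout time, 1080 active rows, 60 Hz refresh, blanking intervals ignored.

REFRESH_HZ = 60
ROWS = 1080

frame_time_ms = 1000.0 / REFRESH_HZ          # full-frame buffer: ~16.7 ms
row_time_us = frame_time_ms * 1000.0 / ROWS  # single-row buffer: ~15.4 us

print(f"full-frame buffering adds ~{frame_time_ms:.1f} ms per frame")
print(f"single-row buffering adds ~{row_time_us:.1f} us per frame")
```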


Frank He went hands-on with an Oculus Rift DK2 packing Serious Simulations’ ‘Zero Frame’ wireless system and sat down to chat with Chambers at Unity’s Vision VR/AR Summit in California last month.

Update: Since this interview was conducted, Serious Simulations have been in touch to give us this update:

Since then Serious Simulations has begun production of our updated wireless VR processor (we have been calling it the “Gen 2 board”). It has a more powerful FPGA, an audio amp, a better layout of connectors to make it friendlier for HMD conversion, and power management functions. In addition, it lays the groundwork we needed for our next big leap forward in wireless VR, which is to support higher resolutions and higher refresh rates to stay in sync with the direction of the industry. This latter process is a huge step for us and is patent pending as of February 2016. We are building the prototype now.


Road to VR: Can you introduce yourself, and tell us a little about Serious Simulations’ origin and its focus?


Chambers: I’m Chris Chambers, CEO of Serious Simulations. I founded the company about a year and a half ago to develop professional training solutions using a VR approach, and full human motion and freedom of motion as the main interface. One of the things that we’ve found immediately was that there was not a wireless display option that was adequate for anything we wanted to do in professional training, so we had to develop a lot of that.

We developed our own HMD which has a very wide FOV and high resolution and we put that into a wireless package. The wireless package initially included frame rotation software which enables the link from TV industry wireless links, which are landscape mode, into the display, which is a portrait, or natively portrait device. But that frame rotation software added 17 milliseconds of additional latency. We found that to be unacceptable.

There was no [available] alternate solution so we had to invent a solution. And so we did. We created a video processor system that involves pixel shader code on the GPU that helps to prearrange the image but maintain the landscape mode of a prearranged kind of… They look scrambled on the screen you saw earlier, but it ships out as a landscape mode image, goes over the wireless in landscape and then we have a patented process on the headset itself in the form of a small printed circuit board and that process of descrambling the image and sending it to the screen takes approximately 17 microseconds because we’re only buffering one row of a 1080p display. That’s kind of it in a nutshell. It’s very very fast, mainly because of the buffering only being done at one row rather than one full frame of video.
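
The shader and descrambling hardware Chambers describes are patented and not public, but the principle can be illustrated with a toy sketch: if the GPU reorders pixels before transmission so that each transmitted landscape row is already the next line the portrait panel will scan, the receiver only ever needs one line of buffer, whereas a naive rotate on the receiver cannot finish the first portrait line until the whole landscape frame has arrived. Here a simple transpose stands in for the actual, unpublished reformatting:

```python
# Toy illustration (not Serious Simulations' actual process) of why
# prearranging pixels on the GPU removes the need for a full-frame buffer
# at the receiver. Assumed setup: the link carries 1920x1080 landscape rows,
# the panel scans 1080x1920 portrait lines.
import numpy as np

W, H = 1920, 1080
frame = np.arange(W * H, dtype=np.uint32).reshape(H, W)  # landscape frame from the renderer

# Naive approach: send the landscape frame as-is. The first portrait scan line
# is the frame's first *column*, whose last pixel only arrives with the final
# transmitted row, so the receiver must hold the entire frame (1080 rows).
naive_rows_buffered = H

# Prearranged approach: the GPU reorders the image (here, a plain transpose)
# so each transmitted row already *is* the next portrait scan line, and the
# receiver only ever holds one row at a time.
prearranged = frame.T                               # shape (1920, 1080)
assert np.array_equal(prearranged[0], frame[:, 0])  # first transmitted row == first scan line

print("rows buffered, naive rotate:", naive_rows_buffered)  # 1080
print("rows buffered, prearranged :", 1)
```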

Road to VR: With your solution, you sort of want to partner with other companies doing VR, is that right?

Chambers: Absolutely. That’s why we’re here. We’re announcing this innovation to the industry and although we’ve only embedded it in our own HMD, we’re a very small business and we are willing to license the IP, sell the IP, or whatever it takes to get this out to other manufacturers who’d like to have a zero latency wireless link.


Road to VR: So I’m assuming that Oculus, Sony, HTC, and, you know, any of the big players have come and… Checked you out?

Chambers: Yes, this has been a great venue for that and we’ve had some interest from all the parties we’ve desired to talk to here. It’s very good.

Road to VR: [Your current demo at the booth] is only working on an Oculus Rift DK2 right now, so for the consumer Oculus Rift, and HTC Vive, they have higher refresh rates and have a lot more pixels that need to be sent over. Will your solution be able to handle that?

Chambers: Well, we are scaling up the solution, and so our next generation, which we’re working on that’s coming out this year, has the ability to get to 90 Hz refresh rates. That involved a slightly different process that gave us more capacity to push that many pixels. And we filed a patent on that process last week.
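
To put the scaling challenge in perspective (our arithmetic, not the company’s, counting nothing more than pixels, 24 bits per pixel and refresh rate, ignoring blanking and compression): a single uncompressed 1080p stream at 60Hz already needs roughly 3Gbps, and two 1080p panels at 90Hz push that to around 9Gbps, well beyond what commodity 60GHz links of the time were typically quoted as delivering.

```python
# Raw (uncompressed) video bandwidth implied by a few HMD configurations.
# Our arithmetic only: pixels * 24 bits * refresh rate * panel count,
# ignoring blanking, protocol overhead and any compression.

def raw_gbps(width: int, height: int, hz: int, panels: int = 1) -> float:
    return width * height * 24 * hz * panels / 1e9

configs = {
    "1x 1080p @ 60 Hz (DK2-class demo)": (1920, 1080, 60, 1),
    "2x 1080p @ 60 Hz (dual-panel HMD)": (1920, 1080, 60, 2),
    "2x 1080p @ 90 Hz":                  (1920, 1080, 90, 2),
}
for name, args in configs.items():
    print(f"{name}: {raw_gbps(*args):.1f} Gbps")
```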


Road to VR: So hypothetically if you were to integrate that technology into one of the Oculus Rifts or HTC Vives, how much [cost] would be added to the base price of the headset itself?

Chambers: Yeah, it’s almost an impossible question to answer right now…

Road to VR: Just, like a rough ballpark.

Chambers: Well we know where we’d like to go to. We’d like to get down into the low hundreds of dollars so that this becomes consumable by… You know mass consumption. And we’re not there yet but it can be with quantity – it can be very affordable. As a small business we don’t have the quantities to justify the low price just yet.


Road to VR: And also, do you know about eye tracking technology?

Chambers: Yeah.

Road to VR: And foveated rendering perhaps?

Chambers: That you’re ahead of me on.

Road to VR: So foveated rendering, it’s this concept where with eye tracking, you know where the eye is looking and because our eyes only really see really high detail in the center – the fovea – we can render our peripheral vision at a much lower resolution. So I was thinking if you could have a really special data [format]… With only that really high resolution center and low resolution peripheral, you know that could reduce some of the bandwidth requirements right?

Chambers: Yeah.

Road to VR: So I would think that would help to lower the barrier to entry a bit. So I WAS going to ask if there was anyone who was working in that space who had talked to you, but I guess not!

Chambers: We haven’t yet, but it sounds like an interesting idea and certainly conceivable. I don’t see why it would be difficult to do.
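
For a sense of why foveated transmission is attractive from a bandwidth standpoint, a rough estimate with illustrative numbers (a 1920×1080 eye buffer, a 400×400 full-resolution foveal region, and the periphery sent at a quarter of the linear resolution) shows the per-eye pixel count dropping to well under a fifth of the original:

```python
# Rough estimate of the bandwidth saving foveated transmission could offer.
# All parameters are illustrative assumptions, not figures from the interview:
# 1920x1080 eye buffer, 400x400 full-resolution fovea, periphery sent at
# 1/4 linear resolution (1/16 of the pixels).

FULL_W, FULL_H = 1920, 1080
FOVEA_W, FOVEA_H = 400, 400
PERIPHERY_SCALE = 4  # linear downsample factor outside the fovea

full_pixels = FULL_W * FULL_H
fovea_pixels = FOVEA_W * FOVEA_H
periphery_pixels = (full_pixels - fovea_pixels) // PERIPHERY_SCALE ** 2

foveated_pixels = fovea_pixels + periphery_pixels
print(f"full frame:     {full_pixels:,} px")
print(f"foveated frame: {foveated_pixels:,} px "
      f"({foveated_pixels / full_pixels:.0%} of the original)")
```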

Road to VR: So there was one thing I was wondering, and perhaps this is because of my own lack of technical knowledge, but the way the Oculus Rift consumer version and HTC Vive work is that they have two separate screens, and they’re updated globally and not through a [rolling] shutter. Does that affect anything? Like you mentioned your solution was to solve that sort of landscape to portrait conversion. But what about those [headsets] that may not necessarily apply?

Chambers: Yeah I actually don’t know enough about it until we get into the proto discussions with those companies. I don’t see any impediments there. We have a path to much greater data throughput that gets us to higher resolutions and… We are already doing dual screen in our own HMD so we are pushing two streams of data simultaneously to two screens. So it all sounds entirely doable.

Road to VR: The two screens – there’s no difference in delay between the two eyes is there?

Chambers: No they’re perfectly synced. They’re coming out of the GPU – a dual output GPU – so they’re being rendered at the same time in one large image.

Another thing that’s interesting is that we’re moving forward with a new method that we can push more than just the video across our wireless link, that we’ll be able to include audio and command data. We can perform other functions on the HMD than just serving video. We’re going to combine all those streams of data and have that wirelessly synced to the HMD. Our whole goal is to not have any tethers.


Road to VR: What sort of range are you hoping for to be able to not have the signal degraded – like how far can you walk away from the transmitter?

Chambers: On the current 60 GHz wireless link, we tested it up to over 53 feet, so for the purposes we were looking for, that was plenty. So for most VR applications that’s sufficient. Longer than that, we’d have to do something different. But again that technology is just an off-the-shelf commercial wireless link from, you know, the DVDO type wireless links out there, so we’re living with that because that’s what’s out there. We have our own path to go beyond that, that we’re working on this year.
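
That sort of range is consistent with 60GHz being a short, line-of-sight technology. As a rough illustration (ours, using the standard free-space path loss formula rather than anything measured by Serious), signal loss over the ~53 feet Chambers mentions is roughly 20dB higher at 60GHz than it would be at 5GHz, which is why these links lean on directional antennas and clear sight lines:

```python
# Free-space path loss at 60 GHz vs 5 GHz over the ~53 ft range mentioned
# above -- a standard textbook formula, offered only to illustrate why
# 60 GHz links are short-range, line-of-sight affairs.
import math

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss, 20*log10(4*pi*d*f/c), in dB."""
    c = 3.0e8  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

d = 53 * 0.3048  # 53 feet in metres
print(f"60 GHz over {d:.1f} m: {fspl_db(d, 60e9):.1f} dB path loss")
print(f" 5 GHz over {d:.1f} m: {fspl_db(d, 5e9):.1f} dB path loss")
```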

Road to VR: So with the demo setup you have here, I saw some weird things, like to the peripheral vision…

Chambers: Well again, since on this particular version, we provided the wireless video, but I don’t have any access to the SDK, so we weren’t using their tracker and we’re not using their code to do you know any of the image distortion. So that work still has to be done. You’re seeing some anomalies in the image because we’ve gone around it. We’re not using the SDK. We don’t have any access to the SDK. That can all be fixed with access to the SDK.

Road to VR: So you actually have to integrate with-

Chambers: We have a little more work to do yes.

Road to VR: So you can’t just connect it to any video signal and have it work.

Chambers: It’s working to a degree but you see some anomalies like that. Mostly in the edges you see some odd colors/distortions. That’s just something we haven’t worked with yet.

Road to VR: How has the [reception] been at your booth?

Chambers: Most of them, they get the demo and they say “I get it, I see what you’re doing, that’s great.” And they’re convinced – like “wow.” VR is here to stay. So it’s pretty cool – the light goes on and they get it.


It’s early days yet for Serious Simulations’ application to consumer headsets, and to achieve that dream of a truly untethered high-end VR solution, integration with Oculus’ tight software-to-hardware stack seems as if it could be challenging, especially as Serious would be presenting a non-standard solution. However, technology using 60GHz wireless communication could be an extremely interesting option for the second generation of VR headsets, which will likely not be too far behind those launching this year.

You can grab more information on Serious Simulations and their technology by heading to their website here.

 




Based in the UK, Paul has been immersed in interactive entertainment for the best part of 27 years and has followed advances in gaming with a passionate fervour. His obsession with graphical fidelity over the years has had him branded a ‘graphics whore’ (which he views as the highest compliment) more than once and he holds a particular candle for the dream of the ultimate immersive gaming experience. Having followed and been disappointed by the original VR explosion of the 90s, he then founded RiftVR.com to follow the new and exciting prospect of the rebirth of VR in products like the Oculus Rift. Paul joined forces with Ben to help build the new Road to VR in preparation for what he sees as VR’s coming of age over the next few years.
  • Whoa. Why not put the hardware in a backpack?

    • Gerald Terveen

      because you have to wear that backpack for hours? that is the common solution right now, but that is hardly the best one.

      • I meant the wireless receiver.

        • Mateusz

          Yup. I’d rather put a Wi-fi receiver in my backpack or under my belt than have a 2.4 gigahertz (12cm) wave right next to my head for 12 hours straight (assuming I get addicted to VR :p)

          • Nothing like microwaves across the cranium.

  • Damon Curry

    A great question. I am happy to reply. I am the VP of Technology for Serious Simulations. We have a number of reasons why we think wireless video streaming is preferable to backpack or other man-wearable hardware. First, the requirements for the serious training systems we produce do not permit, really cannot permit, backpacks or similar additions. For example, in our combat soldier training system, ‘ready2train’, soldiers train in full combat gear, meaning that they wear military backpacks plus many other items of equipment and supply. To require those trainees to remove their real equipment in order to don some special “training” equipment actually reduces training realism and training effectiveness. Second, while CPUs and computer memory get smaller and faster, graphics cards grow in the other direction. I recall when graphics cards were, indeed, a card … then they grew to be a big card with a dedicated fan, then a multi-card assembly that might require two fans, and some need liquid cooling … clearly graphics cards are beasts consuming lots of power, taking up lots of space, and generating lots of heat. All of those issues block the inclusion of truly top-end graphics cards in backpack applications, generally speaking. Third, it is easy to upgrade a desktop computer, especially a high performance “gaming” computer, when new and more powerful graphics cards come available, but it is generally not easy and often not possible to do a similar upgrade to a backpack computer that had its primary design goal being small size. For these main reasons, Serious Simulations believes that wirelessly streaming video from a remote (non-tethered) very high performance and easily upgradeable computer is the best way to go for virtual reality and similar simulation-based training applications.

    • Ah, I meant the receiver. Wireless is clearly a big step forward, and a necessary one. I’m sure miniaturization of the components is inevitable, but as depicted the current size seems quite unwieldy for placement on the head. Upon reflection, a hip pack would probably be a better solution than a backpack.

      While I’ve got you on the line, what is the feasibility of dual drivers pushing 1080p to each eye?

    • Pobrecito hablador

      Hi, Damon,

      Do you use commercial 60ghz modems, or have you developed yourselves one with custom modulation and channel coding techniques?

      I thought commercial modems did not offer the bandwidth needed, without relying on highly lossy compression.

  • Mike Coker

    What kind of battery life does a system like this have? Regarding the backpack comment, it does seem like for the consumer version a different location than the back of the head could be chosen and still maintain portability. I don’t yet know the answer to the battery question, but I suspect it will take a bit of power so a “light backpack” or waist mounted device might allow for more battery. We could all wear Batman utility belts!

    I am excited to see some large player take this product and run with it. I really dislike being tethered to my PC.

  • Damon Curry

    Will, thanks for clarifying. We are indeed considering alternate ways to mount the wireless receiver(s) and batteries. We’ve built significant power management features into our control circuits, so we have a good bit of flexibility there. Considering just our military customers for the moment, most of them prefer a helmet-mounted arrangement over a belt pack of some sort, but other applications and users, especially consumers, want other options. So many options — so little time!!

  • Damon Curry

    Mike, you’re right to question battery life. I think many people too often neglect that detail. In general, our current designs provide at least 2 hours of uninterrupted battery life, and we always make our power management control circuits to manage two batteries with “Hot Swap” capability, remote sensing of battery charge status, remotely commanded power management, and other things. By the way, that’s 2 hours of uninterrupted use for our dual-display “Peripheral Vision Immersive Device” that has two 1080p display panels, one for each eye. We could get longer, even substantially longer, battery life by using larger batteries, or more batteries, or a single display panel HMD, plus more power-efficient components (which we are always on the lookout for).

  • Damon Curry

    Will, let me answer your follow up question about two 1080p display panels. That is in fact the configuration of Serious Simulations’ “Peripheral Vision Immersive Device” (PVID), which provides two 1080p displays, one per eye, producing very wide field of view plus binocular overlap. I don’t wish to make a posting here resemble a marketing piece, but that’s our quick answer to your question.

    • Great, so it sounds like you’re already on track to be able to provide the proper signal for the Rift and Vive’s displays. The article seems to give the impression that Serious was only currently able to push a single 1080p signal, so enough for the DK2 or PSVR, but not the higher resolution HMDs. I hope to see a consumer ready version of your product soon, this really is an important piece of the VR puzzle. Oh, and if you have any more marketing(!) to do, I’d say this is as good a place as any you could hope for; most readers here seem quite educated, thirsty for information, and often involved in the frontline of the industry – I certainly appreciated it in any case.

      • I was curious about the limitations of the wireless medium itself for resolution throughput, so I did some fuzzy and probably largely uninformed math: If we assume 0% transmission loss, no packet overhead, and 24 bits per pixel, 60ghz can deliver about 2.5M pixels per millisecond. That’s a little more than enough pixels for a Vive/Rift display. As resolution increases, it’s fine to take more than 1ms for transmission because the computer will be able to build and serve the frame in less time (also because it can start the next frame before finishing transmission, then asynchronously warp). This sounds pretty feasible and scalable, unless I missed something, or unless transmission efficiency takes a bigger than marginal chunk of throughput (i.e. if 6Ghz translates to 3Gbps rather than something more like 5.8Gbps).

        • Bandwidth would decrease with distance and occlusion, so you certainly wouldn’t want to be too close to your limits or you’d risk this and packet loss dynamically restricting your resolution capabilities from moment to moment, but the Serious VP has indicated that they are already pushing 1080p per eye reliably, so it seems like they’ve met that mark.

          I am a little confused when you say that “as resolution increases, it’s fine to take more than 1ms for transmission because the computer will be able to build and serve the frame in less time” though, it does seem like rendering time will increase as resolution does, and I’m not sure a 1ms delay is acceptable when mapping to motion tracking.

          • @ignaciomartn:disqus I forgot about modulation! Thanks for pointing that out. I’ve never worked with low level transmission tech, so my assumptions were fairly arbitrary. I guess I should have just looked up data rates that are currently possible with 60Ghz tech. 4.6Gbps was the highest number I could find. That really does change things!

            @DJDowism:disqus Rendering would indeed take longer as resolution increases, if rendering power remained constant. My earlier statement assumed that processing power will increase to remain on par with higher resolution (or rather, that manufacturers won’t deliver higher resolution HMDs until our computers can handle the res just as well as they handle 1200p today). My main point was that although higher resolution leads to longer transmission time, asynchronous transmission as well as in-receiver asynchronous timewarp could prevent this from significantly affecting quality (the amount of time we have to calculate and render the frame in the first place).

            Oculus actually claims that 20ms is the maximum acceptable latency for presence (from “motion to photon”). Current headsets are 90fps, which is 11ms per frame, but on most platforms the motion reading is taken again right before render (rather than before the other logic that goes on every frame) so the M2F can be less than 11ms.

            So to tie those two responses together, it looks like at 4.6Gbps, we would be delivering 191,666 pixels per ms. Frames for modern displays at around 2.6M pixels, then, would take 14ms to deliver. That means that we can’t even double the current number of pixels without the transmission tech hitting that 20ms wall. Even slight increases in resolution would take away from the current rendering budget (5 to 6ms). I think many VR games today are around the 8ms mark for their render time (most experiences spend a large majority of their 11ms on rendering rather than game logic).

            Forget my earlier point about it being scalable, then – as soon as we get higher resolution displays, they’ll have to be wired unless data compression is employed, graphics quality is reduced, graphics processing ability increases by large margins, or more than 4.6Gbps are coaxed out of that spectrum. Already, it looks like graphics settings would need to be lowered to achieve presence using this tech as opposed to a wire. Seems like Gear VR’s 2560×1440 for example, would really be pushing the limits of this technology (over 3.6M pixels and 19 ms, leaving less than 1ms for other parts of the display tech to deliver each frame).

        • Pobrecito hablador

          How do you calculate that? The RF carrier frequency is 60GHz, but that tells nothing about the bandwidth of the communication channel and its spectral efficiency, which depends on the modulation technique used. Ultimately, the theoretical maximum information transfer rate is given by the Shannon limit for noisy channels, but you cannot come close to it without using forward error correcting codes, which add overhead and, worse, latency.

  • Furdog

    Is there any real date in mind for the new version of this board that will work at 90Hz? What resolutions do you expect for the new version to be capable of?

  • mellott124

    The DVDO is already sub-frame. Why the 17ms latency? Because you’re rotating the incoming frames for a cell phone panel?

  • Luke

    I ask this of the developers in general: please let us choose whether to go wireless or wired.

    Me and many other players are scared of having a battery and wireless system so close to the brain. Many people think it could be dangerous for health.

    thx

    • brandon9271

      There is speculation that wireless transmitters near the brain could have negative effects in the long term. However, from my understanding this would be a receiver and not a transmitter… unless it’s a bidirectional link of some sort. Perhaps it is?

      • Luke

        There is no answer; the studies are incomplete and still developing (source: Italian Wikipedia).

        A guy (a radio amateur) talking in general about this topic told me that distance from the body is very crucial in the equation: the farther it is, the safer it is.

        Recently I read on the internet (on some forum board) that according to some studies the problem is not only the wireless. Another problem that might be even greater is the batteries of cellphones and portable devices. We should look deeper, because it seems that batteries, when charging, give off some kind of radiation to the body if too close. I’m not an expert, maybe radiation is not the correct word (I also often write English using Google Translate, so I really can’t help so much).
        Maybe Wikipedia may help. I read the Italian version but I guess the English version should say the same.

        What I ask is to give the customers 2 options on the same HMD: wireless and cabled. So we can choose! :)

        • brandon9271

          Having the choice would be best. Even if it was safe I’d rather not have the extra weight on my head and neck! :)

  • Damon Curry

    Querido Pobrecito Hablador … we have used a number of commercial off-the-shelf wireless video devices and we have some similar devices of our own invention. We cannot reveal our internal research and development results yet but our goal is to commercialize our R&D into off-the-shelf products as soon as possible.

  • gothicvillas

    If this thing blows up like a hoverboard or electric cigarettes, that would be one big ugly hole in the back of your head… However, I hate the tethered option we have today and hope this gets looked at.

  • Rico S Mario Melchert

    Love this.
    50 feet? I would be more than happy with 20 :D
    Bring on a kickstarter for a Vive-mod ok?