Australian startup Immersive Robotics are poised to deliver what they claim is a truly universal wireless solution for PC VR headsets: tether-free virtual reality with minimal compromises in quality and extremely low latency.

I've always found it fascinating to observe how the advent of a new technology can accelerate the development of another. The push for rapid advances in smartphone specifications, for example, accelerated the development of high-resolution mobile displays and low-cost IMU components, without which today's first-generation consumer VR headsets simply could not have existed. Likewise, now that consumer VR is finally here, the demand for a solution to those ever more anachronistic, presence-sapping cables is driving innovation and rapid advancement in wireless video systems.

We've seen an explosion of stories centred on companies looking to take the (until now) slowly evolving sphere of wireless video broadcasting and give it a good shot in the arm. Most recently we've seen HTC partner with TPCast to deliver a wireless add-on solution for their SteamVR-powered Vive system. Prior to that we'd already heard how Valve was investing a "significant amount" in wireless video streaming for VR by way of Nitero, a specialist in the field, with Quark VR and Serious Simulations on the scene earlier still. But when it comes to pushing the boundaries of cutting-edge technology, you can never have too many people racing to the finish line.

Immersive Robotics (IMR) are an Australian startup who have developed a wireless VR streaming and inline compression system designed from the very start for PC VR headsets. It offers a claimed 2-3ms of added latency with minimal compromise to image quality, and works over existing WiFi standards you very likely have in your home right now. IMR call their system the Mach-2K and, from what we've seen so far, it shows considerable promise.

In truth, IMR's project is far from new: the founders have been developing their technology since 2015, with a working proof of concept running first on an early OSVR headset before they secured a government grant to fund further development.

[caption id="attachment_57082" align="alignright" width="150"] Dr Daniel Fitzgerald[/caption]

IMR was co-founded by Tim Lucas and Dr Daniel Fitzgerald. Lucas has a background in unmanned vehicle design, having worked on multiple "prominent" UAV designs, but has also worked with VR and LiDAR-powered photogrammetry, building what he describes as "the first Virtual Reality simulation of a 3D scanned environment from an aircraft". His co-founder Fitzgerald hails from aerospace avionics engineering, with a PhD focused on the then-emerging unmanned drone industry. Fitzgerald has built autopilot software for those drones, an occupation which let him hone his talent for algorithm development.

[caption id="attachment_57097" align="alignright" width="150"] Tim Lucas[/caption]

With the virtual reality industry now growing rapidly, the duo set about designing a system, built around proprietary software algorithms, that delivers imagery to VR headsets wirelessly. "Basically from an early point in modern VR history, my business partner Dr Daniel Fitzgerald and I decided to tackle the problem of making an HMD wireless," Lucas tells us. "Our original area of expertise was in designing high-end drones and we initially envisioned it as an interface for that area."
The team quickly realised that, with the advent of room-scale VR at consumer-level cost, there were significant opportunities to capitalise on. "Soon after looking into it, we realised that logically pretty soon everyone using tethered HMDs would probably just want to get rid of the wires anyway, and that the potential in this growing market was significant," Lucas tells us. "We designed a video compression algorithm from the ground up that could compress data down to acceptable rates for current wireless technology, while at the same time eliminating the flaws of current compression technology that make it unsuitable for VR, such as high added latency."

"What we ended up with was a compression and decompression algorithm running on individual boards, which is able to plug into the HTC Vive and compress its data down by around 95% with less than 1ms of additional latency. Most of all, there is no visible degradation to what the user normally sees with the cables."

[caption id="attachment_57094" align="aligncenter" width="640"] The Mach-2K Receiver and Belt[/caption]

That system is called the Mach-2K, and it comprises a battery-powered receiver box small enough to be worn on a belt by the player. The unit attaches to the headset's USB, audio and HDMI connections, while a transmission device attached to the PC beams native-resolution 2160 x 1200 images at 90Hz to the target VR headset, currently an HTC Vive. IMR have developed hand-crafted algorithms capable of achieving up to 95% compression on those images while adding under 2-3ms to motion-to-photon latency, all delivered over a vanilla WiFi connection. As if that weren't enough, the two devices, working in tandem to compress and then decompress imagery at source and destination respectively, were originally conceived to handle 4K per-eye resolutions at up to 120Hz, ready for the next generation of high-spec VR devices. "At the moment we have actually scaled it back for HTC Vive support," says Lucas, "it will support 4K per eye which we believe to be a near future requirement," so there's room here for IMR's technology to evolve alongside advances in VR headsets.

[caption id="attachment_57093" align="aligncenter" width="593"] Inside the Mach-2K receiver[/caption]

Mach-2K Specs:

- Current fully supported resolution: 2160 x 1200
- Current fully supported frame-rate: 90Hz
- Planned near-future resolution: 4K per eye
- Planned near-future frame-rate: 120Hz
- Main CPU: FPGA
- I/O: HDMI, USB 2.0, 12V out, eye tracking input
- Supply power: 5V DC
- Current frequencies: 802.11ac WiFi (5GHz)
- Future supported frequencies: up to 60GHz WiGig
- On-board software: B.A.I.T. "Biologically Augmented Image Transmission" algorithm
- OEM and SDK options available, allowing third parties to create application-specific modules for the algorithm
- User-selectable compression schemes

[gallery type="rectangular" ids="57084,57094,57095"]

Skeptical of that claimed 95% compression with less than 1ms of added latency? So was I. So I asked IMR for some example images demonstrating the before and after image quality of the Mach-2K system. The images below represent IMR's development progress as they've tuned and iterated upon that compression algorithm. Each image grid compares an original, raw image against the same frame after passing through the company's older V2 algorithm and their current V3 iteration. Click on each to load the full size image.

Lossy video compression ("lossy" is not a negative term; it merely indicates that data is discarded) aims to shrink data sizes by discarding some of the data used to describe a scene. As the same image still needs to be described in each frame using less data, there are naturally compromises to be made. A telltale sign of a compressed scene is a loss of subtle colour fidelity, seen most glaringly as banding or posterisation on smooth colour gradients. You can see IMR's older V2 algorithm struggling to reproduce the gradients as accurately as the raw image, but their current method improves significantly, with minimal extra banding introduced.
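To make that banding effect concrete, here's a minimal sketch in Python; to be clear, this is generic quantization for illustration only, and bears no relation to IMR's proprietary algorithm. The fewer bits of colour precision kept per channel, the fewer distinct steps survive in a smooth gradient, and the more visible the bands become:

```python
# Toy illustration of posterisation: NOT IMR's algorithm, just a
# demonstration of why discarding colour precision causes banding.
import numpy as np

# A smooth horizontal gradient, 8 bits per channel (values 0-255).
gradient = np.linspace(0, 255, 640)

def quantize(values, bits):
    """Crudely reduce colour precision to `bits` bits per channel,
    standing in for the precision a lossy codec might discard."""
    step = 256 / (2 ** bits)
    return np.floor(values / step) * step

coarse = quantize(gradient, 4)  # 16 colour levels: banding is obvious
subtle = quantize(gradient, 7)  # 128 levels: bands are far harder to see

# Count the distinct colour steps the eye has to smooth over.
print(len(np.unique(coarse)))  # 16
print(len(np.unique(subtle)))  # 128
```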
If that level of image quality is indeed representative of the real-time, 90Hz experience, I can easily see many users being unable to distinguish between the wired and wireless versions of the experience, except perhaps in especially challenging VR scenes. Let me be clear here, however: we have not yet seen this system in action for ourselves, so even assuming the compression quality is representative, we are still unable to judge added latency - the area of performance which will, of course, make or break the system.

Ultimately, IMR see their technology as a way to provide universal wireless solutions for all VR headsets. "Our algorithm is designed to be as agnostic as possible with wireless equipment," says Lucas, "we have demonstrated it to some of the world's leading WiFi and VR experts and the general consensus is that our latency is the best anyone has seen, allowing various options for integration which obviously would come with their own overheads."

It's clear then that Immersive Robotics are moving on apace, with the team aiming to show off their technology at the forthcoming CES trade show in Las Vegas next month. In order to dig a little deeper, find IMR's answers to some of our burning questions regarding their technology below.

Road to VR: It sounds as if the key USP is your compression. Is this entirely proprietary? If so, does this mean you've had to build custom compression acceleration for the (presumably) SoC used in the transmission and receiving devices?

IMR: Our compression / decompression is proprietary and built from the ground up specifically to tackle the problem of sending VR data between any supported devices. Call it a VR standard if you will. Our flagship product demonstrator sends the VR video and USB data seamlessly between the PC and HMD wirelessly. Aside from the core algorithms and VR standard we have designed, the complete system is implemented on our real-time electronics hardware. We show a system end to end (compression, transmission and decompression) of less than 1ms. Other people currently coming to market that we've observed have EXTREMELY high latency, which is not practical for VR and will make people sick. It also has no future expandability. A considerable amount of time and resources on our part has gone into creating the architecture that the algorithm runs on, which is extremely fast and robust. We chose to do this completely independently of the computer, so it is not slowed down by any software layers; it is all completely embedded. There is no software or drivers to install. This involves our own custom PCB hardware and all the supporting code that runs on the chips. All of this is proprietary. (Provisional patent filed in the USA - United States Provisional Patent Application No. 62/351738 - "IMAGE COMPRESSION METHOD AND APPARATUS")

Road to VR: You say the process of compression and transmission adds only 1ms latency - is that motion-to-photon? How did you go about measuring the latency?

IMR: The process of compression and decompression of a single frame at around 95% introduces latency only in the hundreds of microseconds end to end. This is measured via clock cycles of our hardware - the time it takes to compress and subsequently decompress a piece of data. This is a measurement of what we are replacing, i.e. the HDMI cable in the video's case. So the latency is added to the system latency at the point where the HDMI and USB cables are usually connected.
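To put those microseconds in context, here's a little back-of-the-envelope arithmetic of my own, using only the figures quoted in this article; it shows how tight the frame budget is at 90Hz and why frame-buffering codecs (discussed further below) fall so far outside it:

```python
# Back-of-the-envelope latency budget at the Vive's 90Hz refresh rate.
frame_rate_hz = 90
frame_time_ms = 1000 / frame_rate_hz
print(f"One frame at {frame_rate_hz}Hz lasts {frame_time_ms:.1f}ms")  # ~11.1ms

# A codec that buffers one whole frame before sending (as conventional
# frame-to-frame compression does) adds at least one frame time of delay.
print(f"Single-frame buffering adds at least {frame_time_ms:.1f}ms")

# IMR cite 2-3ms as the most a wireless link can add before VR suffers;
# their claimed compress-plus-decompress overhead is under 1ms.
wireless_budget_ms = 3.0
imr_claimed_ms = 1.0
print(f"Claimed overhead is under {imr_claimed_ms / wireless_budget_ms:.0%} "
      f"of that wireless budget")
```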
Road to VR: What radio transmission technology / frequency are you using, and how are you dealing with potential line-of-sight issues?

IMR: We are actually able to utilise a selection of off-the-shelf WiFi modules. We chose this path because the strength of our compression lets us get the data rate down to an acceptable level and just take advantage of the latest existing WiFi tech; it brings the cost down and also opens up opportunities for us to partner with other companies that are already doing well in this area. We can use anything from 802.11ac, and for even more bandwidth and less compression we can move up to the new 802.11ad standards. Our antenna placement on the belt allows an antenna to almost always be in view of a base station. The line-of-sight issue is a little more pronounced at higher frequencies and needs more care given to antenna placement. The current model being released at CES is using 802.11ac.

Road to VR: What is the maximum range for the system, between base station and receiver?

IMR: This depends on the WiFi frequency being used and the level of compression chosen; we can automatically scale the compression to the signal-to-noise ratio, or the user can select a preset. With the data rates for this application, 802.11ac can reach up to 50m theoretically, depending somewhat on environmental factors - certainly far enough for most play areas. We are aware that our technology also allows people to develop much larger play areas and add many more people to them; this is something we are partnering with interested companies to test. We are guaranteeing with our modules the ability for multiple users to play in the same large area. VR arcade companies we are in contact with have suggested that an area of up to 30x30m would be fantastic, so we are currently testing to support this.
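It's worth sanity-checking the data rates behind those answers. The rough arithmetic below is my own, assuming a standard 24 bits per pixel for the Vive's 2160 x 1200, 90Hz feed, and shows why a ~95% reduction brings an otherwise unmanageable stream into 802.11ac territory:

```python
# Rough arithmetic: why ~95% compression makes 802.11ac viable.
# The 24 bits-per-pixel figure is an assumption (standard 8-bit RGB).
width, height = 2160, 1200   # HTC Vive's combined panel resolution
bits_per_pixel = 24
frame_rate_hz = 90

raw_bps = width * height * bits_per_pixel * frame_rate_hz
print(f"Uncompressed video: {raw_bps / 1e9:.1f} Gbps")  # ~5.6 Gbps

reduction = 0.95  # IMR's claimed compression
compressed_bps = raw_bps * (1 - reduction)
print(f"After ~95% compression: {compressed_bps / 1e6:.0f} Mbps")  # ~280 Mbps

# 802.11ac link rates reach into the Gbps range on paper, with real-world
# throughput lower; a few hundred Mbps is plausible, which is why the
# compressed figure fits where the raw one never could.
```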
Road to VR: What expertise did you leverage, historical or through hiring, to allow you to engineer such an efficient codec?

IMR: The compression algorithm was developed entirely between Dr Daniel Fitzgerald and myself. Our backgrounds are aerospace and robotics; primarily we used to develop UAV / "drone" technology for high-end, mission-critical tasks. This involved a lot of wireless, camera and vision processing technology, as well as safety- and time-critical hardware and software combinations, advanced autopilot development and rapid prototyping. It worked out that we saw an evolving requirement for this sort of thing in the VR industry and had the prior skills and knowledge to apply to it and make it happen quickly. We received investment for the company and hired skilful engineers to assist with things like PCB design; interfacing with HDMI, ethernet and wireless chips; and solving issues like creating the invisible wireless USB link.

Road to VR: You mention your compression skirts issues you found with pre-existing systems already out there. Can you detail what those were specifically, and what codecs you tried?

IMR: Existing, conventional video compression like H.264 and many others uses frame-to-frame compression (frame buffering) to achieve its level of compression. This is fine for viewing something on a screen, but if you used it for the HTC Vive, for example, you would end up with over 11ms of added latency (from buffering a single frame), and as screen resolutions grow in the future you would end up with even more. Without going into too many proprietary details, our system is able to work at such low latency because we figured out how to do a specialised style of compression without being constrained by any of this. The delay issue described above is the key difference between our system and what is currently being pushed by other companies on the market. We have thought about the real problem from the beginning, rather than trying to push out a product fast that is unacceptable for VR. If you are developing a wireless accessory for VR, you can get away with maybe 2 or 3ms MAX of delay being introduced by the wireless portion of the system.

Road to VR: You say IMR is working towards productising this technology. What price range are you looking to shoot for? A very rough ballpark is obviously fine here.

IMR: We are targeting business customers who can roll this out to consumers for applications such as VR arcades, theme parks, training, etc. The price point would depend on quantities, but roughly around $1,200-1,500.

For more information on Immersive Robotics you can head to their website, twitter or Facebook page.