The first class of Rothenberg Ventures’ ‘River’ VR accelerator has graduated. Thirteen companies, in which Rothenberg invested a combined $1.3 million, showed off their latest work at the company’s annual Founder Field Day event. Road to VR was on the scene to see what the companies had accomplished during the three-month accelerator.
In December last year, Rothenberg Ventures, a venture capital firm based in San Francisco, announced plans for the first all-VR accelerator program, which it calls ‘River’. The company sought to invest $1 million across 10 virtual reality startups. The three-month program would provide office space and regular mentorship from knowledgeable advisors. After sifting through some 200 applicants, Rothenberg Ventures ultimately upped the ante to $1.3 million across 13 companies for the first River class.
The culmination of that first class came at Founder Field Day, an annual event hosted by Rothenberg Ventures that brings together founders, investors, strategists, and mentors for a day of networking and learning. The focal point of the event, held in San Francisco at the home of the Giants, AT&T Park, was the 13 graduating River companies. I got to speak with the founders of these VR companies and see what they’d achieved after three months in the River program. The final part of our three-part series features Fove, Triggar VR, and Vantage.TV.
1. FOVE, VR Headset with Eye Tracking

Fove is a small team of Japan-based founders creating a VR headset that stands apart from the rest thanks to its integrated eye-tracking technology. What’s more, eye-tracking isn’t the only thing the headset does well; Fove has impressed with their attention to the rest of the package, especially when it comes to headtracking performance. That’s why we called the headset “Next Best at CES” earlier this year. From that article, here’s my brief breakdown of the useful benefits that could come from high quality eye-tracking:
Eye-based interface and contextual feedback: Imagine a developer wants to make a monster pop out of a closet, but only when the user is actually looking at it. Or perhaps an interface menu that expands around any object you’re looking at but collapses automatically when you look away.
Simulated depth-of-field: While the current split-screen stereo view familiar to users of most consumer VR headsets accurately simulates vergence (movement of the eyes to converge the image of objects at varying depths), it cannot simulate depth of field (the blurring of out-of-focus objects), because the flat panel means that all light from the scene is coming from the same depth. If you know where the user is looking, you can simulate depth of field by applying a fake blur to the scene at the appropriate depths.
Foveated rendering: This is a rendering technique aimed at reducing the rendering workload, hopefully making it easier for applications to achieve the higher framerates that are much desired for an ideal VR experience. It works by rendering at high quality only at the very center of the user’s gaze (only a small region at the retina’s center, the fovea, picks up high detail) and rendering the surrounding areas at lower resolutions. If done right, foveated rendering can significantly reduce the computational workload while the scene looks nearly identical to the user. Microsoft Research has a great technical explanation of foveated rendering in which they were able to demonstrate a 5-6x acceleration in rendering speed. A rough back-of-the-envelope sketch of where those savings come from follows this list.
Avatar eye-mapping: This is one I almost never hear discussed, but it excites me just as much as the others. It’s amazing how much body language can be conveyed with just headtracking alone, but the next step up in realism will come by mapping mouth and eye movements of the player onto their avatar.
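To put rough numbers on the foveated rendering point above, here’s a minimal sketch. The render target size, field of view, zone radii, and per-zone resolution scales are all assumptions for illustration, not Fove’s or Microsoft Research’s actual figures; it simply counts shaded pixels to show why severalfold savings are plausible.

```python
import math

# Back-of-the-envelope pixel counting for foveated rendering. All numbers
# below are assumptions for illustration, not Fove's or Microsoft Research's
# actual figures.
WIDTH, HEIGHT = 1280, 1440        # render target per eye (assumed)
FOV_DEG = 100.0                   # horizontal field of view (assumed)
PIX_PER_DEG = WIDTH / FOV_DEG     # crude linear approximation

def zone_pixels(radius_deg):
    """Approximate pixel count of a circular zone centered on the gaze point."""
    r = radius_deg * PIX_PER_DEG
    return math.pi * r * r

total = WIDTH * HEIGHT
inner = min(zone_pixels(10), total)            # full resolution near the fovea
middle = min(zone_pixels(30), total) - inner   # half resolution per axis
outer = total - inner - middle                 # quarter resolution per axis

# Shading work scales with the number of pixels actually shaded; half and
# quarter resolution per axis mean 1/4 and 1/16 of the pixels respectively.
foveated = inner + middle * (0.5 ** 2) + outer * (0.25 ** 2)
print(f"Foveated shading work: {foveated / total:.0%} of full resolution")
print(f"Raw pixel-count speedup: {total / foveated:.1f}x")  # real pipelines gain less
```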
If Fove can achieve consistent, high performance eye-tracking, developers will be able to do these things and more. The company isn’t the first to combine eye-tracking with a VR headset, but they aim to be the first to offer it as a fully integrated solution at a consumer price point, and as we’ve seen with the resurgence of the VR industry in the last few years, that can make all the difference.
At Founder Field Day, Fove CTO Lochlainn Wilson told me that the headset resulting from the company’s Kickstarter campaign would achieve 120Hz eye-tracking, significantly faster than the current prototype. Also new since I last saw the prototype was an updated distortion shader, which makes scenes viewed through the headset more visually accurate.
One missing piece that the company is still working on is positional tracking (the ability to track head movement through 3D space). This important feature improves comfort and immersion, and has been demonstrated with high performance by big competitors like Oculus, Sony, and HTC/Valve. We’ll be looking forward to trying Fove’s positional tracking solution and hope to see it match the headset’s other impressive qualities.
Disclosure: At the time of publishing, FOVE is running advertisements on Road to VR
2. Triggar VR, Rugged Professional 360 Degree Video Platform
Triggar VR is building a top-to-bottom solution for heavy-duty 360 degree VR video capture that covers everything from the camera to the file hosting. Co-founder Bruce Allan told me that the company aims to make VR video recording rigs that are “beyond GoPro levels of durability.”
Allan showed me some 3D printed prototype rigs consisting of two and four GoPro cameras with custom wide angle lenses. He also touted the camera’s small stitching radius, which reduces parallax error, enabling the camera to almost seamlessly capture confined spaces like the inside of a racecar.
For extreme recording needs, like underwater or even in space, Triggar is creating highly reinforced camera rigs. The aptly named ‘Thor’ prototype is a metal enclosure for four cameras on the end of a handle. The compact unit was impressively dense and must have weighed at least 10 pounds, resting most comfortably on its head.
The company plans to eventually create their own custom capture hardware to eliminate the reliance on a GoPro array, dodging issues like troublesome camera syncing and overheating when the cameras operate in confined spaces, Allan told me.
The company is also building a hosting platform that will be able to play back 360 degree VR video across a multitude of devices. It has already deployed the TriggarVR app on iOS, which offers up videos in a split-screen view with headtracking for VR smartphone viewers like Google Cardboard.
Currently the company’s cameras aren’t capturing in 3D. Allan tells me that this decision is all about video quality and ease of processing. Recording in 3D essentially halves the quality of each frame, because a left and right eye view must be encoded into the same available space as a single full frame. Shooting in 3D also creates new stitching challenges. Allan told me that the company could, and might, offer 3D-capable rigs, but for now he prefers the quality boost that comes with sticking to 2D.
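To make that trade-off concrete, here’s a minimal sketch using a hypothetical 4K equirectangular delivery frame (an assumption for illustration, not Triggar’s actual output format): packing two eye views into a container of fixed size leaves each eye with half the pixels of a mono view.

```python
# Hypothetical delivery frame (assumed, not Triggar's actual specs): the same
# container has to hold either one mono view or two stereo eye views.
FRAME_W, FRAME_H = 3840, 1920    # a typical 4K equirectangular frame (assumed)

mono = (FRAME_W, FRAME_H)              # the whole frame serves a single view
stereo_tb = (FRAME_W, FRAME_H // 2)    # top-and-bottom packing, per eye
stereo_sbs = (FRAME_W // 2, FRAME_H)   # side-by-side packing, per eye

for label, (w, h) in [("mono", mono),
                      ("stereo top/bottom", stereo_tb),
                      ("stereo side-by-side", stereo_sbs)]:
    print(f"{label:>20}: {w}x{h} per eye ({w * h / 1e6:.1f} MP)")
```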
3. Vantage.TV, VR Concerts and Live Events in High Quality
Vantage.TV is another entrant into the VR video space. The company is focusing on quality, production, content streaming, and a unique differentiator that I’ve not seen elsewhere for VR video: voice communication between users.
Vantage was showing an impressive demonstration at Founder Field Day which allowed four users to experience a concert shot in 2D, 180 degree video from several vantage points (see what they did there?). Watching on Gear VR, the four users saw the same synced footage and could easily talk to each other directly through the phone’s onboard microphone to commentate as the action progressed.
Michael Richardson, Vantage’s CTO, told me that the company’s demonstration is more than just a proof of concept; it’s the system they’re nearly ready to deploy, one that will allow users to experience streamed VR video together with real-time voice chat right over the internet.
Because they’re recording in 2D and only 180 degrees, Vantage is able to pack impressive quality into each frame, to the point that it was one of the highest-fidelity VR video experiences I’ve ever seen. Vantage compensates for the lack of complete 360 degree recording by employing multiple cameras at various angles, giving users a cumulative view around the recording venue without having to drop frame quality by recording a complete sphere.
Although not recorded in 360 degrees, Vantage’s experience still supports full spherical headtracking. Inside, you see the footage in front of you, and if you turn around, the recorded scene fades and gives way to an abstract pattern that matches its colors to those of the scene in front of the user. The effect works surprisingly well to tell the user ‘there’s nothing back here’ while keeping them feeling like they’re part of the scene.
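Vantage hasn’t detailed how the fade is implemented, but the behavior described above can be approximated by blending between the footage and the backdrop based on how far the viewer has turned from the direction the video was shot in. The following is a minimal sketch under that assumption; the fade angles and function name are hypothetical.

```python
# Minimal sketch (assumed, not Vantage.TV's actual implementation): blend
# between the 180-degree footage and an abstract backdrop based on how far
# the viewer has turned away from the direction the footage was shot in.
FADE_START_DEG = 80.0    # inside this angle: footage only (assumed value)
FADE_END_DEG = 110.0     # beyond this angle: backdrop only (assumed value)

def backdrop_blend(gaze_angle_deg):
    """Return 0.0 for pure video, 1.0 for pure abstract backdrop.

    gaze_angle_deg is the angle between the viewer's gaze and the video's
    forward direction (0 = looking straight at the footage).
    """
    t = (abs(gaze_angle_deg) - FADE_START_DEG) / (FADE_END_DEG - FADE_START_DEG)
    return max(0.0, min(1.0, t))

# Tinting the backdrop toward the footage's average frame color would give
# the color-matching effect described above.
for angle in (0, 90, 135, 180):
    print(f"{angle:>3} deg -> {backdrop_blend(angle):.2f} backdrop")
```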
The social aspect of merely being able to speak with other people seeing the same thing as you adds significantly to the experience. It’s a necessary piece of the puzzle as obvious use-cases for VR video are inherently social—who goes to see a concert alone? Vantage’s 180 degree recording solution also means that the system doesn’t rely on custom VR camera rigs. Venues need only to adapt existing cameras with the appropriate lenses to capture the experience in a way that’s ready for the company’s VR video experience.
One day we’ll have affordable, high quality full 3D 360 degree video recording—and the internet connections to stream it—but until then, Vantage.TV’s smart compromises are a win where quality is concerned and an important bridge to the future of the VR video medium.
Disclosure: Rothenberg Ventures covered Road to VR’s travel costs to attend Founder Field Day.