
News Bits: Microsoft Research Demonstrates Incredible First-person ‘Hyperlapse’ Video Technique


With SIGGRAPH 2014 in full swing, incredible computer graphics papers and demonstrations are flowing in this week. Microsoft Research has published a paper showing an impressive time-lapse processing technique that turns jumpy first-person footage into beautifully smooth, watchable ‘hyperlapse’ video, perfect for wide-angle head-mounted recordings from devices like GoPro and Google Glass. The technique may even have applications for VR content.

A GoPro action camera mounted on a helmet for a first-person recording.

The paper, published by Microsoft researchers Johannes Kopf, Michael F. Cohen, and Richard Szeliski, details a technique that uses complex video processing to determine the camera’s movement through space, construct a partially complete 3D model of the recorded scene, and finally compute a smooth camera path that flies through the reconstructed environment, which is textured using the input footage:

At high speed-up rates, simple frame sub-sampling coupled with existing video stabilization methods does not work, because the erratic camera shake present in first-person videos is amplified by the speed-up.

Our algorithm first reconstructs the 3D input camera path as well as dense, per-frame proxy geometries. We then optimize a novel camera path for the output video that is smooth and passes near the input cameras while ensuring that the virtual camera looks in directions that can be rendered well from the input.

Next, we compute geometric proxies for each input frame. These allow us to render the frames from the novel viewpoints on the optimized path.

Finally, we generate the novel smoothed, time-lapse video by rendering, stitching, and blending appropriately selected source frames for each output frame. We present a number of results for challenging videos that cannot be processed using traditional techniques.
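To make the second step of that description more concrete, here is a minimal, hypothetical sketch of the path-smoothing idea: given the recovered 3D camera positions for every input frame, solve for a new path that penalizes acceleration while staying close to the originals. The function name, the `smoothness` weight, and the simple least-squares objective are illustrative assumptions; the researchers’ actual optimization is richer, also handling camera orientation and how well each viewpoint can be rendered from the input.

```python
# A minimal sketch (not Microsoft's implementation) of camera path smoothing:
# trade off fidelity to the recovered per-frame camera positions against a
# penalty on second differences (i.e. acceleration) of the output path.
import numpy as np

def smooth_path(positions: np.ndarray, smoothness: float = 50.0) -> np.ndarray:
    """positions: (N, 3) array of recovered per-frame camera positions.
    Returns an (N, 3) path q minimizing
        sum ||p_i - q_i||^2 + smoothness * sum ||q_{i-1} - 2*q_i + q_{i+1}||^2."""
    n = len(positions)
    # Second-difference operator D, shape (n-2, n).
    D = np.zeros((n - 2, n))
    for i in range(n - 2):
        D[i, i], D[i, i + 1], D[i, i + 2] = 1.0, -2.0, 1.0
    # Normal equations: (I + smoothness * D^T D) q = p, solved per coordinate.
    A = np.eye(n) + smoothness * (D.T @ D)
    return np.linalg.solve(A, positions)

# Toy usage: a jittery forward walk becomes nearly straight.
rng = np.random.default_rng(0)
shaky = np.cumsum(rng.normal([0.0, 0.0, 0.3], 0.15, size=(200, 3)), axis=0)
smooth = smooth_path(shaky, smoothness=200.0)
print(np.abs(np.diff(shaky, 2, axis=0)).mean(),
      np.abs(np.diff(smooth, 2, axis=0)).mean())
```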

The resulting footage is significantly more watchable than the input footage (even when processed with traditional stabilization methods) and quite mesmerizing.

A short explanation video is shown above. Below is a more detailed technical breakdown which shows how the hyperlapse footage is achieved.

Since the technique partially recreates a 3D version of the recorded scene, it’s possible that this technique could be used to fly a user-controlled camera through the scene in real-time. This could offer an immersive way to view hyperlapses with a VR headset like the Oculus Rift; the user would have control over where they are looking within the video. Of course, there will be large gaps in the geometry depending upon where the camera was pointed in the original recording. While the GoPro cameras used in the videos above have a 170 degree field of view, it’s possible that the technique could be applied to even wider shots, perhaps even full spherical video.
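That speculation rests on the same property the paper exploits when rendering frames from novel viewpoints using per-frame geometric proxies. The sketch below is a generic view-synthesis reprojection, not the paper’s method: it assumes a single source frame, a dense per-pixel depth map standing in for the proxy geometry, shared intrinsics `K`, and known camera poses, and it simply splats pixels into a virtual camera (nearest-pixel, no z-buffer or blending of multiple frames).

```python
# Illustrative sketch only: forward-warping one frame into a virtual camera using
# a per-frame depth proxy. All inputs (K, depth, poses) are assumed to come from
# an earlier reconstruction stage; nothing here is a published Hyperlapse API.
import numpy as np

def reproject(image, depth, K, cam_from_world_src, cam_from_world_dst):
    """image: (H, W, 3) uint8; depth: (H, W) depths in the source camera;
    K: (3, 3) intrinsics shared by both cameras;
    cam_from_world_*: (4, 4) world-to-camera extrinsics. Returns the warped image."""
    H, W = depth.shape
    v, u = np.mgrid[0:H, 0:W]
    # Back-project source pixels to 3D points in the source camera frame.
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).astype(np.float64)
    pts_src = (np.linalg.inv(K) @ pix.T) * depth.reshape(1, -1)
    # Transform the points from the source camera frame into the virtual camera.
    pts_h = np.vstack([pts_src, np.ones((1, pts_src.shape[1]))])
    dst_from_src = cam_from_world_dst @ np.linalg.inv(cam_from_world_src)
    pts_dst = (dst_from_src @ pts_h)[:3]
    # Project into the virtual camera and splat colours (no z-buffer).
    proj = K @ pts_dst
    valid = proj[2] > 1e-6
    x = np.round(proj[0, valid] / proj[2, valid]).astype(int)
    y = np.round(proj[1, valid] / proj[2, valid]).astype(int)
    inside = (x >= 0) & (x < W) & (y >= 0) & (y < H)
    out = np.zeros_like(image)
    out[y[inside], x[inside]] = image.reshape(-1, 3)[valid][inside]
    return out
```

Holes where the original camera never looked, and where the splatting leaves gaps, are exactly the missing geometry the paragraph above describes.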

The project page notes that the research team is “working hard on making our Hyperlapse algorithm available as a Windows app. Stay tuned!”
