Dubbed DeepStereo, Google's new algorithm will make it possible to simulate the exploration of foreign landscapes in a seamless series of images. Intended for use in conjunction with Street View, the algorithm is capable of synthetically producing images to complete gaps between a sequence of pictures captured by Google's cameras.
In doing so, DeepStereo aims to correct the jagged animation that results when Street View photos are strung together like frames of a stop-motion film. Without Google's new algorithm, playing such a string of photographs at the standard 24 frames per second wouldn't look fluid enough to seem authentic, according to MIT Technology Review.
Now, however, thanks to John Flynn and his team of engineers at Google, the problem of how to fill these pictorial voids has been solved. By examining the frames on either side of a gap in a Street View sequence, DeepStereo can manufacture the missing pieces, producing fluid video footage from practically any sequence.
It's a problem computer scientists have been attempting to resolve for decades, with many efforts failing due to the image tearing caused by a lack of information needed to generate the absent visual detail. DeepStereo, on the other hand, estimates the depth and color of each pixel in the artificially produced image by analyzing the corresponding pixels in the previous and subsequent images in the series. On the downside, creating a single image this way takes approximately 12 minutes, even with the aid of a multicore processor.
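To make the idea concrete, here is a heavily simplified sketch of depth-aware view interpolation. This is not Google's actual method (DeepStereo uses a deep network trained on Street View imagery); it is a toy "plane sweep" under assumed conditions: grayscale frames, a purely sideways camera step, and the hypothetical helper name `synthesize_middle`. For each candidate disparity, both neighboring frames are warped toward the virtual middle viewpoint, and each output pixel keeps the disparity where the two warped colors agree best.

```python
import numpy as np

def synthesize_middle(left, right, max_shift=4):
    """Synthesize the view halfway between two neighboring frames.

    A crude plane sweep: for each candidate disparity d, warp the left
    and right frames toward the middle viewpoint by d pixels each, keep
    the disparity where the two warped colors agree best, and output
    their average color at that disparity.
    """
    h, w = left.shape
    best_err = np.full((h, w), np.inf)  # best color disagreement so far
    out = np.zeros((h, w))              # synthesized middle view
    for d in range(max_shift + 1):
        # Warp each neighbor halfway toward the virtual viewpoint:
        # in the left view the scene appears shifted left, in the
        # right view shifted right, each by d pixels.
        warped_left = np.roll(left, -d, axis=1)
        warped_right = np.roll(right, d, axis=1)
        err = np.abs(warped_left - warped_right)  # per-pixel disagreement
        better = err < best_err
        best_err[better] = err[better]
        out[better] = 0.5 * (warped_left + warped_right)[better]
    return out

# Toy scene: the camera steps sideways, shifting the image 2 px per
# frame, so the true middle view is the left frame shifted by 1 px.
rng = np.random.default_rng(0)
base = rng.random((8, 32))
left, right = base, np.roll(base, -2, axis=1)
middle = synthesize_middle(left, right)
assert np.allclose(middle, np.roll(base, -1, axis=1))
```

With a uniform sideways shift the sweep recovers the middle view exactly; real street scenes have per-pixel depth variation, occlusions, and lighting changes, which is why the full system needs far more machinery and compute.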
Impressive technology, but there's still a long way to go on optimization before it becomes viable for the everyday user.