Editor's take: To me, there is something slightly off-putting about still images that are brought to life by AI. Unlike Apple's Live Photos, which are just mini-movies, artificially animated pictures have an unnatural quality, no matter how spot-on the animation is. For lack of a better way to phrase it, they feel a bit like a hallucination.
During Google's I/O event Tuesday, Vice President of Google Photos Shimrit Ben-Yair showed how the company is using machine learning to make Google Photos better. One improvement builds upon "Cinematic Photos," a feature launched last year that adds a three-dimensional effect to images. From this foundation, Google created "Cinematic Moments."
Cinematic Moments creates an animated photo from a pair of still images—shots of a kid jumping on a trampoline, for example. Machine-learning algorithms take the two photos, say one with the child standing on the trampoline and one in mid-air, and insert several frames in between to create a moving image of the jump.
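Google has not published how Cinematic Moments generates the in-between frames; its models presumably estimate motion between the two shots. As a purely illustrative sketch of the general idea of frame interpolation, the snippet below generates intermediate frames with a naive linear crossfade—the function name and parameters are hypothetical, and a real system would use learned motion estimation rather than a pixel blend:

```python
import numpy as np

def interpolate_frames(frame_a, frame_b, num_frames=8):
    """Generate intermediate frames between two stills.

    Illustrative only: this is a simple linear crossfade, not the
    learned motion interpolation a system like Cinematic Moments
    would actually use.
    """
    a = frame_a.astype(np.float32)
    b = frame_b.astype(np.float32)
    frames = []
    for i in range(1, num_frames + 1):
        t = i / (num_frames + 1)  # blend weight, strictly between 0 and 1
        frames.append(((1 - t) * a + t * b).astype(np.uint8))
    return frames
```

A crossfade like this only fades pixels between the two shots; the reason the real feature looks convincing is that its models move content (the child, the trampoline) along plausible paths instead of blending it in place.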
Many of the examples Google provided (below) look like the Ken Burns effect at first glance, but on closer inspection, you can see the animations, which range from subtle to dramatic. The results are somewhat creepy. They look realistic enough, but something about them doesn't feel quite right.
MyHeritage achieved a similar effect recently, using AI to animate photos of people's ancestors. That process only needs one image. However, the animation is accomplished through deepfake technology using videos of live actors to animate the face in the still.
Google did not say when Cinematic Moments would roll out but did say it would be available on both Android and iOS. Ben-Yair also noted that the process works with scanned photos, so users aren't limited to pictures taken with their phones.