Lytro camera lets you refocus shots after they're taken

Matthew DeCarlo


A Silicon Valley start-up has vowed to redefine the meaning of "point-and-shoot" with a camera technology that allows users to take photos with no regard to image focus. According to Lytro Inc. founder and CEO Ren Ng, the company's upcoming Lytro cameras are equipped with an array of highly sensitive sensors that leverages light-field technology to capture the so-called "missing dimensions" of a picture.

Full disclaimer: we're no photography buffs. With that in the open, the concept behind Lytro's technology seems straightforward enough. The company's camera records all the data it can about the field of light it's exposed to, including the color, intensity and direction of individual light rays. The result is a highly adjustable digital image that, among other things, allows you to refocus shots after they're taken.
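Lytro hasn't published implementation details, but the standard approach in the light-field literature, the "shift-and-add" refocusing described in Ng's own academic work, is straightforward to sketch. Everything below (the 4D array layout, the grid sizes) is our illustration, not the company's code:

```python
# A minimal sketch of "shift-and-add" light-field refocusing, the standard
# technique from the light-field literature (not Lytro's actual code).
# Assumes the camera yields a 4D light field: a grid of sub-aperture
# views, one per (u, v) position across the lens aperture.
import numpy as np

def refocus(light_field, alpha):
    """Synthesize a photo focused at a depth controlled by `alpha`.

    light_field: array of shape (U, V, H, W), U*V grayscale sub-views.
    alpha: refocus parameter; 1.0 keeps the original focal plane,
           values above or below shift focus farther or nearer.
    """
    U, V, H, W = light_field.shape
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            # Each view is shifted in proportion to its offset from the
            # aperture center, then all views are averaged. Points at the
            # chosen depth line up and stay sharp; everything else blurs.
            du = int(round((u - U // 2) * (1 - 1 / alpha)))
            dv = int(round((v - V // 2) * (1 - 1 / alpha)))
            out += np.roll(light_field[u, v], shift=(du, dv), axis=(0, 1))
    return out / (U * V)

# Toy usage: a random 5x5 grid of 64x64 views, refocused after the fact.
lf = np.random.rand(5, 5, 64, 64)
near = refocus(lf, alpha=0.8)
far = refocus(lf, alpha=1.2)
```

The intuition: each sub-aperture view sees the scene from a slightly different position across the lens. Shift the views so a chosen depth lines up, average them, and that depth stays sharp while everything else blurs, exactly like choosing a focal plane after the fact.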

Speaking with the Wall Street Journal, Ng compared light-field technology to present-day audio recording. Instead of recording multiple musicians simultaneously, modern multitrack studios record them separately so the volume and other effects can be tweaked independently. Whereas your digital camera records the total sum of a scene's light rays, a light-field picture "can tell a story in a new way," Ng says.

We've embedded a light-field image above for you to play with (more here). You can click anywhere on the picture to shift the focus of the shot (double click to zoom). Although the refocusing aspect of Lytro's technology seems to be garnering all the hype, the company's camera can also capture images in very low-light conditions without a flash, and you can also create 3D images that don't require special glasses.

Lytro expects to launch its first camera toward the end of this year, but it hasn't shared any pricing details, saying only that the device will be "reasonably priced" for consumers. Assuming it's priced within the grasp of the average shopper, many believe Lytro's innovative offering could render existing digital cameras obsolete. Looking beyond still imagery, Ng eventually hopes to bring light-field technology to the video industry.


 
As an amateur photographer, the only way I see them being able to do this is to have the camera automatically focus on the nearest object and shoot with a smaller aperture (f/8 or higher). On small point-and-shoots the aperture is usually quite small as it is, and it is the physical size of the hole that determines depth of field, not its proportion to the focal length (which is what the f-number expresses). If depth of field is maximized, an innovative program could then blur the appropriate sections automatically, using the blurred part as a layer. I can do this in Photoshop, but it would take a while (there are probably faster ways). It is easy enough to do by setting the aperture, but there is no way to unblur a photo once it has been taken, which is why I say the above.
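That blur-layer step could at least be scripted rather than painted by hand in Photoshop. A rough sketch, assuming you somehow already had a depth map for the scene (a big assumption):

```python
# Rough sketch of the "blur layer" idea: fake shallow depth of field on an
# all-in-focus image by blending in a blurred copy, weighted by how far each
# pixel's depth is from the chosen focal depth. Assumes a depth map exists.
import numpy as np
from scipy.ndimage import gaussian_filter

def fake_refocus(image, depth, focal_depth, falloff=0.25):
    """image: 2D grayscale array; depth: per-pixel depth in [0, 1]."""
    blurred = gaussian_filter(image, sigma=5)
    # 0 where depth == focal_depth (stay sharp), approaching 1 elsewhere.
    weight = np.clip(np.abs(depth - focal_depth) / falloff, 0.0, 1.0)
    return image * (1 - weight) + blurred * weight

img = np.random.rand(128, 128)                     # stand-in for a sharp photo
depth = np.tile(np.linspace(0, 1, 128), (128, 1))  # fake depth ramp
foreground_focused = fake_refocus(img, depth, focal_depth=0.1)
```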
 
Guest said:
As an amateur photographer, the only way I see them being able to do this is to have the camera automatically focus on the nearest object and shoot with a smaller aperture (f/8 or higher). On small point-and-shoots the aperture is usually quite small as it is, and it is the physical size of the hole that determines depth of field, not its proportion to the focal length (which is what the f-number expresses). If depth of field is maximized, an innovative program could then blur the appropriate sections automatically, using the blurred part as a layer. I can do this in Photoshop, but it would take a while (there are probably faster ways). It is easy enough to do by setting the aperture, but there is no way to unblur a photo once it has been taken, which is why I say the above.

The article says that the camera will use "an array of highly sensitive sensors". It is likely that each sensor takes an individual picture with a different focus. However, I'm betting image quality suffers since each picture will have been taken with a relatively small sensor.
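If that guess is right, refocusing would amount to picking, per region, whichever capture is sharpest there, which is essentially focus stacking. A rough sketch of that idea (my speculation, not anything Lytro has confirmed):

```python
# Sketch: given several captures of the same scene at different focus
# settings, pick the sharpest one per pixel using local Laplacian energy
# (a common focus measure). This is focus stacking, not Lytro's method.
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def pick_sharpest(stack):
    """stack: array (N, H, W) of N differently-focused captures."""
    # Local sharpness: smoothed squared Laplacian response per capture.
    sharpness = np.stack([uniform_filter(laplace(im) ** 2, size=9)
                          for im in stack])
    best = np.argmax(sharpness, axis=0)           # (H, W) index map
    return np.take_along_axis(stack, best[None], axis=0)[0]

stack = np.random.rand(4, 64, 64)  # four toy captures
composite = pick_sharpest(stack)
```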
 
@Morgawr an array of sensors could determine the distance to an object to help the camera properly apply the blur layer. It would be overly complicated to have multiple pictures taken simultaneously, and image quality would diminish dramatically if different sections of the chip were used. In addition, I don't see how simultaneity could be achieved, as the aperture would have to shift for each focal plane; that alone would take too long, since multiple pictures would be required.
 
This seems like it would take most of the fun out of learning to take decent photographs.

Something similar to buying "Rock Band", then thinking you can play the guitar.

Although, I suppose this would be a godsend for the inbred and incontinent. Er, I mean "incompetent".
 
Why can't the focus be on the whole image? Who doesn't like to see the full picture, clear and precise? We don't usually enjoy selectively focused pictures.
 
The way it is described, it sounds like the wave front itself is captured in the same way that a hologram works. In the case of a hologram, a lens is not even needed. The eye forms the focused image at the time of viewing the hologram. If a digital hologram were taken, the image could then be reconstructed with standard techniques like an inverse Fourier transform. The reconstruction algorithm could focus any plane at any distance in the same way that a lens does. In fact, a lens is just an optical method of taking the Fourier transform. The problem is that a digital hologram takes a lot of memory and the reconstruction software requires a lot of computations. Reconstructing the color is also very tricky. It sounds like a great project for a smart group of engineers and scientists. I hope the investors realize the risk factors. I would certainly like to own such a camera if the price was not too high.
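To make the reconstruction step concrete, here is a minimal sketch of angular-spectrum propagation, a standard numerical way to refocus a recorded wavefront. It assumes you already have the complex field, which is exactly what a hologram provides and an ordinary sensor does not:

```python
# Sketch of the Fourier refocusing described above: the angular spectrum
# method numerically propagates a recorded complex wavefront to a chosen
# plane. Assumes a monochromatic complex field, the hard part a hologram
# solves and an ordinary intensity-only sensor does not.
import numpy as np

def propagate(field, distance, wavelength, pixel_pitch):
    H, W = field.shape
    fx = np.fft.fftfreq(W, d=pixel_pitch)
    fy = np.fft.fftfreq(H, d=pixel_pitch)
    FX, FY = np.meshgrid(fx, fy)
    # Transfer function of free space; evanescent components clipped to 0.
    arg = 1 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kernel = np.exp(2j * np.pi * distance / wavelength
                    * np.sqrt(np.maximum(arg, 0)))
    return np.fft.ifft2(np.fft.fft2(field) * kernel)

field = np.exp(1j * np.random.rand(256, 256))   # toy wavefront
refocused = np.abs(propagate(field, distance=0.05,
                             wavelength=633e-9, pixel_pitch=5e-6)) ** 2
```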
 
Why can't you spell?

The focus in an image can be used to make a specific subject pop, completely changing the nature and story of the picture, just as in the sample shot above. You can concentrate on the beauty of one of the girls, or on the beauty of the rose. Focusing on everything detracts from that.
 
Guest said:
Why can't the focus be on the whole image? Who doesn't like to see the full picture, clear and precise? We don't usually enjoy selectively focused pictures.

"we"? How nice of you to speak on behalf of the human population :p The reason for the focus on a certain part of the image is to make it stand out from the rest. Not being a photographer, I can't exactly explain it any better than that, but having been married to a photographer, I do understand the reasoning for it.
 
Guest said:
Why can't the focus be on the whole image? Who doesn't like to see the full picture, clear and precise? We don't usually enjoy selectively focused pictures.

You can. Check out the demo video at AllThingsD.
 
"we"? How nice of you to speak on behalf of the human population :p The reason for the focus on a certain part of the image is to make it stand out from the rest. Not being a photographer, I can't exactly explain it any better than that, but having been married to a photographer, I do understand the reasoning for it.
The reasoning follows. And BTW, the technique is called "selective focus".
Why can't the focus be on the whole image? Who doesn't like to see the full picture, clear and precise? We don't usually enjoy selectively focused pictures.
Utilization of great depth of field is always a sound technique, especially in fashion photography. It enables you to see not only a beautiful model, but every piece of garbage in the dump behind her, sharp as a tack and in glorious "living" color. Did I mention the dog s*** on the pavement in front of her? Yeah, well, let's make that sharp too.

I'd probably use a long lens and focus on the model, which would blur the dump in the background until it was unrecognizable, but that would just be stupid, now wouldn't it?
 
No, this uses synthetic aperture imaging. I know a company called Pelican Imaging has something similar, and Kodak is working on it as well. Basically, the image has very high resolution to begin with and is stored in a format that allows selective focusing to transform the data into what we think of as a viewable, focused image.
 
You can see from the post-shot photos that the focus truly is continuous. This is not something you can do by merely adapting existing digital photography technology using a traditional lens. Without a doubt, the results are impressive.

However, as a long-time photographer and journalist, I can see problems.

First of all, you have very shallow depth of field, meaning you only have one plane in focus at a time. Controlling depth of field, from shallow to long, is a key artistic tool of a photographer. Can Lytro offer the equivalent of aperture control? Or at least multiple plane focusing?
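If the camera really does capture separate sub-aperture views, one plausible answer (pure speculation on my part) is that aperture control falls out for free: average fewer views and depth of field grows, as in this hypothetical variant of shift-and-add refocusing:

```python
# Hypothetical extension of shift-and-add refocusing (speculation, not a
# confirmed Lytro feature): averaging only the central sub-aperture views
# emulates stopping down, deepening depth of field after capture.
import numpy as np

def refocus_stopped_down(light_field, alpha, radius):
    U, V, H, W = light_field.shape
    out, count = np.zeros((H, W)), 0
    for u in range(U):
        for v in range(V):
            if (u - U // 2) ** 2 + (v - V // 2) ** 2 > radius ** 2:
                continue  # drop outer views: a synthetic f-stop increase
            du = int(round((u - U // 2) * (1 - 1 / alpha)))
            dv = int(round((v - V // 2) * (1 - 1 / alpha)))
            out += np.roll(light_field[u, v], (du, dv), axis=(0, 1))
            count += 1
    return out / count

lf = np.random.rand(5, 5, 64, 64)                     # toy light field
deep = refocus_stopped_down(lf, alpha=1.0, radius=1)  # "small aperture"
```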

Second, if the camera is gathering information at an infinite number of planes, it either must be sacrificing resolution or it is producing a massive file. The larger the file, the longer the processing time, and the longer the shot-to-shot interval. It looks to me like the resolution is rather poor, but I presume this would improve with subsequent models. I wonder what sort of compression the camera will use, and how it would affect image quality. These demo shots probably use no compression.
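The file-size worry is easy to put rough numbers on. These figures are entirely made up, since Lytro has published no specs:

```python
# Back-of-envelope file size with purely illustrative numbers; Lytro has
# published no specs. Assume a 9x9 grid of 1-megapixel sub-views stored
# at 12 bits per color channel:
views = 9 * 9
pixels_per_view = 1_000_000
bits = views * pixels_per_view * 3 * 12       # 3 color channels
print(bits / 8 / 2**20, "MiB uncompressed")   # roughly 350 MiB per shot
```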

Third, what sort of shutter speeds will a Lytro camera offer? Presumably, it uses a new type of electronic shutter. The camera claims superior low light sensitivity, but that normally requires long shutter speeds. Will the camera automatically adjust the shutter speed according to the light, or offer manual shutter speed control? It sounds like the image sensor is capable of recording a very wide range of luminance, which would require a high bit format, and a large file size.

And finally, while the sample images are fun to play with, I see a possible problem. The far images seem out of focus, as do the very near. It looks very good at the mid-range subjects, but how does the Lytro perform at infinity? How does it perform with macro shots?

This is the first demonstration of this technology. It is truly impressive and has great potential. The possible use for 3D still photos or videos is tantalizing. It also has huge scientific potential: sending a camera like this to Mars would provide a lot more data about the surface. It also would make a good surveillance camera.

But I don't see the Lytro replacing a $1,000 DSLR or other serious digital camera any time soon. An experienced photographer knows how to focus a camera, any camera, from a DSLR to a point and shoot with a simple focus lock shutter. And with a good camera, a photographer can adjust the depth of field, for practical and artistic effect. A lot of amateurs know how to do all this, too.

Most of all, a serious photographer wants lots of detail, enough to give a sense of texture. A $1,000 digital camera can now produce an image with as much detail as an Ansel Adams scenic as reproduced in an art book, though nowhere near the detail of the original photo print. For me, that's thrilling.

Plus, you need enough extra detail to crop. I can make a very good picture using just a fraction of a 12 MP raw file. Will you be able to crop a Lytro image? Will there be the equivalent of raw files?

Oh, one more point. Ren Ng talks about the frustration of focus lag time. If you prefocus and use focus lock, this is negligible, less than one-tenth of a second. If you use manual focus and manual exposure, the delay falls to near zero. That's the way to go if you are shooting action shots. It's probably all the automatic gimmicks like face recognition and tracking that slow things down.
 
There is potential here for no more blurry pictures of Bigfoot, Nessie or UFOs!

As far as I understand, the majority of our best photography is still on film. I have heard that serious photographers prefer its performance and results.

This isn't hard to imagine when you consider that Lytro's camera will save to digital media as a file of data. I speculate that it will just be a larger file, holding more information about the scene and its various depths.
 
Or you can go to the "How It Works" page on the Lytro website:
http://www.lytro.com/science_inside
 
Wow, is that chick holding the other one's b*obs? I sense some ***** crap going on in that scene. You can see the pleasure in her eyes! Wish I were there! lol
 
Most everyone is taking what they currently know and applying it to this. That's OK, but every once in a while something big happens that takes the complex old way, throws it out, and does it in a new way that doesn't involve the old steps.

Maybe this is just a "trick" and it's using the old ways. But what if it were truly innovative? Or maybe we just don't think anything new can happen anymore.
 
Looks incredible... but what if I can't decide which woman to focus on?
 
This is nothing new. The seminal work was done in 1992, from a concept first described by Archimedes and popularized in 1903 with the advent of the stereograph. What Ng is describing is the technological advancement of a known principle, due to improvements made possible by increasingly powerful in-camera microprocessors. However, much room for skepticism remains.

The two classic constraints to overcome in this imaging era are avoiding sub-megapixel output from 12-megapixel (and higher) sensors and the fixed apertures that produce shallow depth of field. As it turns out, this is exactly what we are shown in simulation, and Ng offers no technical information beyond hand-waving hucksterism as to why or how his technological adoptions and breakthroughs have improved things. So far all we have is exquisitely crafted media hype, a compelling simulation, and no visible product. Not even a computer-generated mock-up, let alone a physical prototype, and this is supposed to be available by year's end?
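The arithmetic behind the sub-megapixel constraint is simple: a microlens design spends sensor pixels on direction as well as position. Illustrative numbers only:

```python
# The spatial-resolution trade-off in one line: a microlens design spends
# sensor pixels on direction as well as position. Illustrative numbers only.
sensor_megapixels = 12.0
directional_samples = 10 * 10   # hypothetical 10x10 views per lenslet
print(sensor_megapixels / directional_samples, "MP per refocused image")  # 0.12
```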

It is additionally notable that none of the major camera manufacturers, such as Canon, Nikon, Olympus et al., has developed processes and products derived from this methodology, even though it has long been known. I suspect the reason is that the technology is too limited in scope to be generally useful, and so it will remain should Ng ever bring his novelty product to market amid a swirl of hype and quick profit-taking.

What I find more amazing is major media outlets reporting on this without any background investigation. On the other hand, maybe I shouldn't be amazed.
 
What?! They only did this stuff now? Looks like we are still using rocks and climbing trees... but hey, it had to be done sometime...
 
I suspect that they're using a layered sensor.
They state that they can 'record the direction of a beam of light', and light travels in a straight line. To know the path of a straight line, you only need two points. So two image sensors, layered one on top of the other, would allow you to know exactly what light is coming from where.
This would also allow for the creation of 3D photos without two lenses, because to create a three-dimensional image, all you need to know is where the light is coming from. Traditionally, this has been done with two lenses, each with its own sensor, placed a distance apart and working in unison; that method lets the camera record the direction and distance of an object. The same can probably be done with a dual-layer sensor.
The dual-layer sensor would also allow a user to refocus an image once taken, because it would store the direction of each light ray; you could tell it to 'focus on that one, instead of this one' after the shot was taken.
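The geometry is trivial to write down. A sketch, assuming two parallel layers separated by a known gap:

```python
# Sketch of the two-point idea: a ray crossing known (x, y) positions on two
# parallel sensor layers a known gap apart fully determines its direction.
import numpy as np

def ray_direction(hit_front, hit_back, layer_gap):
    """hit_front, hit_back: (x, y) hits on each layer; layer_gap: separation."""
    (x0, y0), (x1, y1) = hit_front, hit_back
    d = np.array([x1 - x0, y1 - y0, layer_gap], dtype=float)
    return d / np.linalg.norm(d)  # unit vector along the incoming ray

# A hit displaced 2 um in x across a 10 um gap is ~11 degrees off-axis.
print(ray_direction((0.0, 0.0), (2e-6, 0.0), 10e-6))
```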

What I'm really interested in is this: will the photos store the light-direction information after you're done with any initial refocusing? And if so, what would this mean for photo-manipulation software?
 