Sony believes phone cameras will eclipse DSLRs within a few years

Shawn Knight

What just happened? Sony believes smartphone camera image quality is on pace to eclipse that of digital single-lens reflex (DSLR) cameras in the near future. During a recent business briefing, Sony Semiconductor Solutions (SSS) President and CEO Terushi Shimizu said they expect still images from camera phones to exceed what is possible with a DSLR within the next few years.

A combination of larger apertures and advanced image sensor technology will be instrumental in getting there, we're told.

Nikkei's report specifically mentions two-layer transistor pixel technology and artificial intelligence processing; the former effectively doubles how much light each pixel can capture.

Sony announced its two-layer transistor pixel tech last year. Traditional CMOS image sensors house both the photodiodes and pixel transistors on the same substrate, but Sony's new tech separates them on different substrate layers. According to the company, this approximately doubles saturation signal levels, widens the dynamic range and reduces noise - all of which result in improved image quality.
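As a very rough back-of-envelope (the electron counts below are illustrative assumptions, not Sony's published figures), doubling the saturation signal is worth about one extra stop, roughly 6 dB, of dynamic range:

import math

# Illustrative, assumed values -- not Sony's specs.
read_noise_e = 2.0         # read noise floor, in electrons
full_well_old_e = 6_000    # conventional pixel's full-well capacity
full_well_new_e = 12_000   # ~2x saturation signal with the stacked design

def dynamic_range_db(full_well, read_noise):
    # Dynamic range = 20 * log10(max signal / noise floor).
    return 20 * math.log10(full_well / read_noise)

print(dynamic_range_db(full_well_old_e, read_noise_e))  # ~69.5 dB
print(dynamic_range_db(full_well_new_e, read_noise_e))  # ~75.6 dB, i.e. ~6 dB more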

As TechRadar correctly highlights, smartphone cameras and standalone digital cameras have been moving in opposite directions for a while now. As smartphone camera quality continues to improve, more people are leaving their dedicated shooters at home or not buying them at all. Once they reach parity with DSLRs, there won't be a reason for the average consumer to pick one up.

Image credit: Vitaly Vlasov


 
I am pretty sure they do not mean Sony phones, but phones from other manufacturers that can actually use Sony sensors to produce decent photos.
 
I laugh every time I see articles like this.
Look at the SIZE of most smartphone image sensors. And by that, I DO NOT mean how many megapixels they have. Even a consumer-level DSLR sensor eclipses anything in a smartphone. Size matters! Also, this megapixel race doesn't, in reality, offer a "better" photo. Where the extra pixels come in handy on a smartphone is when you zoom and crop without losing a lot of detail.
Also, stuffing that many pixels into a tiny area means that, in less-than-ideal lighting, the sensor's gain has to be cranked up in a lot of cases to capture the photo, which can result in NOISE in the photo. Then the "AI" software works to reduce or eliminate the noise, which can result in a soft or flat photo.
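To put rough numbers on that (purely illustrative, assumed values): a pixel's signal-to-noise ratio is set by how many photons it actually collects, and turning up the gain amplifies signal and noise together, so it can't rescue the ratio:

import math

# Toy model, assumed values: SNR of one pixel under shot + read noise.
def pixel_snr(photons, read_noise_e=3.0):
    # Shot noise grows as sqrt(photons); gain scales signal and noise
    # equally, so only collecting more light improves this ratio.
    return photons / math.sqrt(photons + read_noise_e**2)

print(pixel_snr(400))     # tiny phone pixel in dim light: SNR ~ 20
print(pixel_snr(10_000))  # bigger DSLR pixel, same scene: SNR ~ 100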
My DSLR has only 26 megapixels. There is a reason it doesn't have or need 200 megapixels.
Not to mention, ONE of my camera lenses costs more than most smartphones.
If they were to go with an APS-C-size sensor and, say, a 5-10x OPTICAL f/2.8 zoom on a phone, then they might start to broach the DSLR realm. But that would take away from the "colorful, slim, sexy, stylish" aspect of a smartphone because it would be much thicker. That would put off the influencers & hollyweirdo types.
 
What's to stop the same tech from making its way into DSLRs?
It has to do with how light refracts in a prism. This is the same reason larger telescopes can take higher-resolution photos. There is a reason people pay tens of thousands of dollars for camera lenses. There is a fundamental property of light that makes it impossible for a proper lens to fit in something the size of a smartphone.
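For a rough sense of the scale involved (the apertures below are assumptions for illustration), the Rayleigh criterion puts the smallest resolvable angle at about 1.22 λ/D, so resolution improves in direct proportion to aperture diameter:

# Rayleigh criterion: theta ~ 1.22 * wavelength / aperture diameter.
wavelength_m = 550e-9  # green light

for name, aperture_m in [("phone lens (~5 mm, assumed)", 0.005),
                         ("DSLR lens (~80 mm, assumed)", 0.080)]:
    theta_rad = 1.22 * wavelength_m / aperture_m
    print(f"{name}: {theta_rad * 206265:.2f} arcseconds")
# The 16x larger aperture resolves ~16x finer angular detail.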

The underlying camera sensor tech in phones is already fantastic and the post-processing done by most phones today is just as good, but it's far more complicated than that. There is a reason people will spend hundreds of thousands on proper camera setups rather than get some Micro Four Thirds mirrorless.
 
I don't really buy this. As others have said, DSLRs can adopt new technologies as well. And the market for them has definitely shrunk, but the people who need them spend a lot of money on them. Sony itself releases new high-end cameras every year.

That said, I don't expect I'll ever buy a DSLR, and even my dad, the photography enthusiast of the family, hasn't bought a new one since 2006 and rarely bothers to take his along nowadays (despite it still having a few advantages, even with its age). The quality of camera phones has reached a level where combined with their convenience/portability, I don't see why I'd want to haul around a DSLR as a non-professional photographer.

Where I might be in the minority is that I still see a use case for traditional point-and-shoot cameras that are less bulky than DSLRs. The main reason is optical zoom, but larger sensors than on phones help too. I still carry my 2012 Sony point-and-shoot occasionally, and while in a lot of ways it has been eclipsed by my iPhone, I get pictures with it that the iPhone can't match, thanks to the zoom. A pocketable point-and-shoot with 16X zoom and 2019-level technology would be a tempting purchase, but Sony at least seems to have stopped updating its point-and-shoot offerings.
 
They very well may, and in some ways a few of them already have. Still, I'll put my old Deardorff 12x20 up against any of them for now. It's going to have to be a pretty good image to go up against a negative that's 12" x 20", especially when it's contact printed...
 
Not going to happen. The amount of light collected and focused by the 82 mm diameter of my Sigma 24-70 f/2.8 will, for decades, deliver better detail than a mobile phone lens of at most 1 cm ever could.
And even if mobile phone sensors become 10 times better, the same will be true for DSLR or mirrorless sensors, yet the amount of light and detail will still be significantly higher with a large lens.
Pictures taken on a mobile look nice - on a mobile. Oversaturated, artificially vibrant, eye-catching tech simply can't stand the comparison the moment you actually load the shots on a PC screen. Physics can't be cheated.
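The back-of-envelope behind that claim, taking the quoted diameters at face value (front-element diameter as a rough stand-in for the light-gathering aperture):

# Light gathered scales with aperture area, i.e. with diameter squared.
dslr_diameter_mm = 82.0   # Sigma 24-70 f/2.8 front element, as quoted above
phone_diameter_mm = 10.0  # ~1 cm phone lens, as quoted above

area_ratio = (dslr_diameter_mm / phone_diameter_mm) ** 2
print(f"The big lens collects roughly {area_ratio:.0f}x more light")  # ~67x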
 
There is a fundamental property of light that makes it impossible for a proper lens to fit in something the size of a smartphone.
No smartphone lens (and not even most DSLR lenses) is diffraction-limited by Rayleigh's criterion. The primary reasons for substandard smartphone image quality are the sensor's tiny size, which means not only poor light-gathering ability but also higher pixel cross-talk and more noise. This -- along with the fact that nearly all smartphone lenses use cheap plastic rather than optical glass -- accounts for the poor image quality.
 
No smartphone lens (and not even most DSLR lenses) is diffraction-limited by Rayleigh's criterion. The primary reasons for substandard smartphone image quality are the sensor's tiny size, which means not only poor light-gathering ability but also higher pixel cross-talk and more noise. This -- along with the fact that nearly all smartphone lenses use cheap plastic rather than optical glass -- accounts for the poor image quality.
That wasn't so much what I was talking about. Lenses small enough for a phone, in short, have problems focusing red, green and blue light at the same point. Expensive camera lenses can use a series of elements within a single lens that end up focusing the different wavelengths of light on a single point. They also do a better job of keeping brightness even across all corners of the sensor, as well as offering features like bokeh. Apple has done a great job of using AI to recreate these effects, but it's questionable where this will all go.
 
That's not what the Sony executive said at all. He was referring to P&S cameras, which are basically garbage with their now smaller-than-phone sensors.
 
Sure. And integrated graphics will be as powerful as discrete graphics. Onboard aud.... isn't... hmm.

Eclipse? Is Sony still making phones at that point? Catch up? I'll buy that.
 
I laugh every time I see articles like this[...]
Look at the SIZE of most smartphone image sensors[...]
My DSLR has only 26 megapixels. There is a reason it doesn't have or need 200 megapixels.
Not to mention, ONE of my camera lenses costs more than most smartphones...

Don't laugh. Smartphone sensors have at least 10x more investment and development than DSLR sensors. Not only that, but smartphone lenses also have 10-20x more development than those of DSLRs.

Let me explain:
- most smartphone sensors have more pixels because the SoC / AI in the phone captures one shot for detail
- then, with binning, those 108 MP (as an example) are combined to gather light and color as 12 MP
- from all those captures, the SoC combines detail + HDR/color/light into a single high-quality image (see the binning sketch below)
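A minimal numpy sketch of that binning step (the frame size here is a small stand-in; a 108 MP to 12 MP readout is the same 9:1 reduction):

import numpy as np

# Illustrative 3x3 binning: average each 3x3 block into one "super pixel".
raw = np.random.poisson(lam=50, size=(1200, 900)).astype(np.float64)

binned = raw.reshape(400, 3, 300, 3).mean(axis=(1, 3))
print(raw.shape, "->", binned.shape)  # (1200, 900) -> (400, 300)
# Averaging 9 noisy samples cuts shot noise by sqrt(9) = 3x per output pixel.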

This works well for photos but *still not* for video!

Photos out of a smartphone are better than the JPG (non-RAW) photos out of a DSLR. Of course, if people shoot RAW and spend hours fixing the DSLR photos in professional software, you get better photos out of the DSLR.

Where does the DSLR (>€1,500) still win?
- background blur, especially in wedding/marketing photos, where it is very important
- video, hands down.

So general non-RAW photos are clearly better on smartphones; the next step is video, but video is a much more demanding (sensor/lens) task, so not for another 3-5 years.

Sure. And integrated graphics will be as powerful as discrete graphics. Onboard aud.... isn't... hmm.

Your smartphone is always in your pocket and there for you. Apps/software will always be more advanced than what DSLRs offer. So if phones cover 98% of all photo- and videography, then DSLR sales will be a niche. At the moment only serious professionals need a DSLR; even I sold mine and shoot on my S22 Ultra + Osmo Pocket 2, and with the onboard software + Mac software I get good enough results. Many people with an iPhone 13 Pro + software get "pro" results.
 
Where does the DSLR (>€1,500) still win?
- background blur, especially in wedding/marketing photos...
That's bokeh. And depth of field control, signal-to-noise ratio, low-light performance, diffraction, dynamic range, photo burst rates, etc -- the DSLR wins on them all.

Smartphone sensors have at least 10x more investment and development than DSLR sensors.
False division. Most sensor makers (Sony, Samsung, Canon, etc) make sensors for both DSLRs and smartphones - and automotive sensors, industrial robots, defense weaponry, etc. etc. etc. There is no "separate tech stack" for a smartphone CMOS sensor.
 
That's bokeh. And depth of field control, signal-to-noise ratio, low-light performance, diffraction, dynamic range, photo burst rates, etc -- the DSLR wins on them all.

False division. Most sensor makers (Sony, Samsung, Canon, etc) make sensors for both DSLRs and smartphones [...] There is no "separate tech stack" for a smartphone CMOS sensor.
Bokeh, and I quote, "is defined as 'the effect of a soft out-of-focus background that you get when shooting a subject'". So you want to put a name on it to show off while saying the same thing I said.

About "depth of field control, signal-to-noise ratio, low-light performance, diffraction, dynamic range, photo burst rates": big sensors heat up and cannot make a read out as fast as small sensors (so most crop video out) nor dSLR have laser/lidar sensors to measure depth or high frame reads for one image. So small sensors achieve good results (low light, diffraction, dynamic range, etc.) without relying on the sensor / optic itself. That is not valid (yet) for video.

About "There is no "separate tech stack" for a smartphone CMOS sensor"": false. Yes there is. On most companies you have teams for small sensors (up to 1") and teams for big sensors. Most big sensors have very small optimizations over the years and rely mostly on the chipset/ software. Most small sensors improve greatly on each generation.

Smartphones have already completely killed the low-end and mid-to-low-end camera market; the next step is the midrange. The high-end market is not attractive to phones, as it relies on huge sensors and optics that are impossible to fit into small devices.

Only Apple, Xiaomi and Samsung spend more on sensors and lenses for ONE smartphone model and generation than goes into an entire full-frame DSLR generation. That's why most camera companies (Sony, Nikon, Canon, etc.) have such expensive lenses and accessories and try to dry out the sub-full-frame business.
 
Pretty sure they said this 3, 6, 9 years ago.... ?
I think what they always mean is "phone cameras in 3 years will eclipse today's DSLR cameras". And even that is never strictly true; they don't beat them in terms of optics, just use software tricks to compensate for smaller sensors and less light coming through the lens.
 
Bokeh, and I quote, "is defined as 'the effect of a soft out-of-focus background that you get when shooting a subject'". So you want to put a name on it to show off while saying the same thing I said.
If you wish to be pedantic, your definition is wrong; there is both foreground and background bokeh, and it isn't something you automatically "get" when shooting a subject. A wildlife photographer, for instance, may (or may not) intentionally create bokeh in the foreground while keeping the background razor-sharp.

nor do DSLRs have laser/LiDAR sensors to measure depth...
There are already external LiDAR stabilizers available for DSLRs. Nor does LiDAR improve image quality; Apple included it on the iPhone primarily for AR applications, so the point is moot.

...or high-speed frame reads for one image
You're confusing readout rates with frame rates. And while smaller sensors do have higher readout rates if one assumes constant power, that doesn't aid image quality. And camera users are far more concerned with *frame* rates, anyway -- and here, DSLRs excel.

About "There is no "separate tech stack" for a smartphone CMOS sensor"": false. Yes there is. On most companies you have teams for small sensors (up to 1") and teams for big sensors.
A separate team is not separate technology. Nor can any team get around the basic laws of physics. If small sensors and optics were superior, we'd have launched an iPhone into orbit rather than the 25-square-meter James Webb Space Telescope.
 