Sony's new image sensors are the first ever with integrated AI processing

Shawn Knight

Bottom line: No word yet on whether Sony's new image sensors will find their way into consumer products, although given advances in technologies like AR, you have to think someone will try their hand at it at some point.

Sony on Thursday announced two new image sensors which it claims are the first in the world to come with built-in artificial intelligence processing. Including AI processing functionality directly on the image sensor affords multiple benefits, we’re told.

Localized, or edge, processing speeds up the entire pipeline, especially when used in conjunction with cloud services, as it reduces the amount of data that needs to be transmitted. Because the extracted information is output as metadata rather than images, it also reduces privacy concerns and security risks. The combination lowers power consumption and communication costs as well.
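To get a feel for the bandwidth argument, here is a hypothetical sketch (the metadata format and bit depth are assumptions for illustration, not Sony's published spec) comparing one raw frame from a 12.3 MP sensor against a small detection record:

```python
# Hypothetical illustration (not Sony's actual API; bit depth and metadata
# format are assumed): why outputting metadata instead of frames saves
# bandwidth. Compare one raw frame against a small detection record.

import json

def frame_bytes(megapixels: float, bits_per_pixel: int) -> int:
    """Size of one uncompressed frame in bytes."""
    return int(megapixels * 1_000_000 * bits_per_pixel) // 8

def metadata_bytes(detections: list) -> int:
    """Approximate size of a JSON metadata record (labels, boxes, scores)."""
    return len(json.dumps(detections).encode("utf-8"))

raw = frame_bytes(12.3, 10)  # 12.3 MP at an assumed 10 bits per pixel
meta = metadata_bytes([{"label": "person",
                        "box": [120, 80, 340, 560],
                        "score": 0.93}])
# raw comes to roughly 15 MB, while meta is well under 100 bytes
```

Even generous metadata records come out several orders of magnitude smaller than the frames they describe, which is where the claimed savings in transmission, power, and communication costs come from.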

The new sensors are the IMX500 and the IMX501. Both are 1/2.3-type (7.857 mm diagonal) sensors with a 12.3 effective megapixel output. The IMX500 is a bare chip product while the larger IMX501 is a package product. Full chip specifics can be found over on Sony’s website.

A sensor of this nature has many potential uses. In a retail setting, for example, a camera could be set up at the entrance to count the number of shoppers entering the store. When installed on a shelf, it could help detect stock shortages, and when mounted on the ceiling, it could help create a heat map identifying where people tend to gather.
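As a rough illustration of the heat-map idea, here is a sketch assuming a simple, hypothetical metadata format in which each detected person is reported as a normalized (x, y) center coordinate:

```python
# Illustrative sketch with an assumed metadata format (Sony hasn't
# published one here): turning per-frame person detections into a coarse
# occupancy heat map by counting hits per grid cell.

from collections import Counter

GRID = 4  # divide the frame into a 4x4 grid of cells

def bin_detections(centers):
    """Count detections per (col, row) grid cell."""
    counts = Counter()
    for x, y in centers:
        col = min(int(x * GRID), GRID - 1)
        row = min(int(y * GRID), GRID - 1)
        counts[(col, row)] += 1
    return counts

heat = bin_detections([(0.10, 0.12), (0.15, 0.08), (0.85, 0.90)])
# two people land in the top-left cell (0, 0), one in the bottom-right (3, 3)
```

Accumulating these counts over a day of frames would reveal where shoppers cluster, without any image ever leaving the sensor.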

Sony is already sampling its IMX500 with industry partners and plans to do the same with the IMX501 starting next month.


 
These are sensors for compact cameras, GoPros, and drones. I was wondering when this feature would come to them. When they're put in full-frame camera sensors is when it gets really interesting!
 
How good is it compared to CPU/GPU processing?
I'd rather have an image sensor that produces better raw data.
 
All this "AI on the image sensor" tech sounds like PR spin on engineering-speak for "we improved the FPGA on the chip".
Yes, every time anybody uses modelling to do anything now they call it AI. Forms of regression, cluster modelling, etc. have been used in computing since the 1960s and have always been widely used in image processing, OCR, and so on, but now it's "AI" rather than just a mathematical formula to predict an outcome. It's just marketing BS, so you can safely ignore it.
 

What is the use case of this advancement? Streaming data more efficiently and securely?
 

Use case for full frames? If they are able to handle the massive amount of data a FF sensor generates in just a fraction of a second more efficiently, then you can pair high-MP/higher-dynamic-range/greater-bit-depth sensors with cheaper memory cards.

Take the D850, for example: it needs XQD memory cards to keep up with the output from the sensor. Technology like this might allow similar image size, dynamics, and depth in more entry-level FF and high-end CF models - and even more MP/range/depth in the flagship FF models.
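For a sense of scale, a quick back-of-envelope using approximate public D850 figures (45.7 MP, 14-bit raw, 7 fps bursts) shows the kind of uncompressed data rate the card interface has to absorb:

```python
# Rough back-of-envelope using approximate D850 figures (45.7 MP,
# 14-bit raw, 7 fps): the uncompressed data rate generated during a
# continuous burst, before any in-camera compression.

def burst_rate_mb_s(megapixels, bits_per_pixel, fps):
    """Uncompressed sensor output in MB/s during a continuous burst."""
    bytes_per_frame = megapixels * 1_000_000 * bits_per_pixel / 8
    return bytes_per_frame * fps / 1_000_000

rate = burst_rate_mb_s(45.7, 14, 7)  # roughly 560 MB/s before compression
```

In-camera raw compression brings the real write rate well below that, but the raw figure makes it clear why reducing data at the sensor is attractive.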
 

Ah OK, I see. Does that mean the information is being compressed so you can manipulate it less? For instance, cinema cameras rely on uncompressed data so that you can manipulate the raw image.

However, if it means the opposite, where I have loads more DR/information to play with, then hurry up and bring us this technology.

I'm using the P4K (m4/3) sensor and it's already bitrate hungry; luckily the external SSD solution can pretty much handle it. I can't imagine 6K-8K FF RAW at 60 to 120p and its data-handling requirements.
 
Don't think of it as compression, because it isn't compression. Think of it as more efficient signal processing.
 