Sony is the largest supplier of camera image sensors for smartphones. Examples of popular sensors include the 48MP IMX586, which was used by a bewildering variety of phones across price ranges in 2019 and was later succeeded by the 64MP IMX686. The company doesn’t deal only in smartphone image sensors, of course; it also has a hugely successful lineup of premium mirrorless cameras that are often held up as the gold standard. Its relentless pace of innovation shows no sign of letting up, as it has now announced the upcoming release of its first image sensors with AI processing: the IMX500 and the IMX501.
The Sony IMX500 and IMX501 are two upcoming models of intelligent vision sensors. Sony claims they are the world’s first image sensors to be equipped with AI processing functionality. According to the company, including AI processing on the image sensor itself enables high-speed edge AI processing and extraction of only the necessary data, which reduces data transmission latency, addresses privacy concerns, and cuts power consumption and communication costs compared to using cloud services.
It’s important to note that these two sensors aren’t intended for phone cameras. Instead, their applications are in the retail and industrial equipment industries, as well as in building “optimal systems” that link with the cloud.
Why integrate AI processing with the sensor itself? Sony explains that the spread of IoT has connected all types of devices to the cloud, making it commonplace for information obtained from those devices to be processed via AI on the cloud. The problems associated with such an approach include increased data transmission latency that hinders real-time information processing, security concerns among users about storing personally identifiable data in the cloud, and the increased power consumption and communication costs that come with cloud services.
The IMX500 and the IMX501 feature a stacked configuration consisting of a pixel chip and a logic chip, with the AI image analysis and processing functionality residing on the logic chip. The signal acquired by the pixel chip is processed via AI on the sensor itself, which eliminates the need for high-performance processors or external memory and, in turn, enables the development of edge AI systems.
The sensor outputs metadata instead of image information, which results in reduced data volume and addresses privacy concerns. The AI capability makes applications such as real-time object tracking with high-speed AI processing possible. Different AI models can be chosen by rewriting internal memory according to the user’s requirements or the conditions of the location where the system is being used.
The pixel chip of these sensors uses a back-illuminated (BSI) design and has approximately 12.3 effective megapixels for capturing information. The optical format is 1/2.3″ (7.857mm diagonal) with a 1.55-micron pixel size, and the sensors use a Bayer color filter array. They are capable of 4K at 60fps video recording without AI processing and 4K at 30fps video recording with AI processing. In addition to the conventional image sensor operation circuit, the logic chip includes Sony’s original DSP dedicated to AI signal processing, as well as memory for the AI model. With this, high-performance processors and external memory aren’t needed, which benefits edge AI systems.
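To put the reduced data volume into perspective, here is a rough back-of-the-envelope comparison (in Python) of raw 4K30 video output against metadata-only output. The per-frame metadata size below is an illustrative assumption, not a figure published by Sony.

```python
# Rough comparison of per-second output: raw 4K30 video vs. metadata only.
# The metadata size is an assumption for illustration; Sony has not published it.

frame_width, frame_height = 3840, 2160   # 4K resolution
bytes_per_pixel = 1.5                    # 8-bit YUV 4:2:0
fps_with_ai = 30                         # 4K30 when AI processing is enabled

raw_bytes_per_second = frame_width * frame_height * bytes_per_pixel * fps_with_ai
metadata_bytes_per_frame = 2_000         # assumed: a few detections/labels per frame
metadata_bytes_per_second = metadata_bytes_per_frame * fps_with_ai

print(f"Raw 4K30 video: {raw_bytes_per_second / 1e6:.0f} MB/s")      # ~373 MB/s
print(f"Metadata only:  {metadata_bytes_per_second / 1e3:.0f} KB/s")  # 60 KB/s
```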
Sony has also explained the workflow of the image sensors. The signals acquired by the pixel chip are run through an ISP, and AI processing is done in the process stage on the logic chip, with the extracted information output in the form of metadata, which reduces the amount of data handled. The actual image information itself is not output, which is beneficial for security and privacy. Users can also select the data output format, including ISP-format images (YUV/RGB) and ROI (Region of Interest) specific area extracted images.
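Sony hasn’t published a public API alongside this announcement, but a host application consuming the sensor’s output could conceptually look like the sketch below. Every class and function name here is hypothetical and only illustrates the data flow described above (pixel data → on-sensor ISP → on-sensor AI → metadata out).

```python
from dataclasses import dataclass
from typing import List, Tuple

# Hypothetical sketch: the host receives per-frame metadata from the sensor
# instead of image frames. None of these names come from Sony.

@dataclass
class Detection:
    label: str                       # e.g. "person"
    confidence: float                # 0.0 to 1.0
    bbox: Tuple[int, int, int, int]  # (x, y, width, height) in pixels

@dataclass
class FrameMetadata:
    frame_id: int
    detections: List[Detection]

def handle_frame(meta: FrameMetadata) -> None:
    # Only metadata leaves the sensor; no image data has to be transmitted
    # or stored, which is where the privacy and bandwidth benefits come from.
    for det in meta.detections:
        if det.confidence > 0.5:
            print(f"frame {meta.frame_id}: {det.label} at {det.bbox}")
```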
Sony is also promoting speed. It says that when a video is recorded using a conventional image sensor, it is necessary to send data for each individual output frame for AI processing, which results in increased data transmission and makes it difficult to achieve real-time performance. The new IMX sensors, on the other hand, perform ISP processing and high-speed AI processing (3.1ms for MobileNet V1) on the logic chip, completing the entire process within a single video frame. This, in turn, makes it possible to deliver “high-precision, real-time tracking of objects while recording video”, according to Sony.
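The single-frame claim is easy to sanity-check with Sony’s own figures: at 30fps, each frame lasts roughly 33ms, so a 3.1ms inference pass fits comfortably within one frame interval.

```python
# Sanity check of the single-video-frame claim using Sony's published figures.
fps = 30
frame_interval_ms = 1000 / fps   # ~33.3 ms per frame at 4K30
ai_processing_ms = 3.1           # Sony's quoted MobileNet V1 processing time

print(f"Frame interval: {frame_interval_ms:.1f} ms")
print(f"AI processing:  {ai_processing_ms} ms "
      f"({ai_processing_ms / frame_interval_ms:.0%} of the frame budget)")
# Leaves roughly 30 ms per frame for ISP work and metadata output.
```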
Finally, users can write the AI models of their choice to the embedded memory and update them according to their requirements or the conditions of the location where the system is being used. Sony gives an example in which multiple cameras employing this product are installed in a retail location, so a single type of camera can be used with versatility across different locations, times, and purposes. When installed at the entrance to the facility, a camera can count the number of visitors entering; when installed on a store shelf, it can detect stock shortages; when installed on the ceiling, it can be used for heat mapping of store visitors, and so on. The AI model in a given camera can also be rewritten, for example from one used for heat mapping to one for identifying consumer behavior.
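As a sketch of how such a deployment might be organized, the snippet below maps camera placements to AI models and rewrites the model in a camera’s embedded memory. The model file names, the camera handle, and the write_model() call are assumptions for illustration; they are not part of any Sony SDK.

```python
# Hypothetical illustration of choosing an AI model per camera placement and
# writing it to the sensor's embedded memory. Nothing here is a real Sony API.

MODELS_BY_PLACEMENT = {
    "entrance": "visitor_counting.model",  # count visitors entering the store
    "shelf":    "stock_shortage.model",    # detect empty shelf space
    "ceiling":  "heat_mapping.model",      # map where visitors spend time
}

def configure_camera(camera, placement: str) -> None:
    model_file = MODELS_BY_PLACEMENT[placement]
    camera.write_model(model_file)  # assumed call: rewrite the embedded AI model

# The same physical camera can later be repurposed, e.g. from heat mapping to
# consumer-behavior analysis, simply by writing a different model to it.
```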
The announcement of these image sensors is indeed a notable achievement, though it comes with its own implications for surveillance. Right now, the sensors are intended for low-power solutions such as security cameras. Smartphone cameras may incorporate this technology a few years down the line, but it’s not on the cards for now, as the embedded logic chip can’t match the versatility offered by modern ISPs (such as the Spectra ISP in Qualcomm Snapdragon SoCs). Smartphone cameras currently work differently: the sensor itself is relatively dumb and pairs with a smart ISP that is part of the phone’s SoC. The ISP does all the work of image processing, which means the role of the sensor itself is quite limited in modern smartphone cameras; for image quality, image processing matters more than having great camera hardware. Computational photography is the new buzzword, but in the industrial world, the IMX500 and IMX501 attempt to bring that AI processing glory to the sensors themselves.
The IMX500 and IMX501 are scheduled to launch in products next year. Sony plans to release samples of the products in April and June 2020 respectively. For more information about their specifications, readers are advised to check out the source link.
Source: Sony