
Friday, October 2, 2020

How Qualcomm is Improving the Camera Experiences on Android Phones with its Spectra ISPs

Qualcomm is one of the giants of the chipmaking industry. The U.S.-based company designs system-on-chips (SoCs) for smartphones, wearables, and more. The Snapdragon line of SoCs is used by nearly every major Android device maker for flagship, mid-range, and budget smartphones. Qualcomm earns plaudits every year at its annual Tech Summit for advancements in the CPU, GPU, and AI fields, as it incorporates ARM's new CPU microarchitectures and complements them with yearly improvements in its custom GPUs. Its advancements in the field of cameras, however, tend to go under the radar.

This doesn't mean, however, that Qualcomm's work in smartphone cameras is unimportant. On the contrary, Qualcomm's Spectra ISPs in its Snapdragon SoCs help make much of modern smartphone photography possible, with increased computational processing power, features such as 8K video recording, HDR10 video, support for high-megapixel QCFA cameras, and much, much more. Qualcomm promoted the Spectra 380 ISP in the Snapdragon 855 as the world's first CV-ISP, and it promoted the world's first 4K HDR video recording capabilities, which have since been supplemented by second-generation 4K HDR10+ video recording. The Spectra 480 ISP in the latest-generation Snapdragon 865 is highly capable – it can process two gigapixels per second, a 40% increase over its predecessor. It's an intellectual property (IP) that differentiates Qualcomm from its competitors in the mobile chip vendor space.
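
As a rough back-of-the-envelope illustration of what that throughput budget buys (only the two gigapixels per second figure comes from Qualcomm; the rest is my own illustrative arithmetic):

```python
# Rough throughput arithmetic for a 2 gigapixel/second ISP budget.
# Only the 2 GP/s figure is Qualcomm's; the workloads below are illustrative.

ISP_THROUGHPUT = 2_000_000_000  # pixels per second (Spectra 480 headline figure)

# 8K UHD video: 7680 x 4320 pixels per frame at 30 frames per second
pixels_8k30 = 7680 * 4320 * 30
print(f"8K30 video needs ~{pixels_8k30 / 1e9:.2f} GP/s "
      f"({pixels_8k30 / ISP_THROUGHPUT:.0%} of the budget)")

# A hypothetical 200MP still: how many full-resolution frames per second fit?
print(f"200MP stills: ~{ISP_THROUGHPUT / 200_000_000:.0f} frames per second")

# 64MP at 30fps (the spec-sheet case for MFNR with zero shutter lag)
pixels_64mp_30fps = 64_000_000 * 30
print(f"64MP @ 30fps needs ~{pixels_64mp_30fps / 1e9:.2f} GP/s")
```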

While Qualcomm explains most of the headline features in its press releases and product keynotes, up until now, consumers haven't had a chance to learn about most of the low-level details that make these things work.

That's why we at XDA Developers were happy to accept an offer to speak with Judd Heape, Senior Director, Product Management at Qualcomm. XDA's Editor-in-Chief, Mishaal Rahman, and I interviewed Judd in June 2020 to learn how Qualcomm is pushing the envelope in smartphone photography and video recording. We spoke about topics including AI image processing, multi-frame noise reduction (MFNR), AV1, Dolby Vision video recording, pixel binning in high-megapixel cameras, and much more. Let's take a look at Judd's insights on each topic, one by one:


AI image processing workloads

Mishaal Rahman: I'll start off with one of the questions Idrees had, which is an interesting one that I was also curious about. We're wondering: what are the AI image processing workloads that Qualcomm uses in the Spectra ISP, and to what degree are they customizable by device makers?

Judd Heape: Yeah, so we look at a lot of AI workloads, and there are some that can run in the ISP itself – for example, our next-generation 3A (auto exposure, auto white balance, and auto focus) is AI-based.

But we also look at a few other AI workloads which would run outside of the ISP, in one of the other computing elements. In particular, we have an AI-based noise reduction core which runs externally, in the AI part of the chip.

Also, we have things like face detection, which is a full deep learning engine that also runs in that complex but, of course, assists the camera. And there are other things we're working on beyond face detection and denoising: we're also looking at an auto-adjust picture core using AI that would automatically set things on a per-scene basis, based on HDR content-aware processing, to modify shadows, highlights, color, and that sort of thing.

One of our partners just won a huge AI workload award. Our independent software vendor partners also have a lot of really intense AI-based algorithms, and those can range from things like smooth camera transitions, like what Arcsoft does – I mentioned that at the last tech day, and that's AI-based too – to Morpho's semantic segmentation engine, which is an AI engine that understands, live, the different parts of the scene: what is fabric versus skin versus sky versus grass versus buildings, and that sort of thing. The ISP can then take that information and process those pixels differently for texture, noise, and color, for example.

Qualcomm’s statement: For ML & AI we’re also not announcing any new updates for the features of face detection and “3A” (AE, AF and AWB) today, either. However, as Judd said, we are committed, going forward, to bringing more ML/AI capability to the camera, including these two feature areas.


Analysis and context: AI in smartphones has largely been considered a buzzword ever since the first neural processing units (NPUs) and "AI-based" features started arriving in Android phones. That doesn't mean AI itself is meaningless, though. On the contrary, AI has a lot of potential in mobile, to the point where chip vendors and device makers alike have so far only scratched the surface of what's possible.

Thanks to AI, smartphone cameras have become better – sometimes quickly, sometimes agonizingly slowly, but they are getting there. Smartphone cameras are overcoming fundamental limitations such as relatively small sensors, fixed focal lengths, and poorer optics with smart computational photography powered by machine learning (ML). Auto exposure, noise reduction, face detection, and segmentation are only some of the fields where AI in smartphone photography has been able to make an impact. Over the next five years, these nascent applications of AI to different aspects of photography will mature considerably.
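
To make the segmentation idea concrete – an engine labelling sky, skin, fabric, and so on, with the ISP then treating each class differently for noise and texture – here is a minimal, hypothetical sketch. The class labels, strengths, and the box-filter "denoiser" are purely illustrative; they are not Qualcomm's or Morpho's actual pipeline:

```python
import numpy as np

# Hypothetical per-class denoise strengths; real tuning is OEM/ISV-specific.
DENOISE_STRENGTH = {"sky": 0.8, "skin": 0.5, "fabric": 0.2, "foliage": 0.3}
CLASS_IDS = {0: "sky", 1: "skin", 2: "fabric", 3: "foliage"}

def box_blur(img: np.ndarray, k: int = 3) -> np.ndarray:
    """Cheap stand-in for a real denoiser: a k x k box filter."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=np.float32)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def segmentation_guided_denoise(img: np.ndarray, seg: np.ndarray) -> np.ndarray:
    """Blend original and denoised pixels using per-class strengths, so smooth
    regions (e.g. sky) get heavy smoothing while textured ones keep detail."""
    denoised = box_blur(img)
    out = img.copy()
    for class_id, name in CLASS_IDS.items():
        alpha = DENOISE_STRENGTH[name]
        mask = seg == class_id
        out[mask] = (1 - alpha) * img[mask] + alpha * denoised[mask]
    return out

# Toy usage: a random "image" and a random segmentation map.
img = np.random.rand(64, 64).astype(np.float32)
seg = np.random.randint(0, 4, size=(64, 64))
result = segmentation_guided_denoise(img, seg)
```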


Multi-frame noise reduction

Idrees Patel: Qualcomm has been mentioning multi-frame noise reduction as a feature. I would like to know more detail about it, such as how the image stacking works. Is it similar in any way to what Google is doing with their HDR+ technology, or is it completely different?

Judd Heape: It's similar but different. Multi-frame noise reduction basically takes five to seven frames in very rapid succession. So imagine the camera doing a burst and capturing five to seven frames. The ISP engine then takes a look at those frames and picks the best one for focus and clarity, and it can pick frames on either side of that – say three to four frames on either side – and average them all together. What it does is try to pick frames that are close enough together that there's very little movement.

And when it finds those frames, it averages them together to discern what is different – what is actual image data and what is noise. When you have more and more information – more and more frames – you can do simple things like look at the differences between the frames. The differences are probably noise, whereas what's common across the frames is probably image data.

So we do this real-time frame combining to reduce noise. Now, you can also do the same thing for low light and HDR, and that's a lot like what Google is probably doing. We're not privy to their algorithm, but they're using multi-frame to increase sensitivity so that you can see; once you've reduced the noise floor, you can look more at doing local tone mapping, or applying gain to the image without adding more noise.

So that's how they get low light as well as HDR. Those enhancements to multi-frame noise reduction will be coming from Qualcomm, and they will also include low light and HDR. That is something we'll roll out shortly.

Mishaal Rahman: So you mentioned rolling out this feature shortly. Is that coming in like an update to the BSP for partners?

Judd Heape: In our next-generation products – this is a software addition – we will have the ability to engage with... actually, it's happening right now on the next-generation products. We're engaging with customers right now to do more multi-frame techniques beyond noise reduction, also handling HDR and low-light situations. It uses the same base ISP engine, but we're adding more software to handle these multi-frames for more than just noise reduction.

So it's not something that has rolled out yet, but we're engaging with some key lead customers on those features.


Analysis and context: With every new Snapdragon SoC announcement, Qualcomm's specifications table includes figures related to multi-frame noise reduction. The Snapdragon 865, for example, with its dual 14-bit CV-ISPs, supports up to a hypothetical 200MP single camera (even though camera sensor vendors such as Sony, Samsung, and OmniVision have yet to release any smartphone camera sensor above 108MP). However, when it comes to single camera support with MFNR, zero shutter lag (ZSL), and 30fps support, the specification drops to 64MP, and for dual cameras with the same features, it drops to 25MP.

Qualcomm's multi-frame noise reduction is very similar to HDR+ but not entirely the same, as explained by Judd above. While HDR+ takes a series of underexposed frames and averages them to get the best photo, MFNR takes five to seven normal frames. Qualcomm's MFNR doesn't seem to be as advanced as Google's solution, because HDR and low light are not mentioned as specific priorities in the current workflow for Spectra, while Google's HDR+ targets HDR, low-light photography, and noise reduction at the same time, with Night Sight taking it up a notch even further. However, it's encouraging to learn that MFNR is receiving enhancements and that Qualcomm will be rolling them out to "some key customers". In the future, maybe we won't need unofficial Google Camera ports to achieve the full potential of non-Google Android smartphone cameras.
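
A minimal sketch of the frame-averaging idea Judd describes – pick the sharpest anchor frame, keep only frames close to it, then average so that real detail reinforces while random noise cancels. This is illustrative only and skips the alignment, ghost rejection, and tuning a real MFNR pipeline needs:

```python
import numpy as np

def sharpness(frame: np.ndarray) -> float:
    """Proxy for focus/clarity: variance of a simple Laplacian response."""
    lap = (-4 * frame[1:-1, 1:-1] + frame[:-2, 1:-1] + frame[2:, 1:-1]
           + frame[1:-1, :-2] + frame[1:-1, 2:])
    return float(lap.var())

def mfnr(frames: list, max_motion: float = 0.02) -> np.ndarray:
    """Average frames that are close to the sharpest 'anchor' frame.

    Frames whose mean absolute difference from the anchor exceeds
    `max_motion` are dropped (too much movement to average safely).
    Averaging N similar frames reduces random noise by roughly sqrt(N).
    """
    anchor = max(frames, key=sharpness)
    keep = [f for f in frames if np.mean(np.abs(f - anchor)) <= max_motion]
    return np.mean(keep, axis=0)

# Toy usage: seven noisy captures of the same scene.
scene = np.random.rand(128, 128).astype(np.float32)
burst = [scene + np.random.normal(0, 0.05, scene.shape) for _ in range(7)]
denoised = mfnr(burst)
```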


Super resolution for video

Mishaal Rahman: So, something that I heard at the Tech Summit – actually, I think it was in an interview with Android Authority – is that Qualcomm is planning to extend super resolution to video as a software solution for partners, and that this would apparently be rolling out in an update. I'm wondering if you have any updates to share on this feature.

Judd Heape: Yes, so that’s a feature that we’ve had the ability to do for a while, and it’s just now rolling out. I wouldn’t say it’s like in a software update, but I would say it’s kind of like the multi-frame, low-light, that sort of thing. We are engaging with some specific lead customers on that feature. So yeah, video super resolution is something that we are – it’s available now, [we’re] rolling out to some key customers. Maybe in another generation or so we will have it as what we call a plan of record feature where it actually is built into the software code base for [the] camera. But right now, it’s more on the level of specific customer engagements for that new feature.


Analysis and context: Super resolution for video is a feature that, up until now, hasn't shown up in smartphone cameras. It's such a new field that research papers are still being written about it. Using multi-frame techniques for photography is one thing, but using them to upscale video to a higher resolution is an entirely different matter. Qualcomm says it's rolling out the feature to "some key customers" again, but right now, it's not built into the software code base for the camera. In the future, it may be available to everyone, but for now, it's a feature that end consumers haven't got to use yet.
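
In its simplest form, multi-frame super resolution exploits the small sub-pixel shifts between consecutive frames: each shifted capture samples the scene at slightly different positions, and merging them onto a finer grid recovers detail no single frame contains. A toy shift-and-add sketch of that idea (not Qualcomm's implementation):

```python
import numpy as np

def upscale_nearest(frame: np.ndarray, scale: int) -> np.ndarray:
    """Nearest-neighbour upscale onto the high-resolution grid."""
    return np.repeat(np.repeat(frame, scale, axis=0), scale, axis=1)

def multi_frame_super_resolution(frames, shifts, scale=2):
    """Merge low-res frames with known sub-pixel shifts onto a 2x grid.

    `shifts` holds each frame's (dy, dx) offset in high-res pixels, e.g.
    (0, 1) means the frame is shifted half a low-res pixel to the right.
    In a real pipeline these offsets come from motion estimation.
    """
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale), dtype=np.float32)
    count = 0
    for frame, (dy, dx) in zip(frames, shifts):
        up = upscale_nearest(frame, scale)
        acc += np.roll(up, (dy, dx), axis=(0, 1))
        count += 1
    return acc / count

# Toy usage: four frames, each offset by half a pixel in some direction.
lowres = [np.random.rand(32, 32).astype(np.float32) for _ in range(4)]
offsets = [(0, 0), (0, 1), (1, 0), (1, 1)]
highres = multi_frame_super_resolution(lowres, offsets)
```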


High-megapixel Quad Bayer sensors

Pixel binning

Via: AnandTech

Idrees Patel: Let's talk about Quad Bayer sensors. Since 2019, many phones have 48MP, 64MP, and now even 108MP sensors. These are Quad Bayer sensors; you don't actually have true color resolution of 48, 64, or 108MP. One thing I wanted to ask was: how does the ISP differ in terms of image processing for these Quad Bayer or Nona Bayer sensors (4-in-1 or 9-in-1 pixel binning) compared to traditional sensors, which don't have any pixel binning?

Judd Heape: Yeah, so of course the benefit of these quad CFA (Quad Color Filter Array) sensors is the ability in bright light to run them at full resolution, and then the ISP can process them at a full 108 megapixels or 64 megapixels or whatever is available.

However, typically in most lighting situations, like indoors or in the dark, you have to bin, because the sensor pixels are so tiny that binning is needed to get better light sensitivity. So I would say the majority of the time – especially if you're doing video, or if you're in low light for snapshots – you're running in binned mode.

Now, the ISP can process the sensor either way. You can look at the sensor in binned mode, in which case it's just a regular Bayer image coming in, or you can look at it in full resolution mode, in the quad CFA mode. And if it's in that mode, the ISP converts it to Bayer.

So we're doing some – what we call – remosaicing: some interpolation of the quad CFA image to make it look like full-resolution Bayer again. That is typically done in software for snapshots, although we are eventually going to add this capability in hardware.

What is in hardware today is binning. You can bin in the sensor – and you can actually have the sensor decide whether it's going to output full, quarter, or one-ninth resolution – or you can bin in the ISP. That's a feature we added in Snapdragon 865, actually. If you bin in the ISP and run the sensor at full resolution, what that gives the ISP is the ability to have both the full-resolution image and the binned image at the same time. Therefore it can use the smaller, binned image for video and preview – for camcorder and viewfinder – and simultaneously use the full-resolution image for full-size snapshots.

But again, that would be in the case of bright lighting conditions. At least if you bin in the ISP, you have the ability to handle both the big and the little image at the same time, and therefore you can get simultaneous video and snapshot, and you can get full-resolution ZSL, all without having to switch the sensor back and forth, which takes time.

It's a good feature. And as quad CFA sensors – and even the 9x sensors, and maybe even more – come out and become more ubiquitous, we're looking more and more to handle those sensors in hardware, not just for binning but also for remosaicing.

And the benefit of doing it in hardware versus doing it in software is that you reduce the latency for your customers, and therefore your shot-to-shot times and your burst rates will be much faster. So as we march forward with new ISPs and new chips, you'll start seeing a lot more of what we're doing for these new types of sensors put into hardware.


Analysis and context: Huawei was the first to use a 40MP Quad Bayer sensor, with the Huawei P20 Pro in 2018, and the popularity of Quad Bayer sensors is now so high that they have made their way to even $150 phones powered by Snapdragon, Exynos, and MediaTek chips. In particular, the smartphone industry has arrived at 48MP and 64MP cameras as the sweet spot, while a few phones go as high as 108MP. Quad Bayer and Nona Bayer sensors don't come without negatives, as their full resolution comes with caveats.

However, for marketing reasons, a 48MP sensor sounds a lot better than a 12MP sensor, even if the user is taking 12MP pixel-binned photos most of the time anyway. A 48MP sensor should theoretically result in better 12MP pixel-binned photos in low light than a traditional 12MP sensor, but the image processing has to keep up, and as I mention below, there is a long way to go for that to happen. Regardless, it was interesting to see how the Spectra ISP handles Quad Bayer sensors with remosaicing. There is a lot of potential in these sensors, and phones like the OnePlus 8 Pro (which uses a Sony IMX689 Quad Bayer sensor with large pixels) are currently at the pinnacle of smartphone cameras.
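
For reference, here is what the binning half of the Quad Bayer story looks like in code – averaging each 2×2 group of same-colour pixels into one larger effective pixel. The remosaicing step Judd describes (interpolating the quad pattern back into a standard Bayer mosaic at full resolution) is considerably more involved, so this sketch covers binning only:

```python
import numpy as np

def bin_2x2(raw: np.ndarray) -> np.ndarray:
    """Average each 2x2 block of same-colour pixels into one output pixel.

    On a Quad Bayer sensor the four pixels in each 2x2 block share a colour
    filter, so averaging them trades resolution (e.g. 48MP -> 12MP) for
    roughly a 4x larger effective light-collecting area per output pixel.
    """
    h, w = raw.shape
    assert h % 2 == 0 and w % 2 == 0, "raw frame must have even dimensions"
    blocks = raw.reshape(h // 2, 2, w // 2, 2)
    return blocks.mean(axis=(1, 3))

# Toy usage: a small stand-in raw frame (a real 48MP frame would be ~8000x6000).
raw = np.random.randint(0, 1024, size=(600, 800)).astype(np.float32)
binned = bin_2x2(raw)
print(raw.shape, "->", binned.shape)  # (600, 800) -> (300, 400)
```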


ML-based facial recognition

Mishaal Rahman: So I think earlier you mentioned that ML-based facial recognition is supported in the Spectra 480. That's something I actually heard at the Tech Summit – that this is one of the improvements from the 380 to the 480, and that there's a new object detection block in the video analytics engine that's used for facial recognition going forward.

Can you talk more about how much this improves facial recognition and what potential applications do you see it being used by vendors?

Judd Heape: Yeah, actually, you're right: it's in the embedded computer vision block, the EVA block, which we talked about at Tech Summit. That has a general object detection core, and when the camera is running, we're using it to detect faces. The techniques in that block are more traditional – the object recognition is done with traditional classifiers – but on top of that, we have a software engine running to improve the accuracy of that block.

So we're using ML-based software to filter out the false positives. The hardware might detect more things as faces in the scene, and then the ML software says, okay, that's really not a face, and so it's increasing the accuracy by a few percentage points by running that ML filter on top of the hardware.

I mentioned a lot of things about the future. Going forward, what we also plan to do is run the actual face detection itself in ML, or in deep learning mode, in software. That will especially be true at the lower tiers: for example, in a tier where we don't have the EVA engine – the hardware engine – we will start to phase in deep learning face detection, which runs in the AI complex of the chip, and then later on, in the upper 700 and 800 tiers, when we do have the EVA…

I will say, in general though, that we will be moving more toward ML approaches for face detection, and that includes both software in the medium term and hardware in the longer term. I'm not going to disclose which products will have it, but of course, as we march forward in improving the ISP, we will be adding more and more hardware capability to do ML, for sure.

Mishaal Rahman: Awesome. Well, I think it’s a given that the direction you’re going is bringing the 800 series’ machine learning improvements down to the lower tier, so I think that’s generally a given. But of course, no specifics you can give us on that. Thank you for the update.

Judd Heape: Face detection is something that we’re very passionate about. We want to improve these accuracies, you know generation over generation in all tiers all the way from 800 tier down to the 400 tier. ML is a big part of that.


Analysis and context: These aspects are what give smartphone photography so much potential, even compared to the latest mirrorless cameras. Yes, mirrorless cameras have better image quality in low light and are much more flexible, but smartphone cameras are overcoming their limitations in ingenious ways. ML-based face detection is only one part of that.
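
As a thought experiment, the two-stage flow Judd describes – a fast traditional detector in the hardware vision block proposing candidate faces, with an ML classifier in software rejecting false positives – could be sketched as follows. Everything here is a toy stand-in, not Qualcomm's EVA block or its actual models:

```python
import random
from dataclasses import dataclass

@dataclass
class Detection:
    x: int
    y: int
    w: int
    h: int
    score: float  # confidence from the first-stage (traditional) detector

def traditional_detector(frame) -> list:
    """Toy stand-in for a fast classical detector (e.g. boosted cascades)
    running in a hardware vision block: cheap, but over-eager, so it
    proposes some non-faces along with the real ones."""
    return [Detection(x=random.randint(0, 200), y=random.randint(0, 200),
                      w=48, h=48, score=random.random()) for _ in range(5)]

def ml_face_verifier(frame, det: Detection) -> float:
    """Toy stand-in for a small deep-learning classifier run in software on
    the AI engine; returns the probability that the crop really is a face."""
    return random.random()

def detect_faces(frame, verify_threshold: float = 0.6) -> list:
    """Stage 1 proposes candidates cheaply; stage 2 rejects false positives,
    buying a few percentage points of accuracy over the hardware detector."""
    candidates = traditional_detector(frame)
    return [d for d in candidates
            if ml_face_verifier(frame, d) >= verify_threshold]

faces = detect_faces(frame=None)
print(f"{len(faces)} candidate(s) survived the ML verification stage")
```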


Improvements in the image processing engine

Mishaal Rahman: Awesome. So one of the things that I briefly heard during the roundtable discussions after the Snapdragon Tech Summit was an improvement to the image processing engine. I heard that there's been improved low and middle frequency noise reduction, or LANR, and that you're applying a dynamic reverse gain map; is that something you mentioned earlier in the conversation?

Judd Heape: Oh, okay. So I think you're mixing two things together. The LANR core is the core that works on noise reduction at a more coarse grain, which helps in low light. That's a new block which was added to the ISP in Snapdragon 865, and that's one thing.

The reverse gain map is something else. That's something I mentioned at the roundtables, but it's there to reverse the effects of lens shading. As you know, if you have a handset with a small lens, the center of the lens is going to be bright and the edges are going to be more vignetted; they're going to be darker.

And so in years past in the ISP, we've applied a reverse gain map to get rid of those dark edges, and that's been in the ISP for quite some time. What we added in Snapdragon 865, though, is the ability for that gain map to change with the image, because if you apply a lot of gain to the edges, the edges can get clipped – especially in bright scenes outside, where blue sky can turn white toward the edges if you apply a lot of gain.

So in Snapdragon 865, that reverse gain map is not static; it's dynamic. We look at the image and say, okay, these parts of the image are being clipped and they shouldn't be, so we can roll off the gain map naturally so that you don't get bright fringes or halo effects or that sort of thing from correcting the lens shading. So that's different from noise reduction; they're two different cores.
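
Lens shading correction in its basic form multiplies each pixel by a radial gain that is largest at the corners; the dynamic behaviour Judd describes amounts to rolling that gain off wherever applying it would clip highlights. A simplified sketch of the idea (my own formulation, not the actual Spectra algorithm):

```python
import numpy as np

def radial_gain_map(h: int, w: int, max_gain: float = 2.0) -> np.ndarray:
    """Static reverse gain map: 1.0 at the optical centre, `max_gain` at the
    corners, countering vignetting that darkens the edges of the frame."""
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2, (w - 1) / 2
    r = np.hypot(yy - cy, xx - cx) / np.hypot(cy, cx)  # 0 at centre, 1 at corner
    return 1.0 + (max_gain - 1.0) * r ** 2

def apply_dynamic_lsc(img: np.ndarray, white_level: float = 1.0) -> np.ndarray:
    """Apply the gain map, but roll the gain off where it would clip.

    For each pixel, the usable gain is capped at white_level / pixel_value,
    so bright edge regions (e.g. blue sky) are not pushed into clipping.
    A real implementation would smooth this roll-off spatially.
    """
    gain = radial_gain_map(*img.shape)
    headroom = white_level / np.maximum(img, 1e-6)  # gain that would just clip
    return img * np.minimum(gain, headroom)

# Toy usage: a dim image in [0, 0.9] with simulated vignetting already baked in.
img = (np.random.rand(480, 640) * 0.9).astype(np.float32)
corrected = apply_dynamic_lsc(img)
```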


Low light photography and aggressive noise reduction


Sony Xperia 1 II, a Snapdragon 865-powered flagship

Idrees Patel: So one thing I wanted to ask about was low light photography. Like in the past few years, there have been a lot of [OEM-implemented] night modes, but one thing I’ve been noticing is that many device makers go for aggressive noise reduction, which reduces detail, to the point where even luminance noise is removed.

So my question is: is Qualcomm advising device makers not to do that, and is it something their processing pipelines do, or is it something influenced by the ISP in the SoC?

Judd Heape: Yeah, a lot of that has to do with tuning, and with whether you have multi-frame, or, I would say, whether a very good image sensor is available, with high sensitivity or optics with low f-numbers. One way to get rid of noise in low light in particular is to apply more noise reduction, but what happens when you apply more noise reduction is that you lose detail.

So sharp edges become blurry. Now, you can avoid that if you apply these multi-frame techniques, or if you apply AI techniques, which can sort of figure out where the edges of objects and faces are, and that sort of thing. So applying just brute-force noise reduction in this day and age is not really the way to handle it, because you end up losing detail.

What you want to do is use multi-frame or AI techniques so that you can still apply noise reduction to the interior areas of objects while keeping nice, clean edges – keeping sharp edges on objects. So I would say using either AI or multi-frame is the way to do noise reduction and improve imagery in low light going forward.

Idrees Patel: Yes, and that’s exactly what I wanted to hear. [It’s] because that’s the main thing that separates great smartphone cameras from middle-tier or budget-tier cameras.

Judd Heape: Yeah.

Idrees Patel: Great smartphone cameras know when to apply noise reduction and when not to.

Judd Heape: Exactly. Yeah, and like I said, camera tuning is really done by our customers, the OEMs, and some OEMs prefer a softer image with less noise. Some prefer to reveal more detail with maybe a little bit more noise.

So it's a trade-off, and you have limitations. Like I said, the best thing to do is get a better image sensor with higher sensitivity and bigger pixels, or lower f-number optics, because then you get more light in from the start, which is always better. But if you can't do that, then instead of just cranking up the noise reduction and losing detail, what you want to do is use multi-frame or AI techniques.


Analysis and context: This, in my opinion, is currently the biggest issue with smartphone cameras. Yes, you can use a 48MP, 64MP, or even a 108MP sensor. However, if you don't opt for restrained noise reduction with MFNR or AI techniques, all of those megapixels and all that 4-in-1 or even 9-in-1 binning aren't of much use. The Galaxy S20 Ultra is the prime example here, as its 108MP primary camera was largely held to be a disappointment. Samsung went backwards in image processing by using extremely aggressive noise reduction in the night modes of its 2020 flagships, while the 2019 Galaxy S10 series ironically had better image quality.

Judd reveals that some OEMs actually prefer a softer image with less noise, which is fundamentally the wrong choice to make. Tuning is done by device makers, and hence two phones using the same sensor and powered by the same SoC can output very, very different photos. One can only hope that these device makers learn from their better-performing competitors. While Samsung lost its way in image processing this year, OnePlus has been a stark contrast. The OnePlus 8 Pro is one of the best smartphone cameras on the market, which is a notable achievement considering the very poor output of the OnePlus 5T's camera in 2017. The image processing mindset has to change for photos to come out sharp, no matter how much the megapixel wars rage on.
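
The principle Judd outlines – denoise flat interior regions aggressively while leaving edges alone – is essentially edge-aware filtering. A compact sketch using a gradient-based edge mask (illustrative only; real pipelines use far more sophisticated multi-frame or learned filters):

```python
import numpy as np

def box_blur(img: np.ndarray, k: int = 5) -> np.ndarray:
    """Simple k x k box filter as a stand-in for a heavier denoiser."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=np.float32)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def edge_aware_denoise(img: np.ndarray, edge_sensitivity: float = 10.0) -> np.ndarray:
    """Blend blurred and original pixels: flat areas get smoothed, edges kept."""
    gy, gx = np.gradient(img)
    edge_strength = np.hypot(gx, gy)
    # Weight in [0, 1]: ~1 in flat regions (full denoise), ~0 on strong edges.
    flatness = np.exp(-edge_sensitivity * edge_strength)
    return flatness * box_blur(img) + (1.0 - flatness) * img

# Toy usage: a step edge plus noise; the edge survives, the flat areas smooth out.
img = np.zeros((100, 100), dtype=np.float32)
img[:, 50:] = 1.0
noisy = img + np.random.normal(0, 0.05, img.shape).astype(np.float32)
clean = edge_aware_denoise(noisy)
```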


AV1 decoding and encoding

Mishaal Rahman: So this is a little bit separate from the other discussions we're having about camera quality. One of the things that some people in the open-source media codec community have been wondering is when Qualcomm will support AV1 decoding, and possibly encoding. I know the latter is a bit of a stretch, but Google is requiring 4K HDR and 8K TVs on Android 10 to support AV1 decoding, and Netflix and YouTube are starting to roll out videos encoded in AV1. So it looks like a slow uptick of AV1-encoded videos. We're wondering when decoding support, at least, will be available in the Spectra.

Qualcomm's statement: Per your question on AV1 – we have nothing to announce today. However, Snapdragon is currently capable of AV1 playback via software. Qualcomm is always working with partners on next-generation codecs via software and hardware, making Snapdragon the leader in HDR codecs, including capture and playback in HEIF, HLG, HDR10, HDR10+, and Dolby Vision. Of course, we realize that to bring the best codec experiences to our customers, including support for high resolutions and the lowest power, implementing these in hardware is desired.


Video recording – motion compensation

Mishaal Rahman: So I don't know if Idrees has any more questions, but I did have one question about something that I read back at the Snapdragon Tech Summit. It's about the motion compensated video core. I heard that there are improvements in the motion compensation engine to reduce noise when video recording. I was wondering if you can expand on exactly what's been improved and what's been done.

Judd Heape: Yeah, so the EVA (Engine for Video Analytics) engine has been improved with a denser motion map core. The EVA engine is always looking at the incoming video, and it has a core in there that's doing motion estimation. What we've done is make that core a lot more accurate, so that it works at almost a per-pixel level rather than at a more coarse block level, and so we're getting a lot more motion vectors out of the EVA engine in Snapdragon 865 than we did in previous generations. What that means is that the video core doing the encoding can use those motion vectors to be more accurate about the encode, but the ISP on the camera side also uses that information for noise reduction.

So as you know, for generations we’ve had motion compensated temporal filtering, which is really the active noise reduction during video, which averages frames over time to get rid of noise.

The problem with that technique, though, is when you have movement in the scene. Movement ends up just getting rejected from noise reduction because it can't be handled, or it gets smeared, and you get these ugly trails and artifacts on things. In motion compensated temporal filtering in the past, since we didn't have this dense motion map for local motion, we could only handle global motion well – simply filming like this, when you're moving the camera, is quite easy because everything's moving globally; it's all the same.

But if you're shooting something and you have an object moving within the scene, what we did before was just ignore those pixels, because we couldn't process them for noise – it was a moving object. If you averaged frame by frame, the object was in a different place every frame, so you couldn't really handle it.

But on Snapdragon 865, because we have the denser motion map and the ability to look at motion vectors on almost a pixel-by-pixel basis, we're actually able to process those locally moving pixels frame by frame for noise reduction, whereas before we couldn't. I think I mentioned a metric in the talk – I don't remember the number (it was 40%) – but a large percentage of pixels, on average, for most videos can now be processed for noise, whereas in the previous generation they couldn't be. And that's thanks in part to having the ability to understand local motion and not just global motion.
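
At its core, motion compensated temporal filtering warps the previous (already filtered) frame onto the current one using motion vectors and blends the two, falling back to the current frame wherever the match is poor. A toy sketch with a single global motion vector – the Snapdragon 865 change Judd describes is precisely about replacing this global vector with dense, per-pixel vectors:

```python
import numpy as np

def warp(frame: np.ndarray, dy: int, dx: int) -> np.ndarray:
    """Shift the previous frame by the estimated motion vector (toy: global,
    integer-pixel motion; the real engine produces dense per-pixel vectors)."""
    return np.roll(frame, shift=(dy, dx), axis=(0, 1))

def mctf_step(prev_filtered: np.ndarray, current: np.ndarray,
              motion, blend: float = 0.7,
              reject_threshold: float = 0.15) -> np.ndarray:
    """One temporal-filtering step: blend the motion-compensated history
    with the current frame, but fall back to the current frame wherever the
    match is poor (otherwise moving objects smear into ugly trails)."""
    aligned_prev = warp(prev_filtered, *motion)
    mismatch = np.abs(aligned_prev - current)
    weight = np.where(mismatch < reject_threshold, blend, 0.0)
    return weight * aligned_prev + (1.0 - weight) * current

# Toy usage: filter a short noisy clip with a constant 1-pixel pan per frame.
frames = [np.random.rand(120, 160).astype(np.float32) for _ in range(5)]
filtered = frames[0]
for f in frames[1:]:
    filtered = mctf_step(filtered, f, motion=(0, 1))
```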


Video recording – HDR

HDR10+ video recording on the Galaxy S10

Idrees Patel: Another question I have is about HDR video. This year, I am seeing many more device makers offer HDR10 video recording. So is it something that was promoted with the Snapdragon 865, or has it been there for a few generations?

Judd Heape: Oh yeah, so as we talked about at Tech Summit, we've had HDR10 – the video standard for HDR – on the camera encode side for a few generations now, since Snapdragon 845, I believe, and we've constantly improved it.

So last year, we talked about HDR10+, which is 10-bit HDR recording, but with dynamic metadata instead of static metadata. The metadata captured by the camera during the scene is actually recorded in real time, so that when you play it back, the playback engine understands whether it was a dark room or a bright room and can compensate for that.

We also talked last year at Tech Summit about Dolby Vision capture, which is Dolby's alternative to HDR10+. It's very similar, in that they do dynamic metadata as well. So Snapdragon today can support all three of these formats: HDR10, HDR10+, and Dolby Vision capture. There's really no constraint: our OEMs can choose whichever method they prefer. We've had customers using HDR10 for a while now, and last year and this year we have more and more customers picking up HDR10+. And I think in the future, you'll see some adoption of Dolby Vision capture as well.

So yeah, we've been promoting that heavily. HDR is really important to us, both on the snapshot side and on the video side. And like I said, we've been committed to the HDR10 and HDR10+ formats since Snapdragon 845, and now, as of Snapdragon 865, to Dolby Vision as well.

Mishaal Rahman: Also, I actually wasn’t sure if any vendors implemented Dolby Vision recording yet, but I guess that answers that question. [That’s] something we’ll see in the future.

Judd Heape: Yeah. I think – of course, I can't comment on which vendors are interested and that sort of thing. That would be a question for Dolby; it's their feature, so if you want more information about that, I would suggest contacting Dolby. But to date, as far as I know, no handset has come out with Dolby Vision capture.

Idrees Patel: Because you need display support as well. I’ve noticed that smartphone displays support HDR10 and HDR10+ but not Dolby Vision.

Judd Heape: Yeah, actually, Dolby Vision playback has been supported on Snapdragon for a while now. They can work with a display, and the display doesn't necessarily have to meet any specific criteria to be Dolby Vision compliant, except that Dolby will grade the display and make sure it has a certain color and gamma, a certain bit depth, a certain brightness, and a certain contrast ratio.

So, you know, you can buy an HDR10 display, but you can also buy a handset that supports Dolby Vision playback, and Dolby will have qualified that display to make sure it's compliant with their strict requirements.
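
To make the static-versus-dynamic metadata distinction concrete: HDR10 carries one set of content light levels for the whole clip, while HDR10+ and Dolby Vision carry per-scene (or per-frame) values that the playback engine can use to adapt its tone mapping. A schematic sketch of the data shapes involved – the field names are illustrative, not the actual metadata syntax of either standard:

```python
from dataclasses import dataclass

@dataclass
class StaticHdrMetadata:
    """HDR10-style: one record describes the entire clip."""
    max_content_light_level: int        # nits, brightest pixel in the whole clip
    max_frame_average_light_level: int  # nits, brightest frame average

@dataclass
class DynamicHdrMetadata:
    """HDR10+/Dolby Vision-style: values recorded per scene (or per frame)
    while the camera is capturing, so playback can adapt its tone mapping."""
    scene_start_frame: int
    scene_max_luminance: int      # nits, for this scene only
    scene_average_luminance: int

# A dark indoor scene followed by a bright outdoor one: static metadata must
# describe both with one record, dynamic metadata describes each separately.
static = StaticHdrMetadata(max_content_light_level=4000,
                           max_frame_average_light_level=400)
dynamic = [
    DynamicHdrMetadata(scene_start_frame=0, scene_max_luminance=120,
                       scene_average_luminance=20),
    DynamicHdrMetadata(scene_start_frame=900, scene_max_luminance=4000,
                       scene_average_luminance=400),
]
```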


Collaboration with software vendors: Imint, Morpho, and Arcsoft

Imint Vidhance

Mishaal Rahman: I guess just one question for me to follow up on, to do more research on: one company that we've talked to recently is Imint. They recently upgraded their Vidhance stabilization software to work with the Spectra 480. I know you guys work with a lot of companies that also take advantage of the Spectra 480's processing. I'm wondering if you're able to disclose more examples of these technologies – or the partners you've worked with – just so it's something we could follow up on to learn more about how the Spectra 480 is being used in the field.

Judd Heape: Yeah, so we work with a lot of software vendors. Like we mentioned in the past, Dolby, with Dolby Vision, is one of them, right. There are others, like you mentioned – Imint Vidhance for EIS, the stabilization. We also mentioned Morpho and Arcsoft before; we work with them very closely as well.

As far as how we work with them, though, our policy is that we really want to work closely with these independent software vendors and make sure that whatever they're doing in software, they're able to leverage the hardware in Snapdragon to get the lowest power possible.

So one of the things we're doing with these vendors is making sure they have really good access to HVX, the DSP core. They're also using the EVA engine to get those motion vectors, and using the hardware in the EVA engine for image manipulation, so they can do image movement, translation, de-warping, and that sort of thing in hardware rather than using the GPU to do that.

And so we really work closely with these ISVs – the ones I mentioned in particular – to make sure that they're not just putting everything in software on the CPU, but that they're using things like the DSP and the hardware in the EVA to get better performance and lower power. That's really important to us as well, because it gives our end customers the best possible mixture of features and power consumption.

[Closing comments from Judd]: I just wanted to say thank you guys for all the really good questions. They're really, really good. I've been at Qualcomm for about three years now, and looking at our past, even beyond my tenure here, to where we started on Spectra before Snapdragon 845, we have worked really hard to improve the ISP, the camera, and just the overall experience over the past several years. And I'm really excited about what the future brings, and about what we can announce at future Tech Summits that you guys will get to ask about and write about. [Spectra], in my opinion, is probably one of the most exciting technologies at Qualcomm.


Final Thoughts

It was great to have a discussion with Judd about Qualcomm's contributions to smartphone photography. One can have mixed feelings about the company and its patent licensing system, but Qualcomm's mark on the smartphone industry is felt by everyone, whether you talk about patents, 4G and 5G, Wi-Fi, the Adreno GPUs, the Spectra ISPs, or the Snapdragon chips themselves, which are largely held to be the gold standard in the Android smartphone market.

There are still many pain points to be resolved in smartphone photography, but the future is bright, as Qualcomm promises to make more advancements in the vast, growing field of ML, which powers AI. Let's see what Qualcomm has to announce in this field at the next Snapdragon Tech Summit.




