Multiple Lenses in Your Phone: What to Expect and What They Can’t Do

Mobile photography is seeing a boom right now. Manufacturers are constantly introducing new tricks and technologies that owners of traditional cameras could only dream of. How about several lenses in one phone? Sounds great, right? But not everything is as great as it seems. Let’s take a look at the benefits and drawbacks of multi-lens phones.

In this text, I’ll partly be working from my earlier article on the basics of optics. In that article, I mentioned, among other things, technological advancements that will increase dynamic range so much that in the future, it will be practically unlimited. But mobile photography is seeing lens advancements as well. Smartphone makers are now one-upping each other in terms of who has the most lenses on their phones.

Every manufacturer has approached this concept in its own way and uses the added optics differently. The extra lenses can serve these goals, alone or in combination:

  • Capturing more light—easier photography in the dark
  • Changing the angle of view—wide or telephoto
  • Capturing a scene’s depth—this is used mainly for distance-based blurring

And in the future, we’ll likely also encounter other uses that we currently can’t even dream of.

Problems with Parameters

Every lens in these phones has a separate sensor lying beneath it. That much makes sense. But what’s confusing is that each sensor has a different size. So a single phone can easily have three differently sized sensors.

And because all the important values such as lens speed and ISO are stated in relation to sensor size, they are hard to use without some kind of conversion. In lens speed listings, each value thus means something different, and the numbers aren’t mutually comparable.
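As a sketch of what such a conversion looks like (the crop factor and input values below are illustrative, not taken from any specific phone’s spec sheet): focal length and f-number scale linearly with the crop factor, and equivalent ISO scales with its square.

```python
def full_frame_equivalent(focal_mm, f_number, iso, crop_factor):
    """Convert a lens's stated parameters to full frame equivalents.

    Focal length and f-number scale linearly with the crop factor;
    light gathering (and thus equivalent ISO / noise) scales with
    sensor area, i.e. with the crop factor squared.
    """
    return {
        "focal_mm": focal_mm * crop_factor,
        "f_number": f_number * crop_factor,
        "iso": iso * crop_factor ** 2,
    }

# Illustrative main camera: 4.3 mm f/1.8 at ISO 50 on a sensor with
# a crop factor of about 6 -- numbers chosen to land near the full
# frame figures discussed later in this article.
eq = full_frame_equivalent(4.3, 1.8, 50, 6.0)
print(eq)  # roughly 26 mm, f/10.8, ISO 1800
```

The same three multiplications work for any lens in the comparison once you know its sensor’s crop factor.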

Mobile telephoto lenses have an especially large “parameter problem,” yet it’s hard to see at first. In the flood of numbers, it’s easy to overlook that a smaller sensor is used for the longer lens, making the images from a telephoto lens resemble simple crops from the main camera. Naturally, that fact isn’t very marketable.

Calculating Parameters

Let’s get all this organized. I’ve converted lenses’ parameters to a common denominator: the traditional full frame sensor. This is not easy work, because manufacturers try to muddle things. So I mainly browsed manufacturers’ specs, descriptions in reviews, Flickr (where photos include EXIF data), and Wikipedia (which lists sensor sizes and crop factors for them).

I took the current best models from the biggest manufacturers and compared the rear cameras only. For now, I’ve ignored selfie cameras.

Manufacturers also received a rating from me for how hard it was to find the main parameters. Keep in mind that this is “only” a comparison of lens and sensor specs. A phone’s processor can then conjure up a completely different image than what the numbers would suggest. Even so, the calculations are useful for staying aware of what inputs the processor has to work with.

Generally, for all manufacturers, the main lens that phones have had for years is in the best shape. If you’re taking pictures with the default ISO setting, which for phones is usually somewhere around ISO 50, you’ll get photos that are very similar to those for a full frame camera with a 26 mm lens, an f/10 aperture, and ISO 1,600.

Sometimes there’s also an ultra-wide lens that roughly corresponds to a full frame camera with a 16 mm lens, an f/14 aperture, and ISO 2,000. This is definitely a welcome addition because ultra-wide shots are hard to take with a phone’s basic lens—panoramas can’t do everything.

If, on the other hand, you’re using a phone’s telephoto lens, you’ll get a picture equivalent to a full frame camera with a 50 mm lens and an f/20 aperture. The noise is like what you’d have with a full frame with an ISO of about 3,600. Referring again to my earlier article on the basics of optics, that means that due to the zooming, we’re shooting a smaller crop of the scene. The camera has lost three-quarters of its light, but there’s been no compensation by increasing the size of the lens aperture, as is done on larger cameras. The aperture on phones has stayed the same or is only a little better.

Thus these telephoto lenses capture practically the same image as a crop from the basic lens, just with a different resolution!
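The light loss behind that statement is easy to quantify: cropping to a 2x narrower view keeps only a quarter of the scene’s light, a 2 EV penalty if the aperture doesn’t grow to compensate. A minimal sketch:

```python
import math

def crop_light_loss_ev(zoom_factor):
    """EV of light lost when 'zooming' by cropping: the crop keeps
    only 1 / zoom_factor**2 of the light from the original framing."""
    kept_fraction = 1.0 / zoom_factor ** 2
    return math.log2(1.0 / kept_fraction)

print(crop_light_loss_ev(2.0))  # 2.0 -- three-quarters of the light is gone
```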

This is difficult to test because phones often don’t let you easily shoot with a specific lens and specific settings, let alone shoot to RAW. Instead, they choose on their own how they’ll take a picture and which lens or lenses they’ll use. Sometimes they even ignore the telephoto lens and choose a crop from a wider focal length on their own.

A 2x zoom on the Xiaomi Mi 8. Even though the phone has a telephoto lens available, it has chosen a crop out of a wider focal length.

Collecting the Most Light Possible

But phones don’t give up easily here. Besides software tricks, there are also some real possibilities for capturing the most photons possible and thus reducing noise.

The basic approach is to lengthen the exposure or combine multiple pictures taken just after each other (often without informing the photographer). Some phones have a stabilized telephoto lens, which definitely helps in these situations. Noise is reduced, and so it corresponds to lower ISOs than the ones I mentioned in the previous section.
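Why combining frames helps is plain statistics: averaging N frames cuts random sensor noise by roughly the square root of N. A toy numpy simulation (synthetic noise values, not real sensor data):

```python
import numpy as np

rng = np.random.default_rng(0)

# One "true" gray patch photographed 8 times with random sensor noise.
true_value = 100.0
frames = true_value + rng.normal(0.0, 10.0, size=(8, 64, 64))

single = frames[0]
stacked = frames.mean(axis=0)  # what the phone blends behind the scenes

print(round(single.std(), 1))   # close to 10
print(round(stacked.std(), 1))  # close to 10 / sqrt(8), about 3.5
```

Eight stacked frames thus behave like a single 3-EV-brighter exposure, which is exactly the kind of gain the following section arrives at for multiple lenses.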

The second option is to combine images from multiple lenses. Each one captures a part of the light reflected from the scene, and together they effectively increase their overall lens speed. Crops from lenses with different focal lengths can be combined, or a picture can be assembled from lenses with the same angle of view.

And indeed, the upcoming Nokia 9 combines images from a set of five identical lenses and offers another technological “trick” as well. Two of its lenses have a standard RGB sensor beneath them, but the other three have BW (black and white) sensors.

Nokia 9
The Nokia 9 with its five identical lenses (2x RGB, 3x BW), plus a sixth for depth capture. Source: Nokia

While it may not seem like it, black and white is an advantage here. The color sensors have a Bayer filter in front of them, which means that, for example, at a red sub-pixel the filter removes photons of non-red colors. This robs you of 40–50% of the available light. When there’s no need to distinguish colors and it’s enough to determine light intensity, no Bayer filter is used, and the result is more light captured and less noise.

Colors are then filled in from the other sensors—just with a touch of inaccuracy. The Nokia 9 isn’t the only phone with BW sensors; they’re also found in other phones, and even in high-class cameras such as the Leica M Monochrom (which, as the name implies, doesn’t even have a way to add colors; it shoots pure black and white).

So if we were to simplify and say that one BW sensor captures twice the light of an RGB sensor, the Nokia 9 with its five lenses captures 8x the light of a single lens on its own. So it will be better by 3 EV. Instead of its lens speed being equivalent to a full frame at somewhere around f/12 (just my estimate, as this phone isn’t on the market yet), it’s reduced to about f/4, and that’s quite the shift.
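Under that simplified model, the arithmetic goes like this (the f/12 starting point is my own estimate from above):

```python
import math

# Simplified model from the text: one BW sensor ~ 2x the light of an
# RGB sensor, so 2 RGB + 3 BW lenses collect 2*1 + 3*2 = 8x the light.
light_multiple = 2 * 1 + 3 * 2

ev_gain = math.log2(light_multiple)  # 8x light = 3 EV
# 3 EV of extra light is worth a sqrt(8) ~ 2.8x lower f-number,
# since light scales with the square of the aperture diameter.
equivalent_f = 12.0 / math.sqrt(light_multiple)

print(ev_gain)                 # 3.0
print(round(equivalent_f, 1))  # 4.2, i.e. roughly f/4
```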

Multiple Lenses for Depth of Field

Many smartphones are already using multiple lenses today to determine how images are shifted against each other in their individual pixels and to calculate depth. Some even have special depth cameras. Here the phone primarily uses the information collected for software blurring of pictures’ backgrounds.

Blurring on the Xiaomi Redmi Note 5, with the use of a second sensor for depth detection. The second variant shown here is produced automatically if, for example, you cover the depth sensor with your finger. This is the real image that the algorithms are working from. Photo: Jiri Dostal

But depth can also be used for 3D views of the pictures you take, and Facebook even has support for this.

Depth can be guessed from a single 2D image as well, without multiple lenses, but with them, you get more reliable results in tough situations.
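The geometry behind two-lens depth detection is classic stereo triangulation: distance is inversely proportional to the pixel shift (disparity) between the two images. A sketch with hypothetical numbers (the focal length and baseline below are made up for illustration):

```python
def depth_from_disparity(focal_px, baseline_mm, disparity_px):
    """Stereo triangulation: depth = focal length * lens spacing / disparity."""
    return focal_px * baseline_mm / disparity_px

# Hypothetical phone: focal length of 2800 px, lenses 10 mm apart.
# A point shifted 28 px between the two images is about a meter away.
print(depth_from_disparity(2800, 10.0, 28.0))  # 1000.0 (mm)
```

Small disparities (distant objects) are where the estimate gets shaky, which is why single-lens depth guessing struggles most in those same situations.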

A Periscope Lens

Rising lens counts and faster processors for processing sensor data will definitely enable new tricks. But not all that glitters is gold.

The phone maker Oppo has announced the development of a “periscope” telephoto lens that will be able to achieve focal lengths of up to 160 mm full frame equivalent. That’s nice, but from their pictures, it seems that the lens leads out through an aperture similar to the one on the basic lens.

That means the amount of light collected will be limited, and yet here again there will be no significant compensation by increasing the size of the aperture (just as there wasn’t with the 50 mm telephoto lenses), so we can expect a lens speed of somewhere around f/40 to f/60 full frame equivalent. However, the Nikon P1000 has an f/45 equivalent lens as well, and reviews state that you can have fun with it outdoors, so in good light, this mobile telephoto lens should be usable too.

The two bottom sensors look forward, while the top sensor peeks out around the corner through the periscope. The longer path through the assembly’s individual lenses can thus be hidden inside the phone’s thin body. Source: Oppo
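That f-number range can be sanity-checked with this article’s own figures: if the periscope reuses an opening the size of the main camera’s (26 mm equivalent at roughly f/10), the f-number simply scales with focal length at a fixed aperture diameter.

```python
# This article's figures for the main camera, full frame equivalent.
main_focal_eq = 26.0   # mm
main_f_number = 10.0

# Aperture diameter stays the same; only the focal length grows.
aperture_diameter = main_focal_eq / main_f_number  # 2.6 mm equivalent

periscope_focal_eq = 160.0  # mm, Oppo's announced reach
periscope_f_number = periscope_focal_eq / aperture_diameter

print(round(periscope_f_number))  # about 62, near the f/40-f/60 estimate
```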

What Will the Future Bring?

When you just look at their parameters, multi-lens sets don’t seem to make photographic sense. But their use for things like software blurring does make sense in everyday situations, and it completely suffices for “Facebook photography.” Still, this is as much about the camera system’s processors as it is about its lenses.

In the future, I’m expecting even better depth detection and better work with it. For example, filming video while walking down the street and keeping just your friend next to you while replacing their background. Or changing a whole scene’s lighting. This is a difficult task in general, but we’ll likely see it handled for at least some specific cases.

Already today, certain software supports editing that takes advantage of depth information. So it’s easy for example to darken the sky while avoiding changes to the buildings touching it. Don’t expect perfect results, but again—for social networks, it’s absolutely enough.
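As a toy sketch of depth-aware editing (tiny arrays stand in for a real image and depth map, and the distance threshold is arbitrary):

```python
import numpy as np

image = np.full((4, 4), 200.0)            # a uniformly bright frame
depth = np.array([[np.inf] * 4] * 2       # top half: sky, infinitely far
                 + [[5.0] * 4] * 2)       # bottom half: buildings at 5 m

sky_mask = depth > 100.0                  # "far away" counts as sky
edited = np.where(sky_mask, image * 0.5, image)  # darken only the sky

print(edited[0, 0], edited[3, 0])  # 100.0 200.0 -- sky darkened, buildings kept
```

With a real depth map, the mask follows object outlines instead of a straight line, which is exactly what keeps the buildings untouched.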

Thanks to lens stabilization and better sensor technologies, I also expect a very high dynamic range for static scenes, where it’s easy to collect a lot of light across multiple shots. Blown-out highlights will disappear, and color gradients will be smoother. In this respect, there will be no difference in the future between a phone and the most expensive pro cameras.

But quickly moving objects and bad light will still be problematic for phones.

Overall, phones will keep rising ever higher technologically, with software powerfully helping them along. Higher-class cameras, and especially lenses, will remain for people who want the highest quality in all kinds of conditions. But for the vast majority of people, a smartphone will be enough.


I’d like to wrap up by thanking the people who had to put up with my mobile experiments! That is Jiri Dostal, Zdena Klimova, Verunka Ruda, Libor Foltynek, and Richard Schneider. Thank you once again!


Author: Vit Kovalcik

I’ve been a freelancer since early 2012; photography is my living. I acquired my photography experience, both inside and outside the studio, during the previous years—when I was working all day and taking pictures every evening and weekend. I don’t have just one clearly defined topic; I like photographing people, but also cityscapes and landscapes.
