Whirrr! How Does That Focusing Mechanism Work?

Camera-shopping lately? You can avoid nasty surprises and grow as a photographer by knowing how camera focusing systems work. There are several different systems with different advantages and disadvantages. Learn them and keep them in mind, and you’ll take clearer pictures—and maybe save money too!

The human eye is always just a wink away from automatically focusing on the right spot. But for machines, things are more complicated. How do cameras cope with the task of focusing? What happens inside a camera when you focus on a particular object? Depending on the system it uses, several steps take place. We’ll cast light on them here and compare the pluses and minuses of the different methods.

Contrast Detection

The simplest method, which just monitors the image on the main sensor and works straightforwardly from there, is the contrast-detection system. Here are the basics of how it works: the system tries mildly changing the lens’s focus (for example towards infinity), and if its analysis shows that the change has blurred the picture, it stops and switches directions. If the change has improved the contrast, then it continues moving in the same direction for as long as the contrast keeps improving. This system learns that the right focus has been reached by “overshooting” it and blurring the picture again. It then has to go back to the point where the image was best.

Left: The contours in an unfocused shot have poor contrast. Right: Focusing gives them sharp contrast.

Contrast detection algorithms are used to determine how blurry the picture is—thus this method’s name. This system does not need any complicated hardware to do its job, so it can be used anywhere. That’s why it’s in practically every compact, mobile, and web camera.
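
For readers who like to see it in code, here’s a rough sketch of that loop in Python. The camera object with its move_focus() and capture() methods is purely hypothetical, and the contrast measure used here (differences between neighboring pixels) is just one of many possible choices:

```python
# A rough sketch of contrast-detection autofocus. The `camera` object and its
# move_focus() / capture() methods are hypothetical stand-ins for real hardware.

def contrast(image):
    """Crude sharpness score: sum of squared differences between neighboring pixels."""
    return sum((row[i + 1] - row[i]) ** 2
               for row in image
               for i in range(len(row) - 1))

def contrast_detect_af(camera, step=0.02):
    """Nudge the focus until contrast stops improving, then back up one step."""
    best = contrast(camera.capture())
    direction = +1                        # first guess: move toward infinity
    camera.move_focus(direction * step)
    score = contrast(camera.capture())
    if score < best:                      # the test move blurred the picture...
        camera.move_focus(-direction * step)
        direction = -1                    # ...so switch directions
        camera.move_focus(direction * step)
        score = contrast(camera.capture())
    while score > best:                   # keep going while contrast improves
        best = score
        camera.move_focus(direction * step)
        score = contrast(camera.capture())
    camera.move_focus(-direction * step)  # overshot the peak, so go back one step
```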

Contrast detection algorithms are always improving, but they can’t free these systems from their key disadvantage: they’re slow. First, because the lens creeps along in small steps as numbers are crunched and re-crunched to judge the contrast after each one. Second, because this system overshoots the right focus and has to go back.

Phase Detection

DSLRs, meanwhile, use an entirely different system that demands a separate sensor chip devoted solely to focusing. In the image below, this chip is visible in the bottom part of the camera. Light is diverted towards it by a second mirror located directly behind the main mirror, which is translucent and lets part of the light pass through.

This image from Panasonic shows a cross-section of a typical DSLR. The two mirrors working together bring the view to the viewfinder and also guide light (and thus the picture) to the focusing sensor.

As a simplification you can imagine that this sensor lets the camera view each pixel in the scene through the left and right parts of the lens and evaluate the differences between these two images.

In reality the camera simplifies things, too: it doesn’t compare the pictures in full, but only 1-pixel strips. Light that came from the left is compared to light that came from the right, and by examining the difference (the phase shift), the camera immediately knows in which direction, and even how far, to shift the lens. This lets it whisk the lens right to the spot where it will give a correctly focused image! This focusing method is very fast and precise.

Sharpness is calculated via the thin strip marked in red. The lens is focused based on the original phase difference.
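
To make the comparison concrete, here’s a rough Python sketch of how such a phase shift could be measured. The left and right strips and the simple sum-of-squared-differences matching are illustrative assumptions, not any manufacturer’s actual algorithm:

```python
# A rough sketch of the phase-shift measurement. `left` and `right` are 1-pixel
# strips of brightness values seen through the left and right parts of the lens.

def phase_shift(left, right, max_shift=20):
    """Return the offset (in pixels) that best aligns the two strips.

    The sign says which way to move the lens; the size says roughly how far.
    At perfect focus the shift is zero.
    """
    best_shift, best_error = 0, float("inf")
    for shift in range(-max_shift, max_shift + 1):
        error, count = 0.0, 0
        for i, value in enumerate(left):
            j = i + shift
            if 0 <= j < len(right):
                error += (value - right[j]) ** 2
                count += 1
        if count and error / count < best_error:
            best_error, best_shift = error / count, shift
    return best_shift
```

Note that, unlike the contrast-detection loop above, a single measurement is enough to plan the whole lens movement.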

Unlike contrast detection, this system can even keep up with moving objects. Lens shifting mechanisms are not always 100% precise, and so in practice the “jump” to the expected focus spot is followed by another, refining calculation; if that calculation finds any remaining imprecision, a second, corrective jump is made. (Or not: in many cameras, the phase shift is constantly re-evaluated even while the lens is in motion, and the remaining lens shifting distance is adjusted accordingly.)
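
Continuing the hypothetical sketch above, the jump-plus-correction behavior might look something like this; read_strips(), lens.move() and the conversion factor are made-up stand-ins for real camera internals:

```python
# Jump straight toward the computed target, then re-measure and make a small
# corrective jump if the mechanism didn't land exactly where expected.

def phase_detect_af(camera, lens, units_per_pixel=0.5, tolerance=1):
    left, right = camera.read_strips()
    lens.move(phase_shift(left, right) * units_per_pixel)  # one big jump to the target
    left, right = camera.read_strips()                     # refining measurement
    residual = phase_shift(left, right)
    if abs(residual) > tolerance:
        lens.move(residual * units_per_pixel)              # second, corrective jump
```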

Phase detection only works when the sensor strip (see the red line in the illustration above) hits at least some edges at a 90° angle. When you want to focus on a spot with only horizontal edges (e.g. a typical set of curtains), phase detection can’t give you auto-focus. Unless, that is, you have a sensor with a second strip at right angles to the first. These exist, and they’re called cross-type sensors. High-end DSLRs may have several sensors enhanced with diagonal edge detection: they add two more strips in a second cross shape, this time tilted at a 45° angle.

Various focus point types.

Phase detection has the obvious disadvantage of needing an extra chip. And a less obvious disadvantage: these sensors cover a point, not the whole picture at once. These are the “focus points” you always hear about. Low-end DSLRs use simplified phase detection electronics to save costs, with only a few focus points. Higher-end models offer more points and more precision.

The image below shows the focus points on a Canon 1D X. The first picture shows how the viewfinder looks with all 61 focus points. The second is an overview of the detection strips for the individual points. The third shows the actual auto-focus sensor.

Focus points on a Canon 1D X.

Because of how this system works, it’s not available while filming videos or doing anything else that requires a raised mirror. With a raised mirror, light from outside simply doesn’t reach the auto-focus chip. In this situation, cameras switch to the “backup” method, contrast detection, which slows down focusing significantly.

Hybrid Systems

Many cameras combine both approaches. Detection strips for phase detection are integrated in some way right into the main sensor, in place of some of the colored pixels. Since there are millions and millions of pixels, losing a few hundred doesn’t matter. They’re just interpolated from their surroundings. This system is found in mirrorless cameras (advanced cameras with interchangeable lenses but no mirror), and in the Samsung Galaxy S5 and the iPhone 6.
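
How simple can that interpolation be? A toy version, assuming a plain grid of brightness values, could just average a sacrificed pixel’s neighbors (real cameras do something considerably smarter):

```python
def interpolate_af_pixel(image, row, col):
    """Fill in a pixel given over to phase detection by averaging its neighbors."""
    neighbors = [image[r][c]
                 for r, c in ((row - 1, col), (row + 1, col),
                              (row, col - 1), (row, col + 1))
                 if 0 <= r < len(image) and 0 <= c < len(image[0])]
    return sum(neighbors) / len(neighbors)
```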

This system usually isn’t as fast as phase detection on a separate sensor, but it’s getting closer every year. It has the huge advantages of being video-friendly and of course of making it possible to eliminate the mirror—enabling faster and simpler operation and reducing the number of breakable parts in the camera.

Canon took an even more radical approach in their 70D. Every single pixel on the main sensor has a left and right half, and so you can focus on absolutely any point in the picture. However, this is a DSLR that also has a separate AF sensor, and so these divided pixels are only used for video filming and in Live View. But I personally think that this technology has great potential.

Sony took a bit of a sidestep a few years back and decided to try creating a range of DSLRs with fixed mirrors. As discussed above, autofocus systems in classical DSLRs don’t work when the mirror is up (video filming, Live View). That’s a problem. A fixed mirror that never goes up eliminates that problem, making it possible to autofocus at any time. The mirror is translucent, so most of the light passes through to the image sensor for taking pictures while the rest is reflected to the autofocus sensor; that’s why this system is named Single-Lens Translucent (SLT). However, the translucency causes about a 30 percent loss of light, so these cameras have increased noise.

Number Crunching Isn’t Everything

In upcoming articles we’ll take a look at how lenses are actually moved during focusing and at how to get the most out of autofocus in practice.
