Multiple cameras have been circulating in the mobile ecosystem for a long time, and the options manufacturers choose when composing their modules are varied. We find cameras paired with super wide-angle lenses, telephoto lenses that offer optical zoom thanks to a longer focal length, and also lenses specifically designed to focus at very short distances, commonly known as macro lenses.

We also find cameras whose only function is to measure depth, either through conventional systems or with specialized 3D ToF (time-of-flight) systems. But in the mobile market there is another extra functionality that, unfortunately, is little used. We say "unfortunately" because these are really useful cameras once you know what they are for and how they work. We are talking about cameras with monochrome sensors: the "black and white" cameras.

What are black and white sensors?

On several occasions we have explained how a photographic sensor works, but going over it now helps to differentiate color sensors from monochrome ones. So let's explain it again, at least to refresh the concepts needed to understand how digital photography works.

Photographic sensors are made up of groups of tiny light-sensitive elements, known as photosites or photodiodes, whose only function is to capture the light that reaches them through the lens. Due to their arrangement on the sensor, the photosites are grouped four by four, and each one is responsible for capturing part of the light from one point in the scene, which will then become one pixel in the final photograph. Remember this: four photosites for each pixel.

The reason for this division into blocks of four is that each photosite specializes in capturing a different color. Above each photosite there is a filter that lets only one color pass: one filter lets red through, another green, and another blue. Red, green and blue = RGB. The fourth photosite is the redundant one and serves as support, helping to capture even more light.

Normally this fourth photosite is also green, resulting in an RGGB array, but recently Huawei has chosen to replace the green ones with yellow ones to maximize light capture, moving to an RYYB matrix. These are what are known as "Bayer matrices". So we have four photosites for each pixel, each capturing one type of light; when their readings are merged, the phone's processor composes the final photograph by superimposing all the colors.
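As a rough illustration of that four-to-one grouping, here is a minimal sketch in Python (using NumPy) that collapses an RGGB mosaic into RGB pixels. The function name and the simple averaging are assumptions made for clarity; real image processors use far more sophisticated demosaicing.

```python
import numpy as np

def bin_rggb(raw: np.ndarray) -> np.ndarray:
    """Collapse a raw RGGB mosaic into RGB pixels, one per 2x2 block.

    `raw` is a (2H, 2W) array of photosite readings laid out as
        R G
        G B
    A hypothetical helper for illustration only.
    """
    r = raw[0::2, 0::2]                           # top-left photosite
    g = (raw[0::2, 1::2] + raw[1::2, 0::2]) / 2   # average the two greens
    b = raw[1::2, 1::2]                           # bottom-right photosite
    return np.stack([r, g, b], axis=-1)

# Example: a fake 4x4 mosaic becomes a 2x2 RGB image.
mosaic = np.random.randint(0, 1024, size=(4, 4)).astype(float)
print(bin_rggb(mosaic).shape)  # (2, 2, 3)
```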

With monochrome sensors there is no division of light: only its intensity is captured

In the case of monochrome sensors, the photosites do not have these filters that let one color pass and reject the others. Instead, each photosite measures the intensity of the light that hits it. For each pixel we still have a group of four photosites, since the arrangement in blocks of four is identical, but they only and exclusively measure the intensity of light: the amount. They convert this intensity into a number that is used as support to create the photograph.
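Under the same assumptions as the sketch above, the monochrome counterpart is even simpler: with no color filters, each block of four photosites contributes nothing but intensity.

```python
import numpy as np

def bin_mono(raw: np.ndarray) -> np.ndarray:
    """Average each 2x2 block of unfiltered photosites into one
    luminance value (a hypothetical helper, for illustration only)."""
    return (raw[0::2, 0::2] + raw[0::2, 1::2] +
            raw[1::2, 0::2] + raw[1::2, 1::2]) / 4
```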

What are monochrome sensors for?

Comparison of light capture between color and monochrome sensors (source: Red.com)

Once we know how the sensors work, we can easily calculate that each photodiode of a color sensor captures roughly 33% of the incoming light, since its filter discards the other two colors, while each photodiode of a monochrome sensor captures close to 100%. And since light is information, a color Bayer block of four photosites gathers about 133% of the light a single unfiltered photosite would receive, while a monochrome block gathers approximately 400% of the light of the scene. Hence, monochrome cameras are usually mounted behind darker lenses that reduce this capture because, for practical purposes, they can afford it.
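The back-of-the-envelope arithmetic behind those percentages, taking the ~33% filter transmission as an idealization:

```python
# Idealized light budget for a 2x2 photosite block.
filter_pass = 1 / 3              # each color filter passes roughly a third
color_block = 4 * filter_pass    # ≈ 1.33 photosite-equivalents (133%)
mono_block = 4 * 1.0             # 4.00 photosite-equivalents (400%)
print(f"monochrome advantage: {mono_block / color_block:.1f}x")  # ≈ 3.0x
```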

Monochrome sensors can sometimes be used to take classic black-and-white photos on their own, but most often they are used as support for the main sensor. While the phone's main camera captures the scene's color information on its side, the monochrome sensor provides information in the form of a lighting map of the scene, allowing the data from both sensors to be combined to process a photograph with various enhancements.

From the outset, we obtain photographs with less noise in low-light situations, since the monochrome sensor offers more accurate scene information that can produce sharper pictures. We also get better contrast, thanks to the additional information that helps separate the pixels from each other. Finally, we get better overall dynamic range since, although the resolution of these sensors is usually low, the original photograph receives this extra layer of illumination that serves to improve the final result.
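To make the idea of a "lighting map" concrete, here is a toy luminance-fusion sketch in Python. Everything about it (the function name, the 50/50 blend, the BT.601 luminance weights) is an assumption for illustration; real camera pipelines also handle alignment, denoising and tone mapping.

```python
import numpy as np

def fuse_luminance(rgb: np.ndarray, mono: np.ndarray,
                   weight: float = 0.5) -> np.ndarray:
    """Blend a monochrome frame into a color frame's luminance.

    Assumes both frames are aligned, equally sized and normalized
    to [0, 1]. A sketch, not a real ISP pipeline.
    """
    # Luminance of the color frame, using BT.601 weights.
    y = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    # Mix in the cleaner monochrome luminance.
    y_fused = (1 - weight) * y + weight * mono
    # Rescale the color channels so the image keeps its hues but
    # adopts the fused luminance; guard against division by zero.
    scale = y_fused / np.maximum(y, 1e-6)
    return np.clip(rgb * scale[..., None], 0.0, 1.0)
```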

Depth sensors and black and white sensors are auxiliary to the main sensor

So, and quite briefly: just as we have sensors whose function is to measure depth in order to offer selective blur or bokeh, monochrome or black-and-white sensors exist solely and exclusively to give the processor extra information about the scene's light. The processor, combining all the available information, takes care of the rest. With better or worse results, of course, but that is already a software problem.