Pixels: the more the better. Or is it?

[Image: Pac-Man emerging from a building, about to eat a Mini Cooper]

So many pixels without the proper optics are like a race car fitted with the wheels of a hatchback. Not to mention that, as a rule, the higher the pixel count, the smaller the individual pixels, with further negative implications for image acquisition.

The last decade has seen tremendous advances in image sensor technology, thanks in large part to research and development in consumer electronics.

The ability to leverage this technology for machine vision is fantastic: it has led to great innovation in CMOS sensors and in image sensor quality in general. But it also presents a number of challenges that the optics implemented in these systems must overcome.

The first challenge is ever-smaller pixel sizes. While smaller pixels generally mean higher resolution in a quantitative sense, once the optics are chosen they do not necessarily deliver it in a qualitative sense as well. In an ideal world, without diffraction or optical errors, resolution would simply be a matter of pixel size and the size of the object being imaged.

To resolve all the details in an image, there must be enough space between them that two details close together do not fall on adjacent pixels of the image sensor: if they do, they become indistinguishable from each other. If a detail is exactly one pixel in size, the separation between details must itself be one pixel wide. It is through this understanding that we arrive at the concept of a “line pair” (which in fact spans two pixels).

This is one reason why it is incorrect to measure the resolution of cameras and lenses in megapixels. It is more appropriate to describe the resolving power of a system in terms of the frequency of line pairs, normally specified in line pairs per millimeter (lp/mm). Take, for example, a very common pixel size: 3.45 μm (microns, i.e., micrometers: millionths of a meter). The limiting spatial frequency for this pixel size is about 145 lp/mm, obtained by taking the reciprocal of twice the pixel size (to turn it into a frequency): 1 / (2 × 0.00345 mm) ≈ 145 lp/mm.
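As a back-of-the-envelope check, this calculation is easy to script. Here is a minimal sketch in Python (the function name is ours, chosen for illustration):

```python
def nyquist_lp_per_mm(pixel_size_um: float) -> float:
    """Limiting spatial frequency of a sensor, in line pairs per mm.

    One line pair spans two pixels, so the frequency is the
    reciprocal of twice the pixel size (converted to mm).
    """
    pixel_size_mm = pixel_size_um / 1000.0
    return 1.0 / (2.0 * pixel_size_mm)

print(round(nyquist_lp_per_mm(3.45), 1))  # -> 144.9 lp/mm
print(round(nyquist_lp_per_mm(9.0), 1))   # -> 55.6 lp/mm (an older, larger pixel)
```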

It follows that when pixel size decreases, resolution increases: smaller pixels can capture smaller details, with less space between them, while still resolving the spacing between those details. This is a simplified model of how an image sensor behaves, and an idealized one, since it takes no account of noise or other parameters. And lenses have not yet entered the picture, even though they are an equally important part of an imaging system, especially as pixels shrink.

Lenses and resolution – Lenses also have their own resolution specifications, but these are not as easy to grasp (at least at a basic level) as those of sensors, since there is nothing as concrete as a pixel to point to. For optics, two limiting factors determine resolving power: diffraction and aberrations. Diffraction caps the maximum performance a lens can ever achieve, and it depends on the focal ratio f/N (the focal length of the lens divided by the diameter of the aperture through which light enters) and on the wavelength of the illumination used.
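For incoherent light and a circular aperture, the standard diffraction cutoff is 1 / (λ · f/N), with λ expressed in millimeters. A minimal sketch of that formula, assuming green light at 520 nm as a typical machine vision wavelength (the function name and default value are our choices):

```python
def diffraction_cutoff_lp_per_mm(f_number: float, wavelength_um: float = 0.520) -> float:
    """Diffraction-limited cutoff frequency of a lens (incoherent light,
    circular aperture). Above this frequency the lens transfers zero
    contrast, no matter how well it is made."""
    wavelength_mm = wavelength_um / 1000.0
    return 1.0 / (wavelength_mm * f_number)

print(round(diffraction_cutoff_lp_per_mm(2.8)))  # -> 687 lp/mm at 520 nm
print(round(diffraction_cutoff_lp_per_mm(8.0)))  # -> 240 lp/mm: stopping down lowers the limit
```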

Every lens has a cutoff frequency (in lp/mm) determined by diffraction. However, when a lens is “fast”, i.e., bright (with a focal ratio of 5.6 or less), optical aberrations are usually what keep it from being as “perfect” as the diffraction limit alone would allow. Simply put, in most cases lenses do not perform at their theoretical cutoff frequency. To summarize: as pixel frequency increases (i.e., as pixel size decreases), contrast decreases, and every lens is subject to this rule.
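To see how contrast falls with frequency even in the best case, one can evaluate the textbook diffraction-limited MTF for a circular pupil. A sketch under the same assumptions as above (illustrative values only):

```python
import math

def diffraction_mtf(freq_lp_mm: float, f_number: float, wavelength_um: float = 0.520) -> float:
    """Best-case contrast of an aberration-free lens at a given spatial
    frequency (incoherent light, circular pupil)."""
    cutoff = 1.0 / ((wavelength_um / 1000.0) * f_number)
    nu = freq_lp_mm / cutoff  # normalized frequency, 0..1
    if nu >= 1.0:
        return 0.0  # beyond the cutoff, no contrast is transferred
    return (2.0 / math.pi) * (math.acos(nu) - nu * math.sqrt(1.0 - nu * nu))

# Contrast at the 145 lp/mm Nyquist frequency of 3.45 um pixels:
print(round(diffraction_mtf(145, 2.8), 2))  # -> 0.73 at f/2.8
print(round(diffraction_mtf(145, 8.0), 2))  # -> 0.28 at f/8
# At the ~500 lp/mm demanded by 1 um pixels, even a perfect f/2.8 lens is weak:
print(round(diffraction_mtf(500, 2.8), 2))  # -> 0.16
```

Note how the ideal contrast collapses as the demanded frequency approaches the cutoff: this is exactly the rule described above, before any aberrations are even considered.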

Sensors with pixels smaller than 2.2 μm are very common nowadays, particularly in smartphone cameras (where sizes approach 1 μm), and below that size it is virtually impossible for the optics to resolve individual pixels. These pixels still have their uses, however: the fact that the optics cannot resolve every one of them does not make them useless.

For certain algorithms, such as blob analysis or optical character recognition (OCR), it matters less whether the lens can actually resolve individual pixels; what counts is how many pixels can be placed on a particular detail.
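As a rough sketch of this way of thinking (the field-of-view and sensor widths below are hypothetical examples):

```python
def pixels_across_detail(detail_mm: float, fov_mm: float, pixels_across_fov: int) -> float:
    """How many pixels a detail of a given size spans in the image,
    given the field of view covered by the sensor's width."""
    return detail_mm * pixels_across_fov / fov_mm

# A 0.5 mm feature imaged over a 100 mm field of view:
print(round(pixels_across_detail(0.5, 100.0, 2448), 1))  # -> 12.2 px (2448-px-wide sensor)
print(round(pixels_across_detail(0.5, 100.0, 4096), 1))  # -> 20.5 px (4096-px-wide sensor)
```

Whether or not each individual pixel is optically resolved, the higher-resolution sensor puts more pixels on the feature, which is precisely what blob analysis and OCR benefit from.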

With smaller pixels, generating subpixels by interpolation can be avoided, which increases the accuracy of any subpixel-based measurement operation. Moreover, if the camera is a color one (and thus generally based on the Bayer pattern), the loss of resolution will be more limited.

If it is absolutely necessary to see down to the single-pixel level, it is often better to double the magnification of the optics and halve the field of view. The detail of interest will then occupy twice as many pixels, and the contrast will be much higher.
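In object-space terms, doubling the magnification halves the size each pixel “sees” on the object. A small illustrative sketch (the pixel size is from the earlier example; the magnification values are arbitrary):

```python
def object_space_pixel_um(pixel_um: float, magnification: float) -> float:
    """Linear size on the object covered by a single pixel (object-space sampling)."""
    return pixel_um / magnification

print(object_space_pixel_um(3.45, 0.05))  # -> 69.0 um seen by each pixel
print(object_space_pixel_um(3.45, 0.10))  # -> 34.5 um: twice the pixels on the same detail
```

The detail also lands at a lower spatial frequency on the sensor, where, as seen above, the lens transfers more contrast.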

Of course, the downside is that we will only be able to observe a portion of the desired field of view. From the image sensor's perspective, the best approach is therefore to keep the pixel size and increase the size of the sensor itself.

Larger format sensors – Unfortunately, increasing the sensor size creates additional problems for the lenses. One of the main cost drivers of a lens is the sensor format it is designed to cover: a lens for a larger format needs more individual optical elements, those elements must themselves be larger, and the tolerances must be tighter.

Optical imaging solutions face many more challenges today than they did a decade ago. The sensors they are paired with have much higher resolution requirements, formats are trending both larger and smaller at the same time, and pixel sizes continue to shrink.

In the past, optics never limited an acquisition system; today they do. Where a typical pixel size was once about 9 μm, a much more common size today is around 3 μm. That is a nine-fold increase in pixel density (a 9 μm pixel covers 81 μm² of silicon; a 3 μm pixel covers only 9 μm²), and it is not without consequences. While most of these are positive, the lens selection process is more important now than ever before.

The wide variety of machine vision lenses on the market exists to cover these new image sensor technologies. As theoretical limits are reached, understanding these limitations before they become a problem is essential to solving applications, both now and in the future.
