Resolution is the ability of an imaging system to reproduce the details of an object. It depends on factors such as the type of lighting used, the pixel size of the sensor, and the capabilities of the optics. The finer the detail of the object, the higher the required resolution of the lens.
Introduction to the Resolution Process
The image quality of a camera depends on its sensor. Simply put, a digital image sensor is a chip inside the camera body containing millions of photosensitive sites. The size of the sensor determines how much light can be used to create the image: the larger the sensor, the better the image quality, because more information is collected. In retail listings, digital cameras typically advertise sensor sizes of 16 mm, Super 35 mm, and sometimes up to 65 mm.
As sensor size increases, depth of field decreases at a given aperture, since a larger sensor requires moving closer to the subject or using a longer focal length to fill the frame. To maintain the same depth of field, the photographer must use a smaller aperture.
A shallow depth of field may be desirable, especially for achieving background blur in portraiture, but landscape photography calls for greater depth of field, which is easier to achieve with the smaller sensors and apertures of compact cameras.
Dividing the field of view by the number of horizontal or vertical pixels on the sensor indicates how much of the object each pixel covers, and can be used to assess the required lens resolution and to address concerns about the size of a device's digital image elements. As a starting point, it is important to understand what actually limits the resolution of the system.
This can be demonstrated with a pair of black squares on a white background. If the squares are imaged onto adjacent pixels of the camera's sensor, they will appear in the image as one large rectangle (1a) rather than as two separate squares (1b). To distinguish the squares, a certain space is required between them: at least one pixel. This minimum separation defines the ultimate resolution of the system. The absolute limit is set by the size of the pixels on the sensor, as well as their number.
Lens Measurement
The relationship between alternating black and white squares is described as a line pair. Typically, resolution is specified as a frequency measured in line pairs per millimeter (lp/mm). Unfortunately, lens resolution in lp/mm is not an absolute number. At a given resolution, the ability to see two squares as separate objects depends on the gray-scale level. The greater the gray-scale separation between the squares and the space between them, the more robustly the squares can be resolved. This gray-scale separation is known as the contrast at a specific frequency.
Spatial frequency is given in lp/mm. For this reason, calculating resolution in lp/mm is extremely useful when comparing lenses and determining the best choice for a given sensor and application. The sensor is where the system resolution calculation begins: starting from the sensor, it is easier to determine which lens characteristics are needed to meet the requirements of a device or application. The highest frequency the sensor can resolve, the Nyquist frequency, is effectively two pixels, or one line pair.
The limiting resolution, also called the image-space resolution of the system, can be determined by multiplying the pixel size in μm by 2 (to create one line pair) and dividing that value into 1000 to convert to mm:

lp/mm = 1000 / (2 × pixel size in μm)
Sensors with larger pixels have lower limiting resolutions; sensors with smaller pixels have higher ones, according to the formula above.
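As a quick illustration, the Nyquist-limited resolution from the formula above can be computed directly. This is a minimal sketch in Python; the function name and the loop over the sample pixel sizes (taken from the table below) are illustrative only:

```python
def sensor_limited_resolution_lp_mm(pixel_size_um: float) -> float:
    """Nyquist-limited resolution of a sensor in line pairs per mm.

    One line pair spans two pixels, so the limiting frequency is
    1000 / (2 * pixel size in micrometers).
    """
    return 1000.0 / (2.0 * pixel_size_um)

# Pixel sizes from the table below
for pixel in (1.67, 2.2, 3.45, 4.54, 5.5):
    print(f"{pixel} um -> {sensor_limited_resolution_lp_mm(pixel):.1f} lp/mm")
```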
Active area of the sensor
You can calculate the maximum resolution achievable on the object being viewed. To do this, you need the relationship between the sensor size, the field of view, and the number of pixels on the sensor. The sensor size here refers to the active area of the camera sensor, usually specified by its format.
However, the exact proportions vary with aspect ratio, so nominal sensor formats should be used only as a guide, especially for telecentric lenses and high magnifications. The sensor size can be calculated directly from the pixel size and the number of active pixels in order to check the lens resolution.
The table below shows the Nyquist limit associated with the pixel sizes found on some commonly used sensors.
| Pixel Size (μm) | Nyquist Limit (lp/mm) |
| --- | --- |
| 1.67 | 299.4 |
| 2.2 | 227.3 |
| 3.45 | 144.9 |
| 4.54 | 110.1 |
| 5.5 | 90.9 |
As pixel size decreases, the associated Nyquist limit in lp/mm increases proportionally. To determine the absolute minimum resolvable spot on the object, calculate the ratio of the sensor size to the field of view. This ratio is known as the primary magnification (PMAG) of the system.
The PMAG of the system allows the image-space resolution to be scaled to object space. As a rule, application requirements are specified not in lp/mm but in microns (μm) or fractions of an inch. Using the formula above, you can quickly obtain the maximum object-space resolution, which simplifies lens selection. It is also important to keep in mind that many additional factors are involved; this limit-based approach introduces far less error than the complexity of accounting for every factor and calculating them all with equations.
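To make the scaling concrete, here is a small sketch of that conversion. The helper names are illustrative, not from any particular library, and the sample numbers anticipate the worked example later in this article:

```python
def pmag(sensor_size_mm: float, field_of_view_mm: float) -> float:
    """Primary magnification: sensor dimension divided by the matching
    field-of-view dimension (dimensionless)."""
    return sensor_size_mm / field_of_view_mm

def object_space_resolution_lp_mm(image_lp_mm: float, magnification: float) -> float:
    """Scale image-space resolution (at the sensor) to object space."""
    return image_lp_mm * magnification

def smallest_feature_um(object_lp_mm: float) -> float:
    """Smallest resolvable feature on the object, in micrometers
    (half of one line pair)."""
    return 1000.0 / (2.0 * object_lp_mm)

m = pmag(8.45, 100.0)                          # 0.0845
obj = object_space_resolution_lp_mm(145.0, m)  # ~12.25 lp/mm
print(f"{obj:.2f} lp/mm -> {smallest_feature_um(obj):.1f} um features")
```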
Focal Length Calculation
Image resolution is the number of pixels in the image. It is specified in two dimensions, for example 640 × 480. Calculations can be performed separately for each dimension, but for simplicity they are often reduced to one. To make accurate measurements on the image, at least two pixels are needed for each smallest feature you want to detect. The sensor size is a physical parameter and, as a rule, is not stated in the data sheet. The best way to determine it is to look at the pixel size and multiply it by the pixel count in each dimension.
For example, a Basler acA1300-30um camera has a pixel size of 3.75 × 3.75 μm and a resolution of 1296 × 966 pixels. The sensor size is therefore (3.75 μm × 1296) by (3.75 μm × 966) = 4.86 × 3.62 mm.
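The same arithmetic is easy to script. This sketch simply reproduces the Basler numbers above; the values are copied from the text, not queried from any camera API:

```python
def sensor_size_mm(pixel_size_um: float, pixel_count: int) -> float:
    """Physical sensor dimension: pixel pitch times pixel count."""
    return pixel_size_um * pixel_count / 1000.0

width = sensor_size_mm(3.75, 1296)   # 4.86 mm
height = sensor_size_mm(3.75, 966)   # ~3.62 mm
print(f"{width:.2f} x {height:.2f} mm")
```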
The sensor format refers to its physical size and is independent of pixel size. This parameter determines which lenses the camera is compatible with: for a match, the lens format must be larger than or equal to the sensor size. If a lens of a smaller format is used, the image suffers vignetting, which darkens the areas of the sensor that lie outside the lens's image circle.
Pixels and camera selection
For objects to be seen in the image, there must be enough space between them that they do not merge into neighboring pixels; otherwise they will be indistinguishable from each other. If each object falls on one pixel, the separation between them must also be at least one pixel, which is how a line pair forms, effectively two pixels in size. This is one of the reasons it is incorrect to measure the resolution of cameras and lenses in megapixels.
In fact, it is easier to describe system resolution in terms of line-pair frequency. It follows that as pixel size decreases, resolution increases, since smaller objects can be imaged onto smaller pixels with less space between them while still leaving a resolvable gap between the objects.
This is a simplified model of how the camera's sensor detects objects; it ignores noise and other parameters and describes an ideal situation.
MTF contrast charts
Most lenses are not perfect optical systems. Light passing through a lens undergoes a certain degree of degradation. The question is how this degradation can be assessed. Before answering, we must define the concept of "modulation": a measure of lens contrast at a given frequency. One could try to analyze real-world images taken through the lens to determine the modulation or contrast for details of various sizes or frequencies (spacings), but this is very impractical.
Instead, it is much easier to measure modulation or contrast for pairs of alternating white and dark lines, known as a square-wave grating. The line spacing in a square-wave grating is the frequency (v) at which the modulation, or contrast, of the lens is measured.
The maximum amount of light will come from the light bands, and the minimum from the dark bands. If the light is measured as luminance (L), the modulation can be determined by the following equation:
modulation = (Lmax - Lmin) / (Lmax + Lmin),
where Lmax is the maximum luminance of the white lines in the grating, and Lmin is the minimum luminance of the dark ones.
When modulation is defined in terms of light this way, it is often called Michelson contrast, since it takes the ratio of the illumination from the light and dark bands to measure contrast.
For example, take a square-wave grating of a given frequency (v) with a certain modulation, the inherent contrast between its dark and light regions, and image it through the lens. The modulation of the resulting image, and thus the contrast performance of the lens, is measured at that grating frequency (v).
The modulation transfer function (MTF) is defined as the modulation of the image, Mi, divided by the modulation of the stimulus (the object), Mo, as shown in the following equation:

MTF(v) = Mi / Mo
USAF test gratings are printed on 98%-bright laser paper, and black laser toner has a reflectance of about 10%, which by the formula above gives Mo ≈ 0.81. But since film has a more limited dynamic range than the human eye, it can safely be assumed that Mo is essentially 100%, or 1. The formula above then reduces to the simpler equation:

MTF(v) = Mi
Thus, the MTF of the lens at a given grating frequency (v) is simply the measured modulation of the grating image (Mi) when it is photographed through the lens onto film.
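A short sketch of the Michelson modulation and the resulting MTF, under the simplifying assumption Mo ≈ 1 used above; the function names and the sample measured modulation are illustrative:

```python
def michelson_modulation(l_max: float, l_min: float) -> float:
    """Michelson contrast: (Lmax - Lmin) / (Lmax + Lmin)."""
    return (l_max - l_min) / (l_max + l_min)

def mtf(image_modulation: float, object_modulation: float = 1.0) -> float:
    """MTF at a given frequency: image modulation over object modulation.

    With a high-contrast target (Mo = 1), the MTF is simply the
    measured image modulation.
    """
    return image_modulation / object_modulation

# 98%-bright paper vs. ~10%-reflectance toner
m0 = michelson_modulation(0.98, 0.10)  # ~0.81
print(mtf(0.45, m0))                   # 0.45 is an illustrative measured value
```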
Microscope resolution
The resolution of a microscope objective is the shortest distance between two separate points in the eyepiece's field of view that can still be distinguished as distinct objects.
If two points are closer together than this resolution, they will appear fuzzy and their positions will be uncertain. A microscope may offer high magnification, but if its lenses are of poor quality, the resulting poor resolution will degrade the image.
Below is the Abbe equation, in which the resolution d of the microscope objective equals the wavelength of the light used divided by twice the numerical aperture of the lens:

d = λ / (2 × NA)
Several factors influence microscope resolution. An optical microscope set to a high magnification can produce an image that is blurry and yet still at the maximum resolution of the lens.
The numerical aperture (NA) of the lens affects resolution. It is a number that characterizes the lens's ability to gather light and resolve a point at a fixed distance from the lens. The smallest point the lens can resolve is proportional to the wavelength of the collected light divided by the numerical aperture, so a larger NA corresponds to a greater ability of the lens to distinguish a fine point in the field of view. The numerical aperture also depends on the degree of optical aberration correction.
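As a numeric sketch of the Abbe limit defined above (the wavelength and NA values here are only examples):

```python
def abbe_resolution_um(wavelength_nm: float, numerical_aperture: float) -> float:
    """Abbe diffraction limit d = lambda / (2 * NA), returned in micrometers."""
    return (wavelength_nm / 1000.0) / (2.0 * numerical_aperture)

# Example: green light (550 nm) with a 0.95 NA dry objective
print(f"{abbe_resolution_um(550, 0.95):.3f} um")  # ~0.289 um
```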
Telescope Lens Resolution
Like a light funnel, a telescope collects light in proportion to the area of its aperture; this light gathering is the job of the main objective, whether lens or mirror.
The diameter of the dark-adapted pupil of the human eye is a little less than 1 centimeter, while the diameter of the largest optical telescope is 1,000 centimeters (10 meters), so the largest telescope collects about a million times more light than the human eye.
Telescopes therefore see fainter objects than people can, and they carry instruments that accumulate light on electronic detectors for many hours.
There are two main types of telescope: lens-based refractors and mirror-based reflectors. Large telescopes are reflectors because mirrors do not need to be transparent. Telescope mirrors are among the most precise structures ever made: the allowed surface error is approximately 1/1000 the width of a human hair, across a 10-meter aperture.
Previously, mirrors were made from huge, thick slabs of glass so that they would not sag. Today's mirrors are thin and flexible but actively supported under computer control, or else segmented and kept aligned the same way. Besides hunting for faint objects, the astronomer's goal is also to see their fine details. The degree to which detail can be recognized is called resolution:
- Fuzzy images = poor resolution.
- Clear images = good resolution.
Due to the wave nature of light and the phenomenon called diffraction, the diameter of a telescope's mirror or lens limits its ultimate resolution. Resolution here means the smallest angular detail that can be recognized; smaller values correspond to finer image detail.
Radio telescopes must be very large to provide good resolution. The Earth's atmosphere is turbulent and blurs a telescope's images, so ground-based astronomers can rarely reach their instruments' ultimate resolution. The turbulent effect of the atmosphere on starlight is called seeing; this turbulence is what makes stars twinkle. To escape this atmospheric blurring, astronomers launch telescopes into space or place them on high mountains with stable atmospheric conditions.
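The diffraction limit described above is commonly quantified with the Rayleigh criterion, θ ≈ 1.22 λ/D. The criterion itself is standard optics rather than something stated in this article, so treat this as a supplementary sketch:

```python
import math

def rayleigh_limit_arcsec(wavelength_m: float, aperture_m: float) -> float:
    """Rayleigh criterion: smallest resolvable angle ~ 1.22 * lambda / D,
    converted from radians to arcseconds."""
    theta_rad = 1.22 * wavelength_m / aperture_m
    return math.degrees(theta_rad) * 3600.0

# 550 nm light: 10 m mirror vs. ~1 cm dark-adapted eye pupil
print(rayleigh_limit_arcsec(550e-9, 10.0))  # ~0.014 arcsec
print(rayleigh_limit_arcsec(550e-9, 0.01))  # ~14 arcsec
```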
Parameter calculation examples
Canon Lens Resolution Data:
- Pixel size = 3.45 μm x 3.45 μm.
- Number of pixels (H x V) = 2448 x 2050.
- Desired field of view (horizontal) = 100 mm.
- Sensor resolution limit: 1000 / (2 × 3.45) = 145 lp/mm.
- Sensor dimensions: 3.45 × 2448 / 1000 = 8.45 mm (H); 3.45 × 2050 / 1000 = 7.07 mm (V).
- PMAG: 8.45 / 100 = 0.0845 (dimensionless).
- Object-space lens resolution: 145 × 0.0845 = 12.25 lp/mm.
In practice these calculations are somewhat involved, but they let you match an image to the sensor size, pixel format, working distance, and field of view in mm; computing these values identifies the best lens for a given image and application, as sketched below.
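Here is a minimal end-to-end sketch of the worked example, with the values copied from the list above (all names are illustrative):

```python
pixel_um = 3.45
pixels_h, pixels_v = 2448, 2050
fov_h_mm = 100.0

nyquist_lp_mm = 1000.0 / (2.0 * pixel_um)    # ~145 lp/mm at the sensor
sensor_h_mm = pixel_um * pixels_h / 1000.0   # 8.45 mm
sensor_v_mm = pixel_um * pixels_v / 1000.0   # 7.07 mm
pmag = sensor_h_mm / fov_h_mm                # 0.0845
object_lp_mm = nyquist_lp_mm * pmag          # ~12.25 lp/mm on the object

print(nyquist_lp_mm, sensor_h_mm, sensor_v_mm, pmag, object_lp_mm)
```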
Problems of modern optics
Unfortunately, doubling the size of the sensor creates additional problems for lenses. One of the main parameters affecting the cost of an imaging lens is its format: designing a lens for a larger sensor requires more individual optical components, which must be larger and held to tighter tolerances.
A lens designed for a 1-inch sensor can cost five times more than a lens designed for a ½" sensor, even if it cannot deliver the same pixel-limited resolution. This cost component must be weighed before settling on the required lens resolving power.
Today, imaging optics face greater challenges than ten years ago. The sensors they are paired with have much higher resolution requirements, format sizes are being pushed both smaller and larger at the same time, and pixel sizes continue to shrink.
In the past, optics rarely limited an image processing system; today they do. Where a typical pixel size was once about 9 microns, sizes around 3 microns are now far more common.