How to evaluate the sensitivity of the camera
Basic camera specifications such as frame rate, resolution, and interface are easy to compare. Comparing imaging performance, such as quantum efficiency, temporal dark noise, and saturation capacity, is not so simple: first, we need to understand what these different measurements really mean.
What is quantum efficiency? Is it measured at the peak wavelength or at a specific wavelength? What is the difference between SNR and dynamic range? This white paper answers these questions one by one and shows how to compare and select cameras using imaging-performance data measured according to the EMVA 1288 standard.
The EMVA 1288 standard defines which aspects of camera performance to measure, how to measure them, and how to present the results in a unified way. The first part of this white paper covers the imaging performance of the image sensor, starting with some basic concepts that are essential for understanding how an image sensor converts light into a digital image and thereby determines sensor performance. Figure 1 highlights these concepts using a single pixel as an example.
Figure 1: From photons to gray levels and some related concepts.
(Figure 1 labels: Light; Photons per μm²; Saturation capacity; Pixel size; Well; Shot noise; Number of photons; Quantum efficiency; Sensor; Temporal dark noise; Signal; Gain; Grey scale)
First, we need to understand the inherent noise of light itself. Light is composed of discrete particles, photons, generated by a light source. Because a light source generates photons randomly, the observed light intensity is itself noisy. Physics tells us that the noise observed in the light intensity equals the square root of the number of photons generated by the light source. This type of noise is called shot noise.
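As a quick numerical illustration of this relationship (not part of the white paper itself): because shot noise grows only as the square root of the photon count, collecting 100x more light improves the photon-limited SNR by only 10x.

```python
import math

def shot_noise(mean_photons: float) -> float:
    """Shot noise (standard deviation of the photon count) = sqrt(mean count)."""
    return math.sqrt(mean_photons)

def photon_snr(mean_photons: float) -> float:
    """SNR of the light itself, limited only by shot noise: N / sqrt(N) = sqrt(N)."""
    return mean_photons / shot_noise(mean_photons)

# Collecting 100x more photons improves the photon-limited SNR only 10x.
print(shot_noise(100))       # 10.0
print(photon_snr(100))       # 10.0
print(photon_snr(10_000))    # 100.0
```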
It should be noted that the number of photons observed at a pixel depends on both the exposure time and the light intensity; in this article, the photon count stands in for the combination of the two. Similarly, the relationship between pixel size and the light-gathering ability of the sensor is non-linear, because the photosensitive area scales with the square of the pixel size.
The first step in digitizing light is to convert photons into electrons. This article does not cover how the sensor performs this conversion, only how its efficiency is measured. The ratio of electrons generated to photons incident during digitization is called quantum efficiency (QE). The sensor illustrated in Figure 1 has a QE of 50%: six photons fall on the sensor and three electrons are generated.
Before being digitized, the electrons are stored in the pixel, in what is called a well. The number of electrons that can be stored in a well is called the saturation capacity or well depth. If a well receives more electrons than its saturation capacity, the excess electrons are not stored.
Once the pixel has finished collecting light, the charge in the well is measured; this measurement is called the signal. In Figure 1 the signal measurement is depicted as a dial gauge. The error associated with this measurement is called temporal dark noise or read noise.
Finally, the gray level is determined by converting the signal value (in electrons) into 16-bit analog-to-digital units (ADUs), i.e. pixel values. The ratio between the analog signal value and the digital gray level is called gain and is measured in electrons per ADU. The gain parameter defined by the EMVA 1288 standard should not be confused with the gain applied during the analog-to-digital conversion process.
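The whole chain described above, photons to electrons to gray level, can be sketched in a few lines. The parameter values below are illustrative only (the 50% QE loosely echoes the Figure 1 example; well depth, gain, and bit depth are assumed round numbers, not measurements of any real sensor).

```python
def photons_to_gray_level(photons, qe=0.5, well_depth=10_000,
                          gain_e_per_adu=0.3, max_adu=65_535):
    """Convert a photon count to a digital gray level in ADUs.

    qe             : quantum efficiency (electrons per photon)
    well_depth     : saturation capacity in electrons
    gain_e_per_adu : EMVA 1288 gain, electrons per ADU
    max_adu        : full scale of a 16-bit ADC
    """
    electrons = min(photons * qe, well_depth)   # well clips at saturation
    return min(round(electrons / gain_e_per_adu), max_adu)

print(photons_to_gray_level(6))      # 6 photons -> 3 e- -> 10 ADU
print(photons_to_gray_level(10**6))  # well saturates, output is clipped
```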
When evaluating camera performance, signal-to-noise ratio and dynamic range are commonly cited. Both measurements consider the ratio between the signal and the camera's noise; the difference is that dynamic range considers only temporal dark noise, while the signal-to-noise ratio also includes shot noise, combined with the dark noise as a root sum of squares.
The absolute sensitivity threshold is the number of photons at which the signal equals the noise generated by the sensor. This is an important metric because it represents the theoretical minimum amount of light needed to observe any meaningful signal.
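This threshold has a simple closed form under the noise model above: in electrons, the signal S must satisfy S = sqrt(dark_noise² + S), since shot noise in electrons is sqrt(S); solving the quadratic and dividing by QE gives the photon count. The sketch below uses illustrative sensor parameters, not values from any datasheet.

```python
import math

def absolute_sensitivity_threshold(qe: float, dark_noise_e: float) -> float:
    """Photons at which signal equals total noise (SNR = 1).

    Solve S = sqrt(dark^2 + S) for the signal S in electrons,
    i.e. S^2 - S - dark^2 = 0, then convert back to photons via QE.
    """
    s = (1 + math.sqrt(1 + 4 * dark_noise_e**2)) / 2
    return s / qe

# Illustrative sensors: higher QE and lower read noise both lower the threshold.
print(absolute_sensitivity_threshold(qe=0.5, dark_noise_e=10))
print(absolute_sensitivity_threshold(qe=0.7, dark_noise_e=7))
```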
Comparing the low-light performance of cameras
In this white paper, we consider applications such as license plate recognition (LPR) or optical character recognition (OCR). Such applications usually use black-and-white imaging, and the amount of light the camera can collect may be limited by a short exposure time. Determining the resolution, frame rate, and field of view needed to solve the imaging problem is relatively simple; determining whether the camera has sufficient imaging performance is harder.
This challenge is usually solved through trial and error. Consider an example: a vision-system designer believes that a 1/4" CCD VGA camera running at 30 FPS is suitable for the type of application mentioned above. Initial tests show that when the object is stationary, the camera has sufficient sensitivity at a 10 ms exposure. A simple example is shown in Figure 2: the characters B, 8, D, and 0 are easily confused by vision algorithms. The image at the upper left, taken with the 1/4" CCD camera, is suitable for image processing.
Figure 2: Images taken by 1/4" and 1/2" CCD cameras at different exposure times
(Figure 2 labels: at 10 ms shutter; at 5 ms shutter; at 2.5 ms shutter)
However, once the object starts to move, the exposure time must be reduced, and the camera no longer provides useful information: the letters "B" and "D" can no longer be distinguished from the numerals "8" and "0". The centre-left and bottom-left images in Figure 2 show this degradation; the image taken with the 1/4" CCD camera at a 2.5 ms exposure is clearly unsuitable for image processing.
This example assumes that a large depth of field is not required, so the lens is already at its minimum f-number; in other words, it is not possible to collect more light by opening the aperture further.
The designer therefore needs to consider a different camera. The question is whether choosing a different camera would actually improve system performance. A larger sensor is widely regarded as a good way to improve low-light performance, so a 1/2" sensor would seem a natural choice. Rather than resorting to trial and error again, it is useful to consult the cameras' EMVA 1288 imaging-performance data.
The EMVA 1288 data show that the 1/4" CCD sensor has better quantum efficiency and lower noise, while the 1/2" CCD sensor has larger pixels and a larger saturation capacity. The rest of this section shows how to determine whether the 1/2" camera actually performs better.
Figure 3 compares the signal as a function of light density (photons/μm²) for the 1/4" and 1/2" cameras. The signal can be determined from the light density by the following formula:

Signal = Light density × (Pixel size)² × Quantum efficiency
An important assumption in this article is that the two cameras use the same settings and lenses with the same field of view and the same f-number.
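The signal relationship (signal = light density × pixel area × QE, clipped at saturation) can be turned into a small comparison. The pixel sizes below match the published values for these sensor formats (about 5.6 μm for the 1/4" VGA CCD and 9.9 μm for the 1/2" VGA CCD); the QE and saturation-capacity numbers are assumed round values for illustration, not EMVA 1288 measurements.

```python
def signal_electrons(light_density, pixel_size_um, qe, sat_capacity_e):
    """Signal = light density x pixel area x QE, clipped at saturation capacity."""
    return min(light_density * pixel_size_um**2 * qe, sat_capacity_e)

# Pixel sizes from the sensor formats; QE and saturation capacity are assumed.
cam_quarter = dict(pixel_size_um=5.6, qe=0.70, sat_capacity_e=14_000)  # 1/4" CCD
cam_half    = dict(pixel_size_um=9.9, qe=0.55, sat_capacity_e=25_000)  # 1/2" CCD

for density in (5, 50, 500):
    s4 = signal_electrons(density, **cam_quarter)
    s2 = signal_electrons(density, **cam_half)
    print(f"{density:>4} photons/um^2: 1/4\" -> {s4:8.0f} e-, 1/2\" -> {s2:8.0f} e-")
```

Even with lower QE, the larger pixel collects more light at every density, which is the effect Figure 3 illustrates.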
(Figure 3 labels: Signal; Light density (photons/μm²); Saturation capacity; 1/2" camera signal; 1/4" camera signal)
Figure 3 shows that at the same light density, the 1/2" sensor produces a higher signal. It can also be observed that both the 1/4" and 1/2" sensors essentially reach their saturation capacity at a light density of about 700 photons/μm², but the 1/2" sensor clearly has the higher saturation capacity.
In the applications considered in this white paper, camera comparisons need to be performed at low light levels. Therefore, it is important to consider the noise level.
Figure 4 shows the signal and the noise at low light levels. The noise in Figure 4 includes both temporal dark noise and shot noise, combined by the following formula:

Noise = √(Temporal dark noise² + Shot noise²)
(Figure 4 annotation: the 1/2" camera reaches the absolute sensitivity threshold at a lower light density. Labels: Signal; Light density (photons/μm²); Saturation capacity; 1/2" camera signal; 1/4" camera signal; 1/2" camera noise; 1/4" camera noise)
Figure 4 shows that the 1/2" sensor reaches the absolute sensitivity threshold at a slightly lower light density than the 1/4" sensor. To determine which camera performs better in low-light applications, the more important measurement is the signal-to-noise ratio (SNR).
Figure 5 shows the SNR of the two cameras as a function of light density.
(Figure 5 labels: Signal-to-noise ratio (linear scale); Light density (photons/μm²); 1/2" camera SNR; 1/4" camera SNR)
Given that the 1/2" sensor has a higher SNR at low light levels, theory predicts that the 1/2" camera should perform better than the 1/4" camera in low light.
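The SNR curves of Figures 4 and 5 can be sketched numerically from the same signal and noise formulas. As before, the pixel sizes follow the sensor formats, while the QE and temporal-dark-noise values are assumptions for illustration only.

```python
import math

def snr(light_density, pixel_size_um, qe, dark_noise_e):
    """SNR = signal / sqrt(dark_noise^2 + shot_noise^2), shot noise = sqrt(signal)."""
    signal = light_density * pixel_size_um**2 * qe   # electrons
    noise = math.sqrt(dark_noise_e**2 + signal)      # shot_noise^2 = signal
    return signal / noise

# Assumed illustrative parameters for the two cameras.
quarter = dict(pixel_size_um=5.6, qe=0.70, dark_noise_e=8.0)   # 1/4" CCD
half    = dict(pixel_size_um=9.9, qe=0.55, dark_noise_e=12.0)  # 1/2" CCD

for density in (0.5, 2, 10):
    print(f"{density:>4} photons/um^2:"
          f" 1/4\" SNR = {snr(density, **quarter):5.2f},"
          f" 1/2\" SNR = {snr(density, **half):5.2f}")
```

With these numbers the larger-pixel camera wins at every low light density, despite its lower QE and higher read noise, which mirrors the shape of Figure 5.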
The images in Figure 2 confirm this: at a 2.5 ms exposure, the 1/2" sensor still captures the shapes of the characters, while in the image from the 1/4" sensor the characters are difficult to distinguish. The 1/2" sensor therefore performs better, and the practical results agree with the theory.
It should be noted that the method outlined in this white paper is very useful for understanding how one camera is likely to perform relative to another. It helps eliminate cameras that are unlikely to deliver the required performance; the final test of a camera's suitability, however, must be carried out in the actual application.
Comparison of a traditional CCD sensor and a modern CMOS sensor
Now, we will compare the performance of a traditional CCD sensor and a modern CMOS sensor, both under low-light imaging conditions and in scenes with a wide range of lighting conditions.
It was shown above that a camera using the Sony ICX414 1/2" VGA CCD performs better in low light than one using the Sony ICX618 1/4" VGA CCD. We now compare the 1/2" VGA CCD with the latest Sony Pregius IMX249 1/1.2" 2.3-megapixel global-shutter CMOS sensor.
Cameras using these two sensors cost roughly the same, about 400 euros; a VGA region of interest on the CMOS camera has an optical size close to that of the 1/4" camera; and at VGA resolution the frame rates of the two cameras are also similar.
The cameras' EMVA 1288 data show that the IMX249 CMOS sensor has distinctly better quantum efficiency, lower noise, and higher saturation capacity. The ICX414 CCD sensor, on the other hand, has larger pixels, which was the key parameter in the previous example.
Figure 6: SNR of the ICX414 CCD sensor and the IMX249 CMOS sensor under low-light conditions
(Figure 6 annotation: the IMX249 CMOS sensor reaches the absolute sensitivity threshold at a lower light density. Labels: Signal-to-noise ratio (linear scale); Light density (photons/μm²))
Figure 7: Images captured by the ICX414 CCD sensor and the IMX249 CMOS sensor at different exposure times
(Figure 7 labels: at 2.5 ms shutter; at 1 ms shutter)
Because the saturation capacities of the two sensors differ, the comparison at higher light levels is more interesting. Figure 8 shows the signal as a function of light density over the full range. It can be observed that the ICX414 CCD sensor reaches its saturation capacity at a light density of about 700 photons/μm², while the IMX249 CMOS sensor does not saturate until the light density exceeds 1200 photons/μm².
(Figure 8 labels: Signal; Light density (photons/μm²); Saturation capacity)
The first conclusion is that the ICX414 CCD sensor produces a brighter image than the IMX249 CMOS sensor. If this is not obvious from the figure, imagine an image produced at a light density of about 700 photons/μm². With the ICX414 CCD sensor the image would be at the top of its gray-level range and likely saturated, while the image from the IMX249 CMOS sensor would be at just over 50% of its maximum brightness. This conclusion matters because a tempting way to evaluate camera sensitivity is simply to look at image brightness, on the assumption that a brighter image means a better camera. That assumption is wrong: in this example the opposite holds, and the camera producing the darker image actually performs better.
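The brightness argument can be made concrete with the saturation densities quoted above: at a given light density, each sensor's gray level depends on what fraction of its saturation capacity is filled. This is a deliberately simplified sketch that assumes the gray level is proportional to the collected charge up to saturation.

```python
def fraction_of_saturation(light_density, sat_density):
    """Fraction of full-scale gray level, assuming the gray level is
    proportional to collected charge up to the saturation point."""
    return min(light_density / sat_density, 1.0)

# Saturation light densities quoted in the text (photons/um^2).
ICX414_SAT = 700    # CCD saturates here
IMX249_SAT = 1200   # CMOS saturates later

density = 700
print(f"ICX414 CCD : {fraction_of_saturation(density, ICX414_SAT):.0%} of full scale")
print(f"IMX249 CMOS: {fraction_of_saturation(density, IMX249_SAT):.0%} of full scale")
```

At 700 photons/μm² the CCD is pinned at full scale while the CMOS sensor sits just under 60% of its range, so the darker image is the one with headroom left.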
The second conclusion is that the IMX249 CMOS sensor produces images suitable for further processing across a much wider range of lighting conditions. Figure 9 shows the two cameras imaging the same scene. (The darker part of the image has been enhanced for display purposes; the underlying data were not modified.) The ICX414 CCD saturates in the bright areas of the scene, and heavy noise in the dark areas leaves the characters unreadable; in contrast, the IMX249 CMOS sensor renders the characters clearly in both the bright and dark areas.
Finally, we can conclude that the latest global-shutter CMOS technology is becoming a viable alternative to CCD technology in machine vision applications. CMOS sensors are not only cheaper than CCDs, with higher frame rates, comparable resolution, and no smear or blooming artifacts, but they are also beginning to surpass CCDs in imaging performance.
Conclusion
In this article, we reviewed several key concepts used in evaluating camera performance, introduced the EMVA 1288 standard, and applied its results to compare camera performance under various lighting conditions. There are many other aspects to consider when evaluating camera performance. For example, quantum efficiency changes drastically with the wavelength of the light source, so a camera that performs well under a 525 nm light source may perform much worse when the light source is switched to the near-infrared (NIR) band. Similarly, fluorescence imaging and astronomical imaging often use long exposure times; in that case, dark current, a noise source with an important impact under low-light conditions, must also be considered.