2022-12-15
A Target Detection Method to Avoid the Influence of Illumination
Author:
STREAMARY
[Patent Abstract]
A target detection method to avoid the influence of light.
It includes the following steps in sequence:
Step 101: establish the background image using a statistical method;
Step 102: calculate and output the gradient of the current frame image and the gradient of the background image; the gradients include a horizontal gradient and a vertical gradient;
Step 103: compare the direction and amplitude of the gradient of the current frame image with those of the gradient of the background image, and extract and output the foreground contour accordingly;
Step 104: fill the extracted foreground contour to obtain foreground clumps, and filter the noise to output the target.
The target detection method provided by the invention can accurately detect the target while avoiding the influence of light, and effectively solves the problem of inaccurate and unreliable target detection caused by illumination changes.
[Patent Description]
A target detection method to avoid the influence of light
[Technical field]
[0001] The invention belongs to the technical field of image processing and video monitoring, and in particular relates to a target detection method to avoid the influence of light.
[Background technology]
[0002] Moving object detection is the basis of intelligent video surveillance technology, and its results directly affect the missed-detection rate and false alarm rate of subsequent event detection (such as intrusion, object left behind, object removal, wrong-way vehicle driving, etc.), so it has received widespread attention. However, in practical applications illumination changes occur frequently and greatly affect the accuracy and reliability of moving target detection. It is therefore necessary to study target detection methods that avoid the influence of illumination.
[0003] There are two main types of target detection methods that avoid the influence of light. The first type is pixel-based. Generally, an illumination change only alters the brightness of a pixel while its color changes little; methods of this type rely on that principle and analyze pixel values in HSI space to identify illumination changes. However, many real-world situations do not satisfy this assumption, and in most outdoor scenes neither the background nor the target carries useful color information, so such methods perform poorly in practice. The second type is region-based. If the scene retains a certain contrast before and after an illumination change, the change will not alter the texture-edge features of the image; region-based methods rely on this principle, and if the edges of a foreground region match those of the background, the region is treated as a false foreground caused by the illumination change. However, at night the assumption that "the scene has a certain contrast before and after the light change" may not hold, and such methods fail. In addition, if a real object is present in the illumination-change area at the same time, the edge matching also fails.
[0004] To sum up, there is an urgent need for a simple and effective target detection method that avoids the influence of light.
[Content of the invention]
[0005] In order to solve the above problems, the purpose of the invention is to provide a target detection method that avoids the influence of light.
[0006] In order to achieve the above purpose, the target detection method provided by the invention includes the following steps in sequence:
[0007] Step 101: establish a background image using a statistical method (one possible statistical model is sketched after this list);
[0008] Step 102: calculate and output the gradient of the current frame image and the gradient of the background image; the gradients include a horizontal gradient and a vertical gradient;
[0009] Step 103: Compare the direction and amplitude of the gradient of the current frame image and the gradient of the background image, and extract and output the foreground contour accordingly;
[0010] Step 104: Fill the extracted foreground contour to obtain foreground clumps, and filter the noise to output the target.
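The patent specifies only that the background image of step 101 is built with a "statistical method" and does not name a particular model. The following Python sketch is one possible interpretation, not the patented procedure itself: a running-average background maintained with OpenCV's accumulateWeighted. The learning rate alpha and the input file name are illustrative assumptions.

```python
import cv2
import numpy as np

def update_background(frame_gray, background, alpha=0.01):
    """Update a running-average background model (one possible 'statistical method').

    frame_gray -- current frame as a float32 grayscale image
    background -- float32 accumulator holding the background estimate
    alpha      -- learning rate; an assumed value, not specified in the patent
    """
    # accumulateWeighted computes: background = (1 - alpha) * background + alpha * frame_gray
    cv2.accumulateWeighted(frame_gray, background, alpha)
    return background

# Usage sketch: initialise the model from the first frame, then update it per frame.
cap = cv2.VideoCapture("surveillance.mp4")   # hypothetical input video, for illustration only
ok, frame = cap.read()
background = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
while ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
    background = update_background(gray, background)
    ok, frame = cap.read()
cap.release()
```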
[0011] In step 102, the horizontal and vertical gradients of the current frame image and of the background image are calculated with gradient operators.
[0012] In step 103, comparing the direction and amplitude of the gradient of the current frame image with those of the gradient of the background image, and extracting and outputting the foreground contour accordingly, includes the following steps (a code sketch of steps 102 and 1031 to 1034 follows the threshold ranges below):
[0013] Step 1031: according to the horizontal and vertical gradients of the current frame image and of the background image output in step 102, calculate the gradient amplitude A1 and gradient direction θ1 of each pixel in the current frame image, and the gradient amplitude A2 and gradient direction θ2 of each pixel in the background image;
[0014] Step 1032: if the gradient amplitude A1 of the pixel point (x, y) in the current frame image and the gradient amplitude A2 of the pixel point (x, y) in the background image are both ≥ the first threshold T1, go to step 1033; if A1 and A2 are both ≤ the second threshold T2, the pixel point (x, y) is regarded as a noise point; otherwise |A1 - A2| is calculated, and if |A1 - A2| ≥ the third threshold T3, the pixel point (x, y) is regarded as a foreground point;
[0015] Step 1033: calculate the absolute difference |θ1 - θ2| between the gradient direction θ1 of the pixel point (x, y) in the current frame image and the gradient direction θ2 of the pixel point (x, y) in the background image; if |θ1 - θ2| ≥ the fourth threshold T4, the pixel point (x, y) is regarded as a foreground point;
[0016] Step 1034: extract all pixel points regarded as foreground points to obtain the foreground contour.
[0017] The first threshold T1 ∈ [8, 12], the second threshold T2 ∈ [3, 5], the third threshold T3 ∈ [4, 6], and the fourth threshold T4 ∈ [18°, 22°].
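For illustration only, the sketch below combines step 102 with steps 1031 to 1034. It uses the Sobel operator as an assumed gradient operator and the mid-points of the threshold ranges in paragraph [0017] (T1 = 10, T2 = 4, T3 = 5, T4 = 20°) as defaults; the angle wrap-around handling is likewise an assumption where the patent is silent.

```python
import cv2
import numpy as np

def foreground_contour(frame_gray, background_gray, T1=10.0, T2=4.0, T3=5.0, T4=20.0):
    """Extract a binary foreground-contour mask by comparing the gradient
    amplitude and direction of the current frame and the background image
    (steps 1031-1034). Sobel is used as the gradient operator; the default
    thresholds are the mid-points of the ranges in paragraph [0017]."""
    # Step 102: horizontal and vertical gradients of both images.
    gx1 = cv2.Sobel(frame_gray, cv2.CV_32F, 1, 0, ksize=3)
    gy1 = cv2.Sobel(frame_gray, cv2.CV_32F, 0, 1, ksize=3)
    gx2 = cv2.Sobel(background_gray, cv2.CV_32F, 1, 0, ksize=3)
    gy2 = cv2.Sobel(background_gray, cv2.CV_32F, 0, 1, ksize=3)

    # Step 1031: gradient amplitudes A1, A2 and directions theta1, theta2 (in degrees).
    A1, theta1 = cv2.cartToPolar(gx1, gy1, angleInDegrees=True)
    A2, theta2 = cv2.cartToPolar(gx2, gy2, angleInDegrees=True)

    contour = np.zeros(frame_gray.shape[:2], dtype=np.uint8)

    # Steps 1032/1033: both amplitudes >= T1 -> compare gradient directions against T4.
    strong = (A1 >= T1) & (A2 >= T1)
    dtheta = np.abs(theta1 - theta2)
    dtheta = np.minimum(dtheta, 360.0 - dtheta)   # angle wrap-around handling (an assumption)
    contour[strong & (dtheta >= T4)] = 255

    # Step 1032: both amplitudes <= T2 -> noise; otherwise compare |A1 - A2| against T3.
    noise = (A1 <= T2) & (A2 <= T2)
    other = ~strong & ~noise
    contour[other & (np.abs(A1 - A2) >= T3)] = 255

    # Step 1034: the non-zero pixels form the foreground contour.
    return contour
```

The returned mask plays the role of the foreground contour output by step 103; the filling and noise filtering of step 104 are not shown here.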
[0018] In step 104, filling the extracted foreground contour to obtain foreground clumps and filtering noise to output the target includes the following steps:
[0019] Step 1041: fill the foreground contour output in step 103 to obtain foreground clumps;
[0020] Step 1042: calculate the difference image between the current frame image and the background image, and use the threshold segmentation method to segment the difference image to obtain the changing foreground in the difference image;