Real Time Gray Level Corner Detector

Fei Shen and Han Wang
School of Electrical & Electronic Engineering, Nanyang Technological University, Singapore 639798
[email protected], [email protected]
Abstract

In this paper, we introduce a new corner detector and compare its performance with three other popular corner detectors. The new detector assumes that a well-defined corner in a gray level image has a corner structure similar to the ideal corner structure. Experiments show that the new corner detector is accurate and efficient. Moreover, it estimates corner attributes, such as corner angle and corner orientation, without additional computational cost.
1. Introduction

As a useful 2D feature, the gray level corner is defined as the junction point of two or more straight-line edges [5]. Among the most intuitive types of features, corners are critical because they are invariant to rotation and change little under different lighting. Corners minimize the amount of data to be processed without losing the most important information in a gray level image. Corners are also important for image matching, as they can yield sharp matches [2]. The matched corners serve as input to high-level vision tasks such as 3D reconstruction, 3D object tracking and so on. Many corner detectors have been developed. They can be divided into two categories [6]: template-based corner detectors and geometry-based corner detectors. A template-based corner detector builds a set of corner templates and measures the similarity between the templates and all subwindows of a gray level image. This kind of detector is not widely used because of its high computational cost. A geometry-based corner detector relies on measuring differential geometry features of corners; detectors of this kind are widely used in all kinds of applications.
Given a digital image, the earliest geometry-based corner detectors first segment the shape, extract its boundary as a chain code, and then search for significant turnings along the boundary. This kind of corner detector also suffers from high algorithmic complexity, as multiple steps are needed; moreover, any error in the segmentation step leads the corner results far astray. Later, many geometry-based corner detectors that operate directly on the gray level image were introduced. Kitchen found that the change of gradient direction multiplied by the local gradient magnitude isolates corners well [1]. Wang and Brady [4] observed that the total curvature of the gray level image is proportional to the edge normal and inversely proportional to the edge strength. Moravec proposed an interest point detector that considers a local window in an image and determines the average change of intensity that results from shifting the window by a small amount in four directions. Harris and Stephens [2] refined the Moravec method into the well-known Plessey corner detector by estimating the autocorrelation from first-order derivatives. Most of these detectors are sensitive to noise, as they depend on image derivatives. Smith and Brady introduced a straightforward method named SUSAN that does not depend on image derivatives [5]. Given a digital image, the USAN (Univalue Segment Assimilating Nucleus) area reaches a minimum when the nucleus lies on a corner point. SUSAN is insensitive to noise and very fast, as it uses only simple operations. However, none of the above corner detectors can report other corner attributes, such as corner angle and corner orientation, which are also very important in high-level applications, without additional computation. In part 2, we give an analysis of corner structure features. The main idea of our new corner detector is introduced in part 3. After that, we show some experimental results. The final part is the conclusion.
2. Corner Structure Features

In real applications we encounter many corner types, such as L-Junctions, T-Junctions, Y-Junctions, Arrow-Junctions, X-Junctions and so on. The number of regions that contribute to the corner determines the corner type. Every region can be characterized by five parameters: the position of the corner (x0, y0), the corner orientation α, the corner angle β and the gray level intensity I. So a general region can be described as E(x0, y0, x, y, α, β, I). If the corner angle of a region is less than 180 degrees, the region can be regarded as a corner region; otherwise it is treated as a background region. A complete corner consists of at least two adjacent regions (an L-Junction); T-Junctions and other junction types consist of three or more adjacent regions. Equation 1 combines the information of all the regions in a corner neighborhood:

    Ecorner(x, y) = Σ(i=1..n) Ei(x0, y0, x, y, α, β, I)    (1)

This equation can handle any corner type. Let us look at the ideal corner model shown in figure 1.

Figure 1: Ideal L-Junction model

Figure 1 is an L-Junction model, with the gray region being the corner region and the white region being the background region. Divide the whole neighborhood into three homocentric circles C1, C2, C3 with the corner point as the center, and let S1, S2, S3 be the corner region cirques corresponding to C1, C2 and C3. It is easy to obtain the following equation:

    Si / Ci = θ / 360° < 0.5    (i = 1, 2, 3)

where θ represents the corner angle. This is the ideal case; in the digital case the situation is a little different.

Figure 2: Ideal digital L-Junction model

Figure 2 is an ideal digital corner model, with S1, S2, S3 and C1, C2, C3 having the same meaning as in figure 1. It is not difficult to obtain the following ratios:

    S1 : C1 = 0.5000
    S2 : C2 = 0.4375
    S3 : C3 = 0.4167
    θ : 360° = 0.4028

In the digital case, although Si : Ci does not equal θ : 360°, it approaches θ : 360° as i increases, and Si : Ci ≤ 0.5 holds for every i. This gives us some idea about how to detect corners.

3. Structure-based Corner Detector

We hope the neighborhood pixels of a corner point follow some simple rules, so that corner points can be reported easily. Let us look at an ideal corner model again. Though the example is an X-Junction, we will explain later that the analysis suits all kinds of junctions.
Figure 3: X-Junction model
No matter how many regions contribute to the corner, if we consider only region 1 in figure 3, the other regions can be regarded as a single background region; and if we consider any other region, the situation is the same. Assume the whole corner neighborhood is large enough. We divide it into many homocentric circles C1, C2, C3 and so on, with the corner point of region 1 as the center. Let Si be the cirque of region 1 corresponding to Ci, Mi the middle point of Si, and Ai and Bi the end points of Si. For every i-th circle, equation 2 must hold:

    Si / Ci < 0.5  and  Si−1 ≤ Si    (i > 1)    (2)
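To make the constraint concrete, here is a minimal sketch (the function names, grid size and radii are our assumptions, not from the paper) that rasterizes an ideal wedge-shaped corner region, measures each cirque Si against its circle Ci, and applies the two conditions of equation 2:

```python
import math

def cirque_lengths(theta_deg, radii=(3, 5, 7), size=41):
    """Rasterize an ideal corner (a wedge of angle theta) and, for each
    homocentric circle of radius r, count C_i (pixels lying on the circle)
    and S_i (those circle pixels that fall inside the corner region)."""
    cx = cy = size // 2
    S, C = [], []
    for r in radii:
        s = c = 0
        for y in range(size):
            for x in range(size):
                if abs(math.hypot(x - cx, y - cy) - r) < 0.5:   # on circle i
                    c += 1
                    ang = math.degrees(math.atan2(y - cy, x - cx)) % 360
                    if ang < theta_deg:                         # inside the wedge
                        s += 1
        S.append(s)
        C.append(c)
    return S, C

def is_corner_candidate(S, C):
    """Equation 2: every cirque covers less than half of its circle,
    and the cirque lengths do not decrease with radius."""
    return (all(s / c < 0.5 for s, c in zip(S, C))
            and all(S[i - 1] <= S[i] for i in range(1, len(S))))
```

For a 90-degree wedge the ratios come out near 0.25 and the test passes; for a reflex 270-degree region the first condition fails, as it should for a background region.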
If equation 2 holds for every circle, the central point fits the requirement of the corner structure and can be regarded as a corner candidate. Equation 2 is called the Corner Structure Constraint Function. Once a pixel is confirmed as a corner candidate, we use equation 3 to compute its corner response:

    S = Σi Si    (corner candidate)
    S = 0        (otherwise)            (3)

Equation 3 is the corner response function of our new corner detector. Nonmaximum suppression is then carried out over the corner candidates to mark the corners. Ci is known for every i, so our task is to compute Si. Instead of searching the whole circle Ci, we find the midpoint of Si first; from the midpoint we compute the lengths of MiAi and MiBi separately and add them to get Si. This method has several advantages. First, we only examine the corner region, so noise in the background region does not affect the result. Second, the computational cost is very low, as no time is spent in the background region. Third, we obtain the exact position of the midpoint, which means the corner orientation can be computed without additional work. Let nj represent the similarity between the j-th pixel and the central pixel. We get:

    Si = |MiAi| + |MiBi| = Σ(j=Mi..Ai) nj + Σ(j=Mi..Bi) nj    (4)

Assuming we have already obtained Mi−1, we use equation 5 to initialize Mi:

    Mi = 1.5 · Mi−1    (5)

From the initial Mi, we check the pixels of the i-th circle in two directions, clockwise and anticlockwise. The last pixels that satisfy equation 6 are taken as Ai and Bi:

    |I − I0| < thresh1    (6)

In equation 6, I0 represents the gray level of the central pixel; the default value of thresh1 is 25. After Ai and Bi are found, we use equation 7 to correct the initial Mi:

    Mi = Mi−1 + (Ni(M→A) − Ni(M→B)) / 2    (7)

In equation 7, Ni(M→A) is the number of pixels that satisfy equation 6 from M to A. The similarity between two gray level pixels is given by equation 8:

    nj = exp(−((Ij − I0) / thresh2)^6)    (8)

In equation 8, thresh2 is another threshold to be set by the user; its default value is 15. This completes the algorithm. In practice the mask size is important: a small mask is somewhat sensitive to strong edges, while a big mask reports only significant corners and ignores small ones. Usually a 7×7 mask is suitable for most cases, as it keeps most corner features and offers some resistance to strong edges. Si and the coordinates of Mi on the outermost circle are used to estimate the corner angle and corner orientation.
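As a rough illustration of the inner loop (equations 4 to 8), the sketch below works on a single circle flattened into a list of intensities ordered by angle. The function names, the list representation, and the index-based midpoint correction are our assumptions, not the authors' implementation:

```python
import math

def similarity(I_j, I_0, thresh2=15):
    # Equation 8: smooth similarity between a circle pixel and the centre pixel.
    return math.exp(-(((I_j - I_0) / thresh2) ** 6))

def arc_scan(circle, I_0, mid, thresh1=25, thresh2=15):
    """Scan the i-th circle (a list of intensities ordered by angle) in both
    directions from the midpoint estimate, stopping when equation 6 fails.
    Returns S_i (equation 4) and a corrected midpoint index in the spirit
    of equation 7."""
    n = len(circle)

    def scan(step):
        count, s, k = 0, 0.0, mid
        while count < n and abs(circle[k % n] - I_0) < thresh1:  # equation 6
            s += similarity(circle[k % n], I_0, thresh2)
            count += 1
            k += step
        return count, s

    n_a, s_a = scan(+1)   # towards A (clockwise)
    n_b, s_b = scan(-1)   # towards B (anticlockwise)
    # The midpoint pixel is visited by both scans, so subtract it once.
    S_i = s_a + s_b - similarity(circle[mid % n], I_0, thresh2)
    new_mid = mid + (n_a - n_b) // 2   # midpoint correction (equation 7)
    return S_i, new_mid
```

On a circle of 16 pixels whose arc of 8 pixels matches the centre intensity, the scan reports S_i = 8 and, started from a biased midpoint estimate, moves the midpoint toward the centre of the arc.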
4. Experimental Results

In this part we examine the performance of four corner detectors: our new corner detector (SCD), the Wang and Brady corner detector (WANG-BRADY), the Plessey corner detector (PLESSEY) and the SUSAN corner detector. The algorithms were tested and compared on three criteria:

Detection: the detector should find even very subtle corners while ignoring noise.
Localization: corners should be detected as close as possible to their true locations.
Complexity: speed is a prime requirement for real-time jobs such as robot navigation; low algorithm complexity allows a more automated and faster implementation.

First, we compare the four corner detectors on a synthetic image. This image contains many corner types, including L-Junctions, Y-Junctions, T-Junctions, Arrow-Junctions and X-Junctions, and it has been widely used to evaluate corner detectors. Figures 4 a) to d) show the corner maps of the synthetic image produced by SCD, WANG-BRADY, PLESSEY and SUSAN respectively. From the results we can see that the new corner detector performed best on this image: all junctions were correctly detected, as expected, and no false corners were reported. The Wang and Brady detector missed one corner, at the obtuse angle of the triangle; the angle is very close to 180 degrees, so the point appears as an edge point rather than a corner. It also reported spurious corners at very sharp angles, because the first-order derivative operators perform badly there. This shows the inability of WANG-BRADY to detect corners whose angle is very large or very small. Although the Plessey detector reported all the corners, it marked three corners wrongly. Moreover, it suffers large deviations in corner localization: at the T-Junctions in the lower part of the ladder, the localization deviation is as large as 4 pixels, caused by the smoothing of its first-order derivatives. SUSAN performs quite well too, but it missed the X-Junction in the middle of the square at the bottom of the image. In this X-Junction, the gray level of the upper-left region equals that of the lower-right region, and the gray level of the upper-right region equals that of the lower-left region; in this situation the SUSAN response at the central pixel looks like an edge point instead of a corner point. This shows that SUSAN cannot work well at some X-Junctions.

To assess the detection of additional corner attributes, we chose the square in the middle of the synthetic image and output the angle and orientation results, shown in table 1. This square contains six well-defined corners: two L-Junctions, one T-Junction, one Y-Junction and two Arrow-Junctions. The corner positions are all correctly detected, with no deviation. The corner angles are reported 15 degrees higher than the accurate values, because a 7×7 mask is not big enough to eliminate the influence of digitization. Corner orientations are reported correctly; when the corner angle is very small there are small errors in the detected orientation, but they are negligible.

We also tested the corner detectors on a real image, a house picture corrupted by considerable noise and texture. The result of our new corner detector is shown in figure 5 a): the new detector filters the noise and textures effectively, keeping the main corners detected while reporting the fewest noisy corners. The results of the other three corner detectors are shown in figures 5 b) to d).
To compare computational complexity, we assume that in a real image 75% of pixels have a low gradient and that, of the remaining 25%, 20% are edge pixels and 5% are corners. We take the number of additions and multiplications needed per pixel as the complexity of a corner detector. The comparison is shown in table 2. The computational cost of the new corner detector is the lowest: it needs only 26 additions and no multiplications per pixel. The Wang and Brady and SUSAN corner detectors have almost the same complexity, but SUSAN needs only 0.75 multiplications per pixel while WANG-BRADY needs 7.25, so SUSAN is slightly faster. The Plessey corner detector is the most time-consuming, needing 117 operations per pixel.
5. Conclusion

In this paper, a novel corner detector has been introduced. It assumes that the neighborhood of a corner point in a gray level image fits the structure functions well. The neighborhood of a pixel is divided into several stages, and each stage is checked against the Corner Structure Constraint Function. When a corner is marked, the structure of the corner point is also made explicit; taking advantage of this, the new corner detector estimates the corner angle and orientation without additional computational cost. Experimental results have shown that the method has good detection performance and excellent localization performance, and that it is faster than other widely used corner detectors.
References

1. L. Kitchen and A. Rosenfeld, "Gray level corner detection", Pattern Recognition Letters, vol. 1, pp. 95-102, 1982.
2. C. Harris and M. Stephens, "A Combined Corner and Edge Detector", in Proc. 4th Alvey Vision Conference, pp. 189-192, 1988.
3. A. Singh and M. Shneier, "Gray level corner detection: a generalization and a robust real time implementation", Computer Vision, Graphics, and Image Processing, vol. 51, pp. 54-69, 1990.
4. H. Wang and M. Brady, "Real-time corner detection algorithm for motion estimation", Image and Vision Computing, vol. 13, no. 9, pp. 695-703, 1995.
5. S. M. Smith and M. Brady, "SUSAN -- a new approach to low level image processing", International Journal of Computer Vision, vol. 23, no. 1, pp. 45-78, 1997.
6. Z. Zheng, H. Wang and E. K. Teoh, "Analysis of gray level corner detection", Pattern Recognition Letters, vol. 20, pp. 149-162, 1999.
Figure 4: corner maps of the synthetic image using: (a) SCD; (b) WANG-BRADY; (c) PLESSEY; (d) SUSAN
              Experimental Result          |          Accurate Value
   x      y     angle   orientation        |   x      y     angle   orientation
  140    115     45.0       -6.9           |  140    115     30.0     -15.0
  140    145    105.0       45.0           |  140    145     90.0      45.0
  157    125    135.0       90.0           |  157    125    120.0      90.0
  157    145    105.0      135.0           |  157    145     90.0     135.0
  174    115     45.0     -173.1           |  174    115     30.0    -165.0
  175    145    105.0      135.0           |  175    145     90.0     135.0

  Average deviation:  x = 0,  y = 0,  angle = 15.0,  orientation = 2.7

Table 1: attributes reported by the new corner detector on the synthetic image
Figure 5: corner maps of the real image obtained by: (a) SCD; (b) WANG-BRADY; (c) PLESSEY; (d) SUSAN
Detector       Additions   Multiplications   Operations/pixel
SCD              26.00          0.00                26
WANG-BRADY       24.75          7.25                32
PLESSEY          95.00         22.00               117
SUSAN            32.25          0.75                33

Table 2: Algorithm complexity analysis of the four corner detectors