FINAL PROJECT EG 6362G Computer Vision and Pattern Recognition

“STOP Sign Detection and Recognition Using Color and Gray Scale Modeling”

Submitted by: Ashutosh Jaiswal (260859)

Abstract

There are many different applications in the field of pattern recognition, which covers a very broad area of image processing, and numerous methodologies are used depending on the application. One of the most common applications is the detection and recognition of sign boards, which is mostly used in automated robotic vehicles. This project presents an approach for the detection and recognition of stop signs. Both color and gray scale models are used to make the recognition process as accurate as possible and to reach an optimal balance in the result. The stop sign is detected by means of rules that restrict the color and shape of the sign board. It is then recognized by applying the Hough transform, which is a structural recognition approach. The number of sides of the stop sign and its color are used as the primary feature set. The method can easily be adapted to other classes of signs with some variation in the recognition process.

1. Introduction

The high variance of sign appearance has made the detection and recognition of road signs a computer vision problem. There are two main approaches in this field: color-based and gray scale based sign recognition. The color-based approach reduces false positive results in the recognition process, whereas gray scale methods concentrate on the geometry of the model to recognize it [1]. In color-based studies the most common way of detection is segmentation by thresholding in color space. In [1] a gray scale sign modeling approach is given in which three main steps are carried out: (a) detection, (b) model matching and (c) recognition. The approach uses the gray scale image to extract the region of interest (ROI), and some pre-processing steps are then carried out to ease the recognition process.
Finally, recognition is done using template matching with images stored in a database. In [2], candidate regions are first identified by color and pruned using shape criteria. Recognition is done using template matching, and signs are tracked over time. The method uses continuous frames of data received from a video source and performs the recognition process in a timely fashion. The aim of this project is to employ both of the methods mentioned in [1] and [2] for detecting and recognizing the stop sign. The approach is to first perform segmentation by thresholding in color space and then apply some filtering to the image to concentrate on the area of interest. In the recognition process, instead of using template matching, a structural approach is carried out, since we are looking for one particular sign. The structural uniqueness of a STOP sign is used as a feature, and this is done using the Hough transform. In order to better understand the project implementation, a brief introduction to basic pattern recognition concepts and their use in this project is given in this section. Later sections discuss in detail the methodologies and the filters used in the implementation. Finally, we evaluate the MATLAB program and report the results and the degree of correctness of the detection and recognition process. Results for various images with different background noise are given and discussed.

1.1 Feature Extraction

A feature is any distinctive aspect, quality or characteristic of the pattern under consideration. Features may be symbolic, like color, or numeric, like height or area. A combination of features represented as a d-dimensional column vector is called a feature vector, and the d-dimensional space defined by the feature vector is called the feature space. The quality of a feature vector is related to its ability to discriminate examples from different classes: examples from the same class should have similar feature values, and examples from different classes should have different feature values. There are two main types of features: (1) global features, which do not change with geometric variation of an image (e.g. color, number of sides); and (2) local features, which may vary with variation in the image (e.g. length of sides, area, intensity). In our project we have chosen two distinctive features of a STOP sign: color and shape (number of sides). However, if the problem were to detect and recognize more objects, then the feature vector size would also have to increase in order to classify the objects appropriately. This project uses the Hough transform and color segmentation to extract these features; a detailed explanation is given in the following sections.

1.2 Classification

The task of a classifier is to partition the feature space into class-labeled decision regions, whose borders are called decision boundaries. The classification of a feature vector x consists of determining which decision region it belongs to and assigning x to that class. In our project, however, the classification process is quite simple because we are dealing with only one pattern; if the problem involved multiple patterns, the classifier would have to sort the input images into the appropriate classes.
The main job of the classifier in this project is to determine whether the two extracted features are within the limits of the specification: if they are, the required pattern is recognized, otherwise it is rejected.

1.3 Approach

In the project we perform the following steps to achieve the task. It is assumed that the detection process has already been done and that we are working on the image obtained from the detection system.

(i) The image is read and stored as a variable.
(ii) Color segmentation is performed on the image by setting the threshold in such a way that only RED portions of the image are kept and everything else is rejected.
(iii) A second step of color segmentation follows.
(iv) The image is converted into binary format for filtering purposes.
(v) Two stages of filtering are done in order to extract the region of interest.
(vi) Edge detection is done in order to extract the borders.
(vii) The Hough transform is applied to determine the number of line segments in the image.
(viii) The points of intersection from the Hough transform are extracted.
(ix) The Euclidean distance of each detected point is found, and points within the same limits of distance are grouped as one point.
(x) Finally, the number of groups is counted and the value is used to determine whether a stop sign has been detected.
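The grouping in steps (ix) and (x) can be sketched in Python (a sketch of the logic only; the project itself is written in MATLAB, and the name `count_groups` and the 3-pixel tolerance are illustrative choices matching the listing in Section 5):

```python
import math

def count_groups(points, tol=3.0):
    """Group detected points whose distances from the origin (1, 1)
    differ by at most `tol` pixels, then count the groups."""
    dists = sorted(math.dist(p, (1, 1)) for p in points)
    groups = 0
    prev = None
    for d in dists:
        # a jump larger than the tolerance starts a new group
        if prev is None or d - prev > tol:
            groups += 1
        prev = d
    return groups

# Two tight clusters of intersection points -> two groups
peaks = [(10, 10), (11, 10), (40, 40), (41, 41)]
print(count_groups(peaks))  # 2
```

One caveat of this scheme is that two distinct points at the same distance from the origin fall into one group.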

2. Color Segmentation

Color segmentation is the process of setting a threshold for the color we are interested in and removing the regions of the image that we are not interested in. In this project we are detecting stop signs, and RED can be considered a distinctive feature. There may be other red objects, but the filtering process will help us remove them from the image. The color band comprises three basic colors, Red, Green and Blue, whose different levels of combination give all other colors, and we use this property to our advantage. For the red of a stop sign it was observed that the level of Red is the highest (around 140-150), while the levels of green and blue are relatively low (Green: 35-65, Blue: 0-30). In the first step of color segmentation, we remove all the pixels in the image that are not red; in other words, we look for pixels whose green and blue levels are above the levels observed for the red of a stop sign. This removes all the regions that definitely do not belong to the stop sign area. In the second step of color segmentation, we set a threshold to detect any red color and, where detected, change it to white (to make it easy to model the image in gray scale). Once the color segmentation is done, we are left with an image that contains most of the area (90-98%) of the stop sign, possibly along with some other regions that have also been picked up. Our next step is to filter out these undesired regions. The filtering and recognition processes are carried out on a gray scale image; the image is converted to gray scale for the remaining processing.

3. Filtering

There may be cases where color segmentation does not eliminate most of the unwanted objects in the image. In such cases we use a heuristic approach to eliminate the undesired portions.
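The two thresholding steps of Section 2 can be condensed into a single mask and sketched in Python (a sketch of the logic only, not the report's MATLAB implementation; the band limits are the values used in the program listing of Section 5, and `segment_red` is a name chosen here):

```python
import numpy as np

def segment_red(img, r_lo=135, r_hi=245, g_hi=58, b_lo=15, b_hi=39):
    """Return a binary mask that is True only for pixels inside the
    observed red band of a stop sign."""
    r = img[..., 0].astype(int)
    g = img[..., 1].astype(int)
    b = img[..., 2].astype(int)
    return (r > r_lo) & (r < r_hi) & (g > 0) & (g < g_hi) & (b > b_lo) & (b < b_hi)

# A 1x2 test "image": one stop-sign red pixel, one sky-blue pixel
img = np.array([[[180, 30, 20], [100, 150, 220]]], dtype=np.uint8)
print(segment_red(img))  # [[ True False]]
```

The exact band limits are scene dependent, which is why the report stresses choosing them carefully for backgrounds such as sky.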
In the first step of filtering we try to eliminate all the pixels that form dots in the image; in other words, we eliminate single or double pixel areas. This is done by taking each pixel and considering its eight nearest neighbors: if two or fewer of the neighbors are white, the pixel under consideration is eliminated. This is implemented in MATLAB using the logic described below, which can be understood from Figure 1. In MATLAB, we examine each pixel and read the values of its eight nearest neighbors. Since the image is binary, the value of any pixel is either 0 or 1. We assign each neighbor's value to a variable, as shown in Fig 1, so that for each pixel we have a value in each of the variables A, B, C, D, E, F, G and H. We then compute the sum SUM = A+B+C+D+E+F+G+H, whose value equals the number of white neighbors: if the sum is n, then the number of white neighbors is also n. The first filtering step of the MATLAB program is implemented on this basis.

Fig 1: Figure shows the eight nearest neighbors of the pixel under consideration

The second step of filtering aims to keep the pixels that have a larger number of white neighbors while rejecting all others. In MATLAB we keep the pixels whose SUM is 4 or more; this also produces region growing within the detected stop sign, which makes the edge detection process more precise. These filtering steps clean away most of the unwanted regions and make the following processes easier and more precise. Once we have the filtered image, the next task is to compute the number of sides for the recognition process.

4. Recognition Process

In the recognition process we extract the number of edges of the image and use it as a feature of our feature set. To do so we use the Hough transform to determine the edges, and based on the resulting value we decide whether the image contains a stop sign. This step can also be seen as classification, with the number of edges serving as the feature used to recognize the image. The first step is to determine the edges of the image, which is done using the Prewitt edge detector. In MATLAB, the function edge(f,'type') is used, where f is the image and type is the type of edge detector being used (in our case type='prewitt'). Next we determine the Hough transform; a brief introduction to the Hough transform and its implementation is given in Section 4.1.

4.1 Hough Transform

Consider a point (xi, yi) and all the lines that pass through it. Infinitely many lines pass through (xi, yi), all of which satisfy the slope-intercept equation yi = a·xi + b for some values of a and b. Writing the equation as b = yi − a·xi yields the equation of a single line in the (a, b) plane for the fixed pair (xi, yi). A second point (xk, yk) likewise defines a line in this plane, and the two lines intersect at a point (a', b') that satisfies both equations; (a', b') gives the slope and intercept of the line containing both (xi, yi) and (xk, yk) in the xy-plane. The (a, b) plane is called the parameter space.
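The neighbor-counting rule of the Section 3 filters can be sketched in Python (a sketch of the logic only; the project itself is in MATLAB, and `neighbor_filter` is a name chosen here for illustration):

```python
import numpy as np

def neighbor_filter(bw, min_neighbors=4):
    """Second filtering pass: a pixel stays (or becomes) white only
    when at least `min_neighbors` of its eight neighbors are white."""
    h, w = bw.shape
    out = np.zeros_like(bw)
    for m in range(1, h - 1):
        for n in range(1, w - 1):
            # SUM = A+B+...+H over the eight nearest neighbors (Fig 1)
            s = bw[m-1:m+2, n-1:n+2].sum() - bw[m, n]
            out[m, n] = 1 if s >= min_neighbors else 0
    return out

bw = np.zeros((7, 7), dtype=int)
bw[1:4, 1:4] = 1      # a solid 3x3 white block survives
bw[5, 5] = 1          # an isolated white dot is removed
filtered = neighbor_filter(bw)
print(filtered[2, 2], filtered[5, 5])  # 1 0
```

This sketch writes the result to a fresh array; the MATLAB listing updates the image in place, which lets the region growing cascade as the scan proceeds.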
Fig 2 shows the xy-plane and the parameter space of the Hough Transform.

Fig 2: Image showing xy-plane and parameter space of an image- Hough Transform

Image lines can therefore be identified as the points where large numbers of parameter-space lines intersect, and the number of such points gives the number of line segments. The difficulty with this approach is that a (the slope of the line) approaches infinity as the line approaches the vertical direction. This problem is solved using the normal representation of a line, xCosθ + ySinθ = ρ. A vertical line has θ = 0°, with ρ equal to the positive x-intercept; similarly, a horizontal line has θ = 90°, with ρ equal to the positive y-intercept, or θ = −90°, with ρ equal to the negative y-intercept.
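This normal-form voting can be sketched in Python (illustrative only; the program computes the transform with MATLAB's radon function instead, and `hough_accumulate` is a name chosen here):

```python
import math
from collections import Counter

def hough_accumulate(points, thetas_deg=range(0, 180)):
    """Normal-form Hough voting: every edge point (x, y) votes for all
    lines x*cos(theta) + y*sin(theta) = rho that pass through it."""
    votes = Counter()
    for x, y in points:
        for t in thetas_deg:
            th = math.radians(t)
            rho = round(x * math.cos(th) + y * math.sin(th))
            votes[(rho, t)] += 1
    return votes

# Three collinear points on the vertical line x = 5: the bin
# (rho=5, theta=0) collects one vote from every point, even though
# the slope-intercept form cannot represent this line
votes = hough_accumulate([(5, 0), (5, 3), (5, 7)])
print(votes[(5, 0)])  # 3
```

Peaks in the accumulator correspond to lines in the image, and counting well-separated peaks gives the number of line segments, as described above.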

Fig 3: Normalization of line in parameter space

The Hough transform of an image with four corners is shown in Fig 4, where four bright spots, the intersection points, can be seen clearly. Our task is to count the number of points, which gives us the number of sides of the shape. In MATLAB, this is done using the following commands:

f = imread('pic.jpg');    % Read image
BW = edge(f,'canny');     % Detect edges
% Apply the Hough (Radon) transform
theta = 0:179;
[R,xp] = radon(BW,theta);

Fig 4: Hough Transform of an object with four sides shows four intersecting points

Next we need to set another threshold in order to separate the intersection points and make the background totally black. This is done in our program with the threshold set to a gray level of 110, which means that any pixel brighter than this gray level is kept in the image as a white pixel and all others are made black. This process may give us multiple points in one region. The challenge now is to merge points that are too close to each other by classifying them as one group. This is done by computing the Euclidean distance of each point from the origin (1,1) and then grouping all the points whose distances lie within a certain range of each other. This has been implemented in our program and gives us the number of edges of the image, which is used to determine whether it is a stop sign. For a stop sign this number varies between 6 and 12 (considering an error margin), as per repeated observations, and in this project the acceptance range has been set to 6 < x < 12.

5. Program Listing

% Displaying all the images that are being used for testing the program
f1=imread('stopsign1.jpg');
f2=imread('stopsign2.jpg');
f3=imread('stopsign3.jpg');
f4=imread('stopsign4.jpg');
f5=imread('stopsign5.jpg');
f7=imread('stopsign7.jpg');
f8=imread('stopsign8.jpg');
f9=imread('stopsign9.jpg');
f10=imread('stopsign10.jpg');
f11=imread('stopsign11.jpg');
subplot(3,3,1), imshow(f1), title('Stopsign1.jpg')
subplot(3,3,2), imshow(f2), title('Stopsign2.jpg')
subplot(3,3,3), imshow(f3), title('Stopsign3.jpg')
subplot(3,3,4), imshow(f4), title('Stopsign4.jpg')
subplot(3,3,5), imshow(f5), title('Stopsign5.jpg')
subplot(3,3,6), imshow(f7), title('Stopsign7.jpg')
subplot(3,3,7), imshow(f8), title('Stopsign8.jpg')
subplot(3,3,8), imshow(f9), title('Stopsign9.jpg')
subplot(3,3,9), imshow(f10), title('Stopsign10.jpg')
figure, imshow(f11), title('Stopsign11.jpg')

% MAIN PROGRAM
clear all; clc;

% Read the image in which a stop sign has to be detected
f=imread('stopsign3.jpg');
img0=f;   % original image
s=size(f);

% First step of color thresholding with R>110, G>70, B>50:
% bright, unsaturated (non-red) pixels are replaced by black ('0')
for i=1:s(1)
    for j=1:s(2)
        r=f(i,j,1); g=f(i,j,2); b=f(i,j,3);
        if r>110 & g>70 & b>50
            f(i,j,1)=0; f(i,j,2)=0; f(i,j,3)=0;
        end
    end
end

% Second step of color thresholding: pixels inside the observed red
% band become white ('255'); pixels whose red level is outside the
% band become black
for i=1:s(1)
    for j=1:s(2)
        r=f(i,j,1); g=f(i,j,2); b=f(i,j,3);
        if r>135 & r<245
            if g>0 & g<58 & b>15 & b<39
                f(i,j,1)=255; f(i,j,2)=255; f(i,j,3)=255;
            end
        else
            f(i,j,1)=0; f(i,j,2)=0; f(i,j,3)=0;
        end
    end
end
img1=f;   % color segmented image retrieved

% Converting the color image to gray scale
f=rgb2gray(f);

% Replacing all bright gray scale pixels by the white level ('255')
for i=1:size(f,1)
    for j=1:size(f,2)
        if f(i,j)>40
            f(i,j)=255;
        else
            f(i,j)=0;
        end
    end
end
img1b=f;   % color segmented image converted to binary format
s=size(f); i=s(1); j=s(2);

% Converting into binary format with white='1' and black='0'
f=dither(f);

% First step of filtering: checking the eight nearest neighbours and
% removing isolated white pixels (two or fewer white neighbours)
for m=2:i-2
    for n=2:j-2
        if f(m,n)==1
            if (m-1)>0 & (n-1)>0,   A=f(m-1,n-1); else A=0; end
            if (m-1)>0,             B=f(m-1,n);   else B=0; end
            if (m-1)>0 & (n+1)<=j,  C=f(m-1,n+1); else C=0; end
            if (n-1)>0,             D=f(m,n-1);   else D=0; end
            if (n+1)<=j,            E=f(m,n+1);   else E=0; end
            if (m+1)<=i & (n-1)>0,  F=f(m+1,n-1); else F=0; end
            if (m+1)<=i,            G=f(m+1,n);   else G=0; end
            if (m+1)<=i & (n+1)<=j, H=f(m+1,n+1); else H=0; end
            SUM=A+B+C+D+E+F+G+H;
            if SUM<=2
                f(m,n)=0;   % isolated pixel: turn it black
            end
        end
    end
end
img2=f;

% Second step of filtering: a pixel is kept (or grown) as white when
% at least four of its eight neighbours are white, otherwise it is
% made black
for m=2:i-2
    for n=2:j-2
        if (m-1)>0 & (n-1)>0,   A=f(m-1,n-1); else A=0; end
        if (m-1)>0,             B=f(m-1,n);   else B=0; end
        if (m-1)>0 & (n+1)<=j,  C=f(m-1,n+1); else C=0; end
        if (n-1)>0,             D=f(m,n-1);   else D=0; end
        if (n+1)<=j,            E=f(m,n+1);   else E=0; end
        if (m+1)<=i & (n-1)>0,  F=f(m+1,n-1); else F=0; end
        if (m+1)<=i,            G=f(m+1,n);   else G=0; end
        if (m+1)<=i & (n+1)<=j, H=f(m+1,n+1); else H=0; end
        SUM=A+B+C+D+E+F+G+H;
        if SUM>=4
            f(m,n)=1;
        else
            f(m,n)=0;
        end
    end
end
img3=f;

% Edge detection of the filtered image
f=edge(f,'prewitt');
img5=f;

% Computing the Hough (Radon) transform of the edge detected image
theta = 0:179;
[R,xp] = radon(f,theta);
% Image being rotated to fit the screen
f = imrotate(R,90);
img6=f;

% Running a point detection mask and thresholding so that only the
% bright intersection points remain white and all other pixels are
% black (g is binary after the comparison)
w=[-1 -1 -1;-1 8 -1;-1 -1 -1];
g=abs(imfilter(double(f),w));
T=90;
g=g>=T;
img7=g;
M=size(g,1);
N=size(g,2);

% CNT is an array created to hold the distances of all the
% detected points
CNT=zeros(1,50);
cnt=0;

% Computing the Euclidean distance of every point from the origin (1,1)
for i=1:M
    for j=1:N
        if g(i,j)==1
            cnt=cnt+1;
            CNT(cnt)=sqrt(((i-1)^2)+((j-1)^2));
        end
    end
end
s=size(CNT);
s=s(2);

% Sorting the distances in ascending order and rounding them to the
% closest integer
CNT=sort(CNT);
CNT=round(CNT);

% Grouping all points whose distances are almost the same as one
% single point: a distance is zeroed out when the next one is within
% three pixels of it
for i=1:s-1
    if abs(CNT(i)-CNT(i+1))<=3
        CNT(i)=0;
    end
end

% COUNT holds the number of distinct points after grouping
COUNT=0;
for i=1:s
    if CNT(i)>0
        COUNT=COUNT+1;
    end
end

% Setting the threshold for the recognition decision
if COUNT>=4
    disp('A STOP SIGN HAS BEEN DETECTED');
    f=imread('WARNING.jpg');
    figure, imshow(f)
    y=wavread('SOUND.wav');
    sound(y)
else
    disp('NO SIGN IS DETECTED');
end

% Displaying all the intermediate images of the program
figure, imshow(img0), title('ORIGINAL IMAGE')
figure
subplot(2,2,1), imshow(img1),  title('1st step of color segmentation')
subplot(2,2,2), imshow(img1b), title('Color segmented image converted to binary')
subplot(2,2,3), imshow(img2),  title('First step of filtering')
subplot(2,2,4), imshow(img3),  title('Second step of filtering')
figure
subplot(1,2,1), imshow(img5),    title('Edge Detection')
subplot(1,2,2), imshow(img6,[]), title('Hough Transform')
figure, imshow(img7), title('Detection of points')

6. Results

In order to verify the functionality of the idea proposed for detecting stop signs, we use a set of nine different pictures and test the program with each of them. Each picture has a different background, so the program is tested for its efficiency over varying backgrounds. In order to verify that the process works for stop signs only, we also test the program with other sign boards and check whether there is any false recognition. The set of images used is shown in Figure 5.

Fig 5: Signs with different backgrounds being used for recognition process

Let us now consider each step of the recognition process for the image 'stopsign3.jpg'. The background of this image seems very simple, but with sky as the background the threshold values for color segmentation must be chosen very carefully, because the green and red values in such regions are very close to each other. With this in mind, we have chosen appropriate threshold values and carried out the recognition. Fig 6 shows a larger view of 'stopsign3'. The written text does not matter in the recognition process, because we eliminate it and consider only the shape of the stop sign. The results for this image are shown in Fig 7 through Fig 9.

Fig 6: Stop sign picture being considered (‘stopsign3.jpg’)

Fig 7: Image shows the different steps of segmentation and filtering

It can be seen from Fig 7 that color segmentation has removed most of the unwanted areas and concentrated the area of interest. In the second stage of filtering, the region growing within the stop sign removes the textual matter written on the sign. This makes the region of interest more prominent and makes it easier to detect the edges, and hence the sides.

Fig 8: Edge detection and Hough Transform

Fig 8 shows the edge detection and Hough transform of the image. We see that there are eight points of intersection, which equals the number of sides of a stop sign.

Fig 9: Detection of points of intersection

Finally, the intersection points are detected and COUNT is found to be 8, so the stop sign is detected. Let us now consider some other images, whose results are shown in Fig 10.

Fig 10: (a) stopsign9.jpg (b) stopsign7.jpg (c) stopsign10.jpg (i) Original Image (ii) Color Segmentation of Image (iii) Conversion to binary image (gray scale) (iv) First stage of filtering (v) Second stage of filtering (vi) Edge detection

It is seen that for different images we get a consistent output, which makes this detection process close to precise.

7. Conclusion

The project was implemented successfully on the MATLAB platform and the results were evaluated. One of the many applications of pattern recognition in image processing was successfully demonstrated. The project gave us an insight into the filtering techniques that can be employed depending on the application and the image in consideration. The combination of color segmentation and gray scale detection makes the process more precise: in [1] and [2], methods using only gray scale and only color segmentation were presented, and here we combine the strengths of both to make the detection and recognition process more accurate. It is, however, very important to choose the features carefully, so that the recognition process remains accurate and precise. There are various patterns in nature that can be recognized by automated systems, and pattern recognition in image processing is a vital component of such systems. Feature extraction is an important step in pattern recognition, as it forms the basis of the problem, and classifiers assign the extracted features to pre-defined classes. By choosing optimal features and preprocessing techniques such as filtering, application problems such as road sign detection can be solved accurately and reliably.

8. References

[1] Sergio Escalera and Petia Radeva, "Fast grayscale road sign model matching and recognition", Centre de Visio per Computador, Catalonia, Spain.
[2] Michael Shneier, "Road Sign Detection and Recognition", IEEE Computer Society International Conference on Computer Vision and Pattern Recognition, June 2005.
[3] Rafael C. Gonzalez, Richard E. Woods and Steven L. Eddins, "Digital Image Processing Using MATLAB", Prentice Hall, 2004.
