A Hierarchical Palmprint Identification Method Using Hand Geometry and Grayscale Distribution Features

Jie Wu, Zhengding Qiu
Institute of Information Science, Beijing Jiaotong University, Beijing 100044, P.R. China
{[email protected]}

Abstract

Palmprint identification, as an emerging biometric technique, has been actively researched in recent years. Existing palmprint identification algorithms almost always require a region-of-interest (ROI) segmentation step. This paper presents a novel hierarchical palmprint identification method that needs no ROI extraction: it measures hand geometry and angle values for coarse-level feature extraction, and computes the unit information entropy of each subimage to describe the grayscale distribution as the fine-level feature. Instead of ROI-based features, we use as the palm descriptor the grayscale distribution variance caused by the particular positions of principal lines, wrinkles and minutiae in the primitive hand images. Experiments were conducted on a database of 990 images from 99 individuals. An accuracy of up to 99.24% was obtained when using 6 samples per class for training. A performance comparison between the proposed method and a ROI-based PCA method is also reported.

1. Introduction

The last decade has witnessed great development in palmprint-based personal identification. As one of the developing biometric techniques, palmprint identification is becoming a popular and convincing solution for establishing a person's identity, since the adult palmprint has been shown to be a unique and stable physiological characteristic. So far, studies on palmprint identification have mainly been based on an ROI region regularly cropped in the preprocessing stage. The commonly used ROI-based features can be grouped as follows: texture features [1,2], transformed-space features [3,4], statistical features [5,6] and subspace features [7].

However, ROI-based feature extraction has several disadvantages. First, image quality is the key ingredient in ROI-based methods: blur caused by an unclean capture device or by hand movement during capture makes some texture information unavailable. Second, a ROI image covers only a small proportion of the primitive hand image, so some useful and essential information required for identification is lost. Third, the time taken by ROI extraction greatly increases the system's processing time (see Table 1). To avoid these drawbacks, we present a new hierarchical palmprint identification method that uses the whole hand image instead of a ROI region. Under normal illumination, hand geometry and some angle values are extracted in the coarse-level stage; the angle values complement the geometrical information provided by the line segments. During fine-level identification, we exploit the grayscale variance caused by the different positions of principal lines, wrinkles and minutiae; skin color also contributes substantially to this variance. The concept of unit information entropy is introduced at this level as the description of the grayscale distribution. The paper is organized as follows: Section 2 presents the preprocessing algorithm and the steps of coarse-level feature extraction. Section 3 focuses on fine-level feature extraction. Experimental results are reported in Section 4. Section 5 concludes the paper.

2. Preprocessing and Coarse-level Feature Extraction

2.1. Preprocessing

Hand images are captured at an original size of 1792 × 1200 pixels and stored in JPEG format. Previous studies usually apply rotation and translation to the primitive images before ROI cropping at this stage, because the position and direction of a hand may vary from capture to capture. We do not correct all samples to the same fixed angle and spatial position; instead, we perform the adjustment in the fine-level stage, as detailed in Section 3. The following steps are executed in our preprocessing: transform the JPEG images to BMP format, then crop the primitive images to the largest circumscribed rectangle of the hand. Resize these circumscribed images to 1/16 of their original area; the resulting image sizes differ between individuals because of palm size differences caused by skeletal differences. Finally, denoise the resized images with a median filter and apply histogram equalization as image enhancement. The preprocessed images are grayscale, with both length and width between 200 and 300 pixels; palm size diversity reaches up to 1.32× in our database and should not be ignored. Figure 1 shows the preprocessing steps and results of the proposed method and the traditional method. For simplicity, Figure 1(a) is not strictly to scale.

Figure 1. (a) Our preprocessing result. (b) Traditional preprocessing steps and result.
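As a concrete illustration, the following is a minimal sketch of this preprocessing pipeline in Python with OpenCV. The Otsu-threshold segmentation and the function name `preprocess` are our own assumptions added for illustration; the paper does not specify how the hand is separated from the background.

```python
import cv2

def preprocess(path):
    # Load the JPEG capture as grayscale (the paper converts to BMP;
    # in memory only the pixel data matters).
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)

    # Segment the hand from the background with a global Otsu
    # threshold (an assumption; the paper leaves this unspecified).
    _, mask = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Crop the largest circumscribed (bounding) rectangle of the hand.
    x, y, w, h = cv2.boundingRect(cv2.findNonZero(mask))
    hand = img[y:y + h, x:x + w]

    # Resize to 1/16 of the original area, i.e. 1/4 per dimension.
    hand = cv2.resize(hand, (w // 4, h // 4), interpolation=cv2.INTER_AREA)

    # Denoise with a median filter, then apply histogram equalization.
    hand = cv2.medianBlur(hand, 3)
    return cv2.equalizeHist(hand)
```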

2.2. Coarse-level feature extraction

Hand geometry [8,9], including hand width, hand length, finger width and finger length, has been used to discriminate between individuals. Strictly speaking, because of its vulnerability and inconsistency, hand geometry should not be used as a distinguishing feature on its own, especially in identification, which demands much higher recognition ability than verification. Nevertheless, since hand geometry does have some recognition ability, just not enough for identification by itself, we use these geometrical features only for coarse-level classification, where the system does not require such precision. The coarse-level stage greatly reduces the processing time and is widely adopted. Dai [10] computed the number of principal lines and cross points on a palm as coarse-level features and categorized all samples into 7 groups. That method works well only under the precondition that most samples are of high quality, which is hard to achieve because of common disturbances such as illumination changes, hand peculiarities or unexpected movement during capture. In this paper, we measure some line segments, and the angle values of the triangles they form, as coarse-level features. This method remains effective even when the primitive image happens to be blurred. The line segments used here involve the lengths of the middle finger and ring finger, together with the edges of the triangles formed by lines connecting the finger valleys. For convenience, the four finger valleys from thumb to little finger are named A, B, C, D, respectively. E is defined as the tip of the middle finger and F as the tip of the ring finger. The palm-connected ends of the middle finger, G, and of the ring finger, H, are defined as the midpoints of BC and CD, as shown in Figure 2. We take the line segments AB, AC, AD, BC, BD, CD and the finger lengths FH, EG as the description of hand geometry. The angles ∠CBD, ∠DBA, ∠BAC, ∠CAD, ∠BDA, ∠CDB, ∠DCA, ∠BCA contained in the triangles formed by the line segments above serve as a complement to the traditional geometrical information. The angle values are calculated through the law of cosines, defined as:

$$\angle CBD = \arccos\!\left(\frac{|BC|^2 + |BD|^2 - |CD|^2}{2\,|BC|\,|BD|}\right) \qquad (1)$$

The remaining angle values are calculated in the same way.

Figure 2. Coarse-level feature extraction: finger tips E, F; finger valleys A, B, C, D; midpoints G, H.

The result of coarse-level feature extraction is a 16-dimensional vector V = [AB, AC, AD, BC, BD, CD, FH, EG, ∠CBD, ∠DBA, ∠BAC, ∠CAD, ∠BDA, ∠CDB, ∠DCA, ∠BCA]. Each test sample generates such a feature vector and is matched in Euclidean distance against the sample templates of all W training classes. The w (1 < w ≤ W/10) nearest neighbours are taken as the coarse-level identification result. The number of classes that take part in fine-level processing is thus significantly reduced. Eliminating unqualified classes not only limits the matching scope but also lowers the probability of misclassification at the fine level.
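To make the construction concrete, here is a minimal sketch in Python/NumPy of the 16-dimensional coarse-level vector and the w-nearest-neighbour selection. It assumes the landmark points A-F have already been located by some detector (not described in this part of the paper); the helper names `angle`, `coarse_features` and `coarse_match` are ours.

```python
import numpy as np

def angle(p, q, r):
    """Angle at vertex p of triangle pqr, via the law of cosines (Eq. 1)."""
    a = np.linalg.norm(q - p)
    b = np.linalg.norm(r - p)
    c = np.linalg.norm(r - q)
    return np.arccos((a**2 + b**2 - c**2) / (2 * a * b))

def coarse_features(pts):
    """pts: dict of 2-D points A..F; G, H are derived midpoints."""
    A, B, C, D, E, F = (pts[k] for k in "ABCDEF")
    G, H = (B + C) / 2, (C + D) / 2      # palm-connected finger ends
    d = lambda p, q: np.linalg.norm(p - q)
    segments = [d(A, B), d(A, C), d(A, D), d(B, C), d(B, D), d(C, D),
                d(F, H), d(E, G)]        # hand geometry
    angles = [angle(B, C, D), angle(B, D, A), angle(A, B, C), angle(A, C, D),
              angle(D, B, A), angle(D, C, B), angle(C, D, A), angle(C, B, A)]
    return np.array(segments + angles)   # 16-dimensional vector V

def coarse_match(v, templates, w=8):
    """Return the w nearest training classes in Euclidean distance.
    templates: (W, 16) array of class template vectors."""
    dists = np.linalg.norm(templates - v, axis=1)
    return np.argsort(dists)[:w]
```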

3. Fine-level Feature Extraction


We obtain w classes from coarse-level identification as the input to the fine-level stage, where the similarities between a test sample and the w corresponding classes are computed. First, using the stored geometrical information, we align the test sample to the position and direction of each of the w classes in turn. Relying on the fact that the grayscale distribution is consistent within a class and clearly distinct between classes, we present an entropy-based method that measures the grayscale distribution as the fine-level feature. For an M × N, 256-level grayscale digital image A, with f(x, y) (0 ≤ f(x, y) ≤ 255) the grayscale value at (x, y), the global information entropy of A is defined as:

$$H = -\sum_{i=1}^{M}\sum_{j=1}^{N} p_{ij}\log p_{ij}, \qquad p_{ij} = f(i,j)\Big/\sum_{m=1}^{M}\sum_{n=1}^{N} f(m,n) \qquad (2)$$

Global entropy describes the global grayscale statistics of an image. It relates only to the probability mass assigned to the pixels, not to the positions of those pixels. Different images share the same global entropy whenever they happen to have the same grayscale probabilities, so global entropy cannot serve as a distinguishing image feature. In contrast, local entropy can describe the grayscale distribution of any given area of an image. Sun [11] combined a grid descriptor (GD) with image information entropy to describe the grayscale distribution of each GD area. We adopt the idea of local entropy and compute each subimage's grayscale distribution as a feature that incorporates spatial information. The M × N primitive image is divided into K subimages, each called a unit, of size M' × N'. K is defined as:

$$K = P \times Q, \qquad P = \lfloor M / M' \rfloor, \qquad Q = \lfloor N / N' \rfloor \qquad (3)$$

The entropy of each unit $U_{st}$ ($s = 1, 2, \ldots, P$; $t = 1, 2, \ldots, Q$) is defined as:

$$H_{st} = -\sum_{i=1}^{M'}\sum_{j=1}^{N'} p'_{ij}\log p'_{ij}, \qquad p'_{ij} = f'(i,j)\Big/\sum_{m=1}^{M'}\sum_{n=1}^{N'} f'(m,n) \qquad (4)$$
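A minimal sketch of Eqs. (3) and (4) in Python/NumPy follows. Note that, per Eq. (4), p'_ij normalizes the pixel intensities of the unit rather than a grayscale histogram; the function name `unit_entropy_matrix` is our own.

```python
import numpy as np

def unit_entropy_matrix(img, mu, nu):
    """img: 2-D grayscale array; mu, nu: unit size M', N' (e.g. 28, 25)."""
    M, N = img.shape
    P, Q = M // mu, N // nu                 # Eq. (3): K = P * Q units
    H = np.zeros((P, Q))
    for s in range(P):
        for t in range(Q):
            unit = img[s*mu:(s+1)*mu, t*nu:(t+1)*nu].astype(float)
            total = unit.sum()
            if total == 0:                  # all-black unit: entropy 0
                continue
            p = unit[unit > 0] / total      # p'_ij of Eq. (4); 0*log 0 := 0
            H[s, t] = -(p * np.log(p)).sum()
    return H
```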

By computing the unit information entropy of each of the K subimages, we obtain the unit entropy matrix of image A, as shown in Figure 3. Rearranging the unit entropy matrix into a vector yields the fine-level feature H = (H11, H12, ..., HPQ). The similarity between the fine-level feature vector H of the test sample and H'_i (i = 1, 2, ..., w) of the corresponding w classes is then measured in Euclidean distance:

$$d_i = \left(\sum_{m=1}^{P}\sum_{n=1}^{Q}\left(H_{mn} - H'_{i,mn}\right)^2\right)^{1/2} \qquad (5)$$

The nearest neighbour is taken as the system's final identification result.

Figure 3. Matrix of unit information entropy: each unit U_st of the image maps to one entry H_st of the P × Q matrix (H11, H12, ..., HPQ).
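The fine-level decision of Eq. (5) is then a plain nearest-neighbour search. A minimal sketch, assuming `templates` maps class indices to stored unit-entropy matrices (`fine_match` is an assumed name):

```python
import numpy as np

def fine_match(H_test, candidates, templates):
    # Eq. (5): Euclidean distance between flattened unit-entropy
    # matrices; the nearest of the w candidates is the final result.
    dists = {c: np.linalg.norm(H_test.ravel() - templates[c].ravel())
             for c in candidates}
    return min(dists, key=dists.get)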

4. Experimental Setup and Results

An experimental database was built from hand images captured by our own device: a camera connected to a computer and kept at a fixed distance from a table, with no pegs or other constraints. Under normal daylight illumination without drastic changes, users were asked to place their right hands on a flat, soft surface, palm facing upward, fingers fully extended and with no obvious dirt on the hands. Users were also asked to lift and replace their hands after each capture, and none of them had received any prior training. The database includes 99 individuals of both genders, aged 20 to 36, with 10 images per person at a resolution of 1792 × 1200 pixels and 8-bit color depth. Five of the 10 images in each class were captured in one session; the rest were captured 2 months later.

4.1. Coarse-level experimental result

The coarse-level identification result is shown in Figure 4 for the case where the training set includes only 1 sample per class. Figures 4(a) and (b) show the results of the method using only line segments as coarse-level features and of the method with angle values added as a complement, respectively. The x-axis values in the figure are the test sample indices, from 1 to 891. For each test sample, we find the w nearest neighbours by similarity measurement and obtain a w-sized array, with w initially set to 50; the ith element is the class index of the ith nearest neighbour. Each y-axis value is the position of the test sample's true class index within this array. The proposed method, shown in Figure 4(b), performs significantly better than the earlier approach shown in Figure 4(a), which implies that adding angle information increases the dispersion between classes and the convergence within a class. In Figure 4(b), only 10 of the 891 samples have a y-axis value greater than 8, and 708 of the 891 images have a y-axis value of 1, meaning the coarse-level method alone already achieves an accuracy of 79.46%. Based on this analysis, we choose 8 as the final w value, which is much smaller than the total class number of 99. Setting w to 8 ensures that 98.9% of the test samples are correctly processed at the coarse level, rising to a maximum of 99.5% when 6 samples per class are used for training.

Figure 4. Results of coarse-level classification: (a) line segments only; (b) angle information added. x-axis: test sample No. (1-891); y-axis: coarse-level identification result (0-50).


4.2. Fine-level experimental result

Table 1 compares the performance of our method with the traditional PCA algorithm, which requires ROI extraction in preprocessing.

Table 1. Performance comparison between methods

  Train/Test samples per class (total)   Our method      PCA method
  1/9 (99/891)                           93.15%          67.34%
  2/8 (198/792)                          97.47%          80.18%
  3/7 (297/693)                          98.85%          90.04%
  4/6 (396/594)                          98.99%          92.25%
  5/5 (495/495)                          99.19%          93.74%
  6/4 (594/396)                          99.24%          95.96%

  Average time per sample
  Preprocessing                          1.654 s         11.781 s
  Training/Matching                      0.337/2.833 s   0.860/1.294 s

From the analysis of our fine-level experimental results we draw the following observations: (1) The traditional ROI extraction employed in preprocessing takes far more time than any other stage of the system, and our preprocessing method avoids it entirely. (2) The accuracy of the PCA method varies dramatically as the number of training samples changes. (3) With M' = 28 and N' = 25, the proposed method achieves an accuracy of 93.15% when only 1 sample per class is used for training, and a peak accuracy of 99.24% when 6 training samples per class are used. (4) A shortcoming of our method is that matching takes longer than PCA; the clear performance gain may compensate for the extra time, and we will address this disadvantage in future research. (5) Our method copes well with the limited texture legibility of images such as those in our database, since color features can compensate for the lack of texture.

5. Conclusion

This paper presents a novel hierarchical method for palmprint analysis that requires no ROI extraction. At the coarse level, the proposed method adds angle values as a complement to the geometrical line segments traditionally used in recognition. Unit information entropy is then introduced to evaluate the local grayscale distribution as the fine-level feature. The results are promising, with accuracy up to 99.24% when 6 of the 10 samples in each class are used for training, and 93.15% when only 1 sample per class is used.

6. References

[1] D. Zhang, W. Shu. Two novel characteristics in palmprint verification: datum point invariance and line feature matching. Pattern Recognition, 1999, vol. 33, no. 4, pp. 691-702.
[2] N. Duta, A.K. Jain, K.V. Mardia. Matching of palmprints. Pattern Recognition Letters, 2001, vol. 23, no. 4, pp. 477-485.
[3] W.X. Li, D. Zhang, Z.Q. Xu. Palmprint identification by Fourier transform. Int'l Journal of Pattern Recognition and Artificial Intelligence, 2002, vol. 16, no. 4, pp. 417-432.
[4] W.K. Kong, D. Zhang, W.X. Li. Palmprint feature extraction using 2-D Gabor filters. Pattern Recognition, 2003, vol. 36, no. 10, pp. 2339-2347.
[5] Y.H. Pang, C. Tee, A.T.B. Jin, et al. Palmprint authentication with Zernike moment invariants. Proc. of the 3rd IEEE Int'l Symposium on Signal Processing and Information Technology, Germany, 2003, pp. 199-202.
[6] X.Q. Wu, K.Q. Wang, D. Zhang. Palmprint recognition using directional line energy feature. Proc. of the 17th Int'l Conference on Pattern Recognition, England, 2004, no. 4, pp. 475-478.
[7] G.M. Lu, D. Zhang, K.Q. Wang. Palmprint recognition using eigenpalm features. Pattern Recognition Letters, 2003, vol. 24, no. 9-10, pp. 1463-1467.
[8] S.R. Raul, S.A. Carmen, G.M. Ana. Biometric identification through hand geometry measurements. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2000, vol. 22, no. 10, pp. 1168-1171.
[9] A. Kumar, C.M. Wong, C. Shen, A.K. Jain. Personal verification using palmprint and hand geometry biometric. Proc. of 4th Int'l Conf. on Audio and Video Based Biometric Person Authentication, Guildford, UK, 2003, pp. 668-678.
[10] Q.Y. Dai, Y.L. Yu, D. Zhang. A palmprint classification method based on structure features. Pattern Recognition and Artificial Intelligence, 2002, vol. 15, no. 1, pp. 112-116.
[11] J.D. Sun, X.S. Wu, L.H. Zhou. Entropy based image retrieval. Journal of Xidian University, 2004, vol. 31, no. 2, pp. 223-228.

