International Conference on Intelligent and Advanced Systems 2007

PALMPRINT IDENTIFICATION USING WAVELET ENERGY

Kie Yih Edward Wong*

G. Sainarayanan**

Ali Chekima*

*School of Engineering and Information Technology, Universiti Malaysia Sabah, Kota Kinabalu, Sabah, Malaysia
**Department of Electrical and Electronics, New Horizon College of Engineering, Bangalore, India

Abstract- Palmprint identification is the means of recognizing an individual from a database using his or her palmprint features. The palmprint is easy to capture, requires cheaper equipment and is more acceptable to the public; moreover, it is rich in features. The wavelet transform is a multi-resolution analysis tool that can extract palm lines at different resolution levels: at low resolution levels, fine palm lines are extracted, and the higher the resolution level, the coarser the extracted palm lines. In this work, a digital camera is used to acquire ten right-hand images from each of 100 different individuals. The hand images are pre-processed to find the key points. By referring to the key points, the palmprint images are rotated and cropped. The palmprint images are then enhanced and resized, and the resized images are decomposed using different types of wavelets over six decomposition levels. Two different wavelet energy representations are tested. The feature vectors are compared against the database using the Euclidean distance or classified using a feedforward backpropagation neural network. From the results, an accuracy of 99.07 percent is obtained using the Db5 wavelet energy feature of type 2 classified with the neural network.

I. INTRODUCTION
Palmprint identification is the means of recognizing an individual from a database using his or her palmprint features. The palmprint is a physiological biometric introduced about a decade ago. Physiological biometrics use human body parts to establish the identity of an individual; some of the commonly used physiological characteristics are the fingerprint, face, hand geometry, iris and palmprint. Fingerprints can achieve high accuracy, but a "dummy finger" can fool most fingerprint identification systems that lack a liveness test [1]. Face recognition has a lower identification rate, but it can be used to track the location of a person through surveillance cameras. Hand geometry features are similar for most adults, so they are not suitable for identification purposes. Iris identification provides the highest accuracy among biometric methods, but iris-scanning devices are expensive and may cause visual discomfort for frequent users. The palmprint has gained popularity among biometric modalities because its complexity and size make it difficult to imitate. The palmprint image is easy to capture and more acceptable to the public, and palmprint acquisition requires cheaper equipment than iris-scanning devices, such as a commercial digital camera. Furthermore, the palmprint is rich in features. Various methods have been suggested to extract geometry features [2], line features [3], point features [4] and texture features [5-7] from the palmprint.


Palmprint geometry features are incapable of identifying individuals in a large database, because most adults have similar palm widths and palm areas. Line features are hard to extract because their thickness is not uniform; some wrinkles are as thick as the principal lines. Point features require high-resolution images to find the minutiae or delta points. The palmprint can instead be analyzed through texture features. Some of the texture analyses conducted use the wavelet transform [6], derivative-of-Gaussian filters [7] and the Fourier transform [8]. Wavelet transforms extract wavelet coefficients from the palmprint image using different types of wavelets; some of the commonly used wavelets are the Haar, Daubechies, Symlets and Coiflets wavelets. Every wavelet has its own decomposition and scaling coefficients, and wavelets with different scaling properties act as a multi-resolution analysis for the different types of palm lines (principal lines, wrinkles and ridges) [9]. Fig. 1 shows the proposed palmprint biometric system using wavelet energy.

Fig. 1. Palmprint Biometric System

In this work, ten right-hand images of 100 different individuals are acquired using a digital camera. The hand images are pre-processed to find the key points. By referring to the key points, the angle of rotation and the ROI mask are calculated. The palmprint images are rotated according to the angle of rotation and the central part of the palm is cropped. The palmprint images are enhanced using image adjustment and/or histogram equalization [10-11] and resized to 256 x 256 pixels. The resized images are decomposed using different types of wavelets over six decomposition levels. The wavelets used in this work are the Haar wavelet, Daubechies wavelets 2 to 6, Symlets wavelets 3 to 6 and Coiflets wavelets 1 and 2. Two different wavelet energy representations are tested, namely combined decomposition-level normalization (WE1) and individual decomposition-level normalization (WE2). The feature vectors are compared against the database using the Euclidean distance or classified using a scaled conjugate gradient-based feedforward backpropagation neural network.




II. HAND IMAGE ACQUISITION
A Canon PowerShot A430 digital camera is used to capture ten right-hand images of 100 different individuals. The hand images have a uniform dark background, which eases hand image segmentation. During acquisition, users are required to lean their hand against the background and spread their fingers apart. No peg alignment and no special lighting are used in this work. All hand images are 1024 x 768 pixels, saved using JPEG compression in Red-Green-Blue (RGB) format. Fig. 2 shows a hand image acquired with the digital camera.

Fig. 2. Hand Image

III. HAND IMAGE PRE-PROCESSING
There are three main steps in image pre-processing: image segmentation, image alignment and region-of-interest (ROI) selection. Image segmentation separates the hand image from its background. Since the background is a uniform low-intensity color, a global threshold is used for this task. A fixed global threshold cannot segment different hand images properly; thus, Otsu's method is used to find a per-image global threshold [12]. Fig. 3 shows a hand image segmented using Otsu's method.

Fig. 3. Segmented Hand Image
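As a minimal sketch of this segmentation step (assuming OpenCV; the file name is hypothetical):

```python
import cv2

# Load a hand image and convert it to grayscale.
gray = cv2.cvtColor(cv2.imread("hand.jpg"), cv2.COLOR_BGR2GRAY)

# Otsu's method picks the global threshold per image, unlike a fixed value.
thresh_val, mask = cv2.threshold(gray, 0, 255,
                                 cv2.THRESH_BINARY + cv2.THRESH_OTSU)
# 'mask' is the binary hand silhouette; 'thresh_val' varies from image to image.
```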

Since the hand is free to rotate, image alignment is required to bring the hand images to a predefined orientation. First, the boundary of the hand is located using a boundary tracking algorithm. The middle of the wrist, PW, is defined as the median of the rightmost boundary pixels. The distances between the boundary pixels PB and PW, Dist(PB, PW), are then calculated. Fig. 4 shows the graph of Dist(PB, PW) versus the index of PB; PB and PW themselves are shown in Fig. 5. Let K1 be the first local minimum and K2 the third local minimum, as circled in Fig. 4.

Fig. 4. Graph of Dist(PB, PW) versus Index of PB
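A minimal sketch of this key-point step, assuming the boundary is already available as an ordered (N, 2) array; the smoothing window `order` is an assumed parameter, not a value from the paper:

```python
import numpy as np
from scipy.signal import argrelmin

def key_points(boundary, p_w, order=15):
    """Return K1 and K2: the first and third local minima of Dist(P_B, P_W).
    'boundary' is an (N, 2) array of ordered boundary pixels P_B and 'p_w'
    the wrist midpoint P_W; 'order' (neighborhood width used to accept a
    local minimum) is an assumption."""
    dist = np.linalg.norm(boundary - np.asarray(p_w, float), axis=1)
    minima = argrelmin(dist, order=order)[0]  # indices of the finger valleys
    return boundary[minima[0]], boundary[minima[2]]
```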

From the indices of K1 and K2 in the graph, their locations on the hand boundary are determined, as illustrated in Fig. 5. The angle of rotation, θ°, is the angle by which the line connecting K1 and K2 (LK1,K2), with origin at K1, must be turned clockwise so that the line is parallel with the x-axis. The angle of rotation is visualized in Fig. 5.


Fig. 5. Angle of Rotation Calculation

Let dK1,K2 be the distance between K1 and K2. Based on experimentation, the distance between LK1,K2 and the ROI mask, dK,ROI, is defined as 0.2 times dK1,K2; this moves the ROI mask towards the center of the palm. The length of each side of the ROI mask, ls, is 1.4 times dK1,K2. A variable ROI mask allows the palmprint image to be cropped according to the palm size. Fig. 6 shows the determination of the square ROI mask.
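A minimal sketch of this geometry, working in the rotated frame where the K1-K2 line is horizontal; centering the mask on the midpoint of K1 and K2 is an assumption (the paper defines the placement via Fig. 6):

```python
import numpy as np

def roi_from_keypoints(k1, k2):
    """Derive the rotation angle and square ROI from key points K1 and K2,
    using the ratios in the text: a 0.2*d gap between the K1-K2 line and
    the mask, and a side length of 1.4*d, where d = |K1 - K2|."""
    k1, k2 = np.asarray(k1, float), np.asarray(k2, float)
    d = np.linalg.norm(k2 - k1)
    # Angle (degrees) of the K1-K2 line; rotating the image by -theta
    # makes the line parallel to the x-axis (image y-axis points down).
    theta = np.degrees(np.arctan2(k2[1] - k1[1], k2[0] - k1[0]))
    side = 1.4 * d                        # ROI side length, ls
    mid = (k1 + k2) / 2.0                 # assumed horizontal anchor
    top_left = np.array([mid[0] - side / 2.0, mid[1] + 0.2 * d])  # d_K,ROI gap
    return theta, top_left, side
```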



Fig. 6. Square ROI Mask Determination




IV. PALMPRINT EXTRACTION
The minimum and maximum x- and y-coordinates of the four corners of the ROI mask are determined. Referring to these values, a smaller hand image is cropped. The cropped image is rotated by θ degrees clockwise. By referring to the index m of the first non-zero pixel along the diagonal of the rotated image, the rotated image is cropped by m pixels from each side to obtain the palmprint image. Let the grayscale intensity palmprint image be E0. Fig. 7 shows the palmprint image in RGB format (left) and grayscale intensity format, E0 (right).
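A minimal sketch of the rotation-and-crop step for a grayscale image, assuming SciPy; the rotation sign convention depends on the library and should be checked:

```python
import numpy as np
from scipy.ndimage import rotate

def extract_palmprint(cropped_hand, theta):
    """Rotate the cropped hand region so the K1-K2 line is horizontal, then
    trim m pixels from each side, where m is the index of the first non-zero
    pixel along the main diagonal of the rotated (zero-padded) image."""
    rotated = rotate(cropped_hand, -theta, reshape=True)  # assumed sign for clockwise
    diag = np.diagonal(rotated)
    m = int(np.argmax(diag > 0))          # first non-zero diagonal pixel
    return rotated[m:rotated.shape[0] - m, m:rotated.shape[1] - m]
```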

Fig. 7. Palmprint Image in (a) RGB and (b) Grayscale Intensity

All of the palmprint images are enhanced using image adjustment and/or histogram equalization. The adjusted palmprint image, E1, is obtained by adjusting the grayscale palmprint image to the full range of 256 bins. The histogram-equalized palmprint image, E2, is the grayscale palmprint image with its histogram equalized over 256 bins. The individually adjusted palmprint image, E3, is obtained by first adjusting each color channel (red, green and blue) to 256 bins and then converting the image to grayscale. The histogram-equalized individually adjusted palmprint image, E4, is obtained by histogram-equalizing E3. Fig. 8 shows the images produced by the different enhancement methods.
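A minimal sketch of the four variants, interpreting "adjustment" as min-max contrast stretching to the full 256-bin range (an assumption about the paper's adjust operation):

```python
import cv2

def enhance(bgr):
    """Return the four enhanced variants E1-E4 of a BGR palmprint image."""
    stretch = lambda img: cv2.normalize(img, None, 0, 255, cv2.NORM_MINMAX)
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    e1 = stretch(gray)                               # E1: adjusted grayscale
    e2 = cv2.equalizeHist(gray)                      # E2: equalized grayscale
    adjusted = cv2.merge([stretch(c) for c in cv2.split(bgr)])
    e3 = cv2.cvtColor(adjusted, cv2.COLOR_BGR2GRAY)  # E3: per-channel adjust, then gray
    e4 = cv2.equalizeHist(e3)                        # E4: equalized E3
    return e1, e2, e3, e4
```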


From Fig. 8, it is observed that histogram equalization darkens the palm lines and the background. E1 is less clear than E3 because converting RGB to grayscale before image adjustment reduces the number of useful palm lines extracted. The histogram-equalized palmprint image E2 and the histogram-equalized individually adjusted palmprint image E4 are similar, but E2 is clearer than E4. Since the enhanced images vary in size, all of them are resized to 256 x 256 pixels.

V. WAVELET TRANSFORM
The wavelet transform is a multi-resolution analysis tool that can extract palm lines at different resolution levels. In low-resolution decompositions, the fine details of the palmprint image are extracted; as the decomposition level increases, coarser palm lines are extracted. In wavelet decomposition, the original palmprint image (at level one) or the approximation from the previous decomposition level (at every other level) is further decomposed into the approximation, horizontal, vertical and diagonal details of the next decomposition level, as in Fig. 9. The approximation and details at level L + 1 are half the width and height of those at level L.

Fig. 9. Six Levels of Wavelet Decomposition

In this work, the enhanced palmprint image is analysed using the Haar wavelet, Daubechies wavelets (db2 to db6), Symlets wavelets (sym3 to sym6) and Coiflets wavelets (coif1 and coif2), each decomposed over six levels. Fig. 10 shows the Haar wavelet coefficients of image E1 over the six decomposition levels; the details at every level are contrast-adjusted for easy viewing.
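A minimal sketch of the decomposition using PyWavelets (`enhanced` stands for any of E1-E4 from the previous section):

```python
import cv2
import pywt

palm = cv2.resize(enhanced, (256, 256)).astype(float)
# wavedec2 returns the final approximation followed by
# (horizontal, vertical, diagonal) detail tuples from the coarsest
# level (6) down to the finest level (1).
coeffs = pywt.wavedec2(palm, wavelet="haar", level=6)
approx6 = coeffs[0]       # level-6 approximation
details = coeffs[1:]      # [(cH6, cV6, cD6), ..., (cH1, cV1, cD1)]
```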


Fig. 8. Enhanced Images: (a) E1, (b) E2, (c) E3 and (d) E4

Fig. 10. Haar Wavelet Coefficients for E1




VI. WAVELET ENERGY
The wavelet coefficient images at the different decomposition levels have different sizes. Thus, each coefficient image is divided into a 4 x 4 grid of blocks regardless of its size, so that a given block at every decomposition level represents the same relative area of the coefficient image. The wavelet energy of each block is calculated using (1):

WE_{i,j} = \sum_{p=1}^{P} \sum_{q=1}^{Q} (C_{p,q})^2    (1)

where i and j (each from 1 to 4) locate the block, [P, Q] is the size of the block, and C_{p,q} are the wavelet coefficients within the block. Two wavelet energy representations are used in this work. The first, WE1, is normalized over the sum of all decomposition levels together, while the second, WE2, is normalized within each decomposition level separately. Fig. 11 shows the feature representation WE1 and Fig. 12 shows the feature representation WE2.

Fig. 11. Feature Representation WE1

Fig. 12. Feature Representation WE2

The wavelet energy features of the horizontal, vertical and diagonal details at each decomposition level are arranged as in (2):

WD_{K,L} = [WE_{1,1}, WE_{1,2}, WE_{1,3}, ..., WE_{4,3}, WE_{4,4}]    (2)

where WD_{K,L} is the wavelet energy of detail K (horizontal, vertical or diagonal) at decomposition level L. The wavelet energy feature for decomposition level L is arranged as in (3):

WL_L = [WD_{H,L}, WD_{V,L}, WD_{D,L}]    (3)

The feature representation WE1 is calculated using (4) and (5):

WE_ALL = [WL_6, WL_5, WL_4, WL_3, WL_2, WL_1]    (4)

WE1 = WE_ALL / sum(WE_ALL)    (5)

For the feature representation WE2, the feature vector is obtained using (6) and (7):

NWL_L = WL_L / sum(WL_L)    (6)

WE2 = [NWL_6, NWL_5, NWL_4, NWL_3, NWL_2, NWL_1]    (7)

Fig. 13 shows the wavelet energy of the same individual captured at different times, while Fig. 14 shows the wavelet energy of different individuals.

Fig. 13. Wavelet Energy For Same Individual At Different Times

Fig. 14. Wavelet Energy For Different Individuals

From Fig. 13 and Fig. 14, it can be seen that the wavelet energy differs between individuals, while the wavelet energy of the same individual is reproducible across captures taken at different times.
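A minimal sketch of the feature computation described by (1)-(7), using PyWavelets; blocks are sized h//4 x w//4, so any remainder rows or columns are ignored in this sketch:

```python
import numpy as np
import pywt

def block_energy(coef):
    """Sum of squared coefficients over each block of a 4 x 4 grid (eq. (1)).
    Blocks are h//4 x w//4 pixels; remainder rows/columns are ignored."""
    h, w = coef.shape
    bh, bw = h // 4, w // 4
    return np.array([[np.sum(coef[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw] ** 2)
                      for j in range(4)] for i in range(4)]).ravel()

def wavelet_energy(palm, wavelet="db5", levels=6):
    """Compute the WE1 and WE2 feature vectors (eqs. (2)-(7)) of a 2-D
    palmprint image; with six levels and 16 blocks per detail the vectors
    have 6 x 3 x 16 = 288 elements."""
    coeffs = pywt.wavedec2(palm, wavelet, level=levels)
    # WL_L for L = 6..1: concatenated block energies of the H, V and D
    # details at each level (eqs. (2) and (3)).
    wl = [np.concatenate([block_energy(d) for d in detail])
          for detail in coeffs[1:]]
    we_all = np.concatenate(wl)                        # eq. (4)
    we1 = we_all / we_all.sum()                        # eq. (5): joint normalization
    we2 = np.concatenate([v / v.sum() for v in wl])    # eqs. (6) and (7): per level
    return we1, we2
```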




VII. FEATURE MATCHING
The feature vector (WE1 or WE2) is matched using the Euclidean distance or a scaled conjugate gradient-based feedforward backpropagation neural network. The Euclidean distance measures the similarity of two wavelet energy feature vectors using (8):

ED = \sqrt{ \sum_{y=0}^{Y} (WEX_{k,y} - WEX_{l,y})^2 }    (8)

where WEX is the wavelet energy representation of type 1 or 2 (WE1 or WE2), Y is the length of the feature vector, and WEX_k and WEX_l are any two feature vectors, for individuals k and l, from the database. Fig. 15 shows the normalized genuine and imposter matching distributions versus the threshold index. If the wavelet energy features belong to the same individual (intra-class matching), the ED result will fall below the threshold obtained at the minimum error; for inter-class matching between different individuals, the ED result will fall above that threshold.
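A minimal sketch of the matching rule; the threshold itself comes from the distributions in Fig. 15:

```python
import numpy as np

def euclidean_distance(wex_k, wex_l):
    """Eq. (8): distance between two wavelet energy feature vectors."""
    return np.sqrt(np.sum((np.asarray(wex_k) - np.asarray(wex_l)) ** 2))

def is_genuine(query, enrolled, threshold):
    """Intra-class matches fall below the threshold chosen from the
    intersection of the genuine and imposter distributions (Fig. 15)."""
    return euclidean_distance(query, enrolled) < threshold
```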

Fig. 15. Genuine/Imposter Matching Distributions versus Threshold Index

In this work, each feature vector is matched against the remaining 999 feature vectors in the database using the Euclidean distance. The genuine and imposter ED distribution graphs are drawn; since every feature vector yields nine genuine matches and 990 imposter matches, both distributions are normalized. A threshold is selected from the intersection of the two distributions, and the accuracy for the different wavelets and feature representations is calculated.

In the scaled conjugate gradient-based feedforward backpropagation neural network (NN), three of the ten sets of feature vectors per individual are used for training. The number of hidden neurons is set to 288, 432 and 576, i.e. one, 1.5 and two times the length of the feature vector. The NN has 100 output neurons, one per user in the system, each with a range from zero to one. Both the hidden and output neurons use the tangent sigmoid activation function. The NN is trained until the performance goal of 1e-3 is achieved, the maximum epoch of 20000 is reached, or the minimum gradient of 1e-6 is crossed. Another three sets of feature vectors are used to find a suitable threshold for distinguishing genuine users from imposters, and the remaining four sets are used for testing.
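As an illustrative stand-in only: the paper's scaled conjugate gradient training is not available in scikit-learn, so the sketch below approximates the described network with a standard MLP (one hidden layer of 576 tanh neurons, softmax output); the training arrays are hypothetical.

```python
from sklearn.neural_network import MLPClassifier

# Stand-in for the paper's classifier: one hidden layer of 576 tanh
# neurons (2x the 288-element feature vector). scikit-learn's MLP does
# not offer scaled conjugate gradient and uses a softmax output layer,
# so this only approximates the described setup. 'train_features',
# 'train_labels' and 'test_features' are hypothetical arrays.
clf = MLPClassifier(hidden_layer_sizes=(576,), activation="tanh",
                    max_iter=20000, tol=1e-6)
clf.fit(train_features, train_labels)   # 3 of 10 samples per user for training
predicted_ids = clf.predict(test_features)
```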

VIII. RESULTS AND DISCUSSION
Table 1 and Table 2 show the Euclidean distance results (accuracy, in percent) for WE1 and WE2 respectively.

Table 1: Euclidean Distance Results for WE1
Wavelet   E0      E1      E2      E3      E4
Coif1   78.640  81.554  82.940  81.551  82.988
Coif2   78.962  82.474  82.429  82.451  82.384
Db2     82.443  86.328  86.140  86.266  86.160
Db3     81.468  85.400  84.969  85.330  84.925
Db4     79.240  82.547  81.730  82.510  81.757
Db5     79.958  84.396  82.939  84.334  82.984
Db6     78.208  81.899  80.856  81.819  80.894
Haar    89.694  90.660  91.761  90.645  91.726
Sym3    81.468  85.400  84.969  85.330  84.925
Sym4    77.954  82.169  82.408  82.083  82.424
Sym5    76.430  79.598  80.525  79.618  80.537
Sym6    78.740  82.462  82.335  82.414  82.291

Table 2: Euclidean Distance Results for WE2
Wavelet   E0      E1      E2      E3      E4
Coif1   80.725  83.842  87.043  83.816  87.019
Coif2   80.060  82.777  84.675  82.779  84.648
Db2     82.094  86.437  89.208  86.376  89.235
Db3     83.105  87.399  90.026  87.372  89.992
Db4     82.286  86.831  89.084  86.781  89.065
Db5     83.073  86.180  88.856  86.192  88.833
Db6     82.098  85.118  88.006  84.979  87.972
Haar    86.848  90.765  92.945  90.770  92.894
Sym3    83.105  87.399  90.026  87.372  89.992
Sym4    78.453  80.818  83.712  80.744  83.766
Sym5    79.052  82.036  84.920  82.009  84.957
Sym6    77.946  80.007  82.909  79.908  82.912

From Table 1 and Table 2, it is seen that the Haar wavelet achieves the highest accuracy rate among the tested wavelets (Daubechies 2 to 6, Symlets 3 to 6 and Coiflets 1 and 2). The enhanced images (E1, E2, E3 and E4) improve the accuracy of the system compared with the original grayscale image E0, and the histogram-equalized images E2 and E4 perform better than the adjusted images E1 and E3, with E2 yielding the best accuracy. Wavelet energy representation type 2 performs better than type 1. Tables 3 to 5 show the neural network results for 288, 432 and 576 hidden neurons respectively. CoifX denotes the Coiflets wavelet of order X (X = 1 or 2), DbX the Daubechies wavelet of order X (X = 2 to 6), and SymX the Symlets wavelet of order X (X = 3 to 6).




Table 3: Neural Network Results for 288 Hidden Neurons
Wavelet   E0     E1     E2     E3     E4
Coif1   96.58  98.23  97.75  98.10  97.95
Coif2   96.93  98.33  98.00  98.59  98.53
Db2     97.34  98.33  98.26  98.83  98.46
Db3     97.42  97.16  97.84  96.96  97.40
Db4     97.10  97.88  97.48  97.74  97.94
Db5     97.04  98.42  98.61  98.14  97.41
Db6     96.93  98.34  97.84  98.23  97.79
Haar    97.22  97.91  97.95  97.78  98.20
Sym3    97.42  97.16  97.84  96.96  97.40
Sym4    97.19  97.98  98.20  97.81  97.49
Sym5    96.44  97.88  97.47  98.17  98.22
Sym6    97.01  98.15  98.25  97.77  98.29

Table 4: Neural Network Results for 432 Hidden Neurons
Wavelet   E0     E1     E2     E3     E4
Coif1   97.39  98.12  97.93  98.43  98.12
Coif2   97.31  98.43  98.70  98.13  98.60
Db2     97.99  98.70  98.22  98.70  98.41
Db3     97.58  97.84  98.59  98.67  98.31
Db4     97.65  98.15  97.99  98.22  98.44
Db5     97.59  98.50  98.57  98.45  98.75
Db6     97.46  98.56  98.85  98.54  98.17
Haar    97.94  98.50  98.51  98.44  98.61
Sym3    97.58  97.84  98.59  98.67  98.31
Sym4    97.47  98.36  98.17  98.20  98.39
Sym5    97.00  98.19  98.55  98.24  98.47
Sym6    97.17  98.64  98.60  98.54  98.50

Table 5: Neural Network Results for 576 Hidden Neurons
Wavelet   E0     E1     E2     E3     E4
Coif1   97.52  98.53  98.44  96.28  97.32
Coif2   97.27  98.16  98.61  98.20  98.68
Db2     98.04  98.49  98.26  98.71  98.63
Db3     97.93  98.85  98.09  98.83  97.71
Db4     97.40  98.47  98.87  98.38  98.79
Db5     97.90  98.84  98.49  99.07  98.73
Db6     97.75  98.94  98.70  98.91  97.59
Haar    98.33  98.17  97.89  98.62  98.40
Sym3    97.93  98.85  98.09  98.83  97.71
Sym4    97.66  98.30  98.74  98.62  98.70
Sym5    97.21  98.45  98.75  98.53  98.21
Sym6    97.40  98.42  98.89  98.45  98.82

From Tables 3 to 5, it is observed that increasing the number of hidden neurons increases the accuracy rate, although it also increases the storage space required to save the network. An accuracy above 99 percent is achieved using the wavelet energy representation WE2 with Daubechies 5 (Db5). The scaled conjugate gradient-based feedforward backpropagation neural network separates genuine users from imposters better than the Euclidean distance.

IX. CONCLUSION
Ten right-hand images from 100 different individuals were acquired using a digital camera. The hand images were segmented and their boundaries located. From the boundary pixels, two key points (K1 and K2) were determined. Using these key points, the angle of rotation and the location of the square ROI were approximated, and the palmprint image was cropped out after rotation. The palmprint image was enhanced using image adjustment and/or histogram equalization. The enhanced image was decomposed into six levels using different types of wavelets, namely the Haar, Daubechies, Symlets and Coiflets wavelets. The wavelet coefficients were represented using two types of wavelet energy features, WE1 and WE2. The wavelet energy feature was compared with the remaining feature vectors stored in the database using the Euclidean distance or classified with the scaled conjugate gradient-based feedforward backpropagation neural network. From the results, an accuracy of 99.07 percent can be achieved using the enhanced image E3 decomposed with Daubechies 5 (Db5) and a neural network with 576 hidden neurons. For future work, more test data will be gathered to test the accuracy of the neural network, and a modified wavelet transform will be investigated to further increase the accuracy of the palmprint biometric system.

REFERENCES
[1] Ton Van Der Putte and Jeroen Keuning, "Biometrical Fingerprint Recognition: Don't Get Your Fingers Burned," IFIP TC8/WG8.8 Fourth Working Conference on Smart Card Research and Advanced Applications, pp. 289-303, 2001.
[2] Edward Wong, G. Sainarayanan and Ali Chekima, "Palmprint Authentication using Relative Geometric Feature," 3rd International Conference on Artificial Intelligence in Engineering and Technology (ICAIET 2006), pp. 743-748.
[3] C.C. Han, H.L. Cheng, C.L. Lin and K.C. Fan, "Personal Authentication using Palm-print Features," Pattern Recognition, vol. 36, no. 2, 2003, pp. 371-381.
[4] Nicolae Duta, Anil K. Jain and Kanti V. Mardia, "Matching of Palmprints," Pattern Recognition Letters, vol. 23, 2002, pp. 477-485.
[5] Wenxin Li, Jane You and David Zhang, "Texture-Based Palmprint Retrieval Using a Layered Search Scheme for Personal Identification," IEEE Transactions on Multimedia, vol. 7, no. 5, Oct 2005, pp. 891-898.
[6] Xiang-Qian Wu, Kuan-Quan Wang and David Zhang, "Wavelet Based Palmprint Recognition," Proceedings of the First International Conference on Machine Learning and Cybernetics, Beijing, 4-5 November 2002, pp. 1253-1257.
[7] Xiangqian Wu, Kuanquan Wang and David Zhang, "Palmprint Texture Analysis Using Derivative of Gaussian Filters," International Conference on Computational Intelligence and Security 2006, vol. 1, 2006, pp. 751-754.
[8] Wenxin Li, David Zhang and Zhuoqun Xu, "Palmprint Identification by Fourier Transform," International Journal of Pattern Recognition and Artificial Intelligence, vol. 16, no. 4, 2002, pp. 417-432.
[9] David D. Zhang, Palmprint Authentication, Kluwer Academic Publishers, 2004.
[10] Edward Wong, G. Sainarayanan and Ali Chekima, "Palmprint Identification using Discrete Cosine Transform," World Engineering Congress 2007 (WEC2007), pp. 85-91.
[11] Edward Wong, G. Sainarayanan and Ali Chekima, "Palmprint Identification using SobelCode," Malaysia-Japan International Symposium on Advanced Technology (MJISAT), 12-15 Nov 2007, accepted for oral presentation.
[12] N. Otsu, "A Threshold Selection Method from Gray-Level Histograms," IEEE Transactions on Systems, Man, and Cybernetics, vol. 9, no. 1, 1979, pp. 62-66.


