Two-stage Approach for Palmprint Identification using Hough Transform and Hausdorff Distance

Fang Li — School of Computer Engineering, Nanyang Technological University, Singapore 639798, [email protected]

Maylor K.H. Leung — School of Computer Engineering, Nanyang Technological University, Singapore 639798, [email protected]

Abstract—This work proposes a line-based Hough Transform (HT) method that extracts global features for coarse-level filtering in a two-stage palmprint identification system. This is a novel application of the Hough transform to palmprint feature extraction. A principal line detection mechanism in the transformed space is also proposed, based on a flooding process motivated by the Rainfalling Watershed segmentation algorithm. The local neighbourhood information of principal lines is used as an additional similarity measure: consistent and structurally unique local neighbourhood information is first extracted from inputs or models, and optimal matches are then selected by voting. This allows speedy interpretation of input images and retrieval of structurally similar models from a large database. The local information extracted from the position and orientation of individual lines is used for further fine-level identification, where the Line-based Hausdorff Distance (LHD) algorithm is applied for local line matching. Experiments were conducted on a palmprint database collected by The Hong Kong Polytechnic University, with promising results.

Keywords—principal line detection, line segment, palmprint identification

I. INTRODUCTION

The palm is the inner surface of a hand between the wrist and the fingers. 'Palmprint' refers to the various lines on a palm [1]. The palmprint pattern is hardwired into the body at birth and remains relatively unaffected by aging. Compared with other biometric traits, the advantages of palmprint are the large palm area available for feature extraction, the simplicity of data collection, and higher user acceptability [2-8]. In addition, it can be conveniently combined with a thermal-based camera to capture the vein pattern for additional authentication at the same time. Feature extraction is the most important part of pattern recognition because features are the main keys to recognizing an unknown object. Principal lines and wrinkles, called line features, are the most clearly observable features in palmprint images captured by a normal digital camera [9]. Both global features (e.g. principal curves) and local features (e.g. the position and orientation of individual lines) can be extracted from line features [2][11]. This observation motivates us to develop a two-stage matching scheme for the palm lines: global features are used for coarse-level matching, while fine-level

identification uses local information. This allows speedy interpretation of input images and retrieval of structurally similar models from a large database. This work proposes an image matching method in Hough space that extracts a simple global pattern feature for coarse-level filtering in a two-stage palmprint identification system; this is a novel application of the Hough transform to palmprint feature extraction. In traditional chirognomy, the structure of the principal lines (heart line, head line and life line; very few palms have only two principal lines) in the Region Of Interest (ROI) is checked. This observation motivates us to extract the principal-line pattern as a supplementary feature. A principal line detection mechanism in the transformed space is also proposed, based on a flooding process motivated by the Rainfalling Watershed segmentation algorithm. The local neighbourhood information of principal lines is used as an additional similarity measure: consistent and structurally unique local neighbourhood information is first extracted from inputs or models, and optimal matches are then selected by voting. The local information extracted from the position and orientation of individual lines is used for further fine-level identification, where the Line-based Hausdorff Distance (LHD) [12] algorithm is applied for local line matching. Experiments were conducted on a palmprint database collected by The Hong Kong Polytechnic University, with promising results.

The rest of this paper is organized as follows: Section 2 introduces the structure of the proposed two-stage palmprint identification system. The line-based HT, principal line detection using Watershed segmentation, and the LHD are discussed in detail in Section 3. Section 4 presents the identification results. Finally, the conclusion and future work are highlighted in Section 5.

II. SYSTEM OVERVIEW

The block diagram of the proposed two-stage palmprint matching system is shown in Figure 1. The preprocessing module consists of line detection, thinning, contour extraction, and polygonal approximation. Polygonal approximation is applied to extract the Line Edge Map (LEM) [13] from a palm: each curve on the palm is approximated by several straight line segments. In the line feature extraction phase, the LEM is transformed by the line-based

1-4244-0342-1/06/$20.00 ©2006 IEEE Authorized licensed use limited to: Jerusalem College of Engineering. Downloaded on July 28, 2009 at 23:08 from IEEE Xplore. Restrictions apply.

ICARCV 2006

Hough Transform into a two-dimensional feature vector L(ρ, θ) to extract the global pattern of the palmprint and the principal lines; at the same time, the set of line segments is stored as local information for further fine matching. Section 3 presents the details of these two stages.

Figure 1. Block diagram of the proposed two-stage palmprint matching system

Figure 2. Hough transform with the ρ-θ plane: a) sample LEM; b) x-y plane; c) frequency count of lines in (ρ-θ) space (axes: Angle, Distance, Accumulated Height)

III. PROPOSED TWO-STAGE MATCHING SCHEME

There are two stages in our approach to matching input images against the templates in the database: coarse-level matching and fine-level matching.

A. The first stage: coarse-level matching

1) Hough Transform

The HT method is first used to detect straight lines in a binary image; the result is then further enhanced to extract the curves of the palm. Most papers related to the HT deal with the detection of curves of analytic shapes [14]. This paper presents a method of using the HT for palmprint matching. The line-matching problem in image space can readily be converted into a point-matching problem in the parametric (ρ-θ) space. This conversion is shown in Fig. 2: the sample LEM (160x160 resolution) in Fig. 2a is converted into the ρ-θ plane according to the relationship illustrated in Fig. 2b, and the 3D view of the conversion result is shown in Fig. 2c. The height of each bin, L_ρθ, is the accumulated length of the lines whose distance to the origin is ρ and whose intersection angle with the x-axis is θ.

2) Distance of the global line structure

By converting the line segments into (ρ-θ) space, the positions and orientations of principal lines and wrinkles can be described by a feature vector L = {l_ρminθmin, ..., l_ρθ, ..., l_ρmaxθmax}. To reduce the number of cells, the ρ-θ space is evenly grouped into 16x9 bins; the feature vector L is thus simplified to 16x9 integers representing the global pattern of the palmprint, which occupies only 162 B for a 108 KB image. The distance between the global structures of a model image m and a test image t is calculated as follows:

    dist(global) = Σ_{ρ∈[1,16], θ∈[1,9]} (l^m_ρθ − l^t_ρθ)²    (1)
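As a concrete illustration, the conversion into the 16x9 accumulator and Eq. (1) can be sketched as follows. This is my own sketch, not the authors' code: the segment representation, the bin ranges and the normalisation against a 160x160 ROI are assumptions.

```python
import math

# Sketch (assumptions mine): accumulate LEM line segments into a 16x9
# rho-theta histogram and compare two palms with Eq. (1).
# A segment is ((x1, y1), (x2, y2)); ranges assume a 160x160 ROI.

N_RHO, N_THETA = 16, 9
RHO_MAX = 160 * math.sqrt(2)  # largest possible distance to the origin

def segment_rho_theta(seg):
    """Perpendicular distance to the origin and angle (0..pi) of a segment's line."""
    (x1, y1), (x2, y2) = seg
    theta = math.atan2(y2 - y1, x2 - x1) % math.pi
    # Normal form: rho = x*cos(n) + y*sin(n), with n perpendicular to the line.
    n = theta + math.pi / 2
    rho = abs(x1 * math.cos(n) + y1 * math.sin(n))
    return rho, theta

def hough_feature(segments):
    """16x9 accumulator: each bin holds the total length of its segments."""
    L = [[0.0] * N_THETA for _ in range(N_RHO)]
    for seg in segments:
        (x1, y1), (x2, y2) = seg
        length = math.hypot(x2 - x1, y2 - y1)
        rho, theta = segment_rho_theta(seg)
        i = min(int(rho / RHO_MAX * N_RHO), N_RHO - 1)
        j = min(int(theta / math.pi * N_THETA), N_THETA - 1)
        L[i][j] += length
    return L

def dist_global(Lm, Lt):
    """Eq. (1): sum of squared bin differences, O(pq) with p=16, q=9."""
    return sum((Lm[i][j] - Lt[i][j]) ** 2
               for i in range(N_RHO) for j in range(N_THETA))
```

Because the 16x9 grid is fixed, comparing two palms is a constant-cost loop over 144 bins, which is what makes this stage suitable for coarse filtering.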

The time complexity of matching one pair is O(pq), where p is the number of bins in the ρ-dimension and q is the number of bins in the θ-dimension. Since p and q are normally fairly small (experimentally, p and q are set to 16 and 9 respectively), the matching process is very efficient.

3) Local neighbourhood information of principal lines

The feature vector L alone is too simple to give accurate matching results. In traditional chirognomy, the structure of the principal lines (heart line, head line and life line; very few palms have only two principal lines) in the Region Of Interest (ROI) is checked [11]. This observation motivates us to extract the principal-line pattern as a supplementary feature. Since the principal lines are normally the longest curves in a palmprint, the problem of extracting them reduces to extracting the longest curves [3]. A technique derived from Rainfalling Watershed [15] is used for this. It is based on the idea that the line segments approximating the same curve or feature line deviate only slightly in their line parameters, as illustrated in Fig. 3. In the 3D (ρ-θ) space shown in Fig. 2c, these line segments fall into the same bin or neighbouring bins, so a palmprint can be imagined as many hills of different heights. A flooding process, shown in Fig. 4, drowns most of the hills by raising the water level until only 3 or 2 groups of hills remain (depending on the number of principal lines of the palm being analysed). Hills under the threshold level are marked as "non-principal lines". Connected hills are clustered together and the ρ-θ values are averaged within each cluster. Finally, contours are drawn around each group to complete the segmentation.
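The flooding idea above can be sketched as follows. This is a deliberate simplification of mine, not the paper's implementation: I raise the water level over the 16x9 accumulator and stop at the first level where no more than a target number of connected groups of bins survive.

```python
# Sketch of the flooding process (assumptions mine): raise the "water
# level" over the 16x9 rho-theta accumulator until at most `target`
# connected groups of bins remain above water.

def connected_groups(L, level):
    """8-connected clusters of bins whose height exceeds `level`."""
    rows, cols = len(L), len(L[0])
    seen, groups = set(), []
    for r in range(rows):
        for c in range(cols):
            if L[r][c] > level and (r, c) not in seen:
                stack, group = [(r, c)], []
                seen.add((r, c))
                while stack:
                    y, x = stack.pop()
                    group.append((y, x))
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            ny, nx = y + dy, x + dx
                            if (0 <= ny < rows and 0 <= nx < cols
                                    and L[ny][nx] > level
                                    and (ny, nx) not in seen):
                                seen.add((ny, nx))
                                stack.append((ny, nx))
                groups.append(group)
    return groups

def flood(L, target=3):
    """Lowest water level at which no more than `target` hills remain."""
    for level in sorted({h for row in L for h in row}):
        groups = connected_groups(L, level)
        if 0 < len(groups) <= target:
            return level, groups
    return None, []
```

The surviving groups correspond to the principal-line clusters; bins below the returned level play the role of the "non-principal lines" in the text.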

Therefore, by determining the grouped bins, one principal line k, corresponding to the kth cluster in the ROI, is represented by three values: Lk, ρk and θk. Take the third group in Fig. 5 as an example: the two bins in this cluster represent the third principal line, ρ3 and θ3 give the location of the 3rd cluster, and L3 is the accumulated value of the two bins in the 3rd cluster. In general, Lk is the accumulated length of the kth cluster, and ρk and θk are its averaged distance and angle values respectively. They are calculated as follows:

    L_k = Σ_{i=1}^{N} l_{b_i}    (2)

where N is the number of bins in the kth cluster and l_{b_i} is the length of the ith bin in the kth cluster;

    ρ_k = (1/L_k) Σ_{i=1}^{N} (l_{b_i} × ρ_{b_i})    (3)

where ρ_{b_i} is the distance to the origin of the ith bin in the kth cluster;

    θ_k = (1/L_k) Σ_{i=1}^{N} (l_{b_i} × θ_{b_i})    (4)

where θ_{b_i} is the intersection angle with the x-axis of the ith bin in the kth cluster.

Thus, the line segments that correspond to the main feature lines, or principal lines, in the ROI can be approximated by the feature vector Principal = {(L1, ρ1, θ1), (L2, ρ2, θ2), (L3, ρ3, θ3)} ({(L1, ρ1, θ1), (L2, ρ2, θ2)} for palms with only two principal lines), which represents the central location and accumulated length of the three (or two) peak clusters.

Figure 3. Grouping line segments with similar line parameters

Figure 4. Flooding process (Group 1, Group 2, Group 3)

Figure 5. Sample of grouping: a) grouping in ρ-θ space; b) extracted principal lines of Fig. 2a

In addition to the global structure of all the lines, the neighbourhood structure of the principal lines can serve as a supplementary means to measure similarity. Pair-wise geometric attributes are used here to represent each local neighbourhood structure. The selected geometric attributes of primitive features should be simple; invariant to translation, scaling and rotation; relatively robust to end-point erosion; and sufficient for discrimination [16]. In our work,

    D(l_a, l_b) = √((ρ_{l_a} − ρ_{l_b})² + (θ_{l_a} − θ_{l_b})²)

is computed to represent the relationship between a pair of line segments l_a and l_b.

DNN_{m_k} is the relationship between each principal line m_k and its nearest neighbour in the same model image. It is calculated as follows:

    D(m_k, m_j) = √((ρ_{m_k} − ρ_{m_j})² + (θ_{m_k} − θ_{m_j})²)    (5)

    DNN_{m_k} = min_j D(m_k, m_j),  j ∈ [0, 2], k ≠ j.    (6)

The same operations are applied to each principal line t_k in the test image. DNN_{t_k} is calculated as the feature value of each principal line in the test image as follows:

    D(t_k, t_j) = √((ρ_{t_k} − ρ_{t_j})² + (θ_{t_k} − θ_{t_j})²)    (7)

    DNN_{t_k} = min_j D(t_k, t_j),  j ∈ [0, 2], k ≠ j.    (8)

A 2-dimensional matrix T[ρ, θ] is employed to conglomerate the local structures of the principal lines of one image and serves as the framework for indexing. One pair-wise geometric attribute relating each principal line to its nearest neighbour is computed to represent its local structure. For the non-principal-line cells of this matrix, the attribute value DNN_ρθ is set to MAX. MDNN_ρθ and TDNN_ρθ are the matrices for the model image and the test image respectively. With T[ρ, θ], one can perform image matching by intersecting the matrices of the model image and the test image:

    dist(principal) = Σ_{ρ∈[1,16], θ∈[1,9]} L_{mρθ} (MDNN_ρθ − TDNN_ρθ)    (9)

4) Result of coarse-level matching

The coarse-level distance between two images is defined as follows:

    D(m, t) = dist(global) + dist(principal)    (10)

Because the HT works by accumulating a large number of votes, it is relatively insensitive to the small vote fluctuations caused by noise and occlusion. Taking the curve in Fig. 3 as an example, even under poor segmentation with all the line segments broken, the final result is not affected significantly. The matching algorithm is implemented in Microsoft VC++ and runs on an NEC personal computer (3 GHz CPU, 1 GB RAM) under Windows XP. This scheme processes 600 images in only 12.3 s and achieves 71% accuracy for the top 1 match and 100% accuracy for the top 15 matches, as shown in Fig. 6. We tried different bin numbers for the ρ-θ space; 16x9 bins gave the optimal result.

Figure 6. Identification accuracy of coarse-level matching against top-match rank, for ρ-θ bin sizes 4x4, 8x8, 10x10, 16x9, 20x20 and 32x8

B. The second stage: fine-level matching

After coarse-level matching, only 71% of the input images are matched to the genuine palm if only the first match is considered. A careful examination of the failed examples reveals that palmprint images with similar principal lines can be impostors if only global and principal-line information is considered. However, if we also look into the local information of each line, such as its position and orientation, we can distinguish the impostor from the genuine palm. Further fine-level matching over the current top 15 match results enhances the accuracy while keeping the time complexity lower than that of fine matching without the coarse level.

1) Hausdorff Distance

Named after Felix Hausdorff (1868-1942), the Hausdorff Distance (HD) is the "maximum distance of a set to the nearest point in the other set":

    h(A, B) = max_{a∈A} min_{b∈B} { d(a, b) }    (11)

where a and b are points of the sets A and B respectively, and d(a, b) is any metric between these points. It should be noted that the HD is oriented (asymmetric): most of the time h(A, B) is not equal to h(B, A). A more general definition of the HD is

    H(A, B) = max{ h(A, B), h(B, A) }    (12)

which defines the HD between A and B, while Eq. 11 computes the HD from A to B (also called the directed HD). The HD is one of the commonly used methods in image processing and matching applications; it measures the dissimilarity of two images without explicit point correspondence [12].

a) Line segment Hausdorff Distance (LHD)

The original HD is a distance defined between two sets of points. The LHD extends the concept to two sets of line segments. Applying the concept to the LEM of an image, each line set captures the complete feature vector set of one image; hence we can measure the dissimilarity of two images based on line features using the LHD. The LHD has been adopted in a number of applications, such as logo recognition [17].

The LHD is built on the distance d(m, t) between two line segments m and t, which plays the same role as d(a, b) in Eq. 11. It makes use of the added attributes of line orientation and line-point association to measure the dissimilarity of two line segments:

    d(m, t) = [d_θ(m, t), d_||(m, t), d_⊥(m, t)]    (13)

where m is one of the line segments of the model line set M, and t is one of the line segments of the test line set T. d_θ(m, t) is the angle distance between m and t, represented by f(θ(m, t)), where θ(m, t) is the smallest intersecting angle between m and t and f() is a non-linear monotonic penalty function mapping an angle to a penalty value. A weight w_θ, determined by a training process, was needed for f() to normalize with respect to d_||(m, t) and d_⊥(m, t). d_||(m, t) is the parallel displacement, defined as the minimum displacement to align either the left end points or the right end points of m and t. d_⊥(m, t) is the perpendicular distance between the two line segments. d_θ(m, t) was later modified in [17] to remove the weight w_θ:

    d_θ'(m, t) = min(|m|, |t|) × sin(θ(m, t))    (14)

The term min(|m|, |t|) × sin(θ(m, t)) transforms the angular difference into a Euclidean distance. This measurement works well as long as the lengths of the lines do not vary significantly; otherwise, incorrect results are generated. One sample is shown in Fig. 7. Fig. 7(a) shows a line t in a test image; the matched lines in the genuine model and the impostor model are shown in Figs. 7(b) and 7(c) respectively. These two model lines have the same perpendicular distance to line t, 17 pixels, and the parallel distance is not calculated in this situation in [17]. θ(m1, t) and θ(m2, t) are 20 degrees and 80 degrees respectively. To human vision, line m1 is more similar to line t than line m2.

Figure 7. Failed sample of d_θ'(m, t): (a) line t; (b) line m1 (genuine); (c) line m2 (impostor); (d) superimposed (a), (b) and (c)

However, using d_θ'(m, t), the incorrect result d(m1, t) = 51.2 > d(m2, t) = 26.85 is generated. This problem can be solved if the following proposed equation is used:

    d_θ''(m, t) = (width ÷ d_displace(m, t)) × tan(θ(m, t))    (15)

where width is the width of the image (160 for the images in our database), and d_displace(m, t) = √(d_⊥(m, t)² + d_||(m, t)²) represents the position distance of the two lines. When human beings judge the dissimilarity between two line segments using position and orientation information, the orientation information is insignificant if the two segments are far from each other; accordingly, d_θ''(m, t) is inversely proportional to d_displace(m, t). width normalizes the distance and at the same time makes the calculation of d_θ''(m, t) scale invariant. Moreover, the tangent function is used in the new formula: it strengthens the penalty for large angle differences because its value rises steeply beyond 45 degrees. Calculated with Eq. 15, the correct result d(m1, t) = 19.27 < d(m2, t) = 52.35 is achieved. The implementation of palmprint matching employs the new formulation.

The directed LHD, h_s, between two sets of line segments M and T is defined as follows:

    h_s(M, T) = (1 / Σ_{m∈M} l_m) Σ_{m∈M} l_m × min_{t∈T} d(m, t)    (16)

where l_m is the length of line segment m. The length information defines the weight of the corresponding line segment: the longer the line segment, the more significantly it affects the final dissimilarity. Finally, the degree of dissimilarity between the two sets of line segments is:

    H_s(M, T) = average(h_s(M, T), h_s(T, M))    (17)

2) Deepest descent

The Line Edge Map (LEM) feature is extracted from the Region Of Interest (ROI). It is therefore important that the ROI occupy the same position in different palm images, to ensure the stability of the extracted line features; this also has a significant influence on identification accuracy [18]. Although the locations of the palms are fixed by pegs, not all users follow the instructions accurately, so the alignment is usually imprecise and the ROIs are not always located in the same position in different palm images [9]. If an input image happens to match an intruder's palm better than the genuine one because of rotation or shifting, this unfortunately raises the error rate. The natural idea is that the effect of ROI displacement across images can be reduced to an acceptable degree by rotation and shifting: identification accuracy can be enhanced if we find a close-to-optimal alignment between the model and input images. However, exhaustively searching for the best transformation is computationally too expensive to be applicable; in our system the ROI is 160x160. In this paper, we employ an efficient method, deepest descent, to approximate the optimal rotation and shift, achieving near-optimal accuracy with minimal computation. Briefly, the algorithm works as follows. It starts from an arbitrary initial seeding state, in our work the original state. First, the 20 states with rotation degrees from -10 to 10 are checked to find the best rotation; the state with the smallest distance becomes the new seeding state. Next, the eight neighbouring states of the seeding state are evaluated, and the algorithm moves into the one that gives the maximum decrease in distance over all eight neighbours. The moved-in neighbour state is used as a new seeding state and the above procedure is repeated until the iteration converges. The criterion of convergence is that the distance between the input image and the shifted model image can no longer be improved by moving into any of the eight neighbours. In this way, a near-optimal alignment is found and the corresponding distance can be easily determined.
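The search just described can be sketched as below. `dist_fn` is a hypothetical callback of mine that scores a candidate (rotation, dx, dy) alignment; I enumerate integer rotation degrees from -10 to 10 inclusive, which counts 20 candidates besides the original state as in the text.

```python
# Sketch of the deepest-descent alignment (assumption: `dist_fn(rot, dx, dy)`
# returns the matching distance for a candidate rotation/shift).

def deepest_descent(dist_fn):
    # Stage 1: best rotation around the original state.
    best_rot = min(range(-10, 11), key=lambda r: dist_fn(r, 0, 0))
    # Stage 2: greedy walk over the 8 shift neighbours until no improvement.
    dx = dy = 0
    best = dist_fn(best_rot, dx, dy)
    while True:
        neighbours = [(dx + i, dy + j)
                      for i in (-1, 0, 1) for j in (-1, 0, 1)
                      if (i, j) != (0, 0)]
        cand = min(neighbours, key=lambda p: dist_fn(best_rot, p[0], p[1]))
        score = dist_fn(best_rot, cand[0], cand[1])
        if score >= best:  # convergence: no neighbour improves the distance
            return best_rot, dx, dy, best
        dx, dy, best = cand[0], cand[1], score
```

Like any greedy descent, this finds a local minimum of the distance surface; the text's claim of near-optimal alignment rests on that surface being well-behaved near the true alignment.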

IV. EXPERIMENT RESULTS

A series of experiments was carried out using the palmprint database collected by the Biometric Research Centre of The Hong Kong Polytechnic University, containing 100 individuals with 6 images per person [19]. An identification system examines whether the user is one of the enrolled candidates. In our experiment, 3 images were randomly chosen from each person as model images and the remaining three were used as test images for identification; a palmprint database of 100 classes was thus created. During the experiments, the LEM feature of each image was extracted and


hierarchical matching was used. The matching results and comparisons are tabulated in Table I. Average recognition rates of 95%, 91%, and 98% were achieved by the techniques proposed in [3], [2] and [1] respectively. The experimental results demonstrate that the hierarchical matching method achieves the highest accuracy.

TABLE I. COMPARISON OF DIFFERENT PALMPRINT IDENTIFICATION METHODS

  • Duta et al. [3] — database: 3 subjects, 30 images; feature: feature points; matching criterion: Euclidean distance; recognition rate: 95%.
  • You et al. [2] — database: 100 subjects, 200 images; feature: texture and feature points; matching criteria: energy difference & Hausdorff distance; recognition rate: 91%.
  • Zhang & Zhang [1] — database: 50 subjects, 200 images; feature: wavelet signatures; matching criterion: Euclidean distance; recognition rate: 98%.
  • Proposed two-stage system — database: 100 subjects, 600 images; feature: lines; matching criteria: global pattern & line Hausdorff distance; recognition rate: 99.2%.

V. CONCLUSION AND FUTURE WORKS

A two-stage palmprint matching system has been proposed in this paper. A line-based Hough Transform method is proposed to extract global features for coarse-level filtering in this system. The local neighbourhood information of principal lines is used for the first time as a means to measure similarity: consistent and structurally unique local neighbourhood information is first extracted from inputs or models, and optimal matches are then selected by voting. This allows speedy interpretation of input images and retrieval of structurally similar models from a large database. The local information extracted from the position and orientation of individual lines is used for further fine-level identification, where the Line-based Hausdorff Distance (LHD) algorithm is applied for local line matching. This hierarchical method not only overcomes the time-consumption problem of palmprint identification systems but also achieves high accuracy. The experimental results prove the usefulness of this algorithm.


We still have much work to do in the future to improve this method:

  • To solve the skewing problem in palmprint matching.
  • To consider more local information to enhance the accuracy further.

REFERENCES

[1] L. Zhang and D. Zhang, "Characterization of palmprint by wavelet signatures via directional context modelling", IEEE Trans. Systems, Man, and Cybernetics-Part B, 34(3):1335-1347, 2004.
[2] J. You, W.X. Li, and D. Zhang, "Hierarchical palmprint identification via multiple feature extraction", Pattern Recognition, 35:847-859, 2002.
[3] N. Duta and A.K. Jain, "Matching of palmprints", Pattern Recognition Letters, 23:477-485, 2002.
[4] Z. Riha and V. Matyas, "Biometric authentication systems", FI MU Report Series, FIMU-RS-2000-08, November 2000.
[5] A. Jain and H. Lin, "On-line fingerprint verification", Proc. ICPR '96, pp. 596-600, 1996.
[6] C.J. Liu and H. Wechsler, "Independent component analysis of Gabor features for face recognition", IEEE Trans. Neural Networks, vol. 14, no. 4, pp. 919-928, 2003.
[7] P.S. Huang, "Automatic gait recognition via statistical approaches for extended template features", IEEE Trans. Systems, Man, and Cybernetics-Part B, vol. 31, no. 5, pp. 818-824, 2001.
[8] V. Athitsos and S. Sclaroff, "Estimating 3D hand pose from a cluttered image", Boston University Computer Science Tech. Report No. 2003-009, 2003.
[9] W. Shu, G. Rong, Z.Q. Bian, and D. Zhang, "Automatic palmprint verification", International Journal of Image and Graphics, vol. 1, no. 1, pp. 135-151, 2001.
[10] D. Zhang and W. Shu, "Two novel characteristics in palmprint verification: datum point invariance and line feature matching", Pattern Recognition, vol. 33, no. 4, pp. 691-702, 1999.
[11] J. You, W.K. Kong, D. Zhang, and K.H. Cheung, "On hierarchical palmprint coding with multiple features for personal identification in large databases", IEEE Trans. Circuits and Systems for Video Technology, vol. 14, no. 2, pp. 234-243, 2004.
[12] Y.S. Gao and M.K.H. Leung, "Face recognition using line edge map", IEEE Trans. Pattern Analysis and Machine Intelligence, 24(6):764-779, 2002.
[13] M.K. Leung and Y.H. Yang, "Dynamic two-strip algorithm in curve fitting", Pattern Recognition, vol. 23, pp. 69-79, 1990.
[14] K.F. Lai and R.T. Chin, "Deformable contours: modeling and extraction", IEEE Trans. Pattern Analysis and Machine Intelligence, 17(11):1084-1090, 1995.
[15] R.C. Gonzalez and R.E. Woods, Digital Image Processing. New Jersey: Prentice-Hall, 2002.
[16] J.R. Beveridge and E.M. Riseman, "How easy is matching 2D line models using local search?", IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 19, no. 6, pp. 564-579, 1997.
[17] J.Y. Chen, M.K.H. Leung, and Y.S. Gao, "Noisy logo recognition using line segment Hausdorff distance", Pattern Recognition, vol. 36, pp. 943-955, 2003.
[18] A. Kumar, D.C.M. Wong, H.C. Shen, and A.K. Jain, "Personal verification using palmprint and hand geometry biometrics", Proc. 4th International Conference on Audio- and Video-Based Biometric Person Authentication, 2003.
[19] Palmprint database, Biometric Research Centre, The Hong Kong Polytechnic University. Available: http://www4.comp.polyu.edu.hk/~biometrics/
