Comparison of Different Feature Extraction Techniques in Content-Based Image Retrieval for CT Brain Images

Wan Siti Halimatul Munirah Wan Ahmad and Mohammad Faizal Ahmad Fauzi
Faculty of Engineering, Multimedia University, Cyberjaya, Malaysia

[email protected] [email protected]

Abstract—A content-based image retrieval (CBIR) system helps users retrieve relevant images based on their contents. A reliable content-based feature extraction technique is therefore required to effectively extract most of the information from the images; the important elements include the texture, colour, intensity and shape of the objects inside an image. When used in medical applications, CBIR can support medical experts in their diagnosis, for example by retrieving similar cases of a disease and by monitoring a patient's progress. In this paper, several feature extraction techniques are explored to assess their effectiveness in retrieving medical images. The techniques are Gabor Transform, Discrete Wavelet Frame, Hu Moment Invariants, Fourier Descriptor, Gray Level Histogram and Gray Level Coherence Vector. Experiments are conducted on 3,032 CT images of the human brain, and promising results are reported.

I. INTRODUCTION

The advancement of computer technologies produces huge volumes of multimedia data, particularly image data. As a result, content-based image retrieval (CBIR) has emerged as an active research area. A CBIR system finds images based on their visual content, so the retrieved results are visually similar in appearance to the query image. To describe the image content, low-level features are extracted from the image itself [3]. Numerous elements such as texture, motion, colour, intensity and shape have been proposed and used to quantitatively describe visual information [4]. The image features, generated using specific algorithms, are then stored and maintained in a separate database.

A number of previous works have addressed different techniques for these image elements in image retrieval. In 2002, Nikolaou et al. [8] proposed a fractal scanning technique for colour image retrieval, with the Discrete Cosine Transform (DCT) and Fourier descriptors as feature extraction techniques. Qiang et al. [10] developed a CBIR framework based on global colour moments in the HSV colour space. Later, in 2006, a user concept pattern learning framework was presented by Chen et al. [9] for CBIR using HSV colour features and the Daubechies wavelet transform. Work on CBIR for medical applications was rarely found before; however, it is gaining a lot of attention recently due to the large number of medical images in digital format generated by medical institutions

978-1-4244-2295-1/08/$25.00 © 2008 IEEE

every day. In 2003, Zheng et al. [7] developed a content-based pathology image retrieval system based on image features of colour histogram, texture representation by the Gabor transform, Fourier coefficients and wavelet coefficients. Recently, Rahman et al. [13] proposed a CBIR framework consisting of machine learning methods for image prefiltering, statistical similarity matching and a relevance feedback scheme for medical images. The features are extracted using a colour moment descriptor, the gray-level co-occurrence matrix as texture characteristics, and shape features based on Canny edge detection.

In this paper, a detailed comparison of the accuracy of different feature extraction techniques is presented, with experiments on medical images. The motivation is to find the best technique for further medical image retrieval applications. The techniques cover the texture, intensity and shape elements: the texture techniques are Gabor Transform and Discrete Wavelet Frame, the intensity techniques are Gray Level Histogram and Gray Level Coherence Vector, and the shape methods are Hu Moment Invariants and Fourier Descriptors.

This paper is organized as follows. The next section briefly describes the feature extraction techniques used in the comparison, followed by a review of the medical images used in the experiment in Section III. The experimental setup is discussed in Section IV, followed by the results and discussions in Section V. Finally, the conclusion is presented in Section VI.

II. REVIEW OF FEATURE EXTRACTION TECHNIQUES

A. Gabor Transform (texture)

The Gabor transform is a technique that extracts texture information from an image. The one used in this research is the two-dimensional Gabor function proposed by Manjunath and Ma [1]. Expanding the mother Gabor wavelet forms a complete but non-orthogonal basis set. The non-orthogonality implies that there will be redundant information between different resolutions in the output data.
This redundancy has been reduced by [1] with the following strategy: Let Ul and Uh denote the lower and upper frequency of interest, S be the total number of scales, and K be the total number of orientations (or translations) to be computed. Then the design strategy is to ensure that the half-peak magnitude support of the filter


Authorized licensed use limited to: UNIVERSIDADE FEDERAL DO RIO GRANDE DO NORTE. Downloaded on May 23, 2009 at 09:01 from IEEE Xplore. Restrictions apply.

MMSP 2008

responses in the frequency spectrum touch each other, as shown in Fig. 1, for S = 4 and K = 6. The Gabor transform is then defined by:

W_mn(x, y) = ∫∫ I(x₁, y₁) g*_mn(x − x₁, y − y₁) dx₁ dy₁    (1)

where * indicates the complex conjugate and m, n are integers, m = 1, 2, …, S and n = 1, 2, …, K. The Gabor transform therefore produces S×K output images, and the energy within each image is used as a feature:

y = Σ_{i=0}^{M−1} Σ_{j=0}^{N−1} |W(i, j)|    (2)

where M×N is the image size, resulting in an S×K-dimensional feature vector, with S = 6 and K = 4.

Fig. 1. Frequency spectrum of 2D Gabor transforms

B. Discrete Wavelet Frame (texture)

The Discrete Wavelet Frame (DWF) [2] is an overcomplete wavelet decomposition in which the filtered images are not sub-sampled. This results in four wavelet coefficient images of the same size as the input image, from the low-low (LL), low-high (LH), high-low (HL) and high-high (HH) channels. The decomposition is then continued on the LL channel just as in the normal wavelet transform, but since the image is not sub-sampled, the filter has to be upsampled by inserting zeros between its coefficients. The number of channels generated for the DWF is 3l + 1, where l is the number of decomposition levels. The energy within each channel is used as a feature; with l = 3, a 10-dimensional feature vector is produced.

C. Hu Moment Invariants (shape)

For this shape representation, the invariant moments used are those derived by Hu [11]. Hu defined seven such moments whose values are invariant under translation and under changes in scale and rotation. They include a skew invariant, which can distinguish mirror images of otherwise identical images. The seven moments are used as features, producing a 7-dimensional feature vector.

D. Fourier Descriptor (shape)

Fourier Descriptors (FDs) are a powerful feature for representing boundaries and objects. Consider an N-point digital boundary; starting from an arbitrary point (x₀, y₀) and following a steady counterclockwise direction along the boundary, a set of coordinate pairs (x₀, y₀), (x₁, y₁), …, (x_{N−1}, y_{N−1}) can be generated. These coordinates can be expressed in complex form as

z(n) = x(n) + jy(n),  n = 0, 1, 2, …, N − 1    (3)

The discrete Fourier transform (DFT) of z(n) gives

a(k) = Σ_{n=0}^{N−1} z(n) exp(−j2πkn / N),  0 ≤ k ≤ N − 1    (4)

The complex coefficients a(k) are called the Fourier Descriptors of the boundary. A 64-point DFT is used, which results in a 64-dimensional feature vector.

E. Gray Level Histogram (intensity)

Colour histograms are the most common way of describing the low-level colour properties of images. Since the medical images here are only available in grayscale, a simpler histogram called the gray level histogram (GLH) is used to describe the intensity of the gray level colour map. A GLH is represented by a set of bins, where each bin represents one or more levels of gray intensity. It is obtained by counting the number of pixels that fall into each bin based on their intensity [6]. Fig. 2 shows examples of the GLH for different images using a 64-bin histogram.

Fig. 2. Example of gray level histogram distribution with number of bins = 64
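The 64-bin gray level histogram described above can be sketched in a few lines; the 0–255 gray range and the toy image are assumptions for illustration, not values from the paper:

```python
import numpy as np

def gray_level_histogram(image, n_bins=64, max_gray=256):
    """GLH feature: count the pixels falling into each intensity bin
    (here 64 bins over the assumed 0..255 gray range)."""
    hist, _ = np.histogram(image, bins=n_bins, range=(0, max_gray))
    return hist

# toy 4x4 "image" (hypothetical data, just to exercise the binning)
img = np.array([[0, 0, 255, 255],
                [0, 0, 255, 255],
                [128, 128, 128, 128],
                [10, 10, 10, 10]], dtype=np.uint8)
fv = gray_level_histogram(img)
```

With 64 bins over 256 levels each bin covers 4 consecutive gray values, so the pixel counts land in bins 0, 2, 32 and 63 for this toy image.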

F. Gray Level Coherence Vector (intensity)

The Gray Level Coherence Vector (GLCV) is another technique for extracting intensity features from an image. The idea is similar to the Colour Coherence Vector (CCV) proposed by Pass et al. [5]. This technique incorporates some spatial information about an image: each pixel in a given bin is classified as either coherent or incoherent. A pixel is coherent if it belongs to a large connected group of similar pixels; otherwise it is incoherent. The first step is to discretize the gray colourspace so that only n distinct gray levels (or bins) are used in the image. The next step is to categorize the pixels within a bin as either coherent or incoherent, by comparing the size of the connected group with a predefined threshold value τ. The values of τ and n used in [5] are 300 and 64 respectively. In this experiment, the number of bins is also set to 64 and several values of τ were tested; the optimal value was found to be τ = 2600. All tested images in our image database contain 262,144 (512×512) pixels, so a coherent region is set to be approximately 1% of the image. With τ = 2600, the average proportion of coherent pixels over all images in our database is about 70%; and with 64 bins, the number of features produced is 128, i.e. 64 each for the coherent and incoherent vectors.
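A minimal sketch of the coherence-vector idea follows, assuming 4-connected regions (the paper does not state the connectivity) and using small toy values of n and τ so the behaviour is easy to check; the paper's actual values are n = 64 and τ = 2600:

```python
import numpy as np
from scipy import ndimage

def gray_level_coherence_vector(image, n_bins=64, tau=2600, max_gray=256):
    """For each gray bin, split its pixels into coherent (part of a
    connected region of at least tau pixels) and incoherent, giving a
    2*n_bins feature vector as described for the GLCV technique."""
    bins = (image.astype(np.int64) * n_bins) // max_gray  # quantize to n_bins levels
    coherent = np.zeros(n_bins, dtype=np.int64)
    incoherent = np.zeros(n_bins, dtype=np.int64)
    for b in range(n_bins):
        mask = bins == b
        labels, n_regions = ndimage.label(mask)  # 4-connected regions in this bin
        for region in range(1, n_regions + 1):
            size = int((labels == region).sum())
            if size >= tau:
                coherent[b] += size
            else:
                incoherent[b] += size
    return np.concatenate([coherent, incoherent])

# toy image with 4 gray bins and tau = 4 (illustrative values only)
img = np.array([[0, 0, 200, 200],
                [0, 0, 200, 200],
                [100, 100, 100, 100],
                [0, 0, 0, 200]])
fv = gray_level_coherence_vector(img, n_bins=4, tau=4, max_gray=256)
```

Here the 2×2 blocks and the full row form coherent regions, while the lone bottom-right pixel and the three-pixel run fall below τ and count as incoherent.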


III. MEDICAL IMAGES

The medical image collection used in this experiment was provided by Putrajaya Hospital, Malaysia. It consists of 3,032 computed tomography (CT) images of the human brain in the DICOM image format. The images have a resolution of 512×512 and were scanned from 95 patients, with each patient having between 15 and 56 scans. To quantitatively evaluate the performance of the texture- and intensity-based feature extraction techniques, the images are divided into 4 different classes according to visual similarity, called the general image classification. The ability of the system to retrieve images from the same class as the query image indicates the accuracy of the feature extraction techniques. To evaluate the performance of the shape-based techniques, a different classification is used, called the shape image classification. This classification is based on the head contour obtained by segmenting the head from its background using the fuzzy C-means clustering algorithm. Note that the shape-based feature extraction techniques employed will only search for similar shapes of the head itself, and not the shapes of the different objects inside it. Visually, the shape of the head can also be classified into 4 different classes. Some examples of the images for both the general and shape classifications are shown in Tables I and II. In the general image classification, 638 of the 3,032 images in the database belong to Class 1, 808 to Class 2, 1134 to Class 3 and 452 to Class 4. In the shape image classification, 293 belong to Class 1, 1012 to Class 2, 981 to Class 3 and 746 to Class 4.

TABLE I
GENERAL IMAGE CLASSIFICATION

  Class                     Class 1   Class 2   Class 3   Class 4
  Total images per class    638       808       1134      452

(Example images A and B for each class are omitted here.)

TABLE II
SHAPE IMAGE CLASSIFICATION

  Class                     Class 1   Class 2   Class 3   Class 4
  Total images per class    293       1012      981       746

(Example images A and B for each class are omitted here.)

IV. EXPERIMENTAL SETUP

The retrieval system consists of two stages, namely the offline feature extraction stage and the online retrieval stage. During the offline stage, the six feature extraction techniques are applied to all 3,032 images in the database. Feature vectors of different lengths are generated depending on the technique used (Table III). These vectors are stored in separate feature vector databases according to the different techniques. During the online stage, the feature vector of the query image is computed using one selected technique and compared to all feature vectors in the feature vector database of that technique. A distance metric is used to compute the similarity between feature vectors: a small distance implies that the corresponding database image is similar to the query image, and vice versa. Images are then retrieved in order of increasing distance. The flow of this process is shown in Fig. 4.

Fig. 3. Offline feature extraction stage

TABLE III
DIFFERENT LENGTHS OF FEATURE VECTORS

  Technique                     FV Length
  Gabor Transform               24
  Discrete Wavelet Frame        10
  Hu Moment Invariants          7
  Fourier Descriptor            64
  Gray Level Histogram          64
  Gray Level Coherence Vector   128

Fig. 4. Online stage of retrieval process
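The online stage — comparing the query feature vector against a stored feature database and ranking by increasing distance — can be sketched as below. The Manhattan distance and the toy vectors are assumptions for illustration:

```python
import numpy as np

def retrieve(query_fv, db_fvs, top_n=50):
    """Online-stage sketch: rank database images by increasing Manhattan
    (L1) distance between the query feature vector and each stored vector."""
    dists = np.abs(db_fvs - query_fv).sum(axis=1)  # L1 distance to every image
    order = np.argsort(dists)                      # most similar first
    return order[:top_n], dists[order[:top_n]]

# hypothetical 2-D feature vectors for three database images
db = np.array([[1.0, 0.0],
               [0.5, 0.5],
               [10.0, 10.0]])
idx, d = retrieve(np.array([1.0, 0.0]), db, top_n=2)
```

The returned indices identify the database images, which would then be displayed in order of increasing distance.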

Measuring the dissimilarity between images is of central importance for retrieving images by content. In this work, the L1 and L2 metrics, as well as their normalized versions, are considered with respect to the suitable extraction technique. The L1 metric, also known as the Manhattan distance, is calculated by summing the absolute differences between the feature vectors, whereas the L2 metric, known as the Euclidean distance, is calculated by taking the root of the summed squared differences between the feature vectors. The normalized Euclidean and Manhattan metrics are computed by dividing each feature difference by the standard deviation of that particular feature over the entire database. The four distance metrics are given below:

Euclidean = √( Σ_{k=1}^{n} (x_ik − x_jk)² )    (5)

Manhattan = Σ_{k=1}^{n} |x_ik − x_jk|    (6)

Normalized Euclidean = √( Σ_{k=1}^{n} ((x_ik − x_jk) / σ_k)² )    (7)

Normalized Manhattan = Σ_{k=1}^{n} |x_ik − x_jk| / σ_k    (8)
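The four metrics in (5)–(8) translate directly into code; the example vectors and per-feature standard deviations below are made up for illustration:

```python
import numpy as np

def euclidean(x, y):
    """L2 metric, eq. (5): root of summed squared differences."""
    return np.sqrt(((x - y) ** 2).sum())

def manhattan(x, y):
    """L1 metric, eq. (6): sum of absolute differences."""
    return np.abs(x - y).sum()

def normalized_euclidean(x, y, sigma):
    """Eq. (7): differences divided by the per-feature std over the database."""
    return np.sqrt((((x - y) / sigma) ** 2).sum())

def normalized_manhattan(x, y, sigma):
    """Eq. (8): absolute differences divided by the per-feature std."""
    return (np.abs(x - y) / sigma).sum()

x = np.array([3.0, 0.0])
y = np.array([0.0, 4.0])
sigma = np.array([1.0, 2.0])  # hypothetical per-feature standard deviations
d_e, d_m = euclidean(x, y), manhattan(x, y)
d_ne, d_nm = normalized_euclidean(x, y, sigma), normalized_manhattan(x, y, sigma)
```

In a real run, sigma would be computed once per feature vector database during the offline stage.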

where σ_k is the standard deviation of the kth feature over the feature vector database.

After the performance of each individual technique is obtained, the best technique for each of the intensity, texture and shape features is chosen for further experiments. The selected techniques are combined to see whether the retrieval performance can be further improved. This is achieved by adding up the dissimilarity measures of the combined techniques without affecting the relative distances between the query image and the database images for each technique.

V. RESULTS AND DISCUSSIONS

In the initial setup of the experiment, eight images were selected (two from each class) to test all the techniques with all distance metrics, in order to find the most suitable metric for each technique. The results are summarized in Table IV. It was found that different feature extraction techniques perform differently under each distance metric. The Gabor transform shows the best results using the normalized Manhattan metric, the Discrete Wavelet Frame performs best using the normalized Euclidean metric, the Fourier descriptor achieves high accuracy using the Manhattan metric, while the Hu moment invariants, gray level histogram and gray level coherence vector show high accuracy when using the Euclidean distance metric.

TABLE IV
RETRIEVAL ACCURACY FOR ALL FEATURE EXTRACTION TECHNIQUES TESTED USING ALL DISTANCE METRICS

Average retrieval accuracy of 8 query images for TOP 50 (%)

  Technique       E       M       NE      NM
  Gabor           58.25   58      59.25   59.5
  DWF             43.25   44.5    67.75   66.25
  Hu moment       62      55.75   51      45.25
  Fourier Desc.   89.75   91.75   90.75   88
  GLH             71.25   70.75   69.25   71.5
  GLCV            74      73.25   25      25

E = Euclidean, M = Manhattan, NE = Normalized Euclidean, NM = Normalized Manhattan

To evaluate the performance of each feature extraction technique, all 3,032 CT brain images are used as queries one by one, to check whether similar images from the same patient and class are retrieved successfully. It is easier to analyse the similarity per patient instead of over all images in the database, because the total number of images to be considered is then smaller. This operation involves the hybrid-based image retrieval from our previous work in [12], where the Patient ID is used as the input text query and combined with CBIR. As an example, Patient14 (ID 156027) has 25 scans: 6 of them belong to Class 1, 5 to Class 2, 10 to Class 3 and 4 to Class 4. The first image from Class 1 is selected as the query image, the keyword '156027' is used with the field 'Patient ID', and the system retrieves all 25 images of Patient14 in order of increasing distance. Perfect retrieval for this query image would be the retrieval of the other five images from Class 1 (excluding the query image itself) within the top 5 ranked images, followed by images from the other classes. The average recognition rate is used to evaluate the retrieval accuracy, calculated as shown in (9): if there are N images in a class for a particular patient, the average recognition rate is the fraction of images from the same class within the top N retrieved images.

Average recognition rate = (No. of images found from the same class within top N retrieved images) / (No. of images per class, N)    (9)

The retrieval process is performed on all CT brain images from the 95 patients in the image database, using the six feature extraction techniques each with its most suitable distance metric as discussed previously. Table V summarizes the retrieval accuracy for each class for all texture and intensity techniques, and Table VI for the two shape techniques. From Table V, the recognition rate of all techniques for Class 1 and Class 3, and to some extent Class 4, is satisfactory, but not for Class 2. The reason is that the classification was done visually, based on human vision, and some ambiguity is present: images from Class 2 can also be classified as Class 1 or Class 3. This affects the overall accuracy for Class 2 images. Overall, the average recognition rate per patient is above 70% for the texture and intensity extraction techniques.

From Table VI, the accuracy of the shape classification for both techniques varies across classes, with retrieval for Class 3 recording the highest accuracy. However, the accuracy is substantially lower than for the four techniques in Table V, because shape features are represented based on the contour of the segmented object in the image and depend heavily on the segmentation accuracy itself. This problem can be addressed with better segmentation and shape extraction techniques to distinguish images of the human brain in CT scans.

TABLE V
PERCENTAGE OF RETRIEVAL ACCURACY FOR TEXTURE AND INTENSITY TECHNIQUES

  Technique   % Average for 95 patients per class            % Average
              Class 1    Class 2    Class 3    Class 4       per patient
  Gabor       78.45      59.48      84.43      76.78         74.51
  DWF         80.72      64.91      84.44      78.98         77.05
  GLH         86.98      55.31      88.26      60.88         72.02
  GLCV        85.48      56.11      87.49      62.84         72.21

TABLE VI
PERCENTAGE OF RETRIEVAL ACCURACY FOR SHAPE TECHNIQUES

  Technique            % Average for 95 patients per class            % Average
                       Class 1    Class 2    Class 3    Class 4       per patient
  Hu moment            52.75      43.14      67.1       32.22         48.53
  Fourier Descriptor   47.55      67.05      75.35      74.28         67.39
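The per-query average recognition rate of equation (9) is a straightforward computation; the ranked class labels below are invented for illustration:

```python
def average_recognition_rate(retrieved_classes, query_class, n_per_class):
    """Equation (9) sketch: fraction of the top-N retrieved images
    (N = number of images in the query's class) that share the query's
    class. `retrieved_classes` is the ranked list of class labels."""
    top = retrieved_classes[:n_per_class]
    hits = sum(1 for c in top if c == query_class)
    return hits / n_per_class

# hypothetical ranked retrieval: 3 of the top 4 share the query's class
rate = average_recognition_rate([1, 1, 3, 1, 2, 1], query_class=1, n_per_class=4)
```

The per-patient figure reported in Tables V and VI would then be the mean of this rate over all of a patient's queries.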


Another observation is that the average recognition rate varies from patient to patient, depending on how difficult it is to visually classify the images into a particular class. Certain patients record a low average recognition rate because some of their images can be visually classified into two classes, which affects the recognition measurement. The average recognition rate for all 95 patients using all techniques is presented in Fig. 5.

The experiments were conducted using Matlab 7.3 on an Intel Core Duo 2.0 GHz processor with 1 GB of memory. The average time taken for each technique to complete the retrieval process is summarized in Table VII. For the texture element, the average time recorded for both techniques is the same, but in terms of retrieval accuracy, DWF gives better results. For gray level intensity, the histogram technique performs the retrieval much faster than the coherence vector technique, even though both yield similar retrieval accuracy. Between the two tested shape features, the Hu moment technique can execute the retrieval up to three times faster than the Fourier Descriptor (FD); however, given the poor retrieval accuracy of the Hu moments, FD is chosen for the further experiments on combining feature extraction techniques.

TABLE VII
AVERAGE RETRIEVAL TIME FOR EACH TECHNIQUE

  Technique                     Average time taken
  Gabor Transform               5s – 6s
  Discrete Wavelet Frame        5s – 6s
  Gray Level Histogram          11s – 12s
  Gray Level Coherence Vector   19s – 20s
  Hu moment                     3s – 4s
  Fourier Descriptor            10s – 11s

Fig. 5. Average recall rate for 95 patients

Since the combination of feature extraction techniques involves summing the dissimilarity measures of the techniques, the distance for any one technique cannot be too dominant compared to the others, so a small modification is needed. For the gray level histogram (GLH), the distances are very large, ranging from 10^7 up to 10^9, because the feature vector consists of pixel counts per bin; to normalize the GLH features, the number of pixels in each bin is divided by the total number of pixels over all bins. For the FD technique, it was found that the Manhattan distance metric produces very small measurements, so the normalized Euclidean metric is used as a replacement, since it gives the second highest accuracy in Table IV. There is no change for the DWF technique.

The results for the combinations of techniques are shown in Table VIII. From the table, it can be seen that the combination of the DWF and FD techniques gives the highest average retrieval rate. The pattern of accuracy per class matches that of Table V, where Class 1 and Class 3 give better results, as does Class 4, but Class 2 is somewhat lower. Combining DWF with either GLH or FD performs the retrieval faster than the other combinations, and obviously more time is needed to compute the combination of all three techniques. It is also interesting to note that combining all three techniques does not further improve the retrieval accuracy; in fact, it performs worse than all of the two-technique combinations. This shows that we cannot simply bundle together many feature extraction methods in order to obtain higher accuracy.

TABLE VIII
PERCENTAGE OF RETRIEVAL ACCURACY FOR MULTI-TECHNIQUES

  Techniques       % Average for 95 patients per class           % Average      Average
                   Class 1    Class 2    Class 3    Class 4      per patient    time taken
  DWF + GLH        88.04      57.93      88.19      66.37        77.81          10s – 11s
  DWF + FD         82.03      64.94      84.46      78.98        80.60          11s – 12s
  GLH + FD         86.97      55.31      88.26      60.89        75.34          15s – 16s
  DWF + GLH + FD   69.17      61.37      87.96      72.93        74.01          18s – 19s

To ease the work of testing and analysing the images, a graphical user interface (GUI) was developed in the Matlab environment. It consists of two main panels: the Query Panel (left side) and the Result Panel (right side). The system is intended as a flexible hybrid retrieval system, so in the Query Panel the type of retrieval can be selected: content-based (CBIR), text-based (TBIR) or both (Hybrid). The accuracy of the system can be inspected visually in the Result Panel. Fig. 6 shows an example of the retrieval results obtained with the DWF texture extraction technique; as can be seen, visually similar scans are retrieved accordingly.
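Combining techniques by summing their dissimilarity measures, as described earlier for the multi-technique experiments, can be sketched as follows; the feature names, toy vectors and the use of the L1 metric for both techniques are illustrative assumptions, not the paper's exact setup:

```python
import numpy as np

def combined_distance(query_fvs, db_fvs, metrics):
    """Sum the per-technique dissimilarities between the query and each
    database image; the resulting totals rank the database images. Each
    technique is assumed to use its own (suitably normalized) metric."""
    total = None
    for name, metric in metrics.items():
        d = np.array([metric(query_fvs[name], fv) for fv in db_fvs[name]])
        total = d if total is None else total + d
    return total

l1 = lambda a, b: np.abs(a - b).sum()  # stand-in metric for this sketch

# hypothetical per-technique feature vectors for a query and two DB images
query = {"dwf": np.array([1.0, 1.0]), "fd": np.array([0.0])}
db = {"dwf": [np.array([1.0, 1.0]), np.array([2.0, 2.0])],
      "fd":  [np.array([0.5]), np.array([0.0])]}
dist = combined_distance(query, db, {"dwf": l1, "fd": l1})
```

As the paper notes, the per-technique distances must be on comparable scales before summation (e.g. GLH counts divided by the total pixel count), otherwise one technique dominates the combined ranking.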


Fig. 6. GUI for image retrieval system

VI. CONCLUSIONS

An efficient content-based image retrieval system requires an excellent content-based technique to effectively use most of the information in the images. In this paper, a study has been carried out on six feature extraction techniques covering the texture, intensity and shape image elements, to obtain a detailed comparison of the retrieval accuracy of each technique on medical images. The experiment was performed on 3,032 CT human brain images from 95 patients, visually divided into four classes, with each image used as a query in order to obtain the average recognition rate. The technique with the highest accuracy in each category was then combined with the others. The reported results show that the best texture extraction technique is the Discrete Wavelet Frame (DWF), the best intensity technique is the Gray Level Histogram (GLH) and the best shape feature is the Fourier Descriptor (FD). Among the combinations of techniques, the DWF and FD combination gives the best result. These techniques can be used in medical applications to provide a reliable image retrieval system. Our current work uses these promising techniques to retrieve medical images based on regions of interest instead of the whole image: a block-based algorithm has been developed based on a simple gray level histogram with an image partitioning algorithm, and we are integrating the block-based method with the DWF, GLH and FD techniques.

ACKNOWLEDGEMENT

The authors would like to acknowledge Putrajaya Hospital, Malaysia, for contributing the medical images used in this study. This work is funded by the Ministry of Science, Technology and Innovation Malaysia under the Science Fund grant (ID: 01-02-01-SF0014).

REFERENCES

[1] B. S. Manjunath and W. Y. Ma, "Texture features for browsing and retrieval of image data," IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 18, pp. 837-842, Aug. 1996.
[2] M. Unser, "Texture classification and segmentation using wavelet frames," IEEE Trans. on Image Processing, vol. 4, pp. 1549-1560, Nov. 1995.
[3] S. Liapis and G. Tziritas, "Color and Texture Image Retrieval Using Chromaticity Histograms and Wavelet Frames," IEEE Trans. on Multimedia, vol. 6, no. 5, pp. 676-686, Oct. 2004.
[4] V. N. Gudivada and V. V. Raghavan, "Content based image retrieval systems," IEEE Computer, vol. 28, Sept. 1995.
[5] G. Pass, R. Zabih, and J. Miller, "Comparing Images Using Colour Coherence Vectors," Proc. Fourth ACM International Multimedia Conference, Boston, MA, pp. 65-74, 1996.
[6] A. Coman, "Exploring the Colour Histogram's Dataspace for Content-based Image Retrieval," Technical Report TR 03-01, Univ. of Alberta, Canada, Jan. 2003.
[7] L. Zheng, A. W. Wetzel, J. Gilbertson and M. J. Becich, "Design and Analysis of a Content-Based Pathology Image Retrieval System," IEEE Trans. on Info. Tech. in Biomed., vol. 7, no. 4, pp. 245-255, Dec. 2003.
[8] N. Nikolaou and N. Papamarkos, "Image Retrieval Using a Fractal Signature Extraction Technique," IEEE Trans. on DSP, pp. 1215-1218, 2002.
[9] S.-C. Chen, S. H. Rubin, M.-L. Shyu and C. Zhang, "A Dynamic User Concept Pattern Learning Framework for Content-Based Image Retrieval," IEEE Trans. on Systems, Man and Cybernetics, vol. 36, no. 6, pp. 772-783, Nov. 2006.
[10] X. Qiang and Y. Baozong, "A New Framework of CBIR Based on KDD," 6th ICSP'02 Proc., vol. 2, pp. 973-976, Aug. 2002.
[11] M. K. Hu, "Visual Pattern Recognition by Moment Invariants," IRE Trans. on Info. Theory, vol. 8, 1962.
[12] W. S. Halimatul Munirah W. Ahmad, M. Faizal A. Fauzi, W. M. Diyana W. Zaki and R. Logeswaran, "Hybrid Image Retrieval System Using Text and Gabor Transform for CT Brain Images," MMU International Symposium on Info. and Comm. Tech. 2007 (M2USIC'2007), Selangor, Malaysia, TS2B-3, Nov. 2007.
[13] M. M. Rahman, P. Bhattacharya and B. C. Desai, "A Framework for Medical Image Retrieval Using Machine Learning and Statistical Similarity Matching Techniques with Relevance Feedback," IEEE Trans. on Info. Tech. in Biomed., vol. 11, no. 1, pp. 58-69, Jan. 2007.

