Journal Of Computer Science And Information Security

IJCSIS Vol. 5, No. 1, September 2009 ISSN 1947-5500

International Journal of Computer Science & Information Security

© IJCSIS PUBLICATION 2009

IJCSIS Editorial: Message from the Managing Editor. The editorial policy of the International Journal of Computer Science and Information Security (IJCSIS) is to publish top-level research material in all fields of computer science and related areas such as mobile and wireless networks, multimedia communication and systems, and network security. With an open-access policy, IJCSIS is now an established journal and will continue to grow as a venue for state-of-the-art knowledge dissemination. The journal has proven to be on the cutting edge of the latest research findings, with growing popularity.

The ultimate success of this journal depends on the quality of the articles that we include in our issues, as we endeavor to live up to our mandate of providing a publication that bridges applied computer science from industry practitioners and the basic research/academic community.

I also want to thank the reviewers for their valuable service. We have selected some excellent papers, with an acceptance rate of 35%, and I hope that you enjoy the IJCSIS Volume 5, No. 1, September 2009 issue. Available at http://sites.google.com/site/ijcsis/. IJCSIS Vol. 5, No. 1, September 2009 Edition, ISSN 1947-5500. © IJCSIS 2009, USA.


IJCSIS EDITORIAL BOARD

Dr. Gregorio Martinez Perez, Associate Professor (Professor Titular de Universidad), University of Murcia (UMU), Spain
Dr. M. Emre Celebi, Assistant Professor, Department of Computer Science, Louisiana State University in Shreveport, USA
Dr. Yong Li, School of Electronic and Information Engineering, Beijing Jiaotong University, P.R. China
Dr. Sanjay Jasola, Professor and Dean, School of Information and Communication Technology, Gautam Buddha University, India
Dr. Riktesh Srivastava, Assistant Professor, Information Systems, Skyline University College, University City of Sharjah, Sharjah, PO 1797, UAE
Dr. Siddhivinayak Kulkarni, University of Ballarat, Ballarat, Victoria, Australia
Professor (Dr.) Mokhtar Beldjehem, Sainte-Anne University, Halifax, NS, Canada

TABLE OF CONTENTS

1. A Method for Extraction and Recognition of Isolated License Plate Characters (pp. 001-010)
Yon-Ping Chen and Tien-Der Yeh, Dept. of Electrical and Control Engineering, National Chiao-Tung University, Hsinchu, Taiwan

2. Personal Information Databases (pp. 011-020)
Sabah S. Al-Fedaghi, Computer Engineering Department, Kuwait University; Bernhard Thalheim, Computer Science Institute, Kiel University, Germany

3. Improving Effectiveness of E-Learning in Maintenance Using Interactive-3D (pp. 021-024)
Lt. Dr. S Santhosh Baboo, Reader, P.G. & Research Dept of Computer Science, D.G. Vaishnav College, Chennai 106; Nikhil Lobo, Research Scholar, Bharathiar University

4. An Empirical Comparative Study of Checklist-based and Ad Hoc Code Reading Techniques in a Distributed Groupware Environment (pp. 025-035)
Olalekan S. Akinola and Adenike O. Osofisan, Department of Computer Science, University of Ibadan, Nigeria

5. Robustness of the Digital Image Watermarking Techniques against Brightness and Rotation Attack (pp. 036-040)
Harsh K Verma, Abhishek Narain Singh, and Raman Kumar, Department of Computer Science and Engineering, Dr B R Ambedkar National Institute of Technology, Jalandhar, India

6. ODMRP with Quality of Service and Local Recovery with Security Support (pp. 041-045)
Farzane Kabudvand, Computer Engineering Department, Azad University, Zanjan, Iran

7. A Secure and Fault-tolerant Framework for Mobile IPv6 based Networks (pp. 046-055)
Rathi S, Sr. Lecturer, Dept. of Computer Science and Engineering, Government College of Technology, Coimbatore, Tamilnadu, India; Thanuskodi K, Principal, Akshaya College of Engineering, Coimbatore, Tamilnadu, India

8. A New Generic Taxonomy on Hybrid Malware Detection Technique (pp. 056-061)
Robiah Y, Siti Rahayu S., Mohd Zaki M, Shahrin S., Faizal M. A., Marliza R., Faculty of Information Technology and Communication, Universiti Teknikal Malaysia Melaka, Durian Tunggal, Melaka, Malaysia

9. Hybrid Intrusion Detection and Prediction multiAgent System, HIDPAS (pp. 062-071)
Farah Jemili, Mohamed Ben Ahmed, Montaceur Zaghdoud, RIADI Laboratory, Manouba University, Manouba 2010, Tunisia

10. An Algorithm for Mining Multidimensional Fuzzy Association Rules (pp. 072-076)
Neelu Khare, Department of Computer Applications, MANIT, Bhopal (M.P.); Neeru Adlakha, Department of Applied Mathematics, SVNIT, Surat (Gujarat); K. R. Pardasani, Department of Computer Applications, MANIT, Bhopal (M.P.)

11. Analysis, Design and Simulation of a New System for Internet Multimedia Transmission Guarantee (pp. 077-086)

O. Said, S. Bahgat, M. Ghoniemy, and Y. Elawdy, Computer Science Department, Faculty of Computers and Information Systems, Taif University, Taif, KSA

12. Hierarchical Approach for Key Management in Mobile Ad hoc Networks (pp. 087-095)
Renuka A., Dept. of Computer Science and Engg., Manipal Institute of Technology, Manipal-576104, India; Dr. K. C. Shet, Dept. of Computer Engg., National Institute of Technology Karnataka, Surathkal, P.O. Srinivasanagar-575025

13. An Analysis of Energy Consumption on ACK+Rate Packet in Rate Based Transport Protocol (pp. 096-102)
P. Ganeshkumar, Department of IT, and K. Thyagarajah, Principal, PSNA College of Engineering & Technology, Dindigul, TN, India, 624622

14. Prediction of Zoonosis Incidence in Human using Seasonal Auto Regressive Integrated Moving Average (SARIMA) (pp. 103-110)
Adhistya Erna Permanasari, Dayang Rohaya Awang Rambli, and Dhanapal Durai Dominic, Computer and Information Science Dept., Universiti Teknologi PETRONAS, Bandar Seri Iskandar, 31750 Tronoh, Perak, Malaysia

15. Performance Evaluation of WiMAX Physical Layer under Adaptive Modulation Techniques and Communication Channels (pp. 111-114)
Md. Ashraful Islam, Riaz Uddin Mondal (corresponding author, Assistant Professor), and Md. Zahid Hasan, Dept. of Information & Communication Engineering, University of Rajshahi, Rajshahi, Bangladesh

16. A Survey of Biometric Keystroke Dynamics: Approaches, Security and Challenges (pp. 115-119)
Mrs. D. Shanmugapriya, Dept. of Information Technology, and Dr. G. Padmavathi, Dept. of Computer Science, Avinashilingam University for Women, Coimbatore, Tamilnadu, India

17. Agent's Multiple Architectural Capabilities: A Critical Review (pp. 120-127)
Ritu Sindhu, Department of CSE, World Institute of Technology, Gurgaon, India; Abdul Wahid, Department of CS, Ajay Kumar Garg Engineering College, Ghaziabad, India; Prof. G. N. Purohit, Dean, Banasthali University, Rajasthan, India

18. Prefetching of VoD Programs Based On ART1 Requesting Clustering (pp. 128-134)
P Jayarekha, Research Scholar, Dr. MGR University, Dept. of ISE, BMSCE, Bangalore, and Member, Multimedia Research Group, Research Centre, DSI, Bangalore; Dr. T R GopalaKrishnan Nair, Director, Research and Industry Incubation Centre, DSI, Bangalore

19. Prefix based Chaining Scheme for Streaming Popular Videos using Proxy Servers in VoD (pp. 135-143)
M Dakshayini, Research Scholar, Dr. MGR University, working with Dept. of ISE, BMSCE, and Member, Multimedia Research Group, Research Centre, DSI, Bangalore, India; Dr. T R GopalaKrishnan Nair, Director, Research and Industry Incubation Centre, DSI, Bangalore, India

20. Convergence Time Evaluation of Algorithms in MANETs (pp. 144-149)

Narmada Sambaturu, Krittaya Chunhaviriyakul, and Annapurna P. Patil, Department of Computer Science and Engineering, M.S. Ramaiah Institute of Technology, Bangalore-54, India

21. RASDP: A Resource-Aware Service Discovery Protocol for Mobile Ad Hoc Networks (pp. 150-159)
Abbas Asosheh and Gholam Abbas Angouti, Faculty of Engineering, Tarbiat Modares University, Tehran, Iran

22. Tool Identification for Learning Object Creation (pp. 160-167)
Sonal Chawla and Dr. R. K. Singla, Dept. of Computer Science and Applications, Panjab University, Chandigarh, India

--------------------

(IJCSIS) International Journal of Computer Science and Information Security, Vol. 5, No.1, 2009

A Method for Extraction and Recognition of Isolated License Plate Characters

YON-PING CHEN
Dept. of Electrical and Control Engineering, National Chiao-Tung University, Hsinchu, Taiwan
Email: [email protected]

TIEN-DER YEH
Dept. of Electrical and Control Engineering, National Chiao-Tung University, Hsinchu, Taiwan
Email: [email protected]

Abstract—A method to extract and recognize isolated characters in license plates is proposed. In the extraction stage, the proposed method detects isolated characters by using the Difference-of-Gaussian (DOG) function. The DOG function, similar to the Laplacian-of-Gaussian function, has been proven to produce the most stable image features compared to a range of other possible image functions. The candidate characters are extracted by performing connected-component analysis on DOG images of different scales. In the recognition stage, a novel feature vector named the accumulated gradient projection vector (AGPV) is used to compare the candidate characters with the standard ones. The AGPV is calculated by first projecting pixels of similar gradient orientations onto specific axes, and then accumulating the projected gradient magnitudes along each axis. In the experiments, the AGPVs are shown to be invariant to image scaling and rotation, and robust to noise and illumination change.

Keywords—accumulated gradient; gradient projection; isolated character; character extraction; character recognition

I. INTRODUCTION

License plate recognition, or LPR in short, has been a popular research topic for several decades [1][2][3]. An LPR system is able to recognize vehicles automatically and is therefore useful for many applications such as portal control, traffic monitoring, and stolen car detection. Up to now, LPR systems still face problems concerning varying light conditions, image deformation, and processing time consumption [3]. Traditional methods for the recognition of license plate characters often include several stages. Stage one is the detection of possible areas where license plates may exist. Detecting license plates quickly and robustly is a big challenge, since images may contain far more content than just the expected plates. Stage two is segmentation, which divides the detected areas into several regions, each containing one character candidate. Stage three is normalization; some attributes of the character candidates, e.g., size or orientation, are converted to predefined values for later stages. Stage four is the recognition stage; the segmented characters can be recognized by technologies such as vector quantization [4] or neural networks [5][6]. Most works propose to recognize characters in binary form, finding thresholds [7] to depict the regions of interest in the detected areas. Normalization is not necessary for all recognition methods; those that need normalized characters require additional computation to normalize the character candidates before recognition. In this paper, the detection stage and segmentation stage are merged into an extraction stage, and normalization is unnecessary because the characters are recognized in an orientation- and size-invariant manner.

The motivations of this work originate from three limitations of traditional LPR systems. First, traditional methods use simple features such as gradient energy to detect the possible locations of license plates. However, such a method may lose some plate candidates, because the gradient energy may be suppressed by camera saturation or underexposure, which often occurs under extreme light conditions such as direct sunlight or shadow. Second, traditional detection methods often assume the license plate images are captured in a correct orientation, so that the gradients can be accumulated along a pre-defined direction and the license plates detected correctly. In real cases, license plates do not always keep the same orientation in the captured images; they can be rotated or slanted due to irregular roads, unfixed camera positions, or abnormal conditions of the cars. Third, it often happens that some characters in a license plate are blurred or corrupted, which may fail the LPR process in the detection or segmentation stage. This characteristic is dangerous in applications because one single character may cause the loss of a whole license plate. By contrast, people know the position of an unclear character because they see other characters located beside it; we try different strategies, e.g., changing head position or walking closer, to read the unclear character, and even guess it if it is still not distinguishable. This behavior is not achievable in a traditional LPR system due to its coarse-to-fine architecture. To retain a high detection rate of license plates under these limitations, this paper proposes a fine-to-coarse method which first finds isolated characters in the captured image. Once some characters of a license plate are found, the entire license plate can be detected around these characters. The method may consume more computation than the traditional coarse-to-fine method; however, it minimizes the probability of missing license plate candidates in the detection stage.

(This research was supported by a grant provided by the National Science Council, Taiwan, R.O.C. (NSC 98-2221-E-009-128).)

A challenge in achieving the fine-to-coarse method is the recognition of isolated characters, which presents several difficulties. First, it is difficult to extract the orientation of an isolated character. In traditional LPR systems [3], the orientations of characters can be determined by the baseline [3][8] of multiple characters; however, this method is not suitable for isolated characters. Second, the unfixed camera view angle often introduces large deformations in character shapes and stroke directions, which makes the detection and normalization processes difficult to apply. Third, unknown orientations and shapes exposed under unknown light conditions and environments become a bottleneck for correctly extracting and recognizing the characters.

The proposed scheme to extract and recognize license plate characters proceeds as follows. First, the candidates of characters are detected by the scale-space differences. Scale-space extrema have been proven stable against noise, illumination change, and 3D viewpoint change [9]-[14]; in this paper, the scale-space differences are approximated by difference-of-Gaussian functions as in [9]. Second, the pixels of positive (or negative) differences are gathered into groups by connected-component analysis and form candidates of characters. Third, on each group the proposed accumulated gradient projection method is applied to find the nature axes and associated accumulated gradient projection vectors (AGPVs). Finally, the AGPVs of each candidate are matched with those of standard characters to find the most similar standard character as the recognition result. The experimental results show the feasibility of the proposed method and its robustness to several image parameters such as noise and illumination change.

Figure 1. Process flow of the proposed method. (Extraction stage: calculate scale-space differences in the input image; group pixels of positive (or negative) differences; calculate gradient histogram of each group; find 1~6 nature axes; find nature AGPVs on each axis. Recognition stage: match with nature AGPVs in database to find possible matching characters; calculate augmented AGPV of each possible matching character; calculate total matching cost of each possible matching character; recognize by lowest matching cost.)

II. EXTRACTION OF ISOLATED CHARACTERS

Before extracting the isolated characters in an image, four assumptions are made for the proposed method:

1. The color (or intensity, for gray-scale images) of a character is monotonic, i.e., the character is composed of a single color without texture on it.

2. As in 1, the color of the background around the character is monotonic, too.

3. The color of the character is always different from that of the background.

4. Characters must be isolated, with no overlap, in the input image.

In this section, scale-space theory acts as the theoretical basis of the method to robustly extract the characters of interest in the captured image. Based on the theory, the Difference-of-Gaussian images of all different scales are iteratively calculated and grouped to produce the candidates of license plate characters.

A. Produce the Difference-of-Gaussian Images

Taking advantage of the scale-space theories [9]-[11], the extraction of characters becomes systematic and effective. In the first step, the detection of the characters is done by searching scale-space images of all the possible scales at which the characters may appear in the input image. As suggested by the authors in [12] and [13], the scale-space images in this work are generated by convolving the input image with Gaussian functions of different scales. The first task in getting the scale-space images is defining the Gaussian function. Two parameters are required when choosing Gaussian filters, the filter width λ and the smoothing factor σ; the two parameters are not fully independent, and the relationship between them is discussed below.

The range of the smoothing factor σ is determined from experiments: a good choice is from 1 to 16 for input images of up to 3M pixels. Two factors are relevant to the sampling frequency of σ: the resolution of the target characters and the computational resources (including allowed processing time). These two factors trade off against each other and are often weighed case by case. In this paper, we set the σ of each scale to double that of the previous scale for convenient computation, i.e., σ2=2σ1, σ3=2σ2, …, where σ1, σ2, σ3, … are the corresponding smoothing factors of the scales numbered 1, 2, 3, … As a result, the choice of smoothing factors in our case is σ1=1, σ2=2, σ3=4, σ4=8, and σ5=16. Considering the factors of noise and sampling frequency in the spatial domain, a larger σ is more stable for detecting characters of larger sizes.


Ideally the width λ of a Gaussian filter is infinite, while in a real system it is reasonable for it to be an integer, to match the design of digital filters. In addition, the integer cannot be large, due to limited computational resources, and only odd integers are chosen so that each output of the convolution can be aligned to the center pixel of the filter. The width λ is changed with the smoothing factor σ, which is the standard deviation of the Gaussian distribution. A smaller σ has a better response on edges but is more sensitive to noise. When σ is small, there is no need to define a large λ, because the filter decays to a very small value by the time it reaches the boundary. In this paper we propose to choose the two parameters satisfying the following inequality:

λ ≥ 7σ and λ = 2n + 1, n ∈ ℕ.   (1)

An efficient way to generate the smoothed images is sub-sampling. As explained above, the filter width is better chosen as λ ≥ 7σ, so the filter width grows to a large value when the smoothing factor grows. This leads to a considerable amount of computation if the filters are implemented at such lengths. To avoid expanding the filter width directly, we sub-sample the images with smoothing factors σ > 1, based on the fact that the information in an image decreases as its smoothing factor increases.

Because the image sizes vary with the level of sub-sampling, we store the smoothed images in a series of octaves according to their sizes. The images in an octave have one half the length and width of those in the previous octave. In each octave, two images are subtracted from each other to produce the desired Difference-of-Gaussian (DOG) image for later processing. The procedure for producing the Difference-of-Gaussian images is shown in Fig. 2. Let the length and width of the input image I(x,y) be L and W respectively. In the beginning, I(x,y) is convolved with the Gaussian filter G(x,y,σa) to generate the first smoothed image, I1(x,y), for the first octave. σa is the smoothing factor of the initial scale and is selected as 1 (σa=σ1) in our experiments. The smoothed image I1(x,y) is convolved with the Gaussian filter G(x,y,σb) to generate the second smoothed image I2(x,y), which is subtracted from I1(x,y) to generate the first DOG image D1(x,y) of the octave. I2(x,y) is also sub-sampled by every two pixels on each row and column to produce the image I2'(x,y) for the next octave. It is worth noting that an image sub-sampled from a source image has a smoothing factor equal to one half that of the source image. The length and width of image I2'(x,y) are L/2 and W/2, and its equivalent smoothing factor is (σa+σb)/2 relative to the initial scale. As σb is selected to be the same as the smoothing factor σa of the initial scale, the image I2'(x,y) therefore has the equivalent smoothing factor σ=σa and serves as the initial scale of the second octave. The image I2'(x,y) is convolved with G(x,y,σb) again to generate the third smoothed image, I3(x,y), which is subtracted from I2'(x,y) to produce the second DOG image D2(x,y). The same procedure is applied to the remaining octaves to generate the required smoothed images I4 and I5 and the Difference-of-Gaussian images D3 and D4.

Figure 2. The procedure to produce Difference-of-Gaussian images

B. Grouping of the Difference-of-Gaussian Images

To find the characters of interest in the DOG image, the next step is to apply connected-component analysis to connect pixels of positive (or negative) responses into groups. After connected-component analysis, all the groups are filtered by their sizes. There are expected sizes of characters for each octave, and a group is discarded if its size does not fall into the expected range. The most stable sizes for extracting general characters in each octave range from 32×32 to 64×64. Characters smaller than 32×32 are easily disturbed by noise, with undesirable outcomes; characters larger than 64×64 can be extracted in octaves of larger scales.
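To make the octave construction concrete, the following Python sketch (our illustration, not the authors' code; it assumes NumPy and SciPy, and all names are ours) builds one DOG image per octave by repeated smoothing and 2× sub-sampling, then groups same-sign responses and filters them by the 32×32 to 64×64 size range:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, label

def dog_octaves(image, sigma_a=1.0, sigma_b=1.0, num_octaves=4):
    """One DOG image per octave; sub-sampling halves the image between octaves."""
    dogs = []
    current = gaussian_filter(image.astype(np.float64), sigma_a)  # initial scale I1
    for _ in range(num_octaves):
        smoothed = gaussian_filter(current, sigma_b)  # further smoothing
        dogs.append(current - smoothed)               # DOG image of this octave
        current = smoothed[::2, ::2]                  # sub-sample by 2 for next octave
    return dogs

def candidate_groups(dog, min_side=32, max_side=64):
    """Connected components of positive (or negative) DOG responses, kept only
    if their bounding box falls in the expected character size range."""
    candidates = []
    for mask in (dog > 0, dog < 0):
        labels, count = label(mask)
        for k in range(1, count + 1):
            ys, xs = np.nonzero(labels == k)
            height, width = np.ptp(ys) + 1, np.ptp(xs) + 1
            if min_side <= height <= max_side and min_side <= width <= max_side:
                candidates.append((ys, xs))
    return candidates
```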

III. THE ACCUMULATED GRADIENT PROJECTION VECTOR (AGPV) METHOD

After extraction of the candidate characters, a novel method named the accumulated gradient projection vector method, or AGPV method in short, is applied to recognize the extracted candidate characters. There are four stages to recognizing a character using the AGPV method. First, determine the axes, including the nature axes and the augmented axes. Second, calculate the AGPVs based on these axes. Third, normalize the AGPVs for comparison with the standard ones. Fourth, match with standard AGPVs to validate the recognition result. The procedure is explained in detail in the following sections.

A. Determine Axes

It is important to introduce the axes before discussing the AGPV method. An axis of a character is a specific orientation onto which the gradients of grouped pixels are projected and accumulated to form the desired feature vector.


An axis is represented by a line that has a specific orientation and passes through the center-of-gravity point of the pixel group. The axes of a character can be separated into two classes, named nature axes and augmented axes, which differ in characteristics and usage and are described below.

1) Build up Orientation Histogram

Once an input image is clustered into one or more groups of pixels, the next step is to build the corresponding orientation histograms. The orientation histograms are formed from the gradient orientations of the grouped pixels. Let γ(x,y) be the intensity value of sample pixel (x,y) of an image group I; the gradients on the x-axis and y-axis are, respectively,

∇X(x,y) = γ(x+1,y−1) − γ(x−1,y−1) + 2(γ(x+1,y) − γ(x−1,y)) + γ(x+1,y+1) − γ(x−1,y+1),
∇Y(x,y) = γ(x−1,y+1) − γ(x−1,y−1) + 2(γ(x,y+1) − γ(x,y−1)) + γ(x+1,y+1) − γ(x+1,y−1).   (2)

The gradient magnitude, m(x,y), and orientation, θ(x,y), of this pixel are computed by

m(x,y) = √((∇X(x,y))² + (∇Y(x,y))²),  θ(x,y) = tan⁻¹(∇Y(x,y)/∇X(x,y)).   (3)

By assigning a resolution BINhis to the orientation histogram, the gradients are accumulated into BINhis bins, and the angle resolution is REShis = 360/BINhis. BINhis is chosen as 64 in the experiments, so the angle resolution REShis is 5.625 degrees. Each sample added to the histogram is weighted by its gradient magnitude and accumulated into the two nearest bins by linear interpolation. Besides the histogram accumulation, the gradient of each sample is accumulated into a variable GEhis, which stands for the total gradient energy of the histogram.

2) Determine the Nature Axes

The next step toward recognition is to find the corresponding nature axes based on the built orientation histogram. The word "nature" is used because the axes always exist "naturally," regardless of most environment and camera factors that degrade the recognition rate. The nature axes have several properties helpful for recognition. First, they have high gradient energy in specific orientations and are therefore easily detectable in the input image. Second, the angle differences among the nature axes are invariant to image scaling and rotation; this means they can be used as references to correct the unknown rotation and scaling factors of the input image. Third, the directions of the nature axes are robust within a range of focus and illumination differences. Fourth, although some factors, such as a different camera view angle, may cause character deformation and change the angle relationships among the nature axes, the detected nature axes are still useful to filter out dissimilar candidates and narrow down the range of recognition.

Let the function H(a) denote the histogram magnitude at angle a. Find the center of the k-th peak, pk, of the histogram, defined by satisfying H(pk) > H(pk−1) and H(pk) > H(pk+1). A peak represents a specific orientation in the character image. Besides the center, find the boundaries of the peak, start angle sk and end angle ek, within a threshold angle distance ath, i.e.,

sk = a, H(a) ≤ H(b), ∀b ∈ (pk − ath, pk),   (4)
ek = a, H(a) ≤ H(b), ∀b ∈ (pk, pk + ath).   (5)

The threshold ath is used to guarantee that the boundaries of a peak stay near its center and is set to 22.5 degrees in the experiments. The reason for choosing a ±22.5-degree threshold is that it segments a 360-degree circle into 8 orientations, similar to how human eyes often see a circle in 8 octants.

Once the start angle and end angle of a peak are determined, define the energy function of the k-th peak as

E(k) = Σ_{a=sk}^{ek} H(a),

which stands for the gradient energy of a peak. In addition, an outstanding energy function D(k) is defined for each peak:

D(k) = E(k) − (H(sk) + H(ek))(ek − sk)/2.   (6)

The outstanding energy neglects the energy contributed by neighboring peaks and is more meaningful than E(k) for representing the distinctiveness of a peak. Peaks with small outstanding energy are not considered as nature axes, because they do not stand out from the neighboring peaks and may not be detectable in new images. In the experiments, there are different strategies to threshold the outstanding energy when calculating standard AGPVs and test AGPVs. When calculating standard AGPVs, we select one grouped image as the standard character image for each character and assign it to be the standard for recognition. The mission of this task is to find stable peaks in the standard character image; therefore, a higher threshold, GEhis/32, is applied, and a peak whose outstanding energy is higher than the threshold is considered a nature axis of the standard character image. When calculating test AGPVs, the histogram may contain many unexpected factors such as noise, focus error, or bad lighting conditions, so the task changes to finding one or more matched candidates for further recognition. Therefore, a lower threshold, GEhis/64, is used to filter out dissimilar candidates by outstanding energy. After threshold checking, the peaks whose outstanding energy is higher than the threshold are called the nature peaks of the character image, and the corresponding angles are called the nature axes. Typical license plate characters are found to have two to six nature axes by the procedures above.
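A minimal sketch of the histogram construction and peak selection follows, under the simplifying assumption that peak boundaries are taken at a fixed ±22.5° (4 bins) from the center rather than at the local minima of (4)-(5); all names are illustrative:

```python
import numpy as np

BIN_HIS = 64  # histogram resolution: 64 bins -> 5.625 degrees per bin

def orientation_histogram(mag, theta):
    """Accumulate gradient magnitudes into BIN_HIS bins with linear interpolation.
    mag, theta: 1-D arrays of gradient magnitude and orientation (radians)."""
    hist = np.zeros(BIN_HIS)
    pos = (theta % (2 * np.pi)) / (2 * np.pi) * BIN_HIS
    lo = np.floor(pos).astype(int) % BIN_HIS
    hi = (lo + 1) % BIN_HIS
    frac = pos - np.floor(pos)
    np.add.at(hist, lo, mag * (1 - frac))  # weight shared between the
    np.add.at(hist, hi, mag * frac)        # two nearest bins
    return hist, float(mag.sum())          # histogram and total energy GE_his

def nature_axes(hist, ge_his, standard=True, a_th_bins=4):
    """Peaks whose outstanding energy D(k) exceeds GE_his/32 (standard AGPVs)
    or GE_his/64 (test AGPVs)."""
    thresh = ge_his / (32 if standard else 64)
    n, axes = len(hist), []
    for p in range(n):
        if hist[p] > hist[(p - 1) % n] and hist[p] > hist[(p + 1) % n]:
            s, e = (p - a_th_bins) % n, (p + a_th_bins) % n  # +-22.5 deg boundaries
            idx = [(p + d) % n for d in range(-a_th_bins, a_th_bins + 1)]
            energy = hist[idx].sum()                         # E(k)
            outstanding = energy - (hist[s] + hist[e]) * a_th_bins  # D(k)
            if outstanding > thresh:
                axes.append(p * 360.0 / n)                   # axis angle in degrees
    return axes
```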


Fig. 3 is an example showing the nature axes. Fig. 3(a) is the source image, where intensity ranges from 0 (black) to 255 (white). Fig. 3(b) overlays the source image with the detected nature axes shown by red arrows. Fig. 3(c) is the corresponding orientation histogram, accumulated from the pixel gradient magnitudes in Fig. 3(a). Six peaks can be seen in the histogram, marked as A, B, C, D, E, and F respectively, which correspond to the six red arrows in Fig. 3(b).

Figure 3. (a) input image (b) the nature axes (c) orientation histogram

3) Determine the Augmented Axes

Augmented axes are defined, as augmentations to the nature axes, to be directions on which the generated feature vectors (AGPVs) are unique or special in representing the source character. Unlike the nature axes, which possess strong gradient energy in specific orientations, augmented axes do not have this property, so they may not be observable from the orientation histogram.

Some characters, such as the one in Fig. 4, have only a few (one or two) apparent nature axes. Therefore, it is necessary to generate enough AGPVs on augmented axes for the recognition process. Experiments tell us that a better choice for the number of AGPVs is four to six in order to recognize a character at a high success rate. The AGPVs can be any mix of nature-axis AGPVs and augmented-axis AGPVs.

The augmented axes can be defined by character shapes or by fixed directions. In our experiments, only four fixed directions, shown as the four arrows in Fig. 4(b), are defined as augmented axes for all 36 characters. If any one of the four directions already exists in the nature axes, it is not declared again in the augmented axes.

Figure 4. (a) a character that has only one nature axis (b) the nature axis in a red arrow and three augmented axes in blue arrows (c) orientation histogram

B. Calculate AGPVs

Once the axes of a character are found, the next step is to calculate the accumulated gradient projection vectors (AGPVs) based on these axes. On each axis of corresponding peak pk, the gradient magnitudes of the pixels whose gradient orientations fall inside the range sk < θ(x,y) < ek are projected and accumulated. The axis can be any one of the nature axes or the augmented axes.

1) Projection principles

The projection axis, ηφ, is chosen from either the nature axes or the augmented axes, with positive direction φ. Let (xcog, ycog) be the COG (center of gravity) point of the input image, i.e.,

[xcog; ycog] = (1/N) [Σ_{i=1}^{N} xi; Σ_{i=1}^{N} yi],   (7)

where (xi, yi) is the i-th sample pixel and N is the total number of sample pixels of a character. Let the function A(x,y) denote the angle between sample pixel (x,y) and the x-axis, i.e.,

A(x,y) = atan(y/x).   (8)

The process of projecting a character onto axis ηφ can be decomposed into three operations. First, rotate the character by the angle Δθ = A(xcog, ycog) − φ. Second, scale the rotated pixels by a projection factor cos(Δθ). Third, translate the axis origin to the desired coordinate. Applying the process to the COG point, the coordinate of the COG point after rotation is

[xrcog; yrcog] = [cos(Δθ), −sin(Δθ); sin(Δθ), cos(Δθ)] · [xcog; ycog].   (9)

Scaling by the projection factor cos(Δθ), it becomes

[xpcog; ypcog] = [cos(Δθ), 0; 0, cos(Δθ)] · [xrcog; yrcog].   (10)

Finally, combining (9) and (10) and further translating the origin of axis ηφ to (xηori, yηori), the final coordinate (xproj, yproj) of projecting any sample pixel (x,y) onto axis ηφ is computed by


[xproj; yproj] = [cos²(Δθ), −sin(Δθ)cos(Δθ); sin(Δθ)cos(Δθ), cos²(Δθ)] · [x; y] − [xpcog; ypcog] + [xηori; yηori].   (11)

Note that the origin of axis ηφ, (xηori, yηori), is chosen to be the COG point of the candidate character in the experiments, i.e., (xηori, yηori) = (xcog, ycog), because this concentrates the projected pixels around the origin (xcog, ycog) and saves some of the memory used to accumulate the projected samples on the new axis.

2) Gradient projection accumulation

In this section, the pre-computed gradient orientations and magnitudes are projected onto specific axes and then summed up. Only sample pixels of similar gradient orientations are projected onto the same axis. See Fig. 5 for an example: an object O is projected onto axis η of angle 0 degrees. In this case, only the sample pixels with gradient orientations θ(x,y) near 0 degrees are projected onto η and then accumulated.

Figure 5. Accumulation of gradient projection

According to the axis type, there are two cases for selecting sample pixels of similar orientations. For a nature axis corresponding to the k-th peak pk, the sample pixels with orientation θ(x,y) inside the boundaries of pk, i.e., sk < θ(x,y) < ek, are projected and accumulated. For an augmented axis with angle φ, the sample pixels with gradient orientations θ(x,y) ≥ φ−22.5 and θ(x,y) ≤ φ+22.5 are projected and accumulated.

From (3) and (11), the projected gradient magnitude, m̂(x,y), and the projected distance, l̂(x,y), of sample pixel (x,y) onto axis ηφ are respectively

m̂(x,y) = m(x,y) × cos(θ(x,y) − φ),   (12)

l̂(x,y) = √((xproj − xcog)² + (yproj − ycog)²).   (13)

To accumulate the gradient projections, an empty array R(x) is created with length equal to the diagonal of the input image. Since the indexes of an array must be integers, linear interpolation is used to accumulate the gradient projections into the two nearest indexes of the array. In mathematical terms, let b = floor(l̂(x,y)) and u = b+1, where floor(z) rounds z to the nearest integer towards minus infinity. For each sample pixel (x,y) of input image I, do the following accumulations:

R(b) = R(b) + m̂(x,y) × (l̂(x,y) − b);
R(u) = R(u) + m̂(x,y) × (u − l̂(x,y)).   (14)

Besides R(x), an additional gradient accumulation array, T(x), is created to collect extra information required for normalization. There are two differences between R(x) and T(x). First, unlike R(x), which targets only the sample pixels of similar orientation, T(x) targets all the sample pixels of a character and accumulates their gradient magnitudes. Second, R(x) accumulates the projected gradient magnitude m̂(x,y), while T(x) accumulates the original gradient magnitude m(x,y). Referring to (14),

T(b) = T(b) + m(x,y) × (l̂(x,y) − b);
T(u) = T(u) + m(x,y) × (u − l̂(x,y)).   (15)

The purpose of T(x) is to collect the overall gradient information of the character of interest, which is then applied to normalize the array R(x) into the desired AGPV.
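The projection and accumulation of (12)-(15) can be sketched as below (our names; as a simplification of the rotate-scale-translate composition of (9)-(11), the projected distance is computed directly as the scalar projection of the pixel offset onto the axis through the COG):

```python
import numpy as np

def accumulate_projections(xs, ys, mag, theta, phi, cog, length):
    """Accumulate gradient projections onto axis eta_phi (angle phi, through cog).
    xs, ys, mag, theta: arrays for the pixels whose gradient orientation falls
    in the selected range. Returns R (projected magnitudes, eqs. (12)/(14))
    and T (raw magnitudes, eq. (15))."""
    R, T = np.zeros(length), np.zeros(length)
    # signed distance along the axis, shifted so indexes stay non-negative
    l_hat = (xs - cog[0]) * np.cos(phi) + (ys - cog[1]) * np.sin(phi) + length / 2.0
    m_hat = mag * np.cos(theta - phi)              # projected magnitude, eq. (12)
    b = np.floor(l_hat).astype(int)
    u = b + 1
    ok = (b >= 0) & (u < length)                   # keep samples inside the arrays
    w = l_hat - b                                  # interpolation weight, as in (14)
    np.add.at(R, b[ok], m_hat[ok] * w[ok])
    np.add.at(R, u[ok], m_hat[ok] * (1 - w[ok]))
    np.add.at(T, b[ok], mag[ok] * w[ok])           # eq. (15)
    np.add.at(T, u[ok], mag[ok] * (1 - w[ok]))
    return R, T
```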


3) Normalization

The last step in finding the AGPV on an axis is to normalize the gradient projection accumulation array R(x) into a fixed-length vector. With the fixed length, the AGPVs have uniform dimensionality and can easily be compared with standard AGPVs. Before the normalization, the length of the AGPV, LAGPV, has to be determined. Depending on the complexity of the recognition targets, different lengths of AGPV may be selected to describe the distribution of projected gradients. In our experiments, LAGPV is chosen as 32. A smaller LAGPV lowers the resolution and degrades the recognition rate; a larger LAGPV slows down system performance and makes no significant difference in recognition rate. It is worth noting that an AGPV formed on one axis is independent of the AGPVs formed on other axes. This is important because it makes the AGPVs independent from one another regardless of the underlying character and axes.

In order to avoid the impact of isolated sample pixels, which are mostly caused by noise, the array R(x) is filtered by a Gaussian filter G(x):

R̃(x) = R(x) * G(x),   (16)

where the operator * stands for convolution. The variance of G(x) is chosen as σ = LenR/128 in the experiments, where LenR is the length of R(x); this choice is found to benefit both resolution and noise rejection. Similarly, the array T(x) is filtered by the same Gaussian filter to eliminate the effect of noise. After Gaussian filtering, the array T(x) is analyzed to find the effective range, the range in which the data is effective in representing a character. The effective range starts from index Xs and ends at index Xe, defined as

Xs = {xs : T(xs) ≥ thT; T(x) < thT, ∀x < xs},   (17)
Xe = {xe : T(xe) ≥ thT; T(x) < thT, ∀x > xe},   (18)

where the threshold thT is used to discard noise and is chosen as thT = Max(T(x))/32 in the experiments. The effective range of R(x) is the same as the effective range of T(x), from Xs to Xe.

As mentioned previously, the gradient projection accumulation results in a large sum along a straight edge. This is a good property if the character of interest is composed of straight edges. However, some characters consist of not only straight edges but also curves and corners, which contribute only small energy to array R(x). In order to balance the contributions of different types of edges and avoid disturbance from noise, a threshold is used to adjust the content of array R(x) before normalization:

R̂(x) = 0 if R̃(x) < thR;  R̂(x) = 255 if R̃(x) ≥ thR.   (19)

After finding the effective range and adjusting the content of array R(x), the accumulated gradient projection vector (AGPV) is defined by resampling from R̂(x):

AGPV(i) = R̂(round((i/32) × (Xe − Xs) + Xs)).   (20)

Fig. 6 gives an example of the gradient accumulation array T(x), the gradient projection accumulation array R(x), and the normalized AGPV. The example uses the same test image as Fig. 3, and only one of the nature axes, axis E, is selected and printed. Similar to the method of finding the peaks of the orientation histogram, the k-th peak, pk, of R(x) is defined by R(pk) > R(pk−1) and R(pk) > R(pk+1). It can be observed that four peaks exist in Fig. 6(c), and each of them represents an edge projected onto axis E.

Figure 6. (a) Gradient projection on axis D (pink: COG point; red: axis D; cyan: selected sample pixels; blue: locations after projection) (b) the gradient accumulation array T(x) with distance to the COG point (c) the gradient projection array R(x) (d) normalized AGPV
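A compact sketch of the normalization chain (16)-(20), assuming SciPy's 1-D Gaussian filter; thR is left as a parameter, since the paper does not fix its value in the text we reconstruct from:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

L_AGPV = 32

def normalize_agpv(R, T, th_r):
    """Turn the raw projection array R into a fixed-length AGPV, eqs. (16)-(20)."""
    R_s = gaussian_filter1d(R, sigma=len(R) / 128.0)  # eq. (16)
    T_s = gaussian_filter1d(T, sigma=len(T) / 128.0)
    th_t = T_s.max() / 32.0                           # noise threshold for T
    effective = np.nonzero(T_s >= th_t)[0]
    xs_, xe_ = effective[0], effective[-1]            # effective range, eqs. (17)-(18)
    R_b = np.where(R_s >= th_r, 255.0, 0.0)           # binarization, eq. (19)
    idx = np.round(np.arange(L_AGPV) / L_AGPV * (xe_ - xs_) + xs_).astype(int)
    return R_b[idx]                                   # resampling, eq. (20)
```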

C. Matching and Recognition

1) Properties used for matching

Unlike general vector-matching problems that refer directly to the RMS error of two vectors, the matching of AGPVs refers to special properties derived from their physical meanings. There are three properties useful for measuring the similarity between two AGPVs. Each peak in an AGPV represents an edge of the source character. The number of peaks, or edge count, is useful to represent the difference between two AGPVs. For example, there are four peaks in the extracted AGPV in Fig. 6(d), and each of them represents an edge on the axis. The edge count is invariant no matter how the character appears in the input image. In this paper, a function EC(V) is defined to calculate the edge count of an AGPV V; EC(V) increases if V(i)=0 and V(i+1)>0.

Although the edge count of an AGPV is invariant for the same character, the positions of the edges can vary if the character is deformed due to a slanted camera angle. This is the major reason why the RMS error is not suitable for measuring the similarity between two AGPVs. In order to compare AGPVs under character deformation, a matching cost function C(U,V) is calculated to measure the similarity between AGPV U and AGPV V, expressed as

C(U,V) = |EC(U) − EC(V)| + |EC(UV) − EC(V)| + |EC(IV) − EC(V)|,   (21)

where UV = U ∪ V is the union vector of AGPV U and AGPV V, with UV(i)=1 if V(i)>0 or U(i)>0, and IV = U ∩ V is the intersection vector, with IV(i)=1 if V(i)>0 and U(i)>0.
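The edge-count function EC(V) and the matching cost (21) translate almost directly into code; the absolute differences follow the reconstruction above:

```python
import numpy as np

def edge_count(v):
    """EC(V): number of 0 -> positive transitions along an AGPV."""
    v = np.asarray(v)
    return int(np.sum((v[:-1] == 0) & (v[1:] > 0)))

def matching_cost(u, v):
    """C(U,V) of eq. (21), using union and intersection support vectors."""
    u, v = np.asarray(u), np.asarray(v)
    uv = ((u > 0) | (v > 0)).astype(float)  # union vector UV
    iv = ((u > 0) & (v > 0)).astype(float)  # intersection vector IV
    ec_v = edge_count(v)
    return (abs(edge_count(u) - ec_v)
            + abs(edge_count(uv) - ec_v)
            + abs(edge_count(iv) - ec_v))
```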


An inter-axis property used to match the test character with the standard characters is that the angular relationships of the nature axes of the test character are similar to those of the corresponding standard character. In the experiments, a threshold thA = π/32 is used to check whether the AGPVs of the test character match the angular relationships of the nature axes of a standard character. Let AAT(k) be the k-th axis angle of the test character, with 0 ≤ AAT(k) < 2π, and let the function AA(i,j) denote the angle of the j-th axis of the i-th standard character, similarly 0 ≤ AA(i,j) < 2π. If the m-th and n-th axes of the test character correspond respectively to the g-th and h-th axes of the i-th standard character, then

|(AAT(m) − AAT(n)) − (AA(i,g) − AA(i,h))| ≤ thA.   (22)

2) Create standard AGPV database

A standard database is created by collecting all the AGPVs extracted from characters with standard parameters: standard size, standard aspect ratio, no noise, no blur, and neither rotation nor deformation. The extracted AGPVs are called standard AGPVs and are stored in two categories: those calculated on nature axes are called the standard nature AGPVs, and those calculated on augmented axes are called the standard augmented AGPVs. Let the number of standard characters be N; N = 36 (0~9 and A~Z) for the license plate characters in this paper. Denote the number of standard nature AGPVs of the i-th standard character as NN(i), the number of standard augmented AGPVs as NA(i), and the total number of AGPVs as NV(i), where NV(i) = NN(i) + NA(i). The j-th standard AGPV of the i-th character is denoted as VS(i,j), where j = 1 to NV(i). Note that VS(i,j) are standard nature AGPVs for j ≤ NN(i), while VS(i,j) are standard augmented AGPVs otherwise.

3) Matching of characters

In order to recognize the test character, its AGPVs are compared stage by stage with the standard AGPVs in the database. A candidates list is created that includes all the standard characters at the beginning; candidates having a high matching cost to the test character are removed at each stage. At the end of the last stage, the candidate in the list with the lowest total matching cost is taken as the recognition result.

Stage 1: Find the fundamental matching pair. Calculate the cost function between the test character and the j-th AGPV of the i-th standard character,

C1(k,j) = MC(VT(k), VS(i,j)).   (23)

Find the pair of axes whose matching cost is minimum:

(kT, jS) = arg min_{k,j} C1(k,j).   (24)

If min C1(kT,jS) is less than a threshold thF, the i-th standard character is kept in the candidates list, and the pair (kT,jS) serves as the fundamental pair of the candidate.

Stage 2: Find the other matching pairs between the standard AGPVs and the test character. Based on the fundamental pair, the axis angles of the test character are compared with those of the standard character. Let the number of nature AGPVs detected on the test character be NNT. For the i-th standard character, create an empty array mp(j) = 0, 1 ≤ j ≤ NV(i), to denote the matching pairs with the test character. Using (22), calculate

|(AAT(k) − AAT(kT)) − (AA(i,j) − AA(i,jS))| ≤ thA, ∀k ∈ [1, NNT], k ≠ kT; ∀j ∈ [1, NN(i)], j ≠ jS.   (25)

The k-th test AGPV satisfying (25) is called the j-th matching pair of the standard character, denoted as mp(j) = k. Note that there might be more than one test AGPV satisfying (25); in this case only the one with the lowest matching cost is taken as the j-th matching pair, and the others are ignored.

Stage 3: Calculate the total matching cost of standard nature AGPVs. Define a character matching cost function CMC(i) to measure the similarity between the test character and the i-th standard character by summing up the matching costs of all the matching pairs:

CMC(i) = Σ_{j=1, mp(j)>0}^{NN(i)} MC(VT(mp(j)), VS(i,j)).   (26)

Stage 4: Calculate the matching costs of augmented AGPVs. As the first step, find the axis angle AX on the test character corresponding to the j-th standard augmented axis as

AX = (AA(i,j) − AA(i,jS)) + AAT(kT).   (27)

If there is one AGPV of the test character, say the k-th nature AGPV, that satisfies (25), i.e., |AAT(k) − AX| ≤ thA, then the k-th nature AGPV is mapped to the j-th augmented axis and mp(j) = k. Otherwise, the AGPV corresponding to the j-th standard augmented axis must be calculated based on the axis angle AX. After that, the matching costs of the augmented AGPVs are accumulated into the character matching cost function as

CMC(i) = CMC(i) + Σ_{j=NN(i)+1}^{NV(i)} MC(VT(mp(j)), VS(i,j)).   (28)

Stage 5: Recognition. Due to the different numbers of AGPVs for different standard characters, the character matching cost function is normalized by the total number of standard AGPVs, i.e.,

CMC(i) = CMC(i) / NV(i).   (29)


Finally, the test character is recognized as the h-th standard character if the character matching cost CMC(h) is the lowest among all candidates remaining in the list.
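The following sketch strings the stages together in simplified form (it omits the augmented-axis handling of Stage 4 and normalizes by the number of matched pairs rather than NV(i); matching_cost is the function from the previous sketch, and all structures are illustrative):

```python
import numpy as np

def recognize(test_axes, test_agpvs, standards, th_f, th_a=np.pi / 32):
    """Stage-wise matching sketch of Section III.C.3 (names are ours).
    standards: list of dicts with 'axes' (angles), 'agpvs', and 'nn'
    (number of nature AGPVs) per standard character."""
    best, best_cost = None, np.inf
    for i, std in enumerate(standards):
        # Stage 1: fundamental pair = cheapest (test AGPV, standard AGPV) pair
        costs = np.array([[matching_cost(t, s) for s in std['agpvs']]
                          for t in test_agpvs])
        k_t, j_s = np.unravel_index(np.argmin(costs), costs.shape)
        if costs[k_t, j_s] >= th_f:
            continue                          # candidate removed from the list
        total, pairs = costs[k_t, j_s], 1
        # Stage 2: pair the remaining axes by angular consistency, eq. (25)
        for j in range(std['nn']):
            if j == j_s:
                continue
            diffs = np.abs((np.asarray(test_axes) - test_axes[k_t])
                           - (std['axes'][j] - std['axes'][j_s]))
            k = int(np.argmin(diffs))
            if diffs[k] <= th_a:
                total += costs[k, j]          # Stage 3: accumulate pair costs
                pairs += 1
        cmc = total / pairs                   # Stage 5-style normalization
        if cmc < best_cost:
            best, best_cost = i, cmc
    return best, best_cost
```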
IV. EXPERIMENTAL RESULTS

The test images are captured from license plates under general conditions and include several factors which often degrade recognition rates, such as dirty plates, deformed license plates, and plates under special light conditions. All the images are sized 2048×1536 and converted into 8-bit gray-scale images. In total, 60 test images are collected, and each of them contains one license plate consisting of 4 to 6 characters. The minimum character size in the images is 32×32 pixels. We choose some characters from the test images to be the standard characters and calculate the standard AGPVs.

The results are measured by the true positive rate (TPR), i.e., the rate at which true characters are extracted and recognized successfully, and the false positive rate (FPR), i.e., the rate at which false characters are extracted and recognized as characters in the test image. The process is divided into two stages, the extraction stage and the recognition stage, and the result of each stage is recorded and listed in Table I.

The discussion is focused on the stability of the proposed method with respect to three imaging parameters: noise, character deformation, and illumination change. These parameters are considered to be the most serious factors impacting the recognition rate. Denote the original test images as set A. Three extra image sets, set B, set C, and set D, are generated to test the impact of these parameters, respectively.

In Table I, the column "Extraction TPR" stands for the rate at which the characters in the test images are extracted correctly. A character is considered successfully extracted if, first, it is isolated from external objects and, second, the grouped pixels can be recognized by human eyes. The column "Recognition 1 TPR" represents the rate at which the extracted true characters are recognized correctly under the condition that the character orientation is unknown. The column "Recognition 2 TPR" is the recognition rate under the condition that the character orientation is known, so that the fundamental pair of a test character is fixed. This is reasonable for most cases, since license plates are usually oriented horizontally; under such conditions, the characters keep a vertical orientation if the camera stays upright.

TABLE I. SIMULATION RESULTS OF THE FOUR IMAGE SETS (all values in %)

         Extraction   Recognition 1 (unknown orientation)   Recognition 2 (known orientation)
         TPR          TPR      FPR                           TPR      FPR
Set A    93.3         88.3     8.3                           93.3     3.3
Set B    83.0         85.4     8.1                           88.3     5.0
Set C    91.6         67.3     13.3                          75.8     10.6
Set D    89.7         78.4     8.6                           89.3     3.3

A. Stability to Noise

Image set B is generated in two steps: first, copy set A to a new image set; second, add 4% salt-and-pepper noise to the new image set to form set B. Note that 4% salt-and-pepper noise means one salt or pepper pixel in every 25 pixels.

It can be seen from Table I that the character extraction rate degrades when noise is added. A further experiment shows that enlarging the characters is very useful for improving the extraction rate under noise; this is reasonable, since the noise energy is lowered when the range of accumulation is enlarged. Similarly, the same method is able to improve the TPR of recognition, since it accumulates the gradient energy before feature extraction.
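Set B's corruption step is straightforward to reproduce; a sketch with NumPy (function name ours):

```python
import numpy as np

def add_salt_pepper(image, fraction=0.04, rng=None):
    """Corrupt ~4% of the pixels (one in every 25) with salt or pepper noise,
    as used to generate image set B."""
    rng = np.random.default_rng(rng)
    noisy = image.copy()
    mask = rng.random(image.shape) < fraction   # pixels to corrupt
    salt = rng.random(image.shape) < 0.5        # half salt, half pepper
    noisy[mask & salt] = 255
    noisy[mask & ~salt] = 0
    return noisy
```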

B. Stability to Character Deformation

Image set C is generated by transforming the pixels of set A via an affine transformation matrix M:

[xC; yC] = M · [xA; yA].   (30)

Table II gives two examples of different matrices M and the corresponding affine-transformed characters.

TABLE II. TWO OF THE AFFINE-TRANSFORMED CHARACTERS
(Shear matrices M = [1, 0; −1/6, 1], [1, 0; −1/4, 1], [1, 0; 1/6, 1], and [1, 0; 1/4, 1] applied to the input characters; the transformed character images are not reproduced here.)

In Table I, the extraction TPR for set C is very close to the original rate for set A. This is because the extraction method based on scale-space differences is independent of character deformation. However, character deformation has a serious impact on the recognition TPR, because not only the angle differences of the axes but also the peaks in the AGPV change drastically under deformation. A method to increase the recognition TPR for deformed input characters is to expand the database by including deformed characters as standard characters. In other words, false recognitions can be reduced by treating seriously deformed characters as new characters and recognizing them based on the new standard AGPVs. This method helps resolve the problem of character deformation but takes more time in recognition, as the number of AGPVs in the standard database grows compared to the original.
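The deformation of (30) can be applied to a test image with SciPy's affine resampler; note that affine_transform maps output coordinates to input coordinates, hence the inverse:

```python
import numpy as np
from scipy.ndimage import affine_transform

def deform(image, m):
    """Resample an image through the 2x2 affine matrix M of eq. (30),
    e.g. m = [[1, 0], [-1/6, 1]] or [[1, 0], [1/4, 1]] as in Table II."""
    return affine_transform(image, np.linalg.inv(np.asarray(m)), order=1)
```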

C. Stability to Illumination Change

It is found that the success rate is robust, with almost no change in the extraction stage, when the illumination is changed by a constant factor, i.e.,

I'(x,y) = k · I(x,y).   (31)


Consider to uneven illumination, four directional light sources, L1 to L4, are added into the test images and form Set D to imitate the response of illumination change. Expressed as

plate characters, a lot of isolated characters are wide spreading around our daily lives. For example, there are many isolated characters exist in elevators, clocks, telephones …, etc. Traditional coarse-to-fine methods to recognize license plate characters are not suitable for these cases because each of them have different background and irregular characters placement. The proposed AGPV method shall be useful to recognize these isolated characters if it can be adapted to different font types.

L1 ( x , y ) = ( x + y + 10 ) / (L + W + 10)

L2 ( x , y ) = ((W − x ) + y + 10 ) / (L + W + 10 )

L3 (x , y ) = (x + (L − y ) + 10 ) / (L + W + 10 )

L4 ( x , y ) = ((W − x ) + (L − y ) + 10 ) / (L + W + 10 ) ,

(32)

REFERENCES

where the W and L are respectively the width and length of the test image. We can see from Table I that the uneven illumination change makes little difference to character extraction. A detail analysis indicates that insufficient color (intensity) depth makes some edges disappeared under illumination change and forms the major reason for the drop of extraction TPR. Similarly, the same reason also degrades the TPR in recognition stage.

[1]

[2]

[3]

A good approach to minimize the sensitivity to illumination change is to increase the color (intensity) depth of the input image. 12-bit or 16-bit gray-level test images will have better recognition rates than 8-bit ones. V.

[4]

CONCLUSION

[5]

A method to extract and recognize the license plate characters is presented comprising two stages: First, extract isolated characters in a license plate. And second, recognize them by the novel AGPVs.

[6]

The method in extraction stage incrementally convolves the input image with different scale Gaussian functions and minimizes the computations in high scale images by means of sub-sampling. The isolated characters are found by connected components analysis on the Difference-of-Gaussian image and filtered by expected sizes. The method in recognition stage adopts AGPV as feature vector. The AGPVs calculated from Gaussian-filtered images are independent from rotation and scaling, and robust to noise and illumination change.

[7]

[8]

[9]

The proposed method has two distinctions with traditional methods:

[10]

1. Unlike traditional methods detect the whole license plate in the first step; the method proposed here extracts characters alone and no need to detect the whole plate in advance.

[11] [12]

2. The recognition method is suitable for single characters. Unlike traditional methods require the information of baseline before recognition; the proposed method requires no prior information of character sizes and orientations.

[13] [14]

A direction for future research can be categorized to extend the method to recognize different font types. Not only license

10

Takashi Naito, Toshihiko Tsukada, Keiichi Yamada, Kazuhiro Kozuka, and Shin Yamamoto, “Robust License-Plate Recognition Method for Passing Vehicles under Outside Environment,” IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, VOL. 49, NO. 6, NOVEMBER 2000 S. Kim, D. Kim, Y. Ryu, and G. Kim, “A Robust License Plate Extraction Method Under Complex Image Conditions,” in Proc. 16th International Conference on Pattern Recognition (ICPR’02), Quebec City, Canada, vol. 3, pp. 216-219, Aug. 2002. Wang, S.-Z.; Lee, H.-J., “A Cascade Framework for a Real-Time Statistical Plate Recognition System,” Transactions on Information Forensics and Security, IEEE, vol. 2, no.2, pp. 267 - 282, DOI: 10.1109/TIFS.2007.897251, June 2007 Zunino, R.; Rovetta, S., “Vector quantization for license-plate location and image coding.” Transactions on Industrial Electronics, IEEE, vol. 47, no.1, pp159 - 167, Feb 2000, DOI: 10.1109/41.824138 Kwan, H.K., “Multilayer recurrent neural networks [character recognition application example].” The 2002 45th Midwest Symposium on, vol. 3, pp 97-100, 4-7 Aug. 2002, ISBN: 0-7803-7523-8, INSPEC: 7736581 Wu-Jun Li; Chong-Jun Wang; Dian-Xiang Xu; Shi-Fu Chen., “Illumination invariant face recognition based on neural network ensemble.” ICTAI 2004., pp 486 - 490, 15-17 Nov. 2004, DOI: 10.1109/ICTAI.2004.71 N. Otsu, “A Threshold Selection Method from Gray-Level Histograms.” Transactions on Systems, Man and Cybernetics, IEEE, vol. 9, no.1, pp62-66, Jan. 1979, ISSN: 0018-9472, DOI: 10.1109/TSMC.1979.4310076 Atallah AL-Shatnawi and Khairuddin Omar., “Methods of Arabic Language Baseline Detection – The State of Art,” IJCSNS, vol. 8, no.10, Oct 2008 David G. Lowe, "Distinctive image features from scale-invariant keypoints," International Journal of Computer Vision, 60, 2 , pp. 91-110, 2004 David G. Lowe, "Object recognition from local scale-invariant features," International Conference on Computer Vision, Corfu, Greece (September 1999), pp. 1150-1157. Witkin, A.P. "Scale-space filtering," International Joint Conference on Artificial Intelligence, Karlsruhe, Germany, pp. 1019-1022, 1983 Koenderink, J.J. "The structure of images," Biological Cybernetics, 50:363-396, 1984 Lindeberg, T. "Scale space theory: A basic tool for analyzing structures at different scales." Journal of Applied Statistics, 21(2):224-270, 1994 Mikolajczyk, K. "Detection of local features invariant to affine transformations." Ph.D thesis, Institut National Polytechnique de Grenoble, France, 2002



Personal Information Databases

Sabah S. Al-Fedaghi
Computer Engineering Department, Kuwait University, Kuwait
[email protected]

Bernhard Thalheim
Computer Science Institute, Kiel University, Kiel, Germany
[email protected]

Abstract—One of the most important aspects of security organization is to establish a framework to identify security-significant points where policies and procedures are declared. The (information) security infrastructure comprises entities, processes, and technology. All are participants in handling information, which is the item that needs to be protected. Privacy and security information technology is a critical and unmet need in the management of personal information. This paper proposes concepts and technologies for management of personal information. Two different types of information can be distinguished: personal information and non-personal information. Personal information can be either personal identifiable information (PII) or non-identifiable information (NII). Security, policy, and technical requirements can be based on this distinction. At the conceptual level, PII is defined and formalized by propositions over infons (discrete pieces of information) that specify transformations in PII and NII. PII is categorized into simple infons that reflect the proprietor’s aspects, relationships with objects, and relationships with other proprietors. The proprietor is the identified person about whom the information is communicated. The paper proposes a database organization that focuses on the PII spheres of proprietors. At the design level, the paper describes databases of personal identifiable information built exclusively for this type of information, with their own conceptual scheme, system management, and physical structure.

Keywords: database; personal identifiable information; personal information

I. INTRODUCTION

Rapid advances in information technology and the emergence of privacy-invasive technologies have made information privacy a critical issue. According to Bennett and Raab [11], the concept of information privacy is technically treated as information security. “Information privacy is the interest an individual has in controlling, or at least significantly influencing, the handling of data about themselves” [10]; however, the information privacy domain goes beyond security concerns.

Information security aims to ensure the security of all information, whether or not it is privacy related. Here we use the term information in its ordinary sense of “facts” stored in a database. This paper explores the privacy-related differences between types of information to argue that security, policy, and technical requirements set personal identifiable information apart from other types of information, leading to the need for a PII database with its own conceptual scheme, system management, and physical structure.

Different types of information of interest in this paper are shown in Fig. 1. We will use the term infon to refer to “a piece of information” [9]. The parameters of an infon are objects, and so-called anchors assign these objects, such as agents, to parameters. Infons can have sub-infons that are also infons. Let INF = the set of infons in the system. Four types of infons are identified:

1. So-called “private or personal” information, a subset of INF. “Private or personal” information is partitioned into two types of information: PII and PNI.
2. PII, the set of pieces of personal identifiable information. We use the term pinfon to refer to this special type of infon. The relationship between PII and the notion of identifiability will be discussed later.
3. PNI, the set of pieces of personal non-identifiable information.

4. NII = (INF – PII). We use the term ninfon to refer to this special type of infon. NII is the set of pieces of non-identifiable information and includes all pieces of information except personal identifiable information (the shaded area in Fig. 1).

Figure 1. Types of information.

PNI in Fig. 1 is a subset of NII. It is non-identifiable information; however, it is called “personal” or “private” because its owner (a natural person) has an interest in keeping it private. In contrast, PII embeds a unique identity of a natural person.


From the security point of view, PII is more sensitive than an “equal amount” (to be discussed later) of NII. With regard to policy, PII has more policy-oriented significance (e.g., the 1996 EU directive) than NII. With regard to technology, there are unique PII-related technologies (e.g., P3P) and techniques (e.g., k-anonymization) that revolve around PII. Additionally, PII possesses an objective definition that allows separating it from other types of information, which facilitates organizing it in a manner not available to NII.

The distinction of infons into PII, NII, and PNI requires a supporting technology. We thus need a framework that allows us to handle, implement, and manage PII, NII, and PNI. Management of PII, NII, and PNI ought, ideally, to be optimal in the sense that derivable infons are not stored. This paper introduces a formalism to specify privacy-related infons based on a theoretical foundation; current privacy research lacks such a formalism. The new formalism can benefit two areas. First, a precise definition of the notion of informational privacy is introduced. Second, it can be used as a base to develop formal and informal specification languages: an informal specification language can serve as a vehicle for specifying various privacy constraints and rules, and further work can develop a full formal language to be used in privacy-enhancing systems.

In this paper, we concentrate on the conceptual organization of PII databases based on a theory of infons. To achieve such a task, we need to identify which subset of infons will be considered personal identifiable information. Since a combination of personal identifiable information is also personally identifiable, we must find a way to minimize the information to be stored; we introduce an algebra that supports such minimization. Infons may belong to different users in a system. We distinguish between proprietors (persons to whom PII refers through embedded identities) and owners (entities that possess PII of others, such as agencies or non-proprietor persons).

II. RELATED WORKS

Current database management systems (DBMS) do not distinguish between PII and NII. An enterprise typically has one or several databases. Some data is “private,” other data is public, and it is typical that these data are combined in queries. “Private” typically means exclusive ownership of, and rights (e.g., access) to, the involved data, but there is a difference between “private” data and personal identifiable data. “Private” data may include NII exclusively controlled by its owner; in contrast, PII databases contain only personal identifiable information and related data, as will be described later. For example, in the Oracle database, the Virtual Private Database (VPD) is the aggregation of fine-grained access control in a secure application context. It provides a mechanism for building applications that enforce the security policies customers want enforced, but only where such control is necessary. By dynamically appending a predicate to SQL statements, VPD limits access to data at the table’s row level and ties the security policy to the table (or view) itself. “Private” in VPD means data owned and controlled by its owner. Such a mechanism supports the “privacy” of any owned data, not necessarily personal identifiable information. In contrast, we propose to develop a general PII database management system where PII and NII are explicitly separated in planning, design, and implementation.
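As a toy illustration of this predicate-appending idea (not Oracle's actual VPD API; the table name, policy map, and bind variable below are invented for the example), a query against a protected table can be rewritten before execution:

def append_row_level_predicate(sql, table, policy_predicates):
    # Look up the predicate tied to the table; unprotected tables pass through.
    predicate = policy_predicates.get(table)
    if predicate is None:
        return sql
    # Append the policy predicate to the statement's WHERE clause.
    if " where " in sql.lower():
        return f"{sql} AND {predicate}"
    return f"{sql} WHERE {predicate}"

policies = {"orders": "owner_id = :current_user_id"}
print(append_row_level_predicate("SELECT * FROM orders", "orders", policies))
# -> SELECT * FROM orders WHERE owner_id = :current_user_id

The point of the toy is only that the policy travels with the table rather than with the application code.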

Separating “private” data from “public” data has already been adopted in privacy-preserving systems; however, these systems do not explicitly distinguish personal identifiable information. The Platform for Privacy Preferences (P3P) is one such system that provides a means for privacy policy specification and exchange but “does not provide any mechanism to ensure that these promises are consistent with the internal data processing” [7]. It is our judgment that “internal data processing” requires recognizing explicitly that “private data” is of two types, personal identifiable information and personal non-identifiable information, and that this difficulty is caused by the heterogeneity of data.

Hippocratic databases have been introduced as systems that integrate privacy protection into relational database systems [1][4]. A Hippocratic database includes privacy policies and authorizations associated with each attribute and each user for usage purpose(s). Access is granted if the access purpose (stated by the user) is entailed by the allowed purposes and not entailed by the prohibited purposes [7]. Users’ role hierarchies, similar to ones used in security policies (e.g., RBAC), are used to simplify management of the mapping between users and purposes. A request to access data is accompanied by an access purpose, and access permission is determined after comparing such purpose with the intended purposes of that data in the privacy policies. Each user has authorization for a set of access purposes. Nevertheless, in principle, a Hippocratic database is a general DBMS with a purpose mechanism. Purposes can be declared for any data item, not necessarily personal identifiable information.

III. INFONS

This section reviews the theory of infons. The theory of infons provides a rich algebra of construction operations that can be applied to PII. Infons in an application domain such as personal identifiable information are typically interrelated; they partially depend on each other, partially exclude each other, and may be (hierarchically) ordered. Thus we need a theory that allows constructing a “lattice” of infons (and PII infons) that includes basic and complex infons while taking into consideration their structures and relationships. In such a theory, we identify basic infons that cannot be decomposed into more basic infons. This construction mechanism of infons from infons should be supported by an algebra of construction operations. We generally may assume that each infon consists of a number of components. Construction is applied by combining, replacing, or removing some of these components; some components may be essential (not removable) and others auxiliary (optional).


An infon is a discrete item of information and may be parametric and anchored. The parameters represent objects or properties of objects. Anchors assign objects to parameters. Parameter-value pairs are used to represent a property of an infon. The property may be valid, invalid, undetermined, etc. The validity of properties is typically important information. Infons are thus representable by a tuple structure << R, a1, ..., an, i >> or by an anchored tuple structure.

We may order properties and anchors. A linear order allows representing an infon as a simple predicate value. Following Devlin’s formalism [9], an infon has the form << R, a1, ..., an, 1 >> or << R, a1, ..., an, 0 >>, where R is an n-place relation and a1, ..., an are objects appropriate for R. The polarities 1 and 0 indicate that the objects do, respectively do not, stand in the relation R. For simplicity’s sake, we may write an infon << R, a1, ..., an, i >> as << a1, ..., an, i >> when R is known or immaterial. We may use multisets instead of sets for infons, or a more complex structure. We choose the set notation because of its representability within XML technology. Sets allow us to introduce a simple algebra and a simple set of predicates. “PII infons” are distinguished by the mandatory presence of at least one proprietor, an object of type uniquely identifiable person.

The world of infons currently of interest can be specified as a triple (A; O; P):
- Atomic infons A.
- Algebraic operations O for computing complex infons, such as combination ⊕ of infons, abstraction ⊗ of infons by projections, quotient ÷ of infons, renaming ρ of infons, union ∪ of infons, intersection ∩ of infons, full negation ¬ of infons, and minimal negation ┐ of infons within a given context.
- Predicates P stating associations among infons, such as the sub-infon relation, a statement whether infons can be potentially associated with each other, a statement whether infons cannot be potentially associated with each other, a statement whether infons are potentially compatible with each other, and a statement whether infons are incompatible with each other.

The combination of two infons results in an infon with all components of the two infons. The abstraction is used for a reduction of components of an infon. The quotient allows concentrating on those components that do not appear in the second infon. The union takes all components of two infons and does not combine common components into one component. The full negation allows generating all those components that do not appear in the infon. The minimal negation restricts this negation to some given context.

We require that the sub-infon relation is not transitively reflexive. The compatibility and incompatibility predicates are not contradictory. The potential association and its negation must not conflict. The predicates should not span all possible associations among the infons but only those that are meaningful in a given application area. We may assume that two infons are either potentially associated or cannot be associated with each other. The same restriction can be made for compatibility.

This infon world is very general and allows deriving more advanced operations and predicates. If we assume the completeness of the compatibility and association predicates, we may use expressions defined by the operations and derived predicates. The extraction of application-relevant infons from infons is supported by five operations:

1. Infon projection narrows the infon to those parts (objects or concepts, axioms or invariants relating entities, functions, events, and behaviors) of concern for the application-relevant infons. For example, a projection operation may produce the set of proprietors from a given infon, e.g., {Mary, John} from John loves Mary.
2. Infon instantiation lifts general infons to those of interest within the solution and instantiates variables by values that are fixed for the given system. For example, a PII infon may be instantiated from its anonymized version, e.g., John is sick from Someone is sick.
3. Infon determination is used to select those traces or solutions to the problem under inspection that are the most suitable or the best fit for the system envisioned. The determination typically results in a small number of scenarios for the infons to be supported, for example, deciding whether an infon belongs to a certain piiSphere (the PII of a certain proprietor, to be discussed later).
4. Infon extension is used to add those facets not given by the infon but by the environment or the platforms that might be chosen, or that might be used for simplification or support of the infon (e.g., additional data, auxiliary functionality), for example, extension to related non-identifiable information (to be discussed later).
5. Infons are often associated, adjacent, interacting, or fit with each other. Infon join is used to combine infons into more complex, combined infons that describe a complex solution, for example, joining atomic PIIs to form compound PII (these types of PII will be defined later) and a collection of related PII information.

The application of these operations allows extraction of which sub-infons, which functionality, which events, and which behavior (e.g., the action/verb in PII) are shared among information spheres (e.g., of proprietors). These shared “facilities” encompass all information spheres of relevant infons. They also hint at possible architectures of information and database systems and at separation into candidate components. For instance, entity sharing (say, of a non-person entity) describes which information flow and development can be observed in the information spheres.

We will not be strictly formal in applying infon theory to PII; such a venture needs far more space. Additionally, we squeeze the approach into the area of database design in order to illustrate a sample application. The theory of PII infons can be applied in several areas, including the technical and legal aspects of information privacy and security.
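A minimal Python rendering of such an infon world (ours, not the paper's; anchors are ignored and the polarity of a combination is an arbitrary choice) might represent infons and the combination operation ⊕ as follows:

from dataclasses import dataclass, field

@dataclass(frozen=True)
class Infon:
    # << R, a1, ..., an, i >>: relation name, objects, and polarity i in {0, 1}.
    relation: str
    objects: tuple
    polarity: int = 1
    sub_infons: frozenset = field(default_factory=frozenset)

def combine(s1, s2):
    # s1 (+) s2: an infon carrying all components of both operands;
    # common objects are merged into one component.
    return Infon(relation=f"{s1.relation}&{s2.relation}",
                 objects=tuple(dict.fromkeys(s1.objects + s2.objects)),
                 polarity=min(s1.polarity, s2.polarity),  # arbitrary choice
                 sub_infons=frozenset({s1, s2}))

loves = Infon("loves", ("John", "Mary"))   # << loves, John, Mary, 1 >>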


IV. PERSONAL IDENTIFIABLE INFORMATION

It is typically claimed that what makes data “private” or “personal” is either specific legislation (e.g., a company must not disclose information about its employees) or individual agreement (e.g., a customer has agreed to an electronic retailer’s privacy policy). However, this line of thought blurs the difference between personal identifiable information and other “private” or “personal” information. Personal identifiable information has an “objective” definition in the sense that it is independent of such authorities as legislation or agreement.

PII infons involve a special relationship, called proprietorship, with their proprietors, but not with persons who are their non-proprietors or with non-persons such as institutions, agencies, or companies. For example, a person may possess PII of another person, or a company may have the PII of someone in its database; however, proprietorship of PII is reserved only to its proprietor, regardless of who possesses it. To base personal identifiable information on firmer ground, we turn to stating some principles related to such information. For us, personal identifiable information (a pinfon) is any information that has referent(s) to uniquely identifiable persons [2]. In logic, this type of reference is the relation of a word (logical name) to a thing. A pinfon is an infon such that at least one of the “objects” is a singly identifiable person. Any singly identifiable person in the pinfon is called a proprietor of that pinfon. The proprietor is the person about whom the pinfon communicates information. If there is exactly one object of this type, the pinfon is an atomic pinfon; if there is more than one singly identifiable person, it is a compound pinfon. An atomic pinfon is a discrete piece of information about a singly identifiable person; a compound pinfon is a discrete piece of information about several singly identifiable persons. If the infon does not include a singly identifiable person, it is called a ninfon.

We now introduce a series of axioms that establish the foundation of the theory of personal identifiable information. These axioms can be considered negotiable assumptions. The symbol “→” denotes implication, and INF is the set of infons described in Fig. 1.

1. Inclusivity of INF
σ ∈ INF ↔ σ ∈ PII ∨ σ ∈ NII
That is, infons are the union of pinfons and ninfons. PII is the set of pinfons (pieces of personal identifiable information), and NII is the set of ninfons (pieces of non-identifiable information).

2. Exclusivity of PII and NII
σ ∈ INF ∧ σ ∉ PII → σ ∈ NII
σ ∈ INF ∧ σ ∉ NII → σ ∈ PII
That is, every infon is exclusively either a pinfon or a ninfon.

3. Identifiability
Let ID denote the set of (basic) pinfons of type << is, Þ, 1 >>, where Þ is a parameter for a singly identifiable person. Then
<< is, Þ, 1 >> → << is, Þ, 1 >> ∈ INF

4. Inclusivity of PII
Let nσ denote the number of uniquely identified persons in the infon σ. Then
σ ∈ INF ∧ nσ > 0 ↔ σ ∈ PII

5. Proprietary
For σ ∈ PII, let PROP(σ) be the set of proprietors of σ, and let PERSONS denote the set of (natural) persons. Then
σ ∈ PII → PROP(σ) ⊆ PERSONS
That is, pinfons are pieces of information about persons.

6. Inclusivity of NII
σ ∈ INF ∧ (nσ = 0) ↔ σ ∈ NII
That is, non-identifiable information (a ninfon) does not embed any unique identifiers of persons.

7. Combination of non-identifiability with identity
σ1 ∈ PII ↔ σ1 = (σ2 ∈ NII) ⊕ (σ3 ∈ ID), assuming σ1 ∉ ID.
“⊕” here denotes the “merging” of two sub-infons.

8. Closability of PII
σ1 ∈ PII ∧ σ2 ∈ PII → (σ1 ⊕ σ2) ∈ PII

9. Combination with non-identifiability
σ1 ∈ NII ∧ σ2 ∈ PII → (σ1 ⊕ σ2) ∈ PII
That is, non-identifying information plus personal identifiable information is personal identifiable information.

10. Reducibility to non-identifiability
(σ1 ∈ PII) ÷ (σ2 ∈ ID) ↔ σ3 ∈ NII, where σ2 is a sub-infon of σ1 and “÷” denotes removing σ2.

11. Atomicity
Let APII = the set of atomic personal identifiable information. Then
σ ∈ PII ∧ (nσ = 1) ↔ σ ∈ APII

12. Non-atomicity
Let CPII = the set of compound personal identifiable information. Then
σ ∈ PII ∧ (nσ > 1) ↔ σ ∈ CPII

13. Reducibility to atomicity
σ ∈ CPII ↔ <<σ1, σ2, …, σm>>, where σi ∈ APII, m = nσ, 1 ≤ i ≤ m, and {PROP(σ1), PROP(σ2), …, PROP(σm)} = PROP(σ).

These axioms support the separation of infons into PII, NII, and PNI and their transformation. Let us now discuss the impact of some of these axioms, concentrating on the more difficult ones.
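A hedged sketch of how axioms 4, 6, 11, and 12 classify a piece of information by nσ, the number of distinct uniquely identified persons it embeds; the identifier registry and the tuple-of-objects representation are assumptions of the example:

def n_sigma(objects, identifiers):
    # Count distinct identified persons; repeated identifiers of the same
    # proprietor count once, as required for n_sigma.
    return len({o for o in objects if o in identifiers})

def classify(objects, identifiers):
    n = n_sigma(objects, identifiers)
    if n == 0:
        return "NII"                      # axiom 6: no embedded identifier
    return "APII" if n == 1 else "CPII"   # axioms 11 and 12

ids = {"John S. Smith", "Mary F. Fox"}    # illustrative identifier registry
print(classify(("John S. Smith", "is", "sick"), ids))            # APII
print(classify(("John S. Smith", "Mary F. Fox", "sick"), ids))   # CPII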

Identifiability

Let þ be a parameter for a singly identifiable person, i.e., a specific person, defined as


Þ = IND1 | << singly identifiable, IND1, 1 >>
where IND indicates the basic type of an individual [9]. That is, Þ is a (restricted) parameter with an anchor for an object of type singly identifiable individual. The individual IND1 is of type person, defined as << person, IND1, 1 >>. Put simply, þ is a reference to a singly identifiable person.

We now elaborate on the meaning of “identifiable.” Consider the set of unique identifiers of persons. Ontologically, the Aristotelian entity/object is a single, specific existence (a particularity) in the world. For us, the identity of an entity is its natural descriptors (e.g., tall, black eyes, male, blood type A, etc.). These descriptors exist in the entity/object. Tallness, whiteness, location, etc. exist as aspects of the existence of the entity. We recognize the human entity from its natural descriptors. Some descriptors form identifiers. A natural identifier is a set of natural descriptors that facilitates recognizing a person uniquely. Examples of identifiers include fingerprints, faces, and DNA. No two persons have identical natural identifiers. An artificial descriptor is a descriptor that is mapped to a natural identifier. Attaching the number 123456 to a particular person is an example of an artificial descriptor in the sense that it is not recognizable in the (natural) person. An artificial identifier is a set of descriptors mapped to a natural identifier of a person. Date of birth (an artificial descriptor), gender (a natural descriptor), and a 5-digit ZIP code (an artificial descriptor) are three descriptors that form an artificial identifier for 87% of the US population [12]. By implication, no two persons have identical artificial identifiers. If two persons somehow have the same Social Security number, then this Social Security number is not an artificial identifier, because it is not mapped uniquely to a natural identifier.

We define identifiers of proprietors as infons. Such a definition is reasonable since the mere act of identifying a proprietor is a reference to a unique entity in the information sphere. Hence,
<< is, Þ, 1 >> → << is, Þ, 1 >> ∈ INF
That is, every unique identifier of a person is an infon. These infons cannot be decomposed into more basic infons.

Inclusivity of PII

Next we position identifiers as the basic infons in the sphere of PII. The symbol nσ denotes the number of uniquely identified persons in infon σ. Then we can define PII and NII accordingly:
σ ∈ INF ∧ nσ > 0 ↔ σ ∈ PII
That is, an infon that includes unique identifiers of (natural) persons is personal identifiable information. From axioms (3) and (4), any unique personal identifier or piece of information that embeds identifiers is personal identifiable information. Thus, identifiers are the basic PII infons (pinfons) that cannot be decomposed into more basic infons. Furthermore, every complex pinfon includes in its structure at least one basic infon, i.e., an identifier. The structure of a complex pinfon is constructed from several components:

- Basic pinfons and ninfons: e.g., the pinfon John S. Smith and the ninfon Someone is sick form the atomic PII (i.e., PII with one proprietor) John S. Smith is sick. This pinfon is produced by an instantiation operation that lifts the general infon to a pinfon and instantiates the variable (Someone) by a value (John S. Smith).
- Complex pinfons, which form more complex infons, e.g., John S. Smith and Mary F. Fox are sick.

We notice that the operation of projection is not PII-closed, since we can define projecting a ninfon from a pinfon (removing all identifiers). This operation is typically called anonymization.

Every pinfon refers to its proprietor(s) in the sense that it “leads” to him/her/them as distinguishable entities in the world. This reference is based on his/her/their unique identifier(s). As stated previously, the relationship between persons and their own pinfons is called proprietorship [1]. A pinfon is proprietary PII of its proprietor(s). Defining a pinfon as “information identifiable to the individual” does not mean that the information is “especially sensitive, private, or embarrassing. Rather, it describes a relationship between the information and a person, namely that the information—whether sensitive or trivial—is somehow identifiable to an individual” [10]. However, personal identifiable information (a pinfon) is more “valuable” than personal non-identifiable information (a ninfon) because it has an intrinsic value as “a human matter,” just as privacy is a human trait. Does this mean that scientific information about how to make a nuclear bomb has less intrinsic moral value than the pinfon John is left handed? No; it means John is left handed has a higher moral value than the ninfon There exists someone who is left handed. It is important to compare equal amounts of information when we decide the status of each type of information [5].

To exclude such notions as confidentiality being applicable to the informational privacy of non-natural persons (e.g., companies), the next axiom formalizes that pinfons apply only to (natural) persons. For σ ∈ PII, we define PROP(σ) to be the set of proprietors of σ. Notice that |PROP(σ ∈ PII)| = nσ; multiple occurrences of identifiers of the same proprietor are counted as a single reference to the proprietor. In our ontology, we categorize things (in the world) as objects (denoted by the set OBJECTS) and non-objects. Objects are divided into (natural) persons (denoted by the set PERSONS) and non-persons. A fundamental proposition in our system is that proprietors are (natural) persons.

Combination of non-identifiability with identity

Next we can specify several transformation rules that convert one type of information into another. These (privacy) rules are important for deciding what type of information applies to what operations (e.g., information disclosure rules). Let ID denote the set of (basic) pinfons of type << is, Þ, 1 >>; that is, ID is the set of identifiers of persons (in the world). We now define the construction of complex infons from basic pinfons and non-identifying information. The definition also applies to projecting pinfons from more complex pinfons by removing all or some non-identifying information.
σ1 ∈ PII ↔ σ1 = (σ2 ∈ NII) ⊕ (σ3 ∈ ID), assuming σ1 ∉ ID.
That is, non-identifiable information plus a unique personal identifier is personal identifiable information, and vice versa (i.e., minus). Thus the set of pinfons is closed under operations that remove or add non-identifying information. We assume the empty information ∅ is in NII. “⊕” here denotes “merging” two sub-infons. We also assume that only a single σ3 ∈ ID is added to σ2 ∈ NII; however, the axiom can be generalized to apply to multiple identifiers. An example of axiom 7 is
σ1 = John loves apples ↔ {σ2 = Someone loves apples ⊕ σ3 = John}
The axiom can also be applied to the union ∪ of pinfons.
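The two directions of these transformation rules (axioms 7 and 10) can be sketched as operations on a tuple of objects; the Someone placeholder and the identifier set are assumptions of the example:

ANON = "Someone"   # placeholder standing for the removed identifier

def add_identifier(objects, identifier):
    # Axioms 7/9: merging an identifier into non-identifying information
    # yields PII: Someone loves apples (+) John -> John loves apples.
    return tuple(identifier if o == ANON else o for o in objects)

def remove_identifiers(objects, identifiers):
    # Axiom 10: removing (dividing out) all identifiers reduces a pinfon
    # to NII; this is the anonymization projection mentioned earlier.
    return tuple(ANON if o in identifiers else o for o in objects)

pii = add_identifier(("Someone", "loves", "apples"), "John")
nii = remove_identifiers(pii, {"John"})   # ("Someone", "loves", "apples")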

Closability of PII

PII is a closed set under different operations (e.g., merge, concatenate, submerge, etc.) that construct complex pinfons from more basic pinfons. Hence,
σ1 ∈ PII ∧ σ2 ∈ PII → (σ1 ⊕ σ2) ∈ PII
That is, merging personal identifiable information with personal identifiable information produces personal identifiable information. In addition, PII is a closed set under different operations that construct complex pinfons by mixing pinfons with non-identifying information.

Reducibility to non-identifiability

Identifiers are the basic pinfons. Removing all identifiers from a pinfon converts it to non-identifying information. Adding identifiers to any piece of non-identifying information converts it to a pinfon:
(σ1 ∈ PII) ÷ (σ2 ∈ ID) ↔ σ3 ∈ NII, where σ2 is a sub-infon of σ1.
Axiom 10 states that personal identifiable information minus a unique personal identifier is non-identifying information, and vice versa. “÷” here denotes removing σ2. We assume that a single σ2 ∈ ID is embedded in σ1; however, the proposition can be generalized to apply to multiple identifiers, such that removing all identifiers produces σ3 ∈ NII.

Atomicity

Furthermore, we define atomic and non-atomic (compound) types of pinfons. Let APII = the set of atomic personal identifiable information. Each piece of atomic personal identifiable information is a special type of pinfon called an apinfon. As we will see later, cpinfons can be reduced to apinfons, thus simplifying the analysis of PII. Formally, the set APII is defined as follows:
σ ∈ PII ∧ nσ = 1 ↔ σ ∈ APII
That is, an apinfon is a pinfon with a single human referent. Notice that σ may embed several identifiers of the same person, yet the referent is still one. Notice also that apinfons can be basic (a single identifier) or complex (a single identifier plus non-identifiable information).

Non-atomicity

Let CPII = the set of compound personal identifiable information. Each piece of compound personal identifiable information is a special type of pinfon called a cpinfon. Formally, the set CPII is defined as follows:
σ ∈ PII ∧ nσ > 1 ↔ σ ∈ CPII
That is, a cpinfon is a pinfon with more than one human referent. Notice that cpinfons are always complex since they must have at least two apinfons (two identifiers). The apinfon (atomic personal identifiable information) is the “unit” of personal identifiable information. It includes one identifier and non-identifiable information. We assume that at least some of the non-identifiable information is about the proprietor. In theory this is not necessary: suppose that an identifier is appended to a random piece of non-identifiable information (noise); in the PII theory the result is still (complex) atomic PII. In general, mixing noise with information preserves information.

Reducibility to atomicity

Any cpinfon is privacy-reducible to a set of apinfons (atomic personal identifiable information). For example, John and Mary are in love can be privacy-reduced to the apinfons John and someone are in love and Someone and Mary are in love. Notice that our PII theory is a syntax-based (structural) theory. It is obvious that the privacy-reducibility of compound personal identifiable information causes a loss of “semantic equivalence,” since the identities of the referents in the original information are separated. Semantic equivalence here means preserving the totality of information, the pieces of atomic information, and their link. Privacy reducibility is expressed by the following axiom:
σ ∈ CPII ↔ <<σ1, σ2, …, σm>>, where σi ∈ APII, m = nσ, 1 ≤ i ≤ m, and {PROP(σ1), PROP(σ2), …, PROP(σm)} = PROP(σ).
The reduction process produces m pieces of atomic personal identifiable information with m different proprietors. Notice that the set of resultant apinfons produces a compound pinfon. This preserves the totality of the original cpinfon through linking its apinfons together as members of the same set.

V. CATEGORIZATION OF ATOMIC PERSONAL IDENTIFIABLE INFORMATION

In this section, we identify categories of apinfons. Atomic personal identifiable information provides a foundation for structuring pinfons, since compound personal identifiable information can be reduced to a set of apinfons. We concentrate on reducing all given personal identifiable


information to sets of apinfons. Justification for this will be discussed later.

A. Eliminating ninfons embedded in an apinfon
Organizing a database of personal identifiable information requires filtering and simplifying apinfons into more basic apinfons in order to make the structuring of pinfons easier. Axiom (9) tells us that pinfons may carry non-identifiable information, ninfons. This non-identifiable information may be random noise or information not directly about the proprietor. Removing random noise is certainly an advantage in designing a database. Identifying information that is not about the proprietor clarifies the boundary between PII and NII. A first concern when analyzing an apinfon is projecting (isolating, factoring out) information about any entities besides the proprietor. Consider the apinfon John’s car is fast. This is information about John and about a car of his. This apinfon can be projected as:

⊗ (John’s car is fast) ⇒ {The car is fast, John has a car}, where ⇒ is a production operator.

The information John’s car is fast embeds the “pure” apinfon John has a car and the ninfon The car is fast. John has a car is information about a relationship that John has with another object in the world. This last piece of information is an example of what we call self information. Self information (a sapinfon = self atomic pinfon) is information about a proprietor, his/her aspects (e.g., tall, short), or his/her relationship with non-human objects in the world; it is thus useful to further reduce apinfons (atomic) to sapinfons (self).

A sapinfon is related to the concept of “what the piece of apinfon is about.” In the theory of aboutness, this question is answered by studying the text structure and the assumptions of the source about the receiver (e.g., the reader). We formalize aboutness in terms of the procedure ABOUT(σ), which produces the set of entities/objects that σ is “talking” about. In our case, we aim to reduce any self infon σ to σ′ such that ABOUT(σ) is PROP(σ′).

Self atomic information represents information about the following:
• Aspects of the proprietor (identification, character, acts, etc.)
• His or her association with non-person “things” (e.g., house, dog, organization, etc.)
• His or her relationships with other persons (e.g., Smith saw a blond woman)

With regard to non-objects, of special importance for privacy analysis are aspects of persons expressed by sapinfons. Aspects of a person include his/her (physical) parts, character, acts, condition, name, health, race, handwriting, blood type, manner, and intelligence. The existence of these aspects depends on the person, in contrast to (physical or social) objects associated with him/her, such as his/her house, dog, spouse, job, professional associations, etc.

Let SAPII denote the set of sapinfons (self personal identifiable information).

14. Aboutness proposition
σ ∈ SAPII ↔ ABOUT(σ) = PROP(σ)
That is, atomic personal identifiable information σ is said to be self personal identifiable information (a sapinfon) if its subject is its proprietor. The term “subject” here means what the entity is about when the information is communicated. The mechanism (e.g., manual) that converts APII to SAPII has yet to be investigated.

B. Sapinfons involving aspects of the proprietor or relationships with non-persons
We further simplify sapinfons. Let OPJ(σ ∈ SAPII) be the set of objects in σ. SAPII is of two types, depending on the number of objects embedded in it: singleton (ssapinfon) and multitude (msapinfon). The set of ssapinfons, SSAPII, is defined as:

15. Singleton proposition
σ ∈ SSAPII → σ ∈ SAPII ∧ (PROP(σ) = OPJ(σ))
That is, the proprietor of σ is its only object.

The set of msapinfons, MSAPII, is defined as follows:

16. Multitude proposition
σ ∈ MSAPII → σ ∈ SAPII ∧ (|OPJ(σ)| > 1)
That is, σ embeds other objects besides its proprietor. We also assume logical simplification that eliminates conjunctions and disjunctions of SAPII [5].

Now we can declare that the sphere of personal identifiable information (piiSphere) for a given proprietor is the database that contains:
1. All ssapinfons and msapinfons of the proprietor, including their arrangement in super-infons (e.g., to preserve compound personal identifiable information).
2. Non-identifiable information related to the piiSphere of the proprietor, as discussed next.

C. What is related non-identifiable information?
Consider the msapinfon Alice visited clinic Y. It is an msapinfon because it represents a relationship (not an aspect) of the proprietor Alice with an object, the clinic. Information about the clinic may or may not be privacy-related information. For example, the year of opening, the number of beds, and other information about the clinic are not privacy-related information; thus, such information ought not be included in Alice’s piiSphere. However, when the information is that the clinic is an abortion clinic, then Alice’s piiSphere ought to include this non-identifiable information about the clinic.

As another example, in terms of database tables, consider the following three tables representing the database of a company:
Customer (Civil ID, Name, Address, Product ID)
Product (ID, Price, Factory)
Factory (Product ID, Product Location, Inventory)

The customer’s piiSphere includes:
- Ssapinfons (aspects of the customer): Civil ID, Name
- Msapinfons (relationships with non-person objects): Address, Product ID

However, information about Factory is not information related to the customer’s piiSphere. Now suppose that we have the following database:
Person (Name, Address, Place of work)
Place of work (Name, Owner)
If it is known that the owner of the place of work is the Mafia, then the information related to the person’s piiSphere extends beyond the name of the place of work. The decision about the boundary between a certain piiSphere and its related non-identifiable information is difficult to formalize.

Fig. 2 shows a conceptualization of the piiSpheres of two proprietors that have compound PII. Dark circles A–G represent possible non-person objects. For example, object A participates in an msapinfon (e.g., Factory, Address, and Place of work in the previous examples). Object A has its own aspects (the white circle around A) and relationships (e.g., with F), where some information may be privacy-significant to the piiSphere of the proprietor. Even the relationship between the two proprietors may have its own sphere of information (the white circle around E). E signifies a connection among a set of apinfons (atomic PII), since we assume that all compound PII has been reduced to atomic PII. For example, the infon {Alice is the mother of a child in the orphanage, John is the child of a woman who gave him up} is a cpinfon with two apinfons. If Alice and John are the proprietors, then E in the figure preserves the connection between these two apinfons in the two piiSpheres of Alice and John.

Figure 2. Conceptualization of piiSpheres of two proprietors.
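Returning to the company database above, a small sketch of collecting the customer's piiSphere; the column classification map simply encodes the discussion and is an assumption of the example:

CUSTOMER_COLUMNS = {
    "Civil ID": "ssapinfon",    # aspect of the customer (identifier)
    "Name": "ssapinfon",        # aspect of the customer
    "Address": "msapinfon",     # relationship with a non-person object
    "Product ID": "msapinfon",  # relationship with a non-person object
}

def pii_sphere(columns):
    # Group the table's columns into the proprietor's piiSphere.
    sphere = {"ssapinfons": [], "msapinfons": []}
    for name, kind in columns.items():
        sphere[kind + "s"].append(name)
    return sphere

# Factory columns never enter the map, so they stay outside the sphere.
print(pii_sphere(CUSTOMER_COLUMNS))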

VI. JUSTIFICATIONS FOR PII DATABASES

We concentrate on what we call the PII database, PIIDB, which contains personal identifiable information and information related to it.

A. Security requirement
We can distinguish two types of information security: (1) personal identifiable information security and (2) non-identifiable information security. While the security requirements of NII are concerned with the traditional system characteristics of confidentiality, integrity, and availability, PII security lends itself to unique techniques pertaining only to PII. The process of protecting PII involves (1) protection of the identities of the proprietor and (2) protection of the non-identity portion of the PII.


Of course, all information security tools, such as encryption, can be applied in this context, yet other methods (e.g., anonymization) utilizing the unique structure of PII as a combination of identities and other information can also be used. Data-mining attacks on PII aim to determine the identity of the proprietor(s) from non-identifiable information, for example, determining the identity of a patient from anonymized health records that give age, sex, and ZIP code (k-anonymization). Thus, PII lends itself to unique techniques that can be applied in the protection of this information.

Another important issue that motivates organizing PII separately is that any intrusion on PII involves parties in addition to the owner of the information (e.g., a company, the proprietors, and other third parties such as a privacy commissioner). For example, a PII security system may require immediately alerting the proprietor that an intrusion on his/her PII has occurred. An additional point is that the sensitivity of PII is in general valued more highly than the sensitivity of other types of information. PII is more “valuable” than non-PII because of its privacy aspect, as discussed previously. Such considerations imply a special security status for PII. The source of this valuation is instigated by moral considerations [7].

B. Policy requirement
Some policies applied to PII are not applicable to NII (e.g., consent, opt-in/out, the proprietor’s identity management, trust, privacy mining). While NII security requirements are concerned with the traditional system characteristics of confidentiality, integrity, and availability, PII privacy requirements are also concerned with such issues as purpose, privacy compliance, transborder data flow, third-party disclosure, etc. Separating PII from NII can reduce the complex policies required to safeguard sensitive information where multiple rules are applied, depending on who is accessing the data and what the function is. In general, PIIDB goes beyond mere protection of data:
1. PIIDB identifies the proprietor’s piiSphere and provides security, policy, and tools for the piiSphere.
2. PIIDB provides security, policy, and tools only for the proprietor’s piiSphere, thus conserving privacy efforts.
3. PIIDB identifies inter-piiSphere relationships (proprietors’ relationships with each other) and provides security, policy, and tools to protect the privacy of these relationships.

VII. PERSONAL IDENTIFIABLE INFORMATION DATABASE (PIIDB)

The central mechanism in PIIDB is an explicit declaration of proprietors in a table called PROPRIETORS that includes unique identifiers of all proprietors in the PIIDB. PROPRIETOR_TABLE contains a unique entry with an internal key (#proprietor) for each proprietor, in addition to other information such as pointer(s) to his/her piiSphere. The principle of uniqueness of proprietors’ identifiers requires that the internal key (#proprietor) is mapped one-to-one to the individual’s legal identity or physical location. This is an important feature in PIIDB to guarantee the consistency of information about persons. This “identity” uniquely identifies the piiSphere and distinguishes one piiSphere from another. Thus, if we have a PIIDB of three individuals, then we have three entries such that each leads (denoted as ⇒) to a piiSphere:
PROPRIETOR_TABLE: {(#proprietor1, …) ⇒ piiSphere of proprietor 1, (#proprietor2, …) ⇒ piiSphere of proprietor 2, (#proprietor3, …) ⇒ piiSphere of proprietor 3}.
The “…” denotes the possibility of other information in the table.

What is the content of each piiSphere? The answer is set(s) of atomic PIIs and related information. Usually, database design begins by identifying data items, including objects and attributes (Employee No., Name, Salary, Birth, Date of Employment, etc.). Relationships among data items are then specified (e.g., data dependencies). Semantically oriented graphs (e.g., ER graphs [13]) are sometimes used at this level. Finally, a set of tables is declared, such as the following:
T1 = Father (ID, Name, Details)
T2 = Mother (ID, Name, Details)
T3 = Child (ID, Name, Details)
T4 = Case (No., Father ID, Mother ID, Child ID)
T1, T2, and T3 represent atomic PIIs of fathers, mothers, and children, respectively. T4 embeds compound PIIs. In PIIDB, if R is a compound PII, then it is represented by the set of atomic PIIs:
{R′ = Case (No., Father ID), R′′ = Case (No., Mother ID), R′′′ = Case (No., Child ID)}
where R′ is in the piiSphere of the father, R′′ is in the piiSphere of the mother, and R′′′ is in the piiSphere of the child. Such a schema permits complete isolation of atomic PIIs from each other. This privacy requirement is essential in many personal identifiable databases. For example, in orphanages it is possible not to allow access to the information that a record exists in the database for a mother. In the example above, the access policies for the three piiSpheres are independent of each other. At the conceptual level, reconstructing the relations among proprietors (cpinfons) is a database design problem (e.g., internal pointers among tables across piiSpheres).

PIIDB obeys all the propositions defined previously. Some of these propositions can be utilized as privacy rules. As an illustration of the application of these propositions, consider a privacy constraint that prohibits disclosing σ ∈ PII. By proposition (9) above, mixing (e.g., amending, inserting, etc.) σ with any other piece of information makes the disclosure constraint apply to the combined piece of information. In this case a general policy is: applying a protection rule to σ1 ∈ PII implies applying the same protection to (σ1 ⊕ σ2), where σ2 ∉ PII.
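A minimal SQLite sketch of this organization (our illustration; all table and column names are invented): the proprietor table plus per-sphere atomic projections of the compound Case table:

import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# PROPRIETOR_TABLE: one entry per proprietor; the internal key is mapped
# one-to-one to the person's identity and leads to his/her piiSphere.
cur.execute("CREATE TABLE proprietor ("
            "proprietor_id INTEGER PRIMARY KEY, "
            "legal_identity TEXT UNIQUE NOT NULL)")

# Case (No., Father ID, Mother ID, Child ID) is stored as three atomic
# projections so each piiSphere can carry its own access policy.
for sphere in ("father", "mother", "child"):
    cur.execute(f"CREATE TABLE case_{sphere} ("
                f"case_no INTEGER, "
                f"{sphere}_id INTEGER REFERENCES proprietor(proprietor_id))")

# The shared case_no is the internal pointer that reconstructs the
# compound PII across piiSpheres at the conceptual level.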


VIII. CONCLUSION

The theory of PII infons can provide a theoretical foundation for technical solutions to problems of protecting personal identifiable information. In such an approach, privacy rules form an integral part of the design of the system. PII can be identified (and hence becomes an object of privacy rules) during processing of information that may mix it with other types of information. The different types of basic PII infons provide an opportunity for tuning the design of an information system. We propose analyzing and processing PII as a database with clear boundary lines separating it from non-identifiable information, which facilitates meeting the unique requirements of PII. A great deal of work is needed at the theoretical and design levels. An expanded version of this paper includes a complete formalization of the theory. Additionally, we are currently applying the approach to the analysis of an actual database of a government agency that handles social problems and where a great deal of PII is collected.


REFERENCES
[1] R. Agrawal, J. Kiernan, R. Srikant, and Y. Xu, “Hippocratic databases,” 28th International Conference on Very Large Databases (VLDB), Hong Kong, China, August 2002.
[2] S. Al-Fedaghi, “How to calculate the information privacy,” Proceedings of the Third Annual Conference on Privacy, Security and Trust, St. Andrews, New Brunswick, Canada, October 2005, pp. 12–14.
[3] S. Al-Fedaghi, G. Fiedler, and B. Thalheim, “Privacy enhanced information systems,” Information Modelling and Knowledge Bases XVII, vol. 136, Frontiers in Artificial Intelligence and Applications, Y. Kiyoki, J. Henno, H. Jaakkola, and H. Kangassalo, Eds. IOS Press, February 2006.
[4] S. Al-Fedaghi, “Beyond purpose-based privacy access control,” 18th Australasian Database Conference, Ballarat, Australia, January 29–February 2, 2007.
[5] S. Al-Fedaghi, “How sensitive is your personal information?” 22nd ACM Symposium on Applied Computing (ACM SAC 2007), Seoul, Korea, March 11–15, 2007.
[6] S. Al-Fedaghi, “Crossing privacy, information, and ethics,” 17th International Conference of the Information Resources Management Association (IRMA 2006), Washington, DC, USA, May 21–24, 2006. [Also published in Emerging Trends and Challenges in Information Technology Management, Mehdi Khosrow-Pour (Ed.), IGI Publishing, Hershey, PA, USA.]
[7] J. Byun, E. Bertino, and N. Li, “Purpose based access control of complex data for privacy protection,” Proceedings of the Tenth ACM Symposium on Access Control Models and Technologies (SACMAT’05), Stockholm, Sweden, June 1–3, 2005.
[8] R. Clarke, “Introduction to dataveillance and informational privacy, and definitions of terms,” 2006. http://www.anu.edu.au/people/Roger.Clarke/DV/Intro.html#InfoPriv
[9] K. Devlin, Logic and Information. New York: Cambridge University Press, 1991.
[10] J. Kang, “Information privacy in cyberspace transactions,” Stanford Law Review 1193, pp. 1212–1220, 1998.
[11] C. J. Bennett and C. D. Raab, The Governance of Privacy: Policy Instruments in Global Perspective. United Kingdom: Ashgate, 2003.
[12] L. Sweeney, “Weaving technology and policy together to maintain confidentiality,” Journal of Law, Medicine and Ethics, vol. 25, pp. 98–110, 1997.
[13] B. Thalheim, Entity-Relationship Modeling: Foundations of Database Technology. Berlin: Springer, 2000.

AUTHOR PROFILES

Sabah Al-Fedaghi holds an MS and a PhD in computer science from Northwestern University, Evanston, Illinois, and a BS in computer science from Arizona State University, Tempe. He has published papers in journals and contributed to conferences on topics in database systems, natural language processing, information systems, information privacy, information security, and information ethics. He is an associate professor in the Computer Engineering Department, Kuwait University. He previously worked as a programmer at the Kuwait Oil Company and headed the Electrical and Computer Engineering Department (1991–1994) and the Computer Engineering Department (2000–2007).

Bernhard Thalheim holds an MSc in mathematics from Dresden University of Technology, a PhD in mathematics from Lomonosov University Moscow, and a DSc in computer science from Dresden University of Technology. His major research interests are database theory, logic in databases, discrete mathematics, knowledge systems, and systems development methodologies, in particular for Web information systems. He has been program committee chair and general chair for several international events, including ADBIS, ASM, EJC, ER, FoIKS, MFDBS, NLDB, and WISE. He is currently a full professor at Christian-Albrechts-University at Kiel in Germany and was previously with Dresden University of Technology (1979–1988) (associate professor beginning in 1986), Kuwait University (1988–1990) (visiting professor), Rostock University (1990–1993) (full professor), and Cottbus University of Technology (1993–2003) (full professor).


Improving Effectiveness Of E-Learning In Maintenance Using Interactive-3D

Lt. Dr. S Santhosh Baboo
Reader, P.G. & Research Dept of Computer Science, D.G. Vaishnav College, Chennai 106

Nikhil Lobo
Research Scholar, Bharathiar University
[email protected]

Abstract—In aerospace and defense, training is being carried out on the web by viewing PowerPoint presentations, manuals and videos that are limited in their ability to convey information to the technician. Interactive training in the form of 3D is a more cost effective approach compared to creation of physical simulations and mockups. This paper demonstrates how training using interactive 3D simulations in e-learning achieves a reduction in the time spent in training and improves the efficiency of a trainee performing the installation or removal.

Keywords: Interactive 3D; E-Learning; Training; Simulation

I. INTRODUCTION

Some procedures are found to be hazardous and need to be demonstrated to maintenance personnel without damaging equipment or injuring personnel. These procedures require continuous practice and, when necessary, retraining. The technician also has to be trained in problem-solving and decision-making skills. The training should consider technicians widely distributed with various skills and experience levels. The aerospace and defense industry in the past has imparted training using traditional blackboard outlines, physical demonstrations, and video, which are limited in their ability to convey information about tasks, procedures, and internal components. Studies have demonstrated that a student remembers 90% of what they do, in contrast to 10% of what they read. A need has now arisen for interactive three-dimensional content for visualization of objects, concepts, and processes. Interactive training in the form of 3D is a more cost-effective approach compared to the creation of physical simulations and mockups. Online training generally takes only 60 percent of the time required for classroom training on the same material [1].

For any removal or installation procedure, it is important that the e-learning content demonstrate the following aspects:
• What steps are to be carried out before starting a procedure?
• What special tools, consumables, and spares are required for carrying out the procedure?
• What safety conditions are to be adhered to when carrying out the procedure?
• What steps does a technician need to perform as part of the procedure?
• What steps are to be carried out after completing a procedure?

II. TRAINING EFFECTIVENESS

Statistics on the effectiveness of training convey that trainees generally remember more of what they see than of what they read or hear, and more of what they hear, see, and do than of what they hear and see.

Figure 1. Statistics of training effectiveness

Three-dimensional models are widely used in the design and development of products because they efficiently represent complex shape information. These 3D models can be used in e-learning to impart training. The training process can be greatly enhanced by allowing trainees to interact with these 3D models. Moreover, by using WWW-based simulation, a company can make a single copy of the models available over the WWW instead of mailing demonstration software to potential customers. This reduces costs and avoids customer frustration with installation and potential hardware compatibility problems [2]. This simulation-based e-learning is designed to simplify and control reality by removing complex systems that exist in real life, so the learner can focus on the knowledge to be learnt effectively and efficiently [3].


The objectives of using interactive 3D for training are as follows:
• Reducing the time spent in training by 30% or more
• Reducing the time spent in performing the installation by a trainee by 25% or more

III. COURSEWARE STRUCTURE

Since the web provides unprecedented flexibility and multimedia capability for delivering course materials, more and more courses are being delivered through the web. Existing course materials such as PowerPoint presentations, manuals and videos have a limited ability to convey information to the technician, and most current Internet-based educational applications do not present 3D objects, even though 3D visualization is essential in teaching most engineering ideas. Interactive learning is essential both for acquiring knowledge and for developing the physical skills needed to carry out maintenance-related activities. Interaction and interactivity are fundamental to all dynamic systems, particularly those involving people [4]. Although there are images of three-dimensional graphics on the web, their two-dimensional format does not imitate the actual environment, because the real objects are three-dimensional. Hence there is a need for integrating three-dimensional components and interactivity, creating three-dimensional visualization that provides technicians an opportunity to learn through experimentation and research. The e-learning courseware is linearly structured, with three modules for each removal or installation procedure; a technician is required to complete each module before proceeding to the next. These modules, Part Familiarization, Procedure and Practice, provide a new and creative method for presenting removal or installation procedures effectively to technicians.

Figure 2. Part Familiarization, Procedure and Practice modules

IV. PART FAMILIARIZATION

This module familiarizes the technician with the parts that constitute the assembly to be installed or removed. It provides technicians with information as to what each part looks like and where it is located in the assembly. An assembly is displayed with a part list comprising the parts that make up the assembly. The assembly is displayed as a 3D model in the Main Window, allowing the technician to rotate and move it. Individual parts can be identified by selecting them from the parts list and by viewing the model in a "context view". Context View displays only the part selected in the part list with 100% opacity while reducing the opacity of the other parts in the assembly, which enables easy identification of the selected part in the assembly. Individual parts that are selected are also displayed in a Secondary Window, allowing the technician to move or rotate the individual part for a better understanding of that particular part. Each part is labeled with a unique part number and a nomenclature.

Figure 3. Part Familiarisation module and Context View
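The context-view behaviour described above can be made concrete with a very small sketch. The Part class, part numbers and opacity property below are hypothetical illustrations; the paper does not describe the actual implementation, so treat this only as one way such a view could be driven.

    from dataclasses import dataclass

    @dataclass
    class Part:
        part_no: str            # unique part number (hypothetical ids)
        nomenclature: str
        opacity: float = 1.0

    def apply_context_view(assembly, selected_part_no, dimmed=0.15):
        # The selected part keeps 100% opacity; every other part is faded
        # so the selection stands out inside the assembly.
        for part in assembly:
            part.opacity = 1.0 if part.part_no == selected_part_no else dimmed

    assembly = [Part("P-001", "Housing"), Part("P-002", "Impeller"), Part("P-003", "Seal")]
    apply_context_view(assembly, "P-002")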

V. PROCEDURE

Once the technician has a clear understanding of the assembly and its parts, he or she advances to this module to learn how to accurately remove or install the assembly. In Procedure, the removal or installation is demonstrated in animated form, one step at a time. This allows the technician to study the step-by-step removal or installation process using animation that can be replayed at any time. The use of three-dimensional models in the animation imitates the real removal or installation process, helping technicians to understand the concepts very clearly. The removal or installation of each part of the assembly is animated along with callouts indicating the part number, nomenclature, tool required and torque. The animations are presented one step at a time to ensure technicians are able to perform the removal or installation process in the right order. Safety conditions such as warnings and cautions are also displayed along with the animation. A warning is used to alert the reader to possible hazards which may cause loss of life, physical injury or ill health; a caution is used to denote the possibility of damage to material but not to personnel. A voice-over accompanies the animation to enable the technician to understand the procedure better.

Figure 4. Procedure module

VI. PRACTICE

Before performing any real procedure, technicians are first evaluated using a removal or installation procedural simulation. Practice allows a technician to perform an installation or removal procedure on standard desktops, laptops and Tablet PCs, one step at a time, to ensure that the technician clearly understands the procedure and is ready to perform it on an actual assembly. Three-dimensional models

are used in the simulation to enable technicians to feel as though they were performing the removal or installation procedure on the actual assembly. Using three-dimensional simulations, technicians can perform, view and understand the procedure in a three-dimensional view. In installation, parts are dragged from a bin to create an assembly; in removal, parts are dragged from the assembly into a bin. In either case, if a wrong part is installed or removed, an alert box is displayed on the screen, preventing the technician from proceeding until the correct part is installed or removed.

Figure 5. Practice module

VII. SUCCESSFUL DEPLOYMENT

A. Turbojet Engines
Ineffective training of geographically distributed technicians had resulted in improper troubleshooting procedures being carried out on turbojet engines, compromising the reliability of the engines. Technicians are now trained using interactive 3D simulations of the engine explaining its description, operation, maintenance and troubleshooting procedures, resulting in an estimated saving of $1.5 million along with an improved maintenance turn-around time. Technicians can now practice these procedures on standard desktops, laptops and Tablet PCs, eliminating geographic barriers and imparting a high standard of training.

Figure 6. Turbojet Engine

B. Anti-tank and Anti-personnel Mines
There is a difficulty in training soldiers in the Army on the handling and disposal of both anti-tank and anti-personnel mines. Anti-tank mines are large and heavy (usually weighing more than 5 kilos) and are triggered by heavy vehicles such as tanks; they contain enough explosive to destroy the vehicle that runs over them. Anti-personnel mines are smaller and lighter (weighing as little as 50 grams), are triggered much more easily and are designed to wound people. It is critical that soldiers have access to technical information about a landmine and details regarding its safe handling and disposal. Creation of 3D simulations of landmines, allowing soldiers to view details of their parts, watch safety procedural animations and interact with them, resulted in soldiers having a greater understanding and knowledge of the landmines they encounter.

Figure 7. Anti-tank and anti-personnel mines

C. M79 Grenade Launcher
The Army had identified that a lack of access to M79 grenade launchers during familiarization training had resulted in the deployment of soldiers with limited knowledge and experience levels. To overcome this hurdle, 3D-enabled M79 Grenade Launcher Virtual Task Trainers were provided, simulating the single-shot, shoulder-fired, break-open grenade launcher, which fires a 40x46 mm grenade. Soldiers are now able to receive familiarization training regardless of geographic barriers or lack of access to weapons.

Figure 8. M79 Grenade Launcher

D. Black Shark Torpedo
The Black Shark torpedo is designed for launching from submarines or surface vessels and is meant to counter the threat posed by any type of surface or underwater target. Due to the fast pace of operations, Navy technicians received little to no training on the Black Shark torpedo, which resulted in improper operating procedures and preventative maintenance checks. Web-enabled 3D simulations have been developed, allowing technicians to have hands-on practice anytime and anywhere, along with familiarization with parts, maintenance procedures and repair tasks. This has resulted in technicians showing a greater level of interest using 3D simulations compared to existing course materials such as PowerPoint presentations, manuals and videos.


Figure 9. Black Shark Torpedo

E. Phoenix Missile
Technicians were constantly facing operational difficulties with the Phoenix missile due to the inability to demonstrate the operation of its internal components. The Phoenix is a long-range air-to-air missile. An interactive 3D simulation demonstrating its internal components and their functioning was developed, allowing technicians to view part information, rotate the model and cross-section the Phoenix missile.

Figure 10. Phoenix Missile

VIII. CASE STUDIES

A case study to find out the effort saved using interactive 3D was carried out on the following installations:
• Hydraulic Pump
• Hydraulic Reservoir
• High Pressure Filter
• Anti-Skid Control Valve
• Quick Disconnect Coupling Suction

Table I gives the time spent using traditional tools like video, Table II the time spent using interactive 3D, and Table III the percentage of effort saved.

TABLE I. USING TRADITIONAL TOOLS LIKE VIDEO FOR TRAINING

Installation                         Time spent in training a      Actual time spent to complete
                                     trainee using video clips     the installation by a trainee
Hydraulic Pump                       3 hours                       30 minutes
Hydraulic Reservoir                  4 hours 30 minutes            1 hour
High Pressure Filter                 3 hours                       45 minutes
Anti-Skid Control Valve              2 hours 30 minutes            45 minutes
Quick Disconnect Coupling Suction    2 hours 15 minutes            30 minutes

TABLE II. USING INTERACTIVE 3D FOR TRAINING

Installation                         Time spent in training a      Actual time spent to complete
                                     trainee using Interactive 3D  the installation by a trainee
Hydraulic Pump                       2 hours                       20 minutes
Hydraulic Reservoir                  3 hours                       45 minutes
High Pressure Filter                 1 hour 30 minutes             30 minutes
Anti-Skid Control Valve              1 hour 30 minutes             30 minutes
Quick Disconnect Coupling Suction    1 hour 30 minutes             20 minutes

TABLE III. PERCENTAGE OF EFFORT SAVED IN USING INTERACTIVE 3D FOR TRAINING

Installation                         Saving in training time       Saving in installation time
Hydraulic Pump                       33.3%                         34%
Hydraulic Reservoir                  33.3%                         25%
High Pressure Filter                 50%                           33.3%
Anti-Skid Control Valve              40%                           33.3%
Quick Disconnect Coupling Suction    50%                           34%
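The percentages in Table III follow directly from Tables I and II as (traditional time - interactive 3D time) / traditional time x 100. A minimal Python sketch of this calculation, using the Hydraulic Reservoir row as a worked example:

    # Effort-saving calculation behind Table III; times are in minutes.
    def percent_saved(traditional_min, interactive_min):
        return (traditional_min - interactive_min) / traditional_min * 100.0

    video  = {"training": 270, "installation": 60}   # Table I: 4 h 30 min, 1 hour
    threed = {"training": 180, "installation": 45}   # Table II: 3 hours, 45 minutes

    for phase in ("training", "installation"):
        print(phase, round(percent_saved(video[phase], threed[phase]), 1), "% saved")
    # -> training 33.3 % saved, installation 25.0 % saved (Table III, row 2)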

IX. CONCLUSION

The metrics collected and analyzed during the implementation of interactive 3D demonstrate the following benefits in maintenance:
• Reducing the time spent in training by 30% or more
• Reducing the time spent in performing the installation by a trainee by 25% or more

In conclusion, interactive 3D reduces the amount of time spent in hands-on training on real equipment, and protects trainees from injury and equipment from damage when performing procedures that are hazardous. It provides an opportunity for technicians to study the internal components of equipment, and this training can be provided to technicians who are geographically distributed.

REFERENCES
[1] Wright, Elizabeth E., "Making the Multimedia Decision: Strategies for Success," Journal of Instructional Delivery Systems, Winter 1993, pp. 15-22.
[2] Tamie L. Veith, "World Wide Web-based Simulation," Int. J. Engng Ed., Vol. 14, No. 5, 1998, pp. 316-321.
[3] Alessi and Trollip, "Computer Based Instruction: Methods and Development," New Jersey: Prentice Hall, 1991.
[4] Jong, Ton, Sarti, Luigi, "Design and Production of Multimedia and Simulation-Based Learning Material," Kluwer Academic Publishers, Dordrecht, 1993.


An Empirical Comparative Study of Checklist-based and Ad Hoc Code Reading Techniques in a Distributed Groupware Environment

Olalekan S. Akinola
Department of Computer Science, University of Ibadan, Nigeria
[email protected]

Adenike O. Osofisan
Department of Computer Science, University of Ibadan, Nigeria
[email protected]

Abstract
Software inspection is a necessary and important tool for software quality assurance. Since it was introduced by Fagan at IBM in 1976, arguments have existed as to which method should be adopted to carry out the exercise, whether it should be paper-based or tool-based, and what reading technique should be used on the inspection document. Extensive work has been done to determine the effectiveness of reviewers in a paper-based environment when using ad hoc and checklist reading techniques. In this work, we take software inspection research further by examining whether there is any significant difference in the defect detection effectiveness of reviewers when they use either ad hoc or checklist reading techniques in a distributed groupware environment. Twenty final-year undergraduate students of computer science, divided into ad hoc and checklist reviewer groups of ten members each, were employed to inspect a medium-sized Java code synchronously on groupware deployed on the Internet. The data obtained were subjected to tests of hypotheses using independent t-tests and correlation coefficients. Results from the study indicate that there are no significant differences in the defect detection effectiveness, the effort in terms of time taken in minutes, and the false positives reported by reviewers using either ad hoc or checklist-based reading techniques in the distributed groupware environment studied.

Key words: Software Inspection; Ad hoc; Checklist; groupware.

I. INTRODUCTION

A software product could be judged to be of high or low quality depending on who is analyzing it. Thus, quality software can be said to be "software that satisfies the needs of the users and the programmers involved in it" [28]. Pfleeger highlighted four major criteria for judging the quality of software: (i) it does what the user expects it to do; (ii) its interaction with the computer resources is satisfactory; (iii) the user finds it easy to learn and to use; and (iv) the developers find it convenient in terms of design, coding, testing and maintenance. In order to achieve the above criteria, software inspection was introduced. Software inspection has become widely used [36] since it was first introduced by Fagan [25] at IBM. This is due to its potential benefits for software development, the increased demand for quality certification in software (for example, ISO 9000 compliance requirements), and the adoption of the Capability Maturity Model as a development methodology [27]. Software inspection is a necessary and important tool for software quality assurance. It involves strict and close examinations carried out on development products to detect defects, violations of development standards and other problems [18]. The development products could be specifications, source code, contracts, test plans and test cases [33, 4, 8]. Traditionally, the software inspection artifact (requirements, design or code) is presented on paper to the inspectors/reviewers. The advent of Collaborative Software Development (CSD) provides opportunities for software developers in geographically dispersed locations to communicate, and further to build and share common knowledge repositories [13]. Through CSD, distributed collaborative software inspection methodologies emerge, in which a group of reviewers in different geographical locations may log on synchronously or asynchronously online to inspect an inspection artifact. It has been hypothesized that, in order to gain credibility and validity, software inspection experiments have to be conducted in different environments, using different people, languages, cultures, documents, and so on [10, 12]; that is, they must be redone in other environments. The motivation for this work stems from this hypothesis. Specifically, the target goal of this research work is to determine whether there is any significant difference in the effectiveness of reviewers using the ad hoc code reading technique and those using the checklist reading technique in a distributed tool-based environment. Twenty final-year students of Computer Science were employed to carry out an inspection task on a medium-sized code in a distributed, collaborative environment. The students were divided into two groups: one group used the ad hoc code reading technique, while the second group used the checklist-based code reading technique (CBR). Briefly, the results obtained show that there is no significant difference


in the effectiveness of reviewers using the ad hoc reading technique and those using the checklist reading technique in the distributed environment. In the rest of this paper, we review related work in Section II. In Section III, we present the experimental planning and instruments used, as well as the subjects and the hypotheses set up for the experiment; threats to internal and external validity are also treated in that section. In Section IV, we present the results and the statistical tests carried out on the experimental data. Section V discusses our results, while Section VI states the conclusion and recommendations.

II. Related Works: Ad hoc versus Checklist-based Reading Techniques

Software inspection is as old as programming itself. It was introduced in the 1970s at IBM, which pioneered its early adoption and later evolution [25]. It is a way of detecting faults in software documents: requirements, designs or code. Recent empirical studies demonstrate that defect detection is more an individual than a group activity, as assumed by many inspection methods and refinements [20, 29, 22]. Inspection results depend on the inspection participants themselves and their strategies for understanding the inspected artifacts [19]. A defect detection or reading technique (as it is popularly called) is defined as the series of steps or procedures whose purpose is to guide an inspector in acquiring a deep understanding of the inspected software product [19]. The comprehension of inspected software products is a prerequisite for detecting subtle and/or complex defects, those often causing the most problems if detected in later life-cycle phases. According to Porter et al. [1], defect detection techniques range in prescription from intuitive, nonsystematic procedures, such as ad hoc or checklist techniques, to explicit and highly systematic procedures, such as scenarios or correctness proofs. A reviewer's individual responsibility may be general, to identify as many defects as possible, or specific, to focus on a limited set of issues such as ensuring appropriate use of hardware interfaces, identifying untestable requirements, or checking conformity to coding standards. Individual responsibilities may or may not be coordinated among the review team members. When they are not coordinated, all reviewers have identical responsibilities; in contrast, each reviewer in a coordinated team has different responsibilities.

The most frequently used detection methods are ad hoc and checklist. Ad hoc reading, by nature, offers very little reading support, since a software product is simply given to inspectors without any direction or guidelines on how to proceed through it and what to look for. However, ad hoc does not mean that inspection participants do not scrutinize the inspected product systematically; the term "ad hoc" only refers to the fact that no technical support is given to them for the problem of how to detect defects in a software artifact. In this case, defect detection fully depends on the skill, the knowledge and the experience of the inspector. Training sessions in program comprehension before the take-off of an inspection may help subjects develop some of these capabilities and alleviate the lack of reading support [19].

Checklists offer stronger, boilerplate support in the form of questions that inspectors are to answer while reading the document. These questions concern quality aspects of the document. Checklists are advocated in many inspection works, for example Fagan [24, 25], Dunsmore [2], Sabaliauskaite [12], Humphrey [35], and Gilb and Graham's manuscript [32], to mention a few. Although reading support in the form of a list of questions is better than none (such as ad hoc), checklist-based reading has several weaknesses [19]. First, the questions are often general and not sufficiently tailored to a particular development environment. A prominent example is the question: "Is the inspected artifact correct?" Although this checklist question provides a general framework for an inspector on what to check, it does not tell him or her in a precise manner how to ensure this quality attribute. In this way, the checklist provides little support for an inspector to understand the inspected artifact, yet this can be vital for detecting major application-logic defects. Second, instructions on how to use a checklist are often missing; that is, it is often unclear when, and based on what information, an inspector is to answer a particular checklist question. In fact, several strategies are feasible for addressing all the questions in a checklist. The following approach characterizes one end of the spectrum: the inspector takes a single question, goes through the whole artifact, answers the question, and takes the next question. The other end is defined by the following procedure: the inspector reads the document and afterwards answers the questions of the checklist. It is quite unclear which approach inspectors follow when using a checklist and how they achieve their results in terms of defects detected. The final weakness of a checklist is the fact that checklist questions are often limited to the detection of defects that belong to particular defect types. Since the defect types are based on past defect information, inspectors may not focus on defect types not previously detected and, therefore, may miss whole classes of defects. To address some of the presented difficulties, one can develop a checklist according to the following principles [19]:
• The length of a checklist should not exceed one page.
• The checklist questions should be phrased as precisely as possible.
• The checklist should be structured so that the quality attribute is clear to the inspector, and the questions give hints on how to assure the quality attribute.
• Additionally, the checklist should not be longer than a page, approximately 25 items [12, 32].


In practice, reviewers often use ad hoc or checklist detection techniques to discharge identical, general responsibilities. Some authors, especially Parnas and Weiss [9], have argued that inspections would be more effective if each reviewer used a different set of systematic detection techniques to discharge different specific responsibilities. Computer and/or Internet support for software inspections has been suggested as a way of removing the bottlenecks in the traditional software inspection process. The web approach makes software inspection much more elastic, in the form of asynchronicity and geographical dispersal [14]. The effectiveness of manual inspections depends on satisfying many conditions, such as adequate preparation, readiness of the work product for review, high-quality moderation, and cooperative interpersonal relationships; the effectiveness of tool-based inspection is less dependent upon these human factors [29, 26]. Stein et al. [31] are of the view that distributed, asynchronous software inspections can be a practicable method. Johnson [17], however, opined that thoughtless computerization of the manual inspection process may in fact increase the cost of inspections. To the best of our knowledge, many of the works in this line of research in the literature either report experiences in terms of lessons learned with using the tools, for instance Harjumaa [14] and Mashayekhi [34], or compare the effectiveness of tools with paper-based inspections, for instance Macdonald and Miller [23]. In the case of ICICLE [7], the only published evaluation comes in the form of lessons learned. In the case of Scrutiny, in addition to lessons learned [16], the authors also claim that tool-based inspection is as effective as paper-based inspection, but there is no quantifiable evidence to support this claim [15]. In this paper, we examine the feasibility of tool support for software code inspection, and determine whether there is any significant difference in the effectiveness of reviewers using ad hoc and checklist reading techniques in a distributed environment.

III. Experimental Planning and Design

A. Subjects
Twenty (20) final-year students of Computer Science were employed in the study. Ten (10) of the student reviewers used the ad hoc reading technique, without any aid provided for them in the inspection. The other ten used the checklist-based reading technique; the tool provided them with a checklist as an aid for the inspection.

B. Experimental Instrumentation and Set-up
Inspro, a web-based, distributed, collaborative code inspection tool, was designed and developed as the code inspection groupware used in the experiment. The experiment was run as a synchronous, distributed collaborative inspection with the computers on the Internet. One computer was configured as a server with a WAMP server installed on it, while the other computers served as clients to the server. The tool was developed by the authors using the Hypertext Preprocessor (PHP) web programming language and deployed on an Apache WAMP server. The student reviewers were oriented on the use of the Inspro web-based tool, as well as the code artifact, before the real experiment was conducted on the second day. The tool has the following features:
(i) The user interface is divided into three sections. One section displays the code artifact to be worked upon by the reviewers along with its line numbers, while another section displays the text box in which the reviewers key in the bugs found in the artifact. The third section, below, is optionally displayed to give the checklist to be used by the reviewers if they are in the checklist group, and displays nothing for the ad hoc inspection group.
(ii) Immediately a reviewer logs on to the server, the tool starts counting the time used for the inspection, and finally records the stop time when the submit button is clicked.
(iii) The date of the inspection, as well as the prompt for the name of the reviewer, is automatically displayed when the tool is activated.
(iv) The tool writes the output of the inspection exercise to a file, which is automatically opened when the submit button is clicked. This file is finally opened and printed for further analysis by the chief coordinator of the inspection.

Fig. 1 displays the user interface of the Inspro tool.


C. Experimental Artifact
The artifact used for this experiment was a 156-line Java code which accepts data into two 2-dimensional arrays. This small-sized code was used because the students involved had their first experience of code inspection in this experiment, even though they were given formal training on code inspection prior to the exercise. The experiment was conducted as a practical class in a Software Engineering course at the 400 level (CSC 433) in the Department of Computer Science, University of Ibadan, Nigeria. The arrays were used as matrices, and the major operations on matrices, such as sum, difference, product, determinant and transpose, were implemented in the program. All conditions for these operations were tested in the program. The code was developed and tested okay by the researcher before it was finally seeded with 18 errors: 12 logical and 6 syntax/semantic. The program accepts data into the two arrays, performs all the operations on them, and reports the output results of the computation if there were no errors. If there were errors in the form of an operational condition not being fulfilled for any of the operations, the program reports an appropriate error log for that operation.

Fig. 1 Inspro User Interface

D. Experimental Variables and Hypotheses
The experiment manipulated 2 independent variables: the number of reviewers per team (1, 2, 3, or 4 reviewers, excluding the code author) and the review method (ad hoc or checklist). Three dependent variables were measured for the independent variables: the average number of defects detected by the reviewers, that is, the defect detection effectiveness (DE); the average time spent on the inspection (T) in minutes; and the average number of false positives reported by the reviewers (FP). The defect detection effectiveness (DE) is the number of true defects detected by reviewers out of the total number of seeded defects in a code inspection artifact. The time measures the total time (in minutes) spent by a reviewer on inspecting an inspection artifact; inspection time is also a measure of the effort (E) used by the reviewers in a software inspection exercise. False positives (FP) are the perceived defects a reviewer reported that are actually not true defects.
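As a minimal illustration of these three measures, the sketch below computes DE, effort and FP for one reviewer, assuming each seeded defect carries an identifier; the identifiers are illustrative, not the authors' actual defect list.

    SEEDED = set(range(1, 19))                     # the 18 seeded defects

    def reviewer_measures(reported, minutes):
        true_hits = reported & SEEDED
        de = len(true_hits) / len(SEEDED) * 100    # defect detection effectiveness (%)
        fp = len(reported - SEEDED)                # false positives
        return round(de, 2), minutes, fp

    print(reviewer_measures({1, 3, 5, 7, 9, 99}, 31))
    # -> (27.78, 31, 1): the DE/effort/FP pattern of reviewer 1 (ad hoc) in Table 1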


Three hypotheses were stated for this experiment as follows.

Ho1: There is no significant difference between the effectiveness of reviewers using the Ad hoc and Checklist reading techniques in distributed code inspection.
Ho2: There is no significant difference between the effort taken by reviewers using the Ad hoc and Checklist techniques in distributed code inspection.
Ho3: There is no significant difference between the false positives reported by reviewers using the Ad hoc and Checklist techniques in distributed code inspection.

E. Threats to Validity
The question of validity draws attention to how far a measure really measures the concept that it purports to measure (Alan and Duncan, 1997). In this experiment, we therefore considered two important kinds of threats that may affect the validity of research in the domain of software inspection.

F. Threats to Internal Validity
Threats to internal validity are influences that can affect the dependent variables without the researcher's knowledge. We considered three such influences: (1) selection effects, (2) maturation effects, and (3) instrumentation effects. Selection effects are due to natural variation in human performance [1]. For example, if one-person inspections are done only by highly experienced people, then their greater-than-average skill can be mistaken for a difference in the effectiveness of the treatments. We limited this effect by randomly assigning team members for each inspection; this way, individual differences were spread across all treatments. Maturation effects result from the participants' skills improving with experience. Randomly assigning the reviewers and doing the reviews within the same period of time checked these effects. Instrumentation effects are caused by the artifacts to be inspected, by differences in the data collection forms, or by other experimental materials. In this study, this was negligible or did not take place at all, since all the groups inspected the artifacts within the same period of time on the same web-based tool.

G. Threats to External Validity
Threats to external validity are conditions that can limit our ability to generalize the results of experiments to industrial practice [1]. We considered three sources of such threats: (1) experimental scale, (2) subject generalizability, and (3) subject and artifact representativeness. Experimental scale is a threat when the experimental setting or the materials are not representative of industrial practice. This has a great impact on the experiment, as the material used (the matrix code) was not truly representative of what obtains in an industrial setting; the code document used was invented by the researchers. A threat to subject generalizability may exist when the subject population is not drawn from the industrial population. We tried to minimize this threat by incorporating 20 final-year students of Computer Science who had just concluded a 6-month industrial training in the second semester of their 300 level; the students selected were those who actually did their industrial training in software development houses. Threats regarding subject and artifact representativeness arise when the subject and artifact population is not representative of the industrial population; the explanations given earlier also account for this threat.

IV. Results

Table 1 shows the raw results obtained from the experiment. The "Defect (%)" column gives the percentage of true defects reported by the reviewers in each group (Ad hoc and Checklist) out of the 18 errors seeded in the code artifact, the "Effort" column gives the time taken in minutes by the reviewers to inspect the online inspection documents, and the "No of FP" column is the total number of false positives reported by the reviewers in the experiment.


Table 1: Raw Results Obtained from Collaborative Code Inspection

       Ad hoc Inspection                        Checklist Inspection
s/n    Defect (%)   Effort (Mins)   No of FP    Defect (%)   Effort (Mins)   No of FP
1      27.78        31              1           44.44        80              5
2      44.44        72              5           33.33        60              5
3      50.00        57              4           27.78        35              4
4      50.00        50              2           22.22        43              3
5      38.89        82              0           22.22        21              0
6      61.11        98              6           27.78        30              2
7      50.00        52              5           33.33        25              4
8      38.89        50              3           38.89        45              1
9      44.44        48              1           38.89        42              2
10     50.00        51              3           44.44        38              3

The traditional method of conducting an inspection of a software artifact is to do it in teams of different sizes. However, it is not possible to gather the reviewers into teams online, since they did not meet face-to-face. Therefore, nominal team selection, as is usually done in this area of research, was used. Nominal teams consist of individual reviewers or inspectors who do not communicate with each other during the inspection work. Nominal teams can help to minimize inspection overhead and to lower inspection cost and duration [30]. The approach of creating virtual or nominal teams has been used in other studies as well [30, 6]. An advantage of nominal inspections is that it is possible to generate and investigate the effect of different team sizes; a disadvantage is that no effects of the meeting and possible team synergy are present in the data [3]. The rationale for the investigation of nominal teams is to compare nominal inspections with the real-world situation, where teams would be formed without any re-sampling. There are many ways by which nominal teams can be created. Aybüke et al. [3] suggest creating all combinations of teams where each individual reviewer is included in multiple teams, but this introduces dependencies among the nominal teams. They also suggest randomly creating teams out of the reviewers without repetition, or using bootstrapping as suggested by Efron and Tibshirani [11], in which samples are statistically drawn from a sample space and immediately returned, so that they can possibly be drawn again. However, bootstrapping at an individual level will increase the overlap if the same reviewer is chosen more than once for a team. In this study, four different nominal teams were created for each of the Ad hoc and Checklist reviewer groups: the first reviewer forms the team of size 1, the next two form the team of size 2, and so on, giving teams of 1, 2, 3 and 4 persons, as sketched below. Table 2 gives the mean aggregate values of the results obtained with the nominal teams.
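The consecutive construction just described is easy to reproduce. The Python sketch below applies it to the ad hoc Defect (%) column of Table 1, taking each team's score as the mean of its members' scores; this reproduces the Table 2 aggregates up to small rounding differences.

    adhoc_de = [27.78, 44.44, 50.00, 50.00, 38.89, 61.11, 50.00, 38.89, 44.44, 50.00]

    teams, start = {}, 0
    for size in (1, 2, 3, 4):              # 1 + 2 + 3 + 4 covers all ten reviewers
        members = adhoc_de[start:start + size]
        teams[size] = round(sum(members) / size, 2)   # mean DE of the nominal team
        start += size

    print(teams)                           # {1: 27.78, 2: 47.22, 3: 50.0, 4: 45.83}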

Table 2: Mean Aggregate Results for Nominal Team Sizes

Nominal      Ad hoc Inspection               Checklist Inspection
Team size    DE (%)   T (mins)   FP          DE (%)   T (mins)   FP
1            27.78    31.0       1.0         44.44    80.00      5.00
2            47.22    64.5       4.5         30.56    47.50      4.50
3            50.00    76.7       2.7         24.06    31.33      1.67
4            46.11    50.3       3.0         38.89    37.50      2.50


Table 2 shows that 42.8% and 34.5% of the defects were detected by the ad hoc and checklist reviewers respectively in the inspection experiment.

The aggregate mean effort taken by the ad hoc reviewers was 55.62 ± 9.82 SEM minutes, while the checklist reviewers took 49.08 ± 10.83 SEM minutes (SEM means Standard Error of the Mean). The aggregate mean false positives reported by the reviewers were 2.80 ± 0.72 SEM and 3.42 ± 0.79 SEM respectively for the ad hoc and checklist reviewers.

Fig. 2 shows the chart of the defect detection effectiveness of the different teams in each of the defect detection method groups.

Fig. 2: Chart of Defect Detection Effectiveness of Reviewers against the Nominal Team Sizes

Fig. 2 shows that the effectiveness of the ad hoc reviewers rises steadily with team size, with the peak value recorded for team size 3. The checklist reviewers, however, show a negative trend: their effectiveness decreases with team size up to team size 3 before rising again for team size 4.

Effort in terms of time taken by reviewers in minutes also takes the same shape as shown in Fig. 3.


Fig. 3: Chart of Effort (Mins) against the Nominal Team Sizes

Fig. 4 shows the mean aggregate false positives reported by the reviewers in the experiment.

Fig. 4: Average False Positives against the Nominal Team Sizes


Examining the false positive curves in Fig. 4 critically, we can see that their shapes follow more or less the same trend as those obtained for defect detection effectiveness and effort.

Further Statistical Analyses

Table 3 shows the results of the major statistical tests performed on the data obtained in this experiment. The independent t-test was used for the analyses, since different subjects were involved in the experiments and the experiments were carried out independently.

Table 3: Major Statistical Test Results

Hypothesis Tested                                                            p-value   Correlation   Decision
Ho1: No significant difference in the effectiveness of reviewers using
Ad hoc and Checklist techniques in distributed code inspection              0.267     -0.83         Ho accepted
Ho2: No significant difference in the effort taken by reviewers using
Ad hoc and Checklist techniques in distributed code inspection              0.670     -0.85         Ho accepted
Ho3: No significant difference in the false positives reported by reviewers
using Ad hoc and Checklist techniques in distributed code inspection        0.585     -0.15         Ho accepted
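As a sketch of how the Table 3 figures can be reproduced, the snippet below runs an independent t-test and a Pearson correlation on the paired team-level effectiveness aggregates of Table 2; under this assumption it returns approximately the reported p = 0.267 and r = -0.83. The paper does not state exactly which values were paired, so this pairing is an inference.

    from scipy import stats

    adhoc_de     = [27.78, 47.22, 50.00, 46.11]    # Table 2, ad hoc DE (%)
    checklist_de = [44.44, 30.56, 24.06, 38.89]    # Table 2, checklist DE (%)

    t_stat, p_value = stats.ttest_ind(adhoc_de, checklist_de)   # independent t-test
    r, _ = stats.pearsonr(adhoc_de, checklist_de)               # paired correlation
    print(round(p_value, 3), round(r, 2))                       # ~0.267 and ~-0.83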

Even though the shapes of the curves obtained in the experiment indicate differences in the defect detection effectiveness, the effort taken, and the false positives reported by the reviewers, the statistical tests conducted show that there are no significant differences in the defect detection effectiveness (p = 0.267), the effort taken (p = 0.670), or the false positives reported by the reviewers (p = 0.585) for the two defect detection techniques studied. The null hypotheses are thus accepted for all the tests. The correlation coefficients are strongly negative for defect detection effectiveness and effort/time taken, as depicted in the charts in Figs. 2 and 3, while there is a very weak negative correlation for the false positives reported by the reviewers.

V. Discussion of Results

Results from the groupware experiment show that there is no significant difference between the Ad hoc and Checklist-based reviewers in terms of the parameters measured: defect detection effectiveness, effort, and false positives reported. Aggregate mean values of defect detection effectiveness and effort are slightly higher for the Ad hoc reviewers, while the aggregate mean false positives result is slightly higher for the Checklist-based reviewers. About 43% and 35% of the defects were detected by the reviewers using the ad hoc and checklist reading techniques respectively. Our results are in consonance with some related works in the literature. To mention a few, Porter and Votta [1], in their experiment comparing defect detection methods for software requirements inspections, show that checklist reviewers were no more effective than ad hoc reviewers, and that another method, the scenario method, had a higher fault detection rate than either the Ad hoc or Checklist methods. However, their results were obtained from a manual (paper-based) inspection environment. Lanubile and Visaggio [21], in their work on evaluating defect detection techniques for software requirements inspections, also show that no difference was found between inspection teams applying Ad hoc or Checklist reading with respect to the percentage of discovered defects. Again, they conducted their experiment in a paper-based environment. Nagappan et al. [26], in their work on preliminary results on using static analysis tools for software inspection, made reference to the fact that inspections can detect as little as 20% to as much as 93% of the total number of defects in an artifact. Briand et al. [5] report that, on average, software inspections find 57% of the defects in code and design documents. In terms of the percentage of defects detected, low results were obtained from this experiment compared to what obtains in some related works. For instance, the results of Giedre et al. [12], from their experiment comparing checklist-based reading and perspective-based reading for UML design document inspection, show that checklist-based reading (CBR) uncovers 70% of defects while perspective-based reading (PBR) uncovers 69%, and that the checklist takes more time (effort) than PBR. The implication of these results is that either of the defect detection reading techniques, Ad hoc or Checklist, could be conveniently employed in software inspection, depending on choice, whether in a manual (paper-based) or


tool-based environment, since they have roughly the same level of performance.

VI. Conclusion and Recommendations

In this work, we demonstrate the equality of the ad hoc and checklist-based reading techniques that are traditionally and primarily used as defect reading techniques in software code inspection, in terms of their defect detection effectiveness, effort taken, and false positives. Our results show that neither of the two reading techniques outperforms the other in the tool-based environment studied. However, the results of this study need further experimental clarification, especially in an industrial setting with professionals and large real-life codes.

References
[1] Adam A. Porter, Lawrence G. Votta, and Victor R. Basili (1995): Comparing detection methods for software requirements inspections: A replicated experiment. IEEE Trans. on Software Engineering, 21(6):563-575.
[2] Alastair Dunsmore, Marc Roper and Murray Wood (2003): Practical Code Inspection for Object-Oriented Systems, IEEE Software 20(4), 21-29.
[3] Aybüke Aurum, Claes Wohlin and Hakan Peterson (2005): Increasing the understanding of effectiveness in software inspections using published data sets, Journal of Research and Practice in Information Technology, Vol. 37, No. 3.
[4] Brett Kyle (1995): Successful Industrial Experimentation, chapter 5. VCH Publishers, Inc.
[5] Briand, L. C., El Emam, K., Laitenberger, O. and Fussbroich, T. (1998): Using Simulation to Build Inspection Efficiency Benchmarks for Development Projects, International Conference on Software Engineering, pp. 340-449.
[6] Briand, L. C., El Emam, K., Freimut, B. G. and Laitenberger, O. (1997): Quantitative evaluation of capture-recapture models to control software inspections. Proceedings of the 8th International Symposium on Software Reliability Engineering, 234-244.
[7] L. R. Brothers, V. Sembugamoorthy, and A. E. Irgon (1992): Knowledge-based code inspection with ICICLE. In Innovative Applications of Artificial Intelligence 4: Proceedings of IAAI-92.
[8] David A. Ladd and J. Christopher Ramming (1992): Software research and switch software. In International Conference on Communications Technology, Beijing, China.
[9] David L. Parnas and David M. Weiss (1985): Active design reviews: Principles and practices. In Proceedings of the 8th International Conference on Software Engineering, pages 215-222, Aug. 1985.
[10] Dewayne E. Perry, Adam A. Porter and Lawrence G. Votta (2000): Empirical studies of software engineering: A Roadmap, Proc. of the 22nd Conference on Software Engineering, Limerick, Ireland, June 2000.
[11] Efron, B. and Tibshirani, R. J. (1993): An Introduction to the Bootstrap, Monographs on Statistics and Applied Probability, Vol. 57, Chapman & Hall.
[12] Giedre Sabaliauskaite, Fumikazu Matsukawa, Shinji Kusumoto and Katsuro Inoue (2002): An Experimental Comparison of Checklist-Based Reading and Perspective-Based Reading for UML Design Document Inspection, 2002 International Symposium on Empirical Software Engineering (ISESE'02), p. 148.
[13] Haoyang Che and Dongdong Zhao (2005): Managing Trust in Collaborative Software Development. http://l3d.cs.colorado.edu/~yunwen/KCSD2005/papers/che.trust.pdf, downloaded in October 2008.
[14] Harjumaa, L. and Tervonen, I. (2000): Virtual Software Inspections over the Internet, Proceedings of the Third Workshop on Software Engineering over the Internet, pp. 30-40.
[15] J. W. Gintell, J. Arnold, M. Houde, J. Kruszelnicki, R. McKenney and G. Memmi (1993): Scrutiny: A collaborative inspection and review system. In Proceedings of the Fourth European Software Engineering Conference, September 1993.
[16] J. W. Gintell, M. B. Houde and R. F. McKenney (1995): Lessons learned by building and using Scrutiny, a collaborative software inspection system. In Proceedings of the Seventh International Workshop on Computer Aided Software Engineering, July 1995.
[17] Johnson, P. M. and Tjahjono, D. (1998): Does Every Inspection Really Need a Meeting? Journal of Empirical Software Engineering, Vol. 4, No. 1.
[18] Kelly, J. (1993): Inspection and review glossary, part 1, SIRO Newsletter, vol. 2.
[19] Laitenberger, Oliver (2002): A Survey of Software Inspection Technologies, Handbook on Software Engineering and Knowledge Engineering, vol. II.
[20] Laitenberger, O. and DeBaud, J. M. (2000): An Encompassing Life-cycle Centric Survey of Software Inspection. Journal of Systems and Software, 50, 5-31.
[21] Lanubile and Giuseppe Visaggio (2000): Evaluating defect detection techniques for software requirements inspections, http://citeseer.ist.psu.edu/Lanubile00evaluating.html, downloaded Feb. 2008.
[22] Lawrence G. Votta (1993): Does every inspection need a meeting? ACM SIGSoft Software Engineering Notes, 18(5):107-114.
[23] Macdonald, F. and Miller, J. (1997): A comparison of Tool-based and Paper-based software inspection, Empirical Foundations of Computer Science, University of Strathclyde.
[24] Michael E. Fagan (1976): Design and code inspections to reduce errors in program development. IBM Systems Journal, 15(3):182-211.
[25] Michael E. Fagan (1986): Advances in software inspections. IEEE Trans. on Software Engineering, SE-12(7):744-751.
[26] Nachiappan Nagappan, Laurie Williams, John Hudepohl, Will Snipes and Mladen Vouk (2004): Preliminary Results On Using Static Analysis Tools for Software Inspections, 15th International Symposium on Software Reliability Engineering (ISSRE'04), pp. 429-439.
[27] Paulk, M., Curtis, B., Chrissis, M. B. and Weber, C. V. (1993): "Capability Maturity Model for Software", Technical Report CMU/SEI-93-TR-024, Software Engineering Institute, Carnegie Mellon University, Pittsburgh, Pennsylvania.
[28] Pfleeger, S. Lawrence (1991): Software Engineering: The Production of Quality Software, Macmillan Publishing Company, NY, USA.
[29] Porter, A. Adam and Johnson, P. M. (1997): Assessing Software Review Meetings: Results of a Comparative Analysis of Two Experimental Studies, IEEE Transactions on Software Engineering, vol. 23, No. 3, pp. 129-145.
[30] Stefan Biffl and Michael Halling (2003): Investigating the Defect Detection Effectiveness and Cost Benefit of Nominal Inspection Teams, IEEE Transactions on Software Engineering, vol. 29, No. 5, pp. 385-397.
[31] Stein, M., Riedl, J., Sören, J. H. and Mashayekhi, V. (1997): A case study of distributed, asynchronous software inspection, in Proc. of the 19th International Conference on Software Engineering, pp. 107-117.
[32] Tom Gilb and Dorothy Graham (1993): Software Inspection. Addison-Wesley Publishing Co.
[33] Tyran, Craig K. (2006): A Software Inspection Exercise for the Systems Analysis and Design Course. Journal of Information Systems Education, vol. 17(3).
[34] Vahid Mashayekhi, Janet M. Drake, Wei-tek Tsai and John Riedl (1993): Distributed, Collaborative Software Inspection, IEEE Software, 10:66-75, Sept. 1993.
[35] Watts S. Humphrey (1989): Managing the Software Process, chapter 10. Addison-Wesley Publishing Company.
[36] Wheeler, D. A., Brykczynski, B., et al. (1996): Software Inspection: An Industry Best Practice, IEEE CS Press.

Adenike OSOFISAN is currently a Reader in the Department of Computer Science, University of Ibadan, Nigeria. She received her Master's degree in Computer Science from Georgia Tech, USA, and her PhD from Obafemi Awolowo University, Ile-Ife, Nigeria. Her areas of specialty include data mining and communication.

Solomon Olalekan AKINOLA is currently a PhD student of Computer Science at the University of Ibadan, Nigeria. He received his Bachelor of Science in Computer Science and Master of Information Science degrees from the same university. He specializes in Software Engineering, with a special focus on software inspection.


Robustness of the Digital Image Watermarking Techniques against Brightness and Rotation Attack

Harsh K Verma, Abhishek Narain Singh, Raman Kumar
Department of Computer Science and Engineering, Dr B R Ambedkar National Institute of Technology, Jalandhar, Punjab, India
E-mail: [email protected], [email protected], [email protected]

Abstract- Recent advances in the field of multimedia have provided many facilities for the transport, transmission and manipulation of data. Along with these facilities come greater threats to the authentication of data, its licensed use, and its protection against illegal use. Many digital image watermarking techniques have been designed and implemented to stop the illegal use of digital multimedia images. This paper compares the robustness of three different watermarking schemes against brightness and rotation attacks. The robustness of the watermarked images has been verified on the parameters of PSNR (Peak Signal to Noise Ratio), RMSE (Root Mean Square Error) and MAE (Mean Absolute Error).

Keywords- Watermarking, Spread Spectrum, Fingerprinting, Copyright Protection.

I. INTRODUCTION

Advancements in the field of computers and technology have given many facilities and advantages to humans; it has now become much easier to search for and develop any digital content on the Internet. Digital distribution of multimedia information allows the introduction of flexible, cost-effective business models that are advantageous for commercial transactions. On the other hand, its digital nature also allows individuals to manipulate, duplicate or access media information beyond the terms and conditions agreed upon. Multimedia data such as photos, video or audio clips, and printed documents can carry hidden information, or may have been manipulated so that one is not sure of the exact data. To deal with the problem of trustworthiness of data, authentication techniques are being developed to verify the information integrity, the alleged source of the data, and the reality of the data [1]. Cryptography and steganography have been used throughout history as means to add secrecy to communication during times of war and peace [2].

A. Digital Watermarking
Digital watermarking involves embedding a structure in a host signal to "mark" its ownership [3]. We call these structures digital watermarks. Digital watermarks may be comprised of copyright or authentication codes, or a legend essential for signal interpretation. The existence of these watermarks within a multimedia signal goes unnoticed except when passed through an appropriate detector. Common types of signals to watermark are still images, audio and digital video. To be effective, a watermark must be [4]:
• Unobtrusive: it should be imperceptible when embedded in the host signal.
• Discreet: unauthorized watermark extraction or detection must be arduous, as the mark's exact location and amplitude are unknown to unauthorized individuals.
• Easily extracted: authorized watermark extraction from the watermarked signal must be reliable and convenient.
• Robust/fragile to incidental and unintentional distortions: depending on the intended application, the watermark must either remain intact or be easily modified in the face of signal distortions such as filtering, compression, cropping and re-sampling performed on the watermarked data.

In order to protect the ownership or copyright of digital media data, such as images, video and audio, encryption and watermarking techniques are generally used. Encryption techniques can be used to protect digital data during transmission from the sender to the receiver. Watermarking is one of the solutions for copyright protection, and it can also be used for fingerprinting, copy protection, broadcast monitoring, data authentication, indexing, medical safety and data hiding [5].

II. WATERMARK EMBEDDING AND EXTRACTION

A watermark, which often consists of a binary data sequence, is inserted into a host signal with the use of a key [6]. The information embedding routine imposes small signal changes, determined by the key and the watermark, to generate the watermarked signal. This embedding procedure (Fig. 1) involves imperceptibly modifying a host signal to reflect the information content in


c.

Read in the watermark message and reshape it into a vector d. For each value of the watermark, a PN sequence is generated using an independent seed e. Scatter each of the bits randomly throughout the cover image f. When watermark contains a '0', add PN sequence with gain k to cover image i. if watermark(bit) = 0 watermarked_image=watermarked_image + k*pn_sequence ii. Else if watermark (bit) = 1 watermarked_image=watermarked_image + pn_sequence g. Process the same step for complete watermark vector

Original Multimedia Signal Watermarking Algorithm

Watermarked Signal

Watermark 1010101…. . Key Fig. 1 Watermark embedding process

Watermarked Multimedia Signal

Watermark Extraction

2. To recover the watermark

Extracted Watermark 1010101….

a. b. c.

Convert back the watermarked image to vectors Each seed is used to generate its PN sequence Each sequence is then correlated with the entire image i. If ( the correlation is high) that bit in the watermark is set to “1” ii. Else that bit in the watermark is set to “0” d. Process the same step for complete watermarked vector e. Reshape the watermark vector and display recovered watermark

Key

Fig. 2 Watermark extraction process

the watermark so that the changes can be later observed with the use of the key to ascertain the embedded bit sequence. The process is called watermark extraction. The principal design challenge is in embedding the watermark so that it reliably fulfills its intended task. For copy protection applications, the watermark must be recoverable (Fig. 2) even when the signal undergoes a reasonable level of distortion, and for tamper assessment applications, the watermark must effectively characterize the signal distortions. The security of the system comes from the uncertainty of the key. Without access to this information, the watermark cannot be extracted or be effectively removed or forged.


III. WATERMARKING TECHNIQUES
Three watermarking techniques, one from each domain, i.e. the spatial domain, the frequency domain and the wavelet domain [7], have been chosen for the experiment. The techniques used for the comparative analysis of the watermarking process are CDMA spread spectrum watermarking in the spatial domain, comparison of mid-band DCT coefficients in the frequency domain, and CDMA spread spectrum watermarking in the wavelet domain [8].
A. CDMA Spread Spectrum Watermarking in Spatial Domain
The algorithm of this method is given below:


1. To embed the watermark

a. Convert the original image into vectors
b. Set the gain factor k for embedding
c. Read in the watermark message and reshape it into a vector
d. For each bit of the watermark, generate a PN sequence using an independent seed
e. Scatter each of the bits randomly throughout the cover image
f. When the watermark bit is '0', add the PN sequence with gain k to the cover image:
   i. if watermark(bit) = 0
      watermarked_image = watermarked_image + k*pn_sequence
   ii. else if watermark(bit) = 1
      watermarked_image = watermarked_image + pn_sequence
g. Repeat the same step for the complete watermark vector

2. To recover the watermark
a. Convert the watermarked image back into vectors
b. Use each seed to generate its PN sequence
c. Correlate each sequence with the entire image:
   i. if the correlation is high, set that bit in the watermark to '1'
   ii. else set that bit in the watermark to '0'
d. Repeat the same step for the complete watermarked vector
e. Reshape the watermark vector and display the recovered watermark
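To make these embedding and recovery steps concrete, the following is a minimal Python sketch of our own (the function names, the NumPy PRNG usage and the default gain are assumptions, not the authors' code). Since the embedding rule above boosts '0' bits by the gain k, the sketch keeps the bit decision consistent with that rule and with the wavelet-domain recovery of Section III.C, i.e. an above-average correlation is read as a '0':

import numpy as np

def embed_cdma_spatial(cover, bits, master_seed, k=2.0):
    # one independent PN seed per watermark bit
    seeds = np.random.default_rng(master_seed).integers(0, 2**31, size=len(bits))
    marked = cover.astype(float).copy()
    for bit, seed in zip(bits, seeds):
        pn = np.random.default_rng(int(seed)).standard_normal(cover.shape)
        # '0' bits are embedded with gain k, as in the paper's pseudocode
        marked += (k if bit == 0 else 1.0) * pn
    return marked, seeds

def recover_cdma_spatial(marked, seeds):
    # correlate each regenerated PN sequence with the whole image
    corrs = []
    for seed in seeds:
        pn = np.random.default_rng(int(seed)).standard_normal(marked.shape)
        corrs.append(np.corrcoef(marked.ravel(), pn.ravel())[0, 1])
    corrs = np.array(corrs)
    # bits whose PN sequence was boosted by k correlate more strongly -> '0'
    return [0 if c > corrs.mean() else 1 for c in corrs]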

B. Comparison of Mid-Band DCT Coefficients in Frequency Domain
The algorithm of this method is given below:
3. To embed the watermark
a. Process the image in blocks
b. For each block, transform the block using the DCT
c. If message_bit is 0:
      if dct_block(5,2) < dct_block(4,3), swap them
   else:
      if dct_block(5,2) > dct_block(4,3), swap them
   If dct_block(5,2) - dct_block(4,3) < k:
      dct_block(5,2) = dct_block(5,2) + k/2
      dct_block(4,3) = dct_block(4,3) - k/2
   else:
      dct_block(5,2) = dct_block(5,2) - k/2
      dct_block(4,3) = dct_block(4,3) + k/2
   Move to the next block

4. To recover the watermark


a. Process the image in blocks


b. For each block, transform the block using the DCT
c. If dct_block(5,2) > dct_block(4,3), the message bit is 1; else the message bit is 0. Process the next block.
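A hedged Python sketch of this block-wise scheme follows (our own reconstruction using SciPy's DCT; the helper names and default margin are assumptions). The embedding and extraction pseudocode above use opposite orderings for bit 0, so for internal consistency the sketch adopts the extraction convention throughout, i.e. (5,2) > (4,3) encodes a '1':

import numpy as np
from scipy.fftpack import dct, idct

def dct2(block):
    # 2-D DCT-II with orthonormal scaling
    return dct(dct(block.T, norm='ortho').T, norm='ortho')

def idct2(coeffs):
    # 2-D inverse DCT
    return idct(idct(coeffs.T, norm='ortho').T, norm='ortho')

def embed_bit(block, bit, k=10.0):
    # encode one bit per block by ordering mid-band coefficients (5,2) and (4,3)
    c = dct2(block.astype(float))
    want_52_greater = (bit == 1)
    if (c[5, 2] > c[4, 3]) != want_52_greater:
        c[5, 2], c[4, 3] = c[4, 3], c[5, 2]      # swap to get the right ordering
    if abs(c[5, 2] - c[4, 3]) < k:               # widen the gap to at least k
        sign = 1.0 if want_52_greater else -1.0
        c[5, 2] += sign * k / 2
        c[4, 3] -= sign * k / 2
    return idct2(c)

def extract_bit(block):
    c = dct2(block.astype(float))
    return 1 if c[5, 2] > c[4, 3] else 0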


C. CDMA Spread Spectrum Watermarking in Wavelet Domain
The algorithm of this method is given below:
5. To embed the watermark
a. Convert the original image into vectors
b. Set the gain factor k for embedding
c. Read in the watermark message and reshape it into a vector
d. Perform the Discrete Wavelet Transform of the cover image:
   i. [cA,cH,cV,cD] = dwt2(X,'wname') computes the approximation coefficients matrix cA and the detail coefficients matrices cH, cV and cD (horizontal, vertical and diagonal, respectively), obtained by wavelet decomposition of the input matrix X. The 'wname' string contains the wavelet name.
e. Add the PN sequence to the H and V components:
   ii. if (watermark == 0)
       cH1 = cH1 + k*pn_sequence_h;
       cV1 = cV1 + k*pn_sequence_v;
f. Perform the Inverse Discrete Wavelet Transform:
   iii. watermarked_image = idwt2(cA1,cH1,cV1,cD1,'wname',[Mc,Nc])

6. To recover the watermark
a. Convert the watermarked image back into vectors
b. Convert the watermark into the corresponding vectors
c. Initialize the watermark vector to all ones:
   i. watermark_vector = ones(1, MW*NW), where MW is the height and NW the width of the watermark
d. Find the correlation in the H and V components of the watermarked image:
   i. correlation_h = corr2(cH1, pn_sequence_h);
   ii. correlation_v = corr2(cV1, pn_sequence_v);
   iii. correlation = (correlation_h + correlation_v)/2;
e. Compare the correlation with the mean correlation:
   i. if (correlation(bit) > mean(correlation)) watermark_vector(bit) = 0;
f. Convert the watermark_vector back into the watermark image
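A minimal Python sketch of this wavelet-domain variant using PyWavelets is shown below (our own illustration; the Haar default, even-sized input and the per-bit seed bookkeeping are assumptions):

import numpy as np
import pywt

def embed_cdma_dwt(cover, bits, master_seed, k=1.5, wavelet='haar'):
    # decompose; add a PN pair to the cH/cV subbands for every '0' bit
    cA, (cH, cV, cD) = pywt.dwt2(cover.astype(float), wavelet)
    seeds = np.random.default_rng(master_seed).integers(0, 2**31, size=len(bits))
    for bit, seed in zip(bits, seeds):
        rng = np.random.default_rng(int(seed))
        if bit == 0:
            cH = cH + k * rng.standard_normal(cH.shape)
            cV = cV + k * rng.standard_normal(cV.shape)
    return pywt.idwt2((cA, (cH, cV, cD)), wavelet), seeds

def recover_cdma_dwt(marked, seeds, wavelet='haar'):
    # regenerate each PN pair and correlate it with the detail subbands
    _, (cH, cV, _) = pywt.dwt2(marked.astype(float), wavelet)
    corrs = []
    for seed in seeds:
        rng = np.random.default_rng(int(seed))
        pn_h = rng.standard_normal(cH.shape)   # same generation order as embedding
        pn_v = rng.standard_normal(cV.shape)
        corrs.append((np.corrcoef(cH.ravel(), pn_h.ravel())[0, 1] +
                      np.corrcoef(cV.ravel(), pn_v.ravel())[0, 1]) / 2)
    corrs = np.array(corrs)
    return [0 if c > corrs.mean() else 1 for c in corrs]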

Fig. 3 Brightness Attack (a) Original watermarked image, (b) Watermarked image after -25% brightness, (c) Watermarked image after 25% brightness, (d) Watermarked image after 50% brightness

IV. BRIGHTNESS ATTACK
The brightness attack is one of the most common types of attack on digital multimedia images. Three different levels of brightness attack have been applied. First the brightness is changed by -25%, i.e. decreased by 25%; then the brightness is increased by 25%; and finally the brightness is increased by 50%. The brightness attack is shown below in Fig. 3. Table 1 shows the results of the brightness attack on the different watermarking techniques for various parameters: Peak Signal to Noise Ratio [9], average Root Mean Square Error and average Mean Absolute Error.

V. ROTATION ATTACK
The rotation attack is among the most popular kinds of geometrical attack on digital multimedia images [10]. Three levels of rotation have been implemented. First the original watermarked image is rotated by 90 degrees, then by 180 degrees, and finally by 270 degrees in the clockwise direction. The rotation attack is shown below in Fig. 4. The results of the rotation attack are shown below in Table 2 for all three watermarking schemes.


VI. EXPERIMENTAL RESULTS
The comparative analysis of the three watermarking schemes has been done on the basis of the brightness and rotation attacks. The results of the individual watermarking techniques have been compared on the basis of PSNR (Peak Signal to Noise Ratio), RMSE (Root Mean Square Error) and MAE (Mean Absolute Error).
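These are standard image-quality metrics; for reference, they could be computed as in the following small Python sketch (our own illustration, assuming 8-bit grayscale images):

import numpy as np

def psnr_rmse_mae(original, attacked, peak=255.0):
    # PSNR (dB), RMSE and MAE between two equally sized 8-bit images
    diff = original.astype(float) - attacked.astype(float)
    mse = np.mean(diff ** 2)
    psnr = 10.0 * np.log10(peak ** 2 / mse) if mse > 0 else float('inf')
    return psnr, np.sqrt(mse), np.mean(np.abs(diff))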


Table 1 Performance analysis of watermarking techniques against Brightness Attack

Brightness   Technique                PSNR (dB)   Avg. RMSE   Avg. MAE
-25%         CDMA SS in Spatial D.    19.982      25.55       2.563
             Comp. of Mid Band DCT    22.427      19.283      1.459
             CDMA SS in Wavelet D.    26.215      12.466      0.61
25%          CDMA SS in Spatial D.    17.203      35.184      4.86
             Comp. of Mid Band DCT    21.339      21.855      1.874
             CDMA SS in Wavelet D.    24.305      15.533      0.946
50%          CDMA SS in Spatial D.    16.921      36.345      5.185
             Comp. of Mid Band DCT    18.435      30.534      3.659
             CDMA SS in Wavelet D.    22.605      18.892      1.40


Fig. 5 Graph showing PSNR values for brightness attack on different watermarking schemes


Fig. 6 Graph showing Average RMSE values for brightness attack on different watermarking schemes


Table 2 Performance analysis of watermarking techniques against Rotation Attack

Rotation      Technique                PSNR (dB)   Avg. RMSE   Avg. MAE
90 degrees    CDMA SS in Spatial D.    12.286      61.973      15.087
              Comp. of Mid Band DCT    15.602      42.305      7.035
              CDMA SS in Wavelet D.    19.50       27.008      2.866
180 degrees   CDMA SS in Spatial D.    11.193      70.283      19.398
              Comp. of Mid Band DCT    11.999      64.059      16.122
              CDMA SS in Wavelet D.    13.517      53.784      11.368
270 degrees   CDMA SS in Spatial D.    11.818      65.404      16.803
              Comp. of Mid Band DCT    13.533      53.685      11.323
              CDMA SS in Wavelet D.    16.251      39.263      6.057


Fig. 7 Graph showing Average MAE values for brightness attack on different watermarking schemes

Fig. 4 Rotation Attack (a) Original watermarked image, (b) Watermarked image after 90 degree rotation, (c) Watermarked image after 180 degree rotation, (d) Watermarked image after 270 degree rotation

A. Results of Brightness Attack
Figs. 5, 6 and 7 show the results of the brightness attack on all three watermarking techniques; a comparative analysis follows. The graphs are the comparative results of the brightness attack on the three watermarking techniques discussed. A greater PSNR value implies that the watermarking technique is more robust against the attack. Looking at Fig. 5, the DWT technique (CDMA SS watermarking in the wavelet domain) proves to be the best candidate for digital image watermarking, since it has a greater PSNR value than the other two techniques. Similarly, from Figs. 6 and 7, the values of the Root Mean Square Error and the Mean Absolute Error are also minimal for the Discrete Wavelet Transform domain technique, so it proves to be the best against the brightness attack.


Experimental values for the brightness and rotation attacks show that the CDMA spread spectrum watermarking technique in the wavelet domain is the best choice for watermarking digital multimedia images. The Discrete Cosine Transform domain shows somewhat greater robustness against the rotation attack. Spatial domain watermarking techniques are not good candidates for large watermarks; they show poor results as the watermark size grows.


Fig. 8 Graph showing PSNR values for rotation attack on different watermarking schemes

VII. CONCLUSIONS


This paper focuses on the robustness of watermarking techniques chosen from all three watermarking domains against the brightness and rotation attacks. The key conclusion of the paper is that the wavelet domain watermarking technique is the best and most robust scheme for the watermarking of digital multimedia images. This work could be further extended to watermarking other digital content such as audio and video.

Fig. 9 Graph showing Average RMSE values for rotation attack on different watermarking schemes


Fig. 10 Graph showing Average MAE values for rotation attack on different watermarking schemes


B. Results of Rotation Attack
A comparison of the results of the rotation attack is made in Figs. 8, 9 and 10. Graphs are drawn for all three evaluation parameters, and a comparative analysis of the results follows. The PSNR values in Fig. 8 show that the CDMA SS watermarking in the wavelet domain technique has the greatest PSNR value. This shows that wavelet domain watermarking is the best practice for digital image watermarking.


REFERENCES
[1] Austin Russ, "Digital Rights Management Overview", SANS Institute Information Security Reading Room, retrieved October 2001.
[2] W. Stallings, "Cryptography and Network Security: Principles and Practice", Prentice-Hall, New Jersey, 2003.
[3] D. Kundur, D. Hatzinakos, "A Robust Digital Image Watermarking Scheme Using the Wavelet-Based Fusion", Proc. International Conference on Image Processing (ICIP'97), Vol. 1, p. 544, 1997.
[4] J. Liu and X. He, "A Review Study on Digital Watermark", First International Conference on Information and Communication Technologies (ICICT 2005), pp. 337-341, August 2005.
[5] F.A.P. Petitcolas, R.J. Anderson, M.G. Kuhn, "Information Hiding - A Survey", Proceedings of the IEEE, Vol. 87, No. 7, pp. 1062-1078, 1999.
[6] Nedeljko Cvejic, Tapio Seppanen, "Digital Audio Watermarking Techniques and Technologies: Applications and Benchmarks", pp. x-xi, IGI Global, illustrated edition, August 7, 2007.
[7] Corina Nafornita, "A Wavelet-Based Watermarking for Still Images", Scientific Bulletin of Politehnica University of Timisoara, Trans. on Electronics and Telecommunications, 49(63), special number dedicated to the Proc. of the Symposium of Electronics and Telecommunications ETc, Timisoara, pp. 126-131, 22-23 October 2004.
[8] Chris Shoemaker, "Hidden Bits: A Survey of Techniques for Digital Watermarking", Independent Study EER-290, Prof. Rudko, Spring 2002.
[9] M. Kutter and F. Hartung, "Introduction to Watermarking Techniques", Chapter 5 of "Information Hiding: Techniques for Steganography and Digital Watermarking", S. Katzenbeisser and F.A.P. Petitcolas (eds.), Norwood, MA: Artech House, pp. 97-120, 2000.
[10] Ping Dong, Jovan G. Brankov, Nikolas P. Galatsanos, Yongyi Yang, Franck Davoine, "Digital Watermarking Robust to Geometric Distortions", IEEE Transactions on Image Processing, Vol. 14, No. 12, December 2005.



ODMRP with Quality of Service and Local Recovery with Security Support

Farzane Kabudvand
Computer Engineering Department, Zanjan Azad University, Zanjan, Iran
E-mail: [email protected]


Abstract — In this paper we focus on one critical issue in mobile ad hoc networks, namely multicast routing, and propose a mesh-based "on demand" multicast routing protocol for ad hoc networks with QoS (quality of service) support. We then present a model used to create a local recovery mechanism that rejoins nodes to multicast groups in minimal time, together with a method for providing security in this protocol.
Keywords: multicast protocol, ad hoc, security, request packet

QoS (Quality of Service) routing is another critical issue in MANETs. QoS defines nonfunctional characteristics of a system that affect the perceived quality of the result. In multimedia, this might include picture quality, image quality, delay, and speed of response. From a technological point of view, QoS characteristics may include timeliness (e.g., delay or response time), bandwidth (e.g., bandwidth required or available), and reliability (e.g., normal operation time between failures or down time from failure to restarting normal operation) [8].

1. Introduction
Multicasting is the transmission of a packet to a group of hosts identified by a destination address.

In this paper, we propose a new technique for supporting QoS routing in this protocol, and then present a model used to create a local recovery mechanism that rejoins nodes to multicast groups in minimal time, which increases the reliability of the network and prevents data loss while it is distributed in the network.

A multicast datagram is typically delivered to all members of its destination host group with the same reliability as regular unicast datagrams [4]. In the case of IP, for example, the datagram is not guaranteed to arrive intact at all members of the destination group, or in the same order relative to other datagrams. Multicasting is intended for group-oriented computing. There are more and more applications in which one-to-many dissemination is necessary. The multicast service is critical in applications characterized by the close collaboration of teams (e.g., rescue patrols, military battalions, scientists, etc.) with requirements for audio and video conferencing and sharing of text and images [3].

2. Proposed Protocol Mechanism
A. Motivation

ODMRP (On-Demand Multicast Routing Protocol) [1] provides a high packet delivery ratio even at high mobility, but at the expense of heavy control overhead. It does not scale well as the number of senders and the traffic load increase. Since every source periodically floods advertising RREQ (route request) packets through the network, congestion is likely to occur when the number of sources is high, so control overhead is one of the main weaknesses of ODMRP in the presence of multiple sources. CQMP solved this problem, but both of these protocols share a common weakness, which is the lack of any admission control policy and resource reservation mechanism. Hence, to reduce the overhead generated by the control packets during route discovery and to apply admission control to network traffic, the proposed protocol adopts two efficient optimization mechanisms. One is

A MANET consists of a dynamic collection of nodes without the aid of infrastructure or centralized administration. The network topology can change randomly and rapidly at unpredictable times. The goal of MANETs is to extend mobility into the realm of autonomous, mobile, wireless domains, where a set of nodes form the network routing infrastructure in an ad hoc fashion. The majority of applications for MANET technology are in areas where rapid deployment and dynamic reconfiguration are necessary and a wireline network is not available [4]. These include military battlefields, emergency search and rescue sites, classrooms, and conventions where participants share information dynamically using their mobile devices.




applied at nodes that cannot support the QoS requirements, which therefore ignore the RREQ packet. The other is applied at every intermediate node and is based on the comparison of each node's available bandwidth versus the required bandwidth, according to the node's position and the neighboring node's role (sender, intermediate, receiver, ...). To address the control packet problem, we use the CQMP protocol's idea of RREQ packet consolidation; moreover, we apply an admission control policy along with bandwidth reservation in our new protocol.

B. Neighborhood Maintenance
Neighborhood information is important in the proposed protocol. To maintain the neighborhood information, each node is required to periodically disseminate a "Hello" packet to announce its existence and traffic information to its neighbor set. This packet contains the Bavailable of the originator and is sent at a default rate of one packet per three seconds with the time to live (TTL) set to 1. Every node in the network receives the Hello packets from its neighbors and maintains a neighbor list that contains all its neighbors with their corresponding traffic and co-neighbor numbers.

C. Route Discovery and Resource Reservation
The proposed protocol conforms to a pure on-demand routing protocol. It neither maintains any routing table nor exchanges routing information periodically. When a source node needs to establish a route to another node with respect to a specific QoS requirement, it disseminates a RREQ that mainly includes the requested bandwidth, the delay and the node's neighbor list. Hence each intermediate node, upon receiving the RREQ, performs the following tasks:
• Updates its neighbors' co-neighbor numbers;
• Determines whether it can consolidate into this RREQ packet information about other sources from which it is expecting to hear a RREQ. When a source receives a RREQ from another source, it processes the packet just as a non-source intermediate node does; in addition it checks its INT to determine whether it would expire within a certain period of time, in other words whether the source is about to create and transmit its own RREQ between now and TIME-INTERVAL. If so, it adds one more row to the RREQ (see the sketch after this subsection);
• Tries to respond to the QoS requirements by applying a bandwidth decision in reserving the requested bandwidth B, as described below, and before transmitting the packet appends its one-hop neighbor list along with the corresponding co-neighbor numbers to the packet.

As the RREQ may contain more than one Source Row, the processing node goes through each and every Source Row entry in the RREQ and makes an admission decision for the non-duplicated rows. The admission decision is made at the processing node and its neighbors listed in the neighbor table, as described in Section 3. If the request is accepted and there is enough bandwidth, the node adds a route entry to its routing table with status explored. The node remains in explored status for a short period Texplored. If no reply arrives at the explored node in time, the route entry is discarded at the node and late-coming reply packets are ignored. Thus, we reduce the control overhead as well as exclude invalid information from the node's routing table. Upon receiving each request packet, as the RREQ may contain more than one Source Row, the receiver goes through each entry in the packet, and builds and transmits a REPLY packet based upon the matched entries along the reverse route. The available bandwidth of intermediate and neighboring nodes may have changed due to the activities of other sessions. Therefore, similar to the admission control for RREQs, upon receiving a RREP, nodes double-check the available bandwidth to prevent possible changes during the route discovery process. If the packet is accepted, the node updates the route status to registered. After registration, the nodes are ready to accept the real data packets of the flow. The node only stays in registered status for a short period Tregistered. If no data packet arrives at the registered node in time, it means that the route was not chosen by the source, and the route entry is then deleted at the node. When any node receives a REPLY packet, it checks whether the next node Id in any of the entries in the REPLY matches its own. If so, it realizes that it is on the way to a source. It checks its own available bandwidth and compares it with the required bandwidth of this flow, then checks its one-hop neighbors' available bandwidth as recorded in the neighbor table. If there is enough bandwidth, it sets a flag indicating that it is part of the FORWARDING GROUP for that multicast group, and then builds and broadcasts its own REPLY packet. When a REPLY reaches a source, a route is established from the source to the receiver. The source can now transmit data packets towards the receiver. A Forwarding Group node will forward any data packets received from a member of that group.
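The explored/registered bookkeeping and the RREQ consolidation described above can be pictured with the following minimal Python sketch. This is our own illustration under assumed timer values and data structures; none of the identifiers come from the paper:

import time

T_EXPLORED = 2.0      # assumed lifetime (s) of an 'explored' route entry
T_REGISTERED = 2.0    # assumed lifetime (s) of a 'registered' route entry
TIME_INTERVAL = 3.0   # assumed RREQ advertising interval (s)

def admit_rreq(routing_table, flow_id, required_bw, available_bw, now=None):
    # admit a non-duplicated Source Row only if local bandwidth suffices
    if required_bw > available_bw:
        return False                  # QoS cannot be met: ignore this row
    routing_table[flow_id] = {'status': 'explored', 'stamp': now or time.time()}
    return True

def on_rrep(routing_table, flow_id, required_bw, available_bw, now=None):
    # re-check bandwidth on a RREP and promote the entry to 'registered'
    now = now or time.time()
    entry = routing_table.get(flow_id)
    if (entry is None or entry['status'] != 'explored'
            or now - entry['stamp'] > T_EXPLORED   # late reply: entry expired
            or required_bw > available_bw):        # bandwidth changed meanwhile
        routing_table.pop(flow_id, None)
        return False
    entry.update(status='registered', stamp=now)
    return True

def maybe_consolidate(rreq, my_source_row, next_rreq_due, now=None):
    # a source piggybacks its own Source Row on a passing RREQ when its own
    # advertisement would fall due within TIME-INTERVAL
    now = now or time.time()
    if next_rreq_due - now <= TIME_INTERVAL:
        rreq['source_rows'].append(my_source_row)  # one more row, one less flood
        return True
    return False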

D. Data Forwarding
After constructing the routes, the source can send packets to the multicast group via the selected routes and forwarding nodes. A node that receives a data packet forwards it only when the following conditions hold (as sketched after the list):


• it is not a duplicate packet;
• the forwarding flag for this session has not expired;
• there is an entry with registered or reserved status corresponding to this session.
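A compact sketch of this check (our own formulation; the parameter names are assumptions):

def should_forward(packet, seen_seqs, fg_flag_expiry, route_entry, now):
    # seen_seqs      -- sequence numbers already relayed for this session
    # fg_flag_expiry -- time at which this node's FG flag for the session expires
    # route_entry    -- routing-table dict with a 'status' key, or None
    return (packet['seq'] not in seen_seqs                   # not a duplicate
            and now < fg_flag_expiry                         # FG flag still valid
            and route_entry is not None
            and route_entry['status'] in ('registered', 'reserved'))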


If it forwards the packet, the node then changes its 'registered' status to 'reserved'. The node only stays in reserved status for a short period Treserved. This procedure minimizes the traffic overhead and prevents packets from being sent through stale routes.

3. Local Recovery Mechanism Based on the Proposed Protocol with Reliability
In this section the mechanism of local recovery is discussed on the basis of the proposed protocol. The suggested method leads to fast repair of the network, so that the destination can be connected to the source through a new route. Discovered routes between destination and source may break for many reasons, most of which occur because of the removal of nodes.

Fig. 3 Local recovery with the proposed method (sending the membership reply packet)

Considering Fig. 3, if the direct link A-B breaks, an indirect route from A to B can be formed through C, which lies next to them. In this condition, if a packet with a larger hop count is sent to find the next node, regenerating the present route becomes possible and there is no need to rebuild the route end-to-end. The algorithm works as follows: when a middle FG node detects a route break between itself and the next hop, it places the data in its buffer and starts a timer. It then sends a packet with a larger hop count (i.e., two hops) carrying the set of nodes that are placed farther along the path between source and destination. On receiving this packet, every node checks whether its name is listed in it. If the address of the node corresponds to one of the listed addresses, an answer packet is sent, and as a result data can be sent through a new route. But if the answer is not received by the end of the time set on the timer, the packet is thrown away and another route must be discovered again. Every node which receives a local recovery packet and its answer will function as an FG for that destination. Thus every node should be aware of the FGs between itself and the destination. For this we introduce a small alteration in the structure of the protocol: every FG adds only its own name to the received answer packet before sending it up to a higher node. In other words, the existing addresses in the answer packet are not omitted; rather, the address of the FG node is added to the answer packet. In this way every FG can learn about the other FGs between itself and the destination, and can start to use them. Here the number of hops is set to 2. As can be seen in Fig. 4, while sending the membership answer packet in this method, the destination puts the address of the forwarding group in the packet and sends it; the FGs then do the same. Therefore every node can recognize the forwarding-group members of all preceding nodes between itself and the destination, and can begin to send a local recovery packet in case of route breakage.

Fig. 4 Sending the local recovery packet and updating the addresses

The timer is set to 1 second: if an FG that sends data fails to hear the same data from the following FG within at most one second, it concludes that the route is broken and sets another timer to 0.1 s in order to receive the answer packet, from which a new route can result. During this time the packet is kept in a temporary buffer. If no new route can be found, the packet is thrown away.
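The two-timer logic above could be organized as in the following Python sketch (our own structure with assumed callbacks; the class and method names are not from the paper):

import time

LOSS_TIMEOUT = 1.0    # no echo from the next FG within 1 s -> route broken
REPLY_TIMEOUT = 0.1   # wait 0.1 s for an answer to the recovery packet

class LocalRecovery:
    # two-timer local recovery at a forwarding-group node (illustrative)
    def __init__(self, send_recovery, rediscover):
        self.buffer = {}           # seq -> buffered data packet
        self.echo_deadline = {}    # seq -> time by which the echo must be heard
        self.send_recovery = send_recovery
        self.rediscover = rediscover

    def data_sent(self, seq, packet, now=None):
        now = now or time.time()
        self.buffer[seq] = packet
        self.echo_deadline[seq] = now + LOSS_TIMEOUT

    def echo_heard(self, seq):
        # downstream FG relayed the packet: nothing to recover
        self.buffer.pop(seq, None)
        self.echo_deadline.pop(seq, None)

    def tick(self, seq, downstream_fgs, now=None):
        now = now or time.time()
        if seq in self.echo_deadline and now > self.echo_deadline[seq]:
            # route broken: probe the farther FG nodes with a two-hop packet
            reply = self.send_recovery(downstream_fgs, hops=2,
                                       timeout=REPLY_TIMEOUT)
            if reply is None:      # no answer in time: drop and rediscover
                self.buffer.pop(seq, None)
                self.rediscover()
            del self.echo_deadline[seq]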

4. Security in mobile Ad hoc networks



In mobile ad hoc networks, due to unreliable links and the lack of infrastructure, providing secure communication is a big challenge. In wired and wireless networks, cryptographic techniques are used for secure communication. The key is a piece of input information for cryptography; if the key is discovered, the encrypted information can be revealed. Because the authentication of key ownership is important, there are several trust models. One important model is the centralized one, in which a hierarchical trust structure can be used. For security it is necessary to distribute the trusted control to multiple entities, that is, the system public key is distributed to the whole network, because a single certification node could be a security bottleneck, while multiple replicas of the certification node are fault tolerant. In the proposed technique, for security we consider a number of nodes that hold a share of the system private key and are able to produce certificates. These nodes are named s-nodes. S-nodes and forwarding nodes (a subset of non-s-nodes) form a group. When an s-node enters the network, it broadcasts a request packet. This packet has extra attributes, including a TTL field that is decreased by 1 as the packet leaves each node. When a node receives the request packet, it first checks the validity of the packet before taking any further action and discards non-authenticated packets. Nodes neighboring the s-node receive the request and rebroadcast it, and this process continues at other nodes. When another s-node receives the packet from a neighbor (for example, node B), it sends back a server reply message to that neighbor. When B receives the join reply packet, it learns that its neighbor is an s-node and that it is on the selected path between two servers, and it sets the forwarding attribute to 1. After all s-nodes finish the join procedure, the group mesh structure is formed. This procedure provides security across the whole network.
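The join procedure could be sketched as follows (a simplification of our own with assumed packet fields and callbacks, not the authors' implementation):

def on_request(node, pkt, verify):
    # handle an s-node join request at a receiving node (illustrative sketch)
    if not verify(pkt):              # discard non-authenticated packets
        return
    pkt['ttl'] -= 1                  # TTL decreases by 1 at every hop
    if node['is_s_node']:
        # another s-node answers with a server reply toward the previous hop
        node['send'](dest=pkt['prev_hop'],
                     packet={'type': 'server_reply', 'origin': pkt['origin']})
    elif pkt['ttl'] > 0:
        node['broadcast'](pkt)       # neighbors keep rebroadcasting

def on_join_reply(node):
    # a node on the selected path between two s-nodes becomes a forwarder
    node['forwarding'] = 1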

Performance metrics used (computed as sketched below):
• RREQ control packet load: the average number of RREQ packet transmissions by a node in the network.
• Packet delivery ratio: the ratio of data packets received by a receiver to the data packets sent by all the sources.
• End-to-end delay: the time taken for a packet to be transmitted across the network from source to destination.
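For reference, these metrics could be computed from a packet trace as in the following small sketch (our own illustration):

def rreq_load(rreq_transmissions, num_nodes):
    # average number of RREQ transmissions per node
    return rreq_transmissions / num_nodes

def delivery_ratio(received, sent):
    # data packets received by a receiver over data packets sent by all sources
    return received / sent if sent else 0.0

def avg_end_to_end_delay(pairs):
    # mean (receive_time - send_time) over delivered packets
    delays = [rx - tx for tx, rx in pairs]
    return sum(delays) / len(delays) if delays else 0.0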

6. Results
In Fig. 5 we calculated the delivery ratio of data packets received by destination nodes over data packets sent by source nodes. Without admission control, more packets are injected into the network even though they cannot reach their destinations; these packets waste a lot of channel bandwidth. On the other hand, if the admission control scheme is enabled, the inefficient usage of the channel resource can be limited and the saturation condition can be alleviated. Since the proposed protocol has fewer RREQ packet transmissions than ODMRP and CQMP, there is less chance of data packet loss through collision or congestion. Owing to the additional Hello overhead, the proposed protocol performs a little worse when there are few sources. The data delivery ratio of the evaluated protocols decreases as the number of sources increases under high mobility conditions, but the proposed protocol constantly maintains an about 4 to 5 percent higher packet delivery ratio than the others because of the reduction in join query overhead.

5. Performance Evaluation
We implement the proposed protocol in GloMoSim. The performance of the proposed scheme is evaluated in terms of the average number of RREQs sent by every node, the end-to-end delay, and the packet delivery ratio. In the simulation, we modeled a network of 50 mobile hosts placed randomly within a 1000 x 1000 m2 area. The radio propagation range for each node was 250 meters and the channel capacity was 2 Mbit/sec. Each simulation runs for 300 seconds of simulation time. The MAC protocol used in our simulations is IEEE 802.11 DCF [18]. We used Constant Bit Rate traffic, and the size of the data payload was 512 bytes. The multicast sources are selected randomly from all 50 nodes, and most of them act as receivers at the same time. The mobility model used is random waypoint, in which each node independently picks a random destination and a speed from an interval (min, max) and moves toward the chosen destination at this speed. Once it reaches the destination, it pauses for a number of seconds and repeats the process. Our minimum speed is 1 m/s, the maximum speed is 20 m/s and the pause interval is 0 seconds. The RREQ interval is set to 3 seconds, and the HELLO refresh interval is the same as the RREQ interval. We varied the following items: mobility speed, number of multicast senders and network traffic load.


[6] H. Dhillon, H.Q. Ngo, "CQMP: a mesh-based multicast routing protocol with consolidated query packets", IEEE Wireless Communications and Networking Conference (WCNC 2005), pp. 2168-2174.
[7] Y. Yi, S. Lee, W. Su, and M. Gerla, "On-Demand Multicast Routing Protocol (ODMRP) for Ad-hoc Networks", draft-yi-manet-odmrp-00.txt, 2003.
[8] D. Chalmers, M. Sloman, "A survey of quality of service in mobile computing environments", IEEE Communications Surveys, Second Quarter, pp. 2-10, 1999.
[9] M. Effatparvar, A. Darehshoorzadeh, M. Dehghan, M.R. Effatparvar, "Quality of Service Support and Local Recovery for ODMRP Multicast Routing in Ad hoc Networks", 4th International Conference on Innovations in Information Technology (IEEE IIT 2007), Dubai, United Arab Emirates, pp. 695-699, 18-20 Nov. 2007.
[10] Y. Chen, Y. Ko, "A Lantern-Tree Based QoS on Demand Multicast Protocol for Wireless Ad hoc Networks", IEICE Transactions on Communications, Vol. E87-B, pp. 717-726, 2004.
[11] K. Xu, K. Tang, R. Bagrodia, M. Gerla, M. Bereschinsky, "Adaptive Bandwidth Management and QoS Provisioning in Large Scale Ad hoc Networks", Proceedings of MILCOM, Boston, MA, Vol. 2, pp. 1018-1023, 2003.
[12] M. Saghir, T.C. Wan, R. Budiarto, "QoS Multicast Routing Based on Bandwidth Estimation in Mobile Ad Hoc Networks", Proc. Int. Conf. on Computer and Communication Engineering (ICCCE'06), Vol. I, Kuala Lumpur, Malaysia, pp. 384-389, 9-11 May 2006.
[13] G.S. Ahn, A.T. Campbell, A. Veres and L.H. Sun, "SWAN: Service Differentiation in Stateless Wireless Ad hoc Networks", Proc. IEEE INFOCOM, Vol. 2, pp. 457-466, 2002.
[14] J. Garcia-Luna-Aceves, E. Madruga, "The Core Assisted Mesh Protocol", IEEE Journal on Selected Areas in Communications, Vol. 17, No. 8, 1999.
[15] H. Zhu, I. Chlamtac, "Admission control and bandwidth reservation in multi-hop ad hoc networks", Computer Networks 50 (2006), pp. 1653-1674.
[16] Q. Xue, A. Ganz, "QoS routing for mesh-based wireless LANs", International Journal of Wireless Information Networks 9 (3) (2002), pp. 179-190.
[17] A. Darehshoorzadeh, M. Dehghan, M.R. Jahed Motlagh, "Quality of Service Support for ODMRP Multicast Routing in Ad hoc Networks", ADHOC-NOW 2007, LNCS 4686, pp. 237-247, 2007.
[18] IEEE Computer Society LAN MAN Standards Committee, Wireless LAN Medium Access Protocol (MAC) and Physical Layer (PHY) Specification, IEEE Std 802.11-1997, IEEE, New York, NY, 1997.
[19] Q. Xue, A. Ganz, "Ad hoc QoS on-demand routing (AQOR) in mobile ad hoc networks", Journal of Parallel and Distributed Computing 63 (2003), pp. 154-165.
[20] A. Herzberg, S. Jarecki, H. Krawczyk, M. Yung, "Proactive secret sharing or: how to cope with perpetual leakage", Proceedings of Crypto '95, 1995, pp. 339-352.
[21] H. Luo, S. Lu, "URSA: ubiquitous and robust access control for mobile ad hoc networks", IEEE/ACM Transactions on Networking 2004; 12(6): 1049-1063.
[22] A. Shamir, "How to share a secret", Communications of the ACM 1979; 22(11): 612-613.

Fig. 5 Packet Delivery Ratio as a function of the Number of Sources (the proposed protocol, AMOMQ, compared with CQMP and ODMRP)

7. Conclusion
In this paper, we have proposed a mesh-based, on-demand multicast routing protocol with admission control which, similar to CQMP, uses consolidation of multicast group membership advertising packets plus an admission control policy. A model was then presented to create a local recovery mechanism that rejoins nodes to multicast groups in minimal time, which increases the reliability of the network and prevents data loss during distribution. In this mechanism a new packet, the local recovery packet, is created from a membership packet by placing in it the addresses of the nodes between a forwarding group and the destination. Here we considered the hop count restricted, but it can be changed. We implemented the proposed protocol using GloMoSim and show by simulation that it achieves up to a 30 percent reduction in control packet load. In addition, our results

show that as the number of mobile sources increases and under large traffic loads, the proposed protocol performs better than ODMRP and CQMP in terms of data packet delivery ratio, end-to-end delay and number of RREQ packets. With the proposed scheme, network saturation under overloaded traffic can be alleviated, and thereby the quality of service can be improved.

References
[1] Yu-Chee Tseng, Wen-Hua Liao, Shih-Lin Wu, "Mobile Ad Hoc Networks and Routing Protocols", pp. 371-392, 2002.
[2] S. Deering, "Host extensions for IP multicasting", RFC 1112, August 1989, available at http://www.ietf.org/rfc/rfc1112.txt.
[3] Thomas Kunz, "Multicasting: from fixed networks to ad hoc networks", pp. 495-507, 2002.
[4] S. Corson, J. Macker, "Mobile ad hoc networking (MANET): Routing protocol performance issues and evaluation considerations", RFC 2501, January 1999, available at http://www.ietf.org/rfc/rfc2501.txt.
[5] H. Moustafa, H. Labiod, "Multicast Routing in Mobile Ad Hoc Networks", Telecommunication Systems 25:1-2, pp. 65-88, 2004.



A Secure and Fault-tolerant Framework for Mobile IPv6 Based Networks

Rathi S
Sr. Lecturer, Dept. of Computer Science and Engineering, Government College of Technology, Coimbatore, Tamilnadu, INDIA

Thanuskodi K
Principal, Akshaya College of Engineering, Coimbatore, Tamilnadu, INDIA


Abstract— Mobile IPv6 will be an integral part of the next generation Internet protocol, and the importance of mobility in the Internet keeps increasing. The current specification of Mobile IPv6 does not provide proper support for reliability in the mobile network, and there are other problems associated with it. In this paper, we propose the "Virtual Private Network (VPN) based Home Agent Reliability Protocol (VHAHA)" as a complete system architecture and extension to Mobile IPv6 that supports reliability and offers solutions to the security problems found in the Mobile IP registration part. The key features of this protocol over other protocols are: better survivability, transparent failure detection and recovery, reduced complexity of the system and workload, secure data transfer and improved overall performance.

Keywords-Mobility Agents; VPN; VHAHA; Fault-tolerance; Reliability; Self-certified keys; Confidentiality; Authentication; Attack prevention

I. INTRODUCTION

As mobile computing has become a reality, new technologies and protocols have been developed to provide mobile users the services that already exist for non-mobile users. Mobile Internet Protocol (Mobile IP) [1, 2] is one of those technologies; it enables a node to change its point of attachment to the Internet in a manner that is transparent to the applications on top of the protocol stack. A Mobile IP based system extends the IP-based mobility of nodes by providing Mobile Nodes (MNs) with continuous network connections while they change their locations. In other words, it transparently provides mobility for nodes while remaining backward compatible with current IP routing schemes, by using two types of Mobility Agents (MA): the Home Agent (HA) and the Foreign Agent (FA). While the HA is responsible for providing a permanent location to each mobile user, the FA is responsible for providing a Care-Of-Address (COA) to each mobile user who visits a Foreign Network. Each HA maintains a Home Location Register (HLR), which contains the MN's Home Address, current COA, secrets and other related information. Similarly, each FA maintains a Visitors Location Register (VLR), which holds information about the MNs for which the FA provides services. When the MN is within the coverage area of the HA, it gets its service from the HA. If the MN roams away from the coverage of the HA, it has to register with one of the surrounding FAs to obtain a COA. This process is known as "Registration", and the association between MN and FA is known as a "Mobility Binding". In the Mobile IP scenario described above, the HAs are a single point of failure, because all communication to the MN goes through the HA, since the Correspondent Node (CN) knows only the Home Address. Hence, when a particular HA fails, all the MNs getting service from the faulty HA are affected. According to the current specification of Mobile IP, when a MN detects that its HA has failed, it has to search for some other HA and recreate the bindings and other details. This lacks transparency, since everything is done by the MN; it is also a time-consuming process that leads to service interruption. Another important issue is the security problem in Mobile IP registration. Since the MN is allowed to change its point of attachment, it is highly mandatory to verify and authenticate the current point of attachment. As a form of remote redirection that involves all the mobility entities, the registration part is very crucial and must be guarded against any malicious attacks that might try to take illegitimate advantage of any participating principals. Hence, the major requirements of a Mobile IPv6 environment are fault-tolerant services and communication security. Apart from these basic requirements, the Mobile IP framework should have the following characteristics: 1) the current communication architecture must not be changed; 2) the mobile node hardware should be simple and should not require complicated calculations; 3) the system must not increase the number of times that communication data must be exchanged; 4) all communication entities are to be strongly authenticated; 5) communication confidentiality and location privacy are to be ensured; and 6) communication data must be protected from active and passive attacks. Based on the above requirements and goals, this paper proposes "A secure and fault-tolerant framework for Mobile IPv6 based networks" as a complete system architecture and an extension to Mobile IPv6 that supports reliability and offers solutions to the registration security problems. The key features of the proposed approach over other approaches are:


better survivability, transparent failure detection and recovery, reduced complexity of the system and workload, secure data transfer and improved overall performance. Despite its practicality, the proposed framework provides a scalable solution for authentication, while setting minimal computational overhead on the Mobile Node and the Mobility Agents.

II. EARLIER RESEARCH AND STUDIES


Several solutions have been proposed for the reliability problem. The proposals found in [3-8] are for Mobile IPv4 and those in [9-15] are for Mobile IPv6 based networks. The architecture and functionality of Mobile IPv4 and Mobile IPv6 are entirely different; hence, solutions that are applicable to Mobile IPv4 cannot be applied to Mobile IPv6, for the reason cited here: in Mobile IPv4, the single HA at the Home Link serves the MN, which makes Mobile IPv4 prone to single point of failure problems. To overcome this problem, the Mobile IPv4 solutions propose HA redundancy. But in Mobile IPv6, instead of having a single HA, the entire Home Link serves the MNs. The methods proposed in [9, 10, 11, 12] provide solutions for Mobile IPv6 based networks. In the Inter Home Agent Redundancy Protocol (HAHA) [9], one primary HA provides service to the MNs and multiple HAs from different Home Links are configured as secondary HAs. When the primary HA fails, a secondary HA acts as the primary HA. But the registration delay is high, and the approach is not transparent to the MNs. The Home Agent Redundancy Protocol (HARP) proposed in [10] is similar to [9], but here all redundant HAs are taken from the same domain. The advantages of this approach are that the registration delay and the computational overhead are low when compared to the other methods; its drawback is that the Home Link is a single point of failure. The Virtual Home Agent Redundancy Protocol (VHARP) [11, 12, 13] is similar to [10], but it also deals with load balancing issues. In [14], reliability is provided by using two HAs in the same Home Link. The primary and secondary HAs are synchronized by using transport layer connections. This approach provides transparency and load balancing, and the registration delay and service interruptions are low. But if the Home Link or both HAs fail, then the entire network collapses. Moreover, none of the above approaches deals with registration security, even though it plays a crucial role. Registration in Mobile IP must be made secure so that fraudulent registrations can be detected and rejected. Otherwise, any malicious user on the Internet could disrupt communication between the home agent and the mobile node by the simple expedient of supplying a registration request containing a bogus care-of-address. The secret key based authentication in base Mobile IP is not scalable. Besides, it also cannot provide the non-repudiation that seems likely to be demanded by various parties, especially in commercial settings. Many proposals are available to overcome the above problems, and they can be broadly classified under the following categories: (i) Certificate Authority - Public Key Infrastructure (CA-PKI) based protocols [15], (ii) minimal public key based protocols [16], (iii) hybrid techniques of secret and CA-PKI based protocols [17], and (iv) self-certified public key based protocols [18]. (i) CA-PKI based mechanisms define a new Certificate Extension message format with the intention of carrying information about certificates, which now must always be appended to all control messages. Due to its high computational complexity, this approach is not suitable for the wireless environment. (ii) The minimal public key based method aims to provide public key based authentication and a scalable solution for authentication while placing only minimal computation on the mobile host. Even though this approach uses only a minimal public key based framework to prevent replay attacks, the framework must be executed using complex computations due to the creation of digital signatures at the MN, which increases the computational complexity at the MN. (iii) The hybrid technique of secret and CA-PKI based protocols proposes combining the secret key with a minimal public key, besides producing the communication session key in the mobile node registration protocol. The drawback of this approach is the registration delay: compared to other protocols, it considerably increases the delay in registration. In addition, its solution to location anonymity is only partial. (iv) Providing strong security while reducing the registration delay and computational complexity is an important issue in Mobile IP. Hence, for the first time, self-certified public keys are used in [18, 19], which considerably reduce the time complexity of the system. But this proposal does not address the authentication issues of the CN and of Binding Update (BU) messages, which leads to Denial-of-Service attacks and impersonation problems. Based on the above discussion, it is observed that a secure and fault-tolerant framework is mandatory, one that tolerates inter home link failures and ensures secure registration without increasing the registration overhead and the computational complexity of the system.

III. PROPOSED APPROACH

This paper proposes a fault-tolerant framework and registration protocol for Mobile IPv6 based networks to provide reliability and security. The solution is based on inter-link HA redundancy and self-certified keys. The proposal is divided into two major parts: (i) the Virtual Home Agent Redundancy (VHAHA) architecture design and (ii) the VHAHA registration protocol. The first part proposes the design of the fault-tolerant framework, while the second part ensures secure registration with the Mobility Agents. The proposed approach provides reliability and security by introducing extensions to the overall functionality and operation of current Mobile IPv6. The advantages of this approach are: reliable Mobile IPv6 operations, better survivability, transparent failure detection and recovery, reduced complexity of the system and workload,


Figure 1. VHAHA Architecture

secure data transfer and improved overall performance. The results are also verified by simulation. The simulation results show that the proposed approach achieves the desired outcome with minimal registration delay and computational overhead.

IV. FRAMEWORK OF VHAHA


The design of the VHAHA framework is divided into three major modules: (i) architecture design, (ii) VHAHA scenario and data transmission, and (iii) the failure detection and recovery algorithm.

A. Architecture Design

The architecture of the proposed protocol is given in Fig. 1. As part of Mobile IPv6, multiple Home Links are available in the network and each Home Link consists of multiple HAs. In this approach, one HA is configured as the Active HA, some of the HAs are configured as Backup HAs and a few other HAs of the Home Link are configured as Inactive HAs. The Active HA provides all Mobile IPv6 services, the Inactive HAs provide a minimal set of services and the Backup HAs provide a mid-range of services. VHAHA requires that for each MN there should be at least two HAs (one Active HA, and the other could be any one of the Backup HAs) holding its binding at any instance of time. The functionalities of these HAs are given below:
Active HA: There must be a HA on the Home Link serving as the Active HA, and only one HA can act as the Active HA at any instance of time. The Active HA maintains the binding cache, which stores the mobility bindings of all MNs that are registered under it; it will hold [0-N] mobility bindings. It is responsible for data delivery and the exclusive services. The exclusive services mainly include Home Registration, Deregistration, Registration, Registration-refresh, IKE and DHAD. Besides these, it provides regular HA services such as tunneling, reverse tunneling, Return Routability and IPv6 neighbor discovery.
Backup HA: For each MN, there will be at least two HAs acting as Backup HAs (there is no limit on the maximum number of HAs). The purpose of the Backup HA is to provide continuous HA services in case of HA failures or overloading. The Backup HA can hold [1-N] bindings in its binding cache. It provides all the services of the Active HA except the exclusive services.
Inactive HA: Inactive HAs do not hold any mobility bindings and provide only a limited subset of the Backup HA services, since any HA in the Home Link can act as an Inactive HA.
The VHAHA is configured with a static IP address that is referred to as the Global HA Address. The Global HA address is defined by the Virtual Identifier and a set of IP addresses. The VHAHA may associate an Active HA's real IP address on an interface with the Global HA address. There is no restriction against mapping the Global HA address to a different Active HA: in case of the failure of an Active HA, the Global address can be mapped to some other Backup HA that is going to act as the Active HA. If the Active HA becomes unavailable, the highest priority Backup HA becomes the Active HA after a short delay, providing a controlled transition of the responsibility with minimal service interruption. Besides minimizing service interruption by providing rapid transition from Active to Backup HA, the VHAHA design incorporates optimizations that reduce protocol complexity while guaranteeing controlled HA transition for typical operational scenarios. The significant feature of this architecture is that the entire process is completely transparent to the MN: the MN knows only the Global HA address and is unaware of the actual Active HA. It also does not know about the transitions between Backup and Active HAs.



B. VHAHA Scenario
Two or more HAs (one Active HA and a minimum of one Backup HA) from each Home Link are selected. Then a Virtual Private Network (VPN) [20, 21, 22, 23] is constructed among the selected HAs through the existing internetworking. This VPN is assigned the Global HA address and acts as the Global HA. The HAs of the VPN announce their presence by periodically multicasting Heart Beat messages inside the VPN, so each HA knows the status of all other HAs in the private network.

C. Failure Detection and Recovery
In contrast to Mobile IPv6 and other approaches, failure detection and tolerance are transparent to the MN. Since the MN is unaware of this process, over-the-air (OTA) messages are reduced, the complexity of the system is reduced and the performance is improved. The failure detection and recovery algorithm is illustrated in Procedure 1.

Figure 2. VHAHA Scenario.

The scenario of the VHAHA protocol is given in Fig. 2. The protocol works at layer 3. In this approach, the HAs are located in different Home Links while still sharing the same subnet address. The shared subnet address is known as the Global HA address, and the HAs in the individual home links are identified using local HA addresses. The data destined to the MN is addressed to the Global HA address of the MN, decapsulated by the Active HA and forwarded to the MN appropriately using base Mobile IP. The various steps in forwarding the data packets are illustrated in Fig. 2.

Begin
  Calculate the priority of the HAs that are part of the Virtual Private Network:
    workload(HAi) ← (Current mobility bindings of HAi × Current Throughput) /
                    (Maximum no. of mobility bindings of HAi × Maximum Throughput)
    Priority(HAi) ← 1 / workload(HAi)
  If (the HAs failed to receive heartbeats from HAx) Then
    HAx ← Faulty
  If (HAx == Faulty) Then
    Delete the entries of HAx from the tables of all HAi, where 1 ≤ i ≤ n, i ≠ x
    If (HAx == Active HA) Then
      Activate the exclusive services of the Backup HA
      Active HA ← Backup HA with the highest priority
      Backup HA ← Select_Backup_HA(Inactive HA with the highest priority);
        activate the required services and acquire the binding details from the
        primary HA to synchronize with it
    If (HAx == Backup HA) Then
      Backup HA ← Select_Backup_HA(Inactive HA with the highest priority);
        activate the required services and acquire the binding details from the
        primary HA to synchronize with it
    If (HAx == Inactive HA) Then
      Do nothing until it recovers; if it permanently goes off, select an
        Inactive HA from the Home Link of HAx
End

Procedure 1: Failure detection and Recovery

Figure 3. Packet Formats


The packet formats are shown in Fig. 3. As in Mobile IPv6, the CNs and MNs know only the Global HA address. A packet (Fig. 3a) addressed to the MN by a CN (Fig. 2, Step 1) is directed to the Home Network using the Global HA address (Fig. 2, Step 2) of the MN. Here, the Home Network refers to the VPN that is constructed using the above procedure. Once the packet reaches the Global HA address, all HAs that belong to the Global HA address hear the packet, and the one that is closer to the MN and has less workload picks up (Fig. 2, Step 3) the packet (Fig. 3b) using the internal routing mechanism. The packet is then routed to the Active HA, and this Active HA does the required processing and tunnels the packet (Fig. 3c) to the COA of the MN (Fig. 2, Step 4). Finally, the COA decapsulates the packet and sends it (Fig. 3d) to the MN using base Mobile IPv6 (Fig. 2, Step 5).
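A toy Python sketch of this encapsulation chain is shown below (our own illustration; the dictionary-based packet representation is an assumption):

def tunnel_to_coa(packet, global_ha_addr, coa):
    # encapsulation performed by the Active HA (Fig. 2, Steps 2-4; Fig. 3b/3c);
    # packets from the CN are addressed to the Global HA address (Fig. 3a)
    assert packet['dst'] == global_ha_addr
    return {'outer_src': global_ha_addr, 'outer_dst': coa, 'inner': packet}

def decapsulate_at_coa(tunneled):
    # decapsulation at the COA before base Mobile IPv6 delivery (Step 5, Fig. 3d)
    return tunneled['inner']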

The workload of each HA in the VPN is calculated based on the number of mobility bindings associated with it. This workload is used for setting the priority of the HAs, and the priority is dynamically updated based on changes in the number of mobility bindings. The heartbeat messages are exchanged among the HAs at a constant rate and are used for detecting failures. When any HA fails, it stops broadcasting the heartbeat message, so all other HAs stop receiving heartbeats from the faulty one. Hence, the failure of the faulty HA can be detected by all other HAs that are part of the VPN. Once the failure is detected, the entry of the faulty HA is deleted from all other HAs that are part of the Global HA

TABLE I. COMPARISON OF VHAHA WITH OTHER APPROACHES

Metric                                MIPv6   HAHA                 HARP                  VHARP                 TCP                   VHAHA
Recovery overhead                     High    No                   No                    No                    No                    No
Fault tolerance mechanism             No      MN initiated         MN initiated          HA initiated          HA initiated          HA initiated
Fault tolerant range                  No      Covers entire range  Limited to Home Link  Limited to Home Link  Limited to Home Link  Covers entire range
Transparency                          No      No                   No                    Yes                   Yes                   Yes
OTA messages exchanged for recovery   More    More                 Less                  Nil                   Nil                   Nil

subnet. Then, if the faulty HA is the Active HA, the Backup HA with the highest priority is mapped to the Active HA role by activating its exclusive services; the new Active HA then becomes the owner of the Global HA address. If the faulty HA is a Backup HA, then one of the Inactive HAs is set as the corresponding Backup HA by activating the required services and acquiring the binding cache entries from the primary HA. If an Inactive HA fails, nothing needs to be done; but if it permanently goes off, then some other HA from the link is set as an Inactive HA. The significant feature of this approach is that the Global HA address never changes: based on the availability of the Active HA, the Global HA address is mapped to the appropriate Backup HA. The CN and the MN know only the Global HA address and do not know anything about this mapping of addresses. All other internal mappings are handled by VHAHA's internal routing mechanism.
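A hedged Python sketch of the priority computation and heartbeat-driven failover of Procedure 1 follows (our own illustration; the miss limit and function names are assumptions):

import time

HEARTBEAT_MISS_LIMIT = 3   # assumed: three missed heartbeats mark an HA faulty

def priority(bindings, max_bindings, throughput, max_throughput):
    # priority is the inverse of the normalized workload (Procedure 1)
    workload = (bindings * throughput) / (max_bindings * max_throughput)
    return 1.0 / workload if workload else float('inf')

def detect_faulty(last_heartbeat, interval, now=None):
    # return the set of HAs whose heartbeats have been missed for too long
    now = now or time.time()
    return {ha for ha, seen in last_heartbeat.items()
            if now - seen > HEARTBEAT_MISS_LIMIT * interval}

def recover(roles, priorities, faulty_ha):
    # remap roles after a failure: backups are promoted by priority
    role = roles.pop(faulty_ha, None)
    backups = sorted((ha for ha, r in roles.items() if r == 'backup'),
                     key=priorities.get, reverse=True)
    inactive = sorted((ha for ha, r in roles.items() if r == 'inactive'),
                      key=priorities.get, reverse=True)
    if role == 'active' and backups:
        roles[backups[0]] = 'active'       # highest-priority backup takes over
        if inactive:
            roles[inactive[0]] = 'backup'  # refill the backup set
    elif role == 'backup' and inactive:
        roles[inactive[0]] = 'backup'
    return roles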


D.

Performance Evaluations The proposed protocol will introduce certain amount of overhead in the network to construct the Virtual Network and to provide reliability. Hence, the performance of the proposed approach depends on two overheads: (a) Time and (b) Control message overhead. In the proposed approach, these two overheads depend on the following four factors: (1) VHAHA configuration (2) Home Registration (3) failure detection and recovery and (4) Over-the-air communication between MNs and Mobility Agents. 1) VHAHA configuration: The VHAHA is configured only during the initialization of the network and it will be updated only when the inactive HA fails. This happens to be a rare case, since most of the implementations will not take any action if the Inactive HA fails and let the Inactive HA to heal automatically because it will not affect the overall performance. Hence, this can be considered as one time cost and it is negligible. The Time complexity and message complexity introduced to the over all systems are negligible. 2) Home Registration: This factor depends on the total numbers and locations of Active, Backup and Inactive HAs that are part of VHAHA network. The registration messages include the number of messages required for the MN to get registered with the Active HA and the control messages required by the Active HA to update this information in all other backup and Inactive HAs of the MN. In the proposed approach, the Initial registration of the MN should take place

E. Simulation Results and Analysis

The proposed approach is compared with simple Mobile IPv6, HAHA, HARP, VHARP, and TCP. The comparison results are given in Table I. From the comparisons, it is found that VHAHA is persistent and has less overhead than the other approaches. Simulation experiments are performed to verify the performance of the proposed protocol. This is done by extending

the Mobile IP model given in ns-2 [24]. MIPv6 does not use any reliability mechanism, so its time to detect and recover from a failure is high. TCP, VHARP and VHAHA take almost the same time to recover from a failure when HAs in the same link fail; but when the entire network fails, only VHAHA survives, and all the other methods collapse. The following parameters are used to evaluate the performance: (i) failure detection and recovery time when a HA fails in the Home Link, (ii) failure detection and recovery time when the entire Home Link fails, (iii) Home Registration delay, (iv) packet loss, (v) number of messages exchanged during registration, and (vi) failure detection and recovery messages. The simulation results are shown in Figures 4, 5, 6, 7, 8 and 9, and are compared with Mobile IPv6, TCP and VHARP to analyze the performance of the proposed approach.

1) Failure detection and recovery time when a HA fails in the Home Link: When a particular HA fails, the other HAs no longer hear its heartbeat messages. When the heartbeat message from a particular HA is missed three consecutive times, that HA is declared faulty. Once the failure is detected, the corresponding backup HA is activated by the recovery procedure. The failure detection and recovery time (T_FD-R) is calculated using equation (1).

T_FD-R = 3 T_H + prop.delayOfVHAHA    (1)

where T_H represents the time required for the HAs to hear the heartbeat messages.

Figure 4. Comparison of failure detection and recovery time when a HA fails in the Home Link (failure detection and recovery time in seconds versus number of MNs, for MIPv6, TCP, VHARP and VHAHA).

Fig. 4 shows the T_FD-R of VHAHA and the other protocols. Base Mobile IPv6 takes no action for failure detection and recovery of HAs; this must be handled by the MN itself, so the time taken for failure detection and recovery is very high, causing service interruption to the MNs affected by the faulty HA. The other schemes, TCP, VHARP and VHAHA, handle the problem and take almost the same amount of time for failure detection and recovery.

2) Failure detection and recovery time when the entire Home Link fails: The proposed protocol constructs the VPN from HAs of different Home Links. Hence, even when one Home Link fails completely, the proposed approach handles the problem in the normal manner described in the previous section, whereas TCP and VHARP collapse and failure detection and recovery are left to the MNs. This situation is represented in Fig. 5, where VHAHA's recovery time is almost equal to that of the previous scenario, while TCP and VHARP fail to handle the situation and their recovery time is very high, equal to that of base MIPv6.

Figure 5. Comparison of failure detection and recovery time when the entire Home Link fails (failure detection and recovery time in seconds versus number of MNs, for MIPv6, TCP, VHARP and VHAHA).

3) Registration delay: The registration delay is calculated using equation (2):

reg.delay = reg.delay_Active-HA + prop.delayOfVHAHA    (2)

The Active HA registration delay is equal to that of base MIPv6. Nowadays the bandwidth of the core network is very high, so the propagation delay of VHAHA is very small. The values are given in Fig. 6 and compared with the other protocols.

Figure 6. Comparison of Home Registration delay (registration time in seconds versus number of MNs, for MIPv6, TCP, VHARP and VHAHA).

4) Packet loss: The packet losses of the compared protocols are represented in Fig. 7. From the figure, it is inferred that packet loss in the proposed approach is much lower than in MIPv6, TCP and VHARP, because it is able to handle both intra-link and inter-link failures.

Figure 7. Comparison of Packet Loss (packets/sec versus number of MNs, for MIPv6, TCP, VHARP and VHAHA).
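The three-missed-heartbeats rule behind equation (1) can be illustrated with a small monitor. This is a sketch under stated assumptions only; the class and its fields are invented for illustration and are not part of the protocol specification:

import time

class HeartbeatMonitor:
    """Declares a HA faulty once no heartbeat is heard for 3 * T_H (equation 1)."""
    def __init__(self, t_h):
        self.t_h = t_h            # heartbeat period T_H
        self.last_heard = {}      # HA id -> arrival time of its last multicast heartbeat

    def on_heartbeat(self, ha_id):
        self.last_heard[ha_id] = time.monotonic()

    def faulty_agents(self):
        # Missing three consecutive heartbeats marks the HA as faulty;
        # recovery then costs a single message to activate the backup HA.
        now = time.monotonic()
        return [ha for ha, t in self.last_heard.items() if now - t > 3 * self.t_h]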


5) Number of messages exchanged during registration: This includes the messages required to register with the Active HA and the Binding Update messages to the backup HAs during initial registration, FA registration and deregistration. Again, the bandwidth of the core network is very high, so the delay experienced by the MN is negligible. This is illustrated in Fig. 8, which shows that the number of messages exchanged in VHAHA is somewhat higher than in the base protocol but comparable with the VHARP protocol.

Figure 8. Comparison of the number of messages exchanged during Home Registration (messages versus simulation time up to 250 s, for MIPv6, TCP, VHARP and VHAHA).

6) Failure detection and recovery messages: This is represented in Fig. 9. Here also, the complexity of VHAHA is approximately equal to that of VHARP, while the TCP-based mechanism has less complexity and the base protocol has the maximum complexity.

Figure 9. Comparison of the number of failure detection and recovery messages (messages versus simulation time up to 250 s, for MIPv6, TCP, VHARP and VHAHA).

F. Observation

From the results and analysis, it is observed that the proposed approach outperforms the other reliability mechanisms because it survives even when the entire Home Link fails. The overhead and complexity introduced by the proposed approach are almost negligible when compared to the existing recovery mechanisms: the failure detection and recovery overhead increases by 2% compared to VHARP, the Home Registration delay also increases by 2% compared to VHARP, and the packet loss is reduced by 25% compared to all the other approaches.

V. VHAHA SECURE REGISTRATION

The VHAHA secure registration protocol is based on self-certified keys. Self-certified public keys were introduced by Girault. In contrast to the traditional public key infrastructure (PKI), self-certified public keys do not require the use of certificates to guarantee the authenticity of public keys: the authenticity of a self-certified public key is verified implicitly by the proper use of the private key in any cryptographic application, in a logically single step. Thus, there is no chain of certificate authorities. This property of self-certified keys optimizes the registration delay of the proposed protocol while ensuring registration security.

A. VHAHA Secure Registration Protocol

The proposed protocol is divided into three parts: (i) the mobile node's initial registration with the home network, (ii) the registration protocol of the MN (from a Foreign Network) with authentication, and (iii) authentication between the MN and the CN. The initial registration part deals with how the MN is initially registered with its Home Network: first, the identity of the MN is verified, and then the nonce, the temporary identity and the secret key between the MN and the HA are assigned to the MN.
__________________________________________________
a. Mobile node initial registration with home network (VHAHA)
(IR1) Verify the identity of the MN.
(IR2) Allocate nonce, temporary ID H(IDMN || NHA) and shared secret KMN-HA.
(IR3) Transfer these details to the MN through a secret channel and also store them in the HA's database.

b. Registration protocol of MN (from Foreign Network) with authentication
Agent Advertisement:
(AA1) FA → MN: M1, where M1 = Advertisement, FAid, MNCoA, NFA, wF
Registration:
(R1) MN → FA: M2, <M2>KMN-HA, where M2 = Request, Key-Request, FAid, HAid, MNCoA, NHA, NMN, NFA, H(IDMN || NHA), wH
(R2) FA: (upon receipt of R1)
• Validate NFA and compute the key KFA-HA:

KFA-HA = H1[(wH^h(IH) + IH)^xF mod n] = H1[(wH^h(HAid) + HAid)^xF mod n]

• Compute MAC


(R3) FA → HA: M3, <M3>KFA-HA, where M3 = M2, <M2>KMN-HA, FAid, wF
(R4) HA: (upon receipt of R3)
• Check whether FAid in M3 equals FAid in R1.
• Compute the key: KFA-HA = H1[(wF^h(IF) + IF)^xH mod n] = H1[(wF^h(FAid) + FAid)^xH mod n]
• Compute MAC* and compare it with the MAC value received. This is the authentication of FA by HA.
• Check the identity of the MN in the HA's database.
• Produce a new nonce N'HA, a new temporary ID H(IDMN || N'HA) and a new session key K'MN-FA, and overlay the details in the database.
(R5) HA → FA: M4, <M4>KFA-HA
If IDMN is found in the HA's dynamic parameter database, M4 = M5, <M5>KMN-HA, NFA, {KMN-FA}KFA-HA
Else, M4 = M5, <M5>K0MN-HA, NFA, {KMN-FA}KFA-HA
where M5 = Reply, Result, Key-Reply, MNHM, HAid, N'HA, NMN
(R6) FA: (upon receipt of R5)
• Validate NFA.
• Validate <M4>KFA-HA with KFA-HA. This is the authentication of HA to FA.
• Decrypt {KMN-FA}KFA-HA with KFA-HA and get the session key KMN-FA.
(R7) FA → MN: M5, <M5>KMN-HA
(R8) MN: (upon receipt of R7)
• Validate NMN.
• Validate <M5>KMN-HA with the secret key KMN-HA used in R1. This is the authentication of HA to MN.

c. Authentication between MN and CN
(A1) MN → HA2: M1, <M1>KMN-HA2, where M1 = Auth-Request, MNCOA, CNCOA, NMN, wMN
(A2) HA2: (upon receipt of A1)
• Validate NMN and compute MAC.
• HA2 → CN: M2, <M2>KHA2-CN, where M2 = M1, <M1>KMN-HA2
(A3) CN → MN: M3, <M3>KCN-MN, wCN, where M3 = Auth-Response, MNCOA, CNCOA, h(NMN)
• Validate MAC and nonce. This is the authentication of HA2 and MN by CN.
• Compute KCN-MN.
(A4) MN: (upon receipt of A3)
• Validate MAC and nonce. This is the authentication of HA2 and CN by MN.
• Compute KCN-MN.
__________________________________________________
Procedure 2: VHAHA Secure Registration Protocol
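To make steps (IR2), (R2) and (R4) concrete, the following rough sketch shows the temporary identity H(IDMN || NHA) and the self-certified shared-key computation K = H1[(w^h(ID) + ID)^x mod n]. The use of SHA-256 for both h and H1, the byte encodings and the modulus size are illustrative assumptions; the paper leaves these choices abstract:

import hashlib

def h(data: bytes) -> int:
    # Illustrative hash-to-integer; h and H1 are abstract in the protocol.
    return int.from_bytes(hashlib.sha256(data).digest(), "big")

def temporary_id(id_mn: bytes, nonce_ha: bytes) -> bytes:
    # H(IDMN || NHA): the MN's real identity never travels in the clear,
    # which is what gives the protocol its location anonymity.
    return hashlib.sha256(id_mn + nonce_ha).digest()

def shared_key(witness: int, peer_id: bytes, private_key: int, n: int) -> bytes:
    # K = H1[(w^h(ID) + ID)^x mod n]: no certificate is checked explicitly;
    # authenticity holds implicitly through correct use of the private key x.
    base = (pow(witness, h(peer_id), n) + int.from_bytes(peer_id, "big")) % n
    shared = pow(base, private_key, n)
    return hashlib.sha256(shared.to_bytes((n.bit_length() + 7) // 8, "big")).digest()

Both the FA (in R2, using the HA's witness wH and its private key xF) and the HA (in R4, using wF and xH) arrive at the same KFA-HA without exchanging certificates, which is what keeps the registration delay low.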

The second part deals with how the MN is registered with the Foreign Network when it roams away from the Home Network. There is no change in the Agent Advertisement part, except that the MN authenticates the FA using its witness value. In the Registration part, instead of passing the MN's actual identity, the identity is combined with a nonce and then hashed, which provides location anonymity; the witness value is also passed, which enables the calculation of the shared secrets. The third part deals with the authentication between the MN and the CN. This authentication enables the MN to communicate with the CN directly, which resolves the triangle routing problem. First, the MN sends an authentication request to its Home Agent (HA2). HA2 verifies and authenticates the MN and forwards the message to the CN. The CN validates the MN, calculates the shared secret and sends a response to the MN. Finally, the MN calculates the shared secret and validates the CN. Then the MN and the CN can communicate with each other directly. The details of the proposed protocol are summarized in Procedure 2.

B. Performance Evaluations

In this section, the security aspects of the proposed protocol are analyzed. The attributes considered for the analysis are: (i) data confidentiality, (ii) authentication, (iii) location anonymity and synchronization, and (iv) attack prevention.

1) Confidentiality: Data delivered through the Internet can be easily intercepted and falsified, so ensuring the confidentiality of communication data is very important in a Mobile IP environment. The data confidentiality of the various protocols and of the proposed one is listed in Table II.

TABLE II. DATA CONFIDENTIALITY ANALYSIS

Methods | MN-FA | FA-HA | MN-HA | CN-MN
Secret key | No | No | Yes | Yes
CA-PKI | No | No | Yes | Yes
Minimal Public key | Yes | No | Yes | Yes
Hybrid | Yes | Yes | Yes | Yes
Self certified | Yes | Yes | Yes | Yes
VHAHA secure Registration | Yes | Yes | Yes | Yes

The proposed approach achieves data confidentiality between all pairs of network components. From the table, it is found that the Hybrid and Self certified approaches provide the same result, but the computational complexity of those protocols is higher than that of the proposed one, due to the use of public keys and dynamically changing secret keys.


2) Authentication: Prior to data delivery, both parties must be able to authenticate each other's identity; this is necessary to prevent bogus parties from sending unwanted messages to the entities. The Mobile IP user authentication protocol differs from general user service authentication protocols. Table III shows the authentication analysis of the various protocols compared with the proposed one. From the analysis, it is found that VHAHA secure registration excels all the other approaches because it provides authentication between all pairs of the networking nodes.

TABLE III. AUTHENTICATION ANALYSIS

Methods | MN-FA | FA-HA | MN-HA | MN-CN
Secret key | None | None | MAC | None
CA-PKI | Digital Signature | Digital Signature | Digital Signature | None
Minimal Public key | None | None | Digital Signature | None
Hybrid | None | Digital Signature | Symmetric Encryption | None
Self certified | None | MAC (static/dynamic key) | MAC (dynamic key) | None
VHAHA secure Registration | TTP | MAC (static/dynamic key) | MAC (dynamic key) | MAC (dynamic key)

3) Location Anonymity and Synchronization: The proposed approach uses a temporary identity instead of the actual identity of the MN, so the actual location of the MN is not revealed to the outside environment (i.e., CNs and foreign links). Similarly, the proposed approach maintains two databases, (i) an initial parameter base and (ii) a dynamic parameter base, which are used for maintaining synchronization between MNs and HAs. The results are given in Table IV.

TABLE IV. LOCATION ANONYMITY AND SYNCHRONIZATION

Methods | Location anonymity | Synchronization
Secret key | No | No
CA-PKI | No | No
Minimal Public key | No | No
Hybrid | No | No
Self certified | No | No
VHAHA secure Registration | Yes | Yes

4) Attack Prevention: The following attacks are considered for the analysis: (i) replay attack, (ii) TCP splicing attack, (iii) guess attack, (iv) denial-of-service attack, (v) man-in-the-middle attack and (vi) active attacks. Table V shows the attack prevention analysis of the various approaches.

TABLE V. ATTACK PREVENTION ANALYSIS

Methods | Replay | Splicing | Guess | DOS | Man in middle | Active
Secret key | No | No | No | No | Yes | Yes
CA-PKI | No | Yes | No | Yes | Yes | Yes
Minimal Public key | Yes | No | No | Yes | Yes | Yes
Hybrid | Yes | Yes | Yes | Yes | Yes | Yes
Self certified | Yes | Yes | Yes | Yes | Yes | Yes
VHAHA secure Registration | Yes | Yes | Yes | Yes | Yes | Yes

C. Simulation Results and Analysis

The system parameters and the cryptographic operation times on the FA, HA and MN are obtained from [25]. The following parameters are used for the evaluation: (i) registration delay and (ii) registration signaling traffic.

1) Registration delay: The registration delay plays an important role in deciding the performance of the Mobile IP protocol. While strengthening the security of the Mobile IP registration, the data transmission speed cannot be compromised, because it directly impacts the end user: if the delay is high, the interruption and packet loss will be high. Due to the properties of public keys, the registration delay of public-key based protocols is naturally very high, and their packet loss is also high; protocols that avoid heavyweight public-key operations have a lower delay. The registration time is calculated using equation (3), and the results are given in Table VI:

Reg.Time = RREQ_MN-FA + RREQ_FA-HA + RREP_HA-FA + RREP_FA-MN    (3)

where RREQ_MN-FA is the time taken to send the registration request from the MN to the FA, RREQ_FA-HA is the time taken to forward the registration request from the FA to the HA, RREP_HA-FA is the time taken to send the registration reply from the HA to the FA, and RREP_FA-MN is the time taken to forward the registration reply from the FA to the MN.
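Equation (3) is a plain sum of the four per-hop times; for instance, recomputing the VHAHA row of Table VI below:

# Reg.Time = RREQ(MN-FA) + RREQ(FA-HA) + RREP(HA-FA) + RREP(FA-MN)   -- equation (3)
def registration_time(rreq_mn_fa, rreq_fa_ha, rrep_ha_fa, rrep_fa_mn):
    return rreq_mn_fa + rreq_fa_ha + rrep_ha_fa + rrep_fa_mn

# VHAHA row of Table VI (all values in ms):
print(registration_time(3.3813, 7.64708, 1.0156, 2.7615))  # 14.80548, reported as 14.8056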

TABLE VI. COMPARISON OF REGISTRATION DELAY

Methods | RREQ MN-FA (1) | RREQ FA-HA (2) | RREP HA-FA (3) | RREP FA-MN (4) | Delay (ms) (1)+(2)+(3)+(4)
Secret key | 2.7191 | 1.004 | 1.0144 | 2.7031 | 7.4406
CA-PKI | 7.6417 | 5.9266 | 6.3170 | 7.6457 | 27.5312
Minimal Public key | 2.8119 | 0.9966 | 10.8770 | 7.7466 | 22.4322
Hybrid | 2.7934 | 16.0565 | 15.011 | 2.8007 | 36.6625
Self certified | 3.4804 | 14.2649 | 1.0176 | 2.8402 | 21.6023
VHAHA Registration | 3.3813 | 7.64708 | 1.0156 | 2.7615 | 14.8056

2) Registration Signaling Traffic: The computation overhead depends on the amount of traffic (i.e., the packet size) that must be transmitted to complete the registration successfully. If the amount of signaling traffic is high, the computational complexity at the MN and at the mobility agents will be high. The signaling traffic of the protocols considered for comparison is computed and given in Table VII. From the table, it is observed that VHAHA secure registration has the lowest traffic; hence the complexity is low both at the MNs and at the Mobility Agents, and because of the low traffic the bandwidth consumption is comparatively small.
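Likewise, the Size column of Table VII below is just the sum of the four per-leg message sizes; for example, for VHAHA registration:

# Total registration signaling = MN-FA + FA-HA + HA-FA + FA-MN message sizes (bytes).
vhaha_legs = {"MN-FA": 206, "FA-HA": 364, "HA-FA": 108, "FA-MN": 54}
print(sum(vhaha_legs.values()))  # 732 bytes, the Size entry for VHAHA in Table VII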


TABLE VII. COMPARISON OF REGISTRATION TRAFFIC

Methods | MN-FA | FA-HA | HA-FA | FA-MN | Size (bytes)
Secret key | 50 | 50 | 46 | 46 | 192
CA-PKI | 224 | 288 | 64 | 128 | 704
Minimal Public key | 178 | 178 | 174 | 174 | 704
Hybrid | 66 | 578 | 582 | 66 | 1292
Self certified | 226 | 404 | 124 | 70 | 824
VHAHA Registration | 206 | 364 | 108 | 54 | 732

D. Observation

The proposed approach does not affect the complexity of the initial registration, but the foreign network registration delay increases due to the application of the security algorithms. From Procedure 2, it can be seen that the proposed scheme does not change the number of messages exchanged in the registration process; however, the size of the messages increases due to the security attributes passed along with the registration messages. From the results and analysis, it is observed that VHAHA secure registration reduces the registration delay overhead by 40% and the signaling traffic overhead by 20% when compared to the other approaches.

VI. CONCLUSION AND FUTURE WORK

This paper proposes a fault-tolerant and secure framework for Mobile IPv6 based networks that is based on an inter-home-link HA redundancy scheme and self-certified keys. The performance analysis and the comparison results show that the proposed approach has low overhead and offers advantages such as better survivability, transparent failure detection and recovery, reduced system complexity and workload, secure data transfer and improved overall performance. Moreover, the proposed approach is compatible with the existing Mobile IP standard and does not require any architectural changes; it is also useful in future applications like VoIP and 4G. The formal load balancing of workload among the HAs of the VPN is left as future work.

REFERENCES
[1] C. Perkins, D. Johnson, and J. Arkko, "Mobility Support in IPv6," IETF Draft, draft-ietf-mobileip-ipv6-24, August 2003.
[2] C. Perkins, RFC 3344: "IP Mobility Support for IPv4," August 2002.
[3] B. Chambless and J. Binkley, "Home Agent Redundancy Protocol," IETF Draft, draft-chambless-mobileip-harp-00.txt, October 1997.
[4] R. Ghosh and G. Varghese, "Fault Tolerant Mobile IP," Washington University, Technical Report WUCS-98-11, 1998.
[5] J. Ahn and C. S. Hwang, "Efficient Fault-Tolerant Protocol for Mobility Agents in Mobile IP," in Proc. 15th Int. Parallel and Distributed Processing Symp., California, 2001.
[6] K. Leung and M. Subbarao, "Home Agent Redundancy in Mobile IP," IETF Draft, draft-subbarao-mobileip-redundancy-00.txt, June 2001.
[7] M. Khalil, "Virtual Distributed Home Agent Protocol (VDHAP)," U.S. Patent 6 430 698, August 6, 2002.
[8] J. Lin and J. Arul, "An Efficient Fault-Tolerant Approach for Mobile IP in Wireless Systems," IEEE Trans. Mobile Computing, vol. 2, no. 3, pp. 207-220, July-Sept. 2003.
[9] R. Wakikawa, V. Devarapalli, and P. Thubert, "Inter Home Agents Protocol (HAHA)," IETF Draft, draft-wakikawa-mip6-nemo-haha-00.txt, October 2003.
[10] F. Heissenhuber, W. Fritsche, and A. Riedl, "Home Agent Redundancy and Load Balancing in Mobile IPv6," in Proc. 5th International Conf. Broadband Communications, Hong Kong, 1999.
[11] Deng, H., Zhang, R., Huang, X., and Zhang, K., "Load balance for Distributed HAs in Mobile IPv6," IETF Draft, draft-wakikawa-mip6-nemo-haha-00.txt, October 2003.
[12] J. Faizan, H. El-Rewini, and M. Khalil, "Problem Statement: Home Agent Reliability," IETF Draft, draft-jfaizan-mipv6-ha-reliability-00.txt, November 2003.
[13] J. Faizan, H. El-Rewini, and M. Khalil, "Towards Reliable Mobile IPv6," Southern Methodist University, Technical Report 04-CSE-02, November 2004.
[14] Adisak Busaranun, Panita Pongpaibool, and Pichaya Supanakoon, "Simple Implement of Home Agent Reliability for Mobile IPv6 Network," Tencon, November 2006.
[15] S. Jacobs, "Mobile IP Public Key based Authentication," http://search.ietf.org/internet-drafts/draft-jacobs-mobileip-pkiauth-01.txt, 1999.
[16] Sufatrio and K. Y. Lam, "Mobile-IP Registration Protocol: a Security Attack and New Secure Minimal Public-key based Authentication," Proc. 1999 Intnl. Symp. Parallel Architectures, Sep. 1999.
[17] C. Y. Yang and C. Y. Shiu, "A Secure Mobile IP Registration Protocol," Int. J. Network Security, vol. 1, no. 1, pp. 38-45, Jul. 2005.
[18] L. Dang, W. Kou, J. Zhang, X. Cao, and J. Liu, "Improvement of Mobile IP Registration Using Self-Certified Public Keys," IEEE Transactions on Mobile Computing, June 2007.
[19] M. Girault, "Self-certified Public Keys," Advances in Cryptology (Proc. EuroCrypt 91), LNCS, vol. 547, pp. 490-497, Springer-Verlag, 1991.
[20] T. C. Wu, Y. S. Chang, and T. Y. Lin, "Improvement of Saeedni's Self-certified Key Exchange Protocols," Electronics Letters, vol. 34, no. 11, pp. 1094-1095, 1998.
[21] D. McPherson and B. Dykes, RFC 3069: "VLAN Aggregation for Efficient IP Address Allocation," February 2001. ftp://ftp.isi.edu/in-notes/rfc3069.txt.
[22] Ruixi Yuan and W. Timothy Strayer, "Virtual Private Networks: Technologies and Solutions," Addison-Wesley, April 2001.
[23] Dave Kosiur, "Building & Managing Virtual Private Networks," Wiley, October 1998.
[24] NS-2, http://www.isi.edu/nsnam.
[25] Wei Dai, "Crypto++ 5.2.1 Benchmarks," http://www.eskimo.com/~weidai/benchmarks.html, 2004.


A New Generic Taxonomy on Hybrid Malware Detection Technique

Robiah Y., Siti Rahayu S., Mohd Zaki M., Shahrin S., Faizal M. A., Marliza R.
Faculty of Information Technology and Communication, Universiti Teknikal Malaysia Melaka, Durian Tunggal, Melaka, Malaysia
[email protected], [email protected], [email protected], [email protected], [email protected], [email protected]

Abstract-Malware is a type of malicious program that replicates from a host machine and propagates through the network. It is considered one type of computer attack and intrusion that can perform a variety of malicious activities on a computer. This paper addresses the current trend of malware detection techniques and identifies the significant criteria in each technique that can improve malware detection in Intrusion Detection Systems (IDS). Several existing techniques are analyzed from 48 different studies, and the capability criteria of malware detection techniques are reviewed. From the analysis, a new generic taxonomy of malware detection techniques is proposed, named Hybrid-Malware Detection Technique (Hybrid-MDT), which consists of a Hybrid-Signature and Anomaly detection technique and a Hybrid-Specification based and Anomaly detection technique. These complement the weaknesses of the existing malware detection techniques in detecting known and unknown attacks, as well as reducing false alerts before and during an intrusion.

Keywords: Malware, taxonomy, Intrusion Detection System.

I INTRODUCTION

Malware is considered a worldwide epidemic due to malware authors' pursuit of financial gain through the theft of personal information, such as access to financial accounts. This is evidenced by the increase in computer security incidents related to vulnerabilities, from 171 in 1995 to 7,236 in 2007, as reported by the Computer Emergency Response Team [1]. One of the issues related to these vulnerability reports is malware attack, which has caused a significant worldwide epidemic in the network security environment and serious financial loss.

Hence, the wide deployment of IDSs to capture this kind of activity means processing large amounts of traffic, which generates a huge amount of data. This huge amount of data can exhaust the network administrator's time and implicate cost in finding the intruder when a new attack outbreak happens, especially one involving malware. An important problem in the field of intrusion detection is the management of alerts, as IDSs tend to produce a high number of false positive alerts [2]. To increase the detection rate, multiple IDSs can be used and their alerts correlated, but in return this increases the number of alerts to process. Therefore, certain detection mechanisms or techniques need to be integrated with the IDS correlation process in order to guarantee that malware is detected in the alert log. Hence, the proposed research generates a new generic taxonomy of malware detection techniques that will be the basis for developing a new rule set for IDSs to detect malware and reduce the number of false alarms.

The rest of the paper is structured as follows. Section II discusses related work on malware and the current taxonomy of malware detection techniques. Section III presents the classification and the capability criteria of malware detection techniques. Section IV discusses the newly proposed taxonomy of malware detection techniques. Finally, Section V concludes and summarizes future directions of this work.

II RELATED WORK

A. What is Malware?

According to [3], malware is a program that has malicious intention, whereas [4] defines it as a generic term that encompasses viruses, Trojans, spyware and other intrusive code. Malware is not a "bug" or a defect in a legitimate software program, even if it has destructive consequences; malware implies malice of forethought by its inventor, whose intention is to disrupt or damage a system.

[5] has carried out research on malware taxonomy according to malware properties such as mutually exclusive categories, exhaustive categories and unambiguous categories. He states that malware generally consists of three types at the same level, as depicted in Figure 1: virus, worm and Trojan horse, although he comments that in several cases these three types are defined as not being mutually exclusive.

Figure 1. General Malware Taxonomy by Karresand (Malware divided into Virus, Worm and Trojan horse).


B. What is a Malware Intrusion Detector?

A malware intrusion detector is a system or tool that attempts to identify malware [3] and contain it before it can reach a system or network. Diverse research has been done to detect malware spreading on hosts and networks. These detectors use various combinations of techniques, approaches and methods to enable them to detect malware effectively and efficiently, during program execution or statically. A malware intrusion detector is considered one component of an IDS, and is therefore a complement to the IDS.

C. What is a Taxonomy of Malware Detection Techniques?

To clearly identify malware detection technique terms in depth, research on a structured categorization, called a taxonomy, is required in order to develop good detection tools. Taxonomy is defined in [6] as "a system for naming and organizing things, especially plants and animals, into groups which share similar qualities".

[7] has done a massive survey of malware detection techniques developed by various researchers, and came up with a taxonomy that classifies malware detection techniques into only two main techniques, signature-based detection and anomaly-based detection, treating specification-based detection as a sub-family of anomaly-based detection. The present researchers carried out a further literature review of 48 studies on malware detection techniques to verify the relevance of the detection techniques, especially the hybrid ones, so that they can be mapped into the proposed new generic taxonomy. Refer to Table IV for the mapping of the literature review onto the malware detection techniques.

III CLASSIFICATION OF MALWARE DETECTION TECHNIQUES

Malware detection technique is the technique used to detect or identify malware intrusion. Generally, malware detection techniques can be categorized into signature-based detection, anomaly-based detection and specification-based detection.

A. Overview of Detection Techniques

According to [8] and [9], intrusion detection techniques can be divided into the three types shown in Figure 2: signature-based or misuse detection, anomaly-based detection and specification-based detection; this division is a major reference in this research. Based on previous work [8][9][10][11], the characteristics of each technique are as follows.

Figure 2. Existing taxonomy of malware detection technique.

B. Signature-based detection
Signature-based, or misuse, detection, as described by [10], maintains a database of known intrusion techniques (attack signatures) and detects intrusion by comparing behavior against the database. It requires only a small amount of system resources to detect intrusion, and [8] claims that this technique can detect known attacks accurately. Its disadvantage, however, is that it is ineffective against previously unseen attacks: it cannot detect new and unknown intrusion methods, as no signatures are available for such attacks.

C. Anomaly-based detection
Anomaly-based detection, as stated by [10], analyses user behavior and the statistics of a process in a normal situation, and checks whether the system is being used in a different manner. [8] describes how this technique can overcome the misuse detection problem by focusing on normal system behavior rather than attack behavior. [9] assumes that attacks result in behavior different from that normally observed in a system, so that an attack can be detected by comparing the current behavior with pre-established normal behavior. This detection approach is characterized by two phases: a training phase and a detection phase. In the training phase, the behavior of the system is observed in the absence of attacks, and machine learning is used to create a profile of such normal behavior. In the detection phase, this profile is compared against the current behavior, and deviations are flagged as potential attacks. The effectiveness of this technique is affected by which aspects or features of system behavior are learnt, and the hardest challenge is to select the appropriate set of features.

The advantage of this detection technique is that it can detect new intrusion methods and is capable of detecting novel attacks. Its disadvantage is that the data (profiles) describing the user's behavior and the statistics of normal usage must be kept updated; these tend to be large and therefore need more resources, such as CPU time, memory and disk space. Moreover, the monitored system often exhibits legitimate but previously unseen behavior, which leads to a high rate of false alarms.

D. Specification-based detection
Specification-based detection, according to [9], relies on program specifications that describe the intended behavior of security-critical programs. The goal of the policy specification language, according to [11], is to provide a simple way of specifying the policies of privileged programs. It monitors program executions and detects deviations of their behavior from the specification, rather than detecting the occurrence of specific attack patterns. This technique is similar to anomaly detection in that attacks are detected as deviations from the norm; the difference is that instead of relying on machine learning techniques, it is based on manually developed specifications that capture legitimate system behavior. It can be used to monitor network components or network services that are relevant to security, such as the Domain Name Service, network file sharing and routers. The advantage of this technique, according to [8], is that attacks can be detected even if they have not been encountered previously, and it produces a low rate of false alarms: it avoids the high rate of false alarms caused by legitimate-but-unseen behavior in anomaly detection. The disadvantage is that it is not as effective as anomaly detection at detecting novel attacks, especially those involving network probing and denial-of-service, because the development of detailed specifications is time-consuming; attacks may therefore be missed, increasing false negatives. Table I summarizes the advantages and disadvantages of each technique; a toy sketch contrasting the base techniques and their hybrid combination follows below.

TABLE I. Comparison of Malware detection techniques
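The following sketch makes the contrast concrete and also shows the hybrid combination this paper later proposes as Hybrid-SA. The system-call features, the signatures and the threshold are invented purely for illustration:

# Toy contrast of signature-based and anomaly-based detection, and their hybrid.
KNOWN_SIGNATURES = [("open", "dup2", "execve")]   # assumed known-attack patterns

def signature_based(trace):
    # Matches behavior against a database of known attack signatures:
    # accurate on known attacks, blind to unseen ones.
    t = tuple(trace)
    return any(t[i:i + len(sig)] == sig
               for sig in KNOWN_SIGNATURES
               for i in range(len(t) - len(sig) + 1))

def anomaly_based(trace, normal_profile, threshold=0.3):
    # Flags deviation from a profile of normal behavior learnt in a training
    # phase: catches novel attacks, but legitimate-but-unseen behavior
    # produces false alarms.
    unseen = sum(1 for call in trace if call not in normal_profile)
    return unseen / len(trace) > threshold

def hybrid_sa(trace, normal_profile):
    # Hybrid-SA: signatures keep false positives low on known attacks,
    # while the anomaly detector covers unknown ones.
    return signature_based(trace) or anomaly_based(trace, normal_profile)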

E. Proposed criteria for Malware Detection Techniques

Three major detection techniques have been reviewed, and the objective of this research is to develop a new generic taxonomy of malware detection techniques. This is done by analyzing the current malware detection techniques and identifying the significant criteria within each technique that can improve the IDS problems. As mentioned by [12], IDSs suffer from alert flooding, contextual problems, false alerts and scalability issues. The characteristics analyzed in each detection technique follow the issues listed in Table II.

TABLE II. Issues analyzed in IDS

[13] has proposed the following criteria of malware detection techniques to be analyzed against the issues listed in Table II:
1. Capability to do alert reduction.
2. Capability to identify multi-step attacks.
3. Capability to reduce false negative alerts.
4. Capability to reduce false positive alerts.
5. Capability to detect known attacks.
6. Capability to detect unknown attacks.

Alert reduction is required in order to overcome the problem of alert flooding, i.e. the large amount of alert data generated by the IDS. This capability criterion is important in order to reduce the network security officer's burden in performing troubleshooting when tracing the actual attacker in their environment.

For the second criterion, most malware detection techniques are incapable of detecting multi-step attacks. This capability is required, as attacker behavior is becoming more sophisticated, involving one-to-many, many-to-one and many-to-many attacks.

For the third and fourth criteria, most IDSs have a tendency to produce false positive and false negative alarms.


This false-alarm reduction criterion is important, as it is closely related to the alert flooding issue. For the fifth and sixth criteria, the capability to detect both known and unknown attacks is required to ensure that the alerts generated overcome the issues of alert flooding and false alerts.

IV DISCUSSION AND ANALYSIS OF MALWARE DETECTION TECHNIQUES

In the current trend, several researchers, such as [14], [15], [16], [17] and [8], have manipulated these detection techniques by combining either signature-based with anomaly-based detection (Hybrid-SA) or anomaly-based with specification-based detection (Hybrid-SPA) in order to develop an effective malware detection tool.

In this paper, the newly proposed taxonomy of malware detection techniques is shown to be effective by matching the current malware detection techniques (signature-based, anomaly-based and specification-based detection) against the capability criteria proposed by [13], as discussed in Section III. This analysis is summarized in Table III.

TABLE III. Malware detection techniques versus proposed capability criteria (capable = √, incapable = ×)

Referring to Table III, all of the detection techniques have the same capability to detect known attacks. However, anomaly-based and specification-based detection have the additional capability to detect unknown attacks. Anomaly-based detection has extra capabilities compared to the other techniques in terms of reducing false negative alerts and detecting multi-step attacks; nevertheless, it cannot reduce false positive alerts, which can only be reduced using signature-based and specification-based techniques.

Due to the incapability to reduce either false negative or false positive alerts, none of these techniques alone can reduce false alerts. This implies that there is still room for improvement in reducing false alarms. Based on these analyses, the researchers propose an improved solution for malware detection that uses either a combination of signature-based with anomaly-based detection (Hybrid-SA) or specification-based with anomaly-based detection (Hybrid-SPA), so that the techniques complement each other's weaknesses. These new techniques are named Hybrid-Malware Detection Technique (Hybrid-MDT), consisting of the Hybrid-SA and Hybrid-SPA detection techniques, as depicted in Figure 3.

Figure 3. Proposed generic taxonomy of malware detection technique.

To further verify the relevance of the proposed generic taxonomy of malware detection techniques, the researchers reviewed 48 studies of various malware detection techniques, which can be mapped onto the taxonomy proposed in Figure 3. Table IV shows the related literature on malware detection techniques.

TABLE IV. Related literature review in malware detection techniques

V CONCLUSION AND FUTURE WORKS

In this study, the researchers have reviewed and analyzed the existing malware detection techniques and matched them with the capability criteria proposed by [13] to improve the IDS problems. From the analysis, the researchers propose a new generic taxonomy of malware detection techniques called Hybrid-Malware Detection Technique (Hybrid-MDT), which consists of the Hybrid-SA and Hybrid-SPA detection techniques. Both techniques in Hybrid-MDT complement the weaknesses found in the signature-based, anomaly-based and specification-based techniques. This research is preliminary work on malware detection; it will contribute ideas to the malware detection field by generating an optimized rule set in IDS, so that the false alarms of existing IDSs will be reduced.


REFERENCES
[1] "CERT/CC Statistics 2008," CERT Coordination Center, Software Engineering Institute, Carnegie Mellon University, Pittsburgh, PA, 2003. Retrieved August 2008, from http://www.cert.org/stats/
[2] Autrel, F., & Cuppens, F. (2005). Using an Intrusion Detection Alert Similarity Operator to Aggregate and Fuse Alerts. The 4th Conference on Security and Network Architectures, France.
[3] Christodorescu, M., Jha, S., Seshia, S. A., Song, D., & Bryant, R. E. (2005). Semantics-Aware Malware Detection. Proceedings of the 2005 IEEE Symposium on Security and Privacy, pp. 32-46, May 08-11, 2005.
[4] Vasudevan, A., & Yerraballi, R. (2006). SPiKE: Engineering Malware Analysis Tools using Unobtrusive Binary-Instrumentation. Australasian Computer Science Conference (ACSC 2006).
[5] Karresand, M. (2003). A proposed taxonomy of software weapons (No. FOI-R-0840-SE). FOI Swedish Defence Research Agency.
[6] Cambridge University Press. Cambridge Advanced Learner's Dictionary Online. Retrieved 29 January 2008, from http://dictionary.cambridge.org
[7] Idika, N., & Mathur, A. P. (2007). A Survey of Malware Detection Techniques. Software Engineering Research Center Conference, West Lafayette, IN 47907.
[8] Sekar, R., Gupta, A., Frullo, J., Shanbhag, T., Tiware, A., & Yang, H. (2002). Specification-based Anomaly Detection: A New Approach for Detecting Network Intrusions. ACM Computer and Communication Security Conference.
[9] Ko, C., Ruschitzka, M., & Levitt, K. (1997). Execution monitoring of security critical programs in distributed systems: A Specification-based Approach. IEEE Symposium on Security and Privacy.
[10] Okazaki, Y., Sato, I., & Goto, S. (2002). A New Intrusion Detection Method based on Process Profiling. Symposium on Applications and the Internet (SAINT '02), IEEE.
[11] Ko, C., Fink, G., & Levitt, K. (1994). Automated detection of vulnerabilities in privileged programs by execution monitoring. 10th Annual Computer Security Applications Conference.
[12] Debar, H., & Wespi, A. (2001). Aggregation and Correlation of Intrusion Detection Alerts. International Symposium on Recent Advances in Intrusion Detection, Davis, CA.
[13] Robiah Yusof, Siti Rahayu Selamat, & Shahrin Sahib (2008). Intrusion Alert Correlation Technique Analysis for Heterogeneous Log. IJCNS.
[14] Cowan, C., Pu, C., Maier, D., Walpole, J., Bakke, P., Beattie, S., et al. (1998). StackGuard: Automatic adaptive detection and prevention of buffer-overflow attacks. 7th USENIX Security Conference.
[15] Halfond, W. G. J., & Orso, A. (2005). AMNESIA: Analysis and Monitoring for NEutralizing SQL-Injection Attacks. 20th IEEE/ACM International Conference on Automated Software Engineering.
[16] Bashah, N., Shanmugam, I. B., & Ahmed, A. M. (2005). Hybrid Intelligent Intrusion Detection System. 2005 World Academy of Science, Engineering and Technology.
[17] Garcia-Teodoro, P., Diaz-Verdejo, J. E., Marcia-Fernandez, G., & Sanchez-Casad, L. (2007). Network-based Hybrid Intrusion Detection Honeysystems as Active Reaction Schemes. IJCSNS International Journal of Computer Science and Network Security, 7.
[18] Hofmeyr, S. A., Forrest, S., & Somayaji, A. (1998). Intrusion detection using sequences of system calls. Journal of Computer Security, 151-180.
[19] Adelstein, F., Stillerman, M., & Kozen, D. (2002). Malicious Code Detection For Open Firmware. 18th Annual Computer Security Applications Conference (ACSAC '02), IEEE.
[20] Lee, R. B., Karig, D. K., McGregor, J. P., & Shi, Z. (2003). Enlisting hardware architecture to thwart malicious code injection. International Conference on Security in Pervasive Computing (SPC).
[21] Bergeron, J., Debbabi, M., Desharnais, J., Erhioui, M. M., Lavoie, Y., & Tawbi, N. (2001). Static Detection of Malicious Code in executable programs. International Journal of Req. Engineering.
[22] Bergeron, J., Debbabi, M., Erhioui, M. M., & Ktari, B. (1999). Static Analysis of Binary Code to Isolate Malicious Behaviours. 8th Workshop on Enabling Technologies: Infrastructure for Collaborative Enterprises.
[23] Boldt, M., & Carlsson, B. (2006). Analysing Privacy-Invasive Software Using Computer Forensic Methods. From http://www.e-evidence.info/b.html
[24] Castaneda, F., Can Sezer, E., & Xu, J. (2004). Worm vs. Worm: Preliminary Study of an Active Counter-Attack Mechanism. 2004 ACM Workshop on Rapid Malcode.
[25] Christodorescu, M., & Jha, S. (2003). Static Analysis of Executables to Detect Malicious Patterns. 12th USENIX Security Symposium.
[26] Debbabi, M., Giasson, E., Ktari, B., Michaud, F., & Tawbi, N. (2000). Secure Self-Certified COTS. IEEE International Workshops on Enabling Technologies: Infrastructure for Collaborative Enterprises.
[27] Schechter, S. E., Jung, J., & Berger, A. W. (2004). Fast detection of scanning worm infections. 7th International Symposium on Recent Advances in Intrusion Detection (RAID) 2004.
[28] Filiol, E. (2006). Malware pattern scanning schemes secure against black-box analysis. Journal in Computer Virology, 2, 35-50.
[29] Forrest, S., Perelson, A. S., Allen, L., & Cherukuri, R. (1994). Self-Nonself Discrimination in a Computer. IEEE Symposium on Research in Security and Privacy.
[30] Ilgun, K., Kemmerer, R. A., & Porras, P. A. (1995). State Transition Analysis: A Rule-Based Intrusion Detection Approach. IEEE Transactions on Software Engineering, March 1995.
[31] Kirda, E., Kruegel, C., Vigna, G., & Jovanovic, N. (2006). Noxes: A client-side solution for mitigating cross-site scripting attacks. 21st ACM Symposium on Applied Computing.
[32] Kreibich, C., & Crowcroft, J. (2003). Honeycomb - Creating Intrusion Detection Signatures Using Honeypots. 2nd Workshop on Hot Topics in Networks.
[33] Kumar, S., & Spafford, E. H. (1992). A generic virus scanner in C++. 8th IEEE Computer Security Applications Conference.
[34] Lee, W., & Stolfo, S. J. (1998). Data Mining approaches for intrusion detection. 7th USENIX Security Symposium.
[35] Li, W.-J., Wang, K., Stolfo, S. J., & Herzog, B. (2005). Fileprints: Identifying file types by n-gram analysis. 2005 IEEE Workshop on Information Assurance and Security, United States Military Academy, West Point, NY.
[36] Linn, C. M., Rajagopalan, M., Baker, S., Collberg, C., Debray, S. K., & Hartman, J. H. (2005). Protecting against unexpected system calls. USENIX Security Symposium.
[37] Masri, W., & Podgurski, A. (2005). Using dynamic information flow analysis to detect attacks against applications. 2005 Workshop on Software Engineering for Secure Systems - Building Trustworthy Applications.
[38] Milenkovic, M., Milenkovic, A., & Jovanov, E. (2005). Using Instruction Block Signatures to Counter Code Injection Attacks. ACM SIGARCH Computer Architecture News (33), 108-117.
[39] Mori, A., Izumida, T., Sawada, T., & Inoue, T. (2006). A Tool for analyzing and detecting malicious mobile code. 28th International Conference on Software Engineering.
[40] Peng, J., Feng, C., & Rozenblit, J. W. (2006). A Hybrid Intrusion Detection and Visualization System. 13th Annual IEEE International Symposium and Workshop on Engineering of Computer Based Systems (ECBS '06).
[41] Ellis, D. R., Aiken, J. G., Attwood, K. S., & Tenaglia, S. D. (2004). A Behavioral Approach to Worm Detection. 2004 ACM Workshop on Rapid Malcode.
[42] Rabek, J. C., Khazan, R. I., Lewandowski, S. M., & Cunningham, R. K. (2003). Detection of Injected, Dynamically Generated, and Obfuscated Malicious Code. 2003 ACM Workshop on Rapid Malcode.
[43] Sekar, R., Bendre, M., Dhurjati, D., & Bollineni, P. (2001). A Fast Automaton-Based Method for Detecting Anomalous Program Behaviours. IEEE Symposium on Security and Privacy.
[44] Sekar, R., Bowen, T., & Segal, M. (1999). On preventing intrusions by process behavior monitoring. USENIX Intrusion Detection Workshop, 1999.
[45] Suh, G. E., Lee, J., & Devadas, S. (2004). Secure program execution via dynamic information flow tracking. International Conference on Architectural Support for Programming Languages and Operating Systems, 2004.
[46] Sulaiman, A., Ramamoorthy, K., Mukkamala, S., & Sung, A. (2005). Malware Examiner using disassembled code (MEDiC). 20th Annual Computer Security Applications Conference (ACSAC '04).


[47] Sung, A., Xu, J., Chavez, P., & Mukkamala, S. (2004). Static Analyzer of Vicious Executables. 20th Annual Computer Security Applications Conference (ACSAC '04), IEEE.
[48] Giffin, J. T., Jha, S., & Miller, B. P. (2002). Detecting manipulated remote call streams. 11th USENIX Security Symposium.
[49] Taylor, C., & Alves-Foss, J. (2002). NATE: Network Analysis of anomalous traffic events, a low cost approach. New Security Paradigms Workshop, New Mexico, USA.
[50] Lo, R. W., Levitt, K. N., & Olsson, R. A. (1994). MCF: A Malicious Code Filter. Computers and Society, 541-566.
[51] Wagner, D., & Dean, D. (2001). Intrusion Detection via static analysis. IEEE Symposium on Security and Privacy 2001.
[52] Wang, K., & Stolfo, S. J. (2004). Anomalous payload-based network intrusion detection. 7th International Symposium on Recent Advances in Intrusion Detection (RAID).
[53] Wang, Y.-M., Beck, D., Vo, B., Roussev, R., & Verbowski, C. (2005). Detecting Stealth Software with Strider GhostBuster. International Conference on Dependable Systems and Networks (DSN '05).
[54] Wespi, A., Dacier, M., & Debar, H. (2000). Intrusion detection using variable-length audit trail patterns. Recent Advances in Intrusion Detection (RAID).
[55] Xiong, J. (2004). ACT: Attachment chain tracing scheme for email virus detection and control. ACM Workshop on Rapid Malcode (WORM).


Hybrid Intrusion Detection and Prediction multiAgent System, HIDPAS Farah Jemili

Montaceur Zaghdoud

Mohamed Ben Ahmed

RIADI Laboratory Manouba University Manouba 2010, Tunisia

RIADI Laboratory Manouba University Manouba 2010, Tunisia

RIADI Laboratory Manouba University Manouba 2010, Tunisia

resources [1]. Malicious behavior is defined as a system or individual action which tries to use or access to computer system without authorization and the privilege excess of those who have legitimate access to the system. The term attack can be defined as a combination of actions performed by a malicious adversary to violate the security policy of a target computer system or a network domain [2]. Each attack type is characterized by the use of system vulnerabilities based on some feature values. Usually, there are relationships between attack types and computer system characteristics used by the intruder. If we are able to reveal those hidden relationships, we will be able to predict the attack type.

Abstract— This paper proposes an intrusion detection and prediction system based on uncertain and imprecise inference networks and its implementation. Giving a historic of sessions, it is about proposing a method of supervised learning doubled of a classifier permitting to extract the necessary knowledge in order to identify the presence or not of an intrusion in a session and in the positive case to recognize its type and to predict the possible intrusions that will follow it. The proposed system takes into account the uncertainty and imprecision that can affect the statistical data of the historic. The systematic utilization of an unique probability distribution to represent this type of knowledge supposes a too rich subjective information and risk to be in part arbitrary. One of the first objectives of this work was therefore to permit the consistency between the manner of which we represent information and information which we really dispose. Besides, our system integrates host intrusion detection and network intrusion prediction in the setting of a global antiintrusions system capable to function like a HIDS (Host based Intrusion Detection System) before functioning like NIPS (Network based Intrusion Prediction System). The so proposed anti-intrusions system permits to combine two powerful tools together to permit a reliable host intrusion detection leading to an as reliable network intrusion prediction. In our contribution, we chose to do a supervised learning based on Bayesian networks. The choice of modeling the historic of data with Bayesian networks is dictated by the nature of learning data (statistical data) and the modeling power of Bayesian networks. However, taking into account the incompleteness that can affect the knowledge of parameters characterizing the statistical data and the set of relations between phenomena, the proposed system in the present work uses for the inference process a propagation method based on a bayesian possibilistic hybridization. The so proposed system is adapted to the modeling of reliability with taking into account imprecision.

From another side, an attack generally starts with an intrusion to some corporate network through a vulnerable resource and then launching further actions on the network itself. Therefore, we can define the attack prediction process as the sequence of elementary actions that should be performed in order to recognize the attack strategy. The use of distributed and coordinated techniques in attacks makes their detection more difficult. Different events and specific information must be gathered from all sources and combined in order to identify the attack plan. Therefore, it is necessary to develop an advanced attack strategies prediction system that can detect attack strategies so that appropriate responses and actions can be taken in advance to minimize the damages and avoid potential attacks. Besides, the proposed anti-intrusions system should take into account the uncertainty that can affect the data. The uncertainty on parameters can have two origins [3]. The first source of uncertainty comes from the uncertain character of information that is due to a natural variability resulting from stochastic phenomena. This uncertainty is called variability or stochastic uncertainty. The second source of uncertainty is related to the imprecise and incomplete character of information due to a lack of knowledge. This uncertainty is called epistemic uncertainty. The systematic utilization of an unique probability distribution to represent this type of knowledge supposes a too rich subjective information and risk to be in part arbitrary. The system proposed here offers a formal setting adapted to treat the uncertainty and imprecision, while combining probabilities and possibilities.

Keywords— uncertainty; imprecision; host intrusion detection; network intrusion prediction; Bayesian networks; Bayesian-possibilistic hybridization.

I. INTRODUCTION

The proliferation of Internet access on every network device, the use of distributed rather than centralized computing resources, and the introduction of network-enabled applications have rendered traditional network-based security infrastructures vulnerable to serious attacks.

An attack generally starts with an intrusion into some corporate network through a vulnerable resource, followed by further actions on the network itself. The attack prediction process can therefore be defined as the sequence of elementary actions that should be performed in order to recognize the attack strategy. The use of distributed and coordinated techniques in attacks makes their detection more difficult: different events and specific pieces of information must be gathered from all sources and combined in order to identify the attack plan. It is therefore necessary to develop an advanced attack strategy prediction system that can detect attack strategies, so that appropriate responses and actions can be taken in advance to minimize the damage and avoid potential attacks. Besides, the proposed anti-intrusion system should take into account the uncertainty that can affect the data. The uncertainty on parameters can have two origins [3]. The first source of uncertainty comes from the uncertain character of information, due to a natural variability resulting from stochastic phenomena; this uncertainty is called variability, or stochastic uncertainty. The second source of uncertainty is related to the imprecise and incomplete character of information due to a lack of knowledge; this uncertainty is called epistemic uncertainty. Systematically using a single probability distribution to represent this type of knowledge presupposes subjective information that is too rich, and risks being partly arbitrary. The system proposed here offers a formal setting adapted to the treatment of uncertainty and imprecision, combining probabilities and possibilities.

In this paper we propose an intrusion detection and prediction system which recognizes an upcoming intrusion and predicts the attacker's attack plan and intentions. In our approach, we apply graph techniques based on Bayesian reasoning for learning. We further apply inference to recognize the attack type and to predict upcoming attacks. The inference process is based on hybrid propagation, which takes into account both the uncertain and the imprecise character of information.

II. RELATED WORK

Several researchers have been interested in using Bayesian network to develop intrusion detection and prediction systems. Axelsson in [5] wrote a well-known paper that uses the Bayesian rule of conditional probability to point out the implications of the base-rate fallacy for intrusion detection. It clearly demonstrates the difficulty and necessity of dealing with false alarms [6]. In [7], a model is presented that simulates an intelligent attacker using Bayesian techniques to create a plan of goal-directed actions.


In [8], a naïve Bayesian network is employed to perform intrusion detection on network events. A naïve Bayesian network is a restricted network that has only two layers and assumes complete independence between the information nodes (i.e., the random variables that can be observed and measured). Kruegel in [6] proposed an event classification scheme that is based on Bayesian networks. Bayesian networks improve the aggregation of different model outputs and allow one to seamlessly incorporate additional information.

Johansen in [9] suggested a Bayesian system which would provide a solid mathematical foundation for simplifying a seemingly difficult and monstrous problem that today's network IDSs (NIDS) fail to solve. The Bayesian Network IDS (BNIDS) should have the capability to differentiate between attacks and normal network activity by comparing metrics of each network traffic sample. Govindu in [10] discusses the present state of Intrusion Detection Systems and their drawbacks, and highlights the need to develop an Intrusion Prediction System, which is the future of intrusion detection systems. It also explores the possibility of bringing intelligence to the Intrusion Prediction System by using mobile agents that move across the network and use prediction techniques to predict the behavior of users.

III. INTRUSION DETECTION AND PREDICTION SYSTEM

The detection of certain attacks against a networked system of computers requires information from multiple sources. A simple example of such an attack is the so-called doorknob attack. In a doorknob attack the intruder's goal is to discover, and gain access to, insufficiently protected hosts on a system. The intruder generally tries a few common account and password combinations on each of a number of computers. These simple attacks can be remarkably successful [12]. An Intrusion Detection System, as the name suggests, detects possible intrusions [13]. An IDS installed on a network is like a burglar alarm system installed in a house: through various methods, both detect the presence of an intruder, and both issue some type of warning upon detection [10].

Intrusion detection can be defined as the process of identifying malicious behavior that targets a network and its resources. There are two general methods of detecting intrusions into computer and network systems: anomaly detection and signature recognition [13,14]. Anomaly detection techniques establish a profile of the subject's normal behavior (norm profile), compare the observed behavior of the subject with its norm profile, and signal intrusions when the observed behavior differs significantly from the norm profile. Signature recognition techniques recognize signatures of known attacks, match the observed behavior with those known signatures, and signal intrusions when there is a match. Systems that use misuse-based techniques contain a number of attack descriptions, or 'signatures', that are matched against a stream of audit data in search of evidence of the modeled attacks. The audit data can be gathered from the network [15], from the operating system [16], or from application log files [6].

IDSs are usually classified as host-based or network-based. Host-based systems use information obtained from a single host (usually audit trails), while network-based systems obtain data by monitoring the traffic in the network to which the hosts are connected [17].

A simple question that arises is: how can an Intrusion Detection System possibly detect every single unknown attack? The future of intrusion detection therefore lies in developing an Intrusion Prediction System [10]. Intrusion Prediction Systems must be able to predict the probability of intrusions on each host of a distributed computer system. Prediction techniques can protect systems from new security breaches that can result from unknown methods of attack. In an attempt to develop such a system, we propose a global anti-intrusion system which detects and predicts intrusions based on hybrid propagation in Bayesian networks.

IV. BAYESIAN NETWORKS

A Bayesian network is a graphical modeling tool used to model decision problems containing uncertainty. It is a directed acyclic graph in which each node represents a discrete random variable of interest. Each node contains the states of the random variable that it represents and a conditional probability table (CPT) which gives the conditional probabilities of this variable given the realizations of the other connected variables, based upon Bayes' rule:

P(B|A) = P(A|B) P(B) / P(A)   (1)

The CPT of a node contains the probabilities of the node being in a specific state given the states of its parents. The parent-child relationship between nodes in a Bayesian network indicates the direction of causality between the corresponding variables: the variable represented by the child node is causally dependent on the ones represented by its parents [18].
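As a concrete reading of equation (1), consider a minimal two-node network Attack -> Alert; the numbers below are purely illustrative and do not come from our datasets.

# Minimal sketch of equation (1) on a two-node network Attack -> Alert.
# All numbers are illustrative.

p_attack = 0.01                      # prior P(Attack)
cpt_alert = {True: 0.9, False: 0.1}  # CPT row: P(Alert = 1 | Attack)

# Total probability: P(Alert) = sum over a of P(Alert | a) P(a)
p_alert = cpt_alert[True] * p_attack + cpt_alert[False] * (1 - p_attack)

# Bayes' rule (1): P(Attack | Alert) = P(Alert | Attack) P(Attack) / P(Alert)
p_attack_given_alert = cpt_alert[True] * p_attack / p_alert

print(f"P(Alert) = {p_alert:.4f}")                       # 0.1080
print(f"P(Attack | Alert) = {p_attack_given_alert:.4f}")  # 0.0833

Even with a strong detector (90% true positive rate), the low prior makes the posterior small, which is precisely the base-rate effect pointed out by Axelsson [5].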


Learning Bayesian networks has several advantages. First, prior knowledge and expertise can be incorporated by populating the CPTs, and it is convenient to introduce partial evidence and find the probability of unobserved variables. Second, a Bayesian network is capable of adapting to new evidence and knowledge by belief updates through network propagation.

A. Bayesian Network Learning Algorithm

Methods for learning Bayesian graphical models can be partitioned into at least two general classes: constraint-based search and Bayesian methods. The constraint-based approaches [19] search the data for conditional independence relations, from which it is in principle possible to deduce the Markov equivalence class of the underlying causal graph. Two notable constraint-based algorithms are the PC algorithm, which assumes that no hidden variables are present, and the FCI algorithm, which is capable of learning something about the causal relationships even when latent variables are present in the data [19].

Bayesian methods [21] use a search-and-score procedure to search the space of DAGs, with the posterior density as the scoring function. There are many variations on Bayesian methods; however, most research has focused on the application of greedy heuristics, combined with techniques to avoid local maxima in the posterior density (e.g., greedy search with random restarts or best-first searches). Both constraint-based and Bayesian approaches have advantages and disadvantages. Constraint-based approaches are relatively quick and possess the ability to deal with latent variables; however, they rely on an arbitrary significance level to decide independencies.

Bayesian methods can be applied even with very little data, where conditional independence tests are likely to break down. Both approaches have the ability to incorporate background knowledge in the form of temporal ordering, or of forbidden or forced arcs. Also, Bayesian approaches are capable of dealing with incomplete records in the database. The most serious drawback of the Bayesian approaches is the fact that they are relatively slow.

In this paper we are dealing with incomplete records in the database, so we opted for the Bayesian approach and particularly for the K2 algorithm, which has shown high performance in many research works. The principle of the K2 algorithm, proposed by Cooper and Herskovits, is to take a database of variables X1, ..., Xn and to build a directed acyclic graph (DAG) based on the calculation of a local score [22]. The variables constitute the network nodes; the arcs represent "causal" relationships between variables.

The K2 algorithm used in the learning step needs:
• a given order between the variables, and
• the maximum number of parents of each node.

The K2 algorithm proceeds by starting with a single node (the first variable in the defined order) and then incrementally adds connections with other nodes which can increase the whole probability of the network structure, calculated using the S function; a requested new parent which does not increase the node probability cannot be added to the node parent set:

S(Vi, C(Vi)) = prod over j = 1..qi of [ (ri - 1)! / (Nij + ri - 1)! ] prod over k = 1..ri of Nijk!   (2)

where, for each variable Vi, ri is the number of its possible instantiations, qi is the number of possible instantiations of its parent set C(Vi), Nijk is the number of cases in the database D for which Vi takes its k-th value with C(Vi) instantiated to its j-th value, and Nij is the sum of Nijk over all values of k.

V. JUNCTION TREE INFERENCE ALGORITHM

The most common method to perform exact discrete inference is the junction tree algorithm developed by Jensen [23]. The idea of this procedure is to construct a data structure called a junction tree, which can be used to answer any query through message passing on the tree.

The first step of the JT algorithm creates an undirected graph from the input DAG through a procedure called moralization. Moralization keeps the same edges but drops their direction, and then connects the parents of every child. Junction tree construction then follows four steps:

• Step 1: Choose a node ordering. Note that the node ordering will make a difference in the topology of the generated tree; finding an optimal node ordering with respect to the junction tree is NP-hard.
• Step 2: Loop through the nodes in the ordering. For each node Xi, create a set Si of all its neighbours, then delete the node Xi from the moralized graph.
• Step 3: Build a cluster graph by letting each Si be a node, and connect the nodes with weighted undirected edges; the weight of an edge between Si and Sj is |Si ∩ Sj|.
• Step 4: Let the junction tree be the maximal-weight spanning tree of the cluster graph.
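The following sketch walks an illustrative four-node DAG through moralization and the maximal-weight spanning tree of the four steps above; it is a minimal reading of the construction, not the full message-passing machinery, and the graph and elimination order are invented.

# Sketch of junction tree construction (Section V); the DAG is illustrative.
from itertools import combinations

dag = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}  # child -> parents

# Moralization: drop directions and marry the parents of every child.
undirected = {v: set() for v in dag}
for child, parents in dag.items():
    for p in parents:
        undirected[child].add(p); undirected[p].add(child)
    for p, q in combinations(parents, 2):      # connect co-parents
        undirected[p].add(q); undirected[q].add(p)

# Steps 1-2: eliminate nodes in a fixed order, recording clusters {Xi} + neighbours.
order = ["D", "B", "C", "A"]                   # the ordering choice shapes the tree
graph = {v: set(nbrs) for v, nbrs in undirected.items()}
clusters = []
for x in order:
    nbrs = graph.pop(x)
    clusters.append(frozenset(nbrs | {x}))
    for p, q in combinations(nbrs, 2):         # keep the neighbours connected
        graph[p].add(q); graph[q].add(p)
    for n in nbrs:
        graph[n].discard(x)

# Steps 3-4: maximal-weight spanning tree over clusters, weight = |Si ∩ Sj|.
edges = sorted(((len(a & b), a, b) for a, b in combinations(clusters, 2)),
               reverse=True, key=lambda e: e[0])
parent = {c: c for c in clusters}
def find(c):
    while parent[c] != c:
        c = parent[c]
    return c
tree = []
for w, a, b in edges:
    ra, rb = find(a), find(b)
    if w > 0 and ra != rb:                     # Kruskal on the cluster graph
        parent[ra] = rb
        tree.append((sorted(a), sorted(b), w))

print(tree)   # e.g. cluster {B,C,D} linked to {A,B,C} with weight 2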

VI. PROBLEM DESCRIPTION

Inference in Bayesian networks is the post-calculation of uncertainty: knowing the states of certain variables (the observation variables), the inference process determines the states of some other variables (the target variables) conditionally on the observations. The choice of representing knowledge by probabilities only, and therefore of supposing that the uncertainty of the available information has a stochastic origin, has repercussions on the results of uncertainty propagation through the Bayesian model.

The two sources of uncertainty (stochastic and epistemic) must be treated in different manners. In practice, while uncertain information is treated rigorously by classic probability distributions, imprecise information is much better treated by possibility distributions. The two sources of uncertainty do not exclude each other and are often related (for example, an imprecise measurement of an uncertain quantity).

A purely probabilistic propagation approach can generate results that are too optimistic. This illusion is reinforced by the fact that information is sometimes imprecise or incomplete, and the classic probabilistic setting does not represent this type of information faithfully.

In the section below, we present a new propagation approach in Bayesian networks, called hybrid propagation, combining probabilities and possibilities. Its advantage over classic probabilistic propagation is that it takes into account both the uncertain and the imprecise character of information.

VII. HYBRID PROPAGATION IN BAYESIAN NETWORKS

The propagation mechanism is based on the Bayesian model; the junction tree algorithm is therefore used for inference in the Bayesian network. The hybrid calculation, combining probabilities and possibilities, permits the propagation of both variability (uncertain information) and imprecision (imprecise information).

A. Variable Transformation from Probability to Possibility (TV)

Let us consider a probability distribution p = (p1, ..., pi, ..., pn) ordered as follows: p1 > p2 > ... > pn. The possibility distribution π = (π1, ..., πi, ..., πn) obtained by the transformation (p → π) proposed in [24] satisfies π1 > π2 > ... > πn; every possibility degree πi, for i = 1, 2, ..., n, is computed from the ordered probabilities by the variable transformation formula (3) of [24].

B. Probability Measure and Possibility Distribution

Let us consider a probabilistic space (Ω, A, P). For every measurable set A ⊆ Ω we can define its upper probability and its lower probability; in other terms, the value of the probability P(A) is imprecise:

∀ A ⊆ Ω,  N(A) ≤ P(A) ≤ Π(A),  where N(A) = 1 − Π(Ā)

and Ā denotes the complement of A. Each pair of necessity/possibility measures (N, Π) can be considered as the lower and upper probability measures induced by a probability measure. The gap between these two measures reflects the imprecise character of the information: it amounts to defining a possibility distribution over a probability measure, and this possibility distribution reflects the imprecision of the true probability of the event.

A probability measure is more reliable and informative when the gap between its upper and lower bounds is reduced, i.e., when the imprecision on the value of the variable is small; by contrast, a probability bracketed by a relatively large confidence interval is risky and not very informative.

C. Hybrid Propagation Process

The hybrid propagation proceeds in three steps:

1) Substitute the probability distribution of each variable in the graph by a probability distribution framed by possibility and necessity measures, using the variable transformation from probability to possibility (TV) applied to the probability distribution of each variable in the graph. The gap between the necessity and possibility measures reflects the imprecision of the true probability associated with the variable.
2) Transform the initial graph into a junction tree.
3) Propagate the uncertain and imprecise uncertainty, which consists of both:
a) the classic probabilistic propagation of stochastic uncertainties in the junction tree through message passing on the tree, and
b) the possibilistic propagation of epistemic uncertainties in the junction tree; possibilistic propagation in the junction tree is a direct adaptation of the classic probabilistic propagation.

The proposed propagation method therefore:
1) preserves the modeling power of Bayesian networks (it permits the modeling of relations between variables),
2) is adapted to both stochastic and epistemic uncertainties, and
3) yields as result a probability described by an interval delimited by possibility and necessity measures.

VIII. HYBRID INTRUSION DETECTION AND PREDICTION SYSTEM

Our anti-intrusion system operates at two different levels: it integrates host intrusion detection and network intrusion prediction.

The intrusion detection consists in analyzing the audit data of the host in search of attacks whose signatures are stored in a signature dataset. The intrusion prediction consists in analyzing the stream of alerts resulting from one or several detected attacks, in order to predict the possible attacks that will follow in the whole network.

Our anti-intrusion approach is based on hybrid propagation in Bayesian networks, in order to benefit from both the modeling power of Bayesian networks and the power of possibilistic reasoning to manage imprecision.
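Before detailing the two modules, the sketch below illustrates how a probability distribution is framed by possibility and necessity bounds (Section VII). Since formula (3) of [24] is not reproduced here, the classical optimal probability-to-possibility transformation of Dubois and Prade is used as a stand-in; all numbers are illustrative.

# Sketch: frame probabilities with possibility/necessity bounds (Section VII).
# Stand-in transformation: pi_i = sum of p_j over j ranked at or below p_i,
# probabilities being sorted in decreasing order (Dubois-Prade "optimal"
# transformation, used here in place of the TV formula of [24]).

def probability_to_possibility(p):
    """Return possibility degrees aligned with the input probabilities."""
    order = sorted(range(len(p)), key=lambda i: p[i], reverse=True)
    pi = [0.0] * len(p)
    cumulative = 1.0
    for i in order:
        pi[i] = cumulative          # mass of states ranked at or below p_i
        cumulative -= p[i]
    return pi

def necessity_from_possibility(pi):
    """N(A) = 1 - max possibility of the complement, per state."""
    return [1.0 - max(pi[:i] + pi[i + 1:]) for i in range(len(pi))]

p = [0.7, 0.2, 0.1]                 # illustrative distribution over 3 states
pi = probability_to_possibility(p)  # [1.0, 0.3, 0.1]
n = necessity_from_possibility(pi)  # [0.7, 0.0, 0.0]
for state in range(3):
    # each probability is framed: N <= P <= Pi
    print(f"state {state}: {n[state]:.2f} <= {p[state]:.2f} <= {pi[state]:.2f}")

The width of each interval [N, Π] is exactly the imprecision that hybrid propagation carries through the junction tree alongside the probabilities.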


A. Hybrid intrusion detection

The main objective of intrusion detection is to detect every security policy violation on an information system. The signature recognition approach, adopted in our contribution, analyzes audit data in search of attacks whose signatures are stored in a signature dataset. Audit data are data of the computer system that report information on the operations performed on it. A signature dataset contains a set of lines; every line codes a stream of data (between two definite instants) between a source (identified by its IP address) and a destination (also identified by its IP address), under a given protocol (TCP, UDP, ...). Every line is a connection characterized by a certain number of attributes such as its length, the type of the protocol, etc. According to the values of these attributes, every connection in the signature dataset is considered as being either a normal connection or an attack.

In our approach, the process of intrusion detection is considered as a classification problem: given a set of identified connections and a set of connection types, our goal is to classify connections among the most plausible corresponding connection types.

Our approach for intrusion detection consists of four main steps [30]:
1) Important attributes selection: in a signature dataset, every connection is characterized by a certain number of attributes such as its length, the type of the protocol, etc. These attributes were fixed by Lee et al. [31]. The objective of this step is to extract the most important attributes among the attributes of the signature dataset. To do so, we proceed by a Multiple Correspondences Factorial Analysis (MCFA) of the attributes of the dataset, then we calculate the Gini index for every attribute in order to visualize the distribution of the different attributes and to select the most important ones [32]. The result is a set of the most important attributes characterizing the connections of the signature dataset. Some of these attributes can be continuous and may require discretization to improve classification results.
2) Continuous attributes discretization: the selected attributes can be discrete (admitting a finite number of values) or continuous. Several previous works have shown that discretization improves the performance of Bayesian networks [4]. To discretize continuous attributes, we opted for discretization by fit-together averages, which consists in cutting up the variable using successive averages as class limits. This method has the advantage of being strongly bound to the variable distribution; but if the variable is cut up a large number of times, it risks producing empty or very heterogeneous classes in the case of very dissymmetric distributions. Thus we use only one iteration, i.e., a binary discretization based on the average, which supposes that the behavior of the observation variables is not too atypical (a toy sketch of steps 1 and 2 is given after this list).
3) Bayesian network learning: the set of important discretized attributes, as well as the class of connection types, constitutes the set of entry variables for the Bayesian network learning step. The first step is to browse the set of entry variables to extract their different values and to calculate their probabilities. Then we use the K2 probabilistic learning algorithm to build the Bayesian network for intrusion detection. The result is a directed acyclic graph whose nodes are the entry variables and whose edges denote the conditional dependences between these variables; to each variable of the graph is associated a conditional probability table that quantifies the effect of its parents.
4) Hybrid propagation in the Bayesian network: consists of the three steps mentioned previously. At the end of this step, every connection (normal or intrusion) in a host is classified into the most probable connection type. In case of intrusions detected on a host, one or several alerts are sent to the intrusion prediction module, which is charged with predicting the possible intrusions that may propagate in the whole network.
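The toy sketch announced in step 2 above: invented data, a one-iteration mean-based discretization, and a Gini-style purity score standing in for the index of [32], whose exact definition may differ.

# Toy sketch of attribute ranking and mean-based binary discretization
# (steps 1 and 2 above). Data and the exact Gini variant are illustrative.

def gini(values, labels):
    """Purity-style score: higher when each attribute value is class-pure."""
    total = len(values)
    score = 0.0
    for v in set(values):
        rows = [l for x, l in zip(values, labels) if x == v]
        impurity = 1.0 - sum((rows.count(c) / len(rows)) ** 2 for c in set(rows))
        score += (len(rows) / total) * (1.0 - impurity)
    return score

def discretize_by_mean(column):
    """Binary discretization: one iteration of the fit-together-averages method."""
    mean = sum(column) / len(column)
    return [f"v1 (< {mean:.2f})" if x < mean else f"v2 (>= {mean:.2f})" for x in column]

src_bytes = [120, 45000, 300, 52000, 80]            # invented connection attribute
labels    = ["normal", "dos", "normal", "dos", "normal"]

binned = discretize_by_mean(src_bytes)
print(binned)
print(f"Gini-style score of src_bytes: {gini(binned, labels):.3f}")  # 1.000 here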

B. Hybrid intrusion prediction

The intrusion prediction aims to predict attack plans, given one or several intrusions detected at the level of one or several hosts of a computer network. An intrusion detected at the level of a host results in one or several alerts generated by a HIDS. The intrusion prediction tends first to classify alerts among the most plausible hyper-alerts, each hyper-alert corresponding to a step in the attack plan, and then, based on hyper-alert correlation, to deduce the possible attacks that will follow in the whole computer network [11].


1) Alerts classification: the main objective of alerts classification is to analyze the stream of alerts generated by intrusion detection systems in order to contribute to attack plan prediction. In our approach, given a set of alerts, the goal of alerts classification is to classify alerts among the most plausible corresponding hyper-alerts. Our approach for alerts classification consists of four main steps:
a) Important attributes selection: in addition to time information, each alert has a number of other attributes, such as source IP, destination IP, port(s), user name, process name, attack class, and sensor ID, which are defined in a standard document, "Intrusion Detection Message Exchange Format (IDMEF)", drafted by the IETF Intrusion Detection Working Group [20]. For the selection of the most important attributes, we proceed by a Multiple Correspondences Factorial Analysis (MCFA) of the different attributes characterizing alerts. The attribute selection does not include time stamps; we will use time stamps in the attack plan prediction process in order to detect alert series. The result of this step is a set of the most important attributes characterizing the alerts of the alerts dataset.

b) Alerts aggregation: an alerts dataset generally contains a large number of alerts, most of which are raw alerts that can refer to one and the same event. Alerts aggregation consists in exploiting the similarities between alert attributes in order to reduce the redundancy of alerts. Since alerts that are output by the same IDS and have the same attributes except time stamps correspond to the same step in the attack plan [26], we aggregate alerts sharing the same sensor and the same attributes except time stamps, in order to get clusters of alerts where each cluster corresponds to exactly one step of the attack plan, called a hyper-alert. Then, based on the results of this first step, we merge the clusters of alerts (or hyper-alerts) corresponding to the same step of the attack plan. At the end of this aggregation step, we get one cluster of alerts (or hyper-alert) for each step of the attack plan (i.e., hyper-alert = step of the attack plan), and we regroup in one class all the observed hyper-alerts.





c) Bayesian network learning: the set of selected alert attributes, as well as the class regrouping all the observed hyper-alerts, forms the set of entry variables for the Bayesian network learning step. The first step is to browse the set of entry variables in order to extract their different values and to calculate their probabilities. Then we use the K2 probabilistic learning algorithm to build the Bayesian network for alerts classification.


d) Hybrid propagation in the Bayesian network: consists of the three steps mentioned previously.


At the end of this step, every generated alert is classified into the most probable corresponding hyper-alert.

2) Attack plans prediction: attack plan prediction consists in detecting complex attack scenarios, that is, scenarios implying a series of actions by the attacker. The idea is to correlate the hyper-alerts resulting from the previous step in order to predict, given one or several detected attacks, the possible attacks that will follow. Our approach for attack plan prediction consists of three main steps:
a) Transaction data formulation [26]: we formulate transaction data for each hyper-alert in the dataset. Specifically, we set up a series of time slots with an equal time interval, denoted ∆t, along the time axis. Given a time range T, we have m = T/∆t time slots. Recall that each hyper-alert A includes a set of alert instances with the same attributes except time stamps, i.e., A = [a1, a2, ..., an], where ai represents an alert instance in the cluster. We denote NA = {n1, n2, ..., nm} as the variable representing the occurrence of hyper-alert A during the time range T, where ni corresponds to the occurrence (ni = 1) or non-occurrence (ni = 0) of the alert A in a specific time slot. Using the above process, we can create a set of transaction data. Table I shows an example of the transaction data corresponding to hyper-alerts A, B and C.

TABLE I. AN EXAMPLE OF TRANSACTION DATA SET

Time Slot   A   B   C
t1          1   0   0
t2          1   0   1
t3          1   1   0
...         ... ... ...
tm          1   0   0

b) Bayesian network learning: the set of observed hyper-alerts forms the set of entry variables for the Bayesian network learning step. The first step is to browse the set of entry variables to extract their different values and to calculate their probabilities. Then we use the K2 probabilistic learning algorithm to build the Bayesian network for attack plan prediction. The result is a directed acyclic graph whose nodes are the hyper-alerts and whose edges denote the conditional dependences between these hyper-alerts.

c) Hybrid propagation in the Bayesian network: consists of the three steps mentioned previously. At the end of this step, given one or several detected attacks, we can predict the possible attacks that will follow.

IX. HIDPAS SYSTEM AGENT ARCHITECTURE

Figure 1. HIDPAS system architecture

The HIDPAS system architecture is composed of two interconnected layers of intelligent agents. The first layer is concerned with host intrusion detection: on each host of a distributed computer system, an intelligent agent is charged with detecting a possible intrusion.

Each agent of the intrusion detection layer uses a signature intrusion database (SDB) to build its own Bayesian network. For every new suspect connection, the intrusion detection agent (IDA) of the concerned host uses hybrid propagation in its Bayesian network to infer the conditional evidence of intrusion given the new settings of the suspect connection. Therefore, based on the probability degree and on the gap between the necessity and possibility degrees associated with each connection type, we can perform a quantitative analysis of the connection types.

In the final selection of the possible connection type, we can select the type that has the maximum informative probability value. An informative probability is a probability delimited by two confidence measures where the gap between them is under a threshold.
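A minimal sketch of this selection rule; the candidate triples and the gap threshold are hypothetical.

# Sketch of the "informative probability" rule: pick the connection type with
# the highest probability whose possibility-necessity gap stays under a
# threshold. Numbers and the threshold value are hypothetical.

candidates = {
    # type: (necessity, probability, possibility)
    "normal": (0.05, 0.20, 0.90),   # wide gap: not informative
    "dos":    (0.55, 0.60, 0.70),   # narrow gap: informative
    "probe":  (0.30, 0.35, 0.45),
}
gap_threshold = 0.20

informative = {t: p for t, (n, p, pi) in candidates.items() if pi - n <= gap_threshold}
decision = max(informative, key=informative.get)
print(decision)   # 'dos': highest probability among informative candidates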

In case of intrusion, the IDA agent informs the intrusion prediction agent (IPA), placed in the prediction layer, about the eventuality of an intrusion on the concerned host and about its type.


The second layer is based upon a single intelligent agent charged with network intrusion prediction.


When the intrusion prediction agent (IPA) is informed about a new intrusion that has happened on a host of the distributed computer system, together with its type, it tries to compute the conditional probabilities that other attacks may ultimately happen. To accomplish this task, the IPA uses another type of database (ADB), which contains historical data about alerts generated by sensors from different computer systems.


Given a stream of alerts, the IPA first feeds them as evidence to the inference process of the first graph, for alerts classification; it then feeds the classification results to the inference process of the second graph, for attack plan prediction. Each path in the second graph is potentially a subsequence of an attack scenario. Therefore, based on the probability degree and on the gap between the necessity and possibility degrees associated with each edge, the IPA can perform a quantitative analysis of the attack strategies.


The advantage of our approach is that we do not require a complete ordered attack sequence for inference. Thanks to Bayesian networks and hybrid propagation, we have the capability of handling partial orders and unobserved activity in the evidence sets. In practice, we cannot always observe all of the attacker's activities, and we can often only detect a partial order of attack steps, due to the limitations or the deployment of security sensors. For example, an IDA can miss detecting intrusions and thus produce an incomplete alert stream. In the final selection of possible future goals or attack steps, the IPA can either select the node(s) with the maximum informative probability value(s) or the one(s) whose informative probability value(s) is (are) above a threshold.


After computing the conditional probabilities of possible attacks, the IPA informs the system administrator about them.
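The two-stage use of the IPA's graphs can be summarized as follows; every name in this sketch is a placeholder rather than the system's actual API.

# Sketch of the IPA's two-stage inference: alerts are first classified into
# hyper-alerts (first graph), whose results feed attack plan prediction
# (second graph). All names below are placeholders.

def classify_alerts(alert_stream, classification_net):
    """Stage 1: map raw alerts to their most probable hyper-alerts."""
    return [classification_net.most_probable_hyper_alert(a) for a in alert_stream]

def predict_attacks(hyper_alerts, prediction_net, threshold=0.5):
    """Stage 2: use classified hyper-alerts as evidence; keep likely next steps."""
    evidence = {h: 1 for h in hyper_alerts}            # observed attack steps
    posteriors = prediction_net.propagate(evidence)    # hybrid propagation
    return {step: p for step, p in posteriors.items() if p >= threshold}

# Intended wiring, given two trained networks:
#   hyper_alerts = classify_alerts(alerts, alert_classification_net)
#   upcoming     = predict_attacks(hyper_alerts, attack_plan_net)
#   notify_administrator(upcoming)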

X. HIDPAS SYSTEM IMPLEMENTATION

HIDPAS was implemented using the JADE multi-agent platform. The dataset used for the implementation and experimentation of intrusion detection is DARPA KDD'99, which contains signatures of normal connections and signatures of 38 known attacks gathered in four main classes: DOS, R2L, U2R and Probe.

A. DARPA'99 DATA SET

MIT Lincoln Lab's DARPA intrusion detection evaluation datasets have been employed to design and test intrusion detection systems. The KDD 99 intrusion detection datasets are based on the 1998 DARPA initiative, which provides designers of intrusion detection systems (IDS) with a benchmark on which to evaluate different methodologies [25].

To do so, a simulation is made of a factitious military network consisting of three 'target' machines running various operating systems and services. Three additional machines are then used to spoof different IP addresses to generate traffic. Finally, there is a sniffer that records all network traffic using the TCP dump format. The total simulated period is seven weeks [27]. Packet information in the TCP dump file is summarized into connections. Specifically, "a connection is a sequence of TCP packets starting and ending at some well defined times, between which data flows from a source IP address to a target IP address under some well defined protocol" [27].

The DARPA KDD'99 dataset represents data as rows of a TCP/IP dump, where each row describes a computer connection characterized by 41 features. The features are grouped into four categories:
1) Basic features: these can be derived from packet headers without inspecting the payload.
2) Content features: domain knowledge is used to assess the payload of the original TCP packets; this includes features such as the number of failed login attempts.
3) Time-based traffic features: these are designed to capture properties that mature over a 2-second temporal window; one example of such a feature is the number of connections to the same host over the 2-second interval.
4) Host-based traffic features: these utilize a historical window estimated over the number of connections (in this case 100) instead of time; host-based features are therefore designed to assess attacks that span intervals longer than 2 seconds.

In this study, we used the KDD'99 dataset, which counts almost 494,019 training connections. Based upon a Multiple Correspondences Factorial Analysis (MCFA) of the attributes of the KDD'99 dataset, we used data about the important features only. Table II shows the important features and the corresponding Gini index for each feature:

TABLE II. CONNECTIONS IMPORTANT FEATURES

N°    Feature                       Gini
A23   count                         0.7518
A5    src_bytes                     0.7157
A24   src_count                     0.6978
A3    service                       0.6074
A36   dst_host_same_src_port_rate   0.5696
A2    protocol_type                 0.5207
A33   dst_host_srv_count            0.5151
A35   dst_host_diff_srv_rate        0.4913
A34   dst_host_same_srv_rate        0.4831

To these features we added the "attack_type": indeed, each training connection is labelled either as normal or as an attack of a specific type. The DARPA'99 base counts 38 attacks, which can be gathered in four main categories:
1) Denial of Service (dos): the attacker tries to prevent legitimate users from using a service.



2) Remote to Local (r2l): the attacker does not have an account on the victim machine, hence tries to gain access.
3) User to Root (u2r): the attacker has local access to the victim machine and tries to gain super-user privileges.
4) Probe: the attacker tries to gain information about the target host.

Among the selected features, only service and protocol_type are discrete; the other features need to be discretized. Table III shows the result of the discretization of these features.

TABLE III. CONTINUOUS FEATURES DISCRETIZATION

N°    Feature                       Values
A23   count                         cnt_v1: m < 332.67007446; cnt_v2: m ≥ 332.67007446
A5    src_bytes                     sb_v1: m < 30211.16406250; sb_v2: m ≥ 30211.16406250
A24   src_count                     srv_cnt_v1: m < 293.24423218; srv_cnt_v2: m ≥ 293.24423218
A36   dst_host_same_src_port_rate   dh_ssp_rate_v1: m < 0.60189182; dh_ssp_rate_v2: m ≥ 0.60189182
A33   dst_host_srv_count            dh_srv_cnt_v1: m < 189.18026733; dh_srv_cnt_v2: m ≥ 189.18026733
A35   dst_host_diff_srv_rate        dh_dsrv_rate_v1: m < 0.03089163; dh_dsrv_rate_v2: m ≥ 0.03089163
A34   dst_host_same_srv_rate        dh_ssrv_rate_v1: m < 0.75390255; dh_ssrv_rate_v2: m ≥ 0.75390255

The dataset used for the implementation and experimentation of intrusion prediction is LLDOS 1.0, provided by DARPA 2000, which is the first attack scenario example dataset to be created for DARPA. It includes a distributed denial of service attack run by a novice attacker.

B. LLDOS 1.0 – SCENARIO ONE

DARPA 2000 is a well-known IDS evaluation dataset created by the MIT Lincoln Laboratory. It consists of two multistage attack scenarios, namely LLDOS 1.0 and LLDOS 2.0.2. The LLDOS 1.0 scenario can be divided into five phases, as follows [29]:
1) Phase 1: the attacker scans the network to determine which hosts are up.
2) Phase 2: the attacker then uses the ping option of the sadmind exploit program to determine which hosts selected in Phase 1 are running the Sadmind service.
3) Phase 3: the attacker attempts the sadmind Remote-to-Root exploit several times in order to compromise the vulnerable machine.
4) Phase 4: the attacker uses telnet and rpc to install a DDoS program on the compromised machines.
5) Phase 5: the attacker telnets to the DDoS master machine and launches the mstream DDoS against the final victim of the attack.

We used an alert log file [28] generated by the RealSecure IDS. As a result of replaying the "Inside-tcpdump" file from DARPA 2000, RealSecure produces 922 alerts. After applying the proposed selection of important alert attributes, we used data about the important features only, as shown in Table IV.

TABLE IV. ALERTS IMPORTANT FEATURES

Feature         Gini
SrcIPAddress    0.6423
SrcPort         0.5982
DestIPAddress   0.5426
DestPort        0.5036
AttackType      0.4925

After applying the proposed alerts aggregation, we obtained 17 different types of alerts, as shown in Table V.

TABLE V. HYPER-ALERTS REPORTED BY REALSECURE IN LLDOS 1.0

ID   Hyper-alert                   Size
1    Sadmind_Ping                  3
2    TelnetTerminaltype            128
3    Email_Almail_Overflow         38
4    Email_Ehlo                    522
5    FTP_User                      49
6    FTP_Pass                      49
7    FTP_Syst                      44
8    http_Java                     8
9    http_Shells                   15
10   Admind                        17
11   Sadmind_Amslverify_Overflow   14
12   Rsh                           17
13   Mstream_Zombie                6
14   http_Cisco_Catalyst_Exec      2
15   SSH_Detected                  4
16   Email_Debug                   2
17   Stream_DoS                    1

C. SYSTEM IMPLEMENTATION

The HIDPAS system contains three interfaces:
1) Intrusion detection interface: Figure 2 shows the Bayesian network built by agent ID1. For every new connection, agent ID1 uses its Bayesian network to decide about the intrusion and its type.

Figure 2. Intrusion Detection Interface

2) Alerts classification interface: Figure 3 shows the Bayesian network built by the IPA for alerts classification. The IPA receives the alert messages sent by the intrusion detection agents about the detected intrusions and uses its Bayesian network to determine the hyper-alerts corresponding to these alerts.

Figure 3. Alerts Classification Interface

3) Attack plans prediction interface: Figure 4 shows the Bayesian network built by the IPA for attack plan prediction. The IPA uses its Bayesian network to determine the eventual attacks that will follow the detected intrusions.

Figure 4. Intrusion Prediction Interface

XI. EXPERIMENTATION

The main criteria that we considered in the experimentation of our system are the detection rate, the false alert rate, the alert correlation rate, the false positive correlation rate and the false negative correlation rate.

• Detection rate: defined as the number of examples correctly classified by our system divided by the total number of test examples.

TABLE VI. DETECTION RATE COMPARISON

Detection          Classic propagation   Hybrid propagation
Normal (60593)     99.52%                100%
DOS (229853)       97.87%                99.93%
Probing (4166)     89.39%                98.57%
R2L (16189)        19.03%                79.63%
U2R (228)          31.06%                93.54%

Table VI shows the high performance of our system based on hybrid propagation in intrusion detection.

• False alerts: Bayesian networks can generate two types of false alerts: false negatives and false positives. A false negative describes an event that the IDS fails to identify as an intrusion when one has in fact occurred; a false positive describes an event incorrectly identified by the IDS as an intrusion when none has occurred.

TABLE VII. FALSE ALERTS RATE COMPARISON

False alerts       Classic propagation   Hybrid propagation
Normal (60593)     0.48%                 0%
DOS (229853)       1.21%                 0.02%
Probing (4166)     5.35%                 0.46%
R2L (16189)        6.96%                 2.96%
U2R (228)          6.66%                 1.36%

Table VII shows the gap between the false alert results given by the two approaches: the hybrid propagation approach gives the smallest false alert rates.

• Correlation rate: can be defined as the rate of attacks correctly correlated by our system.
• False positive correlation rate: the rate of attacks correlated by the system when no relationship exists between them.
• False negative correlation rate: the rate of attacks that are in fact related but that the system fails to identify as correlated.

Table VIII shows the experimentation results about correlation measured by our system:

TABLE VIII. CORRELATION RATE COMPARISON

                                  Classic propagation   Hybrid propagation
Correlation rate                  95.5%                 100%
False positive correlation rate   6.3%                  1.3%
False negative correlation rate   4.5%                  0%

Table VIII shows the high performance of our system based on hybrid propagation in attack correlation and prediction. The use of hybrid propagation in Bayesian networks was especially useful because we had to deal with a lot of missing information.


XII. CONCLUSION

In this paper, we outlined a new approach based on hybrid propagation combining probability and possibility through a Bayesian network. Bayesian networks provide automatic learning from audit data, and hybrid propagation through the Bayesian network propagates both stochastic and epistemic uncertainties, coming respectively from the uncertain and the imprecise character of information. Applied in an intrusion detection context, our system helps detect both normal and abnormal connections with very considerable rates. Besides, we presented an approach to identify attack plans and predict upcoming attacks: we developed a Bayesian-network-based system to correlate attack scenarios based on their relationships, and we conducted inference to evaluate the likelihood of attack goals and to predict potential upcoming attacks based on the hybrid propagation of uncertainties. Our system demonstrates high performance when detecting intrusions and when correlating and predicting attacks, owing to the use of Bayesian networks and of hybrid propagation within them, which is especially useful when dealing with missing information.

There are still some challenges in attack plan recognition. First, we will apply our algorithms to alert streams collected from live networks to improve our work. Second, our system can be improved by integrating an expert system able to provide recommendations based on attack scenario prediction.

REFERENCES

[1] C. Kruegel, D. Mutz, W. Robertson, F. Valeur, "Bayesian Event Classification for Intrusion Detection", Reliable Software Group, University of California, Santa Barbara, 2003.
[2] F. Cuppens and R. Ortalo, "LAMBDA: A language to model a database for detection of attacks", in Third International Workshop on the Recent Advances in Intrusion Detection (RAID'2000), Toulouse, France, 2000.
[3] C. Baudrit and D. Dubois, "Représentation et propagation de connaissances imprécises et incertaines : application à l'évaluation des risques liés aux sites et aux sols pollués", Université Toulouse III – Paul Sabatier, Toulouse, France, March 2006.
[4] J. Dougherty, R. Kohavi, M. Sahami, "Supervised and unsupervised discretization of continuous features", in Proceedings of ICML'95, pp. 194-202, 1995.
[5] S. Axelsson, "The Base-Rate Fallacy and its Implications for the Difficulty of Intrusion Detection", in 6th ACM Conference on Computer and Communications Security, 1999.
[6] C. Kruegel, D. Mutz, W. Robertson, F. Valeur, "Bayesian Event Classification for Intrusion Detection", Reliable Software Group, University of California, Santa Barbara, 2003.
[7] R. Goldman, "A Stochastic Model for Intrusions", in Symposium on Recent Advances in Intrusion Detection (RAID), 2002.
[8] A. Valdes and K. Skinner, "Adaptive, Model-based Monitoring for Cyber Attack Detection", in Proceedings of RAID 2000, Toulouse, France, October 2000.
[9] K. Johansen and S. Lee, "Network Security: Bayesian Network Intrusion Detection (BNIDS)", May 3, 2003.
[10] S. K. Govindu, "Intrusion Forecasting System", http://www.securitydocs.com/library/3110, March 15, 2005.
[11] F. Jemili, M. Zaghdoud, M. Ben Ahmed, "Attack Prediction based on Hybrid Propagation in Bayesian Networks", in Proc. of the Internet Technology and Secured Transactions Conference, ICITST-2009.
[12] B. Landreth, "Out of the Inner Circle: A Hacker's Guide to Computer Security", Microsoft Press, Bellevue, WA, 1985.
[13] P. Innella and O. McMillan, "An Introduction to Intrusion Detection", Tetrad Digital Integrity, LLC, http://www.securityfocus.com, December 6, 2001.
[14] B. C. Rudzonis, "Intrusion Prevention: Does it Measure up to the Hype?", SANS GSEC Practical v1.4b, April 2003.
[15] M. Roesch, "Snort – Lightweight Intrusion Detection for Networks", in USENIX LISA '99, 1999.
[16] K. Ilgun, "USTAT: A Real-time Intrusion Detection System for UNIX", in Proceedings of the IEEE Symposium on Research in Security and Privacy, Oakland, CA, May 1993.
[17] B. Mukherjee, T. L. Heberlein, K. N. Levitt, "Network intrusion detection", IEEE Network, 8(3):26-41, May/June 1994.
[18] P. Spirtes, T. Richardson, C. Meek, "Learning Bayesian networks with discrete variables from data", in Proceedings of the First International Conference on Knowledge Discovery and Data Mining, pp. 294-299, 1995.
[19] P. Spirtes, C. Glymour, R. Scheines, "Causation, Prediction, and Search", Springer Verlag, New York, 1993.
[20] IETF Intrusion Detection Working Group, "Intrusion Detection Message Exchange Format", http://www.ietf.org/internet-drafts/draft-ietf-idwg-idmef-xml-09.txt, 2002.
[21] G. F. Cooper and E. Herskovits, "A Bayesian method for the induction of probabilistic networks from data", Machine Learning, 1992.
[22] R. Sanguesa and U. Cortes, "Learning causal networks from data: a survey and a new algorithm for recovering possibilistic causal networks", AI Communications 10, 31-61, 1997.
[23] F. Jensen, F. V. Jensen, S. L. Dittmer, "From influence diagrams to junction trees", in Proceedings of UAI, 1994.
[24] M. Sayed Mouchaweh, P. Bilaudel, B. Riera, "Variable Probability-Possibility Transformation", 25th European Annual Conference on Human Decision-Making and Manual Control (EAM'06), Valenciennes, France, September 27-29, 2006.
[25] DARPA, "Knowledge Discovery in Databases", DARPA archive, task description, http://www.kdd.ics.uci.edu/databases/kddcup99/task.htm, 1999.
[26] X. Qin, "A Probabilistic-Based Framework for INFOSEC Alert Correlation", PhD thesis, College of Computing, Georgia Institute of Technology, August 2005.
[27] G. H. Kayacik and A. N. Zincir-Heywood, "Analysis of Three Intrusion Detection System Benchmark Datasets Using Machine Learning Algorithms", in Proceedings of the IEEE ISI 2005, Atlanta, USA, May 2005.
[28] North Carolina State University Cyber Defense Laboratory, "TIAA: A toolkit for intrusion alert analysis", http://discovery.csc.ncsu.edu/software/correlator/ver0.4/index.html
[29] MIT Lincoln Laboratory, "2000 DARPA intrusion detection scenario specific data sets", 2000.
[30] F. Jemili, M. Zaghdoud, M. Ben Ahmed, "Intrusion Detection based on Hybrid Propagation in Bayesian Networks", in Proc. of the IEEE International Conference on Intelligence and Security Informatics, ISI 2009.
[31] W. Lee, S. J. Stolfo, K. W. Mok, "A data mining framework for building intrusion detection models", in Proceedings of the 1999 IEEE Symposium on Security and Privacy, 1999.
[32] N. Arfaoui, F. Jemili, M. Zaghdoud, M. Ben Ahmed, "Comparative Study Between Bayesian Network And Possibilistic Network In Intrusion Detection", in Proc. of the International Conference on Security and Cryptography, SECRYPT 2006.


An Algorithm for Mining Multidimensional Fuzzy Association Rules

Neelu Khare 1, Neeru Adlakha 2, K. R. Pardasani 3
1 Department of Computer Applications, MANIT, Bhopal (M.P.). Email: [email protected]
2 Department of Applied Mathematics, SVNIT, Surat (Gujarat). Email: [email protected]
3 Department of Mathematics, MANIT, Bhopal (M.P.). Email: [email protected]

Abstract— Multidimensional association rule mining searches for interesting relationships among values from different dimensions/attributes in a relational database. In this method the correlation is among a set of dimensions, i.e., the items forming a rule come from different dimensions; therefore each dimension should be partitioned at the fuzzy set level. This paper proposes a new algorithm for generating multidimensional association rules by utilizing fuzzy sets. Given a database consisting of fuzzy transactions, the Apriori property is employed to prune useless candidate itemsets.

Keywords— interdimension; multidimensional association rules; fuzzy membership functions; categories.

I. INTRODUCTION

Data Mining is a recently emerging field, connecting the three worlds of Databases, Artificial Intelligence and Statistics. The computer age has enabled people to gather large volumes of data: every large organization amasses data on its clients or members, and these databases tend to be enormous. The usefulness of this data is negligible if "meaningful information" or "knowledge" cannot be extracted from it; Data Mining answers this need. Discovering association rules from large databases has been actively pursued since the problem was first presented in 1993; it is a data mining task that discovers associations among items in transaction databases such as sales data [1]. Such an association could be "if a set of items A occurs in a sale transaction, then another set of items B will likely also occur in the same transaction". One of the best studied models for data mining is that of association rules [2]. This model assumes that the basic object of our interest is an item, and that data appear in the form of sets of items called transactions. Association rules are "implications" that relate the presence of items in transactions [16]. The classical example is the rules extracted from the content of market baskets: items are things we can buy in a market, and transactions are market baskets containing several items [17][18]. Association rules relate the presence of items in the same basket; for example, "every basket that contains bread contains butter", usually noted bread ⇒ butter [3]. The basic format of an association rule is an implication of the form A ⇒ B, where A and B are disjoint itemsets, i.e., A ∩ B = φ. The strength of an association rule can be measured in terms of its support and confidence: support determines how often a rule is applicable to a given data set, while confidence determines how frequently items in B appear in transactions that contain A [5]. The formal definitions of these metrics are

Support s(A ⇒ B) = σ(A ∪ B) / N

Confidence c(A ⇒ B) = σ(A ∪ B) / σ(A)

In general, association rule mining can be viewed as a two-step process:
1. Find all frequent itemsets: by definition, each of these itemsets will occur at least as frequently as a predetermined minimum support count, min_sup.
2. Generate strong association rules from the frequent itemsets: by definition, these rules must satisfy minimum support and minimum confidence [6].

Association rule mining that involves a single predicate is referred to as single-dimensional or intra-dimension association rule mining, since it contains a single distinct predicate with multiple occurrences (the predicate occurs more than once within the rule) [8]. The terminology of single-dimensional or intra-dimension association rules is used in multidimensional databases by assuming each distinct predicate in the rule to be a dimension [11]. Association rules that involve two or more dimensions or predicates can be referred to as multidimensional association rules. Rather than searching for frequent itemsets (as is done in mining single-dimensional association rules), in multidimensional association rules we search for frequent predicate sets (here the items forming a rule come from different dimensions) [10]. In general, there are two types of multidimensional association rules, namely inter-dimension association rules and hybrid-dimension association rules [15]; inter-dimension association rules are multidimensional association rules with no repeated predicates. This paper introduces a method for generating inter-dimension association rules. We introduce the concept of a fuzzy transaction as a subset of items, and in addition we present a general model to discover association rules in fuzzy transactions; we call them fuzzy association rules.

II. APRIORI ALGORITHM AND APRIORI PROPERTY

Apriori is an influential algorithm in market basket analysis for mining frequent itemsets for Boolean association rules [1]. The name of Apriori is based on the fact that the algorithm uses prior knowledge of frequent itemset properties. Apriori employs an iterative approach known as a level-wise search, where k-itemsets are used to explore (k+1)-itemsets [2]. First, the set of frequent 1-itemsets is found, denoted by L1; L1 is used to find L2, the set of frequent 2-itemsets, which is used to find L3, and so on, until no more frequent k-itemsets can be found.
Property: all non-empty subsets of a frequent itemset must be frequent [5].

A. Itemsets in multidimensional data sets

Let RD be a relational database with m records and n dimensions [4]. It consists of a set of all attributes/dimensions D = {d1, d2, ..., dn} and a set of tuples T = {t1, t2, t3, ..., tm} [10], where ti represents the i-th tuple; if there are n domains of attributes D1, D2, D3, ..., Dn, then each tuple ti = (vi1, vi2, ..., vin), where vij is the atomic value of tuple ti, with vij ∈ Dj the j-th value in the i-th record, 1 ≤ i ≤ m and 1 ≤ j ≤ n [9]. RD can thus be defined as RD ⊆ D1 × D2 × D3 × ... × Dn. To generate multidimensional association rules, we search for frequent predicate sets; a k-predicate set contains k conjunctive predicates. Dimensions, which are also called predicates or fields, constitute a dimension combination with the formula (d1, d2, ..., dn), in which dj represents the j-th dimension [9]. The form (dj, vij) is called an "item" in a relational database or other multidimensional data sets, denoted by Iij; that is, Iij = (dj, vij), where 1 ≤ i ≤ m and 1 ≤ j ≤ n. Suppose that A and B are items in the same relational database RD. A equals B if and only if the dimension and the value of item A are equal to the dimension and the value of item B, which is denoted by A = B; if it is not true that A equals B, then it is denoted by A ≠ B. A set constituted by some "items" as defined above is called an "itemset".

III. GENERATION OF FUZZY ASSOCIATION RULES

Apriori algorithm ignores the number items when determining relationship of the items. The algorithm that calculates support of itemsets just count the number of occurrences of the itemsets, in every record of transaction (shopping cart), without any consideration of the number of items in a record of transaction. However, based on human intuitive, it should be considered that the larger number of items purchased in a transaction means that the degree of association among the items in the transaction may be lower. When calculating support of itemsets in relational database, count of the number of categories/values in every dimension /

K

ij

j

(2)

 n ( D j ) 

K

where Ik is a k-itemset whose items belong to different dimensions. A Boolean membership function ηT maps items to {0,1} and is defined by:

ηT(i) = { 1, if i ∈ T; 0, otherwise }   (3)

such that if an item i is an element of transaction T then ηT(i) = 1; otherwise ηT(i) = 0.

Step-5: Calculate the support of every (candidate) k-itemset using the following equation [7]:

Support(Ik) = ΣT µIk(T)   (4)

where the sum is taken over all transactions T, and M is the set of qualified dimensions as given in (1). It can be proved that (4) satisfies the following property:

Σi∈D Support(i) = |M|   (5)

For k = 1, Ik can be considered as a single item.
Step-6: Ik will be stored in the set of frequent k-itemsets, Lk, if and only if Support(Ik) ≥ βk.
Step-7: Set k = k + 1; if k > λ, then go to Step-9.
Step-8: Look for possible/candidate k-itemsets in Lk-1 by the following rule: a k-itemset, Ik, will be considered as a candidate k-itemset if it satisfies:

∀F ⊂ Ik, |F| = k-1 ⇒ F ∈ Lk-1

For example, Ik = {i1, i2, i3, i4} will be considered as a candidate 4-itemset iff {i1, i2, i3}, {i2, i3, i4}, {i1, i3, i4} and {i1, i2, i4} are all in L3. If no candidate k-itemset is found, the process goes to Step-9; otherwise, it returns to Step-3.

Step-9: Similar to the Apriori algorithm, the confidence of an association rule A ⇒ B can be calculated by the following equation [14]:

Conf(A ⇒ B) = P(B|A) = Support(A ∪ B)/Support(A)   (6)

where A, B ∈ DA are any k-itemsets in Lk. (Note: µi(T) denotes µ{i}(T), for simplification) [12]. Therefore, the support of an itemset as given by (4) can also be expressed by:

Support(Ik) = ΣT inf i∈Ik (µi(T))

It follows that the confidence in (6) can also be represented by:

Conf(A ⇒ B) = [ΣT inf i∈A∪B (µi(T))] / [ΣT inf i∈A (µi(T))]   (7)
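The steps above can be condensed into a short Python sketch. This is a minimal illustration under stated assumptions, not the authors' implementation: the record layout (a list of dicts mapping dimension names to category values), the function names and the use of frozensets are choices made only for the example.

from itertools import combinations

def mine_fuzzy_rules(rows, lam, beta):
    """rows: list of {dimension: category} records; lam: the threshold
    lambda of Step-1; beta: dict mapping k to the minimum support beta_k."""
    dims = sorted(rows[0])
    n_cats = {d: len({r[d] for r in rows}) for d in dims}
    M = [d for d in dims if n_cats[d] <= lam]          # eq. (1)

    def mu(itemset, row):                              # eqs. (2), (3)
        return min((1.0 / n_cats[d]) if row[d] == v else 0.0
                   for d, v in itemset)

    def support(itemset):                              # eq. (4)
        return sum(mu(itemset, r) for r in rows)

    # Steps 4-6 for k = 1: every (dimension, value) pair is a 1-itemset.
    L = {1: {}}
    for d in M:
        for v in sorted({r[d] for r in rows}):
            s = support([(d, v)])
            if s >= beta[1]:
                L[1][frozenset([(d, v)])] = s

    k = 2
    while k <= lam and k in beta and L[k - 1]:         # Step-7
        items = sorted({i for itemset in L[k - 1] for i in itemset})
        L[k] = {}
        for cand in combinations(items, k):            # Step-8
            if len({d for d, _ in cand}) < k:
                continue                               # one item per dimension
            if any(frozenset(sub) not in L[k - 1]
                   for sub in combinations(cand, k - 1)):
                continue                               # Apriori property
            s = support(cand)
            if s >= beta[k]:
                L[k][frozenset(cand)] = s
        k += 1

    def conf(A, B):                                    # Step-9, eqs. (6), (7)
        return support(set(A) | set(B)) / support(A)

    return L, conf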

IV. AN ILLUSTRATIVE EXAMPLE

An illustrative example is given to clarify the concept of the proposed algorithm and how the process of generating fuzzy association rules is performed step by step. The process starts from the relational database shown in TABLE I.

TABLE I.
TID   A    B    C    D    E    F
T1    A1   B1   C1   D1   E1   F1
T2    A2   B2   C2   D1   E1   F2
T3    A2   B2   C2   D2   E1   F1
T4    A1   B3   C2   D1   E1   F4
T5    A2   B3   C2   D1   E2   F3
T6    A2   B1   C2   D1   E2   F1
T7    A1   B2   C1   D2   E1   F4
T8    A1   B2   C2   D1   E1   F2
T9    A1   B2   C1   D2   E1   F4
T10   A2   B3   C1   D1   E2   F3

Step-1: Suppose that λ arbitrarily equals 3; that means a qualified attribute/dimension is an attribute/dimension with no more than 3 values/categories. The result of this step is the set of qualified attributes/dimensions seen in TABLE II (dimension F, with four values, is discarded),

TABLE II.
TID   A    B    C    D    E
T1    A1   B1   C1   D1   E1
T2    A2   B2   C2   D1   E1
T3    A2   B2   C2   D2   E1
T4    A1   B3   C2   D1   E1
T5    A2   B3   C2   D1   E2
T6    A2   B1   C2   D1   E2
T7    A1   B2   C1   D2   E1
T8    A1   B2   C2   D1   E1
T9    A1   B2   C1   D2   E1
T10   A2   B3   C1   D1   E2

where M = {A, B, C, D, E}.

Step-2: The process starts by looking for the support of 1-itemsets, for which k is set to 1.

Step-3: Since λ = 3, 1 ≤ k ≤ λ. It is arbitrarily given that β1 = 2, β2 = 2, β3 = 1.5; that means the system only considers k-itemsets whose support is ≥ 2 for k = 1, 2 and ≥ 1.5 for k = 3.

Step-4: Every k-itemset is represented as a fuzzy set on the set of transactions, as given by the following results:

1-itemsets:
{A1} = {0.5/T1, 0.5/T4, 0.5/T7, 0.5/T8, 0.5/T9}, {A2} = {0.5/T2, 0.5/T3, 0.5/T5, 0.5/T6, 0.5/T10}, {B1} = {0.33/T1, 0.33/T6}, {B2} = {0.33/T2, 0.33/T3, 0.33/T7, 0.33/T8, 0.33/T9}, {B3} = {0.33/T4, 0.33/T5, 0.33/T10}, {C1} = {0.5/T1, 0.5/T7, 0.5/T9, 0.5/T10}, {C2} = {0.5/T2, 0.5/T3, 0.5/T4, 0.5/T5, 0.5/T6, 0.5/T8}, {D1} = {0.5/T1, 0.5/T2, 0.5/T4, 0.5/T5, 0.5/T6, 0.5/T8, 0.5/T10}, {D2} = {0.5/T3, 0.5/T7, 0.5/T9}, {E1} = {0.5/T1, 0.5/T2, 0.5/T3, 0.5/T4, 0.5/T7, 0.5/T8, 0.5/T9}, {E2} = {0.5/T5, 0.5/T6, 0.5/T10}

From Step-5 and Step-6, {B1}, {B2}, {B3}, {D2}, {E2} cannot be considered for further processing because their support is < β1.


2-itemsets:
{A1,C1} = {0.5/T1, 0.5/T7, 0.5/T9}, {A1,C2} = {0.5/T4, 0.5/T8}, {A2,C1} = {0.5/T10}, {A2,C2} = {0.5/T2, 0.5/T3, 0.5/T5, 0.5/T6}, {A1,D1} = {0.5/T1, 0.5/T4, 0.5/T8}, {A2,D1} = {0.5/T2, 0.5/T5, 0.5/T6, 0.5/T10}, {A1,E1} = {0.5/T1, 0.5/T4, 0.5/T7, 0.5/T8, 0.5/T9}, {A2,E1} = {0.5/T2, 0.5/T3}, {C1,D1} = {0.5/T1, 0.5/T10}, {C2,D1} = {0.5/T2, 0.5/T4, 0.5/T5, 0.5/T6, 0.5/T8}, {C1,E1} = {0.5/T1, 0.5/T7, 0.5/T9}, {C2,E1} = {0.5/T2, 0.5/T3, 0.5/T4, 0.5/T8}, {D1,E1} = {0.5/T1, 0.5/T2, 0.5/T4, 0.5/T8}

From Step-5 and Step-6, {A1,C1}, {A1,C2}, {A2,C1}, {A1,D1}, {A2,E1}, {C1,D1}, {C1,E1} cannot be considered for further processing because their support is < β2.

3-itemsets:
{A2,C2,D1} = {0.5/T2, 0.5/T5, 0.5/T6}, {C2,D1,E1} = {0.5/T2, 0.5/T4, 0.5/T8}

Step-5: The support of each k-itemset is calculated, with the following results:

1-itemsets: support({A1}) = 2.5, support({A2}) = 2.5, support({B1}) = 0.66, support({B2}) = 1.65, support({B3}) = 0.99, support({C1}) = 2, support({C2}) = 3, support({D1}) = 3.5, support({D2}) = 1.5, support({E1}) = 3.5, support({E2}) = 1.5.

2-itemsets: support({A1,C1}) = 1.5, support({A1,C2}) = 1, support({A2,C1}) = 0.5, support({A2,C2}) = 2, support({A1,D1}) = 1.5, support({A2,D1}) = 2, support({A1,E1}) = 2.5, support({A2,E1}) = 1, support({C1,D1}) = 1, support({C2,D1}) = 2.5, support({C1,E1}) = 1.5, support({C2,E1}) = 2, support({D1,E1}) = 2.

3-itemsets: support({A2,C2,D1}) = 1.5, support({C2,D1,E1}) = 1.5.

Step-6: From the results of Step-4 and Step-5, the sets of frequent 1-itemsets, 2-itemsets and 3-itemsets are given in TABLE III, TABLE IV and TABLE V, respectively.

TABLE III. L1 (β1 = 2)
L1      Support
{A1}    2.5
{A2}    2.5
{C1}    2
{C2}    3
{D1}    3.5
{E1}    3.5

TABLE IV. L2 (β2 = 2)
L2         Support
{A2,C2}    2
{A2,D1}    2
{A1,E1}    2.5
{C2,D1}    2.5
{C2,E1}    2
{D1,E1}    2

TABLE V. L3 (β3 = 1.5)
L3            Support
{A2,C2,D1}    1.5
{C2,D1,E1}    1.5

Step-7: This step just increments the value of k; when k > λ, or when all candidate k-itemsets fall below βk, the process goes to Step-9.

Step-8: This step looks for possible/candidate k-itemsets in Lk-1. If no candidate k-itemset is found, the process goes to Step-9; otherwise, it returns to Step-3.

Step-9: This step calculates the confidence of each possible association rule, for example:

Conf(A2 ⇒ C2) = 2/2.5 = 0.8
Conf(A2 ⇒ D1) = 2/2.5 = 0.8
Conf(A2 ∧ C2 ⇒ D1) = 1.5/2 = 0.75
Conf(A2 ⇒ C2 ∧ D1) = 1.5/2.5 = 0.6
Conf(C2 ∧ D1 ⇒ E1) = 1.5/2.5 = 0.6
Conf(C2 ⇒ D1 ∧ E1) = 1.5/3 = 0.5
...
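As a quick check on the tables above, the self-contained snippet below recomputes a few supports and one confidence directly from TABLE I; the support function simply restates equations (2)-(4), and dimension F drops out automatically because it has four categories. The data layout is an assumption made for the sketch.

rows = [dict(zip("ABCDEF", t)) for t in [
    ("A1", "B1", "C1", "D1", "E1", "F1"), ("A2", "B2", "C2", "D1", "E1", "F2"),
    ("A2", "B2", "C2", "D2", "E1", "F1"), ("A1", "B3", "C2", "D1", "E1", "F4"),
    ("A2", "B3", "C2", "D1", "E2", "F3"), ("A2", "B1", "C2", "D1", "E2", "F1"),
    ("A1", "B2", "C1", "D2", "E1", "F4"), ("A1", "B2", "C2", "D1", "E1", "F2"),
    ("A1", "B2", "C1", "D2", "E1", "F4"), ("A2", "B3", "C1", "D1", "E2", "F3"),
]]
n = {d: len({r[d] for r in rows}) for d in "ABCDEF"}   # n(F) = 4: unqualified

def support(itemset):                                  # eqs. (2)-(4)
    return sum(min((1.0 / n[d]) if r[d] == v else 0.0 for d, v in itemset)
               for r in rows)

assert abs(support([("D", "D1")]) - 3.5) < 1e-9                           # Table III
assert abs(support([("C", "C2"), ("D", "D1")]) - 2.5) < 1e-9              # Table IV
assert abs(support([("C", "C2"), ("D", "D1"), ("E", "E1")]) - 1.5) < 1e-9 # Table V
# Conf(C2 and D1 => E1) = 1.5 / 2.5 = 0.6
print(support([("C", "C2"), ("D", "D1"), ("E", "E1")])
      / support([("C", "C2"), ("D", "D1")]))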

V. CONCLUSION

This paper introduced an algorithm for generating fuzzy multidimensional association rules as a generalization of inter-dimension association rules. The algorithm is based on the concept that a larger number of values/categories in a dimension/attribute means a lower degree of association among the items in a transaction. Moreover, to generalize inter-dimension association rules, the concept of fuzzy itemsets is discussed in order to introduce the concept of fuzzy multidimensional association rules. Two generalized formulas were also proposed in relation to the fuzzy association rules. Finally, an illustrative example is given to clearly demonstrate the steps of the algorithm. In future work, we will discuss and propose a method to generate conditional hybrid-dimension association rules using fuzzy logic, where a hybrid-dimension association rule is a hybridization of inter-dimension and intra-dimension association rules.

REFERENCES

[1] Agrawal, R., Imielinski, T., and Swami, A. N., "Mining association rules between sets of items in large databases", Proceedings of the ACM SIGMOD International Conference on Management of Data, pp. 207-216, 1993.
[2] Agrawal, R. and Srikant, R., "Fast algorithms for mining association rules", Proc. 20th Int. Conf. Very Large Data Bases, pp. 487-499, 1994.
[3] Klemetinen, L., Mannila, H., Ronkainen, P., "Finding interesting rules from large sets of discovered association rules", Third International Conference on Information and Knowledge Management, Gaithersburg, USA, pp. 401-407, 1994.
[4] Houtsma, M., Swami, A., "Set-oriented mining of association rules in relational databases", Proc. of the 11th International Conference on Data Engineering, Taipei, Taiwan, pp. 25-33, 1995.
[5] Agrawal, R., Arning, A., Bollinger, T., Mehta, M., Shafer, J., and Srikant, R., "The Quest Data Mining System", Proceedings of the 2nd Int'l Conference on Knowledge Discovery in Databases and Data Mining, Portland, Oregon, August 1996.
[6] Han, J., Kamber, M., Data Mining: Concepts and Techniques, The Morgan Kaufmann Series, 2001.
[7] Klir, G. J., Yuan, B., Fuzzy Sets and Fuzzy Logic: Theory and Applications, New Jersey: Prentice Hall, 1995.
[8] Jams, Jurgen M., "An Enhanced Apriori Algorithm for Mining Multidimensional Association Rules", 25th Int. Conf. Information Technology Interfaces (ITI), Cavtat, Croatia, 1994.
[9] Xu, Wan-Xin, Wang, Ru-Jing, "A Fast Algorithm of Mining Multidimensional Association Rules Frequently", Proceedings of the Fifth International Conference on Machine Learning and Cybernetics, Dalian, 13-16 August 2006, IEEE.
[10] Intan, Rolly, "A Proposal of Fuzzy Multidimensional Association Rules", Jurnal Informatika, Vol. 7, pp. 85-90, Nov. 2006.
[11] Alhajj, Reda, Kaya, Mehmet, "Integrating Fuzziness into OLAP for Multidimensional Fuzzy Association Rules Mining", Third IEEE International Conference on Data Mining (ICDM'03), 2003.
[12] Intan, Rolly, "An Algorithm for Generating Single Dimensional Fuzzy Association Rule Mining", Jurnal Informatika, Vol. 7, No. 1, pp. 61-66, May 2006.
[13] Intan, Rolly, "Mining Multidimensional Fuzzy Association Rules from a Normalized Database", International Conference on Convergence and Hybrid Information Technology, IEEE, 2008.
[14] Intan, Rolly, Yuliana, Oviliani Yenty, Handojo, Andreas, "Mining Multidimensional Fuzzy Association Rules from a Database of Medical Record Patients", Jurnal Informatika, Vol. 9, No. 1, pp. 15-22, May 2008.
[15] Pandey, Anjna, and Pardasani, K. R., "Rough Set Model for Discovering Multidimensional Association Rules", IJCSNS, Vol. 9, pp. 159-164, June 2009.
[16] Delgado, Miguel, Marín, Nicolás, Sánchez, Daniel, and Vila, María-Amparo, "Fuzzy Association Rules: General Model and Applications", IEEE Transactions on Fuzzy Systems, Vol. 11, No. 2, April 2003.
[17] Han, J., Pei, J., Yin, Y., "Mining Frequent Patterns without Candidate Generation", SIGMOD Conference, pp. 1-12, ACM Press, 2000.
[18] Han, J., Pei, J., Yin, Y., Mao, R., "Mining Frequent Patterns without Candidate Generation: A Frequent-Pattern Tree Approach", Data Mining and Knowledge Discovery, pp. 53-87, 2004.
[19] Verlinde, Hannes, De Cock, Martine, and Boute, Raymond, "Fuzzy Versus Quantitative Association Rules: A Fair Data-Driven Comparison", IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, Vol. 36, No. 3, pp. 679-684, June 2006.


Analysis, Design and Simulation of a New System for Internet Multimedia Transmission Guarantee

O. Said, S. Bahgat, S. Ghoniemy, Y. Elawady
Computer Science Department, Faculty of Computers and Information Systems, Taif University, Taif, KSA.
[email protected]

Abstract: QoS is a very important issue for multimedia communication systems. In this paper, a new system that reinstalls the relations between the QoS elements (RSVP, routing protocol, sender, and receiver) during multimedia transmission is proposed; an alternative path is then created in case of failure of the original multimedia path. The suggested system considers the problems that may be faced during and after the creation of the rerouting path. Finally, the proposed system is simulated using the OPNET 11.5 simulation package. Simulation results show that our proposed system outperforms the old one in terms of QoS parameters like packet loss and delay jitter.

Key words: Multimedia Protocols, RSVP, QoS, DiffServ, MPLS

1. INTRODUCTION

The path that the multimedia streams follow should provide them with all the required Quality of Service (QoS). Suppose that the determined multimedia path gives the multimedia streams all the needed services. In this situation, an urgent question arises: what is the solution if that path fails while the multimedia streams are being transmitted over it? This state may cause a loss of multimedia streams, especially when they are transported over the User Datagram Protocol (UDP). So, the solution is either to create an alternative path and divert the multimedia streams to flow over the new path, or to retransmit the failed multimedia streams. The second solution is very difficult (if not impossible) because the quantity of lost multimedia streams may be too huge to be retransmitted. So, the only available solution is to create another, alternative path and complete the transmission process. To determine an alternative path, we face two open questions. The first question is: how is a free path that will transport the multimedia streams to the same destination created? The second question, which arises after the path creation, is: can the created path provide the required QoS assigned to the failed one? From these queries and the RSVP analysis, it is obvious that the elements of resource reservation and QoS are RSVP, the routing protocol, the sender, and the receiver. It is also notable that the resource reservation process occurs before the multimedia transmission. At the beginning of the multimedia stream transmission, the relations between the QoS elements are disjoint. Hence, if a change occurs in the reserved path during the multimedia stream transmission, the previously stated problems may occur [1], [2]. In this paper, a new system for internet multimedia transmission guarantee is proposed, solving the problem of the old systems. This paper is organized as follows. In section 2, the related work, containing the RSVP analysis and a DiffServ & MPLS evaluation, is illustrated; in section 3, the problem definition is introduced; in section 4, our system is demonstrated; in section 5, a detailed simulation and evaluation of our system is shown. Finally, the conclusion and future work are illustrated.

2. RELATED WORK (RSVP, DIFFSERV, AND MPLS)

The three systems that are closely related to our work are RSVP, DiffServ, and MPLS. In this section, a brief analysis of RSVP is introduced; in addition, an evaluation of DiffServ & MPLS is demonstrated.

A. RSVP operational model
The RSVP resource-reservation process is initiated when an RSVP daemon consults the local routing protocol(s) to obtain routes. A host sends Internet Group Management Protocol (IGMP) messages to join a multicast group and RSVP messages to reserve resources along the delivery path(s) from that group. Each router that is capable of participating in resource reservation passes incoming data packets to a packet classifier and then queues them as necessary in a packet scheduler. The RSVP packet classifier determines the route and QoS class for each packet. The RSVP scheduler allocates resources for transmission on the particular data-link-layer medium used by each interface. If the data-link-layer medium has its own QoS management capability, the packet scheduler is responsible for negotiating with the data-link layer to obtain the QoS requested by RSVP. The scheduler itself allocates packet-transmission capacity on a QoS-passive medium, such as a leased line, and can also allocate other system resources, such as CPU time or buffers. A QoS request, typically originating in a receiver host application, is passed to the local RSVP implementation as an RSVP daemon. The RSVP protocol is then used to pass the request to all the nodes (routers and hosts) along the reverse data path(s) to the data source(s). At each node, the RSVP program applies a local decision procedure called admission control to determine whether it can supply the requested QoS. If admission control succeeds, the RSVP program sets the parameters of the packet classifier and scheduler to obtain the desired QoS. If admission control fails at any node, the RSVP program returns an error indication to the application that originated the request. However, it was found that, unsurprisingly, the default best-effort delivery of RSVP messages performs poorly in the face of network congestion. Also, the RSVP protocol is receiver-oriented and is in charge of setting up the required resource reservation; in some cases, reallocating the bandwidth in a receiver-oriented way can delay the required sender reservation adjustments [3], [4], see Fig. 1.

Fig. 1 The RSVP Operations

B. DiffServ & MPLS
MPLS simplifies the routing process used in IP networks, since in an MPLS domain, when a stream of data traverses a common path, a Label Switched Path (LSP) can be established using MPLS signaling protocols. A packet will typically be assigned to a Forwarding Equivalence Class (FEC) only once, when it enters the network at the ingress edge Label Switch Router (LSR), where each packet is assigned a label to identify its FEC and is transmitted downstream. At each LSR along the LSP, only the label is used to forward the packet to the next hop.

In a Differentiated Services domain, all the IP packets crossing a link and requiring the same DiffServ behavior are said to constitute a behavior aggregate (BA). At the ingress node of the DiffServ domain, the packets are classified and marked with a DiffServ Code Point (DSCP), which corresponds to their behavior aggregate. At each transit node, the DSCP is used to select the Per-Hop Behavior (PHB) that determines the queue and scheduling treatment to use and, in some cases, the drop probability for each packet [5], [6]. From the preceding discussion, one can see the similarities between MPLS and DiffServ: an MPLS LSP or FEC is similar to a DiffServ BA or PHB, and the MPLS label is similar to the DiffServ Code Point in some ways. The difference is that MPLS is about routing (switching) while DiffServ is rather about queuing, scheduling and dropping. Because of this, MPLS and DiffServ appear to be orthogonal: they are not dependent on each other, and they are both different ways of providing higher quality to services. Further, it is possible to have both architectures working at the same time in a single network, but it is also possible to have only one of them, or neither, depending on the choice of the network operator. However, they face several limitations:
1. No provisioning methods.
2. No signaling (as in RSVP).
3. Works per hop (i.e., what to do with a non-DS hop in the middle?).
4. No per-flow guarantee.
5. No end-user specification.
6. A large number of short flows works better with an aggregate guarantee.
7. Works only on the IP layer.
8. DiffServ is unidirectional; there is no receiver control.
9. Long multimedia flows and flows with high bandwidth need a per-flow guarantee.
10. Designed for static topology.

3. PROBLEM FORMULATION

The routing and resource reservation protocols must be capable of adapting to a route change without failure. When new possible routes pop up between the sender and the receiver, the routing protocol may tend to move the traffic onto the new path. Unfortunately, there is a possibility that the new path cannot provide the same QoS as the previous one. To avoid these situations, it has been suggested that the resource reservation protocol should be able to use a technique called route pinning, which would deny the routing protocol the right to change such a route as long as it is viable. Route pinning is not as easy to implement as it sounds: with technologies such as Classless Inter-Domain Routing (CIDR) [7], [8], a pinned route can use as much memory in a router as a whole continent! This problem may also occur if a path station cannot provide the


multimedia streams with the same required QoS during a transmission operation. In this situation, the multimedia streams should search for an alternative path to complete the transmission process.

4. THE PROPOSED SYSTEM

From the problem definition and the RSVP analysis, it is obvious that the elements of resource reservation and QoS are RSVP, the routing protocol, the sender, and the receiver. It is also notable that the resource reservation process occurs before the multimedia transmission. At the beginning of the multimedia stream transmission (i.e., after the resources are reserved for the multimedia), the relations between the QoS elements are disjoint. So, if a change occurs in the reserved path during the multimedia stream transmission, the above-stated problem may occur. If the connections between the QoS elements are reinstalled during the multimedia stream transmission, then the QoS problems may be solved. The reinstallation process is accomplished by three additive components, called the proposed system components.

A. The proposed system components
The proposed system comprises three additive components in addition to the old system components: 1- Connector, 2- Analyzer, 3- Detector. In the following subsections, the definition and the functions of each additive component are demonstrated.

• Connector
This component is fired at the start of transmission and can be considered as a software class(es). The connector has more than one task in helping the system accomplish its target; its main function is to reinstall the connections between the QoS elements when a problem occurs. See Algorithm 1 for more discussion of the connector.

• Detector
The detector and the connector are fired simultaneously. The detector precedes the connector in visiting the multimedia path's stations. The detector visits each path station to test the required QoS. If the detector notes a defect in the QoS at any station (i.e., the station cannot provide the required QoS), it sends the connector an alarm message containing the station's IP address and the failed required QoS. See Algorithm 3 for more discussion of the detector.

• Analyzer
This component, located at the receiver, is also considered as a software class(es). The main function of the analyzer is to extract the failed station(s) and its alternative(s). The analyzer also connects to RSVP at the receiver site to extract a QoS request, or flow description, of the new path. In addition, the analyzer uses some features of DiffServ and MPLS to acquire an alternative simple path with the full QoS requirements: DiffServ provides the system with the simplest path and pushes the complexity to the network edges, while MPLS provides our system with the next hop for each packet and performs traffic conditioning on traffic streams flowing in different domains (paths). See Algorithm 2 for more discussion of the analyzer.

Algorithm 1
1- While the number of multimedia packets <> Null
  2-1 Begin
  2-2 The multimedia starts the transmission operation.
  2-3 The connector agent is fired with the start of the transmission operation.
  2-4 For I = 1 To N
    2-4-1 Begin
    2-4-2 The connector agent tests the stored detector flag value.
    2-4-3 If the flag value is changed to one
      2-4-3-1 Go to step 2-5.
    2-4-4 Else
      2-4-4-1 Complete the I For loop.
    2-4-5 End I For loop.
  2-5 While ((SW - SC) * TR) <> Null
    2-5-1 Begin
    2-5-2 The connector extracts the nearest router address around the failed station.
    2-5-3 The connector sends a message to the router asking for an alternative path (or station).
    2-5-4 The connector receives all available paths in a reply message sent by the router.
    2-5-5 The connector sends the router reply message to the analyzer, asking for the new QoS request for the new path.
    2-5-6 For J = PFS To M
      2-5-6-1 Begin
      2-5-6-2 The connector tests the QoS.
        2-5-6-2-1 If the QoS fails, return to step 2-5.
        2-5-6-2-2 Else, complete the J For loop.
      2-5-6-3 End J For loop.
    2-5-7 (SW - SC) * TR = ((SW - SC) * TR) - 1 (unit time)
    2-5-8 End inner While loop.
  2-6 End outer While loop.
  2-7 Stored flag value = 0.
2- End of the connector algorithm.
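A compact Python sketch of the connector's role in Algorithm 1 is given below. Every helper name (wait_for_detector_alarm, nearest_router, request_alternative_paths, ask_analyzer, qos_test, adopt_path) is a placeholder standing in for one of the message exchanges described above, not an API defined in the paper.

def connector_loop(session):
    """Sketch of Algorithm 1: on a detector alarm, ask the routing protocol
    for alternative paths, have the analyzer derive the QoS request for each,
    and keep the first alternative that passes the QoS test."""
    while session.packets_remaining():
        alarm = session.wait_for_detector_alarm()       # detector flag set to one
        if alarm is None:
            continue                                    # no failure on this pass
        router = session.nearest_router(alarm.failed_station)
        for path in session.request_alternative_paths(router):
            qos_request = session.ask_analyzer(old_path=session.path,
                                               new_path=path)
            if session.qos_test(path, qos_request):     # steps 2-5-6 of Algorithm 1
                session.adopt_path(path, qos_request)   # inform the sender
                break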


Algorithm 2
1- If the stored connector flag is changed to one:
  2-1 The analyzer receives the old and new paths from the connector.
  2-2 The analyzer compares the two paths and separates the similar stations from the different ones.
  2-3 The analyzer keeps the similar stations in a table (called Same) and keeps the different stations in another two tables (called Diff1 and Diff2).
  2-4 The analyzer constructs a mapping in relation to the QoS in the tables of different stations; see step 2.
  2-5 The analyzer cooperates with the RSVP to extract the QoS request of the new path.
  2-6 The analyzer encapsulates the results in a message and sends it to the connector.
2- The analyzer handling and mapping operations:
  2-1 For I = 1 to old[N]
    2-2-1 Begin
    2-2-2 If old[I] = New[I]
      2-2-2-1 Begin
      2-2-2-2 old[I] = Same[K]
      2-2-2-3 K = K + 1
      2-2-2-4 End If.
    2-2-3 Else
      2-2-3-1 Begin
      2-2-3-2 old[I] = Diff1[H]
      2-2-3-3 New[I] = Diff2[H]
      2-2-3-4 H = H + 1
      2-2-3-5 End Else.
    2-2-4 If H = K
      2-2-4-1 No change in the old QoS request.
    2-2-5 For J = 1 to H
      2-2-5-1 Begin
      2-2-5-2 Diff2[J] = Construct a QoS request.
      2-2-5-3 End J For loop.
    2-2-6 End I For loop.
3- End of the analyzer algorithm.

Algorithm 3
1- While the number of multimedia packets <> Null
  1-1 Begin
  1-2 If the QoS test value = 1
    1-2-1 Begin
    1-2-2 The detector multicasts an alarm message including the connector ID.
    1-2-3 The detector changes the test value to 0.
    1-2-4 The detector tests the succeeding stations.
    1-2-5 End If.
  1-3 End of the While loop.
  1-4 QoS test value = 0.
2- End of the detector algorithm.

Note: the symbol descriptions are found in Appendix A.

Table 1: The Data Stored in each System Component

Connector stored data: Connector ID, Analyzer ID, Address of each path station, Time of each visiting station, RSVP connections, Detector flag value (default value = 0).
Analyzer stored data: Analyzer ID, Connector ID, Connector address, RSVP connections, Similar table, Different tables, The connector flag value (default value = 0).
Detector stored data: Detector ID, Connector ID, Connector address, Analyzer address, Stream ID, QoS required from each path station, Path structure, The connector flag value (default value = 0), QoS test value (default value = 0).
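The mapping step of Algorithm 2 amounts to a station-by-station comparison of the two paths. Below is a minimal sketch, assuming both paths are equal-length lists of station addresses (plain strings here):

def split_paths(old_path, new_path):
    """Algorithm 2's handling/mapping step: station-by-station comparison of
    the old and new paths into the Same, Diff1 and Diff2 tables."""
    same, diff1, diff2 = [], [], []
    for old_st, new_st in zip(old_path, new_path):
        if old_st == new_st:
            same.append(old_st)
        else:
            diff1.append(old_st)   # failed/replaced station in the old path
            diff2.append(new_st)   # its counterpart in the new path
    return same, diff1, diff2

# Example: only the middle station was replaced, so a QoS request
# has to be constructed for its counterpart only.
same, diff1, diff2 = split_paths(["R1", "R2", "R3"], ["R1", "R5", "R3"])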


B. System approach
After the resource reservation processes have been done, the multimedia streams begin to flood across the predetermined path. The connector accompanies the multimedia streams at every station. When the connector receives an error message from the detector, the connector starts to install the connections between the QoS elements.

Fig. 2 Functional Diagram of the Proposed System

The connector extracts the address of the failed station and the nearest router. The connector constructs a message that will be sent to the routing protocol asking for an alternative path (or station). The routing protocol provides the connector with the available path(s) that compensate for the old one. The connector constructs a message containing the old and new paths and sends it to the analyzer. The analyzer extracts the failed station(s) and its corresponding one(s) in the new path. The analyzer connects to the RSVP to extract the QoS request. The analyzer then constructs a message to be sent to the connector. The connector transforms the analyzer message to the sender, informing it of the new selected path. Hence, the sender transmits the new multimedia streams using the new connector path; see Figs. 2 and 3.

Fig. 3 Analyzer Operation

C. System messages
To complete the connections between the proposed system components, we have to demonstrate the structure of each used message. The proposed system contains five new messages, which can be stated as follows:
1. From the connector to the sender.
2. Between the connector and the routing protocol (router) (Request and Reply).
3. Between the connector and the analyzer (Request and Reply).
4. Between the analyzer and RSVP at the receiver site (Request and Reply).
5. From the detector to the connector.

• From the connector to the sender
This message joins the connector with the multimedia sender. It is sent when the connector receives the QoS request from the analyzer. Its structure looks like the RSVP reservation request message, but with the connector ID (this field is used in case there is more than one connector in the proposed system).

• Between the connector and the routing protocol (Request and Reply)
This message joins the connector with the router or the routing protocol. It is fired when the detector alarms the connector that a QoS failure has occurred at a station in the multimedia path. The connector needs this message to access the alternative path (or station) that replaces the failed path (or station). There are two types of this message, the request message and the reply message. The request message comprises the failed path, and the reply message contains the alternative path. The request message has the following fields: 1) Message type, 2) Connector ID, and 3) Old path. The reply message has the following fields: 1) Message type, 2) Connector ID, and 3) Alternative path(s).

• Between the connector and the analyzer (Request and Reply)
This message is used for communication between the connector and the analyzer. It is fired when the connector needs a QoS request for the new path. The message has two types, the request message and the reply message. The request message contains a new path that is obtained from the router, and the reply message contains the QoS request that is extracted after the analysis operation. The request message contains the following fields: 1) Message type, 2) Connector ID, and 3) Alternative path. The reply message contains the following fields: 1) Message type, 2) Connector ID, and 3) QoS request.

• Between the analyzer and the RSVP at the receiver (Request and Reply)
This message is used to complete the dialog between the analyzer and the RSVP at the receiver site. The analyzer handles the old path and its alternative(s) to extract the failed station(s) and its corresponding station(s) in the new path; the analyzer needs this message to construct a QoS request for the new path(s). This message has two types, the request message and the reply message. The request message contains the new path that was sent by the connector, and the reply message contains the QoS request that is extracted by the RSVP. The request message contains the following fields: 1) Message type, 2) Analyzer ID, and 3) Alternative path. The reply message contains the following fields: 1) Message type, 2) Analyzer ID, and 3) Required QoS.

• From the detector to the connector
This message is used to alarm the connector of a new event occurrence. If the detector finds a QoS-related failure at a station, it sends this message to the connector, asking it to start its function of solving the problem. The message contains the following fields: 1) Message type, 2) Connector ID, 3) QoS request, and 4) Address of the failed station.
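For illustration, the five message structures can be encoded as plain records. The field names follow the lists above; the types and the dataclass encoding are assumptions made only for this sketch.

from dataclasses import dataclass
from typing import List

@dataclass
class SenderUpdate:              # connector -> sender (RSVP-like reservation
    connector_id: int            # request plus the connector ID)
    new_path: List[str]
    qos_request: dict

@dataclass
class PathRequest:               # connector -> routing protocol
    message_type: str
    connector_id: int
    old_path: List[str]

@dataclass
class PathReply:                 # routing protocol -> connector
    message_type: str
    connector_id: int
    alternative_paths: List[List[str]]

@dataclass
class AnalyzerExchange:          # connector <-> analyzer, analyzer <-> RSVP
    message_type: str
    sender_id: int               # connector ID or analyzer ID
    path: List[str]
    qos_request: dict            # filled in the reply direction

@dataclass
class DetectorAlarm:             # detector -> connector
    message_type: str
    connector_id: int
    qos_request: dict
    failed_station: str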

D. Decreasing the number of system messages
It is notable that our system contains a number of messages that may cause network overload. To make our system suitable for every network state, a strategy to decrease the number of sent and received messages should be demonstrated. This strategy is built on the cumulative-message idea. The detector component's job is to test whether each router (station) can provide the multimedia with the required QoS; in case of network overload, the detector can encapsulate its messages in one message containing the addresses of the QoS-failed stations not yet visited by the multimedia streams in the transmission trip. The analyzer component can use the same idea during its communication with DiffServ and MPLS, provided that the multimedia streams keep away from the analyzer transactions.
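A minimal sketch of this cumulative message, assuming a simple dictionary encoding:

def cumulative_alarm(failed_stations, connector_id):
    """One batched detector alarm instead of one alarm per QoS-failed
    station; failed_stations holds the addresses of the stations not yet
    visited by the multimedia streams (the dict layout is an assumption)."""
    return {"type": "alarm", "connector_id": connector_id,
            "failed_stations": list(failed_stations)}

# Usage: one message for three failed stations instead of three messages.
msg = cumulative_alarm(["R4", "R7", "R9"], connector_id=1)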

5. PERFORMANCE STUDY

In this section, the performance of the suggested multi-resource reservation system is studied. In our simulation, the network simulator OPNET 11.5 [9] is used. A distributed reservation-enabled environment, with multiple distributed services deployed and multiple clients requesting these services, is simulated. In particular, for runtime computation of end-to-end multi-resource reservation plans, the performance of the proposed system is compared with that of the best-effort communication system (the old system). The key performance metrics in our simulations are: 1) end-to-end delay, 2) packet loss, 3) packet loss in case of compound services, 4) re-routing state, 5) reservation success rate, 6) utilization, and 7) delay jitter. These parameters are evaluated for an increasing network load. In our simulations, we also compare our system with DiffServ & MPLS. Abhay Agnihotri's study [10] is used to build the simulation environment.

A. Simulation Setup
The infrastructure of the simulation contains the following items:
1. 3 Ethernet routers to send and receive the incoming traffic and police it according to the QoS scenarios specified in the proposed system, DiffServ, MPLS, and RSVP.
2. 15 video transmitters distributed over router 1 and router 2 as follows: 10 video transmitters are connected to router 1 and 5 are connected to router 2. The video workstations transmit 375 MPEG video packets per second, of size 1000 bytes. Each transmitter can send the multimedia packets only if it has the full required QoS, such as specified interactive priority, streaming, full bandwidth, specified delay jitter, and excellent effort.
3. 15 video receivers distributed over router 2 and router 3 as follows: 10 video receivers are connected to router 2 and 5 are connected to router 3.
4. The links between the workstations (video transmitters and receivers) are 1 Mbps; the links between the routers are 2 Mbps.
5. For internet simulation, the routers are connected via an IP cloud.

Fig. 4 Simulation Model Infrastructure

5.2 General Notes and Network Parameters
1. The data link rate and queue size for each queue scheme are fixed.
2. The multimedia traffic is considered MPEG with all its characteristics.
3. The small queue size did not affect the queue delay.
4. An inadequate data link rate causes more packet drops and too much consumed bandwidth.
5. Data link rates are fixed at 2 Mbps between the routers.
6. For the FIFO queue scheme, the queue size was fixed at 400 packets.
7. The traffic pattern (continuous or discrete), the protocol (TCP or UDP), and the application (prioritized or not) are considered input parameters.
8. The output parameters are calculated with respect to RSVP, DiffServ, MPLS, and our proposed technique.
9. It is supposed that the number of multimedia packets increases with simulation time.
10. The simulation of the old system can be found in [11], [12].


B. Simulation Results
In our simulation, the parameters of multimedia and network QoS are scaled. The curves below contain a comparison between the old system (RSVP, DiffServ, and MPLS) and the new proposed system.

• End-to-End Delay
One of the key requirements of high-speed packet-switching networks is to reduce the end-to-end delay in order to satisfy real-time delivery constraints and to achieve the necessary high nodal throughput for the transport of voice and video [13]. Figure 5 displays the end-to-end delay that may result from our computations, component messages and buffer sizes. It is clear that our system's computations did not affect the delay time, because the computations are done during the multimedia transmission even when a path failure is detected. The old system, by contrast, uses the rerouting technique when it finds a failure at any path station; the rerouting operations load the old system with more computations, which increases the time delay. In addition, our proposed system uses the cumulative-message technique in case of network overflow.

Fig. 5 End-to-End Delay

• Packet Loss
This metric demonstrates the packet loss that occurred in the proposed system and the old system. The diagram in figure 6 shows the packet loss versus the time unit (it is supposed that the network load increases with time). It is obvious that the packet loss in our system is decreased compared to the old system. This decrease is justified as follows: an increase in the network load means an increase in the number of network hosts, which requires services with different qualities. When the number of services and resources increases, the old system's efficiency decreases and hence the packet loss increases. Unlike the old system, our system uses the detector, the connector, and the analyzer to handle a failure occurring in the old system before the multimedia packets are affected, and this improves its efficiency. The packet loss of the two systems is approximately equal, especially before the middle of the simulation time. The notable packet loss in our system comes from making the analyzer component inactive; the system's fault tolerance will be discussed in the future work.

Fig. 6 Packet Loss

• Packet Loss in Case of Compound Services
This metric scales the efficiency of our system with regard to the complete reservation of the resources required by a compound service, where a compound service is a service that needs other one(s) to be reserved (a dependent service). The curve in figure 7 shows the number of lost bits versus generic times. It is notable that the efficiency of our system in compound service reservation is better than that of the old one. This indicates that the old system has a delay in dealing with the required compound services, which causes the loss of a huge number of bits, especially at the start of the simulation time.


Fig. 7 Packet Loss in Case of Compound Services

• Re-Routing State
To meet high throughput requirements, paths are selected, resources are reserved, and paths are recovered in case of failure. This metric should be scaled to make sure that our new system has the ability to find a new path when a failure occurs. The curve in figure 8 shows the number of recovered paths versus simulation time for the new system and the old one.

Fig. 8 Re-Routing State

• Reservation Success Rate
This metric scales the efficiency of the proposed system with regard to resource reservation. The diagram in figure 9 shows the success reservation rates per time unit. It is observed that the reservation success rate in our system exceeds that of the old system. This increase is due to the efficiency of the detector in detecting a fault at any resource before it is used, in addition to the efficiency of the connector in finding and handling the alternative solution. The difference between the two systems is notable at the second hour of simulation time.

Fig. 9 Reservation Success Rate

• Utilization
This metric scales the efficiency of our system's additive components (the connector, the analyzer, and the detector). The efficiency of the connector is scaled by the number of successful connections in relation to the number of stations that cannot provide their QoS. The efficiency of the analyzer is scaled by the number of successful QoS request extractions in relation to the number of its connections with the connector. The efficiency of the detector is scaled by the number of failed points detected in relation to the number of failed points in the new system during the simulation time. For accuracy, the efficiency of all the components is scaled under different network loads. Figure 10 shows the average efficiency of the three system components compared with the old system's efficiency, where the old system's efficiency is calculated as the percentage of the services that are correctly reserved with the same required quality.


Fig. 10 Utilization (System Efficiency)

• Delay Jitter
This metric is introduced to make sure that the additive components did not affect the delay jitter of the multimedia packets; the delay jitter of multimedia streams is a very important QoS parameter. The plot in figure 11 describes the relation between the delay jitter and the first 1500 packets sent by the new system. It is obvious that the new system's delay-jitter curve is below the old system's curve for most of the simulation time. So, the additive components operate in harmony without affecting the delay jitter of the multimedia packets.

Fig. 11 Delay Jitter
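Delay jitter itself can be estimated in several ways; one standard formulation is the smoothed interarrival-jitter estimator of RFC 3550, sketched below. This is offered as a reference formulation, not necessarily the estimator used by the OPNET model.

def interarrival_jitter(send_times, recv_times):
    """Smoothed interarrival jitter (RFC 3550 style): J += (|D| - J) / 16,
    where D is the change in one-way transit time between consecutive
    packets; times are per-packet timestamps in seconds."""
    jitter, prev_transit = 0.0, None
    for s, r in zip(send_times, recv_times):
        transit = r - s
        if prev_transit is not None:
            jitter += (abs(transit - prev_transit) - jitter) / 16.0
        prev_transit = transit
    return jitter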

6. CONCLUSION

In this paper, a brief analysis of RSVP, DiffServ, and MPLS was demonstrated, and the QoS problems that may occur during multimedia transmission were discussed. A new system to solve the QoS problem was introduced; the proposed system adds three new additive components, called the connector, the analyzer, and the detector, over the old RSVP system to accomplish its target. A simulated environment was constructed and implemented to study the proposed system's performance, using the network simulator OPNET 11.5. Finally, detailed comments were given to clarify the extracted simulation results. The test-bed experiments showed that our proposed system increases the efficiency of the old system by approximately 40%.

7. FUTURE WORK

To complete our system's efficiency, the fault tolerance problem should be addressed: what will be done if one of the system components fails? In our simulation, we faced this problem in the packet loss diagram; hence, we should find an alternative component (software or hardware) to replace the failed one and solve this problem. The suggested solution is to use multi-agent technology instead of one agent; consequently, we will simulate the multi-agent QoS system and show the results. We will also apply the proposed system to different types of multimedia data. This will move our system toward standardization; hence, we can transform the proposed system into a new application-layer protocol used for solving multimedia QoS problems.

ACKNOWLEDGMENT

The authors would like to convey thanks to Taif University for providing the financial means and facilities. This paper is extracted from the research project number 1/430/366, funded by the deanship of scientific research at Taif University.

REFERENCES

[1] K. Rao, Z. S. Bojkovic, and D. A. Milovanovic, Multimedia Communication Systems: Techniques, Standards, and Networks, Prentice-Hall Inc., Upper Saddle River, NJ, 2003.
[2] G. Malkin, R. Minnear, RIPng for IPv6, Request For Comments (RFC) 2080, January 1997.
[3] R. Guerin, S. Kamat, S. Herzog, QoS Path Management with RSVP, Internet Engineering Task Force (IETF) Draft, March 20, 1997.
[4] Marcos Postigo-Bois and Jose L. Melus, "Performance Evaluation of RSVP Extension for a Guaranteed Delivery Scenario", Computer Communication, Volume 30, Issue 9, June 2007.
[5] B. Davie and Y. Rekhter, MPLS Technology and Applications, Morgan Kaufmann, San Francisco, CA, 2000.
[6] S. Blake et al., An Architecture for Differentiated Services, Request For Comments (RFC) 2475, December 1998.
[7] Daniel Zappala, Bob Braden, Deborah Estrin, Scott Shenker, Interdomain Multicast Routing Support for Integrated Services Networks, Internet Engineering Task Force (IETF) Internet-Draft, March 26, 1997.
[8] Y. Rekhter, C. Topolcic, Exchanging Routing Information Across Provider Boundaries in the CIDR Environment, Request For Comments (RFC) 1520, September 1993.
[9] http://www.opnet.com/
[10] Abhay Agnihotri, "Study and Simulation of QoS for Multimedia Traffic", Master's project, 2003. http://www3.uta.edu/faculty/reyes/teaching/software/OPNET_Modeler/AgnihotriMasterProject.ppt
[11] M. Pullen, R. Malghan, L. Lavu, G. Duan, J. Ma, J. Ma, A Simulation Model for IP Multicast with RSVP, Request For Comments (RFC) 2490, January 1999.
[12] Raymond Law and Srihari Raghavan, "DiffServ and MPLS – Concepts and Simulation", 2003. http://nmg.upc.es/intranet/qos/8/8.7/8.7.2.pdf
[13] System for coding voice signals to optimize bandwidth occupation in high speed packet switching networks, US Patent 6,104,998, 2000. http://www.patentstorm.us/patents/6104998/fulltext.html

Appendix A

Assumptions:
Symbol: Description
SW: Detector visited station address.
SC: Connector visited station address.
TR: Time spent to reach any station.
TC: Connector visiting time.
I, J: Counters.
H, K: Two used variables.
PFS: Failed station position.
N: Number of stations in the old path.
M: Number of stations in the new path.
Old[ ]: Array used to keep the old path stations' addresses.
New[ ]: Array used to keep the new path stations' addresses.
Same[ ]: Array used to keep the similar stations found in the two paths.
Diff1[ ]: Array used to keep the different stations found in the old path.
Diff2[ ]: Array used to keep the different stations found in the new path.

Dr. Omar Said is currently an assistant professor in the Dept. of Computer Science, Taif University, Taif, KSA. He received his Ph.D. degree from Menoufia University, Egypt. He has published many papers in international journals and conferences. His research areas are Computer Networking, Internet Protocols, and Multimedia Communication.

Prof. Sayed F. Bahgat is a Professor in the Department of Scientific Computing, Faculty of Computer and Information Sciences, Ain Shams University, Cairo, Egypt. He received his Ph.D. from the Illinois Institute of Technology, Chicago, Illinois, U.S.A., in 1989. From 2003 to 2006, he was the head of the Scientific Computing Department, Ain Shams University, Cairo, Egypt. From 2006 to 2009 he was the head of the Computer Science Department, Taif University, KSA. He is now a professor in the Computer Science Department, Taif University, KSA. Dr. Sayed F. Bahgat has written over 39 research articles and supervised over 19 M.Sc. and Ph.D. theses. His research focuses on computer architecture and organization, computer vision and robotics.

Prof. Said Ghoniemy is a Professor in the Department of Computer Systems, Faculty of Computer and Information Sciences, Ain Shams University, Cairo, Egypt. He received his Ph.D. from the Institut National Polytechnique de Toulouse, Toulouse, France, in 1982. From 1996 to 2005, he was the head of the Computer Systems Department and director of the Information Technology Research and Consultancy Center (ITRCC), Ain Shams University, Cairo, Egypt. From 2005 to 2007 he was the vice-dean for post-graduate affairs, Faculty of Computer and Information Sciences, Ain Shams University. He is now a professor in the Computer Engineering Department, Taif University, KSA. Dr. Ghoniemy has written over 60 research articles and supervised over 40 M.Sc. and Ph.D. theses. His research focuses on computer architecture and organization, computer vision and robotics.

Eng. Yasser Elawady is a Lecturer in the Department of Computer Engineering, Faculty of Computers and Information Systems, Taif University, Taif, KSA. He received his M.Sc. from the Department of Computer Engineering, Faculty of Engineering, Mansoura University, Mansoura, Egypt, in 2003. His subjects of interest include Multimedia Communication, Remote Access and Networking.


Hierarchical Approach for Key Management in Mobile Ad hoc Networks

Renuka A.
Dept. of Computer Science and Engg., Manipal Institute of Technology, Manipal-576104, India
[email protected]

Dr. K. C. Shet
Dept. of Computer Engg., National Institute of Technology Karnataka, Surathkal, P.O. Srinivasanagar-575025
[email protected]


Abstract—Mobile Ad-hoc Network (MANET) is a collection of autonomous nodes or terminals which communicate with each other by forming a multi-hop radio network and maintaining connectivity in a decentralized manner. The conventional security solutions that provide key management through trusted authorities or centralized servers are infeasible for this new environment, since mobile ad hoc networks are characterized by the absence of any infrastructure, frequent mobility, and wireless links. We propose a hierarchical group key management scheme that is fully distributed, with no central authority, and uses a simple rekeying procedure suitable for large and high-mobility mobile ad hoc networks. The rekeying procedure requires only one round in our scheme and in Chinese Remainder Theorem Diffie-Hellman; in Group Diffie-Hellman and Burmester-Desmedt it is a constant 3, whereas in other schemes, such as Distributed Logical Key Hierarchy and Distributed One-Way Function Trees, it depends on the number of members. We reduce the energy consumed in communicating the keying materials by reducing the number of bits in the rekeying message. We show through analysis and simulations that our scheme has less computation, communication and energy consumption compared to the existing schemes.

Keywords- mobile ad hoc network; key management; rekeying.

I. INTRODUCTION

A mobile ad hoc network (MANET) is a collection of autonomous nodes that communicate with each other, most frequently using a multi-hop wireless network. Nodes do not necessarily know each other and come together to form an ad hoc group for some specific purpose. Key distribution systems usually require a trusted third party that acts as a mediator between nodes of the network. Ad hoc networks typically do not have an online trusted authority, but there may be an offline one that is used during system initialization. Group key establishment means that multiple parties want to create a common secret to be used to exchange information securely. Without relying on a central trusted entity, two people who do not previously share a common secret can create one based on the two-party Diffie-Hellman (DH) protocol; the 2-party Diffie-Hellman protocol can be extended to a generalized version of n-party DH. Furthermore, group key management also needs to address the security issues related to membership changes. A modification of membership requires refreshment of the group key; this can be done either by periodic rekeying or by updating right after a member change. The change of group key ensures backward and forward security. With frequently changing group memberships, recent research has begun to pay more attention to the efficiency of group key updates. Recently, collaborative and group-oriented applications in MANETs have been an active research area. Obviously, group key management is a central building block in securing group communications in MANETs. However, group key management for large and dynamic groups in MANETs is a difficult problem because of the requirement of scalability and security under the restrictions of nodes' available resources and unpredictable mobility.
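For concreteness, a toy two-party Diffie-Hellman exchange of this kind can be written in a few lines of Python. The prime and base below are illustrative only (a Mersenne prime and an arbitrary generator); they are not parameters taken from the paper, and real deployments use standardized 2048-bit groups or elliptic curves.

import secrets

p, g = 2**127 - 1, 5                      # toy group parameters

a = secrets.randbelow(p - 2) + 1          # Alice's secret exponent
b = secrets.randbelow(p - 2) + 1          # Bob's secret exponent
A, B = pow(g, a, p), pow(g, b, p)         # public values, exchanged in the clear

shared_alice = pow(B, a, p)               # g^(ab) mod p
shared_bob = pow(A, b, p)
assert shared_alice == shared_bob         # both sides derive the same secret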

We propose a distributed group key management approach wherein there is no central authority and the users themselves arrive at a group key through simple computations. In large and high-mobility mobile ad hoc networks, it is not possible to use a single group key for the entire network because of the enormous cost of computation and communication in rekeying. So, we logically divide the entire network into a number of groups, each headed by a group leader, and each group is divided into subgroups, called clusters, each headed by a cluster head. Though the terms group leader and cluster head are used, these nodes are no different from the other nodes, except for playing the assigned roles during the initialization phase and during inter-group and inter-cluster communication. After the initialization phase, any member within a cluster can initiate the rekeying process, and the burden on the cluster head is reduced. The transmission power and memory of the cluster heads and the group leaders are the same as those of the other members. The members within a cluster communicate with the help of a group key. Inter-cluster communication takes place with the help of gateway nodes if the nodes are in adjacent clusters, and through the cluster heads if they are in far-off clusters. Inter-group communication is routed through the group leaders. Each member also carries a public-key/private-key pair used to encrypt the rekeying messages exchanged; this ensures that forward secrecy is preserved.

The rest of the paper is organized as follows. Section II focuses on the related work in this field. The proposed scheme is presented in Section III. A performance analysis of the scheme is discussed in Section IV. Experimental results and the conclusion are given in Section V and Section VI, respectively.

II. RELATED WORK

Key management is a basic part of any secure communication; most cryptosystems rely on some underlying secure, robust, and efficient key management system. Group key establishment means that multiple parties want to create a common secret to be used to exchange information securely. Secure group communication (SGC) is defined as the process by which members in a group can securely communicate with each other, with the information being shared remaining inaccessible to anybody outside the group. In such a scenario, a group key is established among all the participating members, and this key is used to encrypt all the messages destined to the group; as a result, only the group members can decrypt the messages. Group key management protocols are typically classified into four categories: centralized group key distribution (CGKD), decentralized group key management (DGKM), distributed/contributory group key agreement (CGKA), and distributed group key distribution (DGKD). In CGKD, there exists a central entity (a group controller (GC)) which is responsible for generating, distributing, and updating the group key. The most famous CGKD scheme is the key tree scheme (also called Logical Key Hierarchy (LKH)), proposed in [1], which is based on the tree

http://sites.google.com/site/ijcsis/ ISSN 1947-5500

(IJCSIS) International Journal of Computer Science and Information Security, Vol. 5, No. 1, 2009

networks. Moreover the computational burden is high since it involves a lot of exponentiations.

structure with each user (group participant) corresponding to a leaf and the group initiator as the root node. The tree structure significantly reduces the number of broadcast messages and storage space for both the group controller and group members. Each leaf node shares a pairwise key with the root node as well as a set of intermediate keys from it to the root. One Way Function (OFT) is another centralized group key management scheme proposed in [2].similar to LKH. However, all keys in the OFT scheme are functionally related according to a one-way hash function

Another approach using a logical key hierarchy in a distributed fashion, called Distributed One-Way Function Tree (D-OFT), was proposed in [13]. This protocol uses the one-way function tree: a member is responsible for generating its own key and sending the blinded version of this key to its sibling. Reference [14] also uses a logical key hierarchy, called Diffie-Hellman Logical Key Hierarchy, to minimize the number of keys held by group members. The difference here is that group members generate the keys in the upper levels using the Diffie-Hellman algorithm rather than a one-way function. In Chinese Remainder Theorem Diffie-Hellman (CRTDH) [15], each member computes the group key as the XOR of certain computed values; this requires that the members agree on two large primes. CRTDH is impractical in terms of efficiency and security: it has low efficiency, it can possibly yield a small key, and members may possess the same Least Common Multiple (LCM). This CRTDH scheme was modified in [16], where the evaluation of the LCM was eliminated and other steps were modified slightly, so that a large value for the key is obtained. In both of these methods, whenever membership changes occur, the new group key is derived from the old group key as the XOR of the old group key and a value derived from the Chinese Remainder Theorem values broadcast by one of the members. Since it is possible for the leaving member to obtain this message, and hence deduce the new group key, forward secrecy is not preserved.
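To make the CRT step concrete, the following sketch shows a textbook Chinese Remainder Theorem computation of the kind such schemes broadcast, with the new key derived by XOR as described above. The residues, moduli and key value here are illustrative choices of ours, not values taken from CRTDH itself:

    from math import prod

    def crt(residues, moduli):
        # solve x = r_i (mod m_i) for pairwise-coprime moduli m_i
        M = prod(moduli)
        x = 0
        for r, m in zip(residues, moduli):
            Mi = M // m
            x += r * Mi * pow(Mi, -1, m)   # pow(Mi, -1, m): modular inverse
        return x % M

    # every member recovers the same value v from the broadcast CRT data
    v = crt([3, 4, 2], [5, 7, 9])          # v = 263 satisfies all three congruences
    new_key = 0xA5A5A5A5 ^ v               # new key = old key XOR v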

The DGKM approach involves splitting a large group into small subgroups. Each subgroup has a subgroup controller which is responsible for the key management of its subgroup. The first DGKM scheme to appear was IOLUS [3]. The CGKA schemes involve participation by all members of a group in key management. Such schemes are characterized by the absence of the GC; the group key in such schemes is a function of the secret shares contributed by the members. Typical CGKA schemes include binary-tree-based ones [4] and n-party Diffie-Hellman key agreement [5, 6]. Tree-based Group Diffie-Hellman (TGDH) is a group key management scheme proposed in [4]; the basic idea is to combine the efficiency of the tree structure with the contributory feature of DH. The DGKD scheme, proposed in [7], eliminates the need for a trusted central authority and introduces the concepts of sponsors and co-distributors. All group members have the same capability and are equally trusted. They also have equal responsibility: any group member could be a potential sponsor of other members or a co-distributor. Whenever a member joins or leaves the group, the member's sponsor initiates the rekeying process. The sponsor generates the necessary keys and securely distributes them to the co-distributors; the co-distributors then distribute, in parallel, the corresponding keys to the corresponding members. In addition to the above four typical classes of key management schemes, there are other forms, such as hierarchy- and cluster-based ones [6, 8]. A contributory group key agreement scheme is most appropriate for SGC in this kind of environment.

In this paper, we propose a distributed approach in which members contribute to the generation of the group key by sending the hash of a random number during the initialization phase within the cluster. They regenerate the group key themselves by obtaining the rekeying message from one of the members during the rekeying phase or whenever membership changes occur. Within a group, the key used for communication among the cluster heads is generated by the group leader and transmitted securely to the other cluster heads. The same procedure is used to agree on a common key among the group leaders, wherein the network head generates the key and passes it on to the other group leaders. Symmetric-key cryptography is used for communication between the members of a cluster, and asymmetric-key cryptography is used for distributing the rekeying messages to the members of the cluster.

Several group key management schemes have been proposed for SGC in wireless networks [9, 10]. In the Simple and Efficient Group Key (SEGK) management scheme for MANETs proposed in [11], group members compute the group key in a distributed manner. A new approach called BALADE was developed in [12]; it is based on a sequential multi-source model and takes into account both the localization and the mobility of nodes, while optimizing energy and bandwidth consumption. Most of these schemes involve complex operations, which makes them unsuitable for large and high-mobility networks. In Group Diffie-Hellman, the group agrees on a pair of primes and starts calculating the intermediate values in a distributed fashion. The setup time is linear, since all members must contribute to generating the group key, and the size of the message increases as the sequence reaches the last members, because more intermediate values are necessary. With that, the number of exponential operations also increases. Therefore this method is not suitable for large networks. Moreover, the computational burden is high, since it involves a lot of exponentiations.

III. PROPOSED SCHEME

A. System model
The entire set of nodes is divided into a number of groups, and the nodes within a group are further subdivided into subsets called clusters. Each group is headed by a group leader and each cluster by a cluster head. The layout of the network is shown in Fig. 1. One of the nodes in each cluster acts as the cluster head; a set of eight such clusters forms a group, and each group is headed by a group leader. The cluster head is otherwise similar to the other nodes in the network. The nodes within a cluster are also physical neighbors. The nodes within a cluster use contributory key agreement: each node contributes its share in arriving at the group key. Whenever membership changes occur, the adjacent node initiates the rekeying operation, thereby reducing the burden on the cluster head. The group leader chooses a random key to be used for encrypting messages exchanged between the cluster heads, and the network head sends the group leaders the key used for communication among them. The hierarchical arrangement of the network is shown in Fig. 2.

Figure 1. Network Layout

The key management system consists of two phases: (i) Initialization and (ii) Group Key Agreement.

Figure 2. Hierarchical layout

B. Initialization
Step 1: After deployment, the nodes broadcast their id values to their neighbors along with a HELLO message.
Step 2: When all the nodes have discovered their neighbors, they exchange information about their numbers of one-hop neighbors. The node which has the maximum number of one-hop neighbors is selected as the cluster head; the other nodes become members of the cluster, or local nodes. The nodes update their status values accordingly.
Step 3: The cluster head broadcasts the message "I am cluster head" so as to discover its members.
Step 4: The members reply with the message "I am member", and in this way clusters are formed in the network.
Step 5: If a node receives more than one "I am cluster head" message, it becomes a gateway, which acts as a mediator between two clusters.
In this manner clusters are formed in the network. The cluster heads broadcast the message "Are there any cluster heads?" so as to discover each other. The cluster head with the smallest id is selected as the leader of the cluster heads; it is the representative of the group, called the group leader. The group leaders establish communication with other group leaders in a similar manner, and one among them is selected as the leader for the entire network. The entire network is hierarchical in nature, and the following hierarchy is observed: network → group → cluster → cluster members.

C. Group Key Agreement within a cluster
Step 1: Each member broadcasts its public key and its id to all other members of the cluster, along with a certificate for authentication.
Step 2: The members of the cluster generate the group key in a distributed manner. Each member generates a random number and sends the hash of this number to the other members, encrypted with the public keys of the individual members, so that the remaining members can decrypt the message with their respective private keys.
Step 3: Each member concatenates the hash values received from the members in ascending order of their ids and applies a one-way hash function to the concatenated string. This is the group key used for that cluster. Let HRi be the hash of the random number generated by node i and let GK denote the group key; then GK = f(HR1, HR2, HR3, ..., HRn), where HRi = hash(random number of node i), f is a one-way function, and hash is a secure hash function such as SHA-1. All the members now possess a copy of the same key, since the same operations are performed by all the nodes.

D. Inter-cluster group key agreement
The gateway node initiates communication with the neighboring node belonging to another cluster, and the two mutually agree on a key to be used for inter-cluster communication between the two clusters. Any node belonging to one cluster can communicate with any other node in another cluster through this node as the intermediary. In this way adjacent clusters agree on a group key. A set of eight clusters forms a group, and the cluster heads of these clusters mutually agree on a group key to be used for communication among the cluster heads within a group, in a similar manner.


This key is different from the key used within the cluster. Going one level above in the hierarchy, a number of groups can be combined under a group leader, and the group leaders agree on a group key to be used for communication among themselves, which aids in inter-group communication.

Even though we have considered that the network is divided into eight groups, with each group consisting of eight clusters and each cluster consisting of eight members, these numbers need not be constant. They may vary, and this does not change the manner in which the group key is derived; the assumption is made only so that the network has a hierarchical appearance in the form of a fractal tree.
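As a minimal sketch of the intra-cluster agreement of Section III-C, assuming SHA-1 serves as both the member hash and the one-way function f (the text names SHA-1 only as an example):

    import hashlib
    import os

    def contribution() -> bytes:
        # Step 2: a member draws a random number and publishes HR_i = hash(random)
        return hashlib.sha1(os.urandom(16)).digest()

    def group_key(contribs_by_id: dict) -> bytes:
        # Step 3: GK = f(HR_1, ..., HR_n); concatenate the received hashes in
        # ascending order of member id and apply a one-way hash to the string
        ordered = b"".join(contribs_by_id[i] for i in sorted(contribs_by_id))
        return hashlib.sha1(ordered).digest()

    # an 8-member cluster: every member holds the same {id: HR_i} map, so each
    # independently derives the identical group key
    contribs = {i: contribution() for i in range(1, 9)}
    assert group_key(contribs) == group_key(dict(reversed(list(contribs.items()))))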

E. Network Dynamics
The mobile ad hoc network is dynamic in nature: many nodes may join or leave the network. In such cases, a good key management system should ensure that backward and forward secrecy are preserved.

1) Member join: When a new member joins, it initiates communication with the neighbouring node. After initial authentication, this node initiates the rekeying operations for generating a new key for the cluster. The rekeying operation is as follows:

new node → adjacent node : {authentication}
adjacent node → new node : {acknowledge}
adjacent node → all nodes : {rekeying message}K(old cluster key)

The neighboring node broadcasts two random numbers that are mixed together using a hashing function; the result is inserted at a random position in the old group key, the position being specified by the first random number. The two random numbers are sent in a single message, so that a transmission loss cannot result in a wrong key being generated. Let the two bit strings be

I Random no. = 00100010
II Random no. = 10110111

Suppose the result of the mixing function is 11010110 and the previous group key is

100101000101010100011100001111000001100010000001

The new group key is then

10010100010101010001110000111100000110011010110010000001

Since all members know the old group key, they can compute the new group key. This new group key is transmitted to the new member by the adjacent node in a secure manner.

2) Member Leave
a) When a cluster member leaves: The group key of the cluster to which the member belongs must be changed, in a manner similar to that described above. The leaving member informs the neighboring node, which in turn informs the other nodes about the leaving member. It also generates two random numbers and sends them securely to the other members, which regenerate the group key.

leaving node → adjacent node : {leaving message}
adjacent node → leaving node : {acknowledge}
adjacent node → each node i : {rekeying message}pki

b) When a gateway node leaves: When a gateway node leaves the network, it delegates the role of the gateway to the adjacent node. In this case, the group keys of both clusters with which this node is associated need to be changed. When the gateway node merely moves into one of the clusters, only the group key of the other cluster has to be changed.

leaving gateway node → adjacent node : {leaving message + other messages for delegating its role}
adjacent node → leaving gateway node : {acknowledge}
adjacent node → each node i in cluster1 : {rekeying message}pki
adjacent node → each node j in cluster2 : {rekeying message}pkj

c) When the cluster head leaves: When the cluster head leaves, the group key used for communication among the cluster heads needs to be changed, and the group key used within the cluster has to be changed as well. The cluster head informs the adjacent cluster head about its desire to leave the network, which initiates the rekeying procedure. The adjacent cluster head generates two random numbers and sends them to the other cluster heads in a secure manner.

leaving cluster head → adjacent cluster head : {leaving message + other messages for delegating its role}
adjacent cluster head → leaving cluster head : {acknowledge}
adjacent cluster head → each cluster head i : {rekeying message}pki

The group key of the cluster heads is obtained by taking the product of the two random numbers, inserting it at the position indicated by the first number, and removing from the old group key a number of initial bits equal to the number of bits in the product. Suppose

I Random no. = 00101101
II Random no. = 00111111


The product of the two numbers is

0000010101000110

Suppose the old group key is

10010100010101010001110000111100000110001000000100110

The new group key is

00011100001111000001100010000000010101000110000100110

Thus the cluster heads compute the group key after the rekeying operation; this is the new group key for the cluster heads within a group. The group key used for intra-cluster communication in that particular cluster also needs to be changed, in the manner described above for rekeying within the cluster.

d) When the group leader leaves: Whenever the group leader leaves, all three keys should be changed: (i) the group key among the group leaders, (ii) the group key among the cluster heads, and (iii) the group key within the cluster.

leaving group leader → adjacent group leader : {leaving message + other messages for delegating its role}
adjacent group leader → leaving group leader : {acknowledge}
adjacent group leader → each group leader i : {rekeying message}pki

leaving group leader → adjacent clusterhead : {leaving message + other messages for delegating its role}
adjacent clusterhead → leaving group leader : {acknowledge}
adjacent node → each cluster head i : {rekeying message}pki

leaving node → adjacent node : {leaving message}
adjacent node → leaving node : {acknowledge}
adjacent node → each node i in that cluster : {rekeying message}pki

The first two group keys are changed in the manners described above. To change the group key of the group leaders, the leaving group leader delegates the role of the group leader to another cluster head in the same group and informs the other group leaders about this change. The adjacent group leader initiates the rekeying operation: it generates two random numbers and sends them to the other group leaders. The group leaders divide the old group key into blocks whose size equals the number of bits in the random number, perform the exclusive OR of the random number with each block of the old group key, and concatenate the results to arrive at the new group key. Suppose the random number is 00100010 and the old group key is

10010100010101010001110000111100000110001000000100110101

Dividing the group key into 8-bit blocks (the size of the random number), we get

10010100 01010101 00011100 00111100 00011000 10000001 00110101

Performing the XOR operation and concatenating as shown,

00100010 XOR 10010100 || 00100010 XOR 01010101 || 00100010 XOR 00011100 || 00100010 XOR 00111100 || 00100010 XOR 00011000 || 00100010 XOR 10000001 || 00100010 XOR 00110101

the following group key is obtained:

10110110011101110011111000011110001110101010001100010111

This is the new group key of the group leaders.

F. Communication Protocol
The nodes within a cluster communicate using the intra-cluster group key. Communication between nodes in different clusters takes place through the gateway node if the clusters are adjacent, and through the cluster heads if the clusters are far apart.

For adjacent clusters:
Source node --GKCL1--> Gateway node --GKCL2--> Destination node

For nodes in far-off clusters:
Source node --GKCL1--> Cluster head1 --GKCH--> Cluster head2 --GKCL2--> Destination node

Inter-group communication passes through the corresponding cluster heads and the group leaders:
Source node --GKCL1--> Cluster head1 --GKCH1--> Group Leader1 --GKGR--> Group Leader2 --GKCH2--> Cluster head2 --GKCL2--> Destination node
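The bit-string operations used by the rekeying variants of Section III-E can be sketched as follows. The mixing function and the exact insertion-position convention are left open by the text, so mix() and the position computations below are our illustrative assumptions; the group-leader XOR rule, by contrast, reproduces the worked example above exactly:

    import hashlib

    def mix(r1: str, r2: str) -> str:
        # stand-in for the unspecified mixing function (member join)
        return format(hashlib.sha1((r1 + r2).encode()).digest()[0], "08b")

    def rekey_join(old_key: str, r1: str, r2: str) -> str:
        # insert mix(r1, r2) at the position given by the first random number
        pos = int(r1, 2) % len(old_key)            # assumed position convention
        return old_key[:pos] + mix(r1, r2) + old_key[pos:]

    def rekey_cluster_heads(old_key: str, r1: str, r2: str) -> str:
        # insert the product of the two random numbers at the position given by
        # r1, then drop as many leading bits as the product has
        prod = format(int(r1, 2) * int(r2, 2), "016b")
        pos = int(r1, 2) % len(old_key)            # assumed position convention
        key = old_key[:pos] + prod + old_key[pos:]
        return key[len(prod):]

    def rekey_group_leaders(old_key: str, r: str) -> str:
        # XOR each |r|-bit block of the old key with r and concatenate
        n = len(r)
        blocks = (old_key[i:i + n] for i in range(0, len(old_key), n))
        return "".join(format(int(b, 2) ^ int(r, 2), f"0{n}b") for b in blocks)

    old = "10010100010101010001110000111100000110001000000100110101"
    assert rekey_group_leaders(old, "00100010") == \
        "10110110011101110011111000011110001110101010001100010111"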


IV. PERFORMANCE ANALYSIS

A. Cost Analysis
We compute the communication cost of our scheme under various situations and for different network organizations, and we compare the communication cost of rekeying for various schemes. Some schemes, such as Group Diffie-Hellman, use a 1024-bit message for rekeying, whereas our scheme uses a 32-bit message, so the energy required for rekeying is much lower; this is very important in energy-constrained mobile ad hoc networks. Let us denote:

N = network size
M = group size
P = cluster size
G = number of groups
CH = cluster head
CL = cluster member
GL = group leader

1) Member joins: When a new member joins, the public key of the new member is broadcast to all old members encrypted with the old group key. If the average number of members in a cluster is P, two 16-bit numbers, i.e. a message of 32 bits, is transmitted to all the existing members encrypted with the old key. This requires one round and one broadcast message. The group keys of the other clusters need not be changed.
2) Member leaves: When a node leaves, there are four cases: (i) a cluster member leaves, (ii) the cluster head leaves, (iii) the gateway node leaves, and (iv) the group leader leaves.
a) When a cluster member leaves: The random numbers are encrypted with the existing members' respective public keys and unicast to them. This requires one round and P-1 unicast messages.
b) When the cluster head leaves: The rekeying is similar to a member leave within the cluster, i.e. P-1 unicast messages, plus M-1 messages among the cluster heads for changing the cluster head key.
c) When the gateway node leaves: The group keys of both clusters with which it is associated have to be changed. This requires one round and P-1 unicast messages in each cluster, i.e. a total of 2(P-1) messages.
d) When the group leader leaves: The group key of the group leaders, the group key of the cluster heads and the cluster key of the affected cluster all need to be changed. This requires one round, with G-1 unicast messages among the group leaders, M-1 unicast messages among the cluster heads and P-1 messages within the cluster.

TABLE I. NO. OF REKEYING MESSAGES FOR DIFFERENT NETWORK SIZES
(number of nodes that receive rekeying messages)

Network organization   | Our scheme: CL joins/leaves   | Our scheme: CH leaves | Our scheme: GL leaves | Non-hierarchical scheme
N=256 M=8 P=16 G=2     | join 15 (broadcast), leave 15 | 32                    | 34                    | 256
N=256 M=4 P=16 G=4     | join 16, leave 15             | 32                    | 36                    | 256
N=256 M=4 P=8  G=8     | join 8, leave 7               | 40                    | 44                    | 256
N=256 M=4 P=4  G=16    | join 4, leave 3               | 68                    | 72                    | 256
N=256 M=2 P=4  G=32    | join 4, leave 3               | 68                    | 70                    | 256

Table II gives the communication cost of rekeying for various schemes. In our scheme, the entire network is divided into a number of groups, each of which is in turn divided into a number of clusters, wherein each cluster consists of P members. When a member leaves, in a non-hierarchical scheme the key of the entire network needs to be changed; in the hierarchical scheme it suffices to change the group key of the cluster to which the member belongs. The hierarchical scheme thus reduces the number of rekeying messages transmitted, as shown in Table I. Communication between far-off nodes (nodes in different groups) has to undergo 5 encryptions and decryptions, whereas in non-hierarchical schemes it is only one; in very large networks this is tolerable compared to the enormous number of rekeying messages that would otherwise need to be transmitted whenever membership changes occur. From Table II we observe that the rekeying procedure requires only one round in our scheme, in CRTDH and in modified CRTDH; in GDH and BD it is a constant 3, whereas in other schemes, such as D-LKH and D-OFT, it depends on the number of members. Regarding the number of messages sent, the BD method involves 2N broadcast messages and no unicast messages, whereas in our technique the number of unicast messages is N-1. We also observe that CRTDH has the least communication cost among all the methods, but it does not provide forward secrecy, because the rekeying message is broadcast and even the leaving member can derive the new group key. Moreover, in our scheme the rekeying message is only 32 bits wide, and thus the communication overhead is greatly reduced.
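The unicast message counts above, as summarized for our scheme in Table II below, can be tabulated directly; the function name and event labels in this sketch are ours:

    def rekey_unicasts(event: str, P: int, M: int, G: int) -> int:
        # unicast rekeying-message counts of the proposed scheme (Table II)
        return {
            "member_join":   0,              # one broadcast, no unicasts
            "member_leave":  P - 1,
            "gateway_leave": 2 * (P - 1),
            "ch_leave":      M + P - 2,      # (M-1) among cluster heads + (P-1) in cluster
            "gl_leave":      G + P + M - 3,  # (G-1) + (M-1) + (P-1)
        }[event]

    print(rekey_unicasts("gl_leave", P=16, M=8, G=2))   # -> 23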


TABLE II. COMMUNICATION COST OF REKEYING

Scheme                                     | No. of rounds | Broadcast messages | Unicast messages
Burmester and Desmedt (BD)                 | 3             | 2N                 | 0
Group Diffie-Hellman (GDH)                 | N             | N                  | N-1
Distributed Logical Key Hierarchy (D-LKH)  | 3             | 1                  | N
Distributed One-Way Function Trees (D-OFT) | Log2 N        | 0                  | 2 Log2 N
CRTDH                                      | 1             | 1                  | 0
Modified CRTDH                             | 1             | 1                  | 0
Our scheme (join)                          | 1             | 1                  | 0
Our scheme (CL leave)                      | 1             | 0                  | P-1
Our scheme (gateway leave)                 | 1             | 0                  | 2(P-1)
Our scheme (CH leave)                      | 1             | 0                  | M+P-2
Our scheme (GL leave)                      | 1             | 0                  | G+P+M-3

Let Exp denote an exponentiation operation, D a decryption operation, OWF a one-way function, X an exclusive-OR operation, CRT the Chinese Remainder Theorem method for solving congruence relations, i the node id, and M the cluster size.

TABLE III. COMPUTATIONAL COMPLEXITY

Scheme                 | During set-up phase              | During rekey (cluster head)  | During rekey (members)
Burmester and Desmedt  | (M+1) Exp                        | -----                        | (M+1) Exp
Group Diffie-Hellman   | (i+1) Exp                        | -----                        | (i+1) Exp
D-LKH                  | Log2(M) Exp                      | Log2(M) D                    | Log2(M) D
D-OFT                  | (Log2 M + 1) Exp                 | -----                        | (Log2 M + 1) Exp
CRTDH                  | LCM(M-1) + (M-1)X + M Exp + CRT  | LCM + X + CRT (leader)       | CRT + X
Modified CRTDH         | (M-1)X + M Exp + CRT             | -----                        | X + CRT
Our scheme             | Sort + OWF                       | Sort + OWF                   | D + OWF; Multiplication (CH leave); XOR (GL leave)

V. EXPERIMENTAL RESULTS

The simulations are performed using the Network Simulator (NS-2.32) [17], which is particularly popular in the ad hoc networking community. The IEEE 802.11 MAC layer protocol is used in all simulations, and the Ad hoc On-demand Distance Vector (AODV) routing protocol is chosen. Every simulation run is 500 seconds long. The simulation is carried out using different numbers of nodes. The simulation parameters are shown in Table IV.

TABLE IV. SIMULATION PARAMETERS

Parameter            | Value
Simulation time      | 1000 sec
Topology size        | 500 m x 500 m
Initial energy       | 100 Joules
Transmitter power    | 0.4 W
Receiver power       | 0.3 W
Node mobility        | max. speed 0 m/s, 5 m/s, 10 m/s, 20 m/s
Routing protocol     | AODV
Traffic type         | CBR, Message
MAC                  | IEEE 802.11
Mobility model       | Random Waypoint
Max. no. of packets  | 10000
Pause time           | 200 sec

The experiment is conducted with different mobility patterns generated using the setdest tool of ns2: stationary nodes located at random positions, and nodes moving to random destinations with speeds varying between 0 and a maximum of 5 m/s, 10 m/s and 20 m/s. The random waypoint mobility model is used, in which a node moves to a randomly selected position with a speed varying between 0 and the maximum speed, pauses for a specified pause time, and then starts moving with the same speed to a new destination. The pause time is set to 200 secs. Different message sizes of 16, 32, 48, 64, 128, 152, 180, 200, 256, 512 and 1024 bits are used. We observed that in all four scenarios the energy consumed by a node increases as the message size increases; this is depicted in Fig. 3. Since the nodes in a mobile ad hoc network communicate in a hop-by-hop manner, the energy consumed is not the same for all nodes, even though the same numbers of messages are sent and received; this is clearly visible from the graphs. From the graph we also observe that the energy consumed is lower for a speed of 10 m/s. This may be due to the fact that the movement brings the nodes closer to each other, which reduces the relaying of messages. The energy shown is inclusive of the energy for forwarding the message at the intermediate nodes.

In the next experiment, we varied the cluster size and observed its effect on the average energy consumed by the nodes for communicating the rekeying messages. In this setup one node sends a message to every other node in the cluster, so for P nodes, P-1 messages are exchanged. This is shown in Fig. 4 for the mobility pattern of max. speed 20 m/s. We observe that the energy consumed by the nodes increases as the network size increases, and this holds across message sizes as well.
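A rough sense of why the 32-bit rekeying message matters for energy can be obtained from the radio parameters in Table IV; note that the channel bit rate is not stated in this setup, so the 2 Mbit/s figure below is purely an assumption of ours:

    TX_POWER_W = 0.4            # transmitter power from Table IV
    BIT_RATE_BPS = 2_000_000    # assumed channel rate (not given in Table IV)

    def tx_energy_joules(message_bits: int, hops: int = 1) -> float:
        # energy to push one message onto the air, per traversed hop
        return TX_POWER_W * message_bits / BIT_RATE_BPS * hops

    # a 1024-bit rekeying message costs 32x the energy of a 32-bit one per hop
    print(tx_energy_joules(1024) / tx_energy_joules(32))   # -> 32.0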


Figure 3. Average energy consumed by the nodes for various message sizes, for a cluster size of 8 nodes

Figure 4. Average energy consumed by the nodes vs. message size for different cluster sizes, with mobility pattern of max. speed 20 m/s

VI. CONCLUSION
We proposed a hierarchical scheme for group key management that does not rely on a centralized authority for regenerating a new group key. Any node can initiate the process of rekeying, so the energy depletion of any one particular node is avoided, unlike in centralized schemes. Our approach satisfies most of the security attributes of a key management system. The communication and computational overhead of our scheme is small compared with other distributed schemes. The energy saving is approximately 41% for 8 nodes and 15% for 200 nodes when the message size is reduced from 1024 to 16 bits. This indicates that a small message size and a small cluster size are most suitable for energy-limited mobile ad hoc networks. A small cluster size increases the overhead of inter-cluster communication, since it needs more encryptions and decryptions, whereas a large cluster size increases the communication cost of rekeying; an optimal value is chosen based on the application. As future work, instead of unicasting the rekeying messages, broadcasting may be done, which would reduce the number of messages sent through the network. Since the leaving member should not have access to this information, doing this in a secure manner is a challenging task.

REFERENCES
[1] Wallner, D.M., Harder, E.J. and Agee, R.C., "Key management for multicast: issues and architectures", Internet Draft, draft-wallner-key-arch-01.txt, 1998.
[2] Sherman, A.T. and McGrew, D.A., "Key establishment in large dynamic groups using one-way function trees", IEEE Transactions on Software Engineering, Vol. 29, No. 5, pp. 444-458, 2003.
[3] S. Mittra, "Iolus: A framework for scalable secure multicasting", Journal of Computer Communication Reviews, 27(4):277-288, 1997.
[4] Y. Kim, A. Perrig, and G. Tsudik, "Tree-based group key agreement", ACM Transactions on Information and System Security, 7(1):60-96, Feb. 2004.
[5] Y. Amir, Y. Kim, C. Nita-Rotaru, J. L. Schultz, J. Stanton, and G. Tsudik, "Secure group communication using robust contributory key agreement", IEEE Trans. Parallel and Distributed Systems, 15(5):468-480, 2004.
[6] M. Burmester and Y. Desmedt, "A secure and efficient conference key distribution system", in Advances in Cryptology - EUROCRYPT, 1994.
[7] P. Adusumilli, X. Zou, and B. Ramamurthy, "DGKD: Distributed group key distribution with authentication capability", Proceedings of the 2005 IEEE Workshop on Information Assurance and Security, West Point, NY, USA, pp. 476-481, June 2005.
[8] J.-H. Huang and S. Mishra, "Mykil: a highly scalable key distribution protocol for large group multicast", IEEE Global Telecommunications Conference (GLOBECOM), 3:1476-1480, 2003.
[9] B. Wu, J. Wu, E. B. Fernandez, M. Ilyas, and S. Magliveras, "Secure and efficient key management in mobile ad hoc networks", Journal of Network and Computer Applications, 30(3):937-954, 2007.
[10] Z. Yu and Y. Guan, "A key pre-distribution scheme using deployment knowledge for wireless sensor networks", Proceedings of the 4th ACM/IEEE International Conference on Information Processing in Sensor Networks (IPSN), pp. 261-268, 2005.
[11] Bing Wu, Jie Wu and Yuhong Dong, "An efficient group key management scheme for mobile ad hoc networks", Int. J. Security and Networks, 2008.
[12] M. S. Bouassida, I. Chrisment, and O. Festor, "A Group Key Management in MANETs", International Journal of Network Security, Vol. 6, No. 1, pp. 67-79, Jan. 2008.
[13] Dondeti, L., Mukherjee, S., and Samal, A., "A distributed group key management scheme for secure many-to-many communication", Tech. Rep. PINTL-TR-207-99, Department of Computer Science, University of Maryland, 1999.
[14] Kim, Y., Perrig, A., and Tsudik, G., "Simple and fault-tolerant key agreement for dynamic collaborative groups", Proceedings of the 7th ACM Conference on Computer and Communications Security, Athens, Greece, pp. 235-241, Nov. 2000.
[15] R. Balachandran, B. Ramamurthy, X. Zou, and N. Vinodchandran, "CRTDH: An efficient key agreement scheme for secure group communications in wireless ad hoc networks", Proceedings of the IEEE International Conference on Communications (ICC), pp. 1123-1127, 2005.
[16] Spyros Magliveras, Wandi Wei and Xukai Zou, "Notes on the CRTDH Group Key Agreement Protocol", The 28th International Conference on Distributed Computing Systems Workshops, 2008.
[17] The network simulator, http://www.isi.nsnam


An Analysis of Energy Consumption on ACK+Rate Packet in Rate Based Transport Protocol

P.Ganeshkumar, Department of IT, PSNA College of Engineering & Technology, Dindigul, TN, India, 624622, [email protected]

K.Thyagarajah, Principal, PSNA College of Engineering & Technology, Dindigul, TN, India, 624622, [email protected]

Abstract— A rate based transport protocol determines the rate of data transmission between the sender and the receiver and then sends the data according to that rate. To notify the rate to the sender, the receiver sends an ACK+Rate packet on epoch timer expiry. In this paper, through detailed arguments and simulation, it is shown that the transmission of ACK+Rate packets on epoch timer expiry consumes more energy in networks with low mobility. To overcome this problem, a new technique called Dynamic Rate Feedback (DRF) is proposed. DRF sends an ACK+Rate packet whenever there is a change in rate of ±25% with respect to the previous rate. Based on ns2 simulations, DRF is compared with ATP, a reliable transport protocol for ad hoc networks.

Keywords- Ad hoc network, Ad hoc transport protocol, Rate based transport protocols, energy consumption, Intermediate node

I. INTRODUCTION

This paper focuses on the design of a new technique called Dynamic Rate Feedback (DRF), which minimizes the frequency of ACK+Rate packet transmission for rate based transport protocols in ad hoc networks. An ad hoc network is a dynamically reconfigurable wireless network that does not have a fixed infrastructure. The characteristics of an ad hoc network are completely different from those of a wired network; therefore the TCP protocol, which was designed originally for wired networks, cannot be used as such in an ad hoc network. Several studies have focused on transport layer issues in ad hoc networks. Research work has been carried out both on studying the impact of using TCP as the transport layer protocol and on improving its performance, either through lower-layer mechanisms that hide the characteristics of the ad hoc network from TCP or through appropriate modifications to the mechanisms used by TCP [1-8]. Existing approaches to improve transport layer performance over ad hoc networks fall under three broad categories [9]: (i) enhancing TCP to adapt to the characteristics of ad hoc networks, (ii) cross-layer design, and (iii) new transport protocols tailored specifically for the ad hoc network environment. TCP-ELFN proposed by Holland et al. [10], the Atra framework proposed by Anantharaman et al. [11], and the Ad hoc Transport Protocol (ATP) proposed by Sundaresan et al. [12] are example protocols of the three categories, respectively.

Transport layer protocols tailored specifically for ad hoc networks are broadly classified into (i) rate based transport protocols and (ii) window based transport protocols. In a rate based transport protocol the rate is determined first, and the data are then transmitted according to that rate. The intermediate nodes calculate the rate of data transmission; this rate is appended to the data packets and transmitted to the receiver. The receiver collates the rates received from the intermediate nodes and sends the result to the sender along with an ACK. This ACK+Rate packet is transmitted on epoch timer expiry: if the epoch timer is set to 1 second, then every second the receiver transmits an ACK+Rate packet to the sender. In this paper, the epoch-timer-based transmission of the ACK+Rate packet and its problems are discussed. The frequency of ACK+Rate packet transmission with respect to energy consumption, mobility and rate adaptation is presented. The frequency of rate change with respect to mobility and its results are also presented, and the energy consumption over the simulation is found for various mobility speeds.

II. RELATED WORK

In this paper, the focus is on proposals that aim to minimize the frequency of the ACK packets transmitted. Jimenez and Altman [13] investigated the impact of delaying more than 2 ACKs on TCP performance in multi-hop wireless networks and showed through simulation that encouraging results can be obtained. Johnson [14] investigated the impact of using an extended delayed-acknowledgement interval on TCP performance. Allman [15] conducted an extensive simulation evaluation of delayed acknowledgement (DA) strategies. Most of these approaches address only the ACK packets of window based transmission. In contrast to window based transmission, rate based transmission is the other class of transport layer protocols mentioned in Section I. Compared to window based transmission, rate based transport protocols improve performance in the following two ways [12]: (i) they avoid the drawbacks due to burstiness, and (ii) the transmissions are scheduled by a timer at the sender, so the need for self-clocking through the arrival of ACKs is eliminated.


The latter benefit is used by rate based protocols to decouple the congestion control mechanism from the reliability mechanism. The protocols that fall under rate based transmission are as follows. Sundaresan et al. [12] proposed ATP, a reliable transport protocol for ad hoc networks. Kai Chen et al. [16] proposed an end-to-end rate based flow control scheme called EXACT. Raniwala et al. [17] designed a link-layer-aware reliable transport protocol called LRTP. Scofield [18] presented in his thesis a hop-by-hop transport control protocol for multi-hop wireless networks. Wang et al. [19] proposed an improved transport protocol which uses fuzzy logic control. Ganeshkumar et al. [20, 21] studied ATP and designed a new transport protocol called PATPAN. In all the rate based transport protocols mentioned above, the rate (congestion information) is transmitted by the intermediate nodes to the receiver during the transmission of data packets. The receiver collates the congestion information and notifies the sender of it along with the ACK, and the sender adjusts the rate of data packet transmission according to the rate (congestion information) received from the receiver. The ACK+Rate feedback packet is transmitted periodically, based on an epoch timer. The granularity of the epoch timer and the frequency of ACK+Rate feedback packet transmission highly influence the performance of the protocol. For window based transport protocols a huge amount of work has been done to minimize the frequency of ACK packet transmission; the literature survey on rate based transport protocols clearly shows that, until now, research has not been carried out on minimizing the frequency of ACK+Rate feedback packet transmission. This motivated us to study the behaviour of the well-known rate based transport protocol ATP [12] and to explore further the frequency of ACK+Rate feedback packet transmission.

III. BACKGROUND

A. Ad Hoc Transport Protocol (ATP)
The Ad hoc Transport Protocol (ATP) is a protocol designed for ad hoc wireless networks. It is not based on TCP and differs from TCP in many ways: ATP uses coordination between different layers, ATP uses rate based transmissions and network-assisted congestion control, and congestion control and reliability are decoupled in ATP. Like many TCP variants, ATP uses information from lower layers for many purposes, such as estimating the initial transmission rate, congestion detection, avoidance and control, and detection of path breaks. ATP obtains network congestion information from the intermediate nodes, while flow control and reliability information are obtained from the ATP receiver. ATP uses a timer-based transmission in which the rate depends on the congestion in the network. As packets travel through the network, intermediate nodes attach congestion information to each ATP packet; this congestion information is expressed in terms of a weighted average of the queuing delay and contention delay experienced by the packets at the intermediate nodes. The ATP receiver collates this information and sends it back to the sender in the next ACK packets, and the ATP sender can adjust its transmission rate based on this information. During the establishment of the connection, the ATP sender determines the initial transmission rate by sending a probe packet to the receiver. Each intermediate node attaches network congestion information to the probe packet, and the ATP receiver replies to the sender with an ACK packet containing the relevant congestion information. In order to minimize control overhead, ATP uses connection request and ACK packets as probe packets. ATP increases the transmission rate only if the new transmission rate (R) received from the network is beyond a threshold (x) greater than the current rate (S), i.e. the rate is increased only if R > S(1+x). The transmission rate is then increased only by a fraction (k) of the difference between the two rates, i.e. S = S + (R-S)/k; this avoids rapid fluctuations in the transmission rate. If the ATP sender has not received ACK packets for two consecutive feedback periods, it significantly decreases the transmission rate. After a third such period, the connection is assumed to be lost and the ATP sender moves to the connection initiation phase, where it periodically generates probe packets. When a path break occurs, the network layer sends an explicit link failure notification (ELFN) packet to the ATP sender, and the sender moves to the connection initiation phase. The major advantage of ATP is the avoidance of congestion window fluctuations and the separation of congestion control and reliability, which leads to higher performance in ad hoc wireless networks. The biggest disadvantage of ATP is its incompatibility with traditional TCP: nodes using ATP cannot communicate directly with the Internet. In addition, the fine-grained per-flow timer used at the ATP sender may become a bottleneck in large ad hoc wireless networks.
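The increase rule just described can be written down directly. The sketch below covers that one rule only; x and k are protocol parameters whose values the text does not fix, and rate decreases, which are driven by missed feedback periods, are not modeled here:

    def atp_update_rate(S: float, R: float, x: float, k: float) -> float:
        # increase only when the feedback rate R exceeds the threshold S*(1+x),
        # and then only by a fraction 1/k of the difference: S = S + (R - S)/k
        if R > S * (1 + x):
            return S + (R - S) / k
        return S

    # e.g. feedback of 15 pkts/s against a current rate of 10 pkts/s
    print(atp_update_rate(10.0, 15.0, x=0.2, k=4))   # -> 11.25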

IV. DYNAMIC RATE FEEDBACK (DRF)

A. Design issues
In rate based transport protocols, the intermediate nodes send the congestion information (rate) to the receiver, usually appended to the data packets. The receiver collates the congestion information and sends it to the sender; the sender determines the transmission rate from the received congestion information and starts to transmit the data. Under the IEEE 802.11 MAC, at any particular instant only one node makes use of the channel, even though the channel is common to all the nodes lying in the same contention area. The receiver sends a transport layer ACK for the data it has received. Since the MAC uses a shared channel, the data packets and ACK packets contend to occupy the channel, and this reduces the number of data packets sent. To address this issue, SACK is used in the transport layer: in TCP SACK an ACK is sent for every 3 packets, while in ATP SACK an ACK is sent for every 20 packets or fewer. In ATP, an epoch timer with a fixed value is used to trigger the process of sending the ACK; after each epoch timer expiry, an ACK is sent.


The congestion information which the intermediate nodes send to the receiver is transmitted by the receiver to the sender along with the ACK. Since the ACK is triggered by epoch timer expiry, the congestion information (delay/rate) is sent to the sender on each and every epoch timer expiry. Even though the rate is determined on the fly, dynamically, throughout the session, it is notified to the sender only once per epoch; therefore the granularity of the epoch affects the performance of the protocol. If the epoch time is very short, more ACK packets are transmitted per unit time. This unnecessary traffic on the reverse path creates problems such as high energy consumption, leading to poor network lifetime, and throughput reduction on the forward path. If the epoch time is very large, fewer ACK+Rate packets are transmitted per unit time; rate changes that occur within an epoch are then not notified to the sender promptly, which causes the sender to choose rates much lower than necessary, resulting in periodic oscillations between high and low transmission speeds.

In an ad hoc network, the topology changes dynamically, which results in frequent rate changes. If the sender is not notified of frequent rate changes, it will choose a rate which does not appropriately match the characteristics of the network at that particular instant; the sender may choose a rate much lower or higher than necessary, resulting in periodic oscillation between high and low transmission speeds. The epoch timer thus plays an important role in the throughput on the forward path, the energy consumption and the congestion in the network, and, given the characteristics of ad hoc networks, choosing a constant epoch timer value for the entire period of operation of the protocol does not hold good. Therefore, in this proposal the ACK+Rate packet is transmitted by the receiver to the sender whenever there is a 25 percent rate change with respect to the previous rate: if the receiver finds a rate which is 25 percent more or less than the previous rate, it transmits the ACK+Rate to the sender. This procedure is termed Dynamic Rate Feedback (DRF). With this technique the rate is notified to the sender whenever there is a 25 percent rate change, eliminating the concept of ACK+Rate packet transmission based on epoch timer expiry.

B. Independence on ACKs
In rate based transport protocols, the rate is notified to the sender along with the ACK: the receiver sends the ACK+Rate to the sender, and the sender adapts its data transmission according to the received rate. In window based transmission, the reception of an ACK triggers the data transmission, so if ACKs are late, or only a few ACKs are received per unit time, only a small number of data packets will be transmitted, which decreases the throughput and the performance of the protocol. In rate based transport protocols, the reception of an ACK does not trigger the data transmission; the data is transmitted only according to the received rate. In DRF, in a slow speed mobile topology, the transmission of ACK+Rate packets is limited, but this does not affect the number of data packets transmitted on the forward path. In a low mobility network the frequency of rate changes will be minimal; this causes no side effect, except that the sender should not discard the already-transmitted buffered data until an ACK is received for it. DRF also does not affect data recovery through retransmission: if any data is lost, that is an indication of congestion or route failure, which causes a change in rate; if a rate change occurs, an ACK+Rate will be transmitted, the lost data packets can be identified from the ACK, and they can be recovered through retransmission.

C. Triggering ACK+Rate Packet
The DRF technique is developed to eliminate unnecessary transmissions of the ACK+Rate packet, in order to reduce the additional overhead and energy consumption borne by the intermediate nodes. In this paper, through detailed arguments, it is shown that triggering the ACK+Rate packet on expiry of the epoch timer causes serious performance problems. If the epoch time period is set to 1 second, then rate changes that occur within 1 second cannot be communicated to the sender, which causes harmful effects such as reduced throughput and under-utilization of network resources. To overcome this drawback, DRF sends the ACK+Rate packet whenever there is a rate change, rather than on each and every expiry of the epoch timer. If an ACK+Rate packet were transmitted for every rate change, more traffic would be generated in the network, causing high energy consumption at the intermediate nodes and reduced throughput due to the contention of data and ACK+Rate packets on the forward and reverse paths respectively. If, on the other hand, the ACK+Rate packet were transmitted only after a ±100% rate change with respect to the previous rate (i.e., if the current rate is 4, the next ACK+Rate packet is transmitted only when the rate becomes 8), the sender would choose an inappropriate rate which does not exactly suit the characteristics of the network at that instant. Therefore, in order to find the optimal time to transmit the ACK+Rate packet, an experiment was conducted in the ns2 simulator; the simulation setup is discussed in Section V.A. The source and destination nodes are randomly chosen, and throughput is analyzed from various angles. The transmission of ACK+Rate packets with respect to ±15%, ±25%, ±35%, ±50%, ±65% and ±75% rate changes relative to the previous rate is analyzed, and the throughput in a pedestrian mobile topology (1 m/s), a slow speed mobile topology (20 m/s) and a high speed mobile topology (30 m/s) is found for 1 flow, 5 flows and 25 flows. The results, rounded to the nearest integer, are shown in Table 1. The average throughput over the pedestrian, slow speed and high speed mobile topologies for ±15%, ±25%, ±35%, ±50%, ±65% and ±75% rate changes at a network load of 1 flow is 539, 529, 468, 366, 278 and 187 respectively. From these results it is clear that the throughput for ±15%, ±25% and ±35% rate changes is greater than that for ±50%, ±65% and ±75%; therefore the ACK+Rate packet may be triggered for transmission when there is a ±15%, ±25% or ±35% rate change with respect to the previous rate.
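The DRF trigger itself reduces to a single condition evaluated at the receiver. The sketch below is illustrative only; the class and method names are ours, not from the protocol:

    THRESHOLD = 0.25   # a ±25% rate change triggers feedback

    class DrfReceiver:
        # receiver-side trigger: send ACK+Rate only on a ±25% change
        # relative to the last rate reported to the sender
        def __init__(self, initial_rate: float):
            self.last_reported = initial_rate

        def on_rate_sample(self, rate: float) -> bool:
            # called for each rate collated from the intermediate nodes;
            # returns True when an ACK+Rate packet should be transmitted
            if abs(rate - self.last_reported) >= THRESHOLD * self.last_reported:
                self.last_reported = rate
                return True
            return False

    rx = DrfReceiver(initial_rate=4.0)
    print([rx.on_rate_sample(r) for r in (4.2, 5.1, 5.8, 7.4)])
    # -> [False, True, False, True]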


TABLE 1: THROUGHPUT STATISTICS OF DIFFERENT MOBILITY FOR VARIOUS LOADS

ACK+Rate trigger (% rate change) | Pedestrian topology (1/5/25 flows) | Slow speed topology (1/5/25 flows) | High speed topology (1/5/25 flows)
±15                              | 635 / 214 / 46                     | 551 / 157 / 43                     | 432 / 141 / 38
±25                              | 629 / 198 / 39                     | 537 / 146 / 39                     | 421 / 133 / 32
±35                              | 522 / 153 / 31                     | 492 / 114 / 31                     | 392 / 111 / 24
±50                              | 424 / 132 / 27                     | 371 / 91 / 24                      | 304 / 92 / 17
±65                              | 367 / 123 / 21                     | 254 / 68 / 19                      | 213 / 63 / 14
±75                              | 217 / 105 / 19                     | 184 / 54 / 15                      | 161 / 43 / 8

The transmission of ACK+Rate packets for rate changes of ±15%, ±25% and ±35% relative to the previous rate is analysed further in Table 2. The appropriate percentage of rate change at which to trigger the ACK+Rate packet must be chosen: from the results shown in Table 2, the number of ACK+Rate packets for a ±25% rate change is lower than for a ±35% rate change and only slightly higher than for a ±15% rate change. Therefore DRF adopts the method of triggering the ACK+Rate packet whenever there is a ±25% rate change relative to the previous rate.

TABLE 2: STATISTICS OF ACK+RATE PACKET TRANSMISSION (number of ACK+Rate packets transmitted for a simulation time of 100 seconds and a network load of 1 flow)

ACK+Rate trigger (% rate change) | Pedestrian topology | Slow speed topology | High speed topology
±15                              | 43                  | 56                  | 69
±25                              | 64                  | 73                  | 84
±35                              | 76                  | 85                  | 94

V. PERFORMANCE EVALUATION

This section presents the evaluation of DRF and ATP, considering aspects such as energy consumption and the change in rate with respect to time for various mobility speeds. The performance of DRF is compared with that of ATP; the reason for comparing with ATP is that it is a well-known and widely accepted rate based transport protocol. Since the scope of this paper is limited to rate based transport protocols, other versions of TCP are not considered for comparison.

A. Simulation Setup
The simulation study was conducted using the ns2 network simulator. A mobile topology on a 500 m x 500 m grid consisting of 50 nodes is used. The radio transmission range of each node is kept at 200 meters and the interference range at 500 meters. The channel capacity is chosen as 2 Mbit/sec and the channel delay is fixed at 2.5 µs. Dynamic Source Routing and IEEE 802.11b are used as the routing and MAC protocols, and FTP is used as the application layer protocol. The source and destination pairs are randomly chosen. The effect of load on the network is studied with 1, 5 and 25 flows. The performance of DRF is evaluated and compared with ATP.

B. Instantaneous Rate Dynamics
Instantaneous rate dynamics refers to the change in rate (packets/sec) with respect to time. Fig. 1 shows the change in rate for mobility speeds of 1 m/s, 10 m/s, 30 m/s and 50 m/s. When the mobility is 1 m/s, a rate change occurs 3 to 4 times, and the average of the changing rate is 210. When the mobility is 50 m/s, frequent rate changes occur, the average of the changing rate is 225.4, and the maximum deviation of the rate with respect to the average value ranges from 50 to 60. From this observation it can be concluded that mobility is directly proportional to the change in rate, i.e., as mobility increases, the rate changes increase as well.


Figure 1. Rate vs. Time

D. Energy Consumption
In this section, the energy consumption at the intermediate nodes is examined for ATP and DRF. The energy model implemented in the ns2 simulator [22] is used in the simulation. A node starts with an initial energy level that is reduced whenever the node transmits or receives a packet. The total amount of energy E(n) consumed at a given node n is

E(n) = Et(n) + Er(n)

where Et and Er denote the amounts of energy spent on transmission and reception respectively. The energy/bit ratio, i.e. the energy consumption per bit, is computed as

e = n * p * 8 / es

where n is the number of packets transmitted, p is the size of a packet in bytes, and es is the energy spent by the node in joules.
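These two definitions translate directly into code; the example values in the last line are ours, chosen only to show the quantities involved:

    def total_energy(et: float, er: float) -> float:
        # E(n) = Et(n) + Er(n): transmit plus receive energy at node n
        return et + er

    def energy_bit_ratio(n_packets: int, packet_bytes: int, es_joules: float) -> float:
        # energy/bit ratio exactly as defined above: e = n * p * 8 / es
        return n_packets * packet_bytes * 8 / es_joules

    print(energy_bit_ratio(100, 64, 2.5))   # 100 packets of 64 bytes, 2.5 J -> 20480.0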

E. Performance in pedestrian (1 m/s) mobile topology
In the pedestrian mobile topology, the rate change with respect to time is low compared to the slow speed and high speed mobile topologies. Snapshots of the energy consumption due to the transmission of ACK+Rate packets in DRF and ATP are presented for 1 flow, 5 flows and 25 flows in Fig. 2 a, b and c respectively. The focus here is only on highlighting the energy consumption of ATP and DRF; hence other variants of TCP and other rate based transport protocols are not considered. The average energy consumption of ATP is 3.84, 4.65 and 5.05 for 1 flow, 5 flows and 25 flows respectively. The average energy consumption of DRF is 3.24, 4.01 and 4.35 for 1 flow, 5 flows and 25 flows respectively. In ATP, since the ACK+Rate packet is transmitted every epoch period (say 1 second), the number of ACK+Rate packets transmitted is high; therefore the energy consumption in ATP is higher than in DRF. The energy consumption in DRF is low because the ACK+Rate is transmitted to the sender only when there is a rate change. In the pedestrian topology, since the mobility is low (1 m/s), the rate change is also low, so the number of ACK+Rate packet transmissions is minimal. Therefore the energy consumption of both ATP and DRF is lower than in the slow speed and high speed mobile topologies.

Figure 2. Energy Consumption in Pedestrian Mobile Topology: (a) 1 flow, (b) 5 flows, (c) 25 flows.

F. Performance in slow speed mobile topology
Snapshots of the energy consumption due to the transmission of ACK+Rate packets in DRF and ATP are presented for 1 flow, 5 flows and 25 flows in Fig. 3 a, b and c respectively. The average energy consumption of ATP is 3.92, 4.75 and 5.12 for 1 flow, 5 flows and 25 flows respectively. The average energy consumption of DRF is 3.75, 3.94 and 4.24 for 1 flow, 5 flows and 25 flows respectively. In the slow speed mobile topology the speed of mobility is 10 m/s. The average energy consumption in the slow speed mobile topology is greater than that obtained in the pedestrian mobile topology: according to the results shown in Fig. 1, as the speed increases, the change in rate with respect to time also increases. Therefore the energy consumption of both ATP and DRF is higher in the slow speed topology than in the pedestrian mobile topology.


comparing ATP and DRF, ATP has higher energy consumption than DRF, owing to its transmission of ACK+Rate packets on every epoch timer expiry. This indicates that the DRF technique should be used in topologies where the mobility speed varies from 1 m/s to 10 m/s: DRF reduces energy consumption and thereby increases the network lifetime, which is a critical issue.

Figure 3. Energy consumption in the slow speed mobile topology: (a) 1 flow, (b) 5 flows, (c) 25 flows.

G. Performance in high speed mobile topology
Snapshots of the energy consumption for the high speed mobile topology of ATP and DRF are presented for a single flow, 5 flows, and 25 flows in Fig. 4 a, b, and c respectively. The average energy consumption of ATP is 4.15, 4.85, and 5.33 for 1 flow, 5 flows, and 25 flows respectively, while that of DRF is 4.89, 5.6, and 6.23. Here the energy consumption of DRF is greater than that of ATP. This is because rate changes become more frequent as mobility increases, and in DRF an ACK+Rate packet is transmitted whenever there is a rate change; since the rate changes are frequent, the number of ACK+Rate packets transmitted is higher. The results show that 5 to 6 ACK+Rate packets are transmitted within 1 second, which raises the energy consumption. ATP, in contrast, transmits one ACK+Rate packet per epoch period (say 1 second) regardless of rate changes.

Figure 4. Energy consumption in the high speed mobile topology: (a) 1 flow, (b) 5 flows, (c) 25 flows.

The results clearly reveal that the DRF technique reduces energy consumption and increases node lifetime in the pedestrian and slow speed mobility topologies. In the high speed mobility topology, however, DRF consumes slightly more energy than ATP. It is therefore clear that the DRF technique can be deployed in ad hoc networks where the mobility speed does not exceed 20 m/s.


VI. CONCLUSION
The Dynamic Rate Feedback (DRF) strategy aims to minimize the contention between data and ACK+Rate packets by transmitting as few ACK+Rate packets as possible. The mechanism is self-adaptive, and it is feasible and effective in ad hoc networks whose mobility is restricted to 20 m/s. Through simulation it is found that the frequency of rate changes is directly proportional to the mobility. In the DRF technique, an ACK+Rate packet is transmitted only when the rate changes by more than ±25% from the previous rate. The simulation results showed that DRF can outperform ATP, the well-known rate-based transport protocol for ad hoc networks, in terms of energy consumption at intermediate nodes. The technique is easy to deploy since the changes are limited to the end node (receiver) only. It is important to emphasize that DRF retains the semantics of ATP while improving its performance.
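The receiver-side decision rule can be sketched as follows. This is an illustrative Python fragment, not the authors' implementation: the ±25% threshold follows the DRF description above, while ATP's behaviour is modelled as a fixed one-second epoch timer.

    def drf_should_send_feedback(current_rate, last_fed_back_rate,
                                 threshold=0.25):
        # DRF: send ACK+Rate only when the newly computed rate deviates
        # from the last rate fed back by more than +/-25%.
        if last_fed_back_rate == 0:
            return True
        change = abs(current_rate - last_fed_back_rate) / last_fed_back_rate
        return change > threshold

    def atp_should_send_feedback(now, last_epoch_time, epoch=1.0):
        # ATP: send ACK+Rate on every epoch expiry (say, 1 second),
        # regardless of whether the rate has changed.
        return (now - last_epoch_time) >= epoch

Under low mobility the DRF predicate rarely fires, which is exactly why the measured ACK+Rate traffic, and hence energy consumption, is lower than ATP's timer-driven feedback.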


REFERENCES
[1] B. Bakshi, P. Krishna, N.H. Vaidya, and D.K. Pradhan, "Improving Performance of TCP over Wireless Networks," Proc. 17th International Conference on Distributed Computing Systems (ICDCS), Baltimore, Maryland, USA, 1997, pp. 112-120.
[2] G. Holland and N.H. Vaidya, "Impact of Routing and Link Layers on TCP Performance in Mobile Ad Hoc Networks," Proc. IEEE Wireless Communications and Networking Conference, USA, 1999, pp. 505-509.
[3] M. Gerla, K. Tang, and R. Bagrodia, "TCP Performance in Wireless Multi-Hop Networks," Proc. IEEE Workshop on Mobile Computing Systems and Applications, USA, 1999, pp. 202-222.
[4] G. Holland and N.H. Vaidya, "Analysis of TCP Performance over Mobile Ad Hoc Networks," Proc. ACM MOBICOM Conference, Seattle, Washington, 1999, pp. 219-230.
[5] J.P. Monks, P. Sinha, and V. Bharghavan, "Limitations of TCP-ELFN for Ad Hoc Networks," Proc. Workshop on Mobile and Multimedia Communication, USA, 2000, pp. 56-60.
[6] K. Chandran, S. Raghunathan, S. Venkatesan, and R. Prakash, "A Feedback Based Scheme for Improving TCP Performance in Ad Hoc Wireless Networks," Proc. International Conference on Distributed Computing Systems, Amsterdam, The Netherlands, 1998, pp. 472-479.
[7] T.D. Dyer and R. Bopanna, "A Comparison of TCP Performance over Three Routing Protocols for Mobile Ad Hoc Networks," Proc. ACM MOBIHOC 2001 Conference, Long Beach, California, USA, 2001, pp. 156-162.
[8] J. Liu and S. Singh, "ATCP: TCP for Mobile Ad Hoc Networks," IEEE Journal on Selected Areas in Communications, 2001.
[9] K. Sundaresan, S.-J. Park, and R. Sivakumar, "Transport Layer Protocols in Ad Hoc Networks," in Ad Hoc Networks (Springer US), pp. 123-152.
[10] J.P. Monks, P. Sinha, and V. Bharghavan, "Limitations of TCP-ELFN for Ad Hoc Networks," Proc. Workshop on Mobile and Multimedia Communication, Marina del Rey, CA, Oct. 2000.
[11] V. Anantharaman and R. Sivakumar, "A Microscopic Analysis of TCP Performance over Wireless Ad Hoc Networks," Proc. ACM SIGMETRICS 2002, Marina del Rey, CA, 2002.
[12] K. Sundaresan, V. Anantharaman, H.-Y. Hsieh, and R. Sivakumar, "ATP: A Reliable Transport Protocol for Ad Hoc Networks," IEEE Transactions on Mobile Computing, 4(6), 2005, pp. 588-603.
[13] T. Jimenez and E. Altman, "Novel Delayed ACK Techniques for Improving TCP Performance in Multihop Wireless Networks," Proc. Personal Wireless Communications (PWC '03), Sept. 2003.
[14] S.R. Johnson, "Increasing TCP Throughput by Using an Extended Acknowledgment Interval," master's thesis, Ohio University, June 1995.
[15] M. Allman, "On the Generation and Use of TCP Acknowledgements," ACM Computer Communication Review, vol. 28, pp. 1114-1118, 1998.
[16] K. Chen, K. Nahrstedt, and N. Vaidya, "The Utility of Explicit Rate-Based Flow Control in Mobile Ad Hoc Networks," Proc. Wireless Communications and Networking Conference, USA, 2004.
[17] A. Raniwala, S. Sharma, R. Krishnan, and T. Chiueh, "Evaluation of a Stateful Transport Protocol for Multi-channel Wireless Mesh Networks," Proc. International Conference on Distributed Computing Systems, Lisboa, Portugal, 2006.
[18] D. Scofield, "Hop-by-Hop Transport Control for Multi-Hop Wireless Networks," MS thesis, Department of Computer Science, Brigham Young University, Provo, 2007.
[19] N.-C. Wang, Y.-F. Huang, and W.-L. Liu, "A Fuzzy-Based Transport Protocol for Mobile Ad Hoc Networks," Proc. IEEE International Conference on Sensor Networks, Ubiquitous and Trustworthy Computing (SUTC '08), 2008, pp. 320-325.
[20] P. Ganeshkumar and K. Thyagarajah, "PATPAN: Power Aware Transport Protocol for Adhoc Networks," Proc. IEEE Conference on Emerging Trends in Engineering and Technology, Nagpur, India, 2008, pp. 182-186.
[21] P. Ganeshkumar and K. Thyagarajah, "Proposals for Performance Enhancement of Intermediate Node in Ad Hoc Transport Protocol," Proc. IEEE International Conference on Computing, Communication and Networking, Karur, India, 2008.
[22] Y. Xu, J. Heidemann, and D. Estrin, "Adaptive Energy Conserving Routing for Multihop Ad Hoc Networks," Research Report 527, Information Sciences Institute, University of Southern California, Oct. 2000.

AUTHORS PROFILE

P. Ganeshkumar received the bachelor's degree in Electrical and Electronics Engineering from Madurai Kamaraj University in 2001 and the master's degree in Computer Science and Engineering, with distinction, from Bharathiar University in 2002. He is currently working towards the PhD degree at Anna University Chennai. He has 7 years of teaching experience in Information Technology. He is a member of IEEE, CSI and ISTE. He has published several papers in international journals, IEEE international conferences and national conferences, and authored the book "Component Based Technology". His areas of interest include distributed systems, computer networks and ad hoc networks.

Dr. K. Thyagarajah received the bachelor's degree in Electrical Engineering and the master's degree in Power Systems from Madras University, and the doctoral degree in Power Electronics and AC Motor Drives from the Indian Institute of Science, Bangalore, in 1993. He has 30 years of experience in teaching and research. He is a senior member of IEEE and of various bodies such as ISTE and the Institution of Engineers, India. He is a syndicate member of Anna University Chennai and Anna University Tiruchirapalli, and a member of the board of studies of various universities in India. He has published more than 50 papers in national and international refereed journals and conferences, and authored the book "Advanced Microprocessor". His areas of interest include network analysis, power electronics, mobile computing and ad hoc networks.


Prediction of Zoonosis Incidence in Human using Seasonal Auto Regressive Integrated Moving Average (SARIMA)

Adhistya Erna Permanasari, Dayang Rohaya Awang Rambli, P. Dhanapal Durai Dominic
Computer and Information Science Dept., Universiti Teknologi PETRONAS, Bandar Seri Iskandar, 31750 Tronoh, Perak, Malaysia
e-mail: [email protected]

Abstract— Zoonosis refers to the transmission of infectious diseases from animals to humans. The increasing number of zoonosis incidences causes great losses of life, both human and animal, as well as socio-economic impact, and motivates the development of a system that can predict the future number of zoonosis occurrences in humans. This paper analyses and presents the use of the Seasonal Autoregressive Integrated Moving Average (SARIMA) method for developing a forecasting model able to support and provide predictions of the number of human zoonosis incidences. The dataset for model development was a time series of human Salmonellosis occurrences in the United States comprising fourteen years of monthly data obtained from a study published by the Centers for Disease Control and Prevention (CDC). Several trial SARIMA models were compared to obtain the most appropriate one, and diagnostic tests were used to determine model validity. The results showed that SARIMA(9,0,14)(12,1,24)12 is the fittest model. In the measure of accuracy, the selected model achieved a Theil's U value of 0.062, implying that the model is highly accurate and a close fit, and indicating the capability of the final model to closely represent, and make predictions based on, the Salmonellosis historical dataset.

Keywords—zoonosis; forecasting; time series; SARIMA

I. INTRODUCTION

Zoonosis refers to any infectious disease that is transmitted from animals to humans [1, 2]. It is estimated that around 75% of emerging infectious diseases in humans are of animal origin [3-5]. Zoonosis epidemics present a potential threat to public health as well as an economic impact, and large numbers of people have been killed by zoonotic diseases in different countries.

The WHO statistics [6] report several zoonosis outbreaks, including Dengue/dengue haemorrhagic fever in Brazil (647 cases, with 48 deaths); Avian Influenza outbreaks in 15 countries (438 cases, 262 deaths); Rift Valley Fever in Sudan (698 cases, including 222 deaths); Ebola in Uganda (75 patients), Ebola in the Philippines (6 positive cases from 141 suspected) and Ebola in Congo (32 cases, 15 deaths); and, most recently, Swine Flu (H1N1) in many countries (over 209438 cases, with at least 2185 deaths). Some of these zoonoses have recently had major outbreaks worldwide, resulting in many losses of life, both human and animal.

Zoonosis evolution from the original form can give rise to newly emerging zoonotic diseases [4]. This is evidenced in a report presented by WHO [4] associating microbiological factors with the agent, the animal hosts/reservoirs and the human victims, which can result in a new variant of a pathogen that is capable of jumping the species barrier. For example, Influenza A viruses have jumped from wild waterfowl species into domestic farms, farm animals, and humans. Another recent example is swine flu, where the outbreaks in humans originated from a new influenza virus in swine; the outbreak of disease in people caused by this new virus of swine origin continues to grow in the United States and internationally.

The worldwide frequency of zoonosis outbreaks in the past 30 years [3] and the risk factor of newly emerging diseases have forced many governments to apply stringent measures to prevent zoonosis outbreaks, for example by destroying the livestock in an infected area. Such measures mean great losses to farmers. The significant impact on human life, however, remains the biggest issue in zoonosis. This highlights the need for a modeling approach that can give decision makers an early estimate of the future number of zoonosis incidences based on historical time series data. Computer software coupled with statistical modeling can be used to forecast the number of zoonosis incidences.

Time series forecasting models are widely used in various fields; in fact, there are few studies on zoonosis forecasting compared to other areas such as energy demand prediction, economics, traffic prediction, and health support. Prediction of the risk of zoonosis impact on humans deserves particular focus, given the need to obtain results on which further decisions can be based.

Many researchers have developed different forecasting methods to predict zoonosis incidence in humans.


A multivariate Markov chain model was selected to project the number of tuberculosis (TB) incidences in the United States from 1980 to 2010 [7]. This work pointed out the study of TB incidence based on demographic groups, with the uncertainty in model parameters handled by fuzzy numbers. The model determined that the decline rate in the number of cases among Hispanics would be slower than among white non-Hispanics and black non-Hispanics.

The number of human incidences of Schistosoma haematobium at Niono, Mali was forecast online using an exponential smoothing method [8]. The method was used as the core of a proposed state-space framework. The data were collected from 1996-2004 from 17 community health centers in that area. The framework was able to assist in managing and assessing S. haematobium transmission and intervention impact.

Three different approaches were applied to forecast the SARS epidemic in China. The existing time series was processed by AR(1), ARIMA(0,1,0), and ARMA(1,1) models, and the results could be used to support disease reporting [9] and to monitor the dynamics of SARS in China based on the daily data.

A Bayesian dynamic model can also be used to monitor influenza surveillance as one factor of a SARS epidemic [10]. This model was developed to link pediatric and adult syndromic data to the traditional measures of influenza morbidity and mortality. The findings showed the importance of modeling influenza surveillance data and recommend the dynamic Bayesian network.

Monthly data of Cutaneous Leishmaniasis (CL) incidence in Costa Rica from 1991 to 2001 were analyzed using seasonal autoregressive models. This work studied the relationship between the interannual cycles of the disease and climate variables using frequency and time-frequency techniques of time series analysis [11]. The model supported a dynamic link between the disease and climate.

An additive decomposition method was used to predict Salmonellosis incidence in the US [12]. Fourteen years of historical data, from 1993 to 2006, were collected to compute forecast values up to 12 months ahead.

Different forecasting approaches have thus been applied to zoonosis time series. However, the increasing number of zoonosis occurrences in humans creates the need for further study of zoonosis forecasting for different zoonotic diseases [13]. Given this issue, selection of a well-fitted model is necessary to obtain optimal results.

This paper analyses the empirical results of evaluating and predicting the number of zoonosis incidences using the Autoregressive Integrated Moving Average (ARIMA) model. This model is selected because of its capability to correct for the local trend in the data, where the pattern in the previous period can be used to forecast the future. The model also supports modeling one perspective as a function of time (in this case, the number of human cases) [14]. Due to the seasonal trend of the time series used, the Seasonal ARIMA (SARIMA) is selected for the model development.

The remainder of the paper is structured as follows. Section II presents the preparation of the time series. Section III describes the basic theory of the ARIMA and SARIMA models. Section IV introduces the Bayesian Information Criterion (BIC) and Akaike Information Criterion (AIC). Section V reports the model development. Finally, Section VI presents conclusions and directions for future work.

II. DATASET FOR MODEL DEVELOPMENT

This section describes the dataset (time series) used for model development. The Salmonellosis disease was selected because its incidences can be found in any country. Time series data of human Salmonellosis occurrences in the United States were collected for the 168-month period from January 1993 to December 2006. The data were obtained from the summary of notifiable diseases in the United States in the Morbidity and Mortality Weekly Report (MMWR) published by the Centers for Disease Control and Prevention (CDC). The seasonal variation of the original data is presented in Fig. 1; the trend in every month is then plotted using a seasonal stacked line in Fig. 2.
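A minimal sketch of how such a monthly series can be organized for analysis is shown below (Python with pandas; the file name and column layout are assumptions, since the MMWR data are distributed as report tables rather than a ready-made CSV):

    import pandas as pd

    # Assumed CSV with one count per month, Jan 1993 - Dec 2006 (168 rows).
    counts = pd.read_csv("salmonellosis_us_1993_2006.csv")["cases"]
    index = pd.period_range("1993-01", "2006-12", freq="M")
    series = pd.Series(counts.values, index=index, name="SALM")

    # Seasonal profile corresponding to the "seasonal stacked line" view:
    monthly_means = series.groupby(series.index.month).mean()
    print(monthly_means)  # expected to peak in August, dip in January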

Figure 1. Monthly number of US Salmonellosis incidence (1993-2006)

Figure 2. Seasonal stacked line of US Salmonellosis (1993-2006)


Fig. 2 shows the seasonal stacked line for human Salmonellosis incidence in the US from 1993 until 2006. The plot shows a peak season of incidence in August, while the minimum number of incidences occurs in January. Since the time series plot of the historical data exhibits seasonal variations with a similar trend every year, SARIMA was chosen as the appropriate approach to develop a prediction model.

III. ARIMA AND SEASONAL ARIMA MODEL

This section introduces the basic theory of the Autoregressive Integrated Moving Average (ARIMA) model. The general class of ARIMA(p,d,q) processes is shown in (1) as

y_t = δ + φ1 y_{t-1} + φ2 y_{t-2} + ... + φp y_{t-p} + a_t − θ1 a_{t-1} − θ2 a_{t-2} − ... − θq a_{t-q}   (1)

where d is the level of differencing, p is the autoregressive order, and q is the moving average order [15]. The constant is denoted by δ, while φ is an autoregressive operator and θ is a moving average operator.

Seasonal ARIMA (SARIMA) is used when the time series exhibits a seasonal variation. A seasonal autoregressive notation (P) and a seasonal moving average notation (Q) form the multiplicative SARIMA process (p,d,q)(P,D,Q)s. The subscript 's' denotes the length of the seasonal period: for example, in daily data s = 7, in quarterly data s = 4, and in monthly data s = 12.

In order to formalize the model, the backshift operator (B) is used. Shifting a time series observation backward in time by k periods is symbolized by B^k, such that B^k y_t = y_{t-k}. The backshift operator is used to present a general stationarity transformation, where a time series is stationary if its statistical properties (mean and variance) are constant through time. The general stationarity transformation is

z_t = ∇s^D ∇^d y_t = (1 − B^s)^D (1 − B)^d y_t   (2)

where z is the differenced time series, d is the degree of non-seasonal differencing used and D is the degree of seasonal differencing used. Then, the general SARIMA (p,P,q,Q) model is

φp(B) φP(B^s) z_t = δ + θq(B) θQ(B^s) a_t   (3)

where:
• φp(B) = (1 − φ1 B − φ2 B^2 − ... − φp B^p) is the nonseasonal autoregressive operator of order p
• φP(B^s) = (1 − φ1,s B^s − φ2,s B^2s − ... − φP,s B^Ps) is the seasonal autoregressive operator of order P
• θq(B) = (1 − θ1 B − θ2 B^2 − ... − θq B^q) is the nonseasonal moving average operator of order q
• θQ(B^s) = (1 − θ1,s B^s − θ2,s B^2s − ... − θQ,s B^Qs) is the seasonal moving average operator of order Q
• δ = μ φp(B) φP(B^s) is a constant term, where μ is the mean of the stationary time series
• φ, θ, δ are unknown parameters that can be calculated from the sample data
• a_t, a_{t-1}, ... are random shocks that are assumed to be independent of each other.

George Box and Gwilym Jenkins studied the simplified steps for obtaining comprehensive information on understanding and using the univariate ARIMA model [15],[16]. The Box-Jenkins (BJ) methodology consists of four iterative steps:

1) Step 1: Identification. This step focuses on selecting the order of regular differencing (d), seasonal differencing (D), the non-seasonal autoregressive order (p), the seasonal autoregressive order (P), the non-seasonal moving average order (q), and the seasonal moving average order (Q). The orders can be identified by observing the sample autocorrelations (SAC) and sample partial autocorrelations (SPAC).
2) Step 2: Estimation. The historical data is used to estimate the parameters of the model tentatively identified in Step 1.
3) Step 3: Diagnostic checking. Various diagnostic tests are used to check the adequacy of the tentative model.
4) Step 4: Forecasting. The final model from Step 3 is then used to compute the forecast values.

This approach is widely used for examining SARIMA models because of its capability to capture the appropriate trend by examining historical patterns. The BJ methodology has several advantages, including extracting a great deal of information from the time series using a minimum number of parameters, and the capability of handling stationary and non-stationary time series in both non-seasonal and seasonal elements [17],[18].

IV. BAYESIAN INFORMATION CRITERION (BIC) AND AKAIKE INFORMATION CRITERION (AIC)

Selection of the ARIMA model was based on the Bayesian Information Criterion (BIC) and Akaike Information Criterion (AIC) values. These criteria are based on the maximum likelihood principle. The determinant of the residual covariance is computed as

|Ω̂| = det( (1/(T − p)) Σ_t ε̂_t ε̂_t' )   (4)

The log likelihood value is computed assuming a multivariate normal (Gaussian) distribution as

l = −(T/2) { k(1 + log 2π) + log |Ω̂| }   (5)

Then the AIC and BIC are formulated as [19]:

AIC = −2(l/T) + 2(n/T)   (6)

BIC = −2(l/T) + n log(T)/T   (7)

where l is the value of the log likelihood function with the k parameters estimated using T observations, and n = k(d + pk). The various information criteria are all based on −2 times the average log likelihood function, adjusted by a penalty function.

V. MODEL DEVELOPMENT

The following section discusses the results of the BJ iterative steps applied to forecast the available dataset.

A. Identification
Starting with the BJ methodology introduced in Section III, the first step in model development is to identify the dataset. In this step, the sample autocorrelations (SAC) and sample partial autocorrelations (SPAC) of the historical data were plotted to observe the pattern. Three periods of data were selected to illustrate the plot; the result is shown in Fig. 3. Based on Fig. 3, it can be observed that the correlogram of the time series is likely to have seasonal cycles, especially in the SAC, which implies level non-stationarity. Therefore, regular differencing and seasonal differencing were applied to the original time series, as presented in Fig. 4 and Fig. 5.

An Augmented Dickey-Fuller (ADF) test was performed to determine whether differencing of the data is needed [19]. The hypotheses of the Augmented Dickey-Fuller t-test are:

H0 : θ = 0, the data needs to be differenced to make it stationary, versus the alternative hypothesis
H1 : θ < 0, the data is stationary and does not need to be differenced.

The result was compared with the 1%, 5%, and 10% critical values. The ADF test statistic had a t-statistic value of −1.779 and the one-sided p-value was 0.389. The critical values reported at 1%, 5%, and 10% were −3.476, −2.882, and −2.578. The t-statistic was greater than the critical values, which provides evidence not to reject the null hypothesis of a unit root; the time series therefore needs differencing.

Regular differencing and seasonal differencing were then applied to the original time series, and the ADF test was applied to both. The results showed a test statistic of −14.171 for the regular differencing and −12.514 for the seasonal differencing, with a one-sided p-value of 0.000 for both. The p-value of 0.000 provides evidence to reject the null hypothesis and indicates the stationarity of the differenced time series.

Figure 3. SAC and SPAC correlogram of original data

Figure 4. SAC and SPAC correlogram of regular differencing

Figure 5. SAC and SPAC correlogram of seasonal differencing
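The four BJ steps and the ADF check above can be reproduced with standard tools. The sketch below uses Python's statsmodels rather than EViews (which the study used), so it illustrates the workflow rather than the paper's exact computation; `series` is the monthly Salmonellosis series from Section II, and the seasonal orders shown are compact stand-ins, since the paper's SARIMA(9,0,14)(12,1,24)12 uses specific high-order lags selected in EViews.

    from statsmodels.tsa.stattools import adfuller
    from statsmodels.tsa.statespace.sarimax import SARIMAX

    # Step 1 (identification): unit-root check, as in the ADF test above.
    t_stat, p_value = adfuller(series)[:2]
    print(f"ADF t-statistic = {t_stat:.3f}, p-value = {p_value:.3f}")

    # Step 2 (estimation): fit a candidate seasonal model (D = 1, s = 12).
    model = SARIMAX(series, order=(9, 0, 14), seasonal_order=(1, 1, 2, 12))
    fit = model.fit(disp=False)
    print(fit.aic, fit.bic)  # compare candidates by AIC/BIC, as in (6)-(7)

    # Step 3 (diagnostic checking): residual serial-correlation test.
    print(fit.test_serial_correlation(method="ljungbox"))

    # Step 4 (forecasting): three years (36 months) ahead.
    print(fit.forecast(steps=36))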


Selecting whether to use regular or seasonal differencing was based on the correlogram. In order to develop an ARIMA model, the time series should be stationary. Based on the correlograms shown in Fig. 4 and Fig. 5, more spikes were observed in the regular differencing than in the seasonal differencing; the seasonal differencing was therefore chosen for model development.

B. Parameter Estimation
Different ARIMA models were applied to find the best-fitting model. The most appropriate model was selected using the BIC and AIC values, the best model being the one with the minimum BIC and AIC. Table I presents the results of estimating various ARIMA processes for the seasonal differencing of Salmonellosis human incidence using the EViews 5.1 econometric software package.

TABLE I. ESTIMATION OF SELECTED SARIMA MODELS

No  Model Variables                                                 BIC     AIC     Adj. R2
1   C, AR(9), SAR(12), SAR(22), SAR(24), MA(9), SMA(12), SMA(24)    15.614  15.431  0.345
2   AR(9), SAR(12), SAR(22), SAR(24), MA(9), SMA(12), SMA(24)       15.587  15.427  0.342
3   AR(9), SAR(12), SAR(24), MA(9), SMA(12), SMA(22), SMA(24)       15.583  15.423  0.345
4   AR(3), AR(9), SAR(12), MA(3), SMA(24)                           15.394  15.286  0.449
5   AR(3), AR(9), SAR(12), MA(14), SMA(24)                          15.351  15.243  0.472
6   AR(3), AR(9), SAR(12), MA(24)                                   15.368  15.282  0.447
7   AR(9), SAR(12), MA(3), SMA(24)                                  15.359  15.273  0.452
8   AR(9), SAR(12), MA(14), SMA(24)                                 15.331  15.245  0.468
9   AR(9), SAR(12), MA(3), MA(14), SMA(24)                          15.352  15.245  0.471

The AIC and BIC are commonly used in model selection, whereby the smaller value is preferred. From Table I, model 8 has a relatively small BIC and AIC and also achieves a large adjusted R2. The model AR(9), SAR(12), MA(14), SMA(24) can also be written as SARIMA(9,0,14)(12,1,24)12.

To produce the model, the separate non-seasonal and seasonal models were computed first and then combined to describe the final model.

Step 1: Model for the nonseasonal level
AR(9):  z_t = δ + φ9 z_{t-9} + a_t   (8)
MA(14): z_t = δ + a_t − θ14 a_{t-14}   (9)

Step 2: Model for the seasonal level
AR(12): z_t = δ + φ1,12 z_{t-12} + a_t   (10)
MA(24): z_t = δ + a_t − θ2,12 a_{t-24}   (11)

Step 3: Combining (8)–(11) arrives at (12):

z_t = δ + φ9 z_{t-9} − θ14 a_{t-14} + φ1,12 z_{t-12} − θ2,12 a_{t-24} + φ9 φ1,12 z_{t-21} + θ14 θ2,12 a_{t-38} + a_t   (12)

Since the δ value was 0.44, less than |2| and statistically not different from zero, δ can be excluded from the model. Where both an autoregressive and a moving average model are present at the nonseasonal or seasonal level, multiplicative terms are used as follows:
• φ9 z_{t-9} and φ1,12 z_{t-12} were used to form the multiplicative term φ9 φ1,12 z_{t-21}
• −θ14 a_{t-14} and −θ2,12 a_{t-24} were used to form the multiplicative term θ14 θ2,12 a_{t-38}

The model was then derived using the multiplicative form:

z_t − φ9 z_{t-9} − φ1,12 z_{t-12} − φ9 φ1,12 z_{t-21} = θ14 a_{t-14} + θ2,12 a_{t-24} − θ14 θ2,12 a_{t-38} + a_t   (13)

Applying the backshift operator (B) to (13) yields

(1 − φ9 B^9 − φ1,12 B^12 − φ9 φ1,12 B^21) z_t = (1 + θ14 B^14 + θ2,12 B^24 − θ14 θ2,12 B^38) a_t   (14)

From the computation, the parameter estimates were AR(9) = 0.154, SAR(12) = −0.513, MA(14) = 0.255, SMA(24) = −0.860. Substituting the estimated parameters into (14) gives the final model:

(1 − 0.154 B^9 + 0.513 B^12 + 0.078 B^21) z_t = (1 + 0.255 B^14 − 0.860 B^24 − 0.219 B^38) a_t   (15)

Since the seasonal differencing was chosen, (2) is applied with d = 0, D = 1 and s = 12 to define z_t as

z_t = ∇s^D ∇^d y_t = (1 − B^12)^1 (1 − B)^0 y_t = y_t − y_{t-12}   (16)

The final SARIMA model was used to compute the forecast values three years ahead.

C. Diagnostic Checking
A diagnostic check was made on the selected model. The correlogram and the residual plots are presented in Fig. 6. The LM (Lagrange multiplier) test was applied for the first lag period (lag 1 – lag 12); the results can be seen in Table II. There is no serial correlation in the residuals, because the SAC and SPAC at all lags are nearly zero and within the 95% confidence interval.
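An equivalent residual whiteness check can be scripted outside EViews. The sketch below is illustrative only: it assumes `fit` is a fitted statsmodels SARIMAX result, such as the one in the earlier Section V sketch, and applies the Ljung-Box test to the first 12 lags, mirroring the LM test reported in Table II.

    import pandas as pd
    from statsmodels.stats.diagnostic import acorr_ljungbox

    # Ljung-Box test on the model residuals, lags 1-12; large p-values
    # indicate no serial correlation, i.e. an adequate model.
    residuals = pd.Series(fit.resid)  # `fit` from the earlier SARIMAX sketch
    print(acorr_ljungbox(residuals, lags=12))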


Figure 6. SAC and SPAC correlogram of model residual

TABLE II. LM TEST RESULT FOR RESIDUAL

Variable    Std. Error  t-Statistic  Prob.
AR(9)       0.224        0.510       0.611
SAR(12)     0.124       -0.804       0.423
MA(14)      0.084       -0.049       0.961
SMA(24)     0.029       -0.223       0.824
RESID(-1)   0.093       -0.225       0.822
RESID(-2)   0.092        1.252       0.213
RESID(-3)   0.093        1.448       0.150
RESID(-4)   0.095       -1.345       0.181
RESID(-5)   0.096        0.730       0.467
RESID(-6)   0.096       -1.141       0.256
RESID(-7)   0.098       -0.547       0.586
RESID(-8)   0.096        0.036       0.971
RESID(-9)   0.224       -0.250       0.803
RESID(-10)  0.096       -0.647       0.274
RESID(-11)  0.094        0.178       0.721
RESID(-12)  0.156        0.943       0.432

The results in Table II confirm that there is no correlation up to order 12, because the t-statistics for the SAC and SPAC are less than |2|. It was therefore concluded that the selected model fits.

D. Forecasting
SARIMA(9,0,14)(12,1,24)12 was selected as the most appropriate model among the various trials. It was used to forecast the predicted incidence from 2007 through 2009 (t169 – t204), presented in Table III. The complete time series plot, consisting of three components (actual values, fitted values, and residual values), is shown in Fig. 7.

TABLE III. THE FORECASTING RESULT

Month      Prediction 2007  Prediction 2008  Prediction 2009
January    1678.571         1678.544         1678.558
February   1965.707         1965.624         1965.666
March      1969.415         1969.383         1969.399
April      2692.214         2692.268         2692.240
May        2657.628         2657.578         2657.604
June       3364.616         3364.677         3364.646
July       5020.626         5020.563         5020.595
August     4675.720         4675.721         4675.720
September  5655.739         5655.426         5655.587
October    5691.898         5691.992         5691.944
November   3489.930         3489.962         3489.946
December   5639.062         5639.223         5639.140

E. Error Measures
After empirical examination, forecast accuracy was calculated using different accuracy measures: RMSE, MAD, MAPE, and Theil's U-statistic. The results were an RMSE of 479.99, a MAD of 367.23, a MAPE of 10.36, and a Theil's U of 0.062. RMSE, MAD, and MAPE are stand-alone accuracy measures with inherent disadvantages that can lead to certain losses of function [20]. The use of relative measures, however, can resolve this limitation: instead of naive forecasting, relative measures evaluate the performance of a forecast and can eliminate the bias from trends, seasonal components, and outliers. Forecast accuracy for the selected model is therefore based on the relative accuracy measure, Theil's U-statistic. Based on the Theil's U value of 0.062, the model is highly accurate and presents a close fit. The empirical results thus indicate that the model is able to accurately represent the Salmonellosis historical dataset.

VI. CONCLUSION

In this paper, a forecasting method was applied to predict the number of Salmonellosis human incidences in the US based on monthly data. The prediction model was developed using the SARIMA method on the historical data. Different SARIMA models were tested to select the most appropriate one, and various diagnostic checks, including BIC and AIC, were used to determine model validity. The results indicate that SARIMA(9,0,14)(12,1,24)12 was the fittest model among them.

The empirical results indicated that the model was able to represent the historical data with a Theil's U value of 0.062. In addition, the SARIMA model can be obtained using the four iterative Box-Jenkins steps and can provide predictions of the number of human incidences for other zoonoses to help stakeholders make further decisions. Further work is still needed to evaluate and apply other forecasting methods to the zoonosis time series in order to obtain better forecast accuracy.
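For reference, the four accuracy measures reported in Section E can be computed as follows. This is an illustrative Python sketch, not the paper's code; `actual` and `fitted` are assumed to be aligned arrays, and Theil's U is given here in its common U2 form, which is one of several variants in use.

    import numpy as np

    def accuracy_measures(actual, fitted):
        actual = np.asarray(actual, dtype=float)
        fitted = np.asarray(fitted, dtype=float)
        err = actual - fitted
        rmse = np.sqrt(np.mean(err ** 2))
        mad = np.mean(np.abs(err))
        mape = 100.0 * np.mean(np.abs(err / actual))
        # Theil's U2: forecast error relative to the naive (no-change) forecast.
        naive_err = actual[1:] - actual[:-1]
        u = np.sqrt(np.sum(err[1:] ** 2)) / np.sqrt(np.sum(naive_err ** 2))
        return rmse, mad, mape, u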


Figure 7. Time series forecasting plot showing the actual, fitted, and residual values (number of incidence vs. month).

REFERENCES
[1] CDC, "Compendium of Measures to Prevent Disease Associated with Animals in Public Settings," National Association of State Public Health Veterinarians, Inc. (NASPHV), MMWR 2005;54 (No. RR-4), 2005.
[2] WHO. (2007). Zoonoses and veterinary public health (VPH) [Online]. Available: http://www.who.int
[3] B. A. Wilcox and D. J. Gubler, "Disease Ecology and the Global Emergence of Zoonotic Pathogens," Environmental Health and Preventive Medicine, vol. 10, pp. 263–272, 2005.
[4] WHO, "Report of the WHO/FAO/OIE joint consultation on emerging zoonotic diseases," Geneva, Switzerland, 3–5 May 2004.
[5] J. Slingenbergh, M. Gilbert, K. de Balogh, and W. Wint, "Ecological sources of zoonotic diseases," Rev. sci. tech. Off. int. Epiz., vol. 23, pp. 467-484, 2004.
[6] WHO. (2009). Disease Outbreak News [Online]. Available: http://www.who.int/csr/don/en/
[7] S. M. Debanne, R. A. Bielefeld, G. M. Cauthen, T. M. Daniel, and D. Y. Rowland, "Multivariate Markovian Modeling of Tuberculosis: Forecast for the United States," Emerging Infectious Diseases, vol. 6, 2000.
[8] D. C. Medina, S. E. Findley, and S. Doumbia, "State–Space Forecasting of Schistosoma haematobium Time-Series in Niono, Mali," PLoS Neglected Tropical Diseases, vol. 2, 2008.
[9] D. Lai, "Monitoring the SARS Epidemic in China: A Time Series Analysis," Journal of Data Science, vol. 3, pp. 279-293, 2005.
[10] P. Sebastiani, K. D. Mandl, P. Szolovits, I. S. Kohane, and M. F. Ramoni, "A Bayesian Dynamic Model for Influenza Surveillance," Statistics in Medicine, vol. 25, pp. 1803-1825, 2006.
[11] L. F. Chaves and M. Pascual, "Climate Cycles and Forecasts of Cutaneous Leishmaniasis, a Nonstationary Vector-Borne Disease," PLoS Medicine, vol. 3 (8), pp. 1320-1328, 2006.
[12] A. E. Permanasari, D. R. Awang Rambli, and D. D. Dominic, "Forecasting of Zoonosis Incidence in Human Using Decomposition Method of Seasonal Time Series," in Proc. NPC 2009, 2009, pp. 1-7.
[13] A. E. Permanasari, D. R. Awang Rambli, D. D. Dominic, and V. Paruvachi Ammasa, "Construction of Zoonosis Domain Relationship as a Preliminary Stage for Developing a Zoonosis Emerging System," in Proc. ITSim 2008, Kuala Lumpur, 2008, pp. 527-534.
[14] E. S. Shtatland, K. Kleinman, and E. M. Cain, "Biosurveillance and outbreak detection using the ARIMA and logistic procedures."
[15] B. L. Bowerman and R. T. O'Connell, Forecasting and Time Series: An Applied Approach, 3rd ed., Duxbury Thomson Learning, 1993.
[16] S. Makridakis and S. C. Wheelwright, Forecasting Methods and Applications, John Wiley & Sons, Inc., 1978.
[17] J. G. Caldwell. (2006). The Box-Jenkins Forecasting Technique [Online]. Available: http://www.foundationwebsite.org/
[18] C. Chia-Lin, S. Songsak, and W. Aree, "Modelling and forecasting tourism from East Asia to Thailand under temporal and spatial aggregation," Math. Comput. Simul., vol. 79, pp. 1730-1744, 2009.
[19] Quantitative Micro Software. (2005). EViews 5.1 User's Guide [Online]. Available: www.eviews.com
[20] Z. Chen and Y. Yang. (2004). Assessing Forecast Accuracy Measures [Online]. Available: http://www.stat.iastate.edu/preprint/articles/2004-10.pdf


AUTHORS PROFILE

Adhistya Erna Permanasari is a lecturer at the Electrical Engineering Department, Gadjah Mada University, Yogyakarta 55281, Indonesia. She is currently a PhD student at the Computer and Information Science Department, Universiti Teknologi PETRONAS, Malaysia. She received her BS in Electrical Engineering in 2002 and M.Tech (Electrical Engineering) in 2006 from the Department of Electrical Engineering, Gadjah Mada University, Indonesia. Her research interests include database systems, decision support systems, and artificial intelligence.

Dr. Dayang Rohaya Awang Rambli is currently a senior lecturer at the Computer and Information Science Department, Universiti Teknologi PETRONAS, Malaysia. She received her BS from the University of Nebraska, her M.Sc from Western Michigan University, USA, and her PhD from Loughborough University, UK. Her primary areas of research interest involve virtual reality in education and training, human factors in VR, and augmented reality in entertainment and education.

Dr. P. Dhanapal Durai Dominic obtained his M.Sc degree in operations research in 1985, an MBA from Regional Engineering College, Tiruchirappalli, India in 1991, a Post Graduate Diploma in Operations Research in 2000, and completed his PhD in 2004 in the area of job shop scheduling at Alagappa University, Karaikudi, India. Presently he is working as a Senior Lecturer in the Department of Computer and Information Science, Universiti Teknologi PETRONAS, Malaysia. His fields of interest are operations management, KM, and decision support systems. He has published technical papers in international and national journals and conferences.



Performance Evaluation of WiMAX Physical Layer under Adaptive Modulation Techniques and Communication Channels

Md. Ashraful Islam
Dept. of Information & Communication Engineering, University of Rajshahi, Rajshahi, Bangladesh
e-mail: [email protected]

Md. Zahid Hasan
Dept. of Information & Communication Engineering, University of Rajshahi, Rajshahi, Bangladesh
e-mail: [email protected]

Riaz Uddin Mondal (corresponding author)
Assistant Professor, Dept. of Information & Communication Engineering, University of Rajshahi, Rajshahi, Bangladesh
e-mail: [email protected]


Abstract— WiMAX (Worldwide Interoperability for Microwave Access) is a promising technology which can offer high speed voice, video and data services up to the customer end. The aim of this paper is the performance evaluation of a WiMAX system under different combinations of digital modulation (BPSK, QPSK, 4-QAM and 16-QAM) and different communication channels: AWGN and fading channels (Rayleigh and Rician). The WiMAX system incorporates a Reed-Solomon (RS) encoder concatenated with a convolutional encoder using 1/2- and 2/3-rated codes for FEC channel coding. The simulation results for the estimated Bit Error Rate (BER) show that the implementation of an interleaved RS code (255,239,8) with a 2/3-rated convolutional code under the BPSK modulation technique is highly effective in the WiMAX communication system. To complete this performance analysis, a segment of audio signal is used; the transmitted audio message is found to be retrieved effectively under noisy conditions.

Keywords-OFDM, Block Coding, Convolution coding, Additive White Gaussian Noise, Fading Channel.

I. INTRODUCTION

The demand for broadband mobile services continues to grow. Conventional high-speed broadband solutions are based on wired-access technologies such as the digital subscriber line (DSL). This type of solution is difficult to deploy in remote rural areas, and furthermore it lacks support for terminal mobility. Mobile Broadband Wireless Access (BWA) offers a flexible and cost-effective solution to these problems [1]. IEEE WiMAX/802.16 is a promising technology for broadband wireless metropolitan area networks (WMANs), as it can provide high throughput over long distances and can support different qualities of service. WiMAX/802.16 technology ensures broadband access for the last mile. It provides a wireless backhaul network that enables high speed Internet access to residential, small and medium business customers, as well as Internet access for WiFi hot spots and cellular base stations [2]. It supports both point-to-multipoint (P2MP) and multipoint-to-multipoint (mesh) modes.

WiMAX will substitute other broadband technologies competing in the same segment and become an excellent solution for the deployment of the well-known last mile infrastructure in places that are very difficult to reach with other technologies, such as cable or DSL, and where the costs of deployment and maintenance of such technologies would not be profitable. In this way, WiMAX will connect rural areas in developing countries as well as underserved metropolitan areas. It can even be used to deliver backhaul for carrier structures, enterprise campuses, and Wi-Fi hot-spots. WiMAX offers a good solution for these challenges because it provides a cost-effective, rapidly deployable solution [3]. Additionally, WiMAX will represent a serious competitor to 3G (Third Generation) cellular systems, as high speed mobile data applications will be achieved with the 802.16e specification. The original WiMAX standard catered only for fixed and nomadic services; it was revised to address full mobility applications, hence the mobile WiMAX standard, defined under the IEEE 802.16e specification. Mobile WiMAX supports full mobility, nomadic and fixed systems [4]. It addresses the following needs, which may answer the question of closing the digital divide:
• It is cost effective.
• It offers high data rates.
• It supports fixed, nomadic and mobile applications, thereby converging the fixed and mobile networks.
• It is easy to deploy and has flexible network architectures.
• It supports interoperability with other networks.
• It is aimed at being the first truly global wireless broadband network.


IEEE 802.16 aims to extend wireless broadband access up to kilometers in order to facilitate both point-to-point and point-to-multipoint connections [5].

II. SIMULATION MODEL

This structure corresponds to the physical layer of the WiMAX/IEEE 802.16 WirelessMAN-OFDM air interface. In this setup, the input binary data stream, obtained from a segment of recorded audio signal, is protected against errors with forward error correction codes (FECs) and interleaved. The complementary operations are applied in the reverse order at channel decoding in the receiver end; the complete channel coding setup is shown in Figure 1. FEC techniques typically use error-correcting codes (e.g., RS, CC) that can detect the error locations with high probability. These channel codes improve the bit error rate performance by adding redundant bits to the transmitted bit stream, which the receiver uses to correct errors introduced by the channel. Such an approach reduces the required signal transmitting power for a given bit error rate, at the expense of additional overhead and reduced data throughput (even when there are no errors) [6]. The forward error correction consists of a Reed-Solomon (RS) outer code and a rate-compatible convolutional code (CC) inner code. A block Reed-Solomon (255,239,8) code based on the Galois field GF(2^8), with a symbol size of 8 bits, is chosen; it processes a block of 239 symbols and can correct up to 8 symbol errors by calculating 16 redundant correction symbols. The Reed-Solomon encoder encapsulates the data into coding blocks, which are helpful in dealing with burst errors [7]. The block-formatted (Reed-Solomon encoded) data stream is passed through a convolutional interleaver. A code rate can be defined for convolutional codes as well: if k bits per second are input to the convolutional encoder and the output is n bits per second, the code rate is k/n. The redundancy depends not only on the incoming k bits but also on several of the preceding k bits; the number of preceding bits used in the encoding process is the constraint length m, which is similar to the memory in the system [8]. Here the code rate k/n is equal to 1/2 and 2/3, and the constraint length m is 3 and 5. The convolutionally encoded bits are further interleaved prior to being converted into complex modulation symbols under BPSK, QPSK, 4-QAM or 16-QAM modulation and fed to an OFDM modulator for transmission. The simulated coding and modulation schemes and the noisy fading channels used in the present study are shown in Table 1.
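To make the inner coding stage concrete, the sketch below shows a rate-1/2 convolutional encoder in Python. It is illustrative only: the generator polynomials (7, 5 in octal) are a common textbook choice for constraint length 3 and are assumed here, since the paper specifies the rates and constraint lengths but not the generators; the Reed-Solomon outer code and interleaver are omitted for brevity.

    def conv_encode_half_rate(bits, g1=0b111, g2=0b101, k=3):
        # Rate-1/2 convolutional encoder, constraint length k = 3.
        # For each input bit, two output bits are produced by applying
        # the generator taps g1 and g2 to the shift-register contents.
        state = 0
        out = []
        for b in bits:
            state = ((state << 1) | b) & ((1 << k) - 1)
            out.append(bin(state & g1).count("1") % 2)
            out.append(bin(state & g2).count("1") % 2)
        return out

    print(conv_encode_half_rate([1, 0, 1, 1]))  # two coded bits per input bit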

Figure 1: Block diagram of the WiMAX communication system with interleaved concatenated channel coding. The transmitter chain is: audio signal source → channel encoding → digital modulation → serial-to-parallel conversion → IFFT → cyclic prefix insertion → parallel-to-serial conversion → AWGN/fading channel; the receiver applies the complementary blocks in reverse order to produce the output data.

Table 1: Simulated coding and modulation schemes and noisy channels

Modulation                   RS code      CC code rate  Noise channel
BPSK, QPSK, 4-QAM, 16-QAM    (255,239,8)  1/2, 2/3      AWGN channel
BPSK, QPSK, 4-QAM, 16-QAM    (255,239,8)  1/2, 2/3      Rayleigh channel
BPSK, QPSK, 4-QAM, 16-QAM    (255,239,8)  1/2, 2/3      Rician channel


In the OFDM modulator, the digitally modulated symbols are transmitted in parallel on subcarriers through implementation of an Inverse Fast Fourier Transform (IFFT) on a block of information symbols, followed by a digital-to-analog converter (DAC). To mitigate the effects of inter-symbol interference (ISI) caused by channel time spread, each block of IFFT coefficients is typically preceded by a cyclic prefix. At the receiving side, a reverse process (including deinterleaving and decoding) is executed to obtain the original data bits. As the deinterleaving process only changes the order of the received data, the error probability is unaffected; when passing through the CC decoder and the RS decoder, some errors may be corrected, which results in lower error rates [9].
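The OFDM step described here (IFFT followed by cyclic prefix insertion) can be sketched in a few lines of Python/NumPy. This is a generic illustration, not the WirelessMAN-OFDM parameterization; the FFT size and prefix length are assumed values.

    import numpy as np

    def ofdm_modulate(symbols, n_fft=256, cp_len=64):
        # Map one block of complex modulation symbols onto subcarriers,
        # transform to the time domain, and prepend the cyclic prefix.
        block = np.zeros(n_fft, dtype=complex)
        block[:len(symbols)] = symbols
        time_domain = np.fft.ifft(block)
        return np.concatenate([time_domain[-cp_len:], time_domain])

    # Example: 192 QPSK symbols on a 256-point IFFT with a 1/4 prefix.
    bits_i = 2 * np.random.randint(0, 2, 192) - 1
    bits_q = 2 * np.random.randint(0, 2, 192) - 1
    qpsk = (bits_i + 1j * bits_q) / np.sqrt(2)
    tx = ofdm_modulate(qpsk)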

found not to be suitable for transmission. It is also shown in this figure that the performance of QPSK and 4-QAM is found more better than BPSK modulation for a ½ convolutional code rate with respect to SNR values.

SIMULATION RESULTS

In this section, we have presented various BER vs. SNR plots for all the essential modulation and coding profiles in the standard on different channel models. We analyzed audio signal to transmit or receive data as considered for real data measurement. Figure 2, 3 and 4 display the performance on Additive White Gaussian Noise (AWGN), Rayleigh and Rician channel models respectively. The Bit Error Rate (BER) plot obtained in the performance analysis showed that model works well on Signal to Noise Ratio (SNR) less than 25 dB. Simulation results in figure 2 show the advantage of considering a ½ and 2/3 convolutinal coding and ReedSolomon coding rate for each of the four considered digital modulation schemes (BPSK, QPSK, 4-QAM and 16-QAM). The performance of the system under BPSK modulation in 2/3 convolutional code rate is quite satisfactory as compared to other modulation techniques in AWGN channel.

Figure 3: System performance under different modulation schemes for a Convolutional Encoder with a 1/2 and 2/3 code rates in Rayleigh channel. In figure 4, it also shows that the BER performance of convolutional 2/3 code rate for BPSK modulation technique is better than all other modulation techniques and there is a little difference exists between BPSK-1/2 and BPSK-2/3 convolutional code rated.

Figure 2: System performance under different modulation schemes for a Convolutional Encoder with a 1/2 and 2/3 code rates in AWGN channel.

Figure 4: System performance under different modulation schemes for a Convolutional Encoder with a 1/2 and 2/3 code rates in Rician channel.

The Bit Error Rate under BPSK modulation technique for a typical SNR value of 3 dB is .000035303 which is smaller than that of other modulation techniques.

The transmitted and received audio signal for such a case corresponding with time and amplitude coordinates is shown in fig5.

In figure 3 with Rayleigh channel, the BER performance in case of 16-QAM modulation in 2/3 convolutional code rate is

113

http://sites.google.com/site/ijcsis/ ISSN 1947-5500

(IJCSIS) International Journal of Computer Science and Information Security, Vol. 5, No.1, 2009

techniques and coding scheme. The effects of the FEC (Forward Error Correction) and different communication channels were also evaluated in the form of BER. Performance results highlight the impact of modulation scheme and show that the implementation of an interleaved Reed-Solomon with 2/3 rated convolutional code under BPSK modulation technique under different channels provides satisfactory performance among the four considered modulations. The IEEE 802.16 standard comes with many optional PHY layer features, which can be implemented to further improve the performance. The optional Block Turbo Coding (BTC) can be implemented to enhance the performance of FEC. Space Time Block Code (STBC) can be employed to provide transmit diversity. (a) V. REFERENCE [1]

[2]

[3] [4] [5]

(b) Figure 5: A segment of an audio signal, (a) Transmitted (b) Retrieved

[6] [7] [8]

IV.

CONCLUSION AND FUTURE WORKS

[9]

Mai Tran, George Zaggoulos, Andrew Nix and Angela Doufexi, ” Mobile WiMAX: Performance Analysis and Comparison with Experimental Results” J. El-Najjar, B. Jaumard, C. Assi, “Minimizing Interference in WiMax/802.16 based Mesh Networks with Centralized Scheduling”, Global Telecommunications Conference, 2008.pp.16. http://www.intel.com/netcomms/technologies/wimax/304471. pdf Intel White Paper, Wi-Fi and WiMAX Solutions: ”Understanding Wi-Fi and WiMAX as Metro-Access Solutions,” Intel Corporation, 2004. A. Yarali, B. Mbula, A. Tumula, “WiMAX: A Key to Bridging the Digital Divide”, IEEE Volume, 2007, pp.159 – 164. WiMAX Forum, “Fixed, Nomadic, Portable and Mobile Applications for 802.16-2004 and 802.16e WiMAX Networks”, November, 2005. T. TAN BENNY BING,"The World Wide Wi-Fi:Technological Trends and Business Strategies", JOHN WILEY & SONS, INC.,2003 M. Nadeem Khan, S. Ghauri, “The WiMAX 802.16e Physical Layer Model”, IET International Conference on Volume, 2008, pp.117 – 120. Kaveh Pahlavan and Prashant Krishnamurthy, “Principles of Wireless Networks”, Prentice-Hall, Inc., 2006. Yang Xiao,"WiMAX/Mobile MobileFi: advanced Research and Technology",Auerbach Publications,2008.

A performance analysis of an Wimax (Worldwide Interoperability for Microwave Access) system adopting concatenated Reed-Solomon and Convolutional encoding with block interleaver has been carried out. The BER curves were used to compare the performance of different modulation




A Survey of Biometric Keystroke Dynamics: Approaches, Security and Challenges

Mrs. D. Shanmugapriya
Dept. of Information Technology, Avinashilingam University for Women, Coimbatore, Tamilnadu, India
[email protected]

Dr. G. Padmavathi
Dept. of Computer Science, Avinashilingam University for Women, Coimbatore, Tamilnadu, India
[email protected]

Traditional keys to doors can be assigned to the object-based category. Usually the token-based approach is combined with the knowledge-based approach; an example of this combination is a bankcard with a PIN code. In knowledge-based and object-based approaches, passwords and tokens can be forgotten, lost or stolen, and there are also usability limitations associated with them: for instance, managing multiple passwords/PINs, and memorizing and recalling strong passwords, are not easy tasks. Biometric-based person recognition overcomes these difficulties of the knowledge-based and object-based approaches.

Abstract— Biometric technologies are gaining popularity today since they provide more reliable and efficient means of authentication and verification. Keystroke dynamics is one of the well-known biometric technologies, which attempts to identify the authenticity of a user while the user is working via a keyboard. The authentication process is done by observing changes in the typing pattern of the user. A comprehensive survey of the existing keystroke dynamics methods, metrics and different approaches is given in this study. This paper also discusses the various security issues and challenges faced by keystroke dynamics.

Keywords- Biometrics; Keystroke Dynamics; Computer Security; Information Security; User Authentication.



I. INTRODUCTION

The first and foremost step in preventing unauthorized access is user Authentication. User authentication is the process of verifying claimed identity. The authentication is accomplished by matching some short-form indicator of identity, such as a shared secret that has been pre-arranged during enrollment or registration for authorized users. This is done for the purpose of performing trusted communications between parties for computing applications.



Conventionally, user authentication is categorized into three classes [17]:

Knowledge - based,



Object or Token - based,



Biometric - based.





Figure 1. Classification of user authentication approaches: knowledge based, object based, and biometrics based, with biometrics further divided into physiological types (fingerprints, face recognition, iris, retina, hand geometry) and behavioural types (keystroke dynamics, voice, gait, signature, lip movement, mouse dynamics).

Biometric technologies are defined as automated methods of verifying or recognizing the identity of a living person based on physiological or behavioural characteristics [2]. Biometric technologies are gaining popularity because, when used in conjunction with traditional methods of authentication, they provide an extra level of security. Biometrics involves something a person is or does. These characteristics can be roughly divided into physiological and behavioural types [17]. Physiological characteristics refer to what the person is; in other words, they measure physical parameters of a certain part of the body. Some examples are fingerprints, hand geometry, vein checking, iris scanning, retinal scanning, facial recognition, and facial thermograms.

The following Figure 1. shows the different classification of user authentication methods. The knowledge-based authentication is based on something one knows and is characterized by secrecy. The examples of knowledge-based authenticators are commonly known passwords and PIN codes. The object-based authentication relies on something one has and is characterized by possession. Behavioural characteristics are related to what a person does, or how the person uses the body. Voiceprint, gait


Keystroke dynamics is considered a strong behavioral biometric-based authentication system [1]. It is the process of analyzing the way a user types at a terminal by monitoring the keyboard, in order to identify users based on their habitual typing rhythm patterns. Moreover, unlike other biometric systems, which may be expensive to implement, keystroke dynamics is almost free, as the only hardware required is the keyboard.

This paper surveys various keystroke dynamics approaches and discusses the security provided by keystroke dynamics. The paper is structured as follows: the next section introduces identification and verification in keystroke dynamics. Section III explains the methods and metrics of keystroke dynamics. Section IV discusses the various performance measures. Existing approaches are discussed in Section V. The sixth and seventh sections discuss the security and the challenges of keystroke dynamics respectively, and the final section concludes the topic.

II. IDENTIFICATION AND VERIFICATION

Keystroke dynamics systems can run in two different modes [2], namely the identification mode or the verification mode. Identification is the process of trying to find out a person's identity by examining a biometric pattern calculated from the person's biometric features. A larger amount of keystroke dynamics data is collected, and the user of the computer is identified based on previously collected keystroke dynamics profiles of all users. For each of the users, a biometric template is calculated in this training stage. A pattern that is going to be identified is matched against every known template, yielding either a score or a distance describing the similarity between the pattern and the template. The system assigns the pattern to the person with the most similar biometric template. To prevent impostor patterns (in this case, all patterns of persons not known by the system) from being correctly identified, the similarity has to exceed a certain level; if this level is not reached, the pattern is rejected. Identification with keystroke dynamics means that the user has to be identified without any additional information besides the measurement of his keystroke dynamics.

In the verification case, a person's identity is checked: the pattern to be verified is compared only with the person's individual template. Keystroke verification techniques can be classified as either static or dynamic/continuous [22]. Static verification approaches analyze keystroke characteristics only at specific times, for example during the user login sequence, providing additional security over the traditional username/password. Static approaches provide more robust user verification than simple passwords, but the detection of a user change after login authentication is impossible. Continuous verification, on the contrary, monitors the user's typing behavior throughout the course of the interaction. In the continuous process, the user is monitored on a regular basis throughout the time he/she is typing on the keyboard, allowing a real-time analysis [21]. This means that even after a successful login, the typing patterns of a person are constantly analyzed, and when they do not match the user's profile, access to the system is blocked.

III. METHODS AND METRICS FOR KEYSTROKE DYNAMICS

Previous studies [3, 5, 7, 10, 15] have identified a selection of data acquisition techniques and typing metrics upon which keystroke analysis can be based. The following section summarizes the basic methods and metrics that can be used.

Static at login – Static keystroke analysis authenticates a typing pattern based on a known keyword, phrase or some other predetermined text. The captured typing pattern is compared against previously recorded typing patterns stored during system enrollment.

Periodic dynamic – Dynamic keystroke analysis authenticates a user on the basis of their typing during a logged session. The data captured in the logged session is then compared to an archived typing pattern to determine the deviations. In a periodic configuration, the authentication can be constant, as part of a timed supervision.

Continuous dynamic – Continuous keystroke analysis extends data capture to the entire duration of the logged session. The continuous nature of the user monitoring offers significantly more data upon which to base the authentication judgment. Furthermore, an impostor may be detected earlier in the session than under a periodically monitored implementation.

Keyword-specific – Keyword-specific keystroke analysis extends continuous or periodic monitoring to consider the metrics related to specific keywords. Extra monitoring is done to detect potential misuse of sensitive commands. Static analysis could be applied to specific keywords to obtain a higher-confidence judgment.

Application-specific – Application-specific keystroke analysis further extends continuous or periodic monitoring; it may be possible to develop separate keystroke patterns for different applications.

In addition to this range of implementation scenarios, there is also a variety of possible keystroke metrics. The following are the metrics widely used by keystroke dynamics.

Digraph latency – Digraph latency is the most commonly used metric; it typically measures the delay between the key-up and the subsequent key-down events produced during normal typing (e.g. pressing the letters T-H).

Trigraph latency – Trigraph latency extends the digraph latency metric to consider the timing of three successive keystrokes (e.g. pressing the letters T-H-E).

Keyword latency – Keyword latencies consider the overall latency for a complete word, or may consider the unique combinations of digraphs/trigraphs in a word-specific context.
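The digraph metric is straightforward to compute from raw keyboard events. The following minimal Python sketch (illustrative only; the event tuple format is an assumption of this example, not a standard API) derives digraph latencies from a chronological stream of key-down/key-up events:

```python
from collections import defaultdict

def digraph_latencies(events):
    """Compute digraph latencies (previous key-up to next key-down, in ms)
    from a chronological list of (timestamp_ms, key, action) tuples,
    where action is "down" or "up"."""
    latencies = defaultdict(list)
    last_up = None  # (timestamp, key) of the most recent key-up event
    for t, key, action in events:
        if action == "down" and last_up is not None:
            prev_t, prev_key = last_up
            latencies[(prev_key, key)].append(t - prev_t)
        elif action == "up":
            last_up = (t, key)
    return latencies

# Example: the word "the" typed as three key presses.
events = [(0, "t", "down"), (90, "t", "up"),
          (150, "h", "down"), (240, "h", "up"),
          (310, "e", "down"), (400, "e", "up")]
print(dict(digraph_latencies(events)))
# {('t', 'h'): [60], ('h', 'e'): [70]}
```

Note that when the typist overlaps keys (the next key goes down before the previous one comes up), the latency is negative; this is normal in keystroke dynamics data and carries useful information about typing style.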


IV. PERFORMANCE MEASURES

Performance of keystroke analysis is typically measured in terms of various error rates [13], namely the False Accept Rate (FAR) and the False Reject Rate (FRR). FAR is the probability that an impostor posing as a valid user is able to successfully gain access to a secured system; in statistics, this is referred to as a Type II error. FRR measures the percentage of valid users who are rejected as impostors; in statistics, this is referred to as a Type I error. Both error rates should ideally be 0%. From a security point of view, Type II errors should be minimized, so that an unauthorized user has no chance to log in. However, Type I errors should also be infrequent, because valid users get annoyed if the system rejects them incorrectly. One of the most common measures of biometric systems is the rate at which the accept and reject errors are equal, known as the Equal Error Rate (EER) or the Cross-Over Error Rate (CER): the point at which the proportion of false acceptances equals the proportion of false rejections. The lower the equal error rate, the higher the accuracy of the biometric system.
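As an illustration of these measures, the following minimal Python sketch (not from the survey; the score arrays and the acceptance threshold are assumed inputs) computes FAR and FRR for a given threshold and approximates the EER by sweeping thresholds:

```python
import numpy as np

def far_frr(genuine, impostor, threshold):
    """FAR/FRR for similarity scores: accept when score >= threshold."""
    far = np.mean(np.asarray(impostor) >= threshold)  # impostors accepted
    frr = np.mean(np.asarray(genuine) < threshold)    # genuine users rejected
    return far, frr

def approximate_eer(genuine, impostor, steps=1000):
    """Sweep thresholds and return the error rate where FAR and FRR meet."""
    scores = np.concatenate([genuine, impostor])
    best_gap, best_eer = 1.0, None
    for t in np.linspace(scores.min(), scores.max(), steps):
        far, frr = far_frr(genuine, impostor, t)
        if abs(far - frr) < best_gap:
            best_gap, best_eer = abs(far - frr), (far + frr) / 2
    return best_eer

# Toy similarity scores: genuine attempts score higher on average.
genuine = np.random.normal(0.8, 0.1, 500)
impostor = np.random.normal(0.5, 0.1, 500)
print("EER ~", approximate_eer(genuine, impostor))
```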

V. KEYSTROKE ANALYSIS APPROACHES

A number of studies [5, 7, 10, 12, 20-22, 27, 28] have been performed in the area of keystroke analysis since its conception. There are two main keystroke analysis approaches for the purposes of identity verification: statistical techniques and neural network techniques; some studies combine both. The basic idea of the statistical approach is to compare a reference set of typing characteristics of a certain user with a test set of typing characteristics of the same user, or a test set of a hacker. The distance between these two sets (reference and test) should be below a certain threshold, or else the user is recognized as a hacker. The neural network approach first builds a prediction model from historical data, and then uses this model to predict the outcome of a new trial (or to classify a new observation). Although the studies vary in approach, from the keystroke information they utilise to the pattern classification techniques they employ, all have attempted to solve the problem of providing a robust and inexpensive authentication mechanism. Table I summarizes the main research approaches performed to date.

TABLE I. APPROACHES IN KEYSTROKE ANALYSIS

Study                            | Type    | Classification Technique      | Users | FAR (%)                          | FRR (%)
Joyce & Gupta (1990) [16]        | Static  | Statistical                   | 33    | 0.25                             | 16.36
Leggett et al. (1991) [18]       | Dynamic | Statistical                   | 36    | 12.8                             | 11.1
Brown & Rogers (1993) [6]        | Static  | Neural Network                | 25    | 0                                | 12.0
Bleha & Obaidat (1993) [27]      | Static  | Neural Network                | 24    | 8                                | 9
Napier et al. (1995) [23]        | Dynamic | Statistical                   | 24    | 3.8 (combined)                   | --
Obaidat & Sadoun (1997) [19]     | Static  | Statistical / Neural Network  | 15    | 0.7 / 0                          | 1.9 / 0
Monrose & Rubin (1999) [22]      | Static  | Statistical                   | 63    | 7.9 (combined)                   | --
Cho et al. (2000) [7]            | Static  | Neural Network                | 21    | 0                                | 1
Ord & Furnell (2000) [25]        | Static  | Neural Network                | 14    | 9.9                              | 30
Bergadano et al. (2002) [5]      | Static  | Statistical                   | 154   | 0.01                             | 4
Guven & Sogukpinar (2003) [13]   | Static  | Statistical                   | 12    | 1                                | 10.7
Sogukpinar & Yalcin (2004) [28]  | Static  | Statistical                   | 40    | 0.6                              | 60
Dowland & Furnell (2004) [9]     | Dynamic | Neural Network                | 35    | 4.9                              | 0
Yu & Cho (2004) [10]             | Static  | Neural Network                | 21    | 0                                | 3.69
Gunetti & Picardi (2005) [12]    | Static  | Neural Network                | 205   | 0.005                            | 5
Clarke & Furnell (2007) [8]      | Static  | Neural Network                | 32    | 5 (Equal Error Rate)             | --
Lee and Cho (2007) [14]          | Static  | Neural Network                | 21    | 0.43 (Average Integrated Errors) | --
Pin Shen Teh et al. (2008) [26]  | Static  | Statistical                   | 50    | 6.36 (Equal Error Rate)          | --
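To make the statistical approach described above concrete, here is a minimal Python sketch (not taken from any of the surveyed studies; all feature values, function names and the threshold are invented for this example). A reference profile is the per-feature mean and standard deviation of a user's enrollment samples, and a login attempt is accepted when its mean normalized distance to the profile falls below a threshold:

```python
import numpy as np

def enroll(samples):
    """Build a reference profile (per-feature mean and std) from
    enrollment samples, each a vector of digraph latencies in ms."""
    samples = np.asarray(samples, dtype=float)
    return samples.mean(axis=0), samples.std(axis=0) + 1e-6

def verify(profile, attempt, threshold=1.5):
    """Accept the attempt if its mean normalized distance to the
    reference profile is below the threshold."""
    mean, std = profile
    distance = np.mean(np.abs((np.asarray(attempt) - mean) / std))
    return distance < threshold

# Toy enrollment: five samples of four digraph latencies.
enrollment = [[110, 140, 95, 130], [115, 138, 99, 128],
              [108, 145, 92, 133], [112, 141, 97, 129],
              [111, 139, 96, 131]]
profile = enroll(enrollment)
print(verify(profile, [112, 140, 96, 130]))  # genuine-like -> True
print(verify(profile, [180, 90, 150, 70]))   # impostor-like -> False
```

The choice of threshold directly trades FAR against FRR: a looser threshold admits more impostors, a tighter one rejects more genuine users.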

VI. SECURITY OF KEYSTROKE DYNAMICS

So far, very little research has been conducted to analyze keystroke dynamics with respect to security [4]. The application of keystroke dynamics to computer access security is relatively new and not widely used in practice, and reports on real cases of breaking a keystroke dynamics authentication system do not exist. In the following section, keystroke dynamics schemes are analyzed with regard to the traditional attack techniques, which can be classified as: shoulder surfing, spyware, social engineering, guessing, brute force and dictionary attacks.

Shoulder Surfing
A simple way to obtain a user's password is to watch the user during authentication; this is called shoulder surfing. Whether keystroke dynamics is used in the verification or the identification mode, shoulder surfing is no threat to the authentication system. In the identification case no password is used, and therefore no password can be stolen; only the keystroke pattern is important and decisive. In the case of verification, an attacker may be able to obtain the password by shoulder surfing. However, keystroke dynamics for verification is a two-factor authentication mechanism: the keystroke pattern still has to match the stored profile.


Spyware
Spyware is software that records information about users, usually without their knowledge. Spyware is probably the best and easiest way to crack keystroke dynamics-based authentication systems. If a user unintentionally installs a Trojan which records all of the user's typing, keystroke latencies and keystroke durations, an attacker can use this information to reproduce the user's keystroke pattern: a program could simulate the user's typing and gain access to the system from the recorded keystroke pattern. Much more research in this area is expected.

Social Engineering
Social engineering is the practice of obtaining confidential information by manipulating legitimate users. A social engineer commonly uses the telephone or the Internet to trick people into revealing sensitive information, or into doing something that is against typical policies. Using this method, social engineers exploit the natural tendency of a person to trust someone's word, rather than exploiting computer security holes. Phishing is social engineering via e-mail or other electronic means. At first sight, social engineering is not possible with keystroke dynamics: in the identification case there is no password that can be given away, not even on purpose, so asking for the password on the phone while pretending to be the authorized user is not feasible. Nevertheless, phishing may be a way of tricking a user into giving away his keystroke pattern. The attacker might pose as a trustworthy party and ask the user to log on to a primed website; when the user logs on, the attacker can record the keystroke rhythm of the user. However, the success rate would probably be very low, since the user must type his username and password several times in order to yield a meaningful keystroke pattern.

Guessing
People use common words for their passwords, but the typing style of another user can hardly be simulated: there are simply too many varieties of ways of typing on a keyboard. Guessing of typing rhythms is therefore impossible in keystroke dynamics.

Brute Force
In a brute force attack, an intruder tries all possible combinations to crack a password. The more complex a password is, the more secure it is against brute force attacks; the main defense against brute force search is to have a sufficiently large password space. The password space of keystroke dynamics authentication schemes is quite large, so it is nearly impossible to carry out a brute force attack against keystroke dynamics: the attack programs would need to automatically generate keystroke patterns and imitate human input. If keystroke dynamics is used in a two-factor authentication mechanism, that is, password plus keystroke pattern, it is almost impossible to overpower the security system.

Dictionary Attack
A dictionary attack [4] is a technique for defeating an authentication mechanism by trying to determine its passphrase from a large number of likely possibilities. In contrast to a brute force attack, where all possibilities are searched exhaustively, a dictionary attack tries only the possibilities that are most likely to succeed, typically derived from a list of words in a dictionary. As with brute force searches, it is impractical to carry out dictionary attacks against keystroke dynamics authentication mechanisms. It is possible to use a dictionary attack consisting of general keystroke patterns, but an automated dictionary attack would be much more complex than a text-based dictionary attack; again, the attack programs would need to automatically generate keystroke patterns and imitate human input. Overall, keystroke dynamics is less vulnerable to brute force and dictionary attacks than text-based passwords.

VII. CHALLENGES

Keystroke dynamics is a behavioral pattern exhibited by an individual while typing on a keyboard [21]. User authentication through keystroke dynamics is appealing for many reasons: (i) it is not intrusive, and (ii) it is relatively inexpensive to implement, since the only hardware required is the computer [12]. Unlike physiological biometrics such as fingerprints, retinas and facial features, all of which remain fairly consistent over long periods of time, typing patterns can be rather erratic. Even though any biometric can change over time, typing patterns change on a smaller time scale. Not only are typing patterns inconsistent compared to other biometrics, a person's hands can also get tired or sweaty after prolonged periods of typing, which often results in major pattern differences over the course of a day. Another substantial problem is that typing patterns vary based on the type of keyboard being used, the keyboard layout (i.e. QWERTY or Dvorak), whether the individual is sitting or standing, the person's posture if sitting, and so on. The distributed nature of keyboard biometrics also means that additional inconsistencies may be introduced into typing pattern data.

VIII. CONCLUSION

The future of biometric technologies is promising; biometric devices and applications continue to grow worldwide. Several factors will push the growth of biometric technologies. A major inhibitor of the growth of biometrics has been the cost of implementing them; increased accuracy rates will also play a big part in the acceptance of biometric technologies. The development of and research into biometric error testing, false reject (false non-match) and false accept (false match), has been of keen interest to biometric developers. Keystroke dynamics, being one of the cheapest forms of biometrics, has great scope. In this paper an effort has been made to present the existing approaches, security and challenges in keystroke dynamics, in order to motivate researchers to come up with further novel ideas.

REFERENCES

[1] Ahmed Awad E. Ahmed and Issa Traore, "Anomaly Intrusion Detection based on Biometrics", Proceedings of the IEEE, 2005.
[2] Anil K. Jain, Arun Ross and Salil Prabhakar, "An Introduction to Biometric Recognition", IEEE Transactions on Circuits and Systems for Video Technology, Special Issue on Image- and Video-Based Biometrics, Vol. 14, No. 1, January 2004.
[3] Attila Meszaros, Zoltan Banko and Laszlo Czuni, "Strengthening Passwords by Keystroke Dynamics", IEEE International Workshop on Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications, 6-8 September 2007.
[4] Benny Pinkas, "Securing Passwords Against Dictionary Attacks", CCS'02, 18-22 November 2002.
[5] Bergadano, F., et al., "User Authentication through Keystroke Dynamics", ACM Transactions on Information and System Security, Vol. 5, pp. 367-397, November 2002.
[6] Brown, M., Rogers, J., "User Identification via Keystroke Characteristics of Typed Names using Neural Networks", International Journal of Man-Machine Studies, vol. 39, pp. 999-1014, 1993.
[7] Cho et al., "Web based keystroke dynamics identity verification using neural network", Journal of Organizational Computing and Electronic Commerce, Vol. 10, No. 4, pp. 295-307, 2000.
[8] Clarke, N. L. and Furnell, S. M., "Authenticating mobile phone users using keystroke analysis", International Journal of Information Security, 6(1): 1-14, 2007.
[9] Dowland, P. and Furnell, S., "A long-term trial of keystroke profiling using digraph, trigraph and keyword latencies", in Proceedings of IFIP/SEC 19th International Conference on Information Security, pp. 275-289, 2004.
[10] Enzhe Yu, Sungzoon Cho, "Keystroke dynamics identity verification: its problems and practical solutions", Computers & Security, 2004.
[11] Glaucya C. Boechat, Jeneffer C. Ferreira and Edson C. B. Carvalho Filho, "Using the Keystrokes Dynamic for Systems of Personal Security", Proceedings of World Academy of Science, Engineering and Technology, Volume 18, December 2006.
[12] Gunetti and Picardi, "Keystroke analysis of free text", ACM Transactions on Information and System Security, volume 8, pp. 312-347, 2005.
[13] Guven, A. and I. Sogukpinar, "Understanding users' keystroke patterns for computer access security", Computers & Security, 22, pp. 695-706, 2003.
[14] Hyoungjoo Lee, Sungzoon Cho, "Retraining a keystroke dynamics-based authenticator with impostor patterns", Computers & Security, 26(4): 300-310, 2007.
[15] John A. Robinson, Vicky M. Liang, J. A. Michael Chambers and Christine L. MacKenzie, "Computer User Verification Using Login String Keystroke Dynamics", IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans, Vol. 28, No. 2, March 1998.
[16] Joyce, R., Gupta, G., "Identity Authentication Based on Keystroke Latencies", Communications of the ACM, vol. 33, pp. 168-176, 1990.
[17] Lawrence O'Gorman, "Comparing Passwords, Tokens, and Biometrics for User Authentication", Proceedings of the IEEE, Vol. 91, No. 12, pp. 2019-2040, December 2003.
[18] Leggett, J., Williams, G., Usnick, M., "Dynamic Identity Verification via Keystroke Characteristics", International Journal of Man-Machine Studies, 1991.
[19] Mohammad S. Obaidat, Balqies Sadoun, "Verification of computer users using keystroke dynamics", IEEE Transactions on Systems, Man, and Cybernetics, Part B, 27(2): 261-269, April 1997.
[20] Monrose, F., Reiter, M., Wetzel, S., "Password Hardening Based on Keystroke Dynamics", International Journal of Information Security, pp. 1-15, 2001.
[21] Monrose, F., Rubin, A., "Authentication via Keystroke Dynamics", Proceedings of the 4th ACM Conference on Computer and Communications Security, pp. 48-56, April 1997.
[22] Monrose, R., Rubin, A., "Keystroke Dynamics as a Biometric for Authentication", Future Generation Computer Systems, 16(4), pp. 351-359, 1999.
[23] Napier, R., Laverty, W., Mahar, D., Henderson, R., Hiron, M., Wagner, M., "Keyboard User Verification: Toward an Accurate, Efficient and Ecologically Valid Algorithm", International Journal of Human-Computer Studies, vol. 43, pp. 213-222, 1995.
[24] Obaidat, M. S., Sadoun, B., "Verification of Computer Users Using Keystroke Dynamics", IEEE Transactions on Systems, Man and Cybernetics - Part B: Cybernetics, Vol. 27, No. 2, 1997.
[25] Ord, T., Furnell, S., "User Authentication for Keypad-Based Devices using Keystroke Analysis", MSc Thesis, University of Plymouth, UK, 2000.
[26] Pin Shen Teh et al., "Statistical Fusion Approach on Keystroke Dynamics", Third International IEEE Conference on Signal-Image Technologies and Internet-Based Systems, 2007.
[27] S. Bleha and M. S. Obaidat, "Computer user verification using the perceptron", IEEE Transactions on Systems, Man, and Cybernetics, vol. 23, no. 3, pp. 900-902, May 1993.

AUTHORS PROFILE

Shanmugapriya D. received the B.Sc. and M.Sc. degrees in Computer Science from Avinashilingam University for Women, Coimbatore, in 1999 and 2001 respectively. She received the M.Phil degree in Computer Science from Manonmaniam Sundaranar University, Thirunelveli, in 2003, and is pursuing her PhD at Avinashilingam University for Women. She is currently working as a Lecturer in Information Technology in the same University and has eight years of teaching experience. Her research interests are Biometrics, Network Security and System Security.

Dr. Padmavathi Ganapathi is the Professor and Head of the Department of Computer Science, Avinashilingam University for Women, Coimbatore. She has 21 years of teaching experience and one year of industrial experience. Her areas of interest include network security, cryptography and real-time communication. She has more than 50 publications at the national and international level. She is a life member of many professional organizations, such as CSI, ISTE, AACE, WSEAS, ISCA and UWA.


Agent's Multiple Architectural Capabilities: A Critical Review

Ritu Sindhu, Ph.D Scholar, Banasthali University and WIT, Gurgaon, India. [email protected]
Prof. G. N. Purohit, Dean, Banasthali University, Rajasthan, India. [email protected]

Abdul Wahid, Professor, CS, Ajay Kumar Garg Engineering College, Ghaziabad, India. [email protected]

Abstract— It is true that any organization of non-trivial size, scope and function is destined to change; an organization which is not robust, evolvable or adaptable cannot survive. To model an adaptable agent organization, the capability must be present to transition from one state to the next over the life of the organization. The organization model must include not only the structural components, but also the ability to facilitate change. The objective of this paper is to present some of the important capabilities possessed by agents. Twelve architectures, representing a wide range of current architectures in artificial intelligence (AI), have been used for this preliminary analysis. The aim of this paper is to understand the various capabilities that agents should generally possess; because of the differing designs of the agent architectures taken into consideration, the capabilities also vary from one agent to another.

Key Words: Adaptive Intelligent System, Meta-Reasoning Architecture, Entropy Reduction Engine.

I. INTRODUCTION

A complete functioning agent, whether biological, simulated in software, or implemented in the form of a robot, needs an integrated collection of diverse but interrelated capabilities. At present, most work in AI and Cognitive Science addresses only components of such an architecture (e.g. vision, speech understanding, concept formation, rule learning, planning, motor control, etc.) or mechanisms and forms of representation and inference (logic engines, condition-action rules, neural nets, genetic algorithms) which might be used by many components. While such studies can make useful contributions, it is important to ask, from time to time, how everything can be put together, and that requires the study of architectures [6]. Besides differences in levels of abstraction or implementation, there are differences in types of functionality. A human-like agent needs to be able to perform a large and diverse collection of tasks, both externally (finding and consuming food, avoiding predators, building shelters, making tools, etc.) and internally (interpreting sensory data, generating motives, evaluating motives, selecting motives, creating plans, storing information for future use, making inferences from new or old information, detecting inconsistencies, monitoring plan execution, monitoring various kinds of internal processing, noticing resemblances, creating new concepts and theories, discovering new rules, noticing new possibilities, etc.).

In this paper we list the majority of these capabilities and, finally, indicate in the form of a table which agents among the twelve we consider possess which capabilities. The twelve agents taken into consideration are the Subsumption Architecture, ATLANTIS, Theo, Prodigy, ICARUS, Adaptive Intelligent Systems (AIS), A Meta-reasoning Architecture for 'X' (MAX), Homer, Soar, Teton, RALPH-MEA and the Entropy Reduction Engine (ERE) [1][2]; their features are shown in Table 1 below.

TABLE-1. FEATURES OF THE TWELVE AGENT ARCHITECTURES

1. SUBSUMPTION (S1): A reactive robot architecture; a way of decomposing "simple" behaviour modules into layers.

2. THEO (T1): An example of a Plan-Then-Compile architecture; integrates learning, planning and knowledge representation.

3. ICARUS (C): Designed around a specific representation of long-term memory; represents all knowledge in a hierarchy of probabilistic concepts.

4. PRODIGY (D): Stores knowledge in a form of first-order predicate logic (FOPL) called the Prodigy Description Language (PDL); has a modular architecture.

5. ATLANTIS (A1): A Three-Layer Architecture for Navigating Through Intricate Situations; a hybrid reactive/deliberative robot architecture with three layers: a control layer, a sequencing layer and a deliberative layer.

6. SOAR (S2): Soar (originally known as SOAR, State Operator And Result) is a symbolic cognitive architecture. The main goal of the Soar project is to handle difficult open-ended problems. The Soar architecture is grounded in the physical symbol system hypothesis, and its ultimate goal is to achieve general intelligence.

7. HOMER (H): Not designed for general intelligence; a modular architecture consisting of a memory, a planner, a natural-language interpreter and generator, reflective processes and a plan executor.

8. TETON (T2): A problem solver; uses two memory areas, short-term memory and long-term memory. As with human beings, interruptions are allowed; it has a feature called the Execution Cycle which always looks for what to do next.

9. Entropy Reduction Engine (ERE): An architecture for the integration of planning, scheduling and control, designed to support both multiple planning methods and multiple learning methods; uses many different problem-solving methods such as problem reduction, temporal projection and rule-based execution.

10. RALPH-MEA (R): A multiple execution architecture; like a human being, it selects the best option from the environment. RALPH-MEA uses an Execution Architecture (EA) to move from one state to the best next state, employing condition-action, action-utility, goal-based and decision-theoretic execution.

11. Adaptive Intelligent System (AIS) (A2): Reasons about and interacts with other dynamic entities in real time; provides problem-solving techniques; when encountering an unexpected situation, decides whether and how to respond; able to coordinate with external agents.

12. Meta-Reasoning (MAX): The philosophy behind MAX is that it is uniform and declarative. MAX may be traced to Prodigy; the architecture only supplies the programmer with a means-ends-analysis (MEA) planner. MAX is designed to support modular agents, which respond to a dynamic environment in a timely manner and may be declared at runtime. Some of the modules are: attention focusing; multiple problem-solving strategies; execution monitoring; goal-directed exploration; explanation-based learning; process interruption.

II. CAPABILITIES RELATED TO LEARNING

We discuss here the various learning capabilities, briefly commenting on each [4].

A. Single Learning Method
A system is said to learn if it is capable of acquiring new knowledge from its environment. Learning may also enable the ability to perform new tasks without having to be redesigned or reprogrammed, especially when accompanied by generalization. Learning is most often accomplished in a system that supports symbolic abstraction. This type of learning is distinct from the acquisition of knowledge through direct programming by the designer.

B. Multi-Method Learning
As a capability, learning is often thought of as one of the necessary conditions for intelligence in an agent. Some systems extend this requirement by including a plethora of mechanisms for learning in order to obtain as much as possible from the system, or to allow various components of their system to learn in their own ways (depending on the modularity, representation, etc., of each). Additionally, multiple methods are included in a system in order to gauge the performance of one method against that of another.

C. Caching
Caching can be seen as rote learning, but can also be seen as a form of explanation-based learning. This is simply storing a computed value to avoid having to compute it in the future. Caching vastly reduces the high cost of relying on meta-knowledge and the necessary retrieval and application [20]. A small sketch of this idea follows.

http://sites.google.com/site/ijcsis/ ISSN 1947-5500

(IJCSIS) International Journal of Computer Science and Information Security, Vol. 5, No. 1, 2009

.

to some off-line programming) is said to be able to learn from instruction. Some instruction is completely unidirectional: a teacher simply gives the agent the knowledge in a sequential series of instructions. Other learning is interactive: the teacher is prepared to instruct the agent when the agent lacks knowledge and requests it. This last method supports experiential learning in which a teacher may act as both a guide (when called upon) and as an authority (when the agent is placing itself in danger or making a critical mistake).

and action for a particular problem. Abstraction is often used in planning and problem solving in order to form a condition list for operators that lead from one complex state to another based on the criticality of the precondition. For instance, in an office environment, a robot with a master key can effectively ignore doors if it knows how to open doors in general. Thus, the problem of considering doors in a larger plan may be abstracted from the problem solving. This can be performed by the agent repeatedly to obtain the most general result. Some architectures limit abstraction to avoid the problem of over-generalization, resulting in mistaken applications of the erroneously abstracted operator.

E. Learning from Experimentation Learning from experimentation, also called discovery, involves the use of domain knowledge, along with observations made about the environment, to extend and refine an agent's domain knowledge. The more systematic an agent manipulates its environment to determine new information, the more its behavior seems to follow traditional scientific experimental paradigms. However, the agent's action need not be so planned to produce new behaviour[5].

I. Explanation-Based Learning When an agent can utilize a worked example of a problem as a problem-solving method, the agent is said to have the capability of explanation-based learning (EBL). This is a type of analytic learning. The advantage of explanation-based learning is that, as a deductive mechanism, it requires only a single training example (inductive learning methods often require many training examples). However, to utilize just a single example most EBL algorithms require all of the following:

F. Learning by Analogy Reasoning by analogy generally involves abstracting details from a particular set of problems and resolving structural similarities between previously distinct problems. Analogical reasoning refers to this process of recognition and then applying the solution from the known problem to the new problem. Such a technique is often identified as case-based reasoning. Analogical learning generally involves developing a set of mappings between features of two instances. Paul Thagard and Keith Holyoak have developed a computational theory of analogical reasoning that is consistent with the outline above, provided that abstraction rules are provided to the model[19].

• • • •

The training example A Goal Concept An Operationally Criteria A Domain Theory

From the training example, the EBL algorithm computes a generalization of the example that is consistent with the goal concept and that meets the operationality criteria (a description of the appropriate form of the final concept). One criticism of EBL is that the required domain theory needs to be complete and consistent. Additionally, the utility of learned information is an issue when learning proceeds indiscriminately. Other forms of learning that are based on EBL are knowledge compilation, caching and macro-ops.

G. Inductive Learning and Concept Acquisition In contrast to abstraction, concept acquisition refers to the ability of an agent to identify the discriminating properties of objects in the world, to generate labels for the objects and to use the labels in the condition list of operators, thereby associating operations with the concept.

J. Transfer of Learning A capability that comes from generalization and is related to learning by analogy. Learned information can be applied to other problem instances and possibly even other instances. Three specific types of learning transfer are normally identified:

Concept acquisition normally proceeds from a set of positive and negative instances of some concept (or group of segregated concepts). With the presentation of the instances, the underlying algorithm makes correlations between the feature of the instances and their classification. The problem with this technique as it is described here is that it requires the specification of both relevant features and the possible concepts. In general, as an inductive technique, concept acquisition should be able to generate new concepts spontaneously and to recognize the relevant features over the entire input domain. H. Learning from Abstraction Contrasted with concept acquisition, abstraction is the ability to detect the relevant -- or critical -- information 122



Within-Trial Learning applies immediately to the current situation.



Within-Task Learning is general enough that it may apply to different problem instances in the same domain.



Across-Task Learning applies to different domains. Examples here include some types of concept acquisition in http://sites.google.com/site/ijcsis/ ISSN 1947-5500

(IJCSIS) International Journal of Computer Science and Information Security, Vol. 5, No. 1, 2009

.

which a concept learned in one domain (e.g., blocks) can be related to other concepts (e.g., bricks) through similarities (e.g., stackable). Acrosstask learning is then strongly[7][8].

III.

An intelligent agent should update its plan when it learns new information which helps it accomplish its current goal more quickly. For instance, it may be the case that in the process of satisfying one goal the agent also satisfies one or more of its other goals. The agent should recognize when it has already satisfied a goal and change its plan accordingly.

CAPABILITIES RELATED TO PLANNING AND PROBLEM SOLVING

In addition, an agent should replan when facts about the world upon which its current plan are based change. This is important when in the act of achieving one goal the agent undoes another. The agent must realize this and update its plan to satisfy both goals.

A. Planning Planning is arguably one of the most important capabilities for an intelligent agent to possess. In almost all cases, the tasks which these agents must carry out are expressed as goals to be achieved; the agent must then develop a series of actions designed to achieve this goal.

Re-planning is a capability that arises from other capabilities, namely planning and interrupt ability.

The ability to plan is closely linked to the agent's representation of the world. It seems that effective planning requires that 1) knowledge of the world is available to the planner, and since most worlds in which we are interested are reasonably complex, this is a strong motivation for implementing 2) a symbolic representation of knowledge. Typically, this knowledge contains information about possible actions in the world, which is then used by the planner in constructing a sequence of actions[18].

D. Support of Multiple, Simultaneous Goals Taskable agents can support the achievement of externally specified top-level goals. Some of these agents can support the achievement of many top-level goals at once. This is usually performed in conjunction with planning such that the goals are sequenced in some rational way. E. Self Reflection Systems which are capable of self reflection are able to examine their own internal processing mechanisms. They can use this capability to explain their behavior, and modify their processing methods to improve performance. Such systems must have some form of Meta-Knowledge available, and in addition, they must actively apply the Meta-Knowledge to some task. The list below explains the common uses of self reflection.

Planning itself is a prerequisite for several other capabilities that are often instantiated in intelligent agents. Certainly, problem solving relies heavily on planning, as most approaches to problem solving consist of incremental movements toward a solution; planning is integral to assembling these steps. Learning and planning have a reciprocal relationship wherein planning creates a new method for carrying out a task, which can then be learned for future use by the planner. [5]

E.I

B. Problem Solving It may seem that all agents must solve problems, as indeed they must, but problem solving in the technical sense is the ability to consider and attain goals in particular domains using domain-independent techniques (such as the weak methods) as well as domain knowledge. Problem Solving includes the capability to acquire and reason about knowledge, although the level to which such capability is supported differs between architectures[17]. Problem solving, especially human problem solving has been characterized as deliberate movement through a problem space. A problem space defines the states that are possible for a particular problem instance and the operators available to transform one state to another. In this formulation, problem solving is search through the state space by applying operators until a recognizable goal state is reached. [2]

Learning

Many systems reflect upon traces of problem solutions and try to extract generalities from them to improve their problem solving strategies. Histories of past problem solutions can be collectively examined to find commonalities that can lead to case based learning. E.2

Performance Fine Tuning

Performance can be fine tuned by gathering statistics on the efficiency of various problem solving methods. These statistics are then examined to determine which problem solving methods are most efficient for certain classes of problems. This is closely related to the learning capability described above. E.3

Explanation

Systems can use self reflection to explain their behaviour to an outside observer. This action is often performed by examining traces of the problem solution and reporting key aspects of it.

C. Replanning Intelligent agents operating in dynamic environments often find it necessary to modify or completely rebuild plans in response to changes in their environment. There are several situations in which an agent should re-plan.

E.4

Episodic Recall

Self Reflection can take the form of reporting past

123

http://sites.google.com/site/ijcsis/ ISSN 1947-5500

(IJCSIS) International Journal of Computer Science and Information Security, Vol. 5, No. 1, 2009

.

experiences to an outside observer. This is usually accomplished through some form of episodic memory, where experiences are stamped with indications of when they occurred.

H. Inductive and Deductive Reasoning Deductive reasoning can be described as reasoning of the form if A then B. Deduction is in some sense the direct application of knowledge in the production of new knowledge. However, this new knowledge does not represent any new semantic information: the rule represents the knowledge as completely as the added knowledge since any time the assertions (A) are true then the conclusion B is true as well. Purely deductive learning includes methods such as caching, building macro-operators, and explanation-based learning.

There are several different mechanisms that can be included in architecture to help facilitate self reflection. These are explained below. E.5 Episodic Memory Episodic memory is directly applicable to episodic recall. This type of memory is often costly, however, both in terms of space and time. As the agent's experiences grow the size of the memory space to store these experiences must grow as well. In addition, searching through past experiences for some specific detail is often too time consuming to be practical.

In contrast to this, inductive reasoning results in the addition of semantic information. There are a great many ways in which inductive inference has been characterized but most are similar to those specified by the philosopher John Stuart Mill (1843). Combinations of inductive and/or deductive reasoning are present in most cognitive architectures that utilize a symbolic world model and are described in the individual architecture document with more specific capabilities such as planning and learning.

F. Meta-Reasoning Reasoning about reasoning, or meta-reasoning is a critical capability for agents attempting to display general intelligence. Generally intelligent agents must be capable of constantly improving skills, adapting to changes in the world, and learning new information. Meta-reasoning can be deployed implicitly through mechanisms such as domain-independent learning, or explicitly using, for example, declarative knowledge which the agent can interpret and manipulate. The domain-independent approaches seem the most successful so far.

IV. CAPABILITIES RELATED TO INTERACTION WITH THE ENVIROMENT A. Prediction Our use of the term prediction refers to an architecture's ability to predict what the state of the world is or might be what things might happen in the outside world, and what other things might happen as a consequence of the agent's actions. It should be clear that, for architecture to be able to predict it needs to have a fairly good and consistent model of the outside world. In fact, architectures with no such model are unable to do prediction.

Other aspects of meta-reasoning include the consideration of computational costs of processing, leading to the issues such as focused processing and realtime performance. G. Expert-Systems Capability (Diagnosis and Design) An expert system is an artificial intelligence technique in which the knowledge to accomplish a particular task (or set of tasks) is encoded a priori from a human expert. An expert system typically consists to two pieces. The knowledge base represents the expert's domain knowledge and must be encoded as efficiently as possible due to the size of the database. This representation often takes the form of rules. The reasoner exploits the knowledge in the rules in order to apply that knowledge to a particular problem. Expert systems often have an explanation facility as well.

B. Query Answering and Providing Explanations for Decisions Query answering is the ability to query the agent about things like past episodes ("Where were you last night?"), or the current state of the world ("Are your fingernails clean?"). If not posed in natural language, some of these queries are quite simple if the agent simply has episodic or state information immediately available. While a number of architecture discussions omitted query answering, many have general problem-solving ability that could be applied in this direction.

Production systems are often used to realize expert systems. Expert systems also often lag the cutting edge of AI research since they are normally more applicationoriented. Examples of implemented expert systems include: • • • • •

. C.

Navigational Strategies

Agents constructed under the hypothesis of situated action often have rudimentary reactions built into the architecture. These built-in reactions give rise to the strategy that the agent will take under certain environmental conditions. Reactive agents, such as the Brooksian agents, have emergent navigational strategies. Other agents augment emergent strategies with a degree of explicit planning.

MYCIN: Diagnosis of Infectious Diseases MOLE: Disease Diagnosis PROSPECTOR: Mineral Exploration Advice DESIGN ADVISOR: Silicon Chip Design Advice R1: Computer Configuration

D. Natural Language Understanding Natural language understanding and generation abilities

124

http://sites.google.com/site/ijcsis/ ISSN 1947-5500

(IJCSIS) International Journal of Computer Science and Information Security, Vol. 5, No. 1, 2009

.

are required to communicate with other agents, particularly with people. Natural language understanding corresponds to receiving words from outside world, and natural language generation corresponds to sending words that may be compiled internal deliberation of the agent itself, to external world.

B. Focused Behaviour and Processing/Selective Attention The designers of most intelligent agents intend their agents to be operative in a complex, dynamic environments, usually the "real world" or some subset thereof. This, however, often causes significant practical problems: the real world provides a literally overwhelming amount of information to the agent; if the agent were to attempt to sense and process all this information, there would be very few computational resources remaining for other processes such as planning or learning.

E. Perception Perception refers to the extraction of knowledge (usually in the form of signals) from the environment. One characteristic of perception is that it may integrate sensory information from different modalities. For example, in humans the modalities of perception correspond to the five senses: taste, touch, sight, sound, and smell.

C.

Goal Reconstruction

Agents that sense the world and generate knowledge accessible to processes that reasons are said to perceive the world. Perception drives a continuum of behaviors that extend from the simplicity of a thermostat which simply measures the temperature to the assumption used by some agents that objects containing all relevant information about things in the world get inserted into the agent's knowledge[11][13].

Goal reconstruction is the ability of an agent to exploit short-cuts to return to a problem where it was last left off, even when the memory in which the problem was stored has been used for other purposes. This capability is implicit in some architecture and explicit in others. Kurt VanLehn argues that goal reconstruction is critical to mimic the human capability of quickly restarting a problem after being indefinitely interrupted. Teton employs goal reconstruction explicitly using two mechanisms in order to balance efficiency and speed with robustness[12].

F. Support for Inaccurate Sensing Sensors provide incomplete information and the state of the agent is always behind the state of the external environment. Some agents account for this in the architecture while others make tacit or explicit assumptions (or requirements) that sensors be perfect. Several architectures support inaccuracies and delays in sensing. Others assume or require that sensors be perfect.

D. Responding Intelligently to Interrupts and Failures The ability to respond intelligently to interrupts is extremely important for agents that must operate in a dynamic environment. In particular, interrupt ability may be an important feature that supports reactivity but neither property implies the other. E. Human-like Math Capability Humans often solve arithmetic problems the "long way". The optimal bit-based methods of the computer are not natural and, as such, not employed by humans. Several psychological experiments have been performed showing that, not only are the arithmetic operations used by humans not optimal, but the long-hand algorithms can be suboptimal and sometimes inconsistent. Some humans classify problems before approaching them (even the classifications can be inconsistent) and use a personal method that varies consistently with the class of problem[14][15].

G. Robotic Tasks Navigation, sensing, grabbing, picking up, putting down and the host of Blocks' World tasks can be considered robotic. Agents that attempt to solve problems in dynamic environment must support these capabilities.

V.

CAPABILITIES RELATED TO EXECUTION

A. Real-Time Execution While speed is an issue in all architectures to varying degrees, the ability to guarantee real-time performance places a tighter restriction on the speed requirements of the system. Real-time performance means that the agent is guaranteed to behave within certain time constraints as specified by the task and the environment.

Kurt VanLehn argues that a non-Last-In, First-Out (LIFO) goal reconstruction technique can reproduce this behavior. An essential component to the reproduction of this behavior is that goals cannot be managed by a LIFO stack. VanLehn's Teton architecture was designed specifically to model these types of behaviors. Additionally, the Soar architecture has also been applied to the cognitively-plausible solution of math problems.In the following table-2 the rows represent the capabilities and the column corresponds to agents with specific architecture. In the cells ‘Y’ indicates that agent corresponds to the column possess the capability represented by the row[16].

This is especially challenging in a dynamic environment because it provides an very tight time constraint on performance. Perfect rationality is perhaps impossible to guarantee when operating under a real-time constraint and thus some architectures will satisfy with bounded rationality to achieve this goal.

125

http://sites.google.com/site/ijcsis/ ISSN 1947-5500

(IJCSIS) International Journal of Computer Science and Information Security, Vol. 5, No. 1, 2009

.

TABLE-2. CAPABILITIES OF THE TWELVE ARCHITECTURES

Table 2 maps each capability (rows) against the twelve architectures (columns, coded S1, A1, T1, P, I, A2, S2, T2, M, H, R and E, following the codes introduced in Table 1), with a 'Y' marking each capability an architecture possesses. The capability rows are: Single Learning Method; Multi-Method Learning; Caching; Learning by Instruction; Learning from Experimentation; Learning by Analogy; Inductive Learning and Concept Acquisition; Abstraction; Explanation-based Learning; Transfer of Learning; Planning; Problem Solving; Replanning; Support of Multiple, Simultaneous Goals; Self Reflection; Meta-Reasoning; Expert System Capability; Inductive and Deductive Reasoning; Prediction; Query Answering and Providing Explanations for Decisions; Navigational Strategies; Natural Language Understanding; Perception; Support for Inaccurate Sensing; Robotic Tasks; Real-time Execution; Focused Behavior and Processing/Selective Attention; Goal Reconstruction; Responding Intelligently to Interrupts and Failures; Human-like Math Capability. [The individual 'Y' cell assignments are not recoverable from the source layout.]

VI. CONCLUSION

In this paper we have listed the major agent architectures, analysing a wide range of current architectures in Artificial Intelligence. The main objective of this paper is to understand the various capabilities of agents; the capabilities vary from one agent to another because of their differing architectural designs. Finally, in the form of a table, we have indicated which of the twelve agents possess which kinds of capabilities. This leads to the study of Multi-Agent Systems and their applications; an in-depth analysis of the various agent architectures and their capabilities will help us build a Multi-Agent System suitable for our future work on Supply Chain Management.

REFERENCES

Y


[1] Anderson, J. (1991). Cognitive architectures in a rational analysis. In K. VanLehn (ed.), Architectures for Intelligence, pp. 1-24, Lawrence Erlbaum Associates, Hillsdale, N.J.
[2] Albus, J. S. (1992). A reference model architecture for intelligent systems design. In Antsaklis, P. J., and Passino, K. M., eds., An Introduction to Intelligent and Autonomous Control, pp. 57-64. Boston, MA: Kluwer Academic Publishers.
[3] Anderson, J. R.; Bothell, D.; Byrne, M. D.; and Lebiere, C. An integrated theory of the mind. To appear in Psychological Review.
[4] Andronache, V., and Scheutz, M. (2004a). ADE: a tool for the development of distributed architectures for virtual and robotic agents. In Proceedings of the Fourth International Symposium "From Agent Theory to Agent Implementation".
[5] Andronache, V., and Scheutz, M. (2004b). Integrating theory and practice: The agent architecture framework APOC and its development environment ADE. In Proceedings of AAMAS 2004.
[6] Arkin, R. C., and Balch, T. R. (1997). AuRA: principles and practice in review. JETAI 9(2-3):175-189.
[7] Arkin, R. C. (1989). Motor schema-based mobile robot navigation. International Journal of Robotics Research 8(4):92-112.
[8] Bonasso, R. P.; Firby, R.; Gat, E.; Kortenkamp, D.; Miller, D.; and Slack, M. (1997). Experiences with an architecture for intelligent, reactive agents. Journal of Experimental and Theoretical Artificial Intelligence 9(1).
[9] Bonasso, R. P., and Kortenkamp, D. (1995). Characterizing an architecture for intelligent, reactive agents. AAAI Spring Symposium.
[10] Brooks, R. A. (1986). A robust layered control system for a mobile robot. IEEE Journal of Robotics and Automation 2(1):14-23.
[11] Fikes, R., Nilsson, N. (1971). STRIPS: A new approach to the application of theorem proving to problem solving. Artificial Intelligence, 2, pp. 189-208.
[12] Firby, R. J., and Slack, M. G. (1995). Task execution: Interfacing to reactive skill networks. AAAI Spring Symposium on Software Architectures.
[13] Gat, E. (1992). Integrating planning and reacting in a heterogeneous asynchronous architecture for controlling real-world mobile robots. AAAI-92: 809-815.
[14] Horswill, I. (2000). Functional programming of behavior-based systems. Autonomous Robots (9):83-93.
[15] Laird, J. E.; Newell, A.; and Rosenbloom, P. S. (1987). SOAR: An architecture for general intelligence. Artificial Intelligence 33:1-64.
[16] Laird, J.; Rosenbloom, P.; and Newell, A. (1986). Chunking in Soar: The anatomy of a general learning mechanism. Machine Learning 1:11-46.
[17] Langley, P.; Shapiro, D.; Aycinena, M.; and Siliski, M. (2003). A value-driven architecture for intelligent behavior. In Proceedings of the IJCAI-2003 Workshop on Cognitive Modeling of Agents and Multi-Agent Interactions.


[18] Langley, P., Cummings, K., and Shapiro, D. (2004). Hierarchical skills and cognitive architectures. In Proceedings of the Twenty-Sixth Annual Conference of the Cognitive Science Society.
[19] Lyons, D. M., and Arbib, M. A. (1989). A formal model of computation for sensory-based robotics. IEEE Transactions on Robotics and Automation 5(3):280-293.
[20] Schoppers, M. (1987). Universal plans for reactive robots in unpredictable environments. Proceedings of the IEEE, Vol. 77, No. 1 (January), pp. 81-98.

AUTHORS PROFILE

Ritu Sindhu: Ph.D. scholar, Banasthali University and WIT Gurgaon. Completed her B.Tech (CSE) from U.P.T.U., Lucknow, and M.Tech (CSE) from Banasthali University, Rajasthan.

Abdul Wahid: Presently working as a professor in the Computer Science department at AKG Engineering College, Ghaziabad, India. Completed his MCA, M.Tech., and Ph.D. in Computer Science from Jamia Millia Islamia (Central University), Delhi.

G. N. Purohit: Dean, Banasthali University, Rajasthan.


Prefetching of VoD Programs Based On ART1 Requesting Clustering

P Jayarekha* and Dr. T R GopalaKrishnan Nair**
* Research Scholar, Dr. MGR University; Dept. of ISE, BMSCE, Bangalore; Member, Multimedia Research Group, Research Centre, DSI, Bangalore. [email protected]
** Director, Research and Industry Incubation Centre, DSI, Bangalore. [email protected]

Abstract: In this paper we propose a novel approach to grouping users according to their VoD request patterns. We cluster the user requests with the ART1 neural network algorithm, and the knowledge extracted from each cluster is used to prefetch multimedia objects before the users request them. The algorithm we have developed clusters users by their request patterns using ART1, which offers unsupervised clustering, and adapts to changes in user request patterns over time without losing previous information. Each cluster is represented by a prototype vector that generalizes the most frequently used URLs accessed by all the cluster members. The simulation results for the proposed clustering and prefetching algorithm show a substantial increase in the performance of the streaming server. Our algorithm helps the server's agent learn user preferences and discover information about the corresponding sources and other similarly interested individuals.

Keywords: Adaptive Resonance Theory 1 (ART1), clustering, predictive prefetch, neural networks.

1 Introduction

For the past few years multimedia applications have been growing rapidly, and we have witnessed an exponential growth of traffic on the Internet [11]. These applications include video-on-demand, video authoring tools, news broadcasting, videoconferencing, digital libraries, and interactive video games. The new challenge raised today concerns data storage, management, and processing under the continuous arrival of requests in multiple, rapid, time-varying, and potentially unbounded streams. It is usually not feasible to store the request arrival pattern in a traditional database management system in order to perform delivery operations on the data later. Instead, request arrivals must generally be processed in an online manner from the cache, which also holds the predictively prefetched video streams, so that results can be delivered with a small start-up delay even for first-time-accessed videos. The VoD streaming server is an important component, as it is responsible for retrieving blocks of different video streams and sending them to different users simultaneously. This is not an easy task, due to the real-time and large-volume characteristics of video: the real-time characteristic requires that video blocks be retrieved from the server's disk within a deadline for continuous delivery to users, and failure to meet the deadline results in jerkiness during viewing [13].

With the rapid development of VoD streaming services, knowledge discovery in multimedia services has become an important research area. It can be classified broadly into two classes: multimedia content mining and multimedia usage pattern mining. An important topic in learning users' request patterns is the clustering of multimedia VoD users, i.e., grouping the users into clusters based on their common interests. By analyzing the characteristics of the groups, the streaming server will understand the users better and may provide more suitable, customized services to them. In this paper, the clustering of users' request access patterns based on their browsing activities is studied: users with similar browsing activities are grouped into classes (clusters). A clustering algorithm takes as input a set of input vectors and gives as output a set of clusters and a mapping of each input vector to a cluster; input vectors that are close to each other according to a specific similarity measure should be mapped to the same cluster [5, 8]. Clusters are usually represented internally by prototype vectors, which capture the similarity among the input vectors mapped to a cluster. Automated knowledge discovery in large multimedia databases is an increasingly important research area.

VoD provides application services over a computer network, allowing users to watch any video at any time. One of the requirements for a VoD system implementation is a VoD streaming server that acts as an engine to deliver videos from the server's disk to users. Video blocks should be prefetched intelligently, with low latency, from the disk, so that a high number of streams can be serviced. However, due to the real-time and large-volume characteristics of video, designing the video layout is a challenging task: the real-time constraint restricts the distribution of blocks on the disk and hence decreases the number of streams that can be delivered to users. The deficiency of existing prefetching techniques, such as window-based prefetching, active prefetching, and cumulative prefetching [14], is that these schemes only perform prefetching for the currently accessed object, and prefetching is only triggered when the client starts to access that object. For first-time-accessed objects, the initial portion will not have been fetched by current caching and prefetching schemes, so the client suffers a start-up delay on first access. In this paper we consider the problem of periodic clustering and prefetching of first-time-accessed video streams using ART1 neural networks [10]. A target is set as the request arrival pattern and, to achieve results comparable with that target, the videos are prefetched using an ART1 model in which the set of videos to be prefetched is grouped by time zone.

Outline of the paper: The remainder of the paper is organized as follows. Section 2 provides background information on both the VoD streaming model and clustering. Section 3 discusses related work in clustering and prefetching. Section 4 presents the methodology used in developing the algorithm. Section 5 evaluates our proposed model through analysis and simulation results. Section 6 presents conclusions.

2 Background

2.1 The VoD Streaming Model

Fig 1 Multimedia Streaming Server

The VoD streaming model has the following characteristics:

• The request arrival pattern is online, and the requests may be subject to specific constraints.
• The order in which the requests arrive is not under the control of the system.
• Request arrivals that have been processed are either discarded or serviced; they cannot be retrieved again easily unless stored in the cache memory, which holds just the prefix of the whole video stream.
• The prefetching operation should have lower priority than normal requests, so the video streaming server should prefetch only when its workload is not heavy (see the sketch after this list).
• The prefetched objects are required to contend with normally fetched objects for cache space.
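The priority rule in the fourth point can be made concrete with a small scheduler gate. The following is a minimal sketch written for this text, not taken from the paper; the queue objects, the server_load fraction, and the 0.7 threshold are illustrative assumptions:

    import queue

    PREFETCH_LOAD_THRESHOLD = 0.7   # assumed fraction of streaming capacity in use

    def next_job(requests: queue.Queue, prefetch_jobs: queue.Queue, server_load: float):
        """Serve normal requests first; run prefetch jobs only on a light workload."""
        if not requests.empty():
            return requests.get()        # normal requests always take priority
        if server_load < PREFETCH_LOAD_THRESHOLD and not prefetch_jobs.empty():
            return prefetch_jobs.get()   # prefetch only when the server is not busy
        return None                      # otherwise stay idle this cycle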

2.2 Proposed architecture

Multimedia streaming servers are designed to provide continuous services to clients on demand. A typical video-on-demand service allows remote users to play any video from a large collection of videos stored on one or more servers; the server delivers the videos to the clients in response to their requests. Multimedia streaming servers, specifically customized for HTTP- and RTSP-based streaming, are ideally suited for developing quick IP-based streaming systems. A multimedia streaming setup, as shown in Figure 1, includes two types of interactions: the streaming server processes the real-time multimedia data and sends it to the clients through the various possible types of devices. On the server side, the multimedia streaming server accepts multimedia data or input from any of the following sources:

• a live broadcast, such as a digital camera connected to the computer port;
• data stored in the form of media;
• data stored on machines in a network.

The system targets remote monitoring, live event broadcasting, home/office monitoring, archiving of video on a centralized server, and related applications. Streaming multimedia data is a transaction between the server and the client. The client can be any HTTP client that accesses the media; the server is an application that provides all the client applications with the multimedia content. Unlike the download-and-play mechanism, a multimedia streaming client starts playing the media packets as soon as they arrive, without holding back to receive the entire file. While this technology reduces the client's storage requirements and the start-up time for the media to be played, it introduces a strict timing relationship between the server and the client.

2.3 Clustering

A cluster is a collection of data objects that are similar to one another within the same cluster and dissimilar to the objects in other clusters. A cluster of data objects can be treated collectively as one group, and so may be considered a form of data compression. Cluster analysis can also be used as a form of descriptive statistics, showing whether or not the data consists of a set of distinct subgroups. The term cluster analysis (first used by Tryon, 1939) encompasses a number of different methods and algorithms for grouping objects of a similar kind into respective categories. A general question facing researchers in many areas of inquiry is how to organize observed data into meaningful structures, that is, how to develop taxonomies. In other words, cluster analysis is an exploratory data analysis tool that aims at sorting different objects into groups such that the degree of association between two objects is maximal if they belong to the same group and minimal otherwise. Clustering and unsupervised learning do not rely on predefined classes or class-labeled training examples; for this reason, clustering is a form of learning by observation rather than learning by examples.
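To make prototype-based cluster membership concrete, here is a minimal sketch, written for this text rather than drawn from the paper, that assigns a binary request vector to the cluster whose prototype shares the most 1-bits with it:

    def assign_to_cluster(pattern, prototypes):
        """Return the index of the prototype overlapping `pattern` the most.

        `pattern` is a list of 0/1 values; `prototypes` is a list of such
        lists of the same length. Returns -1 when no prototype overlaps.
        """
        best_idx, best_overlap = -1, 0
        for idx, proto in enumerate(prototypes):
            overlap = sum(p & q for p, q in zip(pattern, proto))  # shared 1-bits
            if overlap > best_overlap:
                best_idx, best_overlap = idx, overlap
        return best_idx

For example, assign_to_cluster([0, 1, 1, 0], [[0, 1, 0, 0], [1, 1, 1, 0]]) returns 1, since the second prototype shares two 1-bits with the pattern while the first shares only one.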


2.4 Clustering Video Streams

The clustering of (active) video streams concerns the concept of distance or, alternatively, of similarity between streams: it must be decided when two video streams fall into one cluster. Here we are interested in the time-dependent evolution of the requests generated for a video stream; that is to say, two requested video streams are considered similar if their evolution over time shows similar characteristics. Since customer behavior changes over time, accurate predictions are rather difficult. A successful video streaming server is one that offers customers a large selection of videos. While prefetching videos from the server's disk and clustering them into one cluster, the factors considered are the following. The first is the time at which the requests are generated; for example, children's videos are likely to be popular early in the evening or on weekend mornings, but less popular late at night. The second concerns revenue and waiting time: the maximum total revenue the service provider can make is limited by the capacity of the server and the number of active videos currently present in the cache, and hence the videos clustered together should not only generate maximum revenue but also reduce the waiting time. The videos can be categorized into children's videos, adult videos, house-wife videos, and hot-news videos. Thus the video streaming system should adapt rapidly and service requests using predictive prefetch under a widely varying and highly dynamic workload.

In the arena of multimedia communication, knowledge of the demand and traffic characteristics between geographic areas is vital for the planning and engineering of multimedia communications. Demand for any service in a network is a dynamically varying quantity subject to several factors. Prior knowledge of the request patterns for various services allows better utilization of storage resources; it is essential both for planning future multimedia server architectures in an optimized manner and for allocating the available storage effectively, based on the current demand. In this paper we propose a generalized scheme for learning the behavior of video-on-demand characteristics on a daily and weekly basis using simulation data.

2.5 Advantages of neural networks and ART1 in clustering and prediction

Neural networks are highly tolerant of noisy data and are able to classify patterns on which they have not been trained. They can be used when we have little knowledge of the relationships between attributes and classes. Artificial neural networks are trained to perform specific functions, in contrast to conventional computers, which are programmed. Training in neural networks can be supervised or unsupervised. In supervised learning, training is achieved by presenting a data set as input-output pairs of patterns, and the weights are adjusted accordingly to capture the relationship between input and response. Unsupervised networks, on the other hand, are not aware of the desired response, so learning is based on observation and self-organization. Adaptive Resonance Theory, an unsupervised learning method, provides solutions to the following problems arising in the design of unsupervised classification systems [12]:

• Adaptability: the capacity of the system to assimilate new data and to identify new clusters (this usually means a variable number of clusters).
• Stability: the capacity of the system to conserve the cluster structures, such that during the adaptation process the system does not radically change its output for a given input.

Neural networks can be used for prediction with various levels of success. Their advantage includes the automatic learning of dependencies from measured data alone, without any need to add further information (such as the type of dependency, as with regression). The neural network is trained on historical data in the hope that it will discover hidden dependencies and be able to use them for predicting the future. In other words, a neural network is not represented by an explicitly given model.

3 Related work

We discuss the significant work in the areas of clustering and prefetching.

3.1 Related work in clustering

Clustering users based on their Web access patterns is an active area of research in Web usage mining. R. Cooley et al. [4] propose a taxonomy of Web mining and present various research issues. In addition, research in Web mining has centered on the extraction and application of clustering and prefetching; both of these issues are clearly discussed in [1], where a prediction accuracy of 97.78% is reported, and the clustering of multimedia request access patterns is defined by a hierarchical clustering method over generalized sessions. G. T. Raju et al. [5] have proposed a novel approach called Cluster and PreFetch (CPF) for the prefetching of Web pages based on the Adaptive Resonance Theory (ART) neural network clustering algorithm; experiments show that the prediction accuracy of the CPF approach is as high as 93 percent.

3.2 Related work in prefetching

Prefetching means fetching multimedia objects before the user requests them. The existing prefetching techniques have a deficiency: the client suffers a start-up delay on first-time accesses, since prefetching is only triggered when a client starts to access an object; moreover, an inefficient prefetching technique wastes network resources by increasing the Web traffic over the network. J. Yuan et al. [6] have proposed a scheme in which proxy servers aggressively prefetch media objects before they are requested; they make use of the servers' knowledge about access patterns to ensure the accuracy of prefetching, and minimize the prefetched data size by prefetching only the initial segments of media objects. K. J. Nesbit et al. [7] have proposed a prefetching algorithm based on a global history buffer that holds the most recent miss addresses in FIFO order.

4 Methodology

4.1 Preprocessing the Web logs


The multimedia Web log file of a Web server contains the raw records of the clients' video requests. The format of the log file is not suitable for directly applying the ART1 algorithm, so the log data needs to be transformed into a suitable format. We have selected 50 clients requesting 200 different videos.

4.2 Extraction of feature vectors

For clustering, we extract a binary pattern vector P that represents the requested-video URLs accessed by a client; it is an instance of the base vector B = {URL1, URL2, ..., URLn}. The pattern vector maps the access frequency of each base-vector element URLi to a binary value: it is of the form P = {P1, P2, ..., Pn}, where each Pi is either 0 or 1, with Pi = 1 if URLi is requested by a client two or more times and 0 otherwise.

Fig 2 Sample pattern vector (a binary string such as 0 1 1 0 1 1 1 0 1 1)

Fig 2 is a sample of a pattern vector generated during a session. Each pattern vector is a binary bit pattern of length 200, and for each session we input 50 such patterns to ART1, since we have 50 clients.

4.3 Session Identification

A user requesting a video may visit a Web site from time to time and spend an arbitrary amount of time between consecutive visits. To deal with the unpredictable nature of the request arrival pattern, the concept of a session is introduced: we cluster the request pattern in each session, and the data collected during each session is used as historical data for clustering and prefetching purposes. Subsequent requests generated during a session are added to it as long as the time elapsed between two consecutive requests does not exceed a predefined parameter, maximum_idle_time; otherwise, the current session is closed and a new session is created.

4.4 Clustering users with Adaptive Resonance Theory

Figure 3 depicts the ART1 architecture and its design parameters: bij (the bottom-up weights) is the weight matrix from the interface layer to the cluster layer; tij (the top-down weights) is the weight matrix from the cluster layer to the interface layer; N is the number of input patterns or interface units; M is the maximum allowed number of clusters; and ρ is the vigilance parameter. The vigilance parameter ρ and the maximum number of clusters M have a big impact on the performance [8]: for ρ→0 the model is expected to form the minimum number of clusters, while for ρ→1 it is expected to form the maximum number of clusters, sometimes to the extent of one input prototype vector per cluster.

Fig 3 The ART1 neural network

ART1 consists of a comparison layer F1, a recognition layer F2, and control gains G1 and G2. The input pattern is received at F1, whereas classification takes place in F2. Each neuron in the comparison layer receives three inputs: a component of the input pattern, a component of the feedback pattern, and the gain G1; a neuron outputs a 1 if and only if at least two of these three inputs are high (the 'two-thirds rule'). Each input activates an output node at F2, and the F2 layer reads out the top-down expectation to F1, where the winner is compared with the input vector. The vigilance parameter determines the mismatch that is to be tolerated when assigning each host to a cluster. If a new input is not similar to any stored specimen, it becomes a new neuron in the recognition layer, and this process is repeated for all inputs [9]. ART learns to cluster the input patterns by making the output neurons compete with each other for the right to react to a particular input pattern: the output neuron whose weight vector is most similar to the input vector claims this input pattern by producing an output of '1' and at the same time inhibits the other output neurons by forcing them to produce '0's. In ART, only the winning node is permitted to alter its weight vector, which is modified in such a way that it is brought nearer to the representative input pattern of the cluster concerned.

ART neural networks are known for their ability to perform plastic yet stable online clustering of dynamic data sets. ART adaptively and autonomously creates new categories. Another advantage of using the ART1 algorithm to group users is that it adapts to changes in the users' request access patterns over time without losing information about their previous access patterns. This advantage comes from the fact that ART neural networks are tailored for systems with multiple inputs and multiple outputs, as well as for nonlinear systems. Besides this advantage, no extra detailed knowledge about the process is required to model it using neural networks. The consideration of system dynamics leads to networks with internal and external feedback, of which there are many structural variants.
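The clustering step of Section 4.4 can be sketched as follows. This is a simplified binary-ART1 illustration written for this text, not the authors' Fig 4 listing; the default vigilance value and the intersection-based match rule are assumptions:

    def art1_cluster(patterns, vigilance=0.4):
        """Simplified binary ART1: place each 0/1 pattern in the best-matching
        cluster if the match passes the vigilance test, else open a new cluster."""
        prototypes = []    # one binary prototype per cluster
        assignment = []    # cluster index chosen for each input pattern
        for p in patterns:
            norm_p = sum(p) or 1                       # |P|, guarded against zero
            best, best_score, best_inter = None, -1.0, None
            for idx, proto in enumerate(prototypes):
                inter = [a & b for a, b in zip(p, proto)]
                score = sum(inter) / norm_p            # match = |P AND T| / |P|
                if score > best_score:
                    best, best_score, best_inter = idx, score, inter
            if best is not None and best_score >= vigilance:
                prototypes[best] = best_inter          # resonance: learn by AND-ing
                assignment.append(best)
            else:
                prototypes.append(list(p))             # mismatch: create a cluster
                assignment.append(len(prototypes) - 1)
        return prototypes, assignment

Sweeping the vigilance from 0.3 to 0.5 over the 50 session patterns would be expected to reproduce the qualitative trend reported below: higher vigilance yields more, tighter clusters.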


Fig 4 An ART1 clustering algorithm

4.5 Prefetching Scheme

Fig 5 ART1-based prefetching algorithm

Fig 4 above is the binary ART1 algorithm used to cluster the requests; the number of clusters formed depends on the vigilance factor. Fig 5 is the ART1-based prefetching algorithm. At any time instance, videos are prefetched and stored in the cache prior to the requests. The prototype selected depends on the cluster formed at the same instance of time in the previous history: the videos whose binary value is set to one in the prototype will be prefetched.

5 Results and Discussion

5.1 Performance of the ART1 clustering technique

Fig 6 A sample input data set

Fig 6 above is a sample set of data from 10 different users: a value of 1 represents that more than two clients have requested a video, and a 0 that no one has requested that particular video. A sample of 10 pattern vectors with N = 50 is taken from our simulation results, and the variation in the number of clusters formed with the vigilance factor is observed [8]. The result is shown in the graph below: the number of clusters formed increases with the vigilance parameter, and it is clear from the graph that values of 0.3 to 0.5 form an ideal number of clusters.

Fig 7 Number of clusters versus vigilance parameter for the sample input data

The same procedure is applied to 200 videos from 50 different clients, with request frequencies ranging from 27 to 1530, while the vigilance parameter is varied between 0.30 and 0.50.

Table 1 Number of clusters formed by varying the value of the vigilance parameter

Vigilance parameter   Number of clusters
0.3                   18
0.35                  23
0.40                  30
0.45                  34
0.475                 39
0.50                  42
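As a concrete illustration of the prefetching step of Section 4.5, the sketch below prefetches every video whose bit is set in the prototype of the currently active cluster. This is our simplified rendering, not the paper's Fig 5 listing; cache and fetch_prefix are hypothetical helpers:

    def prefetch_for_cluster(prototype, video_urls, cache, fetch_prefix):
        """Prefetch the initial portion of every video marked 1 in `prototype`.

        `prototype` is the binary prototype vector of the cluster active in
        the current time zone; `cache` is a dict-like store; `fetch_prefix`
        is a callable returning the prefix of the video at a URL (both are
        hypothetical names introduced for this sketch).
        """
        for bit, url in zip(prototype, video_urls):
            if bit == 1 and url not in cache:
                cache[url] = fetch_prefix(url)   # bring the prefix in ahead of requests

Hits and accuracy, as reported in the next subsection, then follow by counting how many subsequent requests are served from this cache.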

Fig 8 Variation in the number of clusters formed by increasing the vigilance parameter

5.2 Prefetching results

We have conducted a performance analysis based on two parameters: (a) hits and (b) accuracy. Hits indicate the number of videos that are requested from among the prefetched videos, and accuracy is the ratio of hits to the number of videos prefetched. Most techniques for predictive prefetching work for a single user, whereas ART1-based clustering reduces the load on the network by prefetching videos not for a single user but for a community of users; it prefetches requests with an accuracy as high as 92%. Each prototype vector of a cluster represents the videos a community of users is likely to request. This predictive prefetch is performed periodically over a sliding window defined by a session. Accurate predictions are difficult, since customer behavior changes over time [11]: the extremely popular videos are accessed frequently, and their request frequency may change not only on a daily or weekly basis but even hourly. For example, children's videos are likely to be popular early in the evening or on weekend mornings [11], but less popular late at night.

The ART1 model reflects customer behavior over a 24-hour period. From the historic data, clusters are formed; each cluster is represented by a prototype vector, in binary format, that holds information about the most popular videos requested by the members of that cluster. The historic data is collected at the end of each session. A session interval is defined by the predefined parameter maximum_idle_time. Whenever a new request arrives, it is checked for membership of a matching cluster; all videos that are frequently requested by the members of that cluster may already be present in the cache. Sometimes a new request may modify the prototype vector of a cluster: if a request arrives during the course of a session, the new prototype vector is computed using both the historic data and the current input vectors, so that accurate predictions can be made.

Table 2 Result of the prefetching scheme

Members per cluster   Videos prefetched   Hits   Accuracy (%)
8                     36                  34     97
6                     45                  43     93
4                     39                  38     87
5                     32                  30     92

In Table 2 we show the result of our prefetching scheme for a sample of four clusters: we prefetch the URLs for each client and verify the accuracy of our algorithm, with the vigilance factor set to 0.4. The average prediction accuracy of our scheme is 92.2%, which is considerably high.

6 Conclusion

In recent years there has been a good deal of research exploring novel methods and techniques to group users based on their request access patterns. In this paper we have clustered and prefetched user request access patterns using the ART1 clustering approach, with predictions made over a time-series domain. The proposed system has achieved good performance with high satisfaction and applicability. An effort is under way to improve it by clustering the requests on demand. Future work will count the number of requests that were considered in forming each cluster; the prefetching algorithm will then prefetch the most popular cluster at that instant, which may further improve performance.

P Jayarekha holds an M.Tech (VTU, Belgaum) in computer science, securing second rank. She has one and a half decades of experience in the teaching field and has published many papers. Currently she is a member of the teaching faculty in the Department of Information Science and Engineering at BMS College of Engineering, Bangalore, India.


T. R. Gopalakrishnan Nair holds an M.Tech. (IISc, Bangalore) and a Ph.D. degree in Computer Science. He has three decades of experience in computer science and engineering across research, industry, and education. He has published several papers and holds patents in multiple domains, and he won the PARAM Award for technology innovation. Currently he is the Director of Research and Industry in Dayananda Sagar Institutions, Bangalore, India.

REFERENCES

[1] S. K. Rangarajan, V. V. Phoha, K. Balagani, R. R. Selmic, "Web user caching and its application to prefetching using ART neural networks," IEEE Internet Computing; Data & Knowledge Engineering, Vol. 65, No. 3, pp. 512-543, June 2008.
[2] K. Santosh, V. Vir, S. Kiran, R. Selmic, S. S. Iyengar, "Adaptive neural network clustering of Web users," IEEE Computer, 2004.
[3] P. Venketesh, R. Venkatesan, "A survey on applications of neural networks and evolutionary techniques in Web caching," IETE Technical Review, 2009.
[4] R. Cooley, B. Mobasher, and J. Srivatsava, "Web mining: Information and pattern discovery on the World Wide Web," ICTAI'97, 1997.
[5] G. T. Raju and P. S. Satyanarayana, "Knowledge discovery from Web usage data: A novel approach for prefetching of Web pages based on ART neural network clustering algorithm," International Journal of Innovative Computing, Information and Control, ISSN 1349-4198, Vol. 4, No. 4, April 2008.
[6] J. Yuan, Q. Sun, S. Rahardja, "A more aggressive prefetching scheme for streaming media delivery over the Internet," Proceedings of SPIE, 2007.
[7] K. J. Nesbit, J. E. Smith, "Data cache prefetching using a global history buffer," High Performance Computer Architecture, 2004.
[8] L. Massey, "Determination of clustering tendency with ART neural networks," Proceedings of Recent Advances in Soft Computing (RASC02), 2002.
[9] N. Kumar, R. S. Joshi, "Data clustering using artificial neural networks," Proceedings of the National Conference on Challenges & Opportunities in Information Technology (COIT-2007).
[10] H. Chris Tseng, "Internet applications with fuzzy logic and neural networks: A survey," Journal of Engineering, Computing and Architecture, 2007.
[11] Xiaobo Zhou, "A video replacement policy based on revenue to cost ratio in a multicast TV-Anytime system."
[12] L. G. Heins and D. R. Tauritz, "Adaptive Resonance Theory (ART): An introduction," Internal Report 95-35, Department of Computer Science, Leiden University, pp. 174-185, The Netherlands, 1995.
[13] Dinkar Sitaram, Asit Dan, "Multimedia Servers: Applications, Environment, and Design," Morgan Kaufmann Publishers, 2000.
[14] J. Jung, D. Lee, K. Chon, "Proactive Web caching with cumulative prefetching for large multimedia data," Computer Networks, Elsevier, 2000.


Prefix based Chaining Scheme for Streaming Popular Videos using Proxy servers in VoD

M Dakshayini
Research Scholar, Dr. MGR University; working with the Dept. of ISE, BMSCE; Member, Multimedia Research Group, Research Centre, DSI, Bangalore, India. [email protected]

Dr T R GopalaKrishnan Nair
Director, Research and Industry Incubation Centre, DSI, Bangalore, India. [email protected]

Abstract—Streaming high-quality videos consumes a significantly large amount of network resources. In this context, request-to-service delay, network traffic, congestion, and server overloading are the main parameters affecting the quality of service (QoS) of video streaming over the Internet. In this paper we propose an efficient architecture, a cluster of proxy servers and clients, that uses a peer-to-peer (P2P) approach to cooperatively stream video using a chaining technique. We consider two key issues in the proposed architecture: (1) a prefix caching technique to accommodate more videos close to the clients; and (2) cooperative client and proxy chaining to achieve network efficiency. Our simulation results show that the proposed approach yields prefix caching close to the optimal solution, minimizing the WAN bandwidth usage on the server-proxy path by utilizing the proxy-client and client-client path bandwidth, which is much cheaper than the expensive server-proxy path bandwidth, and significantly reducing the server load and the client rejection ratio through chaining.

Keywords: prefix caching, cooperative clients, video streaming, bandwidth usage, chaining.

I. INTRODUCTION

Streaming a multimedia object such as high-quality video generally consumes a significantly large amount of network resources, so request-to-service delay, network traffic, congestion, and server overloading are the main parameters affecting the quality of service (QoS) of video streaming over the Internet. Providing video-on-demand (VoD) service over the Internet in a scalable way is therefore a challenging problem. The difficulty is twofold: first, it is not a trivial task to stream video on an end-to-end basis because of a video's high bandwidth requirement and long duration; second, scalability issues arise when attempting to service a large number of clients. In particular, a popular video generally attracts a large number of users issuing requests asynchronously [2]. Many VoD schemes have been proposed to address this problem: batching, patching, periodic broadcasting, prefix caching, and chaining. In the batching scheme [5, 8 & 10], the server batches requests for the same video together if their arrival times are close, and serves them over one multicast channel. In the patching scheme [2], the server sends the entire video clip to the first client; later clients join the existing multicast channel, and each of them additionally requires a unicast channel to deliver the missing part of the video. Periodic broadcasting [12] is another innovative technique: popular videos are partitioned into a series of segments that are continually broadcast on several dedicated channels, and before clients start playing a video they usually have to wait for a time equivalent to the first segment, so only near-VoD service is provided. Proxy caching [1, 4 & 9] is also a promising scheme to alleviate the bandwidth consumption issue: a proxy sits between a central server and clouds of clients, partial (or entire) video files are stored at the proxies and the rest at the central server, and the proxies send cached videos to clients while requesting the remainder from the servers on the clients' behalf. Recent works investigate the advantages of connected proxy servers within the same intranet [3, 4 and 8].

II. RELATED WORK

In this section we briefly discuss previous work. Tay and Pang propose an algorithm in [7] called GWQ (Global Waiting Queue) to reduce the initial startup delay by sharing the videos in a distributed, loosely coupled VoD system. They replicate the videos evenly across all the servers, for which the storage capacity of each individual proxy server must be very large, in order to store all the videos. In [11] Sonia Gonzalez, Navarro, and Zapata propose an algorithm that maintains a small initial startup delay using servers of smaller storage capacity by allowing partial replication of the videos, storing the locally requested videos at each server. We differ by caching the partial prefix-I at the proxy and prefix-II at the tracker in proportion to popularity, thereby utilizing the proxy server and tracker storage space more efficiently.


In [3] the authors have proposed a hybrid algorithm for chaining, but they do not discuss the scenario of client failure. LEMP [6] also proposes a client chaining mechanism, together with a solution for handling the client-failure situation, but it involves too many messages, which increases the waiting time tw for playback to start. Another approach to reducing the aggregated transmission cost is discussed in [12], by caching the prefix and the prefix of the suffix at the proxy and the client respectively, but chaining is not considered there. [5] and [8] propose batching techniques, which increase the client's initial waiting time. Edward Mingjun Yan and Tiko Kameda [10] propose a broadcasting technique which requires a huge amount of storage capacity and sufficient bandwidth. Yang Guo [2] suggests an architecture that streams the video using a patching technique. Hyunjoo and Heon [13] have proposed another chaining scheme with VCR operations, but they stream the video data from the main server and consider a constant threshold value, due to which many clients may not be able to share the same chain, and the WAN bandwidth usage on the server-proxy path may be comparatively high.

In this paper, we propose an efficient load-sharing algorithm and a new architecture for a distributed VoD system. The architecture consists of a centralized main multimedia server [MMS] connected to a set of trackers [TR]. Each tracker is in turn connected to a group of proxy servers [PS], and these proxy servers are assumed to be interconnected in a ring pattern. To each PS a set of clients is connected, and all these clients are connected among themselves. This arrangement is called a Local Proxy Servers Group [LPSG (Lp)]. Each such LPSG is connected to the MMS and, through its tracker, to its left and right neighboring LPSGs in a ring fashion. On this architecture we propose an efficient prefix-caching-based chaining (PC+Chaining) scheme using proxy servers to achieve higher network efficiency. The rest of the paper is organized as follows: Section 3 analyzes the various parameters used in the problem; Section 4 presents the proposed approach and algorithm in detail; Section 5 describes the simulation model; Section 6 describes the PC+Chaining scheme in detail; Section 7 presents the performance evaluation; finally, Section 8 concludes the paper and refers to further work.

III. MODEL OF THE PROBLEM

Let N be a stochastic variable representing the group of videos; it may take the different values (videos) Vi (i = 1, 2, ..., N). The probability of asking for video Vi is p(Vi), and the set of values p(Vi) is the probability mass function. Since the variable must take one of the values, it follows that

∑i=1..N p(Vi) = 1.

The estimate of the probability of requesting video Vi is

p(Vi) = ni / I,

where I is the total number of observations and ni is the number of requests for video Vi. We assume that clients' requests arrive according to a Poisson process with arrival rate λ. Let Si be the size (duration in minutes) of the i-th video (i = 1..N), the videos having mean arrival rates λ1, ..., λN respectively and being streamed to the users using the M proxy servers (PSs) of the J LPSGs (Lp, p = 1..J). Each TR and each PSq (q = 1..M) has a caching buffer large enough to cache a total of P and B minutes of H and K video prefixes, respectively. The complete video is divided into three parts: the first W1 minutes of each video Vi, referred to as prefix-1, (pref-1)i, of Vi, is cached in exactly one of the proxy servers of the group; the next W2 minutes of Vi, referred to as prefix-2, (pref-2)i, is cached at the TR of Lp. That is,

P = ∑i=1..H (pref-2)i and B = ∑i=1..K (pref-1)i.

Based on the frequency of user requests for each video, the popularity of the videos and the sizes of (pref-1) and (pref-2) to be cached at the PS and TR, respectively, are determined. The size W of (pref-1) and (pref-2) for the i-th video is determined as

W(pref-1)i = xi × Si, where 0 < xi < 1, and
W(pref-2)i = xi × (Si − (pref-1)i), where 0 < xi < 1,

where xi is the probability of occurrence of user requests for video i during the last t minutes. This arrangement caches the largest portion of the most frequently requested videos at Lp, so most of the requests are served immediately from Lp itself, which reduces the network usage on the server-proxy path significantly and makes the length of the queue Ql almost negligible. Let bi be the available bandwidth for Vi between the proxy and the main server. After a request for video Vi at PSq, the WAN bandwidth required on the server-proxy path for Vi is

bwi(S-P) = bw(S − (pref-1) − (pref-2))i,

where i = 1..N and bwi(S-P) is the WAN bandwidth usage required for the i-th video on the server-to-proxy path. bwi(S-P) depends on the amount of Vi, namely (S − (pref-1) − (pref-2)), to be streamed from the main server to the proxy. So the aggregate server-proxy bandwidth usage is

WANbw(S-P) = ∑i=1..Q bw(S − (pref-1) − (pref-2))i,

where bw() is a nonlinear function of the sizes of (pref-1) and (pref-2). Another output stochastic variable is the request rejection ratio Rrej, the ratio of the number of rejected requests (Nrej) to the total number of requests arriving at the system (R); it is inversely proportional to the system efficiency Seff:

Seff = 1 / Rrej, where Rrej = Nrej / R,

and Seff = Q / R is the ratio of the number of requests served (Q) to the total number of requests arriving at the system. The optimization problem, which maximizes Seff and thereby minimizes the client rejection ratio Rrej at the PS and the average WAN bandwidth usage WANbw(S-P), is:

Maximize the system efficiency
Seff = Q / R

Minimize the average WAN bandwidth usage
WANbw(S-P) = ∑i=1..Q bw(S − (pref-1) − (pref-2))i

Minimize the average client waiting time
ȳ = (1/Q) ∑i=1..Q Wti

Subject to
B = ∑i=1..K (pref-1)i, P = ∑i=1..H (pref-2)i, and W(pref-1), W(pref-2) > 0.

IV. PROPOSED ARCHITECTURE AND ALGORITHM

A. Proposed Architecture

The proposed VoD architecture is shown in Fig. 2. It consists of an MMS connected to a group of trackers (TRs). As shown in Fig. 3, each TR has various modules: the Interaction Module (IMTR), which interacts with the PSs and the MMS; the Service Manager (SMTR), which handles the requests from the PSs; the Database, which stores the complete details of the presence and the size of (pref-1) of the videos at all the PSs; and the Video Distribution Manager (VDM), which decides the videos, and the sizes of (pref-1) and (pref-2) of the videos, to be cached, and which handles the distribution and management of these videos across the group of PSs based on video popularity. Each TR is in turn connected to a set of PSs, and these PSs are connected among themselves in a ring fashion. Each PS has modules such as the Interaction Module of the PS (IMPS), which interacts with the clients and the TR; the Service Manager of the PS (SMPS), which handles the requests from the users; and the Client Manager (CM), which observes and updates the popularity of videos at the PS as well as at the TR, as shown in Fig. 3. To each of these PSs a large number of users are connected [LPSG]. Each proxy is called the parent proxy of its clients. All the LPSGs are interconnected through their TRs in a ring pattern. The PS caches the (pref-1) of the videos distributed by the VDM and streams this cached portion of a video to the client upon request through the LAN, using its less expensive bandwidth. We assume that:

1. The TR is also a PS, with high computational power and large storage compared to the other proxy servers, to which clients are connected. Using its various modules, it coordinates and maintains a database that contains information on the presence of videos, and on the sizes of (pref-1) and (pref-2) of each video, at each PS and at the TR respectively.

2. Proxies and their clients are closely located, with relatively low communication cost [1]. The main server, on which all the videos are stored in their entirety, is placed far away from the LPSG and involves high-cost remote communication.

3. The MMS, the TR, and the PSs of an LPSG are interconnected through high-capacity optic-fiber cables.

In the beginning, all Nv videos are stored in the MMS. The distribution of the selected N of the Nv videos among the M PSs in an LPSG is done by the Video Distribution Manager of the TR as follows. First, all N videos are ranked with respect to their popularity at the j-th LPSG, where the popularity of a video is defined as the frequency of requests for the video per threshold time t by the clients. Here, we assume that the frequency of requests for a video follows a Zipf distribution.
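To illustrate how the VDM might size the two prefixes from the observed popularity (Section III), here is a minimal sketch written for this text; estimating xi as the share of requests seen in the recent window is our assumption, and the buffer-budget checks against B and P are omitted for brevity:

    def prefix_sizes(video_minutes, request_counts):
        """Compute W(pref-1)i = xi * Si and W(pref-2)i = xi * (Si - W(pref-1)i),
        with xi estimated as video i's share of the requests in the window."""
        total = sum(request_counts) or 1
        sizes = []
        for s_i, n_i in zip(video_minutes, request_counts):
            x_i = n_i / total                 # popularity estimate, 0 < x_i < 1
            pref1 = x_i * s_i                 # cached once at some PS of the group
            pref2 = x_i * (s_i - pref1)       # cached at the TR of the LPSG
            sizes.append((pref1, pref2))
        return sizes

For two videos of 120 and 90 minutes receiving 300 and 100 of the recent requests, this yields prefixes of (90, 22.5) and (22.5, 16.875) minutes respectively.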


B. Proposed Algorithm

Whenever a client Ck requests video Vi at a PS, the request handler (req-handler) checks whether the IS-STREAMING flag of that video is true and whether the arrival-time difference between the new client and the most recent client of the existing chain of Vi is below the threshold W(pref-1). If so, the service manager (service-mgr) adds Ck to the existing client chain of Vi and instructs it to receive the stream from any one of the last d clients of the chain. By adding Ck to the client list of Vi, the service-mgr updates the Streaming Clients List (SCL), which holds the complete details of the videos being streamed from that PS and the corresponding chain of clients of each video [lines 3-7 of the proposed algorithm]. Otherwise, the PS starts a new stream to Ck and a new chain for Vi, and the SCL is updated by creating a new entry for Vi [line 8]. If Vi is not present in its cache, the IMPSq forwards the request to its parent TR, and the VDM at the TR searches its database using perfect hashing to see whether Vi is present in any of the PSs of that Lp; if it finds one, streaming starts from that PS to Ck and the SCL is updated accordingly [lines 9, 10 & 11]. Otherwise the request is passed to the TR of NBR[Lp]; if Vi is found there, it is streamed from that PS to Ck and the SCL is updated accordingly by the service manager [lines 12, 13 & 14]. If Vi is not found at NBR[Lp] either, then Vi is downloaded from the MMS and streamed to Ck. While streaming, the W(pref-1) and W(pref-2) of Vi are calculated as

W(pref-1)i = xi × Si, where 0 < xi < 1, and
W(pref-2)i = xi × (Si − (pref-1)i), where 0 < xi < 1,

and cached at the PS and the TR respectively, if sufficient space is available; otherwise an appropriate buffer-allocation algorithm is executed to make space for these prefixes of Vi according to popularity [lines 15, 16, 17 and 18]. In this way most requests are served immediately, either from a client that is already being served or from Lp by sharing the videos present among the PSs, which reduces the client request rejection ratio Rrej and the load SL at the server, and increases the video hit ratio VHR.
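A minimal sketch of the admission test at the start of the algorithm above, written for this text rather than copied from the paper's numbered listing; the SCL is modeled as a dict whose entries stand in for the IS-STREAMING flag, and the last-d-clients choice is simplified to the chain tail:

    import time

    def admit(scl, video, client, w_pref1_minutes, d=1):
        """Chain `client` onto an existing stream of `video` when the newest
        chain member arrived within W(pref-1) minutes; otherwise start a new
        chain. `scl` maps video -> list of (client, arrival_time) in order."""
        now = time.time()
        chain = scl.get(video)
        if chain and (now - chain[-1][1]) < w_pref1_minutes * 60:
            source = chain[-d][0]          # stream from one of the last d clients
            chain.append((client, now))
            return "chained", source
        scl[video] = [(client, now)]       # new chain: the proxy streams the video
        return "new_stream", None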


V. SIMULATION MODEL

In our simulation model we have a single MMS and a group of 6 TRs, interconnected among themselves in a ring fashion. Each TR is in turn connected to a set of 6 PSs, which are again interconnected among themselves in a ring fashion. To each PS a set of 25 clients is connected, and all these clients are interconnected among themselves. We use the video hit ratio (VHR) and the average client waiting time ȳ to measure the performance of our proposed approach; in addition, we use the WAN bandwidth usage on the MMS-PS path and the probability of accessing the main server as performance metrics.

Table 1 Simulation model

Notation   System parameter              Default value
S          Video size                    2-3 hrs, 200 units/hr
C_MMS      Cache size (MMS)              1000
C_TR       Cache size (TR)               400 (40%)
C_PS       Cache size (PS)               200 (20%)
λ          Mean request arrival rate     44 reqs/hr

In contrast, we propose two schemes to address these issues: 1) a local group of interconnected proxies and clients with a prefix caching technique and load sharing among the proxies of the group, which reduces frequent access to the MMS and in turn the amount of bandwidth consumed between the client and the main server; and 2) PC+Chaining, in which the clients not only receive the requested stream but also contribute to the overall VoD service by forwarding the stream to other clients whose requests arrive within the threshold time W(pref-1). In PC+Chaining all the clients are treated as potential serving points. When there is a request for a video Vi from client Ck at a particular proxy PSq, and the requested video Vi is present at PSq, the service is provided to Ck in the following stages.

C. Client admission phase

When the request arrives at PSq, the request handler (req-handler) of that proxy checks for the presence of the video at PSq. If it is present, it checks the IS-STREAMING flag of the video Vi. If the flag is not true, there are no existing clients being streamed the same video object Vi, and the req-handler informs the service manager (service-mgr) to provide the streaming of Vi to Ck. The service-mgr starts a new stream and updates the streaming clients list (SCL) by adding a new entry for the video Vi along with its (pref-1) size. The format of each entry of the SCL is
