International Journal of Computer Science Issues

Pervasive Computing Systems and Technologies

Volume 1, August 2009 ISSN (Online): 1694-0784 ISSN (Printed): 1694-0814

© IJCSI PUBLICATION 2009 www.IJCSI.org

EDITORIAL

There are several journals available in the areas of computer science, each with different policies. IJCSI is among the few that believe giving free access to scientific results will help advance computer science research and help fellow scientists.

IJCSI takes particular care in ensuring wide dissemination of its authors' works. Apart from being indexed in databases such as Google Scholar, DOAJ and CiteSeerX, IJCSI makes its articles available for free download to increase their chance of being cited. Furthermore, unlike most journals, IJCSI sends a printed copy of each issue to the concerned authors free of charge, irrespective of geographic location.

The IJCSI Editorial Board is pleased to present IJCSI Volume One (IJCSI Vol. 1, 2009). This edition is the result of a special call for papers on Pervasive Computing Systems and Technologies. The paper acceptance rate for this issue is 31.6%, set after all submitted papers were reviewed, with important comments and recommendations from our reviewers.

We sincerely hope you will find important ideas, concepts, techniques, and results in this special issue.

As final words, PUBLISH, GET CITED and MAKE AN IMPACT.

IJCSI Editorial Board August 2009 www.ijcsi.org

IJCSI EDITORIAL BOARD

Dr Tristan Vanrullen, Chief Editor
LPL, Laboratoire Parole et Langage - CNRS - Aix-en-Provence, France
LABRI, Laboratoire Bordelais de Recherche en Informatique - INRIA - Bordeaux, France
LEEE, Laboratoire d'Esthétique et Expérimentations de l'Espace - Université d'Auvergne, France

Dr Mokhtar Beldjehem Professor Sainte-Anne University Halifax, NS, Canada

Dr Pascal Chatonnay, Assistant Professor (Maître de Conférences)
Université de Franche-Comté (University of Franche-Comté)
Laboratoire d'informatique de l'université de Franche-Comté (Computer Science Laboratory of the University of Franche-Comté)

Prof N. Jaisankar School of Computing Sciences, VIT University Vellore, Tamilnadu, India

IJCSI REVIEWERS COMMITTEE

• Mr. Markus Schatten, University of Zagreb, Faculty of Organization and Informatics, Croatia
• Mr. Forrest Sheng Bao, Texas Tech University, USA
• Mr. Vassilis Papataxiarhis, Department of Informatics and Telecommunications, National and Kapodistrian University of Athens, Panepistimiopolis, Ilissia, GR-15784, Athens, Greece
• Dr Modestos Stavrakis, University of the Aegean, Greece
• Prof Dr. Mohamed Abdelall Ibrahim, Faculty of Engineering, Alexandria University, Egypt
• Dr Fadi KHALIL, LAAS - CNRS Laboratory, France
• Dr Dimitar Trajanov, Faculty of Electrical Engineering and Information Technologies, Ss. Cyril and Methodius University - Skopje, Macedonia
• Dr Jinping Yuan, College of Information System and Management, National Univ. of Defense Tech., China
• Dr Alexios Lazanas, Ministry of Education, Greece
• Dr Stavroula Mougiakakou, University of Bern, ARTORG Center for Biomedical Engineering Research, Switzerland
• Dr DE RUNZ, CReSTIC-SIC, IUT de Reims, University of Reims, France
• Mr. Pramodkumar P. Gupta, Dept of Bioinformatics, Dr D Y Patil University, India
• Dr Alireza Fereidunian, School of ECE, University of Tehran, Iran
• Mr. Fred Viezens, Otto-von-Guericke-University Magdeburg, Germany
• Mr. J. Caleb Goodwin, University of Texas at Houston: Health Science Center, USA
• Dr. Richard G. Bush, Lawrence Technological University, United States
• Dr. Ola Osunkoya, Information Security Architect, USA
• Mr. Kotsokostas N. Antonios, TEI Piraeus, Hellas
• Prof Steven Totosy de Zepetnek, U of Halle-Wittenberg & Purdue U & National Sun Yat-sen U, Germany, USA, Taiwan
• Mr. M Arif Siddiqui, Najran University, Saudi Arabia
• Ms. Ilknur Icke, The Graduate Center, City University of New York, USA
• Prof Miroslav Baca, Associate Professor, Faculty of Organization and Informatics, University of Zagreb, Croatia
• Dr. Elvia Ruiz Beltrán, Instituto Tecnológico de Aguascalientes, Mexico
• Mr. Moustafa Banbouk, Engineer du Telecom, UAE
• Mr. Kevin P. Monaghan, Wayne State University, Detroit, Michigan, USA
• Ms. Moira Stephens, University of Sydney, Australia

• Ms. Maryam Feily, National Advanced IPv6 Centre of Excellence (NAv6), Universiti Sains Malaysia (USM), Malaysia
• Dr. Constantine YIALOURIS, Informatics Laboratory, Agricultural University of Athens, Greece
• Dr. Sherif Edris Ahmed, Ain Shams University, Faculty of Agriculture, Dept. of Genetics, Egypt
• Mr. Barrington Stewart, Center for Regional & Tourism Research, Denmark
• Mrs. Angeles Abella, U. de Montreal, Canada
• Dr. Patrizio Arrigo, CNR ISMAC, Italy
• Mr. Anirban Mukhopadhyay, B.P. Poddar Institute of Management & Technology, India
• Mr. Dinesh Kumar, DAV Institute of Engineering & Technology, India
• Mr. Jorge L. Hernandez-Ardieta, INDRA SISTEMAS / University Carlos III of Madrid, Spain
• Mr. AliReza Shahrestani, University of Malaya (UM), National Advanced IPv6 Centre of Excellence (NAv6), Malaysia
• Mr. Blagoj Ristevski, Faculty of Administration and Information Systems Management - Bitola, Republic of Macedonia
• Mr. Mauricio Egidio Cantão, Department of Computer Science / University of São Paulo, Brazil
• Mr. Thaddeus M. Carvajal, Trinity University of Asia - St Luke's College of Nursing, Philippines
• Mr. Jules Ruis, Fractal Consultancy, The Netherlands
• Mr. Mohammad Iftekhar Husain, University at Buffalo, USA
• Dr. Deepak Laxmi Narasimha, VIT University, India
• Dr. Paola Di Maio, DMEM, University of Strathclyde, UK
• Dr. Bhanu Pratap Singh, Institute of Instrumentation Engineering, Kurukshetra University, Kurukshetra, India
• Mr. Sana Ullah, Inha University, South Korea
• Mr. Cornelis Pieter Pieters, Condast, The Netherlands
• Dr. Amogh Kavimandan, The MathWorks Inc., USA
• Dr. Zhinan Zhou, Samsung Telecommunications America, USA
• Mr. Alberto de Santos Sierra, Universidad Politécnica de Madrid, Spain
• Dr. Md. Atiqur Rahman Ahad, Department of Applied Physics, Electronics & Communication Engineering (APECE), University of Dhaka, Bangladesh
• Dr. Charalampos Bratsas, Lab of Medical Informatics, Medical Faculty, Aristotle University, Thessaloniki, Greece
• Ms. Alexia Dini Kounoudes, Cyprus University of Technology, Cyprus
• Mr. Anthony Gesase, University of Dar es Salaam Computing Centre, Tanzania
• Dr. Jorge A. Ruiz-Vanoye, Universidad Juárez Autónoma de Tabasco, Mexico

• Dr. Alejandro Fuentes Penna, Universidad Popular Autónoma del Estado de Puebla, México
• Dr. Ocotlán Díaz-Parra, Universidad Juárez Autónoma de Tabasco, México
• Mrs. Nantia Iakovidou, Aristotle University of Thessaloniki, Greece
• Mr. Vinay Chopra, DAV Institute of Engineering & Technology, Jalandhar
• Ms. Carmen Lastres, Universidad Politécnica de Madrid - Centre for Smart Environments, Spain
• Dr. Sanja Lazarova-Molnar, United Arab Emirates University, UAE
• Mr. Srikrishna Nudurumati, Imaging & Printing Group R&D Hub, Hewlett-Packard, India
• Dr. Olivier Nocent, CReSTIC/SIC, University of Reims, France
• Mr. Burak Cizmeci, Isik University, Turkey
• Dr. Carlos Jaime Barrios Hernandez, LIG (Laboratory of Informatics of Grenoble), France
• Mr. Md. Rabiul Islam, Rajshahi University of Engineering & Technology (RUET), Bangladesh
• Dr. LAKHOUA Mohamed Najeh, ISSAT - Laboratory of Analysis and Control of Systems, Tunisia
• Dr. Alessandro Lavacchi, Department of Chemistry - University of Firenze, Italy
• Mr. Mungwe, University of Oldenburg, Germany
• Mr. Somnath Tagore, Dr D Y Patil University, India
• Mr. Nehinbe Joshua, University of Essex, Colchester, Essex, UK
• Ms. Xueqin Wang, ATCS, USA
• Dr. Borislav D Dimitrov, Department of General Practice, Royal College of Surgeons in Ireland, Dublin, Ireland
• Dr. Fondjo Fotou Franklin, Langston University, USA
• Mr. Haytham Mohtasseb, Department of Computing - University of Lincoln, United Kingdom
• Dr. Vishal Goyal, Department of Computer Science, Punjabi University, Patiala, India
• Mr. Thomas J. Clancy, ACM, United States
• Dr. Ahmed Nabih Zaki Rashed, Electronics and Electrical Communication Engineering Department, Faculty of Electronic Engineering, Menoufia University, Menouf 32951, Egypt
• Dr. Rushed Kanawati, LIPN, France
• Mr. Koteshwar Rao, K G Reddy College of Engg. & Tech., Chilkur, RR Dist., AP, India
• Mr. M. Nagesh Kumar, Department of Electronics and Communication, J.S.S. Research Foundation, Mysore University, Mysore-6, India
• Dr. Babu A Manjasetty, Research & Industry Incubation Center, Dayananda Sagar Institutions, India

• Mr. Saqib Saeed, University of Siegen, Germany
• Dr. Ibrahim Noha, Grenoble Informatics Laboratory, France
• Mr. Muhammad Yasir Qadri, University of Essex, UK

TABLE OF CONTENTS

1. A Survey on Service Composition Middleware in Pervasive Environments Noha Ibrahim, Grenoble Informatics Laboratory, Grenoble, France Frédéric Le Mouël, Université de Lyon, INRIA, INSA-Lyon, CITI, Lyon, France

2. Context Aware Adaptable Applications - A global approach Marc Dalmau, Philippe Roose and Sophie Laplace, LIUPPA, IUT de Bayonne 2, Allée du Parc Montaury 64600 Anglet, France

3. Embedded Sensor System for Early Pathology Detection in Building Construction Santiago J. Barro Torres and Carlos J. Escudero Cascón, Department of Electronics and Systems, University of A Coruña, A Coruña, 15071 Campus Elviña, Spain

4. SeeReader: An (Almost) Eyes-Free Mobile Rich Document Viewer Scott Carter and Laurent Denoue, FX Palo Alto Laboratory, Inc., 3400 Hillview Ave., Bldg. 4, Palo Alto, CA 94304

5. Improvement of Text Dependent Speaker Identification System Using Neuro-Genetic Hybrid Algorithm in Office Environmental Conditions Md. Rabiul Islam, Department of Computer Science & Engineering, Rajshahi University of Engineering & Technology (RUET), Rajshahi-6204, Bangladesh Md. Fayzur Rahman, Department of Electrical & Electronic Engineering, Rajshahi University of Engineering & Technology (RUET), Rajshahi-6204, Bangladesh

6. MESURE Tool to benchmark Java Card platforms Samia Bouzefrane and Julien Cordry, CEDRIC Laboratory, Conservatoire National des Arts et Métiers, 292 rue Saint Martin, 75141, Paris Cédex 03, France Pierre Paradinas, INRIA, Domaine de Voluceau - Rocquencourt -B.P. 105, 78153 Le Chesnay Cedex, France

IJCSI International Journal of Computer Science Issues, Vol. 1, 2009 ISSN (Online): 1694-0784 ISSN (Printed): 1694-0814

A Survey on Service Composition Middleware in Pervasive Environments

Noha Ibrahim¹, Frédéric Le Mouël²

¹ Grenoble Informatics Laboratory, Grenoble, France, [email protected]

² Université de Lyon, INRIA, INSA-Lyon, CITI, Lyon, France, [email protected]

Abstract

The development of pervasive computing has brought to light a challenging problem: how to dynamically compose services in heterogeneous and highly changing environments? We propose a survey that defines service composition as a sequence of four steps: translation, generation, evaluation, and finally execution. With this simple but powerful model, we describe the major service composition middleware. Then, a classification of these service composition middleware according to pervasive requirements - interoperability, discoverability, adaptability, context awareness, QoS management, security, spontaneous management, and autonomous management - is given. The classification highlights what has been done and what remains to be done to develop service composition in pervasive environments.

Keywords: middleware, service oriented architecture, service composition, pervasive environment, classification

1. Introduction

Middleware are enabling technologies for the development, execution and interaction of applications. These software layers stand between the operating system and applications. They have evolved from simple beginnings - hiding network details from applications - into sophisticated systems that handle many important functionalities for distributed applications, providing support for distribution, heterogeneity and mobility. SOA middleware[2] embodies a programming paradigm that uses "services" as the unit of computing work. Service-oriented computing enables the development of loosely coupled systems that are able to communicate, compose and evolve in an open, dynamic and heterogeneous environment. A service-oriented system comprises software systems that interact with each other through well-defined interfaces. If middleware were designed to help manage the complexity and heterogeneity inherent in distributed systems, one can imagine the new role middleware has to play to keep up with the evolution from distributed and mobile computing to pervasive computing. Hardly a day passes without some new evidence of the proliferation of portable computers in the marketplace, or of the growing demand for wireless communication. Support for mobility has been the focus of numerous experimental systems, research efforts and commercial products for several decades. The mission of mobile computing is to allow users to access any information using any device over any network at any time. When this access extends to every information source, using every device, over every network, at every time, one can say that mobile computing has evolved into what we now call pervasive computing[13].

In pervasive environments where SOA has been adopted, functionalities are increasingly modeled as services and published as interfaces. The proliferation of new services encourages applications to use them, combined together. In this case, we speak of a composite service. The process of developing a composite service is called service composition[7]. Composing services is the new challenge awaiting SOA middleware[2] as it meets pervasive environments[13]. Indeed, the variety of service providers in a pervasive environment, and the heterogeneity of the services they provide, require applications and users of these environments to develop models, techniques and algorithms to compose services and execute them. Service composition needs to follow some requirements[19][33][34] in order to resolve the challenges brought by pervasivity. Several surveys[5][7][22][31][33] have dealt with service composition. Many of them[7][31] classified the middleware under exclusive criteria such as manual versus automated, static versus dynamic, and so on. Others[5][22][33] classified service composition middleware under different domains such as artificial intelligence, formal methods, and so on. But none of these surveys proposed a generic reference model to describe service composition middleware in pervasive environments.

In this article, we propose:

• a generic service composition middleware model, the SCM model, a novel way to describe the service composition problem in pervasive environments,
• a description of six middleware architectures using the SCM model as a backbone, highlighting the strengths and weaknesses of each middleware, and
• finally, a classification of these middleware under pervasive requirements identified by the literature as essential for service composition in pervasive environments.

The outline is as follows. In section 2, we define the service composition middleware (SCM) model and explain its modules. In section 3, we describe six service composition middleware by mapping their architectures to the SCM model. In section 4, we classify these middleware according to the pervasive requirements we identify. Finally, section 5 concludes our work and gives research directions for the service composition problem.

2. SCM: Service Composition Middleware Model

Based on several studies[22][24] that resolve the service composition process into several fundamental problems, we define a service composition middleware as a framework providing tools and techniques for composing services. We define a service composition middleware model, the SCM model, as an abstract layer, general enough to describe all existing service composition middleware. The SCM model is at a high level of abstraction, without considering a particular service technology, language, platform or algorithm used in the composition process. The aim of this definition is to give the basis to discuss similarities and differences, advantages and disadvantages of all available service composition middleware, and to highlight the existing gaps concerning the service composition problem in pervasive environments.

As depicted in Figure 1, the SCM interacts with the application layer by receiving functionality requests from users or applications[5][7]. The SCM needs to respond to the functionality requests by providing services that fulfill the demand. These services can be atomic or composite. The Service Repository represents all the distributed service repositories where services are registered. The SCM interacts with the Service Repository to choose services to compose.

Figure 1 SCM model

The SCM is split into four components: the Translator, the Generator, the Evaluator, and the Builder. The process of service composition includes the following phases:

1. Applications specify their needed functionalities by sending requests to the middleware. These requests can be described with diverse languages or techniques. The request descriptions are translated into a language the system understands in order to be used by the middleware. Most systems distinguish between external specification languages and internal ones. The external ones are used to enhance accessibility to the outside world, commonly the users. Users can hence express what they need or want in a relatively easy way, usually using semantics and ontologies. Internal specifications correspond to a more formal way of expressing things and use specific languages, models, and logics - usually, for SOA, a generic service model. Some research[30] provides a translation mechanism from the available service technologies and service descriptions into one model. Others, such as SELF-SERV[25], propose a wrapper to provide a uniform access interface to services[8]. These middleware usually realize a transformation from one model to another or from one technology to another. The technologies are predefined in advance and usually consist of the legacy ones. If new technology models appear in the environment, the Translator will need to be expanded to take these technologies into consideration. Another family of research[6][26] does not provide the Translator module, as these systems use a common model to describe all the services of the environment. They use common description languages such as DAML-S - recently renamed OWL-S[36] - for describing atomic services, composed services and user queries.

2. Once translated, the request specification is sent to the Generator. The Generator tries to provide the needed functionalities by composing the available service technologies, and hence composing their functionalities. It tries to generate one or several composition plans with the same or different technology services available in the environment. It is quite common to have several ways to satisfy the same requirement, as the number of available functionalities in pervasive environments keeps growing. Composing services is technically performed by chaining interfaces using syntactic or semantic matching methods. The interface chaining is usually represented as a graph or described with a specific language. Graph-based approaches[8][10] represent the semantic matching between the inputs and outputs of service operations. This is a powerful technique, as many algorithms can be applied to graphs to optimize the service composition. A number of languages have been proposed in the literature to describe data structures in general and functionalities offered by devices in particular. Some languages, such as XML, are widely used and generic enough for multiple purposes; others are more specific to tasks such as service composition, orchestration or choreography - for example the Business Process Execution Language (BPEL4WS or BPEL[4]) and OWL-S[36].

3. The Evaluator chooses the most suitable composition plan for a given context. This selection is done from all the plans provided by the Generator. In pervasive environments, this evaluation depends strongly on many criteria such as the application context, the service technology model, the quality of the network, the non-functional service QoS properties, and so on. The evaluation needs to be dynamic and adaptable, as changes may occur unpredictably and at any time. Two main approaches are commonly used: rule-based planning[27][28][29] and formal methods[6][10][12][30]. Rules evaluate whether a given composition plan is appropriate or not in the actual context. While rules are commonly used as an evaluation approach, they lack the dynamism proper to pervasive environments. A major problem of the evaluation approach is the lack of dynamic tools to verify the correctness - functional and non-functional aspects - of the service composition plan. This is precisely the main advantage of formal methods. The most popular and advanced technique today to evaluate a given composition plan is evaluation by formal methods, such as Petri nets and process algebras like the Pi-calculus. Petri nets are a framework to model concurrent systems. Their main attraction is the natural way of identifying basic aspects of concurrent systems, both mathematically and conceptually. Petri nets are very commonly merged with composition languages such as BPEL[4] and OWL-S[36]. On the other hand, automata, or labeled transition systems, are a well-known model underlying formal system specifications and are increasingly used in the service composition process[30].

4. The Builder executes the selected composition plan and produces an implementation corresponding to the required composite service. It can apply a range of techniques to realize the effective service composition. These techniques depend strongly on the service technology model being composed and on the context the system evolves in. Once the composite service is available, it can be executed by the application that required its functionality. In the literature, we distinguish different kinds of builders provided by service composition middleware. Some builders are very basic and use only simple invocations in sequence to a list of services[17]. These services need to be available; otherwise the composition result is not certain. Others[35] provide complex discovery protocols adapted to the heterogeneous nature of pervasive environments. The discovery process is in charge of finding and choosing the services taking part in the composition, and of contextually choosing the most suitable ones if many are available. Finally, some systems propose solutions located not only in the middleware layer but also in the networking one.

We argue that the SCM model is generic enough to describe the service composition process in pervasive environments. In the next section, we use the SCM model as a backbone for describing various middleware that perform service composition.
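The four-phase pipeline described in this section can be sketched in a few lines of code. The following Python sketch is our own illustration, not taken from any of the surveyed middleware: the toy repository, the (inputs, outputs, QoS) service tuples, and the greedy chaining strategy are all invented assumptions, and real Generators use far richer matching.

```python
# Illustrative sketch of the SCM pipeline; all names and data are hypothetical.
# A service is a dict with a name, required inputs, produced outputs, and a QoS score.
from itertools import permutations

REPOSITORY = [
    {"name": "ocr",       "in": {"image"},   "out": {"text"},    "qos": 0.9},
    {"name": "translate", "in": {"text"},    "out": {"text_fr"}, "qos": 0.8},
    {"name": "tts",       "in": {"text_fr"}, "out": {"audio"},   "qos": 0.7},
]

def translator(request):
    """Phase 1: turn an external request into an internal (inputs, outputs) spec."""
    return {"in": set(request["have"]), "out": set(request["want"])}

def generator(spec, repo):
    """Phase 2: chain services whose interfaces match, yielding composition plans."""
    plans = []
    for order in permutations(repo):
        available, plan = set(spec["in"]), []
        for svc in order:
            if svc["in"] <= available:      # syntactic input matching
                available |= svc["out"]
                plan.append(svc)
        if spec["out"] <= available:
            plans.append(plan)
    return plans

def evaluator(plans):
    """Phase 3: pick the plan with the best average QoS."""
    return max(plans, key=lambda plan: sum(s["qos"] for s in plan) / len(plan))

def builder(plan):
    """Phase 4: 'execute' the plan by invoking services in sequence (names only here)."""
    return [svc["name"] for svc in plan]

spec = translator({"have": ["image"], "want": ["audio"]})
best = evaluator(generator(spec, REPOSITORY))
print(builder(best))  # ['ocr', 'translate', 'tts']
```

The greedy chaining above may include unneeded services in a plan; a real Generator would prune them, and a real Builder would invoke actual service endpoints rather than return names.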

3. Service Composition Middleware in Pervasive Environments

In this section, we describe six middleware for service composition adapted to pervasive environments by mapping them to our SCM model. The chosen middleware are architectures, platforms or algorithms that propose solutions to the service composition problem: MySIM[17], PERSE[30], SeSCo[10], Broker[6], SeGSeC[8] and WebDG[12]. For each middleware, we describe the service composition runtime process and the prototypes developed, and identify the four modules of our SCM model in their architectures.

3.1 MySIM: Spontaneous Service Integration for Pervasive Environments

MySIM[17] is a spontaneous middleware that integrates services in a transparent way, without disturbing the users and applications of the environment. Service integration is defined as a service transformation from one service technology to another (Translator), a service composition, and a service adaptation. MySIM selects services that are composable, generates composition plans (Generator), evaluates their QoS degrees (Evaluator) and implements new composite services in the environment (Builder). These new services publish well-known interfaces but with new implementations and better QoS. MySIM also proposes to adapt the application execution to the available services by redirecting application calls to services with better QoS.

Figure 2 MySIM mapped to SCM

The MySIM architecture is depicted under the SCM model in Figure 2. The Translator service transforms services into a generic service model. The Generator service is responsible for the syntactic and semantic matching of service operations for composition and adaptation purposes. The QoS service evaluates the composition or substitution matching via non-functional properties, and the Decision service decides which services to compose or substitute. Finally, the Builder service implements the composite service, and the Registry service publishes its interfaces. MySIM is implemented on the OSGi/Felix platform. It uses reflection techniques for the syntactic interface matching and an online ontology reasoner for the semantic matching. The service composition is technically done by generating new bundles (units of deployment) that compose the services together. The results show the heavy cost of semantic matching. The solution is interesting, but solutions need to be found to make spontaneous service integration scale to large environments.
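The QoS-driven substitution that MySIM performs can be illustrated with a toy computation. Everything below (the shared interface, the QoS attributes, the weights) is a hypothetical Python sketch of ours, not MySIM's actual OSGi implementation:

```python
# Hypothetical sketch of QoS-driven service substitution in the spirit of MySIM;
# the candidates, QoS attributes, and weights are invented for illustration.

candidates = [
    # two services publishing the same well-known interface
    {"id": "printerA", "latency_ms": 120, "availability": 0.95},
    {"id": "printerB", "latency_ms": 40,  "availability": 0.99},
]

def qos_score(svc, w_latency=0.5, w_avail=0.5):
    """Aggregate non-functional properties into one score (higher is better)."""
    latency_score = 1.0 / (1.0 + svc["latency_ms"] / 100.0)  # lower latency -> higher score
    return w_latency * latency_score + w_avail * svc["availability"]

def substitute(current_id, candidates):
    """Redirect the application call to the candidate with the best QoS."""
    best = max(candidates, key=qos_score)
    return best["id"]

print(substitute("printerA", candidates))  # printerB
```

In MySIM the equivalent decision is taken by the QoS and Decision services over matched interfaces, and the redirection is realized by generating a new OSGi bundle rather than by returning an identifier.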

3.2 PERSE: Pervasive Semantic-aware Middleware

PERSE[30] proposes a semantic middleware that deals with well-known functionalities such as service discovery, registration and composition. This middleware provides a service model to support interoperability between heterogeneous service description languages, both semantic and syntactic (Translator). The model further supports the formal specification of service conversations as finite state automata, which enables automated reasoning about service behavior independently from the underlying conversation specification language. Hence, pervasive service conversations described with different service conversation languages (Generator) can be integrated (Builder) toward the realization of a user task. The model also supports the specification of service non-functional properties based on existing QoS models to meet the specific requirements of each pervasive application, through the QoS-aware Composition service (Evaluator).
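PERSE's idea of specifying a service conversation as a finite state automaton can be sketched as follows. The states, operations, and transition table below are invented for illustration and are not part of PERSE:

```python
# Minimal sketch of a service conversation modeled as a finite state automaton,
# in the spirit of PERSE's conversation model; states and operations are invented.

# Transition table: (state, operation) -> next state
TRANSITIONS = {
    ("start", "login"):  "ready",
    ("ready", "query"):  "ready",
    ("ready", "logout"): "done",
}
ACCEPTING = {"done"}

def conversation_valid(operations, start="start"):
    """Check that a sequence of operations is a legal, complete conversation."""
    state = start
    for op in operations:
        key = (state, op)
        if key not in TRANSITIONS:
            return False          # operation not allowed in this state
        state = TRANSITIONS[key]
    return state in ACCEPTING     # conversation must end in an accepting state

print(conversation_valid(["login", "query", "query", "logout"]))  # True
print(conversation_valid(["query", "logout"]))                    # False
```

Representing conversations this way is what lets a middleware reason about behavioral compatibility of services independently of the language their conversations were originally described in.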

Figure 3 PERSE mapped to SCM

The PERSE architecture is depicted under the SCM model in Figure 3. The Evaluator module is the most developed, as it verifies the correctness of the composition plan and analyzes the service QoS before composing services. A Translator is also available to translate the legacy services into a common, semantically and syntactically described model. The Generator semantically matches services. The Builder discovers the services in the environment and simply invokes them in sequence. The authors[30] have implemented a prototype of PERSE using Java 1.5. Legacy plugins have been developed for SLP using jSLP, for UPnP[35] using Cyberlink, and for UDDI using jUDDI. The efficiency of PERSE has been evaluated on the cost of semantic service matching, the time to organize the semantic service registry, the time to publish and locate a semantic service description, the scalability of the registry compared with a WSDL service registry, and finally the processing time for service composition with and without QoS support.

3.3 SeSCo: Seamless Service Composition

SeSCo[10] presents a service composition mechanism for pervasive computing. It employs the service-oriented middleware platform called Pervasive Information Communities Organization (PICO) to model and represent resources as services. The proposed service composition mechanism models services as directed attributed graphs, maintains a repository of service graphs, and dynamically combines multiple basic services into complex services (Builder). It constructs possible service compositions based on their semantic and syntactic descriptions (Generator). SeSCo[10] proposes a hierarchical service overlay mechanism based on a LATCH protocol (Evaluator). The hierarchical aggregation scheme exploits the presence of heterogeneity through service cooperation. Devices with higher resources assist those with restricted resources in accomplishing service-related tasks such as discovery, composition, and execution.

Figure 4 SeSCo mapped to SCM

The SeSCo architecture is depicted under the SCM model in Figure 4. No Translator module is provided, and SeSCo uses the same language to represent the user task and the composite service. The service matching is a semantic interface matching, and the evaluation is based on input/output matching correctness. SeSCo[10] evaluated its approach by calculating the composition success ratio for different composition lengths, where the composition length is essentially the number of services that can be used to compose a single service. This evaluation shows the effect of limiting the composition length to a predefined number. If the service density is high, a successful composition can be achieved even with a low composition length. However, at lower service densities, it might be necessary to allow higher composition lengths for better composition.
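SeSCo's notion of composition length - the number of services chained to form a single composite - can be illustrated with a small sketch. The services and the breadth-first chaining below are our own assumptions, not SeSCo's LATCH-based mechanism:

```python
# Sketch of bounded-length service chaining in the spirit of SeSCo's graph-based
# composition; the services and length bound are invented for illustration.
from collections import deque

# Each service is (name, input type, output type).
SERVICES = [
    ("resize",    "image",   "thumbnail"),
    ("ocr",       "image",   "text"),
    ("summarize", "text",    "summary"),
    ("speak",     "summary", "audio"),
]

def compose(source, target, max_length):
    """BFS over services, chaining outputs to inputs, up to max_length hops."""
    queue = deque([(source, [])])
    while queue:
        data, path = queue.popleft()
        if data == target:
            return path
        if len(path) >= max_length:
            continue
        for name, inp, out in SERVICES:
            if inp == data:
                queue.append((out, path + [name]))
    return None  # no composition within the length bound

print(compose("image", "audio", max_length=3))  # ['ocr', 'summarize', 'speak']
print(compose("image", "audio", max_length=2))  # None
```

The second call illustrates the paper's observation: with a low length bound and low service density, a composition that does exist cannot be found, whereas raising the bound (or the density) makes it reachable.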

3.4 Broker Approach for Service Composition

Broker[6] presents a distributed architecture and associated protocols for service composition in mobile environments that take into consideration mobility, dynamically changing service topology, and device resources. The composition protocols are based on distributed brokerage mechanisms (Evaluator) and utilize a distributed service discovery process over ad hoc network connectivity. The proposed architecture is based on a composition manager, a device that manages the discovery, integration (Generator), and execution of a composite request (Builder). Two broker selection-based protocols - a dynamic one and a distributed one - are proposed in order to distribute the composition requests
to the composition managers available in the environment. These protocols depend on a device-specific potential value, taking into account the services available on the device, its computation and energy resources, and the service topology of the surrounding vicinity.
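A device's potential value can be illustrated with a toy computation. The attributes and weights below are hypothetical; the actual Broker protocols use richer device and topology information:

```python
# Hypothetical computation of a device "potential value" for broker election,
# loosely following the Broker approach; attributes and weights are invented.

devices = [
    {"id": "pda",    "services": 2, "cpu": 0.3, "energy": 0.5, "neighbors": 3},
    {"id": "laptop", "services": 5, "cpu": 0.9, "energy": 0.8, "neighbors": 6},
    {"id": "sensor", "services": 1, "cpu": 0.1, "energy": 0.2, "neighbors": 2},
]

def potential(dev, weights=(0.3, 0.3, 0.2, 0.2)):
    """Weighted sum of hosted services, computation, energy and connectivity."""
    w_srv, w_cpu, w_nrg, w_top = weights
    return (w_srv * dev["services"] + w_cpu * dev["cpu"]
            + w_nrg * dev["energy"] + w_top * dev["neighbors"])

def elect_broker(devices):
    """The device with the highest potential manages the composition request."""
    return max(devices, key=potential)["id"]

print(elect_broker(devices))  # laptop
```

In the distributed protocol this election would be negotiated among devices rather than computed centrally, which is precisely what the simulation results compare against a centralized broker.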

Figure 5 Broker mapped to SCM

The Broker architecture is depicted under the SCM model in Figure 5. The Broker arbitration is the Evaluator module, as it evaluates the available devices and decides how to distribute the composition request, described in a specific language (DSF), taking the device context into account. The evaluation is done here before the composition process. The Service Integration describes the composition sequence using a specific language (ESF) and passes it to the Service Execution (the Builder) to execute it. Broker[6] implemented its protocols as part of a distributed event-based mobile network simulator, to test the two proposed broker arbitration protocols and the composition efficiency. Simulation results show that these protocols - especially the distributed approach - outperform the usual centralized broker composition in terms of composition efficiency and broker arbitration efficiency.

3.5 SeGSeC: Semantic Graph-Based Service Composition

SeGSeC[8] proposes an architecture that obtains the semantics of the requested service in an intuitive form (e.g. using natural language) (Translator), and dynamically composes the requested service based on its semantics (Generator). To compose a service based on its semantics, the proposed architecture supports semantic representation of services - through a component model named Component Service Model with Semantics (CoSMoS) - discovers the services required for composition - through a middleware named Component Runtime Environment (CoRE) - and composes the requested service based on its semantics and the semantics of the discovered services - through a service composition mechanism named Semantic Graph-Based Service Composition (SeGSeC).

Figure 6 SeGSeC mapped to SCM

The SeGSeC architecture is depicted under the SCM model in Figure 6. The Request Analyser translates the user request into an internal system language using a graph-based approach. The Semantic Analyser and Service Composer produce the composition workflow, ready to be executed by the Service Performer. The workflow respects the semantic matching composition rules, and correctness is guaranteed via the Evaluator module. SeGSeC[8] was evaluated according to the number of services deployed and the time needed to discover, match and compose services. Another set of evaluations considered not only the number of deployed services but especially the number of operations these services implement. The results show that SeGSeC performs efficiently when only a small number of services are deployed, and that it scales with the number of services deployed if the discovery phase is done efficiently.

3.6 WebDG: Semantic Web Services Composition

WebDG[12] proposes an ontology-based framework for the automatic composition of Web services. [12] presents an algorithm to generate composite services from high-level declarative descriptions. The algorithm uses composability rules to compare the syntactic and semantic features of Web services and determine whether two services are composable.
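An illustrative composability check in the spirit of WebDG's rules (the actual rule set in [12] is richer and ontology-driven): two operations are considered composable here if they share a semantic category and the first operation's output message covers the second's input message. The operation names and fields below are hypothetical.

```python
def message_composable(out_msg, in_msg):
    """Every input parameter of the target must be provided by the source output."""
    return all(p in out_msg for p in in_msg)

def composable(op_a, op_b):
    """Combine a semantic check (category) with a syntactic check (messages)."""
    return (op_a["category"] == op_b["category"]
            and message_composable(op_a["outputs"], op_b["inputs"]))

find_doctor = {"category": "healthcare",
               "inputs": {"zip"}, "outputs": {"doctor_id", "address"}}
book_visit = {"category": "healthcare",
              "inputs": {"doctor_id"}, "outputs": {"confirmation"}}
print(composable(find_doctor, book_visit))  # True
```

Applying such pairwise rules over all discovered operations is what lets a Generator enumerate candidate composition plans automatically.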


Figure 7 WebDG mapped to SCM

The WebDG architecture is depicted under the SCM model in Figure 7. The service composition approach comprises four phases: request specification (Translator), service description matchmaking (Generator), composition plan selection (Evaluator) and composite service generation (Builder). A prototype implementation of WebDG is provided and tested on E-government Web service applications. The WebDG evaluation aims to test the possibility of generating plans for a large number of service interfaces, the effectiveness and speed of the matchmaking algorithm, and the role of the selection phase (QoC parameters) in reducing the number of generated plans. The results show that most of the time is spent on checking message composability. Moreover, a relatively low value of composition completeness generates more plans, each plan containing a small number of composable operations. In contrast, a high value of this ratio generates a smaller number of plans, each plan having more composable operations.

4. Classification of the Pervasive Service Composition Middleware

As shown above, the SCM model is generic enough to provide functional modules that describe the existing service composition middleware. We classify the middleware - MySIM[17], PERSE[30], SeSCo[10], Broker[6], SeGSeC[8] and WebDG[12] - according to pervasive environment requirements. We first list and explain these pervasive requirements for service composition middleware, then a classification of these middleware is given.

4.1 Pervasive Requirements

Pervasive computing brought new challenges to distributed and mobile computing. We identify the following eight fundamental requirements for service composition in pervasive environments: interoperability, discoverability, adaptability, context awareness, QoS management, security, spontaneous management and autonomous management.

Interoperability is the ability of two or more systems or components to exchange information and to use the information that has been exchanged. Ubiquitous computing environments, following Mark Weiser's definition, consist of various kinds of computational devices, networks, and collaborating software and hardware entities. Due to the large number of heterogeneous and cooperating parties, interoperability is required at all levels of ubiquitous computing. Service composition middleware need to take advantage of all the functionalities available in the surroundings, and for that they need to be interoperable.

Discoverability is a major issue for ubiquity and composition, as devices and services need to be located and accessed before being composed. One of the fundamental challenges of distributed and highly dynamic environments is how applications can discover the surrounding entities and, conversely, how applications can be discovered by the other entities in the system. In a pervasive system, the execution environment of applications can logically be considered as a single container including all applications, other components, and resources. Moreover, the idea in distributed environments is that resources can be accessed without any knowledge of where the resources or the users are physically located.

Adaptability is the ability of a software entity to adapt to a changing environment. Changes in applications' and users' requirements, or changes within the network, may require the presence of adaptation mechanisms within the middleware. Moreover, adaptation is necessary when a significant mismatch occurs between the supply and demand of a resource. As the application's execution environment changes due to the user's mobility, vital resources need to be substituted by corresponding resources in the new environment in order to ensure continuous operation. The requirement for adaptation is present on many different layers of a computing system.
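The discoverability requirement can be sketched with a minimal registry: services register their interface under a capability, may leave at any time, and clients look them up by capability rather than by location. The class and method names are illustrative and not tied to any of the surveyed middleware.

```python
class Registry:
    """Minimal service registry illustrating the discoverability requirement."""

    def __init__(self):
        self._services = {}

    def register(self, name, capability, endpoint):
        self._services[name] = (capability, endpoint)

    def unregister(self, name):
        # devices and services come and go at any time
        self._services.pop(name, None)

    def discover(self, capability):
        """Lookup by capability, with no knowledge of physical location."""
        return [n for n, (c, _) in self._services.items() if c == capability]

reg = Registry()
reg.register("cam1", "video", "tcp://10.0.0.5")
reg.register("mic1", "audio", "tcp://10.0.0.6")
print(reg.discover("video"))  # ['cam1']
```

Real discovery protocols (e.g. over ad hoc networks) distribute this registry, but the contract exposed to the composition middleware is essentially the same lookup-by-capability operation.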
Context awareness is the ability of pervasive middleware to be aware of devices coming and leaving, functionalities offered and withdrawn, quality of service changing, etc. Middleware need to be aware of all these changes in order to offer the best functionalities to applications regardless of the surrounding context. When considering context-aware systems in general, some common functionalities present in almost every system can be identified: context sensing and processing, context information representation, and the applications that utilize the context information. In


general, context information can be divided into low- and high-level context information. Low-level context information can be collected using sensors in the system. Low-level context information sources can be combined or processed further into higher-level context information.

QoS management is essential in dynamic environments, where connectivity is highly variable. A pervasive middleware for service composition needs to take the non-functional parameters of applications and devices into consideration in order to provide viable and flexible composition plans and composite services. QoS parameters concern not only the services but also the devices where the execution takes place. The composition execution needs to rely on these parameters in order to take place in the best conditions. Not only do the QoS of the different services need to be compatible, but the devices performing the composition also need to respect certain characteristics and constraints.

Security mechanisms, such as authentication, authorization, and accounting (AAA) functions, may be an important part of the middleware in order to intelligently control access to computer and network resources, enforce policies, audit network/user usage, etc. Another important aspect concerns privacy and trust in pervasive environments. In the presence of unknown devices, middleware need to respect the privacy of users and provide trust mechanisms adapted to the ever-changing nature of the environment.

Spontaneous management concerns the ability of a pervasive middleware to compose services independently of user and application requests. The middleware spontaneously composes services that are compatible together and publishes a new composite service into the environment. The new service is registered and can publish its interfaces in order to be discovered and executed by applications.
Spontaneous service composition is an interesting feature in pervasive environments: services meet when users encounter each other, and interesting composite services can be generated from these meetings, even when not required by users at that moment.

Autonomous management concerns the ability of a pervasive middleware to control and manage its resources, functions, security and performance, in the face of failures and changes, with little or no human intervention. The complexity of future ubiquitous computing environments will be such that it will be impossible for human administrators to perform their

traditional functions of configuration management, performability management, and security management. Instead, most of these management functions must be automated, allowing humans to concentrate on the definition and supervision of high-level management policies, while the middleware itself takes care of translating these high-level policies into automated control structures. The challenge is therefore to move from classical middleware support for configuration, performability and security management to support for self-configuration, self-tuning, self-healing and self-protecting capabilities.

We classify the service composition middleware - MySIM[17], PERSE[30], SeSCo[10], Broker[6], SeGSeC[8], and WebDG[12] - under the above requirements. For each middleware, we analyze its four modules - Translator, Generator, Evaluator, and Builder - and detail whether they respect the pervasive requirements. The first section depicts the requirements that are fulfilled, to a certain extent, by the service composition middleware. The second section explains the requirements that are still left behind. Our classification is given in Figure 8.
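The self-healing capability mentioned above can be sketched as one cycle of an autonomic monitor/analyze/plan/execute loop in the style of autonomic computing; the health metrics, policy thresholds, and "restart" repair action are illustrative, not part of any surveyed middleware.

```python
def autonomic_step(state, policy):
    """One control cycle: observe metrics, detect policy violations, repair."""
    # analyze: which components fall below the minimum acceptable health?
    violations = {k: v for k, v in state.items() if v < policy[k]}
    # plan: one repair action per violating component
    repairs = [f"restart:{k}" for k in sorted(violations)]
    # execute: here we assume a restart restores the component to full health
    for k in violations:
        state[k] = 1.0
    return repairs

state = {"service_a": 0.2, "service_b": 0.9}
policy = {"service_a": 0.5, "service_b": 0.5}  # high-level policy: min health
print(autonomic_step(state, policy))  # ['restart:service_a']
print(autonomic_step(state, policy))  # [] - nothing left to repair
```

The point of the pattern is the division of labor described in the text: humans write the `policy` thresholds, and the loop translates them into automated control actions.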

4.2 Service Composition Middleware Meeting Pervasive Requirements

In this section, we are interested in the pervasive requirements that are fulfilled by service composition middleware: discoverability, adaptability, context awareness, and QoS management. If some pervasive requirements are relatively well fulfilled by current composition middleware, others are still at a preliminary stage. All middleware provide the discoverability and context awareness requirements. These requirements are intrinsic to every composition middleware wanting to evolve in dynamic and ever-changing environments such as pervasive environments. They are essential when constructing and evaluating composition plans, but also when discovering and invoking services. Indeed, generating and evaluating composition plans must be contextual: as services can come and go at any time, a composition plan constructed at a certain time needs to be re-evaluated before execution, in case some changes have affected it. Hence, context awareness is provided not only by the Builder but also by the Generator and Evaluator modules.
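A hedged sketch of QoS-driven plan selection by an Evaluator module: among the candidate composition plans produced by a Generator, keep those whose end-to-end latency fits a constraint and pick the most reliable one. The aggregation rules (sum of latencies, product of reliabilities) are a common simplification, not a prescription from any of the surveyed middleware.

```python
def plan_qos(plan):
    """Aggregate QoS of a sequential plan: total latency, overall reliability."""
    latency = sum(s["latency_ms"] for s in plan)
    reliability = 1.0
    for s in plan:
        reliability *= s["reliability"]
    return latency, reliability

def select_plan(plans, max_latency_ms):
    """Evaluator step: filter infeasible plans, then maximize reliability."""
    feasible = [p for p in plans if plan_qos(p)[0] <= max_latency_ms]
    if not feasible:
        return None
    return max(feasible, key=lambda p: plan_qos(p)[1])

plans = [
    [{"latency_ms": 40, "reliability": 0.99}, {"latency_ms": 80, "reliability": 0.95}],
    [{"latency_ms": 30, "reliability": 0.90}, {"latency_ms": 30, "reliability": 0.90}],
]
best = select_plan(plans, max_latency_ms=150)
print(best is plans[0])  # True: slower but more reliable, still feasible
```

Re-running the selection just before execution, on refreshed QoS values, is what makes the evaluation contextual in the sense discussed above.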


Figure 8 Service Composition Middleware Classification

The adaptability requirement is fulfilled by four of the six classified middleware (MySIM[17], PERSE[30], SeSCo[10], and Broker[6]) for different SCM modules. The environmental changes that affect a pervasive environment, such as devices coming and leaving or services becoming unavailable, require from the middleware special mechanisms to re-evaluate and adapt their service compositions to these changes. As we can see, some middleware propose adaptation mechanisms, but this requirement is far from being fulfilled by all service composition middleware. In current research, adaptation is considered more as a field of research[35] than as a requirement to fulfill. Adapting services can be seen as a way of integrating services into their new environments.

The QoS management requirement is fulfilled by five of the six classified middleware (MySIM[17], PERSE[30], SeSCo[10], Broker[6] and WebDG[12]). The modules that usually respect the QoS properties are the Generator, the Evaluator and the Builder. The Evaluator relies on the service QoS parameters in order to choose the most suitable plan from all possible composition plans. QoS is especially relevant for stateful services. A composition plan of stateful services needs to take QoS into account, as the resulting composition may not execute in case of severe incompatibilities in QoS between the combined services. The Builder can analyze the QoS parameters in order to choose the devices and platforms where to execute the service composition, depending on power or memory properties, but also to choose the services to compose depending on the devices they execute on. This requirement is especially considered in the development of multimedia applications in variable environments such as pervasive environments[16]. Indeed, composing services within multimedia applications imposes a rigorous respect of the QoS properties, otherwise the whole application may not execute.

4.3 Service Composition Middleware Missing Pervasive Requirements

Today's service composition middleware show a real lack in providing interoperability, spontaneous management, security mechanisms and autonomous management for service composition in pervasive environments.

The interoperability requirement is largely left behind in today's service composition middleware. Figure 8 shows that only three middleware (MySIM[17], PERSE[30] and SeGSeC[8]) fulfill this requirement, and only for the Translator module. Interoperability is currently resolved by explicit technical translations from one service model to another. In this way, interoperability is only resolved at the technology level. On a more theoretical and formal level, the use of semantic and ontology-based languages[1] is not sufficient to make service composition fully interoperable. Very often, service providers use different ontology domains, and ontology transformations from one domain to another are more than needed.

Spontaneous management is only considered by the MySIM[17] middleware. Indeed, all of the other five middleware are goal-oriented and respond mainly to predefined functionality requests coming from the application layer. None of these middleware propose a spontaneous service composition that delivers new services and functionalities into the environment without the intervention of users or applications. MySIM[17] proposes a service integration middleware that generates new services in the environment spontaneously and automatically. Compatible services are composed on the fly, without any intervention, upon the middleware's own decision based on semantic and syntactic service matching.
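An illustrative sketch of spontaneous composition in the spirit of MySIM[17] (simplified: the real middleware combines semantic and syntactic interface matching): periodically scan the available services and publish a composite whenever one service's output type feeds another's input type, with no user or application request involved. The service names and type strings are hypothetical.

```python
def spontaneous_compose(services):
    """Scan all service pairs and build a composite for each output/input match."""
    composites = []
    for a_name, a in services.items():
        for b_name, b in services.items():
            if a_name != b_name and a["out"] == b["in"]:
                composites.append({
                    "name": f"{a_name}+{b_name}",  # new service, ready to register
                    "in": a["in"],
                    "out": b["out"],
                })
    return composites

services = {
    "translate": {"in": "text/fr", "out": "text/en"},
    "speak": {"in": "text/en", "out": "audio/en"},
}
print([c["name"] for c in spontaneous_compose(services)])  # ['translate+speak']
```

The resulting composite would then be registered in the environment like any other service, so that applications can later discover and invoke it.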


The middleware listed above do not propose solutions to address the problem of security or trust. They rely on the existing mechanisms proposed by the middleware and network layers, if any. Several other studies[14][15] address security features for service composition using contracts[15], formal verification methods[14], or a security model for enforcing access control in extensible component platforms[20].

No real autonomous composition management is provided. The middleware do not propose mechanisms to manage their resources, functions, security, and performance, in the face of failures and changes, with little or no human intervention. Pervasive environments capable of composing functionalities autonomously are still at a preliminary stage. A major domain that has dealt with autonomous management of composition is multi-agent systems. Combining multi-agent systems and service-oriented architecture is a well-known research direction to add autonomy features to services[9][18][21][23].

5. Conclusions

The development of pervasive computing has highlighted a well-identified problem, the service composition problem. Composing services together on various platforms and extending environments with new functionalities are the new trends pervasive computing aims to achieve. Many composition middleware have reached a certain maturity, and propose complete architectures and protocols to discover and compose services in pervasive environments. Many surveys[5][7][22][31][33] list service composition middleware according to predefined criteria or properties. They very often consider middleware for the composition of a particular technology, such as Web services composition middleware. The application of service composition middleware to pervasive environments is rather new, and there is a real lack of analysis and classification of service composition middleware under a reference model. In this article, we surveyed six complete service composition architectures for pervasive environments, located in the middleware layer: MySIM[17], PERSE[30], SeSCo[10], Broker[6], SeGSeC[8] and WebDG[12]. We do not claim the exhaustiveness of our classification, but we think that the major middleware for service composition in pervasive environments are depicted. We introduced a novel approach to study the service composition problem. We studied these systems by reducing the composition problem to four main problems: the service translation, the composition plan generation, the contextual plan evaluation, and finally the actual composition implementation. In each of these domains, several trends appeared to be commonly used: simple translation between diverse service technologies for the Translator, a graph-based or composition-language approach for the Generator, formal methods for the Evaluator, and discovery and invocation mechanisms for the Builder. Finally, we classified these middleware under several requirements related to the ubiquity of the environments. If some requirements, such as discoverability and context awareness, are well covered, others are still being explored, such as interoperability, adaptability and QoS management. Security, spontaneous and autonomous management open the way to many promising research trends, at the intersection of several major domains such as artificial intelligence and autonomic computing, for service composition middleware in pervasive environments.

References

[1] T. Bittner, M. Donnelly and S. Winter, "Ontology and semantic interoperability", in D. Prosperi and S. Zlatanova (eds.): Large-scale 3D data integration: Challenges and Opportunities, CRC Press (Taylor & Francis), pp. 139-160, 2005.
[2] T. Erl, Service-Oriented Architecture (SOA): Concepts, Technology, and Design, Prentice Hall, 2005.
[3] B. Cole-Gomolski, "Messaging Middleware Initiative Takes a Hit", Computerworld, 1997.
[4] M. Juric, P. Sarang and B. Mathew, Business Process Execution Language for Web Services (2nd edition), PACKT Publishing, 2006.
[5] A. Alamri, M. Eid and A. El Saddik, "Classification of the state-of-the-art dynamic web services composition techniques", Int. J. Web and Grid Services, Vol. 2, No. 2, 2006, pp. 148-166.
[6] D. Chakraborty, A. Joshi, T. Finin and Y. Yesha, "Service Composition for Mobile Environments", Journal on Mobile Networking and Applications, Special Issue on Mobile Services, Vol. 10, No. 4, 2005, pp. 435-451.
[7] S. Dustdar and W. Schreiner, "A survey on web services composition", Int. J. Web and Grid Services, Vol. 1, No. 1, 2005, pp. 1-30.
[8] K. Fujii and T. Suda, "Semantics-based dynamic service composition", IEEE Journal on Selected Areas in Communications, Vol. 23, No. 12, 2005.
[9] F. Ishikawa, N. Yoshioka and S. Honiden, "Mobile agent system for Web service integration in pervasive network", Systems and Computers in Japan, Wiley-Interscience, Vol. 36, No. 11, 2005, pp. 34-48.
[10] S. Kalasapur, M. Kumar and B. Shirazi, "Dynamic Service Composition in Pervasive Computing", IEEE Transactions on Parallel and Distributed Systems, Vol. 18, No. 7, 2007, pp. 907-918.
[11] B. Medjahed and Y. Atif, "Context-based matching for


web service composition", Distributed and Parallel Databases, Special Issue on Context-Aware Web Services, Vol. 21, No. 1, 2006, pp. 5-37.
[12] B. Medjahed, A. Bouguettaya and A. K. Elmagarmid, "Composing Web services on the Semantic Web", The VLDB Journal, Vol. 12, No. 4, 2003, pp. 333-351.
[13] M. Satyanarayanan, "Pervasive Computing: Vision and Challenges", IEEE Personal Communications, 2001.
[14] G. Barthe, D. Gurov and M. Huisman, "Compositional Verification of Secure Applet Interactions", in FASE '02: Proceedings of the 5th International Conference on Fundamental Approaches to Software Engineering, London, UK, 2002, pp. 15-32.
[15] N. Dragoni, F. Massacci, C. Schaefer, T. Walter and E. Vetillard, "A Security-by-contracts Architecture for Pervasive Services", in Security, Privacy and Trust in Pervasive and Ubiquitous Computing Workshop, 2007.
[16] B. Girma, L. Brunie and J.-M. Pierson, "Planning-Based Multimedia Adaptation Services Composition for Pervasive Computing", in 2nd International Conference on Signal-Image Technology & Internet-based Systems (SITIS'2006), LNCS series, Springer Verlag, 2006.
[17] N. Ibrahim, F. Le Mouël and S. Frénot, "MySIM: a Spontaneous Service Integration Middleware for Pervasive Environments", in ACM International Conference on Pervasive Services (ICPS'2009), London, UK, 2009.
[18] Z. Maamar, S. Kouadri and H. Yahyaoui, "A Web services composition approach based on software agents and context", in SAC'04: Proceedings of the 2004 ACM Symposium on Applied Computing, Nicosia, Cyprus, 2004.
[19] E. Niemela and J. Latvakoski, "Survey of requirements and solutions for ubiquitous software", in 3rd International Conference on Mobile and Ubiquitous Multimedia, 2004, pp. 71-78.
[20] P. Parrend and S. Frénot, "Component-Based Access Control: Secure Software Composition through Static Analysis", in Proceedings of the 7th International Symposium, Springer, LNCS 4954, Budapest, Hungary, 2008.
[21] C. Preist and C.
Bartolini and A. Byde, "Agent-based service composition through simultaneous negotiation in forward and reverse auctions", in EC '03: Proceedings of the 4th ACM Conference on Electronic Commerce, 2003.
[22] J. Rao and X. Su, "A Survey of Automated Web Service Composition Methods", in First International Workshop on Semantic Web Services and Web Process Composition (SWSWPC), San Diego, California, USA, 2004.
[23] Q. B. Vo and L. Padgham, "Conversation-Based Specification and Composition of Agent Services", in Cooperative Information Agents (CIA), Edinburgh, UK, 2006, pp. 168-182.
[24] Z. Yang, R. Gay, C. Miao, J.-B. Zhang, Z. Shen, L. Zhuang and H. M. Lee, "Automating integration of manufacturing systems and services: a semantic Web services approach", in Industrial Electronics Society (IECON) 31st Annual Conference of


IEEE, 2005.
[25] I. Constantinescu, B. Faltings and W. Binder, "Large Scale, Type-Compatible Service Composition", in ICWS '04: Proceedings of the IEEE International Conference on Web Services, Washington, DC, USA, 2004.
[26] E. Sirin, J. Hendler and B. Parsia, "Semi-automatic Composition of Web Services using Semantic Descriptions", in Web Services: Modeling, Architecture and Infrastructure Workshop, Angers, France, 2003.
[27] F. Casati, S. Ilnicki, L.-J. Jin, V. Krishnamoorthy and M.-C. Shan, "Adaptive and Dynamic Service Composition in eFlow", in CAiSE '00: Proceedings of the 12th International Conference on Advanced Information Systems Engineering, London, UK, 2000.
[28] T. Gu, H. K. Pung and D. Q. Zhang, "A Middleware for Building Context-Aware Mobile Services", in Proceedings of the IEEE Vehicular Technology Conference, Los Angeles, USA, 2004.
[29] S. R. Ponnekanti and A. Fox, "SWORD: A developer toolkit for web service composition", in 11th World Wide Web Conference, Honolulu, USA, 2002.
[30] S. Ben Mokhtar, "Semantic Middleware for Service-Oriented Pervasive Computing", Ph.D. thesis, University of Paris 6, Paris, France, 2007.
[31] D. Kuropka and H. Meyer, "Survey on Service Composition", Technical Report No. 10, Hasso-Plattner-Institute, University of Potsdam, 2005.
[32] J. Floch (ed.), "Theory of adaptation", Deliverable D2.2, Mobility and ADaptation enAbling Middleware (MADAM), 2006.
[33] C. Mascolo, S. Hailes, L. Lymberopoulos, P. Picco, P. Costa, G. Blair, P. Okanda, T. Sivaharan, W. Fritsche, M. A. Rónai, K. Fodor and A. Boulis, "Survey of Middleware for Networked Embedded Systems", Technical Report, FP6 IP Project: Reconfigurable Ubiquitous Networked Embedded Systems, 2005.
[34] T. Salminen, "Lightweight middleware architecture for mobile phones", Ph.D. thesis, Department of Electrical and Information Engineering, University of Oulu, Oulu, Finland, 2005.
[35] UPnP Forum, "Understanding UPnP: A White Paper", Technical Report, 2000.
[36] The OWL Services Coalition, "OWL-S: Semantic Markup for Web Services", White Paper, 2003.

Noha Ibrahim holds an engineering diploma from Ecole Nationale Supérieure d'Informatique et de Mathématique Appliquée de Grenoble (ENSIMAG), and a PhD degree from the National Institute for Applied Sciences (INSA) of Lyon, France. Her PhD addressed service integration in pervasive environments, and focused on providing a spontaneous service integration middleware adapted to pervasive environments. Her main interests are middleware for pervasive and ambient computing. Noha Ibrahim is currently a postdoctoral researcher at the Grenoble Informatics Laboratory, where she works on a service-composition-based framework for optimizing queries.


Frédéric Le Mouël holds an engineering diploma in Languages and Operating Systems, and a PhD degree from the University of Rennes 1, France. His dissertation focused on an adaptive environment for the distributed execution of applications in a mobile computing context. Frédéric Le Mouël is currently an associate professor at the National Institute for Applied Sciences of Lyon (INSA Lyon, France), Telecommunications Department, Center for Innovation in Telecommunication and Integration of Services (CITI Lab.). His main interests are service-oriented middleware, and more specifically the fields of dynamic adaptation, composition, coordination and trust of services.


Context Aware Adaptable Applications - A global approach Marc DALMAU, Philippe ROOSE, Sophie LAPLACE LIUPPA, IUT de Bayonne 2, Allée du Parc Montaury 64600 Anglet FRANCE Mail : {[email protected]}

Abstract

Requirements of current applications (mostly component-based) cannot be expressed without a ubiquitous and mobile part, for end-users as well as for M2M (Machine to Machine) applications. Such an evolution implies context management in order to evaluate the consequences of mobility, and corresponding mechanisms to adapt, or to be adapted to, the new environment. Applications are then qualified as context-aware applications. The first part of this paper presents an overview of context and its management by application adaptation. This part starts with a definition and proposes a model for the context. It also presents various techniques to adapt applications to the context, from self-adaptation to supervised approaches. The second part is an overview of architectures for adaptable applications. It focuses on platform-based solutions and shows the information flows between application, platform and context. Finally it makes a synthesis proposition with a platform for adaptable context-aware applications called Kalimucho. Then we present implementation tools for software components and a dataflow model in order to implement the Kalimucho platform.

Key-words: Adaptation, Supervision, Platform, Context, Model

1. Introduction

Requirements of current applications (mostly component-based) cannot be expressed without a ubiquitous and mobile part, for end-users as well as for M2M (Machine to Machine) applications. Such an evolution implies context management in order to evaluate the consequences of mobility, and corresponding mechanisms to adapt, or to be adapted to, the new environment. Mobile computing and, next, ubiquitous computing focus on the study of systems able to accept dynamic changes of hosts and environment [33]. Such systems are able to adapt themselves, or to be adapted, according to their mobility in a physical environment. That implies dynamic interconnections, and knowledge of the overall context. Due to the underlying constraints (mobility, heterogeneity, etc.), the management of such applications is complex and requires considering constraints as soon as possible and having a global vision of the application. The adaptation decision can be fully centralized (A - Figure 1) or fully distributed, with all intermediary positions (B & C - Figure 1). The consequence is the level of autonomy of decision as well as the level of predictability. Obviously, autonomy increases with decentralized supervision. Reciprocally, complexity increases with autonomy (problems of predictability, concurrency, etc.).

Figure 1 : Means of adaptation (predictability vs. autonomy: A - Centralized Supervision, B - Adaptation Platform (decentralized supervision), C - Self-Adaptation)

Self-adaptable applications need access to context information. This access can be active, if the application itself captures the context (see A - Figure 1), or passive, if an external mechanism gives it access to the context (see B - Figure 1). Nevertheless, with mobile peripherals and the underlying connectivity problems, a fully centralized supervision is not possible. A pervasive supervision [29] appears to be a good solution and allows managing complexity and predictability while keeping the advantages of autonomy. In order to be context-aware, applications need to get information corresponding to three adaptation types: data, service and presentation. The first one deals with "raw data" and its adaptations to provide complete and formatted information. Service adaptation deals with the architecture of the application and with dynamic adaptation (connection/disconnections/migration of


components composing the application). It allows adapting the application in order to respect the required QoS. Presentation adaptation deals with HCI (not addressed in this paper). A global schema of an adaptable application is given in Figure 2.

Figure 2 : Adaptable applications (the Adaptation Manager monitors context information, via push/pull, and applies adaptations to the Application according to the user requirements and the wished/provided QoS)

Whereas [34] [35] do not distinguish between context-oriented and application-oriented (functional) data, we think that such a distinction makes design easier and offers a good separation of concerns [36] .

2. What is context?

2.1 Definition and model

The origin of the term « context awareness » is attributed to Schilit and Theimer [42] . They explain that it is « the capacity for a mobile application and/or a user to discover and react to situation modifications ». Many other definitions can be found in [43] . The application context is the situation of the application; the context is thus a set of external data that may influence the application [36] . A context management system can interpret the context and formalize it in order to build a high-level representation. It creates an abstraction for the entities reacting to situation evolutions; they can be applications [35] , platforms, middlewares, etc. In order to make such abstractions, a three-layered taxonomy can be organized as shown in Figure 3. The first layer deals with context information capture. The first type of context is called Environmental: this is the physical context. It represents the external environment where information is captured by sensors. This information is about light, speed, temperature, etc. The second type, called User, gives a representation of the users interacting with the application (language, localization, activity, etc.): this is the user profile. The third one deals with Hardware, probably the most "classical" one; it gives information on available resources (memory, CPU load, connections, bandwidth, throughput, etc.). It also gives information such as display resolutions and the type of the host (PDA,

Smartphone, Mobile Phone, PC, etc.). The fourth one is the Temporal context: it preserves and manages the history (date, hour, synchronizations, actions performed, etc.). The last one is called Geographic and gives geographical information (GPS data, horizontal/vertical movement, speed, etc.). The second layer, called « context management » [44] [45] , is based on the representations of the previous layer. It provides services specifying the software environment of the application (platform, middleware, operating system, etc.): Storage of context data so that services can retrieve them, Quality giving a measure of the service itself or of the data processed, and Reflexivity allowing the application itself to be represented. Localization manages geographic information in order to locate entities and predict their displacements. The last layer proposes mechanisms permitting adaptation to the context. Several mechanisms react to contextual events: the first one is software component Composition, the second one is Migration, in order to move entities, and the last one is Adaptation, to ensure the evolution of the application. This last point is non-functional; the middleware manages it, and it can depend on a user profile or on rules provided by the user. Polymorphism facilitates the migration of entities and their adaptation to various hosts (with more or less constraints).


Figure 3 : Taxonomy of context

We propose a context model able to represent any context information. This model (called Context Object) provides the information needed by the entities managing the application. Some information defines the context (its nature) whereas other information defines its validity. The nature of the context can be [34] : - User (people): preferences, - Hardware (things): network, - Environment (places): temperature, sunlight, sound, movement, air pressure, etc. It is the physical context. It represents the external environment from where


information is captured by sensors. It deals with the users' environment [36] as well as the hardware environment. Such information is called ContextInformation and we call InformationValidity the validity area of a ContextInformation (for example, old information, or information whose source is very far away, can be useless). InformationValidity is:
- Temporal: temporal information can be a date or a time used as a timestamp. Time is one aspect of durability, so it is important to date information as soon as it is produced. This temporal information allows making a historical report of the application and defining the validity (freshness) of a ContextInformation [40] . Freshness is the time elapsed since the sensor last read the information. Ranganathan defines a temporal degradation function that degrades the confidence of the information.
- Spatial: the current location (the host (identity) or the geographic coordinates (GPS)), which makes it possible to distinguish local and remote context.
- Confidence: how precise the sensor is.
- Ownership: in some applications hosted on a SmartPhone, for example, privacy is very important; therefore each piece of information has to be stamped with its owner (the producer).
Notice that some information, such as freshness and confidence, is strongly coupled, whereas other information, such as ownership, is defined using application data. That is the reason why [39] identified physical sensors, virtual sensors (data sourced from software applications or services) and logical sensors (combining physical and virtual sensors with additional information). Depending on the application, one information type could be a ContextInformation or a ValidityInformation. For example, location can be a ContextInformation for a user in a forest, or a ValidityInformation for the sensor network that supervises temperature and air pressure measurements.
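To make the Context Object model concrete, the following sketch shows a ContextInformation carrying its validity metadata (timestamp, location, confidence, owner). It is purely illustrative: the class and field names are our assumptions, and the exponential form of the temporal degradation function is our own choice, since the paper does not give Ranganathan's exact formula.

```python
import math
import time

class ContextInformation:
    """A piece of context data together with its validity metadata (sketch)."""
    def __init__(self, nature, value, source, confidence, owner, location=None):
        self.nature = nature            # "User", "Hardware" or "Environment"
        self.value = value
        self.source = source            # producing sensor: physical, virtual or logical
        self.confidence = confidence    # precision of the sensor, in [0, 1]
        self.owner = owner              # information ownership (privacy)
        self.location = location        # spatial validity: host identity or GPS coordinates
        self.timestamp = time.time()    # temporal validity: stamped at production time

    def freshness(self, now=None):
        """Time elapsed since the sensor produced the value."""
        now = time.time() if now is None else now
        return now - self.timestamp

    def degraded_confidence(self, half_life, now=None):
        """Temporal degradation: confidence decays as the information ages
        (exponential decay is an assumption, not the paper's formula)."""
        return self.confidence * math.exp(-self.freshness(now) / half_life)

# Usage: a temperature reading whose confidence fades over time
temp = ContextInformation("Environment", 21.5, "sensor-12", 0.9, "node-A")
c0 = temp.degraded_confidence(half_life=60.0, now=temp.timestamp)        # fresh reading
c1 = temp.degraded_confidence(half_life=60.0, now=temp.timestamp + 60)  # 60 s old
```

A fresh reading keeps its full sensor confidence, while a reading as old as the chosen half-life parameter is already strongly degraded.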
According to this model, we organize all the characteristics of context information that define type, value, time stamp, source, confidence and ownership [37] , or user, hardware, environment and temporal [45] [46] . In order to structure such contextual information, we propose a meta-model structuring ContextInformation and ValidityInformation (see Figure 4).

(Figure 4: a Context is composed of a LocalContext and RemoteContexts, each made of ContextObjects; a ContextObject aggregates ContextInformationObjects and Spatio-TemporalContextObjects, characterized as User, Hardware, Environment, Geographic or Temporal.)

Figure 4 : Context class diagram

2.2 Context and applications

For several years, the natural evolution of applications towards distribution has shown the need for more than mere information processing. Traditionally, applications are based on input/output: input data given to an application produces output data. This too restrictive approach is now old fashioned [48] . Data are not clearly identified; processing does not depend only on the provided data but also on data such as the hour, the localization, the preferences of the user, the history of interactions, etc. — in a word, the context of the application. A representative informal definition can be found in [49] : "The execution context of an application groups all entities and external situations that influence the quality of service/performances (qualitative & quantitative) as the user perceives them". Designers and developers had to integrate the execution environment into their applications. This evolution allows applications to be aware of the context, then to be context-sensitive, then to adapt their processing, and next to dynamically reconfigure themselves in order to react as well as possible to demands. Obviously, to adapt itself to the context, the application needs a good knowledge of it and of its evolutions. From a research point of view, context needs a vertical approach. All research domains/layers manage contextual information. Many works deal with its design, management, evaluation, etc. Its impact is wide: Reengineering, HCI, Grid, Distributed Applications,


Ubiquitous Computing, Security, etc. However, context is not a new concept in computer science! Since the early 90's, the Olivetti Research Center with the ActiveBadge [Harter, 1994] and, above all, the Xerox PARC with the PARCTab System [51] laid the foundations of modern context aware applications.

resources, etc.). The user's context is captured through the interfaces and the information system (user profile description files). Finally, the environmental context can be captured through sensors and modified by actuators.

(Figure 5 layers, top to bottom: Application; Adaptation; Context Management; Contextual Information Acquisition; Hardware.)

Figure 5 : Architectural layers of context aware applications

According to Figure 5, context management implies having dynamic applications, in order to adapt them to variations of the context and so to provide a quality of service corresponding to the current capabilities of the environment (application + runtime).

3. Context aware applications

Adding adaptation to context aware applications means adding a new interaction corresponding to the influence that the context has on the application: the property of the application to adapt itself to the context (Figure 7).

(Figure 7 flows: Data Flow #1 = consultation; Data Flow #2 = modification; Data Flow #3 = adaptation.)

Figure 7 : Adaptable Context Aware Application

Achievement of a context aware application can be done:
− By self adaptation
− By supervision

3.1 Adaptable context aware applications


However, even if it is possible to design limited applications according to the use of contextual information, the main interest is to be able to adapt the behavior of applications to context evolutions. In particular, the increasing use of mobile and limited devices implies the deployment of adaptable applications. Such an approach allows quality of service management (functional and non-functional services, such as energy saving).


Flow | Hardware                      | User                              | Environment
#1   | System and network primitives | Interfaces and information system | Sensors
#2   | Resource allocation           | Interfaces                        | Actuators

Table 1 : Means of interaction between application and context


Context aware applications are tightly coupled to mobile devices and ubiquitous computing, in the sense of "machines that fit the human environment, instead of forcing humans to enter theirs" [1] . These applications are able to be aware of their execution context (user, hardware and environment) and of its evolutions. They extract from the context information about geographical localization, time, hardware conditions (network, memory, battery, etc.) as well as about users. Interactions between an application and its context can then be represented by two information flows (Figure 6):
− The application captures information from its context
− The application acts on its context
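These two flows can be sketched as a minimal loop (purely illustrative; the class names, context keys and thresholds are our assumptions, not a real API):

```python
class Context:
    """Toy context: a dictionary of observable and modifiable values."""
    def __init__(self):
        self._values = {"battery": 0.8, "bandwidth": 1_000_000}

    def consult(self, key):          # flow #1: the application captures context
        return self._values[key]

    def modify(self, key, value):    # flow #2: the application acts on its context
        self._values[key] = value

class ContextAwareApplication:
    def __init__(self, context):
        self.context = context
        self.quality = "high"

    def step(self):
        # Flow #1: consult the context, then adapt the behavior accordingly.
        if self.context.consult("battery") < 0.2:
            self.quality = "low"     # degrade the QoS to save energy
        # Flow #2: running the application modifies the context in return.
        self.context.modify("battery", self.context.consult("battery") - 0.05)

app = ContextAwareApplication(Context())
for _ in range(14):                  # the battery drains at each step
    app.step()
```

After enough steps the battery observed through flow #1 drops below the threshold, and the application degrades its own quality of service.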


In order to be aware of the context, the following architecture (see Figure 5) is "classical"; an example can be found in [46] . It can be summarized as a superposition of layers, each of them matching a contextual information acquisition process, contextual information management, or the adaptation of the application to the context (as defined in Figure 3).


Figure 6 : Context aware application

Figure 8 : Supervision vs Self Adaptation: a global view

The means operated to realize both data flows of Figure 6 depend on the type of context (Table 1). They are system and network primitives for the hardware context (resource allocation, connections, consultation of available

3.1.1 Self adaptation

Such systems are expected to dynamically self-adapt to accommodate resource variability, changes of user needs and system faults. In [27] , self-adaptive applications are described as useful for pervasive applications and sensor systems. Self-adaptation means that adaptations are managed by the application itself: it evaluates its behavior, its configuration and, for distributed applications, its deployment. The application captures the context (flow #1) and adapts its behavior accordingly (flow #3). The activity of the application modifies the context (flow #2). This approach, represented in Figure 7, raises the essential problem of accessing distant context information. Indeed, through the interactions described in Table 1, the application can only interact with its local context. In order to get or modify distant contextual information, the designer of the application has to set up specific services on the different sites of the application. It becomes necessary to set up many non-functional mechanisms, which strongly increases the complexity of the application and is difficult to maintain up to date. Moreover, self-adaptive solutions imply having planning and evaluation parts at runtime, together with a control loop. In order to make the evaluation, such an application needs component descriptions as well as software descriptions, structure and various alternatives, i.e. various assembly configurations. Such solutions do not simplify the separation of concerns, and so reduce the practical viability of the application, its maintainability and its possible evolutions. Moreover, with ubiquitous and heterogeneous environments, such generic solutions are not suitable to exploit the potential of hosts [28] . That is the reason why most systems tend to solve these problems using platforms.

3.1.2 Supervised adaptation

In these approaches a runtime platform interfaces the application and the context; it then allows access to the distant context. The application senses the context (flow #1) only by means of the middleware of the platform. The application can modify the context and the platform itself (flow #2). Both the application and the platform adapt themselves to the context (flow #3). This kind of organization is shown in Figure 9.

Figure 9 : Adaptable Context Aware Application with platform

Recent works such as Rainbow use a closed-loop control based on external models and mechanisms to monitor and adapt system behavior at run time in order to achieve various goals [32] ; such a solution is close to the use of pervasive supervision. In order to implement it, we need a platform distributed on all the heterogeneous hosts. Such an architecture allows capturing the local context and proposing local adaptations. Additionally, communication between local platforms gives a global vision of the context, permitting a global measure of the context and adapted reactions. Each platform has three main tasks to accomplish:
− Capture of the context: this task implements tools to capture information of layer 1 (see Figure 3).
− Context Management Service: its role is to manage and evaluate information from layer 1 in order to decide whether adaptation is required.
− Context Management Tools: a set of mechanisms to adapt the application to variations of the context.
The means operated to realize data flows #1 and #2 of Figure 9 depend on the type of context (Table 2). Interactions with the local context use the mechanisms described in Table 1, whereas those with the distant context use services of the platform. The middleware of the platform offers services for context capture providing contextual information completed by time and localisation parameters, as described in Figure 4.

Flow | Context | Hardware                      | User                              | Environment
#1   | Local   | System and network primitives | Interfaces and information system | Sensors
#1   | Distant | Services of the platform      | Services of the platform          | Services of the platform
#2   | Local   | Resource allocation           | Interfaces                        | Actuators
#2   | Distant | Services of the platform      | Services of the platform          | Services of the platform

Table 2 : Interactions between Application and Context with a Platform


The role of the platform in this kind of organisation becomes central. We will now define more precisely the role and the architecture of a platform.

3.2 The platforms


Generally, we consider a platform as a set of elements of virtualization (Figure 10) allowing application designers to have a runtime environment independent of the hardware and network infrastructures, supporting distribution and offering non functional general services (persistence, security, transactions …) or services specific to a domain (business, medical …).


Figure 10 : Elements of virtualization in a platform

The container virtualizes the application or its components to make them suitable and compatible (interface) with the platform. The framework completes this task by allowing the designer to respect the corresponding model. The middleware virtualizes communications and offers services called by the application in order to access the context. Finally, heterogeneity consists in the virtualization of the hardware and the operating systems on which the application runs. Interactions between platform and application are bidirectional and represent the core aspect of the whole system (platform/application). The platform has its own state, evolving when modifications occur in the underlying level (context) and in the application. Consequently, the platform can trigger updates of the application state. The interaction mode between application and platform can be achieved:
- by service
- by container
In the first case, the only changes of the application state that the platform knows about are those reported by the application itself through service, API or middleware calls (Figure 11, left), while in the second case the containers of the business components send to the platform information about their evolution (Figure 11, right). These containers can themselves offer services to the business components, or capture information about their changes of state by observing their behavior.
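The container-based mode can be sketched as follows (illustrative only; the class names are our assumptions, not the paper's API). The container observes every call made to the business component and reports it to the platform, while the component itself stays platform-unaware:

```python
class Platform:
    """Collects state-change notifications coming from containers."""
    def __init__(self):
        self.notifications = []

    def notify(self, component, event):
        self.notifications.append((component, event))

class Container:
    """Wraps a business component and reports its evolution to the platform
    without the component being aware of it (container-based interaction)."""
    def __init__(self, name, business_component, platform):
        self.name = name
        self.bc = business_component
        self.platform = platform

    def call(self, method, *args):
        # Every call goes through the container, which observes the behavior
        # of the business component and forwards a report to the platform.
        result = getattr(self.bc, method)(*args)
        self.platform.notify(self.name, method)
        return result

class Counter:
    """A plain business component, unaware of the platform."""
    def __init__(self):
        self.value = 0
    def increment(self):
        self.value += 1
        return self.value

platform = Platform()
c = Container("counter-1", Counter(), platform)
c.call("increment")
c.call("increment")
```

In the service-based mode, by contrast, the component itself would have to call `platform.notify(...)` explicitly, which is exactly the separation-of-concerns cost the container avoids.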


Figure 11 : Modes of interaction between Application and Platform

The interaction mode between platform and application allows distinguishing two families (Figure 12):
- Non intrusive platforms;
- Intrusive platforms.
A non intrusive platform acts on external elements of the application, like data, or uses an event-based mechanism: it raises events when an internal state change occurs. These events can be caught by specific components of the application (event listeners). These modifications of external elements, and these events, imply the changes of the application state. An intrusive platform can directly change the state of the application without the participation of the application. This can be achieved by a direct action on the functional part, either by modifying the circulation of information or by directly modifying the architecture of the application itself. The use of objects and components greatly facilitates this task.


Figure 12 : Modes of interaction between platform and application
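The two families can be sketched side by side (an illustrative sketch; the class names and the "low-battery"/"eco" vocabulary are our assumptions):

```python
class NonIntrusivePlatform:
    """Raises events on internal state changes; listeners in the application react."""
    def __init__(self):
        self._listeners = []

    def add_listener(self, listener):
        self._listeners.append(listener)

    def internal_state_change(self, event):
        # The platform never touches the application directly: it only notifies.
        for listener in self._listeners:
            listener(event)

class Application:
    def __init__(self, platform):
        self.mode = "normal"
        platform.add_listener(self.on_context_event)   # event-listener component

    def on_context_event(self, event):
        # The application decides by itself how to react to the platform's events.
        if event == "low-battery":
            self.mode = "eco"

class IntrusivePlatform:
    """Directly changes the application state, without its participation."""
    @staticmethod
    def adapt(application):
        application.mode = "eco"   # direct action on the functional part

p = NonIntrusivePlatform()
app = Application(p)
p.internal_state_change("low-battery")     # non intrusive: the app reacts itself

app2 = Application(NonIntrusivePlatform())
IntrusivePlatform.adapt(app2)              # intrusive: the platform modifies app2
```

Both paths end in the same adapted state; what differs is which side holds the adaptation logic.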


3.3 Architecture of context aware adaptable applications

An overall schema of the architecture of an adaptable context aware application is presented in Figure 13. Relationships between platform and application are materialized by four flows:
Data Flow A = Requirements for resources
Data Flow B = Control of the platform
Data Flow C = Information from the platform
Data Flow D = Control of the application

Now, let us look at the different types of context aware applications that can be built according to the data flows actually used. Firstly, it is important to notice that for context aware applications, data flow A is essential. In order to be adaptable, at least flow C or flow D needs to be provided; if not, the platform is the only entity able to adapt. The optional data flow B represents the possibility for the application to configure the interaction modes corresponding to flows A, C and D. Table 3 presents the four models of adaptation that can be realized according to the flows used:


Figure 13: Information flows between application and platform [1]

This overall schema can be completed by adding the flows of interactions with the context as presented in Figure 9. We then obtain the general architecture shown in Figure 14 :


Data Flow A = Access to services of the middleware, some of which give access to the context
Data Flow B = Control of the platform by the application
Data Flow C = Information for non intrusive mode
Data Flow D = Information for intrusive mode
Data Flow #1 = Consultation of the context
Data Flow #2 = Modification of the context
Data Flow #3 = Adaptation to the context


Figure 14 : Interactions between application, platform and context

Interactions between application and platform can be described as follows:
− Data Flow A corresponds to information from the application to the platform through the usage of services of the middleware.
− Data Flow B represents the possibility for the application to configure the behavior of the platform (event priorities, filtering of contextual information, etc.).
− Data Flow C corresponds to the non intrusive mode of interaction between platform and application. It deals with events produced by the platform for the listeners inside the application.
− Data Flow D represents the intrusive mode of interaction between platform and application. It deals with updates of the application by the platform (modification of the architecture by adding/suppressing/moving components or by changing their business part).


The platform is a Only the platform is able middleware (services for to adapt itself to the A accessing to local and context distant context) The platform is a Adaptation is decided by middleware (services for the application according A and accessing to local and to information send by C distant context) and offers the platform an adaptation service The platform is a Adaptation fully middleware (services for supervised A and accessing to local and D distant context) and supervises the adaptation The platform is a Adaptation is partially middleware (services for supervised and partially A, C accessing to local and decided by the and D distant context) and offers application an adaptation service Table 3 : Possible models of adaptation according to the flows used

Data flow B allows enriching the interaction types presented in Table 3:
− In the first case, the application can only configure the services of context access provided by the platform.
− In the second case, the application can also choose the events which are notified to it and their priority.
− In the third case, the application can configure the level of intrusion of the platform and possibly protect itself from it at some moments.
− The fourth case is the union of the two previous ones.
According to the taxonomy proposed in [23] , middlewares like Aura [6] [7] [8] [9] , CARMEN [10] , CORTEX [11] [12] and CARISMA [13] [14] [15] belong to the first category, while Cooltown [16] [17] , GAIA [18] [19] and MiddleWhere [20] belong to the second category. SOCAM [21] and Cadecomp [24] belong to the third category, while MADAM [25] and Mobipads [22] belong to the fourth.



Figure 15: General schema of adaptation with a platform

We can then draw a general schema of an adaptable context aware application (Figure 15). The platform is distributed on every device hosting components of the application, so it can access all contextual information. It offers a set of services allowing the application to access the local or distant context (data flow A). Moreover, it includes an adaptation manager sending events (data flow C) and a manager supervising the application (data flow D). The execution of this supervision manager can be configured by the application (data flow B).

3.4 Functional model of adaptation

The execution of an adaptable context aware application looks like a looped system: the context modifies the application, the execution of the application modifies the context, and so on. When a platform is introduced between the context and the application, a new loop appears, because the platform itself is modified by the context and, reciprocally, the platform modifies the context. Depending on whether an intrusive or a non intrusive platform model is used, these loops are achieved by different data flows.


Figure 16 : Non intrusive adaptation model



Case 1: Adaptation controlled by the application (non intrusive model). The context is captured by the platform (data flow #1), which signals its modifications to the application (data flow C). The application adapts itself, using or not the services of the platform (data flow B). The activity of the application and of the platform modifies the context (data flow #2).

Figure 17 : Intrusive adaptation model


Case 2: Adaptation monitored by the platform (intrusive model). The context is captured by the platform (data flow #1), which modifies the application (data flow D). This mechanism can be monitored by the application (data flow B). The activity of the application and of the platform modifies the context (data flow #2).

3.5 General architecture of a platform for adaptable context aware applications

The platform is composed of three main parts:
1. The capture of context is done by the usual mechanisms described in Table 1: system and network primitives, the information system and sensors. Moreover, the platform also receives information about the application's running context from the containers of the business components (Figure 10).
2. The services concern both the application and the platform itself (more precisely, the part in charge of the adaptation). For the application this corresponds to:
• services for accessing the context (hardware, user, environment) with filtering possibilities (time, localisation);
• other usual services (persistence, …).
For the adaptation it means:
• services for accessing the context;
• services for Quality of Service measurement;
• services for reflexivity, that is to say the knowledge that the system constituted by the platform and the application has of itself.
3. The adaptation matches the general schema of adaptation proposed in [3] , which distinguishes two parts:
• the evolution manager, which implements the mechanisms of the adaptation;
• the adaptation manager, which monitors and evaluates the application.


Communication between components uses the generic framework called Korrontea. This framework is a first-class connector component able to implement various communication policies.

(Figure 18: the adaptation manager collects observations, evaluates and monitors them and plans changes leading to a new deployment; the evolution manager maintains the coherency between the architectural model and the implementation.)

Figure 18 : General schema of adaptation [3]

The evolution manager monitors the application and its environment. Its architectural model selects an implementation maintaining the coherency of the application. The essential role of this manager is to check that the deployment of the application is "causally connected" to the system [5]. Such a model integrates reflexivity as defined in [4] , but limited to the architecture of the application, therefore protecting the encapsulation of the business components. The adaptation manager receives the observations measured by the evolution manager. It evaluates them in order to select an adaptation and to find a new deployment of the components of the application (Figure 18).
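The collect/evaluate/plan loop of the general schema of adaptation can be sketched as follows (purely illustrative; the observation keys, thresholds and component names are our assumptions, not values from the paper):

```python
def collect_observations(context):
    """Evolution manager side: gather raw measurements from the context."""
    return {"battery": context["battery"], "bandwidth": context["bandwidth"]}

def evaluate(observations):
    """Adaptation manager side: decide whether an adaptation is required."""
    return observations["battery"] < 0.2 or observations["bandwidth"] < 100_000

def plan_changes(deployment):
    """Select a new deployment of the components (here: move a heavy
    component from the mobile device to a desktop)."""
    new_deployment = dict(deployment)
    new_deployment["video-decoder"] = "desktop-1"
    return new_deployment

def adaptation_loop(context, deployment):
    observations = collect_observations(context)   # collect observations
    if evaluate(observations):                     # evaluate and monitor
        deployment = plan_changes(deployment)      # plan changes
    return deployment                              # new deployment

deployment = {"video-decoder": "pda-1", "gui": "pda-1"}
ctx_ok = {"battery": 0.9, "bandwidth": 500_000}
ctx_low = {"battery": 0.1, "bandwidth": 500_000}
d1 = adaptation_loop(ctx_ok, deployment)    # QoS acceptable: deployment unchanged
d2 = adaptation_loop(ctx_low, deployment)   # low battery: component migrated
```

The evolution manager would then execute the plan and maintain the coherency between the architectural model and the running implementation.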

4. Kalimucho platform and implementation tools


4.1 Kalimucho architecture

The architecture of the application has to be virtualized in order to be monitored by the platform. The general architecture of the Kalimucho platform is shown in Figure 19.

We propose to build the architecture of adaptable context aware applications on a distributed platform called Kalimucho. The application is made of business components (BC) interconnected by information flows. To directly modify the architecture of the application, the platform must be able to add/remove/move/connect/disconnect the components. Moreover, the platform has to capture the context on every site. We created a container for information data flows, named Korrontea, and another for business components, named Osagaia [26] . These containers collect local contextual information from business components and connectors and send it to the platform. In return, they receive supervision commands from the platform. Interactions between the platform and the application are implemented with the flows shown in Figure 20. Notice that, because Korrontea containers have a non-functional role in the application (information transportation), they do not accept data flow C and are not event listeners. On the other hand, some BCs can react to context events sent by the platform towards Osagaia containers.
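The five reconfiguration commands can be sketched on a toy application graph (illustrative only; Kalimucho's real API is not given in the paper, so all names below are our assumptions):

```python
class ApplicationGraph:
    """Toy model of the platform's reconfiguration commands on a graph of
    business components (BC) and Korrontea-like connectors."""
    def __init__(self):
        self.components = {}      # component name -> hosting site
        self.connections = set()  # (producer, consumer) pairs, i.e. data flows

    def add(self, name, site):
        self.components[name] = site

    def remove(self, name):
        self.components.pop(name)
        self.connections = {c for c in self.connections if name not in c}

    def move(self, name, new_site):
        # Migrate a BC to another device; its connections are preserved.
        self.components[name] = new_site

    def connect(self, producer, consumer):
        self.connections.add((producer, consumer))

    def disconnect(self, producer, consumer):
        self.connections.discard((producer, consumer))

app = ApplicationGraph()
app.add("sensor-reader", "sensor-1")
app.add("display", "pda-1")
app.connect("sensor-reader", "display")
app.move("display", "desktop-1")   # adaptation: migrate the display component
```

In Kalimucho these commands would be issued by the platform through data flow D, while the containers report the resulting state back to it.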


Figure 20 : Interactions between application and platform in Kalimucho


Figure 19: Kalimucho’s General Architecture

It is based on a distributed service-based platform implementing non-functional services for adaptation (layer 2 – Figure 3). The functional part is implemented with software and hardware components running in the generic Osagaia container.

Our work deals with various devices: sensors (CLDC compliant), PDAs and SmartPhones (CDC compliant) and traditional PCs. Such a heterogeneous environment implies several variations of the services devoted to the platform. The capture of the context is done by component containers (Osagaia) and flow containers (Korrontea); depending on the host running the component, it will capture user, environment, hardware, temporal or geographic information (see layer 1 - Figure 3). The second layer (context management services) is implemented as a heuristic evaluating the current Quality of Service (QoS) and proposing adaptations if needed and if possible. The last layer (context management tools) provides the means to perform the adaptations (add/remove/move/connect/disconnect components).
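Such a QoS heuristic can be sketched as follows (purely illustrative: the paper does not detail the heuristic, so the criteria, weights and thresholds below are entirely our assumptions):

```python
def qos_heuristic(host):
    """Map raw layer-1 measurements to a QoS score and a proposed adaptation."""
    score = 1.0
    if host["battery"] < 0.2:
        score -= 0.5              # energy is critical on mobile hosts
    if host["cpu_load"] > 0.9:
        score -= 0.3
    if not host["connected"]:
        score -= 0.4
    if score >= 0.7:
        return score, None        # QoS acceptable: no adaptation proposed
    # Adaptation is proposed only "if possible": migration needs connectivity.
    if host["battery"] < 0.2 and host["connected"]:
        return score, "move components to a neighbouring desktop"
    return score, "remove optional components"

score, action = qos_heuristic({"battery": 0.1, "cpu_load": 0.5, "connected": True})
```

The returned action would then be handed to the context management tools layer, which actually performs the add/remove/move/connect/disconnect operations.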

IJCSI International Journal of Computer Science Issues, Vol. 1, 2009

The platform is distributed on every machine on which components of the application are deployed (desktops, mobile devices and sensors). The different parts of the platform communicate through the network. Communications between BCs (local or distant) are achieved by data flows encapsulated in Korrontea containers. Different versions of the platform are implemented on the different hosts according to their physical capacities. On a desktop all the parts of the platform are implemented whereas, on a mobile device, and particularly on a wireless sensor, light versions are provided (one for CDC and one for CLDC compliant hosts). Consequently, only the services the host cannot do without are deployed (for example, a persistence service is useless on a sensor). In the same way, the adaptation manager implemented on a mobile device can be lightened by using the internal services of a neighbouring platform (for example, only local routing information is available on a limited device). If the platform of such a device needs to find other routes in order to set up a new connection, it uses the services of the platforms implemented on neighbouring desktops.

4.2 Osagaia Software Component Model

Finally, we designed the software component model to support the implementation of distributed applications according to the specifications expressed by functional graphs [41]. Functional components are called business components since they implement the business functionalities of applications. These components are executed inside a container whose role is to provide the non-functional implementation for components. The architecture of this container, which we call Osagaia, is shown in Figure 21. Its role is to perform interactions between business components and their environment. It is divided into two main parts: the exchange unit (composed of input and output units, see Figure 21) and the control unit. The exchange unit manages data flow input/output connections. The control unit manages the life cycle of the business component and the interactions with the runtime platform. Thus, the platform supervises the containers and, indirectly, the business components (a full description of the Osagaia software component model is available in [31]).

Figure 21: Osagaia Conceptual Model

Thanks to this container, business components read and write data flows managed by connectors called Korrontea (see Figure 22), whose main role is to connect the software components of the applications. The Korrontea container receives the data flows produced by components and transports them. It is made up of two parts: the control unit implements interactions between the Korrontea container and the platform, while an exchange unit manages the input/output connections with components. The container is the distributed entity of our model, i.e. it can transfer data flows between different sites of distributed applications. Flow management is done according to the business part of the connector, which implements both the communication mode (client/server for example) and the communication policy (with or without synchronization, loss of data, etc.). A full description of the Korrontea component model is available in [28].

Figure 22: Korrontea Conceptual Model
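The split between the business component, the exchange unit and the platform-driven control unit can be outlined as below. This is an illustrative sketch only, under the assumption of queue-based flows; the real Osagaia interfaces differ.

```java
// Illustrative outline of the container/business-component split described
// above; class and method names are assumptions, not the real Osagaia API.
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

interface BusinessComponent {
    Object process(Object sample);   // the purely functional (business) part
}

class OsagaiaContainer {
    // Exchange unit: the input and output units carry the data flow.
    final BlockingQueue<Object> input = new LinkedBlockingQueue<>();
    final BlockingQueue<Object> output = new LinkedBlockingQueue<>();
    private final BusinessComponent bc;
    private volatile boolean running; // control-unit state, driven by the platform

    OsagaiaContainer(BusinessComponent bc) { this.bc = bc; }

    void start() { running = true; }  // life-cycle commands from the platform
    void stop()  { running = false; }

    // One step of the exchange unit: read a flow slice, process it, write it out.
    void step() throws InterruptedException {
        if (running) output.put(bc.process(input.take()));
    }
}
```

In this sketch a Korrontea connector would simply bridge one container's `output` queue to another's `input` queue, possibly across the network.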

5. Conclusion

In this paper, we presented an overview of adaptable applications. Because such applications need knowledge of their environment, we gave a definition of context and presented it from the point of view of application uses. Next, we presented adaptation management policies and their possible implementations. This was followed by a presentation of implementation tools able to provide adaptations. We finished with the description of the Kalimucho platform and of the component and connector container models used to perform adaptations. Implementing context-aware adaptable applications with a platform helps in having a global view of the application and of the context. The global view of the application permits optimal mobility and resource management. The global view of the context permits considering the whole context of the application instead of only the local one.


The system composed of the platform and the application makes up a reflexive context-aware system. The problem of such an approach is its inherent complexity: context-aware platforms become more and more complex in order to manage a context that is more and more variable and evanescent. So, depending on the targeted application, it may be much more interesting to provide several lighter, specialized and reflexive platforms providing a view of their state. Moreover, such platforms can be combined with other light, specialized and reflexive ones. The influence of the environment on the system behaviour leads to strongly coupling the execution platform and the application [38]. Design methods for applications and platforms must therefore also be coupled to constitute a single design method. Instead of a monolithic design step, we propose a lifecycle including both the application and the platform (which is itself an application – the approach is recursive), down to the implementation tools (platform-specific component and connector models and specific implementations). Such an approach opens the way to large-scale development with automatic code generation.

6. Bibliography

[1] M. Weiser, "The computer for the 21st century", Scientific American, pp. 94–104, 1991.
[2] C. Efstratiou, K. Cheverst, N. Davies, A. Friday, "An Architecture for the Effective Support of Adaptive Context-Aware Applications", in Proc. of the Second Int'l Conference on Mobile Data Management (MDM 2001).
[3] P. Oreizy, M. M. Gorlick, R. N. Taylor, D. Heimbigner, G. Johnson, N. Medvidovic, A. Quilici, D. S. Rosenblum, A. L. Wolf, "An architecture-based approach to self-adaptive software", IEEE Intelligent Systems, vol. 14, no. 3, pp. 54–62, May/June 1999.
[4] P. Maes, "Concepts and experiments in computational reflection", in Proceedings of the Conference on Object-Oriented Systems, Languages and Applications (OOPSLA'87), pp. 147–155, Orlando, Florida, 1987.
[5] S. Krakowiak, "Introduction à l'intergiciel", Intergiciel et construction d'applications réparties (ICAR), pp. 1–21, 19 January 2007, Creative Commons licence.
[6] D. Garlan, D. Siewiorek, A. Smailagic, P. Steenkiste, "Project Aura: Toward Distraction-Free Pervasive Computing", IEEE Pervasive Computing, 1(2):22–31, April–June 2002.
[7] U. Hengartner, P. Steenkiste, "Protecting access to people location information", in D. Hutter, G. Müller, W. Stephan, M. Ullmann (eds.), SPC, volume 2802 of LNCS, pp. 25–38, Springer, 2003.
[8] G. Judd, P. Steenkiste, "Providing contextual information to pervasive computing applications", in PERCOM '03: Proceedings of the First IEEE International Conference on Pervasive Computing and Communications, p. 133, Washington, DC, USA, IEEE Computer Society, 2003.
[9] J. P. Sousa, D. Garlan, "Aura: An architectural framework for user mobility in ubiquitous computing environments", in WICSA 3: Proceedings of the IFIP 17th World Computer Congress – TC2 Stream / 3rd IEEE/IFIP Conference on Software Architecture, pp. 29–43, Deventer, The Netherlands, Kluwer, 2002.
[10] P. Bellavista, A. Corradi, R. Montanari, C. Stefanelli, "Context-aware middleware for resource management in the wireless internet", IEEE Transactions on Software Engineering, 29(12):1086–1099, 2003.
[11] H. A. Duran-Limon, G. S. Blair, A. Friday, P. Grace, G. Samartzisdis, T. Sivahraran, M. Wu, "Context-aware middleware for pervasive and ad hoc environments", 2000.
[12] C.-F. Sørensen, M. Wu, T. Sivaharan, G. S. Blair, P. Okanda, A. Friday, H. Duran-Limon, "A context-aware middleware for applications in mobile ad hoc environments", in MPAC '04: Proc. of the 2nd Workshop on Middleware for Pervasive and Ad-hoc Computing, pp. 107–110, New York, NY, USA, ACM Press, 2004.
[13] L. Capra, "Mobile computing middleware for context aware applications", in ICSE '02: Proceedings of the 24th International Conference on Software Engineering, pp. 723–724, New York, NY, USA, ACM Press, 2002.
[14] L. Capra, W. Emmerich, C. Mascolo, "Reflective middleware solutions for context-aware applications", Lecture Notes in Computer Science, 2192:126–133, 2001.
[15] L. Capra, W. Emmerich, C. Mascolo, "CARISMA: context-aware reflective middleware system for mobile applications", IEEE Transactions on Software Engineering, 29(10):929–945, 2003.
[16] J. Barton, T. Kindberg, "The Cooltown user experience", Technical report, Hewlett-Packard, February 2001.
[17] P. Debaty, P. Goddi, A. Vorbau, "Integrating the physical world with the web to enable context-enhanced services", Technical report, Hewlett-Packard, September 2003.
[18] M. Roman, C. Hess, R. Cerqueira, A. Ranganathan, R. Campbell, K. Nahrstedt, "A middleware infrastructure for active spaces", IEEE Pervasive Computing, 1(4):74–83, 2002.
[19] M. Román, C. K. Hess, R. Cerqueira, A. Ranganathan, R. H. Campbell, K. Nahrstedt, "Gaia: A Middleware Infrastructure to Enable Active Spaces", IEEE Pervasive Computing, pp. 74–83, October–December 2002.
[20] A. Ranganathan, J. Al-Muhtadi, S. Chetan, R. H. Campbell, M. D. Mickunas, "MiddleWhere: A middleware for location awareness in ubiquitous computing applications", in H.-A. Jacobsen (ed.), Middleware, volume 3231 of Lecture Notes in Computer Science, pp. 397–416, Springer, 2004.
[21] T. Gu, H. K. Pung, D. Q. Zhang, "A middleware for building context-aware mobile services", in Proceedings of the IEEE Vehicular Technology Conference, May 2004.
[22] A. Chan, S.-N. Chuang, "MobiPADS: a reflective middleware for context-aware mobile computing", IEEE Transactions on Software Engineering, 29(12):1072–1085, 2003.
[23] K. E. Kjær, "A survey of context-aware middleware", in Proceedings of the 25th IASTED International Multi-Conference: Software Engineering, Innsbruck, Austria, pp. 148–155, 2007.
[24] D. Ayed, N. Belhanafi, C. Taconet, G. Bernard, "Deployment of Component-based Applications on Top of a Context-aware Middleware", The IASTED International Conference on Software Engineering (SE 2005), Innsbruck, Austria, February 15–17, 2005. http://picolibre.int-evry.fr/projects/cadecomp
[25] MADAM Consortium, "MADAM middleware platform core and middleware services", A. Mamelli (Hewlett-Packard) ed., deliverable D4.2, 30 March 2007. http://www.intermedia.uio.no/confluence/madam/Home
[26] C. Louberry, M. Dalmau, P. Roose, "Architectures Logicielles pour des Applications Hétérogènes Distribuées et Reconfigurables", NOTERE'08, 23–27 June 2008, Lyon.
[27] R. Laddaga, P. Robertson, "Self Adaptive Software: A Position Paper", SELF-STAR: International Workshop on Self-* Properties in Complex Information Systems, 31 May – 2 June 2004.
[28] H. Schmidt, F. J. Hauck, "SAMProc: Middleware for Self-adaptive Mobile Processes in Heterogeneous Ubiquitous Environments", 4th Middleware Doctoral Symposium (MDS), co-located with the ACM/IFIP/USENIX 8th International Middleware Conference, Newport Beach, CA, USA, November 26, 2007.
[29] L. Baresi, M. Baumgarten, M. Mulvenna, C. Nugent, K. Curran, P. H. Deussen, "Towards Pervasive Supervision for Autonomic Systems", IEEE Workshop on Distributed Intelligent Systems: Collective Intelligence and Its Applications (DIS 2006), 15–16 June 2006, pp. 365–370.
[30] E. Bouix, P. Roose, M. Dalmau, "The Korrontea Data Modeling", AmbiSys 2008, International Conference on Ambient Media and Systems, 11–14 February 2008, Quebec City, Canada.
[31] E. Bouix, M. Dalmau, P. Roose, F. Luthon, "A Component Model for transmission and processing of Synchronized Multimedia Data Flows", in Proceedings of the 1st IEEE International Conference on Distributed Frameworks for Multimedia Applications, France, February 6–9, 2005.
[32] D. Garlan, J. Kramer, A. Wolf (eds.), Proceedings of the First ACM SIGSOFT Workshop on Self-Healing Systems (WOSS '02), ACM Press, 2002.
[33] G.-C. Roman, G. P. Picco, A. L. Murphy, "Software Engineering for Mobility: a Roadmap", ICSE 2000, ACM Press, New York, USA, pp. 241–258, 2000.
[34] A. K. Dey, G. D. Abowd, "Towards a better understanding of context and context-awareness", CHI 2000 Workshop on the What, Who, Where, When and How of Context-Awareness, The Hague, Netherlands, April 2000.
[35] A. K. Dey, G. D. Abowd, "A conceptual framework and a toolkit for supporting rapid prototyping of context-aware applications", HCI Journal, Vol. 16, Nos. 2–4, pp. 97–166.
[36] T. Chaari, F. Laforest, "L'adaptation dans les systèmes d'information sensibles au contexte d'utilisation : approche et modèles", Conférence Génie Electrique et Informatique (GEI), Sousse, Tunisie, March 2005, pp. 56–61.
[37] M. Baldauf, S. Dustdar, F. Rosenberg, "A survey on context-aware systems", Int'l Journal on Ad Hoc and Ubiquitous Computing, Vol. 2, No. 4, 2007.
[38] T. A. Henzinger, J. Sifakis, "The Embedded Systems Design Challenge", invited paper, FM 2006, pp. 1–15.
[39] J. Indulska, P. Sutton, "Location management in pervasive systems", CRPITS'03: Proceedings of the Australasian Information Security Workshop, pp. 143–151, 2003.
[40] A. Ranganathan, J. Al-Muhtadi, S. Chetan, R. H. Campbell, M. D. Mickunas, "MiddleWhere: A middleware for location awareness in ubiquitous computing applications", Vol. 3231 of LNCS, pp. 397–416, Springer, 2004.
[41] S. Laplace, M. Dalmau, P. Roose, "Kalinahia: Considering quality of service to design and execute distributed multimedia applications", NOMS 2008, IEEE/IFIP Int'l Conference on Network Management and Management Symposium, 7–11 April 2008, Salvador de Bahia, Brazil.
[42] B. Schilit, M. Theimer, "Disseminating Active Map Information to Mobile Hosts", IEEE Network, September 1994.
[43] J. Pascoe, N. Ryan, D. Morse, "Using while moving: HCI issues in fieldwork environments", ACM Transactions on Computer-Human Interaction (TOCHI), Vol. 7, Issue 3, September 2000, special issue on human-computer interaction with mobile systems.
[44] K. E. Kjær, "A Survey of Context-Aware Middleware", Software Engineering (SE 2007), Innsbruck, Austria, 2007.
[45] F. Laforest, "De l'adaptation à la prise en compte du contexte – Une contribution aux systèmes d'information pervasifs", Habilitation à Diriger des Recherches, Université Claude Bernard Lyon I, 2008.
[46] D. Cheung-Foo-Wo, J.-Y. Tigli, S. Lavirotte, M. Riveill, "Contextual Adaptation for Ubiquitous Computing Systems using Components and Aspect of Assembly", in Proc. of Applied Computing (IADIS), Salamanca, Spain, 18–20 February 2007.
[47] G. Chen, D. Kotz, "A Survey of Context-Aware Mobile Computing Research", Dartmouth College Technical Report TR2000-381, November 2000.
[48] H. Lieberman, T. Selker, "Out of context: Computer systems that adapt to, and learn from, context", IBM Systems Journal, Volume 39, Numbers 3 & 4, 2000.
[49] P.-C. David, T. Ledoux, "WildCAT: a generic framework for context-aware applications", in Proceedings of the 3rd International Workshop on Middleware for Pervasive and Ad-hoc Computing, ACM International Conference Proceeding Series, Vol. 115.
[50] A. Harter, A. Hopper, "A distributed location system for the active office", IEEE Network, 8(1):62–70, 1994.
[51] R. Want, B. N. Schilit, N. I. Adams, R. Gold, K. Petersen, D. Goldberg, J. R. Ellis, M. Weiser, "An overview of the PARCTAB ubiquitous computing environment", IEEE Personal Communications, 2(6):28–33, 1995.

Biographies

Marc Dalmau is an IEEE member and Assistant Professor in the department of Computer Science at the University of Pau, France. He is a member of the TCAP project. His research interests include wireless sensors, software architectures for distributed multimedia applications, software components, quality of service,

dynamic reconfiguration, distributed software platforms, and information systems for multimedia applications.

Philippe Roose is an Assistant Professor in the department of Computer Science at the University of Pau, France. He is responsible for the TCAP project (video flow transportation on sensor networks for on-demand supervision). His research interests include wireless sensors, software architectures for distributed multimedia applications, software components, quality of service, dynamic reconfiguration, COTS, distributed software platforms, and information systems for multimedia applications.

Sophie Laplace is a Doctor in the department of Computer Science at the University of Pau, France. Her research interests include formal methodologies and Quality of Service design and evaluation. Her work mainly focuses on multimedia applications. She defended her PhD thesis (Software Architecture Design in order to integrate QoS in Distributed Multimedia Applications) in 2006.


Embedded Sensor System for Early Pathology Detection in Building Construction

Santiago J. Barro Torres (1) and Carlos J. Escudero Cascón (2)

(1) Department of Electronics and Systems, University of A Coruña, Campus Elviña, 15071 A Coruña, Spain – [email protected]

(2) Department of Electronics and Systems, University of A Coruña, Campus Elviña, 15071 A Coruña, Spain – [email protected]

Abstract
Structure pathology detection is an important safety task in building construction, which is performed by an operator looking manually for damage on the materials. This activity can be dangerous if the structure is hidden or difficult to reach. On the other hand, embedded devices and wireless sensor networks (WSN) are becoming popular and cheap, enabling the design of an alternative pathology detection system to monitor structures based on these technologies. This article introduces a ZigBee WSN system, intended to be autonomous, easy to use and low in power consumption. Its functional parts are fully discussed with diagrams, as is the protocol used to collect samples from sensor nodes. Finally, several tests focused on the range and power consumption of our prototype are shown, analysing whether the results obtained were as expected.
Key words: Wireless Sensor Network, WSN, Building Construction, ZigBee, IEEE 802.15.4, Arduino, XBee.

1. Introduction

Over the last few years, a growing interest in safety in the field of building construction has been observed. Timely knowledge of the distortions and movements undergone by structures makes it possible to assess their tension and, consequently, improve workers' safety. The techniques traditionally used in the inspection of structures are very basic, mostly centred on having an operator watching out for damage present in the materials (fissures in the concrete, metal corrosion, etc.). Sometimes the structure is hidden or difficult to reach. Additionally, access to the structure can be dangerous, as is the case with bridges. All these problems complicate the examination process. Therefore, it is necessary to have alternative means of detecting pathologies [1, 2, 3].

Advances in microelectronics make it possible to design new systems for carrying out the technical inspection of works. Nowadays, it is possible to obtain embedded systems with a high degree of integration, processing and storage, and low consumption, at an affordable price. On the other hand, sensor networks have evolved to the point that it is possible to have a series of sensors sharing information and cooperating to reach a common goal. Thus, this new generation of intelligent sensors is beginning to look like a suitable technology for pathology detection. The saving in maintenance costs would make it possible to recover the initial investment in the near future, by avoiding hiring technical inspection services. The boom of wireless technologies has also reached the sphere of sensor networks, with technologies such as ZigBee [4, 5, 6], which lets us interconnect a group of sensors in an ad-hoc network without a default physical infrastructure or a centralized administration [7]. For that reason, this technology is very suitable for this application. In this article the design and implementation of a pathology detection network based on embedded systems, sensors and ZigBee technology is presented, satisfying the following requirements:

- Ease of use. The network must be able to configure itself, without human intervention, reducing maintenance costs.
- Fault tolerance. In case one of the intermediate nodes fails, the network will look for alternative routes, so as not to leave any node isolated.
- Scalability. The network should be as extensible as possible, to allow placing sensor nodes in areas potentially far away from the building.
- Low consumption. Sometimes it will be difficult or impossible to feed the nodes directly from the electric network, so it is necessary to equip them with a battery they will have to take full advantage of.
- Flexibility. The frequency with which samples are taken will change with time, which means that periodicity must be an independently configurable parameter in every single node. In addition, modification of the periodicity must be possible at any moment, even when the node is sleeping.

The system presented in this article is made of a series of sensor nodes specialized in the detection of pathologies, distributed along the structure of a building. These nodes communicate wirelessly through a ZigBee mesh network, which makes expanding the network easier, as there is no default structure; management becomes easier too. One of these nodes, called the Coordinator, is in charge of gathering and storing the samples sent by the sensor nodes at a configurable periodicity, for use in subsequent studies. Sensor nodes, in turn, are fed by a battery and can thus operate autonomously, which requires a power-saving design. The article is structured as follows: the second section describes the problem and the technical solution adopted. The third section presents the logical design of the system, showing its different parts and explaining how they operate. The fourth section deals with implementation, including commented photos of the prototype built. The fifth section shows the results of the tests performed. Finally, the sixth section is dedicated to the conclusions.

2. Problem Statement and Technology

The choice of sensors depends on the physical phenomenon to be measured. In this case, the most suitable sensors to perform pathology detection are strain gauges [8], potentiometric displacement sensors and temperature catheters [9, 10]. Strain gauges are used to measure the deformation level of materials such as concrete and steel. Their operation is based on the piezoresistive effect, which means that their internal resistance changes when they are deformed by an external force. As the voltage variations obtained are very small (less than 1 mV), it is necessary to add extra circuitry to condition the signal prior to reading its value: amplification, noise filtering, etc. [11]. Heating the gauge several minutes before sampling is another important restriction. On the other hand, potentiometric displacement sensors are used to measure the movement suffered by the structure with respect to its anchors. An important difference between the two is that whilst strain gauges are not reusable, displacement sensors are, because gauges are attached to or embedded within the structure. Another kind of sensor to take into account is the temperature catheter, which allows concrete monitoring in its first stages after the building construction has been completed.

ZigBee is a specification for a suite of high-level communication protocols using small, low-power digital radios based on the IEEE 802.15.4 standard for wireless personal area networks (WPANs). In turn, IEEE 802.15.4 specifies the physical layer and media access control for low-rate wireless personal area networks (LR-WPANs) [12]. The purpose of ZigBee is to provide advanced mesh networking functions not covered by IEEE 802.15.4. ZigBee operates in the industrial, scientific and medical (ISM) radio bands: 868 MHz in Europe, 915 MHz in the USA and Australia, and 2.4 GHz in most jurisdictions worldwide [4]. This technology is intended to be simpler, cheaper and lower in power consumption than other WPANs such as Bluetooth. In a ZigBee network there are three different types of nodes:

- ZigBee Coordinator: forms the root of the network tree and might bridge to other networks.
- ZigBee End Device: contains just enough functionality to talk to the parent node (either the coordinator or a router), and cannot relay data from other devices. This relationship allows the node to be asleep a significant amount of the time, thereby giving long battery life.
- ZigBee Router: acts as an intermediate router, passing on data from other devices. Moreover, it may support the same functions as a ZigBee End Device.
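To make the sub-millivolt gauge signals discussed earlier in this section concrete, the textbook gauge-factor relation (ΔR/R = GF · ε, with the small-strain quarter-bridge approximation Vout ≈ Vex · GF · ε / 4) can be sketched as follows. This is a generic illustration only; the paper does not give the parameters of its actual conditioning chain.

```java
// Textbook quarter-bridge strain computation; illustrative only --
// the prototype's actual gauge parameters and circuitry are not specified.
class StrainGauge {
    // Small-strain approximation: vOut ≈ vExcitation * gaugeFactor * strain / 4,
    // solved here for the strain.
    static double strainFromVoltage(double vOut, double vExcitation, double gaugeFactor) {
        return 4.0 * vOut / (vExcitation * gaugeFactor);
    }
}
```

For example, with an assumed gauge factor of 2 and a 5 V excitation, a 0.5 mV bridge output corresponds to a strain of 2e-4 (200 microstrain), which shows why amplification is needed before the ADC can read the value.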

Nowadays, ZigBee-based devices are easy to find, as many semiconductor and integrated circuit manufacturers have opted for this technology:

- Digi International, leader in Connectware solutions, offers development kits for its XBee® & XBee-PRO® ZB RF module [13].
- Rabbit, a company specialized in 8-bit microcontrollers, has developed the iDigi™ BL4S100 architecture, which consists of an XBee-PRO® ZB module with a Rabbit® 4000 microcontroller, capable of acting as an intelligent controller or ZigBee-Ethernet gateway [14].
- Ember, a monitoring and wireless sensor network provider, offers the InSight Development Kit [15], which includes everything needed to create embedded applications with their EM250/EM260 radios.
- Crossbow, one of the leading wireless sensor manufacturers, features several development kits that provide complete solutions for the development of such networks, including the Professional Kit [16].
- Sun Microsystems offers its Java-based Sun SPOT [17], formed by a processor, a battery and a set of sensors.

3. Design

This section shows the architecture and models describing the presented system.

3.1 System Architecture

Fig. 1 System Architecture

The system comprises a set of nodes interconnected in a ZigBee network, as shown in Figure 1. This structure provides mobility to place End Devices anywhere in the building, as well as the connectivity needed to collect samples:

- The Coordinator node works as a gateway between the sensor network and the main station computer, where the Coordinator Application Software runs, providing the user with an interface to manage the system.
- Router Nodes extend the network coverage throughout the whole building. They need to be placed where they can be powered without interruption.
- End Devices sample their sensors at regular intervals, and then send the values obtained to the Coordinator. When not in use, an End Device enters a low power consumption mode, called 'sleep mode', helping to increase battery life.

The Coordinator and End Devices collaborate with each other, following the protocol explained below. This protocol is important to ensure that samples are collected properly.

3.2 Communications Model

Fig. 2 Sample Collecting Protocol Sequence Chart

The Coordinator and End Devices interact using a special state message-based protocol (see Figure 2). The protocol starts when any of the End Devices of the network decides to awake, notifying the Coordinator of this situation with the message "I am awake" (1). This makes sample periodicity management easier, since each End Device knows when it is time to take the next sample. Once the protocol has been started, the Coordinator is completely free to send the End Device any remote command requests (2), for example, heating the gauge, as this is one of the requirements of this type of sensor. When the gauge is ready, the Coordinator can ask the End Device for as many samples as necessary. This approach gives us great flexibility to adapt the protocol to the application needs, with minimal system design changes. Finally, the Coordinator asks the End Device to sleep immediately (3). This event marks the end of the protocol, which will be executed again when the End Device decides to send the next sample, according to the sampling periodicity settings.
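The exchange above can be modelled as a small message-driven state machine on the End Device side. The sketch below is illustrative: the message names and the stand-in gauge reading are assumptions, not the actual frame format used by the prototype.

```java
// Illustrative model of the sample-collecting protocol described above;
// message names are assumptions, not the real ZigBee frame format.
import java.util.ArrayList;
import java.util.List;

enum Msg { I_AM_AWAKE, HEAT_GAUGE, SAMPLE_GAUGE, SLEEP }

class EndDeviceSim {
    boolean asleep = true;
    boolean gaugeHot = false;
    final List<Double> sent = new ArrayList<>();

    // Step (1): the End Device's own timer fires and it notifies the Coordinator.
    Msg wakeUp() { asleep = false; return Msg.I_AM_AWAKE; }

    // Steps (2)-(3): react to remote commands from the Coordinator.
    void onCommand(Msg m) {
        switch (m) {
            case HEAT_GAUGE:   gaugeHot = true; break;
            case SAMPLE_GAUGE: if (gaugeHot) sent.add(readGauge()); break;
            case SLEEP:        asleep = true; gaugeHot = false; break;
            default: break;
        }
    }

    double readGauge() { return 0.42; } // stand-in for the real ADC read
}
```

A Coordinator-side round then becomes: on I_AM_AWAKE, send HEAT_GAUGE, wait for the gauge to warm up, send SAMPLE_GAUGE one or more times while storing the replies, and finish with SLEEP.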


The protocol requires processing capacity both in the Coordinator and in the End Device, as illustrated below.

Coordinator Flow Chart (Figure 3): The Coordinator waits for the message "I am awake", which is sent by one of the End Devices, which own the periodicity control. When this message arrives, the Coordinator asks the End Device to heat its strain gauge. After a while, the Coordinator asks the End Device to send one sample, which is stored in the Coordinator's database when it is received. Finally, the Coordinator puts the End Device to sleep.

Fig. 3 Coordinator Flow Chart

End Device Flow Chart (Figure 4): The End Device has its own internal timer to know when to wake up, according to the sampling frequency settings. When it is time to wake up, the End Device notifies the Coordinator with the message "I am awake", and enters an idle state, waiting for remote request commands coming from the Coordinator. In the example shown, there are three possible commands: Heat Gauge, Sample Gauge and Sleep. Whenever a remote command is received, the message type is checked and the appropriate action is executed. The protocol ends when the Coordinator sends a Sleep End Device Request.

Fig. 4 End Device Flow Chart

3.3 Network Model

The network model is composed of three different sub-models, in accordance with their responsibilities, as mentioned earlier:

- Coordinator Model
- Router Model
- End Device Model

Coordinator Model: The Coordinator is responsible for collecting samples from the End Devices, besides network management. There is only one in the entire network, and it consists of the following elements:

- User Interface. Some of the operations performed on the system require user interaction, hence the need for a data entry interface.
- XBee-API Communications Library. An object model library specially designed to talk to XBee ZB modules in API mode.
- Database. Used to store samples persistently, so that they can be analyzed or queried later.
- Coordinator Controller. Here lies the Coordinator core, where the protocol operations are performed.


Router Model: Routers are useful to extend the network coverage, enabling communication between the Coordinator and End Devices that would otherwise be impossible or very difficult to establish because of distance or the presence of obstacles (walls, floors, ceilings, structures, etc.). Although their presence on the network is optional, routers are usually placed at several strategic points to extend coverage effectively.

End Device Model: Figure 5 shows the functional elements composing an End Device:

- Control Unit. Here lies the End Device core, where the protocol operations are performed. Besides, it manages all other components and performs actions such as sampling sensors, sending and receiving ZigBee messages, setting the alarm clock, etc.
- Alarm Clock. Once configured, it is able to wake up the whole End Device circuitry. The sampling frequency is set here. It must always be powered.
- ZigBee End Device Module. Enables remote communication with the Coordinator through ZigBee technology. It must always be powered, as it may receive data at any time, even when the End Device is asleep.
- Conditioning Circuitry. Responsible for adapting the signal obtained from the strain gauge (amplifying, filtering, ADC-converting, etc.) so the control unit can read its value.

Fig. 5 End Device Functional Model

4. Implementation

4.1 Prototype Description

The hardware platform selected was SquidBee [18], an open mote architecture formed by the Arduino Duemilanove Board [19], the Digi International XBee® ZB [20] and a set of basic sensors (temperature [21], light and humidity [22]), distributed by Libelium [23]. The main advantage of this platform is its great flexibility, as it allows us to build any node (Coordinator, Router or End Device) with very few hardware and firmware changes to its basic architecture [24]. The End Device's Control Unit has been implemented in the Arduino's Atmel ATmega168 microcontroller [25]. On the other hand, the XBee® ZB integrates the "alarm clock", since it is able to wake up at regular customizable intervals (formally, cyclic sleep [20]). This makes remote configuration much easier, as XBee provides simple mechanisms to change remote variables from the Coordinator. Finally, the Coordinator Application is Java-based desktop software running on a computer with the Coordinator connected to one of its USB ports. The XBee-API Library [26] has been used to implement this application.

4.2 Coordinator Implementation

Here is the component list for the Coordinator, as shown in Figure 6:

- Rigid case. Protects the components and the circuitry.
- USB cable. The connection between the Coordinator and the computer is established by a USB connector, which also powers this node.
- XBee® ZB RF Module. Provides the ZigBee connection. Must be set up with the Coordinator firmware.
- 2.4 GHz Antenna. An external antenna connected to the XBee, using a U.FL proprietary connector from Hirose Electronic Group [27].
- Arduino Board (without microcontroller). Since the Coordinator Application runs on the computer, the microcontroller is no longer needed.
- XBee Shield. Enables the ZigBee connection to the Arduino Board. The USB connection must be selected [28].


Fig. 6 Coordinator components

4.3 Router Hardware Implementation

In this case, the component list remains almost the same as the Coordinator's, so only different and new components are highlighted:

- USB charger. Unlike the Coordinator, Routers are not connected to a computer, so a USB charger is needed to plug the node into the power supply.
- XBee® ZB RF Module. Despite being physically identical, different firmware is needed: the Router firmware must be set up in the module [29].

The final assembled node is shown in Figure 7.

Fig. 7 Assembled Router

4.4 End Device Hardware Implementation

Again, only different and new components are highlighted. The components are shown in Figure 8:

- Rechargeable Lithium Battery. Since the End Device needs full autonomy, it must be powered by a battery (1100 mAh in our case).
- Arduino Board with Atmel ATmega168. The Arduino Board must have its microcontroller with our protocol implementation loaded in memory.
- XBee Shield. Its configuration is slightly different from the others. First, an extra connection is needed to wake the microcontroller from the XBee [29]. Second, the XBee connection must be selected (mind that the USB connection was the one selected previously) [28].
- XBee® ZB RF Module. The End Device firmware must be set up [24].
- Data Acquisition Board. This is the special circuitry shown in Figure 9, equipped with several signal conditioning components (Analog to Digital Converters or ADCs, among others) used to connect strain gauges, potentiometric displacement sensors, temperature catheters and so on. Communication between the End Device and the Data Acquisition Board is performed through a serial port connection, using a specific set of commands.

Fig. 8 End Device components

Fig. 9 Data Acquisition Board

As we will see in the Testing Scenario, we take temperature samples from the Data Acquisition Board, which has one temperature sensor [21] attached to one of its multiple input channels. The communication between the End Device and the Data Acquisition Board is performed through an RS-232 serial connection. Since there are no more serial connectors available on the Arduino Board (apart from the one establishing the communication with the ZigBee network), two virtual serial port connectors were created, using special circuitry (shown in Figure 10) and an Arduino library called SoftwareSerial [30]. One of those ports establishes the communication, and the other one enables the log output, which is helpful when performing debugging tasks.

4.5 End Device Cyclic Sleep

XBee End Devices support cyclic sleep, allowing the module to wake periodically to check for RF data and to sleep when idle. Since changes in the sampling frequency must be immediate (or almost immediate), incoming messages are checked every 28 seconds; XBee, however, does not impose any restriction on this value [20]. As the sampling frequency rarely changes, the XBee will not receive any data most of the time. Consequently, there is no need to wake the external circuitry every time the XBee awakes, so it makes sense to distinguish between the XBee and external circuitry wake periods, denoted here T_XBee and T_ext respectively. Equation 1 shows the relation between both variables:

T_ext = n · T_XBee,  n ∈ {1, 2, 3, ...}   (1)

For example, consider an XBee module waking once every 28 seconds and waking an external sampling circuitry once every 2 minutes (n = 4) through an interrupt line; the XBee must be set with the corresponding sleep parameters.

Fig. 10 Fully assembled End Device

This special circuitry converts the Arduino's TTL voltage levels into RS-232 levels, since an RS-232 connector is needed to talk to both the Data Acquisition Board and the log software. The electronic scheme, shown in Figure 11, is based on the ADM232L chip [31], which supports up to 2 serial ports.

Figure 12 represents this example graphically. The external circuitry is awake on every fourth XBee wake-up, matching the specified timing, as the timelines show. This behavior is repeated cyclically, hence the name.

Fig. 12 Timeline Chart showing Wake Times
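The period arithmetic behind Equation 1 can be sketched in a few lines. The 28-second XBee period and roughly 2-minute sampling period come from the example above; the round-to-nearest rule used to pick n is our assumption.

```python
# How many XBee wake cycles pass between external-circuitry wake-ups?
# t_xbee: XBee cyclic-sleep period (s); t_ext: desired sampling period (s).
# Rounding to the nearest integer multiple is an assumption of this sketch.

def wake_ratio(t_xbee, t_ext):
    """Integer number of XBee wake-ups per external wake-up (>= 1),
    plus the actually achieved external period."""
    n = max(1, round(t_ext / t_xbee))
    return n, n * t_xbee

n, actual = wake_ratio(28, 120)   # XBee every 28 s, sampling every ~2 min
print(n, actual)                  # 4 112
```

This reproduces the timeline of Figure 12: the external circuitry is woken on every fourth XBee wake-up, giving an effective period of 112 s, the closest achievable value to 2 minutes.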

5. Testing

Several tests dealing with coverage and power consumption were done in order to evaluate the prototype's performance.

Fig. 11 TTL-to-RS232 Circuitry Scheme

5.1 Coverage Test

This test consisted in measuring the mean RSSI (Received Signal Strength Indication) value obtained from the reception of 100 messages. Each measure is averaged over 5 repetitions of the same experiment, to counteract the signal fluctuations caused by indoor fading [32]. Both modules transmitted with a power of 3 dBm and boost mode enabled [20]. Also, the ZigBee nodes were configured to automatically select the channel with the least interference from other nearby networks [33]. The obtained results are shown in Tables 1 and 2.

Free Space    Mean Attenuation (dB)
50 cm         0.00
1 m           8.16
2 m           11.65
4 m           19.91
8 m           23.93
11 m          29.61

Table 1: Signal Loss in Free Space

Obstacle                          Mean Attenuation (dB)
Window (Open Metallic Blinds)     1.04
Window (Closed Metallic Blinds)   3.95
Wall with Open Door               0.39
Wall with Closed Door             1.19
Brick Wall                        1.46
Between Floors                    13.08

Table 2: Signal Loss with Obstacles

Note that the total attenuation is the sum of the free space loss and all obstacle losses [22].

5.2 Power Consumption Test

Prototype power consumption has been measured for each of the possible states using a polymeter:

Node State              Consumption (mA)
Sleeping                21.10
Awake (Idle)            69.80
Awake (Transmitting)    109.80

Table 3: Power Consumption Test

The consumption of a sleeping node is abnormally high, as shown in Table 3, due to a design fault in the Arduino board: the board draws the same current regardless of whether the microcontroller and XBee are sleeping or not [34]. As a result, the authors are developing their own customized Arduino board in which this problem is solved. On the other hand, it is important to highlight that sending data causes a consumption peak which, however high, lasts a very short time. The last measure, corresponding to the consumption when the node is awake and transmitting, has been estimated considering the transmit consumption and that of every circuit component.

5.3 Testing Scenario

In this example, a network of three nodes, one of each type, is deployed. As shown in Figure 13, the End Device is placed in Classroom 1.1 and the Router in the Repository, whilst the Coordinator is in Laboratory 1.2.

Fig. 13 Node distribution in the proposed Scenario

A temperature sensor is attached to the End Device (see Figure 14), sending temperature measures twice per hour. This rate was set from the Coordinator.

Fig. 14 End Device with a Temperature Sensor in Classroom 1.1

Unlike the previous nodes, the Coordinator is placed in Laboratory 1.2 and connected to a computer through a USB port, as shown in Figure 15. The Coordinator Software running on this computer allows the user to set the sample collecting frequency, or even store the samples in a file.
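Two back-of-envelope checks follow from the measured values: the received power over an 11 m path crossing two brick walls, and the End Device battery lifetime for a 30-minute cycle with 5 seconds awake. The assumption that the whole active slice is spent transmitting is ours; under it the result lands near the roughly 51-hour lifetime reported below.

```python
# Back-of-envelope checks using the measured values from Tables 1-3.
# Assumption: the 5 s active slice of each 30 min cycle is spent transmitting.

def rx_power_dbm(tx_dbm, losses_db):
    """Received power: transmit power minus the summed path losses."""
    return tx_dbm - sum(losses_db)

# 11 m free-space loss (Table 1) plus two brick walls (Table 2), 3 dBm Tx.
print(round(rx_power_dbm(3.0, [29.61, 1.46, 1.46]), 2))   # -29.53

def battery_life_h(capacity_mah, i_sleep, i_active, cycle_s, active_s):
    """Lifetime in hours from the duty-cycled average current draw."""
    avg_ma = (i_sleep * (cycle_s - active_s) + i_active * active_s) / cycle_s
    return capacity_mah / avg_ma

# Table 3 currents, 30 min cycle with 5 s awake, 1100 mAh battery.
print(round(battery_life_h(1100, 21.10, 109.80, 1800, 5), 1))  # 51.5
```

The high sleeping current (21.10 mA) dominates this estimate, which is why fixing the Arduino sleep-current fault is the key to reaching multi-month lifetimes.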

Fig. 15 Coordinator connected to a Computer USB Port in Lab 1.2

This scenario has been tested in real time, using a simple network with three nodes: one Coordinator, one Router and one End Device. When the Router is off, the received signal power at the Coordinator (see Path 1 in Figure 13) is too low to establish a connection with the End Device. However, when the Router located between the Coordinator and the End Device is switched on, the received signal power increases (to around -29.53 dBm: 3 dBm of transmit power, -29.61 dB of free space loss and -2.92 dB of two-brick-wall obstacle loss), enabling communication between them.

With this scenario it is also possible to estimate the End Device battery lifetime. Considering a 30-minute sleeping cycle with a 5-second stay in the active state, the average consumption calculated from the values shown in Table 3 is 23.79 mA. Therefore, using an 1100 mAh battery, the total estimated lifetime is around 51 hours. Note that this consumption is too high, a consequence of the Arduino board design, as noted before. The authors of this article are therefore working on the design of a new Arduino-based board with very low consumption in the sleeping state (just a few µA). Thanks to this improved design, it will be possible to extend battery lifetime to several months.

6. Conclusions

This article has presented a construction pathology detection system, based on a wireless sensor network using ZigBee technology, which enables continuous monitoring of the parameters of interest while meeting the requirements of low consumption, ease of maintenance and installation flexibility. Its functional parts were fully discussed with diagrams, including the protocol specifically designed to collect samples from sensor nodes, and several photos of the built prototype were shown. In addition, results showing the typical node coverage limits and node consumption have been obtained for several situations.

Acknowledgments

This work has been supported by: 07TIC019105PR (Xunta de Galicia, Spain), TSI-020301-2008-2 (Ministerio de Industria, Turismo y Comercio, Spain) and 08TIC014CT (Instituto Tecnológico de Galicia, Spain).

References

[1] D. Wall, Building Pathology: Principles and Practice, Wiley-Blackwell, 2007.
[2] S. Y. Harris, Building Pathology: Deterioration, Diagnostics and Intervention, John Wiley & Sons, 2001.
[3] L. Addleson, Building Failures: A Guide to Diagnosis, Remedy and Prevention, Butterworth-Heinemann Ltd, 1992.
[4] ZigBee Standards Organization, "ZigBee 2007 Specification Q4/2007": http://www.zigbee.org/Products/TechnicalDocumentsDownload/tabid/237/Default.aspx
[5] E. H. Callaway, Wireless Sensor Networks: Architectures and Protocols, Auerbach Publications, 2003.
[6] J. A. Gutierrez et al., Low-Rate Wireless Personal Area Networks: Enabling Wireless Sensors with IEEE 802.15.4, IEEE Press, 2003.
[7] F. L. Zucatto et al., "ZigBee for Building Control Wireless Sensor Networks", in Microwave and Optoelectronics Conference, 2007.
[8] W. M. Murry, W. R. Miller, The Bonded Electrical Resistance Strain Gage: An Introduction, Oxford University Press, 1992.
[9] J. S. Wilson, Sensor Technology Handbook, Elsevier Inc., 2005.
[10] J. Fraden, Handbook of Modern Sensors: Physics, Designs and Applications, Springer, 2004.
[11] R. Pallás-Areny, J. G. Webster, Sensors and Signal Conditioning, John Wiley & Sons, 2001.
[12] IEEE 802.15.4-2003, "IEEE Standard for Local and Metropolitan Area Networks: Specifications for Low-Rate Wireless Personal Area Networks", 2003.
[13] Digi XBee® & XBee-PRO® ZigBee® PRO RF Modules: http://www.digi.com/products/wireless/zigbee-mesh/xbee-zb-module.jsp
[14] Rabbit Application Development Kit for ZigBee Networks: http://www.rabbit.com/products/iDigi_bl4s100_addon_kit/index.shtml


[15] Ember Insight Development Kit: http://www.ember.com/products_zigbee_development_tools_kits.html
[16] Crossbow WSN Professional Kit: http://www.xbow.com/Products/productdetails.aspx?sid=231
[17] Sun SPOT Development Kit: http://www.sunspotworld.com/
[18] Libelium SquidBee, Open Mote for Wireless Sensor Networks: http://www.squidbee.org/
[19] Arduino Duemilanove Board, Open-Source Electronics Prototyping Platform: http://www.arduino.cc/en/Main/ArduinoBoardDuemilanove
[20] XBee®/XBee-PRO® ZB OEM RF Modules Manual, ver. 4/14/2008: http://ftp1.digi.com/support/documentation/90000976_a.pdf
[21] National Semiconductor LM35 (Precision Centigrade Temperature Sensor): http://www.national.com/mpf/LM/LM35.html
[22] 808H5V5 Humidity Transmitter: http://www.sensorelement.com/humidity/808H5V5.pdf
[23] Libelium: http://www.libelium.com/
[24] Digi International, "Upgrading RF Modem modules to the latest firmware using X-CTU": http://www.digi.com/support/kbase/kbaseresultdetl.jsp?id=2103
[25] Atmel ATmega168 Datasheet: http://www.atmel.com/dyn/resources/prod_documents/doc2545.pdf
[26] Java API for Communicating with XBee® & XBee-PRO® RF Modules: http://code.google.com/p/xbee-api/
[27] Hirose Electronic Group, "Ultra Small Surface Mount Coaxial Connectors": http://www.hirose.co.jp/cataloge_hp/e32119372.pdf
[28] Libelium, "How to Create a Gateway Node": http://www.libelium.com/squidbee/index.php?title=How_to_create_a_gateway_node
[29] Wireless Sensor Network Research Group, "How to Save Energy in the WSN: Sleeping the Motes": http://www.sensor-networks.org/index.php?page=0820520514
[30] Arduino, SoftwareSerial Library: http://arduino.cc/en/Reference/SoftwareSerial
[31] Analog Devices, 5 V Powered CMOS RS-232 Drivers/Receivers: http://www.analog.com/static/imported-files/data_sheets/ADM231L_232L_233L_234L_236L_237L_238L_239L_241L.pdf
[32] A. Goldsmith, Wireless Communications, Cambridge University Press, 2005.
[33] K. Shuaib et al., "Co-Existence of ZigBee and WLAN: A Performance Study", in Wireless and Optical Communications Conference, 2007.
[34] Arduino Community Page, "Arduino Sleep Code": http://www.arduino.cc/playground/Learning/ArduinoSleepCode


Santiago J. Barro Torres. He obtained an MS in Computer Engineering in 2008 and an MS in Wireless Telecommunications in 2009, both from A Coruña University. His research interests are Digital Communications, Wireless Sensor Networks, Microcontroller Programming and RFID Systems.

Carlos J. Escudero Cascón. He obtained an MS in Telecommunications Engineering from Vigo University in 1991 and a PhD degree in Computer Engineering from Coruña University in 1998. He obtained two grants to stay at Ohio State University as a research visitor, in 1996 and 1998. In 2000 he was appointed Associate Professor and, more recently, in 2009, Government Vice-Dean in the Computer Engineering Faculty at Coruña University. His research interests are Signal Processing, Digital Communications, Wireless Sensor Networks and Location Systems. He has published several technical papers in journals and conferences, and has supervised one PhD thesis.


SeeReader: An (Almost) Eyes-Free Mobile Rich Document Viewer Scott CARTER, Laurent DENOUE FX Palo Alto Laboratory, Inc. 3400 Hillview Ave., Bldg. 4 Palo Alto, CA 94304 {carter,denoue}@fxpal.com

Abstract

Reading documents on mobile devices is challenging. Not only are screens small and difficult to read, but navigating an environment using limited visual attention can also be difficult and potentially dangerous. Reading content aloud using text-to-speech (TTS) processing can mitigate these problems, but only for content that does not include rich visual information. In this paper, we introduce a new technique, SeeReader, that combines TTS with automatic content recognition and document presentation control, allowing users to listen to documents while also being notified of important visual content. Together, these services allow users to read rich documents on mobile devices while maintaining awareness of their visual environment.

Keywords: Document reading, mobile, audio.

1. Introduction

Reading documents on the go can be difficult. As previous studies have shown, mobile users have limited stretches of attention during which they can devote their full attention to their device [8]. Furthermore, studies have shown that listening to documents can improve users' ability to navigate real-world obstacles [11]. However, while solutions exist for unstructured text, these approaches do not support the figures, pictures, tables, callouts, footnotes, etc., that might appear in rich documents. SeeReader is the first mobile document reader to support rich documents by combining the affordances of visual document reading with auditory speech playback and eyes-free navigation. Traditional eReaders have been either purely visual or purely auditory, with the auditory readers reading back unstructured text. SeeReader supports eyes-free structured document browsing and reading as well as automatic panning to, and notification of, crucial visual components. For example, while reading the text "as shown in Figure 2" aloud to the user, the visual display automatically frames Figure 2 in the document. While this is most useful for document figures, any textual reference can be used to change the visual display, including footnotes, references to other sections, etc.

Figure 1: SeeReader automatically indicates areas of visual interest while reading document text aloud. The visual display shows the current reading position (left) before it encounters a link (right). When viewing the whole page, SeeReader indicates the link (top). When viewing the text, SeeReader automatically pans to the linked region (bottom). In both views, as the link is encountered, the application signals the user by vibrating the device.

Furthermore, SeeReader can be applied to an array of document types, including digital documents, scanned documents, and web pages. In addition to using references in the text to automatically pan and zoom to areas of a page, SeeReader can provide other services automatically, such as following links in web pages or initiating embedded macros. The technology can also interleave useful information into the audio stream. For example, for scanned documents the technology can indicate in the audio stream (with a short blip or explanation) when recognition errors would likely make text-to-speech (TTS) translation unusable. These indications can be presented in advance to allow users to avoid listening to garbled speech. In the remainder of this paper, we briefly review other mobile document reading technologies, describe SeeReader including server processes and the mobile interface, and describe a study we ran to verify the usefulness of our approach.

Figure 2: Data flow for digital documents (left) and scanned documents (right).

2. Mobile Document Reading

"The linear, continuous reading of single documents by people on their own is an unrealistic characterization of how people read in the course of their daily work." [1] Work-related reading is a misnomer. Most "reading" involves an array of activities, often driven by some well-defined goal, and can include skimming, searching, cross-referencing, or annotating. For example, a lawyer might browse a collection of discovery documents in order to find where a defendant was on the night of October 3, 1999, annotate that document, cross-reference it with another document describing conflicting information from a witness, and begin a search for other related documents.

A growing number of mobile document reading platforms are being developed to support these activities, including specialized devices such as the Amazon Kindle™ (and others [10]) as well as applications for common mobile platforms such as the Adobe Reader™ for mobile devices. Past research has primarily focused on active reading tasks, in which the user is fully engaged with a document [9, 4]. In these cases, support for annotation, editing, and summarization is critical. Our goal, on the other hand, is to support casual reading tasks for users who are engaged in another activity. A straightforward approach for this case is to use the audio channel to free visual attention for the primary task. Along these lines, the Amazon Kindle™ includes a TTS feature. However, the Kindle provides no visual feedback while reading a document aloud. Similarly, the knfbReader™ converts words in a printed document to speech (http://www.knfbreader.com/). However, as this application was designed primarily for blind users, its use of the mobile device's visual display is limited to an indication only of the text currently being read. Other mobile screen readers, such as Mobile Speak (http://www.codefactory.es/), can be configured to announce when they have reached a figure; however, as they do not link to textual references, they are likely to interrupt the reading flow of the text. Similarly, with Click-Through Navigation [3] users can click on figure references in body text to open a window displaying the figure. SeeReader improves on these techniques by making figures (and other document elements) visible on the screen automatically when references to them are read.

3. Analysis Pipeline

The SeeReader mobile interface depends upon third-party services to generate the necessary metadata. Overall, SeeReader requires the original document, information delineating regions in the document (figures, tables, and paragraph boundaries) as well as keyphrase summaries of those regions, the document text parsed to include links and notifications, and audio files. In this section, we describe this process (shown in Figure 2) in detail.

Initially, documents, whether they are scanned images or electronic (PDFs), are sent to a page layout service that produces a set of rectangular subregions based on the underlying content. Subregions might include figures and paragraph boundaries. In our approach, we use a version of the page segmentation and region classification described in [5]. Region metadata is stored in a database. In the next step the body text is digitized. This is automatic for electronic documents, while for scanned documents we use Microsoft Office™ OCR. Next, text is sent to a service that extracts keyphrases summarizing each region. Many text summary tools would suffice for this step. We use a version of [6] modified to work on document regions. Once processed, keyphrases are saved in a database. Simultaneously, text is sent to an algorithm we developed to link phrases to other parts of the document. For example, our algorithm links text such as "Figure 2" to the figure proximate to a caption starting with the text "Figure 2." Our algorithm currently only creates links with low ambiguity, including figures, references, and section headings, using simple rules based on region proximity (similar to [7]). These links are also saved in a database. Finally, the document text and region keyphrases are sent to a TTS service (AT&T's Natural Voices™). In the case of scanned documents, OCR scores are used to inject notifications to the user of potentially poorly analyzed blocks of text (e.g., this process may inject the phrase "Warning, upcoming TTS may be unintelligible"). The resulting files are processed into low-footprint Adaptive Multi-Rate (AMR) files and saved in a database.
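The low-ambiguity linking step can be approximated in a few lines. The regular expression and the caption-prefix matching rule below are our guesses at the kind of heuristic described, not the authors' actual algorithm (which also uses region proximity).

```python
import re

# Toy version of the reference-linking step: connect phrases like
# "Figure 2" in body text to the region whose caption starts the same way.
# The regex and caption-prefix rule are illustrative assumptions.

REF = re.compile(r'\b(Figure|Table|Section)\s+(\d+)')

def link_references(body_text, regions):
    """regions: list of (region_id, caption). Returns (phrase, region_id) links."""
    links = []
    for kind, num in REF.findall(body_text):
        phrase = f"{kind} {num}"
        for region_id, caption in regions:
            if caption.startswith(phrase):
                links.append((phrase, region_id))
                break
    return links

regions = [("r1", "Figure 1: SeeReader overview"),
           ("r2", "Figure 2: Data flow")]
text = "The pipeline, shown in Figure 2, begins with layout analysis."
print(link_references(text, regions))   # [('Figure 2', 'r2')]
```

A prefix match like this stays deliberately conservative, mirroring the paper's choice to create only low-ambiguity links.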

4. Mobile Application

The SeeReader mobile interface is a J2ME application capable of running on a variety of platforms. SeeReader supports both touchscreen and standard input. Thus far, we have tested it on Symbian devices (Nokia N95), Windows Mobile devices (HTC Touch), and others (e.g., LG Prada). The application acquires documents and their associated metadata (as produced by the pipeline described above) via a remote server whose location is specified in a configuration file. The application can also read document data from local files. When configured to interact with a remote server, the application downloads data progressively, obtaining first for each document only XML metadata and small thumbnail representations. When the user selects a document (described below), the application retrieves compressed page images first, and AMR files as a background process, allowing users to view a document quickly.

The interface supports three primary views: document, page, and text. The document view presents thumbnail representations of all available documents. Users can select the document they wish to read via either the number pad or by pressing directly on the screen.
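The progressive-download policy just described (light metadata and thumbnails first, bulky audio last) amounts to a priority ordering over pending assets; a minimal sketch follows, where the asset-kind names and numeric priorities are invented for illustration.

```python
# Sketch of the progressive-download order described above: light metadata
# first so a document opens quickly, bulky audio fetched last in the
# background. Asset kinds and priorities are illustrative assumptions.

PRIORITY = {"xml_metadata": 0, "thumbnail": 1, "page_image": 2, "amr_audio": 3}

def download_order(assets):
    """Sort pending (kind, name) assets so the viewer is usable ASAP."""
    return sorted(assets, key=lambda a: PRIORITY[a[0]])

pending = [("amr_audio", "p1.amr"), ("page_image", "p1.png"),
           ("xml_metadata", "doc.xml"), ("thumbnail", "doc.thm")]
print([name for kind, name in download_order(pending)])
# ['doc.xml', 'doc.thm', 'p1.png', 'p1.amr']
```

In the actual application the later stages run as a background process, so page images and audio arrive while the user is already browsing.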

Figure 3: A user interacting with the touchwheel. As the user drags her finger around the center of the screen, the application vibrates the device to signal sentence boundaries and plays an audio clip of the keyphrase summarizing each region as it is entered.

After selecting a document, the interface is in page view mode (see Figure 1, top). We support both standard and touch-based navigation in page mode. For devices without a touchscreen, users can press command keys to navigate between regions and pages. For devices with a touchscreen, a user can set the cursor position for reading by pressing and holding on the area of interest. After a short delay, the application highlights the region the user selected and then begins audio playback with the first sentence of the selected region. To support eyes-free navigation, we implemented a modified version of the touchwheel described by Zhao et al. in [12] that provides haptic and auditory feedback to users as they navigate. This allows the user to maintain their visual attention on another task while still perusing the document. As the user gestures in a circular motion (see Figure 3), the application vibrates the device to signal sentence boundaries to the user. The application also reads aloud the keyphrase summary of each region as it is entered. In addition, we inject other notifications, such as page and document boundaries, to help the user maintain an understanding of their position in the document as they navigate with the touchwheel. While in page view, users can flick their finger across the screen or use a command key to navigate between pages. Users can also click a command key or double-click on the screen to zoom in to the text view (see Figure 1, bottom). The text view shows the details of the current document page; users can navigate the page using either touch or arrow keys. Double-clicking again zooms the display back to the page view. Multiple actions launch the read-aloud feature. In page view mode, when a user releases the touchwheel, selects a region by pressing and holding, or presses the selection key, the application automatically begins reading at the selected portion of text.
The user can also press a command key at any time to start or pause reading. While reading, SeeReader indicates the boundaries of the sentence being read. When SeeReader encounters a link, it vibrates the device and either highlights the link or automatically pans to the location of the linked content, depending on whether the device is in page view or text view mode, respectively (see Figure 1).
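The link behavior shown in Figure 1 (vibrate on a link, then highlight in page view or pan in text view) can be sketched as a small event function. The event names and data shapes here are our own illustration, not the application's actual API.

```python
# Sketch of SeeReader's link handling while reading aloud (Figure 1):
# when the sentence being read carries a link, vibrate the device, then
# either highlight the target (page view) or pan to it (text view).
# Event names and data shapes are illustrative assumptions.

def on_sentence(sentence_idx, links, view):
    """links: dict mapping sentence index -> linked region id.
    Returns the UI events to fire for this sentence."""
    events = []
    if sentence_idx in links:
        target = links[sentence_idx]
        events.append(("vibrate",))
        if view == "page":
            events.append(("highlight", target))
        else:  # text view
            events.append(("pan_to", target))
    return events

links = {3: "figure-2"}
print(on_sentence(3, links, "text"))   # [('vibrate',), ('pan_to', 'figure-2')]
print(on_sentence(2, links, "page"))   # []
```

Keeping the mapping from sentences to regions precomputed (by the analysis pipeline) lets the mobile client react to links with a simple lookup on each sentence boundary.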

5. Evaluation

We ran a within-subjects, dual-task study as a preliminary evaluation of the core features of the SeeReader interface. Participants completed two reading tasks while also doing a physical navigation task. Common dual-task approaches to evaluating mobile applications involve participants walking on treadmills or along taped paths while completing a task on the mobile device. These approaches are designed to simulate the bodily motion of walking [2]. However, they do not simulate the dynamism of a real-world environment. We developed a different approach for this study that focuses on collision avoidance rather than walking per se. In this configuration, participants move laterally either to hit targets (such as doorways or stairs) or avoid obstacles (such as telephone poles or crowds). We simulated the targets and obstacles on a large display visible in the participant's periphery as they used the mobile device (see Figure 4). To sense lateral motion, a Wiimote™ mounted on the top of the display tracked an IR LED attached to a hat worn by each participant. We also included an override feature allowing a researcher to manually set the participant's location with a mouse in the event that the sensor failed. In addition to simulating a more dynamic situation, this approach has the advantage of being easily repeatable and producing straightforward measures of success for the peripheral task (task completion time and the number of barriers and targets hit). In order to understand the benefits of eyes-free document reading, we compared the touchscreen SeeReader interface against a modified version with audio and vibration notifications removed (similar to a standard document reader). At the beginning of the experiment, we asked participants to open and freely navigate a test document using both interfaces. After 10 minutes of use, we then had participants begin the main tasks on a different document.
We used a 2x2 design, having the participants complete two reading tasks, one on each interface, in randomized order. At the end of each task we asked participants two simple, multiple-choice comprehension and recall questions.

Figure 4: Study participants read documents while peripherally monitoring an interface, moving their body either to avoid barriers (red) or to acquire targets (green). A Wiimote™ tracked an IR LED attached to a hat to determine their location.

Finally, at the end of the study we asked participants the following questions (with responses recorded on a 7-point scale, where 1 mapped to agreement and 7 mapped to disagreement): “I found it easier to use the audio version than the non-audio version”; “I was comfortable using the touchwheel”; “It felt natural to listen to this document”; “The vibration notifications were a sufficient indication of links in the document”. We ran a total of 8 participants (6 male and 2 female) with an average age of 38 (SD 6.02). Furthermore, only 2 of the 8 participants reported having extensive experience with gaming and mobile interfaces.

6. Results

Overall, we found that participants using SeeReader were able to avoid more barriers and hit more targets while sacrificing neither completion time nor comprehension of the reading material. Using SeeReader, participants hit 76% (12% RSD) of the targets and avoided all but 10% (5% RSD) of the barriers, while using the non-audio reader participants hit 63% (11% RSD) of the targets and 17% (5% RSD) of the barriers. Meanwhile, average completion time across all tasks was virtually identical: 261 seconds (SD 70) for SeeReader and 260 seconds (SD 70) for the non-audio version. Comprehension scores were also similar: using SeeReader, participants answered on average 1.13 (SD 0.64) questions correctly, versus 1 (SD 0.87) using the non-audio version. In the post-study questionnaire, participants reported that they found SeeReader easier to use (avg. 2.75, SD 1.58) and felt it was natural to listen to the document (avg. 3.00, SD 1.31). However, participants were mixed on the use of the touchwheel (avg. 4.38, SD 1.85) and the vibration notifications (avg. 4.13, SD 1.89).

IJCSI

40

IJCSI International Journal of Computer Science Issues, Vol. 1, 2009

While participants had some issues with SeeReader, overall they found it more satisfying than the non-audio version. More than one participant noted that they felt they did not yet have “enough training with [SeeReader’s] touchwheel.” Still, comments revealed that using the non-audio version while completing the peripheral task was “frustrating” and that some participants had no choice but to “stop to read the doc[ument].” On the other hand, SeeReader allowed participants to complete both tasks without feeling too overwhelmed. One participant noted that, “Normally I dislike being read to, but it seemed natural [with] SeeReader.”

7. Discussion

The results, while preliminary, imply that even with only a few minutes familiarizing themselves with these new interaction techniques, participants may be able to read rich documents using SeeReader while also maintaining an awareness of their environment. Furthermore, the user population we chose was only moderately familiar with these types of interfaces. These results give us hope that, after more experience with the application, users comfortable with smart devices would be able to use SeeReader not only to remain aware of their environment but also to read rich documents faster. Methodologically, we were encouraged that participants were able to rapidly understand the peripheral task, and they generally performed well. Still, it was clear that participants felt somewhat overwhelmed trying to complete two tasks at once, both with interfaces they had not yet encountered. We are considering how to iterate on this method to make it more natural while still incorporating the dynamism of a realistic setting.

8. Conclusion

We presented a novel document reader that allows users to read rich documents while also maintaining an awareness of their physical environment. A dual-task study showed that users may be able to read documents with SeeReader as well as with a standard mobile document reader while also being more aware of their environment. In future work, we intend to experiment with this technique in automobile dashboard systems. We can take advantage of other sensors in automobiles to adjust the timing of the display of visual content (e.g., visual content could be shown only when the car is idling). We anticipate that SeeReader may be even more useful in this scenario given the high cost of distraction for drivers.

9. Acknowledgements

We thank Dr. David Hilbert for his insights into improving the mobile interface. We also thank our study participants.

References
[1] Annette Adler, Anuj Gujar, Beverly L. Harrison, Kenton O’Hara, and Abigail Sellen. A diary study of work-related reading: Design implications for digital reading devices. In CHI ’98, pages 241-248.
[2] Leon Barnard, Ji S. Yi, Julie A. Jacko, and Andrew Sears. An empirical comparison of use-in-motion evaluation scenarios for mobile computing devices. International Journal of Human-Computer Studies, 62(4):487-520, April 2005.
[3] George Buchanan and Tom Owen. Improving navigation interaction in digital documents. In JCDL ’08, pages 389-392.
[4] Nicholas Chen, Francois Guimbretiere, Morgan Dixon, Cassandra Lewis, and Maneesh Agrawala. Navigation techniques for dual-display e-book readers. In CHI ’08, pages 1779-1788.
[5] Patrick Chiu, Koichi Fujii, and Qiong Liu. Content based automatic zooming: Viewing documents on small displays. In MM ’08, pages 817-820.
[6] Julian Kupiec, Jan Pedersen, and Francine Chen. A trainable document summarizer. In SIGIR ’95, pages 68-73.
[7] Donato Malerba, Floriana Esposito, Francesca A. Lisi, and Oronzo Altamura. Automated discovery of dependencies between logical components in document image understanding. In ICDAR ’01, pages 174-178.
[8] Antti Oulasvirta, Sakari Tamminen, Virpi Roto, and Jaana Kuorelahti. Interaction in 4-second bursts: The fragmented nature of attentional resources in mobile HCI. In CHI ’05, pages 919-928.
[9] Morgan N. Price, Bill N. Schilit, and Gene Golovchinsky. XLibris: The active reading machine. In CHI ’98, pages 22-23.
[10] Bill N. Schilit, Morgan N. Price, Gene Golovchinsky, Kei Tanaka, and Catherine C. Marshall. As we may read: The reading appliance revolution. Computer, 32(1):65-73, 1999.
[11] Kristin Vadas, Nirmal Patel, Kent Lyons, Thad Starner, and Julie Jacko. Reading on-the-go: A comparison of audio and hand-held displays. In MobileHCI ’06, pages 219-226.
[12] Shengdong Zhao, Pierre Dragicevic, Mark Chignell, Ravin Balakrishnan, and Patrick Baudisch. Earpod: Eyes-free menu selection using touch input and reactive audio feedback. In CHI ’07, pages 1395-1404.

Scott Carter holds a PhD from the University of California, Berkeley. He is a research scientist at FX Palo Alto Laboratory.


Laurent Denoue holds a PhD from the University of Savoie, France. He is a senior research scientist at FX Palo Alto Laboratory.


IJCSI International Journal of Computer Science Issues, Vol. 1, 2009 ISSN (Online): 1694-0784 ISSN (Printed): 1694-0814


Improvement of Text Dependent Speaker Identification System Using Neuro-Genetic Hybrid Algorithm in Office Environmental Conditions

Md. Rabiul Islam1 and Md. Fayzur Rahman2

1 Department of Computer Science & Engineering, Rajshahi University of Engineering & Technology (RUET), Rajshahi-6204, Bangladesh
[email protected]

2 Department of Electrical & Electronic Engineering, Rajshahi University of Engineering & Technology (RUET), Rajshahi-6204, Bangladesh
[email protected]

Abstract
In this paper, an improved strategy for automated text-dependent speaker identification in noisy environments is proposed. The identification process incorporates a Neuro-Genetic hybrid algorithm with cepstral-based features. To remove background noise from the source utterance, a Wiener filter has been used. Different speech pre-processing techniques, such as a start- and end-point detection algorithm, pre-emphasis filtering, frame blocking and windowing, have been used to process the speech utterances. RCC, MFCC, ∆MFCC, ∆∆MFCC, LPC and LPCC have been used to extract the features. After feature extraction, the Neuro-Genetic hybrid algorithm has been used for learning and identification. Features are extracted with different techniques to optimize identification performance. On the VALID speech database, the highest speaker identification rates of 100.00% for the studio environment and 82.33% for office environmental conditions have been achieved in the closed-set text-dependent speaker identification system.
Key words: Bio-informatics, Robust Speaker Identification, Speech Signal Pre-processing, Neuro-Genetic Hybrid Algorithm.

1. Introduction

Biometrics are seen by many researchers as a solution to many of today's user identification and security problems [1]. Speaker identification is one of the most important areas in which biometric techniques can be used. There are various techniques to address the automatic speaker identification problem [2, 3, 4, 5, 6, 7, 8]. Most published work in the areas of speech recognition and speaker recognition focuses on speech under noiseless environments, and few published works address speech under noisy conditions [9, 10, 11, 12]. In some research, different talking styles were used to simulate the speech produced under real stressful talking conditions [13, 14, 15]. Learning systems for speaker identification that employ hybrid strategies can potentially offer significant advantages over single-strategy systems. In the proposed system, a Neuro-Genetic hybrid algorithm with cepstral-based features has been used to improve the performance of text-dependent speaker identification under noisy environments. To extract features from the speech, different feature extraction techniques such as RCC, MFCC, ∆MFCC, ∆∆MFCC, LPC and LPCC have been used to achieve good results. Some of the tasks of this work have been simulated using Matlab-based toolboxes such as the Signal Processing Toolbox, Voicebox and the HMM Toolbox.

2. Paradigm of the Proposed Speaker Identification System

The basic building blocks of the speaker identification system are shown in Fig. 1. The first step is the acquisition of speech utterances from speakers. To remove background noise from the original speech, a Wiener filter has been used. Then a start- and end-point detection algorithm detects the start and end points of each speech utterance, after which the unnecessary parts are removed. Pre-emphasis filtering has been used as a noise reduction technique to increase the amplitude of the input signal at frequencies where the signal-to-noise ratio (SNR) is low. The speech signal is segmented into overlapping frames; the purpose of the overlapping analysis is that each speech sound of the input sequence is approximately centered at some frame. After segmentation, a windowing technique has been applied. Features were extracted from the segmented speech. The

extracted features were then fed to the Neuro-Genetic hybrid techniques for learning and classification.

Fig. 1 Block diagram of the proposed automated speaker identification system.

3. Speech Signal Pre-processing for Speaker Identification

To capture the speech signal, a sampling frequency of 11025 Hz, a sampling resolution of 16 bits, a mono recording channel and the *.wav recorded file format have been used. The speech pre-processing part plays a vital role in the efficiency of learning. After acquisition of the speech utterances, a Wiener filter has been used to remove the background noise from the original speech utterances [16, 17, 18]. A speech end-point detection and silence-removal algorithm has been used to detect the presence of speech and to remove pauses and silences in background noise [19, 20, 21, 22, 23]. To detect word boundaries, the frame energy is computed using the short-term log-energy equation [24]:

  E_i = 10 log Σ_{t=n_i}^{n_i+N−1} s²(t)    (1)

Pre-emphasis has been used to balance the spectrum of voiced sounds that have a steep roll-off in the high-frequency region [25, 26, 27]. The transfer function of the FIR filter in the z-domain is [26]:

  H(z) = 1 − α·z⁻¹,  0 ≤ α ≤ 1    (2)

where α is the pre-emphasis parameter.

Frame blocking has been performed with an overlap of 25% to 75% of the frame size; typically a frame length of 10-30 milliseconds has been used. The purpose of the overlapping analysis is that each speech sound of the input sequence is approximately centered at some frame [28, 29]. Among the different windowing techniques, the Hamming window has been used for this system. The purpose of windowing is to reduce the effect of the spectral artifacts that result from the framing process [30, 31, 32]. The Hamming window can be defined as follows [33]:

  w(n) = 0.54 − 0.46 cos(2πn / (N−1)),  −(N−1)/2 ≤ n ≤ (N−1)/2;  0 otherwise    (3)

4. Speech Parameterization Techniques for Speaker Identification

This stage is very important in an ASIS because the quality of the speaker modeling and pattern matching depends strongly on the quality of the feature extraction methods. For the proposed ASIS, different speech feature extraction methods [34, 35, 36, 37, 38, 39] such as RCC, MFCC, ∆MFCC, ∆∆MFCC, LPC and LPCC have been applied.

5. Training and Testing Model for Speaker Identification

Fig. 2 shows the working process of the neuro-genetic hybrid system [40, 41, 42]. The structure of the multilayer neural network does not matter to the GA as long as the BPN's parameters are mapped correctly to the genes of the chromosome the GA is optimizing. Each gene represents the value of a certain weight in the BPN, and the chromosome is a vector that contains these values such that each weight corresponds to a fixed position in the vector, as shown in Fig. 2. The fitness function is derived from the identification error of the BPN on the set of training samples. The GA searches for parameter values that minimize the fitness function; thus the identification error of the BPN is reduced and the identification rate is maximized [43].

Fig. 2 Learning and recognition model for the Neuro-Genetic hybrid system.
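The pre-processing chain of Section 3 (pre-emphasis, frame blocking with overlap, Hamming windowing, and short-term log energy) can be sketched in a few lines of plain Python. This is an illustrative sketch, not the authors' Matlab code: the pre-emphasis coefficient 0.97 and the 50% overlap are typical values chosen here, since the paper only constrains α to [0, 1] and the overlap to 25-75%; the window index is shifted to n = 0..N−1, which is the same window as the symmetric form.

```python
import math

def preemphasis(signal, alpha=0.97):
    # H(z) = 1 - alpha * z^-1; alpha = 0.97 is a common choice,
    # the paper only states 0 <= alpha <= 1.
    return [signal[0]] + [signal[n] - alpha * signal[n - 1]
                          for n in range(1, len(signal))]

def hamming(N):
    # w(n) = 0.54 - 0.46 cos(2*pi*n / (N - 1)), shifted to n = 0..N-1.
    return [0.54 - 0.46 * math.cos(2 * math.pi * n / (N - 1))
            for n in range(N)]

def frames(signal, frame_len, overlap=0.5):
    # Frame blocking with 25%-75% overlap (here 50%); each frame is
    # multiplied by a Hamming window.
    step = max(1, int(frame_len * (1 - overlap)))
    w = hamming(frame_len)
    out = []
    for start in range(0, len(signal) - frame_len + 1, step):
        f = signal[start:start + frame_len]
        out.append([s * wn for s, wn in zip(f, w)])
    return out

def log_energy(frame):
    # E_i = 10 * log10( sum_t s^2(t) ), used for start/end-point
    # (word-boundary) detection.
    e = sum(s * s for s in frame)
    return 10 * math.log10(e) if e > 0 else float("-inf")
```

A boundary detector would then threshold `log_energy` per frame to separate speech from silence before feature extraction.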


The algorithm for Neuro-Genetic based weight determination and the fitness function [44] is as follows:

Algorithm for Neuro-Genetic weight determination:
{
  i ← 0;
  Generate the initial population P_i of real-coded chromosomes C_i^j, each representing a weight set for the BPN;
  Generate fitness values F_i^j for each C_i^j ∈ P_i using the algorithm FITGEN();
  While the current population P_i has not converged
  {
    Using the crossover mechanism, reproduce offspring from the parent chromosomes and perform mutation on the offspring;
    i ← i + 1;
    Call the current population P_i;
    Calculate fitness values F_i^j for each C_i^j ∈ P_i using the algorithm FITGEN();
  }
  Extract the weights from P_i to be used by the BPN;
}

Algorithm FITGEN():
{
  Let (I_i, T_i), i = 1, 2, …, N, where I_i = (I_1i, I_2i, …, I_li) and T_i = (T_1i, T_2i, …, T_ni), represent the input-output pairs of the problem to be solved by a BPN with configuration l-m-n.
  For each chromosome C_i:
  {
    Extract the weights W_i from C_i;
    Keeping W_i as a fixed weight setting, run the BPN on the N input instances (patterns);
    Calculate the error E_i for each of the input instances using the formula:

      E_i = Σ_j (T_ji − O_ji)²    (3)

    where O_i is the output vector calculated by the BPN;
    Find the root mean square E of the errors E_i, i = 1, 2, …, N, i.e.

      E = √( Σ_i E_i / N )    (4)

    Assign the fitness value F_i = E for this individual string of the population;
  }
  Output F_i for each C_i, i = 1, 2, …, P;
}

There are some critical parameters in the Neuro-Genetic hybrid system (in the BPN: the gain term, speed factor and number of hidden layer nodes; in the GA: the crossover rate and the number of generations) that affect the performance of the proposed system. A trade-off is made to explore the optimal values of these parameters, and experiments are performed using them. The optimal values are chosen carefully to finally determine the identification rate.

6. Optimum Parameter Selection for the BPN and GA

6.1 Parameter Selection for the BPN

6.1.1 Experiment on the Gain Term, η

In the BPN, when during the training session the gain terms were set to η1 = η2 = 0.4, the speed factors to k1 = k2 = 0.20, and the tolerable error rate was fixed at 0.001%, the highest identification rate of 91% was achieved, as shown in Fig. 3.

Fig. 3 Performance measurement according to the gain term.

6.1.2 Experiment on the Speed Factor, k

The performance of the BPN system has been measured according to the speed factor k. We set η1 = η2 = 0.4, and the tolerable error rate was fixed at 0.001%. We studied values of the parameter ranging from 0.1 to 0.5 and found that the highest recognition rate was 93% at k1 = k2 = 0.15, as shown in Fig. 4.
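The scheme above can be made concrete with a minimal runnable sketch. All specifics here are illustrative assumptions: a toy 2-2-1 network without biases, toy training pairs, and simple truncation selection with one-point crossover (the paper does not give its network size, population size or selection details). The chromosome is the flat weight vector, and the FITGEN fitness is the RMS of the per-pattern squared errors, which the GA minimizes.

```python
import math
import random

random.seed(0)

# Toy l-m-n = 2-2-1 network; a chromosome is the flat weight vector.
L_IN, L_HID, L_OUT = 2, 2, 1
N_WEIGHTS = L_IN * L_HID + L_HID * L_OUT   # biases omitted for brevity

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(w, x):
    # Unpack the chromosome into the two layer weight matrices.
    w1 = [w[i * L_IN:(i + 1) * L_IN] for i in range(L_HID)]
    off = L_IN * L_HID
    w2 = [w[off + j * L_HID: off + (j + 1) * L_HID] for j in range(L_OUT)]
    h = [sigmoid(sum(wi * xi for wi, xi in zip(row, x))) for row in w1]
    return [sigmoid(sum(wj * hj for wj, hj in zip(row, h))) for row in w2]

def fitgen(w, pairs):
    # E_i = sum_j (T_ji - O_ji)^2 (Eq. 3); fitness = RMS of E_i (Eq. 4).
    errs = [sum((t - o) ** 2 for t, o in zip(target, forward(w, x)))
            for x, target in pairs]
    return math.sqrt(sum(errs) / len(errs))

def evolve(pairs, pop_size=30, generations=15, mutation=0.3):
    pop = [[random.uniform(-1, 1) for _ in range(N_WEIGHTS)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda c: fitgen(c, pairs))   # minimize RMS error
        parents = pop[:pop_size // 2]              # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_WEIGHTS)   # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mutation:
                child[random.randrange(N_WEIGHTS)] += random.gauss(0, 0.5)
            children.append(child)
        pop = parents + children
    return min(pop, key=lambda c: fitgen(c, pairs))

# Toy training pairs (stand-ins for feature-vector/target pairs).
pairs = [([0.0, 0.0], [0.0]), ([1.0, 1.0], [1.0])]
best = evolve(pairs)
```

In the real system the inputs would be cepstral feature vectors and the targets speaker labels, but the fitness computation and the weight-vector encoding are exactly the FITGEN idea.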


Fig. 4 Performance measurement according to various speed factors.

6.1.3 Experiment on the Number of Nodes in the Hidden Layer, NH

In the learning phase of the BPN, we chose the number of hidden layer nodes in the range from 5 to 40. We set η1 = η2 = 0.4, k1 = k2 = 0.15, and the tolerable error rate was fixed at 0.001%. The highest recognition rate of 94% was achieved at NH = 30, as shown in Fig. 5.

Fig. 5 Results after setting up the number of internal nodes in the BPN.

6.2 Parameter Selection for the GA

To find the optimum values, different parameters of the genetic algorithm were also varied to find the best matching parameters. The results of the experiments are shown below.

6.2.1 Experiment on the Crossover Rate

In this experiment, the crossover rate was varied over the values 1, 2, 5, 7, 8 and 10. The highest speaker identification rate of 93% was found at crossover point 5, as shown in Fig. 6.

Fig. 6 Performance measurement according to the crossover rate.

6.2.2 Experiment on the Number of Generations

Different values of the number of generations were tested to find the optimum. The test results are shown in Fig. 7. The maximum identification rate of 95% was found at 15 generations.

Fig. 7 Accuracy measurement according to the number of generations.

7. Performance Measurement of the Text-Dependent Speaker Identification System

The VALID speech database [45] has been used to measure the performance of the proposed hybrid system. In the learning phase, studio-recorded speech utterances were used to build the reference models, and in the testing phase, speech utterances recorded in four different office conditions were used to measure the performance of the proposed Neuro-Genetic hybrid system. Performance was measured for various cepstral-based features, namely LPC, LPCC, RCC, MFCC, ∆MFCC and ∆∆MFCC, as shown in the following table.

Table 1: Speaker identification rate (%) for the VALID speech corpus

Type of environment                    MFCC     ∆MFCC    ∆∆MFCC   RCC      LPCC
Clean speech utterances                100.00   100.00   98.23    90.43    100.00
Office-environment speech utterances   80.17    82.33    68.89    70.33    76.00

Table 1 shows the overall average speaker identification rate for the VALID speech corpus. From the table it is easy to compare the performance of the MFCC, ∆MFCC, ∆∆MFCC, RCC and LPCC methods for the Neuro-Genetic hybrid algorithm based text-dependent speaker identification system. In the clean speech environment the performance is 100.00% for MFCC, ∆MFCC and LPCC, while the highest identification rate for the four office environments (82.33%) is achieved with ∆MFCC.

8. Conclusion and Observations

The experimental results show the versatility of the Neuro-Genetic hybrid algorithm based text-dependent speaker identification system. The critical parameters, such as the gain term, speed factor, number of hidden layer nodes, crossover rate and number of generations, have a great impact on the recognition performance of the proposed system. The optimum values of these parameters have been selected carefully to obtain the best performance. The highest recognition rates during parameter tuning of the BPN and the GA were 94% and 95%, respectively. On the VALID speech database, a 100% identification rate in the clean environment and 82.33% under office environmental conditions have been achieved with the Neuro-Genetic hybrid system. Therefore, the proposed system can be used for various security and access-control purposes. Finally, the performance of the proposed system could be further evaluated on larger speech databases.

References
[1] A. Jain, R. Bolle, and S. Pankanti, BIOMETRICS: Personal Identification in Networked Society, Kluwer Academic Press, Boston, 1999.
[2] Rabiner, L., and Juang, B.-H., Fundamentals of Speech Recognition, Prentice Hall, Englewood Cliffs, New Jersey, 1993.
[3] Jacobsen, J. D., “Probabilistic Speech Detection”, Informatics and Mathematical Modeling, DTU, 2003.
[4] Jain, A., Duin, R. P. W., and Mao, J., “Statistical pattern recognition: a review”, IEEE Trans. on Pattern Analysis and Machine Intelligence, Vol. 22, 2000, pp. 4-37.
[5] Davis, S., and Mermelstein, P., “Comparison of parametric representations for monosyllabic word recognition in continuously spoken sentences”, IEEE Transactions on Acoustics, Speech, and Signal Processing, Vol. 28, No. 4, 1980, pp. 357-366.
[6] Sadaoki Furui, “50 Years of Progress in Speech and Speaker Recognition Research”, ECTI Transactions on Computer and Information Technology, Vol. 1, No. 2, 2005.

[7] Lockwood, P., Boudy, J., and Blanchet, M., “Non-linear spectral subtraction (NSS) and hidden Markov models for robust speech recognition in car noise environments”, IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 1992, Vol. 1, pp. 265-268.
[8] Matsui, T., and Furui, S., “Comparison of text-independent speaker recognition methods using VQ-distortion and discrete/continuous HMMs”, IEEE Transactions on Speech and Audio Processing, No. 2, 1994, pp. 456-459.
[9] Reynolds, D. A., “Experimental evaluation of features for robust speaker identification”, IEEE Transactions on SAP, Vol. 2, 1994, pp. 639-643.
[10] Sharma, S., Ellis, D., Kajarekar, S., Jain, P., and Hermansky, H., “Feature extraction using non-linear transformation for robust speech recognition on the Aurora database”, in Proc. ICASSP 2000, 2000.
[11] Wu, D., Morris, A. C., and Koreman, J., “MLP Internal Representation as Discriminant Features for Improved Speaker Recognition”, in Proc. NOLISP 2005, Barcelona, Spain, 2005, pp. 25-33.
[12] Konig, Y., Heck, L., Weintraub, M., and Sonmez, K., “Nonlinear discriminant feature extraction for robust text-independent speaker recognition”, in Proc. RLA2C, ESCA Workshop on Speaker Recognition and its Commercial and Forensic Applications, 1998, pp. 72-75.
[13] Ismail Shahin, “Improving Speaker Identification Performance Under the Shouted Talking Condition Using the Second-Order Hidden Markov Models”, EURASIP Journal on Applied Signal Processing, 2005:4, pp. 482-486.
[14] S. E. Bou-Ghazale and J. H. L. Hansen, “A comparative study of traditional and newly proposed features for recognition of speech under stress”, IEEE Trans. Speech and Audio Processing, Vol. 8, No. 4, 2000, pp. 429-442.
[15] G. Zhou, J. H. L. Hansen, and J. F. Kaiser, “Nonlinear feature based classification of speech under stress”, IEEE Trans. Speech and Audio Processing, Vol. 9, No. 3, 2001, pp. 201-216.
[16] Simon Doclo and Marc Moonen, “On the Output SNR of the Speech-Distortion Weighted Multichannel Wiener Filter”, IEEE Signal Processing Letters, Vol. 12, No. 12, 2005.
[17] Wiener, N., Extrapolation, Interpolation and Smoothing of Stationary Time Series with Engineering Applications, Wiley, New York, 1949.
[18] Wiener, N., and Paley, R. E. A. C., “Fourier Transforms in the Complex Domain”, American Mathematical Society, Providence, RI, 1934.
[19] Koji Kitayama, Masataka Goto, Katunobu Itou and Tetsunori Kobayashi, “Speech Starter: Noise-Robust Endpoint Detection by Using Filled Pauses”, Eurospeech 2003, Geneva, pp. 1237-1240.
[20] S. E. Bou-Ghazale and K. Assaleh, “A robust endpoint detection of speech for noisy environments with application to automatic speech recognition”, in Proc. ICASSP 2002, 2002, Vol. 4, pp. 3808-3811.
[21] A. Martin, D. Charlet, and L. Mauuary, “Robust speech/non-speech detection using LDA applied to MFCC”, in Proc. ICASSP 2001, 2001, Vol. 1, pp. 237-240.

[22] Richard O. Duda, Peter E. Hart, and David G. Stork, Pattern Classification, Second Edition, Wiley-Interscience, John Wiley & Sons, Inc., 2001.
[23] Sarma, V., and Venugopal, D., “Studies on pattern recognition approach to voiced-unvoiced-silence classification”, IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '78), 1978, Vol. 3, pp. 1-4.
[24] Qi Li, Jinsong Zheng, Augustine Tsai, and Qiru Zhou, “Robust Endpoint Detection and Energy Normalization for Real-Time Speech and Speaker Recognition”, IEEE Transactions on Speech and Audio Processing, Vol. 10, No. 3, 2002.
[25] Harrington, J., and Cassidy, S., Techniques in Speech Acoustics, Kluwer Academic Publishers, Dordrecht, 1999.
[26] Makhoul, J., “Linear prediction: a tutorial review”, Proceedings of the IEEE, Vol. 63, No. 4, 1975, pp. 561-580.
[27] Picone, J., “Signal modeling techniques in speech recognition”, Proceedings of the IEEE, Vol. 81, No. 9, 1993, pp. 1215-1247.
[28] Claudio Becchetti and Lucio Prina Ricotti, Speech Recognition: Theory and C++ Implementation, John Wiley & Sons Ltd., 1999, pp. 124-136.
[29] L. P. Cordella, P. Foggia, C. Sansone, and M. Vento, “A Real-Time Text-Independent Speaker Identification System”, in Proceedings of the 12th International Conference on Image Analysis and Processing, IEEE Computer Society Press, Mantova, Italy, 2003, pp. 632-637.
[30] J. R. Deller, J. G. Proakis, and J. H. L. Hansen, Discrete-Time Processing of Speech Signals, Macmillan, 1993.
[31] F. Owens, Signal Processing of Speech, Macmillan New Electronics, Macmillan, 1993.
[32] F. Harris, “On the use of windows for harmonic analysis with the discrete Fourier transform”, Proceedings of the IEEE, Vol. 66, No. 1, 1978, pp. 51-84.
[33] J. Proakis and D. Manolakis, Digital Signal Processing: Principles, Algorithms and Applications, Second Edition, Macmillan Publishing Company, New York, 1992.
[34] D. Kewley-Port and Y. Zheng, “Auditory models of formant frequency discrimination for isolated vowels”, Journal of the Acoustical Society of America, 103(3), 1998, pp. 1654-1666.
[35] D. O’Shaughnessy, Speech Communication: Human and Machine, Addison Wesley, 1987.
[36] E. Zwicker, “Subdivision of the audible frequency band into critical bands (Frequenzgruppen)”, Journal of the Acoustical Society of America, 33, 1961, pp. 248-260.
[37] S. Davis and P. Mermelstein, “Comparison of parametric representations for monosyllabic word recognition in continuously spoken sentences”, IEEE Transactions on Acoustics, Speech and Signal Processing, 28, 1980, pp. 357-366.
[38] S. Furui, “Speaker independent isolated word recognition using dynamic features of the speech spectrum”, IEEE Transactions on Acoustics, Speech and Signal Processing, 34, 1986, pp. 52-59.
[39] S. Furui, “Speaker-Dependent-Feature Extraction, Recognition and Processing Techniques”, Speech Communication, Vol. 10, 1991, pp. 505-520.


[40] Siddique, M., and Tokhi, M., “Training Neural Networks: Back Propagation vs. Genetic Algorithms”, in Proceedings of the International Joint Conference on Neural Networks, Washington D.C., USA, 2001, pp. 2673-2678.
[41] Whitley, D., “Applying Genetic Algorithms to Neural Networks Learning”, in Proceedings of the Conference of the Society of Artificial Intelligence and Simulation of Behaviour, England, Pitman Publishing, Sussex, 1989, pp. 137-144.
[42] Whitley, D., Starkweather, T., and Bogart, C., “Genetic Algorithms and Neural Networks: Optimizing Connections and Connectivity”, Parallel Computing, Vol. 14, 1990, pp. 347-361.
[43] Kresimir Delac, Mislav Grgic and Marian Stewart Bartlett, Recent Advances in Face Recognition, I-Tech Education and Publishing KG, Vienna, Austria, 2008, pp. 223-246.
[44] Rajasekaran, S., and Vijayalakshmi Pai, G. A., Neural Networks, Fuzzy Logic, and Genetic Algorithms: Synthesis and Applications, Prentice-Hall of India Private Limited, New Delhi, India, 2003.
[45] N. A. Fox, B. A. O'Mullane and R. B. Reilly, “The Realistic Multi-modal VALID database and Visual Speaker Identification Comparison Experiments”, in Proc. of the 5th International Conference on Audio- and Video-Based Biometric Person Authentication (AVBPA 2005), New York, 2005.

Md. Rabiul Islam was born in Rajshahi, Bangladesh, on December 26, 1981. He received his B.Sc. degree in Computer Science & Engineering and his M.Sc. degree in Electrical & Electronic Engineering in 2004 and 2008, respectively, from the Rajshahi University of Engineering & Technology, Bangladesh. From 2005 to 2008, he was a Lecturer in the Department of Computer Science & Engineering at Rajshahi University of Engineering & Technology. Since 2008, he has been an Assistant Professor in the Computer Science & Engineering Department, Rajshahi University of Engineering & Technology, Bangladesh. His research interests include bio-informatics, human-computer interaction, and speaker identification and authentication under neutral and noisy environments.

Md. Fayzur Rahman was born in 1960 in Thakurgaon, Bangladesh. He received the B.Sc. Engineering degree in Electrical & Electronic Engineering from Rajshahi Engineering College, Bangladesh in 1984 and the M.Tech degree in Industrial Electronics from S. J. College of Engineering, Mysore, India in 1992. He received the Ph.D. degree in energy and environmental electromagnetics from Yeungnam University, South Korea, in 2000. Following his graduation he rejoined his previous post at BIT Rajshahi. He is a Professor of Electrical & Electronic Engineering at Rajshahi University of Engineering & Technology (RUET). He is currently engaged in education in the areas of Electronics & Machine Control and Digital Signal Processing. He is a member of the Institution of Engineers (IEB), Bangladesh, the Korean Institute of Illuminating and Installation Engineers (KIIEE), and the Korean Institute of Electrical Engineers (KIEE), Korea.


MESURE Tool to benchmark Java Card platforms

Samia Bouzefrane1, Julien Cordry1 and Pierre Paradinas2

1 CEDRIC Laboratory, Conservatoire National des Arts et Métiers, 292 rue Saint Martin, 75141 Paris Cédex 03, France
{[email protected]}

2 INRIA, Domaine de Voluceau, Rocquencourt, B.P. 105, 78153 Le Chesnay Cedex, France
{[email protected]}

Abstract The advent of the Java Card standard has been a major turning point in smart card technology. With the growing acceptance of this standard, understanding the performance behavior of these platforms is becoming crucial. To meet this need, we present in this paper a novel benchmarking framework to test and evaluate the performance of Java Card platforms. MESURE tool is the first framework which accuracy and effectiveness are independent from the particular Java Card platform tested and CAD used. Key words: Java Card benchmarking, smart cards.

platforms,

software

testing,

paper, on one hand we propose a general benchmarking solution through different steps that are essential for measuring the performance of the Java Card platforms; on the other hand we validate the obtained measurements from statistical and precision CAD (Card Acceptance Device) points of view. The remainder of this paper is organised as follows. In Section 2, we describe briefly some benchmarking attempts in the smart card area. In Section 3, an overview of the benchmarking framework is given. Section 4 analyses the obtained measurements using first a statistical approach, and then a precision reader, before concluding the paper in Section 5.

1. Introduction

2. Java-Card Benchmarking State of the Art

With more than 5 billion copies in 2008 [2], smart cards are an important device of today’s information society. The development of the Java Card standard made this device even more popular as it provides a secure, vendor-independent, ubiquitous Java platforms for smart cards. It shortens the time-to-market and enables programmers to develop smart card applications for a wide variety of vendors products. In this context, understanding the performance behavior of Java Card platforms is important to the Java Card community (users, smart card manufacturers, card software providers, card users, card integrators, etc.). Currently, there is no solution on the market which makes it possible to evaluate the performance of a smart card that implements Java Card technology. In fact, the programs which realize this type of evaluations are generally proprietary and not available to the whole of the Java Card community. Hence, the only existing and published benchmarks are used within research laboratories (e.g., SCCB project from CEDRIC laboratory [5] or IBM Research [12]). However, benchmarks are important in the smart card area because they contribute in discriminating companies products, especially when the products are standardized. In this

Currently, there is no standard benchmark suite which can be used to demonstrate the use of the Java Card Virtual Machine (JCVM) and to provide metrics for comparing Java Card platforms. In fact, even though numerous benchmarks have been developed around the Java Virtual Machine (JVM), few works attempt to evaluate the performance of smart cards. The first interesting initiative was undertaken by Castellà-Roca et al. in [4], where they study the performance of micro-payment for Java Card platforms, i.e., without PKI (Public Key Infrastructure). Even though they consider Java Card platforms from distinct manufacturers, their tests are not complete, as they mainly involve computing some hash functions on a given input, including the I/O operations. A more recent and complete work was undertaken by Erdmann in [6]. This work mentions different application domains and makes the distinction between I/O, cryptographic functions, the JCRE (Java Card Runtime Environment) and energy consumption. Infineon Technologies is the only provider of the tested cards for the different application domains, and the software itself is not available. The work of Fischer in [7] compares the performance results given by a Java Card applet with the results of the equivalent native application.

IJCSI International Journal of Computer Science Issues, Vol. 1, 2009

Another interesting work was carried out by the IBM BlueZ secure systems group and detailed in a Master's thesis [12]. The JCOP framework was used to perform a series of tests covering the communication overhead, DES performance, and reading and writing operations in the card memory (RAM and EEPROM). Markantonakis in [9] presents some performance comparisons between the two most widely used terminal APIs, namely PC/SC and OCF. Compared with these works, our benchmarking framework not only covers the different functionalities of a Java Card platform but is also provided as a set of open source code freely accessible on-line.

3. General benchmarking framework

3.1 Introduction

Our research work falls under the MESURE project [10], a project funded by the French administration (Agence Nationale de la Recherche), which aims at developing a set of open source tools to measure the performance of Java Card platforms. These benchmarking tools focus on Java Card 2.2 functionalities, even though the Java Card 3.0 specifications have been published since March 2008 [1], principally because there is as yet no Java Card 3.0 platform on the market apart from a few prototypes, such as the one demonstrated by Gemalto during the JavaOne Conference in June 2008. Since Java Card 3.0 comes in two editions, a connected (web-oriented) edition and a classic edition, our measuring tools can be reused to benchmark Java Card 3.0 classic edition platforms.

3.2 Addressed issues

Only features related to the normal use phase of Java Card applications are considered here. Excluded features include installing, personalizing or deleting an application, since they are of lesser importance from the user's point of view and are performed only once. Hence, the benchmark framework enables performance evaluation at three levels:
– The VM level: to measure the execution time of the various instructions of the virtual machine (basic instructions), as well as the subjacent mechanisms of the virtual machine (e.g., reading and writing the memory).
– The API level: to evaluate the functioning of the services proposed by the libraries available in the embedded system (various methods of the API, namely those of Java Card and GlobalPlatform).
– The JCRE (Java Card Runtime Environment) level: to evaluate the non-functional services, such as transaction management, method invocation in the applets, etc.

We do not address features like the I/Os or the power consumption, because their measurability raises several problems:
– For a given smart card, distinct card readers may provide different I/O measurements.
– Each part of an APDU is managed differently by a smart card reader. The five-byte header is read first, and the following data can be transmitted in several ways: one acknowledge per byte or not, a delay or not before sending the status word, etc.
– The smart card driver used by the workstation generally induces more delay in the measurement than the smart card reader itself.

3.3 The benchmarking overview

The set of tests supplied to benchmark Java Card platforms is available to anybody and supported by any card reader. The various tests thus have to return accurate results, even if they are not executed on precision readers. We reach this goal by removing the potential card reader weaknesses (in terms of delay, variance and predictability) and by controlling the noise generated by the measurement equipment (the card reader and the workstation).

Removing the noise added to a specific measurement can be done by computing an average value extracted from multiple samples. As a consequence, it is important, on the one hand, to perform each test several times and to use basic statistical calculations to filter out untrustworthy results. On the other hand, it is necessary to execute the operation to be measured several times within each test, in order to fix a minimal duration for the tests (> 1 second) and thus obtain precise results.

We defined a set of modules as part of the benchmarking framework. The benchmarks have been developed under the Eclipse environment based on JDK 1.6, with JSR 268 [13], which extends the Java Standard Edition with a package defining methods for Java classes to interact with a smart card. According to the ISO 7816 standard, a smart card has no internal clock, so we are obliged to measure the time a Java Card platform takes to answer an APDU command, and to use that measure to deduce the execution time of the operations of interest.

The benchmarking development tool covers two parts, as described in Figure 1: the script part and the applet part. The script part, entirely written in Java, defines an abstract class that is used as a template to derive test cases characterized by relevant measuring parameters such as the operation type to measure, the number of loops, etc. A method run() is executed in each script to interact with the corresponding test case within the applet. Similarly, on the card an abstract class defines three methods:
– a method setUp() to perform any memory allocation needed during the lifetime of the test case,
– a method run() used to launch the tests corresponding to the test case of interest, and
– a method cleanUp() used after the test is done to perform any clean-up.
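As an illustration, the host-side script can build such a test-case command APDU with the javax.smartcardio classes introduced by JSR 268. The CLA/INS values below are hypothetical stand-ins for a real test-case applet, and P2 carries the loop parameter:

```java
import javax.smartcardio.CommandAPDU;

public class ApduBuild {
    // Builds the case-1 command APDU selecting a test case. CLA and INS are
    // hypothetical values; P2 carries the loop parameter p, the on-card loop
    // size being L = p * p as described in Section 3.4.
    static CommandAPDU buildTestApdu(int cla, int ins, int p2) {
        return new CommandAPDU(cla, ins, 0x00, p2);  // P1 unused here
    }

    public static void main(String[] args) {
        CommandAPDU cmd = buildTestApdu(0x00, 0x02, 0x14);
        System.out.println(cmd.getBytes().length);      // 4-byte header, no body
        System.out.println(cmd.getP2() * cmd.getP2());  // on-card loop size L
    }
}
```

With a real reader, the script would obtain a CardChannel from javax.smartcardio.TerminalFactory, wrap channel.transmit(cmd) between two System.nanoTime() calls, and repeat the exchange Y times to collect the measurement samples.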


Fig. 1 The script part and the applet part

3.4 Modules

In this section, we describe the general benchmark framework (see Figure 2) that has been designed to achieve the MESURE goal. The methodology consists of several steps. The objective of the first step is to find the optimal parameters needed to carry out the tests correctly. The tests cover the Virtual Machine (VM) operations and the API methods. The obtained results are filtered by eliminating non-relevant measurements, and values are isolated by setting aside measurement noise. A profiler module is used to assign a mark to each benchmark type, hence allowing us to establish a performance index for each smart card profile used. In the following subsections, we detail every module composing the framework.

The bulk of the benchmark consists in performing execution time measurements when we send APDU commands from the computer through the CAD to the card. Each test (through the run method) is performed within the card a certain number of times (Y) to ensure the reliability of the collected execution times, and within each run method we perform a certain number of loops (L). The loop parameter is coded on the byte P2 of the APDU commands sent to the on-card applications; the size of the loop performed on the card is L = (P2)², since L is too large to be represented in one byte.

Fig. 2 Overall Architecture

The Calibrate Module: computes the optimal parameters (such as the number of loops) needed to obtain measurements of a given precision.

Benchmarking the various byte-codes and API entries takes time. At the same time, it is necessary to be precise enough when measuring those execution times. Furthermore, the end user of such a benchmark should be allowed to focus on a few key elements with a higher degree of precision. It is therefore necessary to devise a tool that lets us decide on the most appropriate parameters for the measurement. Figure 3 depicts the evolution of the raw measurement, as well as its standard deviation, as we take 30 measurements for each available loop size of a test applet. As we can see, the measured execution time of an applet grows linearly with the number of loop iterations performed on the card (L). On the other hand, the perceived standard deviation of the different measurements varies randomly as the loop size increases, though with fewer and fewer peaks. Since a bigger loop size means a relatively more stable standard deviation, we use both the standard deviation and the mean measured execution time as a basis to assess the precision of the measurement. To assess the reliability of the measurements, we compare the value of the measurement with the standard deviation. The end user needs to specify this ratio between the average measurement and the standard deviation, as well as an optional minimum accepted value, set at one second by default. The ratio refers to the precision of the tests, while the minimal accepted value is the minimum duration of each test. Hence, with both the ratio and the minimal accepted value specified by the end user, we can test different values for the loop size, binary searching towards the ideal value.
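The loop-size search just described can be sketched as follows. The per-iteration cost model and the method names are made up for illustration, and the real calibrate module also checks the mean-to-standard-deviation ratio, which this sketch omits:

```java
import java.util.function.IntToDoubleFunction;

public class Calibrate {
    // Binary-searches the smallest loop parameter p (on-card loop size L = p*p)
    // whose mean measured execution time reaches minMillis. Because the measured
    // time grows monotonically with p, a lower-bound binary search applies.
    static int calibrate(IntToDoubleFunction meanMillis, double minMillis) {
        int lo = 1, hi = 255;                  // p must fit in the APDU byte P2
        while (lo < hi) {
            int mid = (lo + hi) / 2;
            if (meanMillis.applyAsDouble(mid) >= minMillis) hi = mid;
            else lo = mid + 1;
        }
        return lo;
    }

    public static void main(String[] args) {
        // Toy cost model standing in for real card runs: 0.05 ms per iteration.
        int p = calibrate(q -> 0.05 * q * q, 1000.0);  // target: at least 1 second
        System.out.println(p);
    }
}
```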

Fig. 3 Raw measurements and Standard deviation

The Bench Module: for a number of cycles defined by the calibrate module, the bench module computes the execution time for:
– the VM byte-codes,
– the API methods,
– the JCRE mechanisms (such as transactions).

The Filter Module: experimental errors lead to noise in the raw measurements. This noise leads to imprecision in the measured values, making it difficult to interpret the results. In the smart card context, the noise is due to crossing the platform, the CAD and the terminal (measurement tools, operating system, hardware). The issues become: how to interpret the varying values, and how to compare platforms when there is noise in the results. The filter module uses a statistical design to extract meaningful information from noisy data. From multiple measurements of a given operation, the filter module uses the mean value µ of the set of measurements to estimate the actual value, and the standard deviation σ of the measurements to quantify their spread around the mean. Moreover, provided the measurements respect the normal (Gaussian) distribution, a confidence interval [µ − (n × σ), µ + (n × σ)], within which the confidence level is 1 − a, is used to eliminate the measurements outside the interval, where n and a are respectively the number of measurements and the temporal precision, related by traditional statistical laws.

The Extractor Module: is used to isolate the execution time of the features of interest among the mass of raw measurements gathered so far. Benchmarking byte-codes and API methods within Java Card platforms requires some subtle means in order to obtain execution results that reflect as accurately as possible the actual isolated execution time of the feature of interest. This is because there exists a significant and non-predictable lapse of time between the beginning of the measure, characterized by the starting of the timer on the computer, and the actual execution of the byte-code of interest, and likewise the other way around. Indeed, when performing a request on the card, the execution call has to travel through several software and hardware layers down to the card's hardware and up to the card's VM (and vice versa upon response).
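The filtering step described above can be sketched as follows, assuming the measurements are already collected; the cut-off factor n = 1.5 is an illustrative choice:

```java
import java.util.Arrays;

public class Filter {
    // Drops measurements outside [mu - n*sigma, mu + n*sigma], mirroring the
    // filter module's confidence-interval rejection.
    static double[] filter(double[] xs, double n) {
        double mu = Arrays.stream(xs).average().orElse(0.0);
        double variance = Arrays.stream(xs)
                                .map(x -> (x - mu) * (x - mu))
                                .average().orElse(0.0);
        double sigma = Math.sqrt(variance);
        return Arrays.stream(xs)
                     .filter(x -> Math.abs(x - mu) <= n * sigma)
                     .toArray();
    }

    public static void main(String[] args) {
        double[] raw = {10.1, 10.2, 9.9, 10.0, 30.0};  // 30.0 is a noise spike
        System.out.println(filter(raw, 1.5).length);    // the spike is rejected
    }
}
```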
This non-predictability depends mainly on the hardware characteristics of the benchmark environment (the CAD, the PC's hardware, etc.), on operating-system-level interference and services, and on the PC's VM. To minimize the effect of these interferences, we need to isolate the execution time of the features of interest, while ensuring that their execution time is long enough to be measurable. Maximizing the byte-code execution time requires a test applet structure with a loop having a large upper bound, which executes the byte-codes for a substantial amount of time. On the other hand, to achieve execution time isolation, we need to compute the isolated execution time of any auxiliary byte-code upon which the byte-code of interest depends. For example, if sadd is the byte-code of interest, then the byte-codes that need to be executed prior to it are those in charge of loading its operands onto the stack, like two sspush. Thereafter we subtract the execution time of an empty loop and the execution time of the auxiliary byte-codes from that of the byte-code of interest (opn in Table 1) to obtain its isolated execution time. As presented in Table 1, the actual test is performed within a method run to ensure that the stack is freed after each invocation, thus guaranteeing memory availability.

Table 1: The framework for a byte-code opn

  Java Card Applet:
    process() {
      i = 0
      while i <= L do {
        run()
        i = i + 1
      }
    }

  Test Case:
    run() {
      op0
      op1
      ...
      opn-1
      opn
    }

In Table 1:
– L represents the chosen upper bound,
– opn represents the byte-code of interest,
– opi for i ∈ [0..n-1] represents the auxiliary byte-codes necessary to perform the byte-code opn.

To compute the mean isolated execution time of opn we perform the following calculation:

M(opn) = [ mL(opn) − mL(Emptyloop) ] / L − Σ(i=0..n−1) M(opi)     (1)

Where:
– M(opi) is the mean isolated execution time of the byte-code opi.
– mL(opn) is the mean global execution time of the byte-code opn, including interference from other operations performed during the measurement, both on the card and on the computer, with respect to a loop size L. These other operations represent, for example, auxiliary byte-codes needed to execute the byte-code of interest, or OS- and JVM-specific operations. The mean is computed over a significant number of tests. It is the only value that is experimentally measured.
– Emptyloop represents the execution of a case where the run method does nothing.

Formula (1) implies that prior to computing M(opn) we need to compute M(opi) for i ∈ [0..n-1].

The Profiler Module: in order to define performance references, our framework provides measurements that are specifically adapted to one of the following application domains:


– banking applications,
– transport applications, and
– identity applications.

A JCVM is instrumented in order to count the different operations performed during the execution of a script for a given application. More precisely, this virtual machine is a simulated, proprietary VM executing on a workstation. This instrumentation method is rather simple to implement compared with static-analysis-based methods, and can reach a good level of precision, but it requires a detailed knowledge of the applications and of their most significant scripts. Some features related to byte-codes and API methods turned out to be necessary, and the simulator was instrumented to provide useful information such as:
– for the API methods:
  • the types and values of method parameters,
  • the length of arrays passed as parameters;
– for the byte-codes:
  • the type and length of arrays for array-related byte-codes (load, astore, arraylength),
  • the transaction status when invoking the byte-code.

A simple utility tool has been developed to parse the log files generated by the instrumented JCVM; it builds a human-readable tree of method invocations and byte-code usage. Thus, with the data obtained from the instrumented VM, we attribute to each application domain a number that represents the performance of some representative applets of the domain on the tested card. Each of these numbers is then used to compute a global performance mark. We use weighted means for each domain-dependent mark. The weights are computed by monitoring how much each Java Card feature is used within a regular use of standard applets for a given domain. For instance, if we want to test the card for use in transport applications, we use the statistics gathered with a set of representative transport applets to evaluate the impact of each feature of the card. In the following, we consider the measure of a feature f on a card c for an application domain d.
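The occurrence counting performed on the instrumented JCVM's logs can be sketched as follows; the one-event-per-line log format is a hypothetical stand-in for the real log files:

```java
import java.util.Map;
import java.util.TreeMap;

public class LogCount {
    // Tallies feature occurrences (the beta counts used by the profiler) from
    // instrumented-JCVM log lines, one executed feature per line (hypothetical
    // format).
    static Map<String, Integer> countFeatures(String[] logLines) {
        Map<String, Integer> counts = new TreeMap<>();
        for (String line : logLines)
            counts.merge(line.trim(), 1, Integer::sum);
        return counts;
    }

    public static void main(String[] args) {
        String[] log = {"sadd", "sspush", "sspush", "sadd", "arraylength"};
        System.out.println(countFeatures(log));
        // prints {arraylength=1, sadd=2, sspush=2}
    }
}
```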
For a set of nM extracted measurements M1(c,f), …, MnM(c,f) considered significant for the feature f, we can determine a mean Mc,f modelling the performance of the platform for this feature. Given nC cards for which the feature f was measured, it is necessary to determine the reference mean execution time Rf, which then serves as the basis of comparison for all subsequent tests. Hence the "mark" Nc,f of a card c for a feature f is the ratio between Rf and Mc,f:

Nc,f = Rf / Mc,f     (2)


However, this mark is not weighted. For each pair of a feature f and an application domain d, we associate a coefficient αf,d, which models the importance of f in d. The more a feature is used within typical applications of the domain, the bigger the coefficient:

αf,d = βf,d / Σ(i=1..nF) βi,d     (3)

where:
– βf,d is the total number of occurrences of the feature f in typical applications of the domain d,
– nF is the total number of features involved in the test.

Therefore, the coefficient αf,d represents the proportion of occurrences of the feature of interest f among all the features. Hence, given a feature f, a card c and a domain d, the "weighted mark" Wc,f,d is computed as follows:

Wc,f,d = Nc,f × αf,d     (4)

The "global mark" Pc,d for a card c and a domain d is then the sum of all the weighted marks for the card. A general, domain-independent mark for a card is computed as the mean of all the domain-dependent marks. Figure 4 shows some significant byte-code marks computed for a card and compared to the reference tests for the financial domain, whereas Figure 5 shows the global results obtained for a tested card. Based on the results in Figure 5, our tested card seems best suited to financial use.

Fig. 4 An example of a financial-dependent mark

Fig. 5 Computing a global performance mark
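A minimal sketch of the mark computation of formulas (2)-(4); the reference times, card times and occurrence counts below are made-up illustrative values, not real measurements:

```java
public class Marks {
    // Implements formulas (2)-(4): N = R/M, alpha = beta/sum(beta),
    // W = N*alpha, and the global mark P as the sum of the weighted marks.
    static double globalMark(double[] refTime, double[] cardTime, double[] beta) {
        double betaSum = 0;
        for (double b : beta) betaSum += b;

        double global = 0;                         // P_c,d
        for (int f = 0; f < refTime.length; f++) {
            double mark = refTime[f] / cardTime[f];  // (2) mark N_c,f
            double alpha = beta[f] / betaSum;        // (3) weight alpha_f,d
            global += mark * alpha;                  // (4) weighted mark W_c,f,d
        }
        return global;
    }

    public static void main(String[] args) {
        double[] ref  = {100.0, 50.0, 200.0};  // R_f: reference mean times
        double[] card = {80.0, 100.0, 100.0};  // M_c,f: tested card mean times
        double[] beta = {60.0, 20.0, 20.0};    // beta_f,d: occurrences in domain d
        System.out.printf("%.2f%n", globalMark(ref, card, beta));
    }
}
```

A mark above 1 for a feature means the card is faster than the reference on that feature; the weighting then rewards cards that are fast on the features the domain actually uses.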

4. Validation of the tests

4.1 Statistical correctness of the measurements

The expected distribution of any measurement is a normal distribution. The results being time values, if the distribution is normal then, according to Lilja [8], the arithmetic mean is an acceptable representative time value for a set of measurements (Lilja recommends at least 30 measurements). Nevertheless, Rehioui [12] pointed out that the results obtained via methods similar to ours were not normally distributed on IBM JCOP41 cards, and Erdmann [6] reported similar problems with Infineon smart cards. When we measured both the reference test and the operation test on several smart cards from different providers, using different CADs and different OSs, none of the time performances had a normal distribution (see Figure 6 for a sample reference test performed on a card). The results were similar from one card to another in terms of distribution, even for different time values and different loop sizes. Changes in the CAD, in the host-side JVM, or in task priority made no difference to the experimental distribution curve.

Testing the cards on Linux versus Windows XP or Windows Vista, on the other hand, did show differences. Indeed, the recurring factor when measuring the performances with a terminal running Linux with PC/SC Lite and a CCID driver is the gap between peaks of the distribution. The peaks are often separated by 400 ms and 100 ms steps, which match some parts of the public code of PC/SC Lite and the CCID driver. With other CADs, the distribution shows similar steps with respect to the CAD driver source code. The peaks in the distributions obtained on Windows are separated by 0.2 ms steps (see Figure 7). Without access to the source code of either the PC/SC implementation on Windows or the drivers, we can only deduce that there must be some similarities between the proprietary versions and the open source versions.

In order to check the normality of the results, we isolated some of the peaks of the distributions obtained in our measurements and used the resulting data sets. The Shapiro-Wilk test is a well-established statistical test used to verify the null hypothesis that a sample of data comes from a normally distributed population. The result of such a test is a number W ∈ [0, 1], with W close to 1 when the data is normally distributed. No set of values obtained by isolating a peak within a distribution gave us a satisfying W close to 1. For instance, considering the peak in Figure 8, W = 0.8442, which is the highest value of W that we observed, with other values ranging as low as W = 0.1384. We conclude that the measurements we obtain, even when restricted to a peak of the distribution, are not normally distributed.

4.2 Validation through a precision CAD

We used a Micropross MP300 TC1 reader to verify the accuracy of our measurements. This smart card test platform is designed specifically to give accurate results, most particularly in terms of time analysis; its results are seemingly unaffected by noise on the host machine. With this test platform, we can precisely monitor the polarity changes on the contacts of the smart card that mark the I/Os. We measured the time needed by a given smart card to reply to the same APDUs that we used with a regular CAD. We then tested the measured time values using the Shapiro-Wilk test and observed W ≥ 0.96, much closer to what we expected in the first place, so we can assume that the values are normally distributed for both the operation measurement and the reference measurement. We subtracted each reference measurement value from each sadd operation measurement value and divided by the loop size, to get a set of time values that represents the performance of an isolated sadd byte-code. Those new time values are normally distributed as well (W = 0.9522). On the resulting set, the arithmetic mean is 10611.57 ns and the standard deviation is 16.19524. According to [8], since we are dealing with a normal distribution, this arithmetic mean is an appropriate evaluation of the time needed to perform a sadd byte-code on this smart card.

Using a more traditional CAD (here a Cardmann 4040, but we tried five different CADs), we performed 1000 measurements of the sadd operation test and 1000 measurements of the corresponding reference test. By subtracting each value obtained with the reference test from each value of the sadd operation test, and dividing by the loop size, we produced a new set of 1000000 time values. This new set has an arithmetic mean of 10260.65 ns and a standard deviation of 52.46025. The value found with a regular CAD under Linux, without priority modification, is just 3.42% away from the more accurate value found with the precision reader. Although this set of measurements is not normally distributed (W = 0.2432), the arithmetic mean of our experimental noisy measurements is a good approximation of the actual time it takes for this smart card to perform a sadd. The same test under Windows Vista gave a mean time of 11380.83 ns with a standard deviation of 100.7473, that is, 7.24% away from the accurate value. We deduce that, despite noisy and faulty data and a potentially very noisy test environment, our time measurements retain a certain accuracy and precision.
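The cross-subtraction used above to isolate the sadd time can be sketched as follows, with made-up nanosecond values in place of the real 1000-sample sets:

```java
public class Isolate {
    // Cross-subtracts every reference-loop time from every operation-loop time
    // and divides by the loop size L, as in Section 4.2, then returns the mean
    // of the resulting (op.length * ref.length)-element set.
    static double meanIsolated(double[] op, double[] ref, int L) {
        double sum = 0;
        for (double o : op)
            for (double r : ref)
                sum += (o - r) / L;
        return sum / (op.length * ref.length);
    }

    public static void main(String[] args) {
        double[] op  = {4200.0, 4240.0};  // loop containing the sadd byte-codes (ns)
        double[] ref = {2000.0, 2040.0};  // empty reference loop (ns)
        System.out.printf("%.1f ns%n", meanIsolated(op, ref, 200));
    }
}
```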

5. Conclusion With the wide use of Java in smart card technology, there is a need to evaluate the performance and characteristics of these platforms in order to ascertain whether they fit the requirements of the different application domains. For the time being, there is no other open source benchmark solution for Java Card. The objective of our project [10] is to satisfy this need by providing a set of freely available tools, which, in the long term, will be used as a benchmark standard. In this paper, we have presented the overall benchmarking framework. Despite the noise, our framework achieves some degree of accuracy and precision. Our benchmarking framework does not need a costly reader to accurately evaluate the performance of a smart card. Java Card 3.0 is a new step forward for this community. Our framework should still be relevant to the classic edition of this platform, but we have yet to test it.


Fig. 6 Measurements of a reference test as the tests proceed under Linux, and the corresponding distribution curve (L = 412)

Fig. 8 Distribution of the measurements of a reference test: a close-up look at a peak in the distribution (L = 412)

Fig. 7 Distribution of sadd operation measurements using Windows Vista, and a close-up look at the distribution (L = 902)

References
[1] Java Card 3.0 specification, March 2008.
[2] Pierrick Arlot. Le marché de la carte à puce ne connaît pas la crise. Technical report, Electronique International, 2008.
[3] Zhiqun Chen. Java Card Technology for Smart Cards: Architecture and Programmer's Guide. Addison Wesley, 2000.
[4] Jordy Castellà-Roca, Josep Domingo-Ferrer, Jordi Herrera-Joancomartí, and Jordi Planes. A performance comparison of Java Cards for micropayment implementation. In CARDIS, pages 19-38, 2000.
[5] Jean-Michel Douin, Pierre Paradinas, and Cédric Pradel. Open Benchmark for Java Card Technology. In e-Smart Conference, September 2004.
[6] Monika Erdmann. Benchmarking von Java Cards. Master's thesis, Institut für Informatik der Ludwig-Maximilians-Universität München, 2004.
[7] Mario Fischer. Vergleich von Java und native-chipkarten toolchains, benchmarking, messumgebung. Master's thesis, Institut für Informatik der Ludwig-Maximilians-Universität München, 2006.
[8] David J. Lilja. Measuring Computer Performance: A Practitioner's Guide. Cambridge University Press, 2000.
[9] Constantinos Markantonakis. Is the performance of smart card cryptographic functions the real bottleneck? In 16th International Conference on Information Security: Trusted Information: The New Decade Challenge, volume 193, pages 77-91. Kluwer, 2001.
[10] The MESURE project website. http://mesure.gforge.inria.fr
[11] Pierre Paradinas, Samia Bouzefrane, and Julien Cordry. Performance evaluation of Java Card bytecodes. In Workshop in Information Security Theory and Practices (WISTP), Springer, Heraklion, Greece, 2007.
[12] Karima Rehioui. Java Card Performance Test Framework. Université de Nice Sophia-Antipolis, IBM Research internship, September 2005.
[13] JSR 268: http://jcp.org/en/jsr/detail?id=268


Samia Bouzefrane is an associate professor at the CNAM (Conservatoire National des Arts et Métiers) in Paris. She received her Ph.D. in Computer Science in 1998 from the University of Poitiers (France). She joined the CEDRIC Laboratory of CNAM in September 2002, after four years at the University of Le Havre. After extensive research work on real-time systems, she is now interested in the smart card area. Furthermore, she is the author of two books: a French/English/Berber dictionary (1996) and a book on operating systems (2003). Currently, she is a member of ACM-SIGOPS, France Chapter.

Julien Cordry is a PhD student in the SEMpIA team (embedded and mobile systems towards ambient intelligence) of the CNAM in Paris. The topic of his research is the performance evaluation of Java Card platforms. He took part in the MESURE project, a collaborative work between the CNAM, the University of Lille and Trusted Labs. He gives lectures at the CNAM, at the ECE (Ecole Centrale d'Electronique) and at the EPITA (a computer science engineering school). The MESURE project received in September 2007 the Isabelle Attali Award from INRIA, which honours the most innovative work presented during the e-Smart conference.

Pierre Paradinas is currently the Technology Development Director at INRIA, France. He is also Professor at CNAM (Paris), where he manages the chair of Embedded and Mobile Systems. He received a PhD in Computer Science from the University of Lille (France) in 1988 on smart cards and health applications. He joined Gemplus in 1989 and was successively researcher, internal technology auditor, and Advanced Product Manager when he launched the card based on a database engine (CQL), then Director of a common research lab with universities and a National Research Center (RD2P). He set up the Gemplus Software Research Lab in 1996. He was also appointed Technology Partnership Director in 2001, based in California until June 2003. He was the Gemplus representative at W3C, ISO/AFNOR, the OpenCard Framework and the Java Community Process, co-editor of part 7 of ISO 7816, and Director of the European-funded Cascade project, in which the first 32-bit RISC microprocessor with Java Card was issued.

