Using Lightweight Formal Methods in User Interface Verification

Izzat Alsmadi, Kenneth Magel
Department of Computer Science
North Dakota State University
{izzat.alsmadi, kenneth.magel}@ndsu.edu

Abstract

Formal methods are usually out of scope for industry GUI designers: they are difficult to implement and require a long time to learn and apply. A user interface design evolves several times through the lifetime of a software project, which makes it ineffective to use formal methods to verify the correctness of the design, especially in a non-critical environment. A further factor that discourages designers from using formal methods in GUI design is that GUI specifications are not easy to describe or prove formally. On the other hand, the space of GUI components and states is too large to be adequately covered by testing. If we can dynamically (i.e., automatically, by a tool) verify some properties of the GUI model, this can reduce the resources required for testing without demanding extra time or resources to learn or implement this lightweight formal process. This paper introduces a process to dynamically define and verify some GUI properties and test cases. The GUI model used in the technique is derived from the implementation. Test cases are created dynamically, and formal methods are used to describe their expected results. Those results are then compared with the actual output of the execution process.

Keywords: Model-based user interface verification, interface model, GUI specification, software verification, formal methods.

1. Introduction

Model verification means verifying that the designed model matches or complies with its requirements or specifications. User interface verification means verifying that all GUI widgets are in their expected states. If you copy a certain text in a text editor application, the widget state change should be reflected in the paste control (which becomes enabled) and in the clipboard, which saves the copied text. Formal verification is accomplished by making assertions about the design with some expected results, by formulating properties based on the specification of the system, or by using mathematical and/or logical rules to prove that the design meets these properties. Verification can support testing and save the effort of exhaustively testing the implementation. As is often stated, testing can only show the presence of errors, not their absence.

Interface designers usually use informal or ad hoc techniques such as mock-ups or prototypes for defining or describing the user interface. In some cases these are incomplete and/or vague and cause developers and users to interpret them differently. It is resource consuming and infeasible to adequately test the user interface, and it is very difficult and expensive to automate GUI testing. Bit-by-bit screen verification has long been abandoned. Modern development languages allow us to extract and interrogate GUI information from executables. GUI test automation is an extensive process that faces challenges in test case creation, execution, and verification. Verifying some properties can support and complement the testing process and provide another channel to improve overall testing coverage. Neither process can or should claim to completely prove the correctness of the user interface.
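To make the copy/paste example concrete, a verification assertion over widget states might look like the following minimal Python sketch. The widget-state model and names here are hypothetical, for illustration only; they are not the paper's tool.

```python
# Minimal sketch of a widget-state assertion (hypothetical model, not the paper's tool).

def copy_text(state, text):
    """Simulate the 'copy' event: the clipboard holds the text and Paste becomes enabled."""
    state["clipboard"] = text
    state["paste_enabled"] = True
    return state

# Initial GUI state: nothing copied yet, so Paste is disabled.
gui_state = {"clipboard": None, "paste_enabled": False}

gui_state = copy_text(gui_state, "hello")

# Verification assertions: after a copy, the paste control must be enabled
# and the clipboard must hold the copied text.
assert gui_state["paste_enabled"], "Paste control should be enabled after copy"
assert gui_state["clipboard"] == "hello", "Clipboard should hold the copied text"
```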

2. Related Work

In order to verify requirements, they have to be formally described. If we search through software project specifications for GUI specifications, we find them stated as general guidelines (rather than detailed specifications), such as workflow, window, common action, button, pop-up menu, drag-and-drop, item selection, layout, and dialogue guidelines, none of which are formally described.

There are several research efforts on automatic GUI generation from specifications, using formally specified GUI requirements or a GUI specification language [7] [8] [9] [10] [11] [18] [19] [20] [21] [22]. The general problem with adopting those approaches in GUI design is that they require a relatively long learning curve that does not fit the unstable, continuously evolving user interface environment. Companies tend to prefer paying more for testing over investing those resources in formal GUI verification.

Qing et al. [6] discussed formally describing GUI events for the automatic generation of test oracles. To the current state-event-next state concept described in that paper, we added the idea of dynamically deriving the name of the next state from the names of the current state and the event. This makes it possible to automate the creation of the event flow graph and of the expected test case results. This assumption involves some abstraction, in that we assume the same event on the same state always causes a transition to the same next state. For example, we assume that triggering the event "save" on an open document causes a transition to another state (e.g., a document is saved in a certain location with a certain name). The name or the destination can differ, but we assume the final state is the same. We can use the initial state information to verify the next-state results.

Vieira et al. [1] used UML models for test case generation and verification. Beer et al. [2] described the IDATG (Integrating Design and Automated Test Case Generation) environment for GUI testing. IDATG supports the generation of test cases based on a model that describes a specific user task and a model that captures the user interface behavior. The Goals, Operators, Methods, and Selection rules (GOMS) model is used for user interface design and evaluation [14, 15, 16]. GOMS analyzes the user complexity of interactive systems and models user behavior. The GOMS approach has several limitations. Its most significant downside is that its predictions are only valid for expert users who do not make any errors. GOMS does not take into account novices who are just learning the system, or intermediate users who make occasional errors. Since one of the goals of a GUI is maximum usability for all users, especially novices, this is a serious deficiency in the model [17].

3. Goals and Approaches

In order to verify the model against the specification, we need to describe the specification clearly and as formally as we can. How can we describe GUI specifications in a formal way that can be verified?

We developed a tool for the automatic generation and execution of GUI test cases [12] [13]. The tool generates a GUI tree model from the implementation using reflection. Test cases are generated using several algorithms that consider coverage of the tree paths. The goal is to test each path in the generated tree (i.e., branch coverage). Execution is accomplished by running APIs that simulate user actions. The execution process takes the dynamically generated test cases as input and produces a log indicating which events were successfully executed. Verification is accomplished by comparing the generated and executed test suites.
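As an illustration of the path-coverage idea (a sketch of the concept, not the tool's actual algorithms, which are described in [12] [13]), the following Python fragment enumerates all root-to-leaf paths in a GUI tree represented as nested dictionaries. The tree fragment is a hypothetical slice of the Notepad control hierarchy.

```python
# Sketch: enumerate root-to-leaf paths of a GUI tree for branch coverage.

def all_paths(tree, prefix=()):
    """Yield every root-to-leaf control sequence in the GUI tree."""
    for control, children in tree.items():
        path = prefix + (control,)
        if children:                      # internal node: recurse into children
            yield from all_paths(children, path)
        else:                             # leaf control: a complete test path
            yield path

# Hypothetical fragment of the Notepad control hierarchy.
gui_tree = {
    "NOTEPADMAIN": {
        "FILE": {"SAVE": {}, "OPEN": {}},
        "EDIT": {"COPY": {}, "PASTE": {}},
    }
}

for path in all_paths(gui_tree):
    print(",".join(path))
# NOTEPADMAIN,FILE,SAVE
# NOTEPADMAIN,FILE,OPEN
# NOTEPADMAIN,EDIT,COPY
# NOTEPADMAIN,EDIT,PASTE
```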

Model checkers

Model checkers are decision procedures for temporal propositional logic. They are particularly suited to showing properties of finite state-transition systems [23]. The Labeled Transition System Analyzer (LTSA) is a tool for modeling and analyzing the behavior of concurrent systems. The tool is being developed to meet the need for a lightweight, accessible, and interactive tool targeted at behavioral analysis of software architectures [25]. One problem with using LTSA for GUI modeling is that it can deal with only a limited number of states, whereas GUI applications usually have a very large number of possible states. We intend to use and extend LTSA features as part of the dynamically generated algorithms in a GUI test automation framework.

Using LTSA, we can define requirement properties to be checked for correctness and safety in the GUI model generated from the implementation. Verifying the implementation model rather than the design model is expected to expose different issues. While the design model is closer to the requirements, it is more abstract and generally causes some difficulties for testing. The implementation model, on the other hand, is closer to testing and is expected to be easier to test and to expose more relevant errors. Those errors could reflect requirement, design, or implementation problems. Intuitively, the conformance relation holds between an implementation and a specification if, for every trace in the specification, the implementation does not contain unexpected deadlocks. That means that if the implementation refuses an event after such a trace, the specification also refuses this event [25].

A transition in LTSA is written as S1 = (e1 -> S2), where S1 is the initial state, S2 is the next state, and e1 is the event that causes the transition. We formalize the names of the states so that the LTSA file can be generated dynamically. For example, Edit = (copy -> Copy_Edit), File = (save -> ok -> Save_Ok_File),

File = (save -> cancel -> Save_Cancel_File), where the next state name is the combination of the event(s) and the initial state. This means that the same event on the same state transits to the same next state. After generating the LTSA file (with extension .lts) from the GUI tree, we can check for violations of safety, deadlock, or progress properties. The current tool can generate this file dynamically from the GUI tree [13]. Figure 1 shows a screenshot of the file dynamically generated for the MS Notepad Application Under Test (AUT). The auto-generated LTSA file in the tool contains the single parent-child events that can be observed in the tree, not all possible events. The objects on the left represent those parents in Notepad that have several children. All original states go to undefined states (represented by -1 in the figure) in the tree file, as the dynamically generated file does not represent the complete set of possible states.
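The following Python sketch illustrates the naming convention and the emission of FSP-style lines for such a .lts file from parent-child pairs of the GUI tree. The helper names and the sample pairs are ours, for illustration; they are not the tool's code.

```python
# Sketch: derive next-state names from (state, event) and emit FSP-style lines.
# Naming rule from the paper: next state = Event name prepended to current state name.

def next_state_name(state, event):
    """E.g., next_state_name('Edit', 'copy') -> 'Copy_Edit'."""
    return f"{event.capitalize()}_{state}"

def fsp_line(state, events):
    """Emit one FSP process definition for a parent state and its child events."""
    choices = " | ".join(f"{e} -> {next_state_name(state, e)}" for e in events)
    return f"{state} = ({choices})."

# Hypothetical parent-child pairs observed in the Notepad GUI tree.
print(fsp_line("Edit", ["copy", "cut", "paste"]))
# Edit = (copy -> Copy_Edit | cut -> Cut_Edit | paste -> Paste_Edit).
print(fsp_line("File", ["save", "open"]))
# File = (save -> Save_File | open -> Open_File).
```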

Figure 1. LTSA Notepad model.

Here is an edit-cut-copy-paste-undo LTSA demonstration example:

Set Edit events = {cut, copy, paste, undo}
EDIT = (cut -> CUTEDIT | copy -> COPYEDIT | {paste, undo} -> EDIT),
CUTEDIT = (undo -> EDIT | {cut, copy} -> CUTEDIT | paste -> PASTECUTEDIT),
COPYEDIT = (undo -> EDIT | {cut, copy} -> COPYEDIT | paste -> PASTECOPYEDIT),
PASTECOPYEDIT = (undo -> EDIT | paste -> PASTECOPYEDIT),
PASTECUTEDIT = (undo -> EDIT | paste -> PASTECUTEDIT).

property PASTE = ({cut, copy} -> paste -> PASTE).

We compose the edit process with a check property to make sure that the application does not start with a paste action (before a copy or cut). Figure 2 shows the graphical representation of the above example in LTSA.

Figure 2. Edit-Cut-Copy-Paste graphical representation in LTSA.
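To show what the composed safety check amounts to, here is a minimal Python sketch of the property "no paste occurs before a cut or copy" evaluated over event traces. This is a hand-rolled trace check written for illustration, not LTSA itself.

```python
# Sketch: check the safety property "no paste occurs before a cut or copy"
# over an event trace, mimicking what composing with the PASTE property verifies.

def violates_paste_property(trace):
    """Return True if 'paste' appears before any 'cut' or 'copy' in the trace."""
    armed = False                      # becomes True once a cut or copy is seen
    for event in trace:
        if event in ("cut", "copy"):
            armed = True
        elif event == "paste" and not armed:
            return True                # paste with nothing on the clipboard: violation
    return False

assert not violates_paste_property(["copy", "paste", "undo"])
assert violates_paste_property(["paste", "copy"])   # starts with paste: unsafe
```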

Test cases' results verification

The second use of the LTSA implementation is in results verification. We utilize the concept of dynamically defining the next expected state to verify the output of the execution and compare it to the expected results. We developed algorithms to dynamically generate test cases from the GUI tree [12] [13]. A typical test case dynamically generated in the tool looks like 15,NOTEPADMAIN,EDIT,FIND,TABCONTROL1,TABGOTO, where the number represents the test case number and the list represents the consecutive sequence of controls. This technique allows a simple approach to dynamically verifying the output of GUI test automation, which is considered the most challenging step in implementing GUI test automation. The process verifies the results of executing those test cases and compares them to the expected results as defined. Formally verifying the results of the above test case according to the previously described rules gives the following:

Notepadmain = (edit -> Edit_Notepadmain),
Edit_Notepadmain = (find -> Find_Edit_Notepadmain),
Find_Edit_Notepadmain = (tabcontrol1 -> Tabcontrol1_Find_Edit_Notepadmain),
Tabcontrol1_Find_Edit_Notepadmain = (tabgoto -> Tabgoto_Tabcontrol1_Find_Edit_Notepadmain).
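Reusing the naming convention sketched earlier, the expected state chain for such a test case can be derived mechanically. A minimal Python sketch (our own helper, for illustration):

```python
# Sketch: derive the chain of expected states for a generated test case.

def expected_states(test_case):
    """Given 'id,CONTROL1,CONTROL2,...', return the expected FSP transitions."""
    controls = test_case.split(",")[1:]          # drop the leading test case number
    state = controls[0].capitalize()             # initial state, e.g. 'Notepadmain'
    transitions = []
    for event in controls[1:]:
        nxt = f"{event.capitalize()}_{state}"    # next state = Event_CurrentState
        transitions.append(f"{state} = ({event.lower()} -> {nxt})")
        state = nxt
    return transitions

for t in expected_states("15,NOTEPADMAIN,EDIT,FIND,TABCONTROL1,TABGOTO"):
    print(t + ",")
# Notepadmain = (edit -> Edit_Notepadmain),
# Edit_Notepadmain = (find -> Find_Edit_Notepadmain),
# ...
```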

The tool uses the test cases as input and executes them using APIs that simulate the user actions. Each successfully executed control is logged. If all controls from the previous test case are successfully executed in the right sequence, they will be listed in the log file as (Tabgoto, tabcontrol1, find, edit, Notepadmain). The verification algorithm compares the two (i.e., generated and executed) for the right controls: correct names, number, and sequence of controls.
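A minimal sketch of that comparison (a hypothetical helper, not the tool's code; the log is assumed to be in reverse execution order, as in the example above):

```python
# Sketch: compare the executed log against the generated test case sequence,
# checking control names, count, and order.

def verify_execution(test_case, log):
    """Return True if the log matches the test case's control sequence exactly."""
    expected = [c.lower() for c in test_case.split(",")[1:]]   # drop test case id
    executed = [c.lower() for c in reversed(log)]              # restore forward order
    return executed == expected                                # names, count, and order

log = ["Tabgoto", "tabcontrol1", "find", "edit", "Notepadmain"]
print(verify_execution("15,NOTEPADMAIN,EDIT,FIND,TABCONTROL1,TABGOTO", log))  # True
```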

Using the number of LTSA parent states as a GUI structural metric

We studied the number of LTSA parent states as a metric to indicate GUI structural complexity. The value of this metric is that it can be extracted automatically by the tool, without the user involvement required by several GUI metrics suggested in this field. Table 1 shows the results of applying this metric to the selected AUTs. The compose and progress times (from the LTSA tool) are related to the number of states that the user interface has.

Table 1. LTSA parents' metric.

AUT                 No. of LTSA parents   Compose time (ms)   Progress time (ms)
Notepad             24                    453                 812
CDiese Test         4                     187                 0
FormAnimation App   5                     172                 609
winFomThreading     1                     250                 0
WordInDotNet        3                     156                 594
WeatherNotify       -                     -                   -
Note1               6                     172                 15
Note2               -                     -                   -
Note3               10                    796                 594
GUIControls         11                    172                 16
ReverseGame         6                     156                 0
MathMaze            5                     172                 594
PacSnake            5                     156                 594
TicTacToe           3                     422                 563
Bridges Game        10                    156                 0
Hexomania           7                     156                 593

Further study is required to find out whether this metric gives a valid indication of the structural complexity of the GUI. We will study other metrics on the same AUTs to validate the results and the value of this metric.
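A sketch of how such a parent count can be computed, reusing the nested-dictionary tree representation from Section 3 (our illustrative helper, not the tool's implementation):

```python
# Sketch: count "parent" states (controls that have children) in a GUI tree,
# i.e., the kind of count reported in Table 1.

def count_parents(tree):
    """Return the number of nodes that have at least one child."""
    count = 0
    for children in tree.values():
        if children:                       # this control is a parent state
            count += 1 + count_parents(children)
    return count

gui_tree = {
    "NOTEPADMAIN": {
        "FILE": {"SAVE": {}, "OPEN": {}},
        "EDIT": {"COPY": {}, "PASTE": {}},
    }
}
print(count_parents(gui_tree))  # 3: NOTEPADMAIN, FILE, EDIT
```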

4. Conclusion and Future Work

GUI test automation is not a cure-all and should not be taken as the only solution. We automate to save time and resources; we do not expect everything to be automated. In this research, we studied using GUI model and test result verification as a complement to testing. We target the practice of avoiding formal methods by offering a lightweight formal process that can be generated dynamically without extra resources. The research is still at an early stage, as we still need to evaluate the effectiveness and validity of the verification process. In principle, the verification process is simple and can easily be implemented dynamically, which offers a promising solution to the usually complex verification step of GUI test automation. GUI verification does not eliminate the need for user validation. Rather than replacing GUI testers, we want to reduce their number and the time required to test. Such techniques can be very useful in regression testing, where specific test cases must be rerun periodically or before a new release.

5. References

[1] Marlon Vieira, Johanne Leduc, Bill Hasling, Rajesh Subramanyan, Juergen Kazmeier. Automation of GUI Testing Using a Model-driven Approach. In Proceedings of the 2006 International Workshop on Automation of Software Test. Pages 9-14. 2006.
[2] A. Beer, S. Mohacsi, and C. Stary. IDATG: An Open Tool for Automated Testing of Interactive Software. In Proceedings of COMPSAC '98, the 22nd International Computer Software and Applications Conference. Pages 470-475. 1998.
[3] Peter Fröhlich, Johannes Link. Automated Test Case Generation from Dynamic Models. Lecture Notes in Computer Science. In Proceedings of the 14th European Conference on Object-Oriented Programming. Pages 472-492. 2000.
[4] Egbert Schlungbaum. Model-based User Interface Software Tools, Current State of Declarative Models. GIT-GVU-96-30. 1996.
[5] Qing Xie and Atif M. Memon. Studying the Characteristics of a Good GUI Test Suite. In Proceedings of the 17th International Symposium on Software Reliability Engineering. Pages 159-168. 2006.
[6] Qing Xie, Atif Memon. Designing and Comparing Automated Test Oracles for GUI-based Software Applications. ACM Transactions on Software Engineering and Methodology (TOSEM). 2007.
[7] B.E. Sucrow. Formal Specification of Human-Computer Interaction by Graph Grammars under Consideration of Information Resources. In Proceedings of the 12th International Conference on Automated Software Engineering (formerly: KBSE). Page 28. 1997.

[8] B.E. Sucrow. Refining Formal Specifications of Human-Computer Interaction by Graph Rewrite Rules. Springer Berlin / Heidelberg. 1998.
[9] R. Cassino, G. Tortora, M. Tucci, G. Vitiello. SR-Task Grammars: A Formal Specification of Human-Computer Interaction for Interactive Visual Languages. IEEE Symposium on Human Centric Computing Languages and Environments (HCC'03). Pages 195-197. 2003.
[10] Francis Jambon. First Steps in the Retro-engineering of a GUI Toolkit in the B Language. In Proceedings of the 15th French-speaking Conference on Human-Computer Interaction. Pages 118-125. 2003.
[11] Yamine Ait-Ameur and Mickael Baron. Formal and Experimental Validation Approaches in HCI Systems Design Based on a Shared Event B Model. International Journal on Software Tools for Technology Transfer (STTT). Springer Berlin / Heidelberg. 2006.
[12] Alsmadi, I., and Kenneth Magel. GUI Path Oriented Test Generation Algorithms. In Proceedings of IASTED (569) Human-Computer Interaction. 2007.
[13] Alsmadi, I., and Kenneth Magel. An Object Oriented Framework for User Interface Test Automation. MICS07. 2007.
[14] Bonnie John and David Kieras. Using GOMS for User Interface Design and Evaluation: Which Technique? ACM Transactions on Computer-Human Interaction (TOCHI). Pages 287-319. 1996.
[15] Mei C. Chuah, Bonnie E. John, and John Pane. Analyzing Graphic and Textual Layouts with GOMS: Results of a Preliminary Analysis. In Proceedings of the Conference Companion on Human Factors in Computing Systems. Pages 323-325. 1994.
[16] Elkerton, J. Using GOMS Models to Design Documentation and User Interfaces: An Uneasy Courtship. In Proceedings of INTERCHI'93. 1993.
[17] Hochstein, Lorin. GOMS. Course website. 2002.
[18] Marian G. Williams, Vivienne Begg. Translation between Software Designers and Users. Communications of the ACM. Pages 102-103. 1993.
[19] Peter Bumbulis. Combining Formal Techniques and Prototyping in User Interface Construction and Verification. PhD thesis, University of Waterloo. 1996.
[20] Brad A. Myers. State of the Art in User Interface Software Tools, Chapter 5. Ablex, Norwood, N.J. 1992.
[21] Rouff, Christopher. Formal Specification of User Interfaces. ACM SIGCHI Bulletin. Pages 27-33. 1996.
[22] David Carr. Specification of Interface Interaction Objects. In Proceedings of the ACM CHI'94 Human Factors in Computing Systems Conference. Pages 372-378. 1994.
[23] Schumann, Johann. Automated Theorem Proving in Software Engineering. Springer. 2001.
[24] Tschaen, Valéry. Model-based Testing of Reactive Systems. Springer Berlin / Heidelberg. <www.irisa.fr/vertecs/Publis/Ps/2005-Test-Chap-Book.pdf>. 2005.
[25] Magee, Jeff. Behavioral Analysis of Software Architectures Using LTSA. In Proceedings of the 21st International Conference on Software Engineering. Pages 634-637. 1999.
