Platform-Independent and Tool-Neutral Test Descriptions for Automated Software Testing

Chang Liu
Information and Computer Science
University of California, Irvine, Irvine, CA 92697, USA
[email protected]

ABSTRACT

Current automatic test execution techniques are sensitive to changes in program implementation. Moreover, different testing tools require different test descriptions. As a result, it is difficult to maintain or port test descriptions. To address this problem, we developed TestTalk, a comprehensive testing language. TestTalk test descriptions are platform-independent and tool-neutral: the same software test in TestTalk can be automatically executed by different testing tools on different platforms. The goal of TestTalk is to make software test descriptions, which represent a significant portion of a software project, last as long as the software project itself.

Keywords: software testing, test description, automatic test execution.

1 INTRODUCTION

Automatic software testing is gradually becoming accepted practice in the software industry. One reason is that more practitioners are exposed to the growing number of test automation tools and have become aware that test automation is a viable option. Another reason is that test automation tools are more mature and have more features, and are thus more appealing than simple capture-playback tools. The most important reason, however, is that ever-shrinking development cycles and rising expectations of software quality are forcing project teams to seek help from automation. Software test automation techniques and tools promise shorter turnaround times and lower costs for frequently repeated tests.

There are, however, pitfalls on this path to higher productivity [1]. Automated test cases are typically expensive to develop. Those written in programming or scripting languages are usually hard to understand and sensitive to changes. Switching test automation tools, porting to other platforms, or upgrading the implementation of the application-under-test often become nightmares for test automation engineers. In Section 2 of this extended abstract, we provide a detailed analysis of these problems.

We are developing TestTalk, a platform-independent and tool-neutral testing language, to alleviate these problems. TestTalk is introduced in Section 3. Section 4 presents one completed case study and two ongoing case studies that use TestTalk. Section 5 lists related work. Finally, a summary is presented in Section 6.

2 CURRENT PROBLEMS IN AUTOMATED SOFTWARE TESTING

Problem 1: Expensive to develop

The current state of the practice in software test automation uses general-purpose or tool-specific programming or scripting languages to automate test cases. For example, general-purpose languages such as C/C++, Perl, Tcl/Tk, and Expect are widely used to write automated test cases. Tool-specific languages, such as BASIC for Rational Visual Test and 4Test for SilkTest/QAPartner, are also gaining popularity, mainly because of the popularity of the test automation tools themselves. Programming and scripting languages are powerful, and test designers have full access to that power to achieve their goals. While this is desirable, writing an automated test case in this way requires a complete development cycle, including design, implementation, debugging, and testing (testing of the test case itself), just like any other software artifact. This task is demanding in terms of both human and computational resources. Consequently, in many projects test automation is applied to only a very small number of test cases.
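To make this cost concrete, consider a minimal hand-coded version of the kind of web test discussed later in this abstract. The following Perl sketch is ours, not the paper's; the ssh-based restart command and the use of LWP::Simple stand in for whatever mechanisms a particular team might choose.

    #!/usr/bin/perl
    # Hypothetical hand-coded test (illustration only): every step below is
    # code that must be designed, implemented, debugged, and itself tested.
    use strict;
    use warnings;
    use LWP::Simple qw(get);

    my $host = shift @ARGV or die "usage: $0 <test-host>\n";

    # Restarting the server is platform- and installation-specific.
    system("ssh", $host, "apachectl", "restart") == 0
        or die "could not restart server on $host\n";

    my $page = get("http://$host/");
    defined $page         or die "FAIL: could not fetch page from $host\n";
    $page =~ /It Worked!/ or die "FAIL: expected text not found\n";
    print "PASS\n";

Even this toy test hard-codes a restart mechanism, a URL scheme, and a verification strategy; Section 3 shows how TestTalk reduces the same test to two actions and one oracle.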


Problem 2: Sensitive to changes

Automated test cases are, in effect, programs that drive an application-under-test by simulating an end user. This is usually achieved by simulating user-interface events, which may include hot-keys, menu items, mouse events, and keystrokes. Naturally, automated test cases are tied to the semantics of these low-level user-interface events, which are subject to changes such as a rebinding of a menu item's hot-key. These changes may happen frequently, and might happen very late in the application development cycle as application developers fine-tune the application for release. For test automation engineers, these changes are equivalent to requirement changes if their task is viewed as software development. It is a basic software engineering lesson that requirement changes are hard to deal with once implementation is already underway. Not surprisingly, one of the biggest challenges for today's test automation engineers is keeping pace with the ever-changing face of the application-under-test.
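As an illustration, assuming a Windows application driven through Win32::GuiTest (our choice of example tool, not the paper's), a test fragment might encode UI bindings like this:

    #!/usr/bin/perl
    # Hedged sketch (ours): driving an application through low-level UI
    # events. The hard-coded hot-key sequence is exactly the kind of binding
    # that breaks when developers re-arrange menus late in the cycle.
    use strict;
    use warnings;
    use Win32::GuiTest qw(FindWindowLike SetForegroundWindow SendKeys);

    my ($win) = FindWindowLike(0, '^Untitled - Notepad')
        or die "application window not found\n";
    SetForegroundWindow($win);
    SendKeys('%fo');              # Alt+F, then 'o' (File > Open) -- wrong
                                  # the moment the hot-key is re-bound
    SendKeys("test.txt{ENTER}");

A change as small as renaming the menu item or re-binding Alt+F leaves this script syntactically intact while the test quietly stops exercising the intended feature.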

Problem 3: Difficult to port

Implementation changes are not the only source of "requirement changes" for test automation engineers. If the application-under-test is ported to another platform, the project team should perform the same automated testing on the new platform. This presents test automation engineers with a significant challenge: first, very few test automation tools are cross-platform; second, even fewer test automation tools behave identically on different platforms; third, the application-under-test is probably also adjusted to match the conventions of the new platform.

Switching test automation tools can also force a rewrite of all automated test cases. Switching is needed, among other situations, when a new test automation tool offers a desirable feature that a team's old favorite tool lacks.

3 OUR SOLUTION: TESTTALK

To alleviate the problems described in Section 2, we propose a platform-independent and tool-neutral testing language for describing automation-friendly software tests. The language we developed for this purpose is TestTalk [2,3], a comprehensive testing language based on XML (http://belmont-shores.ics.uci.edu/TestTalk).

In TestTalk, a test case consists of a series of test actions and test oracles. A test action represents a particular test step, or a particular series of test steps, that testers would perform to carry out a test or part of a test. A test oracle represents a verification step, or a series of verification steps, that testers would perform to verify part or all of a test's outputs. A tester can define customized test actions and test oracles suitable for her application domain. A collection of tester-defined test actions and test oracles is called a dialect. A tester can use both her own dialect and the vocabulary provided by the TestTalk system to describe her test suites and test cases. Since dialects are defined by testers in the field, who know the application domain better than language designers do, the resulting test descriptions are likely to be easier to write and easier to understand.

An example TestTalk test case, SimpleText, is shown in Fig. 1. This test case contains only two simple test actions, RestartServer and GetPage, and a single simple test oracle, Text. When the test case is executed, it first restarts the Apache web server (an open-source HTTP server software package) on the host-under-test, and then tries to get a page from this host machine. After it gets the page, it checks whether the text string "It Worked!" is present in the returned page. Various test execution tools may be used to execute this test case automatically.

    Fig. 1. A TestTalk test case from the Web Testing Framework.
    [Only fragments of the figure survive extraction: the test case name
    SimpleText, the parameter $TestHost, and the expected text "It Worked!".
    The XML markup itself was lost.]
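Since TestTalk is XML-based, the lost figure plausibly resembled the following reconstruction. The element and attribute names are our guesses; only the identifiers SimpleText, RestartServer, GetPage, Text, $TestHost, and the string "It Worked!" come from the surrounding text.

    <!-- Reconstructed sketch, not the original figure. -->
    <TestCase name="SimpleText">
      <Action name="RestartServer" host="$TestHost"/>
      <Action name="GetPage" host="$TestHost"/>
      <Oracle name="Text" expect="It Worked!"/>
    </TestCase>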

TestTalk test descriptions describe only what software tests do and what outputs they expect, not how to execute the tests or how to verify the results. We intentionally separate test automation details from test descriptions, because automation details usually have close ties to specific platforms or specific test execution tools. This separation ensures that TestTalk test descriptions are insensitive to changes in application implementation details and are portable.

While maintaining high understandability, maintainability, and portability, TestTalk test descriptions are also automation-friendly. For each test action and test oracle, testers can define how it can be implemented on various platforms using various tools. One way to do this is to write transformation rules that translate TestTalk test actions and test oracles into complete or partial programs or scripts. For each testing tool on each platform, a set of transformation rules is developed to implement every test action and oracle. With a transformation rule set, a test case description can be transformed into a program or script that implements the test case on a particular platform using a particular tool. To use another testing tool, or to test on another platform, only a new set of transformation rules is needed; all test descriptions remain the same. Therefore, not only are TestTalk test descriptions automation-friendly, but the exact same test descriptions can also be automatically executed on different platforms using different tools.

4 CASE STUDIES

To validate the TestTalk language, we used it in a web testing framework project. Currently, we are using TestTalk in two other case studies. In this section, we briefly introduce these case studies.

Case Study 1: Web Testing Framework

The Web Testing Framework is a framework for Apache-based web testing. It can be used by web developers to write test cases that can be automatically executed to test a web site. It can also be used by developers of the Apache web server to quickly verify whether a new build of the Apache server functions as expected. Test cases in the Web Testing Framework are all written in TestTalk. Web developers or Apache developers can use a set of predefined test actions and test oracles to quickly put together a test case. They can choose to execute the test case with lynx (a text-based web browser), Expect (a testing tool based on the Tcl scripting language), lwp (libwww-perl, the Perl World-Wide-Web library, http://www.perl.com/CPAN), GNU wget (a network utility that retrieves files from the World Wide Web), or curl (a command-line tool for getting data from a URL), or with any other tool that they think is suitable. Furthermore, they can even add their own test actions and test oracles. To implement a new test action or oracle, or to implement an existing action or oracle using a new tool, a web tester only needs to provide a segment of Perl code that complies with a predefined interface protocol. This ability to incorporate new code is the reason the project is named the "Web Testing Framework" rather than the "Web Testing Tool": expandability was a major design goal.
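To suggest what such a Perl segment might look like, here is a sketch of an lwp-based implementation of the GetPage action and the Text oracle. The paper does not show the predefined interface protocol, so the function names and calling convention below are our assumptions; a transformation rule set targeting lwp could plausibly emit code of this shape.

    #!/usr/bin/perl
    # Hypothetical action/oracle implementations for an lwp back end.
    use strict;
    use warnings;
    use LWP::Simple qw(get);

    # GetPage action: fetch the front page of the host-under-test.
    sub action_GetPage {
        my ($host) = @_;
        my $page = get("http://$host/");
        die "GetPage failed for $host\n" unless defined $page;
        return $page;
    }

    # Text oracle: pass if the expected string appears in the output.
    sub oracle_Text {
        my ($output, $expected) = @_;
        return index($output, $expected) >= 0;
    }

    # Minimal driver, standing in for the framework's harness.
    my $host = shift @ARGV or die "usage: $0 <test-host>\n";
    my $page = action_GetPage($host);
    print oracle_Text($page, "It Worked!") ? "PASS\n" : "FAIL\n";

Re-targeting the same TestTalk description to wget or curl would mean swapping only this segment; the description itself stays untouched.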


Our experience with this project showed that TestTalk supports multi-platform and multi-tool test automation.

Case Study 2: Securibot

Many computer systems have been compromised by attackers exploiting known security problems. To address this problem, we are developing a framework for automated security checking and patching. The framework, named Securibot, provides a self-operating mechanism for security checking and patching. Securibot performs security testing and analysis using security profiles and security updates, both of which are specified in TestTalk. Securibot can also detect compromised systems using attack signatures, also specified in TestTalk. Most importantly, the Securibot framework allows system vendors to publish newly discovered security weaknesses and new patches in TestTalk, so that Securibot instances running on deployed systems can automatically check out security updates and apply the patches.
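The paper gives no schema for security profiles or attack signatures, but to suggest the flavor, a Securibot check could plausibly be an ordinary TestTalk test whose oracle flags a vulnerable response. Everything in the following sketch, including the element names and the patch URL, is hypothetical; it reuses the element vocabulary assumed in the Fig. 1 reconstruction above.

    <!-- Purely illustrative sketch; Securibot's actual notation is not
         shown in the paper. -->
    <TestCase name="CheckPhfExploit">
      <Action name="GetPage" host="$TestHost" path="/cgi-bin/phf"/>
      <Oracle name="Text" expect="404 Not Found"/>
      <Patch url="http://vendor.example.com/patches/phf-fix"/>
    </TestCase>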

Case Study 3: E-Fluency Test for E-Speak

E-Speak is an ongoing project at Hewlett-Packard (http://www.e-speak.hp.com/). Its goal is to provide uniform services over the Internet. E-Speak defines a uniform service interface (API) and a uniform service interaction model (the E-Speak engine) that allow e-services to dynamically interact to discover, negotiate, broker, and compose themselves to solve a business-to-business or business-to-consumer service request. Since E-Speak is essentially a set of protocols, implementations other than the HP implementation are expected. It is critical to quickly and accurately verify the conformance of a new E-Speak implementation.

We are working on the E-Fluency Test project (http://belmont-shores.ics.uci.edu/E-Fluency), a certification test suite for E-Speak. Because of the heterogeneous nature of E-Speak, the E-Fluency Test has to be platform-independent. TestTalk is an ideal language for such a situation.

5 RELATED WORK

Ostrand and Balcer's TSL (Test Specification Language) is a language designed to describe functional test suites by specifying categories. It provides a property-and-selector mechanism, which testers can use as constraints to annotate the test specification and thereby control the complexity and number of the tests. TSL does not address test oracles or test result verification.

TDD (Test Data Definition language) from Sun Microsystems has three types of elements that help generate more effective test data by specifying test categories: properties, test variables, and test directives. TDD does not provide support for test automation.

4Test is a commercial testing language from Segue. While it supports a higher abstraction level, so that test descriptions are not closely tied to the GUI of the application-under-test, it is tool-specific.

6 SUMMARY

Software test automation has great potential to improve software quality and reduce software development costs. However, the understandability, portability, and maintainability of executable test descriptions must be addressed before this potential can be fully realized. TestTalk is an attempt in this direction. The Web Testing Framework case study shows that a customizable test framework can be built on top of TestTalk. We are working on two more case studies to collect further evidence.

REFERENCES

[1] Chang Liu and Debra J. Richardson, "Programming Languages Considered Harmful in Writing Automated Software Tests," Technical Report 99-09, Information and Computer Science, University of California, Irvine, Feb. 1999.

[2] Chang Liu and Debra J. Richardson, "TestTalk, A Test Description Language: Write Once, Test by Anyone, Anytime, Anywhere, with Anything," Technical Report 99-08, Information and Computer Science, University of California, Irvine, Feb. 1999.

[3] Chang Liu, "TestTalk: A Comprehensive Testing Language," Doctoral Symposium, 14th IEEE International Conference on Automated Software Engineering, Cocoa Beach, Florida, USA, Oct. 1999.
