WinRunner
eTest Center, Bangalore

Agenda

• Software Testing - why needed?
• Testing methodology followed
• Test automation
• Different testing types & tools used
• WinRunner - a functionality testing tool

Software Testing

• Determine whether the system meets the requirements specified by the client
• Find bugs and track them through defect-tracking tools
• Improve the quality of the application and add value to the organization
• Track usability issues that are not specified explicitly in the client requirements

Software Testing Methodologies

• Waterfall methodology
  • Unit Testing
  • Integration Testing
  • System Testing
  • User Acceptance Testing
• Incremental methodology

E-Testing Methodology - Onsite/Offshore Iteration

(Process flow: Requirement Analysis → Strategy Formulation → Test Planning → Test Case Generation → Scripting → Test Execution → Release → Post-Deployment Evaluation. A knowledge repository supplies inputs on tools, checklists, and the environment; functional and load test tools support functional, stress, and performance testing; a test management tool tracks regression runs and defects.)

Automation - Why Required?

• Reduced testing time
• Consistent test procedures - ensures process repeatability and resource independence
• Eliminates the errors of manual testing
• Reduced QA cost - the upfront cost of automated testing is easily recovered over the lifetime of the product
• Improved testing productivity - test suites can be run earlier and more often
• Proof of adequate testing
• Takes over tedious work - test team members can focus on quality areas

Test Automation

• Functionality Testing
  Functionality testing tools are used mainly when the application has to be tested across a number of hardware and browser combinations.
  Tools: SilkTest, SQA Robot, WinRunner.

• Performance Testing
  Tests the scalability of the application and determines performance bottlenecks and stability at high loads.
  Tools: Silk Performer, LoadRunner, WebLoad, WAS.

Which Test Cases to Automate?

• Tests that need to be run for every build of the application (sanity check, regression test)
• Tests that use multiple data values for the same actions (data-driven tests; see the sketch below)
• Tests that require detailed information from application internals (e.g., SQL, GUI attributes)
• Stress/load testing
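As a rough illustration of a data-driven test, the following TSL sketch loops the deck's purchase-form actions over the rows of a data table; the file name purchases.xls and its Name and Make columns are hypothetical:

# Minimal data-driven TSL sketch (table and column names are hypothetical)
table = "purchases.xls";
rc = ddt_open (table, DDT_MODE_READ);
if (rc != E_OK && rc != E_FILE_OPEN)
    pause ("Cannot open the data table.");
ddt_get_row_count (table, row_count);
for (i = 1; i <= row_count; i++)
{
    ddt_set_row (table, i);                              # select the current data row
    set_window ("Automobile Purchase Form", 10);
    edit_set ("Customer Name", ddt_val (table, "Name")); # pull values from the table
    list_select_item ("Make", ddt_val (table, "Make"));
    button_press ("Insert Sale");
}
ddt_close (table);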

Which Test Cases Not to Automate?

• Usability testing ("How easy is the application to use?")
• One-time testing
• "ASAP" testing ("We need to test NOW!")
• Ad hoc/random testing (based on intuition and knowledge of the application)
• Tests without predictable results

From Manual to Automated Testing

Manual testing:
1. Perform user actions
2. Wait for processes to complete
3. Verify that the AUT functions as expected
4. Repeat the steps until all applications are verified compliant

Automated testing:
1. Generate an automated script
2. Synchronize script playback to application performance
3. Add verification
4. Run the test or suite of tests

Testing Is a Team Effort

TEAM MEMBER                            RESPONSIBILITY
Project Manager                        Manages the testing process
Business Analyst                       Analyzes the enterprise and creates tests
Developer                              Develops the applications and fixes defects
WinRunner Expert                       Creates automated tests based on planning documentation and requirement specifications
Subject Matter Expert (business user)  Understands how applications work in terms of navigation and data
System Administrator                   Manages the test environment

Automated Testing Process - A Team Effort: Typical Responsibilities

1. Generate automated script - WinRunner Expert, Subject Matter Experts, Business Analysts
2. Synchronize script playback to application performance - WinRunner Expert, Business Analysts
3. Add verification - WinRunner Expert
4. Run test or suite of tests - WinRunner Expert, Business Analysts, System Administrators

Testing Process

• Gather test documentation
  - What type of testing is required for the AUT?
  - Which test data should be used?
  - What results are expected?
• Learn the AUT
  - Which screens must be navigated?
  - Which visual cues should be established?
• Confer with the project team (functional experts)

Mercury Interactive's WinRunner

Steps involved:
• Script generation
• Customization of scripts
• Parameterization of data
• Maintenance of test scripts in test suites
• Saving of test results

The WinRunner workflow:
1. Record user actions in a script
2. Synchronize the script to the application under test
3. Add verification statements to check the AUT
4. Run the test or suite of tests
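A compressed TSL sketch of how the four workflow steps might look in one script, reusing the deck's purchase form; the synchronization property and the step name are illustrative assumptions, not recorded output:

# Step 1: recorded user actions
set_window ("Automobile Purchase Form", 10);
edit_set ("Customer Name", "Thomas Paine");
button_press ("Insert Sale");

# Step 2: synchronization - wait up to 10 seconds for the form to be ready again
win_wait_info ("Automobile Purchase Form", "enabled", 1, 10);

# Step 3: verification - check the Insert Sale button and log the result
# (tl_step treats status 0 as pass; obj_check_info returns E_OK, i.e. 0, on a match)
rc = obj_check_info ("Insert Sale", "enabled", 1);
tl_step ("verify_insert", rc, "Insert Sale is available again");

# Step 4: the saved test is then run on its own or as part of a suite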

(Section roadmap: Recording and Playback; Analog vs. Context Sensitive Scripts; Initial/End Conditions; The GUI Map)

RECORDING and PLAYBACK

What Happens During Recording?

As the user fills in the Automobile Purchase Form (customer "Thomas Paine", address "234 Willow Drive", date 12/12/03, make BMW, year 1973, model 2002tii), WinRunner records each action as a TSL statement:

set_window("Automobile Purchase Form", 10);
edit_set ("Customer Name", "Thomas Paine");
edit_set ("Address", "234 Willow Drive");
edit_set ("Date", "12/12/03");
list_select_item ("Make", "BMW");
edit_set ("Year", "1973");
edit_set ("Model", "2002tii");
button_press ("Insert Sale");

What Happens During Playback?

WinRunner replays the same statements against the form, filling in the fields exactly as the user did; the AUT responds with a "Purchase Completed" confirmation:

set_window("Automobile Purchase Form", 10);
edit_set ("Customer Name", "Thomas Paine");
edit_set ("Address", "234 Willow Drive");
edit_set ("Date", "12/12/03");
list_select_item ("Make", "BMW");
edit_set ("Year", "1973");
edit_set ("Model", "2002tii");
button_press ("Insert Sale");

Two Recording Modes

Context Sensitive:
• When the application is based on GUI objects
• The default mode
• Recommended

Analog:
• When the application has non-GUI areas (e.g., a drawing application)
• When mouse tracks are necessary for correct execution
• When you are unable to use Context Sensitive mode

TIP: A test can combine both Context Sensitive and Analog statements (see the sketch below).
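A rough sketch of how the two modes might be mixed in one test: Context Sensitive statements drive the menus, then Analog statements handle a freehand drawing area. The window and menu names are hypothetical, and the coordinates assume a fixed window position:

# Context Sensitive statements for the standard GUI objects
set_window ("Paint", 10);
menu_select_item ("Tools;Brush");

# Analog statements for the non-GUI drawing canvas
move_locator_abs (300, 200);   # move the pointer to absolute screen coordinates
move_locator_abs (360, 240);
# (recorded mouse clicks appear as mtype statements, as on the Analog Script slide)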

Context Sensitive Recording

• Object based
• Readable script
• Maintainable, editable script
• Script not affected by user-interface changes - if an object moves to a new location on the GUI, the script will still replay correctly
• Portable script - a Context Sensitive script can be ported to different platforms with different configurations

A Closer Look at GUI Objects

(Screenshot labeling the common GUI object classes: window, menu, static text, list item, edit field, scroll bar, frame, radio button, push button.)

User Actions in the Test Script

set_window ("Login", 10);                      # specify the window for input
edit_set ("Username", "thomas");               # type input into an edit field
password_edit_set ("Password:", "kzptnzet");   # type encrypted input into a password field
button_press ("OK");                           # press a button to submit
set_window ("Automobile Purchase Form", 10);
list_select_item ("Make", "BMW");              # make a list box selection

ANALOG vs. CONTEXT SENSITIVE SCRIPTS

Context Sensitive Script Review

set_window ("Save As");
edit_set ("File Name", "output14");
button_press ("OK");

Analog Recording

• Screen-coordinate dependent
• The test script describes mouse and keyboard activities
• Three commands: mouse press/release, mouse move, keyboard type
• Covers all types of applications

Analog Script

move_locator_track (1);   # mouse movement (replayed with the recorded timing)
mtype (" -+");            # mouse click (button press/release)
type (" output14");       # keyboard input
move_locator_track (2);   # mouse movement
mtype (" -+ ");           # mouse click (button press/release)

Analog or Context-Sensitive?

Application                   Functionality under test                Mode
Graphics program              Paintbrush stroke                       Analog
Graphics program              Preferences checkboxes                  Context Sensitive
Virtual reality environment   Mouse-based movement controls           Analog
Client/server database        Data entry using standard GUI objects   Context Sensitive

• Context Sensitive statements describe actions made to GUI objects and are recommended for most situations

• Analog statements are useful for literally describing the keyboard, mouse, and mouse button input of the user

WinRunner Tracks the AUT's Windows and Objects with the GUI Map File

The GUI Map file contains:
• The windows of the AUT
• The objects within each window
• The physical attributes that give each object its unique identification

GUI Map File - WINDOW: Login

Logical Name   Physical Description
Name           class: edit, attached_text: "Name"
Password       class: edit, attached_text: "Password"
OK             class: push_button, label: "OK"
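A small TSL sketch of how a script might load this GUI map and then address the objects by their logical names alone; the file path is hypothetical:

# Load the GUI map so logical names resolve to physical descriptions
GUI_load ("C:\\qa\\maps\\login.gui");   # path is hypothetical

# The script now refers to objects by logical name only
set_window ("Login", 10);
edit_set ("Name", "thomas");
password_edit_set ("Password", "kzptnzet");
button_press ("OK");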

GUI Map Editor

• A visual tree displays the windows and objects contained in the GUI Map file
• The first level consists of all windows in the AUT (parent windows, listed by logical name)
• The second level consists of the objects uniquely identified within each parent window (child objects, listed by logical name)
• The physical description of the highlighted window or object is displayed below the tree

The GUI Map - Characteristics

• Allows separation of physical attributes from test scripts
• Enables WinRunner to uniquely identify objects in the AUT using physical attributes
• Allows WinRunner to refer to objects in the script using an intuitive logical name
• Provides the connection between logical names and physical attributes

Strengths

• Maintainability - if a button label changes in the application, update the button description once in the GUI map rather than in 500 tests
• Readability - button_press("Insert") instead of button_press("{class: ThunderSSCommand}")
• Portability - use the same script for all platforms, with a different GUI map for each platform

Checkpoints

• GUI checkpoints
• Database checkpoints
• Bitmap checkpoints
• Text checkpoints (see the sketch below)
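A hedged sketch of the four checkpoint kinds as TSL statements; the checklist and expected-results names (list1.ckl, gui1, list1.cdl, dbvf1, Img1) are placeholders standing in for the files WinRunner generates when a checkpoint is inserted:

# GUI checkpoint: compare the OK button's properties to a saved checklist
obj_check_gui ("OK", "list1.ckl", "gui1", 1);

# Database checkpoint: compare query results to saved expected results
db_check ("list1.cdl", "dbvf1");

# Bitmap checkpoint: compare the window's image to a stored bitmap
win_check_bitmap ("Automobile Purchase Form", "Img1", 1);

# Text checkpoint: read text from an object and verify it in the script
obj_get_text ("Order Number", order_no);
if (order_no == "")
    tl_step ("order_check", 1, "Order number was not filled in");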

Why Synchronize?

Without a synchronization point: the script inputs data to the AUT, the AUT sends the data to the database server, and while the AUT is still waiting for the server, the script attempts its next step and fails.

With a synchronization point: the script inputs data to the AUT and then waits while the server processes the data and returns results. Once the client affirms that the transaction is complete, the script continues.

Synchronization Points

• The AUT's performance may slow down as the number of users increases
• Synchronization points allow WinRunner to wait for the AUT, just like a real user would (see the sketch below)
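A minimal sketch of a synchronization point in TSL, assuming a status object that becomes enabled when the transaction completes (the logical name "Status" is hypothetical):

# Wait up to 10 seconds for the status object to become enabled
# before the script continues ("Status" is a hypothetical logical name)
obj_wait_info ("Status", "enabled", 1, 10);

# Whole windows can be synchronized the same way
win_wait_info ("Automobile Purchase Form", "enabled", 1, 10);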

Playback - Test Results Report

• Each checkpoint outcome is either OK or mismatch
• Checkpoint details can be opened in a separate window

(Screenshot: a results log listing the outcome of several Insert_Sale checkpoints.)

Thank You
