Quality Attributes

1 QUALITY ATTRIBUTES

To ensure a quality application, each test type defined in the test plan corresponds to one or more Quality Attributes.

1.1 Top Quality Attributes

- Portability – software is transferred easily from one computer to another
- Reliability – satisfies requirements without error; fault tolerant; recoverable
- Efficiency – performs with minimum computer resources (time, memory, cycles)
- Human engineering – easily understood and used by human users
- Testability – easily verified by execution
- Understandability – easily read by the software maintainer; operable
- Modifiability – easily revised by the software maintainer
- Functionality

1.2 Quality Assurance

Software Quality Assurance involves the entire software development PROCESS – monitoring and improving the process, making sure that any agreed-upon standards and procedures are followed, and ensuring that problems found are dealt with. Quality Assurance is oriented to ‘prevention’.

1.3 Quality Control

Software Quality Control involves the testing of an application under controlled conditions and evaluating the results. Testing validates that the system performs as expected, and that the system performs within expectations when things go wrong. Quality Control is oriented to ‘detection’.

All validation and verification test types mentioned in this document are classified as Quality Control.

2 SOFTWARE DEVELOPMENT LIFE CYCLE AND THE TEST LIFE CYCLE

2.1 Verification or Static Testing

Verification testing is the review of documentation, technology and code. It is testing that does NOT execute code. Static testing is performed at the beginning of the development life cycle.

Static testing is performed to:
- Detect and remove defects early in the development and test cycle, thus preventing the migration of defects to later phases of software development.
- Improve communication across the entire development project team.
- Ease the transition to subsequent testing phases.

2.1.1 Types of Static Testing: Inspections and Walkthroughs

Inspections
- Measure any important work product against standards and requirements.
- Individuals read the work product before the meeting; the small group then meets, led by an impartial moderator who is not the creator.
- The purpose is to identify problems, not to solve them.

Walkthroughs
- Less formal than inspections: individuals do not prepare for the meeting, and it is led by the creator.
- A walkthrough usually shows how different pieces of the system interact, and can thus expose usability problems, redundancy and missed details.

2.1.2 Static Testing Within the SDLC

Static testing is performed at different phases of the Software Development Life Cycle. These include:
1. Requirements – to ensure that the user’s needs are properly understood before translating them into design. Ensure the requirements include basic functionality, definitions, and usability metrics. Ensure they are complete, correct, precise, consistent, relevant, testable, traceable, feasible, manageable, and free of design detail.
2. Functional Design – to translate requirements into the set of external interfaces. Describes what the user can see. Reveals ambiguities, errors, and misunderstandings.
3. Internal Design – to translate the functional specification into data structures, data flows and algorithms. Describes what the user cannot see.
4. Code review – for the source code itself.

2.2 Validation or Dynamic Testing

Dynamic testing evaluates a system or component during or at the end of the development process to determine whether it satisfies specified requirements. It is testing that DOES execute code. The bulk of the testing effort is dynamic testing. Generally, verification testing finds 20% of bugs whereas validation finds 80%. The tester can execute the code using Black Box Testing or White Box Testing:

Black Box Testing
- The program is treated like a black box: the tester cannot see the code.
- It is testing that is driven from the outside, from the customer’s perspective, and is carried out without knowledge of the details of the underlying code.
- It is not based on any knowledge of internal design or code. Tests are based on requirements and functionality.

White/Glass Box Testing
- The programmer uses his understanding of, and access to, the source code to develop test cases.
- White box testing is based on coverage of code statements, paths, branches, conditions, logical expressions (true/false), loops, and internal data structures.

2.2.1 Unit testing

- The most ‘micro’ scale of testing, exercising the smallest executable pieces of code.
- Performed by the programmer and not by testers, as it requires detailed knowledge of the internal program design and code.
- Perform line coverage testing: the execution of every line of code at least once.
- Perform branch coverage testing: the execution of each branch twice – once for a true value and once for a false value.
- Perform condition coverage testing: the execution of each condition at least once for true and once for false (see the sketch below).
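The following is a minimal sketch of these coverage goals, assuming Python with pytest; the function is_valid_score and all of its values are hypothetical, not taken from any real system.

    import pytest

    def is_valid_score(score, strict):
        # Hypothetical function under test. The compound condition below
        # needs each sub-condition exercised as true and as false for
        # condition coverage; each 'if' needs a true and a false pass
        # for branch coverage.
        if score < 0 or score > 100:
            return False
        if strict and score == 0:
            return False
        return True

    @pytest.mark.parametrize("score, strict, expected", [
        (-1, False, False),   # first branch true via 'score < 0'
        (101, False, False),  # first branch true via 'score > 100'
        (0, True, False),     # second branch true (both sub-conditions true)
        (0, False, True),     # second branch false via 'strict'
        (50, True, True),     # second branch false via 'score == 0'
    ])
    def test_is_valid_score(score, strict, expected):
        assert is_valid_score(score, strict) == expected

These five cases execute every line, take each branch both ways, and evaluate each sub-condition as both true and false.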

2.2.2 Smoke Test or Qualification testing

- Each time the test team receives a new version of the program, an initial test is performed to determine whether the build is stable enough to be tested.
- It is a short test hitting all major pieces of functionality – a “shot through the dark” to determine if the software is performing well enough to accept it for a major testing effort.
- An ideal test to be automated (see the sketch below); subsequently it can be run by the development staff before testing begins.
- For example, if the new software is crashing systems every 5 minutes, bogging systems down to a crawl, or destroying databases, the software may not be in a ‘sane’ enough condition to warrant further testing in its current state.
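As an illustration, an automated smoke test for a web build might look like the sketch below, assuming Python with the requests package; the base URL and the list of critical paths are placeholders for whatever the application’s major pieces of functionality are.

    import requests

    BASE_URL = "http://localhost:8080"           # assumption: build under test
    CRITICAL_PATHS = ["/", "/login", "/search"]  # assumption: major features

    def smoke_test():
        # Hit each major piece of functionality once and fail fast;
        # any failure means the build is rejected for the main test effort.
        for path in CRITICAL_PATHS:
            response = requests.get(BASE_URL + path, timeout=5)
            assert response.status_code == 200, (
                f"{path} returned {response.status_code}: build not stable")
        print("Smoke test passed: build accepted for further testing.")

    if __name__ == "__main__":
        smoke_test()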

2.2.3 Usability Testing

- Test based on accessibility (navigation), responsiveness (clarity), efficiency (minimum number of steps/time), and comprehensibility (help system).
- Testing for ‘user-friendliness’.
- Identify differences between user needs and the product delivered.
- To be done early in the cycle and performed by real users; redone in the middle/end of the cycle.

2.2.4 Integration Testing

- Testing of combined parts of an application to determine if they function together correctly. The ‘parts’ can be code modules, third-party applications, etc.

2.2.5 Functional testing

- Functional Testing is a black-box type of testing geared to the functional requirements of an application. The following are types of functional testing:
  - Specification Verification testing. The QA group checks the system against the related design documentation or specifications. The team compares the program’s behaviour against every word of the documentation.
  - State Transitions. Test the system’s capability to switch transitions and not lose track of its current state.
  - Multi-Processing. Test the system’s capability to multi-task, and verify that multiple users can use it at the same time.
  - Error Recovery. Test every error message of the program. Verify that a message is displayed and that it is the correct message. Verify the system can anticipate errors and protect against them, notice error conditions, and deal with a detected error in a reasonable way.
  - Output. Test the output of the program, whether displayed, printed, graphed or saved. Verify the look, spelling, usability and legibility.
- Sub-tests of functional testing include Equivalency, Boundary Conditions, Random and Calculations (the 1-10 example is worked in the sketch after this list):
  - Equivalency. Test the program response by selecting only unique paths. Example – valid input is 1-10; test 2 and 9. There is no need to test the remaining values since the same functionality is executed.
  - Boundary Conditions. Test the program response to all extreme input and output values. Example – valid input is 1-10; test 0, 1, 10, 11.
  - Random. Test the program response by selecting unusual input values. Example – valid input is 1-10; test @, -1, 9.7.
  - Calculations. Test that the program computes calculations correctly.
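The sketch below applies the equivalency, boundary and random cases to a hypothetical validator that accepts the integers 1-10 entered as text, assuming Python with pytest.

    import pytest

    def accepts(text):
        # Hypothetical function under test: valid input is 1-10.
        try:
            number = int(text)
        except ValueError:
            return False
        return 1 <= number <= 10

    @pytest.mark.parametrize("text, expected", [
        ("2", True), ("9", True),        # equivalency: one value per unique path
        ("0", False), ("1", True),       # boundary: below / on the lower limit
        ("10", True), ("11", False),     # boundary: on / above the upper limit
        ("@", False), ("-1", False), ("9.7", False),  # random: unusual inputs
    ])
    def test_accepts(text, expected):
        assert accepts(text) == expected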

2.2.6 System Testing

- Black-box type of testing based on the overall requirements specification; covers all combined parts of a system.
- Complete system testing does NOT repeat functionality testing.
- System testing includes Performance, Load, Volume, Memory, Compatibility, Conversion and Recovery testing. These types of testing are described in the next sections.

2.2.7 End to End

- Testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, functionality, applications, or systems, as appropriate.

2.2.8 Performance

- System performance is measured against requirements under various conditions. The objective is performance enhancement.
- Performance can be tested using white box or black box techniques:
  - Developers perform white box testing by monitoring the time spent in modules, paths, and on specific types of data.
  - The test team performs black box testing using benchmarking to compare the latest version’s performance to previous versions (see the sketch below).
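A black box benchmark can be as simple as the sketch below, assuming Python; the operation being timed and the previous version’s baseline figure are hypothetical placeholders.

    import timeit

    PREVIOUS_VERSION_SECONDS = 0.85  # assumption: measured on the prior build

    def operation_under_test():
        # Stand-in for the feature being benchmarked.
        sorted(range(100_000), key=lambda n: -n)

    # Average over ten runs, then compare against the recorded baseline.
    elapsed = timeit.timeit(operation_under_test, number=10) / 10
    print(f"current: {elapsed:.3f}s  previous: {PREVIOUS_VERSION_SECONDS:.3f}s")
    if elapsed > PREVIOUS_VERSION_SECONDS * 1.10:
        print("FAIL: more than 10% slower than the previous version")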

2.2.9 Load Testing

- Test the system’s limits (a minimal stress-test sketch follows this list). Sub-tests include:
  - Stress testing – peak bursts of activity that overwhelm the product; long tasks.
  - Volume testing – the largest task the program can deal with.
  - Memory testing – how memory and storage are used by the program; low-memory conditions.
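The sketch below simulates a peak burst of concurrent activity against a single endpoint, assuming Python with the requests package; the URL, burst size and worker count are arbitrary placeholders.

    from concurrent.futures import ThreadPoolExecutor
    import requests

    URL = "http://localhost:8080/search"  # assumption: endpoint under test
    BURST_SIZE = 200                      # assumption: chosen peak load

    def one_request(_):
        # A single unit of load; count any error or non-200 as a failure.
        try:
            return requests.get(URL, timeout=10).status_code == 200
        except requests.RequestException:
            return False

    with ThreadPoolExecutor(max_workers=50) as pool:
        results = list(pool.map(one_request, range(BURST_SIZE)))
    print(f"{results.count(False)}/{BURST_SIZE} requests failed under the burst")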

2.2.10 Security and Access

- Verify that the program’s security requirements are met.
- Test how easily an unauthorized user can gain access to the program.

2.2.11 Configuration / Hardware Compatibility testing

- Test a range of computers. Even if the software works on one model, test it against differing printers, peripherals, memory, and internal logic cards.
- For web or client/server applications, test different hardware, software and browser configurations.
- Technical testers are needed to test drivers, hardware settings, connectivity issues, system conflict resolution and printers.

2.2.12 Recovery

- Test the contingency features built into the application for handling interruptions.
- Test how well the system recovers from crashes, hardware failures, and other catastrophic problems.
- Test the system’s backup, restore and restart capabilities.

2.2.13 User Acceptance testing

- Final testing based on the specifications of the end-user or customer. The customers use the product in the same way they would while performing their daily jobs.
- Compare the end product to the needs of the end users; strive for realism.
- Ensure your test group runs all acceptance tests before delivering the product to the customer.
- Alpha Testing – performed by end users inside the company who were not involved in the development of the product.
- Beta Testing – performed by a subset of actual customers outside the company, who use the product in the same way they would if they had bought the finished version, and give you their comments.

2.2.14 Regression testing

- Reuse old test cases to determine whether a:
  - Fix or modification of the software does what it is supposed to do.
  - Fix does not introduce new errors.
- Retest fixed bugs and recheck the integrity of the program as a whole. Include rerunning the difficult tests – the ones the program will probably fail if it is broken.
- Regression testing takes a large percentage of time in a development cycle, so regression tests are often automated (see the sketch after this list).
- Regression testing can be typed as:
  - Unit Regression testing – retest a single program or component.
  - Regional Regression testing – retest modules connected to the program or component.
  - Full Regression testing – retest the entire application after a change has been made.
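One hedged way to organize automated regression runs by scope, assuming Python with pytest, is custom markers; all names below are illustrative and the last two tests are stubs.

    import pytest

    @pytest.mark.unit_regression
    def test_fixed_bug_rounding():
        # Retest a previously fixed defect in a single component:
        # 2.675 is stored as a float slightly below 2.675, so it rounds down.
        assert round(2.675, 2) == 2.67

    @pytest.mark.regional_regression
    def test_invoice_module_talks_to_tax_module():
        # Retest modules connected to the changed component (stubbed).
        assert True

    @pytest.mark.full_regression
    def test_end_to_end_order_flow():
        # Rerun a difficult whole-application scenario (stubbed).
        assert True

    # Example invocations (markers would be registered in pytest.ini):
    #   pytest -m unit_regression
    #   pytest -m "unit_regression or regional_regression"
    #   pytest    # full regression: run everything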

2.2.15 Adhoc Testing

- Creative, informal tests that are not based on formal test cases and are thus not documented by the test team.
- These tests rely on the testers having a significant understanding of the software before testing it.
- The tests are random and based on error guessing, knowledge of the system and gut feel.
- Tests include:
  - Initial and Later States. A program may pass the first time through but fail the second. Or a user may back up in the middle of a routine and restart.
  - Against the Grain. Verify that appropriate error handling occurs when the tester does not follow the correct sequence of steps for a piece of functionality.
  - Race Conditions. Test the collision of two events.
  - Extreme Data Conditions. Test fields with special characters, negative numbers and incorrect dates.
  - Three Finger Salute (Control-Alt-Delete). Test that the system is recoverable after a soft reboot. Also test against a hard reboot (powering off the personal computer).

2.2.16 Parallel Testing

- Test new software alongside production software using the same data, and verify that the results match (see the sketch below).
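A minimal sketch of the idea, assuming Python; both payroll functions and the sample records are hypothetical stand-ins for the production system and its replacement.

    def production_payroll(record):
        # Stand-in for the system currently in production.
        return round(record["hours"] * record["rate"], 2)

    def new_payroll(record):
        # Stand-in for the replacement under test.
        return round(record["hours"] * record["rate"], 2)

    records = [
        {"id": 1, "hours": 40.0, "rate": 17.50},
        {"id": 2, "hours": 38.5, "rate": 22.00},
    ]

    # Run both systems on the same data and report any divergence.
    mismatches = [r["id"] for r in records
                  if production_payroll(r) != new_payroll(r)]
    print("mismatched record ids:", mismatches or "none")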

2.2.17 Benefits Realization Testing

- Assess the value or business returns obtained from the application. Is it likely that the application will deliver the originally projected benefits?

2.3 Web Specific Testing

- Due to a more complex user interface, technical issues and compatibility combinations, the testing effort required for web applications is considerably larger than for applications without a web interface.
- Web testing includes not only the test types defined previously in this document, but also several web-specific test types.

2.3.1 Compatibility Testing

- This type of testing determines if an application, under supported configurations, performs as expected with various combinations of hardware and software.
- Compatibility testing includes the testing of:
  - Different browsers. For example Netscape Navigator, Internet Explorer, AOL and their various releases.
  - Different operating systems. For example Windows ’95, ’98, NT, Unix, Macintosh.
  - Different monitor resolutions. For example color resolutions, font settings, display settings.
- Compatibility testing can also include testing:
  - Different hardware configurations. For example PC, laptop, hand-held device and their associated screen sizes.
  - Different internet connections. For example proxy servers, firewalls, modems, direct connections.
  - Different desktops. For example email, mouse, sound, display, video, extensions, network software.

2.3.2 Security Testing

- Security testing determines the ability of the application to resist unauthorized entry or modification. Security testing can be accomplished in the following ways:
  - Audits – ensure that all products installed on a site are secure when checked against known vulnerabilities.
  - Attacking the server through vulnerabilities in the network infrastructure.
  - Hacking directly through the web site and its HTML.
  - Cookie attacks – finding patterns in cookies and attempting to crack the algorithm.
- Test the following: firewalls, passwords, SSL, logging, virus detection, routers, networks, encryption, proxy servers.

2.3.3 Performance Testing

- Test performance only under conditions that are under our control. For example, one cannot guarantee fast response over the public Internet when the intermediate nodes and channels are not under our control. Thus, the focus of the test team is to verify only that the server response and page rendering are within the specified requirements.
- Stages of performance testing include: concurrency, response time, performance under increased load, capacity and stress.

2.3.4 Usability Testing

- Usability testing identifies the differences between user needs and the product delivered. Tests include:
  - Navigation – Determine if the web application allows freedom of movement, flexibility, consistency, clearly marked paths, personalized service, quick delivery and immediate answers. The site must be easy, logical, intuitive, quick and error-free.
  - Performance – Determine if the response time is within the acceptable time frame a typical user will wait. In general, an average user will wait a maximum of 7 seconds for a page to download successfully.
  - Multimedia – Test the use of animation, audio and video.
  - Accessibility – Identify and repair significant barriers to access by individuals with disabilities. Test video, auditory and motor accessibility.
  - Internationalization – Identify whether the web application is North America-specific or applicable to users in Asia, Europe and the rest of the world. Examples of testing include: languages, time zones, punctuation, measurement.

2.3.5 Functionality Testing

- Functionality testing includes all the types of tests listed in the previous section, with the addition of the following:
- Link Testing – Verify that each link takes you to the expected destination, that it is not broken, and that all parts of the site are connected (a checker sketch follows this list). Types of links to test include:
  - Embedded links – underlined text indicating that “more stuff” is available.
  - Structural links – links to a set of pages that are subordinate to the current page, and reference links.
  - Associative links – additional links that may be of interest to the user.
- Web Technology Testing – Test the different technologies your web application uses. Technologies tested include HTML, XML, Java, JavaScript, ActiveX, CGI-Bin scripts, Cascading Style Sheets, caching, proxies, and protocols.
- User Interface Testing – Ensure that web guidelines or accepted good practices for web pages are followed. Verify the proper use of language, white space, scrolling, content, color, fonts, and graphics. Test the use of frames, tables and forms.
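A minimal link-checking sketch, assuming Python with the requests and beautifulsoup4 packages; the start URL is a placeholder for the site under test.

    from urllib.parse import urljoin
    import requests
    from bs4 import BeautifulSoup

    START_URL = "http://localhost:8080/"  # assumption: site under test

    # Fetch one page, extract every anchor, and flag broken destinations.
    page = requests.get(START_URL, timeout=10)
    soup = BeautifulSoup(page.text, "html.parser")

    for anchor in soup.find_all("a", href=True):
        target = urljoin(START_URL, anchor["href"])
        try:
            status = requests.head(target, timeout=10,
                                   allow_redirects=True).status_code
        except requests.RequestException:
            status = None
        if status is None or status >= 400:
            print(f"BROKEN: {target} (status {status})")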
