Generic Test Checklist (version 97-03-26)

Application: App X    Version: 1.00.00
Tester: John Doe    Company: Your Company    Group: Your Group
Print date: 10/14/08    Printed: 11/25/97 7:47 AM
Template: c:\testing\docs\testkit\07_generictestchecklist.rtf (template by Matt Pierce)
Create date: 12/08/96 12:19 PM    Filesize: 113,428 bytes

Checklist items are based predominantly upon several texts (especially Cem Kaner's book, the bug list appendix), plus experience and other sources. I have found that when I miss bugs, it is convenient and effective to build a new check into the checklist to prevent a recurrence of the missed bug while testing future builds or apps. (Every recognized test failure is a lesson learned whose solution gets built into the master checklist; a mistake is not necessarily a bad thing when you can build upon it.)
Revision History
Tester    Date    Comments

People Responsible for App
Test lead:    Tester:    Developer:    Developer lead:    Program manager:

Other People Associated with App
Support analyst:

Brief Application Description

Schedule
#    Milestone name    Sched'd date    Actual date

Checklist Instructions
Document file notes:
• RTF file: The document is saved as an RTF file to avoid compatibility confusion (is the doc saved as WinWord 6.0, 95, or 97... and what version of Word do you have?).
• Read only: The document has the read-only attribute set. This is the ancient method of converting a document into a template: when the user goes to save the read-only file, the Save As dialog pops up every time. (Done because the file is saved as an .rtf.)
• Question format: All checklist line items are written so that answers of yes are 'a bad thing' and answers of no are 'a good thing.'
Preparation

Front End Tools

Tracking tools:
Bug tracking: Testing must have a bug tracking tool. Use RAID, an Access app, etc.
Test case manager: Used to track the status of test cases, avoid duplication of effort (re-use test cases with slight modification), etc. Use an Access .mdb, Word .doc, etc.
Configuration management: Use Visual SourceSafe (or a similar product) to save test cases, test plans, etc. Also use it to look for code changes, churn rate, version control, etc.
Diagnostic tools:
Virus scan: Use F-Prot or a similar virus scanner to check master disk sets. Scan test machines periodically.
NT Task Manager: Tracks resource usage, threads, performance, etc.
WinDiff / ExamDiff: Compare file differences. Take registry dumps, dir c:\*.* /s listings, or .ini file snapshots over time; take snapshots both before and after the broken state occurs, then compare. (Or use fc.exe on NT, too.)
WPS: Debug tool that lists all modules loaded into memory at a given point. (Applet comes with Windows NT.) Save the list and WinDiff multiple lists over time, or across OSes.
Perfmon: NT Performance Monitor. Check out processor, RAM, disk access, network, and server performance. (In NT 4.0, type 'diskperf' at the command prompt to activate disk stats after reboot.)
SysInfo: MS Office applet that lists all info about currently loaded modules in memory. (Comes with MS Office.)
Shotgun: Compares differences in the registry between snapshots taken over time. Does much more.
WMem: Checks system resources (User & GDI) and memory used by loaded apps.
Dr. Watson: Debug tool that lists machine configuration, task list, module list, and stack dump.
Performance profiling: Used to determine where bottlenecks occur in code, etc. Built into VB4.0 Enterprise.
Complexity analysis: Cyclomatic complexity metric, McCabe's metric, etc. to determine the complexity of source code.
Coverage analysis: Ensure that testing hit all lines of code, all functions, etc.
Automation tools:
Dialog check: Runs a standard battery of tests against dialog boxes.
Smart monkey: Ivan, Koko, etc. Excellent tools for quickly automating.
Simulators: While testing modules, drivers or other simulators can provide inputs to the functions / apps being tested. VB code is a good tool for this.
Visual Test: Automation where appropriate (repeatable).
Macro recorder: Used to reproduce intermittent bugs easily without the effort of Visual Test.
Test data generator: Used to populate tables with test data (see the sketch below).
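As one way to build a quick test data generator, here is a minimal T-SQL sketch; the table name (test_customers) and its columns are hypothetical, not part of this checklist:

    -- Populate a throwaway table with 10,000 generated rows.
    -- Table and column names are illustrative assumptions.
    CREATE TABLE test_customers (
        cust_id   int         NOT NULL PRIMARY KEY,
        cust_name varchar(40) NOT NULL,
        balance   money       NOT NULL
    )
    GO
    DECLARE @i int
    SELECT @i = 1
    WHILE @i <= 10000
    BEGIN
        INSERT INTO test_customers (cust_id, cust_name, balance)
        VALUES (@i, 'customer_' + CONVERT(varchar(10), @i), @i * 1.25)
        SELECT @i = @i + 1
    END
    GO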
Back End Tools

DB native GUI tools: Dig them up. Enterprise Manager, Security Manager, NT User Manager for Domains, etc.
DB command line tools: Dig them up. isql, etc.
VB / Access apps: Create these to test things out.
Batch scripts: Run long processes overnight. Business rule enforcement, etc.
Schema comparison tool: Compare schemas between servers (see the sketch below).
DBA: Consult the local DBA for other tools.
Event Viewer: Look at the NT Event Viewer for a history of silent errors.
Data elements list: List of databases, tables, stored procedures, etc. with descriptions.
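Lacking a dedicated schema comparison tool, a rough substitute is to dump each server's schema to text and WinDiff the outputs. A minimal T-SQL sketch against the standard sysobjects/syscolumns/systypes system tables (exact system-table columns can vary by server version):

    -- Dump user tables and their column definitions in a stable order,
    -- so output files from two servers can be compared with WinDiff.
    SELECT o.name AS table_name, c.name AS column_name,
           t.name AS type_name, c.length
    FROM   sysobjects o, syscolumns c, systypes t
    WHERE  o.id = c.id
      AND  c.usertype = t.usertype
      AND  o.type = 'U'        -- user tables only
    ORDER BY o.name, c.colid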
Project Risk / Contingency List
Many of the items here originated at http://www.azor.com/codeplan.html

Product:
Reliability required: Too much? Too little?
Database size: Too big? Too small?
Complexity: Too complex / time consuming? Overly simplified?
…

Computer:
Execution time constraint: Impossible objective given the hardware platform? Too loose, users angry?
Main storage constraint: Too much, unfeasible?
Platform volatility: OS too new? Too old?
Development code turnaround time: Builds take too long?
…

Personnel:
Tester capability: Any weak links?
Automation experience: If automation is necessary, any weak links here? Inexperienced? Spend too much time automating uselessly?
Application experience: New to the app? Misunderstand or unaware of basic business rules / functionality / user scenarios?
…

Project:
Latest techniques: Still using ancient techniques / tools that are inefficient? Using techniques / tools that are brand new, kinks not worked out, inefficient?
Schedule tolerance: Are float time and resource loading too inflexible?
…
*Unique* Component Testing

Unique component testing is concerned with testing the application-specific business rules, features, and user scenarios. Unfortunately, everything in this section is unique and must be created manually. I have not figured out exactly what to do with this section beyond filling in a brief description of each feature / business rule / user scenario. Detailed test cases should then be built from the brief descriptions.
New/Changed Features List
#1-xxxxxx: This feature …
#2-xxxxxx: This feature …
#3-xxxxxx: This feature …
#4-xxxxxx: This feature …
#5-xxxxxx: This feature …
#6-xxxxxx: This feature …
#7-xxxxxx: This feature …
#8-xxxxxx: This feature …

New/Changed Business Rules List
#1-xxxxxx: This business rule requires that …
#2-xxxxxx: This business rule requires that …
#3-xxxxxx: This business rule requires that …
#4-xxxxxx: This business rule requires that …
#5-xxxxxx: This business rule requires that …
#6-xxxxxx: This business rule requires that …
#7-xxxxxx: This business rule requires that …
#8-xxxxxx: This business rule requires that …
User Scenarios / App Critical Path List
#1-xxxxxx: Users typically do x, y, and z as outlined in the accompanying user scenario. This pathway must work. …
#2-xxxxxx: Users typically do x, y, and z as outlined in the accompanying user scenario. This pathway must work. …
#3-xxxxxx: Users typically do x, y, and z as outlined in the accompanying user scenario. This pathway must work. …
#4-xxxxxx: Users typically do x, y, and z as outlined in the accompanying user scenario. This pathway must work. …
#5-xxxxxx: Users typically do x, y, and z as outlined in the accompanying user scenario. This pathway must work. …
#6-xxxxxx: Users typically do x, y, and z as outlined in the accompanying user scenario. This pathway must work. …
#7-xxxxxx: Users typically do x, y, and z as outlined in the accompanying user scenario. This pathway must work. …
#8-xxxxxx: Users typically do x, y, and z as outlined in the accompanying user scenario. This pathway must work. …
*Generic* Component Testing

Component testing is concerned with testing the various modules that comprise the application. The generic line items in this section can be applied across many different applications with little modification.

Basic Functionality (Maintenance Testing Only)
Skip this entire subsection unless you are doing maintenance testing. Build turnaround times (install, learn, and test the app) are 2-4 days for typical maintenance patches. These items do not apply to 95% of testers, and should be ignored by them.
Initial research
Existing documentation: Dig up old manuals, test plans, test cases, materials in files, materials in Visual SourceSafe, etc.
Screen shots: Where appropriate (maintenance testing, upgrades of existing products, etc.), take screen shots of all forms, dialogs, etc. for a compact view of app functionality.
Fact gathering: After reviewing the existing documentation, go on a fact-gathering mission to fill in the holes. Speak with developers and support analysts. Learn the server names, server owners, passwords and logins, build version numbers, etc. that are pertinent to your testing.
Learn app (click test): Click through all buttons, tabs, etc. to ensure they work properly. Work recursively from the upper left corner through the lower right of each form and dialog in the app. (The objective is to learn the new app.)
Smoke test: Does the new build pass the standard smoke test? (Use VT4.0, a smart monkey, a macro recorder, or manual tests.)
Front End

Front end testing is concerned with testing through the application interface. This is standard black box testing. Please note that the items in this section are listed in order of priority: items at the top are higher priority than items at the bottom. The prioritization is based upon Marv Parson's observations given in a video lecture on the history of testing. His excellent observation is that first and foremost, a tester must be sure that there are no data-related bugs. Second, the setup must work. Third, there should be no 'visible' system crashes. Fourth, everything else 'should' work (UI, localization, etc.).
Data Testing

Accuracy / integrity:
Calculations, reports: Calculation errors in reports? Wrong data loaded? <Link to back end testing section.>
Calculations, app: Bad underlying functions? Overflow or underflow? Outdated constants? Misplaced parentheses? Wrong order of operations?
Calculations, back end: Can the tester generate queries showing that calculated values from a query differ from the app's actual calculated values (in the table)? (If yes, double-check the test query math. See the sketch below.)
Div by 0: Can the tester force this error condition?
Truncate: Truncates instead of rounding the precision of numbers? Incorrect formula / approximation for conversion?
Compatibility: Works with existing data structures? (Print reports or other summaries with the old and new systems. Compare.)
Test data: Generate a sufficient amount to thoroughly test the app.
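A minimal sketch of that back end calculation check: recompute a derived value with an independent query and list disagreements. The table and column names (order_items: order_id, qty, unit_price, line_total) are hypothetical:

    -- Rows returned are candidate calculation bugs; per the checklist
    -- item above, double-check the test query's own math before filing.
    SELECT order_id, qty, unit_price, line_total,
           qty * unit_price AS recomputed_total
    FROM   order_items
    WHERE  line_total <> qty * unit_price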
Database connectivity:
Save: Does it fail? Is all data saved?
Retrieval: Does it fail? Is all data retrieved?
Installation Testing

Install scenarios:
Clean machine: Does setup fail? Does the app fail when run? (First wipe the clean machine and reinstall. Write down the setup specs.)
New build over old build: Does setup fail? Does the app fail when run? When you run the old app, does it recognize the need to upgrade the install (reinstall)?
New build over new build: Does setup fail? Does the app fail?
Uninstall: Does Start > Settings > Control Panel > Add/Remove Programs > uninstall app fail? Does a fresh install following an uninstall fail?
Reinstall: Does reinstall fail?
Install path: Do long filenames fail? Do spaces in the path fail? Do other drives fail? Does a non-default path fail?
Version control tests:
Build number: Did the wrong build number get placed on the app? Wrong build number in the file properties for the .exe? Wrong build number in the back end database build?
Stomp test (matrix): If other apps install over this one, do newer/older DLLs/OCXs conflict?
Install component variance: Any varying app components across varying install scenarios (WinDiff directories of multiple computers and check dates, versions, etc.)?
Missing required components: Any required files missing? (Make a list of all components [DLLs, VBXs, OCXs, etc.] to compare with what actually gets installed on a clean machine.)
Misc. install tests:
Stress test: If you hit Cancel while installing, does the app crash? Does low disk space fail the install? Do disks in the wrong sequence fail the install?
Icon / workgroup: Is the workgroup and/or icon not created? Is the Start button shortcut missing any necessary command line parameters?
Auxiliary components: Do ODBC or other third-party supplemental products need to be installed? If yes, are they missing?
DSN setup: Have ODBC DSNs been set up incorrectly?
Setup dialog: Does Cancel at a dialog crash setup.exe? Is there any residue left in the registry? .ini files left orphaned? Shortcuts (icons) left orphaned?
Registry changes: Did setup fail to perform all required registry changes? Did it break the registry? (Use shotgun.exe or windiff.exe to check registry dumps before and after installs, then review the differences.) Do .ini files exist and need the same research (do not forget odbc.ini and win.ini)?
Boundary Condition Testing
Many items taken from the Cem Kaner video.

Data:
Dataset: Max / min size problems?
Numeric: Problems with mins / maxes / absurds?
Alpha: Problems with ASCII 1-32? 128+? a-z? A-Z? (Remember to check 13 (Enter), 9 (Tab), 127 (Delete), etc.)
Numerosity: Problem with the number of elements in a list?
Field size: Problems with field size (n chars, long in place of int, etc.)? (See the sketch below.)
Error guessing: Any inputs that seem most likely to break the system that actually break it?
Data structures: Failure within data structures? Break at constraints for elements? (Analyze data structures for boundary conditions; break the constraints.)
Files: File size problem?
Timeouts: Query timeouts?
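A minimal T-SQL sketch of field-size and numeric boundary probes; the table (bc_probe) and its varchar(30)/int shape are hypothetical. Run the probes one at a time so a statement error does not mask the others; over-limit cases should fail cleanly, not truncate silently:

    CREATE TABLE bc_probe (code varchar(30) NOT NULL, qty int NOT NULL)
    GO
    INSERT INTO bc_probe VALUES (REPLICATE('x', 30), 2147483647)  -- at max: should succeed
    INSERT INTO bc_probe VALUES (REPLICATE('x', 31), 0)           -- one past max length
    INSERT INTO bc_probe VALUES ('ok', 2147483648)                -- int overflow
    INSERT INTO bc_probe VALUES ('', -2147483648)                 -- empty string, int min
    SELECT code, DATALENGTH(code) AS stored_len, qty FROM bc_probe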
App:
Initial uses: Does the app fail or act peculiar at first run? Anything strange at second run?
Loops: Boundary failure at a loop counter (try mins, maxes, one less than max, etc.)?
Repeats: If you repeat the same operations over and over 100,000 times, does the app fail?
Dates: Problems with min / max dates? Year 2000? Year 1899? Time boundary problems (0, 12, 24, etc.)?
Space: Location boundary problems (too close, too far away)?
Memory: Boundary failure in memory (not a stress test)?
Syntax: Any valid inputs that do not work? Any invalid inputs that do work?
Hardware:
Monitors: Problems with old monitors? Too-new monitors? Too old/new drivers? Color problems?
Hard drive: Problems with old drives? Too-new drives? Problems with the number of drives? With the drive being full? With the size of the drive? Combinations of drives with CD-ROMs, etc.?
CPU: CPU too old? Too new? Too slow? Too fast?
Printers: Problems with old printers? Too-new printers? Too old/new drivers? Color problems? Shade problems at extremes?
Miscellaneous: Mouse/trackball/touchpad too old/new? Keyboard too old/new? Pens too old/new? Drivers for each too old/new?
Timeouts: Timeouts with communications?
Environment (Multiple Configurations of Hardware / Software)

Environment matrices:
Interoperability matrix: Does it not run with Office 4.3? Office 95? Office 97? Not run with IE 3.1b? Not run with MS Works?
Resource matrix: Are there memory leaks? (Proceed through the app and record system resources at various points, or graph them, depending on the tool used.)
Running-with matrix: Do screen savers, virus protection, Share, Stacker, Explorer, Find, the MS Office toolbar, Exchange/Outlook, ATM, and other memory-resident applets cause problems? (Build a test matrix.) Does activation cause errors if you switch to other apps currently running (Alt-Tab, Ctrl+Esc, double-click an icon on the taskbar)?
OS compatibility matrix: Does the app not run on Win95? Win NT 4.0? Win NT 3.51? WFW? Novell NetWare?
Video matrix: Do screens look bad at varying resolutions (640x480, 800x600, etc.)? How about varying colors (16, 256, high, etc.)? VGA vs. SVGA? B/W monitors?
Network configs matrix: Does it not run on other computers? Fail against other servers? If you vary network settings, does anything break?
ODBC version/make matrix: Is the user unable to run the app with various versions and/or makes of ODBC drivers? (Make a matrix.)
Printer matrix: Does output fail on some printers? Fail with some drivers? Do any configurations or settings blow up the app? (Be creative.)
Input matrix: Does input fail with a keyboard (standard vs. international)? Mouse? Pen? Trackball? Touchpad? Digital pad?
Disabled matrix: Does the app accommodate poor sight (preferences)? The deaf?
Typical configuration errors:
Device: Wrong device? Wrong device address? Device unavailable? Device returned to the wrong type of pool?
Channel: Timeout problems? Noisy channel? Channel goes down? Ignores / exceeds throughput limits?
Disk: Wrong storage device? Does not check the directory of the current disk? Doesn't close a file? Unexpected end of file? Disk sector bugs? Other length (or filesize) dependent errors?
Instructions / return codes: Wrong operation or instruction codes? Misunderstood status or return code? Device protocol error? Wrong assumptions about whether the device is, is not, should be, or should not be initialized?
Miscellaneous: Underutilizes device intelligence? Paging mechanism ignored or misunderstood?
UI Testing

Communication:
Tool tips & status bar: Missing command button help tips (yellow boxes) when the mouse pointer is in proximity? Missing status bar tips/descriptions (at the bottom of the screen) when the mouse pointer is in proximity?
Missing info: No instructions? Cursor not present? Cursor does not switch to an hourglass? No status box during long delays? States which appear impossible to exit?
Enable/disable toolbar buttons: Should toolbar buttons be enabled / disabled to provide clarity about available operations?
Wrong, misleading, confusing info: Factual or spelling errors? More than one name for the same feature? Information overload? Product support email address or phone number invalid (Help > About, splash screen)?
Help text & error messages: Inappropriate reading level? Verbose? Emotional? Inconsistent? Factual errors? Truncated message text? Missing text? Duplicate words?
Training documents: Factual or spelling errors? Missing information? Incorrect information? Not properly translated? Spot-check the index: wrong page numbers? Spot-check the table of contents: wrong page numbers? Wrong app name, version, or other significant datum?
Dialog boxes:
Keyboard: Any keys that do not work? (Move through the dialog with Tab, Shift-Tab, and hotkeys.)
Mouse: Any mouse actions that do not work? (Click, double-click, right-mouse click, etc.)
Cancelling: Any method unavailable? (Escape key, Close from the control-box menu, double-click the control box, hit the Win95 X button.)
OKing: Any method unavailable? (Enter, double-click a listbox item, etc.)
Default buttons: No default button on the dialog (the default when you hit Enter)? No Cancel button on the dialog (the default when you hit Escape)?
Layout error: Wrong size? Non-standard format?
Modal: Is the window set up improperly (modal or not modal)? Can the user click behind the dialog and cause problems?
Window buttons: Should the control menu or min/max buttons be visible?
Sizable: Is the window frame wrong (should the dialog be fixed / sizable)?
Title / icon: Is the dialog improperly titled? Is the dialog missing or using the wrong icon?
Tab order: Incorrect tab order (jumps all over)?
Display layout: Poor aesthetics? Obscured instructions? Misuse of flash and/or color? Heavy reliance on color? Layout inconsistent with the environment? Screen too busy (need tab controls or toggle option buttons to reduce complexity/clutter)?
Boundary conditions: Any boundary problems? (Reference the boundary conditions section of this checklist for all text boxes in every dialog.)
Sorting: Are drop-down lists not sorted where they should be? (Check list boxes, combo boxes, menus, etc.)
Active window: Does incoming mail kill the app? Do Help / Quick Preview / Tip of the Day / cue cards / wizards / other apps running in the background or on top of the app cause problems?
MDI forms: Are they unnecessary? I don't like them (unnecessarily complex).
Memory leak test: Does the app leak memory as the user goes in and out of dialog boxes dozens of times? (Run Task Manager or another tool in conjunction with a quick VT or macro recorder 'script' to exercise the dialogs.)
Command structure:
Time wasters: Garden paths (deeply nested commands)? Choices that do not work or are stubs? "Are you really, really sure?"
Menus: Too complex? Too simplistic? Too many paths to the same place? Can't get there from here? Hotkey duplicates? Hotkeys idiosyncratic (not standard with other apps)? Hotkeys that do not work? Unrelated commands tossed under the same menu item?
Popup menus: Does the right mouse button invoke a popup menu? Should it, for efficiency?
Command line parameters: Forced distinction between upper/lower case? Unable to reorder parameters (locate anywhere on the parameter line)? Abbreviations/names that are not allowed? No batch input (for testing, faster runs, etc.)? Command line too complex?
Keyboard: Failure to use cursor, edit, or function keys? Non-standard use of cursor, edit, or function keys? Failure to filter invalid keys at input? Failure to indicate keyboard state changes? Failure to scan for function and control keys?
State transitions: Can't do nothing and leave? Can't quit mid-program? Can't stop mid-command? Can't pause?
Program rigidity:
User options: Can't do what you did last time? Can't find out what you did last time? Can't execute a customizable command? Are there side effects to preference changes? Is there infinite tailorability?
Control: Who is in control, computer or user, and is it appropriate? Is the system novice-friendly? Is it hostile to experienced users? Artificial intelligence and automated stupidity? Superfluous information requested? Unnecessary repetition of steps?
Output: Limited to certain data or formats? Can't redirect output? Can't control layout? Can't edit labeling (tables/graphs)? Can't scale graphs?
Preferences:
User tailorability: Can't toggle on/off noise, case sensitivity, hardware, automatic saves, etc.? Can't change device initializations, scrolling speed, etc.?
Visual preferences: Can't toggle: scroll bars? Status bar (at the bottom of the screen)? Tool tips? Window maximize/minimize? Hidden windows? Default view? Dialog colors? Complex/simple menu structure?
File: Defaults to the 4 most recent files? Default directory for Open / Save As?
Localization: Defaults do not match the locale? (Date / time, currency, etc. should match system defaults.)
Usability:
Accessibility: Can users enter, navigate, and exit the app easily?
Responsiveness: Can users do what they want, when they want, easily?
Efficiency: Can users do what they want in a minimum number of clear steps? (Wizard?)
Comprehensibility: Do users understand the product structure, help system, and documentation (or are they too complex, incomplete, etc.)?
User scenarios: Be sure to write up test cases that are user scenarios and simulate how a user will use the system.
Ease of use: Is the app easy to learn and use?
Localization:
Translation: Mistranslated text? Untranslated text? Error messages not translated? Text within bitmaps that needs to be translated (if so… oh shit)? Macro language not translated?
English-only dependencies: Dependencies on tools or technologies only available in English?
Cultural dependencies: Dependencies on concepts or metaphors only understood in English?
Unicode: Any issues here?
Currency: Does it not match the locality (British pound, Japanese yen, etc.)?
Date/time: Any problems with the different formats used in various countries?
Constants: Any constants that vary with locality (financial / accounting equations, tax rates, etc.)? If so, how are they handled?
Dialog contingency: Can dialogs not be resized to 125% if translation requires additional space?
Performance Testing

General: Slow app? Slow echoing? Poor responsiveness? No type-ahead? No warning that an operation will take a long time?
Benchmarks: Are there any discrepancies in performance among similar operations? Are some operations too slow? (Do many; for reports, query times, etc., look for hooks in the app to log out this info. Develop automated processes if necessary to carry them out. Write up the results in a table / matrix.)
Profiling: Are there any significant bottlenecks in performance? (A tool to detect where in the code CPU time is going.)
Modem considerations: Graphics and help at 300 baud? Do data transfers of large sets kill the system?
Load Testing

Volume test: Fails at a large size of input/output? (Try batch processing where available to speed up testing; see the sketch below.)
Stress test: Fails at rapid input? Fails at preemptive input (starts before the last input finished)?
Execution limit test: Fails with 2 instances of the app running? 5 instances? 10 instances? 40 instances? 80 instances?
Window limit test: Fails if you run several apps (especially apps with lots of buttons = windows)? How does the app respond? (It should respond with a low-memory error, not a fatal one.)
Storage test: Fails when the hard drive fills up?
Memory issues: Fails when limited RAM exists? (Eat RAM to check app response and survivability. Limit other resources and run the app.)
Resource not returned: Doesn't indicate when done with a device? Doesn't erase old files from disk? Doesn't return unused memory? Wastes computer time?
Prioritize tasks: Is the app failing to prioritize tasks? (It should do lower-priority items during down times. The app might prioritize but never get to low-priority tasks; add a check to force action after xx days have elapsed.)
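A minimal T-SQL sketch of a batch-driven volume test; the load_probe table is hypothetical, and the row count and payload size are knobs to turn while watching for failures, log growth, and timeouts:

    CREATE TABLE load_probe (id int NOT NULL, payload varchar(255) NOT NULL)
    GO
    DECLARE @i int, @t0 datetime
    SELECT @i = 1, @t0 = GETDATE()
    WHILE @i <= 100000
    BEGIN
        INSERT INTO load_probe (id, payload) VALUES (@i, REPLICATE('z', 255))
        SELECT @i = @i + 1
    END
    SELECT COUNT(*) AS rows_loaded FROM load_probe
    SELECT DATEDIFF(ss, @t0, GETDATE()) AS elapsed_sec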
Error Handling

Error prevention:
Disaster prevention: No backup facility? No undo? No "are you sure" confirmations? No incremental saves?
Version control: Inadequate version control (at startup)?
Initial state: Inadequate initial state validation? Doesn't check for missing (or outdated) components?
Input: Inadequate tests of user input? Inadequate protection against corrupt data? Inadequate tests of passed parameters?
OS bugs: Inadequate protection against OS bugs?
Security: Inadequate protection against malicious use?
Coverage: Errors that were not handled by the programmer?
Error response:
Appropriateness: Inappropriate messages? Not easily understandable?
Help: Unable to hit F1 or a Help button to get further details? (Big no-no.)
Error detection: Ignores overflow? Ignores impossible value(s)? Ignores an error flag? Ignores hardware fault or error conditions? Ignores data comparisons?
Error recovery: Lack of automatic error correction? Failure to report an error? Failure to set an error flag? Program flow returns to the wrong area? Unable to abort errors easily? Poor recovery from hardware problems? No escape from a missing disk?
Error logging: Unavailable? Not informative (lacks module name, time/date stamp, or error number and name)? Unable to toggle on/off easily?
Review error log: Any new errors entered into the error log after your testing? (Look for failed assertions and silent errors.)
Audit trail: Lack of an audit trail? Uninformative audit trail (history of use): lacks date/time stamps, user name, etc.? Does it slow down the system excessively? If so, is there an option to toggle logging off?
Miscellaneous error handling issues:
Unnecessary: Is the error not necessary? If not, why hassle the user, add to training expense, add to the workload of tech support, increase localization costs, etc.?
Error rate: Excessive error rate? (Query the data validation audit logs; check the quantity of records not promoted through per batch.)
User interaction: No user options / filters for error handling? Awkward error handling interface? No area for the user to enter comments in response to an error, for logging?
Misc. nuisances: Inadequate privacy or security? Obsession with security? Can't hide menus? Can't support standard OS features? Can't allow long filenames?
Race Conditions

Data: Do multiple simultaneous reads/writes kill the server? (Multiple updates compete so that the successor begins execution on top of data that has not completed processing of the predecessor operation. See the sketch below.)
Wrong assumptions: Assumes that one event or task finished before another began? Assumes that input will not occur during a brief processing interval? Assumes that interrupts won't occur during a brief interval? Assumes that a person, device, or process will respond quickly?
Prerequisite check: Does a task start before its prerequisites are met?
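A minimal sketch of the classic lost-update race, assuming a hypothetical accounts table; run each block from a separate connection (e.g., two isql windows) and interleave them by hand:

    -- Session A:
    BEGIN TRAN
    SELECT balance FROM accounts WHERE acct_id = 1   -- reads 100
    -- ...app computes a new balance from this (now stale) read...
    UPDATE accounts SET balance = 150 WHERE acct_id = 1
    COMMIT TRAN

    -- Session B (slipped in between A's read and A's update):
    BEGIN TRAN
    UPDATE accounts SET balance = 90 WHERE acct_id = 1
    COMMIT TRAN

    -- If the final balance is 150, B's committed update was silently
    -- lost: the app assumed one task finished before another began.
    -- A HOLDLOCK hint on the read, or an optimistic WHERE balance = 100
    -- check on the update, exposes/prevents the race.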
Security

Logins: Do authorization rules exist? (They should, to monitor violations, etc.)
Passwords: Are passwords forced to be changed every xx number of weeks? Are passwords forced to be more than 4 characters? Do passwords have mixed case? Numerals and special characters?
Encryption: Should there be encryption of data outputs? Data transmissions? Saved document/data repositories?
Security violation plan: Have procedures been established to report and punish violators?
Off-hours: Are users limited to specified hours during the day? Should the limitation exist?
Installation: Are security measures temporarily suspended during installation? If so, can a user stop the install midway and access confidential data, alter privileges, etc.?
Devices: Device use forbidden to this user/caller? Wrong privilege level specified for a device?
Automated Processes
This section needs further development.
Contingency plan: Missing a contingency plan in case automation doesn't work? (For example, the app may not be integrated soon enough to build automation scripts, or building them takes too long.)

Back End

Back end testing is concerned with testing through the back end database system. This is also black box testing, and should be coordinated with front end testing. Please note that the items in this section are listed in order of priority: items at the top are higher priority than items at the bottom. This section is in its infancy, and will require significant modification in the near future.
General

Object dependency: Run a SQL Server dependency check on objects that changed to determine the scope of impacted objects, then test all impacted objects (sp_depends; see the sketch below). Search the front end for all references to changed stored procedures / tables / other objects.
Data integrity: Ensure DI is maintained and data validation is used; anticipate errors found through manual investigation.
Data tracking: All transactions validated (import and output sets manually verified via Excel, etc.).
Data cleanup: Temp objects removed, etc.
Data recovery: Transaction failures should be recoverable (if rollbacks are properly in place).
Component maintenance: Automated maintenance scripts for testing various scenarios.
Component stress testing: Subject the test server to high loads, etc.
Security: Permissions, logins, etc.
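The dependency check itself is a one-liner with the standard sp_depends system procedure; the object name here is a hypothetical example:

    -- List objects that reference (or are referenced by) a changed object,
    -- to scope which stored procedures, views, and tables need retesting.
    EXEC sp_depends 'cust_orders'
    -- Repeat for each changed object, and search the front end source for
    -- the same names to cover client-side impact.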
Structural Back End Tests

Database schema tests:
Databases and devices: Verify the database names. Verify enough space is allocated for each database. Verify database options settings.
Tables, fields, constraints, defaults: Verify tablespace names (tables, rollback segments, etc.). Verify field names for each table. Verify field types (especially the number of characters in varchar and varchar2). Verify whether each field allows NULL or not. Verify constraints on fields (min/max/etc.). Verify default values for fields. Verify table permissions.
Keys and indices: Verify every table has a primary key. Verify foreign key existence where appropriate (index, trigger, etc.). Verify column data types between foreign keys and primary keys (they should be the same; see the sketch below). Verify indices: unique or not unique, bitmap or b-tree, etc.
Stored procedures: Verify the stored procedure name. Verify the stored procedure is installed in the database. Verify parameter names, types, and the count of parameters. Run stored procedures from the command line to boundary-check the parameters. Verify the output of the stored procedure. (Does it do what it is supposed to do? Anything it is not supposed to do?)
Error messages: Force the stored procedure to fail, and check every error message the s.p. generates. Are there any errors that do not yet have a predefined error message?
Triggers, update: Verify the trigger name. Verify the trigger is assigned to the correct field of the correct table. Verify the trigger updates the child table FK when SQL alters the parent table PK. Verify rollback when an error occurs.
Triggers, insert: Verify the trigger name. Verify the trigger is assigned to the correct field of the correct table. Verify the trigger inserts the child table FK when SQL adds a parent table PK. Verify rollback when an error occurs. Try to insert a record with an existing PK.
Triggers, deletion: Verify the trigger name. Verify the trigger is assigned to the correct field of the correct table. Verify the trigger deletes the child table FK when SQL deletes the parent table PK. Verify rollback when an error occurs. Try to delete records with and without existing child records.
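A minimal sketch of the FK/PK type agreement check, pulled from the system tables; the table/column names (customers.cust_id referenced by orders.cust_id) are hypothetical:

    -- The rows returned should show identical type_name and length for
    -- the primary key column and every foreign key column referencing it.
    SELECT o.name AS table_name, c.name AS column_name,
           t.name AS type_name, c.length
    FROM   sysobjects o, syscolumns c, systypes t
    WHERE  o.id = c.id AND c.usertype = t.usertype
      AND ((o.name = 'customers' AND c.name = 'cust_id') OR
           (o.name = 'orders'    AND c.name = 'cust_id'))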
Back End Integration Tests

General: Look for conflicts between the schema, triggers, and stored procedures. Verify the server setup script accommodates both setting up databases from scratch and setting up databases over the top of existing databases. Verify environment variables are defined (DOS), if applicable. Record setup time and issues encountered.
Functional Back End Tests

Functionality: Verify every feature in the back end (from the requirements, functional specs, and design spec). Verify data is inserted, updated, and deleted according to business rules. Look for invalid logic. Check error prevention and error handling mechanisms.
Data integrity / data consistency: Verify data validation before insertion, update, and deletion. Verify data security mechanisms are adequate. Verify dimension or reference tables have triggers to maintain referential integrity. Verify major fields across all tables do not contain invalid data / characters. Try to insert a child record before inserting its parent. Try to delete a record that is still referenced by other records in different tables (kill the parent and leave orphans). Verify updates to a PK cascade and update FK fields in different tables. (See the probes sketched below.)
Login and user security: Verify email login security. Verify Oracle login security. Verify NT domain login security. Review roles, logins, etc. for abnormalities. Check concurrent logins (multiple simultaneous users logged on).
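Minimal referential-integrity probes, again assuming hypothetical customers (parent) and orders (child, FK cust_id) tables:

    -- Child before parent: should be rejected by the FK constraint or trigger.
    INSERT INTO orders (order_id, cust_id) VALUES (9001, 99999)

    -- Parent that still has children: should be rejected (or cascade,
    -- depending on the business rule).
    DELETE FROM customers WHERE cust_id = 1

    -- Orphan hunt afterward: any rows returned are RI violations.
    SELECT o.order_id, o.cust_id
    FROM   orders o
    WHERE  NOT EXISTS (SELECT 1 FROM customers c WHERE c.cust_id = o.cust_id)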
Performance

Perfmon: A tool to detect performance across all system resources. Set it up and use it.
Benchmarks: Determine the time required for scripts to execute, etc. (See the sketch below.)
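A crude wall-clock benchmark can be scripted with GETDATE(); the query being timed here is a placeholder:

    DECLARE @t0 datetime
    SELECT @t0 = GETDATE()
    SELECT COUNT(*) FROM orders            -- operation under test (placeholder)
    SELECT DATEDIFF(ms, @t0, GETDATE()) AS elapsed_ms
    -- Record the result in the benchmark matrix; rerun after each build
    -- and compare for regressions among similar operations.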
Network Failure

At login: Enter a bogus password or login ID. How does the system react? Reset permissions to eliminate the tester, then attempt login, etc.
In process: Pull out the network connection in the middle of a process, log off the network, etc.
High traffic: Set up the app (or coordinate with a busy part of the day) so it runs under high network traffic.
Server Failure

Kill server: Shut down the server abruptly while the client is still running. Compare to a standard shutdown.
CPU activity: Simulate high CPU consumption by having the server execute many other tasks so that it is busy.
File I/O: Simulate high activity by copying large batch files, or by running a test utility designed for this.
Services not started: Run with services not started on the server.
Silent failures: Go to the server periodically and review the Event Viewer history.
Maintenance

Walk through tables: Any orphaned temp tables? Any duplicate tables? Any bogus data jumping out?
Business rule enforcement script: Any NULLs in inappropriate fields? Any foreign key values in a child table without matching primary key values in the parent table? Any calculations that are not correct? (Write up a test script consisting of multiple queries to check every business rule; see the sketch below.)
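A sketch of how such an enforcement script can read, rolling up the earlier probes: one query per business rule, each returning zero rows when the rule holds. All table/column names are hypothetical:

    -- Rule 1: no NULLs in a required field.
    SELECT order_id FROM orders WHERE ship_date IS NULL AND status = 'shipped'

    -- Rule 2: no child FK values without a matching parent PK.
    SELECT o.order_id FROM orders o
    WHERE  NOT EXISTS (SELECT 1 FROM customers c WHERE c.cust_id = o.cust_id)

    -- Rule 3: stored calculations agree with independent recomputation.
    SELECT order_id FROM order_items WHERE line_total <> qty * unit_price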
Integration Testing

Integration testing is concerned with testing the full application, from installation through all (or most) functionality. Note that drivers and stubs should be used in place of code that is not yet written. This is standard black box testing.
Prerequisites

Check-in: All modules should be checked in.
Executable: The executable should be built. There should be a full setup, too. Confirm the pull location and build number.
Milestone Checks

Regression: Regression-test existing known bugs.
Full functionality: Cover as much functionality as possible. Run the horizontal tests (higher priority) of the app, moving vertically (deeper tests) where time permits. Run installation tests, compatibility and configuration tests, etc.
Objective: Developers want zero defects, meaning that no new high-priority bugs are found and no high-priority existing bugs resurface. Testers, of course, want to find these if they exist.
Unit Testing

Unit testing is another name for white box or glass box testing. It is concerned with testing the app by walking through the source code. Note that drivers and stubs should be used in place of code that is not yet written. Also note that white box testing is typically performed by the developer, unless the project is large enough to accommodate separate white box testers.
Code Inspections

Coupling: The effort required to interconnect the components during integration (after all the separate components are written).
Maintainability: The effort required to locate and fix bugs; is the code flow organized and logical, and not overly complex?
Comments: Thorough enough? Or excessive to the point of merely restating the code lines?
Live Inspections

Case coverage: Trace through the code one line at a time in If...ElseIf...Else...End If and Select Case...Case...End Select statements to check for failures. Be sure to use the debug window to alter condition variable/pointer values, and use Set Next Statement to repeatedly reset the execution pointer to previously executed lines of code (this speeds up testing).
Error handling coverage: Force every error in the error handler to occur by entering errors into the debug window at the appropriate lines of code while tracing (in the debug window, enter Error ### to force the error to occur).
Debug.Print: Use this to test values and assertions.
Design Testing

Design testing occurs prior to the start of development. This section is located down here because it unfortunately gets less use than the sections above. (Also, I perform maintenance testing, so there is negligible design work.) This section is in its infancy and will require significant modification in the future.
General Design Issues

Systems overview:
High-level compatibility: Missing / incomplete description of interfaces with other systems / databases / apps? Are all dependencies on external groups clearly defined (i.e., feedstore hooks, etc.)?
Systems structure chart: Missing / incomplete systems breakdown chart? (Should graphically show major system components, locations, and descriptions.)
Major functional requirements: Missing / incomplete list of the major functional requirements of the system?
Confusing boundaries: Confusing boundaries between major components of the system?
Manual vs. automatic: Confusion as to which system components are automated vs. manual? Is the description incomplete?
Data overview:
Data flow diagrams: Do they exist? Are they detailed enough? Without a fairly detailed plan, how in God's green earth can all of the individual programmers write their separate components and then have them seamlessly come together at the end of the project? The first thing a sane person does on a road trip through new territory is consult a road map, and a DFD is one piece of the road map.
Entity-relationship diagrams: Do ER diagrams exist showing the layout of the data (all tables, databases, etc.)?
Data requirements: Missing / incomplete list of data requirements for each of the major systems? (Should have a system I/O diagram including names, descriptions, and sizes of data elements.) Any size problems with the database?
Data groupings: Data groupings not in logical categories? (Should be static, historical (no change likely), and transaction related.)
Standard naming: Data names non-standard? Spaces in names? Names too long?
Data relationships: Primary keys and foreign keys not defined? (Hierarchical relationships.)
Source definition: Source of the data unclear? (Dept., individual, server name, etc.)
Test data requirements: Has this been ignored?
Spec review:
Thorough: Are there any incomplete or missing spec sections?
Glossary: Are there any ambiguous terms (TLAs: three-letter acronyms, etc.) in the docs? (If yes, consider a glossary.)
Business rules: Missing an appendix which clearly defines the business rules? (Why should the programmer / tester decipher the designer's / user's intent when it can be so easily listed in the specs?)
Project baseline schedule: Is the project schedule unrealistic? Is the schedule incomplete? Are there insufficient milestones? Have any tasks been left improperly hanging (not wrapped into fragnets of like tasks)? Are any fragnets improperly left hanging (not terminated into milestones)? Are any milestones left hanging (not eventually rolled up into a single end-of-project milestone)? Have responsibilities not been assigned for some tasks? Are there any unrealistic durations? Are there any nonsensical predecessor / successor relationships? Has contingency time not been built into the durations? Did the scheduler ignore or fail to obtain input from the staff?
Acceptance criteria: Missing / incomplete acceptance strategy? (Needed to clearly define the minimum scope of work, i.e., when the work is complete.)
Feature overview:
Useless: Is the feature useless? If yes, why add complexity?
Duplicate efforts: Are we reinventing the wheel? Are there similar features or products (i.e., 'free' code, snicker snicker) already at MS?
Competitor analysis: Are there any similar features in competitors' apps? If yes, did you fail to research them? (Take the good ideas, and take good notes on the bad ideas.)
Priority: Missing / unclear prioritization of all features? (Very important for crunch time, when determining which features to nix.)
Feature interactions: Missing details? (For example: is the feature undo-able or repeatable; multi-user impacts, ill effects of aborting the feature, backward compatibility issues, relationship to other features, etc.)
Other considerations:
User interface design: Use the front end testing checklist section above to test the UI design (menus, dialogs, etc.).
Areas Not Covered

The following areas will not be covered by testing. This list is living and could change while the project is still in progress.
• Undocumented features: Not responsible for bugs appearing in undocumented or non-maintained areas of the specification.
• External product impact: Not responsible for bugs dependent on released Microsoft products. Any bugs that are found will be reported to the respective product group.
• Back end data maintenance: SQL Server backup and recovery, etc.
Criteria for Acceptance into Testing

• Development testing: Each new build has to be tested by the development team before release to testing.
• Existing bugs fixed: Developers must resolve all previously discovered bugs marked for fix.
• Release document: Must be issued for every new build. It will include the build number, all changes, and new features since the previous build. In a component release, it should indicate which areas are ready for testing and which areas are not.
• Instructions: All necessary setup instructions and scripts should be provided.
• Elements list: A list of all elements in the current drop, with version numbers.
Bug Reporting

• Bug tracking system: Use the bug tracking system provided (RAID, etc.).
• Sample bug report (embedded): Here is a standard bug report. All bugs should contain this information at a minimum:
User Environment

Server minimum requirements:
• P/166 machine
• 64MB or more memory
• 6GB hard disk space
• Windows NT 4.0
• NT SQL Server 6.5

Client minimum requirements:
• 486/66 machine
• 16MB or more memory
• 1200MB or more hard disk space
• Super VGA display
• Win 3.11, Win NT 3.1, Win 95, or Win NT 4.0