Software Quality Assurance: A Self-Teaching Introduction
Original Title and Copyright: Software Testing and Quality Assurance: A Practical Approach, 5/e.
© 2016 by S.K. Kataria & Sons.
This publication, portions of it, or any accompanying software may not be reproduced in any
way, stored in a retrieval system of any type, or transmitted by any means, media, electronic dis-
play or mechanical display, including, but not limited to, photocopy, recording, Internet postings,
or scanning, without prior permission in writing from the publisher.
Publisher: David Pallai
Mercury Learning and Information
22841 Quicksilver Drive
Dulles, VA 20166
[email protected]
www.merclearning.com
(800) 232-0223
The publisher recognizes and respects all marks used by companies, manufacturers, and develop-
ers as a means to distinguish their products. All brand names and product names mentioned in
this book are trademarks or service marks of their respective companies. Any omission or misuse
(of any kind) of service marks or trademarks, etc. is not an attempt to infringe on the property
of others.
This book is printed on acid-free paper in the United States of America.
Our titles are available for adoption, license, or bulk purchase by institutions, corporations, etc.
For additional information, please contact the Customer Service Dept. at 800-232-0223(toll free).
All of our titles are available in digital format at authorcloudware.com and other digital vendors.
The sole obligation of Mercury Learning and Information to the purchaser is to replace the
book, based on defective materials or faulty workmanship, but not based on the operation or
functionality of the product.
1
Introduction to Software Testing
Inside this Chapter:
1.0. Introduction
1.1. The Testing Process
1.2. What is Software Testing?
1.3. Why Should We Test? What is the Purpose?
1.4. Who Should Do Testing?
1.5. What Should We Test?
1.6. Selection of Good Test Cases
1.7. Measurement of the Progress of Testing
1.8. Incremental Testing Approach
1.9. Basic Terminology Related to Software Testing
1.10. Testing Life Cycle
1.11. When to Stop Testing?
1.12. Principles of Testing
1.13. Limitations of Testing
1.14. Available Testing Tools, Techniques, and Metrics
1.0. INTRODUCTION
Testing is the process of executing the program with the intent of finding
faults. Who should do this testing and when should it start are very important
questions that are answered in this text. As we know, software testing is the fourth phase of the software development life cycle (SDLC). About 70% of development time is spent on testing. We explore this and many other interesting concepts in this chapter.
OR
“Software testing is the process of executing a program or system
with the intent of finding errors.”
[Myers]
OR
“It involves any activity aimed at evaluating an attribute or capabil-
ity of a program or system and determining that it meets its required
results.”
[Hetzel]
Testing is NOT:
a. The process of demonstrating that errors are not present.
b. The process of showing that a program performs its intended func-
tions correctly.
c. The process of establishing confidence that a program does what it is
supposed to do.
So, all these definitions are incorrect because, with them as guidelines, one would tend to operate the system in a normal manner to see if it works. One would unconsciously choose normal/correct test data that would prevent the system from failing. Besides, it is not possible to certify that a system has no errors, simply because it is almost impossible to detect all errors.
So, simply stated: “Testing is basically a task of locating errors.”
It may be:
a. Positive testing: Operate the application as it should be operated.
Does it behave normally? Use a proper variety of legal test data,
including data values at the boundaries, to test if it fails. Compare actual test results with the expected results. Are the results correct? Does the application function correctly?
In the database table given above there are 15 test cases. But these are not sufficient, as we have not tried all possible inputs. We have not considered trouble spots like:
i. Removing the statement (@ ai_year % 400 = 0) would result in a Y2K problem.
ii. Entering year in float format like 2010.11.
iii. Entering year as a character or as a string.
iv. Entering year as NULL or zero (0).
This list can also grow further. These are our trouble spots or critical
areas. We wish to locate these areas and fix these problems before our
customer does.
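The (ai_year % 400 = 0) trouble spot above is the classic leap-year rule. A minimal C sketch of such a check is shown below; it is only illustrative (the book's example works on a database table), and the function and variable names are assumptions. Removing the year % 400 condition would wrongly classify a year such as 2000.

#include <stdio.h>

/* Illustrative leap-year check; the (year % 400 == 0) clause is the
   trouble spot: without it, the year 2000 is classified wrongly. */
static int is_leap(int year)
{
    return (year % 4 == 0 && year % 100 != 0) || (year % 400 == 0);
}

int main(void)
{
    int years[] = { 1900, 2000, 2010, 2012 };
    int i;

    for (i = 0; i < 4; i++)
        printf("%d -> %s\n", years[i], is_leap(years[i]) ? "leap" : "not leap");
    return 0;
}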
Why did it happen?
As we know software testing constitutes about 40% of overall effort and
25% of the overall software budget. Software defects are introduced during
SDLC due to poor quality requirements, design, and code. Sometimes due
to the lack of time and inadequate testing, some of the defects are left behind,
only to be found later by users. Software is a ubiquitous product; 90% of people use software in their everyday life. Software has high failure rates due to its poor quality.
Smaller companies that don’t have deep pockets can get wiped out
because they did not pay enough attention to software quality and conduct
the right amount of testing.
Cem Kaner said, "The best test cases are the ones that find bugs." Our effort should go into test cases that find issues. Do broad or deep coverage testing on the trouble spots.
A test case is a question that you ask of the program. The point of run-
ning the test is to gain information like whether the program will pass or fail
the test.
A test case template typically contains the following fields:
Test Case ID
Purpose
Preconditions
Inputs
Expected Outputs
Postconditions
Execution History (Date, Result, Version, Run By)
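For illustration only (these values are an assumption, not taken from the book's table), the template could be filled in for the leap-year check discussed earlier as follows:

Test Case ID: TC_YEAR_01
Purpose: Verify that a century year not divisible by 400 is not treated as a leap year.
Preconditions: The application is running and waiting for a year value.
Inputs: year = 1900
Expected Outputs: 1900 is reported as a common (non-leap) year.
Postconditions: The application returns to the input prompt.
Execution History: Date, Result, Version, and Run By are recorded each time the case is executed.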
7. Test suite: A collection of test scripts or test cases that is used for validat-
ing bug fixes (or finding new bugs) within a logical or physical area of a
product. For example, an acceptance test suite contains all of the test
cases that were used to verify that the software has met certain prede-
fined acceptance criteria.
8. Test script: The step-by-step instructions that describe how a test case is to be executed. It may contain one or more test cases.
9. Testware: It includes all of the testing documentation created during the testing process. For example, test specification, test scripts, test cases, test data, the environment specification.
10. Test oracle: Any means used to predict the outcome of a test.
11. Test log: A chronological record of all relevant details about the execution of a test.
12. Test report: A document describing the conduct and results of the testing carried out for a system.
6. Testing should begin "in the small" and progress toward testing "in the large": The smallest programming units (or modules) should be tested first and then expanded to other parts of the system.
7. Testing should be conducted by an independent third party.
8. All tests should be traceable to customer requirements.
9. Assign the best people to testing. Avoid programmers.
10. Tests should be planned to show software defects and not their absence.
11. Prepare test reports, including test cases and test results, to summarize the results of testing.
12. Advance test planning is a must and should be updated in a timely manner.
NOTE: To see some of the most popular testing tools of 2017, visit the following site: https://www.guru99.com/testing-tools.html
SUMMARY
8. Verification is
a. Checking product with respect to customer’s expectations
b. Checking product with respect to SRS
c. Checking product with respect to the constraints of the project
d. All of the above.
9. Validation is
a. Checking the product with respect to customer’s expectations
b. Checking the product with respect to specification
c. Checking the product with respect to constraints of the project
d. All of the above.
10. Which one of the following is not a testing tool?
a. Deja Gnu b. TestLink
c. TestRail d. SOLARIS
ANSWERS
1. b. 2. c. 3. c. 4. a.
5. d. 6. b. 7. d. 8. b.
9. a. 10. d.
FIGURE 1.4
while the test execution is done in the end. This early design of tests
reduces overall delay by increasing parallelism between develop-
ment and testing. It enables better and more timely validation of
individual phases. The V-model is shown in Figure 1.5.
FIGURE 1.5
REVIEW QUESTIONS
2
Software Verification and Validation
Inside this Chapter:
2.0. Introduction
2.1. Differences Between Verification and Validation
2.2. Differences Between QA and QC
2.3. Evolving Nature of Area
2.4. V&V Limitations
2.5. Categorizing V&V Techniques
2.6. Role of V&V in SDLC—Tabular Form
2.7. Proof of Correctness (Formal Verification)
2.8. Simulation and Prototyping
2.9. Requirements Tracing
2.10. Software V&V Planning (SVVP)
2.11. Software Technical Reviews (STRs)
2.12. Independent V&V (IV&V) Contractor
2.13. Positive and Negative Effects of Software V&V on Projects
2.14. Standard for Software Test Documentation (IEEE829)
2.0. INTRODUCTION
Software that satisfies its user expectations is a necessary goal of a success-
ful software development organization. To achieve this goal, software engi-
neering practices must be applied throughout the evolution of the software
Verification:
1. It is a static process of verifying documents, design, and code.
2. It does not involve executing the code.
3. It is human-based checking of documents/files.
4. Target is the requirements specification, application architecture, high-level and detailed design, and database design.
5. It uses methods like inspections, walkthroughs, desk-checking, etc.
6. It generally comes first, before validation.
7. It answers the question: Are we building the product right?
8. It can catch errors that validation cannot catch.

Validation:
1. It is a dynamic process of validating/testing the actual product.
2. It involves executing the code.
3. It is the computer-based execution of the program.
4. Target is the actual product: a unit, a module, a set of integrated modules, and the final product.
5. It uses methods like black-box, gray-box, and white-box testing.
6. It generally follows verification.
7. It answers the question: Are we building the right product?
8. It can catch errors that verification cannot catch.
Both of these are essential and complementary. Each provides its own set of error filters. Each has its own way of finding errors in the software.
1. Theoretical Foundations
Howden claims the most important theoretical result in program testing and
analysis is that no general purpose testing or analysis procedure can be used
to prove program correctness.
Guessing
Interface Analysis
It is the detailed examination of the interface requirements specifications. The evaluation criteria are the same as those for the requirements specification.
The main focus is on the interfaces between software, hardware, user, and
external software.
Criticality Analysis
Criticality is assigned to each software requirement. When requirements are combined into functions, the combined criticality of the requirements forms the criticality of the aggregate function. Criticality analysis is updated periodically as requirement changes are introduced, because such changes can cause an increase or decrease in a function's criticality, depending on how the revised requirement impacts system criticality.
Criticality analysis is a method used to locate and reduce high-risk
problems and is performed at the beginning of the project. It identifies
the functions and modules that are required to implement critical program
functions or quality requirements like safety, security, etc.
Criticality analysis involves the following steps:
Step 1: Construct a block diagram or control flow diagram (CFD) of the system and its elements. Each block will represent one software function (or module) only.
Step 2: Trace each critical function or quality requirement through the CFD.
Step 3: Classify all traced software functions as critical to:
a. Proper execution of critical software functions.
b. Proper execution of critical quality requirements.
Step 4: Focus additional analysis on these traced critical software functions.
Step 5: Repeat the criticality analysis for each life cycle process to determine whether the implementation details shift the emphasis of the criticality.
By program variable we broadly include input and output data, e.g., data
entered via a keyboard, displayed on a screen, or printed on paper. Any
externally observable aspect of the program’s execution may be covered by
the precondition and postcondition.
integrated V&V approach is very dependent upon the nature of the product and the process used to develop it. Earlier, the waterfall approach to testing was used; now an incremental approach is used. Regardless of the approach selected, V&V progress must be tracked. Requirements/evaluation matrices play a key role in this tracking by providing a means of ensuring that each requirement of the product is addressed.
Step 7: Assessment
It is important that the software V&V plan provide for the ability to collect
data that can be used to assess both the product and the techniques used to
develop it. This involves careful collection of error and failure data, as well
as analysis and classification of these data.
Summary:
i. Complexity of software development and maintenance processes.
ii. Error frequencies for software work products.
iii. Error distribution throughout development phases.
iv. Increasing costs for error removal throughout the life cycle.
also does not exist for testing a specification or high level design. The
idea of testing a software test plan is also bewildering. Testing also does
not address quality issues or adherence to standards which are possible
with review processes.
Summary:
i. Exhaustive testing is impossible.
ii. Intermediate software products are largely untestable.
c. Reviews are a Form of Testing: The degree of formalism, scheduling,
and generally positive attitude afforded to testing must exist for software
technical reviews if quality products are to be produced.
Summary:
i. Objectives
ii. Human based versus machine based
iii. Attitudes and norms
d. Reviews are a Way of Tracking a Project: Through identification
of deliverables with well defined entry and exit criteria and successful
review of these deliverables, progress on a project can be followed and
managed more easily [Fagan]. In essence, review processes provide
milestones with teeth. This tracking is very beneficial for both project
management and customers.
Summary:
i. Individual developer tracking
ii. Management tracking
iii. Customer tracking
e. Reviews Provide Feedback: The instructor should discuss and
provide examples about the value of review processes for providing
feedback about software and its development process.
Summary:
i. Product ii. Process
Summary:
i. Project understanding ii. Technical skills
Inspections:
1. It is a five-step process that is well formalized.
2. It uses checklists for locating errors.
3. It is used to analyze the quality of the process.
4. This process takes a longer time.
5. It focuses on training of junior staff.

Walkthroughs:
1. It has fewer steps than inspections and is a less formal process.
2. It does not use a checklist.
3. It is used to improve the quality of the product.
4. It is a shorter process.
5. It focuses on finding defects.
Test Plan
1. Test-plan Identifier: Specifies the unique identifier assigned to the test
plan.
2. Introduction: Summarizes the software items and features to be tested,
provides references to the documents relevant for testing (for example,
overall project plan, quality assurance plan, configuration management
plan, applicable standards, etc.).
3. Test Items: Identifies the items to be tested including their version/
revision level, provides references to the relevant item documentation
(for example, requirements specification, design specification, user’s
guide, operations guide, installation guide, etc.), and identifies items
which are specifically excluded from testing.
4. Features to be Tested: Identifies all software features and their
combinations to be tested, identifies the test-design specification
associated with each feature and each combination of features.
5. Features not to be Tested: Identifies all features and significant
combinations of features which will not be tested, and the reasons for
this.
6. Approach: Describes the overall approach to testing (the testing activities
and techniques applied, the testing of non-functional requirements
such as performance and security, the tools used in testing); specifies
completion criteria (for example, error frequency or code coverage);
identifies significant constraints such as testing-resource availability and
strict deadlines; serves for estimating the testing efforts.
7. Item Pass/Fail Criteria: Specifies the criteria to be used to determine
whether each test item has passed or failed testing.
8. Suspension Criteria and Resumption: Specifies the criteria used to
suspend all or a portion of the testing activity on the test items (for
example, at the end of working day, due to hardware failure or other
external exception, etc.), specifies the testing activities which must be
repeated when testing is resumed.
Test-Case Specification
1. Test-case Specification Identifier: Specifies the unique identifier
assigned to this test-case specification.
2. Test Items: Identifies and briefly describes the items and features to
be exercised by this test case, supplies references to the relevant
item documentation (for example, requirements specification, design
specification, user’s guide, operations guide, installation guide, etc.).
3. Input Specifications: Specifies each input required to execute the test
case (by value with tolerance or by name); identifies all appropriate
databases, files, terminal messages, memory resident areas, and external
values passed by the operating system; specifies all required relationships
between inputs (for example, timing).
4. Output Specifications: Specifies all of the outputs and features (for
example, response time) required of the test items, provides the exact
value (with tolerances where appropriate) for each required output or
feature.
5. Environmental Needs: Specifies the hardware and software configuration
needed to execute this test case, as well as other requirements (such as
specially trained operators or testers).
6. Special Procedural Requirements: Describes any special constraints on
the test procedures which execute this test case (for example, special set-
up, operator intervention, etc.).
7. Intercase Dependencies: Lists the identifiers of test cases which must be
executed prior to this test case, describes the nature of the dependencies.
Inputs
Expected results
Actual results
Date and time
Test-procedure step
Environment
Repeatability (whether repeated; whether occurring always, occa-
sionally, or just once).
Testers
Other observers
Additional information that may help to isolate and correct the cause
of the incident; for example, the sequence of operational steps or his-
tory of user-interface commands that lead to the (bug) incident.
4. Impact: Priority of solving the incident/correcting the bug (urgent, high,
medium, low).
Test-Summary Report
1. Test-Summary-Report Identifier: Specifies the unique identifier assigned
to this report.
2. Summary: Summarizes the evaluation of the test items, identifies
the items tested (including their version/revision level), indicates
the environment in which the testing activities took place, supplies
references to the documentation over the testing process (for example,
test plan, test-design specifications, test-procedure specifications, test-
item transmittal reports, test logs, test-incident reports, etc.).
3. Variances: Reports any variances/deviations of the test items from
their design specifications, indicates any variances of the actual testing
process from the test plan or test procedures, specifies the reason for
each variance.
4. Comprehensiveness Assessment: Evaluates the comprehensiveness of
the actual testing process against the criteria specified in the test plan,
identifies features or feature combinations which were not sufficiently
tested and explains the reasons for omission.
5. Summary of Results: Summarizes the success of testing (such as
coverage), identifies all resolved and unresolved incidents.
SUMMARY
1. Software engineering technology has matured sufficiently to be addressed
in approved and draft software engineering standards and guidelines.
2. Businesses, industries, and government agencies spend billions annually
on computer software for many of their functions:
To manufacture their products.
To provide their services.
To administer their daily activities.
To perform their short- and long-term management functions.
3. As with other products, industries and businesses are discovering that their increasing dependence on computer technology to perform these functions emphasizes the need for safe, secure, reliable computer systems. They are recognizing that software quality and reliability are vital to their ability to maintain their competitiveness and high-technology posture in the marketplace. Software V&V is one of several methodologies that can be used for building vital quality software.
ANSWERS
1. a. 2. c. 3. d. 4. a.
5. b. 6. b. 7. c. 8. d.
9. a. 10. d.
REVIEW QUESTIONS
1. a. Discuss briefly the V&V activities during the design phase of the
software development process.
b. Discuss the different forms of IV&V.
2. a. What is the importance of technical reviews in software development
and maintenance life cycle?
b. Briefly discuss how walkthroughs help in technical reviews.
3. a. Explain why validation is more difficult than verification.
b. Explain validation testing.
3
Software Quality
Inside this Chapter:
3.0. Introduction
3.1. Role of Process in Software Quality
3.2. Software Control
3.3. Quality Assurance
3.4. Quality Assurance Analyst
3.5. Quality Factor
3.6. Quality Management
3.7. Methods of Quality Management
3.8. Core Components of Quality
3.9. Core Aspects of Quality
3.0. INTRODUCTION
There are two ways of knowing whether good quality software has been
produced or NOT:
1. Measuring the attributes of the software that has been developed
(quality control).
2. Monitoring and controlling the development process of the software (quality assurance).
We can compare the process of developing software with that of preparing a pizza. If we prepare a pizza (somehow) and then serve it, we will certainly get plenty of comments on its quality. By then it is too late. The only way is to take more care the next time we prepare a pizza. Ultimately, the consumers should be satisfied. So we define quality as: "A product which fulfills and continues to meet the purpose for which it was produced is a quality product."
Quality can be ensured provided that the process is ensured. Just as we need a good recipe to prepare a good pizza, we need good tools and methods during software development.
c. Assemble these into a quality assurance plan for the project. This describes what the procedures and standards are, when they will be done, and who does them.
3.4. QA ANALYST
A QA analyst is responsible for applying the principles and practices of SQA
throughout SDLC. Main functions are:
a. To create end-to-end test plans.
b. To execute the plan.
c. To manage all activities in the plan.
All objectives must be met and the solution should be acceptable in terms of functionality, performance, reliability, stability, and compatibility with other internal and external systems.
The analyst does this by ensuring that every phase and feature of the software solution is tested. He or she may even do back-end (DBMS-based) testing.
In the last ten years, managers have increasingly recognized that systems and processes management is also their task. Managers also have responsibility for the ethical conduct of their organizations.
[Figure: Quality management components: knowledge base (KB), manager behavior, performance skills, documentation, customer satisfaction, continuous improvement, totality, and a QM improvement foundation.]
[Figure: SQA foundation: quality planning, product reviews, process tailoring, metrics, change management, SCM.]
[Figure: Relative cost of fixing a defect by phase: plan/requirements (1X), design (5X), coding (20X), testing (50X), implementation (100X-270X). About 45% of all errors are analysis errors, 35% are design errors, and only 20% of errors originate during coding; testing asks "Does it do what I built or not?" and implementation asks "Did I build the right product?"]
SUMMARY
In this chapter, we have studied what software quality is and how it can be improved, controlled, and ensured. We have also studied quality management, the core components of quality, and its core aspects.
ANSWERS
1. a. 2. a. 3. d. 4. c.
5. c. 6. a. 7. a. 8. b.
9. a. 10. c.
4
Black-Box (or Functional) Testing Techniques
Inside this Chapter:
4.0. Introduction to Black-Box (or Functional) Testing
4.1. Boundary Value Analysis (BVA)
4.2. Equivalence Class Testing
4.3. Decision Table Based Testing
4.4. Cause-Effect Graphing Technique
4.5. Comparison on Black-Box (or Functional) Testing Techniques
4.6. Kiviat Charts
of two (or more) faults. So we derive test cases by holding the values of all
but one variable at their nominal values and letting that variable assume its
extreme values.
If we have a function of n-variables, we hold all but one at the nominal
values and let the remaining variable assume the min, min+, nom, max–,
and max values, repeating this for each variable. Thus, for a function of n
variables, BVA yields (4n + 1) test cases.
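As a rough illustration (not the book's own program; the variable names and ranges are assumptions), the following C sketch mechanically enumerates the 4n + 1 BVA test cases for three date-like variables:

#include <stdio.h>

#define N 3   /* number of input variables: month, day, year (assumed) */

int main(void)
{
    const char *name[N] = { "month", "day", "year" };
    int min[N] = { 1, 1, 1900 };
    int max[N] = { 12, 31, 2025 };
    int nom[N], tc[N];
    int i, v, e, k = 0;

    for (i = 0; i < N; i++)
        nom[i] = (min[i] + max[i]) / 2;            /* nominal value of each variable */

    for (v = 0; v < N; v++) {                      /* vary one variable at a time */
        int extreme[4] = { min[v], min[v] + 1, max[v] - 1, max[v] };
        for (e = 0; e < 4; e++) {
            for (i = 0; i < N; i++)
                tc[i] = (i == v) ? extreme[e] : nom[i];
            printf("TC%2d: %s=%d %s=%d %s=%d\n", ++k,
                   name[0], tc[0], name[1], tc[1], name[2], tc[2]);
        }
    }
    for (i = 0; i < N; i++)                        /* the all-nominal test case */
        tc[i] = nom[i];
    printf("TC%2d: %s=%d %s=%d %s=%d\n", ++k,
           name[0], tc[0], name[1], tc[1], name[2], tc[2]);
    return 0;
}

Running it prints the 4(3) + 1 = 13 test cases mentioned below.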
Please note that we explained above that we can have 13 test cases
(4n + 1) for this problem. But instead of 13, now we have 15 test cases. Also,
test case ID number 8 and 13 are redundant. So, we ignore them. However,
we do not ignore test case ID number 3 as we must consider at least one test
case out of these three. Obviously, it is mechanical work!
We can say that these 13 test cases are sufficient to test this program
using BVA technique.
The commission program produced a monthly sales report that gave the total
number of locks, stocks, and barrels sold, the salesperson’s total dollar sales
and finally, the commission.
Out of these 15 test cases, 2 are redundant. So, 13 test cases are sufficient
to test this program.
In this technique, the input and the output domain is divided into a
finite number of equivalence classes. Then, we select one representative of
each class and test our program against it. It is assumed by the tester that if one representative from a class detects an error, the other members of that class would detect the same error. Furthermore, if this single representative test case does not detect any error, then we assume that no other test case of this class can detect an error. In this method, we consider both valid and invalid input domains. The system is still treated as a black-box, meaning that we are not bothered about its internal logic.
The idea of equivalence class testing is to identify test cases by using one
element from each equivalence class. If the equivalence classes are chosen
wisely, the potential redundancy among test cases can be reduced.
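A minimal C sketch of this idea is given below; the class names and the range of the "year" input follow the NextDate-style examples used later, while the structure and the midpoint rule are only illustrative assumptions:

#include <stdio.h>

/* One equivalence class of the "year" input: a named range. */
struct eq_class {
    const char *name;
    int lo, hi;
};

int main(void)
{
    struct eq_class classes[] = {
        { "Y1: valid year (1900-2025)", 1900, 2025 },
        { "I1: year below valid range", 1800, 1899 },
        { "I2: year above valid range", 2026, 2100 },
    };
    int n = sizeof(classes) / sizeof(classes[0]);
    int i;

    for (i = 0; i < n; i++) {
        /* pick one representative from (approximately) the middle of the class */
        int rep = (classes[i].lo + classes[i].hi) / 2;
        printf("%-30s -> test value %d\n", classes[i].name, rep);
    }
    return 0;
}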
In fact, we will always have the same number of weak equivalence class test cases as there are classes in the partition.
Just as we have truth tables in digital logic, our pattern of test cases resembles a truth table. The Cartesian product guarantees that we have a notion of "completeness" in two ways:
a. We cover all equivalence classes.
b. We have one of each possible combination of inputs.
2. Also, strongly typed languages like Pascal and Ada, eliminate the need
for the consideration of invalid inputs. Traditional equivalence testing
is a product of the time when languages such as FORTRAN, C, and
COBOL were dominant. Thus, this type of error was common.
4.2.5. Solved Examples
Please note that the expected outputs describe the invalid input values
thoroughly.
So, we get this test case on the basis of valid classes – M1, D1, and Y1
above.
c. Weak robust test cases are given below:
So, we get 7 test cases based on the valid and invalid classes of the input
domain.
d. Strong robust equivalence class test cases are given below:
As done earlier, the inputs are mechanically selected from the approxi-
mate middle of the corresponding class:
So, three month classes, four day classes, and three year classes result in 3 × 4 × 3 = 36 strong normal equivalence class test cases. Furthermore, adding two invalid classes for each variable will result in 150 strong robust equivalence class test cases.
It is difficult to show these 150 classes here.
d. And finally, the strong robust equivalence class test cases are as follows:
Step 3. Transform the cause-effect graph obtained in step 2 into a decision table.
Step 4. Convert the decision table rules to test cases. Each column of the decision table represents a test case. That is,
R1 R2 R3 R4 R5 R6 R7 R8 R9 R10 R11
C1: a < b + c? F T T T T T T T T T T
C2: b < a + c? — F T T T T T T T T T
C3: c < a + b? — — F T T T T T T T T
C4: a = b? — — — T T T T F F F F
C5: a = c? — — — T T F F T T F F
C6: b = c? — — — T F T F T F T F
a1: Not a triangle × × ×
a2: Scalene ×
a3: Isosceles × × ×
a4: Equilateral ×
a5: Impossible × × ×
Each “-” (hyphen) in the decision table represents a “don’t care” entry.
Use of such entries has a subtle effect on the way in which complete decision
tables are recognized. For limited entry decision tables, if n conditions exist,
there must be 2^n rules. When don't care entries indicate that the condition is irrelevant, we can develop a rule count as follows:
Rule 1. Rules in which no “don’t care” entries occur count as one rule.
Note that each column of a decision table represents a rule and the
number of rules is equal to the number of test cases.
Rule 2. Each “don’t care” entry in a rule doubles the count of that rule.
Note that in this decision table we have 6 conditions (C1—C6). Therefore,
n=6
Also, we can have 2^n entries, i.e., 2^6 = 64 entries. Now we establish the
rule and the rule count for the above decision table.
R1 R2 R3 R4 R5 R6 R7 R8 R9 R10 R11
C1: a < b + c? F T T T T T T T T T T
C2: b < a + c? — F T T T T T T T T T
C3: c < a + b? — — F T T T T T T T T
C4: a = b? — — — T T T T F F F F
C5: a = c? — — — T T F F T T F F
C6: b = c? — — — T F T F T F T F
Rule Count 32 16 8 1 1 1 1 1 1 1 1 = 64
a1: Not a triangle × × ×
a2: Scalene ×
a3: Isosceles × × ×
a4: Equilateral ×
a5: Impossible × × ×
From the previous table we find that the rule count is 64. And we have already said that 2^n = 2^6 = 64. So, both are 64.
The question, however, is to find out why the rule count is 32 for Rule-1 (or column-1).
We find that there are 5 don't cares in Rule-1 (or column-1) and hence its rule count is 2^5 = 32. Similarly, for Rule-2, it is 2^4 = 16, and 2^3 = 8 for Rule-3. However, from Rule-4 through Rule-11, the number of don't care entries is 0 (zero), so the rule count is 2^0 = 1 for each of these columns. Summing the rule counts of all columns (R1-R11), we get a total rule count of 64.
Many times some problems arise with these decision tables. Let us
see how.
Consider the following example of a redundant decision table:
Conditions 1–4 5 6 7 8 9
C1 T F F F F T
C2 — T T F F F
C3 — T F T F F
a1 × × × — — —
a2 — × × × — ×
a3 × — × × × ×
Please note that the action entries in Rule-9 and Rules 1–4 are NOT
identical. It means that if the decision table were to process a transaction in
which C1 is true and both C2 and C3 are false, both rules 4 and 9 apply. We
observe two things
1. Rules 4 and 9 are inconsistent because the action sets are different.
2. The whole table is non-deterministic because there is no way to decide
whether to apply Rule-4 or Rule-9.
Also note carefully that there is a bottom line for testers now. They
should take care when don’t care entries are being used in a decision table.
4.3.3. Examples
We have already studied the problem domain for the famous triangle prob-
lem in previous chapters. Next we apply the decision table based technique
on the triangle problem. The following are the test cases:
So, we get a total of 11 functional test cases, out of which three are impossible cases, three fail to satisfy the triangle property, one satisfies the equilateral triangle property, one satisfies the scalene triangle property, and three are ways to get an isosceles triangle.
Conditions: C1: month in M1; C2: month in M2; C3: month in M3; C4: day in D1; C5: day in D2; C6: day in D3; C7: day in D4; C8: year in Y1.
Actions: a1: Impossible; a2: Next date.
Because we know that we have serious problems with the last day of
last month, i.e., December. We have to change month from 12 to 1. So, we
modify our classes as follows:
M1 = {month: month has 30 days}
M2 = {month: month has 31 days except December}
M3 = {month: month is December}
D1 = {day: 1 ≤ day ≤ 27}
D2 = {day: day = 28}
D3 = {day: day = 29}
D4 = {day: day = 30}
D5 = {day: day = 31}
Y1 = {year: year is a leap year}
Y2 = {year: year is a common year}
The Cartesian product of these contains 40 elements. Here, we have a
22-rule decision table. This table gives a clearer picture of the Next Date
function than does the 36-rule decision table and is given below:
In this table, the first five rules deal with 30-day months. Notice that the leap year considerations are irrelevant. Rules 6-10 and 11-15 deal with 31-day months, where the first five deal with months other than December and the second five deal with December. No impossible rules are listed in this portion of the decision table.
Still there is some redundancy in this table. Eight of the ten rules simply
increment the day. Do we really require eight separate test cases for this
sub-function? No, but note the type of observation we get from the decision
table.
Finally, the last seven rules focus on February and leap year. This deci-
sion table analysis could have been done during the detailed design of the
Next Date function.
Further simplification of this decision table can also be done. If the
action sets of two rules in a decision table are identical, there must be at least
one condition that allows two rules to be combined with a don’t care entry.
In a sense, we are identifying equivalence classes of these rules. For exam-
ple, rules 1, 2, and 3 involve day classes as D1, D2, and D3 (30 day classes).
These can be combined together as the action taken by them is the same.
Similarly, for other rules other combinations can be done. The correspond-
ing test cases are shown in the table as in Figure 4.17.
Actions: a2: increment day; a3: reset day; a4: increment month; a5: reset month; a6: increment year.
FIGURE 4.16 Decision Table for the Next Date Function.
Step 4. Because there are 11 rules, we get 11 test cases and they are:
1 2 3 4
Conditions C1 1 0 0 0
(or Causes) C2 0 1 0 0
C3 0 0 1 1
C4 1 0 0 0
C5 0 1 1 0
C6 0 0 0 1
Actions E1 × — — —
(or Effects) E2 — × — —
E3 — — × —
E4 — — — ×
That is, if C1 and C4 are 1 (or true), then the effect (or action) is E1. Similarly, if C2 and C5 are 1 (or true), the action to be taken is E2, and so on.
Step 4. Because there are 4 rules in our decision table above, we must have
at least 4 test cases to test this system using this technique.
These test cases can be:
1. Salary = 20,000, Expenses = 2000
2. Salary = 100,000, Expenses = 10,000
3. Salary = 300,000, Expenses = 20,000
4. Salary = 300,000, Expenses = 50,000
So we can say that a decision table is used to derive the test cases which
can also take into account the boundary values.
4.5.1. Testing Effort
The functional methods that we have studied so far vary both in terms of the
number of test cases generated and the effort to develop these test cases.
To compare the three techniques, namely, boundary value analysis (BVA),
equivalence class partitioning, and decision table based technique, we con-
sider the following curve shown in Figure 4.21.
We can say that the effort required to identify test cases is the lowest in
BVA and the highest in decision tables. The end result is a trade-off between
the test case identification effort and the test case execution effort. If we shift
our effort toward more sophisticated testing methods, we reduce our test
execution time. This is very important as tests are usually executed several
times. Also note that, judging testing quality in terms of the sheer number
of test cases has drawbacks similar to judging programming productivity in
terms of lines of code.
The examples that we have discussed so far show these trends.
4.5.2. Testing Efficiency
What we found in all of these functional testing strategies is that either the
functionality is untested or the test cases are redundant. So, gaps do occur in
functional test cases and these gaps are reduced by using more sophisticated
techniques.
We can develop various ratios of the total number of test cases generated
by method-A to those generated by method-B or even ratios on a test case
basis. This is more difficult but sometimes management demands numbers
even when they have little meaning. When we see several test cases with the same purpose, we sense redundancy; detecting the gaps is quite difficult. If we
use only functional testing, the best we can do is compare the test cases that
result from two methods. In general, the more sophisticated method will
help us recognize gaps but nothing is guaranteed.
4.5.3. Testing Effectiveness
How can we find out the effectiveness of the testing techniques?
a. By being dogmatic, we can select a method, use it to generate test
cases, and then run the test cases. We can improve on this by not
being dogmatic and allowing the tester to choose the most appropriate
method. We can gain another incremental improvement by devising
appropriate hybrid methods.
b. The second choice can be the structural testing techniques for the test
effectiveness. This will be discussed in subsequent chapters.
Note, however, that the best interpretation for testing effectiveness is
most difficult. We would like to know how effective a set of test cases is for
finding faults present in a program. This is problematic for two reasons.
1. It presumes we know all the faults in a program.
2. Proving that a program is fault free is equivalent to the famous halting
problem of computer science, which is known to be impossible.
The chart on the left shows that all metrics are well within the acceptable
range. The chart on the right shows an example where all metrics are above
maximum limits.
FIGURE 4.23 DevCodeMetricsWeb screen shot with Kiviat: http://devcodemetrics.sourceforge.net/
FIGURE 4.24
FIGURE 4.25
FIGURE 4.26
FIGURE 4.27
FIGURE 4.28
dimensional data?
The solution: express everything in terms of a common measure -
cost.
There are then two dimensions - utilization and cost - which when
Cost/Utilization—The Method
The method involves the following steps:
1. Choose factors to be measured.
2. Determine the cost of each factor as a percent of total system cost.
3. Determine the utilization of each factor.
4. Prepare a chart showing the cost and utilization of each factor.
5. Compute the measure of cost/utilization, F.
6. Compute the measure of balance, B.
7. Evaluate the resulting chart and measures.
Cost/Utilization—The Measures
Cost/Utilization: F = ∑uipl
i
where: ui = percent utilization of factor i
pi = cost contribution of factor i
Balance: B = 1− 2 √ ∑(F − ui)2 × pi
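As a quick illustration (the numbers are assumptions, and the balance formula is read as B = 1 − 2√[Σ(F − u_i)² × p_i]): take three factors with cost fractions p = 0.5, 0.3, 0.2 and utilizations u = 0.8, 0.6, 0.3. Then F = 0.8(0.5) + 0.6(0.3) + 0.3(0.2) = 0.64, and B = 1 − 2√[(0.64 − 0.8)²(0.5) + (0.64 − 0.6)²(0.3) + (0.64 − 0.3)²(0.2)] = 1 − 2√0.0364 ≈ 0.62.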
FIGURE 4.35 Cost/Utilization—The Measures.
FIGURE 4.36 Cost/Utilization—The Measures.
FIGURE 4.41 Composite Cost/Utilization Histogram for Two Real Linked Systems.
Conclusions
It is essential to maintain balance between system components in order to:
reduce costs.
SUMMARY
We summarize the scenarios under which each of these techniques will be
useful:
ANSWERS
1. c. 2. b. 3. b. 4. a.
5. b. 6. b. 7. a. 8. c.
9. a. 10. a.
Q. 11. Consider the above use case diagram for a coffee maker. Find at least ten acceptance test cases and black-box test cases and document them.
Ans. Test cases for coffee maker.
Preconditions: Run coffee maker by switching on power supply.
Acceptance test cases (fields: test case id, test case name, test case description, test steps, expected result, actual result, test status P/F):
Acc04 (Edit a recipe): The user will be prompted for the recipe name they ... Steps: Enter the recipe name along with vari... Expected result: Upon completion, a status message is printed and the coffee maker is ...
Functional test cases (fields: Test ID, description/steps, expected results, actual results, test status P/F):
checkOptions: Precondition: Run CoffeeMaker. Enter: 0. Expected: Program exits.
addRecipe3: Precondition: Run CoffeeMaker. Enter: 1; Name: Mocha; Price: –50. Expected: Mocha could not be added. Price cannot be negative. Return to main menu.
addRecipe8: Precondition: Run CoffeeMaker. Enter: 1. Expected: Please input an integer. Return to main menu.
deleteRecipe2: Precondition: Run CoffeeMaker. Enter: 2. Expected: There are no recipes to delete. Return to main menu.
REVIEW QUESTIONS
1. Perform the following:
a. Write a program to find the largest number.
b. Design a test case for the program at 1(a) using a decision table.
c. Design equivalence class test cases.
2. Explain any five symbols used in the cause-effect graphing technique?
3. How do you measure:
a. Test effectiveness?
b. Test efficiency?
4. Write a short paragraph:
a. Equivalence testing.
5. Explain the significance of boundary value analysis. What is the purpose
of worst case testing?
6. Describe cause-effect graphing technique with the help of an example.
7. a. Discuss different types of equivalence class test cases.
b. Consider a program to classify a triangle. Its input is a triple of integers (say x, y, z), and the data types of the input parameters ensure that they will be integers greater than zero and less than or equal to 200. The program output may be any of the following words: scalene, isosceles, equilateral, right angle triangle, not a triangle. Design the equivalence class test cases.
8. How can we measure and evaluate test effectiveness? Explain with the
help of 11 step S/W testing process.
9. What is the difference between:
Equivalence partitioning and boundary value analysis methods?
10. Consider the previous date function and design test cases using the
following techniques:
a. Boundary value analysis.
b. Equivalence class partitioning.
The function takes current date as an input and returns the previous
date of the day as the output.
Design robust test cases and identify equivalence class test cases for
output and input domains for this problem.
20. What is the difference between weak normal and strong normal
equivalence class testing?
21. Consider a program for the determination of previous date. Its input is a
triple of day, month, and year with the values in the range:
1 ≤ month ≤ 12
1 ≤ day ≤ 31
1900 ≤ year ≤ 2025
The possible outputs are “Previous date” and “Invalid date.” Design a
decision table and equivalence classes for input domain.
22. Consider a program given below for the selection of the largest of
numbers.
#include <stdio.h>
int main ( )
{
float A, B, C;
printf ("Enter 3 values:");
scanf ("%f%f%f", &A, &B, &C);
printf ("Largest value is ");
if (A > B)
{
if (A > C)
printf ("%f\n", A);
else
printf ("%f\n", C);
}
else
{
if (C > B)
printf ("%f\n", C);
else
printf ("%f\n", B);
}
return 0;
}
a. Design the set of test cases using the BVA technique and the equivalence class testing technique.
b. Select a set of test cases that will provide 100% statement coverage.
c. Develop a decision table for this program.
23. Consider the above program and show why it is practically impossible to do exhaustive testing.
24. a. Consider the following point-based evaluation system for a trainee
salesman of an organization:
The marks of any three subjects are considered for the calculation of
average marks. Scholarships of $1000 and $500 are given to students
securing more than 90% and 85% marks, respectively. Develop a
decision table, cause effect graph, and generate test cases for the above
scenario.
5
White-Box (or Structural) Testing Techniques
Inside this Chapter:
5.0. Introduction to White-Box Testing or Structural Testing or
Clear-Box or Glass-Box or Open-Box Testing
5.1. Static Versus Dynamic White-Box Testing
5.2. Dynamic White-Box Testing Techniques
5.3. Mutation Testing Versus Error Seeding—Differences in Tabular
Form
5.4. Comparison of Black-Box and White-Box Testing in Tabular
Form
5.5. Practical Challenges in White-Box Testing
5.6. Comparison on Various White-Box Testing Techniques
5.7. Advantages of White-Box Testing
Statement Coverage = (Total Statements Exercised / Total Number of Executable Statements in Program) × 100
i = 0 ;
if (code == "y")
{
statement-1 ;
statement-2 ;
:
:
statement-n ;
}
else
result = (marks / i) * 100 ;   /* divides by zero, because i is still 0 */
In this program, when we test with code = “y,” we will get 80% code
coverage. But if the data distribution in the real world is such that 90% of the
time the value of code is not = “y,” then the program will fail 90% of the time
because of the divide-by-zero exception. Thus, even with a code coverage of
80%, we are left with a defect that hits the users 90% of the time. The path
coverage technique, discussed next, overcomes this problem.
Path Coverage = (Total Paths Exercised / Total Number of Paths in Program) × 100
5.2.2.3. Condition coverage
In the above example, even if we have covered all the paths possible, it does
not mean that the program is fully tested. Path testing is not sufficient as it
does not exercise each part of the Boolean expressions, relational expres-
sions, and so on. This technique of condition coverage or predicate monitors
whether every operand in a complex logical expression has taken on every
TRUE/FALSE value. Obviously, this will mean more test cases and the num-
ber of test cases will rise exponentially with the number of conditions and
Boolean expressions. For example, for an if-then-else with a two-operand condition, there are 2^2 or 4 possible combinations.
Condition Coverage = (Total Decisions Exercised / Total Number of Decisions in Program) × 100
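A minimal C sketch of this idea is shown below (the decision, variable names, and values are illustrative assumptions, not from the book); the compound decision has two operands, so four input combinations are needed to let each operand take both TRUE and FALSE values:

#include <stdio.h>

static const char *category(int age, int income)
{
    if (age >= 18 && income > 0)    /* two operands -> 2^2 = 4 combinations */
        return "eligible";
    else
        return "not eligible";
}

int main(void)
{
    /* (T,T), (T,F), (F,T), (F,F): full condition coverage */
    int ages[]    = { 25, 25, 10, 10 };
    int incomes[] = { 1000, 0, 1000, 0 };
    int i;

    for (i = 0; i < 4; i++)
        printf("age=%d income=%d -> %s\n", ages[i], incomes[i],
               category(ages[i], incomes[i]));
    return 0;
}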
McCabe IQ covers about 146 different counts and measures. These metrics are grouped according to six main "collections," each of which provides
a different level of granularity and information about the code being ana-
lyzed. The collections are given below:
i. McCabe metrics based on cyclomatic complexity, V(G).
ii. Execution coverage metrics based on any of branch, path, or Boolean
coverage.
iii. Code grammar metrics based around line counts and code structure
counts such as nesting.
iv. OO metrics based on the work of Chidamber and Kemerer.
v. Derived metrics based on abstract concepts such as understandability,
maintainability, comprehension, and testability.
vi. Custom metrics imported from third-party software/systems, e.g.,
defect count.
McCabe IQ provides for about 100 individual metrics at the method,
procedure, function, control, and section/paragraph level. Also, there are 40 metrics at the class/file and program level.
Categories of Metrics
There are three categories of metrics:
1. McCabe metrics
2. OO metrics
3. Grammar metrics
Please remember that when collecting metrics, we rely upon subordinates
who need to “buy into” the metrics program. Hence, it is important to only
collect what you intend to use.
We should keep in mind, the Hawthorne Effect which states that when
you collect metrics on people, the people being measured will change their
behavior. Either of these practices will destroy the efficiency of any metrics
program.
The three metrics categories are explained below.
I. McCabe metrics
EDM = Essential Complexity / Cyclomatic Complexity
CD = Decisions Made / Lines of Executable Code
If the path coverage is < 90% for new code or 70% for code under
maintenance then the test scripts require review and enhancement.
h. Boolean coverage: A technique used to establish that each condition
within a decision is shown by execution to independently and correctly
affect the outcome of the decision.
The major application of this technique is in safety critical sys-
tems and projects.
i. Combining McCabe metrics: Cyclomatic complexity is the basic
indicator for determining the complexity of logic in a unit of code. It
can be combined with other metrics.
2. Code refactoring
If V(G) > 10 and the condition
V(G) – EV(g) ≤ V(g) is true
Then, the code is a candidate for refactoring.
4. Test coverage
If the graph of V(G) against path coverage does not show a linear increase, then the test scripts need to be reviewed.
II. OO Metrics
a. Average V(G) for a class: If average V(G) > 10 then this metric
indicates a high level of logic in the methods of the class which in turn
indicates a possible dilution of the original object model. If the average
is high, then the class should be reviewed for possible refactoring.
b. Average essential complexity for a class: If the average is greater
than one then it may indicate a dilution of the original object model.
If the average is high, then the class should be reviewed for possible
refactoring.
c. Number of parents: If the number of parents for a class is greater
than one then it indicates a potentially overly complex inheritance
tree.
d. Response for class (RFC): RFC is the count of all methods within
a class plus the number of methods accessible to an object of this
class due to implementation. Please note that the larger the number
of methods that can be invoked in response to a message, the greater
the difficulty in comprehension and testing of the class. Also, note
that low values indicate greater specialization. If the RFC is high then
making changes to this class will be increasingly difficult due to the
extended impact to other classes (or methods).
e. Weighted methods for class (WMC): WMC is the count of
methods implemented in a class. It is a strong recommendation that
WMC does not exceed the value of 14. This metric is used to show
the effort required to rewrite or modify the class. The aim is to keep
this metric low.
f. Coupling between objects (CBO): It indicates the number of non-
inherited classes this class depends on. It shows the degree to which
this class can be reused.
For dynamic link libraries (DLLs) this measure is high as the soft-
ware is deployed as a complete entity.
For executables (.exe), it is low as here reuse is to be encouraged.
Please remember this point:
What is to be done?
The percentages of methods in a class using an attribute are averaged
and subtracted from 100. This measure is expressed in percentage.
Two cases arise:
i. If % is low, it means simplicity and high reusability.
ii. If % is high, it means a class is a candidate for refactoring and
could be split into two or more subclasses with low cohesion.
j. Combined OO metrics: V(G) can also be used to evaluate OO
systems. It is used with OO metrics to find out the suitable candidates
for refactoring.
By refactoring, we mean making a small change to the code which
improves its design without changing its semantics.
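For instance, a minimal C sketch (an illustrative assumption, not the book's example) of such a semantics-preserving change is shown below; extracting the weight decision into a helper lowers the cyclomatic complexity of the original function:

#include <stdio.h>

/* Before: three decisions, so V(G) = 4. */
int shipping_cost(int weight, int express)
{
    int cost;
    if (weight > 10) {
        if (express) cost = 50;
        else         cost = 30;
    } else {
        if (express) cost = 20;
        else         cost = 10;
    }
    return cost;
}

/* After: the weight decision is extracted; each function has V(G) = 2. */
static int base_cost(int weight)
{
    return (weight > 10) ? 30 : 10;
}

int shipping_cost_refactored(int weight, int express)
{
    return base_cost(weight) + (express ? 20 : 0);
}

int main(void)
{
    /* both versions return the same cost for every input */
    printf("%d %d\n", shipping_cost(12, 1), shipping_cost_refactored(12, 1));
    return 0;
}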
any path through the program that introduces at least one new set of
processing statements or a new condition. See the following steps:
Step 1. Construction of flow graph from the source code or flow charts.
Step 2. Identification of independent paths.
Step 3. Computation of cyclomatic complexity.
Step 4. Test cases are designed.
Using the flow graph, an independent path can be defined as a path in
the flow graph that has at least one edge that has not been traversed before
in other paths. A set of independent paths that cover all the edges is a basis
set. Once the basis set is formed, test cases should be written to execute all
the paths in the basis set.
We next show the basic notations that are used to draw a flow graph:
SOLVED EXAMPLES
EXAMPLE 5.1. Consider the following code:
void foo (float y, float a[ ], int n)
{
float x = sin (y), z ;
if (x > 0.01)
z = tan (x) ;
else
z = cos (x) ;
for (int i = 0 ; i < x ; ++i) {
a[i] = a[i] * z ;
cout << a[i] ;
}
}
Draw its flow graph, find its cyclomatic complexity, V(G), and the independ-
ent paths.
SOLUTION. First, we try to number the nodes, as follows:
1. void foo (float y, float a[ ], int n)
{
float x = sin (y), z ;
if (x > 0.01)
2. z = tan (x) ;
else
3. z = cos (x) ;
7. cout << i ;
}
So, its flow graph is shown in Figure 5.4. Next, we try to find V(G) by
three methods:
This means that we must execute these paths at least once in order to
test the program thoroughly. So, test cases can be designed.
EXAMPLE 5.2. Consider the following program that inputs the marks of five
subjects of 40 students and outputs average marks and the pass/fail message.
# include <stdio.h>
(1) main ( ) {
(2) int num_student, marks, subject, total;
(3) float average ;
(4) num_student = 1;
(5) while (num_student <= 40) {
(6) total = 0 ;
(7) subject = 1;
(8) while (subject <= 5) {
(9) scanf ("Enter marks: %d", &marks);
(10) total = total + marks ;
(11) subject ++;
(12) }
(13) average = total/5 ;
(14) if (average >= 50)
(15) printf ("Pass... Average marks = %f", average);
(16) else
(17) printf ("FAIL ... Average marks are %f", average);
(18) num_student ++;
(19) }
(20) printf ("end of program");
(21) }
Draw its flow graph and compute its V(G). Also identify the independent
paths.
SOLUTION. The process of constructing the flow graph starts with dividing
the program into parts where flow of control has a single entry and exit point.
In this program, line numbers 2 to 4 are grouped as one node (marked as "a") only. This is because it consists of declarations and initialization of variables. The second part comprises the outer while loop, from lines 5 to 19, and the third part is a single printf statement at line number 20.
Note that the second part is again divided into four parts: statements of lines 6 and 7, lines 8 to 12, line 13, and lines 14-17 (the if-then-else structure). Using the flow graph notation, we get the flow graph shown in Figure 5.5.
Here, “∗” indicates that the node
is a predicate node, i.e., it has an
outdegree of 2.
The statements corre-
sponding to various nodes are
given below:
Nodes   Statement Numbers
a       2-4
b       5
e       6-7
f       8
z       9-12
g       13-14
h       15
i       17
j       18
c       19
d       20
FIGURE 5.5 Flow Graph for Example 5.2.
FIGURE 5.9
NOTES
1. In test case id-2, we call the GCD function recursively with x = x - y and y as it is.
2. In test case id-3, we call the GCD function recursively with y = y - x and x as it is.
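A minimal C sketch of the subtraction-based GCD routine that these notes refer to is given below (it is an assumption for illustration, not the book's listing):

#include <stdio.h>

int gcd(int x, int y)
{
    if (x == y)                 /* base case                              */
        return x;
    else if (x > y)
        return gcd(x - y, y);   /* test case id-2: x = x - y, y as it is  */
    else
        return gcd(x, y - x);   /* test case id-3: y = y - x, x as it is  */
}

int main(void)
{
    printf("gcd(12, 8) = %d\n", gcd(12, 8));   /* prints 4 */
    return 0;
}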
SOLUTION.
Cyclomatic complexity is a software metric that provides a quantitative
measure of the logical complexity of a program.
Cyclomatic complexity has a foundation in graph theory and is computed
in one of the three ways:
i. The number of regions corresponds to the cyclomatic complexity.
ii. Cyclomatic complexity: E – N + 2 (E is the number of edges, and
N is number of nodes).
iii. Cyclomatic complexity: P + 1 (P is the number of predicate nodes).
Referring to the flow graph, the cyclomatic number is:
1. The flow graph has three regions.
2. Complexity = 8 edges – 7 nodes + 2 = 3
3. Complexity = 2 predicate nodes + 1 = 3 (Predicate nodes = C1, C2)
FIGURE 5.11
Two test cases are required for complete branch coverage and four test cases
are required for complete path coverage.
Assumptions:
c1: if(i%2==0)
f1: EVEN()
f2: ODD()
c2: if(j > 0)
f3: POSITIVE()
f4: NEGATIVE()
i.e.
if(i%2==0){
EVEN();
}else{
ODD();
}
if(j > 0){
POSITIVE();
}else{
NEGATIVE();
}
Test cases that satisfy the branch coverage criteria, i.e., <c1, f1, c2, f3> and <c1, f2, c2, f4>.
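A minimal C sketch that drives these two paths is shown below (the stub functions and the input values are illustrative assumptions):

#include <stdio.h>

static void EVEN(void)     { printf("EVEN ");      }
static void ODD(void)      { printf("ODD ");       }
static void POSITIVE(void) { printf("POSITIVE\n"); }
static void NEGATIVE(void) { printf("NEGATIVE\n"); }

static void check(int i, int j)
{
    if (i % 2 == 0)   /* c1 */
        EVEN();       /* f1 */
    else
        ODD();        /* f2 */

    if (j > 0)        /* c2 */
        POSITIVE();   /* f3 */
    else
        NEGATIVE();   /* f4 */
}

int main(void)
{
    check(2, 5);      /* covers <c1, f1, c2, f3> */
    check(3, -5);     /* covers <c1, f2, c2, f4> */
    return 0;
}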
SOLUTION.
Nodes Lines
A 0, 1, 2
B, C, D 3, 4, 5
E 6
F 7
G, H, I 8, 9, 10
J 11
6 if (x ≤ 0) {
7 if(y ≥ 0){
8 z = y*z + 1;
9 }
10 }
11 else {
12 z = 1/x;
13 }
14 y = x * y + z
15 count = count – 1
16 while (count > 0)
17 output (z);
18 end
Draw its data-flow graph. Find out whether paths (1, 2, 5, 6) and (6, 2, 5, 6)
are def-clear or not. Find all def-use pairs?
SOLUTION. We draw its data flow graph (DD-path graph) first.
FIGURE 5.14
3 b
4 d
1 2 3
1 a d
2 b+e
3 c
Note that if there are several links between two nodes then “+” sign denotes a
parallel link.
This is, however, not very useful. So, we assign a weight to each entry
of the graph matrix. We use “1” to denote that the edge is present and “0”
to show its absence. Such a matrix is known as a connection matrix. For the
Figure above, we will have the following connection matrix:
(Connection matrix for the four nodes, with a 1 for each edge; row sums shown at the right.)
Row 1: two 1-entries, so 2 - 1 = 1
Row 2: no 1-entries
Row 3: one 1-entry, so 1 - 1 = 0
Row 4: one 1-entry, so 1 - 1 = 0
Sum = 1, and 1 + 1 = 2 = V(G)
Now, we want to compute its V(G). For this, we draw the matrix again,
i.e., we sum each row of the above matrix. Then, we subtract 1 from each
row. We, then, add this result or column of result and add 1 to it. This gives
us V(G). For the above matrix, V(G) = 2.
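This row-sum procedure is easy to mechanize. The C sketch below computes V(G) from a connection matrix; the 4-node matrix used here is an assumed illustration and should be replaced by the connection matrix of the flow graph under study.

#include <stdio.h>

#define N 4

/* Connection matrix: conn[i][j] is 1 if there is an edge from node i+1
   to node j+1, and 0 otherwise (an assumed example graph).            */
int conn[N][N] = {
    {0, 1, 0, 1},   /* node 1: two outgoing edges                     */
    {0, 0, 0, 0},   /* node 2: no outgoing edges                      */
    {1, 0, 0, 0},   /* node 3: one outgoing edge                      */
    {0, 1, 0, 0},   /* node 4: one outgoing edge                      */
};

int main(void)
{
    int vg = 1;                          /* start with the final "+ 1" */
    for (int i = 0; i < N; i++) {
        int row_sum = 0;
        for (int j = 0; j < N; j++)
            row_sum += conn[i][j];
        if (row_sum > 0)                 /* only rows with edges count */
            vg += row_sum - 1;           /* add (row sum - 1)          */
    }
    printf("V(G) = %d\n", vg);           /* prints V(G) = 2 here       */
    return 0;
}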
EXAMPLE 5.10. Consider the following flow graph:
FIGURE 5.21
Similarly, we can find the two-link or three-link path matrices, i.e., A^2,
A^3, ..., A^(n–1). These operations are easy to program and can be used as a
testing tool.
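Raising the connection matrix to successive powers is the usual way to obtain these k-link path matrices. The short C sketch below computes A^2 for an assumed 4-node matrix; A^3, ..., A^(n–1) follow by repeating the same multiplication.

#include <stdio.h>

#define N 4

/* A[i][j] = number of one-link paths (edges) from node i+1 to node j+1. */
int A[N][N] = {
    {0, 1, 0, 1},
    {0, 0, 1, 0},
    {0, 0, 0, 1},
    {0, 0, 0, 0},
};

/* C = X * Y: when X and Y are edge-count matrices, C[i][j] counts the
   two-link paths i -> k -> j.                                          */
void matmul(int C[N][N], int X[N][N], int Y[N][N])
{
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) {
            C[i][j] = 0;
            for (int k = 0; k < N; k++)
                C[i][j] += X[i][k] * Y[k][j];
        }
}

int main(void)
{
    int A2[N][N];
    matmul(A2, A, A);                    /* A2 holds the two-link paths */
    for (int i = 0; i < N; i++) {
        for (int j = 0; j < N; j++)
            printf("%d ", A2[i][j]);
        printf("\n");
    }
    return 0;
}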
i. The flow graph of a given program is as follows:
ii. The def/use graph for this program is as follows:
Now, let us find its dcu and dpu. We draw a table again as follows:
30      Then
31          commission = 0.10 * 1000.0
32          commission = commission + 0.15 * 800
33          commission = commission + 0.20 * (sales – 1800)
34      Else if (sales > 1000.0)
35          then
36              commission = 0.10 * 1000.0
37              commission = commission + 0.15 * (sales – 1000.0)
38          else commission = 0.10 * sales
39      Endif
40  Endif
41  Output (“Commission is $”, commission)
42  End commission
Now, we draw its flow graph first:
DD-path Nodes
A 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13
B 14
C 15, 16, 17, 18, 19
D 20, 21, 22, 23, 24, 25, 26, 27, 28
E 29
F 30, 31, 32, 33
G 34
H 35, 36, 37
I 38
J 39
K 40
L 41, 42
FIGURE 5.27 DD Path Graph.
The initial value definition for the variable “totalstocks” occurs at node-
11 and it is first used at node-17. Thus, the path (11, 17), which consists of the
node sequence <11, 12, 13, 14, 15, 16, 17>, is definition clear.
The path (11, 22), which consists of the node sequence <11, 12, 13, 14,
15, 16, 17, 18, 19, 20>* & <21, 22>, is not definition clear because the values
of totalstocks are defined at node-11 and at node-17. The asterisk, *, is used
to denote zero or more repetitions.
Thus, out of 43 du-paths, 8-paths, namely, (11, 22), (17, 25), (17, 22),
(17, 25), (31, 33), (31, 41), (32, 41), and (36, 41), are not definition clear.
These 8-paths are the main culprits and thus the cause of the error.
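Whether a du-path is definition clear can also be checked mechanically once the defining nodes of the variable are known. The small C sketch below is illustrative only; the node numbers come from the discussion above, and a path is taken to be definition clear when no interior node redefines the variable.

#include <stdio.h>

/* A path is definition clear for a variable if none of its interior
   nodes (nodes other than the first and the last) redefines it.       */
int is_def_clear(const int path[], int len, const int defs[], int ndefs)
{
    for (int i = 1; i < len - 1; i++)       /* skip the two endpoints   */
        for (int d = 0; d < ndefs; d++)
            if (path[i] == defs[d])
                return 0;                   /* redefined: not def-clear */
    return 1;
}

int main(void)
{
    int defs[] = {11, 17};                  /* totalstocks defined here */
    int p1[]   = {11, 12, 13, 14, 15, 16, 17};                      /* (11, 17) */
    int p2[]   = {11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22};  /* (11, 22) */

    printf("(11, 17) def-clear? %d\n", is_def_clear(p1,  7, defs, 2)); /* 1 */
    printf("(11, 22) def-clear? %d\n", is_def_clear(p2, 12, defs, 2)); /* 0 */
    return 0;
}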
Step 2: Next, we tabulate data actions. The D and U actions in xyz-class can
be read from the code (where D = Define and U = Use).
1–5 DUD
1–2 DUD
Paths from node-2:
2–3 DD
2–4 DD
Paths from node-3:
3–6–7 DUD
3–6–8–10 DU–
Paths from node-4:
4–6–7 DUD
4–6–8–10 DU–
Paths from node-5:
5–6–7 DUD
5–6–8–10 DU–
Paths from node-7:
5–6–7 DUD
5–6–8–10 DU–
Similarly, the trace data flow for “Z” is as follows. Define/Use paths for Z
are shown in Figure 5.31. The variable Z has D actions at nodes-1 and -7.
∴ Paths from node-1 are:
1–5–6–7        DUD
1–5–6–8–9      DU–
1–5–6–8        DU–
1–2–3–6–7      DUD
1–2–4–6–7      DUD
1–2–3–6–8–9    DU–
1–2–3–6–8      DU–
Paths from node-7:
7–6–8–9        DU–
7–6–7          DU–
7–6–8          DU–
(where D = Define and U = Use)
FIGURE 5.31 Trace Data Flow for “Z.”
Step 4: Merge the paths
a. Drop sub-paths: Many of the tabulated paths are sub-paths. For
example,
{1 2 3 6 8} is a sub-path of {1 2 3 6 8 9}.
So, we drop the sub-path.
b. Connect linkable paths: Paths that end and start on the same node
can be linked. For example,
{1 5} {5 6 8 10}
becomes {1 5 6 8 10}
We cannot merge paths with mutually exclusive decisions. For example,
{1 5 6 8 9} and {1 2 3 6 7}
cannot be merged because they represent the predicate branches from
node-1.
So, merging all the traced paths provides the following set of paths:
The (7 6)* means that path 7–6 (the loop) can be iterated. We require that a
loop is iterated at least twice.
Step 5: Check testability
We need to check path feasibility.
Try path 1: {1–5–6–8–10}
Condition                Comment
1. x ≤ 10                Force branch to node-5
2. y′ = z + x            Calculation on node-5
3. x ≤ z + x             Skip node-7, proceed to node-8
4. x ≤ z                 Skip node-9, proceed to node-10
The values that satisfy all these constraints can be found by trial and error,
graphing, or solving the resulting system of linear inequalities.
The set x = 0, y = 0, z = 0 works, so this path is feasible.
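For a constraint system as small as this one, a brute-force search is another practical way to find (or rule out) a witness. The C sketch below simply scans a small range of values against the path-1 conditions; the search bounds are an assumption made only for illustration, and y is left out because it is not constrained on this path.

#include <stdio.h>

int main(void)
{
    /* Path-1 constraints from the table above: x <= 10, x <= z + x,
       and x <= z. Any (x, z) pair that satisfies them (with any y)
       makes the path feasible.                                        */
    for (int x = -2; x <= 2; x++)
        for (int z = -2; z <= 2; z++)
            if (x <= 10 && x <= z + x && x <= z) {
                printf("feasible witness: x = %d, z = %d (y arbitrary)\n",
                       x, z);
                return 0;                  /* one witness is enough    */
            }
    printf("no witness found in the searched range\n");
    return 0;
}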
Try path 5: {1–2–3–6–8–10}
Condition Comment
1. x > 10 Force branch to node-2
2. x′ = x + 2 Calculation on node-2
3. y′ = y – 4 Calculation on node-2
4. x′ > z Force branch to node-3
5. x′ ≤ y′ Skip node-7, proceed to node-8
6. x′ ≤ z Skip node-9, proceed to node-10
Test Data

Path  Nodes visited                Input test data        Output
                                   x     y     z          B9     B10
1.    1–5–6–8–10                   0     0     0          –      0
                                   9     0     9          –      18
2.    1–5–6–8–9–10                 9     0     8          17     25
                                   9     0     0          9      18
3.    1–5–6–(7–6)*–8–10            –1    0     –1         –      –2
4.    1–5–6–(7–6)*–8–9–10          0     0     1          –1     –1
                                   9     0     –1         –1     18
7.    1–2–3–6–(7–6)*–8–10          10    0     62         –      24
8.    1–2–3–4–(7–6)*–8–9–10        10    0     12         –26    24
                                   10    0     61         23     24
10.   1–2–3–6–8–9–10               10    0     0          12     24
12.   1–2–3–6–(7–6)*–8–9–10        10    0     1          1      24
Functional testing techniques always result in a set of test cases, and structural
metrics are always expressed in terms of something countable, such as the num-
ber of program paths, the number of decision-to-decision paths (DD-paths),
and so on.
Figures 5.32 and 5.33 show the trends for the number of test cover-
age items and the effort to identify them as functions of structural testing
methods, respectively. These graphs illustrate the importance of choosing an
appropriate structural coverage metric.
SUMMARY
1. White-box testing can cover the following issues:
a. Memory leaks
b. Uninitialized memory
c. Garbage collection issues (in JAVA)
2. We must know about white-box testing tools also. They are listed below:
a. Purify by Rational Software Corporation
b. Insure++ by ParaSoft Corporation
c. Quantify by Rational Software Corporation
d. Expeditor by OneRealm Inc.
ANSWERS
1. b. 2. b. 3. a. 4. b.
5. a. 6. a. 7. b. 8. b.
9. c. 10. a.
FIGURE 5.34
REVIEW QUESTIONS
1. White-box testing is complementary to black-box testing, not alternative.
Why? Give an example to prove this statement.
2. a. What is a flow graph and what is it used for?
b. Explain the type of testing done using flow graphs.
3. Perform the following:
a. Write a program to determine whether a number is even or odd.
b. Draw the paths graph and flow graph for the above problem.
c. Determine the independent path for the above.
4. Why is exhaustive testing not possible?
5. a. Draw the flow graph of a binary search routine and find its independent
paths.
FIGURE 5.35
14. What is data flow testing? Explain du-paths. Identify du- and dc-paths of
any example of your choice. Show those du-paths that are not dc-paths.
15. Write a short paragraph on data flow testing.
16. Explain the usefulness of error guessing testing technique.
17. Discuss the pros and cons of structural testing.
18. a. What are the problems faced during path testing? How can they be
minimized?
b. Given the source code below:
void foo (int a, int b, int c, int d, int e) {
    if (a == 0) {
        return;
    }
    int x = 0;
    if ((a == b) || (c == d)) {
        x = 1;
    }
    e = 1/x;
}
List the test cases for statement coverage, branch coverage, and condition
coverage.
19. Why is error seeding performed? How is it different from mutation
testing?
20. a. Describe all methods to calculate the cyclomatic complexity.
b. What is the use of graph matrices?
21. Write a program to calculate the average of 10 numbers. Using data flow
testing design all du- paths and dc-paths in this program.
22. Write a short paragraph on mutation testing.
23. Write a C/C++ program to multiply two matrices. Try to take care of as
many valid and invalid conditions as possible. Identify the test data.
Justify.
24. Discuss the negative effects of the following constructs from the white-
box testing point of view:
a. GO TO statements
b. Global variables
25. Write a C/C++ program to count the number of characters, blanks, and
tabs in a line. Perform the following:
a. Draw its flow graph.
b. Draw its DD-paths graph.
c. Find its V(G).
d. Identify du-paths.
e. Identify dc-paths.
26. Write the independent paths in the following DD-path graph.
Also calculate mathematically. Also name the decision nodes shown in
Figure 5.36.
27. What are the properties of cyclomatic complexity?
FIGURE 5.36
28. Explain in detail the process to ensure the correctness of data flow in a
given fragment of code.
#include <stdio.h>

int check(int m);

int main(void)
{
    int K = 35, Z;
    Z = check(K);
    printf("\n%d", Z);
    return 0;
}

int check(int m)
{
    if (m > 40)
        return (1);
    else
        return (0);
}
29. Write a C program for finding the maximum and minimum out of three
numbers and compute its cyclomatic complexity using all possible
methods.
30. Consider the following program segment:
void sort (int a[], int n)
{
    int i, j, temp;
    for (i = 1; i < n; i++)
        for (j = i + 1; j < n; j++)
            if (a[i] > a[j])
            {
                temp = a[i];
                a[i] = a[j];
                a[j] = temp;
            }
}
i. Draw the control flow graph for this program segment.
ii. Determine the cyclomatic complexity for this program (give all
intermediate steps).
iii. How is cyclomatic complexity metric useful?
31. Explain data flow testing. Consider an example and show all “du” paths.
Also identify those “du” paths that are not “dc” paths.
32. Consider a program to find the roots of a quadratic equation. Its input is
a triplet of three positive integers (say a, b, c) from the interval [1, 100].
The output may be one of the following words—real roots, equal roots,
imaginary roots. Find all du-paths and those that are dc-paths. Develop
data flow test cases.
33. If the pseudocode below were a programming language, list the test
cases (separately) required to achieve 100% statement coverage and
path coverage.
1. If x = 3 then
2. Display_message x;
3. If y = 2 then
4. Display_message y;
5. Else
6. Display_message z;
7. Else
8. Display_message z;
34. Consider a program to classify a triangle. Draw its flow graph and DD-
path graph.
6
Gray-Box Testing
Inside this Chapter:
6.0. Introduction to Gray-Box Testing
6.1. What Is Gray-Box Testing?
6.2. Various Other Definitions of Gray-Box Testing
6.3. Comparison of White-Box, Black-Box, and Gray-Box Testing
Approaches in Tabular Form
SUMMARY
1. As testers, we get ideas for test cases from a wide range of knowl-
edge areas. This is partially because testing is much more effective
when we know what types of bugs we are looking for. As testers of
complex systems, we should strive to attain a broad balance in our
knowledge, learning enough about many aspects of the software and
systems being tested to create a battery of tests that can challenge
the software as deeply as it will be challenged in the rough and tum-
ble day-to-day use.
2. Not every tester in a test team needs to be a gray-box tester. The
greater the mix of different types of testers in a team, the better
the chances of success.
ANSWERS
1. a. 2. b. 3. c. 4. b. 5. c.
REVIEW QUESTIONS
7
Levels of Testing
Inside this Chapter:
7.0. Introduction
7.1. Unit, Integration, System, and Acceptance Testing Relationship
7.2. Integration Testing
7.0. INTRODUCTION
When we talk of levels of testing, we are actually talking of three levels of testing:
1. Unit testing
2. Integration testing
3. System testing
The three levels of testing are shown in Figure 7.1.
FIGURE 7.2
TEST is one of the CASE tools for unit testing (from Parasoft) that automati-
cally tests classes written in the MS .NET framework. The tester need not write
a single test or a stub. There are tools which help to organize and execute
test suites at command line, API, or protocol level. Some examples of such
tools are:
- - -
FIGURE 7.4
It begins with the main program, i.e., the root of the tree. Any lower-level
unit that is called by the main program appears as a “stub.” A stub is a piece
of throw-away code that emulates a called unit. Generally, testers have to
develop the stubs and some imagination is required. So, we draw the tree,
where “M” is the main program and “S” represents a stub. From the figure,
we find out that:
Number of Stubs Required = (Number of Nodes – 1)
Once all of the stubs for the main program have been provided, we test
the main program as if it were a standalone unit.
FIGURE 7.5 Stubs.
7.2.2.4. Big-bang integration
Instead of integrating component by component and testing, this approach
waits until all the components arrive and one round of integration testing is
done. This is known as big-bang integration. It reduces testing effort and
removes duplication in testing for the multi-step component integrations.
Big-bang integration is ideal for a product where the interfaces are stable
with a smaller number of defects.
7.2.3.1. Pairwise integration
The main idea behind pairwise integration is to eliminate the stub/driver
development effort. Instead of developing stubs and drivers, why not use the
actual code? At first, this sounds like big-bang integration but we restrict a
session to only a pair of units in the call graph. The end result is that we have
one integration test ses-
sion for each edge in the
call graph. This is not
much of a reduction in
sessions from either top-
down or bottom-up but
it is a drastic reduction
in stub/driver develop-
ment. Four pairwise
integration sessions are
shown in Figure 7.7. FIGURE 7.7 Pairwise Integration.
7.2.3.2. Neighborhood integration
The neighborhood of a node in a graph is the set of nodes that are one edge
away from the given node. In a directed graph, this includes all of the imme-
diate predecessor nodes and all of the immediate successor nodes. Please
note that these correspond to the set of stubs and drivers of the node.
For example, for node-16, neighborhood nodes are 9, 10, and 12 nodes
as successors and node-1 as predecessor node.
We can always compute the number of neighbors for a given call graph.
Each interior node will have one neighborhood plus one extra in case leaf
nodes are connected directly to the root node.
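Computing the neighborhood of a node is straightforward once the call graph is stored as an adjacency matrix: collect every immediate predecessor and every immediate successor. The C sketch below uses a small made-up call graph, not the one in the figure, purely to illustrate the idea.

#include <stdio.h>

#define N 5

/* adj[i][j] = 1 if unit i calls unit j (an assumed example call graph). */
int adj[N][N] = {
    {0, 1, 1, 0, 0},
    {0, 0, 0, 1, 0},
    {0, 0, 0, 1, 1},
    {0, 0, 0, 0, 0},
    {0, 0, 0, 0, 0},
};

/* Print the neighborhood of `node`: its immediate predecessors (callers,
   which would otherwise need drivers) and its immediate successors
   (callees, which would otherwise need stubs).                          */
void neighborhood(int node)
{
    printf("neighborhood of %d:", node);
    for (int i = 0; i < N; i++) {
        if (adj[i][node]) printf(" pred-%d", i);
        if (adj[node][i]) printf(" succ-%d", i);
    }
    printf("\n");
}

int main(void)
{
    neighborhood(2);   /* prints: neighborhood of 2: pred-0 succ-3 succ-4 */
    return 0;
}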
In module-A, nodes 1 and 5 are source nodes and nodes 4 and 6 are sink
nodes. Similarly, in module-B, nodes 1 and 3 are source nodes and nodes 2
and 4 are sink nodes. Module-C has a single source node 1 and a single sink
node, 5. This can be shown as follows:
7.2.5. System Testing
System testing focuses on a complete, integrated system to evaluate compli-
ance with specified requirements. Tests are made on characteristics that are
only present when the entire system is run.
the component and integration testing. The system test team generally
reports to a manager other than the product-manager to avoid conflicts
and to p rovide freedom to individuals during system testing. Testing the
product with an independent perspective and combining that with the
perspective of the customer makes system testing unique, different, and
effective.
The behavior of the complete product is verified during system testing.
Tests that refer to multiple modules, programs, and functionality are included
in system testing. This task is critical as it is wrong to believe that individually
tested components will work together when they are put together.
System testing is the last chance for the test team to find any leftover
defects before the product is handed over to the customer.
System testing strives to always achieve a balance between the objective
of finding defects and the objective of building confidence in the product
prior to release.
The analysis of defects and their classification into various categories
(called as impact analysis) also gives an idea about the kind of defects that
will be found by the customer after release. If the risk of the customers
getting exposed to the defects is high, then the defects are fixed before the
release or else the product is released as such. This information helps in
planning some activities such as providing workarounds, documentation on
alternative approaches, and so on. Hence, system testing helps in reducing
the risk of releasing a product.
System testing is highly complementary to other phases of testing. The
component and integration test phases are conducted taking inputs from
functional specification and design. The main focus during these testing
phases are technology and product implementation. On the other hand, cus-
tomer scenarios and usage patterns serve as the basis for system testing.
the test result cannot be taken as pass. Either the product or the non func-
tional testing process needs to be fixed here.
Non functional testing requires understanding the product behavior,
design, architecture, and also knowing what the competition provides. It also
requires analytical and statistical skills as the large amount of data generated
requires careful analysis. Failures in non functional testing affect the design
and architecture much more than the product code. Because non functional
testing is not repetitive in nature and requires a stable product, it is per-
formed in the system testing phase.
The differences listed in the table above are just guidelines and not
dogmatic rules.
Because both functional and non functional aspects are being tested in
the system testing phase, the question that arises is—what is the ratio of
test cases or effort required for the mix of these two types of testing? The
answer is as follows: because functional testing is a focus area starting from the
unit testing phase, while non functional aspects get tested only in the system
testing phase, it is a good idea that a majority of the system testing effort be
focused on the non functional aspects. A 70%–30% ratio between non func-
tional and functional testing can be considered good, and a 50%–50% ratio is
a good starting point. However, this is only a guideline, and the right ratio
depends more on the context, type of release, requirements, and products.
test may be performed in the next phase. So, the guideline is—“A test case
moved from a later phase to an earlier phase is a better option than delaying
a test case from an earlier phase to a later phase, as the purpose of testing is
to find defects as early as possible.” This has to be done after completing all
tests meant for the current phase, without diluting the tests of the current
phase.
We are now in a position to discuss various functional system testing
techniques in detail. They are discussed one by one.
In this method of system testing, the test cases are developed and
checked against the design and architecture to see whether they are actual
product-level test cases. This technique helps in validating the product
features that are written based on customer scenarios and verifying them
using product implementation.
If there is a test case that is a customer scenario but failed validation
using this technique, then it is moved to the component or integration test-
ing phase. Because functional testing is performed at various test phases, it is
important to reject the test cases and move them to an earlier phase to catch
defects early and avoid any major surprise at later phases.
We now list certain guidelines that are used to reject test cases for system
functional testing. They are:
1. Is this test case focusing on code logic, data structures, and unit of
the product?
If yes, then it belongs to unit testing.
2. Is this specified in the functional specification of any component?
If yes, then it belongs to component testing.
bank service needs a prompt reply. Some mail can be given automated mail
replies also. Hence, the terminology feature of the product should call the
e-mail appropriately as a claim or a transaction and also associate the profile
and properties in a way a particular business vertical works.
Syndication: Not all the work needed for business verticals is done by prod-
uct development organizations only. Even the solution integrators, service
providers pay a license fee to a product organization and sell the products
and solutions using their name and image. In this case, the product name,
company name, technology names, and copyrights may belong to the latter
parties or associations and the former would like to change the names in
the product. A product should provide features for those syndications in the
product and they are tested as a part of BVT.
FIGURE 7.11 Stage-1 of Onsite Deployment.
FIGURE 7.12 Stage-2 of the Onsite Deployment.
Please note that in stage-1, the recorder intercepts the user and the
live system to record all transactions. All the recorded transactions from the
live system are then played back on the product under test under the super-
vision of the test engineer (as shown by dotted lines). In stage-2, the test
engineer records all transactions using a recorder and other methods and
plays back on the old live system (as shown again by dotted lines). So, the
overall stages are:
Sending the product too late may mean too little time for beta defect fixes,
and this defeats the purpose of beta testing. So, the late integration testing
phase and the early system testing phase are the ideal time for starting a beta
program.
We send the defect fixes to the customers as soon as problems are
reported, and all necessary care has to be taken to ensure the fixes meet the
requirements of the customer.
How many beta customers should be chosen?
If the number chosen is too few, then the product may not get a suffi-
cient diversity of test scenarios and test cases.
If too many beta customers are chosen, then the engineering organiza-
tion may not be able to cope with fixing the reported defects in time. Thus,
the number of beta customers should be a delicate balance between pro-
viding a diversity of product usage scenarios and the manageability of being
able to handle their reported defects effectively.
Finally, the success of a beta program depends heavily on the willingness
of the beta customers to exercise the product in various ways.
There are many contractual and legal requirements for a product. Failing
to meet these may result in business loss and bring legal action against the
organization and its senior management.
The terms certification, standards, and compliance testing are used
interchangeably. There is nothing wrong in the usage of terms as long as
the objective of testing is met. For example, a certifying agency helping an
organization meet standards can be called both certification testing and stan-
dards testing.
criteria can be developed for a set of parameters and for various types of non
functional tests.
For example, a test to find out how many client-nodes can simulta-
neously log into the server. Failures during scalability test includes the
system not responding or system crashing. A product not able to respond
to 100 concurrent users while it is supposed to serve 200 users simultane-
ously is a failure. For a given configuration, the following template may
be used:
10–100 thousand records
NOTE These tools help identify the areas of code not yet exercised after performing
functional tests.
NOTE The reliability of a product should not be confused with reliability testing.
deliberately to simulate the resource crunch and to find out its behavior. It is
expected to gracefully degrade on increasing the load but the system is not
expected to crash at any point of time during stress testing.
It helps in understanding how the system can behave under extreme and
realistic situations like insufficient memory, inadequate hardware, etc. Sys-
tem resources upon being exhausted may cause such situations. This helps to
know the conditions under which these tests fail so that the maximum limits,
in terms of simultaneous users, search criteria, large number of transactions,
and so on can be known.
NOTE Both spike and bounce tests determine how well the system behaves when
sudden changes of load occur.
Two spikes together form a bounce test scenario. Then, the load increases
into the stress area to find the system limits. These load spikes occur sud-
denly on recovery from a system failure.
There are differences between reliability and stress testing. Reliability
testing is performed by keeping a constant load condition until the test case
is completed. The load is increased only in the next iteration to the test case.
In stress testing, the load is generally increased through various means
such as increasing the number of clients, users, and transactions until and
beyond the resources are completely utilized. When the load keeps on
increasing, the product reaches a stress point when some of the transactions
start failing due to resources not being available. The failure rate may go up
beyond this point. To continue the stress testing, the load is slightly reduced
below this stress point to see whether the product recovers and whether the
failure rate decreases appropriately. This exercise of increasing/decreasing
the load is performed two or three times to check for consistency in behavior
and expectations (see Figure 7.15).
FIGURE 7.15
Sometimes, the product may not recover immediately when the load is
decreased. There are several reasons for this. Some of the reasons are
1. Some transactions may be in the wait queue, delaying the recovery.
2. Some rejected transactions may need to be purged, delaying the
recovery.
3. Due to failures, some clean-up operations may be needed by the
product, delaying the recovery.
4. Certain data structures may have gotten corrupted and may perma-
nently prevent recovery from stress point.
We can show stress testing with variable load in Figure 7.15.
Another factor that differentiates stress testing from reliability testing is
mixed operations/tests. Numerous tests of various types run on the system in
stress testing. However, the tests that are run on the system to create stress
points need to be closer to real-life scenarios.
3. The operations that generate the amount of load needed are planned
and executed for stress testing.
4. Tests that stress the system with random inputs (like number of
users, size of data, etc.) at random instances and random magnitude
are selected and executed as part of stress testing.
Defects that emerge from stress testing are usually not found from any
other testing. Defects like memory leaks are easy to detect but difficult
to analyze due to varying load and different types/ mix of tests executed.
Hence, stress tests are normally performed after reliability testing. To detect
stress-related errors, tests need to be repeated many times so that resource
usage is maximized and significant errors can be noticed. This testing helps
in finding out concurrency and synchronization issues like deadlocks, thread
leaks, and other synchronization problems.
TIPS Select those test cases that provide end-to-end functionality and run them.
7.2.5.6. Acceptance testing
It is a phase after system testing that is done by the customers. The customer
defines a set of test cases that will be executed to qualify and accept the
product. These test cases, executed by the customers, are normally small
in number. They are not written with the intention of finding defects. Testing
in detail is already over in the component, integration, and system test-
ing phases prior to product delivery to the customer. Acceptance test cases
are developed by both customers and the product organization. Acceptance
test cases are black-box type of tests cases. They are written to execute near
real-life scenarios. They are used to verify the functional and non functional
aspects of the system as well. If a product fails the acceptance test, it may be
rejected, and that may mean financial loss or rework of the product involving
effort and time.
A user acceptance test is:
• A chance to completely test the software.
• A chance to completely test business processes.
• A condensed version of a system.
• A comparison of actual test results against expected results.
• A discussion forum to evaluate the process.
The main objectives are as follows:
• Validate system set-up for transactions and user access.
• Confirm the use of the system in performing business process.
• Verify performance on business critical functions.
• Confirm integrity of converted and additional data.
The project team will be responsible for coordinating the preparation
of all test cases and the acceptance test group will be responsible for the
execution of all test cases.
4. Tests that verify the basic existing behavior of the product are
included.
5. When the product undergoes modifications or changes, the accept-
ance test cases focus on verifying the new features.
6. Some non functional tests are included and executed as part of
acceptance testing.
7. Tests that are written to check if the product complies with certain
legal obligations are included in the acceptance test criteria.
8. Test cases that make use of customer real-life data are included for
acceptance testing.
FIGURE 7.16
We shall discuss each of these tests one by one.
to the command level and then apply test cases to check that each command
works as intended. No attention is paid to the combination of these basic
commands, the context of the feature that is formed by these combined
commands, or the end result of the overall feature. For example, FAST for
a File/SaveAs menu command checks that the SaveAs dialog box displays.
However, it does not validate that the overall file-saving feature works nor
does it validate the integrity of saved files.
Typically, errors encountered during the execution of FAST are
reported through the standard issue-tracking process. Suspending testing
during FAST is not recommended. Note that it depends on the organization
for which you work. Each might have different rules in terms of which test
cases should belong to RAT versus FAST and when to suspend testing or to
reject a build.
7.2.5.7. Performance testing
The primary goal of performance testing is to develop effective enhance-
ment strategies for maintaining acceptable system performance. It is an
information gathering and analyzing process in which measurement data are
collected to predict when load levels will exhaust system resources.
Performance tests use actual or simulated workload to exhaust system
resources and other related problematic areas, including:
a. Memory (physical, virtual, storage, heap, and stack space)
b. CPU time
c. TCP/IP addresses
d. Network bandwidth
e. File handles
These tests can also identify system errors, such as:
a. Software failures caused by hardware interrupts
b. Memory runtime errors like leakage, overwrite, and pointer errors
c. Database deadlocks
d. Multithreading problems
7.2.5.7.1. Introduction
In this internet era, when more and more of business is transacted online,
there is a big and understandable expectation that all applications will run as
fast as possible. When applications run fast, a system can fulfill the business
requirements quickly and put it in a position to expand its business. A system
or product that is not able to service business transactions due to its slow
performance is a big loss for the product organization, its customers, and its
customer’s customer. For example, it is estimated that 40% of online mar-
keting for consumer goods in the US happens in November and December.
Slowness or lack of response during this period may result in losses of several
million dollars to organizations.
In another example, when examination results are published on the
Internet, several hundreds of thousands of people access the educational
websites within a very short period. If a given website takes a long time to
complete the request or takes more time to display the pages, it may mean a
lost business opportunity, as the people may go to other websites to find the
results. Hence, performance is a basic requirement for any product and is
fast becoming a subject of great interest in the testing community.
Performance testing involves an extensive planning effort for the defi-
nition and simulation of workload. It also involves the analysis of collected
data throughout the execution phase. Performance testing considers such
key concerns as:
� Will the system be able to handle increases in web traffic with-
out compromising system response time, security, reliability, and
accuracy?
� At what point will the performance degrade and which components
will be responsible for the degradation?
� What impact will performance degradation have on company sales
and technical support costs?
Each of these preceding concerns requires that measurements be
applied to a model of the system under test. System attributes, such as
response time, can be evaluated as various workload scenarios are applied
to the model. Conclusions can be drawn based on the collected data. For
example, when the number of concurrent users reaches X, the response time
equals Y. Therefore, the system cannot support more than X number of con-
current users. However, the complication is that even when the X number
of concurrent users does not change, the Y value may vary due to differing
user activities. For example, 1000 concurrent users requesting a 2K HTML
page will result in a limited range of response times whereas response times
may vary dramatically if the same 1000 concurrent users simultaneously
submit purchase transactions that require significant server-side processing.
Designing a valid workload model that accurately reflects such real-world
usage is no simple task.
3. Latency
4. Tuning
5. Benchmarking
6. Capacity planning
We shall discuss these factors one by one.
1. Throughput. The capability of the system or the product in handling
multiple transactions is determined by a factor called throughput.
It represents the number of requests/ business transactions pro-
cessed by the product in a specified time duration. It is very import-
ant to understand that the throughput, i.e., the number of transactions
serviced by the product per unit time varies according to the load the
product is put under. This is shown in Figure 7.17.
From this graph, it is clear that the load to the product can be
increased by increasing the number of users or by increasing the num-
ber of concurrent operations of the product. Please note that initially
the throughput keeps increasing as the user load increases. This is the
ideal situation for any product and indicates that the product is capable
of delivering more when there are more users trying to use the product.
Beyond certain user load conditions (after the bend), the throughput
comes down. This is the period when the users of the system notice a
lack of satisfactory response and the system starts taking more time to
complete business transactions. The “optimum throughput” is repre-
sented by the saturation point and is one that represents the maximum
throughput for the product.
2. Response time. It is defined as the delay between the point of request
and the first response from the product. In a typical client-server
infrastructure available for the product. Thus, from the figure above,
we can compute both the latency and the response time as follows:
Network latency = (N1 + N2 + N3 + N4)
Product latency = (A1 + A2 + A3)
Actual response time = (Network latency + Product latency)
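As a quick worked illustration of these formulas (the millisecond figures are assumed, not taken from the text): if the four network legs take N1 = 30, N2 = 20, N3 = 30, and N4 = 20 ms, and the three product stages take A1 = 50, A2 = 100, and A3 = 50 ms, then the network latency is 100 ms, the product latency is 200 ms, and the actual response time is 300 ms. The trivial C sketch below performs the same computation.

#include <stdio.h>

int main(void)
{
    /* Assumed latency figures, in milliseconds.                      */
    double N1 = 30, N2 = 20, N3 = 30, N4 = 20;   /* network legs      */
    double A1 = 50, A2 = 100, A3 = 50;           /* product stages    */

    double network_latency = N1 + N2 + N3 + N4;
    double product_latency = A1 + A2 + A3;
    double response_time   = network_latency + product_latency;

    printf("network = %.0f ms, product = %.0f ms, response = %.0f ms\n",
           network_latency, product_latency, response_time);
    return 0;
}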
The discussion about the latency in performance is very important, as
any improvement that is done in the product can only reduce the response
time by the improvements made in A1, A2, and A3. If the network latency
is more relative to the product latency and, if that is affecting the response
time, then there is no point in improving the product performance. In
such a case, it will be worthwhile to improve the network infrastructure.
In those cases where network latency is too large or cannot be improved,
the product can use intelligent approaches of caching and sending multi-
ple requests in one packet and receiving responses as a bunch.
4. Tuning. Tuning is the procedure by which the product performance is
enhanced by setting different values for the parameters (variables) of
the product, operating system, and other components. Tuning improves
the product performance without having to touch the source code of
the product. Each product may have certain parameters or variables
that can be set at run time to gain optimum performance. The default
values that are assumed by such product parameters may not always give
optimum performance for a particular deployment. This necessitates
the need for changing the values of parameters or variables to suit the
deployment or a particular configuration. During performance testing,
tuning of the parameters is an important activity that needs to be done
before collecting numbers.
5. Benchmarking. It is defined as the process of comparing the throughput
and response time of the product to those of the competitive products.
No two products are the same in features, cost, and functionality. Hence,
it is not easy to decide which parameters must be compared across two
products. A careful analysis is needed to chalk out the list of transactions
to be compared across products. This produces a meaningful analysis to
improve the performance of the product with respect to the competition.
6. Capacity planning. The most important factor that affects performance
testing is the availability of resources. A right kind of hardware and
software configuration is needed to derive the best results from
FIGURE 7.19
a. A performance testing requirement should be testable. All features/
functionality cannot be performance tested.
For example, a feature involving a manual intervention cannot be
performance tested, as the results depend on how fast a user responds
with inputs to the product.
b. A performance testing requirement needs to clearly state what
factors need to be measured and improved.
c. A performance testing requirement needs to be associated with the
actual number or percentage of improvement that is desired.
There are two types of requirements that performance testing focuses on:
1. Generic requirements.
2. Specific requirements.
1. Generic requirements are those that are common across all
products in the product domain area. All products in that area are
expected to meet those performance expectations.
Examples are time taken to load a page, initial response when a
mouse is clicked, and time taken to navigate between screens.
2. Specific requirements are those that depend on the implementation
for a particular product and differ from one product to another in a
given domain.
An example is the time taken to withdraw cash from an ATM.
During performance testing both generic and specific require-
ments need to be tested.
See Table in next page for examples of performance test
requirements.
testing for 10 concurrent operations may be less than that of testing for
10,000 operations by several times. Hence, a methodical approach is to
gradually increase the concurrent operations to, say, 10, 100, 1000, 10,000,
and so on, rather than trying to attempt 10,000 concurrent operations in the
first iteration itself. The test case documentation should clearly reflect this
approach.
Performance testing is a tedious process involving time and effort. All
test cases of performance testing are assigned different priorities. Higher
priority test cases are to be executed first. Priority may be absolute (given by
customers) or may be relative (given by test team). While executing the test
cases, the absolute and relative priorities are looked at and the test cases are
sequenced accordingly.
The performance test case is repeated for each row in this table and
factors such as the response time and throughput are recorded and analyzed.
After the execution of performance test cases, various data points are
collected and the graphs are plotted. For example, the response time graph
is shown below:
Plotting the data helps in making an easy and quick analysis which is
difficult with only raw data.
FIGURE 7.21
SUMMARY
We can say that we start with unit or module testing. Then we go in for inte-
gration testing, which is then followed by system testing. Then we go in for
acceptance testing and regression testing. Acceptance testing may involve
alpha and beta testing, while regression testing is done during maintenance.
System testing can comprise “n” different tests. That is, it could
mean:
1. End-to-end integration testing
2. User interface testing
3. Load testing in terms of
a. Volume/size
b. Number of simultaneous users
c. Transactions per minute/second (TPM/TPS)
4. Stress testing
5. Testing of availability (24 × 7)
Performance testing is a type of testing that is easy to understand but
difficult to perform due to the amount of information and effort needed.
ANSWERS
1. a. 2. a. 3. b. 4. b.
5. d. 6. b. 7. d. 8. b.
9. c. 10. a.
Weight Meaning
+2 Must test, mission/safety critical
+1 Essential functionality, necessary for robust operation
+0 All other scenarios
FIGURE 7.22
REVIEW QUESTIONS
1. Differentiate between alpha and beta testing?
2. Explain the following: Unit and Integration testing?
3. a. What would be the test objective for unit testing? What are the quality
measurements to ensure that unit testing is complete?
b. Put the following in order and explain in brief:
i. System testing
ii. Acceptance testing
iii. Unit testing
iv. Integration testing
4. Explain integration and system testing.
5. Write a short paragraph on levels of software testing.
16. a. Explain how you test the integration of two code fragments with
suitable examples.
b. What are the various kinds of tests we apply in system testing?
Explain.
17. Assume that you have to build a real-time multiperson computer
game. What kinds of testing do you suggest or think are suitable? Give a
brief outline and justification for any four kinds of tests.
18. Discuss some methods of integration testing with examples.
19. a. What is the objective of unit and integration testing? Discuss with an
example code fragment.
b. You are a tester for testing a large system. The system data model is
very large with many attributes and there are many interdependencies
within the fields. What steps would you use to test the system and
what are the effects of the steps you have taken on the test plan?
20. What is the importance of stubs? Give an example.
21. a. Explain BVT technique.
b. Define MM-path graph. Explain through an example.
c. Give the integration order of a given call graph for bottom-up testing.
d. Who performs offline deployment testing? At which level of testing
is it done?
e. What is the importance of drivers? Explain through an example.
22. Which node is known as the transfer node of a graph?
23. a. Describe all methods of integration testing.
b. Explain different types of acceptance testing.
24. Differentiate between integration testing and system testing.
25. a. What are the pros and cons of decomposition based techniques?
b. Explain call graph and path-based integration testing. Write
advantages and disadvantages of them.
c. Define acceptance testing.
d. Write a short paragraph on system testing.
8
Quality Assurance
Inside this Chapter:
8.0. Introduction
8.1. Quality Planning
8.2. Quality Plan Objectives
8.3. Planning Process Overview
8.4. Business Plan and Quality Plan
8.5. TQM
8.6. TQM Concepts
8.7. Zero Defect Movement
8.0. INTRODUCTION
The term Quality Assurance or QA means to ensure that the software
conforms to prescribed technical requirements. It is essentially an audit
function. It does not prescribe good or complete testing. QA is more than
finding bugs in the time left before the deadline by exactly following some
detailed test plans. It is more than defect detection. Software quality
assurance (SQA) is a planned and systematic pattern of actions required
to ensure quality in software. The objective of SQA is to improve software
quality by monitoring both software and the development process to ensure
full compliance with the established standards and procedures. SQA benefits
the software development project by saving time and money.
Actually, QA and, hence, total quality management have evolved as
shown in Figure 8.1.
that the project plans and quality plan at the unit level must be consistent
with the strategic quality plans at the company level.
At the project level, the projects should plan for quality. These are
generally strategic-level quality plans with details of the responsibilities and
actions that the project plan must define for all aspects of quality at the
project level. Note that the quality objectives of the project may be inherited
from the organizational-level objectives or may be defined separately for
the project.
Principles of TQM
General
1. Get to know the next and final customer.
2. Get to know the direct competitors.
3. Dedicate to continual, rapid improvement in quality, response time,
flexibility, and cost.
4. Achieve unified purpose via extensive sharing of information and
involvement in planning and implementation of change.
Operations
7. Cut flow time, distance, inventory, and space along the chain of
customers.
8. Cut setup, changeover, get ready, and start-up time.
9. Operate at the customer’s rate of use or a smoothed representation
of it.
Capacity
16. Maintain/improve present resources and human work before think-
ing about new equipment and automation.
17. Automate incrementally when process variability cannot otherwise
be reduced.
18. Seek to have workstations, machines, flow lines, and cells for each
product per customer family.
Develop commitment
Plan self-assessment
Communicate plans
Conduct self-assessment
SUMMARY
In this chapter, we have seen what quality assurance is, how to plan for
quality, and what the objectives of quality assurance are. It explains the
planning process and relates the business plan to the quality plan. Finally, it
discusses how total quality can be achieved and how the zero-defect movement
should occur.
ANSWERS
1. c. 2. b. 3. c. 4. a.
5. a. 6. a. 7. b. 8. a.
9. a. 10. b.
[Figure labels: exciting quality (satisfaction region), expected quality, and basic quality (dissatisfaction region).]
REVIEW QUESTIONS
1. The company maintains an information system for the purpose of
meeting customers' needs. How are these needs determined for
external customers? For internal customers?
2. “Software quality is not by accident.” Justify the statement.
3. How can we plan quality?
4. What is TQM? Explain TQM concepts.
5. What is the “Zero Defect” Movement?
6. Explain the different types of quality requirements.
7. How are testing and SQA related?
8. Assume you are responsible for developing a web-based application
and in that application you need to prepare the quality control (QC)
checklist for conducting a database testing. Mention the specific
items, responses, and comments that need to be incorporated in the
checklist.
9
Quality Standards
Inside this Chapter:
9.0. Introduction
9.1. Quality Models/Standards/Guidelines
9.2. Types of Models
9.3. ISO Standards
9.4. CMM and CMMI
9.5. Six Sigma Concepts
9.6. Quality Challenge
9.7. National Quality Awards
9.0. INTRODUCTION
Literature says that, unlike standardization in the communications field,
standardization in software development is viewed with mixed reactions.
Opponents (not in favor) say that standardization curtails individual
drive to be innovative. Proponents (in favor) say that standards reduce
the activity of reinventing the same or similar processes for development
and QA. Note that the repeatability of processes is the key benefit of this
standardization. Also note that repeatability reduces the cost of software
development and produces a base-quality level of software products.
ISO 9002 — Deals with the quality system model for QA in production.
NOTE
ISO 9003 — Deals with the quality system model for QA in testing products.
The relationship between quality factors and quality criteria is also shown in
Figure 9.1. It is clear from the figure that an arrow runs from each quality
criterion to the quality factor(s) it supports, with the factors grouped under
product operation, product revision, and product transition.
[FIGURE 9.1 labels — criteria and factors: completeness, correctness, consistency, accuracy, reliability, error tolerance, execution efficiency, efficiency, storage efficiency, access control, integrity, access audit, operability, training, usability, communicativeness, simplicity, conciseness, maintainability, instrumentation, self-descriptiveness, testability, expandability, flexibility, generality, portability, modularity, machine independence, interoperability, communication commonality, data commonality.]
[Figure labels — characteristics and sub-characteristics: suitability, accuracy, functionality, interoperability, security, maturity, recoverability, understandability, usability, learnability, operability, time behavior, efficiency, resource behavior, analyzability, changeability, maintainability, stability, testability, adaptability, installability, portability, conformance, replaceability.]
is amended. Thus, we can say that ISO 9000 is both self-correcting and a
learning system. It changes to reflect changing needs. Also, note that it is
known as QA rather than a traditional quality control-type system.
The final management note is the quality review. This is possible because
of the self-adjusting nature of the R & D QA system. The management
review is the engine for that process. Review should find out:
a. What information is needed to be sure that the quality policy is
being implemented?
b. What information is needed to decide whether the policy needs an
amendment?
c. How frequently these data need to be collected?
Note that it is vital to decide what is critical and genuinely indicative of
the health of the organization. The more data management asks for, the less
it will be able to make sense of the data. Also note that the cost of data col-
lection needs to be considered. The more management asks for data that are
not automatically collected as part of the day-to-day work, the more the cost
of the quality system will rise. The efficient and economic way to resolve this
is to scan the procedures once they have been written in order to identify
data that exists in the system.
NOTE As an organization moves from one level to the next, its process maturity
improves to produce better quality software at a lower cost.
CMM Model
Capability Maturity Model (CMM)
The capability maturity model (CMM) is a methodology used to develop and
refine an organization’s software development process.
The core principles of CMM include:
•• well-defined steps for performing each task.
•• proper training for each person to perform his or her job.
•• existence of management support for all efforts performed.
•• measurement of all steps performed.
Overview of CMM
The CMM is not a software life-cycle model. Instead, it is a strategy for
improving the software process, irrespective of the actual life-cycle model
used. The CMM was developed by the Software Engineering Institute (SEI)
in 1986. CMM is used to judge the maturity of the software processes of an
organization. The term maturity is a measure of the goodness of the process itself.
Level 5: Optimizing
Level 4: Managed
Level 3: Defined
Level 2: Repeatable
Level 1: Initial
FIGURE 9.4
2. The CMM also emphasizes the need to recover information for later
use in the process and for improvement of the process. This is equiv-
alent to the quality records of ISO 9001 that document whether or
not the required quality is achieved and whether or not the quality
system operates effectively.
CMM Applications
The KPAs of CMM, as discussed, must be satisfied by an organization to
be at a certain level of maturity. For example, for an organization to be at
level 3, the organization must meet all six KPAs associated with level 2. SEI
proposes two methods to evaluate the current capabilities of organizations—
internal assessments and external assessments.
CMMI
[Figure 9.5: source models such as CMM-SE and IPD-CMM integrated into the SEI-CMMI Model.]
From Figure 9.5, it is clear that each process area of the staged represen-
tation of the CMMI for development includes one or more specific goals.
Also, the CMMI for development has generic goals that are applicable to all
process areas. A goal is a required component that must be achieved to con-
sider the process area satisfied. Each goal includes one or more practices.
A practice is an expected component where each practice or an equivalent
alternative must be achieved to consider the processes are satisfied. This is
shown below:
Maturity level
Need of CMMI
1. Present day web applications are very complex. So, they use some
subsystems (third-party components) also. For example, a commu-
nication module may be purchased from some third-party vendor.
2. Many other components like RDBMS, messaging, security, and
real-time processing are part of bigger software systems.
Understand that the coexistence and interoperability of these differ-
ent components developed by different vendors is very important
for a project's success. Thus, there is a need to find out the maturity
level of an integrated product development process.
3. Complex software systems usually run on a specialized hardware and
a specialized OS. These need to be evaluated.
product then it is considered to have achieved Six Sigma level. Top notch
companies around the world generally are considered to be operating on
around 99% perfection which suggests that there can be further improve-
ment and reduction in defects hence helping the company increase prof-
itability. This improvement can save millions of dollars for the companies
which are wasted in various unnecessary processing and defects that lead to
customer dissatisfaction.
Six Sigma uses process data and analytical techniques in order to find
out various process variables. Once the process variables are obtained, they
help in developing the exact understanding of various processes. This under-
standing/data is then used to improve the processes and help in reducing
defects/losses in other areas of the organization.
Six Sigma stands for Six Standard Deviations (Sigma is the Greek let-
ter used to represent standard deviation in statistics) from the mean. Six
Sigma methodologies provide the techniques and tools to improve the capa-
bility and reduce the defects in any process. Six Sigma is a registered service
mark and trademark of Motorola, Inc. Motorola has reported over $17 bil-
lion in savings from Six Sigma as of 2006.
Six Sigma methodology can also be used to create a brand new business
process from the ground up using DFSS (Design For Six Sigma) principles.
Six Sigma strives for perfection. It allows for only 3.4 defects per million
opportunities for each product or service transaction. Six Sigma relies heavily
on statistical techniques to reduce defects and measure quality.
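The defects-per-million-opportunities (DPMO) figure behind this claim is a simple ratio: DPMO = defects / (units × opportunities per unit) × 1,000,000. The C sketch below evaluates it for assumed counts chosen only for illustration.

#include <stdio.h>

int main(void)
{
    /* Assumed counts for illustration.                                */
    double defects = 17;
    double units = 1000;
    double opportunities_per_unit = 5;

    double dpmo = defects / (units * opportunities_per_unit) * 1e6;

    /* A Six Sigma process allows only 3.4 defects per million
       opportunities, so this example process falls far short of it.   */
    printf("DPMO = %.1f\n", dpmo);    /* prints DPMO = 3400.0 */
    return 0;
}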
Historical Overview
Six Sigma was originally developed as a set of practices designed to improve
manufacturing processes and eliminate defects, but its application was sub-
sequently extended to other types of business processes as well. In Six Sigma,
a defect is defined as anything that could lead to customer dissatisfaction.
The particulars of the methodology were first formulated by Bill Smith
at Motorola in 1986. Six Sigma was heavily inspired by six preceding decades
of quality improvement methodologies such as quality control, TQM, and
Zero Defects, based on the work of pioneers such as Shewhart, Deming,
Juran, Ishikawa, Taguchi, and others.
Like its predecessors, Six Sigma asserts that:
1. Continuous efforts to achieve stable and predictable process results
(i.e., reduce process variation) are of vital importance to business
success.
[Figure: the Six Sigma improvement cycle — continuous improvement, process design/redesign, statistical analysis of process variance, and control, centered on 6σ.]
Software Requirements
Six Sigma software tools augment the implementation of the Six Sigma meth-
odology by complementing and sometimes substituting for human effort. Six
Sigma software tools fill in the vacuum of additional needs by companies
that are implementing the Six Sigma methodology.
Software developers, in accordance with the needs and demands of dif-
ferent businesses, have developed various Six Sigma software tool modules.
Some examples of Six Sigma software tools are as follows:
1. DMAIC Six Sigma - a process management tool
2. Design for Six Sigma or DFSS - a design tool
3. Quality improvement package - a quality control tool
4. Production management package - a process simulation tool
5. Project optimization and simulation - an analytical tool
6. Testing and measurement - a testing and control tool
The all-encompassing comprehensive Six Sigma software tool pack-
ages pack a lot of powerful features into them which help to speed up the
decision-making process and data mining, while dramatically simplifying
predictive modeling activities.
ANALYSIS TOOLS
1. iGrafx Process for Six Sigma
2. EngineRoom by MoreSteam
3. IBM WebSphere Business Modeler
4. JMP
5. Microsoft Visio
6. Minitab
HARDWARE REQUIREMENTS
Most of the Six Sigma software tools are available for both Mac and IBM
compatible PCs. The minimum system requirements are:
•• At least Pentium 386; but for most products - 1.0 GHz Pentium
processor
•• 256MB RAM
•• 1.0GB of free disk space
•• Graphics card (at least VGA or better is recommended)
•• Windows, several versions; depending on which product you buy
With Six Sigma software tools at your disposal, you can process a lot of
data, more than you ever could by hand. Artificial intelligence is used for
faster, more dependable project selection and analysis. Six Sigma software
tools also assist you in predicting future behaviors and tendencies. Six Sigma
software tools have finally come of age and are here to stay.
[FIGURE 9.8 labels: Measure — characterize the process; Improve — improve and verify the process; Control/Evaluate — maintain the new process.]
DMAIC
The five phases in the DMAIC project methodology are:
1. Define high-level project goals and the current process.
2. Measure key aspects of the current process and collect relevant data.
3. Analyze the data to verify cause-and-effect relationships. Determine
what the relationships are and attempt to ensure that all factors have
been considered.
4. Improve or optimize the process based on data analysis using tech-
niques like design of experiments.
5. Control to ensure that any deviations from the target are corrected
before they result in defects. Set-up pilot runs to establish process
capability, move on to production, set up control mechanisms, and
continuously monitor the process.
DMADV
The five phases in the DMADV project methodology are:
1. Define design goals that are consistent with customer demands and
the enterprise strategy.
2. Measure and identify CTQs (characteristics that are Critical To Quality),
product capabilities, production process capability, and risks.
3. Analyze to develop and design alternatives, create a high-level design,
and evaluate design capability to select the best design.
4. Design details, optimize the design, and plan for design verification.
5. Verify the design, set up pilot runs, implement the production process,
and hand it over to the process owner(s).
DMAIC
Define: Define the project goals and customer (internal and external) deliverables.
Measure: Measure the process to determine current performance.
Analyze: Analyze and determine the root cause(s) of the defects.
Improve: Improve the process by eliminating defects.
Control: Control future process performance.
IMPLEMENTATION ROLES
One of the key innovations of Six Sigma is the professionalizing of quality
management functions. Prior to Six Sigma, quality management in practice
was largely relegated to the production floor and to statisticians in a separate
quality department. Six Sigma borrows martial arts ranking terminology to
define a hierarchy (and career path) that cuts across all business functions
and a promotion path straight into the executive suite.
Six Sigma identifies several key roles for its successful implementation.
1. Executive Leadership includes the CEO and other members of top
management. They are responsible for setting up a vision for Six
Sigma implementation. They also empower the other role holders
with the freedom and resources to explore new ideas for break-
through improvements.
2. Champions are responsible for Six Sigma implementation across
the organization in an integrated manner. The Executive Leader-
ship draws them from upper management. Champions also act as
mentors to Black Belts.
3. Master Black Belts, identified by Champions, act as in-house coaches
on Six Sigma. They devote 100% of their time to Six Sigma. They
assist Champions and guide Black Belts and Green Belts. Apart from
statistical tasks, their time is spent on ensuring consistent application
of Six Sigma across various functions and departments.
4. Black Belts operate under Master Black Belts to apply Six Sigma
methodology to specific projects. They devote 100% of their time
to Six Sigma. They primarily focus on Six Sigma project execution,
whereas Champions and Master Black Belts focus on identifying
projects/ functions for Six Sigma.
5. Green Belts are the employees who take up Six Sigma implementa-
tion along with their other job responsibilities. They operate under
the guidance of Black Belts.
FIGURE 9.9 A standard normal distribution N(0, 1) with mean μ = 0 and standard deviation σ = 1; the horizontal axis is marked in steps of one σ from -6 to +6.
The term “Six Sigma process” comes from the notion that if one has six
standard deviations between the process mean and the nearest specification
limit, as shown in Figure 9.9, there will be practically no items that fail to
meet specifications. This is based on the calculation method employed in
process capability studies.
In a capability study, the number of standard deviations between the
process mean and the nearest specification limit is given in sigma units. As
process standard deviation goes up, or the mean of the process moves away
from the center of the tolerance, fewer standard deviations will fit between
the mean and the nearest specification limit, decreasing the sigma number
and increasing the likelihood of items outside specification.
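As a rough illustration of this capability calculation, the sketch below expresses the distance from the process mean to the nearest specification limit in sigma units; the example values are assumptions for illustration, not figures from the text.

# A small sketch (assumed example values) of the capability calculation
# described above: the number of standard deviations between the process
# mean and the nearest specification limit, expressed in sigma units.
def sigma_level(mean, std_dev, lsl, usl):
    # Distance from the mean to the nearest specification limit, divided by sigma.
    return min(usl - mean, mean - lsl) / std_dev

# Hypothetical process: centered at 100, spec limits 88 and 112, sigma = 2.
print(sigma_level(mean=100.0, std_dev=2.0, lsl=88.0, usl=112.0))   # prints 6.0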
SIGMA LEVELS
The table below gives long-term DPMO values corresponding to various
short-term sigma levels:

Sigma level    DPMO        Percent defective
1              691,462     69%
2              308,538     31%
3              66,807      6.7%
4              6,210       0.62%
5              233         0.023%
6              3.4         0.00034%
Note that these figures assume that the process mean will shift by
1.5 sigma towards the side with the critical specification limit. In other
words, they assume that after the initial study determining the short-term
sigma level, the long-term Cpk value will turn out to be 0.5 less than the
short-term Cpk value. So, for example, the DPMO figure given for 1 sigma
assumes that the long-term process mean will be 0.5 sigma beyond the
specification limit (Cpk = –0.17), rather than 1 sigma within it, as it was in
the short-term study (Cpk = 0.33). Note that the defect percentages only
indicate defects exceeding the specification limit that the process mean is
nearest to. Defects beyond the far specification limit are not included in the
percentages.
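As a hedged sketch of where these DPMO figures come from, the following code assumes the conventional 1.5-sigma shift described above and counts only defects beyond the nearest specification limit; the function name and structure are illustrative, not taken from the text.

# A minimal sketch: long-term DPMO derived from a short-term sigma level,
# assuming a 1.5-sigma shift toward the critical specification limit.
from math import erfc, sqrt

def dpmo(short_term_sigma, shift=1.5):
    # One-sided tail probability P(Z > sigma - shift) for a standard normal,
    # scaled to defects per million opportunities.
    z = short_term_sigma - shift
    return 1_000_000 * 0.5 * erfc(z / sqrt(2))

for level in range(1, 7):
    print(level, round(dpmo(level), 1))   # 6 sigma gives about 3.4 DPMO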
FIGURE 9.10 (flowchart fragment: once a process reaches the end of the cycle, move on to the next highest-priority process).
Success stories of Six Sigma training are evident in all fields of business.
Because the Six Sigma methodology encompasses the entire process of doing
business, it is likely to expose a flaw here or there, as companies that
embraced Six Sigma have found out. However small in number the failures may
be, they arise for differing reasons; any negative results can usually be
traced back to either improper implementation or incomplete Six Sigma training.
Lack of Originality
Noted quality expert Joseph M. Juran has described Six Sigma as “a basic
version of quality improvement,” stating that “[t]here is nothing new there.
It includes what we used to call facilitators. They’ve adopted more flamboy-
ant terms, like belts with different colors. I think that concept has merit to
set apart, to create specialists who can be very helpful. Again, that’s not a
new idea. The American Society for Quality long ago established certificates,
such as for reliability engineers.”
Role of Consultants
The use of “Black Belts” as itinerant change agents is controversial as it has
created a cottage industry of training and certification. Critics argue there
is overselling of Six Sigma by too great a number of consulting firms, many
of which claim expertise in Six Sigma when they only have a rudimentary
understanding of the tools and techniques involved.
The expansion of the various “Belts” to include “Green Belts,” “Master
Black Belts,” and “Gold Belts” is commonly seen as a parallel to the various
“belt factories” that exist in martial arts.
industrial average of 55%. The metrics are used throughout the life cycle of
the product, in production as well as in software development. The need for
and importance of reducing defects through the adoption of Six Sigma has
helped companies like Wipro to sustain SEI CMM Level 5.
In the pharmaceutical industry, adoption of the Six Sigma technique
helped the industry reduce the wastage and rework involved in production.
It was said that 5–10% of medicines produced during a period had to be
discarded or modified due to defects. The adoption of Six Sigma helped the
pharmaceutical companies reduce errors in production. This success prompted
Pfizer to pursue the Pfizer Global Manufacturing mission of aiming at zero
defects through Right First Time. Right First Time, or RFT, is a technique
adopted by Pfizer for its core processes to assure the quality of its
products and customer services the first time.
The airline industry had to adopt the Six Sigma metrics for its survival.
The increased cost of fuel and the competition driven by low-budget airlines
made lowering costs without compromising quality the need of the hour. The
number of errors in handling customer calls and ticketing had to be minimized
drastically. It was with this intention that the airline industry adopted
Six Sigma. Indian companies like Kingfisher and Jet Airways have all adopted
Six Sigma techniques.
Hospitality is another industry that benefited from the adoption of Six
Sigma techniques. Providing personalized service to each and every customer,
bending to their demands within a limited time without compromising quality,
was aided by the Six Sigma metrics. The technique is applied in every area,
from maintaining full occupancy to efficient housekeeping, ensuring a
balanced inventory supply, and minimizing wastage in the inventory. Starwood
Hotels and Resorts Inc. was the first company to adopt Six Sigma in the
hospitality sector.
Steel companies like TISCO use this technique to minimize inadequacies in
design, imperfect products, and so on. In 1998, Forbes Magazine applauded
the Mumbai Tiffin Suppliers Association, the Mumbai Dabbawallahs, for their
way of functioning with just one error in every 8 million. Logistics,
insurance, and call centers all embrace Six Sigma techniques to improve the
quality of the service they provide.
Irrespective of the type of industry, all companies have to adopt Six
Sigma techniques as quality and timely delivery are crucial for their survival.
TABLE 9.1 A Comparison of the House of Total Quality with the Baldrige Categories and Deming Principles

The Roof: Management System
  House of Total Quality: 1. Systems, process  2. Leadership  3. Strategy  4. Mission, vision, values
  Baldrige Categories (SYSTEM): 1.0 Leadership; 3.0 Strategic quality planning (long-range planning)
  Deming Principles: 1. Publish the aims and purpose of the organization; 2. Learn the new philosophy; 7. Teach and institute leadership

Social System
  House of Total Quality: 1. Structure  2. Social norms  3. Teams  4. Organizational personality
  Baldrige Categories: 4.0 Human resources development and management (employee development, partnership development, cross-functional teams)
  Deming Principles: 14. Take action to accomplish the transformation (the hierarchical style of management must change; transformation can only be accomplished by people, not hardware)

(Continued)
SUMMARY
The implementation and use of software quality standards is as important as
software development itself. Quality assurance is an umbrella activity, i.e.,
it must be dealt with during the entire SDLC. Better standards, tools,
metrics, and quality models are still needed and remain open areas for
researchers. This chapter focuses on all of these aspects.
ANSWERS
1. b. 2. a. 3. c. 4. b.
5. b. 6. b. 7. b. 8. b.
9. a. 10. b. 11. a. 12. a.
13. a. 14. a. 15. a.
REVIEW QUESTIONS
1. What is quality modeling?
2. “If a software system is understandable it would definitely be
maintainable also.” Justify this statement.
3. Distinguish between quality control and quality assurance.
4. What testing processes are likely to enhance software quality?
5. Give Dr. Deming’s and Philip Crosby’s definitions of quality.
6. Give a quality definition as per international standards.
7. Compare Boehm’s and ISO 9126 quality models.
8. Explain CMM. How is CMMI different?
9. What are the factors considered for a quality software product?
10. What is Six Sigma quality? What are its organization structures?
10
Reducing the Number of
Test Cases
Inside this Chapter:
10.0. Prioritization Guidelines
10.1. Priority Category Scheme
10.2. Risk Analysis
10.3. Regression Testing—Overview
10.4. Prioritization of Test Cases for Regression Testing
10.5. Regression Testing Technique—A Case Study
10.6. Slice Based Testing
There are four schemes that are used for prioritizing the existing set of
test cases. These reduction schemes are as follows:
1. Priority category scheme
2. Risk analysis
3. Interviewing to find out problematic areas
4. Combination schemes
All of these reduction methods are independent. No one method is bet-
ter than the other. One method may be used in conjunction with another
one. It raises confidence when different prioritization schemes yield similar
conclusions.
We will discuss these techniques now.
Problem ID    Potential problem (ri)        Probability of occurrence (li)    Impact of risk (xi)    Risk exposure = li * xi
A             Loss of power                 1                                 10                     10
B             Corrupt file header           2                                 1                      2
C             Unauthorized access           6                                 8                      48
D             Databases not synchronized    3                                 5                      15
E             Unclear user documentation    9                                 1                      9
F             Lost sales                    1                                 8                      8
G             Slow throughput               5                                 3                      15
:             :                             :                                 :                      :
FIGURE 10.1 Risk Analysis Table (RAT).
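As a small illustration (not the book's code), the sketch below computes the risk exposure column of Figure 10.1 and ranks the potential problems by exposure; the identifiers and values mirror the figure.

# Risk exposure = probability of occurrence (li) * impact (xi), per Figure 10.1.
risks = {
    "A": ("Loss of power", 1, 10),
    "B": ("Corrupt file header", 2, 1),
    "C": ("Unauthorized access", 6, 8),
    "D": ("Databases not synchronized", 3, 5),
    "E": ("Unclear user documentation", 9, 1),
    "F": ("Lost sales", 1, 8),
    "G": ("Slow throughput", 5, 3),
}

# Rank problems by exposure so the riskiest areas receive test cases first.
by_exposure = sorted(risks.items(), key=lambda kv: kv[1][1] * kv[1][2], reverse=True)
for pid, (name, li, xi) in by_exposure:
    print(pid, name, li * xi)   # C (48) comes first, then D and G (15), ...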
We can see from the graph of Figure 10.2 that a risk with high severity is
deemed more important than a problem with high probability. Thus, all risks
mapped in the upper-left quadrant fall into priority 2.
For example, risk E, which has a high probability of occurrence but a low
severity of impact, is put under priority 3.
Method II: For an entirely different application, we may swap the defini-
tions of priorities 2 and 3, as shown in Figure 10.3.
An organization favoring Figure 10.3 seeks to minimize the total number
of defects by focusing on problems with a high probability of occurrence.
Dividing a risk matrix into quadrants is most common, but testers can
determine the thresholds using different types of boundaries based on
application-specific needs.
Method III: Diagonal band prioritization scheme.
If severity and probability are given equal weight, then a diagonal band
prioritization scheme may be more appropriate. This is shown in Figure 10.4.
This threshold pattern is a compromise for those who have difficulty in
selecting between priority 2 and priority 3 in the quadrant scheme.
FIGURE 10.4 Method III.
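A hedged sketch of the two threshold styles follows; the midpoint of 5 and the exposure band cut-offs are assumptions chosen for illustration, not values taken from Figures 10.2 through 10.4.

def quadrant_priority(probability, impact, mid=5):
    # Method I style: high impact outranks high probability.
    if impact >= mid and probability >= mid:
        return 1
    if impact >= mid:
        return 2          # high impact, low probability
    if probability >= mid:
        return 3          # high probability, low impact
    return 4

def diagonal_band_priority(probability, impact):
    # Method III style: probability and impact weighted equally, with bands
    # along the diagonal based on the exposure value.
    exposure = probability * impact
    if exposure >= 50:
        return 1
    if exposure >= 25:
        return 2
    if exposure >= 10:
        return 3
    return 4

print(quadrant_priority(9, 1), diagonal_band_priority(9, 1))  # risk E: 3 and 4 with these cut-offs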
A firewall is used to separate the set of modules affected by program changes
from the rest of the code. The modules enclosed in the firewall could be
those that interact with the modified modules or those that are direct
ancestors or direct descendants of the modified modules.
The firewall concept is simple and easy to use, especially when the
change to a program is small. By retesting only the modules and interfaces
inside the firewall, the cost of regression integration testing can be reduced.
FIGURE 10.6
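As a rough sketch of the firewall idea, the following selects the changed modules plus their direct callers and callees for retesting; the dependency graph and module names are hypothetical, not taken from the figure.

calls = {                     # caller -> callees (hypothetical module graph)
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D", "E"],
    "D": [],
    "E": [],
}

def firewall(changed):
    inside = set(changed)
    for parent, children in calls.items():
        for child in children:
            if child in changed:
                inside.add(parent)        # direct ancestor of a changed module
            if parent in changed:
                inside.add(child)         # direct descendant of a changed module
    return inside

print(sorted(firewall({"D"})))   # ['B', 'C', 'D'] -- only these need regression tests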
Test setup means the process by which AUT (application under test) is
placed in its intended or simulated environment and is ready to receive data
and output the required information. Test setup becomes more challenging
when we test embedded software like in ATMs, printers, mobiles, etc.
The sequence in which tests are input to an application is an important
issue. Test sequencing is very important for applications that have an
internal state and run continuously, for example, online banking software.
We then execute the test cases. Each test needs verification. This can
also be done automatically with the help of CASE tools. These tools com-
pare the expected and observed outputs. Some of the tools are:
a. Test Tube (by AT&T Bell Labs.) in 1994: This tool can do
selective retesting of functions. It supports C.
b. Echelon (by Microsoft) in 2002: No selective retesting but does
test prioritization. It uses basic blocks to test. It supports C and
binary languages.
c. ATACLx Suds (by Telcordia Technologies) in 1992: It does
selective retesting. It allows test prioritization and minimization. It
does control/data flow coverage. It also supports C.
Static slicing may lead to an unduly large program slice. So, Korel and Laski
proposed a method for obtaining dynamic slices from program executions.
They used a method to extract executable and smaller slices and to allow
more precise handling of arrays and other structures. So, we discuss dynamic
slicing.
Let “P” be the program under test and “t” be a test case against which P
has been executed. Let “l” be a location in P where variable v is used. Now,
the dynamic slice of P with respect to “t” and “v” is the set of statements
in P that lie in trace(t) and affected the value of “v” at “l.” So, the
dynamic slice is empty if location “l” was not traversed during this
execution. Please note that the notion of a dynamic slice grew out of that
of a static slice, which is based on program “P” alone and not on its execution.
Let us solve an example now.
EXAMPLE 10.1. Consider the following program:
1. main ( ) {
2. int p, q, r, z;
3. z = 0;
4. read (p, q, r);
5. if (p < q)
6. z = 1; //modified z
7. if (r < 1)
8. z = 2;
9. output (z);
10. end
11. }
Test case (t1): <p = 1, q = 3, r = 2>. What will be the dynamic slice of P
with respect to variable “z” at line 9? What will be its static slice? What
can you infer? If t2: <p = 1, q = 0, r = 0>, then what will be the dynamic
and static slices?
SOLUTION. Let us draw the flow graph of the program first, as shown in Figure 10.7.
FIGURE 10.7
Therefore, the dynamic slice of P with respect to variable z at line 9 is
td = <4, 5, 7, 8> and the static slice is ts = <3, 4, 5, 6, 7, 8>.
NOTE: The dynamic slice for any variable is generally smaller than the corresponding static slice.
NOTE: A dynamic slice contains all statements in trace(t) that had an effect on the program output.
Inferences made:
1. A dynamic slice can be constructed based on any program variable that is
used at some location in P, the program that is being modified.
2. Some programs may have several locations and variables of interest at
which to compute the dynamic slice. In that case, we compute slices for all
such variables at their corresponding locations and then take the union of
all the slices to create a combined dynamic slice (see the sketch after this
list). This approach is useful for regression testing of relatively small
components.
3. If a program is large, then a tester needs to find the critical locations
that contain one or more variables of interest and build dynamic slices on
those variables.
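A tiny sketch of inference 2: compute a dynamic slice for each (variable, location) pair of interest and take the union to form the combined slice. The slice for z at line 9 is the one from Example 10.1; the second pair and its slice are hypothetical.

slices = {
    ("z", 9): {4, 5, 7, 8},   # from Example 10.1
    ("p", 5): {4},            # hypothetical second slice
}
combined = set().union(*slices.values())   # union of all the individual slices
print(sorted(combined))                    # [4, 5, 7, 8]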
SUMMARY
Regression testing is used to confirm that fixed bugs have, in fact, been fixed
and that new bugs have not been introduced in the process and that features
that were proven correctly functional are intact. Depending on the size of a
project, cycles of regression testing may be performed once per milestone
or once per build. Some bug regression testing may also be performed dur-
ing each acceptance test cycle, focusing on only the most important bugs.
Regression tests can be automated.
ANSWERS
1. a. 2. d. 3. b. 4. a.
5. c. 6. a. 7. b. 8. c.
9. c. 10. a.
REVIEW QUESTIONS
11
Test Management and
Automation
Inside this Chapter:
11.0. Automated Testing
11.1. Consideration During Automated Testing
11.2. Static and Dynamic Analysis Tools
11.3. Problems with Manual Testing
11.4. Scope of Automated Testing
11.5. Disadvantages of Automated Testing
11.6. Testing Tools
11.7. Test Automation: “No Silver Bullet”
11.8. Testing and Debugging
11.9. Criteria for Selection of Test Tools
11.10. Design and Architecture for Automation
11.11. Characteristics of Modern Tools
11.12. Case Study on Automated Tools, Namely, Rational Robot,
WinRunner, Silk Test, and Load Runner
testing tools. Automated testing tools assist software testers to evaluate the
quality of the software by automating the mechanical aspects of the software
testing task. Automated testing tools vary in their underlying approach, qual-
ity, and ease of use.
Manual testing is used to document tests, produce test guides based on
data queries, provide temporary structures to help run tests, and measure
the result of the test. Manual testing is considered to be costly and time-
consuming. Therefore, automated testing is used to cut down time and cost.
(B) Dynamic test tools: These tools test the software system with “live”
data. Dynamic test tools include the following:
a. Test driver: It inputs data into a module-under-test (MUT); a small sketch of this idea follows the list.
b. Test beds: It simultaneously displays source code along with the
program under execution.
c. Emulators: The response facilities are used to emulate parts of
the system not yet developed.
d. Mutation analyzers: The errors are deliberately “fed” into the
code in order to test the fault tolerance of the system.
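The following is a minimal, hypothetical test driver sketch: it feeds input data into a module-under-test and checks the observed output against the expected output. The function under test and the data set are illustrative only, not taken from the text.

def mut_add(a, b):              # stand-in for the real module under test
    return a + b

test_data = [                   # (inputs, expected output)
    ((2, 3), 5),
    ((-1, 1), 0),
    ((0, 0), 0),
]

def run_driver():
    for args, expected in test_data:
        observed = mut_add(*args)
        verdict = "PASS" if observed == expected else "FAIL"
        print(f"{args} -> {observed} (expected {expected}): {verdict}")

run_driver()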
Note that, as the number of errors increases, the amount of effort needed to
find their causes also increases.
Once errors are identified in a software system, a number of steps are
followed to debug the problem:
Step 1. Identify the errors.
Step 2. Design the error report.
Step 3. Analyze the errors.
Step 4. Debugging tools are used.
Step 5. Fix the errors.
Step 6. Retest the software.
After the corrections are made, the software is retested using regression
tests so that no new errors are introduced during the debugging process.
Note that debugging is an integral component of the software testing
process. Debugging occurs as a consequence of successful testing, i.e.,
testing that reveals bugs in the software-under-test (SUT). When a test case
uncovers an error, debugging is the process that results in the removal of
the bugs. Also note that debugging is not testing, but it always occurs as a
consequence of testing. The debugging process begins with the execution of
a test case. This is shown in Figure 11.2.
The debugging process attempts to match the symptom with the cause
thereby leading to error correction. The purpose of debugging is to locate
and fix the offending code responsible for a symptom violating a known
specification.
Debugging Approaches
Several approaches have been discussed in literature for debugging
software-under-test (SUT). Some of them are discussed below.
1. Brute force method: This method is the most common and the least
efficient way of isolating the cause of a software error. We apply this
method when all else fails. In this method, a printout of all registers
and relevant memory locations is obtained and studied. All dumps
should be well documented and retained for possible use on subse-
quent problems.
2. Back tracking method: It is a fairly common debugging approach
that can be used successfully in small programs. Beginning at the
site where a symptom has been uncovered, the source code is traced
backward until the site of the cause is found. Unfortunately, as the
number of source lines increases, the number of potential backward
paths may become unmanageably large.
1. Meeting Requirements
a. There are many tools available in the market today but rarely do they
meet all of the requirements of a given product or a given organiza-
tion. Evaluating different tools for different requirements involves a
lot of effort, money, and time. A huge delay is involved in selecting
and implementing test tools.
b. Test tools may not provide backward or forward compatibility with
the product-under-test (PUT).
c. Test tools may not go through the same amount of evaluation for
new requirements. Some tools had a Y2K-problem also.
d. A number of test tools cannot distinguish between a product failure
and a test failure. This increases analysis time and manual testing.
The test tools may not provide the required amount of trouble-
shooting/debug/error messages to help in analysis. For example, in
the case of GUI testing, the test tools may determine the results based
on messages and screen coordinates at run-time. So, if the screen
elements of the product are changed, the test suite needs to be
modified accordingly.
2. Technology Expectations
a. In general, test tools may not allow test developers to extend/modify
the functionality of the framework. So, it involves going back to the
tool vendor with additional cost and effort. Very few tools available
in the market provide source code for extending functionality or fix-
ing some problems. Extensibility and customization are important
expectations of a test tool.
b. A good number of test tools require their libraries to be linked with
product binaries. When these libraries are linked with the source
code of the product, it is called “instrumented code.” This causes
portions of testing to be repeated after those libraries are removed,
as the results of certain types of testing will be different, and better,
when those libraries are removed. For example, instrumented code has a
major impact on performance testing because the test tools introduce
additional code and there can be a delay in executing that additional code.
c. Finally, test tools are not 100% cross-platform. They are supported
only on some OS platforms and the scripts generated from these
tools may not be compatible on other platforms. Moreover, many of
the test tools are capable of testing only the product, not the impact
of the product/test tool to the system or network. When there is an
impact analysis of the product on the network or system, the first
suspect is the test tool and it is uninstalled when such analysis starts.
3. Training Skills
Test tools require plenty of training, but very few vendors provide the training
to the required level. Organization-level training is needed to deploy the
test tools, as the users of the test suite are not only the test team but also
the development team and other areas like SCM (software configuration
management). Test tools expect the users to learn new languages/scripts and
may not use standard languages/scripts. This increases the skill requirements
for automation and increases the need for a learning curve inside the
organization.
4. Management Aspects
A test tool increases the system requirements and may require the hardware
and software to be upgraded. This adds to the cost of the already-expensive
test tool. When selecting a test tool, it is important to note its system
requirements, and the cost involved in upgrading the software and hardware
needs to be included with the cost of the tool. Migrating from one test tool
to another may be difficult and requires a lot of effort, not only because a
test suite written for one tool cannot be used with other test tools, but
also because of the cost involved. As the tools are expensive, changing
tools is generally not permitted unless management feels that the return on
investment (ROI) is justified.
Deploying a test tool requires as much effort as deploying a product in
a company. However, due to project pressures, the effort needed to deploy
test tools gets diluted or not spent at all. This later becomes one of the
reasons for delay or for automation not meeting expectations. The support
available for the tool is another important point to be considered while
selecting and deploying the test tool.
SUMMARY
Testing is an expensive and laborious phase of the software process. As a
result, testing tools were among the first software tools to be developed.
These tools now offer a range of facilities, and their use significantly
reduces the cost of the testing process. Different testing tools may be
integrated into a testing workbench.
These tools are:
1. Test manager: It manages the running of program tests. It keeps
track of test data, expected results, and program facilities tested.
2. Test data generator: It generates test data for the program to
be tested. This may be accomplished by selecting data from a
database.
3. Oracle: It generates predictions of expected test results.
4. File comparator: It compares the results of program tests with
previous test results and reports differences between them (a small
sketch follows this list).
5. Report generator: It provides report definition and generation
facilities for test results.
6. Dynamic analyzer: It adds code to a program to count the number
of times each statement has been executed. After the tests have
been run, an execution profile is generated showing how often each
program statement has been executed.
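A hedged sketch of the file comparator idea from item 4 above: compare the current test results with previously recorded results and report differences. The file names are hypothetical placeholders.

import difflib

def compare_results(previous_path="expected_results.txt",
                    current_path="current_results.txt"):
    with open(previous_path) as f1, open(current_path) as f2:
        previous, current = f1.readlines(), f2.readlines()
    diff = list(difflib.unified_diff(previous, current,
                                     fromfile=previous_path, tofile=current_path))
    if not diff:
        print("No differences: results match the previous run.")
    else:
        print("".join(diff))      # report every mismatching line

# compare_results()   # run after a test execution has produced current_results.txt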
Testing workbenches invariably have to be adapted to suit the test plan
of each system. A significant amount of effort and time is usually needed to
create a comprehensive testing workbench. Most testing workbenches are
open systems because testing needs are organization-specific.
Automation makes life easier for testers, but it is not a “silver bullet.”
It helps through better reproduction of test results, better coverage, and a
reduction in effort. With automation we can produce better and more effective
metrics that help in understanding the state of health of a product in a
quantifiable way, thus taking us to the next stage.
ANSWERS
1. a. 2. e. 3. b. 4. a.
5. d. 6. a. 7. a. 8. b.
9. a. 10. a.
REVIEW QUESTIONS
1. Answer the following:
a. What is debugging?
b. What are different approaches to debugging?
c. Why is exhaustive testing not possible?
2. Explain the following:
a. Modern testing tools.
12
A Case Study on
Testing of E-Learning
Management Systems
Abstract
Software testing is the process of executing a program or system with the
intent of finding errors. It involves any activity aimed at evaluating an attrib-
ute or capability of a program or system and determining that it meets its
required results. To deliver successful software products, quality has to be
ensured in each and every phase of a development process. Whatever the
organizational structure may be, the most important point is that the output
of each phase should be of very high quality. The SQA team is responsible
for ensuring that the entire development team follows a quality-oriented
process. Any modifications to the system should be thoroughly tested to ensure
that no new problems are introduced and that the operational performance
is not degraded due to the changes. The goal of testing is to determine and
ensure that the system functions properly beyond the expected maximum
workload. Additionally, testing evaluates the performance characteristics
like response times, transaction rates, and other time sensitive issues.
Chapter One
Introduction
NIIT Technologies is a global IT and business process management services
provider with a footprint that spans 14 countries across the world. It has been
working with global corporations in the USA, Europe, Japan, Asia Pacific,
and India for over two decades. NIIT Technologies provides independent
validation and verification services for your high-performance applications.
Their testing services help organizations leverage their experience in testing
to significantly reduce or eliminate functional, performance, quality, and reli-
ability issues. NIIT Technologies helps enterprises and organizations make
their software development activities more successful and finish projects in
time and on budget by providing systematic software quality assurance.
The government of India Tax Return Preparers scheme to train unem-
ployed and partially employed persons to assist small and medium taxpay-
ers in preparing their returns of income has now entered its second phase.
During its launch year, on a pilot basis, close to 5,000 TRPs at 100 centers in
around 80 cities across the country were trained. 3737 TRPs were certified
by the Income Tax Department to act as Tax Return Preparers who assisted
various people in filing their IT returns. The government has now decided
to increase their area of operations by including training on TDS returns and
service tax returns to these TRPs. The quality assurance and testing team of
NIIT, which is constantly engaged in testing and maintaining product quality,
has to test online learning content management websites such as
www.trpscheme.com in the following manner:
Functional and regression testing
System testing: Load/stress testing, compatibility testing
Full life cycle testing
Chapter Two
Software Requirement Specifications
2.1. INTRODUCTION
This document aims at defining the overall software requirements for “testing
of an online learning management system (www.trpscheme.com).” Efforts
have been made to define the requirements exhaustively and accurately.
2.1.1. Purpose
This document describes the functions and capabilities that will be pro-
vided by the website, www.trpscheme.com. The resource center will be
responsible for the day-to-day administration of the scheme. The functions
of the resource center will include specifying the curriculum and all other
matters relating to the training of the Tax Return Preparers, maintaining
the particulars relating to the Tax Return Preparers, and any other function
that is assigned to it by the Board for the purposes of implementing the scheme.
2.1.2. Scope
The testing of the resource center section for service tax is done manually
mainly using functional and regression testing. Other forms of testing may
also be used such as integration testing, load testing, installation testing, etc.
Sites
http://en.wikipedia.org/Software_testing
2.1.5. Overview
The rest of the SRS document describes the various system requirements,
interfaces, features, and functionalities in detail.
FIGURE 2.1
2.2.1.5. Communication interfaces
The application should support the following communication protocols:
1. Http
2. Proxy server: In computer networks, a proxy server is a server
(a computer system or an application program) that acts as a go-between
for requests from clients seeking resources from other servers.
2.2.1.6. Memory constraints
At least 512 MB RAM and 2 GB hard disk will be required.
2.2.3. User Characteristics
Education level: The user should be able to understand one of the languages
of the browser (English, Hindi, or Telugu). The user must also have a basic
knowledge of tax return and payment rules and regulations.
Technical expertise: The user should be comfortable using general-
purpose applications on a computer.
2.2.4. Constraints
Differences in monitor sizes and aspect ratios, and in color versus
black-and-white monitors, make it virtually impossible to design pages that
look good on all device types. Font sizes and colors need to be changeable
to fit the requirements of sight-impaired viewers.
When the STRPs click on the login button of resource center service tax
on TRPscheme.com, the following page will be displayed.
Homepage
When the STRP logs in by user id and password, the homepage is displayed.
The homepage has “Reported by STRP” menu on the left under which the
user will see two links, “Return Filed” and “Monthly/Quarterly Tax Paid
Form.” The user will also see two report links, “Return Filed” and “Monthly/
Quarterly Tax Paid Report” under the “Reports” menu. A message “This
is the Resource Center site. By using this site you can get information on
Resource Center Service Tax” also appears on the homepage.
Form validation: This form will require validation of the data: none of the
mandatory fields can be left blank, and the “STC Code” must be filled in,
otherwise the form will not be submitted. Fields such as “Amount of Tax
Payable,” “Amount of Tax Paid,” and “Interest Paid” will accept only numeric
values.
To complete the form, the user must fill out the following fields. All of
the fields in this form are mandatory.
•• Name of Assessee
•• STC Code
•• Period
•• Monthly/Quarterly
•• Month
•• Amount of Tax Payable
•• Amount of Tax Paid
The field format/length for the STC Code will be as follows: positions 1-5
alphabetic, 6-9 numeric, 10 alphabetic, 11-12 the literal “ST,” and 13-15 numeric.
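A hedged sketch of this format check follows: 15 characters, with the positional rules just described. Uppercase letters are assumed, since the test cases in Chapter Five reject lowercase codes; the helper name is illustrative, not part of the site's code.

import re

STC_PATTERN = re.compile(r"[A-Z]{5}\d{4}[A-Z]ST\d{3}")

def is_valid_stc(code):
    # True only if the whole string matches the 15-character STC template.
    return bool(STC_PATTERN.fullmatch(code))

print(is_valid_stc("ABCDE1234FST123"))   # True  - matches the template
print(is_valid_stc("ASZDF2345GST87"))    # False - only 14 characters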
The “Month” drop-down list will be populated based on the “Period” and
“Monthly/Quarterly” selections. If the user has selected a “Period” of
April 1 through Sept 30, 2009 and “Monthly” in the “Monthly/Quarterly”
drop down, then he or she will see April, May, June, July, August, and
September in the “Month” drop down. If the TRP has selected “Quarterly” in
the “Monthly/Quarterly” drop down, then the drop down will show Apr_May_June
and July_Aug_Sep.
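A small sketch (an assumed helper, not the site's code) of the drop-down population rule just described for the April 1 through September 30 period:

def month_options(period, frequency):
    if period == "April 1st - Sept 30th":
        if frequency == "Monthly":
            return ["April", "May", "June", "July", "August", "September"]
        if frequency == "Quarterly":
            return ["Apr_May_June", "July_Aug_Sep"]
    return []   # the October - March period would be handled analogously

print(month_options("April 1st - Sept 30th", "Quarterly"))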
The STRP can fill in the details for the same STC code, period, and month
only once.
Report to view Monthly/Quarterly form data
This report will allow STRPs to view Monthly/Quarterly Tax Paid form data
and to generate a report of the data. STRPs will generate reports in HTML
format and will also be able to export them into Excel format.
To view the report data, the STRP is required to provide the “Period” in
the given fields, which are mandatory.
The STRP can also use the other field, STC Code, to generate the report.
The user must click on the “Generate Report” button to view the report
in HTML format or on “Export to Excel” if he or she wants to export the
data into Excel format.
The “Cancel” button will take the user to the login page.
2.5 Service Wise Report (Admin Report)
This report will allow the admin to generate a Report Service Wise of STRPs.
This report will be generated in HTML format as well as in Excel format.
Validations:
This page should contain a “Service Type” drop down and “Date from”
and “To” textboxes.
To view the Service Wise Report data the admin can select multiple ser-
vices from the “Service Type” list box and the data for those services will
be populated. “Service Type” will be a mandatory field so the user has to
select at least one service to view the report data.
The user must click on the “Generate Report” button to view the report
in HTML format or on “Export to Excel” if he or she wants to export the
data into Excel format.
The TRP id, TRP name, and service type will also be provided in the
Excel sheet.
The “Cancel” button will take the user to the login page.
The user needs to fill in both the “Date from” and “To” fields. “Date
from” and “To” will extract the data based on “Date of Filling Return.”
STRPs Wise Report (Admin Report)
This report will allow the admin to search the data of the STRPs and will
be able to generate a report of the data. The admin will generate reports in
HTML format and also in Excel format.
To view the STRPs Wise Report data, users have to give a “Period” because it
is a mandatory field, while the rest of the fields are non-mandatory.
The user can also provide the date range if the user wants data from a
particular date range. If no date range is provided then all the data from
all of the STRPs will be populated for the given period.
The user needs to fill in both “Date from” and “To” fields. “Date from”
and “To” will extract the data based on “Date of Filling Return.”
The user must click on the “Generate Report” button to view the report
in HTML format or on “Export to Excel” if he or she wants to export the
data into Excel format.
The “TRP id” and “TRP name” will also be provided in the Excel sheet.
The “Cancel” button will take the user to the login page.
STRP Summary Report (Admin Report).
This report will allow the admin to generate a report for the top ten STRPs
based on the highest amount of tax paid for each return filled by the TRP.
This report will be generated in HTML format as well as in Excel format.
Validations:
To view this report the user will have to select a “Zone” as well as a
“Period.” These are mandatory filters.
There will be an option of “ALL” in the “Zone” drop down if the report
needs to be generated for all the zones.
The user must click on the “Generate Report” button to view the report
in HTML format or on “Export to Excel” if he or she wants to export the
data into Excel format.
The “Cancel” button will take the user to the login page.
The user needs to fill both “Date from” and “To” fields. “Date from” and
“To” will extract the data based on “Date of Filling Return.”
The user can either select the “Period” or “Date from” and “To” to gen-
erate the report. Both of the fields cannot be selected.
Zone/Commissionerate Wise Report (Admin Report)
This report will allow the admin to generate the report Zone/
Commissionerate Wise of STRPs. This report will be generated in HTML
format as well as in Excel format.
Validations:
To view the Commissionerate Wise Report data, the admin can provide “Zone,”
“Commissionerate,” and “Division” to view the data, but if no input is
provided then the data will include the entire “Zone,” “Commissionerate,”
and “Division.” The user will have to select a “Zone” because it will be a
mandatory field. There will be an option of “ALL” in the “Zone” drop down if
the report needs to be generated for all of the zones.
“Commissionerate” will be mapped to the “Zone” and “Division” will be mapped
to the “Commissionerate,” i.e., if a user selects a “Zone” then all the
“Commissionerates” under that “Zone” will appear in the “Commissionerate”
drop down, and if a user selects a “Commissionerate” then only those
“Divisions” under that “Commissionerate” will be populated in the “Division”
drop down. If any LTU is selected in the “Zone” drop down, then no other
field will be populated.
The user must click on the “Generate Report” button to view the report
in HTML format or on “Export to Excel” if he or she wants to export the
data into Excel format.
The “TRP id,” “TRP name,” “Commissionerate,” and “Division” will also
be provided in the Excel sheet.
The “Cancel” button will take the user to the login page.
2.3.2. Functions
It defines the fundamental actions that must take place in the software in
accepting and processing the inputs and generating the outputs. The system
will perform the following:
VALIDITY CHECKS
The address should be correct.
An Internet connection should be present.
RESPONSES TO ABNORMAL SITUATIONS
An error message will be generated if the date format is wrong.
An error message will be generated if the STC code is entered incorrectly.
An error message will be generated if two users are assigned the same
STC code.
2.3.3. Modules
Test Plan
We write test plans for two very different purposes. Sometimes the test plan
is a product; sometimes it is a tool. It is easy, but also expensive, to
confuse these goals. In software testing, a test plan gives detailed testing
information regarding an upcoming testing effort, including:
Scope of testing
Schedule
Test deliverables
Release criteria, risks, and contingencies
How will the testing be done?
Who will do it?
What will be tested?
How long will it take?
What will the test coverage be, i.e., what quality level is required?
Test Cases
A test case is a set of conditions or variables under which a tester will deter-
mine if a requirement upon an application is partially or fully satisfied. It
may take many test cases to determine that a requirement is fully satisfied. In
order to fully test that all of the requirements of an application are met, there
must be at least one test case for each requirement unless a requirement has
sub requirements. In that situation, each sub requirement must have at least
one test case. There are different types of test cases.
Common test case
Functional test case
Invalid test case
Integration test case
Configuration test case
Compatibility test case
Chapter Three
System Design
Chapter Four
Reports And Testing
4.2. TESTING
The importance of software testing and its implications with respect to soft-
ware quality cannot be overemphasized. Software testing is a critical ele-
ment of software quality assurance and represents the ultimate review of
specification, design, and code generation.
4.2.1. Types of Testing
White-Box Testing: This type of testing goes inside the program and checks
all the loops, paths, and branches to verify the program’s intention.
4.2.2. Levels of Testing
Unit Testing: This is the first phase of testing. The unit test of the system
was performed using a unit test plan. The parameters that are required to be
tested during a unit testing are as follows:
Validation check: Validations are all being performed correctly. For this,
two kinds of data are entered for each entry going into the database—valid
data and invalid data.
Integration Testing: It is a systematic technique for constructing the
program structure while at the same time conducting tests to uncover errors
associated with interfacing. The objective is to take unit-tested components
and build a program structure that has been dictated by design.
In this testing we followed the bottom-up approach. This approach
implies construction and testing with atomic modules.
Stress Testing: The final test to be performed during unit testing is the
stress test. Here the program is put through extreme stress, like all of the
keys of the keyboard being pressed or junk data being put through. The
system being tested should be able to handle that stress.
Functional Testing: Functional testing verifies that your system is ready for
release. The functional tests define your working system in a useful manner.
A maintained suite of functional tests:
Captures user requirements in a useful way.
Gives the team (users and developers) confidence that the system meets
those requirements.
Load Testing: Load testing generally refers to the practice of modeling the
expected usage of a software program by simulating multiple users accessing
the program’s services concurrently. As such, this testing is most relevant
for multi-user systems, often ones built using a client/server model, such
as web servers.
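A minimal, hypothetical load-testing sketch in this spirit follows: several concurrent "users" request the same URL and the elapsed response times are collected. The target URL and the user count are placeholders, not values from the case study.

import concurrent.futures
import time
import urllib.request

URL = "http://example.com/"        # placeholder target, not the real site
CONCURRENT_USERS = 10

def one_request(user_id):
    start = time.time()
    with urllib.request.urlopen(URL, timeout=10) as response:
        response.read()
    return user_id, time.time() - start

with concurrent.futures.ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
    for user, elapsed in pool.map(one_request, range(CONCURRENT_USERS)):
        print(f"user {user}: {elapsed:.2f} s")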
Chapter Five
Test Cases
Test case ID | Objective | Test steps | Test data | Expected results | Actual results | Test status (Pass/Fail) | Bug ID
STRP_R To verify the 1. Login as student. Loginid: A quicklink “Return Same as PASS
FR_301 availability of Homepage of the student01 Filed” appears on expected.
“Return Filed user appears. password: the left hand side
Report” to pass123 of the screen under
student role. “View Reports”
STRP_R To verify the 1. Login as student. Loginid: Return “Filed Same as PASS 118560
FR_302 accessibility Homepage of the student01 Report” page expected.
of “Return user appears. password: appears.
Filed” 2. Click on the pass123
button. quicklink “Return
Filed” under “View
Reports” heading.
STRP_R To verify 1. Login as student. Loginid: The values under Same as
FR_306 the report Homepage of the student01 the respective expected.
outputs in user appears. password: columns in
an Excel 2. Click on the pass123 HTML and Excel
spreadsheet quicklink “Return spreadsheet should
and HTML Filed,” “Return Filed match. The column
format. Report” page appears. headings are as
3. Fill in the follows:
“Period” field. Name
4. Click on the STC
“Export to Excel” Code
button. Period
5. Next select the Date of Filling
same period and click Return
on the “Generate Amount of Tax
Report” button. Payable
6. Observe and Amount of Tax Paid
verify the values Interest Paid
under the respective
column headings in
HTML format with
the Excel spread-
sheet format.
STRP_R To verify the 1. Login as student. Loginid: 1. Report should Same as PASS
FR_307 functionality Homepage of the student01 be generated for expected.
of the user appears. password: selected period
“Generate 2. Click on the pass123 showing correct
Report” quicklink “Return values under the
button when Filed,” “Return on respective column
the “STC Filed Report” page headings.
Code” filed appears. 2. Message “No
is blank and 3. Fill all the Record Found”
the “Period” mandatory fields should appear if no
field is except the “STC record for selected
selected. Code.” period exists.
4. Click on the
“Generate Report”
button.
STRP_R To verify the 1. Login as student. Loginid: 1. “File Download” Same as PASS
FR_308 functionality Homepage of the student01 dialog box appears expected.
of the user appears. password: with options
“Export 2. Click on the pass123 “Open,” “Save,” and
to Excel” quicklink “Return “Cancel.”
button when Filed,” “Return 2. Report should be
the “STC Filed Report” page generated in Excel
Code” field appears. for selected period
is blank and 3. Fill all the showing the correct
the “Period” mandatory fields values under the
field is except the “STC respective column
selected. Code.” headings.
4. Click on the 3. Message “No
“Generate Report” Record Found”
button. should appear if no
record for selected
period exists.
STRP_R To verify the 1. Login as student. Loginid: A Date Time Picker Same as PASS
FR_309 functionality Homepage of the student01 Window should pop expected.
of the user appears. password: up with the current
“Calendar” 2. Click on the pass123 date selected in the
button on quicklink “Return calendar.
“Return Filed,” “Return
Filed Filed Report” page
Report.” appears.
3. Select a period in
the “Period” field.
4. Click on the
“Pick a date” button.
STRP_10 To verify the 1. Login as student. The report should Same as PASS
FR_310 format of the Homepage of the be generated. expected.
“STC Code” user appears.
textbox. 2. Click on the
quicklink “Return
Filed,” “Return Filed
Report” page appears.
3. Fill “STC Code”
in the following
template.
STC code length: 15
characters 1-5:
alphabetical 6-9:
numerical 10th:
alphabetical
11-12: ST
13-15: numerical
4. Fill all the other
Mandatory details.
5. Click on the
“Generate Report”
button.
STRP_R To verify the 1. Login as student. Loginid: 1. An error message Same as PASS
FR_311 functionality Homepage of the student01 should appear expected.
of the user appears. password: stating “STC Code
“Generate 2. Click on the pass123 is invalid.” with a
Report” quicklink “Return STC code: “Back” button.
button when Filed Report,” ASZDF23 2. By clicking on
length of the “Return Filed Report” 45GST87 the “Back” button,
“STC Code” page appears. Period: “Return Filed
is less than 15 3. Fill in the April 1st - Report” page
characters. “STC Code” in the Sept 30th; appears.
following template. 2007-2008
STC code length: 14
characters 1-5:
alphabetical (In
Caps) 6-9:
numerical 10th:
alphabetical (In
Caps) 11-12: ST
13-14: numerical
4. Fill all the others
Mandatory details.
5. Click on the
“Generate Report”
button.
STRP_R To verify the 1. Login as student. Loginid: 1. An error message Same as PASS
FR_312 functionality Homepage of the student01 should appear expected.
of “Export user appears. password: stating
To Excel” 2. Click on the pass123 “STC Code is
button when quicklink “Return STC Code: invalid.” with a
the length Filed Report,” ASZDF23 “Back” button.
of the “STC “Return Filed Report” 45GST87 2. By clicking on
Code” is page appears. Period: the “Back” button
less than 15 3. Fill in the “STC April 1st - “Return Filed
characters. Code” in the Sept 30th; Report” page
following template. 2007-2008 appears.
STC code length:
14 characters.
1-5: alphabetical
(In Caps) 6-9:
numeral 10th:
alphabetical (In
Caps) 11-12: ST
13-14: numeral
4. Fill all the others
Mandatory details.
5. Click on the
“Export To Excel”
button.
STRP_R To verify the 1. Login as student. Loginid: An error message Same as
FR_313 functionality Homepage of the student01 should appear expected.
of the user appears. password: stating “STC Code
“Generate 2. Click on the pass123 is invalid.” With a
Report” quicklink “Return STC Code: “Back” button.
button when Filed Report,” asdfg234
the letters “Return Filed 5gST87
of the “STC Report” page Period:
Code” are appears. April 1st -
written in 3. Fill in the “STC Sept 30th;
small letters. Code” in the 2007-2008
following template.
STC code length:
15 characters. 1-5:
alphabetical (In
Small) 6-9: numeral
10th: Alphabet (In
Small) 11-12: ST
13-15: numeral
4. Select a period
the “Period” field.
5. Click on the
“Export To Excel”
button.
STRP_R To verify the 1. Login as student. Loginid: 1. An error message Same as PASS
FR_317 functionality Homepage of the student01 should appear expected.
of the user appears. password: stating “STC Code
“Generate 2. Click on the pass123 is invalid.” With a
Report” quicklink “Return STC Code: “Back” button.
button when Filed Report,” ASZDFJU 2. By clicking on
all of the “Return Filed ILHGLO the “Back” button
characters Report” page YU “Return Filed
of the “STC appears. Period: Report” page
Code” are 3. Fill in the “STC April 1st - appears.
alphabetical. Code” in the Sept 30th;
following template. 2007-2008
STC code length: 15
characters.
1-15: alphabetical
(In Caps)
4. Fill all the other
Mandatory details.
5. Click on the
“Generate Report”
button.
STRP_R To verify the 1. Login as student. Loginid: The report should Same as PASS
FR_321 functionality Homepage of the student01 be generated. expected.
of the user appears. password:
“Generate 2. Click on the pass123
Report” quicklink “Return Period:
button when Filed,” “Return April 1st -
the date Filed Report” page Sept 30th;
format is appears. 2007-2008
“dd/mm/ 3. Fill the “Date Date from:
yyyy” in any from” and/or “To” 10/01/2007
or both of in “dd/mm/yyyy”
the “Date format.
from” 4. Select a period in
and “To” the “Period” field.
textboxes. 5. Click on the
“Generate Report”
button.
STRP_R To verify the 1. Login as student. Loginid: The report should Same as
FR_322 functionality Homepage of the student01 be generated. expected.
of the user appears. password:
“Export 2. Click on the pass123
To Excel” quicklink “Return Period:
button when Filed,” “Return April 1st -
Filed Report” page Sept 30th;
appears. 2007-2008
date format 3. Fill the “Date Date from: PASS
is “dd/ from the” in “dd/ 01/10/2007
mm/yyyy” mm/yyyy” format.
in any or 4. Select a period in
both of the “Period” field.
“Date from” 5. Click on the
and “To” “Export To Excel”
textboxes. button.
STRP_R To verify the 1. Login as student. Loginid: The report should Same as
FR_323 functionality Homepage of the student01 be generated if expected.
of the user appears. password: records exist in that
“Generate 2. Click on the pass123 period. Otherwise,
Report” quicklink “Return Period: the message “No
button when Filed,” “Return April 1st - Record Found.”
the “Date Filed Report” page Sept 30th; should display.
from” and appears. 2007-2008
“To” fields 3. Fill the “Date “Date
are filled. from” and “To” from”:
fields in “dd/mm/ 01/10/2007
yyyy” format. “To”:
4. Select a period in 30/09/2008
the “Period” field.
5. Click on the
“Generate Report”
button.
STRP_R To verify the 1. Login as student. Loginid: The report should Same as
FR_324 functionality Homepage of the student01 be generated if the expected.
of the user appears. password: records exist in that
“Export 2. Click on the pass123 period. Otherwise,
To Excel” quicklink “Return Period: the message “No
button when Filed,” “Return April 1st - Record Found.”
the “Date Filed Report” page Sept 30th; should display.
from” and appears. 2007-2008
“To” fields 3. Fill in the “Date “Date
are filled. from” and “To” from”:
fields in “dd/mm/ 01/10/2007
yyyy” format. “To”:
4. Select a period in 30/09/2008
the “Period” field.
5. Click on the
“Export To Excel”
button.
STRP_R To verify the 1. Login as student. Loginid: The report should Same as
FR_325 functionality Homepage of the student01 be generated if the expected.
of the user appears. password: records exist from
“Generate 2. Click on the pass123 the date entered
Report” quicklink “Return Period: in the “Date from”
button when Filed,” “Return Filed April 1st - field. Otherwise,
only the Report” page appears. Sept 30th; the message “No
“Date from” 3. Fill in the “Date 2007-2008 Record Found.”
field is filled from” field in “dd/ “Date should display.
and the “To” mm/yyyy” format. from”:
field is left 4. Select a period in 01/10/2007
blank. the “Period” field.
5. Click on the
“Generate Report”
button.
STRP_R To verify the 1. Login as student. Loginid: The report should Same as
FR_326 functionality Homepage of the student01 be generated if the expected.
of the user appears. password: records exist from
“Export 2. Click on the pass123 the date entered
To Excel” quicklink “Return Period: in the “Date from”
button when Filed,” “Return Filed April 1st - field. Otherwise,
only the Report” page appears. Sept 30th; the message “No
“Date from” 3. Fill in the “Date 2007-2008 Record Found.”
field is filled from” field in “dd/ “Date should display.
and the “To” mm/yyyy” format. from”:
field is left 4. Select a period in 01/10/2007
blank. the “Period” field.
5. Click on the
“Export To Excel”
button.
STRP_R To verify the 1. Login as student. Loginid: The report should Same as
FR_327 functionality Homepage of the student01 be generated if expected.
of the user appears. password: the records exist
“Generate 2. Click on the pass123 until the date
Report” quicklink “Return Period: entered in the “To”
button when Filed,” “Return April 1st - field. Otherwise,
only the “To” Filed Report” page Sept 30th; the message “No
field is filled appears. 2007-2008 Record Found.”
in and the 3. Fill in the “To” “To”: should display.
“Date from” field in “dd/mm/ 30/09/2008
field is left yyyy” format.
blank. 4. Select a period in
the “Period” field.
5. Click on the
“Generate Report”
button.
STRP_R To verify the 1. Login as student. Loginid: The report should Same as
FR_328 functionality Homepage of the student01 be generated if expected.
of the user appears. password: the records exist
“Export 2. Click on the pass123 untill the date
To Excel” quicklink “Return Period: entered in the “To”
button when Filed,” “Return Filed April 1st - field. Otherwise,
only the “To” Report” page appears. Sept 30th; the message “No
field is filled 3. Fill in the “To” 2007-2008 Record Found.”
in and the field in “dd/mm/ “To”: should display.
“Date from” yyyy” format. 30/09/2008
field is left 4. Select a period in
blank. the “Period” field.
5. Click on the
“Export To Excel”
button.
STRP_R To verify the 1. Login as student. Period: An error message Same as PASS 112387
FR_329 functionality Homepage of the April 1st - should appear expected.
of the user appears. Sept 30th; saying, “From Date
“Generate 2. Click on the 2007-2008 can not be greater
Report” quicklink “Return “Date than To Date.”
button when Filed,” “Return Filed from”:
the “Date Report” page appears. 01/10/2008
from” is 3. Fill in the “Date “To” date:
greater than from” and “To” field 30/09/2008
the “To in “dd/mm/yyyy”
Date.” format.
4. Select a period in
the “Period” field.
5. Click on the
“Generate Report”
button.
STRP_R To verify the 1. Login as student. Period: An error message Same as PASS 11238
FR_330 functionality Homepage of the April 1st - should appear expected.
of the user appears. Sept 30th; saying “From Date
“Export 2. Click on the 2007-2008 can not be greater
To Excel” quicklink “Return “Date than To Date.”
button when Filed,” “Return Filed from”:
the “Date Report” page appears. 01/10/2008
from” is 3. Fill in the “Date “To” Date:
greater than from” and “To” 30/09/2008
the “To” fields in “dd/mm/
Date. yyyy” format.
4. Select a period in
the “Period” field.
5. Click on the
“Export To Excel”
button.
(Continued)
Test
status
Test Actual (Pass/
case ID Objective Test steps Test data Expected results results Fail) Bug ID
STRP_R To verify the 1. Login as student. Loginid: An error message Same as PASS
FR_331 Max Length Homepage of the student01 saying “Date expected.
of the “Date user appears. password: Format of Start
from” field. 2. Click on the pass123 Date is not valid.”
quicklink “Return Period: should appear with
Filed,” “Return Filed April 1st - the “Back” button.
Report” page appears. Sept 30th; On clicking the
3. Enter more than 2007-2008 “Back” button,
10 characters in the “Date the “Return Filed
“Date from” field. from”: Report” page should
4. Enter a valid date 01/10/2008 appear.
in the “To” field.
5. Select a period in
the “Period” field.
6. Click on the
“Generate Report”
button.
STRP_R To verify the 1. Login as student. NA Homepage of the Same as PASS
FR_333 functionality Homepage of the user appears. expected.
of the user appears.
“Home” 2. Click on the
button at quicklink “Return
the “Return Filed Report,”
Filed “Return Filed Report”
Report” page appears.
page. 3. Fill all of the
mandatory fields
with valid data.
4. Click on the
“Home” quicklink.
STRP_R To verify the 1. Login as student. NA Homepage of the Same as PASS
FR_334 functionality Homepage of the user appears. expected.
of the user appears.
“Home” 2. Click on the
button at quicklink “Return
the error Filed Report,”
message “Return Filed Report”
page. page appears.
3. Leave the
“Period” field
unselected.
4. Click on the
“Generate Report”
button.
5. Click on the
“Home” quicklink.
(Continued)
Test
status
Test Actual (Pass/
case ID Objective Test steps Test data Expected results results Fail) Bug ID
STRP_R To verify the
1. Login as student. NA Homepage of the Same as PASS
FR_335 functionality
Homepage of the user appears. expected.
of the user appears.
“Cancel” 2. Click on the
button on the
quicklink “Return
“Return Filed
Filed Report,”
Report” page.
“Return Filed
Report” page
appears.
3. Click on the
“Cancel” button.
STRP_R To verify 1. Login as student. NA The “Period” drop Same as PASS
FR_336 the values of Homepage of the down should display expected.
the “Period” user appears. two values:
drop down. 2. Click on the 1. April 1st - Sept
quicklink “Return 30th
Filed Report,” 2. Oct 1st - March
“Return Filed 31st
Report” page
appears.
3. Click on the
“Period” drop down.
STRP_R To verify 1. Login as student. Loginid: When we click on Same as PASS 118564
FR_337 whether the Homepage of the student01 the “Back” button expected.
fields are user appears. password: the user comes back
retaining 2. Click on the pass!23 to “Return Filed
values or quicklink “Return “Date Report” page and all
not after Filed Report,” from”: the previous filled
the error “Return Filed 30/09/2008 values remain intact.
message Report” page
appears. appears.
3. Leave the
“Period” field
unselected.
4. Click on the
“Generate Report”
button.
5. An error message
appears.
6. Click on the
“Back” button.
(Continued)
Test
status
Test Actual (Pass/
case ID Objective Test steps Test data Expected results results Fail) Bug ID
STRP_R To verify the 1. Login as student. Loginid: If the report Same as PASS
FR_338 pagination Homepage of the student01 output section expected.
on the report user appears. password: contains more than
output 2. Click on the pass123 10 records, the
section. quicklink “Return Period: pagination takes
Filed Report,” April 1st - place and the next
“Return Filed Sept 30th; 10 records will be
Report” page 2007-2008 visible on the next
appears. page.
3. Select a period in
the “Period” field.
4. Click on the
“Generate Report”
button.
STRP_R To verify the 1. Login as student. Loginid: There will be only Same as PASS
FR_339 pagination Homepage of the student01 one page of output expected.
on the report user appears. password: section and all of
output 2. Click on the pass123 the pagination links
section when quicklink “Return Period: are disabled.
the number Filed Report,” April 1st -
of records “Return Filed Sept 30th;
are less than Report” page 2007-2008
10. appears.
3. Select a period in
the “Period” field.
4. Click on the
“Generate Report”
button.
STRP_R To verify the 1. Login as student. Loginid: There will be only Same as PASS
FR_340 pagination Homepage of the student01 one page of output expected.
on the report user appears. password: section and all of
output 2. Click on the pass123 the pagination links
section when quicklink “Return Period: are disabled.
the records Filed Report,” April 1st -
are equal to “Return Filed Sept 30th;
10. Report” page 2007-2008
appears.
3. Select a period in
the “Period” field.
4. Click on the
“Generate Report”
button.
(Continued)
Test
status
Test Actual (Pass/
case ID Objective Test steps Test data Expected results results Fail) Bug ID
STRP_R To verify the 1. Login as student. Loginid: The next 10 records Same as PASS
FR_341 pagination Homepage of the student01 will be visible on the expected.
on the report user appears. password: next page and the
output 2. Click on the pass123 “Next” and “Last”
section when quicklink “Return Period: links are clickable.
the records Filed Report,” April 1st -
are greater “Return Filed Sept 30th;
than 10. Report” page 2007-2008
appears.
3. Select a period in
the “Period” field.
4. Click on the
“Generate Report”
button.
STRP_R To verify the 1. Login as student. Loginid: Every page of Same as PASS
FR_342 number of Homepage of the student01 the report output expected.
records on user appears. password: section should
each page in 2. Click on the pass123 contain a maximum
the report quicklink “Return Period: of 10 records.
output Filed Report,” April 1st -
section. “Return Filed Sept 30th;
Report” page 2007-2008
appears.
3. Select a period in
the “Period” field.
4. Click on the
“Generate Report”
button.
STRP_R To verify the 1. Login as student. Loginid: By clicking on the Same as PASS
FR_343 functionality Homepage of the student01 “Next” button, expected.
of the “Next” user appears. password: the next page of
button 2. Click on the pass123 the report output
on the quicklink “Return Period: section appears.
pagination. Filed Report,” April 1st -
“Return Filed Sept 30th;
Report” page 2007-2008
appears.
3. Select a period in
the “Period” field.
4. Click on the
“Generate Report”
button.
(Continued)
Test
status
Test Actual (Pass/
case ID Objective Test steps Test data Expected results results Fail) Bug ID
STRP_R To verify the 1. Login as student. Loginid: By clicking on Same as PASS
FR_344 functionality Homepage of the student01 the “Last” button, expected.
of the “Last” user appears. password: the last page of
button on 2. Click on the pass123 the report output
pagination. quicklink “Return Period: section appears.
Filed Report,” April 1st -
“Return Filed Sept 30th;
Report” page 2007-2008
appears.
3. Select a period in
the “Period” field.
4. Click on the
“Generate Report”
button.
STRP_R To verify the 1. Login as student. Loginid: By clicking on the Same as PASS
FR_345 functionality Homepage of the student01 “First” button, expected.
of the “First” user appears. password: the first page of
button on 2. Click on the pass123 the report output
pagination. quicklink “Return Period: section appears.
Filed Report,” April 1st -
“Return Filed Sept 30th;
Report” page 2007-2008
appears.
3. Select a period in
the “Period” field.
4. Click on the
“Generate Report”
button.
STRP_R To verify the 1. Login as student. Loginid: By clicking on the Same as PASS
FR_346 functionality Homepage of the student01 “Prev” button, the expected.
of the “Prev” user appears. password: previous page of
button on 2. Click on the pass123 the report output
pagination. quicklink “Return Period: section appears.
Filed Report,” April 1st -
“Return Filed Sept 30th;
Report” page 2007-2008
appears.
3. Select a period in
the “Period” field.
4. Click on the
“Generate Report”
button.
(Continued)
Test
status
Test Actual (Pass/
case ID Objective Test steps Test data Expected results results Fail) Bug ID
STRP_R To verify the 1. Login as student. Loginid: The entered page Same as PASS
FR_347 functionality Homepage of the student01 number will appear expected.
of the “Go” user appears. password: and the text above
button when 2. Click on the pass123 the “First Prev Next
the user quicklink “Return Period: Last” link will show
enters a Filed,” “Return April 1st - the current page.
page number Filed Report” page Sept 30th;
in text box. appears. 2007-2008
3. Select a period in Go to
the “Period” field. Page: 2
4. Click on the
“Generate Report”
button.
5. Fill in a page
number in the “Go
to Page” textbox and
click on the “Go”
button.
STRP_R To verify the 1. Login as student. Loginid: Textbox does not Same as PASS
FR_348 functionality Homepage of the student01 accept the value and expected.
of the “Go” user appears. password: remains blank.
button when 2. Click on the pass123
the user quicklink “Return Period:
enters an Filed,” “Return April 1st -
alphanumeric Filed Report” page Sept 30th;
value in the appears. 2007-2008
text box. 3. Select a period in
the “Period” field.
4. Click on the
“Generate Report”
button.
5. Fill in an
alphabetical character
in the “Go to Page”
textbox and click on
the “Go” button.
STRP_R To verify the 1. Login as student. Loginid: The page number Same as PASS
FR_349 text of the Homepage of the student01 details of the expected.
page number user appears. password: report output
details of the 2. Click on the pass123 section should
pagination. quicklink “Return Period: show the current
Filed,” “Return April 1st - page number in
Filed Report” page Sept 30th; the format “Page
appears. 2007-2008 (current page)
3. Select a period in of (Total pages)
the “Period” field. Pages.”
(Continued)
Test
status
Test Actual (Pass/
case ID Objective Test steps Test data Expected results results Fail) Bug ID
4. Click on the
“Generate Report”
button.
5. Fill in a page
number in the “Go
to Page” textbox and
click on the “Go”
button.
STRP_R To verify the 1. Login as student. Loginid: If the report output Same as PASS
FR_350 availability Homepage of the student01 section contains expected.
of the “First” user appears. password: more than 10
and “Prev” 2. Click on the pass123 records and the
links on the quicklink “Return Period: user is at last page
pagination. Filed,” “Return April 1st - of the pagination
Filed Report” page Sept 30th; then the “First” and
appears. 2007-2008 “Prev” links on the
3. Select a period in pagination should
the “Period” field. be enabled.
4. Click on the
“Generate Report”
button.
STRP_R To verify the 1. Login as student. Loginid: If the report output Same as PASS
FR_351 availability Homepage of the student01 section contains expected.
of the “Next” user appears. password: more than 10
and “Last” 2. Click on the pass123 records and the
links on the quicklink “Return Period: user is at first page
pagination. Filed,” “Return Filed April 1st - of the pagination
Report” page appears. Sept 30th; then the “Next” and
3. Select a period in 2007-2008 “Last” links on the
the “Period” field. pagination will be
4. Click on the enabled.
“Generate Report”
button.
STRP_R To verify the 1. Login as student. Loginid: If the report output Same as PASS
FR_352 availability of Homepage of the student01 section contains expected.
the “First,” user appears. password: more than 10
“Prev,” 2. Click on the pass123 records and the
“Next,” quicklink “Return Period: user is neither on
and “Last” Filed,” the “Return April 1st - the first page nor
links on the Filed Report” page Sept 30th; on the last page of
pagination. appears. 2007-2008 the pagination then
all four links on the
pagination page
should be enabled.
(Continued)
Test
status
Test Actual (Pass/
case ID Objective Test steps Test data Expected results results Fail) Bug ID
3. Select a period in
the “Period” field.
4. Click on the
“Generate Report”
button.
STRP_R To verify 1. Login as student. Loginid: The output section Same as PASS
FR_353 the sorting Homepage of the student01 of the report should expected.
order of the user appears. password: be sorted on the
records in 2. Click on the pass123 alphabetical order
the report quicklink “Return Period: of column “Name.”
output Filed,” “Return April 1st -
section page. Filed Report” page Sept 30th;
appears. 2007-2008
3. Select a period in
the “Period” field.
4. Click on the
“Generate Report”
button.
STRP_R To verify the 1. Login as student. Loginid: The Login page Same as PASS
FR_354 functionality Homepage of the student01 of the website is expected.
of the user appears. password: displayed.
“Logout” 2. Click on the pass123
button on quicklink “Return
the “Return Filed,” “Return
Filed Filed Report” page
Report” appears.
page. 3. Click on the
“Logout” button.
STRP_R To verify the 1. Login as student. Loginid: The quicklinks Same as PASS
FR_355 functionality Homepage of the student01 should be clickable expected.
of the user appears. password: and the respective
quicklinks on 2. Click on the pass123 page should be
left side on quicklink “Return displayed.
the “Return Filed,” “Return
Filed Filed Report” page
Report” appears.
page. 3. Click on any of
the quicklinks on
left side of the page.
Test case STRP_MQF_301
Objective: To verify the availability of the "Monthly/Quarterly Tax Paid Form" quicklink to the student role.
Test steps: 1. Login as student role; the homepage of the user appears. 2. Observe the quicklinks appearing under the "To Be Reported By STRP" section.
Expected results: The "Monthly/Quarterly Tax Paid Form" quicklink should appear under the "To Be Reported By STRP" section.
Test status: PASS.

Test case STRP_MQF_302
Objective: To verify the accessibility of the "Monthly/Quarterly Tax Paid Form."
Test steps: 1. Login as student role; the homepage of the user appears. 2. Click the "Monthly/Quarterly Tax Paid Form" quicklink appearing under the "To Be Reported By STRP" section.
Expected results: The "Monthly/Quarterly Tax Paid Form" page should appear.
Test status: PASS.

Test case STRP_MQF_304
Objective: To verify the "STRP Details" on the "Monthly/Quarterly Tax Paid Form" page.
Test steps: 1. Login as student; the homepage of the user appears. 2. Click on the quicklink "Monthly/Quarterly Tax Paid Form"; the "Monthly/Quarterly Tax Paid Form" page appears.
Expected results: 1. The "STRP ID" should show the login ID of the logged in user. 2. The "STRP Name" should show the name of the logged in user. 3. The "STRP PAN Number" should show the PAN No. of the logged in user.
Test status: PASS.

Test case STRP_MQF_305
Objective: To verify the functionality of the "Submit" button when no value is entered in the "Name of Assessee" field.
Test steps: 1. Login as student role; the homepage of the user appears. 2. Click the "Monthly/Quarterly Tax Paid Form" quicklink appearing under the "To Be Reported By STRP" section on the homepage. 3. Do not enter any value in the "Name of Assessee" field. 4. Enter valid values in all of the mandatory fields. 5. Click the "Submit" button.
Expected results: 1. The "Monthly/Quarterly Tax Paid Form" should not get submitted. 2. The following error message should appear with the "Back" button: "Name of Assessee is mandatory." 3. Clicking the "Back" button should take the user to the homepage.
Test status: PASS.

Test case STRP_MQF_306
Objective: To verify the functionality of the "Submit" button when no value is entered in the "STC Code" field.
Test steps: 1. Login as student role; the homepage of the user appears. 2. Click the "Monthly/Quarterly Tax Paid Form" quicklink appearing under the "To Be Reported By STRP" section on the homepage. 3. Do not enter any value in the "STC Code" field. 4. Enter valid values in all of the mandatory fields. 5. Click the "Submit" button.
Expected results: 1. The "Monthly/Quarterly Tax Paid Form" should not get submitted. 2. The following error message should appear with the "Back" button: "STC Code is mandatory for valid Return Filed." 3. Clicking the "Back" button should take the user to the homepage.
Test status: PASS.

Test case STRP_MQF_308
Objective: To verify the functionality of the "Submit" button when no value is entered in the "Amount of Tax Payable" field.
Test steps: 1. Login as student role; the homepage of the user appears. 2. Click the "Monthly/Quarterly Tax Paid Form" quicklink appearing under the "To Be Reported By STRP" section on the homepage. 3. Do not enter any value in the "Amount of Tax Payable" field. 4. Enter valid values in all of the mandatory fields. 5. Click the "Submit" button.
Expected results: 1. The "Monthly/Quarterly Tax Paid Form" should not get submitted. 2. The following error message should appear with the "Back" button: "Amount of tax payable is mandatory for valid Return Filed." 3. Clicking the "Back" button should take the user to the homepage.
Test status: PASS.

Test case STRP_MQF_310
Objective: To verify the functionality of the "Submit" button when the value in the "STC Code" field is entered in the following format: STC code length 15 characters; characters 1-5: alphabetical; 6-9: numeral; 10th: alphabetical; 11-12: ST; 13-15: numeral.
Test steps: 1. Login as student role; the homepage of the user appears. 2. Click the "Monthly/Quarterly Tax Paid Form" quicklink appearing under the "To Be Reported By STRP" section on the homepage. 3. Enter a value in the "STC Code" field in the following format: STC code length 15 characters; 1-5: alphabetical; 6-9: numeral; 10th: alphabetical; 11-12: ST; 13-15: numeral. 4. Enter valid values in all of the mandatory fields. 5. Click the "Submit" button.
Expected results: The form should get submitted successfully and the following confirmation message should appear: "Record has been saved successfully."
Test status: PASS.

Test case STRP_MQF_311
Objective: To verify the max length of the "Amount of Tax Paid" textbox.
Specification not provided.

Test case STRP_MQF_312
Objective: To verify the max length of the "Name of Assessee" textbox.
Specification not provided.
Test case STRP_MQTR_301
Objective: To verify the availability of the "Monthly/Quarterly Tax Paid" report to the student role.
Test steps: 1. Login as student role; the homepage of the user appears. 2. Observe the quicklinks on the left side of the homepage.
Expected results: The "Monthly/Quarterly Tax Paid Report" quicklink should appear under the "View Reports" section.
Test status: PASS.

Test case STRP_MQTR_302
Objective: To verify the accessibility of the "Monthly/Quarterly Tax Paid" report through the quicklinks.
Test steps: 1. Login as student role; the homepage of the user appears. 2. On the homepage, under "View Reports," click on the "Monthly/Quarterly Tax Paid" link.
Expected results: The "Monthly/Quarterly Tax Paid Report" page should appear.
Test status: PASS.

Test case STRP_MQTR_304
Objective: To verify the functionality of the "Generate Report" button when no value is entered in the "Period" field.
Test steps: 1. Login as student. 2. Go to "View Reports" and the "Monthly/Quarterly Tax Paid" quicklink; the "Monthly/Quarterly Tax Paid Report" page appears. 3. Do not enter any value in the "Period" field. 4. Select the "Date from" and "To" fields from the "Date" picker control. 5. Click the "Generate Report" button.
Expected results: The report should not get generated and the following error message should appear with the "Back" button: "Select The Period." Clicking the "Back" button should take the user to the "Monthly/Quarterly Tax Paid Report" page. Note: This ensures that the "Period" field is mandatory.
Test status: PASS.

Test case STRP_MQTR_305
Objective: To verify the functionality of the "Generate Report" button when no value is entered in the "To" date field.
Test steps: 1. Login as student. 2. Go to "View Reports" and the "Monthly/Quarterly Tax Paid" quicklink. 3. Select a period from the "Period" drop down. 4. Do not enter any value in the "To" date field. 5. Select a valid date in the "Date from" field from the Date picker control. 6. Click the "Generate Report" button.
Expected results: 1. The report should get generated. 2. All of the records of the user should appear in the "Reports" output section.
Test status: PASS.

Test case STRP_MQTR_306
Objective: To verify the functionality of the "Generate Report" button when no value is entered in the "Date From" field.
Test steps: 1. Login as student. 2. Go to "View Reports" and the "Monthly/Quarterly Tax Paid" quicklink. 3. Select a period from the "Period" drop down. 4. Do not enter any value in the "Date from" field. 5. Select a valid date in the "To" date field from the Date picker control. 6. Click the "Generate Report" button.
Expected results: 1. The report should get generated. 2. All of the records of the user should appear in the "Reports" output section.
Test status: PASS.

Test case STRP_MQTR_310
Objective: To verify the values appearing in the "Assessment Year" drop down.
Test steps: 1. Login as student. 2. Go to "View Reports" and the "Monthly/Quarterly Tax Paid" quicklink; the "Monthly/Quarterly Tax Paid Report" page appears. 3. Click the "year" drop down.
Expected results: 1. The "Assessment Year" drop down should have the following values: a. 2007-2008; b. 2008-2009; c. 2009-2010. 2. These values should be sorted in ascending order.
Test status: PASS.
Test case STRP_RFR_317
Objective: To verify the functionality of the "Export to Excel" button when the "Date from" is greater than the "To" date.
Test steps: 1. Login as admin; the homepage of the user appears. 2. Click on the quicklink "Service Wise Report"; the "Service Wise Report" page appears. 3. Select a service type from the "Service Type" drop down. 4. Fill the "Date from" and "To" fields in "dd/mm/yyyy" format. 5. Click on the "Export to Excel" button.
Expected results: An error message should appear saying, "From Date can not be greater than To Date."
Test status: PASS.

Test case STRP_RFR_318
Objective: To verify the Max Length of the "Date from" field.
Test steps: 1. Login as admin; the homepage of the user appears. 2. Click on the quicklink "Service Wise Report"; the "Service Wise Report" page appears. 3. Select a service type from the "Service Type" drop down. 4. Enter more than 10 characters in the "Date from" field. 5. Enter a valid date in the "To" field. 6. Click on the "Generate Report" button.
Expected results: An error message saying, "Date Format of Start Date is not valid." should appear with the "Back" button. On clicking the "Back" button, the "Service Wise Report" page should appear.
Test status: PASS.

Test case STRP_RFR_319
Objective: To verify the Max Length of the "To" field.
Test steps: 1. Login as admin; the homepage of the user appears. 2. Click on the quicklink "Service Wise Report"; the "Service Wise Report" page appears. 3. Select a service type from the "Service Type" drop down. 4. Enter more than 10 characters in the "To" field. 5. Enter a valid date in the "Date from" field. 6. Click on the "Generate Report" button.
Expected results: An error message saying, "Date Format of End Date is not valid." should appear with the "Back" button. On clicking the "Back" button, the "Service Wise Report" page should appear.
Test status: PASS.
CONCLUSION
a. Advantages:
Delivery of a quality product that meets all quality requirements.
The website is developed within the time frame and budget.
A disciplined approach to software development.
Quality products lead to happy customers.
b. Limitations:
Quality may be compromised to deliver the product on time and within budget.
It is a time-consuming process.
It requires a large number of employees, which leads to an increase in the product cost.
13
Object-Oriented Testing
Inside this Chapter:
13.0. Basic Unit for Testing, Inheritance, and Testing
13.1. Basic Concepts of State Machines
13.2. Testing Object-Oriented Systems
13.3. Heuristics for Class Testing
13.4. Levels of Object-Oriented Testing
13.5. Unit Testing a Class
13.6. Integration Testing of Classes
13.7. System Testing (with Case Study)
13.8. Regression and Acceptance Testing
13.9. Managing the Test Process
13.10. Design for Testability (DFT)
13.11. GUI Testing
13.12. Comparison of Conventional and Object-Oriented Testing
13.13. Testing Using Orthogonal Arrays
13.14. Test Execution Issues
13.15. Case Study—Currency Converter Application
The techniques used for testing object-oriented systems are quite similar
to those that have been discussed in previous chapters. The goal is to
provide some test design paradigms that help us to perform Object-Oriented
Testing (OOT).
Case I: Extension
Suppose we change some methods in class A. Clearly,
We need to retest the changed methods.
We need to retest interaction among changed and unchanged methods.
We need to retest unchanged methods if data flow exists between statements in changed and unchanged methods.
But, what about unchanged subclass B, which inherits from A? By anti-
composition:
“The changed methods from A also need to be exercised in the unique context
of the subclass.”
Here, we will not find the error unless we retest both number (DC) and the inherited, previously tested, init_balance( ) member functions.
FIGURE 13.2
Later, we specialize the certificate of deposit class to have its own rollover( ) method.
SOLVED EXAMPLES
EXAMPLE 13.1. Consider the following code for the shape class hierarchy.
// Abstract base class: draw() is pure virtual, so Shape cannot be instantiated.
class Shape {
private:
    Point reference_point;
public:
    void put_reference_point(Point);
    Point get_reference_point();
    void move_to(Point);
    void erase();
    virtual void draw() = 0;
    virtual float area();
    Shape(Point);
    Shape();
};

// Triangle specializes Shape and overrides draw() and area().
class Triangle : public Shape {
private:
    Point vertex1;
    Point vertex2;
    Point vertex3;
public:
    Point get_vertex1();
    Point get_vertex2();
    Point get_vertex3();
    void set_vertex1(Point);
    void set_vertex2(Point);
    void set_vertex3(Point);
    void draw();
    float area();
    Triangle();
    Triangle(Point, Point, Point);
};

// EquiTriangle specializes Triangle, overriding only area().
class EquiTriangle : public Triangle {
public:
    float area();
    EquiTriangle();
    EquiTriangle(Point, Point, Point);
};
What kind of testing is required for this class hierarchy?
SOLUTION. We can use method-specific retesting and test case reuse for the shape class hierarchy.
Let
D = Develop and execute a new test suite
R = Reuse and execute the superclass test suite
E = Extend and execute the superclass test suite
S = Skip; the superclass's tests are adequate
N = Not testable
Then, we get the following table that tells which type of testing is to be
performed.
NA = Not Applicable
State-Based Behavior
A state machine accepts only certain sequences of input and rejects all
others. Each accepted input/state pair results in a specific output. State-
based behavior means that the same input is not necessarily always accepted
and when accepted, does not necessarily produce the same output.
This simple mechanism can perform very complex control tasks.
Examples of sequentially constrained behavior include:
a. GUI interface control in MS-Windows
b. Modem controller
c. Device drivers with retry, restart, or recovery
d. Command syntax parser
e. Long-lived database transactions
f. Anything designed by a state model
The central idea is that sequence matters. Object behavior can be readily modeled by a state machine.
A state machine is a model that describes behavior of the system under
test in terms of events (inputs), states, and actions (outputs).
FIGURE 13.4
A Water-Tank Example—STD
We draw the STD of a water-tank system with a valve. The valve can be in one of two states: shut or open.
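A minimal C++ sketch of this two-state machine follows; the event names and messages are assumptions made for the example, not taken from the figure.

#include <iostream>

// Two-state water-tank valve: each accepted event/state pair produces a
// specific output; any other sequence is rejected (sequence matters).
enum class ValveState { Shut, Open };
enum class Event { OpenCmd, CloseCmd };

class Valve {
public:
    ValveState state() const { return state_; }

    void handle(Event e) {
        if (state_ == ValveState::Shut && e == Event::OpenCmd) {
            state_ = ValveState::Open;
            std::cout << "valve opened: water flows into the tank\n";
        } else if (state_ == ValveState::Open && e == Event::CloseCmd) {
            state_ = ValveState::Shut;
            std::cout << "valve shut: flow stops\n";
        } else {
            std::cout << "event rejected in current state\n";
        }
    }

private:
    ValveState state_ = ValveState::Shut;  // initial state
};

int main() {
    Valve v;
    v.handle(Event::OpenCmd);   // accepted: Shut -> Open
    v.handle(Event::OpenCmd);   // rejected: same input, different state
    v.handle(Event::CloseCmd);  // accepted: Open -> Shut
}

Note how the same input (OpenCmd) is accepted in one state and rejected in the other, which is exactly the state-based behavior described above.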
Mealy/Moore Machines
There are two main variants of state models (named for their developers).
Moore Machine:
Transitions do not have output.
An output action is associated with each state. States are active.
Mealy Machine:
Transitions have output.
No output action is associated with state. States are passive.
In software engineering models, the output action often represents the
activation of a process, program, method, or module.
Although the Mealy and Moore models are mathematically equivalent,
the Mealy type is preferred for software engineering.
A passive state can be precisely defined by its variables. When the same
output action is associated with several transitions, the Mealy machine pro-
vides a more compact representation.
Conditional/Guarded Transitions
Basic state models do not represent conditional transitions. This is remedied by allowing Boolean conditions on event or state variables.
Consider a state model for stack class. It has three states: empty, loaded,
and full.
We first draw the state machine model for a STACK class, without
guarded transitions. The initial state is “Empty.”
What is a guard? It is a predicate expression associated with an event.
Now, we draw another state machine model for STACK class, with guarded
transitions.
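A minimal C++ sketch of such a guarded stack is given below; the capacity and the exact guard predicates (for example, size == capacity - 1) are illustrative assumptions, not taken from the figures.

#include <cstddef>
#include <stdexcept>
#include <vector>

// STACK whose abstract state (Empty, Loaded, Full) changes only when the
// guard on the triggering event holds.
class Stack {
public:
    enum class State { Empty, Loaded, Full };

    explicit Stack(std::size_t capacity) : capacity_(capacity) {}

    State state() const {
        if (items_.empty()) return State::Empty;
        if (items_.size() == capacity_) return State::Full;
        return State::Loaded;
    }

    // push: the transition Loaded -> Full fires only if [size == capacity - 1].
    void push(int x) {
        if (state() == State::Full) throw std::overflow_error("push on full stack");
        items_.push_back(x);
    }

    // pop: the transition Loaded -> Empty fires only if [size == 1].
    int pop() {
        if (state() == State::Empty) throw std::underflow_error("pop on empty stack");
        int top = items_.back();
        items_.pop_back();
        return top;
    }

private:
    std::size_t capacity_;
    std::vector<int> items_;
};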
SOLUTION.
The game starts.
The player who presses the start button first gets the first serve. The
button press is modeled as the player-1 start and player-2 start events.
The current player serves and a volley follows. One of three things ends the volley.
If the server misses the ball, the server's opponent becomes the server.
If the server's opponent misses the ball, the server's score is incremented and the server gets another chance.
If the server’s opponent misses the ball and the server’s score is at the
game point, the server is declared winner.
FIGURE 13.9
Properties of Statecharts
1. They use two types of state: group and basic.
2. Hierarchy is based on a set-theoretic formalism (hypergraphs).
3. Easy to represent concurrent or parallel states and processes.
Therefore, we can say that:
Statecharts = State diagrams + Depth + Orthogonality + Broadcast communication
A basic state model and its equivalent statechart are shown below:
FIGURE 13.10 (a) Basic state model; (b) equivalent statechart.
We have already discussed the STD. We now discuss its equivalent statechart.
In the Figure 13.10(b), we observe the following:
1. State D is a super state. It groups states A and C because they share
common transitions.
2. State A is the initial state.
3. Event f fires the transition AB or CB, depending on which state is active.
4. Event g fires AC but only if C is true (a conditional event).
5. Event h fires BC because C is marked as the default state.
6. The unlabelled transition inside of state D indicates that C is the default
state of D.
Statecharts can represent hierarchies of single-thread or concurrent state
machines.
Any state may contain substates.
The substate diagram may be shown on a separate sheet.
Decomposition rules are similar to those used for data flow diagrams
(DFD). Orthogonal superstates can represent several situations.
The interaction among states of separate classes (objects).
The non-interaction among states of separate processes that proceed independently, for example, "concurrent," "parallel," "multi-thread," or "asynchronous" execution.
Statecharts have been adapted for use in many OOA/OOD methodologies
including:
Booch OOD
Object modelling technique (OMT)
Object behavior analysis (OBA)
Fusion
Real-time object-oriented modeling (ROOM)
EXAMPLE 13.3. Consider a traffic light control system. The traffic light has
five states—red, yellow, green, flashing red, and OFF. The event, “power
on,” starts the system but does not turn on a light. The system does a self test
and if no faults are recognized, the no fault condition becomes true. When a
reset event occurs and the no fault condition holds, the red on event is gen-
erated. If a fault is raised in any state, the system raises a “Fault” event and
returns to the off state. Draw its
a. State transition diagram.
b. State chart.
SOLUTION. (a) We first draw its state transition diagram (STD) shown in
Figure 13.11.
The unlabeled transition inside the cycling state shows that red is the
default state of cycling.
An aggregration of states is a superstate and the model within an aggre-
gation is a process. The entire model is also a process, corresponding to the
entire system under study.
The traffic light model has three processes: the traffic light system, the
on process, and the cycling process.
The states enclosed in the undivided box are mutually exclusive, indicating that the system may be in exactly one of these states at a time. This is called XOR decomposition.
The two superstates “on” and “cycling” show how transitions are
simplified.
The event reset fires the transition from off to red on because red on is marked as the default state within both superstates, on and cycling.
In a state chart, a state may be an aggregate of other states (a superstate)
or an atomic state.
In the basic state model, one transition arrow must exist for each transi-
tion having the same event, the same resultant state but different accept-
ing states. This may be represented in a statechart with a single transition
from a superstate.
Figure 13.12 shows the statechart for the traffic light system.
Concatenation
Concatenation involves the formation of a subclass that has no locally defined
features other than the minimum requirement of a class definition.
State Space
A subclass state space results from the two factors:
FIGURE 13.13
TIPS Class hierarchies that do not meet these conditions are very likely to be
buggy.
Missing transition:
FIGURE 13.14
FIGURE 13.15
Missing action:
FIGURE 13.16
FIGURE 13.17
EXAMPLE 13.7. For a three player game, what conformance test suite will
you form? Derive the test cases.
SOLUTION.
TABLE 13.1 Conformance Test Suite for Three Player Game.
EXAMPLE 13.8. For a three player game, what sneak path test suite will you
form? Derive the test cases.
SOLUTION.
TABLE 13.2 Sneak Path Test Suite for Three Player Game.
3 if ((a == 2) || (x > 1))
4     x = x + 1;
5 cout << x;
}
We draw its control graph, our main tool for test case identification, as shown in Figure 13.22.
Statement coverage for this C++ code: It simply requires that a test
suite cause every statement to be executed at least once.
We can get 100% C1 coverage for the abx method with one test case.
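The abx( ) routine is not reproduced here in full; assuming the conventional form that is consistent with the numbered statements above and with the paths (1-2-3-4-5 and 1-3-5) used in the coverage tables below, a single test case such as a = 2, b = 0, x = 4 drives execution through every statement:

#include <iostream>

// Assumed form of abx(), consistent with the statement numbers and the
// paths referenced in the coverage tables that follow.
void abx(int a, int b, int x) {
    if ((a > 1) && (b == 0))   // node 1
        x = x / a;             // node 2
    if ((a == 2) || (x > 1))   // node 3
        x = x + 1;             // node 4
    std::cout << x << '\n';    // node 5
}

int main() {
    // The single test case (a = 2, b = 0, x = 4) executes the path 1-2-3-4-5,
    // giving 100% statement (C1) coverage; the printed value is 3.
    abx(2, 0, 4);
}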
Predicate Testing
A predicate is the condition in a control statement: if, case, do while, do
until, or for. The evaluation of a predicate selects a segment of code to be
executed.
There are four levels of predicate coverage [Myers]:
Decision coverage
Condition coverage
Decision/condition coverage
Multiple condition coverage
Each of these subsumes C1 and provides greater fault detecting power.
There are some situations where predicate coverage does not subsume state-
ment coverage:
Methods with no decisions.
Methods with built-in exception handlers, for example, C++ try/throw/
catch.
Decision Coverage
We can improve on statement coverage by requiring that each decision
branch be taken at least once (at least one true and one false evaluation).
Either of the test suites below provides decision (C2) coverage for the abx
( ) method.
TABLE 13.3 Test Suite for Decision Coverage of abx ( ) Method (function).
Condition Coverage
Decision coverage does not require testing all possible outcomes of each condition. Condition coverage improves on this by requiring that each condition be evaluated as true and as false at least once. There are four conditions in the abx ( ) method.
Either of the following test suites will force at least one evaluation of
every condition. They are given below:
Decision/Condition Coverage
Condition coverage does not require testing all possible branches. Decision/condition coverage improves on this by requiring that each condition be evaluated as true and as false at least once and that each branch be taken at least once.
However, this may be infeasible due to a short-circuit Boolean evaluation.
Most programming languages evaluate compound predicates from left
to right, branching as soon as a sufficient Boolean result is obtained. This
allows statements like
if ((a == 0) || (b / a > c)) .....
to handle a = 0 without causing a “divide-by-zero” exception. This can
prevent execution of compound conditions.
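The effect can be seen in a small C++ fragment; the function name is ours, introduced only for illustration.

#include <iostream>

// Short-circuit evaluation: when (a == 0) is true, b / a is never evaluated,
// so no divide-by-zero occurs and that condition cannot be forced to a value
// by any test case.
bool safe_compare(int a, int b, int c) {
    return (a == 0) || (b / a > c);
}

int main() {
    std::cout << safe_compare(0, 10, 3) << '\n';  // 1: second condition skipped
    std::cout << safe_compare(5, 10, 1) << '\n';  // 1: second condition evaluated
}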
Test case   Path        Test values (a, b, x, x′)
DC1.1       1-2-3-4-5   a = 2, b = 0, x = 4, x′ = 3
DC1.2       1-3-5       a = 1, b = 1, x = 1, x′ = 1
The following table shows how each condition is covered by the M test suite (for the abx ( ) method).
a       b      x      Test case
> 1     = 0    dc     M1.1
> 1     ≠ 0    dc     M1.2
≤ 1     = 0    dc     M1.3 (impossible due to short circuit)
≤ 1     ≠ 0    dc     M1.4 (impossible due to short circuit)
= 2     dc     > 1    M1.1
= 2     dc     ≤ 1    M1.2
≠ 2     dc     > 1    M1.3
≠ 2     dc     ≤ 1    M1.4
So, there are 8 possible conditional variants. And we are able to exercise
all 8 with only 4 test cases.
For the same value of x, paths <A> <C> and <B> <D> are not possible.
Because the predicates could be merged, you may have found a fault or
at least a questionable piece of code.
4. Unstructured Code.
Acceptable: Exception handling, break.
Not acceptable: Anything else.
Test case derivation: We now have a graph model of the entire class.
We can identify all intra-class control paths. We can identify all intra-
class du paths for instance variables. We can apply all the preceding test
case techniques at the class level.
b. The C* metric: We know that V(G) or C of a graph, G, is given by
e – n + 2. Similarly, for the FREE flow graph the class complexity is
represented by C* or V*(G). It is the minimum number of intra-class
control paths.
∴ C* = E – N + 2
E = em + 2m
N = nm + ns
where em = Total edges in all inserted subgraphs
m = Number of inserted subgraphs
nm = Total nodes in all inserted subgraphs
ns = Nodes in state graph (states)
Thus, we have
C* = em + 2m − nm − ns + 2
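For instance, with purely illustrative values (these numbers are assumptions, not taken from the figure): m = 3 inserted method subgraphs containing em = 12 edges and nm = 10 nodes in total, attached to a state graph with ns = 4 states, give C* = 12 + 2(3) − 10 − 4 + 2 = 6, i.e., at least six intra-class control paths must be exercised.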
FIGURE 13.24
We need to consider three main facets of a class and its methods to
develop responsibility test cases. They are:
i. Functional Analysis
What kind of function is used to transform method inputs into
outputs?
Are inputs externally or internally determined?
ii. Domain Analysis
What are valid and invalid message inputs, states, and message
outputs?
What values should we select for test cases?
iii. Behavior Analysis
Does the sequence of method activation matter or not?
When sequence matters, how can we select efficient testing
sequence?
We will discuss each of these one by one.
A Testable Function
Must be independently invocable and observable.
Is typically a single responsibility and the collaborations necessary to
carry it out.
Often corresponds to a specific user command or menu action.
Testable functions should be organized into a hierarchy
Follow the existing design.
Develop a test function hierarchy.
Testable functions should be small, “atomic” units of work.
A method is typically the testable function for a class-level test; a use case is typically the testable function for a cluster-level or system test.
Step 2. Find values for the variable matrix such that the determinant of the entire matrix is not zero. This requires that
No row or column consists entirely of zeros.
An input domain and domain values are relatively easy to define. So, we have
a. Any input domain is the entire range of valid values for all external and
internal inputs to the method under test.
b. Private instance variables should be treated as input variables.
c. The domain must be modeled as a combinational function if there are
dependencies among several input variables.
An output domain may be more difficult to identify. So, we have
a. The output domain of an arithmetic function is the entire range of
values that can be computed by the function.
b. Output domains can be discontinuous–they may have “holes” or “cliffs.”
c. The output domain of a combination function is the entire range of
values for each action stub.
Next, we discuss a technique under domain analysis which is a type of
responsibility-based/ black-box testing method. This technique is popularly
known as equivalence class partitioning.
An equivalence class is a group of input values which require the same
response. A test case is written for members of each group and for non-
members. The test case designates the expected response.
FIGURE 13.25
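As a small illustration (the field and its bounds are assumed here, not taken from the case study), an input that accepts whole numbers from 1 to 12 partitions into one valid class and two invalid classes, and one test value is drawn from each:

#include <cassert>

// Hypothetical validator used only to illustrate equivalence class partitioning:
// valid class [1..12], invalid classes (< 1) and (> 12).
bool is_valid_month(int m) { return m >= 1 && m <= 12; }

int main() {
    assert(is_valid_month(6));    // representative of the valid class
    assert(!is_valid_month(0));   // representative of the "too small" class
    assert(!is_valid_month(13));  // representative of the "too large" class
}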
Collection operation   Input size                                          Collection state   Expected result
1. Add                 Single element                                      empty              added
                       Single element                                      not empty          added
                       Single element                                      capacity-1         added
                       Single element                                      full               reject
                       Several elements, sufficient to overflow           not empty          reject
                       Several elements                                    capacity-1         reject
                       Null element                                        empty              no action
                       Null element                                        not empty          no action
2. Update/Replace      Several elements, sufficient to overflow by 1       not empty          reject
                       Several elements, sufficient to reach capacity      not empty          accept
                       Several elements, sufficient to reach capacity      not empty          accept
                       Several elements, sufficient to reach capacity-1    not empty          accept
                       Several elements, fewer than in updated collection  not empty          accept, check clean-up

Collection operation   Input         Element position   Collection state   Expected result
All operations         Single item   First              Not empty          Added
                       Single item   Last               Not empty          Added
Delete/Remove          Single item   dc                 Single element     Deleted
                       Single item   dc                 Empty              Reject
A normal value
The upper-bound
The upper-bound +1
Try formula verification tests with array elements initialized to unusual
data patterns.
All elements zero or null
All elements one
All elements same value
All elements maximum value, all bits on, etc.
All elements except one are zero, one, or max
We treat each sub-structure with a special role (header, pointer vector,
etc.), as a separate data structure.
The pair-wise operand test pattern may also be applied to operators with
array operands.
Relationship test patterns: Collection classes may implement relation-
ships (mapping) between two or more classes. We have various ways of
showing relationships. For example
Entity-relationship model:
FIGURE 13.26
FIGURE 13.27
FIGURE 13.28
Exception Testing
Exception handling (like Ada exceptions or C++ try/throw/catch) adds implicit paths. It is harder to write cases that activate these paths. However, exception handling is often crucial for reliable operation and should be tested. Test cases are needed to force exceptions:
File errors (empty, overflow, missing)
I/O errors (device not ready, parity check)
Arithmetic over/under flows
Memory allocation
Task communication/creation
How can this testing be done?
1. Use patches and breakpoints: Zap the code or data to fake an error.
2. Use selective compilation: Insert exception-forcing code (e.g., divide-
by-zero) under control of conditional assembly, macro definition, etc.
3. Mistune: Cut down the available storage, disk space, etc. to 10% of
normal, for example, saturate the system with compute bound tasks.
This can force resource related exceptions.
4. Cripple: Remove, rename, disable, delete, or unplug necessary
resources.
5. Pollute: Selectively corrupt input data, files, or signals using a data zap
tool.
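A minimal C++ sketch of a test that deliberately forces an exception path is shown below; the function, the data structure, and the message are hypothetical, chosen only to illustrate the idea.

#include <iostream>
#include <stdexcept>
#include <vector>

// Hypothetical reader used to illustrate forcing the "empty input" exception path.
int first_record(const std::vector<int>& records) {
    if (records.empty())
        throw std::runtime_error("empty input file");  // implicit path under test
    return records.front();
}

int main() {
    try {
        first_record({});                        // deliberately force the exception
        std::cout << "FAIL: no exception raised\n";
    } catch (const std::runtime_error& e) {
        std::cout << "PASS: caught \"" << e.what() << "\"\n";
    }
}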
Suspicion Testing
There are many situations that indicate additional testing may be valuable
[Hamlet]. Some of those situations are given below:
1. A module written by the least experienced programmer.
2. A module with a high failure rate in either the field or in development.
3. A module that failed an inspection and needed big changes at the last
minute.
4. A module that was subject to a late or large change order after most of
the coding was done.
5. A module about which a designer or programmer feels uneasy.
These situations don't point to specific faults, but they may mean more extensive testing is warranted. For example, if n tests are needed for branch coverage, use 5n tests instead.
Error Guessing
Experience, hunches, or educated guesses can suggest good test cases.
There is no systematic procedure for guessing errors. According to Beizer,
“logic errors and fuzzy thinking are inversely proportional to the probability
of a path’s execution.”
For example, for a program that sorts a list, we could try:
An empty input list.
An input list with only one item.
A list where all entries have the same value.
A list that is already sorted.
We can try for weird paths:
Try to find the most tortuous, longest, strongest path from entry to exit.
Try “impossible” paths, and so on.
The idea is to find special cases that may have been overlooked by more
systematic techniques.
Historical Analysis
Metrics from past projects or previous releases may suggest possible trouble spots. We have already noted that the cyclomatic complexity metric, C, is a good predictor of faults. Coupling and cohesion are also good fault predictors.
coupling and low cohesion are 7 times more likely to have defects compared
to modules with low coupling and high cohesion.
Testing really occurs with the third view but we still have some problems.
For example, we cannot test abstract classes because they cannot be instanti-
ated. Also, if we are using fully flattened classes, we will need to “unflatten”
them to their original form when our unit testing is complete. If we do not
use fully flattened classes, in order to compile a class, we will need all of the
other classes above it in the inheritance tree. One can imagine the software
configuration management (SCM) implications of this requirement.
The class as a unit makes the most sense when little inheritance occurs
and classes have what we might call internal control complexity. The class
itself should have an “interesting” state-chart and there should be a fair
amount of internal messaging.
1. Client/Supplier Integration
This integration is done by “users.” The following steps are followed:
Step 1. We first integrate all servers, that is, those objects that do not
send messages to other application objects.
Step 2. Next we integrate agents, i.e., those objects that send and
receive messages. This first integration build consists of the
immediate clients of the application servers.
Step 3. There may be several agent builds.
Step 4. Finally, we integrate all actors, i.e., application objects that
send messages but do not receive them.
This technique is called client/supplier integration and is shown in
Figure 13.29.
2. Thread Integration
Thread integration is integration by end-to-end paths. A use case contains at
least one, possibly several threads.
Threaded integration is an incremental technique. Each processing func-
tion is called a thread. A collection of related threads is often called a build.
Builds may serve as a basis for test management. The addition of new threads
for the product undergoing integration proceeds incrementally in a planned
fashion. System verification diagrams are used for “threading” the requirements.
3. Configuration Integration
In systems where there are many unique target environment configurations,
it may be useful to try to build each configuration. For example, in a dis-
tributed system, this could be all or part of the application allocated to a
particular node or processor. In this situation, servers and actors are likely to
encapsulate the physical interface to other nodes, processors, channels, etc.
It may be useful to build actor simulators to drive the physical subsystem in a controllable and repeatable manner.
Configuration integration has three main steps:
1. Identify the component for a physical configuration.
2. Use message-path or thread-based integration for this subsystem.
3. Integrate the stabilized subsystems using a thread-based approach.
4. Hybrid Strategy
A general hybrid strategy is shown in the following steps:
a. Do complete class testing on actors and servers. Perform limited
bottom-up integration.
b. Do top-down development and integration of the high-level control
modules. This provides a harness for subsequent integration.
c. Big-bang the minimum software infrastructure: OS configuration,
database initialization, etc.
d. Achieve a high coverage for infrastructure by functional and structural
testing.
e. Big-bang the infrastructure and high-level control.
f. Use several message path builds or thread builds to integrate the
application agents.
Users
The following questions can help to identify user categories:
Who?
Who are the users?
Can you find any dichotomies?
— Big company versus small
— Novice versus experienced
— Infrequent versus heavy user
Experience: Education, culture, language, training, work with simi-
lar systems, etc.
Why?
What are their goals in performing the task—what do they want?
What do they produce with the system?
How?
What other things are necessary to perform the task?
— Information, other systems, time, money, materials, energy, etc.
What methods or procedures do they use?
Environment
The user/task environment (as well as the OS or computer) may span a wide
range of conditions.
Consider any system embedded in a vehicle. Anywhere the vehicle can
be taken is a possible environment.
What external factors are relevant to the user? To the system’s ability to
perform? For example, buildings, weather, electromagnetic interference, etc.
What internal factors are relevant to the user? To the system’s ability to
perform? For example, platform resources like speed, memory, ports, etc.,
AC power system loading, multitasking.
With scenario categories in hand, we can focus on specific test cases.
This is called an operational profile.
An activity is a specific discrete interaction with the system. Ideally, an
activity closely corresponds to an event-response pair. It could be a subjec-
tive definition but must have a start/stop cycle. We can refine each activity
into a test by specifying:
Probability of occurrence.
Data values derived by partitioning.
Equivalence classes are scenario-oriented.
Scenarios are a powerful technique but have limitations and require a
concentrated effort. So, we have the following suggestions:
User/customer cooperation will probably be needed to identify realistic
scenarios.
Scenarios should be validated with user/customer or a focus group.
Test development and evaluation requires people with a high level of
product expertise who are typically in short supply.
Generate a large number of test cases.
Well-defined housekeeping procedures and automated support is
needed if the scenarios will be used over a long period of time by many
people.
Next we consider a case study of ACME Widget Co.
We will illustrate the operational profile (or specific test cases) with the
ACME Widget Co. order system.
Users
There are 1000 users of the ACME Widget order system.
Their usage patterns differ according to how often they use the system.
Of the total group, 300 are experienced, and about 500 will use the sys-
tem on a monthly or quarterly basis. The balance will use the system less
than once every six months.
Environment
Several locations have significantly different usage patterns.
Plant, office, customer site, and hand-held access.
Some locations are only visited by certain users. For example, only
experienced users go to customer sites.
Usage
The main user-activities are order entry, order inquiry, order update,
printing a shipping ticket, and producing periodic reports.
After studying the usage patterns, we find proportions vary by user type
and location.
For example, the infrequent user will never print a shipping ticket but is
likely to request periodic reports.
Some scenarios are shown in Table 13.5.
User type    p1    Location   p2    Activity   p3    Scenario probability (p)
Infrequent   0.2   Plant      0.05  Report     0.75  0.0075
Infrequent   0.2   Plant      0.05  Update     0.15  0.0015
Infrequent   0.2   Plant      0.05  Inquiry    0.10  0.0010
Infrequent   0.2   Office     0.95  Inquiry    0.60  0.1140
Infrequent   0.2   Office     0.95  Update     0.10  0.0190
Infrequent   0.2   Office     0.95  Report     0.30  0.0570
1.0000
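Each scenario probability is simply the product of the three conditional proportions, p = p1 × p2 × p3; for the first row above, 0.2 × 0.05 × 0.75 = 0.0075.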
There are two main parts in an operational profile: usage scenarios and scenario probabilities:
User type     p1    Location       p2    Activity      p3    Scenario probability (p)
Experienced   0.3   Plant          0.80  Print Ticket  0.90  0.2160
Cyclical      0.5   Hand Held      0.40  Order Entry   0.95  0.1900
Cyclical      0.5   Office         0.50  Inquiry       0.50  0.1250
Infrequent    0.2   Office         0.95  Inquiry       0.60  0.1140
Cyclical      0.5   Office         0.50  Order Entry   0.30  0.0750
Infrequent    0.2   Office         0.95  Report        0.30  0.0570
Cyclical      0.5   Office         0.50  Update        0.20  0.0500
Cyclical      0.5   Plant          0.10  Print Ticket  0.90  0.0450
Experienced   0.3   Office         0.10  Order Entry   0.70  0.0210
Experienced   0.3   Customer Site  0.10  Inquiry       0.70  0.0210
Infrequent    0.2   Office         0.95  Update        0.10  0.0190
Experienced   0.3   Plant          0.80  Update        0.05  0.0120
Experienced   0.3   Plant          0.80  Inquiry       0.05  0.0120
Infrequent    0.2   Plant          0.05  Report        0.75  0.0075
Cyclical      0.5   Hand Held      0.40  Inquiry       0.03  0.0060
Experienced   0.3   Customer Site  0.10  Update        0.20  0.0060
Experienced   0.3   Office         0.10  Update        0.20  0.0060
Cyclical      0.5   Hand Held      0.40  Update        0.20  0.0060
Experienced   0.3   Customer Site  0.10  Order Entry   0.10  0.0030
Experienced   0.3   Office         0.10  Inquiry       0.10  0.0030
Cyclical      0.5   Plant          0.10  Inquiry       0.05  0.0025
Cyclical      0.5   Plant          0.10  Update        0.05  0.0025
Infrequent    0.2   Plant          0.05  Update        0.15  0.0015
Infrequent    0.2   Plant          0.05  Inquiry       0.10  0.0010
Total                                                        1.0000
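Each scenario probability in the table is simply the product of the three conditional proportions, p = p1 x p2 x p3. The following Python sketch shows one way to compute and sort such a profile; the row data merely reproduces a few rows of the table above.

# Build a small operational profile: p = p1 * p2 * p3 for each scenario.
scenarios = [
    # (user type, p1, location, p2, activity, p3)
    ("Experienced", 0.3, "Plant",     0.80, "Print Ticket", 0.90),
    ("Cyclical",    0.5, "Hand Held", 0.40, "Order Entry",  0.95),
    ("Infrequent",  0.2, "Office",    0.95, "Inquiry",      0.60),
]

profile = [(user, loc, act, round(p1 * p2 * p3, 4))
           for (user, p1, loc, p2, act, p3) in scenarios]

# Sort by descending scenario probability, as in the table above.
for user, loc, act, p in sorted(profile, key=lambda row: -row[-1]):
    print(f"{user:12s} {loc:10s} {act:13s} {p:.4f}")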
The operational profile is a framework for a complete test plan. For each sce-
nario, we need to determine which functions of the system under test will be used.
An activity often involves several system functions; these are called “runs.”
Each run is a thread. It has an identifiable input and produces a distinct output.
For example, the experienced/plant/ticket scenario might be composed
of several runs.
Display pending shipments.
Display scheduled pickups.
Assign carrier to shipment.
Enter carrier lading information.
Print shipment labels.
Enter on-truck timestamp.
Some scenarios may be low probability but have high potential impact.
For example, suppose ACME Widget is promoting order entry at the customer
site as a key selling feature. Even though this accounts for only 3 in a
thousand uses, it should be tested as if it were a high-priority scenario.
This can be accomplished by adding a weight to each scenario:
Weights Scenario
+2 Must test, mission/safety critical
+1 Essential functionality, necessary for robust operation
+0 All other scenarios
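To make the weighting concrete, here is a small sketch. The scenario names and numbers echo the example above; the ranking rule of adding the weight to the probability is one possible choice among several, not a prescribed formula.

# Rank scenarios by weight + probability so that a rare but mission-critical
# scenario (weight +2) is still tested early.
weighted = [
    # (scenario, probability, weight)
    ("Experienced / Customer Site / Order Entry", 0.0030, 2),  # key selling feature
    ("Experienced / Plant / Print Ticket",        0.2160, 0),
    ("Cyclical / Hand Held / Order Entry",        0.1900, 1),
]

for name, p, w in sorted(weighted, key=lambda s: -(s[2] + s[1])):
    print(f"{name:45s} p={p:.4f}  weight=+{w}")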
Approach used
Reveal faults in new or modified modules. This requires running new
test cases and typically reusing old test cases.
Reveal faults in unchanged modules. This requires re-running old test
cases.
Requires reusable library of test suites.
When to do?
Periodically, every three months.
After every integration of fixes and enhancements.
Frequency, volume, and impact must be considered.
What to test?
Changes result from new requirements or fixes.
Analysis of the requirements hierarchy may suggest which subset to
select.
If new modules have been added, you should redetermine the call paths
required for CI coverage.
Acceptance Testing
On completion of the developer administered system test, three additional
forms of system testing may be appropriate.
a. Alpha test: Its main features are:
1. It is generally done “in-house” by an independent test organization.
2. The focus is on simulating real-world usage.
3. Scenario-based tests are emphasized.
b. Beta test: Its main features are:
1. It is done by representative groups of users or customers with
prerelease system installed in an actual target environment.
2. Customer attempts routine usage under typical operating conditions.
3. Testing is completed when failure rate stabilizes.
c. Acceptance test: Its main features are:
1. Customer runs test to determine whether or not to accept the system.
2. Requires a meeting of the minds on the acceptance criteria and the
acceptance test plan.
2. The test design: It defines the features/functions to test and the pass/fail
criteria. It designates all test cases to be used for each feature/function.
3. The test cases: They define the items to be tested and provide traceability
to the SRS, SDD, and user operations or installation guides. They specify the
input, output, environment, procedures, and intercase dependencies of each
test case.
4. Test procedures: It describes and defines the procedures necessary to
perform each test.
Each item, section, and sub-section should have an identifying number
and designate date prepared and revised, authors, and approvals.
How should we go about testing a module, program, or system? What
activities and deliverables are necessary? A general approach is described in
IEEE 87a, an accepted industry standard for unit testing. It recommends
four main steps:
Step 1. Prepare a testing plan: Document the approach, the necessary
resources, and the exit criterion.
Step 2. Design the test:
2.1 Develop an architecture for the test, organize by goals.
2.2 Develop a procedure for each test case.
2.3 Prepare the test cases.
2.4 Package the plan per IEEE 82a.
2.5 Develop test data.
Step 3. Test the components:
3.1 Run the test cases.
3.2 Check and classify the results of each test case:
3.2.1 Actual results meet expected results.
3.2.2 Failure observed:
Implementation fault.
Design fault.
Undetermined fault.
Choice of Standards
The planning aspects are proactive measures that can have an across-the-
board influence on all testing projects.
Standards comprise an important part of planning in any organization.
Standards are of two types:
1. External standards
2. Internal standards
Built-in test features are shown in Figure 13.31 and are summarized below:
1. Assertions automate basic checking and provide “set and forget” runtime
checking of basic conditions for correct execution.
2. Set/Reset provides controllability.
3. Reporters provide observability.
4. A test suite is a collection of test cases and a plan for using them. IEEE
standard 829 defines the general contents of a test plan.
5. Test tools: Testing requires automation. Without automation, greater
costs will be incurred to achieve a given reliability goal. The absence of
tools inhibits testability.
6. Test process: The overall software process capability and maturity can
significantly facilitate or hinder testability. This follows the key process
area of software product engineering at the defined level.
Test case 1 in the table above says to test the combination that has book
set to “in-stock,” purchase set to “cash,” and shipping set to “overnight.”
clicked, the result is displayed. The clear button will clear the screen. Click-
ing on the quit button ends the application.
Now, we will perform the following on this GUI application:
FIGURE 13.33
This is RUC-3. Based on this real-use case, we derive system-level test cases
also. They are given below.
Third Level: To derive test cases from a finite state machine description of
the external appearance of the GUI. This is shown below:
A test case in this formulation is a circuit, that is, a path whose start node
and end node are the same; this node is usually the idle state. Nine such test
cases are shown in the table below. The numbers in the table show the
sequence in which the states are traversed by each test case. The test cases,
TC1 to TC9, are as follows:
State TC1 TC2 TC3 TC4 TC5 TC6 TC7 TC8 TC9
Idle 1 1 1 1 1 1 1 1 1, 3
Missing country and dollar message 2 2
Country selected 2 2, 4 2 4, 6
U.S. dollar amount entered 2 2, 4 2
Missing U.S. dollar msg 3 5
Both inputs done 3 5 3 5 3 3 7
Missing country msg 3
Equivalent amount displayed 4 6 4 6
Idle 2 5 7 5 7 1 1 1 1
Fourth Level: To derive test cases from state-based event tables. This
would have to be repeated for each state. We might call this the exhaustive
level because it exercises every possible event for each state. However, it
is not truly exhaustive because we have not tested all sequences of events
across states. The other problem is that it is an extremely detailed view of
system testing that is likely very redundant with integration and even unit-
level test cases.
Now, we will discuss statechart-based system testing.
Statecharts are a fine basis for system testing. The problem is that Statecharts
are prescribed to be at the class level in UML. There is no easy way to
compose the Statecharts of several classes to get a system-level Statechart.
A possible solution is to translate each class-level Statechart into a set of
event-driven petri nets (EDPNs) to describe threads to be tested. Then the
atomic system functions (ASFs) and the data places are identified. For our
GUI application, they are as follows:
SUMMARY
Various key object-oriented concepts can be tested using some testing
methods and tools. They are summarized below:
Code coverage methods for the methods of a class.
The Alpha-Omega method of exercising methods.
State diagrams to test the states of a class.
Inter-object communication: message sequencing.
Object reuse and parallel development of objects: needs more frequent
integration tests.
ANSWERS
1. a. 2. a. 3. c. 4. c.
5. a. 6. d. 7. a. 8. b.
9. b. 10. c.
Q. 4. The big bang is estimated to have occurred about 18 billion years ago.
Given: Paths/second = 1 × 10³
Seconds/year = 3.154 × 10⁷
If we start testing 10³ paths per second at the instant of the big bang,
how many paths could we test? At what loop value of x would we run
out of time?
Ans. Given: 18 billion years = 1.8 × 10⁹ years
and 3.154 × 10⁷ seconds/year
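Completing the arithmetic for the first part of the question, using only the figures given above (the loop-value part needs the path-count formula for the loop example from the earlier chapter, so it is not computed here):

years = 1.8e9                    # 18 billion years
sec_per_year = 3.154e7
paths_per_sec = 1e3

total_seconds = years * sec_per_year            # about 5.68 x 10^16 seconds
paths_tested = total_seconds * paths_per_sec    # about 5.68 x 10^19 paths
print(f"{paths_tested:.3e}")                    # 5.677e+19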
REVIEW QUESTIONS
1. Object-oriented languages like Java do not support pointers. This
makes testing easier. Explain how.
2. Consider a nested class. Each class has one method. What kind of
problems will you encounter during testing of such nested classes? What
about their objects?
3. Explain the following:
a. Unit and integration testing
b. Object-oriented testing
4. Write and explain some common inheritance related bugs.
5. How is object-oriented testing different from procedural testing?
Explain with examples.
6. Describe all the methods for class testing.
7. Write a short paragraph on issues in object-oriented testing.
8. Explain briefly about object-oriented testing methods with examples.
Suggest how you test object-oriented systems by use-case approach.
9. Illustrate “how do you design interclass test cases.” What are the various
testing methods applicable at the class level?
10. a. What are the implications of inheritance and polymorphism in object-
oriented testing?
b. How does GUI testing differ from normal testing? How is GUI
testing done?
11. With the help of suitable examples, demonstrate how integration testing
and system testing are done for object-oriented systems.
12. How can reusability features be exploited by an object-oriented testing
approach?
13. a. Discuss the salient features of GUI testing. How is it different from
class testing?
b. Explain the testing process for object-oriented programs (systems).
14. Draw a state machine model for a two-player game and also write all
possible control faults from the diagram.
14
The Game Testing Process 1
Developers don’t fully test their own games. They don’t have time to, and
even if they did, it’s not a good idea. Back at the dawn of the video game
era, the programmer of a game was also its artist, designer, and tester. Even
though games were very small (about the size of an email), the programmer spent
most of his time designing and programming. Little of his time was spent
testing. If he did any testing, it was based on his own assumptions about how
players would play his game. The following sidebar illustrates the type of
problem these assumptions could create.
1 This chapter appeared in Game Testing, Third Edition, C. Schultz and R. D. Bryant.
breathtaking (for the time) and the game went on to become one of the best
sellers on the Intellivision platform.
Weeks after the game was released, however, a handful of customers
began to call the game’s publisher, Mattel Electronics, with an odd com-
plaint: when they scored more than 9,999,999 points, the score displayed
negative numbers, letters, and symbol characters. This in spite of the prom-
ise of “unlimited scoring potential” in the game’s marketing materials. The
problem was exacerbated by the fact that the Intellivision console had a fea-
ture that allowed players to play the game in slow motion, making it much
easier to rack up high scores. John Sohl, the programmer, learned an early
lesson about video games: the player will always surprise you.
The sidebar story demonstrates why video game testing is best done by
testers who are: (a) professional, (b) objective, and (c) separated—either
physically or functionally—from the game’s development team. That remove
and objectivity allows testers to think independently of the developers, to
function as players, and to figure out new and interesting ways to break the
game. This chapter discusses how, like the gears of a watch, the game testing
process meshes into the game development process.
FIGURE 14.1 Game inputs and outputs. Button presses, audio, video, packets, and memory flow into the game code (the "black box"); video, audio, vibration, and memory (saved data) come back out.
Once some or all of these types of input are received by the game,
it reacts in interesting ways and produces such output as video, audio,
vibration (via force feedback devices), and data saved to memory cards or
hard drives.
The input path of a video game is not one-way, however. It is a feedback
loop, where the player and the game are constantly reacting to each other.
Players don't receive output from a game and stop playing. They constantly
alter and adjust their input "on the fly," based on what they see, feel, and
hear in the game. The game, in turn, makes similar adjustments in its outputs
based on the inputs it receives from the player. Figure 14.2 illustrates this loop.
FIGURE 14.2 The player's feedback loop adjusts to the game's input, and vice versa.
If the feedback received by the player were entirely predictable all
the time, the game would be no fun. Nor would it be fun if the feedback
received by the player were entirely random all the time. Instead, feed-
back from games should be just random enough to be unpredictable. It is
the unpredictability of the feedback loop that makes games fun. Because
the code is designed to surprise the player and the player will always sur-
prise the programmer, black-box testing allows testers to think and behave
like players.
Four white-box tests are required for this module to test the proper
behavior of each line of code within the module. The first test would be
to call the TeamName function with the parameter TEAM_AXIS and then
check that the string “RED” is returned. Second, pass the value of TEAM_
ALLIES and check that “BLUE” is returned. Third, pass TEAM_SPECTATOR
and check that “SPECTATOR” is returned. Finally, pass some other value
such as TEAM_NONE, which makes sure that “FREE” is returned. Together
these tests not only exercise each line of code at least once, they also test the
behavior of both the “true” and “false” branches of each if statement.
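The module described above is game code written in C; purely as an illustration, here is a minimal Python rendering of the same logic together with the four white-box tests just described. The constant names and returned strings follow the description above.

TEAM_AXIS, TEAM_ALLIES, TEAM_SPECTATOR, TEAM_NONE = range(4)

def team_name(team):
    """Map a team constant to its display string, mirroring the module described above."""
    if team == TEAM_AXIS:
        return "RED"
    elif team == TEAM_ALLIES:
        return "BLUE"
    elif team == TEAM_SPECTATOR:
        return "SPECTATOR"
    return "FREE"

# The four white-box tests, one per branch:
assert team_name(TEAM_AXIS) == "RED"
assert team_name(TEAM_ALLIES) == "BLUE"
assert team_name(TEAM_SPECTATOR) == "SPECTATOR"
assert team_name(TEAM_NONE) == "FREE"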
This short exercise illustrates some of the key differences between a
white-box testing approach and a black-box approach:
Black-box testing should test all of the different ways you could choose
a test value from within the game, such as different menus and buttons.
White-box testing requires you to pass that value to the routine in one
form—its actual symbolic value within the code.
By looking into the module, white-box testing reveals all of the possi-
ble values that can be provided to and processed by the module being
tested. This information might not be obvious from the product require-
ments and feature descriptions that drive black-box testing.
Black-box testing relies on a consistent configuration of the game and its
operating environment in order to produce repeatable results. White-box
testing relies only on the interface to the module being tested and is
concerned with external files, streams, file systems, or global variables only
insofar as the module under test reads or writes them.
the game support? What features have been cut? The scope of testing
should ensure that no new issues were introduced in the process of fixing
bugs prior to this release.
2. Prepare for testing. Code, tests, documents, and the test environment
are updated by their respective owners and aligned with one another. By
this time the development team should have marked the bugs fixed for
this build in the defect database so the QA team can subsequently verify
those fixes and close the bugs.
3. Perform the test. Run the test suites against the new build. If you find
a defect, test “around” the bug to make certain you have all the details
necessary to write as specific and concise a bug report as possible. The
more research you do in this step, the easier and more useful the bug
report will be.
4. Report the results. Log the completed test suite and report any defects
you found.
5. Repair the bug. The test team participates in this step by being available
to discuss the bug with the development team and to provide any directed
testing a programmer might require to track the defect down.
6. Return to Step 1 and re-test. With new bugs and new test results
comes a new build.
These steps not only apply to black-box testing, they also describe
white-box testing, configuration testing, compatibility testing, and any
other type of QA. These steps are identical no matter what their scale. If
you substitute the word “game” or “project” for the word “build” in the
preceding steps, you will see that they can also apply to the entire game, a
phase of development (Alpha, Beta, and so on), or an individual module or
feature within a build. In this manner, the software testing process can be
considered fractal—the smaller system is structurally identical to the larger
system, and vice versa.
As illustrated in Figure 14.3, the testing process itself is a feedback loop
between the tester and developer. The tester plans and executes tests on the
code, then reports the bugs to the developer, who fixes them and compiles a
new build, which the tester plans and executes, and so on.
FIGURE 14.3 The testing process feedback loop.
This is a very small portion of a very simple test suite for a very small
and simple game. The first section (steps one through seven) tests launching
the game, ensuring that the default display is correct, and exiting. Each step
either gives the tester one incremental instruction or asks the tester one
simple question. Ideally, these questions are binary and unambiguous. The
tester performs each test case and records the result.
Because the testers will inevitably observe results that the test designer
hadn’t planned for, the Comments field allows the tester to elaborate on a
Yes/No answer, if necessary. The lead or primary tester who receives the
completed test suite can then scan the Comments field and make adjust-
ments to the test suite as needed for the next build.
Where possible, the questions in the test suite should be written in such
a way that a “yes” answer indicates a “pass” condition—the software is work-
ing as designed and no defect is observed. “No” answers, in turn, should
indicate that there is a problem and a defect should be reported. There are
several reasons for this: it’s more intuitive, because we tend to group “yes”
and “pass” (both positives) together in our minds the same way we group
“no” and “fail.” Further, by grouping all passes in the same column, the com-
pleted test suite can be easily scanned by both the tester and test managers
to determine quickly whether there were any fails. A clean test suite will
have all the checks in the Pass column.
For example, consider a test case covering the display of a tool tip—a
small window with instructional text incorporated into many interfaces.
A fundamental test case would be to determine whether the tool tip text
contains any typographical errors. The most intuitive question to ask in the
test case is:
Does the text contain any typographical errors?
The problem with this question is that a pass (no typos, hence no bugs)
would be recorded as a “no.” It would be very easy for a hurried (or tired)
tester to mistakenly mark the Fail column. It is far better to express the
question so that a “yes” answer indicates a “pass” condition:
Is the text free of typographical errors?
Entry Criteria
It’s advisable to require that any code release meets some criteria for being
fit to test before you risk wasting your time, or your team’s time, testing it.
This is similar to the checklists that astronauts and pilots use to evaluate the
fitness of their vehicle systems before attempting flight. Builds submitted to
testing that don’t meet the basic entry criteria are likely to waste the time of
both testers and programmers. The countdown to testing should stop until
the test “launch” criteria are met.
The following is a list of suggestions for entry criteria. Don’t keep these
a secret from the rest of the development team. Make the team aware of the
purpose—to prevent waste—and work with them to produce a set of criteria
that the whole team can commit to.
The game code should be built without compiler errors. Any new com-
piler warnings that occur are analyzed and discussed with the test team.
The code release notes should be complete and should provide the detail
that testers need to plan which tests to run or to re-run for this build.
Defect records for any bugs closed in the new release should be updated
so they can be used by testers to make decisions about how much to test
in the new build.
Tests and builds should be properly version-controlled, as described in
the sidebar, “Version Control: Not Just for Developers.”
When you are sufficiently close to the end of the project, you also want
to receive the game on the media on which it will ship. Check that the
media provided contains all of the files that would be provided to your
customer.
of bugs in an old build. This is not only a waste of time, but it can cause
panic on the part of the programmer and the project manager.
Proper version control for the test team includes the following steps:
1. Collect all prior physical (e.g., disk-based) builds from the test team
before distributing the new build. The prior versions should be stacked
together and archived until the project is complete. (When testing digital
downloads, uninstall and delete or archive prior digital builds.)
2. Archive all paperwork. This includes not only any build notes you received
from the development team, but also any completed test suites, screen
shots, saved games, notes, video files, and any other material generated
during the course of testing a build. It is sometimes important to retrace
steps along the paper trail, whether to assist in isolating a new defect or
determining in what version an old bug was re-introduced.
3. Verify the build number with the developer prior to distributing it.
4. In cases where builds are transmitted electronically, verify the byte count,
file dates, and directory structure before building it. It is vital in situations
where builds are sent via FTP, email, Dropbox (www.dropbox.com), or other
digital means that the test team makes certain to test a version identical
to the version the developers uploaded. Confirm the integrity of the
transmitted build before distributing it to the testers (a small verification
sketch follows this list).
5. Renumber all test suites and any other build-specific paperwork or
electronic forms with the current version number.
6. Distribute the new build for smoke testing.
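As mentioned in step 4, one cheap way to confirm that the build you received matches the build that was uploaded is to compare simple file statistics on both ends. The Python sketch below is only one possible convention; the directory path is whatever location your build lands in, and the published summary you compare against is something you would agree on with the development team.

import hashlib
import os

def summarize_build(root):
    """Walk a build directory; return (file count, total bytes, digest of names and sizes)."""
    digest = hashlib.sha256()
    file_count = total_bytes = 0
    for dirpath, _dirs, files in sorted(os.walk(root)):
        for name in sorted(files):
            path = os.path.join(dirpath, name)
            size = os.path.getsize(path)
            digest.update(f"{os.path.relpath(path, root)}:{size}".encode())
            file_count += 1
            total_bytes += size
    return file_count, total_bytes, digest.hexdigest()

# Compare the tuple computed locally against the one the developers publish
# for the uploaded build before distributing it to the testers.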
Configuration Preparation
Before the test team can work with the new build, some housekeeping is in
order. The test equipment must be readied for a new round of testing. The
test lead must communicate the appropriate hardware configuration to each
tester for this build. Configurations typically change little over the course of
game testing. To test a single-player-only console game, you need the game
console, a controller, and a memory card or hard drive. That hardware con-
figuration typically will not change for the life of the project. If, however, the
new build is the first in which network play is enabled, or a new input device
or PC video card has been supported, you will perhaps need to augment the
hardware configuration to perform tests on that new code.
Save your saves! Always archive your old player-created data, including
TIP game saves, options files, and custom characters, levels, or scenarios.
Testing takes place in the lab and labs should be clean. So should test
hardware. It’s difficult to be too fastidious or paranoid when preparing test
configurations. When you get a new build, reformat your PC rather than
merely uninstall the old build.
Delete your old builds! Reformat your test hardware—whether it’s a PC, a
TIP tablet or a smartphone. If it’s a browser game, delete the cache.
Browser games should be purged from each browser’s cache and the
browser should be restarted before you open the new game build. In the
case of Flash® games, you can right-click on the old build and select “Global
Settings…” This will launch a separate browser process and will connect
you to the Flash Settings Manager. Choosing the “Website Storage Settings
panel” will launch a Flash applet. Click the “Delete all sites” button and
close all of your browser processes. Now you can open the new build of your
Flash game.
iOS™ games should be deleted both from the device and the iTunes®
client on the computer the device is synched to. When prompted by iTunes,
choose to delete the app entirely (this is the “Move to Recycle Bin” or “Move
to Trash” button). Now, synch your device and make certain the old build has
been removed both from iTunes and your device. Empty the Recycle Bin (or
the Trash), relaunch iTunes, copy the new build, and synch your device again.
Android™ games, like iOS games, should be deleted entirely from the
device and your computer. Always synch your device to double-check that
you have scrubbed the old build off before you install the new build.
Whatever protocol is established, config prep is crucial prior to the
distribution of a new build.
Smoke Testing
The next step after accepting a new build and preparing to test it is to cer-
tify that the build is worthwhile to submit to formal testing. This process is
sometimes called smoke testing, because it’s used to determine whether a
build "smokes" (malfunctions) when run. At a minimum, it should consist
of a “load & launch,” that is, the lead or primary tester should launch the
game, enter each module from the main menu, and spend a minute or two
playing each module. If the game launches with no obvious performance
problems and each module implemented so far loads with no obvious prob-
lems, it is safe to certify the build, log it, and duplicate it for distribution to
the test team.
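Where the game exposes any scripted entry points, even this "load & launch" pass can be partially automated. The sketch below is a hedged illustration only: launch_game and load_module stand in for whatever harness your project actually provides, and the module list is illustrative.

# launch_game and load_module are placeholders for a project-specific harness;
# MODULES is an illustrative list of main-menu entries.
MODULES = ["single_player", "multiplayer", "options", "credits"]

def smoke_test(launch_game, load_module):
    """Return a list of (module, error) pairs; an empty list means the build passes."""
    game = launch_game()              # the build must at least load and launch
    failures = []
    for name in MODULES:
        try:
            load_module(game, name)   # enter each module from the main menu
        except Exception as exc:      # any crash or error fails the smoke test
            failures.append((name, exc))
    return failures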
Now that the build is distributed, it’s time to test for new bugs, right?
Not just yet. Before testing can take a step forward, it must first take a step
backward and verify that the bugs the development team claims to have fixed
in this build are indeed fixed. This process is known as regression testing.
Regression Testing
Fix verification can be at once very satisfying and very frustrating. It gives
the test team a good sense of accomplishment to see the defects they report
disappear one by one. It can be very frustrating, however, when a fix of one
defect creates another defect elsewhere in the game, as can often happen.
The test suite for regression testing is the list of bugs the development
team claims to have fixed. This list, sometimes called a knockdown list, is
ideally communicated through the bug database. When the programmer or
artist fixes the defect, all they have to do is change the value of the Devel-
oper Status field to “Fixed.” This allows the project manager to track the
progress on a minute-to-minute basis. It also allows the lead tester to sort
the regression set (by bug author or by level, for example). At a minimum,
the knockdown list can take the form of a list of bug numbers sent from the
development team to the lead tester.
Each tester will take the bugs they’ve been assigned and perform the
steps in the bug write-up to verify that the defect is indeed fixed. The fixes
for many defects are easily verified (typos, missing features, and so on).
Some defects, such as hard-to-reproduce crashes, could seem fixed, but the
lead tester might want to err on the side of caution before he closes the bug.
By flagging the defect as verify fix, the bug can remain in the regression set
(i.e., stay on the knockdown list) for the next build (or two), but out of the
set of open bugs that the development team is still working on. Once the
bug has been verified as fixed in two or three builds, the lead tester can then
close the bug with more confidence.
At the end of regression testing, the lead tester and project manager
can get a very good sense of how the project is progressing. A high fix rate
(number of bugs closed divided by the number of bugs claimed to have been
fixed) means the developers are working efficiently. A low fix rate could be
cause for concern. Are the programmers arbitrarily marking bugs as fixed
if they think they’ve implemented new code that might address the defect,
rather than troubleshooting the defect itself? Are the testers not writing
clear bugs? Is there a version control problem? Are the test systems config-
ured properly? While the lead tester and project manager mull over these
questions, it’s time for you to move on to the next step in the testing process:
performing structured tests and reporting the results.
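The fix rate mentioned above is a simple ratio, and it is worth computing for every build. A small sketch follows; the numbers in the example call are illustrative only.

def fix_rate(bugs_closed, bugs_claimed_fixed):
    """Bugs verified and closed divided by bugs the developers claimed to have fixed."""
    return bugs_closed / bugs_claimed_fixed if bugs_claimed_fixed else 0.0

print(f"{fix_rate(42, 50):.0%}")   # 84% -- illustrative numbers only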
These are the types of questions you will be asked by the lead tester,
project manager, or developer. Try to develop the habit of second-guessing
such questions by performing some quick additional testing before you write
the bug. Test to see whether the defect occurs in other areas. Test to deter-
mine whether the bug happens when you choose a different character. Test
to check which other game modes contain the issue. This practice is known
as testing “around” the bug.
Once you are satisfied that you have anticipated any questions that the
development team might ask, and you have all your facts ready, you are
finally ready to write the bug report.
possible. You can’t assume that everyone reading your bug report will be as
familiar with the game as you are. Testers spend more time in the game—
exploring every hidden path, closely examining each asset—than almost any-
one else on the entire project team. A well-written bug will give a reader
who is not familiar with the game a good sense of the type and severity of the
defect it describes.
This is neither a defect nor a fact; it’s an unsolicited and arbitrary opin-
ion about design. There are forums for such opinions—discussions with the
lead tester, team meetings, play testing feedback—but the bug database isn’t
one of them.
A common complaint in many games is that the artificial intelligence, or
AI, is somehow lacking. (AI is a catch-all term that means any opponents or
NPCs controlled by the game code.)
The AI is weak.
This could indeed be a fact, but it is written in such a vague and gen-
eral way that it is likely to be considered an opinion. A much better way to
convey the same information is to isolate and describe a specific example of
AI behavior and write up that specific defect. By boiling issues down to spe-
cific facts, you can turn them into defects that have a good chance of being
addressed.
Before you begin to write a bug report, you have to be certain that you
TIP have all your facts.
Brief Description
Larger databases could contain two description fields: Brief Description (or
Summary) and Full Description (or Steps). The Brief Description field is
used as a quick reference to identify the bug. This should not be a cute nick-
name, but a one-sentence description that allows team members to identify
and discuss the defect without having to read the longer, full description
each time. Think of the brief description as the headline of the defect report.
Crash to desktop.
This is a complete sentence, but it is not specific enough. What did the
tester experience? Did the game not save? Did a saved game not load? Does
saving cause a crash?
This is a run-on sentence that contains far too much detail. A good way
to boil it down might be
Write the full description first, and then write the brief description.
TIP Spending some time polishing the full description will help you
understand the most important details to include in the brief description.
Full Description
If the brief description is the headline of a bug report, the Full Description
field provides the gory details. Rather than a prose discussion of the defect,
the full description should be written as a series of brief instructions so that
anyone can follow the steps and reproduce the bug. Like a cooking recipe—
or computer code, for that matter—the steps should be written in second
person imperative, as though you were telling someone what to do. The last
step is a sentence (or two) describing the bad result.
The fewer steps, the better; and the fewer words, the better. Remember
Brad Pitt’s warning to Matt Damon in Ocean’s Eleven: don’t use seven steps
when four will do. Time is a precious resource when developing a game. The
less time it takes a programmer to read, reproduce, and understand the bug,
the more time he has to fix it.
1. Launch game.
2. Choose multiplayer.
3. Choose skirmish.
4. Choose “Sorrowful Shoals” map.
5. Choose two players.
6. Start game.
These are very clear steps, but for the sake of brevity they can be boiled
down to
1.
Create a game against one human player. Choose
Serpent tribe.
2. Send a swordsman into a Thieves Guild to get the
Mugging power-up.
3.
Have your opponent create any unit and give
that unit any power-up.
4.
Have your Swordsman meet the other player’s
unit somewhere neutral on the map.
5.
Activate the Mugging power-up.
6.
Attack your opponent’s unit.
--> Crash to desktop as Swordsman strikes.
This might seem like many steps, but it is the quickest way to repro-
duce the bug. Every step is important to isolate the behavior of the mug-
ging code. Even small details, like meeting in a neutral place, are important,
because meeting in occupied territory might bring allied units from one side
or another into the fight, and the test might then be impossible to perform.
Great Expectations
Oftentimes, the defect itself will not be obvious from the steps in the full
description. Because the steps produce a result that deviates from player
expectation, but does not produce a crash or other severe or obvious
symptom, it is sometimes necessary to add two additional lines to your full
description: Expected Result and Actual Result.
Expected Result describes the behavior that a normal player would rea-
sonably expect from the game if the steps in the bug were followed. This
expectation is based on the tester’s knowledge of the design specification,
the target audience, and precedents set (or broken) in other games, espe-
cially games in the same genre.
Actual Result describes the defective behavior. Here’s an example.
1. Create a multiplayer game.
2. Click Game Settings.
3. Using your mouse, click any map on the map list.
Remember the map you clicked on.
4. Press up or down directional keys on your keyboard.
5. Notice the highlight changes. Highlight any other
map.
6. Click Back.
7. Click Start Game.
Expected Result: Game loads map you chose with the keyboard.
Actual Result: Game loads map you chose with the mouse.
Although the game loaded a map, it wasn’t the map the tester chose with
the keyboard (the last input device he used). That’s a bug, albeit a subtle
one. Years of precedent creates the expectation in the player’s mind that the
computer will execute a command based on the last input the player gave.
Because the map-choosing interface failed to conform to player expectation
and precedent, it could be confusing or annoying, so it should be written up
as a bug.
Use the Expected/Actual Result steps sparingly. Much of the time,
defects are obvious (see Figure 14.5). Here's an example of "stating the
obvious" in a crash bug.
INTERVIEW
More players are playing games than ever before. As any human population
grows—and the pool of game players has grown exponentially over the last
decade—that population becomes more diverse. Players are different from
each other, have different levels of experience with games, and play games
for a range of different reasons. Some players want a competitive experi-
ence, some an immersive experience, some want a gentle distraction.
The pool of game testers in any organization is always less diverse than
the player base of the game they are testing. Game testers are professionals:
they have skills in manipulating software interfaces, and they are generally
(but not necessarily) experienced game players. It's likely that if your job is
creating games, you've played video games, a lot of them. But not every
player is like you.
Brent Samul, QA Lead for developer Mobile Deluxe, put it this way: “The
biggest difference when testing for mobile is your audience. With mobile
you have such a broad spectrum of users. Having played games for so long
myself, it can sometimes be really easy to overlook things that someone who
doesn’t have so much experience in games would get stuck on or confused
about.”
It’s a big job. “With mobile, we have the ability to constantly update and
add or remove features from our games. There are always multiple things to
test for with all the different configurations of smartphones and tablets that
people have today,” Mr. Samul says.
Although testers should write bugs against the design specification, the
authors of that specification are not omniscient. As the games on every plat-
form become more and more complex, it’s the testers’ job to advocate for
the players—all players—in their bug writing. (Permission Brent Samul)
Habits to Avoid
For the sake of clarity, effective communication, and harmony among mem-
bers of the project team try to avoid two common bug writing pitfalls: humor
and jargon.
Although humor is often welcome in high-stress situations, it is not wel-
come in the bug database. Ever. There are too many chances for misinter-
pretation and confusion. During crunch time, tempers are short, skins are
thin, and nerves are frayed. The defect database could already be a point of
contention. Don’t make the problem worse with attempts at humor (even if
you think your joke is hilarious). Finally, as the late William Safire warned,
you should “avoid clichés like the plague.”
It perhaps seems counterintuitive to want to avoid jargon in such a spe-
cialized form of technical writing as a bug report, but it is wise to do so.
Although some jargon is unavoidable, and each project team quickly develops
its own nomenclature specific to the project they're working on, testers
should avoid using (or misusing) too many obscure technical terms or acro-
nyms. Remember that your audience could range from programmers to
financial or marketing executives, so use plain language as much as possible.
Although testing build after build might seem repetitive, each new build
provides exciting new challenges with its own successes (fixed bugs and
passed tests) and shortfalls (new bugs and failed tests). The purpose of going
about the testing of each build in a structured manner is to reduce waste and
to get the most out of the game team. Each time around, you get new build
data that is used to re-plan test execution strategies and update or improve
your test suites. From there, you prepare the test environment and perform
a smoke test to ensure the build is functioning well enough to deploy to the
entire test team. Once the test team is set loose, your top priority is typically
regression testing to verify recent bug fixes. After that, you perform many
other types of testing in order to find new bugs and to check that old ones
have not re-emerged. New defects should be reported in a clear, concise,
and professional manner after an appropriate amount of investigation. Once
you complete this journey, you are rewarded with the opportunity to do it
all over again.
EXERCISES
1. Briefly describe the difference between the Expected Result and the
Actual Result in a bug write-up.
2. What’s the purpose of regression testing?
3. Briefly describe the steps in preparing a test configuration.
4. What is a “knockdown list”? Why is it important?
5. True or False: Black-box testing refers to examining the actual game
code.
6. True or False: The Brief Description field of a defect report should
include as much information as possible.
15
Basic Test Plan Template 1
Game Name
1. Copyright Information
Table of Contents
SECTION I: QA TEAM (and areas of responsibility)
1. QA Lead
a. Office phone
b. Home phone
c. Mobile phone
d. Email / IM / VOIP addresses
2. Internal Testers
3. External Test Resources
1 This chapter appeared in Game Testing, Third Edition, C. Schultz and R. D. Bryant.
c. Etc.
d. The final activity is usually to run an automated script
that reports the results of the various tests and posts
them in the QA portion of the internal Web site.
2. Level #2
3. Etc.
ii. Run through a predetermined set of multiplayer levels,
performing a specified set of activities.
1. Level #1
a. Activity #1
b. Activity #2
c. Etc.
d. The final activity is usually for each tester involved in
the multiplayer game to run an automated script that
reports the results of the various tests and posts them
in the QA portion of the internal Web site.
2. Level #2
3. Etc.
iii. Email showstopper crashes or critical errors to the entire
team.
iv. Post showstopper crashes or critical errors to the daily top
bugs list (if one is being maintained).
3. Daily Reports
a. Automated reports from the preceding daily tests are posted in the
QA portion of the internal Web site.
4. Weekly Activities
a. Weekly tests
i. Run through every level in the game (not just the preset ones
used in the daily test), performing a specified set of activities
and generating a predetermined set of tracking statistics. The
same machine should be used each week.
1. Level #1
a. Activity #1
b. Activity #2
c. Etc.
2. Level #2
3. Etc.
ii. Weekly review of bugs in the Bug Tracking System
1. Verify that bugs marked “fixed” by the development team
really are fixed.
2. Check the appropriateness of bug rankings relative to
where the project is in the development.
3. Acquire a “feel” for the current state of the game, which
can be communicated in discussions to the producer and
department heads.
4. Generate a weekly report of closed-out bugs.
b. Weekly Reports
i. Tracking statistics, as generated in the weekly tests.
5. Ad Hoc Testing
a. Perform specialized tests as requested by the producer, tech lead, or
other development team members
b. Determine the appropriate level of communication to report the
results of those tests.
6. Integration of Reports from External Test Groups
a. If at all possible, ensure that all test groups are using the same bug
tracking system.
b. Determine which group is responsible for maintaining the master
list.
c. Determine how frequently to reconcile bug lists against each other.
d. Ensure that only one consolidated set of bugs is reported to the
development team.
16
Game Testing By the Numbers 1
Product metrics, such as the number of defects found per line of code, tell you
how fit the game code is for release. Test metrics can tell you about the effective-
ness and efficiency of your testing activities and results. A few pieces of basic test
data can be combined in ways that reveal important information that you can
use to keep testing on track, while getting the most out of your tests and testers.
1 This chapter appeared in Game Testing, Third Edition, C. Schultz and R. D. Bryant.
Figure 16.1 provides a set of data for a test team starting to run tests
against a new code release. The project manager worked with the test lead
to use an estimate of 12 tests per day as the basis for projecting how long it
would take to complete the testing for this release.
Thirteen days into the testing, the progress lagged what had been pro-
jected, as shown in Figure 16.2. It looks like progress started to slip on the
fifth day, but the team was optimistic that they could catch up. By the tenth
day they seemed to have managed to steer back toward the goal, but during
the last three days the team lost ground again, despite the reassignment of
some people on to and off of the team.
FIGURE 16.2 Test progress: planned versus actual tests run (0 to 180), plotted daily from 22-Dec through 12-Jan.
To understand what is happening here, data was collected for each day
that a tester was available to do testing, and the number of tests he or she
completed each day. This information can be put into a chart, as shown in
Figure 16.3. The totals show that an individual tester completes an average
of about four tests a day.
Once you have the test effort data for each person and each day, you
must compare the test effort people have contributed to the number of
work days they were assigned to participate in system testing. Ideally, this
ratio would come out to 1.00. The numbers you actually collect will give
you a measurement of something you felt was true, but couldn’t prove
before: most testers are unable to spend 100% of their time on testing.
This being the case, don’t plan on testers spending 100% of their time
on a single task! Measurements will show you how much to expect from
system testers, based on various levels of participation. Some testers will
be dedicated to testing as their only assignment. Others perhaps perform
a dual role, such as developer/tester or QA engineer/tester. Collect effort
data for your team members that fall into each category, as shown in
Figure 16.4.
CUMULATIVE
TESTER DAYS     244
ASSIGNED DAYS   532
AVAILABILITY    46%
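The availability figure is simply tester days divided by assigned days, and it is what lets you answer staffing questions honestly. A small sketch follows; the tests_remaining and days_left values are illustrative, and the four tests per tester-day average is the one observed in Figure 16.3.

tester_days = 244       # days of test effort actually contributed
assigned_days = 532     # days testers were assigned to the test effort
availability = tester_days / assigned_days         # about 0.46 (46%)

tests_per_tester_day = 4    # average from Figure 16.3
tests_remaining = 60        # illustrative
days_left = 5               # illustrative
testers_needed = tests_remaining / (tests_per_tester_day * availability * days_left)
print(round(availability, 2), round(testers_needed, 1))   # 0.46 6.5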
be much gnashing of teeth when the tests weren’t actually completed until
three weeks after they had been promised!
You need this kind of information to answer questions such as “How
many people do you need to get testing done by Friday?” or “If I can get you
two more testers, when can you be done?”
Burn into your mind that it’s easier to stay on track by getting a little
TIP extra done day to day than by trying to make up a large amount in a panic
situation; remember Rule #1: Don’t Panic.
Going back to Figure 16.1, you can see that on 8-Jan the team was only
six tests behind the goal. Completing one extra test on each of the previous
six work days would have had the team on goal. If you can keep short-
term commitments to stay on track, you will be able to keep the long-term
commitment to deliver completed testing on schedule.
You should measure TE for each release as well as for the overall project.
Figure 16.6 shows a graphical view of this TE data.
FIGURE 16.6 Test effectiveness (per-release and cumulative total, roughly 0.000 to 0.070) by code release: Dev1, Dev2, Dev3, Demo1, and Alpha1.
Notice how the cumulative TE reduced with each release and settled
at .042. You can take this measurement one step further by using test com-
pletion and defect detection data for each tester in order to calculate individ-
ual TEs. Figure 16.7 shows a snapshot of tester TEs for the overall project.
You can also calculate each tester’s TE per release.
TESTER          B      C      D      K      Z      TOTAL
TESTS RUN       151    71     79     100    169    570
DEFECTS FOUND   9      7      6      3      9      34
DEFECTS/TEST    0.060  0.099  0.076  0.030  0.053  0.060
Note that for this project, the effectiveness of each tester ranges from
0.030 to 0.099, with an average of 0.060. The effectiveness is perhaps as
much a function of the particular tests each tester was asked to perform as
it is a measure of the skill of each tester. Like the overall TE measurement,
however, this number can be used to predict how many additional defects a
particular tester could find when performing a known number of tests. For
example, if tester C has 40 more tests to perform, expect her to find about
four more defects.
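Computing per-tester TE and the prediction for tester C from the table above takes only a few lines; the sketch below reproduces those numbers.

tests_run     = {"B": 151, "C": 71, "D": 79, "K": 100, "Z": 169}
defects_found = {"B": 9,   "C": 7,  "D": 6,  "K": 3,   "Z": 9}

# Test effectiveness per tester: defects found divided by tests run.
te = {t: defects_found[t] / tests_run[t] for t in tests_run}
print({t: round(v, 3) for t, v in te.items()})   # C -> 0.099, K -> 0.03, ...

# If tester C has 40 more tests to perform, expect about four more defects:
print(round(te["C"] * 40))                        # 4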
In addition to measuring how many defects you detect (quantitative), it is
important to understand the severity of the defects introduced with each release.
Figure 16.9 graphs the trend of the severity data listed in Figure 16.8.
Take a moment to examine the graph. What do you see?
FIGURE 16.9 Defect counts by severity (1, 2, 3, and 4) for each code release: Dev1, Dev2, Dev3, Demo1, and Alpha1.
Notice that the severity 3 defects dominate. They are also the only cat-
egory to significantly increase after Dev1 testing, except for some extra 4s
popping up in the Demo1 release. When you set a goal that does not allow
any severity 2 defects to be in the shipping game, there will be a tendency
to push any borderline severity 2 issues into the severity 3 category. Another
explanation could be that the developers focus their efforts on the 1s and 2s
so they leave the 3s alone early in the project, with the intention of dealing
with them later. This approach is borne out in Figures 16.8 and 16.9,
where the severity 3 defects are brought way down for the Demo1 release
and continue to drop in the Alpha1 release. Once you see “what” is happen-
ing, try to understand “why” it is happening that way.
Figure 16.10 shows what a Star Chart looks like prior to applying the
testers’ stars.
If you’re worried about testers getting into battles over defects and not
finishing their assigned tests quickly enough, you can create a composite
measure of each tester’s contribution to test execution and defects found.
Add the total number of test defects found and calculate a percentage for
each tester, based on how many they found divided by the project total. Then
do the same for tests run. You can add these two numbers for each tester.
Whoever has the highest total is the “Best Tester” for the project. This might
or might not turn out to be the same person who becomes the Testing Star.
Here’s how this works for testers B, C, D, K, and Z for the Dev1 release:
Tester B executed 151 of the team’s 570 Dev1 tests. This comes out to
26.5%. B has also found 9 of the 34 Dev1 defects, which is also 26.5%.
B’s composite rating is 53.
Tester C ran 71 of the 570 tests, which is 12.5%. C found 7 out of the 34
total defects in Dev1, which is 20.5%. C’s rating is 33.
Tester D ran 79 tests, which is approximately 14% of the total. D also
found 6 defects, which is about 17.5% of the total. D earns a rating of
31.5.
Tester K ran 100 tests and found 3 defects. These represent 17.5% of the
test total and about 9% of the defect total. K has a 26.5 rating.
Tester Z ran 169 tests, which is about 29.5% of the 570 total. Z found 9
defects, which is 26.5% of that total. Z’s total rating is 56.
Tester Z has earned the title of “Best Tester.”
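The composite-rating walk-through above reduces to a few lines of arithmetic. The sketch below reproduces those numbers.

tests_run     = {"B": 151, "C": 71, "D": 79, "K": 100, "Z": 169}
defects_found = {"B": 9,   "C": 7,  "D": 6,  "K": 3,   "Z": 9}
total_tests   = sum(tests_run.values())       # 570
total_defects = sum(defects_found.values())   # 34

# Composite "Best Tester" rating: percent of team tests run + percent of team defects found.
rating = {t: 100 * tests_run[t] / total_tests + 100 * defects_found[t] / total_defects
          for t in tests_run}
best = max(rating, key=rating.get)
print({t: round(v, 1) for t, v in rating.items()}, best)   # Z wins with about 56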
When you have people on your team who keep winning these awards,
TIP take them to lunch and find out what they are doing so you can win
some too!
Be careful to use this system for good and not for evil. Running more
tests or claiming credit for new defects should not come at the expense of
other people or the good of the overall project. You could add factors to
give more weight to higher-severity defects in order to discourage testers
from spending all their time chasing and reporting low-severity defects that
won’t contribute as much to the game as a few very important high-severity
defects.
Use this system to encourage and exhibit positive test behaviors. Remind
your team (and yourself!) that some time spent automating tests could have
generous payback in terms of test execution. Likewise, spending a little time
up front to effectively design your tests, before you run off to start banging
on the game controller, will probably lead you to more defects. You will
learn more about these strategies and techniques in the remaining chapters
of this book.
This chapter introduced you to a number of metrics you can collect to
track and improve testing results. Each metric from this chapter is listed
below, together with the raw data you need to collect for each, mentioned
in parentheses:
Test Progress Chart (# of tests completed by team each day, # of tests
required each day)
Tests Completed/Days of Effort (# of tests completed, # of days of test effort
for each tester)
Test Participation (# of days of effort for each tester, # of days each tester
assigned to test)
Test Effectiveness (# of defects, # of tests for each release and/or tester)
Defect Severity Profile (# of defects of each severity for each release)
Star Chart (# of defects of each severity for each tester)
Testing Star (# of defects of each severity for each tester, point value of
each severity)
Best Tester (# of tests per tester, # of total tests, # of defects per tester,
# of total defects)
Testers or test leads can use these metrics to aid in planning, predict-
ing, and performing game testing activities. Then you will be testing by the
numbers.
EXERCISES
1. How does the data in Figure 16.3 explain what is happening on the
graph in Figure 16.2?
2. How many testers do you need to add to the project represented in
Figures 16.1 and 16.2 in order to bring the test execution back on
plan in the next 10 working days? The testers will begin work on the
very next day that is plotted on the graph.
3. Tester C has the best TE as shown in Figure 16.7, but did not turn
out to be the “Best Tester.” Explain how this happened.
4. You are tester X working on the project represented in Figure 16.7.
If you have run 130 tests, how many defects did you need to find in
order to become the “Best Tester?”
5. Describe three positive and three negative aspects of measuring the
participation and effectiveness of individual testers. Do not include
any aspects already discussed in this chapter.
A
Quality Assurance and
Testing Tools
IEEE/ANSI standards for the software test process and their purposes:
829-1983, Software Test Documentation: This standard covers the entire testing process.
1008-1987, Software Unit Testing: This standard defines an integrated approach to systematic and documented unit testing.
1012-1986, Software Verification and Validation Plans: This standard provides uniform and minimum requirements for the format and content of software verification and validation plans.
1028-1988, Software Reviews and Audits: This standard provides direction to the reviewer or auditor on the conduct of evaluations.
730-1989, Software Quality Assurance Plans: This standard establishes a required format and a set of minimum contents for software quality assurance plans.
828-1990, Software Configuration Management Plans: This standard is similar to IEEE standard 730, but deals with the more limited subject of software configuration management. It identifies requirements for configuration identification, configuration control, configuration status reporting, and configuration audits and reviews.
1061-1992, Software Quality Metrics Methodology: This standard provides a methodology for establishing quality requirements. It also deals with identifying, implementing, analyzing, and validating the process of software quality metrics.
Representative tools by testing area:
Functional/Regression Testing: WinRunner, SilkTest, Quick Test Pro (QTP), Rational Robot, Visual Test, in-house scripts.
Load/Stress (Performance) Testing: LoadRunner, Astra LoadTest, Application Center Test (ACT), Web Application Stress Tool (WAS), in-house scripts.
Test Case Management: TestDirector, Test Manager, in-house test case management tools.
Defect Tracking: TestTrack Pro, Bugzilla, Elementool, ClearQuest, TrackRecord, clients' in-house defect tracking tools.
Unit/Integration Testing: C++ Test, JUnit, NUnit, PHPUnit, Check, Cantata++.
B
Sample Project Description
[N.B.: Students may be encouraged to prepare descriptions of projects on
these lines and then develop the following deliverables]
1. SRS Document 2. Design Document 3. Codes 4. Test Oracles
Keywords
Generic Technology Keywords: databases, network and middleware,
programming.
Specific Technology Keywords: MS-SQL server, HTML, Active Server Pages.
Project Type Keywords: analysis, design, implementation, testing, user interface.
Requirements:
Hardware requirements:
1. PC with 2 GB hard disk and 256 MB RAM (alternatives: not applicable)
Software requirements:
1. Windows 95/98/XP with MS-Office (alternatives: not applicable)
2. MS-SQL Server (alternative: MS-Access)
Manpower requirements:
2-3 students can complete this in 4-6 months if they work full-time on it.
C
Glossary
Abstract class: A class that cannot be instantiated, i.e., it cannot have any instances.
Abstract test case: See high-level test case.
Acceptance: See acceptance testing.
Acceptance criteria: The exit criteria that a component or system must satisfy in
order to be accepted by a user, customer, or other authorized entity. [IEEE 6.10]
Acceptance testing: It is done by the customer to check whether the product
is ready for use in the real-life environment. Formal testing with respect to user
needs, requirements, and business processes conducted to determine whether or
not a system satisfies the acceptance criteria and to enable the user, customers, or
other authorized entity to determine whether or not to accept the system. [After
IEEE 610]
Accessibility testing: Testing to determine the ease by which users with disabilities
can use a component or system. [Gerrard]
Accuracy: The capability of the software product to provide the right or agreed
results or effects with the needed degree of precision. [ISO 9126] See also
functionality testing.
Activity: A major unit of work to be completed in achieving the objectives of a
hardware/software project.
Actor: An actor is a role played by a person, organization, or any other device which
interacts with the system.
Actual outcome: See actual result.
Actual result: The behavior produced/observed when a component or system is
tested.
Ad hoc review: See informal review.
Ad hoc testing: Testing carried out informally; no formal test preparation takes
place, no recognized test design technique is used, there are no expectations for
results and randomness guides the test execution activity.
Adaptability: The capability of the software product to be adapted for different
specified environments without applying actions or means other than those provided
for this purpose for the software considered. [ISO 9126] See also portability testing.
Agile testing: Testing practice for a project using agile methodologies, such as
extreme programming (XP), treating development as the customer of testing and
emphasizing the test-first design paradigm.
Aggregation: Process of building up of complex objects out of existing objects.
Algorithm test [TMap]: See branch testing.
Alpha testing: Simulated or actual operational testing by potential users/customers
or an independent test team at the developers’ site, but outside the development
organization. Alpha testing is often employed as a form of internal acceptance testing.
Analyst: An individual who is trained and experienced in analyzing existing systems
to prepare SRS (software requirement specifications).
Analyzability: The capability of the software product to be diagnosed for
deficiencies or causes of failures in the software, or for the parts to be modified to
be identified. [ISO 9126] See also maintainability testing.
Analyzer: See static analyzer.
Anomaly: Any condition that deviates from expectation based on requirements
specifications, design documents, user documents, standards, etc. or from someone’s
perception or experience. Anomalies may be found during, but not limited to,
reviewing, testing, analysis, compilation, or use of software products or applicable
documentation. [IEEE 1044] See also defect, deviation, error, fault, failure, incident,
or problem.
Arc testing: See branch testing.
Atomicity: A property of a transaction that ensures it is completed entirely or not
at all.
Attractiveness: The capability of the software product to be attractive to the user.
[ISO 9126] See also usability testing.
Audit: An independent evaluation of software products or processes to ascertain
compliance to standards, guidelines, specifications, and/or procedures based on
objective criteria, including documents that specify: (1) the form or content of the
products to be produced, (2) the process by which the products shall be produced,
and (3) how compliance to standards or guidelines shall be measured. [IEEE 1028]
Audit trail: A path by which the original input to a process (e.g., data) can be
traced back through the process, taking the process output as a starting point. This
facilitates defect analysis and allows a process audit to be carried out. [After TMap]
Automated testware: Testware used in automated testing, such as tool scripts.
Availability: The degree to which a component or system is operational and
accessible when required for use. Often expressed as a percentage. [IEEE 610]
Back-to-back testing: Testing in which two or more variants of a component or
system are executed with the same inputs, the outputs compared, and analyzed in
cases of discrepancies. [IEEE 610]
Data flow analysis: A form of static analysis based on the definitions and usage of
variables.
Data flow coverage: The percentage of definition-use pairs that have been
exercised by a test case suite.
Data flow test: A white-box test design technique in which test cases are designed
to execute definitions and use pairs of variables.
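As a hypothetical illustration, the C fragment below contains two definitions of the variable discount and one use of it; a data-flow test suite would choose inputs so that each definition-use pair is exercised.

#include <stdio.h>

/* Hypothetical example: each definition of 'discount' paired with its use
   in the return statement forms a definition-use (du) pair. */
static double final_price(double amount)
{
    double discount = 0.0;          /* definition 1 */
    if (amount > 1000.0)
        discount = amount * 0.05;   /* definition 2 */
    return amount - discount;       /* use of 'discount' */
}

int main(void)
{
    printf("%.2f\n", final_price(1500.0)); /* exercises the def-2/use pair */
    printf("%.2f\n", final_price(500.0));  /* exercises the def-1/use pair */
    return 0;
}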
Dead code: See unreachable code.
Debugger: See debugging tool.
Debugging: The process of finding, analyzing, and removing the causes of failures
in software.
Debugging tool: A tool used by programmers to reproduce failures, investigate
the state of programs, and find the corresponding defect. Debuggers enable
programmers to execute programs step by step to halt a program at any program
statement and to set and examine program variables.
Decision: A program point at which the control flow has two or more alternative
routes. A node with two or more links to separate branches.
Decision condition coverage: The percentage of all condition outcomes and
decision outcomes that have been exercised by a test suite. 100% decision condition
coverage implies both 100% condition coverage and 100% decision coverage.
Decision condition testing: A white-box test design technique in which test cases
are designed to execute condition outcomes and decision outcomes.
Decision coverage: The percentage of decision outcomes that have been exercised
by a test suite. 100% decision coverage implies both 100% branch coverage and
100% statement coverage.
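A hypothetical example of the difference: in the C function below, a single test with a negative argument yields 100% statement coverage but only 50% decision coverage; adding a non-negative argument covers the remaining decision outcome.

#include <assert.h>

/* Hypothetical unit under test. */
static int abs_value(int x)
{
    if (x < 0)      /* decision with two outcomes: true and false */
        x = -x;
    return x;
}

int main(void)
{
    /* abs_value(-5) alone executes every statement (100% statement
       coverage) but only the 'true' outcome of the decision. */
    assert(abs_value(-5) == 5);

    /* Adding a test for the 'false' outcome achieves 100% decision
       (branch) coverage. */
    assert(abs_value(7) == 7);
    return 0;
}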
Decision outcome: The result of a decision (which therefore determines the
branches to be taken).
Decision table: A table showing combinations of inputs and/or stimuli (causes)
with their associated outputs and/or actions (effects) which can be used to design
test cases. It lists various decision variables, the conditions assumed by each of the
decision variables, and the actions to take in each combination of conditions.
Decision table testing: A black-box test design technique in which test cases are
designed to execute the combinations of inputs and/or stimuli (causes) shown in a
decision table. [Veenendaal]
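A hypothetical sketch of decision table testing: the table of condition combinations and expected actions for a simple loan-approval rule is encoded directly as test data; the rule, names, and values are invented for illustration.

#include <assert.h>
#include <stdbool.h>

/* Hypothetical unit under test: approve a loan only for employed
   applicants with a good credit rating. */
static bool approve_loan(bool employed, bool good_credit)
{
    return employed && good_credit;
}

/* One entry per column of the decision table: every combination of the
   two conditions (causes), with the action (effect) expected for it. */
struct rule { bool employed, good_credit, expected; };

int main(void)
{
    const struct rule table[] = {
        { true,  true,  true  },
        { true,  false, false },
        { false, true,  false },
        { false, false, false },
    };
    for (unsigned i = 0; i < sizeof table / sizeof table[0]; i++)
        assert(approve_loan(table[i].employed, table[i].good_credit)
               == table[i].expected);
    return 0;
}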
Decision testing: A white-box test design technique in which test cases are
designed to execute decision outcomes.
Defect: A flaw in a component or system that can cause the component or system to
fail to perform its required function, e.g., an incorrect statement or data definition.
A defect, if encountered during execution, may cause a failure of the component or
system.
Load test: A test type concerned with measuring the behavior of a component
or system with increasing load, e.g., number of parallel users and/or numbers of
transactions to determine what load can be handled by the component or system.
Locale: An environment where the language, culture, laws, currency, and many
other factors may be different.
Locale testing: Testing that focuses on the conventions for number, punctuation, date and time, and currency formats.
Logic-coverage testing: See white-box testing. [Myers]
Logic-driven testing: See white-box testing.
Logical test case: See high-level test case.
Low-level test case: A test case with concrete (implementation level) values for
input data and expected results.
Maintainability: The ease with which a software product can be modified to correct
defects, modified to meet new requirements, modified to make future maintenance
easier, or adapted to a changed environment. [ISO 9126]
Maintainability testing: The process of testing to determine the maintainability
of a software product.
Maintenance: Modification of a software product after delivery to correct defects,
to improve performance or other attributes, or to adapt the product to a modified
environment. [IEEE 1219]
Maintenance testing: Testing the changes to an operational system or the impact
of a changed environment to an operational system.
Management review: A systematic evaluation of software acquisition, supply,
development, operation, or maintenance process, performed by or on behalf of
management that monitors progress, determines the status of plans and schedules,
confirms requirements and their system allocation, or evaluates the effectiveness
of management approaches to achieve fitness for purpose. [After IEEE 610, IEEE
1028]
Mandelbug: A bug whose underlying causes are so complex and obscure as to make its behavior appear chaotic or even non-deterministic.
Master test plan: See project test plan.
Maturity: (1) The capability of an organization with respect to the effectiveness and
efficiency of its processes and work practices. See also capability maturity model and
test maturity model. (2) The capability of the software product to avoid failure as a
result of defects in the software. [ISO 9126] See also reliability.
Measure: The number or category assigned to an attribute of an entity by making
a measurement. [ISO 14598]
Measurement: The process of assigning a number or category to an entity to
describe an attribute of that entity. [ISO 14598]
Measurement scale: A scale that constrains the type of data analysis that can be
performed on it. [ISO 14598]
Memory leak: A situation in which a program requests memory but does not release
it when it is no longer needed. A defect in a program’s dynamic store allocation
logic that causes it to fail to reclaim memory after it has finished using it, eventually
causing the program to fail due to lack of memory.
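A deliberately defective, hypothetical C fragment that exhibits the definition: the buffer allocated in process_record() is never freed, so memory is lost on every call.

#include <stdlib.h>
#include <string.h>

/* Deliberately defective: the buffer allocated here is never released,
   so every call leaks 'len' bytes.  The fix is to free(copy) once the
   caller is finished with it (or to document transfer of ownership). */
static void process_record(const char *text)
{
    size_t len = strlen(text) + 1;
    char *copy = malloc(len);
    if (copy == NULL)
        return;
    memcpy(copy, text, len);
    /* ... use 'copy' ... */
    /* missing: free(copy); */
}

int main(void)
{
    for (int i = 0; i < 1000; i++)
        process_record("sample record");   /* leaks on every iteration */
    return 0;
}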
Message: A programming language mechanism by which one unit transfers control to another unit.
Messages: They show how objects communicate; each message represents one object making a function call to another.
Metric: A measurement scale and the method used for measurement. [ISO 14598]
Migration testing: See conversion testing.
Milestone: A point in time in a project at which defined (intermediate) deliverables
and results should be ready.
Mistake: See error.
Moderator: The leader and main person responsible for an inspection or other
review process.
Modified condition decision coverage: See condition determination coverage.
Modified condition decision testing: See condition determination coverage
testing.
Modified multiple condition coverage: See condition determination coverage.
Modified multiple condition testing: See condition determination coverage
testing.
Module: Modules are parts, components, units, or areas that comprise a given
project. They are often thought of as units of software code. See also component.
Module testing: See component testing.
Monitor: A software tool or hardware device that runs concurrently with the
component or system under test and supervises, records, and/or analyzes the
behavior of the component or system. [After IEEE 610]
Monkey testing: Testing the product randomly after all planned test cases have been executed.
Multiple condition: See compound condition.
Multiple condition coverage: The percentage of combinations of all single
condition outcomes within one statement that have been exercised by a test suite.
100% multiple condition coverage implies 100% condition determination coverage.
Multiple condition testing: A white-box test design technique in which test cases
are designed to execute combinations of single condition outcomes (within one
statement).
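A hypothetical illustration: for the compound condition below, multiple condition testing requires all four combinations of the two single conditions, whereas decision coverage alone could be reached with only two of them.

#include <assert.h>
#include <stdbool.h>

/* Hypothetical unit under test with a compound condition. */
static bool can_vote(int age, bool registered)
{
    return (age >= 18) && registered;   /* two single conditions */
}

int main(void)
{
    /* 100% multiple condition coverage: all 2 x 2 combinations of
       (age >= 18) and 'registered' are exercised. */
    assert(can_vote(20, true)  == true);   /* T, T */
    assert(can_vote(20, false) == false);  /* T, F */
    assert(can_vote(16, true)  == false);  /* F, T */
    assert(can_vote(16, false) == false);  /* F, F */
    return 0;
}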
Multiplicity: Information placed at each end of an association indicating how many
instances of one class can be related to instances of the other class.
Release (or golden master): The build that will eventually be shipped to the
customer, posted on the Web, or migrated to the live Web site.
Release note: A document identifying test items, their configuration, current
status, and other delivery information delivered by development to testing, and
possibly other stakeholders, at the start of a test execution phase. [After IEEE 829]
Reliability: Probability of failure free operation of software for a specified time
under specified operating conditions. The ability of the software product to perform
its required functions under stated conditions for a specified period of time, or for a
specified number of operations. [ISO 9126]
Reliability testing: The process of testing to determine the reliability of a software
product.
Replaceability: The capability of the software product to be used in place of
another specified software product for the same purpose in the same environment.
[ISO 9126] See also portability.
Requirement: A condition or capability needed by a user to solve a problem or
achieve an objective that must be met or possessed by a system or system component
to satisfy a contract, standard, specification, or other formally imposed document.
[After IEEE 610]
Requirements-based testing: An approach to testing in which test cases are
designed based on test objectives and test conditions derived from requirements,
e.g., tests that exercise specific functions or probe non-functional attributes such as
reliability or usability.
Requirements management tool: A tool that supports the recording of
requirements, requirements attributes (e.g., priority, knowledge responsible)
and annotation, and facilitates traceability through layers of requirements and
requirements change management. Some requirements management tools also
provide facilities for static analysis, such as consistency checking and violations to
pre-defined requirements rules.
Requirements phase: The period of time in the software life cycle during which
the requirements for a software product are defined and documented. [IEEE 610]
Requirements tracing: A technique for ensuring that the product, as well as the testing of the product, addresses each of its requirements.
Resource utilization: The capability of the software product to use appropriate
amounts and types of resources, for example, the amounts of main and secondary
memory used by the program and the sizes of required temporary or overflow files,
when the software performs its function under stated conditions. [After ISO 9126]
See also efficiency.
Resource utilization testing: The process of testing to determine the resource
utilization of a software product.
Safety testing: The process of testing to determine the safety of a software product.
Sanity test: See smoke test.
Scalability: The capability of the software product to be upgraded to accommodate
increased loads. [After Gerrard]
Scalability testing: Testing to determine the scalability of the software product.
Scenario testing: See use-case testing.
Scribe: The person who has to record each defect mentioned and any suggestions
for improvement during a review meeting on a logging form. The scribe has to make
sure that the logging form is readable and understandable.
Scripting language: A programming language in which executable test scripts are
written, used by a test execution tool (e.g., a capture/replay tool).
Security: Attributes of software products that bear on its ability to prevent
unauthorized access, whether accidental or deliberate, to programs and data.
[ISO 9126]
Security testing: Testing to determine the security of the software product.
Serviceability testing: See maintainability testing.
Severity: The degree of impact that a defect has on the development or operation
of a component or system. [After IEEE 610]
Shelfware: Software that is not used.
Simulation: A technique that uses an executable model to examine the behavior
of the software. The representation of selected behavioral characteristics of one
physical or abstract system by another system. [ISO 2382/1]
Simulator: A device, computer program, or system used during testing, which
behaves or operates like a given system when provided with a set of controlled
inputs. [After IEEE 610, DO178b] See also emulator.
Sink node: A statement fragment at which program execution terminates.
Slicing: A program decomposition technique that traces an output variable back through the code to identify all statements relevant to that computation in the program.
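A small hypothetical program marked up to show a slice: the comments indicate which statements belong to the slice computed for the output variable total at the final print statement.

#include <stdio.h>

int main(void)
{
    int prices[] = { 10, 20, 30 };
    int total = 0;                       /* in the slice for 'total'      */
    int count = 0;                       /* NOT in the slice for 'total'  */

    for (int i = 0; i < 3; i++) {        /* in the slice (controls the sum) */
        total += prices[i];              /* in the slice                  */
        count++;                         /* NOT in the slice for 'total'  */
    }

    printf("total = %d\n", total);       /* slicing criterion: 'total' here */
    printf("count = %d\n", count);
    return 0;
}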
Smoke test: A condensed version of a regression test suite. A subset of all defined/planned test cases that cover the main functionality of a component or system, used to ascertain that the most crucial functions of a program work, without bothering with finer details. A daily build and smoke test are among industry best practices. See also intake test.
Software feature: See feature.
Software quality: The totality of functionality and features of a software product
that bear on its ability to satisfy stated or implied needs. [After ISO 9126]
Software quality characteristic: See quality attribute.
Software runaways: Large projects that fail because systematic techniques and tools were not used.
Software test incident: See incident.
Software test incident report: See incident report.
Software Usability Measurement Inventory (SUMI): A questionnaire
based usability test technique to evaluate the usability, e.g., user-satisfaction, of a
component or system. [Veenendaal]
Source node: A source node in a program is a statement fragment at which program
execution begins or resumes.
Source statement: See statement.
Specialization: The process of taking subsets of a higher-level entity set to form
lower-level entity sets.
Specification: A document that specifies, ideally in a complete, precise, and
verifiable manner, the requirements, design, behavior, or other characteristics of
a component or system, and, often, the procedures for determining whether these
provisions have been satisfied. [After IEEE 610]
Specification-based test design technique: See black-box test design technique.
Specification-based testing: See black-box testing.
Specified input: An input for which the specification predicts a result.
Stability: The capability of the software product to avoid unexpected effects from
modifications in the software. [ISO 9126] See also maintainability.
Standard software: See off-the-shelf software.
Standards testing: See compliance testing.
State diagram: A diagram that depicts the states that a component or system can
assume, and shows the events or circumstances that cause and/or result from a
change from one state to another. [IEEE 610]
State table: A grid showing the resulting transitions for each state combined with
each possible event, showing both valid and invalid transitions.
State transition: A transition between two states of a component or system.
State transition testing: A black-box test design technique in which test cases are
designed to execute valid and invalid state transitions. See also N-switch testing.
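A hypothetical two-state example: the door model below is exercised with both valid transitions and an invalid one (closing an already closed door), which is exactly what state transition testing calls for; the model and event names are invented.

#include <assert.h>

/* Hypothetical state machine: a door that is either CLOSED or OPEN. */
enum state { CLOSED, OPEN };

/* Returns the new state; an invalid event leaves the state unchanged. */
static enum state on_event(enum state s, const char *event)
{
    if (s == CLOSED && event[0] == 'o')   /* "open"  */
        return OPEN;
    if (s == OPEN && event[0] == 'c')     /* "close" */
        return CLOSED;
    return s;                             /* invalid transition: no change */
}

int main(void)
{
    /* Valid transitions. */
    assert(on_event(CLOSED, "open") == OPEN);
    assert(on_event(OPEN, "close") == CLOSED);

    /* Invalid transition: closing an already closed door must be rejected
       (modeled here as "state unchanged"). */
    assert(on_event(CLOSED, "close") == CLOSED);
    return 0;
}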
Statement: An entity in a programming language, which is typically the smallest
indivisible unit of execution.
Statement coverage: The percentage of executable statements that have been
exercised by a test suite.
Statement testing: A white-box test design technique in which test cases are
designed to execute statements.
Syntax testing: A black-box test design technique in which test cases are designed
based upon the definition of the input domain and/or output domain.
System: A collection of components organized to accomplish a specific function or
set of functions. [IEEE 610]
System integration testing: Testing the integration of systems and packages;
testing interfaces to external organizations (e.g., electronic data interchange,
Internet).
System testing: The process of testing an integrated system to verify that it meets
specified requirements. [Hetzel]
Technical review: A peer group discussion activity that focuses on achieving
consensus on the technical approach to be taken. A technical review is also known
as a peer review. [Gilb and Graham, IEEE 1028]
Technology transfer: The awareness, convincing, selling, motivating, collaboration,
and special effort required to encourage industry, organizations, and projects to
make good use of new technology products.
Test: A test is the act of exercising software with test cases. A set of one or more test
cases. [IEEE 829]
Test approach: The implementation of the test strategy for a specific project. It
typically includes the decisions made that follow based on the (test) project’s goal
and the risk assessment carried out, starting points regarding the test process, the
test design techniques to be applied, exit criteria, and test types to be performed.
Test automation: The use of software to perform or support test activities, e.g., test
management, test design, test execution, and results checking.
Test basis: All documents from which the requirements of a component or system
can be inferred. The documentation on which the test cases are based. If a document
can be amended only by way of formal amendment procedure, then the test basis is
called a frozen test basis. [After TMap]
Test bed: An environment containing the hardware, instrumentation, simulators,
software tools, and other support elements needed to conduct a test. See also test
environment.
Test case: A test that, ideally, executes a single well-defined test objective, i.e.,
a specific behavior of a feature under a specific condition. A set of input values,
execution preconditions, expected results and execution postconditions, developed
for a particular objective or test condition, such as to exercise a particular program
path or to verify compliance with a specific requirement. [After IEEE 610]
Test case design technique: See test design technique.
Test case specification: A document specifying a set of test cases (objective,
inputs, test actions, expected results, and execution preconditions) for a test item.
[After IEEE 829]
Test maturity model (TMM): A five level staged framework for test process
improvement, related to the capability maturity model (CMM) that describes the
key elements of an effective test process.
Test object: The component or system to be tested. See also test item.
Test objective: A reason or purpose for designing and executing a test.
Test oracle: A mechanism, different from the program itself, that can be used to check the correctness of the program's output for the test cases. In practice, the test cases are given both to the test oracle and to the program under test, and the two outputs are compared to determine whether the program behaved correctly for those test cases. A source to determine expected results to compare with the actual result of the software under test. An oracle may be the existing system (for a benchmark), a user manual, or an individual's specialized knowledge, but should not be the code. [After Adrion]
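A minimal sketch of the idea, with invented names: a deliberately simple reference computation acts as the oracle, the same inputs are fed to it and to the function under test, and the two outputs are compared.

#include <assert.h>

/* Program under test (hypothetical): fast power-of-two check. */
static int is_power_of_two(unsigned n)
{
    return n != 0 && (n & (n - 1)) == 0;
}

/* Oracle (hypothetical): a slower but obviously correct reference
   computation, deliberately independent of the code under test. */
static int oracle_is_power_of_two(unsigned n)
{
    for (unsigned p = 1; p != 0; p <<= 1)
        if (p == n)
            return 1;
    return 0;
}

int main(void)
{
    /* The same test cases are given to both; their outputs are compared. */
    for (unsigned n = 0; n < 1000; n++)
        assert(is_power_of_two(n) == oracle_is_power_of_two(n));
    return 0;
}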
Test outcome: See result.
Test pass: See pass.
Test performance indicator: A metric, in general high level, indicating to what
extent a certain target value or criterion is met. Often related to test process
improvement objectives, e.g., defect detection percentage (DDP).
Test phase: A distinct set of test activities collected into a manageable phase of a
project, e.g., the execution activities of a test level. [After Gerrard]
Test plan: A management document outlining risks, priorities, and schedules for
testing. A document describing the scope, approach, resources, and schedule of
intended test activities. It identifies amongst others test items, the features to be
tested, the testing tasks, who will do each task, degree of tester independence, the
test environment, the test design techniques and test measurement techniques to
be used, and the rationale for their choice, and any risks requiring contingency
planning. It is a record of the test planning process. [After IEEE 829]
Test planning: The activity of establishing or updating a test plan.
Test point analysis (TPA): A formula-based test estimation method based on
function point analysis. [TMap]
Test points: They allow data to be modified or inspected at various points in the
system.
Test policy: A high-level document describing the principles, approach, and major
objectives of the organization regarding testing.
Test procedure: See test procedure specification.
Test procedure specification: A document specifying a sequence of actions for the
execution of a test. Also known as test script or manual test script. [After IEEE 829]
Test process: The fundamental test process comprises planning, specification,
execution, recording, and checking for completion. [BS 7925/2]
Test tool: A software product that supports one or more test activities, such as
planning and control, specification, building initial files and data, test execution, and
test analysis. [TMap] See also CAST.
Test type: A group of test activities aimed at testing a component or system
regarding one or more interrelated quality attributes. A test type is focused on a
specific test objective, i.e., reliability test, usability test, regression test, etc., and may
take place on one or more test levels or test phases. [After TMap]
Testable requirements: The degree to which a requirement is stated in terms that
permit establishment of test designs (and subsequently test cases) and execution
of tests to determine whether the requirements have been met. [After IEEE 610]
Testability: The capability of the software product to enable modified software to
be tested. [ISO 9126] See also maintainability.
Testability hooks: Code that is inserted into the program specifically to facilitate testing.
Testability review: A detailed check of the test basis to determine whether the test
basis is at an adequate quality level to act as an input document for the test process.
[After TMap]
Tester: A technically skilled professional who is involved in the testing of a
component or system.
Testing: The process of executing the program with the intent of finding faults.
The process consisting of all life cycle activities, both static and dynamic, concerned
with planning, preparation, and evaluation of software products and related work
products to determine that they satisfy specified requirements, to demonstrate that
they are fit for purpose, and to detect defects.
Testing interface: A set of public properties and methods that you can use to
control a component from an external testing program.
Testware: Artifacts produced during the test process required to plan, design, and
execute tests, such as documentation, scripts, inputs, expected results, set-up and
clear-up procedures, files, databases, environment, and any additional software or
utilities used in testing. [After Fewster and Graham]
Thread testing: A version of component integration testing where the progressive
integration of components follows the implementation of subsets of the requirements,
as opposed to the integration of components by levels of a hierarchy.
Time behavior: See performance.
Top-down testing: An incremental approach to integration testing where the
component at the top of the component hierarchy is tested first, with lower level
components being simulated by stubs. Tested components are then used to test
lower level components. The process is repeated until the lowest level components
have been tested.
D
Bibliography
My special thanks go to the great researchers without whose work this book would not have been possible:
1. Jorgensen Paul, “Software Testing—A Practical Approach”, CRC Press,
2nd Edition 2007.
2. Srinivasan Desikan and Gopalaswamy Ramesh, “Software testing—
Principles and Practices”, Pearson Education Asia, 2002.
3. Tamres Louise, “Introduction to Software Testing”, Pearson Education
Asia, 2002.
4. Mustafa K., Khan R.A., “Software Testing—Concepts and Practices”,
Narosa Publishing, 2007.
5. Puranik Rajnikant, “The Art of Creative Destruction”, Shroff Publishers,
First Reprint, 2005.
6. Agarwal K.K., Singh Yogesh, “Software Engineering”, New Age
Publishers, 2nd Edition, 2007.
7. Khurana Rohit, “Software Engineering—Principles and Practices”,
Vikas Publishing House, 1998.
8. Agarwal Vineet, Gupta Prabhakar, “Software Engineering”, Pragati
Prakashan, Meerut.
9. Sabharwal Sangeeta, “Software Engineering—Principles, Tools and
Techniques”, New Age Publishers, 1st Edition, 2002.
10. Mathew Sajan, “Software Engineering”, S. Chand and Company Ltd.,
2000.
11. Kaner, “Lessons Learned in Software Testing”, Wiley, 1999.
12. Rajani Renu, Oak Pradeep, “Software Testing”, Tata McGraw Hill, First
Edition, 2004.
13. Nguyen Hung Q., “Testing Applications on Web”, John Wiley, 2001.
14. “Testing Object-Oriented Systems—A Workshop Workbook”, by Quality
Assurance Institute (India) Ltd., 1994-95.
I
IEEE 829 standard, 50
IEEE Std. 1012, 33
Implementation-based class testing, 483
    limitation of, 494
Incident, 12
Incremental testing, 10
Independent V&V contractor, 47
Installation testing, 419
Integration complexity, 155
Integration testing, 229, 243, 418, 517
    classification of, 230
    call graph based, 233
    decomposition-based, 230
    path-based, 235
Interoperability testing, 258
Intra class coverage, 493
ISO 9001 and CMM, 322
ISO standards, 317

K
Kiviat chart, 117

L
Lack of cohesion between methods, 158
Life cycle of a build, 563
Load testing, 418

M
Managerial independence, 48
Manual testing, 375
Metrics, 154
Mistake, 11
Modern testing tools, 387
Mothora, 20
Mutation testing, 198
    advantages of, 199
    disadvantages of, 199

N
National quality awards, 343
Neighborhood integration, 234
Next date function, 83
    complexities in, 83
    test cases for, 83, 91, 103
Non-functional testing, 240
    techniques, 250
Normal testing, 356
No silver bullet, 379

O
Object oriented testing, 451, 540
    levels of, 513
Open box testing, 145

P
Pairwise integration, 233
Path coverage, 150
Path coverage testing, 491
Pathological complexity, 155
Payroll problem, 112
Performance benchmarking, 279
Performance testing, 265
    challenges of, 272, 282
    factors, 262
    steps of, 271
    tools for, 282
Performance tuning, 278
Planning process overview, 295
Prioritization of test cases, 351
    guidelines, 351
    for regression testing, 360
Priority category scheme, 352
Proof of correctness, 37
Prototyping, 38

Q
QA analyst, 68