SQE Notes

Uploaded by Taha Khan

Software Quality Engineering

Software:
 Refers to a collection of data or instructions
that guide a computer's operations.
 It's a compilation of data, programs, or
instructions to execute specific computer tasks.
Quality:
 Represents the level of excellence or the
general standard of something.
Engineering:
 A field in science and technology, focusing on
the design, construction, and utilization of
engines, machines, and structures.
Assurance:
 Indicates a level of certainty or confidence in
something or one's capabilities.
 Provides a guarantee that a product or service
will function as expected.
Quality Assurance (QA):
 A systematic approach to ensuring that
products adhere to predefined requirements
and standards.
 Emphasizes improving the processes to ensure
top-notch products for the customer.
 QA is a proactive process, concentrating on
preventing defects by focusing on the process
of product creation. It's a tool for management.
 All team members play a role in ensuring
quality.
 Example of QA activity: Verification.
QA's Role:
 QA aims to prevent quality issues via
systematic and documented activities.
 It involves setting up a robust quality
management system and regularly evaluating
its effectiveness.
Quality Assurance Process (PDCA or Deming
cycle):
 Plan: Set up the process objectives and
determine the necessary processes for a high-
quality outcome.
 Do: Implement the processes and test them,
ideally on a small scale.
 Check: Monitor the processes and evaluate
whether results align with the goals.
 Act: Execute actions necessary for process
improvement.
The cyclical PDCA steps are consistently revisited
to ensure regular assessment and enhancement of
organizational processes. Using QA guarantees that
products are made with the right procedures,
leading to fewer issues in the end product.

Quality Control (QC):


 A method within software engineering focused
on assuring the quality of products or services.
 Instead of the creation processes, QC zeroes in
on the final products' quality.
 The primary goal is to ascertain that the
product aligns with customer specifications and
requirements.
 If any discrepancies are found, they must
be resolved before the customer receives
the product.
 QC also evaluates and enhances the quality skill
sets of individuals, offering training and
certifications.
 This evaluation, crucial for service-based
businesses, ensures impeccable service
delivery to clients.
QC's Characteristics:
 QC comprises activities that ensure the quality
of products, emphasizing the detection of
defects in the actual items. As a result, it's
product-centric.
 A designated team, usually responsible for
quality control, tests products for potential
flaws.
 The approach of QC is reactive, aiming to
detect (and rectify) issues in the completed
product.
 The ultimate aim of QC is to find flaws post-
production but pre-release.
 A practical example of QC activity is
Validation.

Function and Execution of Quality Control (QC):


 Purpose:
 Engages in activities and methodologies to
maintain and elevate the quality of a
product, procedure, or service.
 Execution:
 It works by pinpointing and eradicating the
root causes of quality issues, utilizing
specific tools and machinery, ensuring
consistent alignment with customer needs.
 When QC employs statistical tools on
finalized products, it's termed Statistical
Quality Control (SQC).
 Although statistical tools can be utilized in
both QA and QC, their application on
processes (like process inputs & operational
metrics) is dubbed Statistical Process
Control (SPC) and is predominantly part of
QA.

Quality Control (QC) vs. Quality Assurance (QA)


 Aspect of Focus:
 QC: Product
 QA: Process
 Nature:
 QC: Reactive
 QA: Proactive
 Operational Position:
 QC: Line Function
 QA: Staff Function
 Purpose:
 QC: Find defects
 QA: Prevent defects
Activities Associated with QC and QA:
Quality Control Activities:
1.Walkthrough
2.Testing
3.Inspection
4.Checkpoint review
Quality Assurance Activities:
1.Quality Audit
2.Defining Process
3.Tool Identification and Selection
4.Training on Quality Standards and Processes

Understanding Software Quality


 Quality is a multifaceted notion, often being
interpreted differently by different individuals
– it's subjective.
 According to the IEEE, software quality is the
extent to which software embodies a desired
set of attributes like reliability or
interoperability.
 Manns and Coleman assert that software
designed for adaptability is of superior quality
and is likely to require fewer changes.
 Crosby defines quality as meeting
requirements.
 Ould sees software quality as its suitability for
its intended purpose.
 ISO-8402 describes quality as features of a
product that fulfill the stated or implicit needs
at a cost-effective rate.
 Garvin categorizes quality into five distinct
perspectives: transcendent, product-based,
user-based, manufacturing-based, and value-
based.

Diving Deeper into the Five Views of Quality:


1. Transcendental View:
 This perspective is universal and isn't exclusive
to software but applies to intricate domains of
daily life.
2. User View:
 Quality is perceived as its utility. The central
question here is whether the product meets the
user's needs and anticipations.
3. Manufacturing View:
 Here, quality is about adhering to
specifications. The product's quality is gauged
by how well it aligns with its specifications.
4. Product View:
 This perspective relates quality to the inherent
attributes of the product. Internal qualities of a
product dictate its external qualities.
5. Value-Based View:
 Quality, from this angle, depends on the price
a customer is prepared to pay.

Software Quality Analogized with Tomatoes:


 Attributes Perspective: Measuring software
attributes like Mean Time Between Failures
(MTBF) is akin to evaluating the size and shape
of tomatoes for packaging and taste.
 User Perspective: Just as one would assess if
tomatoes fit a recipe, software is evaluated by
its ability to fulfill user tasks.
 Process Adherence: Using an established
software process and releasing software only
when certain criteria are met is similar to
ensuring tomatoes are organically grown and
have no defects.
 Value for Money: Like ensuring tomatoes offer
value for their cost and have a long shelf-life,
software testing is often time-boxed to stay
within budget.
 Transcendent Perspective: The intrinsic
satisfaction and trust one might feel towards a
software or its developers can be compared to
the preference of getting tomatoes from a
trusted local farm.

Software Quality Assurance


 Software Quality Assurance (SQA)
encompasses:
 Defining and implementing processes.
 Conducting audits.
 Providing training.
 Processes within SQA can include:
 Software Development Methodology.
 Project Management.
 Configuration Management.
 Development and Management of
Requirements.
 Estimation procedures.
 Software Design.
 Testing, among others.
 Upon defining and implementing these
processes, SQA's primary duties are:
 Pinpointing the flaws in the processes.
 Amending these flaws to consistently
enhance the processes.

 Software Testing (Quality Control):


 Focuses on testing a product to find issues
before it's launched.
 Activities predominantly concern product
verification. For instance, Review Testing.
 It's centered around the product.
 Acts as a corrective method.
 It’s a reactive approach.
 Its domain is restricted to the specific
product under examination.

Guidelines for Effective Software Testing


 Construct a solid testing environment.
 Determine release criteria with caution.
 Deploy automated testing in areas with
heightened risks for cost-effectiveness and
efficiency.
 Ensure adequate time allocation for every
procedure.
 Prioritize rectifications of bugs based on the
frequency of software use.
 Establish specialized teams for security and
performance testing.
 Emulate customer accounts in a manner that
mirrors a real-world, production environment.

Software Quality Assurance: Key Functions


1.Technology Transfer
 This entails obtaining the product design
documentation alongside experimental
data and evaluations. These documents are
circulated, reviewed, and approved.
2.Validation
 A master validation plan for the entire
system is crafted. The criteria for
validating both product and process are
established, and resources for executing
this plan are allocated.
3.Documentation
 This function oversees the distribution and
archiving of documents. Any modification
in a document is processed through the
appropriate change control procedure, and
all types of documents are approved.
4.Product Quality Assurance
 Ensuring the quality of the products being
produced.
5.Quality Improvement Plans
 Formulating and executing plans to
enhance quality.

Total Quality Management (TQM)


 A commitment to continuous performance
improvement across all areas.
 Prioritizes customer satisfaction.
 Merges management and statistical methods.
 Aims for a zero-defect product.
 Necessitates dedication, leadership, and
training.
 Quality management through statistical analysis
was introduced by TQM.
 Dr. W. Edwards Deming introduced these
practices in Japan post-World War II.
TQM Implementations/Certifications
 Major software implementations include:
 Six Sigma
 Capability Maturity Model Integration
(CMMI)
 ISO 9001:2000
 Test Maturity Model (TMM)

Six Sigma
 A quality management approach pioneered by
Motorola and later adopted by companies like
General Electric (GE) and AlliedSignal.
 Aims for a staggering 99.99966% accuracy,
allowing only 3.4 defects per million products.
 Relies heavily on statistical evaluation and
enhancement.
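The 3.4-defects-per-million figure can be checked with a few lines of Python. This is a sketch; the function names are illustrative, not part of any standard Six Sigma toolkit.

```python
# Six Sigma arithmetic: 3.4 defects per million opportunities (DPMO)
# corresponds to the 99.99966% accuracy figure quoted above.

def yield_from_dpmo(dpmo_value):
    """Fraction of defect-free output for a given DPMO."""
    return 1.0 - dpmo_value / 1_000_000

def dpmo(defects, units, opportunities_per_unit=1):
    """Defects per million opportunities for an observed sample."""
    return defects / (units * opportunities_per_unit) * 1_000_000
```

For example, 17 defects in 5,000 single-opportunity units comes to 3,400 DPMO, far short of the Six Sigma target of 3.4.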

CMMI
 Renowned as the go-to model for tech
enterprises.
 Conceived by the Software Engineering
Institute (SEI) at Carnegie Mellon University
with funding from the Department of Defense.
 CMMI was introduced after the various CMM
models.
 SEI ceased offering CMM after 2005.
CMMI Representations
 Continuous
 Staged
CMMI: Maturity Levels Overview
 Level 1 - Initial: Here, the quality environment
is chaotic with an absence of documented or
followed processes.
 Level 2 - Repeatable: Certain processes are
established and can be replicated. Ensures
adherence to processes at a project scope.
 Level 3 - Defined: Processes are standardized
and documented organization-wide. These
processes are open to potential enhancements.
 Level 4 - Managed: Emphasis is placed on
process metrics, ensuring effective process
adherence.
 Level 5 - Optimizing: Focus shifts to perpetual
process improvements, driven by learning and
innovation.

Understanding Process Maturity


Levels of Maturity:
1.Initial: At this stage, testing is erratic and
unstructured, often seen as a part of the
debugging process.
2.Managed:
 Distinct from debugging, testing is now a
managed process.
 Components include: crafting a test policy
and strategy, planning for testing,
monitoring and controlling tests, test
design and execution, and establishing a
test environment.
3.Defined:
 Testing is not merely an afterthought
following coding but is meticulously
defined and integrated into the
development lifecycle.
 Features include the establishment of a
test organization, a test training program,
integration into the test life cycle, non-
functional testing, and peer reviews.
4.Measured:
 The organization possesses a broad test
measurement program, enabling quality
assessment, productivity gauging, and
monitoring improvements.
 Aspects include test measurement, product
quality assessment, and advanced peer
reviews.
5.Optimization:
 The organization can consistently refine its
processes based on a data-driven grasp of
statistically controlled operations.
 Elements of this level include defect
prevention, quality control, and the
optimization of the test process.
Software Quality Indicators:
 Good Software is characterized by
performance, security, modifiability,
reliability, and usability.
Software Design Principles:
 Coupling: Measures the degree of dependencies
between entities.
 Cohesion: Assesses how closely related the
components of an entity are.
 Liskov’s Substitution Principle: If S is a
subtype of T, then objects of type T may be
replaced by objects of type S without
compromising the desired program properties,
such as its correctness.
 SOLID Principles:
 S: Single responsibility principle.
 O: Open/closed principle - software
entities should be open for extension but
closed for modification.
 L: Liskov Substitution principle.
 I: Interface segregation principle.
 D: Dependency inversion principle.
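A minimal Python sketch of the substitution principle (the Shape, Rectangle, and Circle names are illustrative, not from the notes): client code written against the base type T keeps working when handed any subtype S.

```python
class Shape:  # the base type "T"
    def area(self):
        raise NotImplementedError

class Rectangle(Shape):  # a subtype "S" that honors the Shape contract
    def __init__(self, w, h):
        self.w, self.h = w, h
    def area(self):
        return self.w * self.h

class Circle(Shape):  # another substitutable subtype
    def __init__(self, r):
        self.r = r
    def area(self):
        return 3.14159 * self.r ** 2

def total_area(shapes):
    # Written against Shape only; it works unchanged for any subtype,
    # which is exactly what Liskov substitution requires.
    return sum(s.area() for s in shapes)
```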
Law of Demeter or The Principle of Least
Knowledge:
 In a formal setting, a method "m" of an object
"O" can only call the methods of:
 The object "O" itself.
 The parameters of "m".
 Any objects created within "m".
 Direct components of "O".
 A global variable accessible by "O" within
the scope of "m".
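The rules above can be sketched in Python with hypothetical Wallet and Customer classes: checkout only calls a method of its own parameter, and Customer.pay only talks to a direct component of its object.

```python
class Wallet:
    def __init__(self, balance):
        self.balance = balance
    def deduct(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

class Customer:
    def __init__(self, wallet):
        self.wallet = wallet  # a direct component of "O"
    def pay(self, amount):
        # Allowed: "pay" calls a method of a direct component of its
        # own object.
        self.wallet.deduct(amount)

def checkout(customer, price):
    # Allowed: calling a method of a parameter of "checkout".
    # A violation would be customer.wallet.deduct(price) -- reaching
    # through customer into its internals ("talking to strangers").
    customer.pay(price)

w = Wallet(100.0)
checkout(Customer(w), 30.0)  # w.balance is now 70.0
```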

Design Quality Attributes


Category: Design Qualities
1.Conceptual Integrity
 Description: Conceptual integrity refers to
the uniformity and unity of the design. This
encompasses how modules or components
are structured, and also extends to aspects
such as coding conventions and variable
naming conventions. A design with good
conceptual integrity feels unified and
coherent, rather than a collection of
disparate parts.
2.Maintainability
 Description: Maintainability describes how
easily a system can be modified. Whether
it's adding new features, adjusting existing
ones, rectifying errors, or meeting evolving
business needs, a maintainable system
allows for changes to be made more
efficiently and with fewer errors.
3.Reusability
 Description: Reusability pertains to the
ability of components or subsystems to be
used in different contexts beyond their
original application. A design that
prioritizes reusability avoids unnecessary
replication of components and expedites
development by leveraging existing assets.
Runtime Quality Attributes
Category: Run-time Qualities
1.Availability
 Description: Availability denotes the
system's operational uptime. It's typically
quantified as a percentage representing the
system's operational time against total time
over a designated period. Factors affecting
availability include system glitches,
infrastructure issues, targeted attacks, and
heavy system loads.
2.Interoperability
 Description: Interoperability refers to a
system's (or multiple systems') capacity to
seamlessly exchange and utilize
information with other external systems
developed and operated by third parties.
Such a system simplifies the internal and
external sharing and reuse of data.
3.Manageability
 Description: Manageability pertains to the
ease with which system administrators can
oversee the application. This often hinges
on the depth and utility of tools and
indicators available for system monitoring,
debugging, and performance optimization.
4.Performance
 Description: Performance measures a
system's promptness in executing actions
within a set timeframe. It can be gauged in
terms of latency (response time to an
event) or throughput (number of processed
events in a specific time span).
5.Reliability
 Description: Reliability showcases a
system's consistency in remaining
functional. It's often quantified as the
likelihood of a system consistently
performing its tasks without failure over a
predetermined period.
6.Scalability
 Description: Scalability is a system's
competence to accommodate load
increases without compromising
performance or to be readily expanded as
needed.
7.Security
 Description: Security embodies a system's
prowess in thwarting unauthorized or
malevolent activities outside its intended
usage and safeguarding data against
unauthorized access or alterations. The
objective of a secure system is to guard
assets and ensure data integrity.

System and User Quality Attributes


Category: System Supportability Qualities
1.Supportability
 Description: Supportability refers to a
system's capacity to offer useful insights
that assist in identifying and rectifying
issues when it malfunctions or doesn't
operate as expected.
2.Testability
 Description: Testability gauges the ease
with which specific test criteria can be
established for the system and its
individual components. It also assesses how
effortlessly these tests can be executed to
ascertain if the outlined criteria are
satisfied. Enhanced testability ensures that
system defects can be efficiently and
promptly pinpointed and addressed.
Category: User Qualities
1.Usability
 Description: Usability evaluates the extent
to which an application aligns with the
needs and expectations of its users. Key
aspects of usability include intuitive design,
ease of localization and globalization,
accessibility features for differently-abled
users, and an overall positive user
experience.
Software Quality Assurance Objectives:
 The primary aim is to enhance software
quality.
 Emphasize both preventive and corrective
measures to eradicate defects.
 Ensure the product aligns with customer
requirements and established standards.
 Assess documents crafted by the development
team.
 Monitor adherence to standards.
 Develop a Software Quality Assurance Plan,
which includes both test plans and test cases.
 Execute test cases.
 Administer a bug repository.
 Engage in code and design reviews.
SQA Activities Within a Project:
SQA tasks are intertwined with typical Software
Development Life Cycle (SDLC) phases and the
corresponding deliverables:
1.Requirements Collection Phase:
 Deliverable: Requirement Specifications
 SQA Task: Reviews
2.Analysis Phase:
 Deliverable: Functional Specifications
 SQA Task: Reviews
3.Architecture & Design Phase:
 Deliverable: Design Specifications
 SQA Task: Reviews
4.Development Phase:
 Deliverable: Code and Executable Files
5.Testing Phase:
 SQA Task: Implementation of Test Cases
6.Deployment Phase:
 Deliverable: Deployment Documents
 SQA Task: Review
Software Reviews:
 A review serves multiple purposes:
 Pinpoint areas of a product needing
improvement.
 Validate enhanced segments of a product.
 Produce technical work that's more
consistent, predictable, and manageable.
 Essentially, software reviews act as a filtering
mechanism in the software engineering
process.
 The primary goal is to detect errors during
analysis, design, coding, and testing phases.
 The necessity for software reviews stems from
the inherent human propensity to make
mistakes and the importance of identifying
these errors in engineering tasks.

Checklist for Reviews:
When conducting a review, ensure to inspect for:
 Completeness: Is everything covered?
 Consistency: Are there any contradictions?
 Ambiguities: Are there unclear or vague points?
 Thoroughness: Is every aspect scrutinized in
detail?
 Conformance to template: Does it adhere to
the preset structure or format?
 Suitability of Architecture and Design: Is the
structure and design appropriate for the
project's goals and requirements?

Formal Technical Reviews (FTRs) and Software Inspections:
Software Inspection:
 A peer review process where trained individuals
inspect any work product to identify defects.
 Uses a well-defined process to ensure
thoroughness.
Formal Technical Reviews (FTRs):
 Aimed at identifying and rectifying defects in
documents or software.
 Performed during various stages of software
development.
Types of Reviews:
1.Reviews of Context: Focuses on understanding
the need, prospects, and risks of future work.
2.Reviews of Content: Evaluates an artifact's
quality and its suitability for the intended
purpose.
Purpose of FTRs:
 Detect and eliminate potentially costly
requirements or design defects early in the
software development lifecycle.
 Indicate potential defect densities expected
during testing when performed on code,
helping to inform testing strategies.
 Ensure software meets its requirements and
adheres to predefined standards.
 Achieve uniform software development and
make projects manageable.
Applications of FTRs:
 Requirements specification
 System design
 Preliminary design
 Detailed/Critical design
 Program/code
 User documentation
 Other defined development products
Constraints of FTR Meeting:
 Involves 3-5 people.
 Advanced preparation shouldn't exceed 2 hours
per person.
 The review meeting duration should be under 2
hours.
 The meeting should target a specific part of the
software product.
Participants in an FTR Meeting:
 Producer
 Review leader
 2 or 3 reviewers (including a recorder)
Review Meeting Protocol:
 The review leader sets the meeting agenda and
schedule.
 The producer distributes the material for
review.
 Reviewers prepare in advance.
Outcome of the Review Meeting:
 A list of review issues.
 A concise review summary report, often
referred to as meeting minutes.
Meeting Decisions:
1.Approve the work product as is.
2.Reject the work product due to identified
errors.
3.Conditionally approve the work product,
subject to specific changes and a follow-up
review.
4.Produce a sign-off sheet confirming the
decision.

The structured approach of FTRs ensures that software products are developed to the highest quality standards, thereby ensuring customer satisfaction and reducing future maintenance costs.
Guidelines for Conducting Formal Technical
Reviews
1. Focus on the Product, Not the Producer:
 Maintain a constructive tone during the review,
aimed at identifying errors rather than
belittling individuals.
 Reviews are about improving the product, not
criticizing the person who produced it.
2. Set and Stick to an Agenda:
 Prevent drift by adhering to the meeting
agenda and schedule.
 Keep the review on track and ensure it stays
within the allocated time.
3. Minimize Debate and Rebuttal:
 If there's disagreement over the impact of an
issue raised, record it for offline discussion
instead of derailing the meeting with prolonged
debate.
4. Postpone Problem Solving:
 Reviews are not meant for immediate problem
solving.
 Focus on identifying issues and defer solutions
to post-review discussions.
5. Document with Written Notes:
 Assign a recorder to make written notes, and
consider using a wall board to display notes.
 Allow other reviewers to assess wording and
priorities as information is recorded.
6. Keep Participant Count Limited:
 While collaboration is beneficial, too many
participants can hinder productivity.
 Aim for a reasonable number of participants to
maintain focus and effectiveness.
7. Emphasize Advance Preparation:
 Ensure all review participants prepare in
advance.
 Encourage written comments from reviewers
prior to the meeting.
8. Develop Checklists for Each Product:
 Use checklists to structure the review and focus
on key issues.
 Enhance organization and clarity during the
review process.
9. Allocate Resources and Time:
 Schedule FTRs as tasks within the software
engineering process.
 Plan for modifications resulting from the
review.
10. Review Your Own Reviews:
 Debriefing sessions can reveal issues in the
review process itself.
 Apply the review process to your own review
guidelines and development standards.
11. Design Review Focus:
 Concentrate a design review on evaluating a
single design, rather than comparing multiple
designs.
 Use separate design reviews for meaningful
comparison.
12. Provide Meaningful Training:
 Effective reviewers should undergo formal
training.
 Training enhances review quality and
productivity, aiding the participants'
effectiveness.
These guidelines underscore the importance of
maintaining a constructive atmosphere, staying
focused, and ensuring proper preparation for
productive and valuable formal technical reviews.

Software Quality Engineering:


Types of Reviews:
 System Requirements Review (SRR)
 Preliminary Design Review (PDR)
 Critical Design Review (CDR)
 Production Readiness Review (PRR)
 System Test

Test Plan & Test Cases Flow:


 From development artifacts like Requirement
Specs, Functional Specs, and Design Specs,
corresponding SQA artifacts are created
including review documents, test plans, test
cases, and bug reports.
Test Case Execution:
 Print and follow the test cases.
 Execute as per instructions.
 Document results (pass/fail) and sign the
results.
Defect Reporting:
 Aim: Rectify defects.
 Clients: Developers and SQA Managers.
 Tools: Use efficient reporting tools, but even
simple tools like Excel can work, albeit with
limitations.
Defect Severity Levels:
1.Critical: Test process halted.
2.Major: Test process severely limited.
3.Significant: Notable functionality issue but not
a show-stopper.
4.Minor: Minimal problem with minimal impact.
5.Enhancement: Suggested product
improvement.
Test Cycle Overview:
 Represents a full testing activity for a
component/system.
 Minimum two test cycles are recommended.
 An SQA certificate is issued after the test
phases, if the software meets the exit criteria.

Top Management Support: Staffing & Facilities


 Prioritize merit-based hiring and avoid
nepotism.
 Seek both technical and non-technical
personnel.
 Ensure competitive compensation and training.
 Maintain separate budgets for SQA and
Development.
 Provide proper test facilities, like servers,
client machines, and dedicated space.
 Emphasize trust and authority, avoiding forced
decisions.
Roles in SQA:
 Tester: Design and implement test cases, write
scripts.
 SQA Manager: Review test cases, lead review
meetings, and handle conflicts.

Traits of an Excellent SQA Professional:


 Technical experience or education.
 Resilience and a good sense of humor.
 Ability to handle chaotic situations.
 Determined and tenacious.
 Evidence-based, avoiding assumptions.
 Logical and honest.
 Boldness and self-sufficiency.
These key points underscore the importance of
structured reviews, thorough testing, and the roles
and traits crucial for successful software quality
assurance.
Statistical Quality Assurance (SQA):
 Statistical SQA provides a quantitative measure
of software quality.
 It identifies potential process variations and
predicts defects.
 Steps:
1.Collection and categorization of software
defects.
2.Trace defects to underlying causes.
3.Use the Pareto principle to identify the
"vital few" defects.
4.Correct the problems causing these
defects.

Error Categories:
From the collected data, the vital few categories of errors such as Incomplete or
erroneous specification (IES), Misinterpretation of
customer communication (MCC), and Error in data
representation (EDR) account for a significant
portion of total errors. By focusing on these areas,
software organizations can make more impactful
quality improvements.
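The Pareto step can be sketched in Python. The category codes (IES, MCC, EDR) follow the notes; the defect counts and the catch-all "OTH" category are hypothetical.

```python
from collections import Counter

# Hypothetical defect counts per error category
defects = Counter({"IES": 205, "MCC": 156, "EDR": 131, "OTH": 91})

def vital_few(counts, threshold=0.8):
    """Smallest set of categories covering `threshold` of all defects."""
    total = sum(counts.values())
    picked, covered = [], 0
    for category, n in counts.most_common():  # largest contributors first
        picked.append(category)
        covered += n
        if covered / total >= threshold:
            break
    return picked
```

With these numbers, the three named categories alone cover over 80% of the 583 defects, so they are the "vital few" to correct first.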
Phase Index (PI) and Error Index (EI):
 PI provides a measure of software quality at
each iteration of the software engineering
process. It factors in the severity and frequency
of errors.
 EI provides an overall measure of software
quality improvement across phases.
 The computation uses weighted factors to
account for the severity of errors (serious,
moderate, trivial).

Analysis for the Given Problem:

1st Phase Data:


 Serious errors (Si): 40
 Moderate errors (Mi): 78
 Minor errors (Ti): 100
2nd Phase Data:
 Serious errors (Si): 35
 Moderate errors (Mi): 12
 Minor errors (Ti): 90

Weighting Factors:
 Serious (ws): 10
 Moderate (wm): 3
 Minor (wt): 1

Product Size: 2000 KLOC (Kilo Lines of Code)

Calculations:

Phase Index (PIi) Calculation:

PIi = (ws x Si/Ei) + (wm x Mi/Ei) + (wt x Ti/Ei),
where Ei is the total number of errors for that phase.

For the 1st Phase: Ei = Si + Mi + Ti = 40 + 78 + 100 = 218
PI1 = (10 x 40/218) + (3 x 78/218) + (1 x 100/218) = 3.37

For the 2nd Phase: Ei = Si + Mi + Ti = 35 + 12 + 90 = 137
PI2 = (10 x 35/137) + (3 x 12/137) + (1 x 90/137) = 3.47

Overall Error Index (EI) Calculation:

EI = (1 x PI1 + 2 x PI2) / PS = (1 x 3.37 + 2 x 3.47) / 2000 = 0.00516

The computed Error Index (EI) value is about 0.00516 errors per KLOC. Therefore, the project appears to have a comparatively low error rate, indicating that the software is of good quality.
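The worked example can be reproduced in a few lines of Python. A sketch: with the given data the values round to PI1 ≈ 3.37, PI2 ≈ 3.47, and EI ≈ 0.00516 errors per KLOC.

```python
# Phase Index (PI) and Error Index (EI) from the worked example.
# Weights: serious ws=10, moderate wm=3, minor/trivial wt=1.

def phase_index(s, m, t, ws=10, wm=3, wt=1):
    e = s + m + t  # total errors found in the phase
    return (ws * s + wm * m + wt * t) / e

def error_index(phase_indices, product_size_kloc):
    # EI = sum(i * PIi) / PS, with phases numbered from 1
    return sum(i * pi for i, pi in enumerate(phase_indices, 1)) / product_size_kloc

pi1 = phase_index(40, 78, 100)       # 1st phase data
pi2 = phase_index(35, 12, 90)        # 2nd phase data
ei = error_index([pi1, pi2], 2000)   # product size 2000 KLOC
```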
What is Software Reliability?
Software Reliability is how consistently software
runs without failing. Think of it as a car: if your
car breaks down often, it's not reliable, but if it
runs smoothly for years, it's highly reliable.

Key Terms:
1.Failure: When your software doesn't work the
way it should. Like when your game crashes in
the middle of playing.
2.Fault: The actual mistake in the software
causing the failure. Think of it as a wrong wire
connection in a gadget that makes it
malfunction.
3.Time Interval between Failures: How long the
software works without crashing. More time
between crashes means it's more reliable.
Metrics (Or ways to measure reliability):
1.MTTF (Mean Time To Failure): Average time
the software works before it crashes.
2.MTTR (Mean Time To Repair): Average time
taken to fix the software after a crash.
3.MTBF (Mean Time Between Failure): Total
time between two crashes. It's the sum of MTTF
and MTTR.
Software Availability: It's like the uptime of a
website. If a website is available 99% of the time,
its availability is 99%.
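As a sketch, the three metrics and availability in Python, assuming the common textbook formula Availability = MTTF / (MTTF + MTTR) and hypothetical run/repair times:

```python
def mttf(uptimes):
    """Mean Time To Failure: average running time before a crash."""
    return sum(uptimes) / len(uptimes)

def mttr(repair_times):
    """Mean Time To Repair: average time to fix after a crash."""
    return sum(repair_times) / len(repair_times)

def mtbf(mttf_value, mttr_value):
    """Mean Time Between Failures = MTTF + MTTR (as defined above)."""
    return mttf_value + mttr_value

def availability(mttf_value, mttr_value):
    """Fraction of total time the system is up."""
    return mttf_value / (mttf_value + mttr_value)

# Hypothetical data: hours run between failures, and hours to repair.
up, down = [198, 202, 200], [1, 3, 2]
```

With these numbers, MTTF = 200 h, MTTR = 2 h, MTBF = 202 h, and availability is roughly 99.01%.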

Factors that affect Software Reliability:


1.Size & Complexity: Bigger software with more
lines of code has a higher chance of having
mistakes.
2.Development Process: How the software is
made. Using good tools and techniques can
reduce mistakes.
3.Team Skill: Skilled programmers make fewer
mistakes.
4.Operational Environment: How and where the
software is used. If you don't test your software
the way users use it, you might miss some
mistakes.

Conclusion:
Just like we want cars that don't break down often,
we want software that doesn't crash frequently.
Reliable software provides a better experience for
users and saves money in the long run by reducing
the costs of fixes and customer complaints.

What is an SQA Plan?


An SQA (Software Quality Assurance) Plan is like a
roadmap that guides how to ensure the quality of
software. It's created by both the SQA team and
the team that's building the software. This plan
sets the standards for how to produce quality
software for a specific project.
Main Sections of the SQA Plan:
1.Introduction:
 Purpose: What is the goal of this
document?
 Scope: Which parts of the software process
are under the quality check?
2.Management:
 Organizational Structure: Where does the
SQA team fit in the bigger picture of the
company?
 Tasks & Activities: What are the tasks of
the SQA team and when do they do them
during the software development?
 Roles & Responsibilities: Who does what to
ensure quality?
3.Documentation:
 This section lists all the written materials
produced during the software creation
process. Examples include:
 Project Documents: Like the project
plan which outlines how the software
will be made.
 Models: Diagrams or charts that help
visualize complex data structures or
software designs.
 Technical Documents: Detailed
instructions or plans, like how to test
the software.
 User Documents: Guides for end-users,
like FAQs or help manuals.
4.Standards, Practices, and Conventions:
 This section lists all the set rules and best
practices that the software making process
should follow.
5.Reviews and Audits:
 Here, they mention all the checks and
evaluations that will be done by the
software team, the SQA team, and even
sometimes the client or customer.

In Simple Terms:
The SQA Plan is like a checklist and guide for
making sure the software is of good quality. It
answers questions like: What rules should the
software follow? Who checks the software? When
and how do they check it? What documents are
created along the way?
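The five sections above can be sketched as a simple data structure. This is illustrative only: the section and item names come from the SQA Plan outline in these notes, and the checklist helper is a hypothetical convenience, not part of any standard.

```python
# SQA Plan outline represented as a dictionary, so a per-project
# checklist can be generated from it.
sqa_plan_template = {
    "Introduction": ["Purpose", "Scope"],
    "Management": ["Organizational Structure",
                   "Tasks & Activities",
                   "Roles & Responsibilities"],
    "Documentation": ["Project Documents", "Models",
                      "Technical Documents", "User Documents"],
    "Standards, Practices, and Conventions": ["Applicable standards"],
    "Reviews and Audits": ["Planned reviews", "Planned audits"],
}

def checklist(plan):
    """Flatten the plan into 'Section: Item' checklist lines."""
    return [f"{section}: {item}"
            for section, items in plan.items()
            for item in items]

for line in checklist(sqa_plan_template):
    print("[ ]", line)
```

Keeping the plan as structured data makes it easy to adapt the same template to different projects by adding or removing items per section.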

What is the SQA Group?


The SQA (Software Quality Assurance) Group is a
team within an organization responsible for
ensuring that software products are of high quality.
They follow set guidelines and best practices to
make sure that software is developed correctly and
meets the required standards.

Roles within the SQA Group:


1.Test Manager:
 Admin Responsibilities: Handle budget,
hire team members, assign tasks, provide
training, and review team performance.
 Tech Responsibilities: Plan how to test the
software, carry out the tests, manage any
issues, improve testing processes, and use
metrics (data) to analyze performance.
2.Technical Leader:
 Role: Coordinates the tasks of several
engineers working on complicated projects.
 Skills: Needs to be technically proficient in
testing, but also good at managing
projects, communicating, and making
decisions.
 Responsibility: Sets the direction for how
software testing should be approached.
3.Principal Engineer:
 Role: Expert in testing.
 Responsibilities: Planning tests,
automating tests, setting up test
environments, developing tools for testing,
buying test equipment, checking
performance, ensuring reliability, and
overseeing business acceptance testing.
4.Senior Engineer:
 Role: More experienced tester.
 Responsibilities: Designing and running
tests, setting up test labs, helping software
developers to fix issues, attending test
planning meetings, and maintaining test
labs, tools, and automated test suites.
5.Junior Engineer:
 Role: Beginner or less experienced tester.
 Responsibilities: Assists more experienced
engineers with running tests, setting up
tests, and creating scripts to automate
tests.

In Simple Terms:
The SQA Group is like a team of quality-checkers
for software. Just like how a factory would have
inspectors to make sure products are made
correctly, this group checks that software works
well and doesn't have problems. Within the team,
there are different roles, from managers to junior
testers, each with its own set of responsibilities.
Week 4

ISO 9126 is an international standard that
describes how to measure the quality of software.
It breaks down software quality into six main
categories, each with its own sub-categories.

Main Characteristics:
1.Functionality:
 Measures if the software does what it's
supposed to.
 Suitability: Does it offer the right
functions?
 Accuracy: Does it give the correct
results?
 Interoperability: Can it work with
other systems?
 Security: Is it safe against
unauthorized access and attacks?
2.Reliability:
 Gauges if the software performs reliably
without crashing or errors.
 Maturity: Is it mature enough to avoid
frequent crashes?
 Fault Tolerance: Can it handle faults
gracefully?
 Recoverability: Can it recover quickly
after a failure?
3.Usability:
 Checks how user-friendly the software is.
 Understandability: Can users easily
grasp what the software does?
 Learnability: Can users quickly learn
how to use it?
 Operability: Is it easy to operate?
 Attractiveness: Is it pleasing to use?
4.Efficiency:
 Assesses if the software performs tasks
quickly and efficiently.
 Time Behavior: How fast does it
respond?
 Resource Utilization: Does it use
computer resources (like memory or
CPU) efficiently?
5.Maintainability:
 Measures how easy it is to modify or
maintain the software.
 Analyzability: Can you easily find and
understand issues or defects?
 Changeability: Can you easily make
changes to it?
 Stability: Does it avoid unexpected
side effects when changes are made?
 Testability: Can you easily test it after
changes?
6.Portability:
 Determines how easily the software can be
transferred to another environment.
 Adaptability: Can you adjust it for
different environments?
 Installability: How easy is it to install?
 Coexistence: Can it run alongside other
software without issues?
 Replaceability: Can it replace another
software in its environment?

In simple terms, ISO 9126 is like a checklist to
measure software quality. It ensures the software
does its job, is reliable, user-friendly, efficient,
maintainable, and can be moved or adapted to
different situations.
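The six characteristics can be used as an evaluation scorecard. Note that ISO 9126 names the characteristics but does not prescribe a scoring formula; the weighted average below is just one illustrative way to combine per-characteristic ratings, and the 0-10 ratings are hypothetical.

```python
# The six top-level characteristics defined by ISO 9126.
ISO_9126_CHARACTERISTICS = [
    "Functionality", "Reliability", "Usability",
    "Efficiency", "Maintainability", "Portability",
]

def quality_score(ratings, weights=None):
    """Weighted average of per-characteristic ratings (0-10 scale)."""
    if weights is None:
        weights = {c: 1.0 for c in ratings}  # equal weighting by default
    total_weight = sum(weights[c] for c in ratings)
    return sum(ratings[c] * weights[c] for c in ratings) / total_weight

# Hypothetical assessment: strong overall, weak on Portability.
ratings = {c: 8 for c in ISO_9126_CHARACTERISTICS}
ratings["Portability"] = 5
print(f"Overall quality: {quality_score(ratings):.2f} / 10")
```

In practice a team might weight the characteristics differently per project, e.g. giving Security-related Functionality and Reliability more weight for a banking system than for a prototype.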

ISO 9000:2000 Standard Components:


1.ISO 9000: This talks about the basic concepts
and the language used in the ISO standards.
2.ISO 9001: This lays out the criteria or
requirements for a quality management
system.
3.ISO 9004: Offers guidelines and best practices
for ensuring the quality of products and
achieving sustained success.

ISO 9000:2000 Key Principles:


1.Customer Focus:
 Make sure your product or service meets
the customer's needs.
 Understand how they'll use it for better
results.
2.Leadership:
 Leaders should clearly communicate the
organization's direction.
 They should also value and acknowledge
the contributions of their team.
3.Involvement of People:
 Everyone in the organization should be
aware and involved in achieving the
company's objectives.
 Encourage creativity and make use of
individual talents.
4.Process Approach:
 Look at tasks as a sequence of activities
that change inputs to outputs.
 Define, measure, and refine these
processes to get better results.
5.System Approach to Management:
 Understand the organization as a collection
of interconnected processes.
 Know how each process interacts with the
others to optimize overall performance.
6.Continual Improvement:
 Regularly check and refine processes for
better results.
 Set clear goals and methods for these
reviews.
7.Factual Approach to Decision Making:
 Make decisions based on data, not just gut
feelings.
 Use a structured system to measure and
evaluate data.
8.Mutually Beneficial Supplier Relationships:
 Have a good relationship with those who
supply your raw materials or components.
 A win-win relationship helps both sides
improve.

In short, ISO 9000:2000 sets guidelines to ensure
that products are of high quality. It emphasizes
understanding customer needs, clear leadership,
involving everyone in the organization, structured
processes, continuous improvement, data-driven
decisions, and having good relationships with
suppliers.
