DRAFT [2016-2017][KR][en] at 2023-06-02 13:24:01 +0300

Software Engineering Practices

Unit 6 Software Quality Assurance

Lecture


Keywords

software quality assurance (SQA), testing, coverage-based testing, fault-based testing, error-based testing, black-box testing, white-box testing, error, mistake, fault, bug, failure, test adequacy criterion, verification, validation

6.1 Management of Software Quality

Automation is pervading everyday life: more and more people come into contact with software systems, and the quality of those systems is a major concern. Software quality is therefore an important topic.

Quality has to be built in from the very beginning. This unit discusses the many dimensions of quality of both the software product and the software process.

Different people will have different perspectives on the quality of a software system. A system tester may view quality as ‘compliance to requirements’, whereas a user may view it as ‘fitness for use’. Both viewpoints are valid, but they need not coincide. Part of the confusion about what the quality of a system entails, and how it should be assessed, is caused by mixing up these different perspectives.

Software Quality Assurance (SQA) is a set of activities that defines and assesses the adequacy of software processes to provide evidence for a justified statement of confidence that the software processes will produce software products that conform to their established requirements.

Software quality assurance procedures provide the means to review and audit the software development process and its products. Quality assurance by itself does not guarantee quality products. Quality assurance merely sees to it that work is done the way it is supposed to be done.

Quality actions within software development organizations are aimed at finding opportunities to improve the development process. These improvements require an understanding of the development process, which can be obtained only through carefully collecting and interpreting data that pertain to quality aspects of the process and its products.

The goals of SQA are:

6.1.1 Quality attributes

Figure 6.1 lists the quality factors and their definitions.

Figure 6.1 Quality factors

These quality factors can be broadly categorized into three classes. The first class contains those factors that pertain to the use of the software after it has become operational. The second class pertains to the maintainability of the system. The third class contains factors that reflect the ease with which a transition to a new environment can be made. These three categories are depicted in figure 6.2.

Figure 6.2 Three categories of software quality factors

Different participants of the software development process recognize five definitions of software quality:

6.1.2 Standards pertaining to software quality

The International Organization for Standardization (ISO) has established several standards that pertain to the management of quality. The one most applicable to our field, the development and maintenance of software, is ISO 9001. ISO 9001 can be augmented by more specific procedures, aimed specifically at quality assurance and control for software development. The IEEE Standard for Quality Assurance Processes is meant to provide such procedures. The Capability Maturity Model (CMM) is the best-known attempt to provide directions for improving the development process. It uses a five-point scale to rate organizations and indicates key areas of focus in order to progress to a higher maturity level.

In 2011, the standard ISO/IEC 25010 (Systems and software engineering -- Systems and software Quality Requirements and Evaluation (SQuaRE) -- System and software quality models) was approved. This document describes the characteristics of product quality and quality in use.

In 2013, the ISO/IEC/IEEE 29119 standard series (Systems and software engineering - Software testing) was approved. It defines an internationally agreed set of standards for software testing that can be used by any organization when performing any form of software testing.

 

6.2 Software Testing

Testing is not confined to the end product: during earlier phases, intermediate products can, and should, be tested as well.

Testing often means executing a program to see whether it produces the correct output for a given input. This involves testing the end product, the software itself. By the time the software has been written, we are often pressed for time, which does not encourage thorough testing. Good testing is difficult: it requires careful planning and documentation. A large number of test techniques exist.
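In this view, a test case is simply a chosen input paired with the expected output; executing the program and comparing actual against expected output reveals failures. A minimal Python sketch (the function name and its test cases are illustrative, not from the text):

```python
# Hypothetical function under test.
def absolute(x):
    """Return the absolute value of x."""
    return x if x >= 0 else -x

# Each test case pairs a given input with the expected output.
test_cases = [(5, 5), (-5, 5), (0, 0)]

for given, expected in test_cases:
    actual = absolute(given)
    # A mismatch between actual and expected output is a failure.
    assert actual == expected, f"failure: absolute({given}) = {actual}"
```

In practice such checks are organized with a test framework (e.g. unittest or pytest), but the principle is the same: input in, output compared against the expectation.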

Since exhaustive testing is generally not feasible, we have to select an adequate set of test cases. Test techniques generally use some systematic means to derive test cases. These test cases are meant to provoke failures; thus, the main objective is fault detection. Test techniques may be classified according to the criterion used to measure the adequacy of a set of test cases: coverage-based testing, fault-based testing, and error-based testing.

Alternatively, it is possible to classify test techniques based on the source of information used to derive test cases: black-box testing, which derives test cases from the specification, and white-box testing, which derives them from the structure of the code.
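The distinction can be made concrete with a small sketch. The function below and the chosen inputs are hypothetical; the point is only where the test cases come from:

```python
# Hypothetical function under test: classify an exam score (0..100).
def grade(score):
    if score < 0 or score > 100:
        raise ValueError("score out of range")
    return "pass" if score >= 50 else "fail"

# Black-box test cases are derived from the specification alone,
# typically at the boundaries of the input domain, without
# looking at the code.
black_box_inputs = [0, 49, 50, 100]

# White-box test cases are derived from the code itself, chosen so
# that every branch is exercised, including the error-raising one.
white_box_inputs = [-1, 49, 50, 101]

for x in black_box_inputs:
    assert grade(x) in ("pass", "fail")

for x in white_box_inputs:
    try:
        grade(x)
    except ValueError:
        pass  # the out-of-range branch was exercised
```

The two sets overlap, but neither subsumes the other: the black-box set never reaches the error branch, while the white-box set says nothing about whether the boundaries in the specification were implemented correctly.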

IEEE Std 610 gives four definitions related to the notion of ‘error’. To distinguish between them, the words ‘error’, ‘fault’, ‘failure’ and ‘mistake’ are used. An error denotes a measurement error, while a mistake denotes a human error. A fault is an incorrect step, process, or data definition in a program; in common usage, bug has the same meaning. A failure is the inability of a program to perform a required function.
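The distinction between a fault and a failure matters because a fault may stay dormant for many inputs and only manifest itself as a failure for some. A hypothetical sketch (the function and its specification are illustrative only):

```python
# Specification (assumed): is_adult(age) must return True for age >= 18.
def is_adult(age):
    return age > 18   # fault: the programmer wrote > instead of >=

# The fault stays hidden for most inputs ...
assert is_adult(20) is True    # correct output despite the fault
assert is_adult(10) is False   # correct output despite the fault

# ... and only manifests as a failure at the boundary value:
assert is_adult(18) is False   # failure: the required output is True
```

The mistake here is the programmer's slip; the fault is the `>` in the code; the failure is the wrong output for input 18. Boundary inputs like this are exactly what good test cases aim to provoke.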

Some faults are critical. Special techniques, such as fault tree analysis, have been developed to find those critical faults. Using fault tree analysis, testers try to derive a contradiction by reasoning backwards from a given, undesirable, end situation.

A test adequacy criterion example: consider a test set S containing just one test case. If we execute the program using S, all statements are executed at least once. If our criterion for judging the adequacy of a test set is that 100% of the statements are executed, then S is adequate. If our criterion is that 100% of the branches are executed, then S is not adequate, since the (empty) else-branch of the if-statement is not executed by S.
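The program behind this example is not shown in the text; a minimal reconstruction of such an if-statement with an empty else-branch might look as follows (names are assumptions):

```python
# Hypothetical program fragment: an if-statement whose else-branch
# is empty.
def clamp_negative(x):
    if x < 0:
        x = 0   # then-branch
    # (empty else-branch: nothing happens when x >= 0)
    return x

# Test set S with a single test case: a negative input.
S = [-5]
assert clamp_negative(-5) == 0

# Running S executes every statement at least once, so S is adequate
# under 100% statement coverage. The else-branch (x >= 0), however,
# is never taken, so S is NOT adequate under 100% branch coverage.
# A second test case closes that gap:
S2 = [-5, 3]
assert clamp_negative(3) == 3
```

Branch coverage thus subsumes statement coverage: any test set that executes every branch also executes every statement, but not vice versa.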

A test adequacy criterion can be used in different ways: as stopping rule, as measurement, or as test case generator. If a test adequacy criterion is used as a stopping rule, it tells us when sufficient testing has been done. If statement coverage is the criterion, we may stop testing if all statements have been executed by the tests done so far. In this view, a test set is either good or bad; the criterion is either met, or it isn’t. If we relax this requirement a bit and use, say, the percentage of statements executed as a test quality criterion, then the test adequacy criterion is used as a measurement.
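The measurement view can be sketched in a few lines of Python. The helper below is an illustrative toy, assuming CPython's tracing facilities; real projects would use a dedicated tool such as coverage.py:

```python
import dis
import sys

def statement_coverage(func, test_set):
    """Percentage of func's executable lines run by test_set.

    A minimal sketch of an adequacy criterion used as a measurement.
    """
    # All line numbers that hold executable code in func.
    all_lines = {line for _, line in dis.findlinestarts(func.__code__)
                 if line is not None}
    executed = set()

    def tracer(frame, event, arg):
        # Record each line of func as the interpreter executes it.
        if event == "line" and frame.f_code is func.__code__:
            executed.add(frame.f_lineno)
        return tracer

    sys.settrace(tracer)
    try:
        for case in test_set:
            func(case)
    finally:
        sys.settrace(None)
    return 100.0 * len(executed & all_lines) / len(all_lines)

# Hypothetical function under test: one of its two return
# statements is reached per call.
def sign(x):
    if x < 0:
        return -1
    return 1

# As a measurement, the criterion grades test sets: the two-case set
# executes strictly more statements than the single-case set.
partial = statement_coverage(sign, [-5])
full = statement_coverage(sign, [-5, 3])
assert full > partial
# As a stopping rule, we would stop testing once the measure hits 100%.
```

Used as a test case generator, the same criterion works the other way around: it tells us which statements (or branches) are still unexecuted, and new test cases are derived to reach them.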

Different testing activities are used during the SDLC (figure 6.3).

Figure 6.3 Testing activity in SDLC

The planning of test activities is described in IEEE Std 1012, which describes verification and validation activities for a waterfall-like life cycle. IEEE Std 610 defines verification as the process of evaluating a system or component to determine whether the products of a given development phase satisfy the conditions imposed at the start of that phase. Verification thus tries to answer the question: have we built the system right? Validation is the process of evaluating a system or component during or at the end of the development process to determine whether it satisfies specified requirements. Validation boils down to the question: have we built the right system?

As with any other life cycle activity, testing has to be carefully planned, controlled, and documented. The goal of testing is to find faults, not to prove correctness. Indeed, the absence of faults does not guarantee correctness. There are many manual and automated techniques to help find faults in code, as well as testing tools to show how much has been tested and when to stop testing. Experimental evaluations show that there is no uniform best test technique. Different techniques tend to reveal different types of error.

 

References
  1. IEEE Std 610.12-1990 - IEEE Standard Glossary of Software Engineering Terminology.
  2. IEEE Std 730-2014 - IEEE Standard for Software Quality Assurance Processes.
  3. ISO/IEC 15504 Information technology - Process assessment.
  4. IEEE Std 829-2008 - IEEE Standard for Software and System Test Documentation.
  5. ISO/IEC 25010:2011 Systems and software engineering - Systems and software Quality Requirements and Evaluation (SQuaRE) - System and software quality models.
  6. IEEE Std 1012-2012 - IEEE Standard for System and Software Verification and Validation.
  7. Juristo, N., Moreno, A., and Vegas, S. Reviewing 25 Years of Testing Technique Experiments. Empirical Software Engineering, 9, 2004.
  8. Beck, K. Test-Driven Development. 2003.
  9. Sommerville, I. Software Engineering, 9th ed. Addison-Wesley, 2011.
     https://edisciplinas.usp.br/pluginfile.php/2150022/mod_resource/content/1/1429431793.203Software%20Engineering%20by%20Somerville.pdf

Part of the material was taken from:

  1. Software Engineering: Principles and Practice. Hans van Vliet. 2007.

 


© 2006—2023 Sumy State University