Different Levels of Testing
document the results. It is recommended to get sign-off and check in all documentation and
code according to Configuration Management (CM) procedures to ensure quality testing.
Each level of testing is considered either black box or white box testing.
• Black box testing: not based on any knowledge of internal design or code.
Tests are based on requirements and functionality.
• White box testing: based on knowledge of the internal logic of an application's code. Tests are based on coverage of code statements, branches, paths, and conditions.
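To make the distinction concrete, here is a minimal white-box sketch in Python. The function `shipping_tier` is hypothetical (not from the text): white-box tests are derived from its internal branches, whereas black-box tests would be derived from the requirement alone.

```python
# Hypothetical unit with several branches; white-box tests aim to
# exercise every branch and boundary condition in the code itself.

def shipping_tier(weight_kg: float) -> str:
    """Classify a parcel by weight (illustrative example only)."""
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    if weight_kg < 2:
        return "light"
    if weight_kg < 20:
        return "standard"
    return "freight"

def test_all_branches():
    # One case per branch, including the error path.
    assert shipping_tier(1.0) == "light"
    assert shipping_tier(5.0) == "standard"
    assert shipping_tier(25.0) == "freight"
    try:
        shipping_tier(0)
        raise AssertionError("expected ValueError")
    except ValueError:
        pass

test_all_branches()
```

A black-box tester would instead pick inputs from the specification (typical weights, limits stated in requirements) without seeing the `if` structure above.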
Unit Testing
Unit testing is the first level of dynamic testing and is the responsibility of the developers first and then of the testers. Unit testing is considered complete when the expected test results are met or differences are explainable/acceptable.
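A minimal sketch of a developer-written unit test, pytest style. The unit under test, `apply_discount`, is hypothetical; the point is that expected results come from the unit's specification and any difference must be explained before the test is considered complete.

```python
# Hypothetical unit under test (not named in the text).
def apply_discount(price: float, percent: float) -> float:
    """Return price reduced by `percent`, rounded to cents."""
    if not 0 <= percent <= 100:
        raise ValueError("percent out of range")
    return round(price * (1 - percent / 100), 2)

def test_expected_results():
    # Expected results are taken from the specification.
    assert apply_discount(100.0, 15) == 85.0
    assert apply_discount(19.99, 0) == 19.99

def test_invalid_input_is_rejected():
    try:
        apply_discount(10.0, 120)
    except ValueError:
        return
    raise AssertionError("expected ValueError")

test_expected_results()
test_invalid_input_is_rejected()
```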
Parallel/Audit Testing
Testing where the user reconciles the output of the new system to the output of the current system to verify that the new system performs the operations correctly.
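The reconciliation step can be sketched as follows. `legacy_calc` and `new_calc` are hypothetical stand-ins for the current and the new system; the same inputs are run through both and every record whose outputs differ is reported.

```python
# Stand-ins for the current system and its re-implementation under test.
def legacy_calc(order):
    return order["qty"] * order["unit_price"]

def new_calc(order):
    return order["qty"] * order["unit_price"]

def reconcile(orders):
    """Return (id, old_output, new_output) for every mismatching record."""
    mismatches = []
    for order in orders:
        old, new = legacy_calc(order), new_calc(order)
        if old != new:
            mismatches.append((order["id"], old, new))
    return mismatches

orders = [
    {"id": 1, "qty": 3, "unit_price": 9.5},
    {"id": 2, "qty": 1, "unit_price": 120.0},
]
assert reconcile(orders) == []  # new system matches the current one
```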
Functional Testing
A black-box type of testing geared to the functional requirements of an application. Testers should perform this type of testing.
Usability Testing
Testing for 'user-friendliness'. Clearly this is subjective and will depend on the targeted end-user or customer. User interviews, surveys, video recordings of user sessions, and other techniques can be used. Programmers and testers are usually not appropriate as usability testers.
Incremental Integration Testing
Continuous testing of an application as new functionality is added is recommended.
This may require that various aspects of an application's functionality be independent enough
to work separately before all parts of the program are completed, or that test drivers be
developed as needed; done by programmers or by testers.
Integration Testing
Upon completion of unit testing, integration testing, which is black box testing, will begin. The purpose is to ensure that distinct components of the application still work in accordance with customer requirements.
customer requirements. Test sets will be developed with the express purpose of exercising
the interfaces between the components. This activity is to be carried out by the Test Team.
Integration test will be termed complete when actual results and expected results are either in
line or differences are explainable/acceptable based on client input.
System Testing
Upon completion of integration testing, the Test Team will begin system testing. During
system testing, which is a black box test, the complete system is configured in a controlled
environment to validate its accuracy and completeness in performing the functions as
designed. The system test will simulate production in that it will occur in the “production-like”
test environment and test all of the functions of the system that will be required in production.
The Test Team will complete the system test.
Prior to the system test, the unit and integration test results will be reviewed by SQA to
ensure that all problems have been resolved. It is important for higher level testing efforts to
understand unresolved problems from the lower testing levels. System test is deemed
complete when actual results and expected results are either in line or differences are
explainable/acceptable based on client input.
End-to-End Testing
Similar to system testing; the 'macro' end of the test scale; involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.
Regression Testing
The objective of regression testing is to ensure that software remains intact. A baseline set
of data and scripts will be maintained and executed to verify that changes introduced during
the release have not “undone” any previous code. Expected results from the baseline are
compared to results of the software being regression tested. All discrepancies will be
highlighted and accounted for before testing proceeds to the next level.
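The baseline comparison can be sketched like this. The baseline dict and `run_suite` are illustrative stand-ins for the checked-in baseline data/scripts and their re-execution against the release.

```python
import json

# Stand-in for the baseline results kept under CM control.
baseline = {"login": "ok", "create_order": "ok", "monthly_report": "ok"}

def run_suite():
    # Stand-in for re-executing the baseline scripts against the release.
    return {"login": "ok", "create_order": "ok", "monthly_report": "ok"}

def regression_diff(expected, actual):
    """Return every test whose result deviates from the baseline."""
    return {
        name: (expected.get(name), actual.get(name))
        for name in set(expected) | set(actual)
        if expected.get(name) != actual.get(name)
    }

discrepancies = regression_diff(baseline, run_suite())
# Every discrepancy must be accounted for before testing proceeds.
assert discrepancies == {}, json.dumps(discrepancies, indent=2)
```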
Sanity Testing
Sanity testing will be performed whenever cursory testing is sufficient to prove that the
application is functioning according to specifications. This level of testing is a subset of
regression testing. It will normally include a set of core tests of basic GUI functionality to
demonstrate connectivity to the database, application servers, printers, etc.
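One common way to realize "a subset of regression testing" is to tag core tests and run only those for the sanity pass. The suite, tags, and test names below are purely illustrative.

```python
# Illustrative regression suite; tests tagged "core" form the sanity subset.
regression_suite = [
    {"name": "login_screen_loads",     "tags": {"core", "gui"}},
    {"name": "db_connection",          "tags": {"core", "infra"}},
    {"name": "monthly_report_totals",  "tags": {"reports"}},
    {"name": "print_invoice",          "tags": {"core", "infra"}},
]

def sanity_subset(suite):
    """Select only the core tests for a cursory sanity run."""
    return [t["name"] for t in suite if "core" in t["tags"]]

print(sanity_subset(regression_suite))
```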
Performance Testing
Although performance testing is described as a part of system test, it can be regarded as a
distinct level of testing. Performance testing will verify the load, volume, and response times
as defined by requirements.
Load Testing
Testing an application under heavy loads, such as testing of a web site under a range of loads to determine at what point the system's response time degrades or fails.
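A minimal load-test sketch: drive a request handler at increasing levels of concurrency and record how the mean response time degrades. `handle` is a stub standing in for a real HTTP call to the system under test, with latency that grows with concurrent load.

```python
import threading
import time
from concurrent.futures import ThreadPoolExecutor

_in_flight = 0
_lock = threading.Lock()

def handle():
    """Stub request handler: latency grows with the number of concurrent calls."""
    global _in_flight
    with _lock:
        _in_flight += 1
        load = _in_flight
    time.sleep(0.005 * load)  # simulated degradation under load
    with _lock:
        _in_flight -= 1

def mean_response_time(concurrency, requests=20):
    """Issue `requests` calls with `concurrency` workers; return mean latency."""
    def timed(_):
        t0 = time.perf_counter()
        handle()
        return time.perf_counter() - t0
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        latencies = list(pool.map(timed, range(requests)))
    return sum(latencies) / len(latencies)

for level in (1, 5, 10):
    print(f"concurrency={level:2d}  mean response time={mean_response_time(level):.4f}s")
```

Against a real system the stub would be replaced by an actual request, and the point at which response time becomes unacceptable (or requests fail) marks the load limit.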
Install/Uninstall Testing
Test full, partial, or upgrade install/uninstall processes. The installation test for a release will
be conducted with the objective of demonstrating production readiness. This test is
conducted after the application has been migrated to the client’s site. It will encompass the
inventory of configuration items (performed by the application's System Administrator) and
evaluation of data readiness as well as dynamic tests focused on basic system functionality.
When necessary, a sanity test will be performed following the install.
Security Testing
Test how well the system protects against unauthorized internal or external access, willful damage, etc.; this may require sophisticated testing techniques.
Recovery Testing
Test how well a system recovers from crashes, hardware failures, or other catastrophic problems.
Compatibility Testing
Test how well software performs in a particular hardware/software/operating system/network environment.
Comparison Testing
Compare software weaknesses and strengths to those of competing products.
Acceptance Testing
Acceptance testing, which is black box testing, will give the client the opportunity to verify the system functionality and usability prior to the system being moved to production. The acceptance test will be the responsibility of the client; however, it will be conducted with full support from the project team. The Test Team will work with the client to develop the acceptance criteria.
Alpha Testing
Testing of an application when development is nearing completion; minor design changes may still be made as a result of such testing. Typically done by end-users or others, not by programmers or testers.
Beta Testing
Testing when development and testing are essentially completed and final bugs and problems need to be found before the final release. Typically done by end-users or others, not by programmers or testers.
Page last modified: 2018-09-12.