A. Planning Phase
» Objective:
During the planning phase of the development cycle, the QA team should focus on the Requirements and Business Scenarios. The QA team should review these documents for comprehensibility, accuracy, feasibility, and testability, and record all issues in a “Gap Analysis on Requirements” document.
» Tasks involved:
- Assess the project QA effort
- Establish a defect / issue tracking methodology
- Establish the main testing approach
- Take part in meetings with the client and in requirements reviews with the development team
- Keep meeting minutes
- Establish the environment requirements
- Play a development support role
» QA Deliverables:
- Main “Bird’s Eye View” Testing Approach
- “Gap Analysis on Requirements” document
- Issues / Defects Tracking methodology (can be part of the Main Testing Approach)
- Meeting Minutes
B. Design Phase
» Objective:
During the Design phase, the QA team should focus on reviewing and evaluating the design requirements and produce its draft Test Plan (a more refined, low-level version of the Test Approach). During this phase, the QA team should participate in design reviews with the development team and conduct gap analysis reviews of the design requirements. All issues should be recorded in the “Gap Analysis on Requirements” document.
The design requirements review will help the QA team refine the initial Test Approach and produce the draft Test Plan and the draft Test Cases structure. The draft Test Plan should define much of the detailed strategy and the specific QA information that will be used for testing the application.
» Tasks involved:
- Divide Design requirements into testable areas and sub-areas
- Structure the draft Test Plan into testable areas and sub-areas
- Define test strategies for each area and sub-area
- Define the list of configurations for testing
» QA Deliverables:
- Draft Test Plan (a refinement of the “Bird’s Eye View” Testing Approach)
- Draft Test Cases structure
- Updated “Gap Analysis on Requirements” document
- Meeting Minutes
C. Construction Phase
During the Construction phase of the development cycle, the QA team begins to execute its primary role: identifying defects and issues. At the beginning of this phase, the QA team will spend most of its time generating detailed test cases. As the phase progresses, the testing team will receive release candidates (modules) of increasing functionality to test.
By the time the construction phase closes, the testing team will be primarily executing test cases.
C.1. Unit Testing
General Test Objectives
Unit Testing is primarily concerned with uncovering defects or logic errors in an application software component, typically a module or a screen.
The objective of unit testing is to ensure that each component or module of the system conforms in all respects to the design, the functional specifications, and the overall business objectives.
All components or modules tested will be addressed as described in the ESD – Testing Techniques Tome document.
The exit criterion for this phase is code completion: all functionality and all logical and physical components of the modules must be complete and ready to interact with the other modules of the application.
Testing Techniques
Ongoing walkthroughs of modules and functions will be conducted on an informal basis to assure quality, consistency, accuracy, and conformance to specifications. A top-down approach to testing will be employed.
The component will first be “black-box” tested. The focus will be on the component’s functionality: does the module or component properly perform the function it was designed to perform? Inputs will be varied and the results compared to the expected results.
The component will next be “white-box” tested. The inputs will be varied in such a way as to exercise as many entries, exits, and paths as possible and relevant. The focus is to exercise all the code of the program at least once, to uncover possible errors in paths that are not part of the “regular” processing.
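As a sketch of the two techniques, the example below unit-tests a hypothetical discount-calculation function: the black-box cases only compare inputs against expected outputs, while the white-box cases are chosen to force every branch and exit of the implementation. The function and its business rules are illustrative assumptions, not part of this project’s code.

```python
import unittest

def discount(order_total, is_member):
    """Hypothetical module under test: 10% off orders over 100,
    plus a flat 5 extra for members (illustrative rules only)."""
    if order_total < 0:
        raise ValueError("order total cannot be negative")
    amount = 0.0
    if order_total > 100:          # branch 1: volume discount
        amount = order_total * 0.10
    if is_member:                  # branch 2: membership bonus
        amount += 5.0
    return amount

class BlackBoxTests(unittest.TestCase):
    """Vary inputs and compare results to the expected results,
    without reference to the implementation."""
    def test_large_member_order(self):
        self.assertEqual(discount(200, True), 25.0)

    def test_small_guest_order(self):
        self.assertEqual(discount(50, False), 0.0)

class WhiteBoxTests(unittest.TestCase):
    """Exercise every entry, exit, and path at least once."""
    def test_volume_branch_only(self):
        self.assertEqual(discount(150, False), 15.0)

    def test_member_branch_only(self):
        self.assertEqual(discount(80, True), 5.0)

    def test_boundary_not_over_100(self):
        self.assertEqual(discount(100, False), 0.0)  # boundary path

    def test_error_exit(self):
        with self.assertRaises(ValueError):
            discount(-1, False)

if __name__ == "__main__":
    unittest.main()
```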
Any abnormal performance will be recorded, so that when load and stress tests are performed later the slower processes are already known.
Methods of Creating and Validating Test Data
Test data will resemble live data to the largest extent possible. This aids in the recognition of errors that might go undetected when using placeholder data such as “XXX”, “YYY”, or “ZZZ”. Record counts and checksums will be calculated for gross data validation. Furthermore, for special tests (special characters, etc.), check the ESD – Testing Techniques Tome.
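As a minimal sketch of gross data validation, the snippet below computes a record count and a SHA-256 checksum for a delimited test-data file, so the copy used in testing can be compared against the source extract. The file name and CSV layout are placeholder assumptions for illustration.

```python
import csv
import hashlib

def gross_validation(path):
    """Return (record_count, checksum) for a test-data file."""
    # Checksum over the raw bytes catches any corruption in transfer.
    sha = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            sha.update(chunk)
    # Record count as a second, independent gross check.
    with open(path, newline="", encoding="utf-8") as f:
        record_count = sum(1 for _ in csv.reader(f))
    return record_count, sha.hexdigest()

# Hypothetical usage: compute once on the source extract, again on
# the loaded test copy, and compare the two results.
count, digest = gross_validation("customers_testdata.csv")
print(f"{count} records, checksum {digest}")
```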
Defects and Issues
As a general ESD rule, all defects and issues are to be recorded and maintained in the CTPS specific defects and issues sections.
During the unit testing phase we expect a high number of defects and issues. Knowing this, all the defects and issues reported in this phase will be maintained as a single ProjectClarity defect, “Development Support for X module”, which will contain a defect punch list that is continuously updated.
C.2. Acceptance into Integration Testing
Before accepting components for Integration Testing, the QA team must verify that adequate unit testing has been done. For a release to be accepted into Integration, it must pass the following acceptance test:
- Sign-off from QA that unit testing is complete on all modules released.
- Verification that the release notes / description and the release installation guide accompany the released source code.
C.3. Integration Testing
General Test Objectives
The objective of integration testing is to ensure that the interfaces between components and modules are identified and tested. The primary focus will be on data integrity across the modules. The secondary focus will be to verify that the components on each side of an interface function properly and conform to the design specifications (gap analysis).
Several cycles of integration testing will occur, one each time the QA team receives a new release or an addition to the existing source. This milestone in the construction phase is complete when the code is complete and subsequent releases include only bug fixes, with no new features.
Integration Testing will include:
- Execution of test cases (and, where applicable, test scripts)
- Defect tracking documents for every defect or variation from the expected results
- Updates to the existing test cases and the addition of newly discovered ones
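A minimal sketch of an integration test case focused on data integrity follows. The two modules and the shared store that forms their interface are hypothetical stand-ins; the point is only that a value written on one side of an interface must arrive intact on the other.

```python
import unittest

# Hypothetical modules on each side of an interface (illustration only).
class OrderModule:
    def __init__(self, store):
        self.store = store
    def place_order(self, order_id, total):
        # Writes across the interface: the shared store.
        self.store[order_id] = {"total": round(total, 2)}

class InvoiceModule:
    def __init__(self, store):
        self.store = store
    def invoice_total(self, order_id):
        # Reads back across the same interface.
        return self.store[order_id]["total"]

class OrderInvoiceIntegrationTest(unittest.TestCase):
    """Primary focus: data integrity across the two modules."""
    def test_total_survives_the_interface(self):
        store = {}
        orders, invoices = OrderModule(store), InvoiceModule(store)
        orders.place_order("ORD-1", 149.99)
        self.assertEqual(invoices.invoice_total("ORD-1"), 149.99)

if __name__ == "__main__":
    unittest.main()
```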
Defects and Issues
As a general ESD rule, all defects and issues are to be recorded and maintained in the CTPS specific defects and issues sections.
During integration testing, each defect will be recorded as a single defect, with the corresponding steps, description, screenshots, and any other related documents. All defects should be traced and their CTPS status kept up to date.
Performance flaws (e.g. slow processes, slow screens) will be recorded for later performance testing.
Regression Testing
The QA team should perform regression testing after each change to the test environment, to make sure that the modified code behaves correctly and that the modifications have not affected existing functionality.
A selective approach to regression testing should be used: the QA team identifies and re-tests only those parts of the application that have been, or might have been, affected by the modification.
The affected parts should be identified using the application matrix, the main list of features, or a diagram.
The regression test process typically consists of five steps (a brief selection sketch follows the list):
- Identify the affected modules.
- Identify the affected functionality.
- Identify the existing tests that must be rerun, since they may produce different results.
- Run the regression tests.
- Compile the regression test report.
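The selection steps can be as simple as a lookup against the application matrix. The sketch below assumes a hypothetical matrix mapping modules to the test suites that cover them; given the modules touched by a change, it reports which suites must be rerun. All module and suite names are placeholder assumptions.

```python
# Hypothetical application matrix: module -> test suites covering it.
APPLICATION_MATRIX = {
    "login":    ["auth_tests", "session_tests"],
    "checkout": ["payment_tests", "cart_tests", "session_tests"],
    "reports":  ["report_tests"],
}

def select_regression_suites(changed_modules):
    """Steps 1-3: from the affected modules, derive the existing
    test suites that must be rerun."""
    suites = set()
    for module in changed_modules:
        suites.update(APPLICATION_MATRIX.get(module, []))
    return sorted(suites)

# A change touching checkout reruns only the suites it can affect.
print(select_regression_suites(["checkout"]))
# -> ['cart_tests', 'payment_tests', 'session_tests']
```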
C.4. Acceptance into System Test
System testing begins when code completion has been achieved; all releases after this point consist only of bug fixes.
Acceptance into system testing requires that the application pass all critical test threads, ensuring that the general aspects of the application are robust enough to support the system testing process. The application must be fully functional.
Acceptance fails when a bug of sufficient severity prevents running test cases against the application.
C.5. System Test
General Test Objectives
System testing is a formal testing event conducted to validate that the system satisfies the initial requirements and will function well in the targeted environment. System testing is the QA team’s final test of the application’s performance.
System testing is concerned with the technical operation of the application, including performance and fit. It demonstrates how the application meets the original service objectives. This type of testing encompasses many separate testing events, including performance testing, volume and stress testing, security and controls testing, recovery testing, and documentation and procedures testing.
Performance Testing
Performance testing should be conducted to evaluate the system’s compliance with the specified performance requirements, e.g. by testing specific business scenarios with set loads and mixes of transactions.
Performance testing should ensure that the system can handle the demands of the business requirements before going live.
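As a rough sketch of one such check, the snippet below fires a fixed number of concurrent requests at a hypothetical endpoint and reports median and worst-case latency against an assumed threshold. The URL, load level, and limit are all placeholder assumptions, not requirements taken from this plan.

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

URL = "http://test-env.example.com/health"   # hypothetical endpoint
REQUESTS, CONCURRENCY = 50, 10               # assumed load mix
MAX_MEDIAN_SECONDS = 0.5                     # assumed requirement

def timed_request(_):
    # Time one full request/response round trip.
    start = time.perf_counter()
    with urlopen(URL, timeout=10) as resp:
        resp.read()
    return time.perf_counter() - start

with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    latencies = list(pool.map(timed_request, range(REQUESTS)))

median = statistics.median(latencies)
print(f"median {median:.3f}s, worst {max(latencies):.3f}s")
assert median <= MAX_MEDIAN_SECONDS, "performance requirement not met"
```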