This material is from "SSG/Appendicies/MSA_Test_Methodology.doc", so it is geared toward testing Internet Data Centers. This document is an implementation-oriented extension of the more abstract MSF documents.


Stage 1: Create Master Test Plan

The first step is to define and document test goals in a Master Test Plan (MTP). The MTP also documents the following items:

·         Assumptions made for testing

·         Scope of testing

·         Priorities to test

·         Level of testing to perform

·         Roles and responsibilities

·         Resources needed

·         Schedule estimate

·         Dependencies

·         Expectations

·         Risks and mitigation plan

·         Change control plan

The MTP is a living document throughout the test life cycle. The content for this document is generated from the functional specification document and the high-level release schedule.

During this stage, a Build and Triage plan is also generated. The Build and Triage plan documents the build process, the triage process, the people involved in these processes, and their roles and responsibilities.

For more information, refer to "Creating the Build and Triage Plan" later in this document.

Stage 2: Design Detailed Test Plans

For large or very complex projects, detailed test plans (DTPs) are generated from the MTP for the various areas that have been identified for testing. For smaller projects, the DTPs may be included within the MTP.

The DTPs, like the MTP, are based on the functional specification and other high-level design documents. Each scenario in a DTP is assigned a priority based on the probability that it will occur and its impact on business. DTPs are usually divided according to the organization of the test group, according to the order of availability of various components in the system, or according to the area of functionality.

For the MSA architecture testing, the following broad areas were identified:

·         Networking

·         Management

·         Availability

·         Scalability

·         Security

·         Performance

Stage 3: Design Detailed Test Cases

Each of the scenarios listed in the DTPs is translated into one or more detailed test cases to create the Detailed Test Cases (DTC) documents. The test cases list the detailed steps, data requirements to perform the test, expected results, and methods to record the test results.
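
As an illustration only, the structure of a detailed test case can be captured in a simple record. The field names below are hypothetical and are not taken from the MSA guides; they simply show the kind of information each test case should carry.

from dataclasses import dataclass, field
from typing import List

@dataclass
class TestCase:
    """Hypothetical sketch of a detailed test case record (field names assumed)."""
    case_id: str                                           # unique identifier, e.g. "NET-001"
    title: str                                             # short description of the scenario
    priority: str                                          # "High", "Medium", or "Low"
    steps: List[str] = field(default_factory=list)         # detailed, repeatable steps
    data_requirements: List[str] = field(default_factory=list)  # data needed to perform the test
    expected_result: str = ""                              # as documented in the functional specification
    actual_result: str = ""                                # recorded during each test pass
    status: str = "Not Run"                                # "Passed", "Failed", or "Blocked" after execution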

For more information about test cases, refer to "Creating the Plans and Test Cases" later in this document.

Stage 4: Build

After the design is complete and the application or infrastructure is ready for testing, the systems engineer begins the build process. The systems engineer creates a build based on the architecture design and then hands it over to the test team. The build process involves deploying the design by running through the implementation documents in a step-by-step fashion. Often, a test engineer works with the systems engineer to document any issues that may appear during deployment. This is a good opportunity to discover test cases that had not been included in the initial test planning.

For the MSA architecture, testing can be broadly classified into two types:

·         Implementation testing

Implementation testing is accomplished while deploying the environment by using the prescriptive guides. Any errors, missing steps, and exceptions in the guides that are found during deployment in the lab are captured and recorded. At the end of implementation testing, a build is considered ready for design testing.

·         Design testing

Design testing begins only after the build is ready and the environment is deployed. It is designed to prove the capabilities of the overall systems architecture. Performance and management testing are included; these involve observing the end-to-end system responses to a number of extreme usage patterns. This set of tests is critical to understanding the characteristics of the actual system in use so that the most efficient and effective management processes can be designed to support that specific implementation.

Stage 5: Design Test Execution

After the build process is complete, a Build Verification Test (BVT) is performed before handing over the build for testing. After a successful BVT, the test team conducts a Build Acceptance Test (BAT), or smoke test, to verify that the build is stable enough for further testing. Frequently, these two tests are combined into a single effort.

If the BAT tests are successful, the test team accepts the build and begins design testing.
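
As a simplified illustration of what a combined BVT/BAT smoke check might contain, the sketch below probes a couple of services in the build and accepts the build only if every check passes. The host names, port, and URL are placeholders, not part of the MSA guidance, and would be replaced with the systems actually deployed.

# Minimal smoke-test sketch (hypothetical hosts and URLs; adapt to the actual build).
import socket
import urllib.request

CHECKS = [
    ("web front end answers HTTP",
     lambda: urllib.request.urlopen("http://web.example.test/", timeout=10).status == 200),
    ("SQL port reachable",
     lambda: socket.create_connection(("sql.example.test", 1433), timeout=5) is not None),
]

def run_smoke_test():
    failures = []
    for name, check in CHECKS:
        try:
            ok = check()
        except Exception as exc:
            ok = False
            print(f"{name}: ERROR ({exc})")
        if not ok:
            failures.append(name)
    return not failures  # the build is accepted only if every check passes

if __name__ == "__main__":
    print("BAT passed" if run_smoke_test() else "BAT failed: build rejected")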

Design testing begins with the test team performing the various tests as documented in the DTCs. These tests are run on each major build, and the results are recorded and verified. Any deviations from the expected results are documented and are tracked with a bug-tracking tool.

Stage 6: Triage

Triage meetings are regularly held to review the bugs reported by the test team. During this review, each bug is given a priority and assigned to a development team member. Possible resolutions for each bug are then discussed.

Stage 7: Fix Design Bugs

The development team fixes the design bugs that are reported by the test team. After the development team fixes these bugs, each bug is assigned back to the test team member who originally opened it. The test team then retests the implementation and closes each bug once the fix is verified. After all bugs are fixed, a new build is created for retesting.

For more information about fixing bugs, refer to "Creating the Build and Triage Plan" later in this document.


 

Stage 8: Generate Test Report

When the test team completes the test cases, a test report is prepared to list the open bugs and to summarize system performance, system management findings, and suggested practices. The bugs are listed according to priority. This test report also highlights the impact on business if bugs are not fixed, and the test team makes a recommendation on whether to proceed with production.

For more information about test reports, refer to "Reporting and Release Processes" later in this document.

Stage 9: Release Review

During Release Review, the development, test, and program management teams meet to discuss whether to release the implementation. If this review is successful, the product is released to production. If not, the outstanding issues are reviewed and fixed.

For more information about Release Review, refer to "Reporting and Release Processes" later in this document.

 

Roles and Responsibilities

The typical roles that may be required in an MSA architecture test team, along with their responsibilities, are described in Table 1.

Test Lead

·          Define test goals and generate MTP

·          Generate Build and Triage plan

·          Review DTP

·          Review DTC

·          Review bugs entered in the bug-tracking tool and monitor their status

·          Conduct triage meeting

·          Generate weekly status reports

·          Escalate issues that are blocking or delaying testing

·          Review impact analysis and generate change management document

·          Track test schedule

·          Ensure that the appropriate level of testing is achieved for a particular release

·          Lead the actual BAT execution

·          Execute test cases

·          Generate Test Report

Test Engineer

·          Generate DTP

·          Review DTC

·          Document problems found during deployment

·          Conduct BAT

·          Execute test cases

·          Report bugs in bug-tracking tool

·          Retest bugs that are fixed

Systems Engineer

·          Prepare builds and rebuilds

·          Conduct BVT

·          Fix or escalate any of the hardware or software availability issues

Table 1. Roles and Responsibilities

Note: In most companies, the systems engineer is a member of the development or product support team rather than part of the test team.

Managing the Test Process

In this section we will examine some of the elements that are required to successfully manage the test process.

Project Scheduling

The overall testing schedule is created based on the tests planned and effort required.

Listing the Dependencies

Some of the factors that may affect the testing effort include:

·         Functional specifications

·         Architecture diagrams

·         Design documents

·         Build dates

·         Material and personnel resources

·         Changes in functional scope

It is essential that all the dependencies are identified and listed to ensure proper and timely testing of the application and the architecture.

Prioritizing

The test plans and test cases should be prioritized as high, medium, or low importance. If test execution time is reduced, this helps in deciding which test cases must be executed and which can safely be bypassed.
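
For example, when the execution window shrinks, a simple filter over the prioritized test cases can show which ones must still run. The priority labels below mirror the high/medium/low scheme described above; the data layout and hour estimates are illustrative assumptions only.

# Illustrative only: select test cases to run when testing time is reduced.
def select_cases(test_cases, available_hours):
    """test_cases: list of (case_id, priority, estimated_hours); priority is "High", "Medium", or "Low"."""
    order = {"High": 0, "Medium": 1, "Low": 2}
    selected, hours_used = [], 0.0
    for case_id, priority, est in sorted(test_cases, key=lambda c: order[c[1]]):
        if priority == "High" or hours_used + est <= available_hours:
            selected.append(case_id)      # high-priority cases are never bypassed
            hours_used += est
    return selected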

Monitoring the Bug Trend

It is important to monitor the bug trend once test execution begins. Ideally, the number of bugs should decrease with every build. When the number of open bugs is low and all high-severity bugs have been closed, it is time to determine whether the application or infrastructure can be released to market or production.
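
One simple way to watch the trend is to count open bugs per build and evaluate a release criterion automatically. The record layout and the numeric threshold below are assumptions made for the sketch, not values defined by this methodology.

# Hypothetical sketch: track the open-bug trend across builds and test a release criterion.
from collections import Counter

def open_bug_trend(bugs):
    """bugs: list of dicts with 'build', 'severity' (1-4), and 'status' keys (assumed layout)."""
    open_bugs = [b for b in bugs if b["status"] in ("Open", "Active")]
    trend = Counter(b["build"] for b in open_bugs)          # open bugs per build
    high_sev_open = [b for b in open_bugs if b["severity"] <= 2]
    ready = len(open_bugs) < 10 and not high_sev_open       # example threshold only
    return trend, ready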


 

Assigning Severity to Bugs

The following table lists guidelines that can be used to assign a severity to each bug.

Severity 1

Most common types:

·          Bug blocks the build or further testing of a feature

·          Bug affects further testing of a different feature that is being tested in parallel

Conditions required:

·          System does not work. User cannot even begin to use significant parts of the system.

Severity 2

Most common types:

·          Steps defined in the documentation are not viable

·          Results or behavior of a function or process contradict expected results (as documented in functional specification)

·          Results or behavior of a function or process contradict logically expected results

·          Documented functionality is missing (in this case test is blocked)

·          Documentation missing or inadequate

If the following conditions are met, the severity is 2:

·          User has no simple workaround to mend the situation.

·          User cannot easily figure out a workaround.

·          Primary business requirements cannot be met by the system.

If these conditions are not met, the severity is 3.

Severity 3

Most common types:

·          Function or process is broken

·          Results or behavior of a function or process contradict expected results (as documented in functional specification)

·          Results or behavior of a function or process contradict logically expected results

·          Minor documentation errors and inaccuracies

·          Text misspellings

If the following conditions are met, the severity is 3:

·          User has a simple workaround to mend the situation.

·          User can easily figure out a workaround.

·          Bug does not cause a bad user experience.

·          Primary business requirements are still functional.

·          Bug does not block a significant number of other test cases.

If these conditions are not met, the severity is 2.

Severity 4

Most common types:

·          Suggestions

·          Future enhancements

Conditions required: Clearly not a product bug for this version.

Table 2. Severity Guidelines
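
The guidelines in Table 2 can be read as a small decision procedure. The sketch below encodes the severity 2 versus severity 3 distinction (workaround, business impact, blocked tests) as boolean flags; the flag names are illustrative assumptions, not part of the methodology.

# Illustrative decision helper based on Table 2 (flag names are assumptions).
def assign_severity(blocks_testing, is_suggestion, has_simple_workaround,
                    business_requirements_met, blocks_many_other_cases):
    if blocks_testing:
        return 1   # build or further testing is blocked
    if is_suggestion:
        return 4   # suggestion or future enhancement, not a bug in this version
    if has_simple_workaround and business_requirements_met and not blocks_many_other_cases:
        return 3   # easy workaround exists and primary business requirements still work
    return 2       # no simple workaround, or primary business requirements cannot be met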

Evaluating Project-Related Risks

A risk is the probability that an event which could jeopardize the system will occur. To evaluate risks, the test team can prepare a matrix that identifies risks and assigns one of the following risk exposure factors to each one:

·         Probability of loss. This category defines the probability that the risk will occur. Three levels of probability are usually sufficient: for example, "Not Likely" (less than 50 percent), "Possible" (50 percent), and "Very Likely" (greater than 50 percent).

·         Size of loss. This category describes the impact on the project timeline when the event associated with a risk occurs. Again, three levels are usually adequate: "Negligible," "Jeopardizes Finish Date," and "Significant Effect on Finish Date."

·         Contingency plan. A plan for handling the circumstances of the risk. The plan could include building extra days into the schedule to meet these circumstances, adding staff and other resources, or changing the delivery scope.
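
The first two exposure factors can be combined into a simple score to rank risks before contingency plans are chosen. The sketch below uses arbitrary numeric weights as an example; they are not values prescribed by this methodology.

# Illustrative risk-exposure calculation (weights are arbitrary examples).
PROBABILITY = {"Not Likely": 1, "Possible": 2, "Very Likely": 3}
IMPACT = {"Negligible": 1, "Jeopardizes Finish Date": 2, "Significant Effect on Finish Date": 3}

def risk_exposure(probability, impact):
    """Return a 1-9 score; higher scores should receive contingency plans first."""
    return PROBABILITY[probability] * IMPACT[impact]

# Example: an emerging-technology risk judged "Possible" with a schedule impact.
score = risk_exposure("Possible", "Jeopardizes Finish Date")   # 4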

Table 3 lists some of the common risks encountered in testing projects and possible contingency plans.

Risk: Development falls behind schedule.
Contingency plan: Determine your ability to begin initial testing in parallel with the last stages of development, and add test and development resources.

Risk: Testers are unfamiliar with the application.
Contingency plan: Factor in additional days to train the testers on the application.

Risk: Applications are based on emerging technologies.
Contingency plan: There could be unexpected delays due to the technology. The schedule should remain flexible.

Risk: Scope of new requirements increases.
Contingency plan: There could be an unexpected increase in scope due to the evolving nature of these requirements. The schedule should remain flexible to meet these risks.

Table 3. Common Risks and Contingency Plans

Tools

The following tools should be employed in managing the test process:

·         Bug-tracking tool. A good bug-tracking tool should be used to record information about bugs discovered during testing. It should provide the ability to assign each bug to a person and a workflow-based mechanism to resolve and close it.

·         Test case management tool. A test case management tool is used to document test cases, group the test cases for every pass, record results for every pass, and maintain a test case history.

·         Automated test tool. Automated test tools are used to record test sequences and replay them for regression or successive passes.
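
As a very small illustration of the record-and-replay idea behind automated test tools, the sketch below stores the responses from one pass and compares a later pass against them. The file name and the use of plain HTTP requests are assumptions for the sketch; real tools record far richer interaction sequences.

# Minimal record/replay sketch using only the standard library (file name and URLs are placeholders).
import json
import urllib.request

def record(urls, path="recorded_pass.json"):
    results = [{"url": u, "status": urllib.request.urlopen(u, timeout=10).status} for u in urls]
    with open(path, "w") as f:
        json.dump(results, f)

def replay(path="recorded_pass.json"):
    with open(path) as f:
        recorded = json.load(f)
    for entry in recorded:
        status = urllib.request.urlopen(entry["url"], timeout=10).status
        if status != entry["status"]:
            print(f"Regression: {entry['url']} returned {status}, expected {entry['status']}")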

Bug-Tracking Process

Any bug-tracking tool can be used to record bugs during the implementation phase. Specific permissions to access the tracking tool should be provided to each member of the testing team. During testing, all members of the team should be able to record bugs as they are discovered. To maintain good control over the status of bugs, only the test lead, test staff, or project managers should have rights to close them.

Bugs can be found in documentation, procedures, and other areas in addition to the system itself. All issues with the system or its supporting infrastructure should be captured in the bug-tracking system for resolution. The bug-tracking system becomes the team’s "to do" list and can be used to quickly gauge the status of the project.

A tester logs the bugs in the tracking system with a recognizable description and comprehensive detail. All bugs should contain the following information as a minimum:

·         Error messages (recorded exactly as they appear on the screen or in log files)

·         Version number of the build where the bug was found

·         Exact, repeatable steps to reproduce the bug

·         A summary of any unique configuration or environmental factors in place during the test

A suggested resolution may also be added if the tester knows a likely solution.
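
The minimum bug information listed above maps naturally onto a simple record. The field names below are hypothetical; in practice, use whatever fields your bug-tracking tool provides.

from dataclasses import dataclass
from typing import List

@dataclass
class BugReport:
    """Hypothetical sketch of the minimum information every bug should carry."""
    summary: str                       # recognizable description
    error_messages: List[str]          # exactly as shown on screen or in log files
    build_version: str                 # build in which the bug was found
    repro_steps: List[str]             # exact, repeatable steps to reproduce the bug
    environment: str                   # unique configuration or environmental factors during the test
    suggested_resolution: str = ""     # optional, if the tester knows a likely solution
    severity: int = 3                  # assigned per the Table 2 guidelines
    status: str = "Open"
    assigned_to: str = ""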

After fixing a bug, the developer changes its status from Open to Fixed. If the developer believes that the bug should not be fixed, the status is instead changed to Duplicate, By Design, Not Repro, or Won’t Fix. Bugs that are no longer listed as Open or Active must be given a final resolution by the test team. The following list provides examples of how the test team resolves a bug by using the tracking tool (a sketch of these status transitions follows the list):

·         Bugs with the Resolution field marked as Fixed are assigned back to the originator of the bug to perform regression testing on the new build.

·         Bugs with the Resolution field marked as Duplicate, By Design, Not Repro, or Won’t Fix (meaning that they do not require a fix) are closed by the test team if they agree with the resolution.

·         If the test team strongly disagrees with the resolution, it changes the Status field to Active and raises the bug in the triage meeting to discuss whether it should be reactivated. If a bug fails regression, it is also reactivated by changing the Status to Active. All active bugs are reviewed again in the triage meeting.
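
The status transitions described in this section can be summarized as a small state machine. The states mirror the values used above (Open, Active, Fixed, Duplicate, By Design, Not Repro, Won't Fix, Closed); the transition table is only a sketch of the process and is not tied to any particular tracking tool.

# Sketch of the bug workflow described above (not tied to any specific tracking tool).
ALLOWED_TRANSITIONS = {
    "Open":      {"Fixed", "Duplicate", "By Design", "Not Repro", "Won't Fix"},  # developer resolves
    "Fixed":     {"Closed", "Active"},   # tester closes after regression passes, or reactivates
    "Duplicate": {"Closed", "Active"},   # test team closes if it agrees, reactivates otherwise
    "By Design": {"Closed", "Active"},
    "Not Repro": {"Closed", "Active"},
    "Won't Fix": {"Closed", "Active"},
    "Active":    {"Fixed", "Duplicate", "By Design", "Not Repro", "Won't Fix"},  # back to the developer
    "Closed":    set(),
}

def change_status(current, new):
    if new not in ALLOWED_TRANSITIONS[current]:
        raise ValueError(f"Illegal transition: {current} -> {new}")
    return new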