Sunday, July 1, 2007

Standards to maintain in manual and automation testing

Automated Testing Detail Test Plan
Automated Testing DTP Overview
This Automated Testing Detail Test Plan (ADTP) will identify the specific tests that are to be performed to ensure the quality of the delivered product. System/Integration Test ensures that the product functions as designed and that all parts work together. This ADTP will cover information for Automated testing during the System/Integration Phase of the project and will map to the specification or requirements documentation for the project. This mapping is done in conjunction with the Traceability Matrix document, which should be completed along with the ADTP and is referenced in this document.
This ADTP refers to the specific portion of the product known as PRODUCT NAME. It provides clear entry and exit criteria and identifies the roles and responsibilities of the Automated Test Team so that they can execute the test.
The objectives of this ADTP are:
• Describe the test to be executed.
• Identify and assign a unique number for each specific test.
• Describe the scope of the testing.
• List what is and is not to be tested.
• Describe the test approach detailing methods, techniques, and tools.
• Outline the Test Design including:
• Functionality to be tested.
• Test Case Definition.
• Test Data Requirements.
• Identify all specifications for preparation.
• Identify issues and risks.
• Identify actual test cases.
• Document the design point.
Test Identification
This ADTP is intended to provide information for System/Integration Testing for the PRODUCT NAME module of the PROJECT NAME. The test effort may be referred to by its PROJECT REQUEST (PR) number and its project title for tracking and monitoring of the testing progress.

Test Purpose and Objectives
Automated testing during the System/Integration Phase as referenced in this document is intended to ensure that the product functions as designed, directly from customer requirements. The testing goal is to identify the quality of the structure, content, accuracy and consistency, selected response times and latency, and performance of the application as defined in the project documentation.

Assumptions, Constraints, and Exclusions
Factors that may affect the automated testing effort and increase the risk associated with the success of the test include:
• Completion of development of front-end processes
• Completion of design and construction of new processes
• Completion of modifications to the local database
• Movement or implementation of the solution to the appropriate testing or production environment
• Stability of the testing or production environment
• Load Discipline
• Maintaining recording standards and automated processes for the project
• Completion of manual testing through all applicable paths to ensure that reusable automated scripts are valid

Entry Criteria
The ADTP is complete, excluding actual test results. The ADTP has been signed off by the appropriate sponsor representatives, indicating approval of the plan for testing. The Problem Tracking and Reporting tool is ready for use. The Change Management and Configuration Management rules are in place.
The environment for testing, including databases, application programs, and connectivity has been defined, constructed, and verified.

Exit Criteria

In establishing the exit/acceptance criteria for Automated Testing during the System/Integration Phase of the test, the Project Completion Criteria defined in the Project Definition Document (PDD) should provide a starting point. All automated test cases have been executed as documented. The percentage of successfully executed test cases meets the defined criteria. Recommended criteria: no Critical or High severity problem logs remain open, all Medium problem logs have agreed-upon action plans, and the application executes successfully to validate the accuracy of data, interfaces, and connectivity.
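As an illustration of the pass-rate criterion only, the check could be scripted as in the sketch below. The 95 percent threshold and the result format are assumed examples, not values taken from the PDD.

    def exit_criteria_met(executed_cases, pass_threshold_pct=95.0):
        # Check whether the executed test cases meet an assumed pass-rate threshold.
        if not executed_cases:
            return False  # nothing executed yet, so the exit criteria cannot be met
        passed = sum(1 for case in executed_cases if case["status"] == "pass")
        pass_rate = 100.0 * passed / len(executed_cases)
        return pass_rate >= pass_threshold_pct

    results = [
        {"id": "TC-001", "status": "pass"},
        {"id": "TC-002", "status": "pass"},
        {"id": "TC-003", "status": "fail"},
    ]
    print(exit_criteria_met(results))  # 66.7% pass rate is below the assumed 95%, so False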
Pass/Fail Criteria
The results for each test must be compared to the pre-defined expected test results, as documented in the ADTP (and DTP where applicable). The actual results are logged in the Test Case detail within the Detail Test Plan only if they differ from the expected results. If the actual results match the expected results, the Test Case can be marked as passed without logging the duplicate results.
A test case passes if it produces the expected results as documented in the ADTP or Detail Test Plan (manual test plan). A test case fails if the actual results produced by its execution do not match the expected results. The source of failure may be the application under test, the test case, the expected results, or the data in the test environment. Test case failures must be logged regardless of the source of the failure. Any bugs or problems will be logged in the DEFECT TRACKING TOOL.
The responsible application resource corrects the problem and tests the repair. Once this is complete, the tester who generated the problem log is notified, and the item is re-tested. If the retest is successful, the status is updated and the problem log is closed.
If the retest is unsuccessful, or if another problem has been identified, the problem log status is updated and the problem description is updated with the new findings. It is then returned to the responsible application personnel for correction and retesting.
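A minimal sketch of this pass/fail rule is shown below. The field names and the in-memory defect list are hypothetical stand-ins for the project's DEFECT TRACKING TOOL.

    defect_log = []  # stand-in for the DEFECT TRACKING TOOL

    def evaluate_test_case(case_id, expected, actual):
        # A test case passes only if the actual results match the documented expected results.
        if actual == expected:
            return {"id": case_id, "status": "pass"}
        # Actual results are logged only when they differ from the expected results.
        defect_log.append({"id": case_id, "expected": expected, "actual": actual})
        return {"id": case_id, "status": "fail"}

    print(evaluate_test_case("TC-010", "Order accepted", "Order rejected"))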
Severity Codes are used to prioritize work in the test phase. They are assigned by the test group and are not modifiable by any other group. The following standard Severity Codes are used to identify defects (they are also sketched as an enumeration after the table):
Table 1: Severity Codes
1. Critical: Automated tests cannot proceed further within the applicable test case (no workaround).
2. High: The test case or procedure can be completed, but produces incorrect output when valid information is input.
3. Medium: The test case or procedure can be completed and produces correct output when valid information is input, but produces incorrect output when invalid information is input (for example, the specifications do not allow special characters, but the system lets a user continue after entering one; this is a medium severity).
4. Low: All test cases and procedures pass as written, but minor revisions or cosmetic changes may be needed. These defects do not impact the functional execution of the system.
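For reference, here is a minimal sketch of these codes as a Python enumeration, assuming a Python-based reporting script (which is not part of the project tooling described above):

    from enum import IntEnum

    class Severity(IntEnum):
        CRITICAL = 1  # automated tests cannot proceed within the test case (no workaround)
        HIGH = 2      # test completes but produces incorrect output for valid input
        MEDIUM = 3    # correct output for valid input, incorrect output for invalid input
        LOW = 4       # passes as written; only minor or cosmetic revisions needed

    print(int(Severity.HIGH))  # 2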
The use of the standard Severity Codes produces four major benefits:
• Standard Severity Codes are objective and can be easily and accurately assigned by those executing the test. Time spent in discussion about the appropriate priority of a problem is minimized.
• Standard Severity Code definitions allow an independent assessment of the risk to the on-schedule delivery of a product that functions as documented in the requirements and design documents.
• Use of the standard Severity Codes works to ensure consistency in the requirements, design, and test documentation with an appropriate level of detail throughout.
• Use of the standard Severity Codes promotes effective escalation procedures.

Test Scope
The scope of testing identifies the items which will be tested and the items which will not be tested within the System/Integration Phase of testing.
Items to be tested by Automation (PRODUCT NAME ...)
Items not to be tested by Automation (PRODUCT NAME ...)

Test Approach
Description of Approach
The mission of Automated Testing is to identify recordable test cases through all appropriate paths of a website, create repeatable scripts, interpret test results, and report the results to project management. For the Generic Project, the automation test team will focus on positive testing and will complement the manual testing performed on the system. Automated test results will be generated, formatted into reports, and provided on a consistent basis to Generic project management.
System testing is the process of testing an integrated hardware and software system to verify that the system meets its specified requirements. It verifies proper execution of the entire set of application components including interfaces to other applications. Project teams of developers and test analysts are responsible for ensuring that this level of testing is performed.
Integration testing is conducted to determine whether or not all components of the system are working together properly. This testing focuses on how well all parts of the web site hold together, whether components inside and outside the website are working, and whether all parts of the website are connected. Project teams of developers and test analysts are responsible for ensuring that this level of testing is performed.
For this project, the System and Integration ADTP and Detail Test Plan complement each other.
Since the goal of the System and Integration phase testing is to identify the quality of the structure, content, accuracy and consistency, response time and latency, and performance of the application, test cases are included which focus on determining how well this quality goal is accomplished.
Content testing focuses on whether the content of the pages matches what is supposed to be there, whether key phrases appear consistently on pages that change, and whether the pages maintain quality content from version to version.
Accuracy and consistency testing focuses on whether today’s copies of the pages download the same as yesterday’s, and whether the data presented to the user is accurate enough.
Response time and latency testing focuses on whether the web site server responds to a browser request within certain performance parameters, whether response time after a SUBMIT is acceptable, and whether parts of the site are so slow that the user discontinues working. Although LoadRunner provides the full measure of this test, various ad hoc time measurements will be taken within certain WinRunner scripts as needed.
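The ad hoc timing idea can be illustrated, outside of WinRunner, with a small Python sketch like the one below. The URL and the two-second response budget are placeholder assumptions; the real measurements would live inside the WinRunner scripts themselves.

    import time
    import urllib.request

    URL = "http://example.com/"    # placeholder address, not the project site
    RESPONSE_BUDGET_SECONDS = 2.0  # assumed acceptable response time

    start = time.perf_counter()
    with urllib.request.urlopen(URL, timeout=10) as response:
        response.read()
    elapsed = time.perf_counter() - start

    print("Response time: %.2f seconds (budget: %.2f)" % (elapsed, RESPONSE_BUDGET_SECONDS))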
Performance testing (LoadRunner) focuses on whether performance varies by time of day or by load and usage, and whether performance is adequate for the application.
Completion of automated test cases is denoted in the test cases with indication of pass/fail and follow-up action.
Test Definition
This section addresses the development of the components required for the specific test. Included are identification of the functionality to be tested by automation, the associated automated test cases and scenarios. The development of the test components parallels, with a slight lag, the development of the associated product components.

Test Functionality Definition (Requirements Testing)
The functionality to be tested by automation is listed in the Traceability Matrix, attached as an appendix. For each function to undergo testing by automation, the Test Case is identified. Automated Test Cases are given unique identifiers to enable cross-referencing between related test documentation and to facilitate tracking and monitoring of the test progress.
As much information as is available is entered into the Traceability Matrix in order to complete the scope of automation during the System/Integration Phase of the test.
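Purely as an illustration of the cross-referencing described here, a traceability entry might be shaped like the sketch below; the requirement and test case IDs are made up.

    traceability_matrix = {
        "REQ-001": ["ATC-001", "ATC-002"],
        "REQ-002": ["ATC-003"],
        "REQ-003": [],  # not yet covered by automation
    }

    # Requirements with no automated coverage are easy to flag for follow-up.
    uncovered = [req for req, cases in traceability_matrix.items() if not cases]
    print(uncovered)  # ['REQ-003']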

Test Case Definition (Test Design)
Each Automated Test Case is designed to validate the associated functionality of a stated requirement. Automated Test Cases include unambiguous input and output specifications. This information is documented within the Automated Test Cases in Appendix 8.5 of this ADTP.

Test Data Requirements
The automated test data required for the test is described below. The test data will be used to populate the databases and/or files used by the application/system during the System/Integration Phase of the test. In most cases, the automated test data will be built by the OTS Database Analyst or the OTS Automation Test Analyst.

Automation Recording Standards
Initial Automation Testing Rules for the Generic Project:
1. Ability to move through all paths within the applicable system
2. Ability to identify and record the GUI Maps for all associated test items in each path
3. Specific times for loading into automation test environment
4. Code frozen between loads into automation test environment
5. Minimum acceptable system stability
WinRunner Menu Settings (see the summary sketch after this list)
1. Default recording mode is CONTEXT SENSITIVE
2. Record owner-drawn buttons as OBJECT
3. Maximum length of list item to record is 253 characters
4. Delay for Window Synchronization is 1000 milliseconds (unless LoadRunner is operating in the same environment, in which case the delay must be increased appropriately)
5. Timeout for checkpoints and CS statements is 1000 milliseconds
6. Timeout for Text Recognition is 500 milliseconds
7. All scripts will stop and start on the main menu page
8. All recorded scripts will remain short, since short scripts are easier to debug. However, entire scripts, or portions of scripts, can be combined for long runs once the environment has greater stability.
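These are WinRunner's own menu options and are set through the tool itself; the snippet below merely records the agreed values in a reviewable form and is an assumed convention, not part of WinRunner.

    # Agreed recording settings, kept alongside the plan for review.
    WINRUNNER_RECORDING_SETTINGS = {
        "default_recording_mode": "CONTEXT SENSITIVE",
        "record_owner_drawn_buttons_as": "OBJECT",
        "max_list_item_length_chars": 253,
        "window_sync_delay_ms": 1000,  # increase if LoadRunner shares the environment
        "checkpoint_cs_timeout_ms": 1000,
        "text_recognition_timeout_ms": 500,
    }

    for setting, value in WINRUNNER_RECORDING_SETTINGS.items():
        print(setting, "=", value)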
