#1. Requirement Analysis/Gathering
The performance team interacts with the client to identify and gather requirements, both technical and business. This includes getting information on the application's architecture, the technologies and database used, intended users, functionality, application usage, test requirements, hardware & software requirements, etc.
#2. POC/Tool selection
Once the key functionalities are identified, a POC (proof of concept – a demonstration of the real activity, but in a limited sense) is done with the available tools. The choice of performance test tool depends on the cost of the tool, the protocol the application uses, the technologies used to build the application, the number of users to be simulated for the test, etc. During the POC, scripts are created for the identified key functionality and executed with 10-15 virtual users.
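For illustration only, the sketch below approximates what a POC-scale run might look like outside any commercial tool: a handful of virtual users exercising one key transaction and reporting basic timings. The URL, transaction and user count are assumptions; a real POC would use the shortlisted tool (LoadRunner, JMeter, etc.).

```python
# Minimal POC-scale sketch (assumptions: a hypothetical search transaction at
# https://app.example.com; a real POC would use the selected performance test tool).
import threading
import time
import requests

BASE_URL = "https://app.example.com"   # placeholder application under test
VUSERS = 15                            # POC-scale load, not the full workload
results = []                           # (elapsed_seconds, status_code) per request

def virtual_user():
    """One virtual user executing the key transaction a few times."""
    for _ in range(5):
        start = time.time()
        resp = requests.get(f"{BASE_URL}/search", params={"q": "laptop"}, timeout=30)
        results.append((time.time() - start, resp.status_code))
        time.sleep(1)  # think time between iterations

threads = [threading.Thread(target=virtual_user) for _ in range(VUSERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

times = [r[0] for r in results]
print(f"Requests: {len(results)}, avg response time: {sum(times) / len(times):.2f}s")
```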
#3. Performance Test Plan & Design
Based on the information collected in the preceding stages, test planning and design are carried out. Test planning covers how the performance test will take place: the test environment for the application, the workload, the hardware, etc.
Test design is mainly about the type of tests to be conducted, the metrics to be measured, metadata, scripts, the number of users, and the execution plan.
During this activity, a Performance Test Plan is created. It serves as an agreement before moving ahead and as a road map for the entire activity. Once created, this document is shared with the client to establish transparency on the type of application, test objectives, prerequisites, deliverables, entry and exit criteria, acceptance criteria, etc.
Briefly, a performance test plan includes:
a) Introduction (Objective and Scope)
b) Application Overview
c) Performance Objectives & Goals
d) Test Approach (User Distribution, Test Data Requirements, Workload Criteria, Entry & Exit Criteria, Deliverables, etc.)
e) In-Scope and Out-of-Scope
f) Test Environment (Configuration, Tool, Hardware, Server Monitoring, Database, Test Configuration, etc.)
g) Reporting & Communication
h) Test Metrics
i) Roles & Responsibilities
j) Risks & Mitigation
k) Configuration Management
#4. Performance Test Development
- Use cases are created for the functionality identified in the test plan as the scope of PT.
- These use cases are shared with the client for their approval. This is to make sure the script will be recorded with correct steps.
- Once approved, script development starts by recording the steps in the use cases with the performance test tool selected during the POC (Proof of Concept). The scripts are then enhanced through Correlation (handling dynamic values), Parameterization (value substitution) and custom functions as the situation demands (a tool-agnostic sketch of these two techniques follows this list). More on these techniques in our video tutorials.
- The scripts are then validated with different users.
- In parallel with script creation, the performance team also works on setting up the test environment (software and hardware).
- The performance team will also take care of metadata (back-end data) through scripts if this activity is not taken up by the client.
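As a hedged, tool-agnostic illustration of correlation and parameterization: the sketch below logs in with credentials drawn from a data file (parameterization) and extracts a dynamic token from the response for reuse in a later request (correlation). The endpoints, field names and the users.csv file are assumptions made for the example, not part of any particular tool's API.

```python
# Sketch of correlation and parameterization outside any specific tool
# (assumptions: a hypothetical app exposing /login and /order, a users.csv data file).
import csv
import re
import requests

BASE_URL = "https://app.example.com"  # placeholder application under test

# Parameterization: each iteration substitutes a different credential from a data file.
with open("users.csv", newline="") as f:
    users = list(csv.DictReader(f))    # columns assumed: username,password

for user in users:
    session = requests.Session()
    login = session.post(f"{BASE_URL}/login",
                         data={"user": user["username"], "pass": user["password"]})

    # Correlation: the server returns a dynamic token that later requests must echo back.
    match = re.search(r'name="csrf_token" value="([^"]+)"', login.text)
    token = match.group(1) if match else ""

    order = session.post(f"{BASE_URL}/order",
                         data={"item": "12345", "csrf_token": token})
    print(user["username"], order.status_code)
```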
#5. Performance Test Modeling
A performance load model is created for the test execution. The main aim of this step is to validate whether the performance metrics provided by the client are achieved during the test. There are different approaches to creating a load model; "Little's Law" is used in most cases.
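As a rough illustration of a Little's Law based load model: the number of concurrent virtual users N needed to sustain a target throughput X is N = X × (R + Z), where R is the response time and Z is the think (pacing) time. The figures below are illustrative, not taken from any real requirement.

```python
# Little's Law sketch: concurrent users = throughput x (response time + think time).
# All target numbers here are illustrative assumptions.
target_tps = 10.0      # X: required throughput, transactions per second
response_time = 2.0    # R: expected response time per transaction, seconds
think_time = 8.0       # Z: think/pacing time between transactions, seconds

concurrent_users = target_tps * (response_time + think_time)
print(f"Virtual users needed in the load model: {concurrent_users:.0f}")  # -> 100
```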
#6. Test Execution
The scenario is designed according to the load model in the Controller or Performance Center, but the initial tests are not executed with the maximum number of users in the load model. Test execution is done incrementally. For example, if the maximum number of users is 100, the scenario is first run with 10, 25 and 50 users, eventually moving on to 100 users.
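A minimal sketch of such an incremental execution plan, with step sizes and durations chosen purely for illustration:

```python
# Sketch of the incremental execution described above (illustrative numbers only):
# each scenario is a step toward the 100-user load-model target, run for a fixed duration.
ramp_plan = [
    {"scenario": "smoke",     "vusers": 10,  "duration_min": 15},
    {"scenario": "step-1",    "vusers": 25,  "duration_min": 30},
    {"scenario": "step-2",    "vusers": 50,  "duration_min": 30},
    {"scenario": "full-load", "vusers": 100, "duration_min": 60},
]

for step in ramp_plan:
    print(f"Run {step['scenario']}: {step['vusers']} users for {step['duration_min']} min, "
          "then review results before the next step")
```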
#7. Test Results Analysis
Test results are the most important deliverable for the performance tester. This is where we can prove the ROI (Return on Investment) and productivity that a performance testing effort can provide. Some best practices that help the result analysis process:
a) Give a unique and meaningful name to every test result – this helps in understanding the purpose of the test
b) Include the following information in the test result summary:
- Reasons for any failures
- Change in the performance of the application compared to the previous test run
- Changes made in the test with respect to the application build or test environment
- It is good practice to prepare a result summary after each test run so that the analysis does not have to be recompiled every time the test results are referred to.
- PT generally requires many test runs to reach the correct conclusion.
- It is good to have the following points in the result summary:
- Purpose of test
- Number of virtual users
- Scenario summary
- Duration of test
- Throughput
- Graphs
- Graphs comparison
- Response Time
- Errors occurred
- Recommendations
In the final report, all the test summaries are consolidated.
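To make the summary items above concrete, the sketch below computes throughput, average and 90th-percentile response times, and the error count from raw samples. The sample data and the (timestamp, elapsed, passed) format are assumptions for the example; in practice these figures come from the test tool's analysis module.

```python
# Sketch of a per-run result summary built from raw samples
# (assumed format: (timestamp_seconds, elapsed_seconds, passed) per transaction).
import statistics

samples = [(0.0, 1.2, True), (0.5, 1.9, True), (1.0, 3.4, False), (1.5, 1.1, True)]

elapsed = [s[1] for s in samples]
duration = samples[-1][0] - samples[0][0] or 1.0      # test duration in seconds
throughput = len(samples) / duration                  # transactions per second
errors = sum(1 for s in samples if not s[2])

print(f"Samples            : {len(samples)}")
print(f"Throughput         : {throughput:.2f} tps")
print(f"Avg response time  : {statistics.mean(elapsed):.2f}s")
print(f"90th percentile    : {statistics.quantiles(elapsed, n=10)[8]:.2f}s")
print(f"Errors             : {errors}")
```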
#8. Report
Test results should be simplified so that the conclusion is clear and does not need any derivation. The development team needs more information on the analysis, comparison of results, and details of how the results were obtained. A test report is considered good if it is brief, descriptive and to the point.
The following guidelines will smooth this step out:
- Use appropriate headings and a summary
- The report should be presentable so that it can be used in management meetings.
- Provide supporting data for the results.
- Give meaningful names to the table headers.
- Share the status report periodically, including with the client.
- Report issues with as much information and evidence as possible in order to avoid unnecessary correspondence.
A performance test report typically includes:
- Execution Summary
- System Under Test
- Testing Strategy
- Summary of Tests
- Results Summary
- Problems Identified
- Recommendations