1. Identification of the Test Environment: In this phase the performance team identifies the physical test environment and the production environment, as well as the tools and resources available to conduct the performance tests. The physical environment includes hardware, software, and network configurations. A thorough understanding of the entire test environment at the outset enables the performance team to design and plan the testing more efficiently.
2. Identification of the Performance Acceptance Criteria: In this phase the cross-functional team defines response-time benchmarks, throughput baselines, and resource-utilization constraints. In general terms, response time is a user concern, throughput is a business concern, and resource utilization is a system concern. Additionally, the team identifies project success criteria that may not be captured by those goals and constraints; for example, using performance tests to evaluate which blend of configuration settings will produce the most desirable performance results.
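Acceptance criteria of this kind are easiest to enforce when recorded as data rather than prose. The sketch below (metric names and threshold values are illustrative, not from the original text) shows one way to encode response-time, throughput, and utilization constraints so a test run can be checked automatically:

```python
# Illustrative acceptance criteria; the names and numbers are assumptions,
# not values from the guidance above.
ACCEPTANCE_CRITERIA = {
    "response_time_p90_ms": 2000,   # user concern: 90th-percentile response time
    "throughput_rps": 100,          # business concern: requests per second
    "cpu_utilization_pct": 75,      # system concern: resource ceiling
}

def violations(measured: dict) -> list:
    """Return the names of criteria the measured values violate."""
    failed = []
    # Response time and CPU utilization are upper bounds; throughput is a lower bound.
    if measured["response_time_p90_ms"] > ACCEPTANCE_CRITERIA["response_time_p90_ms"]:
        failed.append("response_time_p90_ms")
    if measured["throughput_rps"] < ACCEPTANCE_CRITERIA["throughput_rps"]:
        failed.append("throughput_rps")
    if measured["cpu_utilization_pct"] > ACCEPTANCE_CRITERIA["cpu_utilization_pct"]:
        failed.append("cpu_utilization_pct")
    return failed

print(violations({"response_time_p90_ms": 1850,
                  "throughput_rps": 120,
                  "cpu_utilization_pct": 82}))
# Only the CPU ceiling is violated in this example run.
```

Keeping the criteria in one structure also makes it easy for the cross-functional team to review and adjust them between test cycles.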
3. Planning and Designing of Tests: Here the performance team identifies the key business scenarios to be tested, settles on the variability among representative users and how to simulate that variability, defines the test data, and establishes the metrics to be collected for result evaluation. The team then consolidates this information into one or more models of system usage to be implemented, executed, and analyzed.
4. Configure Test Environment: At this stage the team prepares the test environment, testing tools, and resources necessary to execute each strategy, as features and components become available for test.
5. Test Design: Develop the performance tests in accordance with the test design.
6. Test Execution: In this phase the performance tests are executed and monitored. Before executing the actual tests, it is advisable to validate the tests and the test data so that they yield accurate results.
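At its core, test execution means driving the system with concurrent virtual users while collecting timings. The following is a minimal sketch of that idea; the transaction here is a stub (a short sleep) standing in for a real operation, which in practice would be the HTTP request or service call named in the test design:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def transaction() -> float:
    """One simulated user operation; returns its response time in seconds."""
    start = time.perf_counter()
    time.sleep(0.01)              # stand-in for real work (e.g. an HTTP call)
    return time.perf_counter() - start

def run_load(virtual_users: int, iterations: int) -> list:
    """Spread `iterations` transactions across `virtual_users` worker threads."""
    with ThreadPoolExecutor(max_workers=virtual_users) as pool:
        return list(pool.map(lambda _: transaction(), range(iterations)))

samples = run_load(virtual_users=5, iterations=50)
print(f"collected {len(samples)} samples, "
      f"max response time {max(samples) * 1000:.1f} ms")
```

Dedicated load-testing tools add ramp-up schedules, think times, and resource monitoring on top of this basic loop, but the collected response-time samples feed the same analysis step described below.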
7. Analysis of Results, Report, and Retesting: After receiving the consolidated performance metrics from the test team, the results are shared with the cross-functional team. After reprioritizing the test objectives, the tests are re-executed until the desired SLAs are achieved. When all of the metric values are within accepted limits, none of the set thresholds have been violated, and all of the desired information has been collected, you have finished testing that particular scenario on that particular configuration.
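The analysis step usually reduces the raw samples to percentile statistics before comparing them against the SLA. A minimal sketch (the sample data and SLA threshold are illustrative) using the nearest-rank percentile method:

```python
def percentile(samples: list, pct: float) -> float:
    """Nearest-rank percentile of a list of response times."""
    ordered = sorted(samples)
    rank = max(1, round(pct / 100 * len(ordered)))
    return ordered[rank - 1]

# Illustrative response times (ms) collected from a test run.
response_times_ms = [120, 135, 150, 160, 180, 210, 240, 300, 420, 900]
p90 = percentile(response_times_ms, 90)
sla_p90_ms = 500                      # illustrative SLA threshold
print(f"p90 = {p90} ms, within SLA: {p90 <= sla_p90_ms}")
```

Percentiles are preferred over averages here because a single slow outlier (the 900 ms sample above) can mask or distort the typical user experience that the SLA is meant to protect.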
Disclaimer:
- The original work can be found under this link.
- All credit to J.D. Meier, Carlos Farre, Prashant Bansode, Scott Barber, and Dennis Rea, Microsoft Corporation®
Performance Testing
Performance testing is defined as the technical investigation done to determine or validate the speed, scalability, and/or stability characteristics of the product under test. Performance-related activities, such as testing and tuning, are concerned with achieving response times, throughput, and resource-utilization levels that meet the performance objectives for the application under test. Because performance testing is a general term that covers all of its various subsets, every value and benefit listed under other performance test types in this chapter can also be considered a potential benefit of performance testing in general.
Key Types of Performance Testing
The following are the most common types of performance testing for Web applications.
| Term | Purpose | Notes |
| --- | --- | --- |
| Performance test | To determine or validate speed, scalability, and/or stability. | |
| Load test | To verify application behavior under normal and peak load conditions. | |
| Stress test | To determine or validate an application's behavior when it is pushed beyond normal or peak load conditions. | |
| Capacity test | To determine how many users and/or transactions a given system will support and still meet performance goals. | |
Summary Matrix of Benefits by Key Performance Test Types
| Term | Benefits | Challenges and Areas Not Addressed |
| --- | --- | --- |
| Performance test | | |
| Load test | | |
| Stress test | | |
| Capacity test | | |
Additional Concepts / Terms
You will often see or hear the following terms when conducting performance testing. Some of these terms may be common in your organization, industry, or peer network, while others may not. These terms and concepts have been included because they are used frequently enough, and cause enough confusion, to make them worth knowing.
| Term | Notes |
| --- | --- |
| Component test | A component test is any performance test that targets an architectural component of the application. Commonly tested components include servers, databases, networks, firewalls, clients, and storage devices. |
| Investigation | Investigation is an activity based on collecting information related to the speed, scalability, and/or stability characteristics of the product under test that may have value in determining or improving product quality. Investigation is frequently employed to prove or disprove hypotheses regarding the root cause of one or more observed performance issues. |
| Smoke test | A smoke test is the initial run of a performance test to see if your application can perform its operations under a normal load. |
| Unit test | In the context of performance testing, a unit test is any test that targets a module of code, where that module is any logical subset of the entire existing code base of the application, with a focus on performance characteristics. Commonly tested modules include functions, procedures, routines, objects, methods, and classes. Performance unit tests are frequently created and conducted by the developer who wrote the module of code being tested. |
| Validation test | A validation test compares the speed, scalability, and/or stability characteristics of the product under test against the expectations that have been set or presumed for that product. |
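To make the "unit test" entry above concrete, the following is a minimal sketch of a performance unit test. The function under test (`format_report`) and its timing budget are hypothetical, invented for illustration; the pattern of timing a single module with repeated measurements is what the definition describes:

```python
import time

def format_report(rows):
    """Hypothetical module under test: render rows as CSV-style lines."""
    return "\n".join(",".join(str(v) for v in row) for row in rows)

def time_call(fn, *args, repeats=100):
    """Return the fastest wall-clock time over several repeats, in seconds."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        fn(*args)
        best = min(best, time.perf_counter() - start)
    return best

rows = [(i, i * i) for i in range(1000)]
elapsed = time_call(format_report, rows)
print(f"best of 100 runs: {elapsed * 1000:.3f} ms")
```

Taking the best of many runs, rather than a single measurement, reduces the noise from scheduling and caching effects, so regressions in the module itself stand out more clearly.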
Performance testing is a broad and complex activity that can take many forms, address many risks, and provide a wide range of value to an organization.
It is important to understand the different performance test types in order to reduce risks, minimize cost, and know when to apply the appropriate test over the course of a given performance-testing project. To apply different test types over the course of a performance test, you need to evaluate the following key points:
- The objectives of the performance test.
- The context of the performance test; for example, the resources involved, cost, and potential return on the testing effort.