Hi. I will be posting all the testing-related material here. The content posted here is collected from different websites.

Thursday, June 14, 2012

Performance Testing - Main Activities


1. Identification of the Test Environment: In this phase the performance team identifies the physical test environment and the production environment, as well as the tools and resources available to conduct the performance tests. The physical environment includes hardware, software, and network configurations. A thorough understanding of the entire test environment at the outset enables the team to design and plan the testing more effectively.
2. Identification of the Performance Acceptance Criteria: In this phase the cross-functional team defines the response time benchmarks, baselines for throughput, and resource utilization constraints. In general terms, response time is a user concern, throughput is a business concern, and resource utilization is a system concern. Additionally, the team identifies project success criteria that may not be captured by those goals and constraints; for example, using performance tests to evaluate which blend of configuration settings will result in the most desirable performance characteristics.
3. Planning and Designing of Tests: Here the performance team identifies the key business scenarios to be tested, settles on the variability among representative users and how to simulate that variability, defines the test data, and establishes the metrics to be collected for result evaluation. The team then consolidates this information into one or more models of system usage to be implemented, executed, and analyzed.
4. Configuration of the Test Environment: At this stage the team prepares the test environment, testing tools, and resources necessary to execute each strategy, as features and components become available for test.
5. Implementation of the Test Design: Develop the performance tests in accordance with the test design.
6. Test Execution: In this phase the performance tests are executed and monitored. Before running the actual tests, it is advisable to validate the tests and the test data, so that the results are accurate.
7. Analysis of Results, Reporting, and Retesting: After receiving the consolidated performance metrics from the test team, the results are shared with the cross-functional team. The test objectives are then reprioritized and the tests re-executed until the desired SLAs are achieved. When all of the metric values are within accepted limits, none of the set thresholds have been violated, and all of the desired information has been collected, testing of that particular scenario on that particular configuration is finished. (A sketch of such a criteria check follows this list.)
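To make activities 2 and 7 concrete, here is a minimal sketch in Python of one way the acceptance criteria could be encoded and checked against the consolidated metrics. It is not from the original article; all metric names and threshold values are illustrative.

# A minimal sketch, not from the original article: encoding the acceptance
# criteria from activity 2 and checking them against the metrics
# consolidated in activity 7. All names and thresholds are illustrative.

ACCEPTANCE_CRITERIA = {
    "response_time_p90_s": 2.0,   # 90th-percentile response time (user concern)
    "throughput_rps_min": 100.0,  # minimum requests per second (business concern)
    "cpu_utilization_max": 0.75,  # peak CPU utilization (system concern)
}

def evaluate(results):
    """Return the list of violated criteria; an empty list means the run passed."""
    violations = []
    if results["response_time_p90_s"] > ACCEPTANCE_CRITERIA["response_time_p90_s"]:
        violations.append("90th-percentile response time exceeds target")
    if results["throughput_rps"] < ACCEPTANCE_CRITERIA["throughput_rps_min"]:
        violations.append("throughput below target")
    if results["cpu_utilization"] > ACCEPTANCE_CRITERIA["cpu_utilization_max"]:
        violations.append("CPU utilization above constraint")
    return violations

# Example: metrics from one consolidated test run.
print(evaluate({"response_time_p90_s": 1.4,
                "throughput_rps": 120.0,
                "cpu_utilization": 0.82}))
# -> ['CPU utilization above constraint']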
Disclaimer:
  • The original work can be found under this link.
  • All credit to J.D. Meier, Carlos Farre, Prashant Bansode, Scott Barber, and Dennis Rea
    Microsoft Corporation®


Performance Testing
Performance testing is defined as the technical investigation done to determine or validate the speed, scalability, and/or stability characteristics of the product under test. Performance-related activities, such as testing and tuning, are concerned with achieving response times, throughput, and resource-utilization levels that meet the performance objectives for the application under test. Because performance testing is a general term that covers all of its various subsets, every value and benefit listed under other performance test types in this chapter can also be considered a potential benefit of performance testing in general.
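As a concrete illustration of the response-time measurements this definition refers to, here is a minimal sketch using only the Python standard library. The URL and sample count are placeholders, not values from the text.

# A minimal sketch of measuring response times with the standard library.
# The URL and sample count are placeholders for illustration only.
import time
import urllib.request

URL = "http://example.com/"   # stand-in for the application under test
samples = []
for _ in range(20):
    start = time.perf_counter()
    urllib.request.urlopen(URL).read()
    samples.append(time.perf_counter() - start)

samples.sort()
print("median response time: %.3f s" % samples[len(samples) // 2])
print("worst response time:  %.3f s" % samples[-1])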
Key Types of Performance Testing
The following are the most common types of performance testing for Web applications.
Performance test
Purpose: To determine or validate speed, scalability, and/or stability.
Notes:
  • A performance test is a technical investigation done to determine or validate the responsiveness, speed, scalability, and/or stability characteristics of the product under test.
Load test
Purpose: To verify application behavior under normal and peak load conditions.
Notes:
  • Load testing is conducted to verify that your application can meet your desired performance objectives; these performance objectives are often specified in a service level agreement (SLA). A load test enables you to measure response times, throughput rates, and resource-utilization levels, and to identify your application’s breaking point, assuming that the breaking point occurs below the peak load condition.
  • Endurance testing is a subset of load testing. An endurance test is a type of performance test focused on determining or validating the performance characteristics of the product under test when subjected to workload models and load volumes anticipated during production operations over an extended period of time.
  • Endurance testing may be used to calculate Mean Time Between Failures (MTBF), Mean Time To Failure (MTTF), and similar metrics. (A sketch of the MTBF arithmetic follows this entry.)
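A minimal sketch of the MTBF arithmetic referenced above; the run length and failure count are invented for illustration.

# A minimal sketch of deriving MTBF from an endurance-test run.
# The run length and failure count below are invented.
run_length_h = 72.0     # total endurance-test duration, in hours
failures = 3            # failures observed during the run

mtbf_h = run_length_h / failures    # mean time between failures
print("MTBF: %.1f hours" % mtbf_h)  # -> MTBF: 24.0 hours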
Stress test
Purpose: To determine or validate an application’s behavior when it is pushed beyond normal or peak load conditions.
Notes:
  • The goal of stress testing is to reveal application bugs that surface only under high load conditions. These bugs can include such things as synchronization issues, race conditions, and memory leaks. Stress testing enables you to identify your application’s weak points, and shows how the application behaves under extreme load conditions.
  • Spike testing is a subset of stress testing. A spike test is a type of performance test focused on determining or validating the performance characteristics of the product under test when subjected to workload models and load volumes that repeatedly increase beyond anticipated production operations for short periods of time. (A sketch of a spike load profile follows this entry.)
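The following minimal sketch shows the load shape a spike test applies: short bursts well beyond the anticipated production rate, interleaved with normal load. The rates and timings are invented.

# A minimal sketch of a spike-test load profile. One target request rate
# per minute of the test; all rates and timings are invented.
normal_rps, spike_rps = 50, 400   # requests per second
profile = [spike_rps if minute % 10 == 9 else normal_rps
           for minute in range(30)]
print(profile)
# -> [50, 50, 50, 50, 50, 50, 50, 50, 50, 400, 50, ...]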
Capacity test
Purpose: To determine how many users and/or transactions a given system will support and still meet performance goals.
Notes:
  • Capacity testing is conducted in conjunction with capacity planning, which you use to plan for future growth, such as an increased user base or increased volume of data. For example, to accommodate future loads, you need to know how many additional resources (such as processor capacity, memory, disk capacity, or network bandwidth) are necessary to support future usage levels.
  • Capacity testing helps you to identify a scaling strategy, in order to determine whether you should scale up or scale out. (A sketch of a simple headroom estimate follows this entry.)
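As a rough illustration of the headroom estimate that capacity testing feeds into capacity planning, here is a minimal sketch. It assumes linear scaling, which real systems rarely exhibit, and all numbers are invented.

# A minimal sketch of a capacity headroom estimate. Linear scaling is a
# simplifying assumption; all numbers below are invented.
current_users = 500
cpu_utilization = 0.40   # measured at current_users during the capacity test
cpu_ceiling = 0.75       # utilization limit from the acceptance criteria

supported_users = int(current_users * cpu_ceiling / cpu_utilization)
print("estimated users before scaling up or out:", supported_users)  # -> 937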
The most common performance concerns related to Web applications are “Will it be fast enough?”, “Will it support all of my clients?”, “What happens if something goes wrong?”, and “What do I need to plan for when I get more customers?”. In casual conversation, most people associate “fast enough” with performance testing, “accommodate the current/expected user base” with load testing, “something going wrong” with stress testing, and “planning for future growth” with capacity testing. Collectively, these risks form the basis for the four key types of performance tests for Web applications.
Summary Matrix of Benefits by Key Performance Test Types
Performance test
Benefits:
  • Determines the speed, scalability, and stability characteristics of an application, thereby providing an input to making sound business decisions.
  • Focuses on determining whether the user of the system will be satisfied with the performance characteristics of the application.
  • Identifies mismatches between performance-related expectations and reality.
  • Supports tuning, capacity planning, and optimization efforts.
Challenges and Areas Not Addressed:
  • May not detect some functional defects that only appear under load.
  • If not carefully designed and validated, may only be indicative of performance characteristics in a very small number of production scenarios.
  • Unless tests are conducted on the production hardware, from the same machines the users will be using, there will always be a degree of uncertainty in the results.

Load test
Benefits:
  • Determines the throughput required to support the anticipated peak production load.
  • Determines the adequacy of a hardware environment.
  • Evaluates the adequacy of a load balancer.
  • Detects concurrency issues.
  • Detects functionality errors under load.
  • Collects data for scalability and capacity-planning purposes.
  • Helps to determine how many users the application can handle before performance is compromised.
  • Helps to determine how much load the hardware can handle before resource utilization limits are exceeded.
Challenges and Areas Not Addressed:
  • Is not designed to focus primarily on speed of response.
  • Results should only be used for comparison with other related load tests.

Stress test
Benefits:
  • Determines whether data can be corrupted by overstressing the system.
  • Provides an estimate of how far beyond the target load an application can go before causing failures and errors in addition to slowness.
  • Allows you to establish application-monitoring triggers to warn of impending failures.
  • Ensures that security vulnerabilities are not opened up by stressful conditions.
  • Determines the side effects of common hardware or supporting application failures.
  • Helps to determine what kinds of failures are most valuable to plan for.
Challenges and Areas Not Addressed:
  • Because stress tests are unrealistic by design, some stakeholders may dismiss the results.
  • It is often difficult to know how much stress is worth applying.
  • It is possible to cause application and/or network failures that may result in significant disruption if not isolated to the test environment.

Capacity test
Benefits:
  • Provides information about how workload can be handled to meet business requirements.
  • Provides actual data that capacity planners can use to validate or enhance their models and/or predictions.
  • Enables you to conduct various tests to compare capacity-planning models and/or predictions.
  • Determines the current usage and capacity of the existing system to aid in capacity planning.
  • Provides the usage and capacity trends of the existing system to aid in capacity planning.
Challenges and Areas Not Addressed:
  • Capacity-model validation tests are complex to create.
  • Not all aspects of a capacity-planning model can be validated through testing at a time when those aspects would provide the most value.

Although the potential benefits far outweigh the challenges related to performance testing, uncertainty over the relevance of the resulting data — based on the sheer impossibility of testing all of the reasonable combinations of variables, scenarios and situations — makes some organizations question the value of conducting performance testing at all. In practice, however, the likelihood of catastrophic performance failures occurring in a system that has been through reasonable (not even rigorous) performance testing is dramatically reduced, particularly if the performance tests are used to help determine what to monitor in production so that the team will get early warning signs if the application starts drifting toward a significant performance-related failure.
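As a rough illustration of the "early warning signs" idea above, here is a minimal sketch of a threshold rule derived from performance-test findings. A real deployment would use a monitoring system; the metric names and thresholds here are invented.

# A minimal sketch of turning performance-test findings into a production
# early-warning rule. Metric names and thresholds are invented.
WARN_THRESHOLDS = {"p90_response_s": 1.5, "error_rate": 0.01}

def early_warnings(metrics):
    """Return the names of metrics drifting past their tested safe limits."""
    return [name for name, limit in WARN_THRESHOLDS.items()
            if metrics.get(name, 0.0) > limit]

print(early_warnings({"p90_response_s": 1.8, "error_rate": 0.002}))
# -> ['p90_response_s']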
Additional Concepts / Terms
You will often see or hear the following terms when conducting performance testing. Some of these terms may be common in your organization, industry, or peer network, while others may not. These terms and concepts have been included because they are used frequently enough, and cause enough confusion, to make them worth knowing.
Component test
A component test is any performance test that targets an architectural component of the application. Commonly tested components include servers, databases, networks, firewalls, clients, and storage devices.
Investigation
Investigation is an activity based on collecting information related to the speed, scalability, and/or stability characteristics of the product under test that may have value in determining or improving product quality. Investigation is frequently employed to prove or disprove hypotheses regarding the root cause of one or more observed performance issues.
Smoke test
A smoke test is the initial run of a performance test to see if your application can perform its operations under a normal load.
Unit test
In the context of performance testing, a unit test is any test that targets a module of code where that module is any logical subset of the entire existing code base of the application, with a focus on performance characteristics. Commonly tested modules include functions, procedures, routines, objects, methods, and classes. Performance unit tests are frequently created and conducted by the developer who wrote the module of code being tested. (A sketch of such a test follows this list.)
Validation test
A validation test compares the speed, scalability, and/or stability characteristics of the product under test against the expectations that have been set or presumed for that product.
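Here is a minimal sketch of the kind of performance-focused unit test described above, such as the developer of a module might write. The function under test and its 50-microsecond budget are invented.

# A minimal sketch of a performance-focused unit test. The function under
# test and its per-call budget are invented for illustration.
import timeit

def parse_order_line(line):   # hypothetical module under test
    sku, qty = line.split(",")
    return {"sku": sku, "qty": int(qty)}

calls = 100_000
total_s = timeit.timeit(lambda: parse_order_line("ABC-1,3"), number=calls)
per_call_us = total_s / calls * 1e6
assert per_call_us < 50, "parse_order_line exceeded its 50 microsecond budget"
print("per-call cost: %.2f microseconds" % per_call_us)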
Summary
Performance testing is a broad and complex activity that can take many forms, address many risks, and provide a wide range of value to an organization.
It is important to understand the different performance test types in order to reduce risks, minimize cost, and know when to apply the appropriate test over the course of a given performance-testing project. To apply different test types over the course of a performance test, you need to evaluate the following key points:
  • The objectives of the performance test.
  • The context of the performance test; for example, the resources involved, cost, and potential return on the testing effort.


Disclaimer:
  • The original work can be found under this link.
  • All credit to J.D. Meier, Carlos Farre, Prashant Bansode, Scott Barber, and Dennis Rea
    Microsoft Corporation®
