Testing notes

Hi. I will be posting all the testing-related material here. The content posted here is a collection from different websites.

Wednesday, January 30, 2013

Sample web services hosted on the web


http://www.webservicex.net/WS/wscatlist.aspx

http://www.webservicex.net/globalweather.asmx?WSDL

http://www.webservicex.net/CurrencyConvertor.asmx?WSDL

http://webservices.daehosting.com/services/TemperatureConversions.wso?WSDL
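These endpoints can also be exercised without SoapUI by posting a SOAP envelope over plain HTTP. Below is a minimal Java sketch against the temperature-conversion service above; the operation name (CelsiusToFahrenheit), parameter name (nCelsius), and namespace are assumptions recalled from that WSDL, so verify them against the actual WSDL before relying on them.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

// Minimal SOAP 1.1 call over plain HTTP. The operation, parameter, and
// namespace names are assumptions -- verify them against the service's WSDL.
public class SoapCallSketch {
    public static void main(String[] args) throws Exception {
        String endpoint = "http://webservices.daehosting.com/services/TemperatureConversions.wso";
        String envelope =
            "<soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\">"
          + "<soap:Body>"
          + "<CelsiusToFahrenheit xmlns=\"http://webservices.daehosting.com/temperature\">"
          + "<nCelsius>37</nCelsius>"
          + "</CelsiusToFahrenheit>"
          + "</soap:Body>"
          + "</soap:Envelope>";

        HttpURLConnection conn = (HttpURLConnection) new URL(endpoint).openConnection();
        conn.setDoOutput(true);
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "text/xml; charset=utf-8");
        try (OutputStream out = conn.getOutputStream()) {
            out.write(envelope.getBytes(StandardCharsets.UTF_8));
        }

        // Dump the raw SOAP response; a real test would parse and assert on it.
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}
```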

SOAPUI testing

http://www.soapui.org/ - download SoapUI 4.5.1 from this link (Windows 32-bit installer)

http://www.soapui.org/Downloads/latest-release.html - download SoapUI Pro 4.5.1 (Windows 32-bit installer)
http://www.filehippo.com/download_jre_32/ - Java Runtime Environment (JRE)
http://www.filehippo.com/download_notepad/ - Notepad++ editor

Friday, December 14, 2012

Decision Analysis and Resolution

We all come across bottlenecks every now and then, not just in our work but also in our personal lives. Then: “You must choose… but choose wisely.”
The higher the risk and the more sensitive the situation, the more consciously you need to decide. Making a good choice implies analyzing alternatives, and because analyzing alternatives is really making a comparison, you must have consistent criteria defined. Decision Analysis and Resolution helps us choose the right path.
Definition:
Book answer - The purpose of Decision Analysis and Resolution (DAR) is to analyze possible decisions using a formal evaluation process that evaluates identified alternatives against established criteria.
In other words, it is a process to make key decisions in your organization more objectively and wisely.
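To make "evaluating identified alternatives against established criteria" a bit more concrete, here is a minimal sketch of a weighted decision matrix, which is one common way of running such a formal evaluation. The criteria, weights, and scores are made up purely for illustration.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Weighted decision matrix: score each alternative against each criterion,
// multiply by the criterion weight, and compare the totals.
// All criteria, weights, and scores below are illustrative only.
public class DecisionMatrixSketch {
    public static void main(String[] args) {
        Map<String, Double> weights = new LinkedHashMap<>();
        weights.put("Cost", 0.40);
        weights.put("Risk", 0.35);
        weights.put("Time to implement", 0.25);

        Map<String, Map<String, Integer>> alternatives = new LinkedHashMap<>();
        alternatives.put("Buy tool A", Map.of("Cost", 3, "Risk", 4, "Time to implement", 5));
        alternatives.put("Build in-house", Map.of("Cost", 4, "Risk", 2, "Time to implement", 2));

        for (Map.Entry<String, Map<String, Integer>> alt : alternatives.entrySet()) {
            double total = 0.0;
            for (Map.Entry<String, Double> criterion : weights.entrySet()) {
                total += criterion.getValue() * alt.getValue().get(criterion.getKey());
            }
            System.out.printf("%-15s weighted score = %.2f%n", alt.getKey(), total);
        }
    }
}
```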

A few of the lessons learned from the field:
1. People are generally impatient in decision making.
2. Using a formal decision-making process is not a natural act.
3. Make sure you have criteria in place for when to invoke DAR.
4. Just getting started is half the battle.
5. Implementing is easy, but institutionalizing takes time.
“There's a difference between knowing the path and walking the path.” - Morpheus, The Matrix
Just DAR it

How to perform an effective RCA

Root cause analysis (RCA) is a method of problem solving that tries to identify the root causes of faults or problems that cause operating events. Here are some steps to perform an effective RCA in software development or testing projects:
Step 1: Define the Problem
What do you see happening?
What are the specific symptoms?
Step 2: Collect & Analyze Data
What proof do you have that the problem exists?
How long has the problem existed?
What is the impact of the problem?
Use Pareto charts (a minimal sketch of the idea follows these steps).
Step 3: Identify Possible Causal Factors
What sequence of events leads to the problem?
What conditions allow the problem to occur?
What other problems surround the occurrence of the central problem?
Use tools like the Cause & Effect diagram, 5 Whys, etc.
Step 4: Identify the Root Causes
Why does the causal factor exist?
What is the real reason the problem occurred?
Step 5: Recommend and Implement Solutions
What can you do to prevent the problem from happening again?
How will the solution be implemented?
Who will be responsible for it?
What are the risks of implementing the solution?
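As mentioned in Step 2, a Pareto chart separates the "vital few" causes from the "trivial many". The sketch below shows the arithmetic behind it: count defects per cause, sort the causes by count, and report the cumulative percentage. The cause names and counts are invented for illustration.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

// Pareto analysis: sort causes by defect count and print the cumulative share,
// which shows how few causes account for most of the defects.
// The cause names and counts are illustrative only.
public class ParetoSketch {
    public static void main(String[] args) {
        Map<String, Integer> defectsByCause = Map.of(
            "Unclear requirements", 42,
            "Missed edge cases", 25,
            "Environment/config issues", 12,
            "Regression suite not run", 8,
            "Other", 5);

        int total = defectsByCause.values().stream().mapToInt(Integer::intValue).sum();

        // Sort causes by count, highest first, preserving that order for printing.
        Map<String, Integer> sorted = defectsByCause.entrySet().stream()
            .sorted(Map.Entry.<String, Integer>comparingByValue().reversed())
            .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue,
                                      (a, b) -> a, LinkedHashMap::new));

        int running = 0;
        for (Map.Entry<String, Integer> cause : sorted.entrySet()) {
            running += cause.getValue();
            System.out.printf("%-26s %3d defects, cumulative %.0f%%%n",
                cause.getKey(), cause.getValue(), 100.0 * running / total);
        }
    }
}
```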

Thursday, June 14, 2012

Performance Testing - Main Activities


1. Identification of the Test Environment: In this phase the performance team identifies the physical test environment and the production environment, as well as the tools and resources available to conduct the performance tests. The physical environment includes hardware, software, and network configurations. A thorough understanding of the entire test environment at the outset enables the performance team to design and plan the testing more effectively.
2. Identification of the Performance Acceptance Criteria: In this phase the cross-functional team defines the response time benchmarks, baselines for throughput, and resource utilization constraints. In general terms, response time is a user concern, throughput is a business concern, and resource utilization is a system concern. Additionally, they identify project success criteria that may not be captured by those goals and constraints; for example, using performance tests to evaluate what blend of configuration settings will result in the most desirable performance results. (A minimal sketch of checking measured results against such criteria follows this list.)
3. Planning and Designing of Tests: Here the performance team identifies the key business scenarios to be tested, settles on the variability among representative users and how to simulate that variability, defines test data, and establishes the metrics to be collected for result evaluation. It then consolidates this information into one or more models of system usage to be implemented, executed, and analyzed.
4. Configure the Test Environment: At this stage, prepare the test environment, testing tools, and resources necessary to execute each strategy, as features and components become available for test.
5. Implement the Test Design: Develop the performance tests in accordance with the test design.
6. Test Execution: In this phase the performance tests are executed and monitored. Before starting the actual execution, it is advisable to validate the tests and the test data so that they give accurate results.
7. Analysis of Results, Reporting, and Retesting: After receiving the consolidated performance metrics from the test team, the results are shared with the cross-functional team. After reprioritizing the test objectives, the tests are re-executed until the desired SLAs are achieved. When all of the metric values are within accepted limits, none of the set thresholds have been violated, and all of the desired information has been collected, you have finished testing that particular scenario on that particular configuration.
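As a small illustration of activity 2 above, the sketch below checks a handful of measured response times against assumed acceptance criteria: a 90th-percentile response time under 2 seconds and a throughput of at least 50 requests per second. The sample data, duration, and thresholds are invented for illustration and are not real SLA values.

```java
import java.util.Arrays;

// Check measured results against assumed acceptance criteria
// (90th-percentile response time and overall throughput).
// All numbers below are illustrative, not real SLA values.
public class AcceptanceCriteriaSketch {
    public static void main(String[] args) {
        double[] responseTimesMs = {420, 380, 510, 1900, 640, 720, 450, 2300, 800, 530};
        double testDurationSeconds = 0.18;   // pretend these 10 samples spanned 180 ms

        Arrays.sort(responseTimesMs);
        // Nearest-rank 90th percentile.
        int index = (int) Math.ceil(0.90 * responseTimesMs.length) - 1;
        double p90 = responseTimesMs[index];
        double throughput = responseTimesMs.length / testDurationSeconds;

        boolean p90Ok = p90 <= 2000;             // assumed criterion: p90 <= 2 s
        boolean throughputOk = throughput >= 50; // assumed criterion: >= 50 req/s

        System.out.printf("p90 = %.0f ms (%s), throughput = %.1f req/s (%s)%n",
            p90, p90Ok ? "PASS" : "FAIL", throughput, throughputOk ? "PASS" : "FAIL");
    }
}
```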
Disclaimer:
  • The original work can be found under this link.
  • All credit to J.D. Meier, Carlos Farre, Prashant Bansode, Scott Barber, and Dennis Rea
    Microsoft Corporation®


Performance Testing
Performance testing is defined as the technical investigation done to determine or validate the speed, scalability, and/or stability characteristics of the product under test. Performance-related activities, such as testing and tuning, are concerned with achieving response times, throughput, and resource-utilization levels that meet the performance objectives for the application under test. Because performance testing is a general term that covers all of its various subsets, every value and benefit listed under other performance test types in this chapter can also be considered a potential benefit of performance testing in general.
Key Types of Performance Testing
The following are the most common types of performance testing for Web applications.
Performance test
Purpose: To determine or validate speed, scalability, and/or stability.
Notes:
  • A performance test is a technical investigation done to determine or validate the responsiveness, speed, scalability, and/or stability characteristics of the product under test.
Load test
Purpose: To verify application behavior under normal and peak load conditions.
Notes:
  • Load testing is conducted to verify that your application can meet your desired performance objectives; these performance objectives are often specified in a service level agreement (SLA). A load test enables you to measure response times, throughput rates, and resource-utilization levels, and to identify your application’s breaking point, assuming that the breaking point occurs below the peak load condition.
  • Endurance testing is a subset of load testing. An endurance test is a type of performance test focused on determining or validating the performance characteristics of the product under test when subjected to workload models and load volumes anticipated during production operations over an extended period of time.
  • Endurance testing may be used to calculate Mean Time Between Failure (MTBF), Mean Time To Failure (MTTF), and similar metrics.
Stress test
Purpose: To determine or validate an application’s behavior when it is pushed beyond normal or peak load conditions.
Notes:
  • The goal of stress testing is to reveal application bugs that surface only under high load conditions. These bugs can include such things as synchronization issues, race conditions, and memory leaks. Stress testing enables you to identify your application’s weak points, and shows how the application behaves under extreme load conditions.
  • Spike testing is a subset of stress testing. A spike test is a type of performance test focused on determining or validating the performance characteristics of the product under test when subjected to workload models and load volumes that repeatedly increase beyond anticipated production operations for short periods of time.
Capacity test
Purpose: To determine how many users and/or transactions a given system will support and still meet performance goals.
Notes:
  • Capacity testing is conducted in conjunction with capacity planning, which you use to plan for future growth, such as an increased user base or increased volume of data. For example, to accommodate future loads, you need to know how many additional resources (such as processor capacity, memory usage, disk capacity, or network bandwidth) are necessary to support future usage levels.
  • Capacity testing helps you to identify a scaling strategy in order to determine whether you should scale up or scale out.
The most common performance concerns related to Web applications are “Will it be fast enough?”, “Will it support all of my clients?”, “What happens if something goes wrong?”, and “What do I need to plan for when I get more customers?”. In casual conversation, most people associate “fast enough” with performance testing, “accommodate the current/expected user base” with load testing, “something going wrong” with stress testing, and “planning for future growth” with capacity testing. Collectively, these risks form the basis for the four key types of performance tests for Web applications.
Summary Matrix of Benefits by Key Performance Test Types
Performance test
Benefits:
  • Determines the speed, scalability and stability characteristics of an application, thereby providing an input to making sound business decisions.
  • Focuses on determining if the user of the system will be satisfied with the performance characteristics of the application.
  • Identifies mismatches between performance-related expectations and reality.
  • Supports tuning, capacity planning, and optimization efforts.
Challenges and areas not addressed:
  • May not detect some functional defects that only appear under load.
  • If not carefully designed and validated, may only be indicative of performance characteristics in a very small number of production scenarios.
  • Unless tests are conducted on the production hardware, from the same machines the users will be using, there will always be a degree of uncertainty in the results.

Load test
Benefits:
  • Determines the throughput required to support the anticipated peak production load.
  • Determines the adequacy of a hardware environment.
  • Evaluates the adequacy of a load balancer.
  • Detects concurrency issues.
  • Detects functionality errors under load.
  • Collects data for scalability and capacity-planning purposes.
  • Helps to determine how many users the application can handle before performance is compromised.
  • Helps to determine how much load the hardware can handle before resource utilization limits are exceeded.
Challenges and areas not addressed:
  • Is not designed to primarily focus on speed of response.
  • Results should only be used for comparison with other related load tests.

Stress test
Benefits:
  • Determines if data can be corrupted by overstressing the system.
  • Provides an estimate of how far beyond the target load an application can go before causing failures and errors in addition to slowness.
  • Allows you to establish application-monitoring triggers to warn of impending failures.
  • Ensures that security vulnerabilities are not opened up by stressful conditions.
  • Determines the side effects of common hardware or supporting application failures.
  • Helps to determine what kinds of failures are most valuable to plan for.
Challenges and areas not addressed:
  • Because stress tests are unrealistic by design, some stakeholders may dismiss test results.
  • It is often difficult to know how much stress is worth applying.
  • It is possible to cause application and/or network failures that may result in significant disruption if not isolated to the test environment.

Capacity test
Benefits:
  • Provides information about how workload can be handled to meet business requirements.
  • Provides actual data that capacity planners can use to validate or enhance their models and/or predictions.
  • Enables you to conduct various tests to compare capacity-planning models and/or predictions.
  • Determines the current usage and capacity of the existing system to aid in capacity planning.
  • Provides the usage and capacity trends of the existing system to aid in capacity planning.
Challenges and areas not addressed:
  • Capacity model validation tests are complex to create.
  • Not all aspects of a capacity-planning model can be validated through testing at a time when those aspects would provide the most value.

Although the potential benefits far outweigh the challenges related to performance testing, uncertainty over the relevance of the resulting data — based on the sheer impossibility of testing all of the reasonable combinations of variables, scenarios and situations — makes some organizations question the value of conducting performance testing at all. In practice, however, the likelihood of catastrophic performance failures occurring in a system that has been through reasonable (not even rigorous) performance testing is dramatically reduced, particularly if the performance tests are used to help determine what to monitor in production so that the team will get early warning signs if the application starts drifting toward a significant performance-related failure.
Additional Concepts / Terms
You will often see or hear the following terms when conducting performance testing. Some of these terms may be common in your organization, industry, or peer network, while others may not. These terms and concepts have been included because they are used frequently enough, and cause enough confusion, to make them worth knowing.
Component test: A component test is any performance test that targets an architectural component of the application. Commonly tested components include servers, databases, networks, firewalls, clients, and storage devices.
Investigation: Investigation is an activity based on collecting information related to the speed, scalability, and/or stability characteristics of the product under test that may have value in determining or improving product quality. Investigation is frequently employed to prove or disprove hypotheses regarding the root cause of one or more observed performance issues.
Smoke test: A smoke test is the initial run of a performance test to see if your application can perform its operations under a normal load.
Unit test: In the context of performance testing, a unit test is any test that targets a module of code where that module is any logical subset of the entire existing code base of the application, with a focus on performance characteristics. Commonly tested modules include functions, procedures, routines, objects, methods, and classes. Performance unit tests are frequently created and conducted by the developer who wrote the module of code being tested. (A minimal sketch of such a test follows this list.)
Validation test: A validation test compares the speed, scalability, and/or stability characteristics of the product under test against the expectations that have been set or presumed for that product.
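As a rough illustration of the performance unit test described above, the sketch below warms up, times many iterations of a single module, and compares the median elapsed time against a budget. The method under test (buildReport) and the 50 ms budget are hypothetical placeholders; a real project would more likely use a benchmarking harness.

```java
import java.util.Arrays;

// A crude performance unit test: warm up, time many iterations of one module,
// and compare the median elapsed time against a budget. The method under test
// and the 50 ms budget are hypothetical placeholders.
public class PerfUnitTestSketch {

    // Stand-in for the real module being measured.
    static String buildReport(int rows) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < rows; i++) {
            sb.append("row ").append(i).append('\n');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        int iterations = 200;
        long[] elapsedNanos = new long[iterations];

        // Warm-up so that JIT compilation does not dominate the measurements.
        for (int i = 0; i < 50; i++) {
            buildReport(10_000);
        }

        for (int i = 0; i < iterations; i++) {
            long start = System.nanoTime();
            buildReport(10_000);
            elapsedNanos[i] = System.nanoTime() - start;
        }

        Arrays.sort(elapsedNanos);
        double medianMs = elapsedNanos[iterations / 2] / 1_000_000.0;
        System.out.printf("median = %.2f ms -> %s (budget 50 ms)%n",
            medianMs, medianMs <= 50 ? "PASS" : "FAIL");
    }
}
```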
Summary
Performance testing is a broad and complex activity that can take many forms, address many risks, and provide a wide range of value to an organization.
It is important to understand the different performance test types in order to reduce risks, minimize cost, and know when to apply the appropriate test over the course of a given performance-testing project. To apply different test types over the course of a performance test, you need to evaluate the following key points:
  • The objectives of the performance test.
  • The context of the performance test; for example, the resources involved, cost, and potential return on the testing effort.


Disclaimer:
  • The original work can be found under this link.
  • All credit to J.D. Meier, Carlos Farre, Prashant Bansode, Scott Barber, and Dennis Rea
    Microsoft Corporation®

Introduction to Apache JMeter

Apache JMeter is a 100% pure Java desktop application designed to load test functional behavior and measure performance. It was originally designed for testing Web Applications but has since expanded to other test functions. Apache JMeter may be used to test performance both on static and dynamic resources (files, Servlets, Perl scripts, Java Objects, Data Bases and Queries, FTP Servers and more). It can be used to simulate a heavy load on a server, network or object to test its strength or to analyze overall performance under different load types. You can use it to make a graphical analysis of performance or to test your server/script/object behavior under heavy concurrent load.
Stefano Mazzocchi of the Apache Software Foundation was the original developer of JMeter. He wrote it primarily to test the performance of Apache JServ (a project that has since been replaced by the Apache Tomcat project).
Apache JMeter features include:
  • Can load and performance test many different server types like Web - HTTP, HTTPS, SOAP, Database via JDBC, LDAP, JMS, Mail - POP3(S) and IMAP(S) etc.

  • Complete portability and 100% Java purity.

  • Full multithreading framework allows concurrent sampling by many threads and simultaneous sampling of different functions by separate thread groups.

  • Careful GUI design allows faster operation and more precise timings.

  • Caching and offline analysis/replaying of test results.

  • Highly Extensible:

    • Pluggable Samplers allow unlimited testing capabilities.
    • Several load statistics may be chosen with pluggable timers.
    • Data analysis and visualization plugins allow great extensibility as well as personalization.
    • Functions can be used to provide dynamic input to a test or provide data manipulation.
    • Scriptable Samplers (BeanShell is fully supported; there is also a sampler which supports BSF-compatible languages).
Essential Components of JMeter:
1. Test Plan: The Test Plan is where the overall settings for a test are specified. Static variables can be defined for values that are repeated throughout a test, such as server names. For example, the variable SERVER could be defined as www.example.com, and the rest of the test plan could refer to it as ${SERVER}. This simplifies changing the name later.
2. Thread Group: A Thread Group defines a pool of users that will execute a particular test case against your server. In the Thread Group GUI, you can control the number of users simulated (number of threads), the ramp-up time (how long it takes to start all the threads), the number of times to perform the test, and, optionally, a start and stop time for the test. When using the scheduler, JMeter runs the thread group until either the number of loops or the duration/end time is reached, whichever occurs first. Note that the condition is only checked between samples; when the end condition is reached, that thread will stop. JMeter does not interrupt samplers that are waiting for a response, so the end time may be delayed arbitrarily. (A small sketch of the ramp-up idea follows.)
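To make the ramp-up idea concrete, here is a small sketch of the scheduling behaviour only: with 5 threads and a 10-second ramp-up, a new virtual user starts every 2 seconds. This is not how JMeter is implemented internally; it just illustrates what the Thread Group settings mean.

```java
// Illustration of the Thread Group ramp-up idea: numThreads virtual users are
// started at evenly spaced offsets across the ramp-up period. The numbers are
// illustrative; no real requests are sent here.
public class RampUpSketch {
    public static void main(String[] args) {
        int numThreads = 5;
        long rampUpMillis = 10_000;
        long delayBetweenStarts = rampUpMillis / numThreads;

        for (int i = 0; i < numThreads; i++) {
            final int user = i;
            final long startOffset = user * delayBetweenStarts;
            new Thread(() -> {
                try {
                    Thread.sleep(startOffset);  // stagger the start of each virtual user
                    System.out.printf("virtual user %d started after %d ms%n", user, startOffset);
                    // ... each user would now run its samplers for the configured loops ...
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }).start();
        }
    }
}
```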
3. WorkBench: The WorkBench simply provides a place to temporarily store test elements while not in use, for copy/paste purposes, or for any other purpose you desire. When you save your test plan, WorkBench items are not saved with it. Your WorkBench can be saved independently, if you like (right-click on WorkBench and choose Save).
Certain test elements are only available on the WorkBench:
  • HTTP Proxy Server
  • HTTP Mirror Server
  • Property Display
4. Samplers: Samplers perform the actual work of JMeter. Each sampler (except Test Action) generates one or more sample results. The sample results have various attributes (success/fail, elapsed time, data size, etc.) and can be viewed in the various listeners.
Various types of samplers are listed below; a minimal BeanShell Sampler script follows the list:
  • FTP Request
  • HTTP Request
  • JDBC Request
  • Java Request
  • SOAP/XML-RPC Request
  • WebService(SOAP) Request
  • LDAP Request
  • LDAP Extended Request
  • Access Log Sampler
  • BeanShell Sampler
  • BSF Sampler
  • JSR223 Sampler
  • TCP Sampler
  • JMS Publisher
  • JMS Subscriber
  • JMS Point-to-Point
  • JUnit Request
  • Mail Reader Sampler
  • Test Action
  • SMTP Sampler
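To give a feel for the scriptable samplers at the end of this list, here is a minimal BeanShell Sampler script (BeanShell uses Java syntax). The vars and log objects are provided by JMeter to the script; the variable name BASE_URL is an assumption for something defined elsewhere in the test plan, and the returned string is typically used as the sample's response data.

```java
// Minimal BeanShell Sampler script. 'vars' and 'log' are supplied by JMeter;
// the variable name BASE_URL is an assumption defined elsewhere in the plan.
String base = vars.get("BASE_URL");
log.info("Sampling against " + base);

// Do some lightweight work and store a value for later samplers to reuse.
long stamp = System.currentTimeMillis();
vars.put("REQUEST_STAMP", String.valueOf(stamp));

// The returned value is typically recorded as this sample's response data.
return "stamp=" + stamp;
```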
5. Logic Controllers: Logic Controllers determine the order in which Samplers are processed.
Various types of Logic Controllers are listed below:
  • Simple Controller
  • Loop Controller
  • Once Only Controller
  • Interleave Controller
  • Random Controller
  • Random Order Controller
  • Throughput Controller
  • Runtime Controller
  • If Controller
  • While Controller
  • Switch Controller
  • ForEach Controller
  • Module Controller
  • Include Controller
  • Transaction Controller
  • Recording Controller
6. Listeners: These are the means to view, save, and read saved test results. Listeners are processed at the end of the scope in which they are found. The saving and reading of test results is generic. The various listeners have a panel in which one can specify the file to which the results will be written. By default, the results are stored as XML files, typically with a ".jtl" extension. Results can be read from XML or CSV format files. (A small sketch of reading a CSV results file follows the list below.)
Various types of Listeners are listed below:
  • Sample Result Save Configuration
  • Graph Full Results
  • Graph Results
  • Spline Visualizer
  • Assertion Results
  • View Results Tree
  • Aggregate Report
  • View Results in Table
  • Simple Data Writer
  • Monitor Results
  • Distribution Graph (alpha)
  • Aggregate Graph
  • Mailer Visualizer
  • BeanShell Listener
  • Summary Report
  • Save Responses to a file
  • BSF Listener
  • JSR223 Listener
  • Generate Summary Results
  • Comparison Assertion Visualizer
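The listeners above can also write results to a .jtl file for offline analysis, as described earlier. The sketch below reads a CSV-format results file and summarizes it; it assumes a header row containing the default "elapsed" and "success" columns and a simple comma-separated layout with no quoted commas, which depends on how result saving is configured.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Arrays;
import java.util.List;

// Offline analysis sketch for a CSV-format .jtl results file. Assumes a header
// row with 'elapsed' and 'success' columns and no quoted commas in the data;
// both depend on the result-saving configuration.
public class JtlCsvSketch {
    public static void main(String[] args) throws IOException {
        List<String> lines = Files.readAllLines(Paths.get("results.jtl"));
        List<String> header = Arrays.asList(lines.get(0).split(","));
        int elapsedCol = header.indexOf("elapsed");
        int successCol = header.indexOf("success");

        long totalElapsedMs = 0;
        int samples = 0;
        int failures = 0;
        for (String line : lines.subList(1, lines.size())) {
            String[] fields = line.split(",");
            totalElapsedMs += Long.parseLong(fields[elapsedCol]);
            if (!Boolean.parseBoolean(fields[successCol])) {
                failures++;
            }
            samples++;
        }
        System.out.printf("%d samples, %d failures, average elapsed %.1f ms%n",
            samples, failures, samples == 0 ? 0.0 : (double) totalElapsedMs / samples);
    }
}
```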
7. Configuration Elements: Configuration elements can be used to set up defaults and variables for later use by samplers. Note that these elements are processed at the start of the scope in which they are found, i.e. before any samplers in the same scope.
Various types of Configuration Elements are listed below (a small sketch of the idea behind CSV Data Set Config follows the list):
  • CSV Data Set Config
  • FTP Request Defaults
  • HTTP Authorization Manager
  • HTTP Cache Manager
  • HTTP Cookie Manager
  • HTTP Request Defaults
  • HTTP Header Manager
  • Java Request Defaults
  • JDBC Connection Configuration
  • Keystore Configuration
  • Login Config Element
  • LDAP Request Defaults
  • LDAP Extended Request Defaults
  • TCP Sampler Config
  • User Defined Variables
  • Random Variable
  • Counter
  • Simple Config Element
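To illustrate the idea behind CSV Data Set Config from the list above: each virtual user pulls its own row of test data (for example, a login) from a shared CSV file. The sketch below mimics that behaviour outside JMeter; the file name and the username,password column layout are assumptions for illustration.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Iterator;
import java.util.List;

// The idea behind CSV Data Set Config, outside JMeter: several "virtual users"
// share a CSV file and each one takes the next unused row. The file name and
// the username,password layout are assumptions for illustration.
public class CsvDataFeedSketch {
    public static void main(String[] args) throws IOException {
        List<String> rows = Files.readAllLines(Paths.get("users.csv"));
        Iterator<String> nextRow = rows.iterator();

        Runnable virtualUser = () -> {
            String row;
            synchronized (nextRow) {          // hand each row to exactly one thread
                row = nextRow.hasNext() ? nextRow.next() : null;
            }
            if (row == null) {
                return;                       // no more test data
            }
            String[] cols = row.split(",");
            System.out.printf("%s logging in as %s%n",
                Thread.currentThread().getName(), cols[0]);
            // ... use cols[0] / cols[1] where the test needs ${username} / ${password} ...
        };

        for (int i = 0; i < 3; i++) {
            new Thread(virtualUser, "user-thread-" + i).start();
        }
    }
}
```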
References
  1. http://jmeter.apache.org/usermanual/index.html