Hi! I will be posting all the testing-related stuff here. The content posted here is a collection from different websites.

Thursday, June 14, 2012

Performance Testing - Main Activities


1. Identification of the Test Environment: In this phase the performance team identifies the physical test environment and the production environment, as well as the tools and resources available to conduct the performance tests. The physical environment includes hardware, software, and network configurations. Having a thorough understanding of the entire test environment at the beginning enables the performance team to design and plan the testing more resourcefully.
2. Identification of the Performance Acceptance Criteria: In this phase the cross-functional team defines the response time benchmarks, baselines for throughput, and resource utilization constraints. In general terms, response time is a user concern, throughput is a business concern, and resource utilization is a system concern. Additionally, they identify project success criteria that may not be captured by those goals and constraints; for example, using performance tests to evaluate which blend of configuration settings will result in the most desirable performance results.
3. Planning and Designing of Tests: Here the performance team identifies the key business scenarios to be tested, settles on the variability among representative users and how to simulate that variability, defines the test data, and establishes the metrics to be collected for result evaluation. They then consolidate this information into one or more models of system usage to be implemented, executed, and analyzed.
4. Configuration of the Test Environment: At this stage the team prepares the test environment, testing tools, and resources necessary to execute each strategy, as features and components become available for test.
5. Implementation of the Test Design: Develop the performance tests in accordance with the test design.
6. Test Execution: In this phase the performance tests are executed and monitored. Before actually executing the tests, it is advisable to validate the tests and the test data so that they produce accurate results.
7. Analysis of Results, Reporting, and Retesting: After receiving the consolidated performance metrics from the test team, the results are shared with the cross-functional team. After reprioritizing the test objectives, the tests are re-executed until the desired SLAs are achieved. When all of the metric values are within accepted limits, none of the set thresholds have been violated, and all of the desired information has been collected, you have finished testing that particular scenario on that particular configuration.
Disclaimer:
  • The original work can be found under this link.
  • All credit to J.D. Meier, Carlos Farre, Prashant Bansode, Scott Barber, and Dennis Rea
    Microsoft Corporation®


Performance Testing
Performance testing is defined as the technical investigation done to determine or validate the speed, scalability, and/or stability characteristics of the product under test. Performance-related activities, such as testing and tuning, are concerned with achieving response times, throughput, and resource-utilization levels that meet the performance objectives for the application under test. Because performance testing is a general term that covers all of its various subsets, every value and benefit listed under other performance test types in this chapter can also be considered a potential benefit of performance testing in general.
Key Types of Performance Testing
The following are the most common types of performance testing for Web applications.
Performance test
Purpose: To determine or validate speed, scalability, and/or stability.
Notes:
  • A performance test is a technical investigation done to determine or validate the responsiveness, speed, scalability, and/or stability characteristics of the product under test.

Load test
Purpose: To verify application behavior under normal and peak load conditions.
Notes:
  • Load testing is conducted to verify that your application can meet your desired performance objectives; these performance objectives are often specified in a service level agreement (SLA). A load test enables you to measure response times, throughput rates, and resource-utilization levels, and to identify your application's breaking point, assuming that the breaking point occurs below the peak load condition.
  • Endurance testing is a subset of load testing. An endurance test is a type of performance test focused on determining or validating the performance characteristics of the product under test when subjected to workload models and load volumes anticipated during production operations over an extended period of time.
  • Endurance testing may be used to calculate Mean Time Between Failures (MTBF), Mean Time To Failure (MTTF), and similar metrics.

Stress test
Purpose: To determine or validate an application's behavior when it is pushed beyond normal or peak load conditions.
Notes:
  • The goal of stress testing is to reveal application bugs that surface only under high load conditions. These bugs can include such things as synchronization issues, race conditions, and memory leaks. Stress testing enables you to identify your application's weak points, and shows how the application behaves under extreme load conditions.
  • Spike testing is a subset of stress testing. A spike test is a type of performance test focused on determining or validating the performance characteristics of the product under test when subjected to workload models and load volumes that repeatedly increase beyond anticipated production operations for short periods of time.

Capacity test
Purpose: To determine how many users and/or transactions a given system will support and still meet performance goals.
Notes:
  • Capacity testing is conducted in conjunction with capacity planning, which you use to plan for future growth, such as an increased user base or increased volume of data. For example, to accommodate future loads, you need to know how many additional resources (such as processor capacity, memory usage, disk capacity, or network bandwidth) are necessary to support future usage levels.
  • Capacity testing helps you to identify a scaling strategy in order to determine whether you should scale up or scale out.
The most common performance concerns related to Web applications are “Will it be fast enough?”, “Will it support all of my clients?”, “What happens if something goes wrong?”, and “What do I need to plan for when I get more customers?”. In casual conversation, most people associate “fast enough” with performance testing, “accommodate the current/expected user base” with load testing, “something going wrong” with stress testing, and “planning for future growth” with capacity testing. Collectively, these risks form the basis for the four key types of performance tests for Web applications.
Summary Matrix of Benefits by Key Performance Test Types
Performance test
Benefits:
  • Determines the speed, scalability and stability characteristics of an application, thereby providing an input to making sound business decisions.
  • Focuses on determining if the user of the system will be satisfied with the performance characteristics of the application.
  • Identifies mismatches between performance-related expectations and reality.
  • Supports tuning, capacity planning, and optimization efforts.
Challenges and Areas Not Addressed:
  • May not detect some functional defects that only appear under load.
  • If not carefully designed and validated, may only be indicative of performance characteristics in a very small number of production scenarios.
  • Unless tests are conducted on the production hardware, from the same machines the users will be using, there will always be a degree of uncertainty in the results.

Load test
Benefits:
  • Determines the throughput required to support the anticipated peak production load.
  • Determines the adequacy of a hardware environment.
  • Evaluates the adequacy of a load balancer.
  • Detects concurrency issues.
  • Detects functionality errors under load.
  • Collects data for scalability and capacity-planning purposes.
  • Helps to determine how many users the application can handle before performance is compromised.
  • Helps to determine how much load the hardware can handle before resource utilization limits are exceeded.
Challenges and Areas Not Addressed:
  • Is not designed to primarily focus on speed of response.
  • Results should only be used for comparison with other related load tests.

Stress test
Benefits:
  • Determines if data can be corrupted by overstressing the system.
  • Provides an estimate of how far beyond the target load an application can go before causing failures and errors in addition to slowness.
  • Allows you to establish application-monitoring triggers to warn of impending failures.
  • Ensures that security vulnerabilities are not opened up by stressful conditions.
  • Determines the side effects of common hardware or supporting application failures.
  • Helps to determine what kinds of failures are most valuable to plan for.
Challenges and Areas Not Addressed:
  • Because stress tests are unrealistic by design, some stakeholders may dismiss test results.
  • It is often difficult to know how much stress is worth applying.
  • It is possible to cause application and/or network failures that may result in significant disruption if not isolated to the test environment.

Capacity test
Benefits:
  • Provides information about how workload can be handled to meet business requirements.
  • Provides actual data that capacity planners can use to validate or enhance their models and/or predictions.
  • Enables you to conduct various tests to compare capacity-planning models and/or predictions.
  • Determines the current usage and capacity of the existing system to aid in capacity planning.
  • Provides the usage and capacity trends of the existing system to aid in capacity planning.
Challenges and Areas Not Addressed:
  • Capacity model validation tests are complex to create.
  • Not all aspects of a capacity-planning model can be validated through testing at a time when those aspects would provide the most value.

Although the potential benefits far outweigh the challenges related to performance testing, uncertainty over the relevance of the resulting data — based on the sheer impossibility of testing all of the reasonable combinations of variables, scenarios and situations — makes some organizations question the value of conducting performance testing at all. In practice, however, the likelihood of catastrophic performance failures occurring in a system that has been through reasonable (not even rigorous) performance testing is dramatically reduced, particularly if the performance tests are used to help determine what to monitor in production so that the team will get early warning signs if the application starts drifting toward a significant performance-related failure.
Additional Concepts / Terms
You will often see or hear the following terms when conducting performance testing. Some of these terms may be common in your organization, industry, or peer network, while others may not. These terms and concepts have been included because they are used frequently enough, and cause enough confusion, to make them worth knowing.
Component test
A component test is any performance test that targets an architectural component of the application. Commonly tested components include servers, databases, networks, firewalls, clients, and storage devices.
Investigation
Investigation is an activity based on collecting information related to the speed, scalability, and/or stability characteristics of the product under test that may have value in determining or improving product quality. Investigation is frequently employed to prove or disprove hypotheses regarding the root cause of one or more observed performance issues.
Smoke test
A smoke test is the initial run of a performance test to see if your application can perform its operations under a normal load.
Unit test
In the context of performance testing, a unit test is any test that targets a module of code where that module is any logical subset of the entire existing code base of the application, with a focus on performance characteristics. Commonly tested modules include functions, procedures, routines, objects, methods, and classes. Performance unit tests are frequently created and conducted by the developer who wrote the module of code being tested.
Validation test
A validation test compares the speed, scalability, and/or stability characteristics of the product under test against the expectations that have been set or presumed for that product.
Summary
Performance testing is a broad and complex activity that can take many forms, address many risks, and provide a wide range of value to an organization.
It is important to understand the different performance test types in order to reduce risks, minimize cost, and know when to apply the appropriate test over the course of a given performance-testing project. To apply different test types over the course of a performance test, you need to evaluate the following key points:
  • The objectives of the performance test.
  • The context of the performance test; for example, the resources involved, cost, and potential return on the testing effort.


Disclaimer:
  • The original work can be found under this link.
  • All credit to J.D. Meier, Carlos Farre, Prashant Bansode, Scott Barber, and Dennis Rea
    Microsoft Corporation®

Introduction to Apache JMeter

Apache JMeter is a 100% pure Java desktop application designed to load test functional behavior and measure performance. It was originally designed for testing web applications but has since expanded to other test functions. Apache JMeter may be used to test performance both on static and dynamic resources (files, Servlets, Perl scripts, Java objects, databases and queries, FTP servers and more). It can be used to simulate a heavy load on a server, network or object, to test its strength or to analyze overall performance under different load types. You can use it to make a graphical analysis of performance or to test your server/script/object behavior under heavy concurrent load.
Stefano Mazzocchi of the Apache Software Foundation was the original developer of JMeter. He wrote it primarily to test the performance of Apache JServ (a project that has since been replaced by the Apache Tomcat project).
Apache JMeter features include:
  • Can load and performance test many different server types: Web (HTTP, HTTPS, SOAP), databases via JDBC, LDAP, JMS, mail (POP3(S) and IMAP(S)), and more.

  • Complete portability and 100% Java purity.

  • Full multithreading framework allows concurrent sampling by many threads and simultaneous sampling of different functions by separate thread groups.

  • Careful GUI design allows faster operation and more precise timings.

  • Caching and offline analysis/replaying of test results.

  • Highly Extensible:
    • Pluggable Samplers allow unlimited testing capabilities.
    • Several load statistics may be chosen with pluggable timers.
    • Data analysis and visualization plugins allow great extensibility as well as personalization.
    • Functions can be used to provide dynamic input to a test or provide data manipulation.
    • Scriptable Samplers (BeanShell is fully supported; and there is a sampler which supports BSF-compatible languages).
Essential Components of JMeter:
1. Test Plan: The Test Plan is where the overall settings for a test are specified. Static variables can be defined for values that are repeated throughout a test, such as server names. For example, the variable SERVER could be defined as www.example.com, and the rest of the test plan could refer to it as ${SERVER}. This simplifies changing the name later; the BeanShell sketch after this component list shows one way a script can read such a variable.
2. Thread Group: A Thread Group defines a pool of users that will execute a particular test case against your server. In the Thread Group GUI, you can control the number of simulated users (number of threads), the ramp-up time (how long it takes to start all the threads), the number of times to perform the test, and, optionally, a start and stop time for the test. For example, with 10 threads and a ramp-up time of 100 seconds, JMeter starts one new thread every 10 seconds. When using the scheduler, JMeter runs the thread group until either the number of loops or the duration/end-time is reached, whichever occurs first. Note that the condition is only checked between samples; when the end condition is reached, the thread will stop. JMeter does not interrupt samplers that are waiting for a response, so the end time may be delayed arbitrarily.
3. WorkBench: The WorkBench simply provides a place to temporarily store test elements while not in use, for copy/paste purposes, or any other purpose you desire. When you save your test plan, WorkBench items are not saved with it. Your WorkBench can be saved independently, if you like (right-click on WorkBench and choose Save).
Certain test elements are only available on the WorkBench:
  • HTTP Proxy Server
  • HTTP Mirror Server
  • Property Display
4. Samplers: Samplers perform the actual work of JMeter. Each sampler (except Test Action) generates one or more sample results. The sample results have various attributes (success/fail, elapsed time, data size, etc.) and can be viewed in the various listeners.
Various types of samplers are listed below:
  • FTP Request
  • HTTP Request
  • JDBC Request
  • Java Request
  • SOAP/XML-RPC Request
  • WebService(SOAP) Request
  • LDAP Request
  • LDAP Extended Request
  • Access Log Sampler
  • BeanShell Sampler
  • BSF Sampler
  • JSR223 Sampler
  • TCP Sampler
  • JMS Publisher
  • JMS Subscriber
  • JMS Point-to-Point
  • JUnit Request
  • Mail Reader Sampler
  • Test Action
  • SMTP Sampler
5. Logic Controllers: Logic Controllers determine the order in which Samplers are processed.
Various types of Logic Controllers are listed below:
  • Simple Controller
  • Loop Controller
  • Once Only Controller
  • Interleave Controller
  • Random Controller
  • Random Order Controller
  • Throughput Controller
  • Runtime Controller
  • If Controller
  • While Controller
  • Switch Controller
  • ForEach Controller
  • Module Controller
  • Include Controller
  • Transaction Controller
  • Recording Controller
6. Listeners: These are means to view, save, and read saved test results. Listeners are processed at the end of the scope in which they are found. The saving and reading of test results is generic. The various listeners have a panel whereby one can specify the file to which the results will be written. By default, the results are stored as XML files, typically with a ".jtl" extension. Results can be read from XML or CSV format files.
Various types of Listeners are listed below:
  • Sample Result Save Configuration
  • Graph Full Results
  • Graph Results
  • Spline Visualizer
  • Assertion Results
  • View Results Tree
  • Aggregate Report
  • View Results in Table
  • Simple Data Writer
  • Monitor Results
  • Distribution Graph (alpha)
  • Aggregate Graph
  • Mailer Visualizer
  • BeanShell Listener
  • Summary Report
  • Save Responses to a file
  • BSF Listener
  • JSR223 Listener
  • Generate Summary Results
  • Comparison Assertion Visualizer
7. Configuration Elements: Configuration elements can be used to set up defaults and variables for later use by samplers. Note that these elements are processed at the start of the scope in which they are found, i.e. before any samplers in the same scope.
Various types of Configuration Elements are listed below:
  • CSV Data Set Config
  • FTP Request Defaults
  • HTTP Authorization Manager
  • HTTP Cache Manager
  • HTTP Cookie Manager
  • HTTP Request Defaults
  • HTTP Header Manager
  • Java Request Defaults
  • JDBC Connection Configuration
  • Keystore Configuration
  • Login Config Element
  • LDAP Request Defaults
  • LDAP Extended Request Defaults
  • TCP Sampler Config
  • User Defined Variables
  • Random Variable
  • Counter
  • Simple Config Element
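
To tie a few of these elements together, here is a minimal BeanShell Sampler sketch (BeanShell uses Java syntax). It assumes a Test Plan variable named SERVER, as in the Test Plan example above; the vars, log, and SampleResult objects are provided to every BeanShell Sampler by JMeter:

// Minimal BeanShell Sampler sketch. JMeter injects the vars, log,
// and SampleResult objects into the script's scope automatically.
// Assumes the Test Plan defines SERVER = www.example.com.
String server = vars.get("SERVER");   // read a Test Plan variable
log.info("Sampling against host: " + server);

vars.put("LAST_HOST", server);        // store a value for later samplers

// Mark this sample as successful so listeners record it as a pass.
SampleResult.setSuccessful(true);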
References
  1. http://jmeter.apache.org/usermanual/index.html

LoadUI for Load Testing

LoadUI is a free load testing tool that is customized to make load testing in soapUI easier and more efficient. LoadUI allows users to easily distribute load tests to any desired number of LoadUI agents by simply dragging and dropping the tests as required, which makes it easy to simulate load on the application. There are various components in this tool; below is a brief overview of the components available.

1. Generators:

The generators available in LoadUI are the components which generate a load of samples or send trigger messages at a certain rate over a period of time. Different types of generators are available, such as Fixed Load, Fixed Rate, Ramp, Random, Variance, and Virtual User generators. All generators have an On/Off button at their top-left, allowing you to turn them off while the load test is running.

2. Runners:

Runners are the components actually performing some kind of action against your target system and outputting the results, for example sending a web page request or executing a soapUI TestCase. To get you started, there are a few runners available, such as the Web Page Runner, soapUI Runner, Script Runner, and Process Runner.
Runners will handle any incoming message as a trigger and use any parameters in the incoming message as applicable. They have at least two output terminals: one for publishing the results of each request, and one for publishing the current number of running requests (a new message is sent every time this number changes).

3. Analysis:

Analysis components are used for analysing and asserting the results output from a runner. There are two analysis components currently available:
  • Statistics - calculates and outputs statistics on results from a runner. The graph is updated in real time as configured, and its contents are continuously written to the output terminal, which can be used for asserting, logging, etc.
  • Assertion - asserts incoming message values, for example output from a runner or a statistics component, or response times from a runner component. You can of course add as many assertion components as required to assert any desired response times, etc., in your load test.

4. Flow

Flow components are used to control the flow of the load test execution, for example to divide load between different components, filter messages, etc. There are currently two router components available: Splitter and Delay.
  • Splitter - divides incoming messages evenly or randomly between the configured output terminals, allowing you, for example, to split the trigger messages from a load generator across different runners.

  • Delay - pauses incoming messages for the configured time; for example, you could put this between two Web Page Runners to delay the sequential execution between them.

5. Scheduler:

Schedulers are of two types: Scheduler and Interval.
  • Scheduler: Gives an overview of the configured schedule and shows the current position in the schedule.
  • Interval: Sends a start message after a determined interval (Start At), followed by a stop message after a certain period of time (Duration).

6. Output:

Output components write messages from any other component to some destination. Currently only one such component is available: the TableLog.
The TableLog is useful for visualizing individual results, messages, etc. It has settings allowing you to write its contents to a file, making it a good option for logging responses, assertion failures, statistics, etc.

7. Misc

  • Note - Used to add free-text notes to the workspace.
  • soapUI MockService - This will allow you to add a soapUI MockService to your project, in order to simulate a Web Service.

Cross-browser Testing

What is Cross-browser Testing

Cross-browser refers to the ability of a website, web application, HTML construct or client-side script to function correctly across all, or the majority of, web browsers. The term cross-browser is often confused with multi-browser. Multi-browser is a new paradigm in web development that allows a website or web application to provide more functionality over several web browsers, while ensuring that the website or web application is accessible to the largest possible audience without any loss in performance. Cross-browser capability allows a website or web application to be properly rendered by all browsers.

Need for CBT -

The term cross-browser has existed since web development began.

1- The browser market has broadened, and to claim cross-browser compatibility, the website is nowadays expected to support browsers such as Mozilla Firefox, Opera, Google Chrome and Safari in addition to Internet Explorer and Netscape.

2- There has been an attitude shift towards more compatibility in general. Thus, some degree of cross-browser support is expected and only its absence needs to be noted.

Challenges –

Making a cross-browser site is usually pretty simple for basic websites. However, complex sites with a lot of HTML formatting and JavaScript may require significant extra coding in order to be compatible with multiple browsers. Some developers may even generate completely different pages for each browser. While CSS formatting has helped standardize the appearance of web pages across multiple browsers, there are still several inconsistencies between web browsers; therefore, cross-browser design continues to be a necessary aspect of web development.

Developing professional websites can be a daunting task, especially when your client wants all the bells and whistles. As you add more and more of these fancy effects and complex layouts, the chance that your website might not display 100% correctly in other browsers rises. Microsoft now believes in web standards (finally!!), and this could definitely make our lives as web devs/designers easier. Their new product line, Expression, aims to compete with Adobe Flash in the rich media internet experience, and they are claiming that Expression will build standards-based websites out of the box.

TIPS while working on Cross Browser Development –

There will never be an escape from tweaking small things to look perfect in every browser, but there are some tips that can help you develop a better, more browser-friendly website. One approach is to obtain a copy of VMware Server, which is completely free, install an old copy of Windows XP on the VM, and load it with all of the different browsers you can find. It is now possible to install multiple versions of Internet Explorer and have them run simultaneously in standalone mode.

List of Multiple Browsers available in Market –

Microsoft Internet Explorer v3/v4/v5.01/v5.5/v6
Mozilla Firefox v1.5/v2.0.0.6
Opera v9.22
Safari v3.0 beta

CBT Automation Tools –

Selenium RC
QTP 11
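
For a flavor of what Selenium RC automation looks like, below is a minimal, hedged Java sketch. It assumes a Selenium RC server listening on localhost:4444 and uses the DefaultSelenium client API; the target URL is a placeholder:

import com.thoughtworks.selenium.DefaultSelenium;
import com.thoughtworks.selenium.Selenium;

// Sketch: run the same smoke check in two browsers via a local
// Selenium RC server (assumed to be running on localhost:4444).
public class CrossBrowserSmokeTest {
    public static void main(String[] args) {
        for (String browser : new String[] { "*firefox", "*iexplore" }) {
            Selenium selenium = new DefaultSelenium(
                    "localhost", 4444, browser, "http://www.example.com/");
            selenium.start();
            selenium.open("/");
            // The same check runs unchanged in every browser under test.
            System.out.println(browser + " title: " + selenium.getTitle());
            selenium.stop();
        }
    }
}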

Useful Links –

http://www.my-debugbar.com/wiki/IETester/HomePage
http://crossbrowsertesting.com/
http://www.browsrcamp.com/
http://litmus.com/
http://www.netrenderer.com/
http://browsershots.org/

How to Test Web Applications against SQL Injection Attacks


SQL injection is a technique often used to attack databases through a website.

Definition: A malicious user could provide unexpected inputs to the application that are then used to frame and execute SQL statements on the database.

Explained As:
Injection is done by including portions of SQL statements in a web form entry field in an attempt to get the website to pass a newly formed rogue SQL command to the database (e.g. dump the database contents to the attacker).
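
For illustration, here is a hedged Java sketch of the vulnerable pattern being described; the class, method, and variable names are hypothetical:

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

class VulnerableLogin {
    // VULNERABLE sketch, shown only to illustrate the attack surface.
    // Raw form input is concatenated directly into the SQL text, so
    // crafted input can change the structure of the query itself.
    ResultSet login(Connection conn, String userName, String password)
            throws SQLException {
        String query = "SELECT * FROM Users WHERE UserName = '" + userName
                + "' AND Password = '" + password + "'";
        Statement stmt = conn.createStatement();
        return stmt.executeQuery(query);
    }
}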

Using this technique, an attacker can do the following:
  • Bypass logins
  • Access secret data
  • Modify the contents of the website
  • Shut down the MySQL server

Since the consequences of allowing the SQL injection technique could be severe, SQL injection should be tested during the security testing of an application.

Example: The user needs to enter a username and password.

The user enters "John" and "Smith" as the details, so the SQL formed is:
SELECT * FROM Users WHERE UserName = 'John' AND Password = 'Smith';

But if the user enters a UserName like "John'--", the SQL formed is:
SELECT * FROM Users WHERE UserName = 'John'--' AND Password = 'Smith';

Note that the part of the SQL statement after 'John' is turned into a comment (-- starts a comment in SQL). If there were any user with the user name John in the Users table, the application could allow the tester to log in as the user John without knowing the password. The tester could then view John's private information.

Technical Implementations
  • Incorrectly filtered escape characters
This form of SQL injection occurs when user input is not filtered for escape characters and is then passed into an SQL statement.

A user can type input in the user name field like: John' OR '1'='1
The SQL is then built as:
SELECT * FROM users WHERE name = 'John' OR '1'='1';
Because '1'='1' is always true, this fetches every row in the users table, not just the user John.
Or he can enter a username like: a';DROP TABLE users; SELECT * FROM userinfo WHERE 't' = 't

This will build SQL like
SELECT * FROM users WHERE name = 'a';DROP TABLE users; SELECT * FROM userinfo WHERE 't' = 't';

  • Incorrect type handling
This form of SQL injection occurs when a user-supplied field is not strongly typed or is not checked for type constraints. This can take place when a numeric field is to be used in a SQL statement but the programmer makes no checks to validate that the user-supplied input is numeric.

Statement := "SELECT * FROM userinfo WHERE id = " + a_variable + ";"

So the user is expected to enter a numeric value for "id". If he instead enters something like "1;DROP TABLE users", the SQL is built as:
SELECT * FROM userinfo WHERE id=1;DROP TABLE users;

  • Conditional responses
One type of blind SQL injection forces the database to evaluate a logical statement on an ordinary application screen.

SELECT booktitle FROM booklist WHERE bookId = 'OOk14cd' AND '1'='1';

Because '1'='1' is always true, this query behaves exactly as if no extra condition had been injected, and the book is returned as normal. A tester can compare this response with the one produced by an always-false condition (such as '1'='2'); if the two responses differ, the database is evaluating the injected logic, which confirms the vulnerability.
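
A hedged Java sketch of such a probe follows; the URL and parameter name are hypothetical and must be adapted to the application under test:

import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;

// Sketch of a blind-SQL-injection probe: send an always-true and an
// always-false condition and compare the responses.
public class BlindSqlProbe {
    static String fetch(String bookId) throws Exception {
        URL url = new URL("http://www.example.com/books?bookId="
                + URLEncoder.encode(bookId, "UTF-8"));
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        try (InputStream in = conn.getInputStream()) {
            return new String(in.readAllBytes(), "UTF-8");
        }
    }

    public static void main(String[] args) throws Exception {
        String whenTrue  = fetch("OOk14cd' AND '1'='1");
        String whenFalse = fetch("OOk14cd' AND '1'='2");
        // If the two pages differ, the parameter is evaluating the
        // injected condition, a strong sign of blind SQL injection.
        System.out.println("Responses differ: " + !whenTrue.equals(whenFalse));
    }
}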

Mitigation

While coding, consider the following mitigations (a parameterized-query sketch follows this list):
  • Parameterized statements
  • Escaping
  • Pattern checks
  • Database permissions
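
As an example of the first item, here is a minimal Java sketch of a parameterized login query. The table and column names follow the earlier examples, and the Connection is assumed to be obtained elsewhere:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

class SafeLogin {
    // Parameterized sketch: the ? placeholders are bound as data and
    // cannot change the structure of the statement, so input such as
    // "John'--" is treated as a literal value, not as SQL.
    ResultSet login(Connection conn, String userName, String password)
            throws SQLException {
        PreparedStatement ps = conn.prepareStatement(
                "SELECT * FROM Users WHERE UserName = ? AND Password = ?");
        ps.setString(1, userName);
        ps.setString(2, password);
        // For numeric fields such as the earlier "id" example, parse the
        // input as a number first and bind it with ps.setInt(...).
        return ps.executeQuery();
    }
}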


Security Testing for Web Application




What is Security Testing?
Security testing is a process to determine that the information system protects data and maintains functionality as intended.
Security testing can also be defined as the process that determines that confidential data stays confidential and users can perform only those tasks that they are authorized to perform.



Six concepts of Security Testing
  • Confidentiality

A security measure that protects against the disclosure of data or information to parties other than the intended ones.
  • Integrity

Ensures that the intended receiver receives information or data that has not been altered in transmission.
  • Authentication

Allows a receiver to have confidence that the information it receives originated from a specific known source.
  • Authorization

The process of determining that a requester is allowed to receive a service or perform an operation.
  • Availability

Assuring information and communications services will be ready for use when expected.
  • Non-repudiation

Interchange of authentication information with some form of provable time stamp, e.g., with a session id.



Some Security problems in Web Applications
  • SQL injection

In this vulnerability, SQL queries can be injected in the form of user input data, which can result in a number of insecure behaviors. For example, on a login page, if the application is not protected against SQL injection, then anyone can use it to get all the user names and passwords stored in the database. This technique is mostly used in situations where the SQL query is dynamically generated using data or parameters supplied by the user. This vulnerability can be extremely dangerous since SQL is often used for authentication, authorization, billing, etc.

  • Cross Site Scripting (XSS)

An attacker can use this method to execute a malicious script or URL in the victim's browser. Using cross-site scripting, an attacker can use scripts such as JavaScript to steal user cookies and the information stored in them.
Many web applications take some user information and pass it in variables across different pages.
E.g.: http://www.examplesite.com/index.php?userid=123&query=xyz
An attacker can easily pass malicious input or a <script> tag as the 'query' parameter, which can expose important user/server data in the browser. (A minimal output-encoding sketch appears at the end of this list.)

  • Spoofing


The creation of hoax look-alike websites or emails is called spoofing.
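
Returning to the XSS example above: one common defence is to HTML-encode user-controlled values before writing them into the page. Below is a minimal, hand-rolled Java sketch; real applications should prefer a vetted encoding library:

// Sketch: HTML-encode the characters that commonly enable XSS, so
// input such as <script> is rendered as text instead of executing.
class HtmlEscaper {
    static String escapeHtml(String input) {
        StringBuilder out = new StringBuilder(input.length());
        for (int i = 0; i < input.length(); i++) {
            char c = input.charAt(i);
            switch (c) {
                case '<':  out.append("&lt;");   break;
                case '>':  out.append("&gt;");   break;
                case '&':  out.append("&amp;");  break;
                case '"':  out.append("&quot;"); break;
                case '\'': out.append("&#39;");  break;
                default:   out.append(c);
            }
        }
        return out.toString();
    }

    public static void main(String[] args) {
        // Prints: &lt;script&gt;alert(&#39;x&#39;)&lt;/script&gt;
        System.out.println(escapeHtml("<script>alert('x')</script>"));
    }
}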



Tools used for Security Testing
  • Netsparker


This tool can detect SQL Injection + cross-site scripting issues.
Once a scan is complete, it displays the solutions besides the issues and enables the user to see the browser view and HTTP request/response.

  • Websecurify

Websecurify is a very easy-to-use and open source tool which automatically identifies web application vulnerabilities by using advanced discovery and fuzzing technologies.
It can create simple reports (that can be exported into multiple formats) once a scan has run.
The tool is also multilingual and extensible with add-on support.

  • Wapiti

Wapiti is an open source tool that scans the web pages of deployed web applications, looking for scripts and forms where it can inject data.
It is built with Python and can detect:
- File handling errors (local and remote include/require, fopen, readfile…)
- Database, XSS, LDAP and CRLF injections (HTTP response splitting, session fixation)
- Command execution detection (eval(), system(), passthru())

  • N-Stalker

The free edition performs a restricted, yet still powerful, set of web security assessment checks compared to the paid versions of the application.
It can check up to 100 web pages at once, including web server and cross-site scripting checks.

  • Scrawlr

Scrawlr is free software for scanning SQL injection vulnerabilities on the web applications.
It is developed by HP Web Security Research Group in coordination with Microsoft Security Response Center.

  • Watcher

It is a plug-in for Fiddler and works as a passive-analysis tool for HTTP-based web applications.
Watcher runs silently in the background and interacts with the web-application to apply 30+ tests (where new ones can be added) while the user browses.
It will identify issues like cross-domain form POSTs, dangerous context-switching between HTTP and HTTPS, etc.



  • X5s


X5s is again a plug-in for Fiddler, just like Watcher, which is designed to find encoding and character transformation issues that can lead to XSS vulnerabilities.
It simply tests user-controlled input using special characters like <, >, and ', and reviews how the output encodes these special characters.