- April 11, 2014
- Posted by: Mahesh Kulkarni
- Category: Blogs
Software performance testing services are a key part of overall quality assurance (QA). They involve testing software applications to ensure that they will perform well under their expected workload. The features and functionality supported by a software system are not the only concern; the application's efficiency is equally mission critical. The goal of any performance testing service is not just to find efficiency issues, but also to help eliminate performance bottlenecks and eventually arrive at a known configuration of hardware and software that supports the expected user load reliably and consistently.
Performance testing is done for end-to-end system scenarios as well as at different application layers (e.g. web application layer, database layer, web services or enterprise services layer, UI layer).
Performance issue symptoms tend to look similar, but they can have very different causes:
• Inefficient software code (unoptimized or not allowing concurrent use by design)
• Architecture / design issues (putting more load on certain components in the system)
• Platform tuning issues (for the web server, database, load balancers etc.)
• Insufficient hardware resources (for the given user load)
A meaningful performance testing project needs some groundwork. The first important question is: what is the expected load pattern on the system? Other questions follow, such as what the peak load on the system is, and what the expected size of data in the system will be at the end of two years.
To ensure that all the groundwork is adequately done, a Performance Test Plan is prepared, which includes the performance test specification. This document describes the performance goals.
Goals are defined considering the user load and the types of tests to be done (stress, load, long-haul, volume testing, etc.). The goals also define the expected response times of various components and the expected resource utilization. The plan also describes the key actors of the system and the workflows defined for each actor. User load is distributed across all the workflows, and different test scenarios contain different combinations of workflows and user load distribution.
Another important aspect of the performance test groundwork is test data generation; the performance test plan document describes the approaches used to generate test data. Choosing the right performance testing tool(s) and designing a robust, maintainable performance testing framework is another important area when planning performance testing, and the document also describes the approach to tool selection.
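By way of illustration only (the original post does not include code), a minimal Java sketch of one test data generation approach is shown below: it writes a CSV of synthetic user accounts that the test scripts can later read row by row. The field names, user count and file name are assumptions, not details from the project.

```java
import java.io.FileWriter;
import java.io.IOException;
import java.util.UUID;

// Hypothetical test data generator: writes a CSV of synthetic user accounts
// that load test scripts can read back, one row per virtual user.
public class TestDataGenerator {
    public static void main(String[] args) throws IOException {
        int userCount = 500; // assumed size of the concurrent-user pool
        try (FileWriter out = new FileWriter("users.csv")) {
            out.write("username,password,accountId\n");
            for (int i = 1; i <= userCount; i++) {
                out.write(String.format("loaduser%04d,Passw0rd!%04d,%s%n",
                        i, i, UUID.randomUUID()));
            }
        }
        System.out.println("Generated users.csv with " + userCount + " data rows");
    }
}
```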
The first challenge is selecting an appropriate tool. During tool identification, one of the more complex issues relates to encryption: the web page requests pass encrypted strings, and the mechanism used for the encryption is Triple DES. This problem can be solved by creating a custom library, which is easy to integrate with JMeter using its BeanShell scripting extensions. This is the major deciding factor for the tool, as it is difficult to address the issue using other tools.
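As a rough sketch of what such a utility might look like (the actual library is not shown in the post), the Java helper below performs Triple DES (DESede) encryption. The key handling, cipher mode, padding and Base64 output are assumptions rather than the application's real scheme.

```java
import javax.crypto.Cipher;
import javax.crypto.SecretKey;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.DESedeKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Sketch of a Triple DES (DESede) helper that a BeanShell script could call.
// The 24-byte key, ECB mode, PKCS5 padding and Base64 encoding are assumptions,
// not the application's actual encryption scheme.
public class TripleDesUtil {

    public static String encrypt(String plainText, byte[] keyBytes) throws Exception {
        SecretKey key = SecretKeyFactory.getInstance("DESede")
                .generateSecret(new DESedeKeySpec(keyBytes));
        Cipher cipher = Cipher.getInstance("DESede/ECB/PKCS5Padding");
        cipher.init(Cipher.ENCRYPT_MODE, key);
        byte[] encrypted = cipher.doFinal(plainText.getBytes(StandardCharsets.UTF_8));
        return Base64.getEncoder().encodeToString(encrypted);
    }
}
```

Compiled into a JAR and placed on JMeter's classpath, a helper like this can be invoked from a BeanShell PreProcessor and its output stored in a JMeter variable (for example via vars.put), so the encrypted string can be substituted into the recorded request.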
Another interesting issue is recording file upload requests from the browser. Older versions of JMeter do not record these requests completely, so the free web debugging proxy Fiddler can be used to capture the complete request details and fill in the corresponding HTTP requests in JMeter. One of the key tasks while developing performance test scripts is verification of the responses. Since the responses are dynamic, regular expressions (the Regular Expression Extractor post-processor in JMeter) are used to extract information from the previous response body. Useful information is stored in variables, and the variables are passed to the next request(s). After recording the scripts, the next task is to parameterize them to use test data from CSV files. It generally takes about two weeks to complete the test recording, parameterization and validation of the scripts for concurrent users.
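Conceptually, this correlation step works like the hypothetical Java snippet below: a regular expression pulls a dynamic value out of the previous response body so it can be passed into the next request. The __VIEWSTATE field name, sample value and pattern are illustrative assumptions only.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Conceptual illustration of response correlation. JMeter's Regular Expression
// Extractor applies a pattern like this to the previous response body and
// stores the match in a variable for the following requests.
// The __VIEWSTATE field and sample value are assumptions, not from the post.
public class ResponseCorrelationExample {
    public static void main(String[] args) {
        String responseBody =
                "<input type=\"hidden\" name=\"__VIEWSTATE\" value=\"dDw3ODU2\" />";

        Pattern pattern = Pattern.compile("name=\"__VIEWSTATE\" value=\"([^\"]+)\"");
        Matcher matcher = pattern.matcher(responseBody);

        if (matcher.find()) {
            String token = matcher.group(1);
            // In JMeter this would be vars.put("viewState", token), after which
            // the next HTTP request can reference ${viewState}.
            System.out.println("Extracted token: " + token);
        }
    }
}
```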
The next task is to prepare a performance test pass (i.e. one performance test iteration). Preparing a test pass involves setting up the performance test environment and the load generator machines. The performance test environment is a small replica of the production environment, with different VMs playing the roles of the corresponding VMs in production. Matching the production environment, it is ensured that the app server and the database server are on separate VMs. SQL Server Enterprise Edition is installed on the database server VM so that SQL profiling can be enabled.
Before every test pass run, OS performance counter collection is started on the app server as well as the DB server, and SQL Profiler is started on the database server. The OS performance counters measure system metrics, while SQL Profiler identifies the most time-consuming queries.
A detailed performance test report is prepared at the end of each performance test pass. The report contains details such as the test scenarios, system response times from the IIS logs, counters from Windows Perfmon, application logs, error logs, the SQL profiling report and test data accuracy. The report then presents an analysis and observations based on this information.
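As an illustration of how response times might be summarized from the IIS logs (the post does not describe the actual reporting scripts), the hedged Java sketch below reads a W3C-format log and reports average and 90th percentile times. The log file name and the assumption that time-taken (in milliseconds) is the last column are hypothetical; a real parser should locate the column from the #Fields: header.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Hypothetical sketch: summarize response times from an IIS W3C log, assuming
// the time-taken field (milliseconds) is the last column of each entry.
public class IisLogResponseTimes {
    public static void main(String[] args) throws IOException {
        List<Long> timesMs = new ArrayList<>();
        for (String line : Files.readAllLines(Paths.get("u_ex140411.log"))) {
            if (line.startsWith("#") || line.trim().isEmpty()) continue; // skip headers
            String[] fields = line.trim().split("\\s+");
            timesMs.add(Long.parseLong(fields[fields.length - 1]));
        }
        if (timesMs.isEmpty()) {
            System.out.println("No log entries found");
            return;
        }
        Collections.sort(timesMs);
        long sum = 0;
        for (long t : timesMs) sum += t;
        System.out.println("Requests: " + timesMs.size()
                + ", average ms: " + (sum / timesMs.size())
                + ", 90th percentile ms: " + timesMs.get((int) (timesMs.size() * 0.9)));
    }
}
```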
The initial few test passes focus on identifying software bottlenecks. Once those bottlenecks are fixed, the next set of test passes focuses on tuning the system for higher efficiency, through activities such as IIS tuning and database server tuning. IIS tuning covers things like disabling IIS logs, disabling the IIS 'ASP Debugging Mode', tuning the 'ASP Threads Per Processor' and 'ASP Queue Length' parameters, and enabling 'IIS HTTP Compression' and 'IIS 7 Output Caching'. Database server tuning covers SQL query optimization and index optimization. The database recovery model is also set to FULL recovery, and the initial database size is set based on anticipated growth.
The final performance test pass is conducted on the optimized build, using an environment matching production and loaded with the equivalent of two years of base data in the system. The business metrics and system metrics are captured as baseline metrics for future benchmarking.