- July 1, 2020
- Posted by: Rajni Singh
- Category: Blogs
What is the need to do Distributed Testing?
When we run a Performance Test with a high number of Virtual Users, the test machine tends to get overwhelmed: high CPU and memory usage can cause the client/test machine to fail. Even when the test machine bears the load, it tends to report inflated response times, 100% to 1000% higher than the actual values. We can overcome this issue by having more than one client machine generate load simultaneously, keeping CPU and memory utilization on each machine at an optimal level. This kind of setup for a Performance Test is also known as a Master-Slave setup.
This Master-Slave setup also helps in doing performance tests across multiple locations.
Technologies / Tools Used
- JMeter to create performance test script (.jmx file)
- AWS EC2 instances to act as master and slave machines.
AWS EC2 Instances Setup
- Master Server: holds the Test Plan (the .jmx file).
- Slave/Agent Machines: hold the supporting CSV files and the rmi_keystore.jks file.
Prerequisites for Distributed Performance Testing using JMeter
- All servers (Master and all Slaves) should have Java installed, preferably the same version.
- All servers (Master and all Slaves) should have exactly the same version of JMeter installed.
- The Master should be able to communicate with each of its slave servers.
- The rmi_keystore.jks file created on the Master should be copied to all slave systems.
- JMeter-server should be up and running on all slaves.
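The version checks above can be done quickly from a terminal on each machine (assuming JMeter's bin directory is on the PATH; adjust the path otherwise):

```shell
# Run on the Master and on every Slave; the reported versions should match.
java -version      # prints the installed Java version
jmeter --version   # prints the installed JMeter version
```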
Steps of Execution
- Create a test plan in JMeter, or record a script using the BlazeMeter Chrome extension.
- Spin up an AWS EC2 instance that will act as the Master.
- Copy the Test Plan (.jmx file) from the local system to the AWS EC2 Master instance. Create the rmi_keystore.jks file on the Master and define folders for saving the test script, results (.jtl or .csv file) and HTML reports.
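One way to create the keystore is with the JDK's keytool, run on the Master from JMeter's bin directory. JMeter's RMI-over-SSL transport expects the key alias to be "rmi"; the distinguished name below is a placeholder supplied only to avoid interactive prompts:

```shell
# Run on the Master, inside JMeter's bin directory.
# -validity is in days; "changeit" is a placeholder password.
keytool -genkey -keyalg RSA -alias rmi \
  -keystore rmi_keystore.jks \
  -storepass changeit -keypass changeit \
  -validity 7 -keysize 2048 \
  -dname "CN=jmeter-master"
```

JMeter also ships a create-rmi-keystore script in its bin directory that wraps a similar keytool invocation.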
- Spin up ‘n’ AWS EC2 instances, where ‘n’ is the number of slaves required, in the respective AWS regions. Here, we take three slaves, one each in the AWS regions ‘US East (Ohio)’, ‘US West (Oregon)’ and ‘US West (N. California)’.
- Ensure that the Master and Slaves are in the same subnet, or establish a peering connection between the Master and each slave that is not in the same subnet.
- Configure Route Table and Security Groups of Master and Slave Servers.
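As a sketch of the security-group step, the rule below opens JMeter's default RMI registry port (TCP 1099) between the machines; the security-group ID and CIDR are hypothetical placeholders, and the additional RMI data ports JMeter negotiates must be reachable as well:

```shell
# Allow Master<->Slave traffic on the RMI registry port.
# sg-0123456789abcdef0 and 10.0.0.0/16 are placeholders; use your own.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 1099 \
  --cidr 10.0.0.0/16
```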
- Copy rmi_keystore.jks file created in Master across all Slave Systems.
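Copying the keystore to each slave can be done with scp from the Master; the key-pair file, login user and JMeter install path below are assumptions to adapt to your setup (the IPs are the slaves used later in the run command):

```shell
# Push the keystore into JMeter's bin directory on every slave.
# my-ec2-key.pem, ec2-user and the JMeter path are placeholders.
for slave in 10.0.0.98 172.31.28.9 172.31.3.44; do
  scp -i my-ec2-key.pem rmi_keystore.jks \
    ec2-user@"$slave":~/apache-jmeter/bin/
done
```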
- Start JMeter-server on all slave machines and ensure that it is up and running on each of them. This can be done either by manually logging in to each slave EC2 instance and starting the server, or by using the AWS Systems Manager “Run Command” service.
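On each slave, the server is started from JMeter's bin directory; passing the slave's own private IP as java.rmi.server.hostname (10.0.0.98 here is just the first slave from this setup) helps the Master reach the RMI stubs the slave hands back:

```shell
# Run on the slave, inside JMeter's bin directory.
# Replace 10.0.0.98 with this slave's private IP.
./jmeter-server -Djava.rmi.server.hostname=10.0.0.98 &
```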
- Run the Test in Non-GUI mode in the Master machine and wait for the test to finish.
./jmeter -n -t ./jmeterscripts/Afour_Site_TestPlan.jmx -l ./jmeteroutputresults/Afour_Site_Results.jtl -e -o ./jmeterreports/Afour_Site_Report -R 10.0.0.98,172.31.28.9,172.31.3.44
Where the “-n” flag runs JMeter in non-GUI mode, “-t” gives the location of the .jmx script file, “-l” the location of the result file, “-e” tells JMeter to generate a report at the end of the load test, “-o” the location of the HTML report folder, and “-R” the comma-separated list of remote (slave) hosts to run the test on.
- Once the test is finished, fetch Test results (.jtl / .csv file) and HTML report from Master.
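Fetching the artifacts from the Master can again be done with scp; the key-pair file, login user and Master IP placeholder below are assumptions, while the file paths match the run command above (relative to JMeter's bin directory):

```shell
# Pull the result file and the HTML report folder from the Master.
# my-ec2-key.pem, ec2-user and <master-ip> are placeholders.
scp -i my-ec2-key.pem \
  ec2-user@<master-ip>:~/apache-jmeter/bin/jmeteroutputresults/Afour_Site_Results.jtl .
scp -r -i my-ec2-key.pem \
  ec2-user@<master-ip>:~/apache-jmeter/bin/jmeterreports/Afour_Site_Report .
```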
The result file (.jtl / .csv) can be viewed in JMeter by importing it into any Listener.
The JMeter HTML report provides an overall analysis and graphs; some of the graphs are shown below.
Distributed performance testing using JMeter is gaining popularity and demand nowadays because applications are served to, and accessed from, multiple geographical locations. It also identifies bottlenecks that appear when a higher number of users access the application on a special day or event. It has the following advantages:
- It allows testing an application against traffic from multiple geographical locations.
- It allows testing an application against a high number of users which might not be possible from a single test machine.
- It protects against any distorted application performance results because of bottlenecks present in the test engine.
- It prevents any sudden failure of the performance test run because of high CPU usage or memory usage.