This is blog post #5 in a five-post series.
Load testing is an indispensable tool for efficiently managing web application infrastructure. The process yields insight into how an implementation holds up under different traffic levels, and into how to scale hardware for individual system components as demand grows. Apica load testing helps provide a consistent, smooth user experience for your business’s customers while minimizing the infrastructure costs required to deliver that high-quality experience.
Developing a Server Asset Scaling Strategy
When designing web application infrastructure, the production team needs to answer the question of “how much power” the implementation requires. Add too much power, and your business wastes money on unused hardware. Provide too little, and the web application will have sluggish response times, or fail outright when too many end users access the service simultaneously.
Failing to provide a smooth, responsive user experience tends to drive your audience toward alternative services. The AWS test project determines how much server power a web application requires under typical, growth, and overload conditions, giving the host clear guidelines on when, and under how heavy a load, the system will fail.
Load Testing-Assisted Capacity Planning
Load testing shines when helping web application hosts devise a capacity planning strategy. Capacity planning examines the individual components of the larger system and identifies each component’s overload point. The individual parts of the system do not scale evenly, so it’s not possible to simply turn up a dial when more power is needed.
When looking at the system hierarchy, component A may be the smallest unit that can be scaled evenly as more people access the system. Up the hierarchy, there’s component B, which can handle the traffic from up to four component A’s. If demand requires upgrading from five component A’s to six, there’s no need to add another component B. However, moving from eight component A’s to nine requires a third component B. If the additional component B is added too early, it sits idle and wastes money; if it is added too late, the system gets overloaded. Understanding the relationship between these components helps devise configuration settings that allow the AWS platform to add assets automatically when needed.
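The arithmetic above can be sketched in a few lines. This is a minimal illustration, not Apica’s actual scaling logic: the four-to-one ratio comes from the example in this section, and the function and constant names are hypothetical.

```python
import math

A_PER_B = 4  # assumed capacity from the example: one component B serves up to four component A's

def required_b(count_a: int) -> int:
    """Number of component B's needed to support count_a component A's."""
    return math.ceil(count_a / A_PER_B)

def needs_new_b(current_a: int) -> bool:
    """True when adding one more component A crosses a B-capacity boundary."""
    return required_b(current_a + 1) > required_b(current_a)
```

Going from five A’s to six leaves `required_b` at 2, so no new B is needed; going from eight to nine pushes it to 3. A check like `needs_new_b` is the kind of threshold an auto-scaling rule encodes.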
Looking at Bottlenecks and Scaling
Capacity planning is equally important when executing the load test itself. If the testing infrastructure cannot simulate enough simultaneous virtual users, the results will not be accurate. Apica’s team encountered several bottlenecks when configuring the load test program on the AWS platform:
1. Operating system limitations that prevented the application from using enough resources.
2. Nginx web server configuration limits that had to be tuned alongside AWS C-instance types to get the most efficient utilization of server resources.
3. A Kinesis implementation that had to be configured to batch-process requests in order to simulate 19,032 requests per second.
4. DynamoDB query results that had to be cached for longer durations to avoid unnecessary traffic and the associated DynamoDB costs.
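To make the Kinesis batching point concrete, here is a minimal sketch of chunking events into batches for the Kinesis `PutRecords` API, which accepts at most 500 records per call. The helper names and the `user_id` partition key are hypothetical; the actual project configuration is not published here.

```python
import json

MAX_BATCH = 500  # Kinesis PutRecords accepts at most 500 records per request

def chunk(records, size=MAX_BATCH):
    """Yield successive batches of at most `size` records."""
    for i in range(0, len(records), size):
        yield records[i:i + size]

def to_kinesis_entries(events):
    """Convert raw event dicts into the PutRecords entry format."""
    return [
        {"Data": json.dumps(e).encode("utf-8"), "PartitionKey": str(e["user_id"])}
        for e in events
    ]
```

Each batch would then be sent with a call such as boto3’s `client.put_records(StreamName=..., Records=batch)`; at the cited rate of 19,032 requests per second, batching collapses thousands of individual calls into a few dozen per second.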
After addressing the bottleneck issues, the team was able to scale individual layers within the application and identify the amount of power needed to run the initial load.
Teams looking to implement more reliable load tests can look to the Apica AWS load test project for help in anticipating hiccups in configuring the test and in applying the test results for the client. Clients can delve into the project as an example of how to get the most out of their hosting infrastructure and prepare for future growth. The project is an excellent case study in how to perform load tests correctly, and what to do with the results.
Check out the short video from EMA highlighting everything you need to know about this topic.