Development teams need to know how well their web application server platforms can keep up with a steadily growing number of users. The load testing experts at Apica recently took on a project addressing the question of “how much power” a business’s cloud implementation needs to run web applications on the Amazon Web Services platform. The results offer an insightful look at cloud server scaling: how best to configure the cloud platform’s automatic server capacity scaling features and avoid bottlenecks in the load testing process.
About the Project
At its core, the cloud server scaling project asks one question: how much server power does your application need to avoid crashing or degrading performance for your end users? When hosting an application on a cloud service, the goal is to grow the platform’s resources on demand so you pay only for the resources you actually use, rather than paying in advance for a large pool of resources that may never be needed. This is one way to avoid unnecessary hosting overhead.
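As a rough sketch of that pay-for-what-you-use model (not part of Apica’s test harness), an AWS Auto Scaling group is commonly given a small baseline and a hard ceiling; the group name, launch template, and subnet IDs below are illustrative assumptions:

```python
import boto3  # AWS SDK for Python

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Hypothetical group: a two-instance baseline that can grow on demand,
# so you pay for two instances until traffic requires more.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-app-asg",  # illustrative name
    LaunchTemplate={"LaunchTemplateName": "web-app-template"},  # assumed to exist
    MinSize=2,         # baseline capacity you always pay for
    MaxSize=10,        # hard ceiling on spend
    DesiredCapacity=2,
    VPCZoneIdentifier="subnet-aaa,subnet-bbb",  # placeholder subnet IDs
)
```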
Scaling server resources to meet demand involves more than waiting for end users to report that the system is slow and then adding servers and databases. Instead, cloud server scaling should be configured preemptively so that overload situations never disrupt service. This project runs the application against a virtual user count designed to emulate application server resource usage under both expected and surge conditions. The project’s tools analyze the high traffic rates and expose usage results through a backend API for human interpretation, which determines how server scaling should be configured on the AWS platform. The results can also serve as guidelines for scaling on other platforms.
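Apica’s tooling is not shown here, but the core idea of driving a target at a controlled request rate can be sketched in a few lines of Python; the endpoint URL, rate, and duration below are placeholders, and a real test would distribute this across many load generators:

```python
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TARGET_URL = "https://example.com/api/health"  # placeholder endpoint
REQUESTS_PER_SECOND = 50   # per generator; scale out for thousands of req/s
DURATION_SECONDS = 60

def hit(url: str) -> int:
    """Issue one GET and return the HTTP status code."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return resp.status

# Fire REQUESTS_PER_SECOND requests each second and count failures,
# sleeping off whatever is left of each one-second tick.
with ThreadPoolExecutor(max_workers=100) as pool:
    for _ in range(DURATION_SECONDS):
        tick = time.monotonic()
        futures = [pool.submit(hit, TARGET_URL) for _ in range(REQUESTS_PER_SECOND)]
        errors = sum(1 for f in futures if f.exception() is not None)
        print(f"sent={len(futures)} errors={errors}")
        time.sleep(max(0.0, 1.0 - (time.monotonic() - tick)))
```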
Project Thresholds and Justification
The AWS project was built around test traffic of 8,000 requests per second, with peaks of 12,000 requests per second. While that represents a large number of virtual users, the test needed to push the platform to its limits, so the final run drove 16,000 requests per second. Testing at this level provides information that can guide preparation for service growth, in addition to showing how the system performs under extreme surge traffic. The data requests were deliberately kept small so that Internet connection bandwidth would not become a bottleneck and skew the results.
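The project does not state the payload size, but a quick back-of-the-envelope check shows why small requests matter; assuming a 1 KB response (an assumed figure, not one from the project), the peak rate of 16,000 requests per second works out to roughly 131 Mbit/s of sustained bandwidth:

```python
# Rough bandwidth check for the peak test rate.
# The 1 KB payload is an assumed value; the project only states
# that requests were kept small.
requests_per_second = 16_000
payload_bytes = 1_024  # assumed average response size

bits_per_second = requests_per_second * payload_bytes * 8
print(f"{bits_per_second / 1e6:.0f} Mbit/s")  # ~131 Mbit/s
```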
Project Results
The test results provided a wealth of data for configuring auto scaling rules on the AWS platform, including a clear relationship between the cloud server scaling tests and application behavior. This information shows when the application will overload the server, identifying the point at which AWS resource usage must increase to prevent a collapse in performance. In essence, the results tell the development team how to configure AWS so that scaling needs are anticipated and applied automatically. Instead of guessing how much server power is needed to host the application, the data pinpoints when the platform should bring unused server resources online to keep the application running.
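One illustrative way to encode such findings (the threshold below is a placeholder, not a number from the project) is a target-tracking scaling policy that adds instances well before the measured overload point is reached:

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Hypothetical policy: hold average CPU near 60% so capacity is added
# before the collapse point identified during load testing.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-app-asg",  # illustrative name from the earlier sketch
    PolicyName="target-cpu-60",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 60.0,  # placeholder; derive from your own test results
    },
)
```

With a policy like this in place, the Auto Scaling group launches or terminates instances automatically as load moves toward or away from the target, which is the “anticipate and apply” behavior the test data is meant to inform.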