LA’s new Broad Museum (pronounced “brode”) is a work of art in itself. Housed in a new $140m three-story building, it hosts over 2,000 works from Eli and Edythe Broad’s unrivaled contemporary art collection, including pieces by art giants Jeff Koons, Roy Lichtenstein, Andy Warhol and many other iconic artists. No surprise that it immediately became the heart of LA’s art scene and the hottest ticket in town.
Mr. Broad, a billionaire who started two Fortune 500 companies, has shaped LA with his philanthropy. By offering free public admission, he created a cultural center and a very popular destination. The museum opened in 2015 to great fanfare, attracting over 200,000 visitors in the first 12 weeks. Sporting a slick mobile app and a dedicated technology team, The Broad strives to serve a new generation of always-connected art lovers.
Becoming Too Popular
Due to its immense popularity, tickets were hard to come by; a visit required booking online months in advance. To increase accessibility, The Broad began releasing tickets in batches on the first of every month. Unfortunately, this compressed a month’s worth of traffic into minutes: 10,000 users tried to book tickets near-simultaneously, exposing performance bottlenecks and scaling issues. The first eight ticket releases were plagued by these issues, the last resulting in over three hours of downtime. Bad press and social media anger spread quickly, putting the museum’s brand and public image at stake. Despite the tickets being free, the museum paid a heavy toll: hundreds of man-hours went into customer support and short-term fixes.
They discovered that cloud infrastructure and load balancers didn’t scale quickly enough. Testing the ticketing system in a real-world scenario became a significant problem. Every ticket release was larger than the last and became marred by new technical issues. The technology team acted valiantly, fixing issues as soon as they were discovered. However, every fix exposed a new set of scaling challenges which continued to snowball with the increased popularity.
Agile, Methodical Testing to the Rescue
With only three weeks before the next ticket release, the entire checkout system required testing under max load. Time was of the essence. Apica’s performance engineers quickly created a testing plan, which The Broad’s technology partner executed. Scripts were recorded that simulated real user scenarios, taking into account that users often booked different types of tickets in the same transaction.
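The case study doesn’t publish the actual scripts, but the idea can be sketched in a few lines of Python. Everything below is hypothetical: the ticket types, the `book_tickets` stub (in practice this would be an HTTP POST to the real checkout endpoint), and the user counts are illustrative, not The Broad’s or Apica’s real setup.

```python
import random
from concurrent.futures import ThreadPoolExecutor

# Hypothetical ticket types; real scripts would mirror the museum's catalog.
TICKET_TYPES = ["general", "special-exhibition", "group"]

def build_transaction(rng):
    """One simulated visitor: 1-4 tickets, possibly of mixed types,
    reflecting that users often booked different ticket types together."""
    return [rng.choice(TICKET_TYPES) for _ in range(rng.randint(1, 4))]

def book_tickets(transaction):
    """Stub for the checkout call. Always succeeds here; a real script
    would issue the request and record status code and latency."""
    return {"status": 200, "tickets": transaction}

def run_load_test(num_users=10_000, concurrency=200, seed=42):
    """Fire num_users transactions with bounded concurrency and report
    the failure rate, mimicking a burst ticket release."""
    rng = random.Random(seed)
    transactions = [build_transaction(rng) for _ in range(num_users)]
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        results = list(pool.map(book_tickets, transactions))
    failures = sum(1 for r in results if r["status"] != 200)
    return failures / num_users

if __name__ == "__main__":
    print(f"failure rate: {run_load_test():.0%}")
```

In a real engagement, a dedicated load-testing tool would replace the thread pool: it can ramp virtual users gradually, distribute load geographically, and chart failure rate against concurrency, which is what surfaces bottlenecks like an undersized database instance.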
Over the next week, 100+ performance tests were executed. The initial testing phase resulted in a 55% failure rate under max load. Digging deeper, The Broad discovered that a database instance wasn’t up to par and made the necessary fixes.
In subsequent tests their application performed flawlessly, achieving a 0% failure rate under max load! High-fives were exchanged. Beers were downed.
Apica Means More Sleep
The next ticket release went off without a hitch. There was no downtime, and the server CPU hummed along at a cool 6%. Load testing with Apica provided a few additional benefits: the bug fixes improved performance under normal conditions, and the museum saved money by not spinning up additional cloud servers.
Most importantly, The Broad’s technology team was finally able to catch up on some well-deserved sleep.
Key Takeaways
- Carefully consider infrastructure requirements when changing business processes.
- Downtime costs not only potential revenue but also hundreds of man-hours of troubleshooting.
- Auto-scaling cloud infrastructure doesn’t scale quickly enough for bursts of traffic.
- Proactive load testing can save you money by reducing the need for cloud infrastructure.