Part One of the Humanizing Software Quality Series

A brand is not defined by a logo or a tagline. Nor is it encompassed by a website or marketing materials.

A brand represents the sum total of all experiences and impressions, good and bad, that customers and employees have with your company. Humans – whether the CTO giving a talk at a tradeshow, or a customer support representative on the phone – make an undeniable impact on the organization’s brand.

However important those personal interactions are, in most modern industries the majority of a customer’s touchpoints with a business now happen through software. Software defines the customer’s journey with a brand – meaning user journeys are at the center of software quality, now more than ever.

What constitutes a user journey?

For the purpose of this series, we’re talking about digital user journeys, which flow through software and infrastructure rather than through people in the field or at service desks. (Employees also have their own connections to digital user journeys that impact customer experience, which we’ll cover later.)

A typical user journey includes all the actions in a human user’s session with a website or application – the screens they are presented with, the objects they click on, and the screens and data payloads returned in response.

Along with that primary workflow of on-screen requests and responses, the journey also includes the qualities users experience directly, such as response times, the frequency of interruptions, visual consistency, and the accuracy of results.
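
To make that concrete, here is a minimal sketch, in Python against hypothetical endpoints and an assumed two-second response budget, of what scripting such a journey might look like. Each step mirrors an on-screen action and checks the things a user would actually notice: whether the step succeeded, and how long it took.

```python
# Minimal sketch of a scripted user journey check (hypothetical endpoints and
# budgets). Each step mirrors an on-screen action and asserts on what the user
# would actually experience: success and response time.
import time

import requests

BASE = "https://shop.example.com"   # hypothetical storefront
MAX_LATENCY_S = 2.0                 # assumed per-step response budget


def step(session: requests.Session, method: str, path: str, **kwargs) -> requests.Response:
    """Perform one journey step and check what the user would notice."""
    start = time.monotonic()
    resp = session.request(method, BASE + path, timeout=10, **kwargs)
    elapsed = time.monotonic() - start
    assert resp.status_code < 400, f"{path} failed with HTTP {resp.status_code}"
    assert elapsed < MAX_LATENCY_S, f"{path} took {elapsed:.2f}s"
    return resp


def run_journey() -> None:
    with requests.Session() as s:
        step(s, "GET", "/")                                    # landing screen
        step(s, "GET", "/catalog?query=holiday-dinner")        # browse and search
        step(s, "POST", "/cart", json={"sku": "SKU-1234", "qty": 1})
        order = step(s, "POST", "/checkout", json={"payment": "test-token"})
        # The accurate result the user cares about: a concrete delivery promise.
        assert "delivery_promise" in order.json()


if __name__ == "__main__":
    run_journey()
```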

All of these aspects of a digital user journey should be tested by a person. There’s nothing wrong with wanting the reassurance of good old-fashioned human-based user acceptance testing (UAT).

Indeed, there was a time when applications were vertical stacks of software atop networks and servers wholly owned by the company. There might have been one or two integrations with data or transaction services that resided elsewhere, but for the most part, we could validate user journeys with procedural test scripts, and have software testers run regressions and UAT cycles for any features that changed.

In today’s distributed cloud application environments, no matter how many front-end tests an organization runs, there are still infinite ways something could go wrong in front of users.

If we are to save our applications from chaos, we must find ways to make those infinite failure possibilities much less probable with more thorough testing. How can we still follow that user journey, when almost every aspect of an application is made up of API calls and ephemeral microservices that are hidden behind the scenes?

Distracted by distributed, dynamic architectures

The decentralization of our software architectures has become a Gordian knot, too tangled to unravel through UAT and traditional functional and load testing scripts. Almost any on-screen element can change dynamically between test runs because of calls to external services and variable data inputs, so it always seems like the software is changing too fast to test.

As a consequence, devtest teams have often turned away from testing user journeys and focused their attention elsewhere, creating several new side effects in the process.

Hygiene before health. When sequential test plans and scripts break against dynamic on-screen results, teams start spending their time on test hygiene and repairing broken tests, instead of focusing on the health of the user experience.

Component-level testing. Changes happen much more slowly at the middle tier and integration layers than at the UI, so testing third-party services and back ends directly can offer longer-lasting reassurance about the integrity of individual components (see the sketch after this list) – even if those components don’t behave as expected once combined in production.

Metrics and observability data hoarding. The cost of storage alone used to make this practice prohibitively expensive, but now that bandwidth and capacity are cheaper, many companies have started capturing and warehousing the ever-increasing volume of data emanating from observability agents and streaming sources, then sorting and analyzing it later.
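
As an illustration of the component-level habit mentioned above, here is a minimal sketch, with hypothetical function and service names, of a test that validates an inventory component against a stubbed supplier client. The test passes reliably on every run, which is exactly the appeal, and also exactly why it says nothing about how the real supplier, deli, and courier systems will behave together in production.

```python
# Minimal sketch of a component-level test (hypothetical names). The supplier
# dependency is stubbed out, so the test is stable and fast, but it proves
# nothing about the integrated behavior customers will see in production.
import unittest
from unittest.mock import Mock


def item_available(sku: str, supplier_client) -> bool:
    """Inventory component: in stock locally, or orderable from the supplier."""
    return supplier_client.local_stock(sku) > 0 or supplier_client.can_resupply(sku)


class InventoryComponentTest(unittest.TestCase):
    def test_available_when_supplier_can_resupply(self):
        supplier = Mock()
        supplier.local_stock.return_value = 0      # nothing on hand locally
        supplier.can_resupply.return_value = True  # but the stubbed supplier says yes
        self.assertTrue(item_available("SKU-1234", supplier))


if __name__ == "__main__":
    unittest.main()
```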

All of these changed habits are natural reactions to the challenge of testing user journeys. When the whole infrastructure beneath the user interaction landscape is decentralized, incoming data is hard to match to specific user actions. And when distributed teams attempt to make issues repeatable and collaborate on testing, results are hard to replicate and test beds get corrupted.

To understand user journeys, we still need an outside-in perspective that puts our devtest teams firmly in the customer’s shoes, within a real production environment.

A turnkey turkey testing solution

For example, let’s say your team is testing a new software feature that tracks the home delivery of a grocery store’s turkey dinners, ordered by customers around the holidays. Developers implemented a new form on the store’s website and mobile app for customers to place orders through its online ordering system.

Customers can browse the app, configure the dinner, complete a transaction, and charge a credit card – but that’s all par for the course, not the user journey we are testing now. The quality of their experience with the app hinges on whether the dinner will arrive when promised.

When users order and check delivery status, the app needs to talk to warehouse inventory systems to check availability, which may in turn need to call the suppliers of the turkeys and other goods in the box. And before showing a delivery promise date, the app needs to check the in-store deli group’s system for order preparation capacity, communicate with one or more delivery services that will drop off the meals (à la DoorDash, Uber, or Grubhub), and get pricing estimates from all of these parties.
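
To put that dependency chain in more concrete terms, here is a hedged sketch, with hypothetical function names and placeholder return values, of the orchestration the app would have to perform before it can show a promise date. Any one slow or failing upstream call blocks the answer the customer is waiting for.

```python
# Hypothetical sketch of the fan-out hidden behind a single "when will my dinner
# arrive?" screen. Each helper stands in for a separate system the app must call;
# the placeholder return values mark where real service calls would go.
from dataclasses import dataclass
from datetime import date


@dataclass
class DeliveryPromise:
    promised_date: date
    courier: str
    delivery_fee: float


def warehouse_has_stock(sku: str) -> bool:
    # Warehouse inventory system, which may in turn query the turkey suppliers.
    return True  # placeholder for a real service call


def deli_can_prepare(store_id: str, target: date) -> bool:
    # In-store deli system: order preparation capacity for the target date.
    return True  # placeholder for a real service call


def courier_quotes(store_id: str, target: date) -> list[tuple[str, float]]:
    # Delivery services (DoorDash, Uber, Grubhub, ...): availability and pricing.
    return [("courier-a", 9.99), ("courier-b", 7.49)]  # placeholder quotes


def delivery_promise(sku: str, store_id: str, target: date) -> DeliveryPromise | None:
    """The UI can only show a promise date after every upstream call succeeds."""
    if not warehouse_has_stock(sku):
        return None
    if not deli_can_prepare(store_id, target):
        return None
    quotes = courier_quotes(store_id, target)
    if not quotes:
        return None
    courier, fee = min(quotes, key=lambda q: q[1])  # cheapest available courier
    return DeliveryPromise(promised_date=target, courier=courier, delivery_fee=fee)


if __name__ == "__main__":
    print(delivery_promise("SKU-1234", "store-042", date(2022, 11, 23)))
```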

Even if each system is validated in isolation with test data, there is no way the retailer can afford to wait until that holiday week to find out whether all of the components will work together to show order status quickly and reliably for hundreds or thousands of customer requests.

To avoid a nightmare when the holiday arrives, the store can either throttle orders well below its sales potential, or learn to ‘cause the cause’ with synthetic performance monitoring and user journey simulation at scale within a test automation platform like Apica. Driving realistic, production-like user behaviors through the system at cloud scale can detect more defects before they appear in front of customers.
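
As a rough illustration of the simulation half of that approach, here is a minimal sketch that drives many concurrent synthetic ‘check my delivery status’ journeys and reports the latency distribution those simulated customers would see. The endpoint, concurrency, and thresholds are illustrative assumptions; a commercial platform such as Apica adds scheduling, geographic distribution, scripting, and reporting on top of this basic idea.

```python
# Minimal sketch of synthetic user journey simulation at scale (hypothetical
# endpoint, illustrative concurrency). Run well before the holiday peak to see
# how order-status checks hold up under production-like load.
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

import requests

STATUS_URL = "https://shop.example.com/orders/{order_id}/status"  # hypothetical
CONCURRENT_USERS = 200
CHECKS_PER_USER = 10


def one_synthetic_user(user_id: int) -> list[float]:
    """Simulate one customer repeatedly checking their delivery status."""
    latencies = []
    with requests.Session() as session:
        for i in range(CHECKS_PER_USER):
            start = time.monotonic()
            try:
                resp = session.get(
                    STATUS_URL.format(order_id=f"synthetic-{user_id}-{i}"), timeout=10
                )
                if resp.ok:
                    latencies.append(time.monotonic() - start)
            except requests.RequestException:
                pass  # failed check: no latency recorded, surfaces in the success count
    return latencies


def main() -> None:
    with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
        per_user = list(pool.map(one_synthetic_user, range(CONCURRENT_USERS)))
    latencies = sorted(t for user in per_user for t in user)
    attempted = CONCURRENT_USERS * CHECKS_PER_USER
    print(f"successful checks: {len(latencies)}/{attempted}")
    if latencies:
        p95 = latencies[int(len(latencies) * 0.95)]
        print(f"median: {statistics.median(latencies):.2f}s  p95: {p95:.2f}s")


if __name__ == "__main__":
    main()
```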

The Intellyx Take

Quality user journeys are at a genuine crossroads. The procedural ways we formerly conducted user acceptance testing, functional validation and performance testing are not holding up at the scale and change rate of today’s distributed cloud applications.

Directly testing underlying services and infrastructure is still very useful at every step of the software delivery cycle, but there is no way component-level testing can tell teams everything that can go wrong in production, in front of customers.

Fortunately, our human thinking about testing user journeys is evolving along with our technical capabilities. We haven’t lost the plot yet!

Next up in this Humanizing Software Quality series: Look for Intellyx perspectives on following deeper user journeys, API-and-service-driven test connections, and the evolution of human-centric testing.


©2022 Intellyx LLC. Intellyx retains editorial control over the content of this document. At the time of writing, Apica is an Intellyx customer. Image sources: Jem Yoshioka, Flickr CC2.0 license, FontAwesome (turkey icon).