Components for Testing Software Quality

September 21, 2015

Testing software quality is an essential part of the development process that includes work before and after the software’s release.

Software quality testing is like a more robust form of software debugging: the development team is concerned with how well the program works, not just whether it works. The process examines not only end users’ experience with the released product, but also the quality of the code itself, including how easy it is for the development team to work with.

Standards and expectations can vary widely between software projects, so the components of the quality analysis and implementation process are different for each project, depending on how the finished product is constructed and delivered.

Human and Automated Elements

While human programmers are a required part of the software quality testing process, automated testing tools can take on a large share of the work. Using these tools gives your programming team a substantial amount of data to work with when identifying, isolating, and resolving issues in the software.

Apica offers both on-demand and continuous delivery load testing solutions to test and optimize software performance throughout delivery life cycles. While performance testing is only one small part of any software testing program, it is an important one. 
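To make the idea of load testing concrete, here is a minimal sketch in Python that simulates several concurrent users hitting an endpoint and reports response times. It is only an illustration of the concept, not Apica’s tooling, and the target URL and user counts are placeholder assumptions.

```python
# Minimal load-test sketch (illustrative only, not Apica's tooling): send
# concurrent HTTP requests to a target URL and report response times.
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

TARGET_URL = "https://example.com/"  # hypothetical endpoint under test
CONCURRENT_USERS = 10
REQUESTS_PER_USER = 5

def timed_request(url):
    """Fetch the URL once and return the elapsed time in seconds."""
    start = time.perf_counter()
    with urlopen(url, timeout=10) as response:
        response.read()
    return time.perf_counter() - start

def simulate_user(url, request_count):
    """One simulated user issues a series of sequential requests."""
    return [timed_request(url) for _ in range(request_count)]

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
        results = pool.map(simulate_user,
                           [TARGET_URL] * CONCURRENT_USERS,
                           [REQUESTS_PER_USER] * CONCURRENT_USERS)
    timings = [t for user in results for t in user]
    print(f"requests: {len(timings)}, "
          f"avg: {sum(timings) / len(timings):.3f}s, "
          f"max: {max(timings):.3f}s")
```

A dedicated load testing platform adds what this sketch lacks: realistic traffic patterns, geographic distribution, and reporting across the delivery life cycle.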

Define Your Components

An organization can’t just assign programmers to review the software source code without a structured plan and expect quality results. The developers in charge of the quality test need to determine which specific elements of the software will be gauged and which metrics they will use to measure them. The process should cover the user experience as well as development concerns such as scalability and maintenance. Example qualitative metrics include ease of use, testability, portability, stability, and robustness.

Like a reconnaissance mission, it’s best to start with an informal walkthrough of the code with the development team to familiarize all members with the parts of the larger project they may not individually work on. Test the project as both a developer and an end user. This is where proper documentation standards come into play: good notes make it much easier for developers to understand unfamiliar code. If the team finds that documentation is lacking, it might rate the “documentation” metric as “poor.”

After the walkthrough, it helps to carry out a code inspection to identify which parts of the software could benefit from improvements. For example, check whether functions and variables follow a consistent naming pattern, and whether white space is used to keep the code legible.
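Checks like these are easy to automate. The sketch below uses Python’s standard-library `ast` module to flag function names that break an assumed snake_case convention; the convention itself is an assumption, so adapt the pattern to whatever your project actually uses.

```python
# Sketch of an automated naming check: flag function definitions whose names
# do not follow the (assumed) snake_case convention of the project.
import ast
import re
import sys

SNAKE_CASE = re.compile(r"^[a-z_][a-z0-9_]*$")

def check_function_names(path):
    """Yield (line_number, name) for every function that breaks the convention."""
    with open(path, encoding="utf-8") as handle:
        tree = ast.parse(handle.read(), filename=path)
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef) and not SNAKE_CASE.match(node.name):
            yield node.lineno, node.name

if __name__ == "__main__":
    for source_file in sys.argv[1:]:
        for lineno, name in check_function_names(source_file):
            print(f"{source_file}:{lineno}: '{name}' does not match snake_case")
```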

This is also a good time to test the software against older versions of plug-ins, externally hosted libraries, operating systems, and web browsers (when applicable) to confirm backward compatibility. If changing a few lines of code lets the software work with an older version your end users might reasonably still use, it’s worth your while to make the change.
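A small compatibility shim is often all such a change amounts to. The following sketch shows the general idea; the library and function names (`examplelib`, `load_json`, `load_text`) are hypothetical stand-ins, not a real API.

```python
# Sketch of a small compatibility shim: prefer the current API of a
# (hypothetical) third-party library, but fall back to the older call
# so users still running the previous major version are supported.
try:
    # Newer releases expose a dedicated JSON loader (hypothetical import).
    from examplelib import load_json as load_config
except ImportError:
    # Older releases only ship a generic text loader, so wrap it instead.
    import json
    from examplelib import load_text  # hypothetical import

    def load_config(path):
        """Match the newer API's behavior on top of the older one."""
        return json.loads(load_text(path))
```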

During this process, it also helps to examine the program against quantitative metrics such as execution speed, network bandwidth consumption, and memory use. Addressing issues with these criteria leads to a much more stable and usable program.
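Two of those metrics, execution time and memory use, can be collected with nothing but the Python standard library. This is a minimal sketch; the list-building workload at the bottom is just a stand-in for real application code.

```python
# Sketch of collecting two quantitative metrics for a function under test:
# wall-clock time and peak memory allocation, using only the standard library.
import time
import tracemalloc

def measure(func, *args, **kwargs):
    """Run func and return (result, elapsed_seconds, peak_bytes)."""
    tracemalloc.start()
    start = time.perf_counter()
    result = func(*args, **kwargs)
    elapsed = time.perf_counter() - start
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return result, elapsed, peak

if __name__ == "__main__":
    # Example workload: build a large list (stand-in for real application code).
    _, seconds, peak_bytes = measure(lambda: [i * i for i in range(1_000_000)])
    print(f"time: {seconds:.3f}s, peak memory: {peak_bytes / 1_000_000:.1f} MB")
```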

Automated tools can handle a substantial part of the work at this point, taking on tasks like identifying parts of the code that hit or even create performance bottlenecks.
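Profilers are the simplest example of such a tool. The sketch below uses Python’s built-in `cProfile` and `pstats` modules to rank the functions that consume the most time in a toy workload; the workload itself is invented for illustration.

```python
# Sketch of automated bottleneck detection with the standard-library profiler:
# profile a workload and print the functions that consume the most time.
import cProfile
import pstats

def slow_sum(n):
    """Deliberately unoptimized loop that stands in for a hot spot."""
    total = 0
    for i in range(n):
        total += i * i
    return total

def workload():
    return sum(slow_sum(10_000) for _ in range(500))

if __name__ == "__main__":
    profiler = cProfile.Profile()
    profiler.enable()
    workload()
    profiler.disable()
    # Report the ten most expensive functions by cumulative time.
    pstats.Stats(profiler).sort_stats("cumulative").print_stats(10)
```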

Implementation

After you’ve established what needs to be improved and addressed in the software, it’s time for the development team to get busy. Mental fatigue can be a productivity killer during the implementation process, so developers can try reviewing code in 60- to 90-minute intervals, looking at about 200 lines of code at a time with at least 20 minutes of break time in between.

Once you’ve identified code that can be improved and implemented the changes, verify that they work by actually running through the code to confirm that developers fixed the existing problems without creating new ones. It also helps to have team members other than the original authors do the verification: this does double duty by familiarizing the rest of the team with the code.
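One concrete way to run that verification is a regression test that covers both the reported bug and the behavior that already worked. The sketch below uses Python’s `unittest`; the function under test, `normalize_username`, and the bug it fixes are hypothetical examples.

```python
# Sketch of verifying a fix: a regression test that confirms the original
# bug is gone and guards the behavior that was already correct.
import unittest

def normalize_username(raw):
    """Hypothetical fix under review: strip whitespace and lowercase the name."""
    return raw.strip().lower()

class NormalizeUsernameTest(unittest.TestCase):
    def test_original_bug_is_fixed(self):
        # The reported problem: trailing whitespace produced duplicate accounts.
        self.assertEqual(normalize_username("Alice  "), "alice")

    def test_existing_behavior_still_works(self):
        # Guard against the fix breaking the previously correct path.
        self.assertEqual(normalize_username("bob"), "bob")

if __name__ == "__main__":
    unittest.main()
```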