When I started my software quality career a little more than twenty years ago, we worried about things like rendering images on the correct palette for a 256 color display and how the application would perform over a 9600 baud dial-up connection (usually provided by America Online). The company delegated these worries to a team of software testers. They called us the Quality Assurance (QA) Department and gave us titles like Software Quality Assurance Engineer.
Our software shop worked like an old-school assembly line, following a manufacturing model that you can trace all the way back to 1750. At the front door, we had a systems analyst gathering customer needs and converting them into software requirements. At the back door, we had the QA Department, verifying that the finished product actually met those requirements. Imagine inspectors in white lab coats carrying clipboards stacked with checklists, and you won’t be far from the truth.
The approach seemed like common sense. Most other shops that I visited worked exactly the same way. Many still work that way today, perhaps thinking that they are doing what makes automobile assembly and food processing plants so successful.
Take the time to visit a successful automobile assembly or food processing plant, and you will find that they stopped following that factory model nearly a century ago. It was too slow, too expensive, and produced too much waste. If the product failed inspection at the end, it got scrapped or sent back to the beginning for rework. Industrial engineers recognized this shortcoming in the 1920s and reorganized their plants to build quality in instead of inspecting defects out.
Of course, there are important differences between software and manufactured goods, but those differences only increase the need to build quality into every step of the process. If a software product is found defective at the end, we don’t lose some fraction of the lot. Rather, we are forced to choose between releasing the defective product or asking our customers to endure a delay as we work the process backwards, find out what went wrong, correct it, start over, and hope that we don’t break something else along the way.
Customers choose CyberGRX because they take third-party risk management seriously. They have every right to expect the tools we provide to perform correctly every time. They also have the right to expect that we will continue to expand and improve those tools to meet more of their needs. To meet those quality expectations, we harness all of our product development staff rather than depending on a single QA department. Everyone builds quality into the product at every stage through the disciplined application of modern software development practices, delivering each part along with the assurance that it was built correctly.
We still have the team of inspectors, but we call them System Test rather than Software Quality. They start with the assumption that quality was built into all of the parts. Their job is to check the whole for unexpected behaviors that can emerge from integrating a system as complex as ours. We call them software testers rather than QA engineers to reflect what they actually do and to eliminate the implication that they bear special responsibility for quality.
When we ditched our QA department in favor of System Test, we did not simply change a name. We shifted responsibility for the quality of our software components upstream and limited the scope of inspection downstream, reducing the risk of discovering defects late in the process and maintaining control over the delivery schedule. It’s 2018. We have learned a thing or two since the days of AOL dial-up. With monitors that can display millions of colors, we don’t worry about palettes either.
To learn more about how CyberGRX can help you manage your third-party cyber risk, request a demo today.