What's Your New Currency?
There's a change spreading across the software engineering landscape. What you thought you knew about testing and shipping software is changing, and it's time we all get on board. Some of us older engineers learned how to ship software in a traditional way similar to the Waterfall methodology, but the software industry is changing too fast to keep doing things the way we used to. There are a lot of terms floating around like Continuous Delivery, Agile/Scrum, Testing in Production, Telemetry, etc., and there are also a lot of articles and blogs on what all of those mean. I'm not here to tell you the same thing. Instead, I'm hopefully going to help you embrace this change and gain some insight into how everything you know is going to need to be reinvented, especially if what you do revolves around bug counts.
So let's talk about bugs, or defects. That is the currency testers live by. Years ago, you were considered an expert tester if you were able to find a lot of bugs, and you were considered a junior developer if your code contained a lot of bugs. That way of thinking caused us to over-test a solution and try to get it into a pristine state before releasing it. This focus on defect counts made testers try uncommon scenarios to get the software into a strange state (scenarios the customers would never try). We used to have these debates around "when is a bug a bug?" Is it when you find a problem with the code the developer checked in, or can you find (and should you report) a bug during requirements gathering or design? Um, who cares? I'm serious. In the new world, it really doesn't matter. It's not about reporting the bug. That currency system doesn't exist anymore. Testers are partnering with developers and looking at all aspects of a project release so that those issues can be found and fixed right then, in the design or in the code. There's no need to report, track, and fix at a later date. And it doesn't have to be a tester that finds it, and it doesn't have to be a developer that fixes it. There has been a huge push over the last decade for testers to be able to write code and stand side-by-side with their developer counterparts, and the best developers are the ones who can write the highest quality code. As the roles of dev and test blur, developers are going to need to understand the practical engineering and thought processes behind testing software. The more speed you put into a product release through Agile methodologies, the more these roles become one and the more the functions each of them performs need to be shared.
Now let's add some of the concepts floating around to see how they affect bugs. There's this idea of pushing quality upstream (I've also heard it called Left Shift Quality or Embedded SDET). In essence, it's what I mentioned above: dev and test pairs working together from the conception of a project. Fewer bugs will be reported when working this way, for two reasons: the pairing of these two experts through the design and architecture discussions naturally leaves fewer bugs in the final design that is implemented, and the bugs that are found get fixed when they are identified, so they don't need to be reported and tracked, because that overhead adds time that isn't required for the software to release successfully. Other concepts that change the timing of when bugs are found within a product cycle are Testing in Production and the practices around Experimentation or A/B Testing (the other end of the product cycle from design). All of these ideas allow code to get out to the customers faster and then use the data on how customers interact with the product to determine whether it is working correctly. If it isn't working correctly, bugs will be found. Years ago, we used to measure high severity bugs in Production as an indicator of quality. Well, that's not the case anymore. Agility and the ability to roll back releases quickly or to fix forward (getting bug fixes out to Production quickly) eliminate the need to be as bug-free as possible before releasing to Production. This means that defects will be found in Production, and that's not necessarily a bad thing. Sure, the risk is higher that this will happen, but the trade-off is a faster release, which means the customers get the features they want faster. The listening systems in place through Telemetry, combined with a fast fix-forward philosophy on the engineering team, reduce the exposure of defects to customers.
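To make that last point concrete, here's a minimal sketch (in Python) of how a team might gate a new feature behind a flag, expose it to a slice of customers, listen to telemetry, and pull the flag automatically if the error rate spikes. Everything here is illustrative: the flag name, the in-memory stand-ins for a flag service and telemetry pipeline, and the rollback threshold are all made up, not any particular product's implementation.

```python
import random
from collections import defaultdict

# Hypothetical in-memory stand-ins for a real feature-flag service and
# telemetry pipeline; names and thresholds are illustrative only.
FLAGS = {"new_checkout_flow": {"enabled": True, "rollout_percent": 10}}
TELEMETRY = defaultdict(lambda: {"requests": 0, "errors": 0})
ROLLBACK_THRESHOLD = 0.05  # roll back if more than 5% of exposed requests fail

def is_exposed(flag_name: str, user_id: int) -> bool:
    """Decide whether this user sees the new code path (the A/B split)."""
    flag = FLAGS[flag_name]
    return flag["enabled"] and (user_id % 100) < flag["rollout_percent"]

def record(flag_name: str, error: bool) -> None:
    """Emit a telemetry event for one request that hit the new path."""
    TELEMETRY[flag_name]["requests"] += 1
    if error:
        TELEMETRY[flag_name]["errors"] += 1

def check_and_roll_back(flag_name: str) -> None:
    """The 'listening system': disable the flag if the error rate spikes."""
    stats = TELEMETRY[flag_name]
    if stats["requests"] == 0:
        return
    error_rate = stats["errors"] / stats["requests"]
    if error_rate > ROLLBACK_THRESHOLD:
        FLAGS[flag_name]["enabled"] = False  # instant rollback in Production

# Simulate traffic: 1000 users, with the new path failing about 8% of the time.
for user_id in range(1000):
    if is_exposed("new_checkout_flow", user_id):
        record("new_checkout_flow", error=(random.random() < 0.08))

check_and_roll_back("new_checkout_flow")
print(FLAGS["new_checkout_flow"]["enabled"])  # likely False: the defect was caught by telemetry, not by a bug report
```

The specific numbers don't matter; the point is that the decision to pull a bad change comes from production signals rather than from a pre-release bug count.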
Over the last few years, with all the software applications and services on the internet, the tolerance level of customers has gone up. Customers are becoming more tolerant of the small changes that come in regular updates to their apps and aren't as upset when things don't work perfectly. Sometimes you just have to wait a few hours and things will start working again. That's the marvel of agile processes: if what you ship isn't quite right, you can fix it or revert to an older version very quickly.
In IT solutions, there is this middle release, what we call User Acceptance Testing (UAT); others may know it as integration testing or dogfooding. Even these releases aren't immune from the effects of Agile and the questions around bug counts. In the traditional waterfall approach, UAT was a phased milestone we went through, with entrance and exit criteria and lots of other heavyweight processes. We never want our users to find bugs in UAT, so at times we duplicate the same test cases in our own testing phase and then let our users or business stakeholders run the same tests in UAT. This really slows down the process, all for the sake of trying to avoid UAT bugs. In Scrum, we do a two-week sprint and then demo the results to the stakeholders (sometimes the business, or customers, or partners, or internal clients). They can verify that we are heading in the right direction before we continue with more sprints, and this should happen every two weeks or so. It's a demo and a discussion, and then a release to Production, so there's no need for bug reporting or a UAT phase here. Just fix and go.
So when do you count bugs? And what's a good count of bugs? Do you even need to concern yourself with that anymore? Finding them early is good, but who's even reporting those? And finding them late has traditionally been bad, but if the customers are happy and your experimentation is working as planned, finding bugs in Production is expected. So without bugs as the currency for quality, what is? How about we focus on the whole reason we ever ship software: customer satisfaction. You have a high quality product when your customers are happy, when they get what they want and find it useful. That's your new currency. Welcome to the new world.
If you want to read more about how the world of testing is changing, I recommend starting with the concept of MVQ (Minimum Viable Quality).
Comments
- Anonymous
March 06, 2016
Great post. I too worked on many projects in the past where test metrics mattered a lot. Test teams always wanted to increase test case counts, bug counts, and code coverage percentages. But these days with Agile, test and dev work closely to minimize bugs in later phases, and with the DevOps model a single engineer is responsible for both development and testing. We followed a pattern called DDDT: as engineers, each of us is responsible for the end-to-end Design, Develop, Deploy, and Test (DDDT) of a feature (or user story). In a few projects, we followed TDD (Test-Driven Development), where all the requirements are defined in terms of test cases. First write all the possible test cases for a feature and make them fail, then start writing just enough code to make those test cases pass. This way the bug count is lower in the end. The MVQ concept is well explained in the given link. We used another term, MVP (Minimum Viable Product), which ships with Minimum Viable Quality (MVQ). Usually after a Sprint or two, we give this MVP to the customer so that they can get a feel for what the final product will look like. This gives customers early access to the product, gives them a preview of the final product, and lets them share their experience with the engineering team.
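To illustrate the red/green TDD cycle described in the comment above, here is a minimal sketch using Python's standard unittest module. The `discounted_price` function and its pricing rules are invented for the example, not taken from any real project.

```python
import unittest

# Step 1 (red): write the tests first; they fail until the code below exists
# and satisfies them.
class DiscountTests(unittest.TestCase):
    def test_ten_percent_off_orders_over_100(self):
        self.assertEqual(discounted_price(200), 180)

    def test_no_discount_on_small_orders(self):
        self.assertEqual(discounted_price(50), 50)

# Step 2 (green): write just enough code to make those tests pass.
def discounted_price(total):
    return total * 0.9 if total > 100 else total

if __name__ == "__main__":
    unittest.main()
```

Run the file once with only the test class in place to see the failures, then add the function and run it again to watch the same tests pass.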