Developer-involved testing establishes a new low defect rate benchmark

Last week I facilitated an interesting conversation between my development team and testing team. What made it great was seeing how the two teams broke down their silos to secure code quality together at an earlier stage. I related a story about how one team made a significant difference in code quality. They set a goal that developers should deliver fewer bugs the first time they checked code into the repository, and they made it happen: they saved at least one Sprint that would otherwise have gone to “Stabilization”, and compared with historical data, the number of identified bugs decreased by about 50%. Below are several key things they did:
  • The testing team worked together with the developers to conduct functional verification on the developers’ local machines before code check-in.
  • Every day the whole team ran a 30-minute smoke test after the daily build, verifying the new functionality integrated that day.
  • The team committed to resolving today’s issues today.
But by the end of that project the team found they could do more. The testers’ effort was heavier than normal, because when smoke testing on the development workstations before code check-in the developers relied too much on the testers, even though they had the required testing skills themselves, and arguments about differing understandings of the requirements, and therefore over what counted as a bug, were common. Fortunately, the same team later got a chance to push its code quality improvement further on a brand-new project with lower technical risk, so the team could put more focus on testing and engineering practices. This time the team decided to pursue even more aggressive targets:
  • Continue improving our code quality – decrease the number of identified bugs by more than 50%.
  • Release the testers from routine functional testing, freeing them for higher-value work such as performance testing and test automation.
  • Practice Test Case Driven Requirement, making it possible to throw the obscure requirement documents away and build a shared baseline of understanding across all teams.
When defining our team model and Sprint process, we realized that our developers already had enough manual functional testing experience from the prior projects to take over most of the testing work. They were not yet experienced enough to design high-quality test cases, but if our testing team could teach them that skill, the developers could handle functional testing entirely from a technical perspective. That left only one problem standing between the developers and full ownership of functional testing: it is hard for a developer to test his or her own code objectively, and developers consistently find fewer bugs when testing their own work.
But again, this creative team resolved that problem. They decided to do cross testing among developers, because it is always easier to find others’ bugs. To secure the quality of that testing, they realized they still needed a dedicated functional tester role, whose responsibility would not be doing the testing itself but providing direct support to the developers as they tested: on-the-job training in how to write professional test cases, how to design test data, how to decide the best timing for regression tests, and so on.
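The article doesn’t describe how the team scheduled its cross testing, but one simple scheme consistent with the idea, nobody verifies their own work, is to rotate assignments among the developers. A minimal sketch (the developer names and the one-change-set-per-round assumption are illustrative, not from the original team):

```python
def cross_test_assignments(developers):
    """Assign each developer's work to a different developer for testing.

    A simple rotation guarantees nobody tests their own code,
    as long as there are at least two developers.
    """
    if len(developers) < 2:
        raise ValueError("cross testing needs at least two developers")
    # developer i's code is tested by developer i+1 (wrapping around)
    return {dev: developers[(i + 1) % len(developers)]
            for i, dev in enumerate(developers)}

assignments = cross_test_assignments(["Ann", "Bo", "Cai"])
# Ann's code is tested by Bo, Bo's by Cai, and Cai's by Ann.
```

A rotation is only one option; pairing by feature familiarity works too, as long as the “never your own code” invariant holds.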
And since the whole team (including the testers) already had solid experience with TDD (Test Driven Development), it was easy for them to accept the new concept quickly and to design the activities inside a Test Driven Requirement Analysis cycle. The testing team provided a standard format for functional test cases; based on that, the team came up with a hierarchical structure for decomposing high-level requirements into user stories and functional test suites, plus a mechanism for maintaining requirement traceability.
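The original test-case format isn’t shown, but the hierarchy described above, high-level requirements decomposed into user stories with functional test suites, plus traceability, might be modelled like this. All field names and IDs here are illustrative assumptions, not the team’s actual format:

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    case_id: str              # e.g. "TC-101" (hypothetical ID scheme)
    description: str
    steps: list[str]
    expected_result: str

@dataclass
class UserStory:
    story_id: str             # e.g. "US-12"
    title: str
    test_suite: list[TestCase] = field(default_factory=list)

@dataclass
class Requirement:
    req_id: str               # e.g. "REQ-3"
    summary: str
    stories: list[UserStory] = field(default_factory=list)

    def trace(self):
        """Requirement traceability: map every test case back to its
        requirement and user story."""
        return {tc.case_id: (self.req_id, story.story_id)
                for story in self.stories for tc in story.test_suite}
```

With this structure, the test suite itself serves as the requirement document: a `trace()` call answers “which requirement and story does this test case verify?” without a separate traceability matrix.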
The team felt ready to go. They started development using the new approach, and they were successful again. Code quality was even better than before: the number of identified bugs per KLOC was only one third of our organizational benchmark. More importantly, almost all of the testing work was performed by developers; only the test lead spent part of his time supporting the development team, while the rest of the testers did automated testing and performance testing. The team kept improving its approach iteratively, and by the time the project finished they had a formally defined team process for their day-to-day work.
Below is a brief introduction to their final development process, summarized as several key development roles, activities, and principles/commitments.
The key development team roles and their responsibilities:
  • Story Owner – each user story has an owner who is responsible for its final delivery at high quality. A Story Owner should NOT do any development work inside that user story, although the person in that role may be developing another user story. Instead, a Story Owner takes care of test case development and makes sure all necessary testing effort takes place to cover that feature.
  • Quality Goalkeeper – the development team needs an experienced functional tester to provide ongoing support and to measure the current quality status, e.g., quality statistics analysis, overall quality reporting, and technical support. The Quality Goalkeeper is the go/no-go decision maker for the Sprint from the quality perspective.
The key activities inside a development lifecycle:
  • Develop functional test cases and use them as the single authoritative documents presenting the high-level requirements.
  • Test Driven Development, with a specific target for unit test code coverage.
  • Continuous Code Review and local functional verification on development workstations before code check-in.
  • Continuous Integration and test automation. Test automation is what makes frequent inspection feasible: it removes the huge manual effort of regression testing when functional tests run every day.
  • Daily Functional Testing – verify the functionality newly integrated that same day. This is a whole-team activity that takes about 30 minutes per day on average.
  • A Sprint regression test before the Sprint Demo, and a final quality inspection before the product is delivered.
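The Daily Functional Testing activity above can be sketched as a minimal smoke-test runner: each check verifies one piece of functionality integrated that day, and the run reports go/no-go for the daily build. The individual checks here are placeholder stubs, assumptions standing in for whatever the team actually integrated:

```python
def check_login():     # placeholder for a feature integrated today
    return True

def check_search():    # placeholder for another new integration
    return True

SMOKE_CHECKS = [check_login, check_search]

def run_smoke_tests(checks):
    """Run every check, collect failures, and report the daily-build status."""
    failures = [check.__name__ for check in checks if not check()]
    return {"passed": len(checks) - len(failures), "failures": failures}

result = run_smoke_tests(SMOKE_CHECKS)
assert not result["failures"], f"daily build broken: {result['failures']}"
```

The key property is that the check list grows with each day’s integrations, so the whole team’s 30 minutes goes into reviewing results and fixing failures (“today’s issues today”) rather than manually re-executing old tests.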
Several principles and team commitments:
They defined three Quality Gates and made corresponding commitments:
  • Local testing before code check-in – developers will resolve today’s issues today.
  • Daily Functional Testing after the daily build – the Story Owner will make sure all tests pass on the development server.
  • Sprint Functional Testing before the Demo – the testing team will make sure no bug of minor or higher severity remains in that Sprint.
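The third gate, the Quality Goalkeeper’s go/no-go decision, amounts to a simple rule: block the Sprint if any open bug is of minor or higher severity. A sketch of that rule, assuming a conventional four-level severity scale (the level names and bug IDs are my assumption, not stated in the original process):

```python
SEVERITY_ORDER = ("trivial", "minor", "major", "critical")  # assumed scale

def sprint_gate_decision(open_bugs):
    """Go/no-go for the Sprint: no bug of minor or higher severity may remain.

    `open_bugs` is a list of (bug_id, severity) tuples.
    """
    threshold = SEVERITY_ORDER.index("minor")
    blockers = [bug_id for bug_id, severity in open_bugs
                if SEVERITY_ORDER.index(severity) >= threshold]
    return ("no-go", blockers) if blockers else ("go", [])

decision, blockers = sprint_gate_decision([("BUG-7", "trivial")])
# A trivial bug alone does not block the Sprint.
```

Encoding the gate as an explicit rule keeps the decision objective: the Goalkeeper reports the blocker list rather than negotiating each bug case by case.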
The following diagram illustrates this simple development approach:

[Diagram: Typical_Dev_Cycle3.png]