Sub-optimization happens when an improvement applied to one component of a system ends up impairing the output of the system as a whole.
Software development is inherently both complicated and complex. I like to describe it as an endless problem that we get to continually solve. Most of us learn that the easiest way to tackle a tricky problem is to first break it down into smaller parts, then solve those smaller parts in isolation. This technique is effective in many situations. However, if we use it across the board as a way to improve our organizations, we risk creating sub-optimizations. As Mary Poppendieck states in Lean Software Development: An Agile Toolkit, “Local measurements often create systemwide effects that decrease overall performance, yet the impact of local optimization on overall results is often hidden.”
Understanding how the various components of software development interact with and affect each other is a daunting task. So, instead, we are often tempted to focus our improvement initiatives on individual components in isolation, without regard for the effects those changes may have on the system as a whole. Developers and testers, in particular, tend to be the targets of this kind of sub-optimization.
The following is my go-to story when teaching about systems thinking and the importance of optimizing the whole. Years ago, I worked as a tester on a complex application. The developers, business analysts, and testers all worked well together, but we were never able to meet our delivery dates, and our last release had shipped with several serious defects that we hadn’t caught.
The QA manager at the time, let’s call him Dale, encouraged me to find and log every defect I possibly could. At that time, logging a defect meant writing a description of it in a shared spreadsheet. It was a slow process, but it was effective. Our organization used this process for years. We were all familiar and comfortable with it.
Before long, senior management appointed a new director for our group, who I’ll call Matt. Matt was zealous and dead-set on “fixing” our group so we would meet our deadlines. One of his first disruptive acts was making everyone use DevTrack, a development and bug tracking tool. We moved requirements, technical documents, and the defects out of shared drives and physical filing cabinets and into DevTrack. Initially, most of us were wary, yet excited. The change made sense and would make finding information easier for everyone. However, the change came with some consequences that nobody anticipated.
In our previous spreadsheet-based process, artifacts were largely hidden, hard to find, and nearly impossible to build metrics from. Conversely, DevTrack made our work very visible and easy to find. It exposed everything, not just to us, but to people at all levels of management. With just a few clicks, DevTrack let anybody create reports showing the actual status of the project, and report data could now be linked directly to individuals. Functional managers quickly figured out how to use that data to “motivate” their employees and to promote their own silo to upper management.
One morning, Dale called me into his office. He had written a list of all the testers’ names on his whiteboard, and he was excited to tell me about his new plan. Each morning, he would run a DevTrack report listing the number of defects each tester had logged the previous day, then write the results on his whiteboard in big black letters for everyone walking by to see. Our names were ranked: those at the top had logged the most defects; those at the bottom, the least. I became obsessed with continually topping Dale’s list. I logged every unexpected condition I possibly could. I didn’t talk with business or development about them; I simply logged away. It felt like playing a video game, and I was winning!
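To see just how “local” this measurement was, consider what Dale’s report actually computed: a count of defects per tester, sorted from most to least. The sketch below reproduces that leaderboard logic (the data and field names are hypothetical, not DevTrack’s actual export format). Notice that nothing in it distinguishes a data-loss bug from a typo, and nothing in it says whether the release is any closer to shipping.

```python
from collections import Counter
from datetime import date

# Hypothetical defect-log entries; a real DevTrack export would look different.
defect_log = [
    {"tester": "Aaron", "logged_on": date(2004, 3, 1), "severity": "trivial"},
    {"tester": "Aaron", "logged_on": date(2004, 3, 1), "severity": "minor"},
    {"tester": "Priya", "logged_on": date(2004, 3, 1), "severity": "major"},
    {"tester": "Dana",  "logged_on": date(2004, 3, 1), "severity": "minor"},
]

def daily_leaderboard(entries, day):
    """Rank testers by raw defect count for one day.

    This is a purely local metric: it rewards volume, so a trivial
    cosmetic issue scores exactly the same as a data-loss bug.
    """
    counts = Counter(e["tester"] for e in entries if e["logged_on"] == day)
    return counts.most_common()  # highest count first, like Dale's whiteboard

for rank, (tester, count) in enumerate(
    daily_leaderboard(defect_log, date(2004, 3, 1)), start=1
):
    print(f"{rank}. {tester}: {count} defect(s)")
```

Any metric this easy to compute is just as easy to game: logging three shallow defects beats logging one deep one every time.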
While QA was getting kudos for finding so many defects, we were unknowingly creating a huge system-level sub-optimization. The developers were so bogged down with minor defects that their overall productivity plummeted. While QA was rewarded for logging defects, the developers were punished for them. Matt liked Dale’s system so much that he created something similar for the developers: the names at the top of his whiteboard were the developers who had caused the fewest defects, and those at the bottom, the most.
From a data perspective, our group appeared to be improving. We were finding and fixing more defects than ever before, and Matt shared those numbers frequently with stakeholders to prove that we were finally on the right track.
However, when the release date came, the application was not ready. There was still a backlog of defects to get through, and several components were not yet implemented. We missed another deadline… by a lot. The defect-reporting strategy improved neither productivity nor quality; in fact, it made both worse. Worse still, it destroyed morale and fractured the working relationship between QA and the developers. We stopped openly collaborating and turned into competitors, and that culture shift created bigger issues that further prevented our success.
I regret not recognizing, and speaking out about, the damage these sub-optimal policies were doing to our organization. While I can’t change the past, I can hopefully help others avoid similar situations.
Defects: Find Them Early, Fix Them Fast
“Most defects end up costing more than it would have cost to prevent them. Defects...