Whether it’s Scrum, Kanban, or something else, every Agile Framework is rooted in Empirical Process Control (EPC), which refers to any system of continual experimentation and adjustment to define products and approaches. The three pillars of Empirical Process Control are transparency, inspection, and adaptation. And while this entire philosophy is often referred to as simply an “inspect and adapt loop,” we shouldn’t forget that it’s the underlying condition of “transparency” that is required for inspection and adaptation to happen.
It should be no surprise, then, that most Agile teams are awash in metrics. Agile teams may utilize all sorts of measurements to visualize behaviors and outcomes they need to make transparent for purposes of inspection and adaptation. A software development team may use quality tools to visualize whether they’re increasing or decreasing technical debt in a production environment, or user behavior indicators to see how people interact with the features of an application. A help desk team might focus on cumulative flow diagrams to identify bottlenecks in a repeated process, or track the change in resolution cycle time to gauge the impact of an adaptation. Of all the measures available to Agile teams, the one most commonly reported as being used is velocity. Unfortunately, it’s also the measure with the most potential to derail Agile adoption in organizations and distract leaders from what matters most.
As the name suggests, velocity is a measure of a team’s speed of backlog item completion. A Scrum team might measure their velocity as the number of Product Backlog Items completed in a Sprint. Or if the team is estimating Product Backlog Items with a scheme like complexity points, their velocity might be measured as the sum of the point estimates for all Items that are fully complete by the end of the Sprint (with no partial credit!). However it’s calculated, a team’s velocity is essentially an output measure: how much stuff is the team able to complete in a given period of time.
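The points-based calculation described above is simple enough to sketch in a few lines. This is a minimal illustration, assuming each backlog item is represented as a point estimate plus a done flag; the names here are hypothetical, not from any particular tool:

```python
def sprint_velocity(items):
    """Sum the point estimates of items fully completed in the Sprint.

    Partially finished items contribute nothing -- no partial credit.
    Each item is a (points, done) tuple.
    """
    return sum(points for points, done in items if done)

# Example Sprint: two finished items and one carried over.
backlog = [(5, True), (3, True), (8, False)]
print(sprint_velocity(backlog))  # -> 8
```

Note that the 8-point item in progress adds nothing to the total, which is exactly why velocity reflects completed output rather than effort expended.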
Paying attention to velocity can help an Agile team in a number of ways. As a planning utility, knowing typical recent velocity can help a Scrum team make a reasonable guess about how many backlog items might fit into the next Sprint. Knowing recent velocity trends can help a team set practical expectations for stakeholders about how many more Sprints might be needed to complete a set of features in a release. Noticing a change in velocity can help a team assess whether a particular process adaptation is helping them. A sudden, significant dip in velocity might prompt a Scrum team to have a serious discussion in their next Retrospective about why that happened, and whether the causes are one-time events or likely to reoccur. If the team’s velocity differs wildly from capacity (how many things they planned to complete at the start of an iteration), that may indicate that they aren’t planning realistically, or aren’t doing enough Backlog Refinement in advance of Sprint Planning. For these reasons (and others), it makes sense for a team to include velocity as one of many measures they might use to visualize aspects of their planning, performance, and ability to self-organize.
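The release-forecasting use mentioned above amounts to simple arithmetic: divide the remaining work by a typical recent velocity and round up. Here is a rough sketch of that heuristic, with hypothetical numbers; it produces a planning estimate, not a commitment:

```python
import math
from statistics import mean

def sprints_remaining(remaining_points, recent_velocities):
    """Forecast how many more Sprints a release might need.

    Divides remaining backlog points by the average of recent Sprint
    velocities, rounding up (a partial Sprint still costs a full one).
    """
    return math.ceil(remaining_points / mean(recent_velocities))

# 120 points left in the release; the last three Sprints averaged 40 points.
print(sprints_remaining(120, [38, 42, 40]))  # -> 3
```

Using an average of several recent Sprints, rather than only the latest one, smooths out the one-time dips and spikes the paragraph above warns about.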
Because velocity can be useful in so many ways to an individual team, it’s easy for organizations with many teams to focus on velocity as a cross-cutting metric of team- and program-wide performance. Since teams are already collecting it for their own use, it’s easy to aggregate velocity data and incorporate it into existing dashboards across the enterprise.
But comparing team velocity across the enterprise is fraught for a number of reasons, the most significant being that velocity is calculated from relative estimates that have no absolute meaning that can be compared between teams. But even if they did, comparing velocity across teams creates a trap: while it summarizes output, velocity tells us nothing about outcomes, which is where leadership attention should be focused. If an organization focuses on output over outcome, it ends up incentivizing ‘busy-ness’ over impact. A coaching experience I had a few years ago illustrates what it feels like when a company falls into this trap.
I had been hired as an Agile coach on a short-term basis to assess eight Scrum teams within a small company. The structure of the engagement was that I would spend a Sprint observing a team, provide some observations and recommendations, and then move on. As I was about to start my first day with Team #2, management pulled me aside:
“We’re very interested to hear what you think about this group. They’re kind of the black sheep of our organization.”
“Oh? And why are they seen that way?” I asked.
“Their velocity is horrible. They’re only doing 40-some points a Sprint. They should be doing closer to 60. So, we really want to know why they’re so bad compared to every other team.”
“I’ll let you know what I find,” I replied.
After spending a Sprint with Team #2, here’s what I found: the team consisted of a Product Owner, a ScrumMaster, and a five-person development team. Every Sprint they planned to do 6-8 Product Backlog Items, and they typically completed 6-8 Product Backlog Items. So whatever their velocity measure said, they were planning realistically and producing output at the maximum pace that was sustainable for them. But what about outcomes? Team #2 was releasing new features to production about every 2-3 months. From a technical standpoint, they could release every single Sprint. But their Product Owner knew that every 2-3 months was the most often they could change their production system without alienating and losing users, and retaining/expanding their user base was a primary goal for this product. That meant they were also producing outcomes as fast as they responsibly could. I provided this feedback to management, and was mostly met with confused stares.
“…We’ll circle back on this later. Next up is Team #3. You’re going to love them. They’re our rock stars!”
“Why are they rock stars?” I asked, a little afraid of the answer.
“They do so many points per Sprint! Their velocity is a bright spot across the whole organization.”
After spending a Sprint with Team #3, here’s what I found: this group did do many, many points per Sprint… far more than Team #2! But the reasons for their apparently higher productivity weren’t good. For a start, Team #3 consisted of a ScrumMaster, a Product Owner, and a 23-person development team. Of course they do more Backlog Items per Sprint than Team #2; they’re basically 3 or 4 teams worth of people! But beyond that, looking at their Sprint Backlogs showed a deeper problem: in a typical iteration Team #3 was planning 200-250 Backlog Items, and completing about 75% of them. So much for planning realism! And focusing just on the 150-200 items they complete: even with a 23-person development team that means that most of those things aren’t Product Backlog Items. They were barely even tasks! For example: “Meet with [Other Team],” “Finish testing of [item from two Sprints ago],” or “Update project schedule.” I asked Team #3 how often they were releasing to production. Turns out they had just done that for the first time, even though they’d been building for 18 months! And how did that release go? “Oh… everything’s on fire, but we’re fixing it.”
In terms of output, Team #3 looked amazing. Finishing hundreds of items per Sprint! But they were planning far more than they could complete, which meant their typical Sprint plans were optimistic to the point of being unbelievable. And in terms of tangible business outcomes, Team #3 was delivering… nothing.
Because upper management was so focused on velocity as the be-all and end-all indicator of team performance, they had created a culture that rewarded looking busy and ignored real-world impact. So they had a bunch of teams that excelled at ‘busy-ness’, but not many that were good at delivering Return on Investment. And a team that was actually great at delivering value was constantly on the verge of being punished for not playing that game.
Luckily, if an organization realizes it’s in a “velocity trap,” it can escape with some help. Often that help comes in one of three flavors:
Great training in foundational Agile concepts like a systems thinking mindset, servant leadership, and Agile coaching can give teams the tools and language they need to tailor measurement strategies that serve the goals of inspection and adaptation. We make thinking about measurement and the relationship between management and teams a through-line of our curriculum, for example, introducing these themes in our Certified ScrumMaster (CSM) course, building on them with perspective-taking exercises in our Advanced-Certified ScrumMaster (A-CSM) course, and ultimately exploring how professionals have tailored metrics throughout their careers in our Certified Scrum Professional – ScrumMaster (CSP-SM) course. With this knowledge, Agilists gain the ability to coach the stakeholders and structures that surround their team to focus attention on the details that should matter most.
Companies aren’t born being great at Agile; they learn, adapt, and grow their enterprise maturity with Agile practices over time in the same way that a single team does. But that growth can be jumpstarted by training leaders to understand not just the basic mechanics of approaches like Scrum, but also the structural and cultural changes that are needed to support Agile teams. Working in partnership with several large organizations, we’ve built an Agile leadership training curriculum that gives leaders the knowledge to understand how to support teams, and an awareness of how surrounding structures may help – or hurt – them. For example, we explore how long-running institutional practices like individualistic performance reviews or a matrix-type org chart may directly undercut the ability of teams to deliver value. We also train leaders to understand holistic value streams, and to focus their attention on tangible outcomes over output metrics.
It can be hard to see a problem when you’re living in it – especially if you’ve always done things that way. Outside perspective from a seasoned Agile coach can help people at every level of an organization understand which practices seem to support the outcomes we want, and which habits seem to be getting in the way. Our coaches have extensive backgrounds in supporting Agile transitions at both the team and enterprise levels, and our experience includes helping organizations of all sizes in the public, private, and non-profit sectors.