Prioritization Techniques Compared — Part 1

Prioritization is the process of choosing which ideas/bets to invest in and which to postpone or park. It entails two cognitively hard tasks: 1) evaluating the merit of each idea, and 2) comparing multiple ideas and choosing an order of precedence. As if that weren't difficult enough, prioritization decisions are often a flash point of debate and power struggles within the company because they're deemed crucial for business success. Are we good at it? Probably not. Many product people I interview report a lack of good prioritization as a top problem in their companies.

In this 2-part article series I’ll review common prioritization approaches and describe the pros and cons of each. They are:

  • Intuition, Consensus and HiPPO
  • Rules of Thumb — MoSCoW, Kano, Eisenhower … 
  • Cost of Delay — CD3, WSJF
  • User problems — Double Diamond, Continuous Discovery
  • Impact, Confidence, Ease — ICE, RICE

Prioritization in the Age of Evidence-Guided Development

Before we move forward, it's important to note that a lot has changed in our thinking about prioritization since the introduction of evidence-guided approaches such as Design Thinking, Lean Startup, and Product Discovery:

  • Prioritization in the past was an all-or-nothing decision as to which ideas will be built and shipped and which will not. Today we think of prioritization as deciding what to test first.
  • Prioritization in the past was based on opinions, consensus, and some data. Today we also factor in evidence, which stems from deliberate attempts to prove/disprove our assumptions about the idea.
  • Prioritization in the past was a one-time process designed to create a roadmap or a backlog. Today we re-evaluate ideas every time we obtain new evidence, which may change prioritization decisions, roadmaps, and backlogs on the fly. 

More broadly, today idea evaluation (which is the term I prefer over prioritization) is just one part of product discovery, the other being idea validation.

Product Discovery. Source: Evidence Guided

With product discovery we can explore more ideas while reducing the penalty of choosing wrong. In other words, if you test your ideas rigorously, accurate prioritization is of far lower importance. You could pull ideas from a hat and still do OK. 

With that in mind, let’s look at some prioritization methods.

Prioritizing Using Intuition, Consensus and HiPPO

At a basic level we can just choose what our gut, experience, and some data tell us. This may work in early-stage startups where the number of ideas is limited and the founders know intimately the customers, the product, and the data. But as we scale, keeping all prioritization decisions in the hands of the founders becomes ineffective. 

Larger and more mature organizations often do prioritization-by-committee, involving managers and stakeholders. The committees may help limit individual biases, but introduce group biases such as groupthink and politics. They also slow down decisions and disempower product teams. 

Perhaps a greater problem with prioritization by intuition and opinions is that with all the moving parts in the product, market, and technology, it's practically impossible to say if idea A will work better than idea B, or will have any positive effect at all. Still, our minds easily fall for heuristics and cognitive biases that convince us that we can make such a call, and that our decisions are well-founded and rational.

For all these reasons we're largely moving away from prioritization-by-opinion (at least on paper). Still, human judgement is always going to be a key component. In fact, I'd argue that the job of prioritization methods is not to make the decisions for us, but to help us make better judgement calls.

Prioritization by Rules of Thumb

These are not necessarily full-fledged prioritization systems, but they do suggest which types of ideas we should prioritize: 

  • MoSCoW — Must-have, Should-have, Could-have, Won’t-have
  • The Kano model — Must-Be, One-Dimensional, Attractive, Indifferent, Reverse 
  • Jeff Bezos’ one-vs-two-way-door decisions – Reversible ideas vs. irreversible ideas 
  • The Eisenhower matrix — 2×2 matrix of Important vs. Urgent
  • Desirability, Feasibility, Viability — Created by design firm Ideo

Pros:

  • Catchy and easy to understand and to communicate
  • Frame the discussion and potentially lead to better decisions (but see caveats below)

Cons:

  • Very broad and subject to interpretation
  • No clear success metrics
  • Still largely rely on opinions, consensus, and rank 
  • Not always true 

In summary: I’m not a fan of any of these. 

Prioritization by Cost of Delay 

Cost of Delay (CoD), introduced by Don Reinertsen in his seminal book “The Principles of Product Development Flow” (2009), is an estimate of how much money the company loses each week by not shipping a specific work item. Reinertsen declared CoD the most important metric by which to optimize work.

CoD caught on mostly with Agile practitioners. The two most popular implementations are Cost of Delay Divided by Duration (CD3) and Weighted-Shortest-Job-First (WSJF). Here’s a brief overview of both.

Cost Of Delay Divided By Duration (CD3)

Joshua Arnold created CD3 and has written extensively about it on his website Black Swan Farming. Here’s a quick overview video:

The formula of CD3 is: Value x Urgency / Duration

  • Value — Arnold identifies four types of value: increasing revenue, protecting revenue, reducing costs, and avoiding future costs. Yes, it’s all about the money.
  • Urgency — expresses how critical the timing of realizing that value is. Arnold defines four urgency profiles, each with its own pattern of value delivery over time. I’m not quite sure what the unit of urgency is (presumably 1/week).
  • Duration — the estimate of how long it will take to complete the project.
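To make the arithmetic concrete, here’s a minimal Python sketch of CD3 scoring and ranking. The idea names, weekly values, urgency multipliers, and durations are all hypothetical estimates of mine, and the single urgency multiplier is a simplification of Arnold’s urgency profiles.

```python
def cd3(value_per_week, urgency, duration_weeks):
    """Cost of Delay Divided by Duration: (Value x Urgency) / Duration."""
    return value_per_week * urgency / duration_weeks

ideas = [
    # (name, value in $/week, urgency multiplier, duration in weeks)
    ("Checkout redesign", 4000, 1.0, 6),
    ("Compliance fix",    1500, 2.0, 2),
    ("New onboarding",    3000, 0.5, 8),
]

# Highest CD3 score gets worked on first
ranked = sorted(ideas, key=lambda i: cd3(i[1], i[2], i[3]), reverse=True)
for name, value, urgency, duration in ranked:
    print(f"{name}: CD3 = {cd3(value, urgency, duration):.0f}")
```

Note how the small, urgent compliance fix outranks the bigger-value checkout redesign — exactly the behavior dividing by duration is meant to produce.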

Here’s an example of a CD3 prioritization table:

Source: BlackSwanFarming.com

Note: Arnold also proposed a non-numerical version of CD3, which was further adapted by John Cutler.

Weighted-Shortest-Job-First (WSJF)

WSJF (pronounced Wisjif) is the SAFe (Scaled Agile Framework) take on Cost of Delay and is therefore quite a popular prioritization method today.

Here’s a quick explainer video by Appfire:

The formula of WSJF is: (User-Business Value + Time Criticality + Risk Reduction) / Job Size

  • User-Business Value – the relative value of an item to the business or customer
  • Time Criticality – Does the cost of this problem increase over time if we do not act? Is there a deadline? Are there penalties for non-compliance that take effect? Are volumes increasing or remaining steady?
  • Risk Reduction-Opportunity Enablement – Does this reduce risk, or allow us to take advantage of opportunities not available to us previously?

All values are taken from predefined 5-point scales.
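As a rough illustration, here’s how WSJF scoring and ranking might be computed once each component has been rated on such a relative scale. The backlog items and their ratings are hypothetical examples, not taken from any real product.

```python
def wsjf(business_value, time_criticality, risk_reduction, job_size):
    """(User-Business Value + Time Criticality + Risk Reduction) / Job Size."""
    return (business_value + time_criticality + risk_reduction) / job_size

backlog = {
    "SSO integration":  wsjf(5, 3, 4, 3),   # high value, sizeable risk reduction
    "Dark mode":        wsjf(2, 1, 1, 2),   # nice-to-have, little urgency
    "GDPR data export": wsjf(3, 5, 5, 4),   # deadline-driven, compliance risk
}

# Highest WSJF score goes first
for name, score in sorted(backlog.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: WSJF = {score:.2f}")
```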

Here’s what a WSJF idea bank might look like:

Source: kendis.io

Evaluation of Cost-of-Delay Methods

I haven’t practiced CD3 or WSJF myself (although I have consulted for companies that use the latter), so these are my opinions only.

Pros:

  • Better than just relying on opinions or rules of thumb
  • Focus on consistent business metrics 
  • Factoring in the urgency of a feature makes intuitive sense — some things are critical by nature 

Cons:

  • Optimizing just for money (CD3) — Revenue and costs are lagging indicators and ones that most product teams cannot directly affect (there are too many other factors). Also, sometimes, companies are better off temporarily focusing on lower-level metrics — retention/churn, number of transactions, engagement levels… Lastly, focusing only on money is likely to reduce customer-focus, which is very risky in this age of customer choice and fast-moving competitors. 
  • Business Value, Urgency, and Risk Reduction are vague and hard to estimate — In my experience product teams really struggle to perceive and estimate such things and have to rely on one of these bad choices: 1) gut-instinct guesses, 2) business stakeholders’ input, or 3) delegating the estimate to business intelligence or finance. None of these is going to be remotely accurate, and some will be very slow.
  • Not factoring in evidence — Neither CD3 nor WSJF makes any mention of available evidence, which means that scores based on one person’s guesses count just as much as scores based on thorough research and experimentation.
  • Is Cost of Delay really the most important optimization? — While some things are time-critical (and maybe we’re losing money not having them), should this be the guiding light for all our activities? What about the mission and the strategy? The goals of the company and its parts? Ethics and legal? 

Overall, Cost of Delay strikes me as the sort of revenue/cost optimization that may appeal to certain engineers and execs, but is missing a lot of nuance. It looks as if it was conceived in the age of big, expensive projects that either were added to a roadmap or were parked. It doesn’t feel like a very scalable, or for that matter, agile process to me.

If you are using one of these methods, I would consider these adaptations: 1) broadening the definition of Value to allow for other metrics, including customer value (which WSJF seems to have done), and 2) adding evidence-based Confidence (more on this when I talk about ICE) as one of the components.
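The second adaptation can be sketched very simply: multiply the score by an evidence-based Confidence factor between 0 and 1, so that a well-researched idea outranks an equally-scored guess. To be clear, this is my own hypothetical modification, not part of standard WSJF or CD3.

```python
def adapted_wsjf(value, time_criticality, risk_reduction, job_size, confidence):
    """Standard WSJF score, scaled by an evidence-based confidence in [0, 1]."""
    base = (value + time_criticality + risk_reduction) / job_size
    return base * confidence

# Same raw component scores, very different evidence behind them:
guess  = adapted_wsjf(5, 3, 4, 3, confidence=0.2)  # gut feel only
tested = adapted_wsjf(5, 3, 4, 3, confidence=0.8)  # backed by research and experiments

print(f"guess: {guess:.2f}, tested: {tested:.2f}")
```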

Takeaways

In this article I touched on the many limitations of intuition, consensus and HiPPO when it comes to prioritization. Old chestnuts like the Kano model or the Eisenhower Matrix may help frame the discussion somewhat, but won’t take you very far either. 

Cost of Delay, with its two derivatives — CD3 and WSJF — warrants more attention, but I would suggest adapting it to support various value metrics and some element of evidence-based confidence.

In part 2 of this article series I’ll discuss two very popular prioritization techniques: Teresa Torres’ Continuous Product Discovery and the popular ICE/RICE method that I’ve written a lot about in the past. This will be an opportunity to contrast a design-oriented approach that derides putting numbers on ideas with a very numerical alternative. I have my praise and my criticisms of both. This article is coming shortly.
