In the first article in this series I reviewed common prioritization approaches, including opinion/consensus/HiPPO, rules-of-thumb, and Cost of Delay.
I also explained why prioritization as part of product discovery — prioritizing what to test — is far more effective than prioritizing what to build. You greatly reduce the cost of prioritization errors, and at the same time allow for more ideas to be accepted, reducing tension and conflict, and improving the odds of finding those rare good ideas.
In this article I’ll focus on the two most common ways to prioritize — prioritizing using user problems and using impact, confidence, and ease (ICE). We’ll see the pros and cons of each, and whether they’re truly mutually exclusive, as some product experts would have you believe.
Prioritizing by User Problems (aka Opportunities / Needs)
If cost of delay is all about money, then user-problem prioritization is all about the users. The core idea is that business goals should be connected to user problems (also called opportunities or underserved needs). Solving the problems will (hopefully) change user behavior in ways that will drive the desired business results. For example, if the business goal is to increase the number of signed-up users, we may look for problems in the sign-up process, prioritize these problems, and fix them in order of priority, thus driving an uplift in signups.
Derived from design philosophy, this approach often imposes a strict order — first map the problem space and identify the most important problems, then think of solutions (aka ideas or bets). Design Thinking, popularized by the design firm Ideo, is by far the most influential methodology/philosophy in this space. Teresa Torres’ Continuous Discovery (CD), described in her book Continuous Discovery Habits, is a modern and popular take, best known for its main artifact: Opportunity Solution Trees (OST).

Here’s an example that Torres provides: say Netflix wants to drive up viewing hours per user. A potential opportunity tree may include the top-level opportunity “I don’t want to miss out on something good”, with sub-opportunities such as “I don’t know when a new season is available” and “I want to know what my friends are watching”.
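For readers who think in code, an opportunity solution tree is essentially a small tree data structure: a desired outcome at the root, opportunities and sub-opportunities beneath it, and solution ideas under those. Here’s a minimal Python sketch of the Netflix example; the solution idea shown is my own made-up illustration, not taken from Torres’ book.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    """A node in an opportunity solution tree: an outcome, an opportunity, or a solution idea."""
    label: str
    children: List["Node"] = field(default_factory=list)

# The Netflix example from above; the solution idea is a hypothetical illustration.
tree = Node("Outcome: increase viewing hours per user", [
    Node("Opportunity: I don't want to miss out on something good", [
        Node("Sub-opportunity: I don't know when a new season is available", [
            Node("Solution idea (hypothetical): notify me when a show I follow gets a new season"),
        ]),
        Node("Sub-opportunity: I want to know what my friends are watching"),
    ]),
])

def print_tree(node: Node, depth: int = 0) -> None:
    """Print the tree with indentation that mirrors the hierarchy."""
    print("  " * depth + node.label)
    for child in node.children:
        print_tree(child, depth + 1)

print_tree(tree)
```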
Continuous Discovery, as the name suggests, is a system for product discovery, but it also includes some implicit multi-level prioritization:
- Prioritizing opportunities — Torres recommends tackling one sub-opportunity at a time. You first pick the most important top-level opportunity, and then choose one of its sub-opportunities. The choice between sibling opportunities is based on: 1) opportunity sizing, 2) market factors, 3) company factors, and 4) customer factors. In her book, Torres explains these in general terms and urges practitioners not to use numbers, but rather to make a “data-informed, subjective comparison of each of the factors”. The decision is deliberately “messy and subjective”. To keep practitioners from dwelling too long on the choice, Torres recommends Jeff Bezos’ one-way/two-way door criterion for determining which decisions are reversible and which are not.
- Picking solutions/ideas — In brainstorming sessions the team produces 15-20 ideas for the chosen opportunity. Ideas that don’t clearly address the opportunity are filtered out. From the rest, the team picks 3 winners using multiple rounds of dot-voting. Torres is adamant that you must Prioritize Opportunities, Not Solutions, and specifically calls out the practice of grading ideas in a spreadsheet (see ICE below) as fundamentally flawed.
My Evaluation of Prioritization by User Problems
I’ll focus here mostly on the prioritization method of Continuous Discovery, as other design-inspired methods vary widely in implementation. While I practiced variants of Design Thinking during my time at Google, I didn’t practice Continuous Discovery (CD) directly, though I’ve consulted for companies that did. So these are my opinions, rather than direct hands-on observations:
Pros
- CD emphasizes making decisions based on evidence (what I call evidence-guided development) through a combination of user research and experimentation — exactly as it should be.
- CD encourages user research, and especially user interviews, which many companies don’t practice enough. Adopting CD should amplify the customer voice inside the company and elevate customer-centricity.
- Opportunity solution trees offer a practical way to connect business goals to action. I’d argue it’s not the only way, and that it suits certain types of goals and solutions better than others.
- CD deliberately tries to reduce cognitive load by encouraging fast, imperfect decisions, which makes adoption and practice easier.
Cons
I leveled most of my criticism of the solving-user-problems philosophy in the article You’re Not Just Solving User Problems. Here are the key points:
- “It’s all about solving customer problems” is too limited a worldview in my opinion. In practice, the company has needs of its own that don’t always overlap with customer needs. For example, lowering costs is a legitimate need of the company, but interviewing customers will tell you little about the true opportunities (some of which may be technical or financial). Adopting AI is also a pressing need for many companies, but customer interviews will likely not tell you much about it.
- The map-the-problem-space-before-the-solution-space model (the double diamond) is often overly restrictive and impractical. Many ideas come directly from customers, stakeholders, managers, and the team, not through user research. Rejecting these ideas outright, or forcing them through the filter of user interviews, is impractical and will likely annoy your colleagues. Hence there’s a need for a general idea prioritization system (which can live alongside continuous discovery).
- User research is just one form of research; others include data research, market research, and technology research. Some of the most important innovations of recent decades have emerged from these.
- User research is rich in qualitative meaning, but does not guarantee statistical significance. For example, the choice of interviewees can greatly affect the findings. The opinions and biases of the researcher can inadvertently color the results. Hence I find it’s best to use multiple forms of research and to cross-correlate.
- While I share Torres’ recommendation not to turn prioritization into a pseudo-science, choosing what to work on using broad criteria like “opportunity size” or “market factors” in a deliberately “subjective and messy” way sounds error-prone to me, especially as there is no proposed way to test the assumptions behind the opportunities other than user interviews. I’m also not a massive fan of team dot-votes as a way to pick ideas.
To be clear, Design Thinking and Continuous Discovery are widely practiced, and I’ve met many practitioners who warmly recommend them. I feel they’re very valuable, but I would recommend combining them with other methods rather than using them exclusively.
Prioritizing by Impact, Confidence, and Ease (ICE)
ICE (Impact, Confidence, Ease) was invented by Growth guru Sean Ellis as a way to rank growth experiments, but is now widely used to prioritize product and business ideas. I’ve written extensively about ICE in my book Evidence-Guided, in my eBook on the topic, and in multiple articles.

To use ICE we need to first collect ideas in an Idea bank, and then estimate three values for each:
- Impact — how much the idea stands to improve the target metric. ICE can be used flexibly with any metric — from the company north star or top business metric down to a quarterly key result. Naturally you have to use the same metric across the ideas you wish to compare.
- Ease — how easy it will be to build and launch the idea in full (without any testing). Usually this is the inverse of person-weeks.
- Confidence — how strong the evidence is that we will actually see the estimated impact and ease.
Each value is normalized to a range of 0-10. We multiply the three values (or average them) to get the ICE score. The scores are just a hint — knowing what we know now, these ideas look most promising and should be tested first. ICE does not guarantee that these are the best ideas, or that they will even work.
RICE is a derivative of ICE invented by Intercom. It adds a fourth component — Reach — how many users/customers will be affected. Some ICE practitioners, myself included, argue that Reach is simply a component of Impact, and not necessarily one you always want to factor in.
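To make the arithmetic concrete, here is a minimal Python sketch of ICE/RICE scoring. The idea names and scores are made up purely for illustration, and Reach is treated here as another normalized 0-10 factor, following the framing above rather than Intercom’s exact formula.

```python
from dataclasses import dataclass

@dataclass
class Idea:
    name: str
    impact: float      # 0-10: expected lift to the chosen target metric
    confidence: float  # 0-10: strength of the evidence behind the estimates
    ease: float        # 0-10: roughly the inverse of the person-weeks to build and launch
    reach: float = 10  # 0-10: share of users/customers affected (used by RICE only)

def ice_score(idea: Idea) -> float:
    """Multiplicative ICE score; some teams average the three values instead."""
    return idea.impact * idea.confidence * idea.ease

def rice_score(idea: Idea) -> float:
    """RICE treated here as ICE with Reach as a fourth normalized factor."""
    return idea.reach * ice_score(idea)

# Hypothetical idea bank, purely for illustration.
ideas = [
    Idea("Simplify the sign-up form", impact=6, confidence=4, ease=7),
    Idea("Annual-plan discount", impact=8, confidence=2, ease=5),
    Idea("Referral program", impact=7, confidence=3, ease=3, reach=4),
]

# Sort by ICE score, highest first. The ranking is only a hint about what to
# test first, not a guarantee of which ideas will actually work.
for idea in sorted(ideas, key=ice_score, reverse=True):
    print(f"{idea.name}: ICE={ice_score(idea):.0f}, RICE={rice_score(idea):.0f}")
```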
Evaluation of ICE/RICE
ICE is an area where I feel I have deep experience, both as a practitioner and as a coach, and I’ve seen many companies benefit from using it. But I’m in no way here to endorse the method as the best or only way to prioritize. As you’ll see below, ICE has some clear challenges that must be addressed to make it effective.
Pros of ICE/RICE
- Flexible prioritization system that allows teams to focus on any metric. ICE works very well with Objectives and Key Results and with metrics hierarchies.
- Confidence, when used correctly, has several important benefits: a) it reflects how reliable and trustworthy the other guesstimates are, b) it encourages testing and validating ideas and making evidence-guided decisions, and c) it can act as an important antidote to opinions, biases, and HiPPO.
- ICE is fairly easy to understand by anyone in the company, and it acts as a powerful tool to communicate prioritization reasoning.
- In my experience, switching to looking at ideas through the lens of impact, confidence, and ease greatly shortens idea discussions and elevates the quality of the decisions. It focuses people on the impact on the goals, on the costs, and on the confidence/risk factor. Often the right decision is that we need to test more before deciding, and ICE helps drive that point home.
Cons of ICE/RICE
- I regularly see product people use opinions and sparse data to estimate Impact and Ease, and yet assign a high Confidence value. This creates bogus, subjective prioritization, which has earned ICE many detractors. To address this problem I created the Confidence Meter, which assigns weights to different categories of evidence.

- The ICE score itself (Impact * Confidence * Ease) is simplistic, and sorting ideas by it can easily lead to wrong decisions. For example, most ideas start with low confidence scores simply because we haven’t tested them yet. Even worse, while confidence levels are low, ICE scores can fluctuate widely as new information arrives, so they are quite unreliable and at best a hint. I tried to address this challenge in my book Evidence-Guided:
Here’s an alternative way to prioritize. At any given point you’ll need to pick some low-confidence ideas for early validation (typically handled by the product manager), some medium-confidence ideas for early testing (requiring other team members to help), and a few high-confidence ideas for advanced testing and delivery (requiring heavy engineering and design investment). Keeping all three funnels full ensures you never run out of ideas to work on. So you can split your list of candidate ideas into these three groups by level of confidence, and pick ideas in each.
(source: Evidence-Guided; a rough sketch of this bucketing appears right after this list)
- ICE requires people to make cognitively hard assessments of the three values (impact is especially hard). The rule “don’t make me think” applies to product people too, and I find that product trios can tire of doing too much ICE and start neglecting the practice. I tried to address this challenge in my book by breaking ICE prioritization into several stages in the lifecycle of an idea, where most ideas require only quick guesstimates and only a few require more in-depth analysis. Still, this is the biggest challenge of ICE, and it takes quite a bit of discipline.
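Here is a minimal Python sketch of the three-funnel split described above. The confidence thresholds, idea names, and confidence values are my own illustrative assumptions (in practice the confidence score would come from something like the Confidence Meter), not values prescribed anywhere.

```python
from collections import defaultdict
from typing import Dict, List, Tuple

# Hypothetical cut-offs on a 0-10 confidence scale; the exact thresholds are
# my own illustration, not prescribed values.
def funnel(confidence: float) -> str:
    if confidence < 3:
        return "low confidence: early validation"
    if confidence < 7:
        return "medium confidence: early testing"
    return "high confidence: advanced testing and delivery"

def split_into_funnels(ideas: List[Tuple[str, float]]) -> Dict[str, List[str]]:
    """Group (name, confidence) candidates so that all three funnels stay populated."""
    buckets: Dict[str, List[str]] = defaultdict(list)
    for name, confidence in ideas:
        buckets[funnel(confidence)].append(name)
    return dict(buckets)

# Illustrative candidate list with made-up confidence values.
candidates = [
    ("Simplify the sign-up form", 4.0),
    ("Annual-plan discount", 1.5),
    ("Referral program", 8.0),
]
print(split_into_funnels(candidates))
```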

Final Thoughts
It would have been lovely to have one perfect prioritization system that predicts the future with high probability, but sadly no such thing exists (short of a future-seeing crystal ball, currently out of stock on Amazon). It’s clear that relying too much on intuition and rules of thumb is very haphazard. The three main approaches (Cost of Delay, user problems, and ICE) all offer marked improvements, but each comes with its own caveats and cons.
Contrary to what you may hear from purists, you can mix and match methods. For example, WSJF (weighted shortest job first) could be made more evidence-guided by incorporating a fifth value, Confidence (perhaps using the Confidence Meter). Opportunity Solution Trees can be paired with ICE (I know teams that do just that). The old truism holds — there isn’t just one right way to develop products; you need to try the methods and adapt them to your context and needs.
Perhaps the bigger takeaway is that you should see prioritization not in isolation, but as part of a larger system that includes research, idea evaluation, and idea validation. After nearly 30 years in the industry, this evidence-guided approach is the only effective way I’ve found to build high-impact products, and it doesn’t really matter whether you call it GIST, Continuous Discovery or something else.