A Culture of Good Decision-Making (Part 2)

In part 1 of this article series I discussed the principles high-performing companies use to accelerate and improve the quality of their decisions. In this article I want to give you specific techniques to do the same, including the ACID model, not judging decisions by their results, and publicly killing HiPPOs.   

This is a long article, but some of the most potent stuff comes towards the end — bear with me. 

Set Clear Decision Roles

In the early 2000s, while I was working for Microsoft, the company realized it had a major cultural problem. Most decisions involved many stakeholders (or just people with an opinion) who wanted to have a say, which led to long debates, escalations, and general confusion about who got to decide.

Microsoft’s solution was to adopt the ACID model of decision roles:

The ACID Decision Responsibility Model (adapted from Microsoft’s original model)
  • Decider / Decision Owner — the person who has the right to make this decision. Usually this is the person who is best informed about the matter and most likely to be responsible for carrying out the decision (one Microsoft exec put it as “the person whose butt is most on the line”). 
  • Input-Giver — a person with important information or opinions. This person’s input should be collected, but he/she does not take part in making the decision.
  • Consulted — a person who provides input and is also consulted about options and tradeoffs. However, he/she does not make, vote on, or otherwise control the decision.
  • Approver — usually a manager or a person with area responsibility who can push back on or overturn a decision (an approver can also give input and be consulted). Decision escalations go to the approver, but, like overturned decisions, they should be kept to a minimum to maintain the integrity of the process.

Deciders and approvers can sometimes be a very small group of people, for example a product trio.

Implicitly there’s a fifth group — everyone else. If you’re not assigned one of the four roles, you’re simply not invited to take part in this decision. That’s OK; you can’t involve every person in every decision. Still, be careful not to use the model to sideline people. If someone feels strongly about a particular matter, consider letting them be an Input-Giver, or even Consulted.

Here’s an example of ACID in action:

A sales rep approaches the Customer Onboarding product team with a request to tweak the onboarding flow to accommodate a specific customer need. A decision has to be made whether or not to address this customer need and when. The decision owner in this case is the product manager. The team engineering lead and designer are consulted, as are the sales rep and a member of customer support. The relevant product marketing manager is an input-giver. The director of product responsible for this product area is the approver.

After reviewing the matter, the PM concludes that this is a one-off request that is likely to have little or no impact on the experience of other customers or on business performance. The team proposes a few workarounds that satisfy the customer’s need, albeit with higher overhead. The PM shares the information she found with the consulted group and informs them that she’s planning to reject the request. Everyone agrees except the sales rep; he nonetheless accepts the decision and communicates the workarounds to the customer.
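
If your team keeps a decision log, role assignments like the ones in this story can be recorded explicitly. Here’s a minimal sketch in Python (the structure and names are my own illustration, not an official ACID artifact):

```python
from dataclasses import dataclass, field

@dataclass
class Decision:
    """One entry in a lightweight decision log, with ACID roles spelled out."""
    question: str
    decider: str                 # has the right to make the decision
    approver: str                # can push back or overturn (sparingly)
    consulted: list[str] = field(default_factory=list)     # give input and discuss tradeoffs
    input_givers: list[str] = field(default_factory=list)  # give input only

# The onboarding request from the story, encoded as a log entry:
onboarding_request = Decision(
    question="Tweak the onboarding flow for a specific customer need?",
    decider="Product manager, Customer Onboarding",
    approver="Director of Product",
    consulted=["Engineering lead", "Designer", "Sales rep", "Customer support"],
    input_givers=["Product marketing manager"],
)
```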

ACID made a big difference for Microsoft (at least in my org). Whenever we found ourselves stuck in debate, someone would ask “OK, who’s the decider?”, and once that person was identified, the matter would come to a resolution much more quickly. On bigger decisions, knowing whether they were an I, a C, or neither helped people adjust their expectations and behaviors.

Disagree and Commit

In the story above, the sales rep voiced his disagreement with the decision, but then proceeded to execute it. This disagree-and-commit approach is very helpful and healthy. People can voice their disagreement without coming across as negative, and are able to commit to a decision they disagree with without resentment. Disagree-and-commit may be used between a report and a manager, between peers, and even by a manager who disagrees with a report’s decision.

Here’s Jeff Bezos explaining the concept on the Lex Fridman podcast — well worth watching:

Here’s the most important part of Bezos’ explanation:

“I’m agreeing to commit to that decision so I’m not going to be second guessing it, I’m not going to be sniping at it, I’m not going to be saying I told you so, I’m going to try actively to help make sure it works.”

Don’t Judge Decisions By Their Results 

“A mistake is not something to be determined after the fact, but in the light of the information until that point.” — Nassim Taleb

Imagine a continuation of the story above. Four months later the customer decides not to renew its yearly contract, citing the lack of flexibility on the onboarding request as a key factor. The company loses a lucrative account and the chief sales officer brings the matter before the executive board for discussion. 

What happens next depends on the culture of the company:

In company A the executive team concludes that the product manager clearly made a bad decision. The PM is reprimanded for the mistake, and the company introduces a new process requiring every decision pertaining to customer requests to be reviewed by a committee of managers.

In company B the exec team investigates the matter and concludes that the PM followed the proper decision process. She consulted with all relevant parties, collected information, and evaluated the impact, costs, and risks of the decision. The customer didn’t give an indication that this was a deal-breaker, nor is it clear that the onboarding request was the only factor in its decision. The managers conclude that the decision process is still sound, and that this was mostly a matter of bad luck. As a follow-up, the CPO and CSO are asked to train their teams to better gauge how critical a request is for a customer.

Company A fell for what psychologists call Outcome Bias — the tendency to judge decisions by their outcomes rather than by their quality given the information available at the time. There’s also Hindsight Bias at play — the assumption that the PM could have accurately predicted the future, which underestimates the role of luck. These biases led to a wrong conclusion — that a mistake was made — and to a change in process that will slow down future decisions and take decision power away from product managers, yet may not improve the quality of the decisions. If the company keeps making these types of changes, slowly but surely it will become more bureaucratic, more centralized, and less trusting of its employees.

Company B avoided the trap of calling out a mistake and kept the process agile and decentralized. Note that the decision owner is not absolved of responsibility. Had the managers found that the PM made a rushed call based solely on her own opinion, she would have been reprimanded, and the process would have been re-communicated and reinforced.

Publicly Kill HiPPOs

“Humans are not truth-seeking animals. We are social animals” — Jeff Bezos

One of our ingrained social tendencies is deference to people in power, so even when reports have the power to decide, they may subconsciously defer to the manager’s opinion, making him/her the de facto decider. In companies that understand how risky the centralization of decisions is, managers explicitly and consistently show they want decision owners to decide:

  • The book “No Rules Rules” tells the true story of Paolo, a newly hired Netflix content marketer who wants to make a risky move — investing the entire Italy content marketing budget in one show: Narcos (rationale: Italians love shows about the mafia). Paolo tried to convince his remote manager by sending him a compelling slide deck, but in their following meeting the manager said nothing of the proposal. When Paolo pressed the matter, the manager simply said: “It’s your decision, Paolo. Is there something I can do to help?”
  • At Microsoft I heard multiple managers chastising reports with “Don’t make me decide”. 
  • A VP of product at Google once gave a product team reporting to him a safe word (I think it was Platypus) to call out during product reviews whenever they felt he was trying to make their decisions for them. The word was used more than once.  
  • At HubSpot, when a manager sends a suggestion to reports by email, she may affix a flashtag such as #fyi, #suggestion, #recommendation, #strongrecommendation, or #plea to indicate how strongly she feels about it. Of those, only #plea is a directive, but it is very rarely used, and the team still has the option to debate with the manager.
  • At Amazon, during product reviews the senior leader always speaks last to allow junior people to express their opinions and desired decisions without influence or bias. 

Better Group Dynamics

Even when there’s a clear decision owner, groups play a major role in decisions, which can sometimes lead to middle-of-the-road compromises, overly extreme decisions, or time wasted in long debates.

Here are some things I found helpful. 

From “Everyone agrees” to “No one strongly disagrees” 

There’s more than one way to agree in a group:

  • Majority rule — We put the decision to a vote. This method suits cases where everyone’s opinion is of equal value. I find that’s not the case in most decisions — some people are usually closer to the situation, better informed, or more responsible for the outcomes of the decision. Example: Should we play pool or go-kart on team night?
  • Everyone agrees — Decisions are made through unanimous agreement. If there’s even a single dissenting voice, a decision cannot be made. This is a good model when each member of the group brings a critical insight. For example: Should we hire this candidate? (put to all interviewers), or: Should we commit to this goal this quarter? (decided by the team trio: PM, eng lead, and UX designer).
  • No one strongly disagrees — Decisions are made as long as no one strongly disagrees. For example: Should we start validating idea A, B, or C? (again, the team trio).

I found that for many product decisions “No one strongly disagrees” works much better, simply because in groups of smart and opinionated people there’s always someone who doesn’t completely agree (engineers, I’m looking at you). By raising the bar to “strongly disagree” we’re forcing people to consider their reasoning and encouraging them to use their veto rights sparingly. We should be careful not to use this method to suppress people’s voices, though. Any opinion and viewpoint is welcome; the “No one strongly disagrees” rule only applies to how we decide.
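
To make the three modes concrete, here’s a minimal sketch of how each rule turns a set of votes into a go/no-go (the vote labels are hypothetical):

```python
# Each participant casts one of three votes.
AGREE, DISAGREE, STRONGLY_DISAGREE = "agree", "disagree", "strongly_disagree"

def majority_rule(votes: list[str]) -> bool:
    """Passes if more than half of the votes are in favor."""
    return sum(v == AGREE for v in votes) > len(votes) / 2

def everyone_agrees(votes: list[str]) -> bool:
    """Passes only on unanimous agreement."""
    return all(v == AGREE for v in votes)

def no_one_strongly_disagrees(votes: list[str]) -> bool:
    """Passes unless someone exercises their (sparingly used) veto."""
    return all(v != STRONGLY_DISAGREE for v in votes)

trio_votes = [AGREE, AGREE, DISAGREE]  # one mild dissenter
print(majority_rule(trio_votes))              # True
print(everyone_agrees(trio_votes))            # False: a single dissent blocks
print(no_one_strongly_disagrees(trio_votes))  # True: mild dissent doesn't veto
```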

Prepare Decision Meetings Well

Many group decision meetings get derailed from the start because the people in the room are not well informed or well aligned. If you’re the owner of a decision, it’s important to ensure that everyone knows the following:

  • Goals — What do we need to decide on? What goal is this decision aiming to serve? 
  • Context — What relevant information do we have? 
  • Assumptions — What are we assuming to be true, but don’t know for sure?
  • Proposals — What options are on the table?
  • Process — By when do we need to decide? Who’s the decider? How do we decide?

It’s best to share the information beforehand so people can come prepared. However, some people will not read the material, so many decision meetings start with a brief overview. At a minimum, I recommend always calling out what needs to be decided.

As usual, Amazon has its own practice. Decision owners have to write six-page documents that explain the context and proposal in detail, sometimes with extra FAQ pages answering common questions (the Working Backwards process). The initial 20 minutes of the meeting are devoted to silently reading the documents, giving participants a chance to absorb the information and jot down questions, comments, and suggestions. Once everyone has the context, the discussion starts.

Use Algorithms

“humans are unreliable decision makers; their judgments are strongly influenced by irrelevant factors, such as their current mood, the time since their last meal, and the weather.” — Noise, HBR, October 2016

Daniel Kahneman, a winner of the Nobel Prize in Economics who spent a lifetime researching decision-making, found that the predictions and decisions of both individuals and groups suffer from biases and noise. Biases are consistent errors of judgment, for example risk aversion or over-optimism; noise is making different decisions with the same data at different times. For example, “when software developers were asked on two separate days to estimate the completion time for a given task, the hours they projected differed by 71%, on average.”

A large body of research shows that an effective solution is to replace or augment human judgment with formal rules — also known as algorithms.

“People have competed against algorithms in several hundred contests of accuracy over the past 60 years, in tasks ranging from predicting the life expectancy of cancer patients to predicting the success of graduate students. Algorithms were more accurate than human professionals in about half the studies, and approximately tied with the humans in the others. The ties should also count as victories for the algorithms, which are more cost-effective.” — Daniel Kahneman et al., Noise, HBR, October 2016

A good example of such an algorithm, which you may already be using, is making bug triage decisions using a bug priority matrix or a bug bar.

Example bug priority matrix (source: Fibery)
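
A priority matrix like this is straightforward to encode as a lookup table. The sketch below assumes severity and frequency axes with illustrative cell values; the actual cells in Fibery’s matrix (or yours) will differ:

```python
# A bug priority matrix as a lookup table: (severity, frequency) -> priority.
# Axes and cell values are illustrative, not taken from Fibery's matrix.
PRIORITY_MATRIX = {
    ("critical", "often"):  "P0",
    ("critical", "rarely"): "P1",
    ("major",    "often"):  "P1",
    ("major",    "rarely"): "P2",
    ("minor",    "often"):  "P2",
    ("minor",    "rarely"): "P3",
}

def triage(severity: str, frequency: str) -> str:
    """Deterministic triage: the same inputs always yield the same priority."""
    return PRIORITY_MATRIX[(severity, frequency)]

print(triage("major", "often"))  # P1
```

The specific cells matter less than the property that the same inputs always produce the same priority, which removes mood, fatigue, and rank from the call.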

We can consider developing algorithms for other cases that rely on human judgment, such as campaign budgeting, hiring decisions, and experiment result interpretation. However, as Kahneman points out, taking humans out of the loop completely is often “too radical and impractical”, so we should look to augment human judgment rather than replace it entirely. This may become an important use case for artificial intelligence.

ICE for Prioritization

Some of the most important, yet hardest to resolve, decisions are about which product changes to develop and in which order — AKA idea prioritization. Here too we have a popular algorithm, one that I’ve written a lot about in the past: ICE — Impact, Confidence, and Ease.

ICE helps us evaluate ideas by assigning three values to each idea:

  • Impact — What impact does this idea stand to have on a goal (usually denoted by a metric)?
  • Ease — How easy or hard will this idea be to implement in full?
  • Confidence — How sure are we about the Impact and Ease estimates?

The aggregate ICE score is calculated by averaging or multiplying the three elements, but I find this to be the least reliable part of the algorithm. The scores are at best a weak signal.
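
For reference, here’s a minimal sketch of the arithmetic, assuming a 0–10 scale for each element (the idea names and numbers are made up):

```python
from statistics import mean

def ice_score(impact: float, confidence: float, ease: float,
              method: str = "multiply") -> float:
    """Aggregate an ICE score by multiplying or averaging the three elements.
    All inputs are assumed to be on a 0-10 scale."""
    if method == "multiply":
        return impact * confidence * ease    # range 0..1000
    return mean([impact, confidence, ease])  # range 0..10

ideas = {
    "Revamp onboarding": (8, 3, 4),  # high impact, but low confidence
    "Fix signup bug":    (5, 8, 9),  # modest impact, strong evidence, easy
}
for name, (i, c, e) in sorted(ideas.items(), key=lambda kv: -ice_score(*kv[1])):
    print(f"{name}: {ice_score(i, c, e):.0f}")
```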

ICE improves prioritization decisions by structuring the discussion. Instead of supporting or rejecting ideas through verbal argument (which is often a matter of opinion), the group has to consider what really matters — impact on the goals and costs — and has to conclude how much evidence there is in support of these estimates. This makes a big difference. I’ve witnessed ICE shorten prioritization debates from hours to minutes while radically improving the quality of the decisions. It’s also a great HiPPO deflator.

The main challenge with ICE is generating the estimates for Impact and Ease. If we rely only on opinions, judgment, and sparse data, the value of the ICE algorithm is limited. So for both Impact and Ease I recommend considering a range of assessment methods with escalating costs:

  • Guesstimates
  • Considering similar ideas from the past   
  • Fact-finding
  • Back-of-the-envelope calculations
  • Worst/Med/Best sensitivity analysis (see the sketch below)
  • Tests and experiments

(Sidenote: we practice these important product skills in my workshops). 
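
As an illustration of the sensitivity-analysis step, here’s a minimal sketch (with made-up numbers) showing how a worst/median/best Impact estimate turns a single ICE score into a spread:

```python
# Worst/Med/Best sensitivity analysis on one idea's Impact estimate.
# All numbers are hypothetical; Confidence and Ease are held fixed.
impact_scenarios = {"worst": 2.0, "median": 5.0, "best": 8.0}  # 0-10 scale
confidence, ease = 4.0, 6.0

for label, impact in impact_scenarios.items():
    score = impact * confidence * ease  # multiplicative ICE, as above
    print(f"{label:>6}: ICE = {score:.0f}")

# Output spread: 48 / 120 / 192. A wide spread signals a fragile estimate,
# worth some fact-finding or a cheap test before going all-in.
```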

Use Evidence to De-Risk Decisions

If you’ve read some of my past articles, you won’t be surprised by this one. I believe that evidence (i.e. data that confirms or refutes our assumptions) is the power tool of decisions, and that high-performing product companies gain a massive advantage by using evidence across the board. So much so that I wrote a whole book on this topic.

Evidence helps us escape the “all-or-nothing” fallacy of having to make an expensive, high-risk decision on the spot. The alternative is to invest a bit more in research and/or experimentation and revisit the decision in light of the new evidence. While we’re deferring the decision, we’re making forward progress: we’re gaining a better understanding of the matter, and often finding better alternatives we hadn’t considered earlier. We’re also drastically reducing the risk of making a bad decision.

The Confidence element of ICE reflects this principle. As we evaluate ideas we have to assess: a) how much confidence we have in our Impact and Ease numbers, and b) whether this confidence level is sufficient to go all-in on the idea. To answer the second question, see my article How Much Product Discovery is Enough. To answer the first, I created the Confidence Meter.

The Confidence Meter (download it here as a free calculator)

Takeaways

Decisions don’t have to be hard and rife with politics and debate. Some powerful techniques to help you decide faster and better include:

  • Define decision responsibilities (including clear decision owners)
  • Use “disagree and commit” and “no one strongly disagrees” 
  • Judge decisions by their process and what was known at the time, not by their results
  • Managers should publicly show they don’t want to be the deciders
  • Use algorithms and evidence to augment human judgment and de-risk decisions