How to think about Bets, Success Metrics, and Roadmapping

By John Cutler

Enjoy This? Leave a tip here: https://johncutlefish.gumroad.com/l/ktxkc

Introduction
The Problem
Bets—All the Way Down
Bet Overview
High-Level Bet Types
Solution-focused bets
Opportunity-focused bets
Strategy-focused bets
Ongoing Bets
Confidence and Uncertainty
Risk, Time, Drivers, Etc.
Risk profile
Nesting / Chunking / “Breaking Down”
Drivers, Constraints, and Floats (Degrees of Freedom)
Mandate Levels
Bet Shapes
Knowing the Players
Summary
Success Metrics (Bet Metrics)
Goals, Decisions, Assumptions, Performance, and Models
Start With Powerful Ideas
Persistent Models and Bets
Putting it All Together (An Example)
The Metric Test
Summary
One-Pagers, PRDs, and Stuff
Roadmapping
Answering Questions
Resolution & Multi-Resolution
Honest and Strategic
Focus on the Work (Not the Workers)
Suitably Detailed
Other Artifacts
Bet Portfolios
Hill Climbing
Value & Effort Shapes
Start Together, Work Together, Finish Together
Conclusion

Introduction

The Problem

What problem does this short book hope to address?

  1. Words like “problem”, “solution”, “opportunity”, “project”, “epic”, “MVP”, "MLP", and “story” fail to capture the nuance and complexity of product work. Yet we don’t have decent alternatives. Impact: Lack of coherence.
  2. People intuitively understand that they should adapt their approach based on the profile of their work, but lack a shared language and framework to do so. This makes it hard to persuade others to let go of “mono-process”. Impact: Picking less-than-optimal ways of working.
  3. Teams want to get out of the “Feature Factory” grind, but have trouble communicating context and assumptions “above” specific features. This makes it hard to shift the conversation away from specific features and towards solving problems for people (and your company). Impact: Focus on features, not outcomes.
  4. Teams often adopt restrictive, template-based approaches to documenting roadmap items. This causes premature convergence, and doesn’t set the team up for taking a more outcome-based approach. Instead of living documents, this work is largely performative and temporary. Impact: Lack of shared understanding and alignment, and/or forcing ineffective practices.
  5. In an effort to communicate clearly, teams often simplify to the point of oversimplification. They lack coherent ways to communicate uncertainty and “messiness” with confidence. Impact: Communication backfires when the complexity becomes more apparent.
  6. Metrics and goals are tacked on to efforts after the fact, making goal setting and measurement feel like an “extra job”. Impact: Teams either don’t measure at all, or rush to establish measures for their work.
  7. Roadmaps fail to convey actual context because they read as more of a delivery plan, and less of a point-in-time snapshot of the team’s thinking. Impact: Premature convergence, sporadic updating, and a lack of helpful conversations triggered by the roadmap.

Bets—All the Way Down

Bet Overview

What is a Bet? In gambling, it describes the act of risking something of value (usually money) on an uncertain outcome with the hope of winning something of greater value. In our context, a bet represents a logical container of investment (time, attention, focus, energy, and funds) that we are willing to risk for achieving a desired impact.

Why is this a helpful framing? We use the word Bet because it acknowledges that many of the things we try will not work. But if we play the game correctly, each of our attempts allows us to learn and bounce back stronger and smarter. It also reminds us that our “bankroll” (the amount of funds and energy available to us) isn’t unlimited. We need to place our bets intelligently and strategically. Imagine you’re betting your own money!

With a project, the focus is on finishing a predetermined chunk of work. With a problem focus, it is on solving a specified problem. With a Bet, our focus is impact, value, and risk.

Bets range in scope, duration, and level of prescription. You can have big bets, small bets, safe bets, and risky bets. Some bets are “all in” -- you cannot pivot midway, and the bet may take a while to generate benefits. Other bets let you adjust your approach continuously to optimize outcomes.

Some bets are very prescriptive -- “I think we should do exactly this, and I expect we’ll get outcome X”. Other bets are more opportunity focused: they identify a point of leverage, and rely on the team to run experiments to seize that opportunity. You might have some “representative options” in mind, but the end result may look entirely different.

An interesting characteristic of bets is that they “nest”. A team might make a high level strategic bet like “I think the market will move in this direction, and we should capitalize on it”. This may sound a bit like a prediction and lacks a sense of investment, but as Chase Roberts remarks, “I think it's ok if it sounds like a prediction because ALL bets are predictions (though not all predictions are bets).” That high level bet will inspire all sorts of “sub bets” -- e.g. “because of that, we should build Feature X to meet the expected future demand should the market move in that direction.”

Companies have foundational bets that underpin their whole existence. These foundational bets may last years or decades! And every day we place lots of “micro bets” -- measured in minutes and hours -- with our allocation of time and energy.

High-Level Bet Types

The concept of Bets is powerful and flexible. With that flexibility comes a certain amount of ambiguity. The trick to framing bets is to:

  1. think about the “shape” or nature of the bet, and consider “nesting”
  2. think about drivers, constraints, and floats
  3. think about who should be involved

So how do you go about understanding the “shape” of your various bets?

Solution-focused bets

I’m in the shower before work and I have the most amazing idea. It’s perfect. We totally need to deliver this thing. It will take months, but my Amazing Idea will be totally worth it because there’s a good chance it will generate Important Outcomes. This is a solution focused bet. My challenge will be persuading people that the investment will be worth it, and that Amazing Idea is the best option for achieving Important Outcome. I might also get some questions about whether there are opportunities to learn faster and take a smaller risk. Is it a big bang bet (you can’t work incrementally or iteratively), or an experimentation friendly bet (you can learn more rapidly)?

Opportunity-focused bets

The next day, drinking my morning coffee, I have an epiphany. I’ve been noticing a trend in the market. There’s a huge opportunity if we can figure out how to get in on this trend early and beat our competitors to the competitive high ground. I’m sure of it. I’m not sure how we’ll do it, exactly, but if we can assemble a crack crew of team members and focus for a couple months, I think we’ll know very quickly if it is in reach. This is more of an opportunity focused bet, because I’m not trying to pitch an idea with an expected outcome, rather I am pitching the existence of an opportunity that is within reach.

Solution focused and opportunity focused bets go hand in hand, because every opportunity focused bet will inspire multiple solution focused “child” bets, and every solution focused bet has an implied or explicitly stated “parent” opportunity. A great idea helps you figure out your view on the opportunity. And a powerfully stated opportunity can inspire great ideas.

Note how I skipped problem-focused bets. The challenge with problem-focused bets is that they are not inherently valuable. Not all product problems are worth fixing. Problems that are valuable to solve are opportunities, which is in keeping with the idea of a “bet”. Diagnosing problems, and matching approaches to the nature of the problem, is hugely important. But it is helpful to view problems as part of the opportunity-focused bet idea.

Strategy-focused bets

At a certain scale and scope, opportunity-focused bets take on the characteristics of a strategy-focused bet. To draw on the work of Richard Rumelt, strategic bets relate to things like: our “map of the territory”, the guiding policies that guide our actions, an “insightful diagnosis”, and what Rumelt refers to as the “kernel”.

Good strategy has a simple logical structure I call the Kernel. These three elements are (1) a clear-eyed diagnosis of the challenge being faced, (2) an overall guiding policy explaining how the challenge will be met, and (3) a set of coherent actions designed to focus energy and resources. (Rumelt)

Is the use of the word “bet” appropriate at this level? Is a belief or assumption a “bet”? I would argue: Yes. We are betting that our view of things is correct, at least for now. We are betting that the map exists as we see it. Especially if we buy into Rumelt's idea that a strategy should involve some sense of “coherent actions” (that action is possible, not necessarily that a plan exists). At the core of many successful companies exist foundational strategic bets.

Ongoing Bets

A big anti-pattern is only visualizing/talking about “net new” bets. We might leave “leftover” bets from prior quarters/years out of the picture because they complicate things or are “too much to look at.”

We want to be careful about sunk cost bias! If 60% of our time is spent on old bets, and we don’t visualize that work, then we may forget to make sure that our time is spent in the best way possible. Big bets, especially ones that are opportunity-focused and experimentation friendly, require ongoing, smaller bets to be successful. Product iterations, sunsetting, new services, add-on capabilities, demand programs, etc. -- these can all be follow-ons to a big bet, like launching a new product. They are bets too.

A similar situation happens with “patching up” or repeatedly fixing things we delivered in the past. Or a stream of occasional requests for feedback, fixes, and quick tasks. Bets don’t need to be “active”. They might represent a stream of investment over time...applied in bits and pieces. “Bets don’t have a one-time cost,” explains Nick Ruiz.

Confidence and Uncertainty

When we assess and shape bets we are juggling a handful of factors.

We consider the size of the opportunity. How valuable is it? Is it a one-time thing, or an opportunity that will span years and decades?

We consider how amenable the bet is to experimentation (experimentation friendliness). Some bets have lots of escape routes. Others are more big-bang, with no real opportunity to test assumptions. Realizing that there are opportunities and methods to experiment in low-risk ways (and safe ways, from the perspective of customers) is also a skill.

If we have a solution in mind already, we might have a sense of the level of effort required (intervention effort). But that is not always the case (nor is it desirable, as we might want to wait to figure out solutions with a broader team).

Are we confident that an effective intervention exists (intervention certainty)? Keep in mind that in some cases we’ll have high confidence that an intervention must exist, but we don’t actually know how to solve the problem right now. How do we know? Common signals include: prior work, competitor efforts, what we know about the customer’s goal, existing technology, etc.

Which brings us to intervention selection. Do we know how to solve it right now?

Here are the factors visualized in a handy worksheet.

Note how we indicate confidence intervals in this simple model/thinking tool. For a great resource on how to think about confidence intervals and how to arrive at them, check out the classic How to Measure Anything: Finding the Value of Intangibles in Business by Douglas Hubbard.

A wider confidence interval for something high value might beat out a narrow confidence interval for something less valuable with a known solution. Even if we “fail” a couple of times, and even if the opportunity ends up being on the low end of the range, we still come out ahead—provided, of course, that the effort is reasonably experimentation friendly.
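
To make this concrete, here is a minimal sketch in Python (all numbers are hypothetical, not taken from any real bet) comparing a wide confidence interval on a high-value opportunity against a narrow interval on a lower-value one, using crude midpoint expected values:

    # Crude expected-value comparison. All figures are made up ($M).
    def expected_value(low, high, p_success, cost_per_attempt, attempts=1):
        midpoint = (low + high) / 2                    # center of the interval
        p_any_win = 1 - (1 - p_success) ** attempts    # succeed at least once
        return p_any_win * midpoint - cost_per_attempt * attempts

    # Wide interval, high value, experimentation friendly (three tries).
    wide_high = expected_value(low=1.0, high=9.0, p_success=0.4,
                               cost_per_attempt=0.5, attempts=3)
    # Narrow interval, lower value, known solution (one try).
    narrow_low = expected_value(low=1.0, high=1.2, p_success=0.9,
                                cost_per_attempt=0.2)
    print(f"{wide_high:.2f} vs {narrow_low:.2f}")      # 2.42 vs 0.79

Even with this crude model, the messy, high-value bet wins, provided the retries are actually available.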

The important point is that there is a big spectrum of bet types and risk profiles. We shouldn’t only tackle certain things, with certain solutions. Nor should we blindly focus on the “biggest opportunity” without a sense of whether we can make any progress. In project work, success is finishing “on time and on budget”. In product-like work, success is working on the highest leverage opportunities. Successes can have an outsized, non-linear impact spanning years or even decades.

Risk, Time, Drivers, Etc.

Next we will explore a few models for appropriately classifying and describing bets.

Risk profile

When considering a bet, a good exercise is to think through the various risks. It can be helpful to create a checklist with the various types of risks you might encounter. Here’s a real world “risk list” a team put together to have more productive conversations about risks, and how to mitigate those risks:

Risk List

Target persona. We pick the wrong persona to focus on. Or cast the net too broadly (or narrowly). We might do everything “right” later, but if we get the Who wrong somehow, we could be in trouble.

Target persona objective. We misread what the persona is actually trying to accomplish.

Design / intervention type risk. Given an objective (and business drivers, and a target persona), the design of the feature/bet fails to deliver the expected results. Note: this could be because of a lack of clarity about objectives, missing context/research, or too narrow / too broad of a scope.

“Production” risk. We can’t actually “produce” the feature, it takes too long, the quality isn’t there, or we go way over budget.

Distribution risk. We don’t get the new bet in front of the right people, at the right time. We have trouble with adoption and uptake (despite nailing the above factors).

Customer impact risk. While effective—in the short-run—the experience fails to create lasting/meaningful impact for the target persona.

Business impact risk. While effective—and beneficial to our customers, even—the experience fails to deliver meaningful impact to [Company].

Cohesion risk. While effective, the bet does not “fit” with other aspects of the experience. It degrades the overall experience, and is not a cohesive part of the offering.

Scalability risk. How we execute is not extensible, breaks our “systems”, can’t be re-used, etc.

Measurement risk. We can’t measure this (qualitative and/or quantitatively), which makes assessing the other risks difficult.

Team morale/growth risk. While effective, how we execute on the effort degrades team health and morale.

Opportunity cost risk. Everything could be amazing, but it isn’t the highest value thing we could do RIGHT NOW. Or we seek perfection in this effort, but by doing that we fail to address other opportunities.

Having a risk list makes it much easier to discuss potential risks.

Remember, every initiative involves risk. If there wasn’t risk, there would be no opportunity! But by knowing and calling out these risks you can better map a way of working to the bet in question. Is distribution risk a big factor? Run experiments around distribution. Do you know nothing about the user profile/persona? Conduct some research.
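
A risk list also works well as a lightweight artifact. Here is a minimal sketch in Python (the mitigation entries are illustrative) of a bet review that walks the checklist above and records what the team agreed to do:

    # Walk the risk checklist for a bet; unflagged risks stay visible too.
    RISK_LIST = [
        "Target persona", "Target persona objective", "Design / intervention",
        "Production", "Distribution", "Customer impact", "Business impact",
        "Cohesion", "Scalability", "Measurement", "Team morale/growth",
        "Opportunity cost",
    ]

    def review_risks(mitigations):
        """mitigations maps a risk name to the agreed mitigation."""
        for risk in RISK_LIST:
            print(f"{risk}: {mitigations.get(risk, 'not a major factor')}")

    review_risks({
        "Distribution": "run two distribution experiments before building",
        "Target persona": "schedule five discovery interviews",
    })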

Nesting / Chunking / “Breaking Down”

The relationship between more opportunity focused and more solution focused bets above is a great example of “nesting”. Even opportunities are typically “solutions” to higher level opportunities.

When it comes to bets, you’ll almost always have to consider nesting. A helpful framework to consider nesting is to think about TIME. A good clue that we’re too high level is that the scope of the bet spans too much time. A good clue that we’re too in the weeds is that the bet spans too little time. If the bet is too big, consider “child” bets. If the bet is too small, consider parent bets. Even if you think your idea is the best idea ever, consider “sibling” bets -- other things at that resolution you might try -- and make sure you have a clear idea of the “parent” bet.

Here’s a handy, easy-to-remember tool for thinking about time and the nesting of bets.

Bets range from 1-3 hours, all the way up to 1-3 decades. And they are all connected, either implicitly or explicitly! If we have a 1-3 quarter bet, it would probably be best to specify a 1-3 month sub-bet. And maybe break that down into smaller 1-3 week bets. Conversely, if we have a 1-3 week bet, we should be sure to ask “what 1-3 month and 1-3 quarter bets does this tie into?”
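
Here is a minimal sketch of that ladder in Python (the bet name is illustrative). Each bet sits in a band and should connect to the bands directly above and below it:

    from dataclasses import dataclass, field

    BANDS = ["1-3 hours", "1-3 days", "1-3 weeks", "1-3 months",
             "1-3 quarters", "1-3 years", "1-3 decades"]

    @dataclass
    class Bet:
        name: str
        band: str                                   # one of BANDS
        children: list["Bet"] = field(default_factory=list)

        def neighbors(self):
            """Return the (parent band, child band) this bet should link to."""
            i = BANDS.index(self.band)
            parent = BANDS[i + 1] if i + 1 < len(BANDS) else None
            child = BANDS[i - 1] if i > 0 else None
            return parent, child

    bet = Bet("Streamline self-serve onboarding", "1-3 weeks")
    print(bet.neighbors())    # ('1-3 months', '1-3 days')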

This is just a guideline, but it can be handy. Most teams don’t bother too much with calling the small things Bets. Nor do they end up calling the bigger things (say >1-3 quarters) Bets, as at that point things tend to get more directional/theoretical. But they are bets, and many companies miss this! Stating an annual “pillar” or “focus area”, or even establishing a team to do X for a couple years, is still a “Bet”... but it is a more abstract one.

Another helpful byproduct of the 1-3s approach is that for most product work, provided your team is skilled at working small in the day-to-day, you’ll get enough estimate precision. “This will take somewhere between 1-3 months” or “this will take between 1-3 weeks” is good enough. Understanding the shape of the bet and focusing on what really matters is so much more important than precise estimates. In fact, chasing precise estimates can cause premature convergence, which in turn negatively impacts outcomes.

Finally, many execs cringe at the word bet because it feels risky and uncertain. The same holds true for “experiment” (unless used as a code-word, like the term MVP, for “don’t ask too many questions, and ship the thing”). That’s understandable. We’d all love perfectly defined problems with perfect solutions. But it doesn’t match the nature of product work. Do what you can without getting in trouble!

Review Questions

Some questions to consider:

  • Is this bet an opportunity-focused bet, or a solution focused bet linked to a higher-level opportunity focused bet? Or something in between?
  • What are the risks involved? Is this a clear opportunity with high probability solutions? Or a vague opportunity -- potentially valuable, maybe not -- with unknown solutions?
  • Are there opportunities to learn quickly and pivot early? Or is this a situation where we need to commit to something fixed-scope and hope the benefits accrue later?
  • What is the time-frame we’re playing with?
  • Do we have the needed people/skills (available) to take advantage of this opportunity / take this bet?

Drivers, Constraints, and Floats (Degrees of Freedom)

In addition to classifying the bet and clarifying parent and child bets, it can be helpful to refine the bet down to its most essential components. A helpful tool to narrow bet scope and get super clear on your intended framing is to consider drivers, limiting constraints, floats, and enabling constraints.

  1. Drivers are what the bet is optimizing for (to increase ____, to decrease ____). Example: Generate quality Leads from our Top 100 Target Account List. Too many drivers, and you’ll end up with mediocre results (or straight up failure). 
  2. Limiting constraints are constraints we must navigate. Example: Have to work within current website information architecture and design. Too many, and you’ll also end up failing or with mediocre results.
  3. Floats are areas of flexibility. The more areas of flexibility you have -- to a point -- the better. Example: Don’t feel constrained by the current messaging framework. Too much flexibility can be paralyzing, so we can deploy enabling constraints.
  4. Enabling constraints are temporary constraints we place on our work like limiting scope and delaying decisions, for the purpose of focus and progress. Example: Let’s get something into production in 2 weeks. Let's delay deciding on the exact flow post sign up.

If you have more than two drivers and/or more than two constraints it is a good signal that you need to narrow your focus. Research shows that >2 drivers and >2 limiting constraints make it nearly impossible to successfully execute on a Bet (see Johanna Rothman’s great book Manage It for more details). This makes sense. Trying to achieve too many things, with too many constraints, tends to lead us astray.

Why is this helpful? Often people get so carried away with pitching that they don’t distill the bet down to the crux issue. Too many drivers. Too many constraints. Not enough floats. And no enabling constraints to help make progress. This approach helps you sanity check the bet and get super crisp in terms of the forces shaping the outcome.
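
Here is a minimal sketch in Python of this framing, with the "more than two" sanity check built in (the example entries come from this section):

    from dataclasses import dataclass

    @dataclass
    class BetFrame:
        drivers: list                 # what the bet optimizes for
        limiting_constraints: list    # constraints we must navigate
        floats: list                  # areas of flexibility
        enabling_constraints: list    # temporary, for focus and progress

        def sanity_check(self):
            warnings = []
            if len(self.drivers) > 2:
                warnings.append("More than 2 drivers: narrow the focus.")
            if len(self.limiting_constraints) > 2:
                warnings.append("More than 2 limiting constraints: descope or renegotiate.")
            if not self.floats:
                warnings.append("No floats: where is the team's flexibility?")
            return warnings

    frame = BetFrame(
        drivers=["Generate quality Leads from our Top 100 Target Account List"],
        limiting_constraints=["Work within current website IA and design"],
        floats=["Not bound by the current messaging framework"],
        enabling_constraints=["Something in production in 2 weeks",
                              "Delay deciding the exact post-signup flow"],
    )
    print(frame.sanity_check())    # [] -- this framing passes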

Mandate Levels

Here’s one more model/framework for thinking about bet “shapes”. Have you noticed how contextual the words “problem” and “solution” are in the context of product? One person’s problem is another person’s solution. And every problem is a nested solution to a higher level problem (as the models above describe). In short, oversimplification is fragile.

To address this I created another model called “Mandate Levels”. It uses a spectrum of prescription to describe nine “levels” of bets.

  A. Build exactly this [to a predetermined specification]
  B. Build something that does [specific behavior, input-output, interaction]
  C. Build something that lets a segment of customers complete [some task, activity, goal]
  D. Solve this [more open-ended customer problem]
  E. Explore the challenges of, and improve the experience for, [segment of users/customers]
  F. Increase/decrease [metric] known to influence a specific business outcome
  G. Explore various potential leverage points and run experiments to influence [specific business outcome]
  H. Directly generate [short-term business outcome]
  I. Generate [long-term business outcome]

Importantly (you’re probably seeing the pattern here), these levels are related. A D-level bet may contain countless smaller A-level bets.

There is no right/wrong implied in this list. A-level work is not somehow inferior, and I-level work is not somehow noble. But the level will impact how your bet is perceived and evaluated, the team that will be assembled, and the data you will need to make a persuasive case. A one-pager with an #A focus will look very, very different from a one-pager with an #I focus. Notice the trade-offs:

  • #A (Build exactly this) may be a perfectly good bet if there’s a lot of data on your proposed solution. But what if your solution has never been tested? What if a cross-functional team could have come up with a handful of better solutions? What if there’s a risk that your solution is not viable? What if the opportunity is tiny? And how long will it take for us to figure out if the bet paid off?
  • #D (Solve this customer’s problem) feels riskier — we may not know how to solve the problem yet — but if we do solve the problem, we will likely be more confident about the bet’s outcome. Assuming we have data to connect solving the customer’s problem to a business outcome, this might be a better bet.
  • #F (Optimize this metric) will require tight(er) feedback loops. Can the team isolate a leading indicator? Can they identify a beta group that is willing to try “new stuff”? Can they rule out other factors that might influence the metric? The risk: this is hard. The upside: more certainty of a positive outcome.
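
Because the levels are ordered, they are easy to model. Here is a minimal sketch in Python (the enum member names are my shorthand, not official labels) that captures the "a D-level bet contains A-level bets" nesting rule:

    from enum import IntEnum

    class Mandate(IntEnum):        # A = most prescriptive, I = most abstract
        A_BUILD_EXACTLY_THIS = 1
        B_SPECIFIC_BEHAVIOR = 2
        C_COMPLETE_TASK = 3
        D_SOLVE_PROBLEM = 4
        E_IMPROVE_EXPERIENCE = 5
        F_MOVE_METRIC = 6
        G_EXPLORE_LEVERAGE = 7
        H_SHORT_TERM_OUTCOME = 8
        I_LONG_TERM_OUTCOME = 9

    def nests_cleanly(parent, child):
        """Child bets should be at least as prescriptive as their parent."""
        return child <= parent

    print(nests_cleanly(Mandate.D_SOLVE_PROBLEM, Mandate.A_BUILD_EXACTLY_THIS))  # True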

Bet Shapes

Knowing the Players

Finally, consider who should be involved. Who will this bet touch? Spend a lot of time on this. A helpful tip here is to take some sort of org chart, and go through each department one at a time and ask:

  • Will this partner/person be impacted?
  • Will this partner/person be involved?
  • What would be the best way to collaborate with this department/person?
  • What type of buy-in do I need from them? Will it take up their time?
  • Will it impact their roadmap?
  • What risks must we mitigate to work effectively together?

A big anti-pattern is to overly optimize for things you can drive through to completion yourself, or to optimize for bets that are low-profile and less likely to get people’s attention (under the assumption that there will be less hassle). A worthwhile bet will involve other people. Embrace that, and make your pitch.

In an ideal world, you should be “shaping” your bet with other people from the start. Very few bets emerge perfectly formed from one person’s head. Yes, there are times for individual reflection and deep work. But efforts to answer every single question yourself, in a vacuum, are destined to backfire.

Summary

Bets are containers for investment with an expected outcome (and risks). It doesn’t matter if the work is new, or ongoing. If you’re investing energy/time/money, it is still a bet.

They come in all shapes and sizes. In many ways, the idea is intentionally vague and abstract, and this is by design: it inspires people to frame the best “bet” possible and not form-fit their ideas to a pre-defined concept of a project, campaign, feature, problem, etc. You know the concept is working when it inspires helpful conversations about scope, risk, IMPACT, and feedback loops.

Don’t worry if this seems complex. After a while you begin to get a spidey sense about whether you need to trim down / sculpt down your bet, or whether it might pay off to go up a level or two. Moving up and down the “tree” of bets—from high level opportunities all the way down to specific ideas and concepts—becomes second nature. 

More than anything, focus on coherence. Can you trace a line from a specific bet up to the high-level strategic bets of the company? Do your choice of bets and framing of bets form a coherent narrative? Can you have productive conversations based on the framing of your bets?

Coherence is not manufactured certainty. Nor is it simplification. Product involves making decisions under conditions of uncertainty. In fact, uncertainty is a signal of opportunity, so your goal is capturing the mess gracefully and succinctly in ways that help your team members make great decisions.

Not having all the answers is fine—framing questions and providing context is the goal.

Success Metrics (Bet Metrics)

The idea of “success metrics” can be a little misleading. No one wants to be unsuccessful! I’ve observed this in Amplitude customer workshops. The minute people believe that the metrics they choose will be used to measure them (teams, individuals), they shut down. But when prompted to brainstorm metrics to help the team learn…you’ll have 200 sticky notes on the wall.

Success is playing the game well, even when things don’t work out. Eventually, the “Score takes care of itself.” (Bill Walsh). Therefore, “success metrics” should help us play the game well.

A big danger is that people get paralyzed about how to measure things because they worry that their personal “success” is on the line. This is a problem for a couple of reasons. First, it defeats the purpose of measuring things! We measure things to increase our confidence that we are on the right track (making progress), to inform future decisions, to reduce uncertainty about our assumptions, and to help us make more resilient and powerful models. Second, conflating “success” with personal success causes people to resist attempting to measure something unless they can find a “perfect metric”.

We hear frequently about the idea of “gaming metrics”, but much less about the problem of creating work cultures that attempt to “game people through metrics”. "Gaming" puts the weakness back on employees when really the issue is perverse incentive structures. (For more information on metrics and perverse incentive structures, see C. Thi Nguyen and Paul Smaldino on the Santa Fe Institute’s Complexity Podcast, February 8, 2023, available at: https://complexity.simplecast.com/episodes/101-j7hP_Sf5).

We don’t want either of those things to happen!

Goals, Decisions, Assumptions, Performance, and Models

A better way to think about success metrics is to reframe them as “signals”. We use measurement to achieve our goals, make better decisions, reduce uncertainty around assumptions, understand performance and progress, and make and refine useful models. Your goal isn’t 100% certainty in most cases. Rather, it’s decreasing uncertainty where it really counts.

When thinking about KPIs and success metrics, people jump naturally to “performance”. This makes sense—performance is very salient, and everyone wants to know “how things did”. But it is only one of five connected frames through which to view measurement.

Goals

We are trying to achieve _____________ by _____________.

Progress towards _____________ would look like _____________.

If we reach _____________, we will consider _____________ a success.

Decisions

Should we _____________ or _____________?

Should we double-down on _____________?

How long should we try to _____________ before attempting a different approach?

When and how should we _____________?

Should we roll _____________ out to everyone?

We seem paralyzed when it comes to deciding whether to _____________ or _____________.

Assumptions

Right now we are operating under the assumption that _____________.

If our assumption about _____________ proves to be wrong, that will put us at risk.

A core hypothesis underpinning our product strategy is that customers who _____________ will _____________ more often than customers who _____________.

A key assumption to test is whether _____________.

Ideally, we could increase certainty around our assumption that _____________.

The hypothesis that _____________.

Performance

To understand the health of our product, we need to keep tabs on _____________.

I am so curious about how _____________ is performing. Is it working?

Is _____________ having a negative impact on _____________?

Before rolling _____________ out to everyone, we will need to see signals that _____________.

We need to figure out how to gauge our progress towards _____________.

Did _____________ end up _____________?

Models

A model underpinning our business focuses on the relationship between _____________ and _____________.

A key "virtuous loop" (or flywheel) exists between the following variables _____________.

We routinely need to revisit our _____________ models, as they help us course correct.

I often find myself creating spreadsheets to understand how changes in _____________ might impact _____________.

The difference between picking “perfect” success metrics, and picking helpful success metrics is extremely important. If you have a big, valuable opportunity with lots of uncertainty, it can be extremely valuable to decrease uncertainty marginally while the bet is in play. Even just by a bit. This is WAY more important than 100% confidence in a low value bet after the fact. The goal is not the perfect metric. It is decision support and learning.

As with many things in this short book, you can take the model a little further. Here we “map” decisions, assumptions, performance, and models across different “levels” of bet. Assuming a simplistic model for bets spanning strategy and inputs, opportunities, and interventions, we can link our various decisions and assumptions. A high level strategic decision will have many related “child” decisions, assumptions, and areas of performance to understand.

Ask yourself the following questions:

  • What decisions do I need to make along the way?
  • What assumptions am I making?
  • Do I have some sort of model -- flywheel, causal relationship diagram, financial model, collection of related variables -- underpinning this effort?
  • What signals would I need to see to suggest that we’re on the right track? To make the decisions I need to make in a timely fashion?
  • What signals would convince me I am on the wrong track?
  • What is my highest risk assumption? What would I need to observe in order to increase my confidence that this was working? “What would have to be true?”
  • Are there opportunities to do A/B type tests? Are they even necessary?
  • If the actual business impacts are a long way off, can I create a plausible “tree” of more leading input metrics, and inputs that are themselves outputs of even more leading inputs?

Start With Powerful Ideas

Another important distinction is the difference between WHAT you are trying to measure, and HOW you are measuring that idea, belief, variable, or phenomenon. It is much more important to clarify what you are trying to measure, and then start with “minimally viable measurement” (MVM), than to only consider what you can measure.

For example, a B2B SaaS product may be trying to transform how small businesses operate. Some of those things happen in the product, and some of those things happen outside the product. The powerful idea is “transformation progress”. With that in mind we can consider how to measure transformation progress. What signals do we hypothesize we’ll observe if the customer is actually transforming how they operate? In this sense (but more abstractly), metrics are also bets.

[List some]

Finish thought…

Powerful ideas imperfectly measured are far more valuable than perfect measures for less powerful ideas.

Persistent Models and Bets

One of the best techniques is to create a “tree” of ideas (or a flywheel) and then associate measures with those ideas. Here’s an example activity from a real-world team. Yes, this is the North Star Framework, but you can apply this idea to almost any measurement task. Foundational in this approach is forming your hypothesis BEFORE attaching measures to the hypothesis.

The nice thing about the North Star Framework is that it PERSISTS. It isn’t a time-based goal setting framework. This means that you can tie your bets into North Star Inputs, and your interventions (solutions) into opportunities over a long period of time (as long as your strategy remains consistent).

Whenever possible consider making a framework that can span multiple bets. Yes, you may need to figure out leading indicators and inputs on a bet by bet basis, AND you should probably set time-based goals, but you will not need to reinvent the wheel each time you have a new bet.

High level, what we are doing here is avoiding the “cold start” problem whereby we reinvent the measurement wheel with each and every bet. Instead, we figure out things that will persist for longer periods of time (ideally years), which vastly simplifies our job when it comes to specific bets.

We basically need to ask: what variables/levers does this bet target?

Putting it All Together (An Example)

This is an example from Amplitude. Amplitude has a high-level North Star Metric of Weekly Learning Users. We have identified three primary levers to influence that North Star Metric (activating accounts, encouraging the broadcasting of learnings, and encouraging the consumption of learnings). Each of those inputs has sub-inputs. Activating accounts, for example, has many drivers like the ease of data ingestion, onboarding efficacy, and time to first insight.
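
Here is a minimal sketch in Python of that tree, using the inputs named above (the traversal helper is illustrative). The point is that bets attach to persistent nodes rather than inventing metrics from scratch:

    from dataclasses import dataclass, field

    @dataclass
    class Node:
        name: str
        children: list = field(default_factory=list)

    north_star = Node("Weekly Learning Users", [
        Node("Activating accounts", [
            Node("Ease of data ingestion"),
            Node("Onboarding efficacy"),
            Node("Time to first insight"),
        ]),
        Node("Broadcasting of learnings"),
        Node("Consumption of learnings"),
    ])

    def path_to(node, target, trail=()):
        """'Walk the tree' from the North Star down to a bet's target input."""
        trail = (*trail, node.name)
        if node.name == target:
            return trail
        for child in node.children:
            found = path_to(child, target, trail)
            if found:
                return found
        return None

    print(path_to(north_star, "Time to first insight"))
    # ('Weekly Learning Users', 'Activating accounts', 'Time to first insight')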

Which brings us to our “bet” and associated bet metrics. Say our bet is to present interesting charts immediately after getting into the product as a way to increase onboarding efficacy and decrease time to first insight. [include some sample bet metrics here]

The Metric Test

It is time to put your metrics to the test. Effective use of metrics is all about context, intent, action, and learning. Below we’ve listed ten statements related to using metrics in a healthy way. For each statement, we suggest you:

  • Discuss the prompt with your team
  • Seek diverse perspectives
  • Flag items that need attention

Metric Test

S1: The team understands the underlying rationale for tracking the metric.

Tip: Include metrics orientation in your employee onboarding plan. Amplitude customers frequently use our Notebooks feature to provide context around key metrics.

S2: We present the metric alongside related metrics that add necessary context. When presented in isolation, we add required footnotes and references.

Tip: Normalize displaying guardrail and related metrics in presentations.

S3: The hypotheses (and assumptions) connecting the metric to meaningful outcomes and impact are clearly articulated, available, and open to challenge/discussion.

Tip: Use tree diagrams (driver trees, North Star Framework, assumption trees, etc.) and causal relationship diagrams to communicate hypothesized causal relationships. Consider playing the “Random Jira Ticket” game. Can you randomly pick a Jira ticket and “walk the tree” up from that item to something that will matter in the long term?

S4: The metric calculation/definition is inspectable, checkable, and decomposable. Its various components, clauses, features, etc., can be separated. Someone with good domain knowledge can understand how it works.

Tip: Whenever possible, share the metric so that someone can “click in” to how it is calculated. For example, if the metric involves a filter like “shared with more than 7 users in the last 7 days”, it should be possible to adjust that clause and see how that number compares to the total number of users. Build trust by enabling people to recreate the metric.
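
As a sketch of what "decomposable" can mean in practice, here is a minimal Python version of such a metric (the data model is hypothetical), where every clause is an inspectable parameter:

    from datetime import datetime, timedelta, timezone

    def broadly_shared_users(share_events, min_recipients=7, window_days=7, now=None):
        """Users whose shares reached more than `min_recipients` people in the
        last `window_days` days. `share_events` holds (user, recipients, time)."""
        now = now or datetime.now(timezone.utc)
        cutoff = now - timedelta(days=window_days)
        totals = {}
        for user, recipients, timestamp in share_events:
            if timestamp >= cutoff:
                totals[user] = totals.get(user, 0) + recipients
        return sum(1 for total in totals.values() if total > min_recipients)

    # Loosen a clause and compare how the headline number moves, e.g.:
    # broadly_shared_users(events, min_recipients=3)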

S5: The metric is part of a regularly reviewed and discussed dashboard, scorecard, or report. It has survived healthy scrutiny. If the metric is more exploratory and untested (or an “I was curious whether….”), that context is clear from the outset.

Tip:  Scrutiny is a good thing. The more eyes you can get on a metric, the better. Invite criticism. Record questions as they come up. Make each “showing” of the metric (e.g., at all-hands or product review) successively better.

S6: The team has a working theory about what changes in the metric indicate.

Tip: Here’s a basic prompt to get you thinking: “An increase in this metric is a signal that _______ , and a decrease in this metric is a signal that _______.”

S7: Over time, the metric provides increasing value and confidence. We can point to specific decisions and actions resulting from using the metric (and those actions are reviewable). The company would invest in continuing to track and communicate it.

Tip: Indicate confidence levels when displaying metrics, and keep a decision/action log. Try to normalize not being 100% sure at first, and balance displaying high-confidence metrics with new candidate metrics that have lower confidence levels.

S8: The team establishes clear thresholds of action (e.g., “if it exceeds X, then we may consider Y”). The metric can go down. And if it goes down, it will likely inspire inspection/action.

Tip:  Conduct a scenario planning workshop to understand better how movements in the metric will dictate future behavior. Set monitors in your analytics tool to warn you when you have reached a threshold.
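
Here is a minimal sketch in Python of thresholds of action (the metric name and levels are illustrative). The point is that the response is agreed before the metric moves:

    def check_thresholds(value, thresholds):
        """thresholds is a list of (predicate, action) pairs; first match wins."""
        for predicate, action in thresholds:
            if predicate(value):
                return action
        return "No action: within expected range."

    activation_rate_rules = [
        (lambda v: v < 0.25, "Investigate this week; pull the team together."),
        (lambda v: v < 0.35, "Flag in the next product review."),
        (lambda v: v > 0.60, "Consider rolling out to everyone."),
    ]

    print(check_thresholds(0.31, activation_rate_rules))
    # Flag in the next product review.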

S9: The metric is comparative (over time, vs. similar metrics, etc.). Put more broadly, if tracking it for a protracted period, it is possible to make apples-to-apples comparisons between periods.

Tip:  Include period over period views in your dashboards to get more eyes on comparisons.

S10: The team uses the metric to communicate challenges AND wins. Not just wins.

Tip: Leaders set the tone here. Discuss situations that didn’t work out as you expected and how you used data to figure that out.

Summary

Here’s a handy table to help you figure out if your use of Success Metrics is helping or hurting:

Hurting → Helping

Stresses people out → Triggers better thinking and conversations
Analysis paralysis → Minimally viable measurement
Only make safe bets → Helps you make better decisions
Picking “safe” metrics → Picking helpful metrics
Starting with only what you currently measure → Start with the right idea. Then consider measurement
Only “succeed” → Learn…whether we “succeed”, or not

At the end of the day, you want metrics that will help you make great decisions, reduce risk, and improve the upside. What will help you place and manage your bets more effectively?

Recommendation: If you’re still stuck in terms of measuring something, consider reading How to Measure Anything: Finding the Value of Intangibles in Business, 3rd Edition.

One-Pagers, PRDs, and Stuff

Now that we’re thinking about bets and success metrics, it is worthwhile discussing how to capture all of this in the form of artifacts. Let’s get four big antipatterns out of the way:

  1. Rigid, formulaic, checkbox style PRD templates that completely negate the fact that bets take on very different shapes. Shoehorning everything into the same template, working style, or approach.
  2. Writing nothing down, and relying completely on implicit knowledge. Jumping straight to a bunch of context-free stories in Jira.

#1 forces premature convergence, certainty theater, unnecessary process consistency, and basically takes the joy and creativity out of product work. #2 is exclusionary. Context exists in people’s heads. No one can inspect the various assumptions and context behind a bet.

Mono-process kills companies. Chaos kills companies. Especially as companies scale, we often find a push for consistent processes. To quote Arlo Belshee: “scaling is fundamentally a question of what must remain the same, and what you will lose/risk/spend to keep it that way”. In my experience, of all areas to force consistency, trying to treat all bets the same way is one of the lowest leverage approaches. So you need just enough consistency to get the job done.

I’ll add two other antipatterns:

  3. Treating documents as final, instead of continuously updated and refined artifacts.
  4. Siloing knowledge and going rogue/solo in an effort to write the “perfect pitch”.

Which brings us to one-pagers, PRDs, six-pagers, canvases, etc. First off, if you thought about the models and activities in the Bet chapter, you are miles ahead when it comes to figuring out how to capture your thinking.

At the risk of serious oversimplification, all product artifacts serve one purpose:

To provide the right information, at the right time, to aid in decision-making and collaboration.

The goal then is to capture the “right” information at the right time, in ways that promote collaboration and good decision making. In most cases, you don’t need all of the information upfront. As mentioned above, trying to get all the answers upfront can hamper an effort. A 1-3 quarter, opportunity focused bet does not need a “requirements document”. If you have uncovered some solution options, it might help to include those. But you aren’t writing product requirements.

“What I've often seen is that Product Managers, and technical ones in particular, often already put tons of technical details and designs, and solutions into the bet, while not spending enough time on the actual problem (aka the discovery phase). I personally try to keep bets and their descriptions rather small and clean and then offload all the details into epics, stories, and in the actual solution finding to prevent biasing design and engineering with my ideas and to just answer enough around the why and what to get them started.” (Thomas Ziegelbecker)

Here’s an example activity to get you thinking about “right” information. In one-pager workshops, I never give a one-pager template. Instead we co-design the template. To that end, I will ask the team something like:

What must you EVENTUALLY know about this work to make good decisions at the right cadence?

I emphasize eventually to reinforce that you don’t need all the information upfront. Here is what a real world team came up with in around 10 minutes:

  • Dependencies on other groups
  • Deadline? (real or not)
  • Cost of being late (or not delivering feature)
  • Size of effort
  • Expected impact of item
  • Ability to measure impact (and time)
  • Data on existing usage
  • Who is the advocate? Who are stakeholders?
  • What is solution certainty? (range)
  • What is problem uncertainty?
  • Impacting current customers?
  • Type of design work (core improvements, new feature development, bandaids/fixes)
  • Skills/functions required
  • Who needs to be around?
  • Drivers
  • Customer visibility
  • Collaboration model (e.g. contractor teams)
  • Urgency (P0, P1, P2)
  • Assumptions / beliefs
  • Learning goal? Permanence?
  • Risks willing to incur
  • Needed / caused by / blocked by tech debt
  • Background / prior story / veracity of data
  • Headwinds / blockers
  • Connected to core strategy?
  • Reactive vs proactive
  • Tech used
  • Org visibility
  • PIA factor / fun factor
  • Do not disturb - focus required
  • Solve tech debt / add new tech debt?

In 10 minutes the team collaboratively created a bet-artifact checklist that could be used in a variety of contexts. Combine these items with the activities in the prior section (Risk List, Drivers-Constraints-Floats, Confidence & Certainty, Assumption Trees, Mandate Levels, and 1-3s) and you have everything you need to write a good one-pager, six-pager, or similar.

[write a bit more here]

Need a bit more help? Here is a post with 40 questions you can ask yourself about Bets (I call them “roadmap items” in the post, but it’s the same thing), and it includes one of my favorite questions. Here is another post about one-pagers that includes these questions, along with a lot of other information about framing bets.

Roadmapping

Roadmapping is a big, big topic. Here are the basics:

  • A roadmap is primarily a communication tool.
  • A roadmap communicates your thinking about prioritized areas of focus and sequencing.
  • A great roadmap is a catalyst for conversation.

The worst possible roadmap is an artifact used at the start of the year (or quarter), never to be revisited until the next planning exercise. The best roadmaps are updated continuously based on new information. You should find yourself (and your key partners) in your roadmap – referencing back, making tweaks, sharing links, creating drill-ins (launch guides, one pagers, specs, etc.) as resolution increases for near-term items.

Bad roadmaps cause sunk cost bias, premature convergence (solutioning), committing to too much work, high work in progress, etc. Good roadmaps help you solicit feedback, test your thinking, surface potential dependencies, and help you achieve the best outcomes with the least amount of risk and work.

Answering Questions

A great roadmap answers the following questions:

  • Where do you think we should focus?
  • How are you thinking about the shape and sequencing of your bets?
  • How is the work connected and layered?
  • What is your mental model for “value” (why, who, how, what?)
  • How are big things broken down into smaller things?
  • What is more and less clear in your head right now?
  • What is more committed, what is less committed?
  • How about routine upgrades, “reactive work”, and maintenance?
  • Question marks? (These are ok)

Resolution & Multi-Resolution

One of the realities of roadmapping is that there is never a “right” level for the roadmap. When you go super high level, someone will ask for a more detailed roadmap. When you go detailed, someone will ask you for a more high level roadmap. One person’s “too much information” is another person’s “this is awesome and is super helpful!”

Accept this now: you will need different versions of your roadmap depending on the audience, and the type of feedback you hope to receive.

Take this format for a one year roadmap.

You will notice a couple of things. First, it is delivered in a “board” format, not a calendar format. This is appropriate for some situations, but not others (where you are highly dependent on other teams, and they have fixed-date type work). Second, note how it shows two levels of work! Work starts as briefly stated options. When an item is in queue, we elaborate with a one-pager. And when we pull it in progress, we detail the work at the second level. The options become the one-pager, so that’s one level. The one-pager spawns “tickets” or tasks, so that’s the second level.

This board / roadmap uses a model (like a North Star Framework). Each of the Level 2 bets aligns with an Input. We see a similar “nesting” as above, with each 1-3 month Level 2 bet spawning multiple Level 3 bets.

This idea of nesting on the roadmap can be accomplished in many ways. Here we actually have three layers of Bets.

You have options when it comes to visualizing multiple resolutions of work at the same time. Focus on representing things as they are, with the appropriate level of detail for the audience.

Honest and Strategic

This graphic by Pavel A. Samsonov pretty much sums up the differences between a roadmap that manufactures certainty, and a roadmap that is more honest and strategic.

A roadmap that manufactures certainty might represent your “best guess”, but it doesn’t really hint at the conversations that need to happen regularly. A “strategic roadmap” might not be suitable for everyday use, but having an artifact -- a mind map, a tree, or similar -- to trigger thoughtful discussions is very useful.

Strive for usefulness in context while attempting to layer in honesty and strategy.

Focus on the Work (Not the Workers)

A common anti-pattern is to split up work by teams, even when that work is the same bet.

So you might see something like:

A better approach is to “tag” the item with the related teams. To quote the pithy phrase: “model the work, not the workers.” Why? This helps us visualize what work is connected.
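
Here is a minimal sketch in Python of the tagging approach (item and team names are illustrative). Each team's roadmap becomes a filtered view of one shared item, not a copy:

    from dataclasses import dataclass, field

    @dataclass
    class RoadmapItem:
        bet: str
        teams: set = field(default_factory=set)    # tags, not copies

    items = [RoadmapItem("Streamline self-serve onboarding",
                         teams={"Growth", "Platform", "Design"})]

    def view_for(team, items):
        """A team's roadmap is a filter over the shared work."""
        return [item.bet for item in items if team in item.teams]

    print(view_for("Platform", items))    # ['Streamline self-serve onboarding']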

Suitably Detailed

This is a very common discussion:

Executive: “What is your one-year roadmap?”

Product Manager (to themselves): “Oh geez, I don’t want to show all of these ideas right now, they will think it is a commitment. I do have some idea of how we might proceed, but I don’t want to show that!”

My take is that it is a reasonable ask for product managers to paint a picture of the next year, question marks and all. They shouldn’t manufacture certainty, but even a glimpse into their thinking can be a positive thing.

One way to achieve this is to make the roadmap “suitably detailed”, with near-term things more elaborated upon, and further off things less detailed. Note here how we provide more information for the things we are working on Now and Next, compared to Near Future and Later.

Another trick here is to literally give yourself LESS SPACE for things in the future. That will keep you from elaborating too much on future work.

Other Artifacts

Finally, don’t forget that the roadmap is just the START of the discussion. You will need one-pagers, dashboards, notebooks, learning backlogs, decision logs, a research library, a place for comms, and retrospective notes. People often forget that the roadmap is but one of many artifacts.

Bet Portfolios

I encourage teams to think about their roadmap as a portfolio of bets. Different initiatives have different risk/reward profiles. When the strategy and goals change, the mix changes. Like an investment portfolio, you can hedge and over-hedge, take on too little or too much risk, and over-index on either short-term or long-term outcomes.

Amazingly, some leaders who heavily resist the idea of more outcome/opportunity focused bets seem to come around when work is described this way. Everyone wants a balanced portfolio, right? We can all relate to the idea of wanting to take some risks where it might pay off.

People often describe startups as risky. But depending on the startup, that risk profile might vary. Some startups aren't amenable to experimentation yet tackle huge opportunities. Other startups tackle more modest opportunities, with more leeway for experimentation.

Early on you don't have much of a bet portfolio. It is what it is. But as you grow, you end up diversifying. The big question is whether you "balance" your portfolio. Or let the competition do it for you.
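
One way to check the balance is simply to count your bets by risk and horizon. Here is a minimal sketch in Python (the bets and categories are placeholders, not a recommended taxonomy):

    from collections import Counter

    portfolio = [
        ("Fix churn-causing bug",    "low risk",  "1-3 weeks"),
        ("New onboarding flow",      "low risk",  "1-3 months"),
        ("Platform re-architecture", "high risk", "1-3 quarters"),
        ("Enter a new market",       "high risk", "1-3 years"),
    ]

    mix = Counter((risk, horizon) for _, risk, horizon in portfolio)
    for bucket, count in mix.items():
        print(bucket, count)
    # Is the resulting mix deliberate, or did it just happen?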

Here's a helpful way to think about your bet portfolio. Imagine these six categories of bets:

Looking at bets this way helps uncover some common anti-patterns. Some companies….

  1. Lack any sense that the things they are doing have different characteristics. They try to use a mono-process for everything.
  2. Have the wrong "mix" for their stage. They try to play it safe, when they should be taking more risks. Or, they try to take more risks when they should be nurturing D-F.
  3. *Perceive* their challenges incorrectly — they believe X is a solved problem, when it requires more learning and agility.
  4. Suffer from perpetual premature convergence. With A-C work, you will likely have no solution when you start. You will learn and experiment. You can't "spec" it out.
  5. Try to jump to doing many things...when they haven't figured out how to do one thing well. Or, they succeed, but don't know WHY they are succeeding.
  6. Believe that the only way to do A work is to silo it in an innovation lab. Some companies believe E and F work is transactional, and doesn't require creativity. This creates fragility.

Write more.

Hill Climbing

https://cutlefish.substack.com/p/tbm-4852-hill-climbing

Imagine different product opportunities / bets / puzzles at your company. Some are:

A: Very difficult. Lots of false starts. Exploratory

B: Can't-miss! Low-hanging fruit. Things are on the up-and-up

C: A bit tougher. But you can still make gains. Playground for disciplined optimizers

D: Wins are very hard to come by. Just preventing a slide is an accomplishment

E: A controlled "fall" of sorts as the cycle (hopefully) repeats itself with new As.

Value & Effort Shapes

One thing that always bugs me about (many) prioritization conversations is that teams often leave out the expected value curve of the work. “Level of effort” (or complexity, even) has never cut it for me because it assumes one fixed effort point. I am more concerned about the effort/time to impact curve because that (to me) more accurately represents product development.

Instead of prioritizing features, you prioritize a focus area over time, and features are a subset of that decision. Therefore, before jumping in, discussing the hypothesized relationships between effort/time and value for the high-level mission can be very helpful (no matter what framework you are using).

A. Slow to start, rapid rise, then tapers off (Sigmoid)

Example: Developing a new product feature that initially has low adoption but then becomes a key selling point, leading to a surge in user acquisition and eventually reaching market saturation.

B. Quick win that gradually tapers off with more work (Exponential Decay)

Example: Implementing UX improvements that significantly increase user engagement (initially), but subsequent improvements have diminishing returns as the interface becomes more refined.

C. Fairly linear relationship between work and impact (Linear)

Example: Regular refactoring—fixing debt as you go—can feel like this. There's no "big payoff," but regular practice has a fairly linear relationship with outcomes.

D. Quick win. No need to continue (Step Function)

Example: Fixing a critical bug that was causing major churn. Once the team fixes the bug, churn decreases quickly, and there's no need to continue working on that specific issue.

E. Lots of work. Potentially lots of value later (J-curve)

Example: Expanding into a new geographical market. It takes time and investment (localization, local support/sales, etc.) A potentially big payoff, but it will take time.

F. Even more extreme than "E". Flat value until a future point (Delayed Step Function)

Example: Investing in a major architectural overhaul. The benefits will not be visible until the transition is complete. Still, once done, it could significantly improve scalability and development speed in the future (in theory, these are very risky).
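
To see the shapes side by side, here is a minimal sketch in Python (the curve parameters are arbitrary; only the shapes matter), with cumulative effort t running from 0 to 1:

    import math

    def sigmoid(t):            # A: slow start, rapid rise, tapers off
        return 1 / (1 + math.exp(-12 * (t - 0.5)))

    def quick_win(t):          # B: fast gains, diminishing returns
        return 1 - math.exp(-5 * t)

    def linear(t):             # C: steady work, steady impact
        return t

    def step(t):               # D: quick win, nothing more to gain
        return 1.0 if t > 0.1 else 0.0

    def j_curve(t):            # E: lots of work, value arrives late
        return t ** 4

    def delayed_step(t):       # F: flat until a future point
        return 1.0 if t > 0.9 else 0.0

    curves = [("A", sigmoid), ("B", quick_win), ("C", linear),
              ("D", step), ("E", j_curve), ("F", delayed_step)]
    for name, f in curves:
        print(name, [round(f(t / 4), 2) for t in range(5)])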

So What?

Why is it important to consider the effort vs. value curve?

It helps clarify statements like “quick wins”. D and B offer “quick wins”, but B doesn’t level off nearly as quickly.

A, E, and F can feel remarkably similar when starting! But the team must prepare for a very different experience and journey.

Sensing inflection points is critical—solid leading indicators can make all the difference.

Viewed together, the curves show how much is at risk with F efforts. Look at all the area above that line!

It is all very fractal. Consider that the top portion of A could easily resemble B.

When to expect results! As simple as it sounds, setting expectations about when the team might observe progress (instead of when it will be “done”) is extremely helpful.

Some of these curves offer easy “escape routes”, while others demand that teams address areas of risk—and their assumptions about the curve in general—as early as possible.

In general, when I see teams stressing out about prioritizing individual features (or about prioritization frameworks), I always recommend that they step back and paint a picture of the high-level curve they are dealing with. That picture is often far more telling than effort/impact estimates for specific tactical bets.

Start Together, Work Together, Finish Together

You have a roadmap of bets. You understand the relationship between bets—the “levels”. You are focusing your bets on actionable inputs that connect to the lagging metrics that really matter. Now, it is time to “start” the effort.

Except you’ve already started!

Ideally, “starting together” happened at bet inception[bm]. Your team was involved and helped craft the bet. That is when the work actually started. Teams fool themselves into thinking that “the work” is only what is currently “in progress” in their ticketing tool. No. The work starts the minute you start crafting and shaping these bets. There’s a reason why product managers and designers are often stretched too thin: they take it upon themselves to “feed the beast” (the engineers) with fully formed and “specced” work, leaving very little time to actually work together.

The goal should be to plan and converge at the last responsible moment. Ideally…together.

While it is largely beyond the scope of this short book, the key to putting bets in motion is to tailor your working approach to the bet. Different bets sit at different ends of several spectrums (one way to capture these choices is sketched after the list). Some bets:

Require a lot of upfront, generative research

Have a whole foundation of research to draw from

Are good candidates for more individual work

Are good candidates for more collaborative work

Will require many experiments to get right

Will probably be OK on the first attempt

Will benefit from operating “on an island”

Will benefit from a lot of integration, connection, and eyes on the problem

Are more optimization focused

Are looking for a “step change”

Can have parallel motions (e.g. discovering and building)

Will benefit from synchronous/serial motions (the whole team discovers)

Require daily syncs

Require biweekly syncs

Require attention to details

Require glossing over details

Need ruthless prioritization

Throwing a dart would suffice
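
One lightweight way to make this tailoring explicit is to write the working agreement down, per bet. Here is a hypothetical sketch (the field names and values are my own assumptions, not a standard schema):

    from dataclasses import dataclass

    @dataclass
    class WorkingAgreement:
        # Each field records where this bet sits on one of the spectrums above.
        bet: str
        research: str        # "heavy generative upfront" ... "existing foundation"
        collaboration: str   # "mostly individual" ... "highly collaborative"
        iteration: str       # "many experiments" ... "likely right on the first attempt"
        integration: str     # "on an island" ... "lots of eyes on the problem"
        motion: str          # "parallel discover/build" ... "serial, whole team"
        sync_cadence: str    # "daily" ... "biweekly"
        prioritization: str  # "ruthless" ... "throwing a dart would suffice"

    checkout = WorkingAgreement(
        bet="Reduce checkout drop-off",
        research="existing foundation",
        collaboration="highly collaborative",
        iteration="many experiments",
        integration="lots of eyes on the problem",
        motion="parallel discover/build",
        sync_cadence="daily",
        prioritization="ruthless",
    )
    print(checkout)

The value is not the object itself; it is the conversation that filling it in (and revisiting it mid-bet) forces, which is exactly the "set working agreements and change them" point below.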

The tendency is for companies to treat all bets the same. As I hope I’ve made clear in this book, that is probably not a good idea—unless your work is factory-like and repeatable.

Teams should be empowered to set working agreements and change them to adapt to the situation.

Here’s an example of a real-world team, and how its engineers and designers collaborate on a bet:

[Image: how the team’s engineers and designers split up, parallelize, and synchronize their work as the bet progresses]

Notice how they mix and match, parallelizing and synchronizing their efforts. Sometimes they divide and conquer. Sometimes they work together. Even as the effort progresses, we find the team changing up their approach. The neat and tidy process diagrams in books fall apart quickly under real-world demands (even if they have loops).

[Idea: table here with some shapes from 1st section, and working approaches]

Conclusion

Reflecting on the sections above, there’s an important thing to remember:[bn]

Perceptions about uncertainty, problem vs. solution, risk, etc. vary by individual, group, and company.

For example, some product managers would welcome the opportunity to tackle a big mission like “earn the company an extra $10,000,000 per year”. Other product managers would be terrified by that goal. Some companies have cultures that inherently appreciate the idea of “placing bets, most of which will fail” (because, in many cases, they have seen the benefits), while other companies are extremely uncomfortable with the idea of launching into an initiative without a “certain solution”. Context matters.

Factors contributing to these differences include:

  • The collective experience of the individual/company
  • Direct experience developing products
  • Skill levels, and perceived skill levels
  • The quality (and speed) of feedback
  • The perceived cost/downside of being wrong
  • The perceived amount of uncertainty
  • Confidence in the ability to learn/reduce uncertainty
  • And so much more…

In workshops, I remind passionate product makers that in many cases their stakeholders are trying to do them a favor by figuring out an exact solution. Perhaps they (the stakeholders) aren’t aware of the risks. Maybe they aren’t aware of how often we are wrong. Or they aren’t aware of what you can learn with advanced research practices. They may not have observed a team grapple with a complex problem and come away with a measurable win.

The passionate product makers perceive pushback as a lack of confidence or trust, when in fact it is a lack of exposure to and familiarity with the domain.

So as you interpret the advice in the prior sections, please keep this in mind: what makes sense to you may not make sense to other people. The best fix for this is patience, talking in specifics (not generalities like “problem vs. solution”), and showing, not telling.

Hope this was reasonably interesting. Many of these concepts are “common sense”, but it can be hard to put it all together in practice (especially when we’re busy).

If there is one rule to guide them all, always ask:

Am I having productive conversations?

Am I manufacturing certainty or embracing reality?

Am I communicating how I think we should play the game?

Good luck!

Enjoy This? Leave a tip here: https://johncutlefish.gumroad.com/l/ktxkc

[a]One thing that I often find difficult for teams is the transition from innovation-product to mature-product work profiles, and how to maintain it over the long term as the innovate/operate ratio of features coming down the pipeline changes. Not sure it fits the context of this book, but it's always on my mind.

[b]Rewrite with the pain listed first, then the cause of the pain. Incoherence: Our language fails to let us capture or convey product work's nuance and complexity.

[c]I mean, also, these are often used to mean different things by different people (across companies, inside companies, across tools).

[d]Perhaps "harm" or "damage" or "risk" instead of "impact"?

[e]Also, picking all-or-nothing, zero-sum, or other ways of working based on factors (political pressure, crappy leadership, etc.) external to the value we're trying to deliver, to which users we're delivering it, or even the domain/context we're working in (i.e., Cynefin).

[f]Maybe I’m just not familiar enough with the rest of your writing but I struggle to understand this paragraph. Things like “adapt their approach based on the profile of their work” stay quite abstract. “Approach” to what? What do you exactly mean by “the profile of their work?” What’s “mono process”?

[g]Does it make sense to do a treatment on outcome/value for those in the "cheap seats"? This is the kind of misalignment that causes horrible problems.

[h]Define with example

[i]I don’t think that this is a particular problem of a “restrictive, template-based approach“ to roadmapping. I’d even say that properly designed templates can prompt critical questions, encourage a focus on the intended outcomes, and incentivize exploration.

[l]From my experience, this is rather a team culture thing and not necessarily connected to a particular approach to roadmapping.

[m]I agree, a template-based approach to documenting a roadmap doesn't restrict it from being a living document. I've found this is generally caused by team/company culture (i.e. roadmap work and timelines=commitment/must deliver)

[n]Define with example

[o]And your most creative, omnipotential or neurodivergent team members to shut down, burn out, leave, or meltdown all over your critical database infrastructure. Like, it makes no sense to innovate, but only after you stop any and all innovation on the process at the same time.

[p]Great point about inclusion and people. Needs its own point: that normal human problems get in the way of product teams. Perhaps a Wardley Doctrine version that says "before you even try to tackle the hard stuff, these basics should be in place". https://www.wardleymaps.com/doctrine

[q]ease? confidence?

[r]And measurements, as a result of this, are often related to internal company metrics, not any measure of how they improve outcomes for users?

[s]Or worse, subscribe to the Right Metrics(tm) and never actually inspect them against their actual value prop, product, industry, context, or the actual system they're in, which produces whatever they are making.

[t]I love this phrasing.

[u]Define with example

[v]I think these last two are addressing the "commitment" issue. That is, what the teams are committing to. Generally, people are committing to delivering a set of things, instead of committing to delivering X value in X time. In this setting, whenever something is written down, it's taken as a promise, not simply a placeholder.

Value of throughput.

[w]Start with a short story of a team that placed a bet. Shiraz made a bet A with colleague Myron that had elements B and C so that a plan D started and they had a happy moment in their product day.

Then explain the bet.

[x]I always thought of bets as testing a hypothesis or theory about how the world works. If that bet is successful the confirmed knowledge can then be leveraged to some result or outcome.

[y]I reckon it's worth discussing other definitions of bet in common use in the product world (e.g. the lean value tree's definition of bet vs initiative) and why this definition is chosen. https://openpracticelibrary.com/practice/lean-value-tree/

[z]There is a hierarchy here. Opportunity Bets -> Solution Bets. Place opportunity above Solutions to make reading comprehension easier?

[aa]I even wonder if this is an actual distinction -- isn't a solution-focused bet also an opportunity-focused bet, just with more clarity? After all, "there's a good chance it will generate Important Outcome" is an opportunity, just as "figure out how to get in on this trend early" is a solution.

Maybe the framing here is more of a spectrum of "level of detail of proposed solution" or "proximity to solution" or even a 'magic square' with "solution clarity" on one axis and "opportunity clarity" on the other?

[ab]That maintains the hierarchy concept as well, e.g. higher-level bets often have more opportunity clarity than solution clarity, and many lower-level bets have far more solution clarity than opportunity clarity.

[ac]It seems to say the distinction lies in the environment.

Solution Bet: given current financial regulation and customers having problem XYZ, this solution can improve customers' lives.

Opportunity Bet: there's news of a change in financial regulation coming next quarter, and we may be able to capitalize on it (eg, gain more customers) if we deliver something that rides the wave.

[ad]The bins seem vague and arbitrary and not terribly useful. Do you assess your bet differently? Frame it, catalog it, track it differently? You have the wager, the stakes, the players, the risks and odds, and the mapping of the bet in time, geography, social graphs, and bet-graphs (dependencies up and downstream).

[ae]I think it would be great to diagram this for easy readability

[af]Including a word about problem-focused bets might be helpful if only to complete the frame & provide language for diagnosing which kind of bet is being discussed.

[ag]It would be great to elaborate on what you mean by "problem" instead of "opportunity", using a similar analogy: either as an internal problem or an external problem.

[ah]Does this leave room for the possibility that a chosen problem might not be the right problem to solve within that opportunity? Essentially, you would like the team to sort out which problem to tackle under an opportunity.

[ai]I also don't know if the distinction here is real -- is a problem-focused bet just an opportunity-focused bet where the opportunity value is low?

[aj]IMO it's a great practice to clearly articulate the underlying problem(s) (partly) causing the opportunity to be an opportunity.

[ak]Interviewing users/customers will highlight problems they have, and it's the product team's responsibility to figure out whether that's a problem worth solving. That's an interpretation issue.

[al]Each opportunity has underlying problems, it is the value to the customer and the business that makes it worth solving.

[am]At this level, we should not be betting the map exists, but rather that it reflects the land accurately enough and will enable us to move through it confidently.

This map shouldn't be like the first ones drawn by the first Europeans to land on the Americas – that level of detail would belong to "opportunity-focused bet" maps.  Rather, it should be like finding an old print map in the glove compartment of the car – or even better, like using google maps for an area you're not familiar with, meaning you may end up doing a longer path than actually required, but you're not going to need to build any roads or bridges.

[an]I'm not convinced by this. Although consistent with the assertion of bets as logical containers of investment, the word has an implied connotation of risk. What the paragraph describes has no/little risk involved, and here the terminology falls flat.

[ao]I think the initial bet needs to stay net positive - all maintenance and fixes included. If not, it's time to sunset the solution.

[ap]I like the idea that these are "ongoing" bets ... you're still paying the price (and repeating the reward) of the initial bet, and part of the bet involved the recurring work.

[aq]I like the stock-picks analogy where you're holding the investment. Also like that you can analyse them in a similar manner: upside, risk, cost (initial, operational, exit), horizon (expected value-realisation timeline), liquidity (ease/cost of exit), compounding value, diversification, macro trends, etc.

[ar]There seems to be an implied correlation between how experimentation-friendly a bet is and its opportunity size.

[as]Switching to effort/cost estimates just for buying the next slice of information required to decrease uncertainty instead of trying to estimate the full intervention as in traditional project management (with all related problems) is really helpful. Also, the intervention effort most often becomes irrelevant compared to opportunity size and certainty.

[at]Hubbard would warrant further expansion. Not only for the confidence intervals (plus the value distribution!) and for estimates calibration but for the probabilistic approach in general. HtMA might be the most underindexed/under-read book in the whole product canon. By and large people do not understand measurements and even less so incremental ones nor statistics outside the older frequentist school (which is partly why there are nonsense A/B tests all over the place). The concept of buying information for decreasing uncertainty about a decision/bet is at the core of PM practices. But instead of saying it with numbers many serve RICE plates with T-shirt sized values pulled out of thin air and weave pseudo-rational prioritisation fairy tales.

[au]This is the core idea of this section and shouldn't be at the end of it.

[av]Common in this draft.

[aw]Can I suggest extending this to include the technology stack? This takes account of the increased cognitive load of the extension. NB: cognitive load applies to both the user and the developer

i.e. other aspects of the existing experience and/or technology stack.

[ax]Examples could help make this less ambiguous.

[ay]This hierarchical focus ties nicely into big-tech structures: org, group, team, ic missions etc.

[ba]Image is hard to read.

[bb]Can we get more details here around the composition of a bet? This section feels like it's missing context.

[bc]Feels like a good trade-off discussion is warranted here. You want to make a decent-sized bet, but depending on the org size, the org chart can get in the way of getting the initial learnings that warrant buy-in. I come from the vantage point of building AI/ML products, where more effort is needed to demonstrate viability.

I've seen some big bets made without understanding the machine learning (ML) model baselines. And vice versa, I've seen great-performing ML models that don't have a pathway to be folded into a product.

[bd]It feels like this shaping itself implies considerable investment, depending on the number of people involved.

[be]Summary is the first place you call this out - maybe also bring it up in a section above in more detail?

[bf]Maybe reference/link this in your argument at the top of the document

[bg]In my experience, tons of friction occurs because there are a lot of different ideas about what a roadmap is, and for many it's still seen as a delivery plan (often outside the product org, e.g. commercial teams that sell the future).

I personally like the approach here, but in many places I think it only works if you find a complementing solution that communicates the actual deliveries. That is, the bets where we are far enough along that we are "90%" certain what we'll deliver, with e.g. monthly precision.

So for folks where that scenario is a reality, I think it would be great to describe how you can approach that need and still make room for the Bet oriented roadmap as the main reference point for the product teams. 

IMO the role of sales enablement or product marketing can play a vital part in this setup.

[bh]Do you think this is true for companies/orgs of any size? How would you manage stakeholder alignment when multiple teams/projects have to agree on priorities due to limited resourcing? I'm trying to think up a good answer here myself, but that's been the biggest blocker imho to NOT doing the Big Unchangeable Artifact--the need to make tradeoffs across teams that have their own lower-level priorities.

[bi]Does this maybe include work not accounted for such as routine upgrades, operation and management of systems? I advocate for representing tech debt work and daily operations on roadmaps

[bk]An anti-pattern chapter? You may be doing it wrong when A so try B?

[bl]Are these A, B, C points references to the image? I'm not sure they are correct. For example, isn't B in the image C, and vice versa?

[bm]bet lifecycle

[bn]Perhaps an essay or longer discussion on adoption. More how-to...

60 Days To Start Betting.

7 Conversations To Spread Betting.

5 things that will kill betting and how to work through them.

How to be the only person betting.

Cutler's Better Literacy Quiz: You Know You're Ready To Bet When...