Predictable Delivery
Matthew Story <matt@directorof.me>
Founder DirectorOf.Me
Who are you?
About Me.
Started Programming | ‘95 |
Founded DayRaven Inc | ‘04 |
Graduated U of Chicago | ‘06 |
Engineer | ‘06 - ‘13 |
Prod & Eng Manager | ‘10 - ‘17 |
Founded DirectorOf.Me | ‘17 |
About Me.
Learning
Building things
Contributing to Open Source
Missing Deadlines / Goals
Nothing is more demotivating and demoralizing than setting goals and consistently failing to hit them.
-- Anonymous Employee // Upward Feedback, Axial 2012
About Me.
Learning
Building things
Contributing to Open Source
Hitting Goals
Missing Deadlines / Goals
About Me.
Good Programmer
Bad Manager
Not Bad Manager
I’m Matt Story and I used to be a bad manager.
Why do we have Product and Engineering?
To build stuff.
Product & Engineering Build Stuff
Organization believes it knows how to solve a problem. If you build it, they will come. |
Team will value individuals with a bias towards action. People who get shit done. |
Engineering Goals are likely to measure effort. Estimate-based velocities |
Product Goals are likely to measure delivery. Roadmaps, Deadlines, Sprint attainment |
Build Stuff: Pros & Cons
It is important to finish projects. Many organizations fail to get over this required barrier. |
It promotes a binary way of thinking about the things you build. Something either exists or it doesn’t, when we know value is actually delivered incrementally. |
It disregards the concept of marginal utility. That different solutions have different costs and values. |
It leads to the false sense that the job ends at “shipped.” Even when it is done, what you build might not be used by, or deliver value to, users. |
PRO
CON
CON
CON
To build valuable stuff.
Product & Engineering Build Valuable Stuff
Organization believes it has identified a problem. One that they need to better understand. |
Team will value individuals with a bias towards understanding. People with critical thinking ability. |
Engineering Goals are likely to measure delivery. Value-estimate based velocities. Engineers still need to ship. |
Product Goals are likely to measure value creation. Adoption, Satisfaction, Unit Profitability or EV Creation. |
My best programmer made us $3 million last year without writing a line of code.
-- Anonymous CTO // $1BN Company
Product & Engineering create value by solving problems.
How do you measure that?
Like you manage your investments.
A project is an investment.
Defining Project Success
What ROI are we looking for? How much value does this need to create for us? |
What is our timeframe for demonstrating that ROI? Does this need to pay us back in 6 months, 12 months, 3 years? |
What are our leading indicators? How do we determine if we are on-track for our targets within the timeframe, without waiting? |
What risks are we taking, what risks are we not willing to take? Investments are calculated risks. What risk are you taking? How will you isolate that risk? |
How much risk are we willing to take? How much are we going to spend on this? |
Your roadmap is your portfolio.
Defining Roadmap Success
What ROI are we looking for? Across all projects, what return are we looking for, and over what time-frame? |
What percentage of projects need to succeed? To get to the overall ROI, how many projects need to meet their own ROI criteria? |
How do we mitigate unintended risks across projects? Can we ensure that failures are isolated? Can we diversify our risk through project selection? |
What is our rate of return? How much value does our roadmap need to generate every quarter, every year? |
When is it going to be done?
In 2 weeks.
-- Me, Every Project 2004 - 2014
Unpacking “When is it going to be done”?
Might be a question about schedule risk “Will it be done by this date? The client has a deadline. Can we cut scope?” |
But might also be a question about domain risk “How well do we understand this problem?” |
But might also be a question about staff risk “Can someone else get this done faster? Does our team actually understand the domain?” |
It is always the start of a conversation about balancing risks. And you should choose between them based on your project success definition. |
Portfolios make money by taking specific and bounded risks.
Types of project risk
BUDGET | COMPETITION |
STAFF | DOMAIN |
FEASIBILITY
Why they are asking.
Schedule factors.
Bigger-picture schedule factors
Budget has a circular relationship with staff & domain.
And if we factor in scope ...
We often care because of opportunity cost.
When is it going to be done?
It’s a simple question.
-- CEO, VC-Backed Startup NYC, 2017
The Predictability Problem.
Simple ≠ Easy
Before we even get to estimates and forecasts
Domain means understanding
Do you understand the problem? Have you solved a similar problem in the past? Has someone else solved it? |
Do you understand the solution? Is there a known solution? Is there a best-practice, or even good practice? |
Have you solved this problem before? Or has someone on your team? |
Do you understand your current solution landscape? Could you solve this problem here, now? |
Orienting yourself to the problem
Complex Experiment | Complicated Continuous Improvement |
Chaotic Do Anything, Get Out ASAP | Obvious Best Practice |
DISORDER
With Cynefin
Predictability comes with understanding
These domains are where most parts of the business live most of the time.
Most of product development happens in the Complex and Complicated domains.
Complex Domain: Understand & Define
Complex Can we solve this? How? | Complicated Continuous Improvement |
Chaotic Do Anything, Get Out ASAP | Obvious Best Practice |
DISORDER
Complicated Domain: Skilled Execution.
Complex Experiment | Complicated Who do we need for roughly how long? |
Chaotic Do Anything, Get Out ASAP | Obvious Best Practice |
DISORDER
Now we understand our domain.
And so you ask an engineer for an estimate ...
Of course a good manager knows this is all BS ...
Managers know it needs to be padded.
The problem is with how we, as people, usually model time.
50% of the time isn’t 50% of the time
But even more fundamentally
Our mental model should look like this.
Because schedules are outlier dominated
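The claim above can be illustrated with a quick simulation. This is a hypothetical sketch: the lognormal shape and its parameters are assumptions, chosen only because delivery times tend to be heavy-tailed.

```python
# Hypothetical illustration: task durations drawn from a heavy-tailed
# (lognormal) distribution, the rough shape delivery times tend to follow.
import random
import statistics

random.seed(42)

# 1,000 simulated tasks; most finish fast, a few drag on for weeks.
durations = [random.lognormvariate(mu=1.0, sigma=1.2) for _ in range(1000)]

median = statistics.median(durations)
mean = statistics.mean(durations)

# The mean sits well above the median: a handful of outliers carries
# a disproportionate share of total schedule time.
worst_decile = sorted(durations, reverse=True)[:100]  # worst 10% of tasks
share = sum(worst_decile) / sum(durations)

print(f"median: {median:.1f} days, mean: {mean:.1f} days")
print(f"worst 10% of tasks account for {share:.0%} of total time")
```

The point is not the specific numbers; it is that in a fat-tailed world a small number of tasks dominates the schedule, so the "typical" task tells you little about total delivery time.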
And even after we’ve adjusted our mental model, there are a lot of variables.
Variables to consider
Staffing How familiar is the team with the code-base? |
Vacations, Holidays and Sick Days Can all push out delivery substantially. |
Completeness of understanding Understanding of requirements, implementation, infrastructure |
“Done” means … Value delivered, which means shipping, measuring and iterating. |
Setting Yourself Up For Predictability.
What domain are you in?
Goals by domain
If you haven’t solved this before: COMPLEX Time-box prototype your solution before making an estimate. |
If you’re not familiar with the infrastructure: COMPLEX Time-box how you’ll solve this problem in this environment. |
If you’re not familiar with this team: COMPLEX Time-box learning about the capabilities of this team (let’s see how this first sprint goes). |
You have solved this problem before: COMPLICATED You can start to talk about ranges when it might be done. |
Change your mental model.
Eliminating the Event Horizon
Never ask: “Will it be done by <date>?” This is usually how the event horizon is created. |
Recognize that your “default” answers really mean “I don’t know.” Everyone has a go-to or two when they don’t know (2 weeks!). Start saying “I don’t know.” |
Have a conversation when you’re asked “when will it be done”. Fully understand what types of risk are being discussed. |
Talk about delivery ranges and confidence intervals. Never give one date, give 2 or 3. Start with a 50%, 75% and 95% confidence estimate. |
Remember Fat Tails
Look at your past issue delivery curves. How long did it take to get things done, grouped by estimate? All the way done, in wall-clock time. |
Analyze your time-to-value on past projects. Delivery risk compounds with each todo. Understand how this affects the bigger picture. |
Your 50%, 75% and 95% confidence guesses should be far apart. Like … quite possibly months apart. |
Don’t change your model because you don’t like the results. Verify your results against past projects, but don’t change them to fit a broken mental model. |
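A minimal sketch of turning past delivery data into 50%/75%/95% confidence estimates, assuming you have wall-clock delivery times for comparable past issues. The sample numbers are made up for illustration.

```python
# Sketch: derive 50%/75%/95% confidence delivery estimates from the
# wall-clock delivery times (in days) of comparable past issues.
import statistics

past_deliveries_days = [3, 4, 4, 5, 6, 8, 9, 12, 15, 21, 30, 45]

def confidence_estimates(samples):
    """Return the 50%/75%/95% quantiles of past delivery times.

    Instead of one date, quote three: 'half the time, work like this
    lands within X days; 95% of the time, within Z days.'
    """
    # quantiles(n=20) returns the 5th, 10th, ..., 95th percentiles.
    q = statistics.quantiles(samples, n=20)
    return {"p50": q[9], "p75": q[14], "p95": q[18]}

est = confidence_estimates(past_deliveries_days)
print(est)
```

Note how far apart the three numbers land even on this small sample: that spread is the fat tail, and collapsing it into a single date is how estimates go wrong.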
Think less granularly.
Initiative-level thinking.
Look at all the way done, not dev done. Time to value may or may not have a relationship with dev done. Don’t think in dev time. |
Calendar days, not dev days. An ideal developer day doesn’t matter to your customers. Only ever talk in wall-clock time. |
Always days, not hours. Hours are too granular for projecting bigger projects. Only measure and forecast to the day. |
Scope will expand, look at past projects of similar size. Expect that the project will change and don’t rely on bottom-up estimates. |
Stop caring about estimates.
Humans are bad at estimating
Predictability does not require good estimation. It does require either many paths to success, good forecasting, or both. |
Estimates are still useful; do them fast. Low-fidelity estimates (SWAGs) are useful for categorizing issues, epics and initiatives. |
Rely on past performance. Look at the wall-clock time-to-value for previous projects and issues with similar staffing and estimates. |
Try not to talk about time when estimating. But if you have to, only ever talk about wall-clock time-to-value. |
Using Data to Improve Predictability.
Replacing your mental model with a statistical model.
We can build distributions from past projects
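One way to sketch such a statistical model, assuming past project durations are available. The lognormal fit and the sample data here are illustrative assumptions, not a prescribed method; a lognormal is simply one reasonable choice for fat-tailed delivery data.

```python
# Sketch: fit a distribution to past project durations and read
# confidence forecasts off a Monte Carlo simulation of the next project.
import math
import random
import statistics

# Hypothetical wall-clock durations (days) of past, comparable projects.
past_project_days = [10, 14, 15, 20, 25, 30, 42, 60, 90, 120]

# Fit a lognormal by matching the mean/stdev of the log-durations.
logs = [math.log(d) for d in past_project_days]
mu = statistics.mean(logs)
sigma = statistics.stdev(logs)

# Monte Carlo: sample plausible durations for the next project.
random.seed(7)
samples = sorted(random.lognormvariate(mu, sigma) for _ in range(10_000))

# Read forecasts straight off the simulated distribution.
p50 = samples[5_000]
p95 = samples[9_500]
print(f"50% confidence: {p50:.0f} days, 95% confidence: {p95:.0f} days")
```

The same machinery extends naturally to goal setting and tracking: once the distribution exists, "are we on track?" becomes a question about where the elapsed time sits in it.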
AND ULTIMATELY
Automatic Goal Setting, Tracking & Monitoring
BUT BEFORE THAT
Just track simple, big things.
How long has it taken from start-to-finish on each project? Segmented by size and team if you can. |
How long has each ticket taken you over the past quarter? Start-to-finish, Segmented by size if you can. |
All projects that have failed to launch. Segmented by team and size if you can. |
How long it takes a project to fail to launch. Failing fast is important. |
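The tracking above can start as something this simple. The project records and field names below are hypothetical; the point is measuring wall-clock start-to-finish time, segmented by size.

```python
# Minimal tracking sketch: wall-clock start-to-finish time per project,
# segmented by size. Records and field names are hypothetical.
from collections import defaultdict
from datetime import date
import statistics

projects = [
    {"name": "billing-v2", "size": "L", "start": date(2023, 1, 9),  "end": date(2023, 4, 3)},
    {"name": "sso",        "size": "M", "start": date(2023, 2, 1),  "end": date(2023, 3, 10)},
    {"name": "audit-log",  "size": "S", "start": date(2023, 3, 6),  "end": date(2023, 3, 24)},
    {"name": "search",     "size": "L", "start": date(2023, 4, 10), "end": date(2023, 8, 1)},
]

by_size = defaultdict(list)
for p in projects:
    # Calendar days, not dev days: customers only see wall-clock time.
    by_size[p["size"]].append((p["end"] - p["start"]).days)

for size, days in sorted(by_size.items()):
    print(f"{size}: median {statistics.median(days):.0f} days "
          f"across {len(days)} project(s)")
```

Even a spreadsheet with these four columns is enough to start; the segmentation by size and team is what makes the history usable for the forecasts above.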
The Limitations of Data.
Data can only tell you what, so that you can have a conversation about why.
Thanks. Questions?
matt@directorof.me