Our Journey in the Next 20 minutes

  • Background Context
    • The Performance Imperative
    • More Continuous Futures
    • Continuous Performance Feedback
  • 5 Myths and Anti-patterns
    • “Prohibitive Ubiquity”
    • “Expedited Gridlock”
    • “Mandated Ignorance”
    • “Escape Philosophy”
    • “Predictable Unreliability”

The Performance Imperative

More Continuous Futures

[Diagram: performance engineering operating models on a spectrum from “Siloed” through “Consultative” to “Self-service”. Large orgs tend to treat performance as strategic, small orgs as tactical; with “Agile” and “DevOps”, both converge toward a more continuous model.]

“Siloed”

  • Late in waterfall cycle
  • No view into architecture

“Consultative”

  • Some knowledge transfer
  • Repeatable, lowering toil

“Self-service”

  • Replayable process and infra
  • Guardrails built into the process

Continuous Performance Feedback

Development Cycle

  • Fast and reliable feedback on API changes; the “low-hanging fruit”
  • Statistically significant API throughput assessments (see the sketch after this list)
  • Fast regression testing on large data sets (algorithm modeling)
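
A minimal sketch of what a “statistically significant” throughput assessment can look like: a stdlib-only permutation test over requests-per-second samples from repeated runs of a baseline and a candidate build. The function name, sample values, and the 0.05 threshold are illustrative assumptions, not part of the talk.

```python
import random
import statistics

def permutation_test(baseline, candidate, iterations=10_000, seed=42):
    """Two-sided permutation test on the difference of mean throughput."""
    rng = random.Random(seed)
    observed = statistics.mean(candidate) - statistics.mean(baseline)
    combined = list(baseline) + list(candidate)
    n = len(baseline)
    extreme = 0
    for _ in range(iterations):
        rng.shuffle(combined)
        diff = statistics.mean(combined[n:]) - statistics.mean(combined[:n])
        if abs(diff) >= abs(observed):
            extreme += 1
    return observed, extreme / iterations

# Hypothetical req/s samples from repeated test runs of each build
baseline_rps  = [412, 405, 398, 420, 415, 408, 411, 403]
candidate_rps = [388, 395, 381, 390, 399, 385, 392, 387]

diff, p_value = permutation_test(baseline_rps, candidate_rps)
print(f"mean delta: {diff:+.1f} req/s, p-value: {p_value:.4f}")
if p_value < 0.05:
    print("throughput change is statistically significant")
```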

Packaging/Testing Process

  • Automated verification of operational readiness
  • Reproducible SLOs across environments
  • Service SLI impact and comparison to baselines (see the sketch after this list)
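
One way to make “comparison to baselines” concrete is a pipeline gate that checks the current run’s SLIs against stored baseline values with an agreed tolerance. The metric names, numbers, and 10% tolerance below are hypothetical; treat this as a sketch, not a prescribed implementation.

```python
import sys

# Hypothetical pipeline gate comparing this run's SLIs to a baseline.
BASELINE = {"p95_latency_ms": 220.0, "error_rate": 0.002, "rps": 400.0}
CURRENT  = {"p95_latency_ms": 260.0, "error_rate": 0.001, "rps": 395.0}
TOLERANCE = 0.10  # allow 10% regression before failing the build

# For latency and error rate, lower is better; for throughput, higher is better.
LOWER_IS_BETTER = {"p95_latency_ms", "error_rate"}

failures = []
for metric, baseline in BASELINE.items():
    current = CURRENT[metric]
    if metric in LOWER_IS_BETTER:
        regressed = current > baseline * (1 + TOLERANCE)
    else:
        regressed = current < baseline * (1 - TOLERANCE)
    status = "FAIL" if regressed else "ok"
    print(f"{status:>4}  {metric}: baseline={baseline} current={current}")
    if regressed:
        failures.append(metric)

# A non-zero exit code fails the pipeline stage, keeping the SLO check automated.
sys.exit(1 if failures else 0)
```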

Transition & Operation�

  • Reuse the same tests between dev and ops
  • Scalable approach to bigger versions of systems
  • Low-risk blue/green and canary rollouts (see the sketch after this list)
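
To illustrate the low-risk canary idea: a promotion gate that compares a canary cohort against the stable version on error rate and p95 latency before shifting more traffic. All thresholds and samples here are assumptions for the sketch.

```python
from statistics import quantiles

# Hypothetical canary-analysis gate: promote only if the canary's error rate
# and p95 latency stay within agreed margins of the stable version.

def p95(samples_ms):
    # 95th percentile: the 95th of the 99 cut points from quantiles(n=100)
    return quantiles(samples_ms, n=100)[94]

stable_latency_ms = [110, 120, 98, 130, 115, 105, 125, 112, 119, 108]
canary_latency_ms = [115, 125, 101, 135, 118, 109, 128, 116, 121, 111]
stable_error_rate, canary_error_rate = 0.002, 0.003

MAX_ERROR_RATE_DELTA = 0.002   # absolute margin on error rate
MAX_LATENCY_RATIO = 1.15       # canary p95 may be at most 15% slower

error_ok = canary_error_rate - stable_error_rate <= MAX_ERROR_RATE_DELTA
latency_ok = p95(canary_latency_ms) <= MAX_LATENCY_RATIO * p95(stable_latency_ms)

if error_ok and latency_ok:
    print("PROMOTE: shift more traffic to the canary")
else:
    print("ROLLBACK: keep traffic on the stable version")
```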

FEEDBACK LOOP!!!

5 Myths and Anti-patterns

  • “Prohibitive Ubiquity”
  • “Expedited Gridlock”
  • “Mandated Ignorance”
  • “Escape Philosophy”
  • “Predictable Unreliability”

A Complementary Contradiction

Mantra to remember while working to improve things

  • Contributing factor (principle or dynamic)
    • Suggestion for how to make the difference real and impactful

  • A statement you might hear in the real world that is a MYTH or ANTI-PATTERN
    • A question you can ask to punch through the horsecrap

#1 “Prohibitive Ubiquity”

Make it easy to do right, make it hard to do wrong.

  • Let devs be devs
    • Simple performance intake form that schedules a chat (and Git flows)
    • Hold office hours, dojos, workshops
    • Build testing pipeline templates
    • Ask SLO questions, because this is “operational, not non-functional”
  • Let orgs be orgs
    • Insist on access to architectural diagrams and architects
    • Develop relationships with all team members, especially those you don’t work with
    • Put ‘operational requirements’ into feature work checklists
  • “Quality is everyone’s responsibility, but performance isn’t mine.”
    • Who contributes to the quality of the final product? Who’s on the hook to verify and validate?
  • “Make performance testing self-service so that anyone can do it!”
    • ...what would good enough look like?
    • When do training, reinforcement, and support happen?

#2 “Expedited Gridlock”

Balance first / last responsible moments (by partnering with project/product managers)

  • Don’t lose velocity to rework
    • Quantify performance pain with production issue data, then report
    • Establish error budgets around SLOs (see the sketch after this list)
    • Capture continuous performance trend data from automated pipelines
  • Take care of the little things...
    • Place trend metrics in a very visible place for the team
    • Be present in stand-ups and follow up on questions and issues
  • “Hurry up because I need it!”
    • How are you going to use this information and feedback?
  • “Performance testing is hardening”
    • When is the better time to get bad news: early or late in the cycle?
  • “Don’t start testing until needed!”
    • When will the requirements actually become available?
    • How long do you think performance testing may take?
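
For readers new to error budgets: the budget is simply the allowed unreliability implied by the SLO, and a burn rate over 1x means you will exhaust it before the window ends. A minimal worked example, with an assumed 99.9% target and 30-day window:

```python
# Hypothetical sketch: translating an availability SLO into an error budget.
# The SLO target, window, and observed error rate are assumptions.

SLO_TARGET = 0.999          # 99.9% of requests succeed
WINDOW_DAYS = 30            # rolling 30-day window

budget_fraction = 1 - SLO_TARGET
budget_minutes = WINDOW_DAYS * 24 * 60 * budget_fraction
print(f"error budget: {budget_fraction:.4%} of requests "
      f"(~{budget_minutes:.1f} minutes of full downtime per {WINDOW_DAYS} days)")

# Burn-rate check: how fast is the current error rate consuming the budget?
observed_error_rate = 0.0025     # e.g., 0.25% of requests failing right now
burn_rate = observed_error_rate / budget_fraction
print(f"burn rate: {burn_rate:.1f}x "
      "(>1x means the budget runs out before the window ends)")
```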

#3 “Mandated Ignorance”

Together, we are the strategy!

  • THINK upstream AND downstream
    • What do you know that POs don’t? (complex integration points, cost of infra in prod over time)
    • What will ops need to know downstream? → key metrics, SLOs
  • Alignment is more important than the async busywork
    • Effective vs. efficient work patterns (5 Whys before ‘just do it’)
    • Architectural risk prevention vs. production mitigations
  • “It’s what we’ve always done.”
    • If you didn’t do this, what would happen and why?
  • “We wrote the retro (or not)”
    • How do you refactor retros into future work and procedures?
  • “Those performance reports didn’t change anything we do.”
    • Why, and what does (or would)?

#4 “Escape Philosophy”

Subject-matter experts must never be single points of failure

  • Drive programs, not just tasks
    • Get ‘funding’ and approval for performance initiatives
    • Use performance pipeline patterns to measure engagement
  • Go from co-pilot to Ground Control
    • Start easy, progressively elaborate
    • Peer retro and hold each other accountable for improvements
  • “So busy with [performance] testing work, we don’t have time to X…”
    • What would you improve if you had 10% of your time?
    • What’s the first thing to improve that GETS you 10% time back?
  • “No one else knows how to do this [complex thing] right, so it always falls to me!”
    • What are the complex things we can’t automate?
    • What are the other things we can automate?

#5 “Predictable Unreliability”

Comfort is not Growth - Sal G

  • The best defense is a good offense
    • Your reality is [cloud downtimes], so we deal with that together, proactively
    • Come with plenty of performance failure examples (with reasons)
  • Expect excellence AND evidence
    • Access to measurable ‘impacts’ (not just RED but USE signals; see the sketch after this list)
    • What do we gain from automating performance in pipelines?
  • “It’s not prod[-like], so testing it doesn’t matter.”
    • What issues do we see in single-user and API trends?
    • How resilient is the overall architecture when components go slow or down?
  • “We only monitor production”
    • So what does prod tell us about areas of performance risk?
    • Why can’t we reproduce this issue in a lower environment?
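
RED signals (Rate, Errors, Duration) describe how requests behave; USE signals (Utilization, Saturation, Errors) describe why the underlying resources behave that way. A small sketch of deriving both; all names and numbers below are illustrative assumptions:

```python
from dataclasses import dataclass
from statistics import quantiles

@dataclass
class Request:
    duration_ms: float
    status: int

WINDOW_SECONDS = 60
requests = [Request(120.0, 200), Request(95.5, 200), Request(310.2, 500),
            Request(88.1, 200), Request(140.7, 200), Request(250.3, 503)]

# RED: request-centric signals (how users experience the service)
rate = len(requests) / WINDOW_SECONDS
errors = sum(1 for r in requests if r.status >= 500) / len(requests)
p95 = quantiles([r.duration_ms for r in requests], n=100)[94]
print(f"RED  rate={rate:.2f} req/s  errors={errors:.1%}  p95={p95:.0f} ms")

# USE: resource-centric signals (why the service behaves that way)
cpu_utilization = 0.82      # utilization: fraction of capacity in use
run_queue_depth = 5         # saturation: work waiting for the resource
print(f"USE  utilization={cpu_utilization:.0%}  saturation(queue)={run_queue_depth}")
```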

LinkedIn: @paulsbruce

Twitter: @paulsbruce

Email: p.bruce@tricentis.com