Ticketmaster Tech Maturity Model

Each capability is assessed at Level 1 (least mature) through Level 4 (most mature) and grouped into five categories: Code, Build & Test, Release, Operate, and Optimize. "Minimum for Public Cloud" is the level a product must reach in that capability before running in the public cloud (n/a = no minimum).

Category: Code

Code Commenting Strategy
Level 1: No or inconsistent code comments that follow no defined standard; comments cannot be used to generate documentation.
Level 2: All new code is self-documenting and comments are suitable for documentation generation tools.
Level 3: Most code is self-documenting and existing comments are suitable for documentation generation tools.
Level 4: All code is self-documenting and comments are consistently suitable for documentation generation tools.
Minimum for Public Cloud: n/a
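
As an illustration (not part of the original model), a docstring in a standard format is one way comments become "suitable for documentation generation tools" such as Sphinx; the function and its fields here are hypothetical:

```python
def reserve_seats(event_id: str, quantity: int) -> list[str]:
    """Reserve seats for an event and return their seat IDs.

    Args:
        event_id: Unique identifier of the event.
        quantity: Number of seats to reserve; must be positive.

    Returns:
        A list of reserved seat IDs.

    Raises:
        ValueError: If quantity is not positive.
    """
    if quantity <= 0:
        raise ValueError("quantity must be positive")
    # Hypothetical reservation logic; the real lookup is omitted.
    return [f"{event_id}-seat-{n}" for n in range(quantity)]
```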

Code Management Strategy
Level 1: Code is in SCM (e.g. git) and used for release, but there is little to no documented or agreed strategy for how to branch, merge, or release code.
Level 2: Develop on version branches. Anyone on the team can trace a deployment back to all the changes that went into it.
Level 3: Develop on short-lived feature branches (i.e. less than two weeks) and release from merged master.
Level 4: Develop and release from master with at least daily code check-ins, using a process that allows traceability to the requested feature.
Minimum for Public Cloud: 1

Test Suite
Level 1: No or some unit tests, functional tests, critical path tests, and performance tests.
Level 2: Some unit, functional, critical path, and performance tests, with all of them passing successfully.
Level 3: Unit, functional, critical path, and performance tests are actively built and maintained, with all of them passing successfully for positive flows.
Level 4: Unit, functional, critical path, and performance tests are actively built and maintained, with all of them passing successfully for positive and negative flows, maintaining 100% critical path coverage.
Minimum for Public Cloud: 3
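
A minimal sketch of the positive/negative-flow distinction, using pytest against a hypothetical function (not from the model):

```python
import pytest

def apply_discount(price: float, percent: float) -> float:
    """Hypothetical function under test: apply a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_positive_flow():
    # Positive flow: valid input produces the expected result.
    assert apply_discount(100.0, 25.0) == 75.0

def test_negative_flow():
    # Negative flow: invalid input is rejected, not silently accepted.
    with pytest.raises(ValueError):
        apply_discount(100.0, 150.0)
```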

Logging & Telemetry
Level 1: Default or customized logging and no telemetry.
Level 2: Rudimentary logging and telemetry in place.
Level 3: Adherence to established logging & telemetry standards. Suitable information is available in logs and telemetry for troubleshooting common issues.
Level 4: Adherence to established logging & telemetry standards. Most issues can be diagnosed through logs and telemetry.
Minimum for Public Cloud: 2
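
One common form such a standard takes is structured (machine-parseable) logging; a minimal sketch with the Python standard library, assuming JSON-per-line is the agreed format:

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Emit one JSON object per log line so logs are easy to index and query."""
    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "time": self.formatTime(record),
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        })

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
log = logging.getLogger("checkout")
log.addHandler(handler)
log.setLevel(logging.INFO)
log.info("order placed")  # -> {"time": "...", "level": "INFO", "logger": "checkout", ...}
```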

Backward / Forward Compatibility
Level 1: Breaking changes (i.e. tested locally).
Level 2: Changes are regressed by users of the product prior to release.
Level 3: Coding practices support forward compatibility.
Level 4: Coding practices support backward and forward compatibility.
Minimum for Public Cloud: 2
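
One coding practice that supports forward compatibility is the tolerant reader: parse only the fields you know and ignore fields added later. A sketch with a hypothetical message schema:

```python
import json

def parse_order(payload: str) -> dict:
    """Tolerant reader: take the fields we know, ignore fields added later.

    Unknown keys from a newer producer are silently skipped, and missing
    optional keys fall back to defaults, so old and new message versions
    can coexist (hypothetical schema, for illustration only).
    """
    data = json.loads(payload)
    return {
        "order_id": data["order_id"],          # required field
        "quantity": data.get("quantity", 1),   # optional, with a default
    }

# A newer producer added "loyalty_tier"; the older reader still works.
print(parse_order('{"order_id": "A1", "quantity": 2, "loyalty_tier": "gold"}'))
```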

Monitoring & Alerting
Level 1: Logs have enough data to set up monitoring and alerts on.
Level 2: Some monitoring and alerting is prioritized in the work queue.
Level 3: Monitoring and alerting are prioritized as part of the acceptance criteria for all work. Access to log archives and telemetry is available for troubleshooting.
Level 4: Monitoring, alerting, and validation of triggers (e.g. SLAs) are prioritized as part of the acceptance criteria for all work. Logs are indexed and telemetry is readily available for troubleshooting.
Minimum for Public Cloud: 2

Quality Engineering Model
Level 1: Contributors have separate roles (i.e. they only code or only test).
Level 2: Some contributors can both code and test.
Level 3: Most contributors both code and test.
Level 4: All contributors both code and test.
Minimum for Public Cloud: n/a

Code Reuse
Level 1: Contributors usually code what they need.
Level 2: Contributors can highlight where they have reused open source or code from other projects.
Level 3: Contributors aim to reuse rather than rebuild while coding, and actively evangelize to maximize code reuse by others.
Level 4: Contributors seek to reuse rather than rebuild as part of the planning process, actively evangelize to maximize code reuse by others, and actively contribute to other code.
Minimum for Public Cloud: n/a

Build for Availability
Level 1: Product is not tested for extreme failures (e.g. a node/instance becoming unavailable).
Level 2: Product is manually tested for extreme failures and automatically tested for error use cases.
Level 3: An automated resilience-testing framework (e.g. Chaos Monkey) runs rampant on the product in a staging environment without failures.
Level 4: An automated resilience-testing framework (e.g. Chaos Monkey) runs rampant on the product in staging and production environments without failures, and all errors (e.g. code, web server, OS) are caught and escalated.
Minimum for Public Cloud: 2

Incremental Coding (Prototyping)
Level 1: Contributors do not use prototyping to estimate or validate any features.
Level 2: Contributors sometimes use prototyping to estimate larger features more confidently.
Level 3: Contributors often use prototyping to validate features with users before completion.
Level 4: Contributors always use prototyping to validate features with users before completion.
Minimum for Public Cloud: n/a

Feedback & Requirements
Level 1: Contributors start coding before requirements are fully understood.
Level 2: Contributors code from wireframes / design comps and understand the requirements and business value before building the feature.
Level 3: Contributors code from wireframes / design comps and understand how the feature interacts within the ecosystem before building the feature.
Level 4: Contributors code from clickable wireframes / design comps that were validated by users, and understand how the feature interacts within the ecosystem before building the feature.
Minimum for Public Cloud: n/a

Behavior Driven Development (BDD)
Level 1: Contributors do not have an understanding of BDD methodology.
Level 2: Contributors understand BDD methodology and practice it on some features.
Level 3: Contributors understand BDD methodology and practice it on most features.
Level 4: BDD methodology is how things get done.
Minimum for Public Cloud: n/a
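
A hedged sketch of what practicing BDD can look like, using the behave library (one of several BDD tools; the model does not name one). The Gherkin scenario would live in a file such as features/checkout.feature, with step implementations like these in features/steps/checkout_steps.py; the scenario and names are hypothetical:

```python
# features/checkout.feature (Gherkin, shown here as a comment):
#   Feature: Ticket checkout
#     Scenario: Buying the last ticket
#       Given an event with 1 ticket remaining
#       When a fan buys 1 ticket
#       Then the event is sold out

from behave import given, when, then

@given("an event with {count:d} ticket remaining")
def step_event(context, count):
    context.remaining = count

@when("a fan buys {count:d} ticket")
def step_buy(context, count):
    context.remaining -= count

@then("the event is sold out")
def step_sold_out(context):
    assert context.remaining == 0
```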

Category: Build & Test

Definition of Done Completeness
Level 1: Contributors do not follow any documented or agreed-upon definition of "done".
Level 2: Contributors mostly follow a defined definition of "done".
Level 3: Contributors always follow the definition of "done" as a gate to making a release.
Level 4: Contributors actively update the definition of "done" to improve quality and prevent issues from recurring.
Minimum for Public Cloud: n/a

Code Quality
Level 1: Code coverage is unknown or out of date.
Level 2: Code coverage is actively tracked.
Level 3: 80%+ code coverage is maintained.
Level 4: 90%+ code coverage is maintained, or less than 20% of builds are rejected by regression test coverage.
Minimum for Public Cloud: n/a

Security Code Analysis
Level 1: Code has never been scanned with a web application security scanner.
Level 2: Code has been previously scanned with a security scanner.
Level 3: Code is regularly scanned with a security scanner.
Level 4: Code is automatically scanned with a security scanner, and defects are prioritized into the active workload.
Minimum for Public Cloud: 2

Automated Testing
Level 1: No defined acceptance tests.
Level 2: Some existing acceptance tests, but little to no automation.
Level 3: Most existing tests are automated, and all new acceptance tests are fully automated.
Level 4: Acceptance tests are actively built and maintained with full automation for every build.
Minimum for Public Cloud: 2

Continuous Integration
Level 1: No automated build pipeline. Code is manually compiled and may not always compile successfully.
Level 2: Build pipeline contains manual steps, but the build is never left in a failed state. Some failures may be missed.
Level 3: Build pipeline requires automated tests to pass before a feature is considered "complete".
Level 4: Build pipeline requires automated tests to pass; failures are actively monitored, and a process for handling them is in place.
Minimum for Public Cloud: 3
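
A minimal sketch of the gating idea, assuming pytest is the test runner; a real pipeline (Jenkins, GitHub Actions, etc.) runs an equivalent step and blocks the merge or deploy on a non-zero exit code:

```python
"""Build gate: fail the pipeline unless the automated tests pass."""
import subprocess
import sys

result = subprocess.run([sys.executable, "-m", "pytest", "--quiet"])
if result.returncode != 0:
    print("Tests failed: feature is not 'complete'", file=sys.stderr)
sys.exit(result.returncode)  # non-zero exit marks the build as failed
```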

Performance Testing & Capacity Planning
Level 1: The operational capacity of the production software is not clearly understood.
Level 2: Performance is manually tested during the release process using load scripts of common scenarios. Contributors understand the algorithmic complexity of the software.
Level 3: Performance is automatically tracked in a staging environment to gauge changes in application performance. Contributors understand the optimal load each instance can handle, and there is a process in place to make release decisions based on acceptance of new SLAs. Capacity provisioning and scaling up & down require manual steps.
Level 4: Performance is automatically tracked in both staging and production with a full understanding of the application's performance characteristics. Contributors actively collaborate with the business to determine acceptance of new SLAs based on actual production traffic and predictions created by load testing. Capacity provisioning and scaling up & down are fully automated.
Minimum for Public Cloud: 2
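
A toy load-script sketch of the Level 2 practice, assuming the requests package and a hypothetical endpoint; real load tests use dedicated tools (JMeter, Gatling, Locust, ...) and agreed SLAs rather than a script like this:

```python
"""Fire concurrent requests at an endpoint and report median and p95 latency."""
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

import requests

URL = "https://staging.example.com/health"  # hypothetical endpoint

def timed_request(_: int) -> float:
    start = time.perf_counter()
    requests.get(URL, timeout=5)
    return time.perf_counter() - start

with ThreadPoolExecutor(max_workers=20) as pool:
    latencies = sorted(pool.map(timed_request, range(200)))

p95 = latencies[int(len(latencies) * 0.95) - 1]
print(f"median={statistics.median(latencies):.3f}s p95={p95:.3f}s")
```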

Configuration File Management
Level 1: Manual configurations.
Level 2: Each environment has predefined configurations.
Level 3: Sensitive data has been abstracted, and configurations are human-readable.
Level 4: Sensitive data has been abstracted, and configurations are human-readable. All configurations are automated with tools that support monitoring & alerting, with minimal environment-specific data.
Minimum for Public Cloud: 3
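
A sketch of "sensitive data abstracted, configurations human-readable": non-secret settings live in a readable per-environment file, while secrets come from the environment (or a secret manager) rather than the file. File name and keys are hypothetical:

```python
import json
import os

def load_config(env: str) -> dict:
    """Load readable per-environment settings, then inject secrets at runtime."""
    with open(f"config/{env}.json") as f:   # e.g. config/staging.json, no secrets inside
        config = json.load(f)
    config["db_password"] = os.environ["DB_PASSWORD"]  # secret abstracted out of the file
    return config
```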

Service Consumer Tests
Level 1: No or some tests simulating a consuming application or service.
Level 2: Manual tests are executed to simulate a consuming application or service.
Level 3: Automated tests of main use cases from a consuming application or service are integrated into the build pipeline.
Level 4: Automated tests from a consuming application or service are triggered by the build pipeline and cause the build to fail if there are errors.
Minimum for Public Cloud: 2
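
A sketch of one such test: assert the response contract the consumer depends on. The endpoint and fields are hypothetical, and the requests package is assumed; contract-testing tools such as Pact formalize this idea between consumer and provider teams:

```python
import requests

def test_event_api_satisfies_consumer_contract():
    response = requests.get("https://staging.example.com/events/123", timeout=5)
    assert response.status_code == 200
    body = response.json()
    # The consuming app renders these fields, so they must stay present.
    for field in ("id", "name", "onsale_date"):
        assert field in body
```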

Category: Release

Deployment Strategy
Level 1: Contributors do not follow a documented or consistent deployment strategy.
Level 2: Contributors follow a defined deployment strategy.
Level 3: Contributors follow a defined deployment strategy that includes automated rollbacks, regression tests, configs, and tracking.
Level 4: Contributors follow a defined deployment strategy that is fully automated and includes regression tests, configs, tracking, and database releases.
Minimum for Public Cloud: 2

Release Frequency
Level 1: Releases take longer than a cycle (sprint / iteration).
Level 2: One release every cycle (sprint / iteration).
Level 3: Multiple releases every cycle (sprint / iteration).
Level 4: Code is released to production on every successful build.
Minimum for Public Cloud: n/a

Feature Flags
Level 1: No feature flagging.
Level 2: Some feature flagging.
Level 3: Feature flags adhere to an established standard, allow for run-time configuration, and are consistently maintained as the product evolves.
Level 4: Feature flags adhere to an established standard, allow for run-time configuration, are consistently maintained as the product evolves, and different categories of feature flags are controlled by different stakeholders.
Minimum for Public Cloud: n/a
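
A minimal runtime flag sketch with stakeholder-owned categories; flag names, categories, and in-memory storage are hypothetical (real systems use a flag service or a config store with runtime reloads):

```python
from dataclasses import dataclass

@dataclass
class Flag:
    enabled: bool
    category: str  # e.g. "release" (engineering) vs "business" (product owners)

FLAGS = {
    "new-checkout-flow": Flag(enabled=False, category="release"),
    "summer-promo-banner": Flag(enabled=True, category="business"),
}

def is_enabled(name: str) -> bool:
    """Look the flag up at run time so behavior can change without a deploy."""
    flag = FLAGS.get(name)
    return flag is not None and flag.enabled

if is_enabled("summer-promo-banner"):
    print("show banner")
```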

Build Pipeline Traceability
Level 1: Code can be built correctly, manually or via a build pipeline.
Level 2: There is a build pipeline with a visual representation, and contributors are automatically alerted when a build fails.
Level 3: Builds are triggered by source control check-in or on a schedule, with alerts sent out on failures.
Level 4: Builds are triggered by source control check-in or by a build of dependent services, with alerts sent out on failures; successful builds are pushed across environments to production.
Minimum for Public Cloud: 1

Modular Releases
Level 1: Entire product is a single deployable unit.
Level 2: Some of the product is separated into different deployable units.
Level 3: Most of the product is separated into many deployable units.
Level 4: Pieces of the product/service are independently deployable, and the lifecycle of change for different parts of the product is well understood and taken into account in the deployment architecture.
Minimum for Public Cloud: n/a

Continuous Delivery
Level 1: Manual deployment and testing are performed in staging.
Level 2: Manual deployment and automated testing are performed in staging.
Level 3: Automated deployment and tests are performed in staging.
Level 4: Automated deployment and tests are performed in production when code is checked in, as "zero touch" continuous deployments.
Minimum for Public Cloud: 2

Deployment Methodology
Level 1: Able to deploy a new release, automatically or manually, to a single server/cluster before rolling to the next.
Level 2: Able to manually determine the impact of a partial (canary) deployment.
Level 3: Able to automatically determine the impact of a partial (canary) deployment.
Level 4: Zero-downtime, fully automated blue-green or red-black deployments spin up and validate a canary instance in production, with the ability to segment a group or percentage of traffic, switch traffic over, and shut down the previous version once successful.
Minimum for Public Cloud: n/a
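
A sketch of deterministic traffic segmentation for a canary: hashing the user ID keeps each user pinned to one version while the canary receives a fixed percentage of traffic. The routing normally lives in the load balancer or service mesh, not application code; names and the 5% default are hypothetical:

```python
import hashlib

def route(user_id: str, canary_percent: int = 5) -> str:
    """Assign a user to "canary" or "stable" deterministically by ID."""
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    bucket = int(digest, 16) % 100          # stable bucket in [0, 100)
    return "canary" if bucket < canary_percent else "stable"

print(route("fan-12345"))  # the same user always gets the same answer
```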

Dependency Management
Level 1: Dependencies are uncertain.
Level 2: Manual dependency management.
Level 3: Automatic dependency management.
Level 4: Contributors follow a defined strategy to regularly update dependencies to newer versions.
Minimum for Public Cloud: n/a

Push Button Releases
Level 1: Releases require more than one contributor to deploy.
Level 2: Releases require manual intervention.
Level 3: Code can be deployed via a push-button release, but the environment cannot.
Level 4: Production-like environments can be prepared through version-controlled scripts and run via push-button deployments.
Minimum for Public Cloud: n/a

Scriptable DB Releases
Level 1: A database specialist makes schema changes / migrations on behalf of the contributors.
Level 2: Contributors create scripts to perform schema changes and migrations, but a database specialist executes them.
Level 3: DB schema changes and migrations are made directly from version control as a manual step during release.
Level 4: DB schema changes and migrations are made directly from version control and are consistent across all environments, including production.
Minimum for Public Cloud: n/a
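
A toy migration runner showing the underlying pattern: apply version-controlled SQL files exactly once, tracked in an applied-migrations table. It uses sqlite3 so the demo is self-contained; the directory layout is hypothetical, and tools like Flyway or Alembic implement this for real databases:

```python
import pathlib
import sqlite3

def migrate(db: sqlite3.Connection, migrations_dir: str = "migrations") -> None:
    """Apply any *.sql files (e.g. 001_init.sql) that have not yet been run."""
    db.execute("CREATE TABLE IF NOT EXISTS schema_migrations (version TEXT PRIMARY KEY)")
    applied = {row[0] for row in db.execute("SELECT version FROM schema_migrations")}
    for path in sorted(pathlib.Path(migrations_dir).glob("*.sql")):
        if path.name not in applied:
            db.executescript(path.read_text())  # run the migration
            db.execute("INSERT INTO schema_migrations VALUES (?)", (path.name,))
    db.commit()
```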

Category: Operate

DevOps Practice
Level 1: Environments in production are not controlled by the contributors building the product.
Level 2: Environments in staging are controlled and partially managed by the contributors building the product, who receive issue escalations for that environment.
Level 3: Environments in production are owned by the contributors building the product, but controlled by someone else.
Level 4: A DevOps model is followed: environments in production are fully controlled and owned by the contributors building the product, including alerts and issue escalations.
Minimum for Public Cloud: n/a

Runbook Adoption
Level 1: No triage runbook has been created.
Level 2: Contributors have created a triage runbook, but it is not actively used.
Level 3: Contributors have created a triage runbook, and it is integrated into the alerting infrastructure for easy reference.
Level 4: Contributors have created a useful triage runbook that is actively maintained and integrated into the alerting infrastructure for easy reference.
Minimum for Public Cloud: n/a

Monitoring & Alerting
Level 1: SLAs haven't been defined; or, if SLAs are monitored and alerts are set up, they mostly just cover the standard cases.
Level 2: SLAs are monitored and some alerts are sent when thresholds are not met; healthchecks are monitored, and alerts are configured for many standard error conditions.
Level 3: SLAs in staging and production are consistently met, with alerts when thresholds are not met, and healthchecks are monitored. Alerts are configured for a majority of error conditions.
Level 4: SLAs in staging and production are consistently met, and a business-disruption alert is escalated when thresholds are not met or a healthcheck fails. Non-standard HTTP responses trigger an alert. Alerts are triggered for main use cases when expected results are not met (e.g. a lower-than-normal conversion rate).
Minimum for Public Cloud: 2
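
The healthcheck half of this can be as small as an endpoint that monitoring polls, where a non-200 response is what alerting keys off. A stdlib-only sketch with hypothetical dependency checks:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

def dependencies_ok() -> bool:
    return True  # placeholder: real checks would ping the DB, cache, etc.

class Health(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/health":
            code = 200 if dependencies_ok() else 503
            self.send_response(code)
            self.end_headers()
            self.wfile.write(b"ok" if code == 200 else b"unhealthy")
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    HTTPServer(("", 8080), Health).serve_forever()
```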

On-Call Strategy
Level 1: Others know how to escalate to the team.
Level 2: Contributors follow a defined on-call strategy.
Level 3: The on-call strategy is efficient, as evidenced by consistently low MTTD and MTTR, but sometimes requires more than one party to resolve an issue.
Level 4: The contributor who is on call is usually the resolver for all issues within their product, as evidenced by consistently low MTTD and MTTR.
Minimum for Public Cloud: 2

Risk Management
Level 1: Contributors do not fully own risk management or mitigation of the product. Disaster recovery is normally defined and/or managed by someone else who has full ownership.
Level 2: Contributors think about disaster recovery plans while the code is built and released, but this requires the involvement of many other parties.
Level 3: There is an established disaster recovery plan (DRP) and business continuity program (BCP).
Level 4: There is an established disaster recovery plan (DRP) and business continuity program (BCP) which has been tested within the past 6 months.
Minimum for Public Cloud: 2

Synthetic Monitoring
Level 1: No synthetic monitoring is in place.
Level 2: Synthetic monitoring is used in staging and production with some alerting.
Level 3: Synthetic monitoring is used in staging and production for major use cases, with escalation alerts for failures.
Level 4: Synthetic monitoring is used in staging and production for both positive and negative use cases, with escalation alerts for failures.
Minimum for Public Cloud: 2
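
A sketch of the idea: probe a user-visible flow on a schedule and escalate on failure. The URL, interval, and alerting hook are hypothetical, and the requests package is assumed; hosted products (Pingdom, Datadog Synthetics, ...) run the same idea from multiple regions:

```python
import time

import requests

def alert(message: str) -> None:
    print(message)  # placeholder: page the on-call via the escalation tool

def probe(url: str) -> None:
    """One synthetic check of a major use case."""
    try:
        response = requests.get(url, timeout=5)
        assert response.status_code == 200
    except Exception as exc:
        alert(f"synthetic check failed for {url}: {exc}")

while True:
    probe("https://www.example.com/checkout")  # hypothetical major use case
    time.sleep(60)
```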

Log Management Strategy
Level 1: All logs, all the time!
Level 2: Log rotation is based off a default template.
Level 3: Log rotation takes into account available disk space. Logs are archived for retention.
Level 4: There is an effectively defined log rotation strategy, including timing around business activities like periods of high demand. Logs are retained according to business and legal requirements.
Minimum for Public Cloud: 2
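
A minimal rotation-plus-retention sketch with the Python standard library; the file path and the 30-day retention are hypothetical stand-ins for whatever the business and legal requirements actually dictate:

```python
import logging
from logging.handlers import TimedRotatingFileHandler

# Rotate at midnight and keep 30 days of archives.
handler = TimedRotatingFileHandler("app.log", when="midnight", backupCount=30)
log = logging.getLogger("app")
log.addHandler(handler)
log.setLevel(logging.INFO)
log.info("service started")
```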

Business Dashboard
Level 1: Some business metrics are tracked in a dashboard, and/or some metrics are still mined manually, but these may not be visible or accessible to all contributors.
Level 2: Business metrics are tracked in a dashboard that illustrates product performance and is constantly referenced by others to quantify how the product performs. All contributors have access and regular, consistent visibility of the dashboard.
Level 3: Business metrics are tracked in a dashboard that illustrates product performance, is constantly referenced by others to quantify how the product performs, and is used to measure the success of new feature rollouts. The dashboard is clearly visible at all times to all contributors.
Level 4: Business metrics are tracked in a dashboard that illustrates product performance, is constantly referenced by others to quantify how the product performs, and is used to measure the success of new feature rollouts. Main use cases trigger alerts to stakeholders when business metrics do not match expected values (e.g. lower-than-expected conversion rates).
Minimum for Public Cloud: n/a

Category: Optimize

Continuous Process Improvement
Level 1: Few processes are defined, and contributors rely on tribal knowledge to succeed.
Level 2: Processes are documented and can be repeated by any contributor.
Level 3: Contributors simplify / automate processes whenever possible, and documentation is maintained as they evolve.
Level 4: Contributors are actively focused on continuous process improvement by identifying and enhancing processes; performance is predictable, and quality is consistently high.
Minimum for Public Cloud: n/a

Tech Debt Management
Level 1: Contributors do not track debt in any consistent way.
Level 2: Contributors can track debt via a defined process.
Level 3: Contributors avoid taking on new debt by actively tracking and managing it.
Level 4: Contributors actively prioritize and reduce all debt.
Minimum for Public Cloud: n/a

Root Cause Prevention
Level 1: Production issues happen and sometimes it's known why, but it is mostly difficult to find the underlying cause.
Level 2: Contributors follow a defined process for determining the root cause of issues.
Level 3: Contributors follow a defined and accepted process for determining the root cause of issues, and major issues are prioritized and corrected.
Level 4: Contributors follow a defined and accepted process for root cause analysis which includes consistently preventing future issues by: 1) putting the issue into the work queue, 2) prioritizing and correcting the issue, and 3) adding monitoring / alerting to detect such issues.
Minimum for Public Cloud: 2

Data-Driven Metrics
Level 1: It takes a lot of time to gather metrics, and sometimes it's too late to get the data after the fact.
Level 2: Metrics can be pulled after an issue happens to determine why.
Level 3: Metrics illustrate product health, and action (e.g. product decisions) is taken based on the metrics.
Level 4: Metrics illustrate product health, predictive rules create alerts, and action (e.g. product decisions) is taken based on the metrics.
Minimum for Public Cloud: n/a