Feature Management Improv
Reduce Risk, Conquer Compliance, and Perfect Previews with OpenFeature
Todd Baert, OpenFeature Maintainer, Dynatrace
Picture this…
Then…
Feature flags to the rescue!
Then…
Standardizing Feature Flagging for Everyone
What’s OpenFeature?
OpenFeature is an open specification that provides a vendor-agnostic, community-driven API for feature flagging that works with your favorite feature flag management tool or in-house solution.
Feature flags 101: what are feature flags?
“Feature Toggles (often also referred to as Feature Flags) are a powerful technique, allowing teams to modify system behavior without changing code.”
“Feature flags are a software development technique that allows teams to enable, disable or change the behavior of certain features or code paths in a product or service, without modifying the source code.”
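The definitions above fit in a few lines of code: the branch becomes data, not code. A minimal sketch, assuming a hypothetical in-memory flag store (`FLAGS`) that is updated out-of-band:

```python
# A minimal sketch of the definitions above: the branch is data, not code.
# "FLAGS" is a hypothetical in-memory flag store, updated out-of-band.
FLAGS = {"new-checkout": False}

def legacy_checkout(cart: list) -> str:
    return f"legacy:{len(cart)}"

def new_checkout(cart: list) -> str:
    return f"new:{len(cart)}"

def checkout(cart: list) -> str:
    # Behavior changes when the flag flips -- no code change, no redeploy.
    if FLAGS.get("new-checkout", False):
        return new_checkout(cart)
    return legacy_checkout(cart)
```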
Feature flags 101: the feature flag maturity model
0. The ENV_VAR swamp
1. Dynamic configuration
2. Dynamic evaluation
3. Operationalized feature flags
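The gap between level 0 and level 2 can be sketched in a few lines (flag names and the flag-store shape here are hypothetical):

```python
import os

# Level 0, the ENV_VAR swamp: the value is fixed when the process starts.
DARK_MODE = os.environ.get("DARK_MODE", "false").lower() == "true"

# Level 2, dynamic evaluation: the value is resolved per call, per context.
def dark_mode_enabled(flag_store: dict, context: dict) -> bool:
    rule = flag_store.get("dark-mode", {})
    return context.get("plan") in rule.get("enabled_plans", [])
```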
Dynamic Evaluation: Where the magic happens
Practically, dynamic evaluation means:
Dynamic Evaluation: evaluation context
Dynamic Evaluation: implicit evaluation context
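One common way to make context implicit: middleware populates it from the inbound request, so individual call sites don't have to. A sketch, assuming a hypothetical request shape:

```python
def implicit_context(request: dict) -> dict:
    # Hypothetical middleware: derives the evaluation context automatically
    # from the inbound request, instead of every call site building it.
    return {
        "targeting_key": request.get("session_id"),
        "ip": request.get("remote_addr"),
        "user_agent": request.get("headers", {}).get("User-Agent"),
    }
```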
OK, so we have the contextual data, but what do we do with it?
Targeting: “The application of rules, specific user overrides, or fractional evaluations in feature flag resolution.”
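The three mechanisms in that definition can be sketched in resolution order. The flag shape is hypothetical, and the `hash()`-based rollout is illustrative only (the stickiness discussion in Hairball 3 covers doing this properly):

```python
def resolve(flag: dict, ctx: dict) -> str:
    user = ctx.get("targeting_key")
    # 1. Specific user overrides win first.
    if user in flag.get("overrides", {}):
        return flag["overrides"][user]
    # 2. Then rules matched against context attributes.
    for rule in flag.get("rules", []):
        if all(ctx.get(k) == v for k, v in rule["match"].items()):
            return rule["variant"]
    # 3. Then fractional evaluation (percentage rollout), else the default.
    if user is not None and hash(user) % 100 < flag.get("rollout_pct", 0):
        return flag.get("rollout_variant", flag["default"])
    return flag["default"]
```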
Dynamic Evaluation: How do we use the data?
Dynamic Evaluation: flagd
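flagd reads flag definitions as JSON, with targeting expressed as JsonLogic evaluated against the evaluation context. The snippet below is a sketch from memory and may not match the current schema exactly; treat the keys as assumptions and check the flagd documentation:

```json
{
  "flags": {
    "new-welcome-banner": {
      "state": "ENABLED",
      "variants": { "on": true, "off": false },
      "defaultVariant": "off",
      "targeting": {
        "if": [
          { "==": [{ "var": "company" }, "initech"] },
          "on",
          "off"
        ]
      }
    }
  }
}
```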
To summarize: in the context of feature flagging, dynamic evaluation means feature flags are evaluated during runtime, and use contextual data as the basis for their resolved flag values.
Dynamic Evaluation in Depth: Is that it?
No! As is often the case, once you take architectural and infrastructural concerns, user experience, and compliance into account, things get hairier. I'd like to dive deeper into 3 "hairballs" that you might run into and want to consider before designing your feature flag system.
Dynamic Evaluation in Depth: Nuances
Hairball 1
Context contents
Hairball 1: Context contents
We want to give stakeholders maximum flexibility in building their flag rules; remember, one of the main points of using feature flags is that they let us defer these decisions to runtime. That depends on putting the relevant data into the context! We can't target users based on their org if their org ID isn't in the evaluation context!
Hairball 1: Context contents
Screen resolution
IP addr (v4 and v6 ofc)
Session id
Every HTTP header known to man, especially if it starts with ‘X’
COOKIES (YUM)
User’s grandmother’s maiden name || ‘MacDonald’
# of items in cart
bcrypt(md5(sha1(user’s password)))
Battery level
Hairball 1: Context contents
Consider:
TLDR: you probably don’t want to just stuff everything into your evaluation context.
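In code, the TL;DR is an allowlist: forward only the attributes your targeting rules actually need (the keys here are hypothetical):

```python
# Forward only the attributes your targeting rules actually need.
ALLOWED_KEYS = {"targeting_key", "org_id", "plan"}  # hypothetical allowlist

def sanitize_context(raw: dict) -> dict:
    return {k: v for k, v in raw.items() if k in ALLOWED_KEYS}
```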
Hairball 2
Client-side or Server-side evaluation?
Some implementations evaluate flags on the server, while others evaluate them on the client. Where flag evaluation takes place has major implications for the performance, security, and availability of the solution. Making the right decision here is critical.
Hairball 2: Client-side or Server-side evaluation?
|                           | Client-side                                          | Server-side                                                        |
|---------------------------|------------------------------------------------------|--------------------------------------------------------------------|
| Targeting rules           | Sent to client                                       | Stay on server                                                     |
| PII                       | Stays on client                                      | Sent to server                                                     |
| Evaluation latency        | Low (evaluated locally)                              | High (can be mitigated with caching)                               |
| Implementation difficulty | Higher: often multiple languages, subtle differences | Lower: only one evaluation engine needs to be written and deployed |
Hairball 3
“Sticky” pseudorandom evaluation
Hairball 3: “Sticky” pseudorandom evaluation
Feature flags enable experimentation. In its simplest form that amounts to pseudorandomly assigning a flag value during an evaluation.
Let’s imagine a hypothetical: we’ve deployed our new feature with two slightly different design schemes. If we assign values at random, would we want the same user to see a different design every time they load our new page?
Hairball 3: “Sticky” pseudorandom evaluation
It's important to base pseudorandom assignment on a stable identifier (user email, session ID, organization ID) and feed that identifier into a deterministic bucketing algorithm.
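A common implementation (a sketch, not any particular vendor's algorithm) hashes the stable identifier together with the flag key, so assignment is deterministic per user per flag:

```python
import hashlib

def bucket(targeting_key: str, flag_key: str, buckets: int = 100) -> int:
    # Same identifier + flag key -> same bucket, across processes and restarts.
    digest = hashlib.sha1(f"{flag_key}:{targeting_key}".encode()).hexdigest()
    return int(digest, 16) % buckets

def assign(targeting_key: str, flag_key: str, rollout_pct: int) -> str:
    return "treatment" if bucket(targeting_key, flag_key) < rollout_pct else "control"
```

Mixing the flag key into the hash means a given user doesn't land in the same bucket for every experiment, which would otherwise bias results.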
Hairball 3: “Sticky” pseudorandom evaluation
Pseudorandom assignment pro-tips:
Demo Time
Live demo: https://bit.ly/opf-kubecon-24
Standardizing Feature Flagging for Everyone
@openfeature
company/openfeature
github.com/open-feature
openfeature.dev
cloud-native.slack.com #openfeature (or get an invite at slack.cncf.io)
OpenFeature Contribfest
OpenFeature + OTel Session
Feedback for this session
Demo GitHub repo