1 of 38

Feature Management Improv

Reduce Risk, Conquer Compliance, and Perfect Previews with OpenFeature

Todd Baert, OpenFeature Maintainer, Dynatrace

2 of 38

Picture this…

3 of 38

Picture this…

4 of 38

Then…

5 of 38

Then…

6 of 38

Feature flags to the rescue!

Then…

7 of 38

Standardizing Feature Flagging for Everyone

8 of 38

What’s OpenFeature?

OpenFeature is an open specification that provides a vendor-agnostic, community-driven API for feature flagging that works with your favorite feature flag management tool or in-house solution.
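
To make "vendor-agnostic" concrete, here is a minimal sketch using the OpenFeature JavaScript server SDK and its bundled in-memory provider; the flag key "new-welcome-banner" is made up for the example, and any vendor or in-house provider could be registered instead without changing the evaluation code (top-level await assumes an ES module entry point).

    import { OpenFeature, InMemoryProvider } from '@openfeature/server-sdk';

    // Register a provider once at startup; everything below is provider-agnostic.
    await OpenFeature.setProviderAndWait(
      new InMemoryProvider({
        'new-welcome-banner': {
          disabled: false,
          defaultVariant: 'off',
          variants: { on: true, off: false },
        },
      }),
    );

    // Application code talks only to the OpenFeature client, never to a vendor SDK.
    const client = OpenFeature.getClient();
    const showBanner = await client.getBooleanValue('new-welcome-banner', false);

Swapping the in-memory provider for a vendor's provider (or for flagd's) is a one-line change at startup.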

9 of 38

Feature flags 101: what are feature flags?

“Feature Toggles (often also referred to as Feature Flags) are a powerful technique, allowing teams to modify system behavior without changing code.”

  • Pete Hodgson, martinfowler.com

“Feature flags are a software development technique that allows teams to enable, disable or change the behavior of certain features or code paths in a product or service, without modifying the source code.”

  • openfeature.dev

10 of 38

Feature flags 101: the feature flag maturity model

  • Level 0: The ENV_VAR swamp
  • Level 1: Dynamic configuration
  • Level 2: Dynamic evaluation
  • Level 3: Operationalized feature flags

11 of 38

Dynamic Evaluation: Where the magic happens

  • Returned feature flag value is determined dynamically
    • Targeting rules or “flag evaluation logic”
  • Returned value is based on contextual information
    • App version
    • User geolocation
    • Organization/user entitlement
    • Date/time
  • Flag evaluation logic is centralized, and independent of the application
    • Application authors don’t need to implement targeting rules
    • Application defers flag evaluation to the flag management system or its SDK
    • Logic isn’t duplicated in multiple services/applications
    • Think in terms of: “defer/delegate” (see the code sketch below)
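
As a minimal sketch of that "defer/delegate" idea (assuming the OpenFeature JavaScript server SDK; the flag key and context attributes are illustrative), the application only supplies contextual data, and the centralized targeting rules decide the result:

    import { OpenFeature } from '@openfeature/server-sdk';

    const client = OpenFeature.getClient();

    async function canAccessMainframe(): Promise<boolean> {
      // Contextual information that centralized targeting rules can act on.
      const context = {
        targetingKey: 'user-42',     // stable user/session identifier
        email: 'ellie@ingen.com',
        appVersion: '1.8.3',
        country: 'CR',
      };
      // No targeting logic lives here: evaluation is deferred to the flag
      // management system (or its SDK) via the configured provider.
      return client.getBooleanValue('enable-mainframe-access', false, context);
    }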

12 of 38

Dynamic Evaluation: Where the magic happens

Practically, dynamic evaluation means:

  • No more “if customer in legacyCustomers” code littered across multiple services
  • Reduced blast radius (test new implementations and features only on a small subset of users)
  • A robust platform for experimentation that doesn’t require developer intervention
  • Compliance agility (enable/disable features instantly based on geo; remember our initial hypothetical)

13 of 38

Dynamic Evaluation: evaluation context

14 of 38

Dynamic Evaluation: implicit evaluation context

15 of 38

Dynamic Evaluation: How do we use the data?

OK, so we have the contextual data, but what do we do with it?

Targeting: “The application of rules, specific user overrides, or fractional evaluations in feature flag resolution.”

16 of 38

Dynamic Evaluation: How do we use the data?

17 of 38

Dynamic Evaluation: How do we use the data?

18 of 38

Dynamic Evaluation: How do we use the data?

19 of 38

Dynamic Evaluation: flagd

  • flagd is OpenFeature’s cloud-native reference implementation of a feature-flag evaluation backend
    • Written in Go
    • Easily containerized
    • Can source flags from various syncs (files, HTTP endpoints, Kubernetes CRDs)
  • Flags and targeting are defined as JSON, with a custom flag evaluation DSL built on JSONLogic

20 of 38

Dynamic Evaluation: flagd

  • The “enable-mainframe-access” flag is a simple boolean flag with true/false variants
  • By default, it returns false (the “off” variant)
  • It has a targeting rule that returns the “on” variant if the email address supplied in the context ends with “@ingen.com” (see the definition sketched below)
  • Let’s check it out in the playground
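
As a sketch, the flag above could be defined in flagd's JSON format roughly like this, with the targeting rule written in its JSONLogic-based DSL ("ends_with" is one of flagd's JSONLogic extensions; exact schema details may vary between flagd versions):

    {
      "flags": {
        "enable-mainframe-access": {
          "state": "ENABLED",
          "variants": { "on": true, "off": false },
          "defaultVariant": "off",
          "targeting": {
            "if": [
              { "ends_with": [{ "var": "email" }, "@ingen.com"] },
              "on",
              "off"
            ]
          }
        }
      }
    }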

21 of 38

Dynamic Evaluation in Depth: Is that it?

To summarize: in the context of feature flagging, dynamic evaluation means feature flags are evaluated at runtime, using contextual data as the basis for their resolved flag values.

22 of 38

Dynamic Evaluation in Depth: Nuances

No! As is often the case, when you take into account architectural and infrastructural concerns, user experience, and compliance, things get hairier. I’d like to dive deeper into three “hairballs” that you might run into and want to consider before designing your feature flag system.

23 of 38

Hairball 1

Context contents

24 of 38

Hairball 1: Context contents

We want to give stakeholders maximum flexibility in building their flag rules; remember, one of the main points of using feature flags is to allow us to defer our flag evaluation logic. This depends on putting relevant data into the context! We can’t select users based on their org if we don’t have their org-id in the evaluation context!

25 of 38

Hairball 1: Context contents

  • Screen resolution
  • IP addr (v4 and v6, ofc)
  • Session id
  • Every HTTP header known to man, especially if it starts with ‘X’
  • COOKIES (YUM)
  • User’s grandmother’s maiden name || ‘MacDonald’
  • # of items in cart
  • bcrypt(md5(sha1(user’s password)))
  • Battery level

26 of 38

Hairball 1: Context contents

Consider:

  • PII
  • Serialization, network, and storage costs
  • Telemetry challenges, especially with high cardinality values

TLDR: you probably don’t want to just stuff everything into your evaluation context.

27 of 38

Hairball 2

Client-side or Server-side evaluation?

28 of 38

Hairball 2: Client-side or Server-side evaluation?

Some implementations evaluate flags on the server, while others evaluate flags on the client. Where flag evaluation takes place has massive implications for the performance, security, and availability of the solution. Making the right decision here is critical.

29 of 38

Hairball 2: Client-side or Server-side evaluation?

                              Client-side                              Server-side
  Targeting rules             Sent to the client                       Stay on the server
  PII                         Stays on the client                      Sent to the server
  Evaluation latency          Low (evaluated locally)                  High (can be mitigated with caching)
  Implementation difficulty   Higher; often multiple languages,        Lower; only one evaluation engine
                              with subtle differences                  need be written and deployed

30 of 38

Hairball 3

“Sticky” pseudorandom evaluation

31 of 38

Hairball 3: “Sticky” pseudorandom evaluation

Feature flags enable experimentation. In its simplest form, that amounts to pseudorandomly assigning a flag value during an evaluation.

Let’s imagine a hypothetical: we’ve deployed our new feature with two slightly different design schemes. If we assign values at random, would we want the same user to see a different design every time they load our new page?

32 of 38

Hairball 3: “Sticky” pseudorandom evaluation

It’s important to do pseudorandom assignment based on a static identifier (user email, session id, organization id) and use that as the basis for a deterministic bucketing algorithm.
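
A minimal sketch of what that deterministic bucketing can look like (not any particular vendor's algorithm): hash the flag key plus a stable identifier with a fast, non-cryptographic hash, map the result into 0-99, and compare it to the rollout percentage. The same inputs always land in the same bucket, so the assignment is "sticky".

    // FNV-1a is used purely for illustration; any fast, widely implemented hash works.
    function bucket(flagKey: string, bucketingKey: string): number {
      const input = `${flagKey}:${bucketingKey}`;
      let hash = 0x811c9dc5;                 // FNV-1a 32-bit offset basis
      for (let i = 0; i < input.length; i++) {
        hash ^= input.charCodeAt(i);
        hash = Math.imul(hash, 0x01000193);  // FNV-1a 32-bit prime
      }
      return (hash >>> 0) % 100;             // map into buckets 0..99
    }

    // A 20% rollout: a given user gets the same answer on every evaluation.
    function inRollout(userId: string): boolean {
      return bucket('new-design-scheme', userId) < 20;
    }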

33 of 38

Hairball 3: “Sticky” pseudorandom evaluation

Pseudorandom assignment pro-tips:

  • Identify an ideal “bucketing value” (it may be different for each use case, e.g., bucketing by org vs. bucketing by user)
  • The bucketing algorithm should be based on a fast hash; it doesn’t need to be “security grade”, and it should be widely implemented
  • It’s optimal if the algorithm exhibits “low thrash” when a new bucket is added

34 of 38

Demo Time

35 of 38

Live demo: https://bit.ly/opf-kubecon-24

  • A simple app deployed in Kubernetes
  • Two feature flags
  • flagd is configured to get its flags and targeting from a K8s custom resource instance (sketched below)
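
For reference, a hypothetical sketch of what such a custom resource might look like using the OpenFeature Operator's FeatureFlag kind; treat the apiVersion and field names as assumptions, since they differ between operator versions:

    apiVersion: core.openfeature.dev/v1beta1
    kind: FeatureFlag
    metadata:
      name: demo-flags
    spec:
      flagSpec:
        flags:
          enable-mainframe-access:
            state: ENABLED
            variants:
              "on": true
              "off": false
            defaultVariant: "off"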

36 of 38

Standardizing Feature Flagging for Everyone

@openfeature

company/openfeature

github.com/open-feature

openfeature.dev

cloud-native.slack.com #openfeature (or get an invite at slack.cncf.io)

37 of 38

OpenFeature Contribfest

OpenFeature + OTel Session

38 of 38

Feedback for this session

Demo GitHub repo