1 of 12

Cloud Native Performance

Standardizing Cloud Native Value Measurement

2 of 12

Cloud Native Performance

A vendor-neutral Cloud Native performance measurement standard

Directly enables:

  • capturing details of infrastructure capacity, Cloud Native configuration, and workload metadata (sketched below).

Facilitates:

  • benchmarking of Cloud Native performance.
  • exchange of performance information from system-to-system / mesh-to-mesh.
  • apples-to-apples performance comparisons of Cloud Native deployments.
  • a universal performance index to gauge a Cloud Native deployment's efficiency against deployments in other organizations' environments.
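To make the kind of capture concrete, here is a minimal sketch in Go of what a single performance capture could hold; the struct names, fields, and JSON layout are illustrative assumptions, not the specification's actual schema.

package main

import (
    "encoding/json"
    "fmt"
    "time"
)

// Illustrative only (not the specification's real schema): a single
// performance capture that records infrastructure capacity, Cloud Native
// configuration, and workload metadata alongside the measured results.
type InfrastructureCapacity struct {
    Nodes     int     `json:"nodes"`
    CPUCores  int     `json:"cpu_cores"`
    MemoryGiB float64 `json:"memory_gib"`
}

type CloudNativeConfig struct {
    MeshName    string `json:"mesh_name"` // e.g. "istio"
    MeshVersion string `json:"mesh_version"`
    MTLSEnabled bool   `json:"mtls_enabled"`
}

type WorkloadMetadata struct {
    Name     string `json:"name"`
    Replicas int    `json:"replicas"`
}

type PerformanceCapture struct {
    StartTime      time.Time              `json:"start_time"`
    Infrastructure InfrastructureCapacity `json:"infrastructure"`
    Config         CloudNativeConfig      `json:"config"`
    Workload       WorkloadMetadata       `json:"workload"`
    LatencyP99Ms   float64                `json:"latency_p99_ms"`
    RequestsPerSec float64                `json:"requests_per_sec"`
}

func main() {
    capture := PerformanceCapture{
        StartTime:      time.Now().UTC(),
        Infrastructure: InfrastructureCapacity{Nodes: 3, CPUCores: 12, MemoryGiB: 48},
        Config:         CloudNativeConfig{MeshName: "istio", MeshVersion: "1.20", MTLSEnabled: true},
        Workload:       WorkloadMetadata{Name: "bookinfo", Replicas: 4},
        LatencyP99Ms:   12.7,
        RequestsPerSec: 950,
    }
    // Serializing to JSON is one plausible way such a capture could be
    // exchanged system-to-system; the wire format here is an assumption.
    out, _ := json.MarshalIndent(capture, "", "  ")
    fmt.Println(string(out))
}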

3 of 12

MeshMark

from Cloud Native Performance

MeshMark:

  • MeshMark distills a variety of overhead signals and key performance indicators into a simple index.

  • MeshMark’s purpose is to convert measurements into insights about the value of functions your cloud native infrastructure is providing.

  • MeshMark specifies a uniform way to analyze and report on the degree to which measured performance provides business value.

Problem:

  • Measurement data alone may not give a clear, simple picture of how well applications are performing from a business point of view, a quality expected of metrics used as key performance indicators.
  • Reporting several different kinds of data can cause confusion.

An open standard for measuring the performance of Cloud Native deployments in the context of the value they provide.
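To illustrate how an index like MeshMark can distill several signals into one number, here is a minimal sketch assuming a simple weighted average over normalized signals; the signal names, weights, and formula are illustrative assumptions, not MeshMark's actual definition.

package main

import "fmt"

// Illustrative only: combine several normalized signals (each scored from
// 0.0 to 1.0, higher is better) into one index via a weighted average.
// MeshMark's real signal set and formula are defined by the specification,
// not by this sketch.
type Signal struct {
    Name   string
    Score  float64 // normalized, 0.0 to 1.0
    Weight float64
}

func valueIndex(signals []Signal) float64 {
    var weighted, total float64
    for _, s := range signals {
        weighted += s.Score * s.Weight
        total += s.Weight
    }
    if total == 0 {
        return 0
    }
    return 100 * weighted / total // scale to a 0 to 100 index
}

func main() {
    signals := []Signal{
        {Name: "latency overhead", Score: 0.82, Weight: 3},
        {Name: "cpu overhead", Score: 0.70, Weight: 2},
        {Name: "error rate", Score: 0.95, Weight: 3},
        {Name: "features providing value (mTLS, retries)", Score: 0.90, Weight: 2},
    }
    fmt.Printf("value index: %.1f / 100\n", valueIndex(signals))
}

In practice, the individual signal scores would come from captures like the one sketched on the previous slide.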

4 of 12

Cloud Native Performance Cadence

Specification

Participation

Research

Publication

5 of 12

Objectives

Favor leveraging existing models / specifications to the extent possible.

Focus on the primitives of Cloud Native infrastructure as the first-class concern.

Allow for extensibility in the specification.

Area: Specification

  1. Initial Specification
    1. Capture all significant and common components of a Cloud Native deployment ✔️
    2. Extend…
  2. Specification Integration
    • Enhance the workload metadata model through integration with Open Application Model
      • directly
      • via Meshery ✔️
    • Enhance the Cloud Native configuration model through integration with Cloud Native Interface
      • directly
      • via Meshery ✔️
  3. Identify hardware specification for integration
  4. Curate the specification
  5. Curate metrics like errors per second (sketched below)
  6. Golden signals - KEDA
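As a concrete illustration of item 5, a small sketch of deriving errors per second and an error-rate golden signal from cumulative counters; the counter snapshots and their values are hypothetical.

package main

import (
    "fmt"
    "time"
)

// Hypothetical counter snapshots, for example scraped from Prometheus at two
// points in time. Deriving errors per second from cumulative counters is the
// kind of curated metric the specification can standardize.
type Snapshot struct {
    At       time.Time
    Requests float64 // cumulative request count
    Errors   float64 // cumulative error (5xx) count
}

func errorsPerSecond(a, b Snapshot) float64 {
    dt := b.At.Sub(a.At).Seconds()
    if dt <= 0 {
        return 0
    }
    return (b.Errors - a.Errors) / dt
}

func errorRate(a, b Snapshot) float64 {
    dReq := b.Requests - a.Requests
    if dReq <= 0 {
        return 0
    }
    return (b.Errors - a.Errors) / dReq
}

func main() {
    t0 := time.Now()
    a := Snapshot{At: t0, Requests: 100000, Errors: 120}
    b := Snapshot{At: t0.Add(30 * time.Second), Requests: 130000, Errors: 150}
    fmt.Printf("errors/sec: %.2f  error rate: %.4f\n", errorsPerSecond(a, b), errorRate(a, b))
}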

6 of 12

Objectives

Increase individual Cloud Native project participation.

Increase individual infrastructure participation.

Exchange of performance information from system-to-system / mesh-to-mesh.

Area: Participation

  1. Every Cloud Native deployment self-reporting performance
    1. using the GitHub Action or Meshery directly
    2. including managed Cloud Native offerings
    3. including unmanaged Cloud Native deployments
  2. Performance information exchange (see the sketch after this list)
  3. Common scoreboard using MeshMark
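For the performance information exchange item, a minimal sketch of one system pushing a capture to another over HTTP; the /v1/captures path, the JSON payload, and the lack of authentication are hypothetical assumptions, not part of the specification.

package main

import (
    "bytes"
    "fmt"
    "io"
    "net/http"
    "net/http/httptest"
)

// Illustrative only: one system pushes a performance capture (as JSON) to
// another system's endpoint. The endpoint and payload shape are assumptions.
func main() {
    // Stand-in for the receiving system / mesh.
    receiver := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        body, _ := io.ReadAll(r.Body)
        fmt.Printf("received capture: %s\n", body)
        w.WriteHeader(http.StatusAccepted)
    }))
    defer receiver.Close()

    capture := []byte(`{"mesh_name":"istio","latency_p99_ms":12.7,"requests_per_sec":950}`)
    resp, err := http.Post(receiver.URL+"/v1/captures", "application/json", bytes.NewReader(capture))
    if err != nil {
        panic(err)
    }
    fmt.Println("exchange status:", resp.Status)
}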

7 of 12

Objectives

Collaborative, academic research on a universal performance index to gauge a Cloud Native deployment's efficiency against deployments in other organizations' environments.

Collaborative, academic research on performance characterization of new distributed tracing sampling algorithms.

Area: Research

1. Machine Learning Models

  1. Exploration of Nighthawk's adaptive load controller (a minimal sketch of the idea follows this list).

2. Establish value measurement index

  1. MeshMark
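To make the adaptive load controller idea concrete, here is a minimal closed-loop sketch: offered load is raised while a latency objective is met and lowered when it is exceeded. The synthetic latency model and adjustment factors are illustrative assumptions, not Nighthawk's actual controller.

package main

import "fmt"

// Illustrative closed-loop controller: raise offered load (requests per
// second) while a latency objective is met and back off when it is exceeded.
// Nighthawk's actual adaptive load controller is more sophisticated; this
// only demonstrates the feedback idea against a synthetic system under test.

// Synthetic system under test: latency grows as load approaches capacity.
func measuredLatencyMs(rps float64) float64 {
    const capacity = 2000.0
    if rps >= capacity {
        return 1000 // saturated
    }
    rho := rps / capacity
    return 5 + 200*rho/(1-rho) // simple queueing-like curve
}

func main() {
    const targetMs = 25.0
    rps := 100.0
    for step := 0; step < 20; step++ {
        latency := measuredLatencyMs(rps)
        fmt.Printf("step %2d: rps=%7.1f latency=%6.1f ms\n", step, rps, latency)
        if latency < targetMs {
            rps *= 1.2 // headroom remains: push harder
        } else {
            rps *= 0.9 // objective exceeded: back off
        }
    }
    fmt.Printf("approximate sustainable rps at a %.0f ms objective: %.0f\n", targetMs, rps)
}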

8 of 12

Objectives

Area: Publication

Continuously inform the world of the generally and specifically expected value and overhead of this modern infrastructure layer through research rooted in academic analysis and the publication of unbiased, third-party analysis.

Strongly encourage participation from all parties involved (Cloud Native projects, infrastructure providers, and so on).

Facilitate characterization of comparative differences (apples-to-oranges performance comparisons) across Cloud Native deployments.

  1. Publication in various accredited venues
    1. IEEE (1st paper): Analyzing Cloud Native Performance ✔️
    2. IEEE (2nd paper): Techniques of Adaptive Cloud Native Optimization
    3. CNCF Blog and KubeCon
  2. Establishment of a suite of benchmarks
  3. Identify representative workload types.
  4. Identify the performance impact of sidecars (sketched after this list).
  5. Identify benchmark configurations.
  6. Automation via performance test harness
  7. Cloud Native and workload provisioning: Meshery ✔️
  8. Single load generation: GetNighthawk ✔️
  9. Distributed load generation: GetNighthawk
  10. GitHub Action for Cloud Native Performance ✔️
  11. Testing
    • Exercise use of CNCF's Labs for testing on dedicated hardware.
    • Exercise use of Cloud Native Credits for testing in public clouds.
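As a sketch of how sidecar impact could be characterized, the example below compares a baseline run against a sidecar-injected run of the same benchmark and reports relative overhead; the result values are placeholders, not measured data.

package main

import "fmt"

// Illustrative only: characterize sidecar overhead as the delta between a
// baseline run and a sidecar-injected run of the same benchmark workload.
type RunResult struct {
    Name          string
    LatencyP50Ms  float64
    LatencyP99Ms  float64
    CPUMillicores float64
}

func overheadPct(baseline, meshed float64) float64 {
    if baseline == 0 {
        return 0
    }
    return 100 * (meshed - baseline) / baseline
}

func main() {
    // Placeholder numbers; in practice both results would come from the test
    // harness (for example, Meshery driving Nighthawk against each deployment).
    baseline := RunResult{Name: "no sidecar", LatencyP50Ms: 2.1, LatencyP99Ms: 9.8, CPUMillicores: 250}
    meshed := RunResult{Name: "with sidecar", LatencyP50Ms: 2.6, LatencyP99Ms: 12.4, CPUMillicores: 410}

    fmt.Printf("p50 latency overhead: %+.1f%%\n", overheadPct(baseline.LatencyP50Ms, meshed.LatencyP50Ms))
    fmt.Printf("p99 latency overhead: %+.1f%%\n", overheadPct(baseline.LatencyP99Ms, meshed.LatencyP99Ms))
    fmt.Printf("cpu overhead:         %+.1f%%\n", overheadPct(baseline.CPUMillicores, meshed.CPUMillicores))
}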

9 of 12

Project Alignment

SMP, SMI, and Meshery

[Diagram: how Meshery, SMP, and SMI relate. Meshery implements SMP and SMI and runs SMI conformance tests. SMI covers traffic only (traffic specs, traffic split, traffic metrics, access); SMP goes deeper and broader; Meshery goes deeper still, incorporating strategies. Other labels on the diagram: WASM, filters, workloads, Git integrations, workflow, scheduling, orchestration, policy, benchmarks, users, visual topology, load generators, load profiles, patterns, configuration analysis, retries, canaries, rate limiting, configuration designer, multi-mesh, dry-run, adaptive optimization, GitHub Actions, and more. See Meshery's logical object model in the next slide.]

10 of 12

11 of 12

Project Alignment: SMP, SMI, Meshery, … other?

1) SMP Stay Current Course

Benefits:

Focused goals; alignment with GetNighthawk.

Cons:

Missed opportunity?

2) SMP Combine w/SMI

Benefits:

Projects are stronger together, covering more surface area.

Cons:

Expansion of charter: more to do; split focus.

3) Take SMI Traffic Metrics and put into SMP

4) Combine SMP, SMI & Meshery

Benefits:

Empowers CNCF to say, "this is what a Cloud Native deployment is and this is how you run it."

Cons:

Many maintainers and initiatives to align and organize.

Conflates specifications with tooling.

5) SMP Expand Charter

Benefits:

Drop the "SM" of SMP. Is "Cloud Native" too small a focus? Collaborate with KEDA.

Cons:

Expansion of charter: more to do.

Clarity on distinction and alignment with OpenMetrics and OpenTelemetry.

MAYBE

MAYBE

NO

MAYBE SOMETHING TO EVOLVE INTO

YES

12 of 12

[Diagram: Meshery's logical object model ("The extensible mesh manager"). Major elements: Meshery Server (UI, DB, GraphQL server, analytics, adapters, Meshery Operator, mesheryctl config and contexts); system deployment options (docker-compose, Kubernetes manifests, Helm charts); identity and user data/preferences supplied by a Provider, where a Local Provider gives temporary storage and default functionality and a Remote Provider gives permanent storage and additional functionality (accounts, users, groups, roles, permissions); performance objects (load generators, perf tests, test profiles, test schedules, test results, environments, static boards, board configs, validators); infrastructure objects (clusters, Cloud Native control and data planes, applications, patterns, filters, SMI, Prometheus, Grafana, Jaeger); and system preferences, defaults, and system-wide settings. A legend marks objects and sub-objects Meshery owns, objects Meshery is aware of, and Meshery extension points; relationship cardinalities (1:1, 1:N, N:N, N:1) annotate the connections.]