1 of 71

CNFs vs. VNFs

Dan Kohn

Executive Director, CNCF


2 of 71

Cloud Native Computing Foundation

  • Non-profit, part of the Linux Foundation; founded Dec 2015
  • Platinum members:

  • Hosts Graduated, Incubating, and Sandbox projects spanning orchestration, monitoring, networking, service mesh, service discovery, distributed tracing, storage, security, identity, policy, logging, remote procedure call, container runtimes, package management, registries, serverless, tooling, messaging, key/value stores, and related specs

© 2018 Cloud Native Computing Foundation


3 of 71

TODAY THE LINUX FOUNDATION IS MUCH MORE THAN LINUX

Security

We are helping global privacy and security through a program to encrypt the entire internet.

Networking

We are creating ecosystems around networking to improve agility in the evolving software-defined datacenter.

Cloud

We are creating a portability layer for the cloud, driving de facto standards and developing the orchestration layer for all clouds.

Automotive

We are creating the platform for infotainment in the auto industry that can be expanded into instrument clusters and telematics systems.

Blockchain

We are creating a permanent, secure distributed ledger that makes it easier to create cost-efficient, decentralized business networks.

We are regularly adding projects; for the most up-to-date listing of all projects visit tlfprojects.org

Web

We are providing the application development framework for next generation web, mobile, serverless, and IoT applications.


4 of 71

KubeCon + CloudNativeCon

  • China
    • Shanghai: November 14-15, 2018
    • Sponsorships open
  • North America
    • Seattle: December 11-13, 2018
    • Sponsorships open
  • Europe
    • Barcelona: May 21-23, 2019

Cloud Native Computing Foundation (CNCF)


5 of 71

Network Architecture Evolution

  • 1.0: Separate physical boxes for each component (e.g., routers, switches, firewalls)
  • 2.0: Physical boxes converted to virtual machines called Virtual Network Functions (VNFs) running on VMware or OpenStack
  • 3.0: Cloud-native Network Functions (CNFs) running on Kubernetes on public, private, or hybrid clouds


6 of 71

Network Architecture 1.0


7 of 71

Network Architecture Evolution

  • 1.0: Separate physical boxes for each component (e.g., routers, switches, firewalls)
  • 2.0: Physical boxes converted to virtual machines called Virtual Network Functions (VNFs) running on VMware or OpenStack


8 of 71

Network Architecture 2.0


9 of 71

Network Architecture Evolution

  • 1.0: Separate physical boxes for each component (e.g., routers, switches, firewalls)
  • 2.0: Physical boxes converted to virtual machines called Virtual Network Functions (VNFs) running on VMware or OpenStack
  • 3.0: Cloud-native Network Functions (CNFs) running on Kubernetes on public, private, or hybrid clouds


10 of 71

Network Architecture 3.0

(hardware is the same as 2.0)


11 of 71

Evolving from VNFs to CNFs

  • ONAP Amsterdam (Past) runs on OpenStack, VMware, Azure or Rackspace
  • ONAP Casablanca (Present) runs on Kubernetes and so works on any public, private or hybrid cloud
  • Virtual Network Functions (VNFs) are virtual machines that run on OpenStack or VMware, or can run on K8s via KubeVirt or Virtlet

(diagram – Past: VNFs and the ONAP Orchestrator on OpenStack or VMware, or on Azure or Rackspace, over bare metal. Present: VNFs on OpenStack over bare metal; ONAP Orchestrator on Kubernetes. Future: CNFs, plus VNFs via KubeVirt/Virtlet, with the ONAP Orchestrator on Kubernetes over any cloud.)

12 of 71

Major Benefits

  1. Cost savings (with public, private, and hybrid clouds)
  2. Development velocity
  3. Resiliency (to failures of individual CNFs, machines, and even data centers)


13 of 71

The challenge of transitioning VNFs to CNFs

  • Moving network functionality from physical hardware into a virtual machine (P2V) is generally easier than containerizing the software (P2C or V2C)
  • Many network function virtualization VMs rely on kernel hacks or otherwise do not restrict themselves to just the stable Linux kernel userspace ABI
    • They also often need to use DPDK or SR-IOV to achieve sufficient performance
  • Containers provide nearly direct access to the hardware with little or no virtualization overhead
    • But they expect containerized applications to use the stable userspace Linux kernel ABI, not to bypass it
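The tension in the bullets above can be made concrete with a pod spec. As a sketch (the image name and the `intel.com/sriov_vf` resource name are hypothetical and depend on which SR-IOV device plugin a cluster runs), a CNF that needs DPDK-style hugepages and a virtual function would request them the same way GPU workloads request devices:

```yaml
# Hypothetical CNF pod: hugepages for DPDK plus an SR-IOV virtual function,
# requested as Kubernetes extended resources (names depend on the device plugin).
apiVersion: v1
kind: Pod
metadata:
  name: dpdk-cnf-example
spec:
  containers:
  - name: packet-processor
    image: example.com/cnf/packet-processor:latest   # placeholder image
    resources:
      requests:
        memory: "1Gi"
        hugepages-2Mi: "512Mi"          # DPDK poll-mode drivers need hugepages
        intel.com/sriov_vf: "1"         # VF advertised by an SR-IOV device plugin
      limits:
        memory: "1Gi"
        hugepages-2Mi: "512Mi"          # hugepages require requests == limits
        intel.com/sriov_vf: "1"
    volumeMounts:
    - mountPath: /dev/hugepages
      name: hugepage
  volumes:
  - name: hugepage
    emptyDir:
      medium: HugePages
```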


14 of 71

Areas for More Discussion

  • The strength of no longer being locked into specific OSs
    • Any version of Linux >3.10 is acceptable
  • Multi-interface pods vs. Network Service Mesh
  • Complete parity for IPv6 functionality and dual-stack support in K8s
  • Security, and specifically recommendations from Google and Jess that come into play when hosting untrusted, user-provided code
    • Possible use of isolation layers such as gVisor or Kata
  • Scheduling container workloads with network-related hardware constraints (similar to what’s been done for GPUs)
    • Network-specific functionality like QOS
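As one hedged sketch of the multi-interface pod discussion above, Multus-style attachments add interfaces to a pod through a CRD plus an annotation (the CRD group follows the CNCF CNI convention; the `sriov` CNI config and image are illustrative):

```yaml
# Sketch of a multi-homed pod via a Multus-style NetworkAttachmentDefinition;
# the CNI config and image are illustrative, not a tested deployment.
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: sriov-net
spec:
  config: '{ "cniVersion": "0.3.1", "type": "sriov", "name": "sriov-net" }'
---
apiVersion: v1
kind: Pod
metadata:
  name: multi-homed-cnf
  annotations:
    k8s.v1.cni.cncf.io/networks: sriov-net   # second interface beyond the default eth0
spec:
  containers:
  - name: cnf
    image: example.com/cnf:latest            # placeholder image
```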


15 of 71

Demo Plans Underway

  • VNFs vs. CNFs
    • Working on a demo of boot time and throughput of VNFs on OpenStack vs. CNFs on Kubernetes, where the networking code and underlying hardware are identical
    • Will deliver open source installers and Helm charts
  • Cloud-native Customer Premises Equipment (CCPE) Project
    • Modify the ONAP vCPE use case and VNF deployment to show VNF vs. CNF deployments of chained network functions


16 of 71

Roll-Out Plans

  • Open Source Summit NA, Vancouver, August 28: Joint workshop by CNCF executive director Dan Kohn and LF Networking head Arpit Joshipura on Cloud-native Network Functions
  • Open Networking Summit Europe, Amsterdam, September 25: Marketing launch
  • KubeCon + CloudNativeCon NA, Seattle, December 11: Planned demo
  • Mobile World Congress, Barcelona, February 25: Major roll-out
  • Ongoing close collaboration with LF Networking and specific carriers providing feedback (AT&T, Bell Canada, Vodafone, etc.)


17 of 71

The Networking aspects of Cloud Native

Arpit Joshipura

GM Networking

The Linux Foundation

18 of 71

Industry Direction: Any Cloud + Portable Apps in Containers

  • Utilize the best of Cloud with the best of Telecom Networks
    • Promise of Containers – allow for any cloud portability
    • Promise of Network – full network automation & zero touch services
  • Telecom network transformation requires a hybrid strategy
    • Step-by-step migration of VM-based VNFs/workloads to containers
    • VM and container interworking in a multi-VIM environment

The Linux Foundation Internal Use Only


8/27/18

19 of 71

Two leading de facto platforms – Networking & Cloud


  • ONAP: Network Automation & Orchestration Platform
    • Multi-VIM strategy (OpenStack, VMware, Azure, ...)
    • OOM, a project within ONAP, is looking at containers
  • Kubernetes: Cloud Automation & Orchestration Platform
    • Cross-Cloud CI, a project within CNCF, is looking at ONAP

Open Source projects at LF can bring the best of both worlds to the Telecom Industry

20 of 71

Sustainable Innovation: Open Source Networking Creating De-Facto Platforms to Enable Next Generation Solutions in Telecom, Enterprise & Cloud

(diagram – Value: $576M in shared innovation; a de facto platform for ~70% of global subscribers; 10/10 top vendors active; LF hosts 9/10 most important projects; SDO+OSS harmonization. Solutions: network automation/zero touch and new services in minutes across 5G/IoT/edge/AI, spanning carrier, cloud, and enterprise services.)

21 of 71

LF Networking Vision: Automating Cloud, Network, & IOT Services

(diagram: residential, enterprise, cloud, and IoT services delivered over carrier and cloud networks; infrastructure spans enterprise software-defined data centers (SDDC), public/hybrid cloud service providers, cloud hosting, private cloud providers, web service providers, service providers, and MSOs/CableCos; the software & automation layer is ONAP, OPNFV, ODL, FD.io, SNAS, and PNDA)

22 of 71

Bringing It All Together Core to Edge – LF Open Source Network + Disaggregation + Edge + IOT + AI + Cloud + Blockchain

(diagram: open access and open edge serving SMB, residential, mobile, and enterprise & IIoT; carrier access, core, cloud, interconnect, and data center alongside internet/web, hosted private cloud, and public cloud; plus an edge reference implementation, an IoT gateway & cloud reference architecture, standards for edge, and other edge activities)

23 of 71

Open Source Networking Landscape – Linux Foundation hosts 9/10 Top projects

(landscape diagram, updated 2018: layers from disaggregated hardware, network operating systems (e.g., SONiC), IO abstraction & data path, network control, cloud & virtual management, and orchestration/management/policy, through network data analytics and the application layer/app server, up to products, services & workloads, with CI/CD and system integration & test automation alongside; projects grouped as Linux Foundation hosted, LF Networking harmonized, outside Linux Foundation, and standards; overall theme: automation of network + infrastructure + cloud + apps + IoT)

24 of 71

Linux Foundation Path to Open Source Harmonization 2.0


Key Drivers of Each Layer

(diagram, layer by layer: Devices/IIoT – IIoT framework for core services (EdgeX Foundry); Edge – integrated edge stack, zero touch (Akraino Edge Stack); Infrastructure – any cloud (public, hybrid, service provider core, edge), including OpenStack, Azure, RS, VMware, with hybrid orchestration/VIMs; Automation, Control & Orchestration – core-to-edge zero touch automation plus VM-to-container migration and portability (LFN: ONAP+ODL+OPNFV+FD.io; CNCF: Kubernetes); Analytics/AI/Blockchain – AcumosAI, deep learning, blockchain; Services – AI & marketplace/by-vertical projects with apps, location, and service portability for carrier cloud & enterprise)

25 of 71

The deep dive – VNFs on ONAP & Cloud Native journey

The Linux Foundation Board Confidential


26 of 71

ONAP Beijing Architecture

(architecture diagram; related projects include Integration, VNF Requirements, Modeling (Utilities), and the VNF Validation Program)

27 of 71

A Day in the Life of an ONAP Service

(diagram; phases: VNF design → run-time → closed loop)

  1. Vendor-provided VNF (cloud-hosted, optimized, or native)
  2. Vendor packages VNF as per ONAP requirements; can use VNF SDK
  3. Design/test teams onboard VNFs
  4. Designers create products, services, recipes
  5. OSS/BSS system triggers service deployment
  6. Service lifecycle management
  7. Constant data collection, analytics, event monitoring; S3P

Credit: Aarna Networks, ONAP Training course

28 of 71

Kubernetes Gap Analysis & Transition plans

Top 3 Areas of Investigation

  1. Support for VNFs/Apps from different vendors (Ref guest OS and VNF ecosystem alignment)
  2. Security – Access rights and privileges, known rules for admins etc
  3. Network Specific Requirements – focus on performance, scale and capabilities
    • Enabling scheduling container workloads with network-related hardware constraints (similar to what’s been done for GPUs)
    • Multi-homed containers (Multus, a CNI plugin, is working on this)
    • Functional parity when deployed with IPv6
    • Network-specific functionality like QOS
    • Multiple vNICs for a given container (currently not supported)

Transition plan

  • Demo of Any app, Any cloud with ONAP running on Kubernetes @ ONS opening keynote
  • Kubernetes as the Virtual Infrastructure Manager (VIM) for running the ONAP Orchestrator
  • Consider Kubernetes as the ONAP Application Controller (AppC)
  • As VNFs can be containerized, do so and manage them with Kubernetes – prioritize use cases (e.g., CDN, DNS)
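For the prioritized use cases, a containerized network function becomes an ordinary Kubernetes workload. A minimal sketch for the DNS case (using the public CoreDNS image; replica count and labels are illustrative):

```yaml
# Illustrative DNS network function as a plain Kubernetes Deployment;
# Kubernetes reschedules failed replicas, giving the resiliency VNFs
# traditionally got from VM-level HA.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dns-cnf
spec:
  replicas: 3
  selector:
    matchLabels:
      app: dns-cnf
  template:
    metadata:
      labels:
        app: dns-cnf
    spec:
      containers:
      - name: coredns
        image: coredns/coredns:1.2.0
        ports:
        - containerPort: 53
          protocol: UDP
```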


29 of 71

Evolving from VNFs to CNFs

  • ONAP Amsterdam (Past) runs on OpenStack, VMware, Azure or Rackspace
  • ONAP Casablanca (Present) runs on Kubernetes and so works on any public, private or hybrid cloud
  • Virtual Network Functions (VNFs) are virtual machines that run on OpenStack or VMware, or can run on K8s via KubeVirt or Virtlet

(diagram – Past: VNFs and the ONAP Orchestrator on OpenStack or VMware, or on Azure or Rackspace, over bare metal. Present: VNFs on OpenStack over bare metal; ONAP Orchestrator on Kubernetes. Future: CNFs, plus VNFs via KubeVirt/Virtlet, with the ONAP Orchestrator on Kubernetes over any cloud.)

30 of 71

Today’s Agenda

BACKGROUND AND VISION

1:30 Introduction to VNFs and CNFs & Cross-cloud Dan Kohn

2:00 Networking & Telecom Automation: VNF to CNF journey Arpit Joshipura

REQUIREMENTS

2:15 Cloud Native lessons and requirements: a view from an end user – Telus, Sanah Tariq

2:30 (Dan/Arpit facilitate) Why Telecom and Cloud Native technologies are coming together – discuss challenges and requirements

BREAK

PROJECTS AND ROADMAP

3:30 Overview of projects solving the migration Roadmap to Cloud Native

3:50 Network Service Mesh (VPP/Ligato) Ed Warnicke

4:10 Cross-Cloud CI working group Taylor Carpenter

4:30 Wrap up and How to get involved


31 of 71

32 of 71

Envoy & Istio overview

Ihor Dvoretskyi, @idvoretskyi,

Developer Advocate, CNCF


33 of 71

34 of 71

History / Community

  • Inception May ‘15
  • Open sourced September ‘16: https://github.com/envoyproxy/envoy
  • Joined CNCF September '17: https://www.envoyproxy.io/
  • Users/contributors (partial list): Lyft, Google, IBM, Verizon, Apple, Microsoft, Pivotal, Red Hat, eBay, Stripe, VSCO, Tencent QQ, Twilio, Yelp … and more all the time.
  • Integrations (partial list): Istio (Google/IBM), Nomad (Verizon), Tigera, Covalent, Turbine Labs, Datawire … more on the way.
  • Lyft deployment: 10s of thousands of hosts, 100s of services, 3M + mesh RPS.

35 of 71

State of Service-Oriented Architecture networking

  • Protocols (HTTP/1, HTTP/2, gRPC, databases, caching, etc.).
  • Infrastructures (IaaS, CaaS, on premise, etc.).
  • Intermediate load balancers (AWS ELB, F5, etc.).
  • Observability output (stats, tracing, and logging).
  • Implementations (often partial) of retry, circuit breaking, rate limiting, timeouts, and other distributed systems best practices.
  • Authentication and Authorization.
  • Per-language libraries for service calls.

36 of 71

Envoy overview

  • Out of process architecture: Let’s do a lot of really hard stuff in one place and allow application developers to focus on business logic.
  • Modern C++ code base: Fast and productive.
  • L3/L4 filter architecture: A byte proxy at its core. Can be used for things other than HTTP (e.g., MongoDB, redis, stunnel replacement, TCP rate limiter, etc.).
  • HTTP L7 filter architecture: Make it easy to plug in different functionality.
  • HTTP/2 first! (Including gRPC and a nifty gRPC HTTP/1.1 bridge).
  • Service discovery and active/passive health checking.
  • Advanced load balancing: Retry, timeouts, circuit breaking, rate limiting, shadowing, outlier detection, etc.
  • Best in class observability: stats, logging, and tracing.
  • Edge proxy: routing and TLS.
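A minimal static Envoy configuration shows the listener → HTTP connection manager → cluster flow described above (v2 API as of 2018; addresses, ports, and names are placeholders):

```yaml
# Minimal Envoy v2 static config: one HTTP listener routing everything
# to one local cluster (illustrative values throughout).
static_resources:
  listeners:
  - name: ingress
    address:
      socket_address: { address: 0.0.0.0, port_value: 8080 }
    filter_chains:
    - filters:
      - name: envoy.http_connection_manager   # HTTP L7 filter architecture
        config:
          stat_prefix: ingress_http
          route_config:
            virtual_hosts:
            - name: backend
              domains: ["*"]
              routes:
              - match: { prefix: "/" }
                route: { cluster: local_service }
          http_filters:
          - name: envoy.router
  clusters:
  - name: local_service
    connect_timeout: 0.25s
    type: STATIC
    lb_policy: ROUND_ROBIN
    hosts:
    - socket_address: { address: 127.0.0.1, port_value: 9000 }
```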

37 of 71

38 of 71

Istio overview

  • Connect, manage and secure microservices
  • Separation of concerns: developer and operations
  • For both containerized and non-containerized workloads
  • Leverage great functionality in Envoy, adding pluggable management and control planes
  • Intelligent routing, load balancing, metrics collection, policy enforcement, end-to-end authentication
  • Istio 1.0 announced in July 2018
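Intelligent routing in Istio 1.0 is expressed as a VirtualService. A hedged sketch (the `reviews` service and its subsets are illustrative, in the style of the Bookinfo sample):

```yaml
# Illustrative Istio 1.0 VirtualService: weighted canary routing
# between two subsets of the same service.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 90
    - destination:
        host: reviews
        subset: v2
      weight: 10   # send 10% of traffic to the canary
```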

39 of 71

Istio components

  • Mixer - Central component that is leveraged by the proxies and microservices to enforce policies such as authorization, rate limits, quotas, authentication, request tracing and telemetry collection.
  • Pilot - A component responsible for configuring the proxies at runtime.
  • Citadel - A centralized component responsible for certificate issuance and rotation.
  • Node Agent - A per-node component responsible for certificate issuance and rotation.

40 of 71

Istio and Envoy

  • Envoy - Sidecar proxies per microservice to handle ingress/egress traffic between services in the cluster and from a service to external services
  • Providing a rich set of functions like:
    • discovery
    • L7 routing
    • circuit breakers
    • policy enforcement
    • telemetry recording/reporting functions.
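One of the Envoy functions listed above, circuit breaking via outlier detection, is surfaced in Istio as a DestinationRule. A sketch (Istio 1.0 API; the host and thresholds are illustrative):

```yaml
# Illustrative Istio 1.0 DestinationRule: connection-pool limit plus
# outlier detection (Envoy's passive health checking / circuit breaker).
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 100        # cap concurrent connections
    outlierDetection:
      consecutiveErrors: 5         # eject a host after 5 consecutive errors
      interval: 30s
      baseEjectionTime: 60s
```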

41 of 71

More details


42 of 71

Extra


43 of 71

Envoy overview

(diagram: an Envoy sidecar in each service cluster; Envoys speak HTTP/2 to one another, REST/gRPC to a discovery service, and proxy traffic to external services)

44 of 71

Architectural overview

(diagram: per-pod Envoy sidecars front services A and B; Pilot pushes routing & load-balancing info over the control plane API; Mixer performs policy checks and collects telemetry; Istio-Auth delivers TLS certs to Envoy; service-to-service traffic is secured with mTLS. Traffic is transparently intercepted and proxied; the application is unaware of Envoy's presence.)

45 of 71

Bookinfo application sample

46 of 71

Cross-Cloud CI Overview

Cloud-native Network Functions Seminar

August 28, 2018


47 of 71

Presented by:

Taylor Carpenter, Vulk Coop

taylor@vulk.coop


48 of 71

Cross-Cloud CI Project Overview

Why?


49 of 71

Cross-Cloud CI Project Overview

Why? CNCF ecosystem is growing rapidly with new projects and cloud providers!


50 of 71

Cross-Cloud CI Project Overview

  • The CNCF ecosystem is large, diverse and continues to grow. CNCF would like to ensure cross-project interoperability and cross-cloud deployments of all cloud native technologies and show the daily status of builds and deployments on a status dashboard.

Why?


51 of 71

Cross-Cloud CI Project Overview

What?

  • Cross-cloud testing system
  • Status repository server
  • Status dashboard

  • The CNCF ecosystem is large, diverse and continues to grow. CNCF would like to ensure cross-project interoperability and cross-cloud deployments of all cloud native technologies and show the daily status of builds and deployments on a status dashboard.

Why?


52 of 71

Build and provision CNCF projects

Graduated

Incubating

Sandbox


53 of 71

Project CI artifacts and non-CNCF projects

Implemented


54 of 71

Deploy to public/bare metal/private clouds

Implemented

In Progress


55-64 of 71

(image-only slides; no text content)

65 of 71

Technology Overview


66 of 71

CI System Technology Overview

Unified CI/CD platform:

  • GitLab

Cross-cloud provisioning:

  • Terraform, Cloud-init, and per cloud K8s configuration

App deployments:

  • K8s manifest management with Helm

E2e tests:

  • Custom containers + Helm

Automated builds and deployments:

  • Git + per-project YAML configuration
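The per-project configuration above could look roughly like this `.gitlab-ci.yml` (a hypothetical sketch, not the actual cross-cloud pipeline; stage names and commands are placeholders):

```yaml
# Hypothetical .gitlab-ci.yml: build the project, provision a cluster
# with Terraform, deploy with Helm (commands are placeholders).
stages:
  - build
  - provision
  - deploy

build:
  stage: build
  script:
    - make release                          # project-specific build step

provision:
  stage: provision
  script:
    - terraform init
    - terraform apply -auto-approve         # cloud credentials via CI variables

deploy:
  stage: deploy
  script:
    - helm upgrade --install my-project ./charts/my-project   # placeholder chart
```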


67 of 71

Dashboard Technology Overview

Frontend:

  • Vue.js

Status repository:

  • Elixir and Erlang

Automated builds and deployments:

  • Git + per-project YAML configuration


68 of 71

Q&A


69 of 71

How to Collaborate

Attend CI WG meetings:

  • https://github.com/cncf/wg-ci
  • 4th Tuesday of the month at 11:00am Pacific Time
  • Next meeting is on Tuesday, Sept 25th

Subscribe to the CNCF CI public mailing list:

Create issues on GitHub:

Join the #cncf-ci channel on slack:

  • Request an invite at https://slack.cncf.io/
  • cloud-native.slack.com



70 of 71

Connect with Cross-Cloud CI

@crosscloudci

@crosscloudci

crosscloudci@vulk.coop


71 of 71

For more details and an in-depth demo, please contact Dan Kohn & the Cross-Cloud CI team at the CNCF booth at #OSSNA18

Also presenting at:

  • KubeCon + CloudNativeCon China
    • November 13-14, Shanghai
