Kubernetes Networks
|  | Flannel | Weave Net | Calico | Cilium | CloudNative Labs (kube-router) | Romana | Contiv | Tungsten Fabric | kopeio | amazon-vpc-cni-k8s |
|---|---|---|---|---|---|---|---|---|---|---|
| Company | Red Hat | WeaveWorks | Tigera Inc | Isovalent | - | Pani Networks Inc | Cisco | Juniper | Kopeio | Amazon |
| Latest Stable Version | 0.11.0 | 2.6.0 | 3.10.1 | 1.6.5 | 0.2.1 | 2.0.2 | 1.2 | 5.0.1 | experimental | 1.5.5 |
| Last Release Date | January 2019 | November 2019 | March 2020 | - | - | - | - | - | - | - |
| Start Date | July 2014 | August 2014 | July 2014 | December 2015 | April 2017 | November 2015 | December 2014 | 2012 | May 2016 | September 2017 |
| Language | Go | Go | Go | Go | Go | Go | Go | C / C++ | Go | Go |
| Minimum OS Version | - | Linux kernel > 3.8 | RHEL 7, CentOS 7, Ubuntu 16.04, Debian 8 | Linux kernel >= 4.9 | - | - | CentOS 7, Ubuntu 16.04 | Linux kernel > 2.6 | - | - |
| Minimum Kubernetes Version | 1.6 | 1.6 | 1.6 | 1.6 | 1.6 | 1.8 | 1.8 | 1.8 | 1.8 | 1.8 |
| IP Version | IPv4 | IPv4 | IPv4, IPv6 | IPv4, IPv6 | IPv4 | IPv4 | IPv4, IPv6 | IPv4, IPv6 | IPv4 | IPv4, IPv6 |
| Open Source | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| Encryption | Experimental | NaCl library | No | Yes (IPSec) | No | No | No | No | Experimental | No |
| Network Policy | - | Ingress, Egress | Ingress, Egress | Ingress, Egress | Ingress, Egress | Ingress, Egress | Ingress, Egress | Ingress, Egress | No | No |
| Network Policy Auditing | No | No | Paid | Yes (via Cilium's Hubble) | No | No | No | No | No | No |
| Recommended Max Nodes | - | 500 | 5000 | 5000+ | - | - | - | - | - | 2000 |
| Default Network Model | Layer 2 VXLAN | Layer 2 VXLAN | Layer 3 | Layer 3 (L2 available with chaining) | Layer 3 | Layer 3 | Layer 2, Layer 3 or ACI options | - | Layer 2, VXLAN or IPSEC | Layer 3 |
| Layer 2 Encapsulation | VXLAN | VXLAN | - | - | - | - | VXLAN | VXLAN | VXLAN | - |
| Layer 3 Routing | iptables | iptables, kube-proxy | iptables, kube-proxy | BPF, kube-proxy | IPVS, iptables, ipsets | iptables | iptables | TF vRouter | ip route | ENI |
| Layer 3 Encapsulation | - | Sleeve (fallback) | IPIP or VXLAN (optional) | VXLAN or Geneve (optional) | IPVS/LVS DR mode, GRE/IPIP | - | VLAN | MPLSoUDP, MPLSoGRE, VXLAN | - | - |
| Layer 4 Route Distribution | - | - | BGP | CRD, kvstore, BGP | BGP | BGP, OSPF | BGP | BGP | - | - |
| vNIC per Container | No | Yes | Yes | Yes | Yes | No | Yes | - | No | No |
| Multicast Support | No | Yes | No | No | No | No | Yes | - | No | No |
| Subnet Per | Host | Cluster | One or more of Cluster / Host / Namespace / Deployment | Host, Cluster or Custom (via CRD) | Host | Host | Overlapping IP pools | VRFs | Host | Cluster |
| Isolation | cidr | cidr, network | label, host, cidr, network sets | label, services, AWS metadata, entities (host, cluster, world), cidr, dns, L7 (http, kafka, cassandra, memcached, ...) | cidr | cidr | label, cidr | - | No | No |
| Load Balancing | No | Yes | Yes | Yes, can replace kube-proxy | Yes | Yes | Yes | - | No | No |
| Multi Cluster Routing | No | Yes | Yes | Yes | Yes | Yes | Yes | - | No | Yes |
| Partially Connected Networks | No | Yes | No | No | No | No | No | - | No | No |
| IP Overlap Support | No | No | No | No | No | No | Yes | - | No | No |
| Name Service | No | Yes | No | No | No | No | No | - | No | No |
| Datastore | kubernetes CRDs or etcdv3 | file inside pods | kubernetes CRDs or etcdv3 | CRD, etcd3, consul | kubernetes etcd | kubernetes etcd | kubernetes etcd, etcd or consul | - | kubernetes etcd | kubernetes etcd |
| Paid Support | No | Yes | Yes | Yes | No | Yes | No | - | No | No |
| Integrations | Flannel + Calico | - | Flannel + Calico | Cilium + Kube Router | Cilium + Kube Router | - | - | - | - | amazon-vpc-cni-k8s + Calico |
| Platforms | Linux, Windows | Linux | Linux, Windows | Linux | Linux | Linux | Linux | Linux | Linux | Linux |

Docs

- Flannel: https://coreos.com/flannel/docs/latest/
- Weave Net: https://www.weave.works/docs/net/latest/overview/
- Calico: https://docs.projectcalico.org/v3.3/introduction/
- Cilium: http://docs.cilium.io/en/stable/
- CloudNative Labs: https://github.com/cloudnativelabs/kube-router
- Romana: https://romana.readthedocs.io/en/latest/
- Contiv: http://contiv.github.io/documents/
- Tungsten Fabric: https://github.com/tungstenfabric, https://github.com/Juniper/contrail-controller
- kopeio: https://github.com/kopeio/networking
- amazon-vpc-cni-k8s: https://github.com/aws/amazon-vpc-cni-k8s
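For the plugins whose Network Policy cell reads "Ingress, Egress", what gets enforced is the standard Kubernetes NetworkPolicy resource. A minimal sketch of such a policy, built as a plain Python dict and printed as JSON (the namespace, labels and CIDR are illustrative assumptions, not values from the table):

```python
import json

# Minimal NetworkPolicy: allow ingress to pods labelled app=web only from
# pods labelled app=api, and egress only to 10.0.0.0/24.
policy = {
    "apiVersion": "networking.k8s.io/v1",
    "kind": "NetworkPolicy",
    "metadata": {"name": "web-allow", "namespace": "default"},
    "spec": {
        # Which pods this policy applies to.
        "podSelector": {"matchLabels": {"app": "web"}},
        # Declaring both types means anything not explicitly allowed
        # below is denied, in both directions.
        "policyTypes": ["Ingress", "Egress"],
        "ingress": [
            {"from": [{"podSelector": {"matchLabels": {"app": "api"}}}]}
        ],
        "egress": [
            {"to": [{"ipBlock": {"cidr": "10.0.0.0/24"}}]}
        ],
    },
}

print(json.dumps(policy, indent=2))
```

Note that under plugins whose Network Policy cell reads "No", the API server still accepts such an object; it is simply never enforced on the wire.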
Why?

- **Flannel:** Layer 2 solution. Simple and mature. Overlays are useful when network address space is limited, and they also mostly auto-configure.
- **Weave Net:** Combines Layer 2 overlay networking with network policies and other features. Best solution for partially connected networks.
- **Calico:** Layer 3 solution. Good network policy support. Default on most Kubernetes distributions. Easy to debug on hosts by looking at the route table. BGP allows for access both inside and outside the cluster.
- **Cilium:** Security focused. Uses BPF, which is faster than iptables, to enforce identity-based policies. Policies also operate at Layer 7, allowing for application-specific enforcement. The cluster mesh feature is simpler than BGP to configure.
- **CloudNative Labs:** Single Go binary built from the ground up for Kubernetes. Uses new IPVS/LVS kernel features to improve service load-balancing performance. Also does direct server return to improve latency.
- **Romana:** Aims for performance by using native Linux routing, iptables and no encapsulation.
- **Contiv:** Integrates with on-prem Cisco ACI. Has a cool bandwidth network policy.
- **kopeio:** Really simple. Uses the default Kubernetes network and sets up Layer 3 routes between pods using ip route.
- **amazon-vpc-cni-k8s:** Best and fastest CNI when running Kubernetes on AWS. Allocates ENIs to each pod so all standard AWS networking can be used for routing. Note: you should still use Calico for network policy.

Scenario

- **Flannel:** On-prem or custom cloud where native routing isn't possible
- **Weave Net:** Small to medium size on-prem or custom cloud
- **Calico:** On-prem with native routing, or cloud Kubernetes services
- **Cilium:** On-prem (direct routing), cloud integrated, security focused
- **CloudNative Labs:** On-prem or custom cloud, latency focused
- **Romana:** Large scale on-prem or AWS
- **Contiv:** On-prem with ACI investment
- **amazon-vpc-cni-k8s:** AWS EKS, or AWS custom

Why Not?

- **Flannel:** Native routing is faster and easier to debug. You also need to use Calico if you want network policies.
- **Weave Net:** Some people are wary of overlay networks because they aren't as easy to debug as native routing. It's a full mesh, so very large clusters will require custom config with autodiscovery disabled.
- **Calico:** IPIP mode is needed when routing between subnets (AWS AZs), which negates some of the performance benefits vs an overlay. BGP is slightly scary to some people.
- **Cilium:** Requires a later kernel version. The overlay can be disabled if direct routing is preferred. Depending on circumstances you may need to pair it with BIRD or kube-router for BGP.
- **CloudNative Labs:** Similar to Calico in that it uses IPIP by default to encapsulate traffic between subnets. Quite a new project, and although it's in use in production at some companies it's still not v1.
- **Romana:** Community not very large.
- **Contiv:** No defaults make the recommended setup confusing.
- **kopeio:** Still in experimental or alpha stage.
- **amazon-vpc-cni-k8s:** Max pods on each host are limited to the ENIs available to that instance type. This creates real-world issues when not managed correctly. You still need to use Calico for network policy, so it may be easier to use Calico for everything even if there is a slight performance hit using IPIP between AZs.
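The ENI pod ceiling mentioned above can be made concrete: the amazon-vpc-cni-k8s project documents the per-node limit as ENIs × (IPv4 addresses per ENI − 1) + 2. A small sketch (the m5.large figures are taken from AWS's published ENI limits; treat other inputs as illustrative):

```python
def max_pods(enis: int, ips_per_eni: int) -> int:
    """Per-node pod ceiling under amazon-vpc-cni-k8s.

    Each ENI keeps one IPv4 address as its own primary address and
    donates the rest to pods; the +2 accounts for pods that use host
    networking and so need no ENI-backed IP.
    """
    return enis * (ips_per_eni - 1) + 2

# m5.large supports 3 ENIs with 10 IPv4 addresses each (AWS ENI limits).
print(max_pods(3, 10))  # 29 pods per m5.large node
```

This is why instance type choice matters so much for pod density on AWS: a node with plenty of spare CPU and memory can still refuse to schedule pods once its ENI-backed IPs are exhausted.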
References

[0] https://cilium.io/blog/2019/04/24/cilium-15