COE
CNI specification
K8s Node internal network
eth2
ovsbrk8s
Pod1 (10.11.1.4) (NameSpace 1)
tap1
Pod2 (10.11.1.5) (NameSpace 2)
tap2
Veth pair
Veth pair
K8s Node (Worker 2)
192.168.33.13
ovs-tap1
ovs-tap2
eth1
192.168.50.13
192.168.33.13
eth2
ovsbrk8s
Pod1 (10.11.2.8) (NameSpace 1)
tap1
Pod2 (10.11.2.9) (NameSpace 2)
tap2
Veth pair
Veth pair
K8s Node (Worker 1)
192.168.33.12
ovs-tap1
ovs-tap2
eth1
192.168.50.12
192.168.33.12
192.168.33.1
K8s Node (master)
192.168.33.11
eth1
192.168.33.12
eth1
COE watcher
Odl-CNI
Odl-CNI
Odl-CNI
K8s API server
K8s kubelet
K8s kubectl
Internal plane
External network
External plane
Public IP-Address
Public IP-Address
COE - CNI (current config)
{
"cniVersion":"0.3.0",
"name":"odl-cni",
"type":"odlovs-cni",
"mgrPort":6640,
"mgrActive":true,
"manager":"192.168.33.1",
"ovsBridge":"ovsbrk8s",
"ctlrPort":6653,
"ctlrActive":true,
"controller":"192.168.33.1",
"externalIntf":"eth2",
"externalIp":"192.168.50.13",
"ipam":{
"type":"host-local",
"subnet":"10.11.1.0/24",
"routes":[{
"dst":"0.0.0.0/0"
}],
"gateway":"10.11.1.1"
}
}
{
"cniVersion":"0.3.0",
"name":"odl-cni",
"type":"odlovs-cni",
"mgrPort":6640,
"mgrActive":true,
"manager":"192.168.33.1",
"ovsBridge":"ovsbrk8s",
"ctlrPort":6653,
"ctlrActive":true,
"controller":"192.168.33.1",
"externalIntf":"eth2",
"externalIp":"192.168.50.12",
"ipam":{
"type":"host-local",
"subnet":"10.11.2.0/24",
"routes":[{
"dst":"0.0.0.0/0"
}],
"gateway":"10.11.2.1"
}
}
K8s Node (Worker 1)
192.168.33.12
K8s Node (Worker 2)
192.168.33.13
K8s ODL communication
COE watcher
K8s-Master Node
192.168.33.11
COE Agent +
COE CNI
K8s-Minion1 Node
192.168.33.12
COE Agent +
COE CNI
192.168.33.10
1- COE Watcher → send node info to ODL
2- ODL → store the node info in MD-SAL
3- ODL user → create the required network for the K8s node
4- COE Agent → request network config from ODL
5- ODL → validate the COE Agent request by matching the node IP
6- ODL → reply with the network info (subnet, gateway IP address, etc.); an illustrative payload sketch follows below
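As a rough illustration of steps 1 and 6, the payloads exchanged could look like the sketch below. The key names are placeholders for discussion only, not the actual ODL COE YANG/RESTCONF model; the values are taken from the diagrams above.
{
  "node-registration": { // step 1: what the COE Watcher could send for a worker node
    "node-name": "k8s-minion1",
    "node-ip": "192.168.33.12",
    "external-interface": "eth2"
  },
  "network-info-reply": { // step 6: what ODL could return to the COE Agent
    "subnet": "10.11.2.0/24",
    "gateway": "10.11.2.1",
    "ovs-bridge": "ovsbrk8s"
  }
}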
COE - CNI (current config)
{
"cniVersion": "0.3.0",
"name": "odl-cni",
"type": "odlcoe",
"ovsConfig": {
"isMaster": true,
"manager": "192.168.33.1",
"mgrPort": 6640,
"mgrActive": true,
"ovsBridge": "brk8s",
"controller": "192.168.33.1",
"ctlrPort": 6653,
"ctlrActive": true
},
"ipam": { // just for now till we figure out what we will do with odl dhcp service
"type": "host-local",
"subnet": "10.11.0.0/16",
"rangeStart": "10.11.2.10",
"rangeEnd": "10.11.2.50",
"gateway": "192.168.33.0",
"routes": [
{ "dst": "0.0.0.0/0" }
]
}
}
K8s Node internal network
eth1
Brk8s
Pod1 (NameSpace 1)
tap1
Pod2 (NameSpace 2)
tap2
OVS port 1
OVS port 2
OVS port
ovs
K8s Node
192.168.33.12
Discussion: how is networking inside the Pod (containers sharing the same NetNS) physically constructed? (See the example Pod manifest after the diagram.)
Pod (n) has NetNS (x)
OVS port (x)
Container 1
Google pause (infra container)
Container
tap(x)
tap(x)
Container 2
tap(x)
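To make this concrete: all containers in a Pod are joined to the network namespace of the pause (infra) container, so they share one tap/eth0 and one IP address and can reach each other over localhost. The CNI plugin is therefore invoked once per Pod (against the pause container's NetNS), not once per application container. A minimal two-container Pod looks like the sketch below (names and images are illustrative placeholders):
{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": { "name": "two-container-pod" },
  "spec": {
    "containers": [
      { "name": "container-1", "image": "nginx" },
      { "name": "container-2", "image": "busybox", "command": ["sleep", "3600"] }
    ]
  }
}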
K8s Node internal network
eth1
Br-int
Pod1 (NameSpace 1)
tap1
Pod2 (NameSpace 2)
tap2
OVS port 1
OVS port 2
OVS port
ovs
eth1
Br-int
Pod1 (NameSpace 1)
tap1
Pod2 (NameSpace 2)
tap2
Veth pair
Veth pair
OVS port
ovs
K8s Node
192.168.33.13
ovs-tap1
ovs-tap2
K8s Node
192.168.33.12
Which one should we use? (CNI binary under /opt/cni/bin/)
COE - CNI (discussion)
/etc/cni/net.d/coe-odl.conf
// This JSON applies if we let the agent read the subnet IPs from ODL;
// the agent then writes the coe-odl.conf file so that K8s is
// able to assign the IP address to the Pod at creation time.
{
"name": "odl-coe",
"type": "ovs-odl",
"bridge": "br-int",
"isDefaultGateway": true,
"forceAddress": false,
"ipMasq": true,
"hairpinMode": true,
// we specify the ipam type as host-local and fill in the other info after
// connecting to OpenDaylight (an example of the result host-local returns follows after this config)
"ipam": { // this for linux-bridge case will need to write the IPs at coe-odl.conf
"type": "host-local",
"subnet": "10.11.0.0/16",
"rangeStart": "10.11.1.10",
"rangeEnd": "10.11.1.50",
"gateway": "10.11.1.1",
// default gateway at the node, if we didn't specify a gateway.
"routes": [ { "dst": "0.0.0.0/0" } ]
},
"ovs": {
"bridge": "br-int", // this is ovs bridge
"vxlan": "00042" // as example
}
}
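For reference, when the plugin above delegates to host-local, host-local prints a CNI 0.3.0 IPAM result on stdout roughly like the sketch below (the exact address is whichever lease in the configured range is free; values here follow the config above):
{
  "cniVersion": "0.3.0",
  "ips": [
    { "version": "4", "address": "10.11.1.10/24", "gateway": "10.11.1.1" }
  ],
  "routes": [
    { "dst": "0.0.0.0/0" }
  ]
}
The ovs/odl plugin is then responsible for configuring that address on the Pod's interface and returning the combined result to the kubelet.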
/etc/cni/net.d/coe-odl.conf
// This JSON applies if we use a DHCP service to assign the IPs.
// Can we use the DHCP service from netvirt?
{
"name": "odl-coe",
"type": "ovs-odl",
"bridge": "br-int",
"isDefaultGateway": true,
"forceAddress": false,
"ipMasq": true,
"hairpinMode": true,
// the ipam-odl plugin should be able to assign the Pod IP address; how is this communication supposed to happen?
"ipam": {
"type": "ipam-odl"
},
// OR we can use the default dhcp plugin as below, but that assumes a DHCP
// server is already running in the network, and we would still need to set up
// the dhcp daemon configuration.
"ipam": {
"type": "dhcp"
},
"ovs": {
"bridge": "br-int", // this is ovs bridge
"vxlan": "00042" // as example
}
}
Watcher & Agent (discussion)
What info can the watcher provide?
Do we need an Agent besides the watcher?
K8s Nodes network
COE watcher
K8s-Master Node
192.168.33.11
OVS (br-int)
COE Agent +
COE CNI
K8s-Minion1 Node
192.168.33.12
OVS (br-int)
COE Agent +
COE CNI
K8s-Minion(n) Node
192.168.33.xx
OVS (br-int)
COE Agent +
COE CNI
192.168.33.10
K8s Node internal network
- This is the one suggested by K8s, but I don't think we want to use a Linux bridge, do we?
- Less work on the ODL side.
K8s service types
ExternalName: maps the service to the contents of the externalName field (e.g. foo.bar.example.com) by returning a CNAME record with its value. No proxying of any kind is set up. This requires version 1.7 or higher of kube-dns. ExternalIP (this is what we care about): in the ServiceSpec, externalIPs can be specified along with any of the ServiceTypes. In the example below, my-service can be accessed by clients on 192.168.50.120:80 (externalIP:port).
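A Service manifest for that example could look like the sketch below (JSON form; the selector and targetPort are illustrative placeholders):
{
  "apiVersion": "v1",
  "kind": "Service",
  "metadata": { "name": "my-service" },
  "spec": {
    "selector": { "app": "MyApp" },
    "ports": [
      { "protocol": "TCP", "port": 80, "targetPort": 9376 }
    ],
    "externalIPs": [ "192.168.50.120" ]
  }
}
Traffic arriving on 192.168.50.120:80 at a node that owns or routes that address is then forwarded by kube-proxy to the selected Pods.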
Mapping between K8s net and netvirt/neutron APIs
List of operations (based on the email); a possible-mapping sketch follows after the list:
1- Network-create: (Can we consider this as a net-ns create?)
create elan instance,
create VPN for external network and external VPN interface for NAT, if it's an external network
2- Subnet-create: (What is the relation between a subnet and a net-ns? Can we have different subnets for the same net-ns? NO)
create external subnet Vpn Instance for PNF and hairpinning cases
3- Port create: (same I guess)
Create elan interface
Create vpn interface if its part of a VPN
Creates DMAC related flows if it’s a floating IP port.
4- Nova boot (refer to port-create for the operations)
Creates a neutron port.
5- Router-create (We have kube-router, but it does not work with the OVS bridge. We may not need it since ODL handles this.)
Creates an internal VPN Instance
Might handle SNAT/DNAT if some external n/w / Floating IP configs have changed.
6- Router-interface-add
Creates a neutron port for the router interface
Adds a vpn interface for the router interface port
Creates a vpn interface for all the ports belonging to the subnet which was added to the router.
<<<< There are commands that update fixed IPs / extra routes on the router. These will update the corresponding VPN interface. >>>>
<<<< There are cases where an external gateway is set for a router. This leads to the addition of SNAT flows and further NAT processing. >>>>
7- Bgpvpn create
Creates a vpn instance
8- Bgpvpn-net-assoc-create
Creates a vpn interface for all the ports of the subnet belonging to the network
9- Bgpvpn-router-assoc-create
Moves all the ports from the vpn interfaces belonging to the router to the respective vpn interfaces belonging to the bgpvpn
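Purely as a discussion aid, one possible mapping that reflects the questions in the list above could be sketched as below; none of this is a defined API, it only restates the candidate correspondences.
{
  "k8s-net-ns": "NameSpace 1",               // candidate for network-create (ELAN instance)
  "node-subnet": "10.11.2.0/24",             // candidate for subnet-create (one subnet per net-ns)
  "pod-interface": "tap1",                   // candidate for port-create (ELAN/VPN interface)
  "service-externalIP": "192.168.50.120"     // candidate for floating IP / NAT handling
}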
Mapping between K8s net and netvirt/neutron APIs
Things to keep in mind and investigate:
IPAM: host-local and dhcp
{
"ipam":{
"type":"host-local",
"subnet":"10.11.1.0/24",
"routes":[{
"dst":"0.0.0.0/0"
}],
"gateway":"10.11.1.1"
}
}
{
"ipam":{
"type":"host-local",
"subnet":"10.11.2.0/24",
"routes":[{
"dst":"0.0.0.0/0"
}],
"gateway":"10.11.2.1"
}
}
K8s Node (Worker 1)
192.168.33.12
K8s Node (Worker 2)
192.168.33.13
Host-local: allocates Pod IPs from a range configured per node and records the allocations on the node's local disk, so each node needs its own non-overlapping subnet (as in the two configs above).
DHCP: the CNI dhcp IPAM plugin obtains a lease from an existing DHCP server on the network, and requires a dhcp daemon running on each node.