Summary

Rationale

User stories (simplified)

Assumptions

Design

Glossary

Deployment Options

Hardware Load Balancer

Virtual Load Balancer

Sequence Diagram

LBaaS Architecture

Object Model

API reference

Implementation

Unresolved issues

External links

Summary

This document describes a proposal for integrating Load Balancing as a Service (LBaaS) with Quantum.

The intent is to extend the Quantum API with two subsets of API calls: a Provider API exposed to cloud administrators and a Tenant API exposed to tenant users.

The initial goal is to build a proof of concept and present it at the OpenStack Design Summit in San Diego, October 15-18.

Rationale

Integration of the load balancer service with Quantum is a step towards providing all networking functions to OpenStack tenants through a single comprehensive API, and towards evolving Quantum up the networking stack.

User stories (simplified)

As a cloud provider, I want to manage a heterogeneous network with L3-L7 load balancing services provided by hardware and software appliances, using a single Provider API.

As a cloud provider, I want the Provider API to be isolated from cloud tenants.

As a cloud service tenant admin, I want to add virtual load balancers, each with a back-end server farm of instances from my tenant, to my infrastructure on demand.

As a cloud service tenant admin, I want to configure the parameters of a virtual load balancer, e.g. the balancing algorithm, session handling, and health check methods.

As a cloud service tenant admin, I want to have internal load balancers (accessible only from the tenant network) and external load balancers (accessible from the public network via a floating IP).

Assumptions

Design

Glossary

In this document we will use the following terms:

Load Balancing Device - a hardware appliance or virtual machine that performs load balancing.

Load Balancer - a single load balancing system that includes a Virtual Server and Nodes.

Virtual Server - a client's representation of services behind the load balancer: a combination of a virtual IP, a port, and other traffic selection rules.

Syn.: Virtual Server (F5, Cisco ANM), Virtual IP (Cisco ACE), Service (Barracuda)

Node - a back-end service/application that is balanced by the load balancer.

Syn.: Resources member (F5), RServer (Cisco), Real Server (Barracuda)

Sticky - a traffic mark that allows client sessions to be tracked and bound to the same back-end service.

Syn.: Persistence (F5), Sticky Group (Cisco), Service Persist (Barracuda)

Probe - an algorithm for health monitoring of nodes.

Syn.: Health Monitor (F5), Probe (Cisco), Service Monitor (Barracuda)

Algorithm - the method used to distribute traffic across nodes.

Syn.: Scheduling Policy (Barracuda)
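
To make the relationships between these terms concrete, here is a minimal Python sketch of how they could be modeled. It is purely illustrative; the class and field names are assumptions, and the actual schema is described in the Object Model section and the linked data-model page.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Probe:
    # Health-monitoring method applied to nodes (e.g. "HTTP", "TCP").
    type: str
    delay: int = 5          # seconds between checks
    max_retries: int = 3

@dataclass
class Sticky:
    # Session persistence mark (e.g. "SOURCE_IP", "HTTP_COOKIE").
    type: str

@dataclass
class Node:
    # Back-end service/application balanced by the load balancer.
    address: str
    port: int
    weight: int = 1

@dataclass
class VirtualServer:
    # Client-facing side: virtual IP, port and traffic selection rules.
    address: str
    port: int
    protocol: str = "HTTP"

@dataclass
class LoadBalancer:
    # A single load balancing system: a virtual server plus its nodes.
    name: str
    algorithm: str = "ROUND_ROBIN"
    virtual_server: Optional[VirtualServer] = None
    nodes: List[Node] = field(default_factory=list)
    probes: List[Probe] = field(default_factory=list)
    sticky: Optional[Sticky] = None
```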

Deployment Options

The deployment of a load balancer depends on its type: hardware balancers can be shared between several tenants, while a software (virtual) balancer can serve only a single tenant.

Virtual servers created on an LB device can be internal or external. Internal virtual servers are accessible only from the tenant network; external ones are bound to a floating IP and can be accessed from the public network.

Management of LB devices is done via a separate management interface bound to the provider network. This interface is used by the LBaaS agent.
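
Since the agent has to handle both shared hardware devices and per-tenant virtual appliances, one natural structure is a device-type driver dispatch. The following is a minimal sketch with hypothetical driver and method names; the actual driver interface is not specified in this document.

```python
# Hypothetical driver registry keyed by device type. The real driver API
# would be defined by the LBaaS agent; the names here are assumptions.
class HardwareLBDriver:
    shared_across_tenants = True

    def create_virtual_server(self, device, tenant_id, vip):
        # Shared device: each tenant's traffic is isolated in its own VLAN.
        print(f"Configuring {vip} on shared device {device} (tenant {tenant_id})")

class VirtualLBDriver:
    shared_across_tenants = False

    def create_virtual_server(self, device, tenant_id, vip):
        # Per-tenant appliance: the VM already belongs to the tenant's network.
        print(f"Configuring {vip} on tenant appliance {device}")

DRIVERS = {"hardware": HardwareLBDriver(), "virtual": VirtualLBDriver()}

def provision(device_type, device, tenant_id, vip):
    DRIVERS[device_type].create_virtual_server(device, tenant_id, vip)

provision("hardware", "bigip-1", "tenant-a", "10.0.0.5:80")
```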

Hardware Load Balancer

A hardware load balancer device can be shared between several tenants; tenant separation is done using VLANs. The management interface is linked to the provider network and used by the LBaaS agent.
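
Purely for illustration, the VLAN-per-tenant separation on a shared device could be modeled as below: each tenant is assigned its own VLAN out of a provider-managed pool, and that tenant's virtual servers are configured inside it. The VLAN range and function name are assumptions, not part of the design.

```python
# Illustrative only: allocate one VLAN per tenant on a shared hardware device.
VLAN_POOL = range(100, 200)       # assumed provider-managed VLAN range
_allocations = {}                 # tenant_id -> VLAN id

def vlan_for_tenant(tenant_id):
    """Return the tenant's VLAN, allocating a free one on first use."""
    if tenant_id not in _allocations:
        used = set(_allocations.values())
        _allocations[tenant_id] = next(v for v in VLAN_POOL if v not in used)
    return _allocations[tenant_id]

print(vlan_for_tenant("tenant-a"))  # e.g. 100
print(vlan_for_tenant("tenant-b"))  # e.g. 101
```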

The workflow of creating an LB is the following:

Possible options for LB creation:

Virtual Load Balancer

A virtual load balancer device belongs to a single tenant only. In some cases it may be reasonable to create a separate LB instance per application, but the user is not required to do so. The management interface is bound to the provider network and used by the LBaaS agent.

A virtual LB is created from a template; the user may choose to reuse existing balancers or launch a new one. The LBaaS scheduling mechanism may assist the user by showing the current capacity of running balancers and proposing to split applications between several balancers.
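
Such a scheduling mechanism could, for example, rank running balancers by spare capacity before proposing reuse versus launching a new instance. Below is a minimal sketch under an assumed utilization metric; the field names and threshold are hypothetical.

```python
# Hypothetical capacity-based choice between reusing a running balancer
# and launching a new one; thresholds and fields are illustrative.
def pick_balancer(running_balancers, max_utilization=0.8):
    """running_balancers: list of dicts like {"id": ..., "utilization": 0.0-1.0}."""
    candidates = [b for b in running_balancers if b["utilization"] < max_utilization]
    if not candidates:
        return None  # caller should launch a new LB instance from the template
    return min(candidates, key=lambda b: b["utilization"])

print(pick_balancer([{"id": "lb-1", "utilization": 0.9},
                     {"id": "lb-2", "utilization": 0.4}]))
```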

The workflow of creating an LB is the following:

Sequence Diagram

LBaaS Architecture

The intent is to make LBaaS a part of Quantum and have its API and core implemented as a Quantum extension. Device-dependent drivers are implemented according to the Quantum agent model. Communication between these parts goes over the standard MQ channel (AMQP).

Note:

This scheme may not be in line with the typical Quantum architecture, where the plugin works with the database and the agent talks to the plugin via RPC. For PoC purposes the LBaaS agent will initially use its own private database instead of the Quantum database; the appropriate long-term solution will be decided later.
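
To make the extension/agent split concrete, here is a minimal sketch of how the extension side could cast a message to the LBaaS agent over AMQP. It uses kombu directly for brevity; the actual implementation would go through Quantum's RPC layer, and the exchange, routing key and method names below are assumptions.

```python
from kombu import Connection, Exchange

# Assumed names; the real topology would follow Quantum's RPC conventions.
QUANTUM_EXCHANGE = Exchange("quantum", type="topic")
LBAAS_AGENT_TOPIC = "lbaas_agent"

def cast_to_agent(method, **kwargs):
    """Fire-and-forget call from the LBaaS extension to the device agent."""
    message = {"method": method, "args": kwargs}
    with Connection("amqp://guest:guest@localhost//") as conn:
        producer = conn.Producer(serializer="json")
        producer.publish(message,
                         exchange=QUANTUM_EXCHANGE,
                         routing_key=LBAAS_AGENT_TOPIC,
                         declare=[QUANTUM_EXCHANGE])

cast_to_agent("create_load_balancer", tenant_id="demo", lb_name="web-lb")
```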

Extension:

Agent:

Object Model

The model may be split into Admin and Tenant parts. The Admin part is exposed to cloud admins and allows managing load balancing devices; the Tenant part is exposed to tenant users and allows operating on logical items.

Refer to https://github.com/Mirantis/openstack-lbaas/wiki/Data-model for more details on the model.
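
Purely as an illustration of this split (the authoritative schema is on the data-model page linked above), the two parts can be thought of as operating on two disjoint groups of resources:

```python
# Illustrative grouping only; resource names are assumptions, see the
# linked data-model wiki page for the actual schema.
ADMIN_RESOURCES = {
    "devices": "hardware or virtual load balancing devices managed by the provider",
}

TENANT_RESOURCES = {
    "loadbalancers": "logical balancers owned by a tenant",
    "virtual_servers": "VIP, port and traffic selection rules",
    "nodes": "back-end members of the server farm",
    "probes": "health monitors attached to nodes",
    "stickies": "session persistence settings",
}
```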

API reference

The full REST API is described at https://github.com/Mirantis/openstack-lbaas/wiki/REST-API
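
As a usage sketch only, creating a load balancer through the extended Quantum API could look like the snippet below; the endpoint path, payload fields and authentication header are assumptions here, and the authoritative definitions live on the wiki page above.

```python
import requests

# Hypothetical endpoint and payload; consult the REST API wiki page for the real definitions.
QUANTUM_URL = "http://quantum-server:9696/v1.0"
TOKEN = "keystone-token-goes-here"

resp = requests.post(
    f"{QUANTUM_URL}/loadbalancers",
    headers={"X-Auth-Token": TOKEN},
    json={"loadbalancer": {"name": "web-lb", "algorithm": "ROUND_ROBIN"}},
)
resp.raise_for_status()
print(resp.json())
```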

Implementation

Unresolved issues

External links

  1. https://docs.google.com/document/d/10HpTpvRzjqDhnGnWFdMv19mENbb_TsMMvAAozzSu8hc/edit
  2. https://docs.google.com/open?id=0B2Sy8nv-GIrVdVZhUWIyRU80VE0
  3. https://docs.google.com/document/pub?id=11WWy7MQN1RIK7XdvQtUwkC_EIrykEDproFy9Pekm3wI
  4. https://docs.google.com/a/mirantis.com/document/pub?id=1DRgQhZJ73EyzQ2KvzVQd7Li9YEL7fXWBp8reMdAEhiM