LBaaS proposal

Summary

Rationale

Key features

User stories

Automatic Device Selection

“Sticky” sessions

Dynamically adding/removing VMs to LB

Graceful Exclusion of a VM from LB

Health Monitoring and High Availability

SSL offload/acceleration

L7 traffic shaping

DoS attack protection

Design & Implementation

API Client Authentication

Manager Component Functions

Service Database

DriverMap

API

Horizon Integration

Summary

This specification introduces the Load Balancer as a Service (LBaaS) concept. The proposed service makes it possible to manage multiple hardware- and software-based load balancers in an OpenStack cloud environment through a RESTful API, and to provide LB services to OpenStack tenants. It is designed specifically for OpenStack, but can also be used as a standalone service to manage a set of load balancers via a single unified API.

Rationale

Load balancing is an important part of cloud infrastructure that spreads incoming traffic across multiple back-end application instances. The most common example is HTTP, but the list also includes RADIUS, RDP, SIP, and various other protocols on top of TCP. Ultimately, load balancers help distribute payloads across application instances, increase application performance, and prevent application downtime. Load balancing requires either that the application instances be stateless (perhaps relying on globally shared data storage in a database or cache) or that the balancer have deeper knowledge of the protocol (e.g. requests from a single user within a single session must be directed to the same HTTP server, otherwise session data would be lost).

Currently OpenStack offers no fully automated management of load balancers. This proposal aims to close that gap by creating a service that exposes a single unified API for managing different hardware-based LB appliances and virtualized software LBs, both on the cloud administrator side (managing the LB appliances) and on the tenant side (providing LB for an application).

Key features

User stories

The cloud administrator adds several hardware and software load balancers to the pool and specifies the model of each device. It is also up to the cloud administrator to set individual parameters for each device, such as the authentication method and credentials for provisioning access. To cloud tenants, all devices are exposed as universal load balancing appliances, so an end user is generally not aware of the nature of the device he or she is using. We are exploring options for letting tenants choose particular devices in cases where they need advanced device-specific functionality.
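
For illustration, registering a device on the admin side might look like the following minimal sketch. The endpoint path, port, and payload fields are assumptions for illustration, not the final API:

    import requests

    admin_token = "ADMIN-TOKEN"  # assumed: obtained from Keystone beforehand

    # Hypothetical admin-side call registering a device in the LB pool.
    device = {
        "name": "ace-01",
        "type": "CiscoACE",   # device model, used to pick a driver
        "ip": "10.0.2.15",    # management address
        "user": "admin",      # provisioning credentials
        "password": "secret",
    }
    resp = requests.post(
        "http://lbaas.example.com:8181/v1.0/devices",
        json=device,
        headers={"X-Auth-Token": admin_token},
    )
    print(resp.status_code, resp.json())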

Automatic Device Selection

The LB service can select a device automatically for a particular provisioning request. It keeps a pool of registered physical and virtual LB devices and tracks the status of each one. Each device can serve multiple tenants at once. The service can pick a device for deployment by evaluating a number of criteria, such as the number of remaining VIPs, current load, or CPU consumption.

To restrict access, the cloud administrator can configure which LB devices from the pool will serve a particular tenant. When a tenant creates a new load balancing pool (i.e. a set of servers across which load is balanced), he or she specifies the LB parameters, selects the VM instances to balance across (in principle these could be arbitrary IP endpoints, e.g. to arrange a hierarchy of balancers, but currently only VMs are supported), and asks the LB service to deploy the resulting configuration. The LB service then selects an LB device according to its internal logic (currently just the first free device; potentially something more complex), as sketched below.
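
A minimal sketch of what that selection logic might look like; the attribute names and the NoDeviceAvailable error are illustrative assumptions, not the actual implementation:

    class NoDeviceAvailable(Exception):
        """Raised when no registered device can satisfy the request."""

    def select_device(devices, tenant_id):
        # "First free device" logic; a more complex scheduler could also
        # weigh remaining VIPs, current load, or CPU consumption.
        for dev in devices:
            if (dev.status == "ONLINE"
                    and tenant_id in dev.allowed_tenants
                    and dev.free_vips > 0):
                return dev
        raise NoDeviceAvailable("no LB device available for tenant %s"
                                % tenant_id)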

“Sticky” sessions

Many web applications are stateful, and requests from a single session must go ("stick") to the same physical server. This applies not just to stateful protocols such as TCP, but also to higher-level sessions, such as HTTP sessions identified by a cookie. The LB service supports a variety of stickiness rules, including HTTP cookies, IP netmasks, etc.
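
As an illustration, a cookie-based stickiness rule might be expressed to the API roughly like this (the field names are assumptions):

    # Hypothetical session-persistence settings for an LB pool: requests
    # carrying the same cookie value go to the same back-end VM.
    persistence = {
        "type": "HTTP_COOKIE",      # alternatives could include "SOURCE_IP"
        "cookieName": "JSESSIONID",
    }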

Dynamically adding/removing VMs to LB

The LB service allows VMs to be added to or removed from the load balancing pool at any time. This makes auto-scaling possible: a component could monitor the load, add more VMs when a certain load threshold is exceeded, and remove VMs when the load decreases. Removal of VMs is graceful, without impacting existing connections, as described in the next section.
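
A toy auto-scaling loop built on such calls might look like this; the LBaaS client methods, the monitoring hook, and the thresholds are all assumptions:

    import time

    def autoscale(lb, pool_id, spawn_vm, get_load):
        # `lb` stands in for an LBaaS API client, `get_load` for a
        # monitoring hook returning pool load in the range 0..1.
        while True:
            load = get_load(pool_id)
            if load > 0.8:
                vm = spawn_vm()
                lb.add_node(pool_id, vm.ip)
            elif load < 0.2 and lb.node_count(pool_id) > 1:
                vm_ip = lb.pick_node(pool_id)
                lb.suspend_node(pool_id, vm_ip)  # graceful drain first
                lb.remove_node(pool_id, vm_ip)
            time.sleep(60)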

Graceful Exclusion of a VM from LB

The LB service exposes simple methods for activating and suspending traffic to VMs, so it is possible to take VMs out of rotation with a single REST API call. If the underlying LB device supports graceful suspension, it will stop accepting new traffic to a VM instance but let it finish processing existing connections. This allows VMs to be removed from the load balancing pool without interrupting traffic processing.
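
Taking a VM out of rotation could then be a single call along these lines (the URL layout, verb, and "condition" field are assumptions):

    import requests

    token = "TENANT-TOKEN"  # assumed: Keystone auth token

    # Hypothetical call suspending traffic to one node of pool 42. On a
    # device with graceful suspension, existing connections are drained
    # rather than dropped.
    resp = requests.put(
        "http://lbaas.example.com:8181/v1.0/pools/42/nodes/7",
        json={"condition": "DRAINING"},
        headers={"X-Auth-Token": token},
    )
    print(resp.status_code)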

Health Monitoring and High Availability

The LB service monitors the health of back-end servers and immediately stops directing traffic to a server found to be unresponsive, minimizing the impact on users. A variety of health checks are supported, such as a simple ICMP ping, a TCP connection, or a particular HTTP or HTTPS request.
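
A health-monitor definition might look roughly like this (hypothetical field names):

    # Hypothetical health-monitor settings for a pool: probe each
    # back-end over HTTP and evict it after three consecutive failures.
    health_monitor = {
        "type": "HTTP",      # also e.g. "ICMP" (ping) or "TCP" (connect)
        "path": "/status",   # request issued against each VM
        "delay": 10,         # seconds between probes
        "timeout": 5,        # seconds to wait for a reply
        "maxRetries": 3,     # failures before the VM is pulled out
    }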

Below we list some features that are not yet implemented but are planned for implementation, or that simply highlight an interesting aspect of load balancing.

SSL offload/acceleration

A load balancing device may serve as an SSL accelerator, allowing the back-end application to only implement HTTP and not bear the CPU load of SSL encryption/decryption.

L7 traffic shaping

A load balancing device may perform complex routing decisions depending on level-7 (application-level) protocol parameters, e.g. based on the URL of an HTTP request. For example, requests ending in ".jpg" may be balanced across a cluster of high-performance static HTTP servers, while requests ending in ".php" may be directed to a cluster of PHP servers.
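
Once implemented, the .jpg/.php example above could be expressed along these lines (purely illustrative; no such API exists yet):

    # Hypothetical L7 rules: route by URL suffix to different pools.
    l7_rules = [
        {"match": {"urlSuffix": ".jpg"}, "pool": "static-servers"},
        {"match": {"urlSuffix": ".php"}, "pool": "php-servers"},
    ]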

DoS attack protection

A load balancing device may provide a layer of protection against DoS attacks by both low-level and high-level means, from TCP SYN cookies for protection against SYN flood attacks, up to aggregating access statistics per IP address or subnet and denying access to suspiciously active ones.

Design & Implementation

As an OpenStack ecosystem WSGI-based service, LBaaS is based on the openstack-common code. The following scheme describes the major components of the service and their relations to external entities:

The modular structure of the service makes it possible to support different load balancing vendors through a mechanism of "drivers". Drivers are responsible for communicating with the underlying devices and for translating API calls into configuration entities on the load balancer appliances. The standard API can also be extended via the plugin mechanism provided by the openstack-common base service code.
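
Conceptually, each driver implements a common interface that the service calls into. A minimal sketch, with method names that are assumptions rather than the actual contract:

    import abc

    class LBDriver(abc.ABC):
        """Hypothetical base class for vendor drivers: the service calls
        these methods, and each driver translates them into commands for
        its device."""

        @abc.abstractmethod
        def create_pool(self, pool):
            """Create a load balancing pool on the device."""

        @abc.abstractmethod
        def add_node(self, pool_id, node):
            """Add a back-end VM to an existing pool."""

        @abc.abstractmethod
        def suspend_node(self, pool_id, node_id):
            """Gracefully drain traffic from a node, if supported."""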

The internal structure of the Load Balancer Service itself is described in the following scheme:


API Client Authentication

Client authentication is performed via the OpenStack Identity Service (Keystone) or another identity service. If Keystone is used, the client first authenticates with Keystone using their credentials; on success, Keystone responds with an authentication token. This token must be included in every request to the Load Balancer API; the Keystone middleware validates the user's authorization before the request is passed to the controller.
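
The resulting call flow, using the Keystone v2.0 token API of that era (the LBaaS endpoint and port are assumptions):

    import requests

    # 1. Authenticate with Keystone using password credentials.
    auth = {"auth": {"passwordCredentials": {"username": "demo",
                                             "password": "secret"},
                     "tenantName": "demo"}}
    r = requests.post("http://keystone.example.com:5000/v2.0/tokens",
                      json=auth)
    token = r.json()["access"]["token"]["id"]

    # 2. Include the token in every LBaaS request; the Keystone
    #    middleware validates it before the controller sees the call.
    pools = requests.get("http://lbaas.example.com:8181/v1.0/pools",
                         headers={"X-Auth-Token": token}).json()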

The LB service fully supports multi-tenancy. Tenant configurations are completely isolated, so, for example, one tenant cannot access another tenant's LB configuration.

Manager Component Functions

The OpenStack Load Balancers API itself is implemented in the manager module of the service as a Python class. It has two primary functions:

Load balancer configuration requires the Load Balancer service to query OpenStack for the IP address pool, VLAN ID, and other parameters associated with the tenant/project. This information can also be requested from the OpenStack IP management component (Quantum).
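
For example, the service might fetch these parameters with a helper along these lines; `network_client` and the attribute names are placeholders for whatever the Nova or Quantum API actually exposes:

    def tenant_network_params(network_client, tenant_id):
        # Hypothetical helper: collect the tenant network parameters a
        # device driver needs when plumbing a new VIP.
        net = network_client.get_tenant_network(tenant_id)
        return {
            "ip_pool": net.ip_pool,   # addresses usable for VIPs
            "vlan_id": net.vlan_id,   # tenant VLAN to expose on the device
        }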

Service Database

All information about the load balancers configured in the system and their settings is stored in the MySQL database configured for the given OpenStack deployment.
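
For instance, two of the core tables could be modeled with SQLAlchemy roughly as follows (a sketch; the actual schema may differ):

    from sqlalchemy import Column, ForeignKey, Integer, String
    from sqlalchemy.orm import declarative_base

    Base = declarative_base()

    class Device(Base):
        """A registered LB appliance in the device pool."""
        __tablename__ = "devices"
        id = Column(Integer, primary_key=True)
        name = Column(String(255))
        type = Column(String(64))   # maps to a driver in the DriverMap
        ip = Column(String(64))     # management address

    class LoadBalancer(Base):
        """A tenant's load balancer, deployed on some device."""
        __tablename__ = "loadbalancers"
        id = Column(Integer, primary_key=True)
        tenant_id = Column(String(64))
        device_id = Column(Integer, ForeignKey("devices.id"))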

DriverMap

DriverMap is a pluggable component of the LB service that provides a unified API to different devices through the appropriate drivers, while retaining access to advanced device-specific operations. For instance, this approach makes it possible to use different Cisco devices, such as the Cisco Catalyst 6500 Series Switch, to implement additional API extensions. Upon receiving an API command, the Manager component retrieves the appropriate device driver and forwards the command to it; the driver in turn translates the command into device instructions.
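
Dispatch through the DriverMap might look like this minimal sketch (the class names are stand-ins; see the LBDriver interface sketched earlier):

    class CiscoACEDriver: ...   # hypothetical vendor drivers implementing
    class HAProxyDriver: ...    # the common driver interface

    DRIVER_MAP = {
        "CiscoACE": CiscoACEDriver,
        "HAProxy": HAProxyDriver,
    }

    def dispatch(device, command, *args):
        # Resolve the device type recorded in the service database to a
        # driver and forward the API command to it.
        driver = DRIVER_MAP[device.type]()
        return getattr(driver, command)(*args)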

API

The proposed service will expose two API endpoints: one for cloud administrators to manage the pool of LB devices, and one for tenants to manage their load balancers.

Horizon Integration

Integration with Horizon is provided, making it possible to manage load balancing devices (from the viewpoint of both the cloud administrator and a cloud tenant) from the web GUI. This includes a new page for managing a tenant's load balancers, a new page for the cloud administrator to manage the pool of load balancing appliances, and additions to existing pages, e.g. new VM actions: add/remove/suspend the VM within a particular load balancer.