FME Server Kubernetes Deployment

Deploying with Kubernetes

Prerequisites

Install Kubernetes

Install Helm

Install NGINX

Development and Test Deployments

Production Deployments

FME Server Installation using Helm

Installation with a custom hostname

Delete FME Server Deployment

Scaling FME Engines

TLS/SSL Certificates

Custom Certificate

Let’s Encrypt Certificate

Multi-host Deployment (Highly Available Core)

Deployment with NFS Client Provisioner

Parameters

Gotchas

Deployment with external database

Cloud Provider Specific Instructions

Deployment in Azure AKS using Azure File

Launching the Cluster

Connect to the Cluster

Install Helm and an NGINX ingress controller

Setup Shared Storage using Azure File

Deploy FME Server

Deploy FME Server on Kubernetes using Google Kubernetes Engine

Launching the cluster

Connect to the cluster

Setup shared storage

Deploy FME Server

Deployment in AWS EKS using EFS

Launching the Cluster and Connecting

Install Helm and an NGINX ingress controller

Setup Shared Storage using EFS

Deploy FME Server

Current Gotchas

Deploying with Kubernetes

If you would rather deploy FME Server using Docker Compose, follow the documentation here.

Prerequisites

Install Kubernetes

You will need to install and configure a Kubernetes cluster. If you are just doing it locally, the easiest way is to install Docker Desktop and follow the Getting Started guide. Once you have it installed, be sure to Enable Kubernetes to start a single node cluster.

Tip: Make sure the resources assigned to your Docker Engine are large enough. The defaults are too low, so you will likely need to increase the memory. Recommended: 4 CPUs and 8GB of memory.

Install Helm

Helm is a tool for managing Kubernetes charts. It needs to be installed in the cluster so we can pull the fmeserver chart down.

  1. Follow the Getting Started Guide to install Helm. Once installed, initialize it into the cluster by running helm init.
  2. Add the fmeserver chart repository to the cluster:

helm repo add safesoftware https://safesoftware.github.io/helm-charts/

Install NGINX

Rather than shipping our own NGINX ingress controller container with our deployment, we leverage the official nginx-ingress controller. The instructions to deploy it are here. Using helm is the simplest approach:

helm install stable/nginx-ingress --name my-nginx

Development and Test Deployments

FME Server is deployed into the cluster using helm. To make things easier, a quickstart script has been created. The script walks you through a series of questions which, under the hood, build up the helm command and then execute it for you. You can use the script if you are deploying into any of these environments:

  1. Docker for Mac
  2. Docker for Windows
  3. Linux with host dir
  4. Azure Kubernetes Service                
  5. Google Kubernetes Engine
  6. Amazon EKS

You can access the quickstart script here. Decide which version of FME Server you wish to install, and locate the k8s-quickstart.sh script (k8s-quickstart.ps1 if you are on Windows) for that build. Download and run the script. Note that the quickstart scripts under latest point to a fixed build number, so the script file needs to be re-downloaded to launch the latest available build.

After running the quickstart script, you will be able to connect to the FME Server Web UI using https://<hostname>.  The admin user and password will default to admin/admin. You can change this in the Web UI after you log in for the first time.


Summary - All Steps for Docker for Mac (Assuming Docker for Mac is installed and K8S enabled)

  1. Install helm:

brew install kubernetes-helm

  2. Initialize helm:

helm init

  3. Add the FME Server chart repository to the cluster:

helm repo add safesoftware https://safesoftware.github.io/helm-charts/

  4. Install the NGINX ingress controller into the cluster:

helm install stable/nginx-ingress --name my-nginx

  5. Download the quickstart script.
  6. Run the script (sh k8s-quickstart.sh) and follow the prompts.
  7. Run helm status fmeserver and wait until all of the Pods have the status RUNNING.
  8. Access the following endpoints:
     1. FME Server will be running on http://localhost
     2. The Kubernetes dashboard can be accessed by following the instructions here.


Production Deployments

For more complex and production deployments, helm should be used directly rather than the quickstart script.

FME Server Installation using Helm

Using helm directly, as opposed to the quickstart, gives you a lot more control. You can access the fmeserver helm chart here. As you can see, there are over 25 parameters that you are able to set. Most of the parameters have a default value and only need to be overridden if you wish. At a minimum, you need to set the release and build number. Available build numbers and releases can be found here.

Example: Install FME Server 2018.1.0 (build 18478)

helm repo update

helm install -n fmeserver safesoftware/fmeserver-2018.1.0-beta --set fmeserver.buildNr=18478

After running the install command, you will be able to connect to the FME Server Web UI using https://<hostname>.  The admin user and password will default to admin/admin. You can change this in the Web UI after you log in for the first time.

Installation with a custom hostname

Often you don’t want to install using localhost. Set the deployment.hostname parameter to override this.

helm install -n fmeserver safesoftware/fmeserver-2018.1.0-beta --set fmeserver.buildNr=<fmeBuildNumber>,deployment.hostname=<your.custom.hostname>

Delete FME Server Deployment

helm delete --purge fmeserver

Scaling FME Engines

To set the number of Engines, use the FME Server helm chart (install or upgrade command) and set the following parameter: deployment.numEngines=<numberOfDesiredEngines>

It is recommended that you only scale Engines using helm and not the Kubernetes dashboard. Scaling through the Kubernetes dashboard risks putting the deployment out of sync with helm, which means the value will be reverted the next time something is updated through helm.

Note: Changing the number of Engines in the FME Server UI is disabled in the Kubernetes deployment.

TLS/SSL Certificates

It is recommended that all FME Server deployments use HTTPS. Kubernetes makes it easy to deploy and manage your certificates.

Custom Certificate

This section walks you through configuring FME Server to use a custom certificate. These steps assume you have already created a certificate and private key.
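If you do not yet have a certificate and only want to test locally, a self-signed pair can be generated with openssl. This is a sketch for testing only (browsers will warn about self-signed certificates); the file names are examples chosen to match the kubectl create secret command below:

```shell
# Generate a self-signed certificate and private key for localhost (testing only).
# -nodes leaves the key unencrypted so Kubernetes can read it from the secret.
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout localhost.self.fmeserver.key \
  -out localhost.self.fmeserver.crt \
  -subj "/CN=localhost"
```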

  1. Upload the certificate to the cluster.

kubectl create secret tls fmeserver-tls-cert --key localhost.self.fmeserver.key --cert localhost.self.fmeserver.crt

  2. Install FME Server, setting the TLS certificate:

helm install -n fmeserver safesoftware/fmeserver-2018.1.0-beta --set fmeserver.buildNr=18478,deployment.tlsSecretName=fmeserver-tls-cert,deployment.hostname=localhost

  3. Access your FME Server using https://localhost

Let’s Encrypt Certificate

Let’s Encrypt is a free TLS certificate authority that can be used to create certificates for production environments. To use it, you require a publicly accessible Kubernetes cluster: the hostname passed in needs to resolve to the Kubernetes cluster, otherwise the certificate cannot be issued.

Note: This documentation is only partly complete; it assumes some understanding of Kubernetes and requires reading external sources. If you would like to use the DNS-01 challenge, you can create the certificate beforehand and just pass in the secret name (the certificate needs to be in the same namespace as the fmeserver deployment).

  1. Install cert-manager: helm install --name cert-manager --namespace kube-system stable/cert-manager
  2. Add a letsencrypt issuer. We recommend using a ClusterIssuer for letsencrypt (can be shared across namespaces, more info here). It is highly recommended to test the configuration first against the LE staging environment. The production API has quite strict rate-limiting.
  3. Ensure the email address configured in the issuer is valid.
  4. Install FME Server: helm install -n fmeserver safesoftware/fmeserver-2018.1.0-beta --set fmeserver.buildNr=<buildNr>,deployment.tlsSecretName=fmeserver-tls-cert-le-<env>,deployment.hostname=<your.custom.hostname>,deployment.certManager.issuerName=<issuerName>,deployment.certManager.issuerType=cluster
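As a sketch of the issuer from step 2, a staging ClusterIssuer for the cert-manager version contemporary with the stable/cert-manager chart might look like the following. The issuer name and email address here are placeholders, and the exact apiVersion depends on the cert-manager version you installed, so treat this as an illustration rather than a copy-paste manifest:

```yaml
apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging        # pass this as deployment.certManager.issuerName
spec:
  acme:
    # Let's Encrypt staging endpoint; switch to the production URL only after
    # the configuration has been verified, to avoid the strict rate limits.
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: you@example.com         # must be a valid address (see step 3)
    privateKeySecretRef:
      name: letsencrypt-staging-account-key
    http01: {}
```

Apply it with kubectl apply -f before installing FME Server.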

Multi-host Deployment (Highly Available Core)

A multi-host Kubernetes deployment of FME Server allows components to be duplicated across multiple hosts. This duplication ensures a higher level of uptime: if one host fails, the components on the other host can continue functioning. Currently only the FME Core can be scaled (which is the most important part); the queues (Redis), the WebSocket service, and the database cannot.

To deploy FME Server across multiple hosts, a distributed file system like NFS is required. In Kubernetes terms, this means the storage volume needs to support ReadWriteMany. If no existing distributed file system is available, the easiest way is to use an existing NFS Server and configure Kubernetes to use that to provision volumes, or use an offering from a Cloud provider like Azure Files.

Note: The helm chart requires multiple hosts (nodes) to work with more than one core. Only one core will be launched per node.
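The one-core-per-node behaviour comes from pod anti-affinity scheduling. Illustratively (this is a sketch, not the chart’s actual template, and the label shown is hypothetical), a rule of this shape prevents two Core pods from landing on the same node:

```yaml
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            component: fmeserver-core   # hypothetical label identifying Core pods
        # at most one matching pod per distinct value of this node label,
        # i.e. one Core per node
        topologyKey: kubernetes.io/hostname
```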

Architecture of a highly available FME Server Kubernetes deployment

Deployment with NFS Client Provisioner

Note: This has not yet been tested for production workflows.

An NFS Client provisioner can be deployed to use an existing NFS Server to provision volumes for Kubernetes to use. The provisioner needs to be installed before FME Server and should not be removed before FME Server is deleted from the cluster.

  1. Follow the instructions at the following link to deploy the NFS client provisioner using the settings of your existing NFS Server: https://github.com/kubernetes-incubator/external-storage/tree/master/nfs-client
  2. Install FME Server (other parameters mentioned above can be added if required):

helm install --namespace <fmeserver-namespace> -n <fmeserver-deployment-name> safesoftware/fmeserver-2018.1.0-beta --set fmeserver.buildNr=<buildNr>,deployment.numCores=2,storage.fmeserver.class=managed-nfs-storage,storage.fmeserver.accessMode=ReadWriteMany

Parameters

Gotchas

Deployment with external database

The FME Server database is a critical part of an FME Server deployment. With the default Kubernetes deployment it is a single point of failure: if it fails, FME Server will stop functioning. To have a highly available deployment of FME Server, it is recommended that you switch to a highly available PostgreSQL database. For example, if you are deploying on AWS you may want to use RDS, which is a highly available and scalable managed database.

To use an external database outside of the Kubernetes cluster, you need to do the following.

  1. Provision the database and make note of the host, master username, and master password.
  2. Make sure the cluster has access on port 5432.
  3. Create a secret in your cluster in the namespace you will deploy into that contains the admin password for the database. The password must be base64 encoded.

Create a YAML file that looks like this. Note, you will need to manually base64 encode the password.

apiVersion: v1
kind: Secret
metadata:
  name: fmeserversecret
type: Opaque
data:
  password: Zm15473ja3M
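The value for the password field can be generated on the command line. Use printf rather than echo so that no trailing newline is included in the encoded value (the password shown is a placeholder):

```shell
# Base64 encode the database admin password for the Secret's data field.
printf '%s' 'mypassword' | base64
# → bXlwYXNzd29yZA==
```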

To create the secret run: kubectl create -f secrets.yaml

Note down the secret name and the name of the key that holds the value of the password.

  4. Run the helm chart, setting the following values:

When FME Server is deployed, one of the containers will connect to the external database and run all of the SQL setup scripts.

Cloud Provider Specific Instructions

Below are the instructions for initiating and configuring a cluster with FME Server running for each of the main cloud providers.

Deployment in Azure AKS using Azure File

Launching the Cluster

Note: Official Azure Doc for deploying a cluster and connecting can be found here.

  1. Navigate to the Azure portal in a web browser
  2. Navigate to the Kubernetes services section.
  3. Click “Add”
  4. Specify the parameters:
  5. Validation checks should now run. Once they have completed, review the settings, then press “Create”. The Kubernetes cluster will now be deployed; it should take 5-10 minutes.

Connect to the Cluster

  1. Once the deployment has completed, press “Go to Resource” to view the Kubernetes deployment.
  2. To connect to the Kubernetes dashboard, click the “View Kubernetes dashboard” link at the bottom of the page and follow the instructions. This will open the dashboard and also set up your kubectl to point at the new cluster. You can check your kubectl is pointing at the new cluster by running “kubectl get nodes”. You should see 3 nodes in the cluster.

Install Helm and an NGINX ingress controller

  1. Install helm by running “helm init”
  2. Give helm permissions to deploy things. If running a production cluster, this is probably bad practice, but it makes life easier for our test cluster. For more information, see https://github.com/helm/helm/blob/master/docs/rbac.md 
    Run:
    kubectl create clusterrolebinding add-on-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:default
  3. Install the NGINX ingress controller with “helm install stable/nginx-ingress --namespace kube-system -n nginx-ingress”

Setup Shared Storage using Azure File

Note: The following instructions are for Azure File Premium which is currently in a limited preview. Standard Azure File can also be used, but is slower.

  1. In order to deploy FME Server across multiple nodes, we need to set up the storage for the system share using Azure File. These steps assume you have already created an Azure File storage account.

  2. Create a storage class in Kubernetes that uses the new Azure File storage account. Create a file on your system with these contents:

    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: azurefile-fmeserver
    provisioner: kubernetes.io/azure-file
    parameters:
      skuName: Premium_LRS
      storageAccount: <storageaccountname>
    mountOptions:
      - uid=1363
      - gid=1363

    Replace <storageaccountname> with the name you specified previously when creating the account. Then run the command:

    kubectl apply -f <path_to_file>

    where <path_to_file> is the path to the yaml file created above. In the dashboard you should now see “azurefile-fmeserver” under the “Storage Classes” section.
  3. If RBAC is enabled, grant permissions for Azure File:


kubectl create clusterrolebinding add-on-cluster-vol-binder --clusterrole=cluster-admin --serviceaccount=kube-system:persistent-volume-binder
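Optionally, you can sanity-check the new storage class before deploying FME Server by creating a small test claim and confirming it reaches the Bound state. This is a sketch; the claim name is arbitrary:

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: azurefile-test-claim      # hypothetical name, delete after testing
spec:
  accessModes:
    - ReadWriteMany               # the access mode FME Server requires for the share
  storageClassName: azurefile-fmeserver
  resources:
    requests:
      storage: 1Gi
```

Apply it with kubectl apply -f, check it with kubectl get pvc, and delete it once it shows Bound.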

Deploy FME Server

  1. Add the Safe Software helm repository with:

    helm repo add safesoftware https://safesoftware.github.io/helm-charts/
  2. Get the IP address of the ingress controller.

    kubectl get services --namespace kube-system

    Note the “External IP” of the nginx-ingress-controller. This is our ingress into the cluster. In a production environment we would set up a DNS that points to this IP address and use that as the hostname when we deploy FME Server. In this example we will just access FME Server directly using the IP.

  3. Install FME Server.

    helm install -n fmeserver safesoftware/fmeserver-2019.0.0-beta --set fmeserver.buildNr=19152,deployment.hostname=x.x.x.x,deployment.numCores=2,deployment.numEngines=4,storage.fmeserver.class=azurefile-fmeserver,storage.fmeserver.accessMode=ReadWriteMany,storage.fmeserver.size=100Gi,storage.postgresql.size=10Gi,deployment.useHostnameIngress=false

    There are a few parameters in this command to note:

  4. Once FME Server is finished deploying, access it by going to the external IP noted in step 2 in your browser.

Deploy FME Server on Kubernetes using Google Kubernetes Engine

These instructions walk you through how to deploy a multi-node FME Server Kubernetes deployment onto Google Cloud using the Google Kubernetes Engine (GKE) and Google Filestore for storage.

Note: if you want a single-node deployment, you can use the quickstart script instead.

Launching the cluster

  1. Navigate to the Google Kubernetes Engine section.
  2. Choose Create Cluster to deploy a new Kubernetes Cluster.
  3. From the Cluster templates select to create a Standard cluster. Set the following parameters:
     1. Single-core deployment: 2 vCPUs, 8GB RAM, 2 nodes.
     2. Two-core deployment: 4 vCPUs, 16GB RAM, 3 nodes. Note: when you specify more than one Core container, you need a minimum of two nodes because a constraint in the helm chart prevents Core containers from residing on the same node. Three nodes were chosen so that if you lose a node you still have some fault tolerance, as two cores can still run on the remaining two nodes.

  4. Click “Create” to create a new cluster with these settings. Wait for your cluster to be created and become available.

Connect to the cluster

Let’s set up your local machine so that you can run kubectl and helm commands in your local terminal to deploy resources into the cluster. It is assumed you have the gcloud, kubectl, and helm commands available locally.

Note: you can skip this section if you wish to run the commands in the Cloud Shell.

  1. Click on the Connect button next to the Kubernetes cluster you just launched. Then copy the “Command-line access” command and execute it in your terminal.

  2. You can check that you are connected to the cluster by running “kubectl get nodes”. You should see the 3 nodes in the cluster.

  3. Install helm into the cluster by running “helm init”

  4. Install the NGINX ingress controller:

helm install stable/nginx-ingress --namespace kube-system -n nginx-ingress

Setup shared storage

FME Server needs a storage solution for the system share. The share needs to be accessible across all nodes. To do this on Google Cloud we’ll use Google Filestore.

  1. To set up the storage for the FME Server system share, we need to create a Google Filestore instance to host an NFS server for us to use in our Kubernetes cluster. Go to “Filestore” in the Google Cloud Platform menu.
  2. Click “Create Instance” on the Filestore page, set the following then click Create.

  3. Once the Filestore instance is up and running, we need to deploy a storage class in our cluster to make use of this NFS server. Kubernetes provides a helm chart called nfs-client-provisioner that makes this easy. First, go to the details of your Filestore instance and note the IP address and path under “Mount point”.

  4. Next, run the command:

    helm install --set nfs.server=x.x.x.x --set nfs.path=/exported/path stable/nfs-client-provisioner

    Replace the x.x.x.x and /exported/path with the values from your Filestore instance. For example:

    helm install --set nfs.server=10.104.162.202 --set nfs.path=/fmeservernfs stable/nfs-client-provisioner

    This will create a storage class in our Kubernetes cluster called “nfs-client” that will utilize this Filestore NFS server for volumes in Kubernetes.
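As with any new storage class, you can verify it works before deploying FME Server by creating a small test claim and confirming it binds. This is a sketch; the claim name is arbitrary:

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nfs-test-claim            # hypothetical name, delete after testing
spec:
  accessModes:
    - ReadWriteMany               # the access mode FME Server requires for the share
  storageClassName: nfs-client    # the class created by nfs-client-provisioner
  resources:
    requests:
      storage: 1Gi
```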

Deploy FME Server

  1. Add the Safe Software helm repository with:

    helm repo add safesoftware https://safesoftware.github.io/helm-charts/
  2. Get the IP address of the ingress controller.

kubectl get services --namespace kube-system

Note the “External IP” of the nginx-ingress-controller. This is our ingress into the cluster. In a production environment we would set up a DNS that points to this IP address and use that as the hostname when we deploy FME Server. In this example we will just access FME Server directly using the IP.

  3. Install FME Server.

    helm install -n fmeserver safesoftware/fmeserver-2019.0.0-beta --set fmeserver.buildNr=19152,deployment.hostname=x.x.x.x,deployment.numCores=2,deployment.numEngines=4,storage.fmeserver.class=nfs-client,storage.fmeserver.accessMode=ReadWriteMany,storage.fmeserver.size=10Gi,storage.postgresql.size=10Gi,deployment.useHostnameIngress=false

    There are a few parameters in this command to note:

  4. Once FME Server is finished deploying, access it by going to the external IP noted in step 2 in your browser.

Deployment in AWS EKS using EFS

Launching the Cluster and Connecting

The steps for launching and connecting to a cluster in EKS are fairly involved and require setting up some prerequisite resources, such as a VPC and a Role for the cluster. Please follow the instructions in the official AWS EKS Quickstart to launch a cluster. To set up an FME Server for test purposes, in “Step 3” we recommend the following settings:

You can skip “Step 4: Launch a Guest Book Application” in those instructions.

Install Helm and an NGINX ingress controller

  1. Install helm by running “helm init”
  2. Give helm permissions to deploy things. If running a production cluster, this is probably bad practice, but it makes life easier for our test cluster.

kubectl create clusterrolebinding add-on-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:default

For more information, see https://github.com/helm/helm/blob/master/docs/rbac.md

  3. Install the NGINX ingress controller with:

helm install stable/nginx-ingress --namespace kube-system -n nginx-ingress

Setup Shared Storage using EFS

Note: EFS has different performance options, and the default options may not be adequate for your FME Server depending on how FME Server is used. If you experience performance issues see the AWS documentation on EFS performance.

  1. In order to deploy FME Server across multiple nodes, we need to set up the storage for the system share using EFS.
     1. Using the AWS Console, navigate to EFS.
     2. Click “Create file system”.
     3. Select the new VPC we created for our Kubernetes cluster. The subnets should change to our EKS subnets. Add the EKS Node security group to the list of security groups so that our nodes have permission to mount them.
     4. Click “Next Step”.
     5. Add any tags as desired. Everything else can be left as default. Click “Next Step”.
     6. Click “Create File System”.

  2. Deploy the EFS Provisioner helm chart to provide a storage class in our cluster that will use our new EFS file system:

    helm install stable/efs-provisioner --set efsProvisioner.efsFileSystemId=<file-system-id> --set efsProvisioner.awsRegion=<region> --set efsProvisioner.path=/kubernetes-pv


Replace <file-system-id> with the ID of your created EFS file system from the Amazon Console; it should be of the form fs-12345678. Replace <region> with the region you deployed your cluster in.

Deploy FME Server

  1. Add the Safe Software helm repository with:

    helm repo add safesoftware https://safesoftware.github.io/helm-charts/
  2. Get the IP address of the ingress controller.

    kubectl describe service nginx-ingress-controller --namespace kube-system

    Note the “LoadBalancer Ingress:” address of the nginx-ingress-controller. This is our ingress into the cluster.

  3. Install FME Server.

    helm install -n fmeserver safesoftware/fmeserver-2019.0.0-beta --set fmeserver.buildNr=19152,deployment.hostname=<ingress>,storage.fmeserver.class=efs,storage.fmeserver.accessMode=ReadWriteMany,storage.fmeserver.size=10Gi,storage.postgresql.size=10Gi,deployment.numCores=2,deployment.numEngines=4

    There are a few parameters in this command to note:

  4. Once FME Server is finished deploying, access it by going to the ingress address noted in step 2 in your browser.

Current Gotchas

The Kubernetes FME Server deployment is currently integrated into our formal build pipeline, which means we treat it exactly the same as the other installers. There are, however, a few gotchas to take into consideration when working with the tech preview.