Oracle Fusion Middleware Deployments Using Docker Swarm Part III

Overview

This is the third in a series of blogs describing how to build a Fusion Middleware (FMW) cluster that runs as a set of Docker containers created from Docker images.  These containers are coordinated using Docker Swarm and can be deployed to a single host machine or across multiple hosts.  This approach simplifies the task of building FMW clusters and also makes it easier to scale them in and out (adding or removing host machines) as well as up and down (using bigger or smaller host machines).  Using Docker also helps us avoid port conflicts when running multiple servers on the same physical machine, and when we use Swarm we will see that we also get the benefit of a built-in load balancer.

This blog uses Oracle Service Bus as an FMW product but the principles are applicable to other FMW products.

In our previous blog we described how to build the required Docker images for running FMW on Docker Swarm and created a database container.

In this entry we will explain how to create an FMW domain image and how to run it in a Docker container.  The next blog will cover how to run this in Docker Swarm.

Key Steps in Creating a Service Bus Cluster

When creating a service bus cluster we need to do the following:

  1. Create the required schemas in a database (a scripted sketch of this step follows the list).
  2. Create a Service Bus domain.
  3. Create a Service Bus cluster within the domain.

These steps need to be factored into the way we build our docker images and containers and ultimately into how we create Docker Swarm services.
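As a rough illustration of step 1, schema creation can be scripted with the Repository Creation Utility (RCU) in silent mode.  This is a minimal sketch, not the exact command from our scripts: the schema prefix, component list, and password file are assumptions.

# Sketch: create the OSB schemas with RCU in silent mode.
# Schema prefix, component list, and password file are assumptions.
$ORACLE_HOME/oracle_common/bin/rcu -silent -createRepository \
        -databaseType ORACLE \
        -connectString osbdb:1521/OSB \
        -dbUser sys -dbRole sysdba \
        -schemaPrefix DEV \
        -component STB -component OPSS -component MDS \
        -component IAU -component WLS -component SOAINFRA \
        -f < rcu_passwords.txt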

Mapping the Service Bus Cluster onto Docker

There are a number of ways in which we could map a Service Bus cluster onto Docker.  We have chosen the approach described in the following sections.

Container Summary

We effectively have two images, which are both built from multiple layers as explained previously.

  1. A database image that holds the database binaries and the scripts to create a database instance.
  2. A Fusion Middleware image that holds the FMW binaries and the scripts to create a domain, or extend an existing domain.

We have a single Docker container that runs the database from the database image.

We have multiple Docker containers, one per managed server, that run Fusion Middleware from the single domain image.

To simplify starting the containers, the git project includes run scripts (run.sh for the database and runNode.sh for FMW) that can be used to create containers.
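For example, a typical end-to-end startup with these scripts looks like the following; we assume run.sh takes no arguments, and the runNode.sh arguments are explained in the sections below:

./run.sh            # create and start the database container
./runNode.sh admin  # create and start the admin server container
./runNode.sh 1      # create and start managed server 1
./runNode.sh 2      # create and start managed server 2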

Database Container

The database container runs from the database image.  It is started using the following command:

docker run -d -it \
        --name Oracle12cDB \
        --hostname osbdb \
        -p 1521:1521 -p 5500:5500 \
        -e ORACLE_SID=ORCL \
        -e ORACLE_PDB=OSB \
        -v /opt/oracle/oradata/OracleDB \
        oracle/database:12.1.0.2-ee

We expose ports 1521 (database listener) and 5500 (Enterprise Manager Express).
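Before starting any FMW containers it is worth waiting for the database to finish initializing.  A simple way to check, assuming the image behaves like Oracle's published database images and logs the standard readiness banner, is to watch the container logs:

# Follow the logs until the database reports it is ready.
docker logs -f Oracle12cDB
# Look for the line: DATABASE IS READY TO USE!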

Admin Server Container

The admin server container runs from the FMW domain image, or just the FMW image if the layers have been collapsed.  We start the admin server using the following command:

runNode.sh admin

This translates to:

docker run -d -it \
        -e "MS_NAME=AdminServer" \
        -e "MACHINE_NAME=AdminServerMachine" \
        --name wlsadmin \
        --add-host osbdb:172.17.0.2 \
        --add-host wlsadmin:172.17.0.3 \
        --add-host wls1:172.17.0.4 \
        --add-host wls2:172.17.0.5 \
        --add-host wls3:172.17.0.6 \
        --add-host wls4:172.17.0.7 \
        --hostname wlsadmin \
        -p 7001:7001 \
        oracle/osb_domain:12.2.1.2 \
        /u01/oracle/container-scripts/createAndStartOSBDomain.sh

The --add-host entries add the hostnames of the managed servers and the database to the container's /etc/hosts file so that the admin server can reach them by name.  We will show how to avoid doing this in the final blog post.
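The IP addresses above assume the default bridge network assigns addresses in container start order.  If your containers receive different addresses, you can look each one up before building the --add-host list; docker inspect with a Go-template format string is a standard way to do this:

# Print the IP address of a running container on the default bridge network.
docker inspect -f '{{.NetworkSettings.IPAddress}}' osbdb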

Managed Server Containers

The managed server containers run from the same FMW domain image as the admin server.  We start a managed server using the following command:

runNode.sh N

where N is the number of the managed server.

When N=2 this translates to:

docker run -d -it \
        -e "MS_NAME=osb_server2" \
        -e "MACHINE_NAME=OsbServer2Machine" \
        --name wls2 \
        --add-host osbdb:172.17.0.2 \
        --add-host wlsadmin:172.17.0.3 \
        --add-host wls1:172.17.0.4 \
        --add-host wls2:172.17.0.5 \
        --add-host wls3:172.17.0.6 \
        --add-host wls4:172.17.0.7 \
        --hostname wls2 \
        -p 8013:8011 \
        oracle/osb_domain:12.2.1.2 \
        /u01/oracle/container-scripts/createAndStartOSBDomain.sh

Note that all the managed servers listen on port 8011.  Because they each run in their own container there is no conflict in their port numbers, but we need to map them to distinct host ports so that they can be accessed externally without conflicts.
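A minimal sketch of what runNode.sh might look like is shown below; the naming pattern and host-port arithmetic (host port = 8011 + N) are assumptions inferred from the commands above, and the real script in the git project may differ.

#!/bin/bash
# runNode.sh - create a container for the admin server or managed server N.
# Sketch only: naming pattern and port arithmetic are assumptions.
N=$1
if [ "$N" = "admin" ]; then
  MS_NAME=AdminServer; MACHINE_NAME=AdminServerMachine
  NODE=wlsadmin; PORTS="-p 7001:7001"
else
  MS_NAME=osb_server$N; MACHINE_NAME=OsbServer${N}Machine
  NODE=wls$N; PORTS="-p $((8011 + N)):8011"
fi
docker run -d -it \
  -e "MS_NAME=$MS_NAME" -e "MACHINE_NAME=$MACHINE_NAME" \
  --name $NODE --hostname $NODE $PORTS \
  --add-host osbdb:172.17.0.2 --add-host wlsadmin:172.17.0.3 \
  --add-host wls1:172.17.0.4 --add-host wls2:172.17.0.5 \
  --add-host wls3:172.17.0.6 --add-host wls4:172.17.0.7 \
  oracle/osb_domain:12.2.1.2 \
  /u01/oracle/container-scripts/createAndStartOSBDomain.sh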

Special Notes for Service Bus

The first managed server in a Service Bus cluster is special because it runs singleton tasks related to reporting: collecting performance information from the other nodes in the cluster, aggregating it, and making it available to the console.  Because of this we decided to always create the Service Bus domain with a single pre-existing Managed Server in the cluster that has the correct singleton targeting.

Because this server already exists, if a container detects that it is supposed to run Managed Server 1 it does not create the server or associate it with a cluster; it just assigns the server to a machine (see the next section for details) and creates the local managed server domain, as sketched below.
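Inside the container startup script this might look like the following branch; the WLST helper script names are hypothetical:

# Sketch of the managed-server branch in the startup script.
# createManagedServer.py and assignMachine.py are hypothetical WLST scripts.
if [ "$MS_NAME" = "osb_server1" ]; then
  # Server already exists in the domain template: only assign it to a machine.
  wlst.sh assignMachine.py "$MS_NAME" "$MACHINE_NAME"
else
  # Create the server, add it to the cluster, and assign it to a machine.
  wlst.sh createManagedServer.py "$MS_NAME" "$MACHINE_NAME"
fi
# Both paths then create the local managed server domain.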

Containers and FMW Mapping

Each container maps to a single WebLogic server, either Admin Server or a Managed Server in a cluster.

The Admin Server container is responsible for running the Repository Creation Utility (RCU), creating the domain, and configuring it for the particular FMW product (in our case Service Bus).

The Managed Server containers are responsible for adding a new Managed Server to the cluster and creating a local managed server domain.

Both Admin and Managed Server containers need to figure out key facts about themselves, such as which server and machine they represent (passed in via the MS_NAME and MACHINE_NAME environment variables) and whether this is their first start (detected by checking for an existing domain directory, as described below).

Starting the FMW Cluster

The FMW cluster is started as follows:

  1. A database container is created/started.
  2. An admin server container is created/started.
  3. One or more managed server containers are created/started.

Containers are created using the “docker run” command.  We use the “-d” flag to run them as daemon processes.  By default the CMD directive in the Dockerfile chooses the command or script to run on container startup; we rely on this default for the database container.  For the admin and managed server containers we identify which type of container they are and pass an appropriate script to the “docker run” command.
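For the database image this default lives in the Dockerfile.  A sketch of what the end of that Dockerfile might contain (the script name is an assumption based on Oracle's published images):

# End of the database Dockerfile (script name is an assumption):
CMD ["/opt/oracle/runOracle.sh"]
# The FMW containers override the image default by appending
# /u01/oracle/container-scripts/createAndStartOSBDomain.sh to "docker run".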

Containers are started using the “docker start” command, and the container runs the same command or script as when it was created.  That means we must detect whether this is a new container or one being restarted after a shutdown.  With the FMW containers we do this by looking for the existence of the domain directory: if it exists we have previously been started; if not, this must be our first run.
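In the startup script this check is a simple directory test.  A minimal sketch, assuming an illustrative DOMAIN_HOME path:

# First-run detection in the container startup script (path is illustrative).
DOMAIN_HOME=/u01/oracle/user_projects/domains/osb_domain
if [ -d "$DOMAIN_HOME" ]; then
  echo "Domain exists - restarting server"
  # start the existing server
else
  echo "No domain found - first run, creating domain"
  # create/extend the domain, then start the server
fi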

Tools such as Docker Compose and Docker Swarm simplify the task of deploying our multi-container FMW cluster, and we will look at these in the next blog entry.  One of the benefits we will find with Swarm is that it includes a built-in load balancer.  The current multi-container approach would require either another container running a load balancer or an external load balancer; we will see that Swarm removes this need.

Retrieving Docker Files for OSB Cluster

We are posting all the required files that go along with this blog on GitHub.  You are welcome to fork the project and improve it.  We cloned many of these files from the official Oracle Docker GitHub repository.  We removed unused versions and added a simplified build.sh file to each product directory to make it easy to see how we actually built our environment.  We are still updating the files online and will shortly add details on building the Swarm services.

Summary

In this entry we have explained how to create a Fusion Middleware domain and run it in Docker containers.  In our next entry we will simplify the deployment of our cluster by taking advantage of swarm mode, defining swarm services for the database, the WebLogic Admin Server, and the WebLogic managed servers.