Please note: The official website for Lattice information and documentation is: lattice.cf

Lattice Getting Started

Lattice is a project for running containerized workloads on a cluster. Lattice includes built-in web load balancing, a cluster scheduler, log aggregation, and health management. Workloads on Lattice are described as either long-running processes or short-lived one-off tasks.

Run Lattice easily on your laptop with a Vagrant VM, or on a cluster of machines with AWS, Digital Ocean, or Google Compute Engine. This tutorial shows how to use Lattice to start applications from a Docker image, scale them up and down, retrieve logs, and observe how the components interact.

See the Lattice FAQ for more detail.

Pre-requisites for Vagrant VM

Starting the Lattice Vagrant VM

Download the ltc CLI

Shell 1 - Invoke ltc target

Shell 1 - Start tailing the log stream for an app

Shell 2 - Start a Docker Image

Shell 1 - Observe the Streaming Logs

Shell 2 - List the apps

Shell 3 - Container distribution on Diego cells

Shell 2 - Scale the app

Shell 1 - Logs aggregated from multiple containers

Shell 3 - Visualization of containers

Kill app instances and see them recover

Updating the Vagrant Box

Shell 0 - Show the simple app using Docker (optional)

Getting Fancy - Create a docker image starting with application code with Cloud Rocker

Getting Fancy - Routing table and router metrics

Getting Fancy - Explore etcd contents

Getting Fancy - Use Diego Tasks

Getting Fancy - Show receptor event stream

Getting Fancy - Show Multiple Diego Cells

Getting Fancy - Run Data Services

Getting Fancy - Troubleshooting with Lattice Logs

Getting Fancy - Monitoring with Veritas

Getting Fancy - Compiling ltc CLI

Getting Fancy - Get shell access to a container

Pre-requisites for Vagrant VM

  • Vagrant
  • VirtualBox or VMware Fusion
  • Go 1.3+ - instructions
  • git client
  • docker client (optional) - running existing docker images from Docker Hub does not require docker, but doing something useful with a real app likely requires creating a docker image

Validated with OSX 10.10.1, VirtualBox 4.3.20, Vagrant 1.7.1, go 1.4

Validated with OSX 10.10.1, Fusion 7.1, Vagrant 1.7.1, go 1.3.2

Starting the Lattice Vagrant VM

Clone the Lattice repository. The default VM memory configuration is 4GB; if your system does not have enough memory, you can change the memory settings in VirtualBox, Fusion, etc. while the VM is stopped.

mkdir -p ~/workspace

cd ~/workspace

git clone https://github.com/cloudfoundry-incubator/lattice.git

cd lattice

Choose the right vagrant command for either virtualbox or fusion:

vagrant up --provider virtualbox

OR

vagrant up --provider vmware_fusion

Note: there is currently sometimes a bug when restarting a VM that was previously halted (https://www.pivotaltracker.com/story/show/85093548). To work around it, either run vagrant destroy followed by vagrant up to completely reset the environment, or vagrant ssh into the VM and issue sudo initctl start auctioneer (and similarly for the converger, rep, and metron upstart services).

The VM should download and start. To verify, ping the default IP address of 192.168.11.11 from the Vagrantfile.

$ ping 192.168.11.11

PING 192.168.11.11 (192.168.11.11): 56 data bytes

64 bytes from 192.168.11.11: icmp_seq=0 ttl=64 time=0.642 ms

Download the ltc CLI

ltc is the CLI for Lattice.

Binaries:

Shell 1 - Invoke ltc target

The target domain should be printed after the installation scripts run. It’s typically IPADDRESS.xip.io of a VM running the receptor. If prompted, the default username is “user” and the default password is “pass”.

ltc target 192.168.11.11.xip.io

Api Location Set

Shell 1 - Start tailing the log stream for an app

Tailing the logs is very helpful for seeing what is going on, especially when troubleshooting container startup. So before starting an app, in this case lattice-app, we recommend using ltc to tail the logs and keeping that shell open.

ltc logs lattice-app

Shell 2 - Start a Docker Image

ltc create lattice-app cloudfoundry/lattice-app

No port specified, image metadata did not contain exposed ports. Defaulting to 8080.

No working directory specified, using working directory from the image metadata...

Monitoring the app on port 8080...

No start command specified, using start command from the image metadata...

Start command is:

/lattice-app

Creating App: lattice-app

27 Feb 15:57 [APP|0] Successfully created container

27 Feb 15:57 [APP|0] {"timestamp":"1425081439.956862450","source":"lattice-app","message":"lattice-app.lattice-app.starting","log_level":1,"data":{"port":"8080"}}

27 Feb 15:57 [APP|0] {"timestamp":"1425081439.957069397","source":"lattice-app","message":"lattice-app.lattice-app.up","log_level":1,"data":{"port":"8080"}}

27 Feb 15:57 [HEALTH|0] healthcheck passed

27 Feb 15:57 [HEALTH|0] Exit status 0

27 Feb 15:57 [APP|0] Lattice-app. Says Hello. on index: 0

lattice-app is now running.

http://lattice-app.192.168.11.11.xip.io

This starts a container running /lattice-app from the Docker Hub image cloudfoundry/lattice-app, a simple demo app with /env and /exit endpoints. The first time this app starts, the cell must download the docker image layers, so it may take a little time and the command could even time out. If your connection to Docker Hub is slow, give it a few minutes.

Notes on Docker Image support in Diego:

  • The Docker image must be public and hosted on Docker Hub until this story is completed, adding support for custom docker registries and authentication for private images
  • The user executing the command in the Garden Linux container is vcap by default, whereas in docker it is root. Using a non-root user can sometimes cause a permissions issue, which is usually obvious from the output of ltc logs APPNAME. To start the app process as root in lattice, pass --run-as-root or -r to the ltc create command. See the redis example below.

If the app is RUNNING, check http://lattice-app.192.168.11.11.xip.io/

http://lattice-app.192.168.11.11.xip.io/env will show environment variables

Shell 1 - Observe the Streaming Logs

Invoke the lattice-app endpoint a few times to see access log entries; a periodic health check also shows up in the log stream.
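
A quick way to generate a few requests from another shell (a minimal sketch; the route comes from the ltc create output above):

for i in 1 2 3; do curl -s http://lattice-app.192.168.11.11.xip.io/; echo; done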

Shell 2 - List the apps

$ ltc list

App Name        Instances        DiskMb                MemoryMB        Routes

lattice-app        1/1                1024                128                lattice-app.192.168.11.11.xip.io

Shell 3 - Container distribution on Diego cells

In a separate shell

$ ltc visualize --rate 1s

You should see a visualization in which each green dot represents a container running on the Diego cell; at this point there is a single dot for the one lattice-app container.

Shell 2 - Scale the app

$ ltc scale lattice-app 3

Scaling lattice-app to 3 instances...

App Scaled Successfully

$ ltc list

App Name        Instances        DiskMb                MemoryMB        Routes

lattice-app        3/3                1024                128                lattice-app.192.168.11.11.xip.io

Shell 1 - Logs aggregated from multiple containers

Request http://lattice-app.192.168.11.11.xip.io/ several times and watch the index change in the responses; health checks for multiple instances now appear in the log stream as well.

$ ltc logs lattice-app

Shell 3 - Visualization of containers

Now there are 3 containers running on the cell.

Kill app instances and see them recover

Request: http://lattice-app.192.168.11.11.xip.io/exit

You will see the exiting instance detected in the log stream. Wait a moment and watch a replacement instance be automatically created for the killed one.
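
A minimal sketch of the whole exercise from one shell (assumes the app is still scaled to 3 instances as above):

curl http://lattice-app.192.168.11.11.xip.io/exit

ltc list    # run this repeatedly: Instances drops below 3/3, then recovers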

Updating the Vagrant Box

Lattice development is moving fast, with daily updates. You can update Lattice to the latest builds by fetching the latest repository changes and then running vagrant up again. Also update ltc on your local client. Note that this wipes all data in the cluster.

$ cd ~/workspace/lattice/

$ vagrant destroy --force

$ git pull

Already up-to-date.

$ vagrant up --provider virtualbox

Make sure to download the latest CLI when you update the new Vagrant box.

Shell 0 - Show the simple app using Docker (optional)

To emphasize Lattice's support for Docker images, you may want to run the example application using the docker CLI. This helps draw a distinction with Lattice, which provides additional features such as routing, log aggregation, and advanced scheduling:

docker run --rm -i -t -p 8080:8080 -e INSTANCE_INDEX=0 -e PORT=8080 cloudfoundry/lattice-app /lattice-app

Then access the app on port 8080 of the boot2docker IP (change the IP below if needed):

open http://`boot2docker ip`:8080

Getting Fancy - Create a docker image starting with application code with Cloud Rocker

Lattice is great if you already have a docker image, but what if you just have application code that you would like to turn into a docker image without much fuss? Cloud Rocker was created to address this problem: it takes an application, processes it with Cloud Foundry Buildpacks, and creates a docker image. Follow the instructions in the README to run Cloud Rocker as a separate VM (it currently supports only virtualbox); a condensed command sequence follows the list below.

  • Validate the VM has a valid docker environment with fock docker
  • Download the Cloud Foundry root file system image with fock this
  • Add the Buildpack with fock add-buildpack https://github.com/cloudfoundry/java-buildpack
  • Create your docker container with fock up from the directory that contains your app and make sure it works with localhost:8080
  • Shutdown the container with fock off
  • Create the docker image with fock build
  • Test the docker image you created with docker run -d -p 8080:8080 IMAGE_ID. Note: Cloud Rocker has an open issue where it does not build the docker CMD properly; you can supply the command manually to docker run, where the main changes are replacing $PWD with /app and “ with \”. Example:  $ docker run -d -p 8080:8080 jbayer/java-example /bin/bash /app/cloudfocker-start-1c4352a23e52040ddb1857d7675fe3cc.sh /app JAVA_HOME=/app/.java-buildpack/open_jdk_jre JAVA_OPTS=\"-Djava.io.tmpdir=/app/tmp -XX:MaxMetaspaceSize=64M -XX:MetaspaceSize=64M -Daccess.logging.enabled=false -Dhttp.port=\$PORT\"  /app/.java-buildpack/tomcat/bin/catalina.sh run
  • Tag the docker image you just verified with the public docker hub location e.g. docker tag IMAGE_ID jbayer/java-example
  • Push it to the public docker hub e.g. docker push jbayer/java-example
  • Some images built with Cloud Rocker do not properly format the docker command (which you can see via docker inspect), so you can specify the properly formatted command on the CLI. e.g. ltc start java-example -i "docker:///jbayer/java-example" -- /bin/bash /app/cloudfocker-start-1c4352a23e52040ddb1857d7675fe3cc.sh /app JAVA_HOME=/app/.java-buildpack/open_jdk_jre JAVA_OPTS=\"-Djava.io.tmpdir=/app/tmp -XX:OnOutOfMemoryError=/app/.java-buildpack/open_jdk_jre/bin/killjava.sh -XX:MaxMetaspaceSize=64M -XX:MetaspaceSize=64M -Daccess.logging.enabled=false -Dhttp.port=\$PORT\" /app/.java-buildpack/tomcat/bin/catalina.sh run
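
For reference, the list above condenses to roughly this sequence (a sketch; IMAGE_ID and jbayer/java-example are placeholders taken from the examples above):

fock docker
fock this
fock add-buildpack https://github.com/cloudfoundry/java-buildpack
fock up          # run from your app directory, then verify localhost:8080
fock off
fock build
docker run -d -p 8080:8080 IMAGE_ID
docker tag IMAGE_ID jbayer/java-example
docker push jbayer/java-example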

Ruby example app built with Cloud Rocker:

lattice-cli start ruby-example -i "docker:///jbayer/ruby-example" -- /bin/bash /app/cloudfocker-start-1c4352a23e52040ddb1857d7675fe3cc.sh /app export HOME=/app \&\& source /app/.profile.d/ruby.sh \&\& bundle exec rackup config.ru -p \$PORT

Node.js example app built with Cloud Rocker:

lattice-cli start node-example -i "docker:///jbayer/node-example" -- /bin/bash /app/cloudfocker-start-1c4352a23e52040ddb1857d7675fe3cc.sh /app export HOME=/app \&\& source /app/.profile.d/nodejs.sh \&\& node web.js

PHP example app built with Cloud Rocker:

lattice-cli start php-example -i "docker:///jbayer/php-example" -- /bin/bash /app/cloudfocker-start-1c4352a23e52040ddb1857d7675fe3cc.sh /app export HOME=/app \&\& source /app/.profile.d/php.sh \&\& vendor/bin/heroku-php-apache2

Python example app built with Cloud Rocker:

An issue is open with Cloud Rocker.

Getting Fancy - Routing table and router metrics

From the lattice repo directory

$ vagrant ssh

$ sudo su -

$ apt-get install yajl-tools

# find the IP address the router binds to for varz

$ netstat -a | grep 8090

tcp        0      0 10.0.2.15:8090          *:*                     LISTEN

$ curl "http://router:router@10.0.2.15:8090/routes" 2>/dev/null | json_reformat

{

    "doppler.192.168.11.11.xip.io": [

        "10.0.2.15:8082"

    ],

    "grace.192.168.11.11.xip.io": [

        "10.0.2.15:61004",

        "10.0.2.15:61006",

        "10.0.2.15:61005"

    ],

    "loggregator.192.168.11.11.xip.io": [

        "10.0.2.15:8070"

    ],

    "receptor.192.168.11.11.xip.io": [

        "10.0.2.15:8888"

    ]

}
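
The router's status port also serves metrics. Assuming the same router:router credentials and the standard gorouter /varz endpoint (an assumption; verify against the gorouter version in your build):

$ curl "http://router:router@10.0.2.15:8090/varz" 2>/dev/null | json_reformat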

Getting Fancy - Explore etcd contents

Diego currently uses etcd to store a consistent view of both the desired and the actual state of the world. The Diego team considers etcd a private interface and may replace it with another approach later. However, it can be useful to see the equivalent of a database dump when troubleshooting Lattice. jq is a tool for pretty-printing JSON; on OSX, install it with brew install jq

curl -L http://192.168.11.11:4001/v2/keys/?recursive=true 2>/dev/null | jq .

You’ll see a combination of cells, desired state, and actual state.
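
If the full dump is too noisy, jq can filter it. For example, to list just the top-level keys (a sketch based on the etcd v2 API response shape):

curl -L http://192.168.11.11:4001/v2/keys/?recursive=true 2>/dev/null | jq '.node.nodes[].key'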

See example output here: https://gist.github.com/jbayer/ded3814ae14e1ea5a5c6

Getting Fancy - Use Diego Tasks

The Diego API supports Tasks, which are short-lived, one-off workloads. Create a file named task.json with the contents shown below, then submit it to the receptor:

curl -X POST -d @task.json http://receptor.192.168.11.11.xip.io/v1/tasks -vvv

{

    "task_guid": "task-guid",

    "domain": "diego-edge",

    "stack": "lucid64",

    "root_fs": "docker:///cloudfoundry/lucid64",

    "action":  {

      "run": {

        "path": "/bin/sh",

        "args": ["-c", "exit 1"],

        "dir": "/tmp"

      }

    },

    "log_guid": "task-guid",

    "log_source": "task-guid"

}

Then poll the task by its guid to see its state and result:

curl http://receptor.192.168.11.11.xip.io/v1/tasks/task-guid -vvv

{

  "action": {

    "run": {

      "path": "/bin/sh",

      "args": [

        "-c",

        "exit 1"

      ],

      "dir": "/tmp",

      "env": null,

      "resource_limits": {}

    }

  },

  "completion_callback_url": "",

  "cpu_weight": 0,

  "disk_mb": 0,

  "domain": "diego-edge",

  "log_guid": "task-guid",

  "log_source": "task-guid",

  "memory_mb": 0,

  "result_file": "",

  "stack": "lucid64",

  "task_guid": "task-guid",

  "root_fs": "docker:///cloudfoundry/lucid64",

  "cell_id": "cell-01_z",

  "created_at": 1419662961360519400,

  "failed": true,

  "failure_reason": "Exited with status 1",

  "result": "",

  "state": "COMPLETED"

}
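
Completed tasks stay in the store until they are resolved. The receptor API of this era also supported deleting a completed task (an assumption here; confirm the route in the receptor API docs for your build):

curl -X DELETE http://receptor.192.168.11.11.xip.io/v1/tasks/task-guid -vvv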

Getting Fancy - Show receptor event stream

Diego includes a receptor API endpoint that streams events to clients as the state of the system changes. The stream is implemented with HTTP Server-Sent Events (SSE) and can therefore be observed with curl.

curl http://receptor.192.168.11.11.xip.io/v1/events
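
SSE responses arrive as event: and data: line pairs. To skim just the event types while scaling an app up and down (a minimal sketch):

curl -s http://receptor.192.168.11.11.xip.io/v1/events | grep '^event:'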

See a full example of scaling an app up to 5 instances and back down to 2 instances.

Getting Fancy - Show Multiple Diego Cells

This story split Lattice into two VM types, a coordinator and a cell. Follow the instructions on the README to create a cluster on AWS, Digital Ocean, or GCE.

Getting Fancy - Run Data Services

Redis

ltc create redis redis -r

Rabbit

ltc create rabbit rabbitmq -r

(Starts on port 5672.)

MySQL

ltc create mysql mysql -r -e MYSQL_ROOT_PASSWORD=somesecret

Postgres

ltc create postgres postgres -r --no-monitor -e POSTGRES_PASSWORD=somesecret

If you do not use --no-monitor, you will continually see the log message:

LOG:  incomplete startup packet

Mongo

ltc create mongo mongo -r -e LC_ALL=C -- /entrypoint.sh mongod --smallfiles

Neo4J

ltc create neo4j tpires/neo4j -r -m 512

It may take some time for the container to load, enough that the ltc create operation times out. You can monitor the status with ltc status neo4j -r 2s or ltc debug-logs | veritas chug

Ubuntu

ltc create ubuntu library/ubuntu -- nc -l 8080

Nginx

ltc create nginx library/nginx -p 80 -r

After the container starts, ltc status APPNAME shows you the port being used.

$ ltc status nginx

================================================================================

      nginx

--------------------------------------------------------------------------------

Instances        1/1

Stack                lucid64

Start Timeout        0

DiskMB                1024

MemoryMB        128

CPUWeight        0

Ports                80

Routes                nginx.192.168.11.11.xip.io

LogGuid        nginx

LogSource        APP

Annotation

--------------------------------------------------------------------------------

Environment

PORT="80"

================================================================================

      Instance 0  [RUNNING]

--------------------------------------------------------------------------------

InstanceGuid        616db740-0f50-4c4e-6409-88b72840ff36

Cell ID        lattice-cell-01

Ip                10.0.2.15

Ports                61035:80

Since                1421702036795054960

--------------------------------------------------------------------------------

From the receptor output you can see what the host:port value is and connect to that endpoint from inside other containers (see the sketch after the output below).

$ curl http://receptor.192.168.11.11.xip.io/v1/actual_lrps/redis 2> /dev/null | jq .

[

  {

    "process_guid": "redis",

    "instance_guid": "a58e5d16-85af-4269-4eab-576ad0911fc9",

    "cell_id": "cell-01_z",

    "domain": "diego-edge",

    "index": 0,

    "host": "10.0.2.15",

    "ports": [

      {

        "container_port": 6379,

        "host_port": 61006

      }

    ],

    "state": "RUNNING",

    "since": 1419402445857793300

  }

]
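
For example, with the host and host_port above, a process inside another container on the cluster could reach this redis with something like (a sketch; assumes an image with redis-cli installed):

redis-cli -h 10.0.2.15 -p 61006 ping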

Spring Cloud Config Server

ltc create --run-as-root configserver springcloud/configserver

If you want to register with Eureka add an env var:

ltc create --env EUREKA_SERVICE_URL=http://eureka.192.168.11.11.xip.io --run-as-root configserver springcloud/configserver

Spring Cloud Eureka Server

ltc create --run-as-root eureka springcloud/eureka

Spring Cloud Clients

ltc create --run-as-root --env CONFIG_SERVER_URL=http://configserver.192.168.11.11.xip.io --env EUREKA_SERVICE_URL=http://eureka.192.168.11.11.xip.io myapp mygroup/myapp

Getting Fancy - Troubleshooting with Lattice Logs

You can retrieve all of the lattice system component logs (not app logs) with the following command.

$ ltc logs lattice-debug

06 Mar 12:51 [rep|lattice-cell-01] {"timestamp":"1425675094.001937628","source":"rep","message":"rep.running-bulker.sync.starting","log_level":1,"data":{"session":"6.16"}}

06 Mar 12:51 [rep|lattice-cell-01] {"timestamp":"1425675094.002042055","source":"rep","message":"rep.running-bulker.sync.batch-operations.started","log_level":1,"data":{"session":"6.16.1"}}

06 Mar 12:51 [executor|lattice-cell-01] {"timestamp":"1425675094.003902912","source":"executor","message":"executor.request.serving","log_level":1,"data":{"method":"GET","request":"/containers","session":"43"}}

06 Mar 12:51 [rep|lattice-cell-01] {"timestamp":"1425675094.009559631","source":"rep","message":"rep.running-bulker.sync.batch-operations.succeeded","log_level":1,"data":{"batch-size":0,"session":"6.16.1"}}

06 Mar 12:51 [rep|lattice-cell-01] {"timestamp":"1425675094.009677887","source":"rep","message":"rep.running-bulker.sync.finished","log_level":1,"data":{"session":"6.16"}}

06 Mar 12:51 [executor|lattice-cell-01] {"timestamp":"1425675094.009293795","source":"executor","message":"executor.request.done","log_level":1,"data":{"method":"GET","request":"/containers","session":"43"}}

06 Mar 12:51 [executor|lattice-cell-01] {"timestamp":"1425675117.676995516","source":"executor","message":"executor.allocation-store-pruner.no-expired-allocations-found","log_level":1,"data":{"session":"4"}}

06 Mar 12:51 [executor|lattice-cell-01] {"timestamp":"1425675117.677023411","source":"executor","message":"executor.container-metrics-reporter.tick.started","log_level":1,"data":{"session":"3.18"}}

If you want to do it the old-fashioned way, see below.

veritas is a CLI tool the Diego development team uses for debugging and troubleshooting how Diego is working. The important functionality in veritas is being ported over to ltc. The veritas CLI is also available as a binary from the github README link.

The majority of the Lattice logs for the various components are located in /var/log/upstart. The Diego development team has built very nice log tooling into veritas to support viewing, filtering, and quickly unifying logs from the various system components. Use vagrant ssh to log in to the VM, then sudo su to become root so you can view the upstart logs. Download veritas for Linux as described in the veritas README. Now you can combine two commands to unify the important logs into a single stream and then serve them at a web address with additional filtering tools.

For example:

root@ubuntu-trusty-64:/var/log/upstart# ./veritas chug-unify auctioneer.log file-server.log gorouter.log  garden-linux.log executor.log rep.log receptor.log converger.log | ./veritas chug-serve

Serving up on http://127.0.0.1:35326

Now you can visit 192.168.11.11:35326 from your laptop's browser (your port will be different; check the port in the output of the chug-serve command) and you should see a very nice display of logs, with filtering capabilities, errors shown distinctly in a zoomed-out global view, and other pretty formatting options.

Getting Fancy - Monitoring with Veritas

You can run veritas commands to look inside Diego’s internals by connecting to etcd and running some commands.

$ export ETCD_CLUSTER=http://192.168.11.11:4001

$ veritas dump-store -rate=1s

This dumps the contents of the etcd store in a formatted way, refreshing at the given rate. An app should transition from a STARTING state to a RUNNING state. The first time an app starts, the cell must download the docker image layers, so it may take a little time.

Getting Fancy - Compiling ltc CLI

The CLI is written in Go and is easy to compile and install locally if you have Go configured. Go uses a packaging system, invoked with go get, to download remote dependencies and install them locally in a place like ~/go/bin. You may need git, mercurial, bazaar, and other dependencies to get this working. Instructions on installing go with brew are here. Before attempting to install the ltc CLI, make sure your $GOPATH is set, for example:

$ echo $GOPATH

/Users/jamesbayer/go

Additional “install from source” CLI instructions are here: https://github.com/cloudfoundry-incubator/lattice/tree/master/ltc#installing-from-source
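
Those instructions amount to roughly the following (a sketch; the package path is inferred from the repository URL, and the repo also ships an install script at ltc/scripts/install that wraps these steps — treat the linked README as authoritative):

$ export GOPATH=$HOME/go

$ export PATH=$PATH:$GOPATH/bin

$ go get github.com/cloudfoundry-incubator/lattice/ltc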

Getting Fancy - Get shell access to a container

If you have more than one cell in your lattice cluster, you must first locate the cell host that is running the container you’re trying to troubleshoot. If you know the name of the app, such as lattice-app, you can do this by looking at the actual LRP output from the API.

curl http://receptor.192.168.11.11.xip.io/v1/actual_lrps/lattice-app 2> /dev/null | jq .

[

  {

    "process_guid": "lattice-app",

    "instance_guid": "669de81c-55a7-4653-5a56-fe241c987227",

    "cell_id": "lattice-cell-01",

    "domain": "lattice",

    "index": 0,

    "address": "10.0.2.15",

    "ports": [

      {

        "container_port": 8080,

        "host_port": 61065

      }

    ],

    "state": "RUNNING",

    "since": 1421563329831053800

  }

]

The output indicates the container is on the first cell, so you can ssh to that machine, using your private ssh key if you are on one of the public cloud providers. If you are using vagrant, just go to the directory where the Vagrantfile is located and run vagrant ssh, followed by sudo su to become root, which is required to interface with Garden.

Garden containers are accessible in /var/lattice/garden/depot. If you list the contents of the Garden depot, you will find all of the running containers. How do you know which container handle matches the container you want to troubleshoot? Once you have shell access to the host where Garden is running, you can issue a request to the Garden API to list all of the container handles, and notice which handle matches the instance_guid from the actual LRP above.

curl http://127.0.0.1:7777/containers

{"Handles":["edc2c392-8e0b-4bb2-575e-28be0a398334"]}

Now you can query the info endpoint for that container handle to find out where it lives on disk:

curl http://127.0.0.1:7777/containers/edc2c392-8e0b-4bb2-575e-28be0a398334/info

A full output example is in this gist. The important part is the ContainerPath attribute, like:

"ContainerPath":"/var/lattice/garden/depot/h3201udncuo"

Now you can move to that directory and execute bin/wsh to get a shell in the container.
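
Concretely, using the ContainerPath from the info output (a sketch; substitute the path from your own output):

cd /var/lattice/garden/depot/h3201udncuo
./bin/wsh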
