Docker Security with a Zero Trust Model

An open-source, zero-trust model doesn’t just mean that the source code is open; it means that the process of compilation and delivery can also be checked at each step of the way. In building a secured system, a good check is to run the delivery process backwards until one gets all the way back to the source code. Container setup, delivery, and maintenance should be no exception to this rule.

In running that check backwards from docker delivery (docker pull), I noticed that the SHA256SUM hash for the image released by Canonical (https://partner-images.canonical.com/core/focal/YYYYMMDD/SHA256SUMS) didn’t match the official, signed, blessed SHA256SUM hash for the same image released by Docker on hub.docker.com. I wanted to know WHY they were different, so I opened tickets with Docker and Canonical. The answer I got was that Docker “does stuff” to get the image ready for download on their hub. I wanted to know exactly what that difference was, and so I created (what’s now known as) the program “Snare” to document the exact changes between the two versions. A full binary diff found unsigned binary images added in the hub.docker.com version. That security ticket (and another related to it) is now closed, so I’m doing a writeup of why “Snare” was created and what I feel is a hole in existing (as of 2021-12-22) docker image verification tests.

What actually happens in the complete docker process that goes into a “docker pull” command? The steps are as follows (numbered from Step 0, since the rest of this writeup refers to them that way):

  0. The vendor releases their own image with HASH A.
  1. The vendor’s image is taken by Docker, made “docker container ready”, placed at hub.docker.com, and given a new signature (HASH B). Some things to note:
     - The vendor’s image is often hashed but not cryptographically signed.
     - Docker’s image is often hashed but not cryptographically signed.
     - HASH A won’t match HASH B if Docker adds low-level modifications.
  2. “docker pull” checks that the hash on hub.docker.com matches the hash delivered to your local machine.
  3. Docker extracts that downloaded image (essentially a tar file) into a de-tarified location on disk named after the hash.
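As a minimal sketch of what a zero-trust check over those steps looks like (this is an illustration, not Snare’s actual code; the function names are my own), one can compute and compare the digests at each hand-off point:

```python
import hashlib

def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a file, streamed in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_chain(vendor_hash: str, hub_hash: str, local_path: str) -> dict:
    """Report where the chain of custody diverges between steps.

    vendor_hash: HASH A, published by the vendor (Step 0)
    hub_hash:    HASH B, published on hub.docker.com (Step 1)
    local_path:  the downloaded image on the local machine (Step 2)
    """
    local_hash = sha256_of(local_path)
    return {
        "step0_vs_step1": vendor_hash == hub_hash,  # mismatch expected if Docker modifies the image
        "step1_vs_step2": hub_hash == local_hash,   # this is the check "docker pull" performs
    }
```

Note that the first comparison is exactly the one that fails for the Canonical image described above: HASH A and HASH B legitimately differ, which is why one needs a content-level diff rather than a hash comparison to understand the change.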

This process is mapped out in the following diagram.

The location where images are saved or deployed might not be on the same system that ran the “docker pull”, and with NFS or cloud-based file storage it’s unlikely that the machine running the container is the same one storing that data. This makes it even more critical to verify what’s delivered in Step 3.

 

So what parts can we check with Docker security checks? The Docker Command Line Interface (CLI) provides “docker pull”, “docker inspect”, “docker save”, and “docker diff”, each covered below.

Docker has also partnered with Snyk to do additional “scanning” on the manifest of docker images (not the actual deployed image), and we’ll touch on why that’s insufficient.

But first, what do you get when you pull a container image?

You get two parts: a “manifest”, which has details about the image itself, and the actual files that make up the image, delivered as tar files representing the different layers.
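To make the manifest concrete, here is a trimmed, illustrative example in the shape of a Docker image manifest (schema version 2), with a small helper that pulls out the layer digests. The digests below are placeholders, not real image hashes:

```python
import json

# A trimmed, illustrative image manifest (schema v2).
# The "sha256:..." values are placeholders, not real digests.
EXAMPLE_MANIFEST = """
{
  "schemaVersion": 2,
  "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
  "config": {
    "mediaType": "application/vnd.docker.container.image.v1+json",
    "digest": "sha256:aaaa...",
    "size": 7023
  },
  "layers": [
    {"mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
     "digest": "sha256:bbbb...", "size": 32654},
    {"mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
     "digest": "sha256:cccc...", "size": 16724}
  ]
}
"""

def layer_digests(manifest_json: str) -> list:
    """Return the digest recorded for every layer in the manifest."""
    manifest = json.loads(manifest_json)
    return [layer["digest"] for layer in manifest["layers"]]
```

The key point for what follows: every integrity claim in this file is a *recorded* value. Any check that reads only the manifest is trusting that record, not the bytes on disk.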

Docker Pull:

We’ve covered docker pull as doing the hash (and signing, if enabled) check between steps 1 and 2.

Docker inspect:

Docker inspect checks the manifest and gives you the details about the image. It doesn’t actually scan the tar files, but its output can be used by other security checks to verify hashes on the layer tar files.
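Such a follow-on check is straightforward: hash a layer tarball on disk and compare it to the digest the manifest records. This is a sketch of the idea, not a drop-in tool (the function name is my own):

```python
import hashlib

def check_layer(layer_path: str, manifest_digest: str) -> bool:
    """Hash a layer tarball on disk and compare it against the
    "sha256:<hex>" digest recorded in the image manifest."""
    algo, _, expected_hex = manifest_digest.partition(":")
    if algo != "sha256":
        raise ValueError("only sha256 digests are handled in this sketch")
    h = hashlib.sha256()
    with open(layer_path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest() == expected_hex
```

This verifies the layer *archives*; it still says nothing about files sitting in the extracted, on-disk image directories, which is exactly the gap the next sections describe.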

Docker save:

The command “docker save” reads the manifest, checks whether the saved files differ from the signatures in the manifest, and rebuilds a tarfile/manifest based on the original manifest. This essentially takes Step 3 and recreates, at the local level, a tar file based on that manifest. There is a security issue here, however: if the local directory into which the image was extracted has been modified, “docker save” does not detect those changes. This has three consequences: (1) when you run “docker save”, the injected files will not be encapsulated in the .tar file created; (2) as a result, if you send the “docker save” version for forensic analysis, it will miss those files; and (3) since the injected files survive even as one destroys and creates new containers, they are a persistent and undetected threat even in a “new” and “fresh” container.
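The blind spot can be simulated in a few lines. This is a toy model, not docker’s actual save code: a “manifest” is just a list of known paths, and the repack includes only what the manifest lists, so a file injected directly on disk silently vanishes from the archive:

```python
import io
import os
import tarfile
import tempfile

def repack_from_manifest(root: str, manifest: list) -> set:
    """Rebuild a tar using only the paths the manifest lists
    (mimicking how 'docker save' trusts the manifest) and return
    the member names that end up in the archive."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        for name in manifest:
            tar.add(os.path.join(root, name), arcname=name)
    buf.seek(0)
    with tarfile.open(fileobj=buf) as tar:
        return set(tar.getnames())

# Simulate an extracted image directory with one legitimate file.
root = tempfile.mkdtemp()
with open(os.path.join(root, "app.conf"), "w") as f:
    f.write("legit\n")
manifest = ["app.conf"]  # all the manifest knows about

# An attacker injects a file directly on disk, after extraction.
with open(os.path.join(root, "implant.so"), "w") as f:
    f.write("evil\n")

packed = repack_from_manifest(root, manifest)
# The injected file is absent from the repacked archive, so any
# forensic analysis of the saved tar will never see it.
assert "implant.so" not in packed and "app.conf" in packed
```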

Docker diff:

If one is inside a container and makes changes, those changes are recorded in that layer's manifest. The command “docker diff” looks at the manifest and reports changes as stated in the manifest.
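The same trust problem can be illustrated with a toy comparison (again, a simplified model, not docker’s overlay machinery): a diff that reports only what was *recorded* misses an out-of-band write, while a diff that re-scans the storage directory catches it:

```python
import os
import tempfile

def recorded_diff(recorded_changes: set) -> set:
    """A diff that only reports recorded changes (docker-diff style)."""
    return recorded_changes

def scan_diff(baseline: set, root: str) -> set:
    """A diff that re-scans the directory and compares to a baseline."""
    return set(os.listdir(root)) - baseline

root = tempfile.mkdtemp()
open(os.path.join(root, "base.txt"), "w").close()
baseline = set(os.listdir(root))

# A change made through the tracked path gets recorded...
open(os.path.join(root, "tracked.txt"), "w").close()
recorded = {"tracked.txt"}

# ...but a write made directly to the backing store does not.
open(os.path.join(root, "injected.txt"), "w").close()

assert recorded_diff(recorded) == {"tracked.txt"}
assert scan_diff(baseline, root) == {"tracked.txt", "injected.txt"}
```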

Snyk:

According to Snyk, it doesn’t actually send the entire image to its servers; instead, it uses the manifest to send a list of files that Snyk then analyzes. (Note: Snyk is closed source, so I cannot verify this information.)

So how does this impact Zero Trust? One needs to analyze changes between the steps that are not covered by docker security checks. I know that the downloaded image hasn’t changed in transit, because docker verifies the download hash, but when I look at vendor-deployed images, a Zero Trust model would also check between Steps 0 and 1 and between Steps 2 and 3.

I started writing Snare (https://github.com/ajrepo/snare/) to analyze what was happening between Steps 0 and 1. In that investigation I discovered an unsigned binary added to the package cache, which could have been exploited to corrupt an entire image. That ticket was closed, but it started my deep dive into understanding how layers are created and tracked in the container infrastructure.

Some colleagues suggested that I create a screen capture that more easily explains how this process works and how one can detect these changes over time.

In creating that screen capture, I also discovered that by exploiting how Docker trusts the manifest to track changes on disk, I could inject files into a docker container completely undetected by the docker CLI integrity checks.

https://vimeo.com/658829709/927ec53f2d

The above video walks through exploiting Step 3 in a way that is undetectable by docker image security checks. It also shows a program I wrote to detect these issues, as well as differences between images at Step 0 and Step 1, or any differences in an image over time, even when not detected by docker diff.

Bypassing the docker manifest and checking the raw data on disk also allows one to run automated, detailed diffs of images over time, and ensures that your checks rely on nothing other than the security of the scanning machine (which can be a separate, minimal, secured device with only read access to the docker image storage engine).
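The core of such a raw-disk check can be sketched in a few lines (an illustration of the approach, not Snare itself): hash every file under the storage directory, keyed by path, and diff two snapshots without consulting any manifest:

```python
import hashlib
import os

def tree_digest(root: str) -> dict:
    """Hash every file under root, keyed by relative path, so two
    snapshots of an image storage directory can be compared without
    trusting any manifest."""
    digests = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            h = hashlib.sha256()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(1 << 20), b""):
                    h.update(chunk)
            digests[os.path.relpath(path, root)] = h.hexdigest()
    return digests

def diff_trees(old: dict, new: dict) -> dict:
    """Report files added, removed, or modified between two snapshots."""
    return {
        "added": sorted(set(new) - set(old)),
        "removed": sorted(set(old) - set(new)),
        "modified": sorted(p for p in set(old) & set(new)
                           if old[p] != new[p]),
    }
```

Running `tree_digest` on a schedule from a separate, read-only scanning machine and diffing successive snapshots surfaces exactly the class of injected, manifest-invisible files demonstrated in the video.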