Docker is a popular container technology that has been widely adopted by industries across the world, in production as well as UAT environments. However, every new layer in the technology stack can introduce security issues, whether through loose configuration, insecure code, or a combination of both. Because Docker comes with a number of components, DevOps engineers often give little thought to security, which can end in catastrophic results.
In this blog post, NotSoSecure Consultant Shubham Mittal will discuss the security issues raised by an unauthenticated Docker Registry API exposed over the network. But before that, let's cover some basics about Docker and the Docker Registry.
What is Docker?
Docker is a very popular platform used by developers to eliminate "works on my machine" problems when collaborating on code with co-workers. Enterprises use Docker to build agile software delivery pipelines that ship new features faster, more securely, and with confidence.
What are Docker Images and Containers?
An image is a filesystem plus the parameters to use at run time. It has no state and never changes. A container image is a lightweight, stand-alone, executable package of a piece of software that includes everything needed to run it: code, runtime, system tools, system libraries, and settings. Containers isolate software from its surroundings (for example, differences between development and staging environments) and help reduce conflicts between teams running different software on the same infrastructure.
What is Docker Registry?
The Registry is a stateless, highly scalable server-side application that stores Docker images and lets you distribute them over an HTTP API. The earlier version of the Registry API, v1, had a few problems, so v2 was released, which considerably improves security.
However, it should be noted that neither version of the Docker Registry has authentication enabled by default.
What are blobs?
Layers are stored as blobs in the v2 registry API, keyed by their content digest.
Why should the Docker Registry API be authenticated?
The Docker Registry allows any user to pull any of the container images and read any changes made by the owner. These changes might include hard-coded credentials, connection strings, changes in file permissions, custom scripts, etc. Not only that, a user might also be able to upload blobs and make changes to the base images, e.g. by placing a backdoor. The next time this image is deployed, the backdoor gets deployed on the server with it.
How to pull Docker images using docker pull?
The command to pull an image from a private registry is as follows:
docker pull HOST:PORT/IMAGE_NAME
During a recent pentest we encountered a server exposing a Docker Registry API that didn't require any kind of authentication. When we tried to pull the images, we faced certificate errors followed by errors related to keys. We assumed this was because of content trust or certificate keys. Multiple options (e.g. --tls, --tls-check, --disable-content-trust) were tried, but none worked. However, the API was easily accessible from a web browser. Since it was a time-restricted pentest, instead of digging further into why docker pull was not working, we opted for a manual approach.
We therefore decided to explore the API manually and fetch information directly from the Docker Registry server.
Note: For demo purposes, we have installed a private Docker Registry on local.example.com (127.0.0.1), port 30000.
The first check to perform is to verify which version of the API is supported. Sending a GET request to /v1/ and /v2/ will confirm which version of the Registry API is in use, as shown in the figure below:
Check registry version
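As a rough sketch of this probe, the helper below (our own illustrative code, not part of any official tooling) decides the API generation from the HTTP status codes those version-check requests return; an open v2 registry typically answers 200 on /v2/, while a registry behind token auth answers 401.

```python
# Illustrative helper: decide the Registry API generation from the
# status codes returned by GET requests to the version-check endpoints.
BASE = "http://local.example.com:30000"  # the demo registry used in this post

def probe_endpoints(base):
    # v1 exposes /v1/_ping; v2 exposes /v2/
    return [base + "/v1/_ping", base + "/v2/"]

def api_version(status_by_path):
    """status_by_path maps endpoint path -> HTTP status code."""
    if status_by_path.get("/v2/") in (200, 401):
        return "v2"  # 200 = open registry, 401 = auth required
    if status_by_path.get("/v1/_ping") == 200:
        return "v1"
    return None

# An unauthenticated v2 registry answers 200 OK on /v2/:
print(api_version({"/v1/_ping": 404, "/v2/": 200}))  # -> v2
```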
Once the API version is confirmed, we can find the list of repos in the registry using /_catalog as shown in the figure below:
In the context of the Docker Registry, a repository is basically a collection of related images, typically providing different versions (i.e. tags) of the same service or application.
As we can see from the above image, this local installation has three repositories. Let's explore testrepo1 and find the tags listed for this repo using the /v2/REPO_NAME/tags/list endpoint, as shown in the figure below:
Repo tag list
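To illustrate these two calls, the snippet below builds the endpoint URLs and parses responses shaped like the registry's JSON. The response bodies here are hand-written samples mirroring this demo setup, not live output.

```python
import json

BASE = "http://local.example.com:30000"  # demo registry from this post

def catalog_url(base):
    # lists every repository in the registry
    return base + "/v2/_catalog"

def tags_url(base, repo):
    # lists the tags of one repository
    return base + "/v2/" + repo + "/tags/list"

# Hand-written sample bodies modelled on the registry's response schema:
catalog_body = '{"repositories": ["testrepo1", "testrepo2", "testrepo3"]}'
tags_body = '{"name": "testrepo1", "tags": ["v1", "v2"]}'

repos = json.loads(catalog_body)["repositories"]
tags = json.loads(tags_body)["tags"]
print(repos, tags)
```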
Since we identified that there are two tags, v1 and v2, let's download the manifest for the v2 tag using the /v2/REPO_NAME/manifests/TAG endpoint, as shown in the figure below:
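As a hedged sketch of reading that manifest, the helper below pulls the layer digests out of a manifest body, covering both manifest schema v1 (fsLayers/blobSum) and schema v2 (layers/digest). The sample manifest and its digests are hand-written for illustration.

```python
import json

def layer_digests(manifest_body):
    """Extract layer digests from a registry manifest (schema v1 or v2)."""
    m = json.loads(manifest_body)
    if "fsLayers" in m:  # manifest schema v1
        return [layer["blobSum"] for layer in m["fsLayers"]]
    return [layer["digest"] for layer in m.get("layers", [])]  # schema v2

# Hand-written sample in schema v1 form, with fake digests:
sample = json.dumps({
    "schemaVersion": 1,
    "name": "testrepo1",
    "tag": "v2",
    "fsLayers": [
        {"blobSum": "sha256:" + "aa" * 32},
        {"blobSum": "sha256:" + "bb" * 32},
    ],
})
digests = layer_digests(sample)
print(digests)
```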
Once we have the list of blobs, we can download each blob using the endpoint /v2/REPO_NAME/blobs/sha256:DIGEST. For this instance, our endpoint URL is http://local.example.com:30000/v2/testrepo1/blobs/sha256:DIGEST, where the digest is one of those listed in the manifest.
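A small helper (our own, for illustration) assembles that blob URL from the pieces above; the digest used here is a fake placeholder, not a real layer digest.

```python
BASE = "http://local.example.com:30000"  # demo registry from this post

def blob_url(base, repo, digest):
    # A GET on this URL returns the gzipped layer content
    return "{0}/v2/{1}/blobs/{2}".format(base, repo, digest)

fake_digest = "sha256:" + "ab" * 32  # placeholder digest for illustration
print(blob_url(BASE, "testrepo1", fake_digest))
```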
This will download a gzipped file for each blob; one blob is assigned to each commit (that is, to each set of configuration changes made to the base image).
Once we have downloaded all the blobs, we can unzip them and go through the folder structure to locate the configurations that have been applied. One of the blobs of testrepo1 was unzipped, and its folder structure is shown in the figure below:
In this example we have purposely planted some sensitive information while uploading the blob to the image (the v2 tag of the testrepo1 repository). This represents the type of trophy that might be found in a real-world scenario.
As we can see, there is a file, 'login.py', which contains sensitive information.
We have written a small Python script to reduce the manual effort. Based on the Registry API URL and the repository name passed by the user, the script performs all of these operations automatically, downloading all of the blobs into a user-defined folder.
You can download / clone the script from https://github.com/NotSoSecure/docker_fetch/
The script asks the user for a few inputs, based on which it selects the repository and tag to download, as shown in the figure below:
docker_fetch in action
This script will save all the gzipped blobs in the directory defined by the user. If a large number of blobs were downloaded for a particular tag, we can use the following for loop (works only on *nix) to unzip them all in one go:
for i in *.tar.gz; do tar -xzvf "$i"; done
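The same step can be done portably with the Python standard library. The sketch below (our own code, not part of the docker_fetch script) unpacks every *.tar.gz blob in a directory; the demo fabricates a single tiny blob so it runs without a registry. Note that extracting untrusted tar archives can be abused for path traversal, so treat pulled blobs as hostile input and unpack them in an isolated directory.

```python
import glob
import io
import os
import tarfile
import tempfile

def extract_blobs(blob_dir, out_dir):
    """Unpack every downloaded blob; mirrors the shell one-liner above.
    Caution: extracting untrusted archives can write outside out_dir."""
    for path in sorted(glob.glob(os.path.join(blob_dir, "*.tar.gz"))):
        with tarfile.open(path, "r:gz") as tar:
            tar.extractall(out_dir)

# Self-contained demo: fabricate one blob, then unpack it.
blob_dir = tempfile.mkdtemp()
out_dir = tempfile.mkdtemp()
with tarfile.open(os.path.join(blob_dir, "blob1.tar.gz"), "w:gz") as tar:
    data = b"password = 'example'"  # planted secret, as in this post's demo
    info = tarfile.TarInfo("app/login.py")
    info.size = len(data)
    tar.addfile(info, io.BytesIO(data))

extract_blobs(blob_dir, out_dir)
extracted = os.path.exists(os.path.join(out_dir, "app", "login.py"))
print(extracted)
```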
Once the blobs are unzipped, the user can manually search them for any sensitive information.
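To speed up that manual review, a simple keyword grep over the unpacked files helps. The pattern below is only a starting set of secret markers, not an exhaustive one.

```python
import re

# Common markers of hard-coded secrets; extend this set as needed.
SECRET_PATTERN = re.compile(
    r"passw|secret|token|api[_-]?key|PRIVATE KEY|connectionstring",
    re.IGNORECASE,
)

def suspicious_lines(text):
    """Return the lines of a file that look like they contain credentials."""
    return [line for line in text.splitlines() if SECRET_PATTERN.search(line)]

# Sample input resembling the planted login.py from this post:
sample = "user = 'admin'\npassword = 'S3cretValue'\nprint('hello')"
print(suspicious_lines(sample))
```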
In a similar manner, an attacker can also push code to base images via the Docker Registry API. An attacker might be able to replace, or add malicious files to, an existing Docker image, and as soon as that image is used to configure a box, the attacker can take control of the new machine.
Below are some recommendations that organizations and individuals should implement when setting up a private Docker Registry:
- Version 2 of the Docker Registry API supports token-based authentication (e.g. bearer tokens, OAuth2), which should be implemented when deploying the registry.
- Enable Content Trust to enforce client-side signing and verification of image tags.
- Use TLS.
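As a minimal sketch of the first and third recommendations, a registry config.yml along these lines (paths and realm names are placeholders) enables htpasswd-based authentication and TLS; token auth against an external authorization service would be configured via an auth.token section instead:

```yaml
# Minimal sketch of a registry config.yml -- all paths are placeholders
version: 0.1
storage:
  filesystem:
    rootdirectory: /var/lib/registry
http:
  addr: :5000
  tls:
    certificate: /certs/domain.crt   # serve the API over TLS
    key: /certs/domain.key
auth:
  htpasswd:                          # basic auth; use auth.token for bearer tokens
    realm: basic-realm
    path: /auth/htpasswd
```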
As we have seen, a private Docker Registry allows read and write operations on the base images used in the deployment process. This makes it a sensitive component, and it should therefore never allow unauthenticated access.