Is there any way to truly secure Docker container contents?

There is much to like about Docker. A great deal has been written about it, and about how secure the containerization is.

This post isn’t about that. This is about keeping what’s inside each container secure. I believe we have a fundamental problem here.

Earlier this month, a study of security vulnerabilities in Docker Hub images came out, and the picture isn’t pretty. One key finding:

Over 80% of the :latest versions of official images contained at least one high severity vulnerability!

And it’s not the only one raising questions.

Let’s dive in and see how we got here.

It’s hard to be secure, but Debian makes it easier

Let’s say you want to run a PHP application like WordPress under Apache. Here are the things you need to keep secure:

  • WordPress itself
  • All plugins, themes, customizations
  • All PHP libraries it uses (MySQL, image-processing, etc.)
  • MySQL
  • Apache
  • All libraries MySQL or Apache use: OpenSSL, libc, PHP itself, etc.
  • The kernel
  • All containerization tools

On Debian (and most of its best-known derivatives), we are extremely lucky to have a wonderful security support system. If you run a Debian system, the combination of unattended-upgrades, needrestart, debsecan, and debian-security-support will help you keep the system secure and verify that it is. When the latest OpenSSL bug comes out, generally by the time I wake up, unattended-upgrades has already patched it, needrestart has already restarted any server that uses it, and I’m protected. Debian’s security team generally backports fixes rather than just saying “here’s the new version”, making it very safe to apply patches automatically. As long as I use what’s in Debian stable, all the layers mentioned above are protected by this scheme.
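
For concreteness, here is roughly what setting that up looks like on a Debian box. Treat it as a minimal sketch rather than a hardening guide; it assumes you are root on a stable (jessie, at this writing) system:

    # Install the tools mentioned above
    apt-get update
    apt-get install unattended-upgrades needrestart debsecan debian-security-support

    # Enable automatic installation of security updates
    dpkg-reconfigure -plow unattended-upgrades

    # List known vulnerabilities in installed packages for which fixes exist
    debsecan --suite jessie --only-fixed

    # Report packages whose security support is limited or has ended
    check-support-status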

This picture is much nicer than what we see in Docker.

Problems

We have a lot of problems in the Docker ecosystem:

  1. No built-in way to know when a base needs to be updated, or to automatically update it
  2. Diverse and complicated vendor security picture
  3. No way to detect when intermediate libraries need to be updated
  4. Complicated final application security picture

Let’s look at them individually.

Problem #1: No built-in way to know when a base needs to be updated, or to automatically update it

First of all, there is nothing in Docker like unattended-upgrades. Although a few people have suggested ways to run unattended-upgrades inside containers, there are many reasons that approach doesn’t work well. The standard advice is to update/rebuild containers.

So how do you know when to do that? It is not all that obvious. Theoretically, official OS base images will be updated when needed, and then other Docker Hub images will detect the base update and be rebuilt. So, if a bug in a base image is found, and if the vendors work properly, and if you are somehow watching, then you could be protected. There is work in this area; tools such as watchtower help here.
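
In practice, “update/rebuild” usually ends up meaning something like the following, run from cron or a CI job. This is a minimal sketch; myapp is a hypothetical image built from a Dockerfile in the current directory:

    # Re-pull the base image and rebuild without reusing cached layers
    docker build --pull --no-cache -t myapp:latest .

    # Replace the running container with one from the fresh image
    docker stop myapp && docker rm myapp
    docker run -d --name myapp myapp:latest

Note that nothing in this tells you when a rebuild is actually needed; you either do it on a schedule or wire up something like watchtower to watch for updated images.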

But this can lead to a false sense of security, because:

Problem #2: Diverse and complicated vendor security picture

Different images can use different operating system bases. Consider just these official images and the bases they use (tracking the latest tag on each):

  • nginx: debian:stretch-slim (stretch is pre-release at this date!)
  • mysql: debian:jessie
  • mongo: debian:wheezy-slim (previous release)
  • apache httpd: debian:jessie-backports
  • postgres: debian:jessie
  • node: buildpack-deps:jessie, eventually depends on debian:jessie
  • wordpress: php:5.6-apache, eventually depends on debian:jessie

And how about a few unofficial images?

  • oracle/openjdk: oraclelinux:latest
  • robotamer/citadel: debian:testing (dangerous, because testing is an alias for different Debian releases at different times)
  • docker.elastic.co/kibana: ubuntu of some sort

The good news is that Debian jessie seems to be pretty popular here. The bad news is that you see everything from Oracle Linux, to Ubuntu, to Debian testing, to Debian oldstable in just this list. Go a little further, and you’ll see Alpine Linux, CentOS, and many more represented.
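
You don’t have to take the Dockerfile’s word for it, either; a quick way to see what base an image actually ships is to ask the image itself. A rough spot check (tags and output will of course vary over time):

    # Show the distribution an image is based on
    docker run --rm nginx:latest cat /etc/os-release

    # For Debian-based images, the exact point release is visible too
    docker run --rm mysql:latest cat /etc/debian_version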

Here’s the question: what do you know about the security practices of each of these organizations? How well updated are their base images? Even if it’s Debian, how well updated is, for instance, the oldstable or the testing image?

The attack surface here is a lot larger than if you were just using a single OS. But wait, it gets worse:

Problem #3: No way to detect when intermediate libraries need to be updated

Let’s say your Docker image is using a base that is updated immediately when a security problem is found. Let’s further assume that your software package (WordPress, MySQL, whatever) is also being updated.

What about the intermediate dependencies? Let’s look at the build process for nginx. The Dockerfile for it begins with debian:stretch-slim. But then it does a natural thing: it runs an apt-get install, pulling in packages from both Debian and an nginx repo.

I ran the docker build for this image. Of course, the apt-get command brings in not just the specified packages, but also their dependencies. Here are the ones nginx brought in:

fontconfig-config fonts-dejavu-core gettext-base libbsd0 libexpat1 libfontconfig1 libfreetype6 libgd3 libgeoip1 libicu57 libjbig0 libjpeg62-turbo libpng16-16 libssl1.1 libtiff5 libwebp6 libx11-6 libx11-data libxau6 libxcb1 libxdmcp6 libxml2 libxpm4 libxslt1.1 nginx nginx-module-geoip nginx-module-image-filter nginx-module-njs nginx-module-xslt ucf
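
One way to see this sort of thing for yourself is to diff the installed-package lists of the base and derived images. A sketch, using throwaway files on the host:

    # Dump the package list from each image, then show what the nginx image added
    docker run --rm debian:stretch-slim dpkg-query -W -f '${Package}\n' | sort > base.txt
    docker run --rm nginx:latest dpkg-query -W -f '${Package}\n' | sort > nginx.txt
    comm -13 base.txt nginx.txt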

Now, what is going to trigger a rebuild if there’s a security fix to libssl1.1 or libicu57? (Both of these have a history of security holes.) The answer, for the vast majority of Docker images, seems to be: nothing automatic.
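
You can at least spot-check an image by hand: run apt inside a throwaway container and simulate an upgrade against whatever repositories the image ships. A sketch (this only sees the repos the image is configured with, and it doesn’t rebuild anything for you):

    # List packages in the image that have pending updates
    docker run --rm nginx:latest sh -c \
      'apt-get update -qq && apt-get -s dist-upgrade | grep ^Inst'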

Problem #4: Complicated final application security picture

And that brings us to the last problem: let’s say you want to run an application in Docker. exim, PostgreSQL, Drupal, or maybe something more obscure. Who is watching for security holes in it? If you’re using Debian packages, the Debian security team is. If you’re using a Docker image, well, maybe it’s the random person who contributed it, maybe it’s the vendor, maybe it’s Docker, maybe it’s nobody. You have to take this burden on yourself and validate the security support picture for each image you use.
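
Part of taking on that burden can be scanning the image yourself. For Debian-based images, debsecan (mentioned earlier) can be pressed into service in a throwaway container. This is only a sketch, and it covers only packages managed by dpkg, not software installed by other means (WordPress itself, for instance, is not a Debian package in the official wordpress image):

    # List known CVEs against the dpkg-managed packages in a Debian-based image
    docker run --rm wordpress:latest sh -c \
      'apt-get update -qq && apt-get install -y -qq debsecan && debsecan --suite jessie'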

Conclusion

All this adds up to a lot of work, which is not taken care of for you by default in Docker. Given this picture, it is no surprise that many Docker images are insecure. The unfortunate reality is that many Docker containers are running with known vulnerabilities that have known fixes which simply haven’t been applied, and that’s sad.

I wonder: are there practices people are using that mitigate this better than the current best-practice recommendations seem to?