Docker Makes Creating Secure Sandboxes Easier Than Ever

Today dotCloud announced that they have open-sourced Docker, their LXC container runtime. This is exciting news because Docker adds several important capabilities that have been missing from the stock LXC tooling. The two most significant, in my mind, are out-of-the-box support for building AUFS-based images, giving true copy-on-write, read-only file systems (similar to how distro live CDs work), and fast launching of ephemeral containers! With those two pieces in place, spinning up a sandbox to run arbitrary code in relative safety is now super easy.
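To get a feel for just how fast the ephemeral side is, once you have Docker installed (see the prerequisites below) you can time a throwaway container built from the stock base image; the echoed message is just for illustration:

$ time sudo docker run base /bin/echo "hello from a disposable container"

The container starts, runs its single command, and is gone again in moments, rather than the minutes a fresh VM would take.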

To give you an example of just how easy, let’s take a look at how to run some JavaScript on Node.js safely inside a sandbox with Docker.

Prerequisites

  1. Access to an Ubuntu machine or VM
  2. Docker installed, following the instructions here: http://docker.io/gettingstarted.html

Let’s Do It!

The base Docker image is a bare-bones Ubuntu Server install. We’re going to use that as a starting point and create our own image with Node.js installed.

In case you’d rather just watch, check out the screencast: http://youtu.be/KkSbEvuRbfo

Step 1) Open a terminal and start a container from the base image. This will drop you into a shell where we can begin to customise the image:

$ sudo docker run -i -t base /bin/bash

Step 2) Install node:

$ apt-get update
$ apt-get install python-software-properties python g++ make
$ add-apt-repository ppa:chris-lea/node.js
$ apt-get update
$ apt-get install nodejs

Note: replace the `python-software-properties` package with `software-properties-common` on Ubuntu 12.10 and above.
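On 12.10 or later, for example, the second command becomes:

$ apt-get install software-properties-common python g++ make

Once the install finishes, a quick sanity check confirms node is on the path (the version printed will depend on what the PPA currently ships):

$ node --version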

Step 3) Next, bake our own image. Open another terminal session, leaving the first one running in the background, then:

$ sudo docker ps

This should show the ID of the container running in the other terminal. Copy it, then run:

$ sudo docker commit <paste your container ID here> node

Step 4) Check that the image was created:

$ sudo docker images

If all went to plan, you should see your new image, “node”, in the list.

Step 5) Run some code inside your new sandbox:

$ echo "console.log('Hello World');" | sudo docker run -i node /bin/bash -c "cat > hello.js; node hello.js"

Profit

And that’s all there is to it! You’ve just successfully run arbitrary code in a safe, secure sandbox. Better still, since it’s based on LXC, there are options for setting resource quotas to limit CPU and memory usage, meaning that denial of service by resource starvation is much easier to defend against.
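As a rough sketch of what that looks like (flag behaviour may vary between Docker releases, so treat the values as illustrative), `-c` assigns relative CPU shares and `-m` caps memory, here at 128 MB expressed in bytes:

$ echo "console.log('Hello World');" | sudo docker run -i -c 512 -m 134217728 node /bin/bash -c "cat > hello.js; node hello.js"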

I’m really looking forward to seeing how this develops over the coming months and how others put it to use.


6 thoughts on “Docker Makes Creating Secure Sandboxes Easier Than Ever”

    • Dave Kuhn says:

      Thanks for the link. I wasn’t aware of that particular issue.

      When I say secure, I mean it in the sense that it’s one of the best solutions out there today. There’s no such thing as a perfect solution; every sandbox has its vulnerabilities. Compared to the old chroot jails, Linux containers are practically akin to Fort Knox.

  1. So I’ve been looking into Docker a great deal as a potential solution for a sandboxing environment I need to create (a simple environment to test submitted, potentially dangerous, student code). That said, do you have any recommendations for preventing a good ol’ fashioned fork bomb within a Docker container?

    • Dave Kuhn says:

      Yeah, there’s actually quite a bit you can do to prevent this. The first thing is configuring some sensible PAM limits for the number of processes (e.g. 100) a single user can run in any one session. This is a good start and will prevent the classic fork bomb, but doesn’t help with good old resource starvation (which is a form of DoS, I suppose). For that you can set resource limits using the -c and -m options of the docker run command.
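      A minimal sketch of such a PAM limit, added to /etc/security/limits.conf inside the container image (the sandbox user name is only an example):

      # cap any single session of the sandbox user at 100 processes
      sandbox  hard  nproc  100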

      • I’ve now attempted to set PAM limits for a user inside a running container, as well as for the user who executes the docker command to run the container. In either case, the spawned children are always run by the root user. I’m assuming this is because the docker daemon runs as root. Is there a means of having the user that ran the docker command own any children spawned from inside the container?

  2. When you say set the PAM limits, is that inside the container or on the host? I attempted to set them inside the container, but from what I can gather all the processes are spawned by the host user. I could set PAM limits on the host, but then each Docker container I run potentially malicious code in would have to run as its own user (if I were to ensure each container is allowed the same number of processes).
