For (simple) web applications, Docker is usually a perfect fit. However, as you begin to migrate your applications into Docker containers, you might ask yourself how to forward all the incoming requests to the different containers. A Docker reverse proxy can help!
Virtual Hosts vs. Containers
In a classic setup without Docker, you might have a web server like Apache or nginx. The web server is in charge of multiple websites and web applications, all separated by virtual hosts. The virtual hosts are selected by hostname (i.e. ServerName in Apache or server_name in nginx) and/or listen on different IP addresses.
However, in a really simple setup you might have:
- one public IP address
- a web server listening on this IP address via port 80 (HTTP) and 443 (HTTPS)
- multiple virtual hosts based on the Host: HTTP header
All requests will be handled by this single web server, which will evaluate the Host: header and forward the request to the desired virtual host.
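As an illustration, two such Host-based virtual hosts in nginx might look like this (hostnames and paths are made up for the example):

```nginx
# Two virtual hosts on the same IP address and port,
# selected by the Host: header of the incoming request
server {
    listen 80;
    server_name app1.example.com;
    root /var/www/app1;
}

server {
    listen 80;
    server_name app2.example.com;
    root /var/www/app2;
}
```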
When we look at Docker containers we realise that each virtual host is now a separate container. Every web application has its own container with its own web server instance. Instead of a single web server, we eventually have multiple web servers. Of course you can easily use different ports for each container / web server, but this isn’t very handy.
A reverse proxy for Docker containers
What you’re looking for
Instead of forwarding all the requests directly to the containers, you should use a reverse proxy. The reverse proxy listens for incoming HTTP(S) requests and forwards them to your containers. However, if you use a default Docker setup, the IP addresses of your containers can change at any time. So there are two options:
- Give your containers fixed IP addresses
- Use a more dynamic proxy configuration
Even though fixed IP addresses have their benefits, they go against the nature of Docker and you'd lose a lot of other advantages. So let's focus on the dynamic proxy configuration.
The thing you’re looking for is:
- A reverse proxy process.
- A process which “knows” your web application containers.
- A process which updates your reverse proxy with the correct configuration.
Let’s focus on the simple part first, the reverse proxy.
There are a lot of different options out there for reverse proxying (e.g. Squid, Apache, nginx). I’m a big fan of nginx, because it’s easy to configure and it’s fast! So I always use the official nginx Docker image.
Of course it's only a generic nginx image, so we need to provide it with an nginx configuration. Because we don't want to overwrite the default nginx config, we mount the nginx conf.d directory into the Docker container. Since we also use HTTPS (SSL), we need some certs as well.
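A minimal sketch of how such a proxy container could be started (the host paths are examples, adjust them to your own setup):

```shell
# mount the nginx config directory and the certs read-only,
# and publish ports 80 and 443 on the host
docker run -d --name proxy \
  -p 80:80 -p 443:443 \
  -v /var/lib/docker/data/proxy/conf.d:/etc/nginx/conf.d:ro \
  -v /var/lib/docker/data/proxy/certs:/etc/nginx/certs:ro \
  nginx
```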
Unfortunately, the conf.d directory is empty right now, but we'll provide a configuration in the next chapter. Please also make sure that ports 80 (HTTP) and 443 (HTTPS) are properly forwarded to the nginx container.
A guy called jwilder built a really nice Docker image which does some magic: docker-gen "knows" your containers and will render a configuration file based on a template. However, docker-gen needs read access to your Docker socket, because it needs to monitor containers starting and stopping.
So we need to mount 3 different volumes into this Docker container:
```
/var/run/docker.sock:/tmp/docker.sock:ro
/var/lib/docker/data/proxygen:/templates
/var/lib/docker/data/proxy/conf.d:/conf
```
Before docker-gen can do anything, you need to feed it a Go template that renders the nginx configuration.
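A minimal template along these lines illustrates the idea (a sketch, not a full production template; it assumes docker-gen's groupByMulti helper and the Addresses list it exposes per container):

```
{{ range $host, $containers := groupByMulti $ "Env.VIRTUAL_HOST" "," }}
# one upstream per VIRTUAL_HOST, with a server entry per container
upstream {{ $host }} {
{{ range $container := $containers }}
  {{ range $address := $container.Addresses }}
  server {{ $address.IP }}:{{ $address.Port }};
  {{ end }}
{{ end }}
}

server {
  listen 80;
  server_name {{ $host }};

  location / {
    proxy_pass http://{{ $host }};
    proxy_set_header Host $host;
  }
}
{{ end }}
```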
This template will create a configuration file for an nginx reverse proxy. The nice thing about docker-gen and this template is:
- docker-gen “knows” your containers
- docker-gen will create an upstream / server for each container with a VIRTUAL_HOST environment variable
- docker-gen will re-create the config each time you stop / start a container
The only thing you need to do is to provide docker-gen with the template and the path for the rendered config. You can do that by specifying these command arguments:
```
-watch -notify-sighup=proxy /templates/proxy.tmpl /conf/proxy.conf
```
Run the container and docker-gen will create /conf/proxy.conf based on the /templates/proxy.tmpl template. Whenever the rendered configuration changes, docker-gen will also send a SIGHUP to the proxy container.
Please read the docs on Docker Hub for more information about docker-gen. There are other nginx configuration templates available as well. However, I needed to modify mine a bit because I use web sockets in one of the containers.
There's also an nginx proxy image available on Docker Hub, which combines docker-gen and nginx in one container. However, from a security point of view, I don't recommend mounting the critical Docker socket directly into a publicly available Docker container 😉
Running the reverse proxy
Here’s my docker-compose file:
```yaml
proxygen:
  image: 'jwilder/docker-gen'
  container_name: 'proxygen'
  volumes:
    - '/var/run/docker.sock:/tmp/docker.sock:ro'
    - '/var/lib/docker/data/proxygen:/templates'
    - '/var/lib/docker/data/proxy/conf.d:/conf'
  command: '-watch -notify-sighup=proxy /templates/proxy.tmpl /conf/proxy.conf'
  tty: true
  stdin_open: true
  restart: always

proxy:
  image: 'nginx'
  container_name: 'proxy'
  volumes:
    - '/var/lib/docker/data/proxy/conf.d:/etc/nginx/conf.d:ro'
    - '/var/lib/docker/data/proxy/certs:/etc/nginx/certs:ro'
  ports:
    - '80:80'
    - '443:443'
  tty: true
  stdin_open: true
  restart: always
```
Customise the paths of the volumes for your own needs, add the certs to the certs/ directory and make sure the proxy.tmpl exists in the templates/ directory. Then run the containers by executing:
```
docker-compose [-f COMPOSE-FILE.yml] up -d
```
Connect to your host via HTTP and HTTPS and check if you get a response. You should get an HTTP 503 response, which is fine, since no upstream servers have been configured yet!
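For example, with curl (the hostname is a placeholder for your own Docker host):

```shell
# -s silent, -I HEAD request only; prints just the status line
curl -sI http://docker.example.com/ | head -n 1
```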
Adding upstream servers
Now it gets magic 🙂
When you start a new container you can easily add the following environment variables:
- VIRTUAL_HOST sets the virtual hostname of your service
- VIRTUAL_PORT is optional and sets the HTTP(S) port of your service
- VIRTUAL_PROTO is optional and sets the protocol of your service (http or https)
Whenever you start a container with the VIRTUAL_HOST environment variable, the proxy container will forward all requests belonging to this hostname to your container. By default http and the exposed port of your container will be used. However, you can override that by setting the additional environment variables.
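For example, a container could be started like this (the image name and hostname are placeholders, not part of the original setup):

```shell
# the proxy will route all requests for app.testing.confirm.ch
# to port 8080 of this container
docker run -d \
  -e VIRTUAL_HOST=app.testing.confirm.ch \
  -e VIRTUAL_PORT=8080 \
  my-app-image
```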
A nice test environment
If you use the configuration above, you can easily set up a web test environment based on Docker for your own needs. You only have to make sure that you have a subdomain which points at your Docker host.
Let's say your Docker host is called docker.confirm.ch and you want all your containers in the testing.confirm.ch subdomain:
```
docker.confirm.ch.      IN A      220.127.116.11
*.testing.confirm.ch.   IN CNAME  docker.confirm.ch.
```
Now you can start multiple Docker containers, all with a VIRTUAL_HOST in the *.testing.confirm.ch subdomain. The DNS records make sure that all requests land on docker.confirm.ch, and nginx forwards the requests to your containers.
To make everything more secure you can completely disable HTTP and create a wildcard SSL certificate for your subdomain.