This tutorial was written in March 2016 for informational purposes. It will not be updated for future versions of the technologies or dependencies used in it.
In this tutorial, we will see how to deploy a full stack (a Django web app, with PostgreSQL and Redis) using Docker Compose.
First, make sure you have a Docker setup ready, or follow the installation documentation for your distribution. Here we will assume the host and development machine run Debian 8.
You will also need Docker Compose.
We will deploy a total of 5 Docker containers:
- 1 Front, which contains the application code. If you don’t know Django, this will give you a little introduction to this awesome web framework.
- 1 Reverse proxy, which will serve the static assets and forward dynamic requests to the Django WSGI process.
- 1 PostgreSQL database
- 1 Redis database
- 1 Data instance, which will hold the PostgreSQL data files, so we can freely rebuild and upgrade the PostgreSQL instance without affecting the data.
Here is a diagram of the final Docker platform:
We will also create an automatic nightly backup of the database.
To start, we create a new Git repository that will contain everything (just for convenience; this is not mandatory) and place the following directories inside it (the commands to do so follow the list):
- nginx: will contain the nginx Dockerfile and the vhost definition
- postgres: will contain the PostgreSQL initialization script
- web: will contain our application code and the Dockerfile for the front
- scripts: will contain all our scripts (the backup script, in this case)
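Assuming a project named myproject (the name is arbitrary):

```sh
# Create the repository and the directory layout
mkdir myproject && cd myproject
git init
mkdir nginx postgres web scripts
```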
First, we will create an env file at the root of the project that will contain environment variables shared by several containers and scripts:
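A minimal sketch of this env file; the variable names (reused in later examples) and values are placeholders:

```sh
# env: shared variables for the containers and scripts (placeholder values)
DB_NAME=mydjango
DB_USER=mydjango
DB_PASS=mysecretpassword
```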
Then, we create the docker-compose.yml file; let me explain each host definition below.
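A minimal sketch of what it could look like, assuming Compose v1 syntax (current as of this writing), gunicorn as the WSGI server, and the service names and paths used throughout this tutorial:

```yaml
web:
  restart: always
  build: ./web
  expose:
    - "8000"
  links:
    - postgres:postgres
    - redis:redis
  env_file: env
  volumes:
    - ./web:/usr/src/app
  command: gunicorn mydjango.wsgi:application -w 2 -b :8000

nginx:
  restart: always
  build: ./nginx
  ports:
    - "80:80"
  volumes_from:
    - web
  links:
    - web:web

postgres:
  restart: always
  image: postgres:latest
  env_file: env
  volumes_from:
    - data
  volumes:
    - ./postgres:/docker-entrypoint-initdb.d

redis:
  restart: always
  image: redis:latest
  expose:
    - "6379"

data:
  restart: always
  image: alpine:3.3
  volumes:
    - /var/lib/postgresql/data
  command: "true"
```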
- restart: this container should always be up, and it will be restarted if it crashes.
- build: we have to build this image from a Dockerfile before running it; this specifies the directory where the Dockerfile is located.
- expose: we expose port 8000 to the linked containers (it will be used by the nginx container).
- links: we need access to the PostgreSQL instance under the postgres name (this creates a postgres entry in the container’s /etc/hosts file that points to the postgres instance IP), and likewise for redis.
- env_file: this container will load all the environment variables from the env file.
- volumes: we specify the different mount points we want on this instance.
- command: the command to run when starting the container; here we start the WSGI process.
- Here, instead of the build option, we have the image option, which targets an existing image on the Docker registry.
- We also use volumes_from to load all the volumes of another container. We use it on the nginx container to load the static directory from the application and serve it, and on the PostgreSQL container to load the persistent tablespace that lives in the data container.
- We create a standard Redis container, using the latest version of the official image and exposing the Redis port.
- On the data container, we use true as the command (it does nothing and exits immediately), as we just want this container to hold the PostgreSQL tablespace.
- Using a data container is the recommended way to manage data persistence: with it, we don’t risk any accidental deletion during, for example, an upgrade of the PostgreSQL container (which would delete all the data stored inside that container).
The PostgreSQL container
First, we will configure the PostgreSQL container to initialize itself. By default, the postgres image runs all the scripts found in the /docker-entrypoint-initdb.d directory at first start, so let’s create a simple script that creates a user and a database using the information from the env file:
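A sketch of such a script (for example postgres/init.sh), assuming the variable names from the env file above:

```sh
#!/bin/bash
# postgres/init.sh: run automatically at first start by the postgres image
# (any *.sh or *.sql file in /docker-entrypoint-initdb.d is executed).
# DB_NAME/DB_USER/DB_PASS come from our env file and are assumptions.
set -e
psql -U postgres <<EOSQL
CREATE USER ${DB_USER} WITH PASSWORD '${DB_PASS}';
CREATE DATABASE ${DB_NAME} OWNER ${DB_USER};
EOSQL
```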
The reverse proxy container
Next, we will configure the nginx container. The recommended way when using the tutum/nginx image is to use a Dockerfile, so let’s create a simple one:
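A minimal sketch, assuming tutum/nginx uses Debian-style sites-enabled directories:

```dockerfile
# nginx/Dockerfile: based on the tutum/nginx image
FROM tutum/nginx

# Drop the default vhost and install our own
RUN rm /etc/nginx/sites-enabled/default
ADD sites-enabled/ /etc/nginx/sites-enabled
```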
We then just have to create a file in sites-enabled/ that serves the static files and forwards everything else to the application:
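A sketch of that vhost (for example nginx/sites-enabled/mydjango), assuming the application and its collected static files live in /usr/src/app (mounted via volumes_from):

```nginx
server {
    listen 80;
    server_name _;

    # Serve the collected static assets directly
    location /static/ {
        alias /usr/src/app/static/;
    }

    # Forward everything else to the WSGI process in the web container
    location / {
        proxy_pass http://web:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```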
The web application container
The easy way would be to use the official Python image, which is based on Debian, but the resulting image would be quite big (mine was around 900MB).
We want a very light front image so we can rebuild and upgrade it quickly when the code changes, and we want to reduce the attack surface exposed to the outside as much as possible. For this, we will base our image on Alpine Linux, which specialises in exactly that and is very common in the Docker world.
Now, let’s create our custom Alpine image for the Django application. We will first run an interactive session to create the project, and then we will write the Dockerfile.
But first, fill web/requirements.txt with all the Python modules we need:
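A plausible minimal set for this stack (the exact list and versions depend on your project):

```
Django
gunicorn
psycopg2
django-redis
```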
Then, we can start. We’ll spin up a temporary container in which to create our Django application; note that we do this so we don’t have to install Python and its dependencies on the host system. Let’s start an Alpine instance interactively:
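A sketch of that session; the Alpine version, the package names, and the project name mydjango are assumptions:

```sh
# Start a throwaway Alpine container with the web/ directory mounted
docker run -it --rm -v "$PWD/web:/usr/src/app" -w /usr/src/app alpine:3.3 /bin/sh

# Then, inside the container:
apk add --update python py-pip postgresql-dev gcc musl-dev python-dev
pip install -r requirements.txt
django-admin startproject mydjango .
```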
Let’s populate the configuration file (mydjango/settings.py) with parameters that use the information Docker provides inside the containers: remove everything between the “DATABASE” part and the “INTERNATIONALIZATION” part and replace it with this:
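A sketch of that replacement block, assuming the env variable names from earlier and the django-redis package for the cache backend:

```python
# mydjango/settings.py (excerpt): variable and service names are assumptions
import os

if os.environ.get('DB_NAME'):
    # Inside Docker: connect to the linked postgres container
    DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.postgresql_psycopg2',
            'NAME': os.environ['DB_NAME'],
            'USER': os.environ['DB_USER'],
            'PASSWORD': os.environ['DB_PASS'],
            'HOST': 'postgres',  # resolves via the link created by docker-compose
            'PORT': 5432,
        }
    }
else:
    # At image build time the env file is not loaded; fall back to SQLite
    # so management commands (compilemessages, collectstatic, ...) still run
    DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.sqlite3',
            'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
        }
    }

# Cache backed by the linked redis container
CACHES = {
    'default': {
        'BACKEND': 'django_redis.cache.RedisCache',
        'LOCATION': 'redis://redis:6379/0',
    }
}
```

If nginx is to serve the static files as in the vhost above, you will also want STATIC_ROOT = os.path.join(BASE_DIR, 'static') next to the STATIC_URL setting at the end of the file (again, an assumption about the layout).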
Once done, just leave the container with exit (the --rm flag used when creating the instance destroys it on exit; the application is saved in the mounted volume).
In these settings, we use the Dockerized database when we find the environment variables provided by the env file, and use that information to connect. If they are absent, it is because we are building the image, and we don’t want the missing database to block commands (like compiling the gettext translations of your website).
For the Redis container, we simply point at the redis hostname, which will resolve once we have deployed everything using docker-compose.
Now that we have our little Django application, let’s put everything we need in the Dockerfile of the web container:
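A sketch of that web/Dockerfile, under the same assumptions as the interactive session above:

```dockerfile
FROM alpine:3.3

# Python plus the toolchain needed to compile the psycopg2 module
RUN apk add --update python py-pip postgresql-dev gcc musl-dev python-dev

COPY requirements.txt /usr/src/app/requirements.txt

# Install the modules, then remove the build-only packages to keep the image small
RUN pip install -r /usr/src/app/requirements.txt \
    && apk del gcc musl-dev python-dev \
    && rm -rf /var/cache/apk/*

COPY . /usr/src/app
WORKDIR /usr/src/app

EXPOSE 8000
CMD ["gunicorn", "mydjango.wsgi:application", "-w", "2", "-b", ":8000"]
```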
As you can see, we also remove the packages we no longer need once the Python modules are installed (those used to compile the PostgreSQL module, for example).
Grab a coffee (depending on your connection speed) and build everything:
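```sh
# Build all the images, then start the whole stack in the background
docker-compose build
docker-compose up -d
```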
Then you can check your containers with this:
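```sh
# Shows each service and its current state
docker-compose ps
```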
The “restarting” state on the data container is normal, as it does not run a persistent process.
If everything is fine, browse to your public IP and you should see the default Django page (it will only render if the app can successfully connect to the database and load the cache engine).
Then, if everything went fine, we can set up backups for the PostgreSQL database. Here, we just create a small script that dumps the database every night and uploads it to S3 (to a bucket with versioning enabled and a lifecycle policy):
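A sketch of such a script (scripts/backup.sh); the compose-generated container name and the bucket name are placeholders:

```sh
#!/bin/sh
# scripts/backup.sh: dump the database and push it to S3
set -e

# Load the shared variables (DB_NAME, ...) from the env file
. "$(dirname "$0")/../env"

FILENAME="backup-$(date +%Y%m%d).sql.gz"

# Run pg_dump inside the running PostgreSQL container and compress the output
docker exec myproject_postgres_1 pg_dump -U postgres "$DB_NAME" | gzip > "/tmp/$FILENAME"

# Upload to S3; bucket versioning and the lifecycle policy handle retention
aws s3 cp "/tmp/$FILENAME" "s3://my-backup-bucket/$FILENAME"
rm -f "/tmp/$FILENAME"
```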
Don’t forget to install the boto3 pip module and the AWS CLI, and to configure your credentials with aws configure.
Then, just configure a cron job to schedule it whenever you want:
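For example, to run it every night at 3 AM (the paths are placeholders):

```
0 3 * * * /path/to/myproject/scripts/backup.sh >> /var/log/pg-backup.log 2>&1
```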
You now have a fully working Docker Compose environment 🙂 When you make a change, you just need to run docker-compose build; docker-compose up -d again, and everything will be updated automatically.