In this tutorial, we will see how we can deploy a full stack (Django web app, with PostgreSQL and Redis) using Docker Compose.

First, make sure you have a working Docker setup, or follow the official installation documentation for your distribution. Here we will assume the host and development machine is a Debian 8.

You will also need Docker Compose.

We will deploy a total of 5 Docker containers:

• web: the Django application, served by gunicorn
• nginx: the reverse proxy in front of it
• postgres: the PostgreSQL database
• redis: the Redis cache
• data: a data-only container holding the PostgreSQL tablespace

Here is a schema of the final Docker platform:

[Image: docker_compose — schema of the final Docker platform]

We will also create an automatic nightly backup of the database.

To start, we create a new git repository that will contain everything (just for convenience; this is not mandatory) and lay out the directories we will use in the rest of this tutorial:
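
For example (a minimal sketch; the layout simply mirrors the paths used later in this tutorial):

> git init myproject && cd myproject
> mkdir -p web nginx/sites-enabled postgres/docker-entrypoint-initdb.d backups/postgresql scripts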

The basics

First, we will create an env file at the root of the project that will contain environment variables shared by several containers and scripts:

DB_NAME=myproject_web
DB_USER=myproject_web
DB_PASS=shoov3Phezaimahsh7eb2Tii4ohkah8k
DB_SERVICE=postgres
DB_PORT=5432
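
Once the stack is up (we will bring it up at the end of this tutorial), you can verify that these variables actually reach a container through the env_file directive; a sketch, assuming the container name docker-compose will generate:

> docker exec myproject_web_1 env | grep DB_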

Then, we create the docker-compose.yml file with all of the following content; each host definition is explained below:

web:
  restart: always
  build: ./web/
  expose:
    - "8000"
  links:
    - postgres:postgres
    - redis:redis
  env_file: env
  volumes:
    - ./web:/data/web
  command: /usr/bin/gunicorn mydjango.wsgi:application -w 2 -b :8000
nginx:
  restart: always
  build: ./nginx/
  ports:
    - "80:80"
  volumes_from:
    - web
  links:
    - web:web

postgres:
  restart: always
  image: postgres:latest
  volumes_from:
    - data
  volumes:
    - ./postgres/docker-entrypoint-initdb.d:/docker-entrypoint-initdb.d
    - ./backups/postgresql:/backup
  env_file:
    - env
  expose:
    - "5432"
redis:
  restart: always
  image: redis:latest
  expose:
    - "6379"

data:
  restart: always
  image: alpine
  volumes:
    - /var/lib/postgresql
  command: "true"

• On the web container, we build the image from the web/ directory, expose port 8000 to the linked containers only, link to the postgres and redis containers, load the shared env file and run the application with gunicorn.
• On the nginx container, we build the image from the nginx/ directory, publish port 80 on the host, and mount the volumes of the web container so nginx can serve the static files directly.
• On the postgres container, we use the latest version of the official image, mount the tablespace from the data container, and mount our initialization scripts and the backup directory.
• We create a standard redis container, using the latest version of the official image and exposing the redis port.
• On the data container, we use true as the command: it does nothing and exits immediately, since we only need this container to exist to hold the PostgreSQL tablespace.
• Using a data container is the recommended way to manage data persistence: this way, we don't risk any accidental deletion when, for example, we upgrade the PostgreSQL container (which would delete all the data stored inside that container); see the sketch just after this list.
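
To illustrate the pattern: since the tablespace lives in a volume owned by the data container, any throwaway container can mount it with --volumes-from; a sketch, assuming the container name docker-compose will generate later:

> docker run --rm --volumes-from myproject_data_1 alpine ls /var/lib/postgresql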

The PostgreSQL container

First, we will configure the PostgreSQL container to initialize itself. By default, the postgres image runs all the scripts it finds in the /docker-entrypoint-initdb.d directory on first startup. Let's create a simple script that creates a user and a database using the information from the env file:

> cat postgres/docker-entrypoint-initdb.d/myproject_web.sh
#!/usr/bin/env bash
psql -U postgres -c "CREATE USER $DB_USER PASSWORD '$DB_PASS'"
psql -U postgres -c "CREATE DATABASE $DB_NAME OWNER $DB_USER"
> chmod a+rx postgres/docker-entrypoint-initdb.d/myproject_web.sh
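
Once the postgres container has started for the first time (later in this tutorial), you can check that the script did its job; a sketch, assuming the generated container name:

> docker exec -u postgres myproject_postgres_1 psql -c "\du"
> docker exec -u postgres myproject_postgres_1 psql -c "\l"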

The reverse proxy container

We will now configure the nginx container. The recommended way when using the tutum/nginx image is to use a Dockerfile, so let's create a simple one:

> cat nginx/Dockerfile
FROM tutum/nginx

RUN rm /etc/nginx/sites-enabled/default
ADD sites-enabled/ /etc/nginx/sites-enabled

And we just have to create a file in sites-enabled/ to serve the static files and forward the rest to the application:

> cat nginx/sites-enabled/django
server {

    listen 80;
    server_name not.configured.example.com;
    charset utf-8;

    location /static {
        alias /data/web/mydjango/static;
    }

    location / {
        proxy_pass http://web:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

}
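
Once the nginx container is built and running, a quick sanity check of this configuration (a sketch, using the container name docker-compose will generate):

> docker exec myproject_nginx_1 /usr/sbin/nginx -t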

The web application container

The easy way would be to just use an official Python image, which is based on Debian, but the resulting image would be quite big (mine was around 900MB).

We want a very light image, so we can rebuild and upgrade it quickly when the code changes, and so we reduce the attack surface exposed to the outside as much as possible. For this, we will base our image on Alpine Linux, a minimal distribution that is very common in the Docker world.

So let's create our custom Alpine image for the Django application: we will first run an interactive session to create the project, and then we will write the Dockerfile.

But first, fill the web/requirements.txt with all the Python modules we need:

> cat web/requirements.txt
Django==1.8.5
redis==2.10.3
django-redis==4.3.0
gunicorn==19.3.0
psycopg2==2.6

Then, we can start. We run a throwaway Alpine container interactively, mounting the web/ directory as a volume, so we don't have to install Python and its dependencies on the host system:

> docker run -ti --rm -v /root/myproject/web/:/data/web alpine:latest sh
/ # cd data/web/
/data/web # apk add --update python3 python3-dev postgresql-client postgresql-dev build-base gettext vim
/data/web # pip3 install --upgrade pip
/data/web # pip3 install -r requirements.txt
/data/web # django-admin startproject mydjango
/data/web # mkdir mydjango/static

Let's populate the configuration file (mydjango/settings.py) with parameters that use the information Docker provides to the containers: remove all the content between the "DATABASE" part and the "INTERNATIONALIZATION" part and replace it with this:

if 'DB_NAME' in os.environ:
    # Running the Docker image
    DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.postgresql_psycopg2',
            'NAME': os.environ['DB_NAME'],
            'USER': os.environ['DB_USER'],
            'PASSWORD': os.environ['DB_PASS'],
            'HOST': os.environ['DB_SERVICE'],
            'PORT': os.environ['DB_PORT']
        }
    }
else:
    # Building the Docker image
    DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.sqlite3',
            'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
        }
    }

CACHES = {
    "default": {
        "BACKEND": "django_redis.cache.RedisCache",
        "LOCATION": "redis://redis:6379/0",
        "OPTIONS": {
            "CLIENT_CLASS": "django_redis.client.DefaultClient",
        }
    }
}

SESSION_ENGINE = "django.contrib.sessions.backends.cache"
SESSION_CACHE_ALIAS = "default"

Once done, just leave the container with exit (the --rm flag we used when creating the container destroys it on exit; the application is saved in the mounted volume).

Here, if we find the DB_NAME environment variable provided by the env file, we are running inside Docker, and we use all that information to connect to PostgreSQL. If not, we are building the image, and we fall back to SQLite so that the build is not blocked by commands that need to load the settings (like compiling the gettext translations of your website).
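
For instance, a build-time management command like the following (a hypothetical extra step for the web Dockerfile shown below, not part of this tutorial's image) loads settings.py while the image is being built, which is exactly when the SQLite fallback kicks in:

# Hypothetical extra Dockerfile step: manage.py loads settings.py here,
# and DB_NAME is not set at build time, so the SQLite fallback is what
# lets this command run without a database around.
RUN python3 mydjango/manage.py check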

For the redis container, we simply point at the redis hostname, which will resolve once everything is deployed with docker-compose.
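
Once the stack is up, a quick way to check the Redis cache wiring is to run a snippet through the Django shell inside the web container; a sketch, assuming the container name docker-compose generates later:

> docker exec -ti myproject_web_1 sh
/data/web # echo "from django.core.cache import cache; cache.set('hello', 'world'); print(cache.get('hello'))" | python3 mydjango/manage.py shell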

Now that we have our little Django application, let's put everything we need in the Dockerfile of the web container:

FROM alpine

# Initialize
RUN mkdir -p /data/web
WORKDIR /data/web
COPY requirements.txt /data/web/

# Setup
RUN apk update
RUN apk upgrade
RUN apk add --update python3 python3-dev postgresql-client postgresql-dev build-base gettext
RUN pip3 install --upgrade pip
RUN pip3 install -r requirements.txt

# Clean
RUN apk del -r python3-dev postgresql

# Prepare
COPY . /data/web/
RUN mkdir -p mydjango/static/admin

As you can see, we also remove the packages we no longer need once the Python modules are installed (they were only needed to compile the PostgreSQL module, for example).
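
Once you have built everything (next section), you can check the size of the resulting image and compare it with the ~900MB Debian-based attempt mentioned earlier; a sketch, assuming docker-compose names the image myproject_web:

> docker images myproject_web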

Let the magic happen

Grab a coffee (depending on your connection speed) and build everything:

> docker-compose build
> docker-compose up -d

Then you can check your containers with this:

> docker-compose ps
        Name                      Command               State         Ports
----------------------------------------------------------------------------------
myproject_data_1       true                             Up
myproject_nginx_1      /usr/sbin/nginx                  Up      0.0.0.0:80->80/tcp
myproject_postgres_1   /docker-entrypoint.sh postgres   Up      5432/tcp
myproject_redis_1      /entrypoint.sh redis-server      Up      6379/tcp
myproject_web_1        /usr/bin/gunicorn mydjango ...   Up      8000/tcp

> docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS                          PORTS                NAMES
d7162329302d        myproject_nginx     "/usr/sbin/nginx"        2 minutes ago       Up 2 minutes                    0.0.0.0:80->80/tcp   myproject_nginx_1
402c2ca47789        myproject_web       "/usr/bin/gunicorn my"   2 minutes ago       Up 2 minutes                    8000/tcp             myproject_web_1
2c92e1fa0021        postgres:latest     "/docker-entrypoint.s"   8 minutes ago       Up 2 minutes                    5432/tcp             myproject_postgres_1
ad58f3138339        alpine              "true"                   9 minutes ago       Restarting (0) 58 seconds ago                        myproject_data_1
29ece917fcbb        redis:latest        "/entrypoint.sh redis"   9 minutes ago       Up 2 minutes                    6379/tcp             myproject_redis_1

The "Restarting" status on the data container is normal, as it does not run a persistent process: its command, true, exits immediately, and restart: always keeps restarting it.

If everything is fine, browse to your public IP and you should see the default Django page (which will only load if the application can successfully connect to the database and the cache engine):

[Image: the default Django welcome page]
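
At this point you will probably also want to run the initial migrations and create an admin user. This is not handled by the compose file itself; a sketch, using docker exec against the running web container:

> docker exec -ti myproject_web_1 python3 mydjango/manage.py migrate
> docker exec -ti myproject_web_1 python3 mydjango/manage.py createsuperuser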

The backups

Then, if everything went fine, we can set up backups for the PostgreSQL database. Here we just created a small script that dumps the database every night and uploads it to S3 (to a bucket with versioning enabled and a lifecycle policy):

/scripts/do_backup.py

import os
import socket
import boto3

BUCKET = "myproject-backups"
S3_DIRECTORY = socket.gethostname()
DB_BACKUP_FILE = "myproject_postgres_1.sql"
DB_BACKUP_PATH = "/tmp/{filename}".format(filename=DB_BACKUP_FILE)
DB_S3_DIRECTORY = "{directory}/postgresql".format(directory=S3_DIRECTORY)

s3 = boto3.resource('s3')

# Dump all databases from inside the postgres container to a local file
os.system("docker exec -u postgres myproject_postgres_1 pg_dumpall > {path}".format(path=DB_BACKUP_PATH))

# Upload the dump to S3; the file handle is closed automatically
with open(DB_BACKUP_PATH, "rb") as backup:
    s3.Object(BUCKET, "{directory}/{filename}".format(directory=DB_S3_DIRECTORY, filename=DB_BACKUP_FILE)).put(Body=backup)

Don't forget to install the boto3 pip module and the AWS CLI, and to configure your credentials with aws configure.

Then, just configure a cron job to run it on whatever schedule you want.
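
For example, a crontab entry like this (a sketch: adjust the paths, and make sure the job runs as a user that can reach both Docker and your AWS credentials) runs the backup every night at 3am:

0 3 * * * /usr/bin/python3 /scripts/do_backup.py >> /var/log/myproject_backup.log 2>&1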

The end

You now have a fully working Docker Compose environment 🙂 When you make a change, you just need to run docker-compose build; docker-compose up -d again and everything will be automatically updated.


TAGS: django, docker

Comments
Bryan | May 5, 2017 5:38 am

Why don't you also use the Alpine Linux versions for postgres and nginx?

Thanks for the tutorial!

L. Alberto Giménez | May 5, 2017 9:41 am

Hi Bryan, thanks for your comment!

Of course, we could use an Alpine-based image if image size were a concern, but this is just a PoC tutorial; of course you can use any base images you like :)

Best regards,
L. Alberto

Uriel | April 25, 2017 4:55 pm

This is an awesome tutorial, thank you!

I just have one issue: when running docker-compose ps, the web container is always restarting; I can't seem to make it work.

Uriel | April 25, 2017 5:43 pm

Managed to fix it, Anders' comment below solved my problem!

Emma Briones | April 26, 2017 7:02 am

Great Uriel! Thanks for commenting!

Ihor | March 26, 2017 9:38 am

if 'DB_NAME' in os.environ:
    # Running the Docker image
    DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.postgresql_psycopg2',
            'NAME': os.environ['DB_NAME'],
            'USER': os.environ['DB_USER'],
            'PASSWORD': os.environ['DB_PASS'],
            'HOST': os.environ['DB_SERVICE'],
            'PORT': os.environ['DB_PORT']
        }
    }
else:
    # Building the Docker image
    DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.sqlite3',
            'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
        }
    }

What is this comment about?

L. Alberto Giménez | March 27, 2017 10:36 am

Hi Ihor, thanks for commenting!

The meaning of the comment is explained a little bit later. In this particular case, we want to make a distinction in the settings between Docker image build time and container run time.

When running the container, Docker provides the environment with a bunch of variables, and one of them (DB_NAME) is the one we use to distinguish between image build/run time.

Hope we could make it clear for you.

Best regards,
L. Alberto

Vetos | March 9, 2017 7:17 am

Hi.
How can we run the migrate & createsuperuser commands?

L. Alberto Giménez | March 9, 2017 9:55 am

Hi Vetos, thanks for your comment!

Depending on your exact needs, you have several options:

You can execute arbitrary commands in a running container using "docker attach", executing the commands and then detaching with Ctrl+p Ctrl+q. Be sure not to use Ctrl+d to leave, as that would stop the container.

If you need to do it as part of the image build process, you can do that in the Dockerfile using RUN commands.

The third option would be to add those commands to your entrypoint script and in the Dockerfile COPY that script to the image and then set it as ENTRYPOINT (or CMD).

Hope that helped!

L. Alberto

Diogo | April 8, 2017 8:33 pm

docker attach {{id}} doesn't work.
Can someone help?
Thanks

Emma Briones | April 13, 2017 9:51 am

Hi Diogo,
thanks for your comment!
As we don’t know your particular case (what you’ve tried, the errors that it is showing…) we’re not able to give you a proper answer.
Maybe you could try with Docker’s support channels. 🙂

Francis | February 26, 2017 11:47 pm

Hello, I'm doing this tutorial but I got an error, can you help?
This is the error:
postgres_1 | LOG: database system is ready to accept connections
postgres_1 | LOG: autovacuum launcher started
myproject_web_1 exited with code 3
myproject_data_1 exited with code 0
web_1 | [2017-02-26 23:43:19 +0000] [1] [INFO] Starting gunicorn 19.6.0
web_1 | [2017-02-26 23:43:19 +0000] [1] [INFO] Listening at: http://0.0.0.0:8000 (1)
web_1 | [2017-02-26 23:43:19 +0000] [1] [INFO] Using worker: sync
web_1 | [2017-02-26 23:43:19 +0000] [9] [INFO] Booting worker with pid: 9
web_1 | [2017-02-26 23:43:19 +0000] [9] [ERROR] Exception in worker process
web_1 | Traceback (most recent call last):
web_1 | File "/usr/lib/python3.5/site-packages/gunicorn/arbiter.py", line 557, in spawn_worker
web_1 | worker.init_process()
web_1 | File "/usr/lib/python3.5/site-packages/gunicorn/workers/base.py", line 126, in init_process
web_1 | self.load_wsgi()
web_1 | File "/usr/lib/python3.5/site-packages/gunicorn/workers/base.py", line 136, in load_wsgi
web_1 | self.wsgi = self.app.wsgi()
web_1 | File "/usr/lib/python3.5/site-packages/gunicorn/app/base.py", line 67, in wsgi
web_1 | self.callable = self.load()
web_1 | File "/usr/lib/python3.5/site-packages/gunicorn/app/wsgiapp.py", line 65, in load
web_1 | return self.load_wsgiapp()
web_1 | File "/usr/lib/python3.5/site-packages/gunicorn/app/wsgiapp.py", line 52, in load_wsgiapp
web_1 | return util.import_app(self.app_uri)
web_1 | File "/usr/lib/python3.5/site-packages/gunicorn/util.py", line 357, in import_app
web_1 | __import__(module)
web_1 | ImportError: No module named 'mydjango.wsgi'
web_1 | [2017-02-26 23:43:19 +0000] [9] [INFO] Worker exiting (pid: 9)
web_1 | [2017-02-26 23:43:19 +0000] [1] [INFO] Shutting down: Master
web_1 | [2017-02-26 23:43:19 +0000] [1] [INFO] Reason: Worker failed to boot.
web_1 | [2017-02-26 23:43:21 +0000] [1] [INFO] Starting gunicorn 19.6.0
web_1 | [2017-02-26 23:43:21 +0000] [1] [INFO] Listening at: http://0.0.0.0:8000 (1)
web_1 | [2017-02-26 23:43:21 +0000] [1] [INFO] Using worker: sync
web_1 | [2017-02-26 23:43:21 +0000] [9] [INFO] Booting worker with pid: 9
web_1 | [2017-02-26 23:43:21 +0000] [9] [ERROR] Exception in worker process
web_1 | Traceback (most recent call last):
web_1 | File "/usr/lib/python3.5/site-packages/gunicorn/arbiter.py", line 557, in spawn_worker
web_1 | worker.init_process()
web_1 | File "/usr/lib/python3.5/site-packages/gunicorn/workers/base.py", line 126, in init_process
web_1 | self.load_wsgi()
web_1 | File "/usr/lib/python3.5/site-packages/gunicorn/workers/base.py", line 136, in load_wsgi
web_1 | self.wsgi = self.app.wsgi()
web_1 | File "/usr/lib/python3.5/site-packages/gunicorn/app/base.py", line 67, in wsgi
web_1 | self.callable = self.load()
web_1 | File "/usr/lib/python3.5/site-packages/gunicorn/app/wsgiapp.py", line 65, in load
web_1 | return self.load_wsgiapp()

L. Alberto Giménez | February 28, 2017 8:12 am

Hi Francis,

Thanks for commenting! It is quite hard to troubleshoot your particular problem since we don't have the overall view of your project layout, but taking into account the error you get ("web_1 | ImportError: No module named 'mydjango.wsgi'"), the problem might be related to some wrong path/filename/module name in your scripts or code.

Can you please double check that you followed all the steps?

Anders | March 2, 2017 9:56 pm

I had the same problem, but added a line in the docker-compose.yml under the web entry to make sure the working_dir was where my manage.py file was sitting. Maybe it has something to do with the newer version of Django. So, now my docker-compose.yml file has the following line:

working_dir: /data/web/mydjango

The alternative is to pass a --pythonpath string to gunicorn.

Anders

eduDorus | February 12, 2017 8:30 pm

Thank you for this awesome tutorial. I'm looking forward to using this in production.

Lospejos | October 9, 2016 11:21 am

Well, great article. My suggestions to improve:
1) add a virtualenv to the Django application
2) create a GitHub repository as an example and put a link to it here

Vladimir Vovk | December 20, 2016 6:15 am

We don't need a virtual environment here because we have only one application in our container.

Emma Briones | October 10, 2016 8:21 am

Hi Lospejos,

thank you very much for your suggestions!
We’ll keep it in mind for future occasions! 🙂

