This is a tutorial developed in March 2016 for informative purposes. It will not be updated for future versions of the technology or the dependencies used in it.

In this tutorial, we will see how we can deploy a full stack (Django web app, with PostgreSQL and Redis) using Docker Compose.

First, make sure you have a Docker setup ready, or follow the documentation for your distribution. Here we will assume the host and development machine is a Debian 8.

You will also need Docker Compose.

We will deploy a total of 5 Docker containers:

• web: the Django application, served by gunicorn
• nginx: the reverse proxy, serving static files and forwarding everything else to the application
• postgres: the PostgreSQL database
• redis: the Redis cache
• data: a data-only container holding the PostgreSQL tablespace

Here is a schema of the final Docker platform:


We will also create an automatic nightly backup of the database.

To start, we create a new git repository that will contain everything (just for convenience, this is not mandatory), and create the following directories inside it:

• web/
• nginx/sites-enabled/
• postgres/docker-entrypoint-initdb.d/
• backups/postgresql/

The basics

First, we will create an env file at the root directory of the project that will contain environment variables shared by many systems and scripts:
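The original contents of this file are not shown here; based on the variable names referenced later in the settings and init script, a plausible env file (the values are placeholders, pick your own) would be:

```shell
DB_NAME=mydjango
DB_USER=mydjango
DB_PASS=mysecretpassword
DB_SERVICE=postgres
DB_PORT=5432
```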


Then, we create the docker-compose.yml file with all of the following content; each service definition is explained below:

web:
  restart: always
  build: ./web/
  expose:
    - "8000"
  links:
    - postgres:postgres
    - redis:redis
  env_file: env
  volumes:
    - ./web:/data/web
  command: /usr/bin/gunicorn mydjango.wsgi:application -w 2 -b :8000

nginx:
  restart: always
  build: ./nginx/
  ports:
    - "80:80"
  volumes_from:
    - web
  links:
    - web:web

postgres:
  restart: always
  image: postgres:latest
  volumes_from:
    - data
  volumes:
    - ./postgres/docker-entrypoint-initdb.d:/docker-entrypoint-initdb.d
    - ./backups/postgresql:/backup
  env_file:
    - env
  expose:
    - "5432"

redis:
  restart: always
  image: redis:latest
  expose:
    - "6379"

data:
  restart: always
  image: alpine
  volumes:
    - /var/lib/postgresql
  command: "true"

• We create a standard redis container, using the latest version of the official image and exposing the redis port.
• On the data container, we use true (which does nothing and just keeps the container defined) as the command, as we only want this container to hold the PostgreSQL tablespace.
• Using a data container is the recommended way to manage data persistence: with it, we don't risk any accidental deletion during, for example, an upgrade of the PostgreSQL container (which would delete all the data stored inside that container).

The PostgreSQL container

First, we will configure the PostgreSQL container to initialize itself. The Postgres image loads by default all the scripts found in the /docker-entrypoint-initdb.d directory. Let's create a simple script that will create a user and a database using the information from the env file:

> cat postgres/docker-entrypoint-initdb.d/init.sh
#!/usr/bin/env bash
psql -U postgres -c "CREATE USER $DB_USER PASSWORD '$DB_PASS'"
psql -U postgres -c "CREATE DATABASE $DB_NAME OWNER $DB_USER"
> chmod a+rx postgres/docker-entrypoint-initdb.d/init.sh

The reverse proxy container

We will now configure the nginx container. The recommended way when using the tutum/nginx image is to use a Dockerfile; let's create a simple one:

> cat nginx/Dockerfile
FROM tutum/nginx

RUN rm /etc/nginx/sites-enabled/default
ADD sites-enabled/ /etc/nginx/sites-enabled

And we just have to create a file in sites-enabled/ to serve the static files and forward the rest to the application:

> cat nginx/sites-enabled/django
server {

    listen 80;
    charset utf-8;

    location /static {
        alias /data/web/mydjango/static;
    }

    location / {
        proxy_pass http://web:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

The web application container

The easy way is to just use an official Python image, which is based on Debian, but the resulting image will be quite big (mine was around 900MB).

We want a very light frontend image so we can upgrade it easily and rapidly when the code changes, and we want to reduce the attack surface exposed to the outside as much as possible. For this, we will base our image on Alpine Linux, which specialises in small images and is very common in the Docker world.

Let's create our custom Alpine image for the Django application. We will first run an interactive session to create the project, and then we will write the Dockerfile.

But first, fill web/requirements.txt with all the Python modules we need:

> cat web/requirements.txt
Django
gunicorn
psycopg2
django-redis

Then, we can start. Note that we do all of this inside a throwaway Alpine container so we don't have to install Python and its dependencies on our host system:

> docker run -ti --rm -v /root/myproject/web/:/data/web alpine:latest sh
/ # cd data/web/
/data/web # apk add --update python3 python3-dev postgresql-client postgresql-dev build-base gettext vim
/data/web # pip3 install --upgrade pip
/data/web # pip3 install -r requirements.txt
/data/web # django-admin startproject mydjango
/data/web # mkdir mydjango/static

Let's populate the configuration file (mydjango/mydjango/settings.py) with parameters usable with the information Docker provides in the containers. Remove all the content between the "DATABASE" part and the "INTERNATIONALIZATION" part and replace it with this:

if 'DB_NAME' in os.environ:
    # Running the Docker image
    DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.postgresql_psycopg2',
            'NAME': os.environ['DB_NAME'],
            'USER': os.environ['DB_USER'],
            'PASSWORD': os.environ['DB_PASS'],
            'HOST': os.environ['DB_SERVICE'],
            'PORT': os.environ['DB_PORT']
        }
    }
else:
    # Building the Docker image
    DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.sqlite3',
            'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
        }
    }

CACHES = {
    "default": {
        "BACKEND": "django_redis.cache.RedisCache",
        "LOCATION": "redis://redis:6379/0",
        "OPTIONS": {
            "CLIENT_CLASS": "django_redis.client.DefaultClient",
        }
    }
}

SESSION_ENGINE = "django.contrib.sessions.backends.cache"
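To make the branching explicit, the same database logic can be sketched as a standalone function (a hypothetical helper for illustration only, not one of the tutorial's files):

```python
def database_settings(environ):
    """Build a Django-style DATABASES dict from an environment mapping.

    When the container runs under docker-compose, the env file provides
    DB_NAME and friends; at image build time they are absent, so we fall
    back to a local SQLite database so management commands still work.
    """
    if 'DB_NAME' in environ:
        # Running the Docker image: connect to the linked postgres service
        return {
            'default': {
                'ENGINE': 'django.db.backends.postgresql_psycopg2',
                'NAME': environ['DB_NAME'],
                'USER': environ['DB_USER'],
                'PASSWORD': environ['DB_PASS'],
                'HOST': environ['DB_SERVICE'],
                'PORT': environ['DB_PORT'],
            }
        }
    # Building the Docker image: throwaway SQLite file
    return {
        'default': {
            'ENGINE': 'django.db.backends.sqlite3',
            'NAME': 'db.sqlite3',
        }
    }
```

In settings.py the dict is assigned directly rather than returned, but the decision point is the same: the presence of DB_NAME in os.environ.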

Once done, just leave the container with exit (the --rm flag used when creating the instance destroys it when we leave; the application is saved in the mounted volume).

Here we load the Docker database settings if we find the environment variables provided by the env file, and use that information to connect. If they are absent, it is because we are building the image, and we don't want the missing database to block commands (such as compiling the gettext translations of your website).

For the redis container, we simply point to the redis hostname, which will resolve once the stack has been deployed with docker-compose.

Now we have our little django application, let’s put all that we need in the Dockerfile of the web container:

FROM alpine

# Initialize
RUN mkdir -p /data/web
WORKDIR /data/web
COPY requirements.txt /data/web/

# Setup
RUN apk update
RUN apk upgrade
RUN apk add --update python3 python3-dev postgresql-client postgresql-dev build-base gettext
RUN pip3 install --upgrade pip
RUN pip3 install -r requirements.txt

# Clean
RUN apk del -r python3-dev postgresql

# Prepare
COPY . /data/web/
RUN mkdir -p mydjango/static/admin

As you can see, here we also remove the packages that are no longer needed once the Python modules have been installed (for compiling the PostgreSQL bindings, for example).

Let the magic happen

Grab a coffee (depending on your connection speed) and build everything:

> docker-compose build
> docker-compose up -d

Then you can check your containers with this:

> docker-compose ps
        Name                      Command               State         Ports
myproject_data_1       true                             Up
myproject_nginx_1      /usr/sbin/nginx                  Up      0.0.0.0:80->80/tcp
myproject_postgres_1   /docker-entrypoint.sh postgres   Up      5432/tcp
myproject_redis_1      /entrypoint.sh redis-server      Up      6379/tcp
myproject_web_1        /usr/bin/gunicorn mydjango ...   Up      8000/tcp

> docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS                          PORTS                NAMES
d7162329302d        myproject_nginx     "/usr/sbin/nginx"        2 minutes ago       Up 2 minutes                    0.0.0.0:80->80/tcp   myproject_nginx_1
402c2ca47789        myproject_web       "/usr/bin/gunicorn my"   2 minutes ago       Up 2 minutes                    8000/tcp             myproject_web_1
2c92e1fa0021        postgres:latest     "/docker-entrypoint.s"   8 minutes ago       Up 2 minutes                    5432/tcp             myproject_postgres_1
ad58f3138339        alpine              "true"                   9 minutes ago       Restarting (0) 58 seconds ago                        myproject_data_1
29ece917fcbb        redis:latest        "/entrypoint.sh redis"   9 minutes ago       Up 2 minutes                    6379/tcp             myproject_redis_1

The "Restarting" status on the data container is normal, as it does not have a persistent process.

If everything is fine, go to your public IP and you should see the default Django page (which will only render if the application can successfully connect to the database and load the cache engine):


The backups

If everything went fine, we can set up backups for the PostgreSQL database. Here we just create a small script that runs every night and uploads the dump to S3 (to a bucket with versioning enabled and a lifecycle policy):


import os
import socket
import boto3

BUCKET = "myproject-backups"
S3_DIRECTORY = socket.gethostname()
DB_BACKUP_FILE = "myproject_postgres_1.sql"
DB_BACKUP_PATH = "/tmp/{filename}".format(filename=DB_BACKUP_FILE)
DB_S3_DIRECTORY = "{directory}/postgresql".format(directory=S3_DIRECTORY)

s3 = boto3.resource('s3')

os.system("docker exec -u postgres myproject_postgres_1 pg_dumpall > {path}".format(path=DB_BACKUP_PATH))

backup = open(DB_BACKUP_PATH, "rb")
s3.Object(BUCKET, "{directory}/{filename}".format(directory=DB_S3_DIRECTORY, filename=DB_BACKUP_FILE)).put(Body=backup)

Don't forget to install the boto3 pip module and the AWS CLI, and to configure your credentials with aws configure.

Then, just configure a cron job to schedule it when you want.
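For example, assuming the script was saved as /root/myproject/backup.py (the path, schedule, and log file here are arbitrary choices, not from the original tutorial), a nightly crontab entry could look like:

```shell
0 3 * * * /usr/bin/python3 /root/myproject/backup.py >> /var/log/myproject-backup.log 2>&1
```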

The end

You now have a fully working Docker Compose environment 🙂 When you make a modification, you just need to run docker-compose build; docker-compose up -d and everything will be automatically updated.


TAGS: django, docker

Rizwan Ghzzaal | August 13, 2020 6:48 pm

Hi, I would run it with MySQL rather than Postgres. The required modifications would be really helpful, thanks

saeid | May 15, 2020 3:49 pm

great article, also can you tell us how to setup this to use hostname instead of ip?

Mikaeil | August 14, 2018 12:48 pm

i have a problem in dockerize my djanog project

related project run with bellow command:

docker-compose up -d

with this command my tools has been run completely. but after build my project i don’t have any container! please find bellow command to build and run my project:

docker build -t my-app .
docker run -e myapp
after execute Previous command and use “docker ps” my tools has not been run and my container list is empty!

my Dockerfile value is:

FROM python:3
RUN mkdir /code
ADD requirements.txt /code/
RUN pip install -r requirements.txt
ADD . /code/

and value of docker-compose.yml is:

version: '3'

services:
  db:
    image: postgres
  web:
    build: .
    command: bash -c "python manage.py makemigrations && python manage.py migrate && python manage.py runserver"
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db

would you please help me?

thank you

Saeed Farzian | June 6, 2018 10:44 am

Hi, thanks for the great article. It was quite useful for me even after two years of writing that.
I just wanted to say that you have not made it clear which version of the docker-compose this article has been written for, in the beginning of filling up the docker-compose.yml file. I tried to run it by v3 but it failed as the `volumes_from` has been removed from v3 and something else (top-level `volumes` key) is introduced which is not quite the same thing. So to run exactly this setup, we would need v2.
And also having an aggregated docker-compose.yml file including everything, at the end of the article would be much appreciated.

Keep up the good work 🙂

Emma Briones | June 6, 2018 3:10 pm

Hi Saeed,
Thank you for your comment, we will take into account your recommendations.
Glad it was useful!

Yuri | February 27, 2018 3:54 pm

The port is correct?

“Expose : We expose the port 8080 to linked machines (that will be used by the NGINX container)”

Emma Briones | March 1, 2018 11:44 am

Hi Yuri,

there was a typo. Thanks for the heads up! The port is 8000.

Soorena | January 15, 2018 8:01 pm

do we experience downtime whenever we build and run docker-compose up?

Marc Egea i Sala | January 18, 2018 8:08 am

Hi Soreena, thanks for participating 🙂

It’s a really good question. There will be, indeed, a downtime.

This downtime will not be between the ‘build’ and the ‘up’. You can run ‘build’ any number of times without affecting your already running service.

But if the image has changed, when you run ‘docker-compose up -d’ it will restart the service(s) affected. You will be without service while:
– The previous container stops. Some processes will take a while to stop.
– The process in the new container starts. Again, it can take a while depending in your setup.

I hope that clarifies the issue 🙂

Bryan | May 5, 2017 5:38 am

Why you do not use also alpine linux versions for postgres and nginx?

Thansk for the tutorial!

L. Alberto Giménez | May 5, 2017 9:41 am

Hi Bryan, thanks for your comment!

Of course, we could use an alpine based image if image size was a concern, but this is just a PoC tutorial, of course you can use any base images you like : )

Best regards,
L. Alberto

Uriel | April 25, 2017 4:55 pm

This is an awesome tutorial, thank you!

I just have one issue, when running docker-compose ps the web container is always restarting can´t seem to make it work

Uriel | April 25, 2017 5:43 pm

Managed to fix it, Anders comment below solved my problem!

Emma Briones | April 26, 2017 7:02 am

Great Uriel! Thanks for commenting!

Ihor | March 26, 2017 9:38 am

```if 'DB_NAME' in os.environ:
# Running the Docker image
'default': {
'ENGINE': 'django.db.backends.postgresql_psycopg2',
'NAME': os.environ['DB_NAME'],
'USER': os.environ['DB_USER'],
'PASSWORD': os.environ['DB_PASS'],
'HOST': os.environ['DB_SERVICE'],
'PORT': os.environ['DB_PORT']
# Building the Docker image
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
``` What are these comments about?

L. Alberto Giménez | March 27, 2017 10:36 am

Hi Ihor, thanks for commenting!

The meaning of the comment is explained a little bit later. In this particular case, we want to make a distinction in the settings between Docker image build time and container run time.

When running the container, Docker provides the environment with a bunch of variables, and one of them (DB_NAME) is the one we use to distinguish between image build/run time.

Hope we could make it clear for you.

Best regards,
L. Alberto

Vetos | March 9, 2017 7:17 am

How can we run commands like
migrate & createsuperuser?

L. Alberto Giménez | March 9, 2017 9:55 am

Hi Vetos, thanks for your comment!

Depending on your exact needs, you have several options:

You can execute any arbitrary command in a running container using "docker attach", executing the commands and then detaching with Ctrl+p Ctrl+q. Be sure not to exit with "Ctrl+d", as that stops the container.

If you need to do it as part of the image build process, you can do that in the Dockerfile using RUN commands.

The third option would be to add those commands to your entrypoint script and in the Dockerfile COPY that script to the image and then set it as ENTRYPOINT (or CMD).

Hope that helped!

L. Alberto

Diogo | April 8, 2017 8:33 pm

docker attach {{id}} doesn't work.
Can someone help?
Can someone help?

Emma Briones | April 13, 2017 9:51 am

Hi Diogo,
thanks for your comment!
As we don’t know your particular case (what you’ve tried, the errors that it is showing…) we’re not able to give you a proper answer.
Maybe you could try with Docker’s support channels. 🙂

Francis | February 26, 2017 11:47 pm

hello im doing this tutorial but i got an error can you help?
this is the error:
postgres_1 | LOG: database system is ready to accept connections
postgres_1 | LOG: autovacuum launcher started
myproject_web_1 exited with code 3
myproject_data_1 exited with code 0
web_1 | [2017-02-26 23:43:19 +0000] [1] [INFO] Starting gunicorn 19.6.0
web_1 | [2017-02-26 23:43:19 +0000] [1] [INFO] Listening at: http://0.0.0.0:8000 (1)
web_1 | [2017-02-26 23:43:19 +0000] [1] [INFO] Using worker: sync
web_1 | [2017-02-26 23:43:19 +0000] [9] [INFO] Booting worker with pid: 9
web_1 | [2017-02-26 23:43:19 +0000] [9] [ERROR] Exception in worker process
web_1 | Traceback (most recent call last):
web_1 | File "/usr/lib/python3.5/site-packages/gunicorn/arbiter.py", line 557, in spawn_worker
web_1 | worker.init_process()
web_1 | File "/usr/lib/python3.5/site-packages/gunicorn/workers/base.py", line 126, in init_process
web_1 | self.load_wsgi()
web_1 | File "/usr/lib/python3.5/site-packages/gunicorn/workers/base.py", line 136, in load_wsgi
web_1 | self.wsgi = self.app.wsgi()
web_1 | File "/usr/lib/python3.5/site-packages/gunicorn/app/base.py", line 67, in wsgi
web_1 | self.callable = self.load()
web_1 | File "/usr/lib/python3.5/site-packages/gunicorn/app/wsgiapp.py", line 65, in load
web_1 | return self.load_wsgiapp()
web_1 | File "/usr/lib/python3.5/site-packages/gunicorn/app/wsgiapp.py", line 52, in load_wsgiapp
web_1 | return util.import_app(self.app_uri)
web_1 | File "/usr/lib/python3.5/site-packages/gunicorn/util.py", line 357, in import_app
web_1 | __import__(module)
web_1 | ImportError: No module named 'mydjango.wsgi'
web_1 | [2017-02-26 23:43:19 +0000] [9] [INFO] Worker exiting (pid: 9)
web_1 | [2017-02-26 23:43:19 +0000] [1] [INFO] Shutting down: Master
web_1 | [2017-02-26 23:43:19 +0000] [1] [INFO] Reason: Worker failed to boot.
web_1 | [2017-02-26 23:43:21 +0000] [1] [INFO] Starting gunicorn 19.6.0
web_1 | [2017-02-26 23:43:21 +0000] [1] [INFO] Listening at: http://0.0.0.0:8000 (1)
web_1 | [2017-02-26 23:43:21 +0000] [1] [INFO] Using worker: sync
web_1 | [2017-02-26 23:43:21 +0000] [9] [INFO] Booting worker with pid: 9
web_1 | [2017-02-26 23:43:21 +0000] [9] [ERROR] Exception in worker process
web_1 | Traceback (most recent call last):
web_1 | File "/usr/lib/python3.5/site-packages/gunicorn/arbiter.py", line 557, in spawn_worker
web_1 | worker.init_process()
web_1 | File "/usr/lib/python3.5/site-packages/gunicorn/workers/base.py", line 126, in init_process
web_1 | self.load_wsgi()
web_1 | File "/usr/lib/python3.5/site-packages/gunicorn/workers/base.py", line 136, in load_wsgi
web_1 | self.wsgi = self.app.wsgi()
web_1 | File "/usr/lib/python3.5/site-packages/gunicorn/app/base.py", line 67, in wsgi
web_1 | self.callable = self.load()
web_1 | File "/usr/lib/python3.5/site-packages/gunicorn/app/wsgiapp.py", line 65, in load
web_1 | return self.load_wsgiapp()

L. Alberto Giménez | February 28, 2017 8:12 am

Hi Francis,

Thanks for commenting! It is quite hard to troubleshoot your particular problem since we don’t have the overall view of your project layout, but taking into account the error you get (“web_1 | ImportError: No module named ‘mydjango.wsgi’”), the problem might be related to some wrong path/filename/module name in your scripts or code.

Can you please double check that you followed all the steps?

Anders | March 2, 2017 9:56 pm

I had the same problem, but added a line in the docker-compose.yml under the web entry to make sure the working_dir was where my manage.py file was sitting. Maybe it has something to do with the newer version of Django. So, now my docker-compose.yml file has the following line:

working_dir: /data/web/mydjango

The alternative is to pass a --pythonpath string to gunicorn.


eduDorus | February 12, 2017 8:30 pm

Thank you for this awesome tutorial. I’m looking forward to use this in production

Lospejos | October 9, 2016 11:21 am

Well, great article. My suggestions to improve:
1) add a virtualenv to a Django application
2) create a github repository as an example and put here a link to it

Vladimir Vovk | December 20, 2016 6:15 am

We don’t need virtual environment here because we have only one application in our container.

Emma Briones | October 10, 2016 8:21 am

Hi Lospejos,

thank you very much for your suggestions!
We’ll keep it in mind for future occasions! 🙂

