This tutorial was written in March 2016 for informational purposes. It will not be updated for future versions of the technologies or dependencies used in it.
In this tutorial, we will see how we can deploy a full stack (Django web app, with PostgreSQL and Redis) using Docker Compose.
First, make sure you have a Docker setup ready, or follow this documentation for your distribution. Here we will assume the host and development machine is a Debian 8.
You will also need Docker Compose.
We will deploy a total of 5 Docker containers:
- 1 front container, which holds the application code. If you don't know Django, this will give you a little introduction to this awesome web framework.
- 1 reverse proxy, which will serve the static assets and forward dynamic requests to the Django WSGI process.
- 1 PostgreSQL database
- 1 Redis database
- 1 data container, which will hold the PostgreSQL data files, so we can freely rebuild and upgrade the PostgreSQL instance without impacting the data.
Here is a schema of the final Docker platform:
We will also create an automatic nightly backup of the database.
To start, we create a new git repository that will contain everything (just for convenience, this is not mandatory) and place the following directories inside it (a quick sketch of the commands follows the list):
- nginx: contains the nginx Dockerfile and the vhost definition
- postgres: contains the PostgreSQL initialization script
- web: contains our application code and the Dockerfile for the front container
- scripts: contains all our scripts (here, the backup script)
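A quick sketch of those commands (the repository name is just an example):

```sh
# Create the repository and the directory layout described above
git init myproject && cd myproject
mkdir -p nginx/sites-enabled postgres web scripts
```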
The basics
First, we will create an env file at the root directory of the project that will contain environment variables shared by several containers and scripts:
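A sketch of what that env file could contain; the variable names are the ones used later by the database settings and the init script, and the values are placeholders:

```
DB_NAME=mydjango
DB_USER=mydjango
DB_PASS=mydjango_password
DB_SERVICE=postgres
DB_PORT=5432
```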
Then, we create the docker-compose.yml file. Let me explain each service definition (a sketch of the complete file follows the list below):
- restart: the container should always be up, and it will restart if it crashes.
- build: we have to build this image from a Dockerfile before running it; this specifies the directory where the Dockerfile is located.
- expose: we expose port 8000 to the linked containers (it will be used by the NGINX container).
- links: we need to reach the PostgreSQL instance under the postgres name (this creates a postgres entry in the /etc/hosts file that points to the PostgreSQL instance's IP); the same goes for redis.
- env_file: the container will load all the environment variables from the env file.
- volumes: we specify the different mount points we want on this instance.
- command: the command to run when starting the container. Here we start the WSGI process.
- For some containers, instead of the build option we use the image option, which points to an existing image on the Docker registry.
- We also use volumes_from to load all the volumes of another container. We use it on the nginx container to load the static directory from the application and serve it, and on the PostgreSQL container to load the persistent tablespace that lives in the data container.
- We create a standard Redis container, using the latest version of the official image and exposing the Redis port.
- On the data container, we use true as the command (it does nothing and exits immediately), as we just want this container to hold the PostgreSQL tablespace rather than run a process.
- Using a data container is the recommended way to manage data persistence: with it, we don't risk any accidental deletion during, for example, an upgrade of the PostgreSQL container (which would wipe the data stored inside that container).
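Putting the pieces above together, here is a sketch of what the docker-compose.yml could look like (written in the legacy v1 syntax, since it relies on links and volumes_from; the paths, image tags and gunicorn command line are assumptions):

```yaml
web:
  restart: always
  build: ./web
  expose:
    - "8000"
  links:
    - postgres:postgres
    - redis:redis
  env_file: env
  volumes:
    - ./web:/data/web
  working_dir: /data/web
  command: gunicorn mydjango.wsgi:application -w 2 -b :8000

nginx:
  restart: always
  build: ./nginx
  ports:
    - "80:80"
  volumes_from:
    - web
  links:
    - web:web

postgres:
  restart: always
  image: postgres:latest
  expose:
    - "5432"
  env_file: env
  volumes_from:
    - data
  volumes:
    - ./postgres/init_db.sh:/docker-entrypoint-initdb.d/init_db.sh

redis:
  restart: always
  image: redis:latest
  expose:
    - "6379"

data:
  restart: always
  image: postgres:latest
  volumes:
    - /var/lib/postgresql/data
  command: "true"
```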
The PostgreSQL container
First, we will configure the PostgreSQL container to initialize itself. By default, the postgres image runs all the scripts found in the /docker-entrypoint-initdb.d directory, so let's create a simple script that creates a user and a database using the information from the env file:
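A minimal sketch of such a script (saved, for example, as postgres/init_db.sh and mounted into /docker-entrypoint-initdb.d/); it relies on the DB_NAME, DB_USER and DB_PASS variables from the env file:

```bash
#!/bin/bash
# Executed automatically on first start: the postgres image runs every
# script found in /docker-entrypoint-initdb.d
set -e

psql -v ON_ERROR_STOP=1 --username postgres <<EOSQL
CREATE USER ${DB_USER} WITH PASSWORD '${DB_PASS}';
CREATE DATABASE ${DB_NAME} OWNER ${DB_USER};
EOSQL
```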
The reverse proxy container
Next, we will configure the nginx container. The recommended way when using the tutum/nginx image is to use a Dockerfile, so let's create a simple one:
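A minimal sketch of nginx/Dockerfile, assuming the vhost definition lives in nginx/sites-enabled/:

```dockerfile
FROM tutum/nginx

# Drop the default vhost and add ours
RUN rm /etc/nginx/sites-enabled/default
COPY sites-enabled/ /etc/nginx/sites-enabled/
```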
And we just have to create a file in sites-enabled/ to serve the static files and forward the rest to the application:
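A sketch of such a vhost file (nginx/sites-enabled/django_project, for example); the web hostname comes from the compose link, while the static files path is an assumption:

```nginx
server {
    listen 80;
    server_name _;
    charset utf-8;

    # Static assets, shared by the web container through volumes_from
    location /static/ {
        alias /data/web/static/;
    }

    # Everything else goes to the Django WSGI process
    location / {
        proxy_pass http://web:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```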
The web application container
The easy way is to just use the official Python image, which is based on Debian, but the resulting image is quite big (mine was around 900 MB).
We want a very light front image so we can upgrade it easily and rapidly when the code changes, and we want to minimize the attack surface exposed to the outside. For this, we will base our image on Alpine Linux, which specializes in exactly that and is very common in the Docker world.
Then, let's create our custom Alpine image for the Django application. We will first run an interactive session to create the project, and then write the Dockerfile.
But first, fill the web/requirements.txt with all the Python modules we need:
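A plausible web/requirements.txt; Django, gunicorn and psycopg2 are implied by the rest of the tutorial, while django-redis is an assumption for the cache backend:

```
Django
gunicorn
psycopg2
django-redis
```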
Then, we can start:
We'll start a temporary container to create our Django application. Note that we do this so we don't have to install Python and its dependencies on our host system. We'll start an Alpine-based instance interactively:
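A sketch of that interactive session (the image tag and paths are assumptions); we mount the web/ directory so the generated project survives once the container is removed:

```sh
docker run -it --rm -v "$(pwd)/web:/data/web" -w /data/web python:3.5-alpine /bin/sh

# Inside the container, Django alone is enough to generate the project:
pip install Django
django-admin startproject mydjango .
exit
```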
Let's populate the configuration file (mydjango/settings.py) with parameters that use the information Docker provides inside the containers. Remove everything between the "DATABASE" part and the "INTERNATIONALIZATION" part and replace it with settings along these lines:
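The DATABASES block below relies on the variables from the env file; the CACHES block, based on the django-redis package, is an assumption:

```python
import os

# Use PostgreSQL when the container provides the env file variables,
# fall back to SQLite while the image is being built
if 'DB_NAME' in os.environ:
    # Running the Docker image
    DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.postgresql_psycopg2',
            'NAME': os.environ['DB_NAME'],
            'USER': os.environ['DB_USER'],
            'PASSWORD': os.environ['DB_PASS'],
            'HOST': os.environ['DB_SERVICE'],
            'PORT': os.environ['DB_PORT'],
        }
    }
else:
    # Building the Docker image
    DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.sqlite3',
            'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
        }
    }

# Cache backed by the redis container, reached through its DNS name
# (django-redis is an assumption, any Redis cache backend works)
CACHES = {
    'default': {
        'BACKEND': 'django_redis.cache.RedisCache',
        'LOCATION': 'redis://redis:6379/0',
        'OPTIONS': {
            'CLIENT_CLASS': 'django_redis.client.DefaultClient',
        },
    }
}
```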
Once done, just leave the container with exit (the --rm flag used when creating the instance destroys it on exit; the application is saved in the mounted volume).
Here we use the Docker database if we find the environment variables provided by the env file, and use that information to connect. If they are not present, it's because we are building the image, and we don't want this to block some commands (for example, compiling the gettext translations of your website).
For the Redis container, we simply target the redis DNS name, which will be present once we have deployed everything using docker-compose.
Now that we have our little Django application, let's put everything we need in the Dockerfile of the web container:
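A sketch of what web/Dockerfile could look like; the base image and the exact package list are assumptions:

```dockerfile
FROM python:3.5-alpine

RUN mkdir -p /data/web
WORKDIR /data/web

COPY requirements.txt /data/web/

# Install the build dependencies only for the time of the pip install,
# then remove them to keep the final image small
RUN apk add --no-cache postgresql-libs \
 && apk add --no-cache --virtual .build-deps gcc musl-dev postgresql-dev \
 && pip install -r requirements.txt \
 && apk del .build-deps

COPY . /data/web/

# The actual command (gunicorn) is set in docker-compose.yml
EXPOSE 8000
```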
As you can see, we also remove the packages we no longer need once the Python modules are installed (those used to compile the PostgreSQL module, for example).
Let the magic happen
Grab a coffee (depending on your connection speed) and build everything:
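For instance, from the project root:

```sh
docker-compose build
docker-compose up -d
```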
Then you can check your containers:
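For instance:

```sh
docker-compose ps
```

The State column will show whether each service is Up, Restarting or Exited.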
The "restarting" status on the data container is normal, as it does not run a persistent process.
If everything is fine, go to your public IP and you should see the default Django page (it will only work if the application can successfully connect to the database and load the cache engine).
The backups
If everything went fine, we can set up backups for the PostgreSQL database. Here we create a small script that dumps the database every night and uploads it to S3 (to a bucket with versioning enabled and a lifecycle policy):
/scripts/do_backup.py
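Below is a minimal sketch of what such a script could look like (not the original one). It assumes it is run from the compose project directory, that the postgres service has the env file loaded (so DB_USER and DB_NAME are defined inside its container), and that the bucket name is a placeholder:

```python
#!/usr/bin/env python
import datetime
import os
import subprocess

import boto3

BUCKET = 'my-backup-bucket'  # placeholder, use your own bucket


def main():
    day = datetime.date.today().isoformat()
    dump_path = '/tmp/db-backup-{}.sql'.format(day)

    # Dump the database from inside the postgres service container;
    # -T avoids allocating a TTY so this also works from cron
    with open(dump_path, 'wb') as dump_file:
        subprocess.check_call(
            ['docker-compose', 'exec', '-T', 'postgres',
             'sh', '-c', 'pg_dump -U "$DB_USER" "$DB_NAME"'],
            stdout=dump_file,
        )

    # Upload the dump to S3 (versioning on the bucket keeps older copies)
    boto3.client('s3').upload_file(
        dump_path, BUCKET, 'postgres/{}'.format(os.path.basename(dump_path)))

    os.remove(dump_path)


if __name__ == '__main__':
    main()
```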
Don't forget to install the boto3 pip module and the AWS CLI, and to configure your credentials with aws configure.
Then, just configure a cron job to schedule it when you want.
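For example, a crontab entry along these lines (paths are placeholders) runs the backup every night at 03:00:

```
# m h dom mon dow command
0 3 * * * cd /path/to/your/project && python scripts/do_backup.py >> /var/log/db_backup.log 2>&1
```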
The end
You now have a fully working Docker Compose environment 🙂 When you make a modification, you just need to run docker-compose build; docker-compose up -d again and everything will be updated automatically.
Comments
Hi, I would run it with MySQL rather than Postgres. The required modifications would be really helpful, thanks.
Great article. Also, can you tell us how to set this up to use a hostname instead of an IP?
I have a problem dockerizing my Django project.
The related project runs with the command below:
docker-compose up -d
With this command my stack runs completely, but after building my project I don't have any container! Please find below the commands used to build and run my project:
docker build -t my-app .
docker run -e myapp
After executing the previous commands and using "docker ps", nothing is running and my container list is empty!
My Dockerfile is:
FROM python:3
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
ADD requirements.txt /code/
RUN pip install -r requirements.txt
ADD . /code/
EXPOSE 8000
And my docker-compose.yml is:
version: '3'
services:
  db:
    image: postgres
  webs:
    build: .
    command: bash -c "python manage.py makemigrations && python manage.py migrate && python manage.py runserver 0.0.0.0:8000"
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
would you please help me?
thank you
Hi, thanks for the great article. It was quite useful for me even two years after it was written.
I just wanted to say that you have not made it clear which version of docker-compose this article was written for, at the point of filling in the docker-compose.yml file. I tried to run it with v3 but it failed, as `volumes_from` has been removed in v3 and something else (the top-level `volumes` key) was introduced, which is not quite the same thing. So to run exactly this setup, we would need v2.
Also, an aggregated docker-compose.yml file including everything at the end of the article would be much appreciated.
Keep up the good work 🙂
Hi Saeed,
Thank you for your comment, we will take into account your recommendations.
Glad it was useful!
Is the port correct?
“Expose : We expose the port 8080 to linked machines (that will be used by the NGINX container)”
Hi Yuri,
there was a typo. Thanks for the heads up! The port is 8000.
do we experience downtime whenever we build and run docker-compose up?
Hi Soreena, thanks for participating 🙂
It’s a really good question. There will be, indeed, a downtime.
This downtime will not be between the ‘build’ and the ‘up’. You can run ‘build’ any number of times without affecting your already running service.
But if the image has changed, when you run ‘docker-compose up -d’ it will restart the service(s) affected. You will be without service while:
- The previous container stops. Some processes take a while to stop.
- The process in the new container starts. Again, it can take a while depending on your setup.
I hope that clarifies the issue 🙂
Why don't you also use Alpine Linux versions for postgres and nginx?
Thanks for the tutorial!
Hi Bryan, thanks for your comment!
Of course, we could use an Alpine-based image if image size were a concern, but this is just a PoC tutorial, so you can use any base images you like :)
Best regards,
L. Alberto
This is an awesome tutorial, thank you!
I just have one issue: when running docker-compose ps, the web container is always restarting. I can't seem to make it work.
Managed to fix it, Anders comment below solved my problem!
Great Uriel! Thanks for commenting!
```
if 'DB_NAME' in os.environ:
    # Running the Docker image
    DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.postgresql_psycopg2',
            'NAME': os.environ['DB_NAME'],
            'USER': os.environ['DB_USER'],
            'PASSWORD': os.environ['DB_PASS'],
            'HOST': os.environ['DB_SERVICE'],
            'PORT': os.environ['DB_PORT']
        }
    }
else:
    # Building the Docker image
    DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.sqlite3',
            'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
        }
    }
```
What is this comment about?
Hi Ihor, thanks for commenting!
The meaning of the comment is explained a little bit later. In this particular case, we want to make a distinction in the settings between Docker image build time and container run time.
When running the container, Docker provides the environment with a bunch of variables, and one of them (DB_NAME) is the one we use to distinguish between image build/run time.
Hope we could make it clear for you.
Best regards,
L. Alberto
Hi.
How can we run commands like migrate & createsuperuser?
Hi Vetos, thanks for your comment!
Depending on your exact needs, you have several options:
You can execute any arbitrary command in a running container using "docker attach", running the commands and then detaching with Ctrl+p Ctrl+q. Be sure not to use Ctrl+d to exit, as that would stop the container.
If you need to do it as part of the image build process, you can do that in the Dockerfile using RUN commands.
The third option would be to add those commands to your entrypoint script, COPY that script into the image in the Dockerfile, and set it as the ENTRYPOINT (or CMD).
Hope that helped!
L. Alberto
docker attach {{id}} doesn't work.
Can someone help?
Thanks
Hi Diogo,
thanks for your comment!
As we don’t know your particular case (what you’ve tried, the errors that it is showing…) we’re not able to give you a proper answer.
Maybe you could try with Docker’s support channels. 🙂
Hello, I'm doing this tutorial but I got an error, can you help?
This is the error:
postgres_1 | LOG: database system is ready to accept connections
postgres_1 | LOG: autovacuum launcher started
myproject_web_1 exited with code 3
myproject_data_1 exited with code 0
web_1 | [2017-02-26 23:43:19 +0000] [1] [INFO] Starting gunicorn 19.6.0
web_1 | [2017-02-26 23:43:19 +0000] [1] [INFO] Listening at: http://0.0.0.0:8000 (1)
web_1 | [2017-02-26 23:43:19 +0000] [1] [INFO] Using worker: sync
web_1 | [2017-02-26 23:43:19 +0000] [9] [INFO] Booting worker with pid: 9
web_1 | [2017-02-26 23:43:19 +0000] [9] [ERROR] Exception in worker process
web_1 | Traceback (most recent call last):
web_1 | File "/usr/lib/python3.5/site-packages/gunicorn/arbiter.py", line 557, in spawn_worker
web_1 | worker.init_process()
web_1 | File "/usr/lib/python3.5/site-packages/gunicorn/workers/base.py", line 126, in init_process
web_1 | self.load_wsgi()
web_1 | File "/usr/lib/python3.5/site-packages/gunicorn/workers/base.py", line 136, in load_wsgi
web_1 | self.wsgi = self.app.wsgi()
web_1 | File "/usr/lib/python3.5/site-packages/gunicorn/app/base.py", line 67, in wsgi
web_1 | self.callable = self.load()
web_1 | File "/usr/lib/python3.5/site-packages/gunicorn/app/wsgiapp.py", line 65, in load
web_1 | return self.load_wsgiapp()
web_1 | File "/usr/lib/python3.5/site-packages/gunicorn/app/wsgiapp.py", line 52, in load_wsgiapp
web_1 | return util.import_app(self.app_uri)
web_1 | File "/usr/lib/python3.5/site-packages/gunicorn/util.py", line 357, in import_app
web_1 | __import__(module)
web_1 | ImportError: No module named 'mydjango.wsgi'
web_1 | [2017-02-26 23:43:19 +0000] [9] [INFO] Worker exiting (pid: 9)
web_1 | [2017-02-26 23:43:19 +0000] [1] [INFO] Shutting down: Master
web_1 | [2017-02-26 23:43:19 +0000] [1] [INFO] Reason: Worker failed to boot.
web_1 | [2017-02-26 23:43:21 +0000] [1] [INFO] Starting gunicorn 19.6.0
web_1 | [2017-02-26 23:43:21 +0000] [1] [INFO] Listening at: http://0.0.0.0:8000 (1)
web_1 | [2017-02-26 23:43:21 +0000] [1] [INFO] Using worker: sync
web_1 | [2017-02-26 23:43:21 +0000] [9] [INFO] Booting worker with pid: 9
web_1 | [2017-02-26 23:43:21 +0000] [9] [ERROR] Exception in worker process
web_1 | Traceback (most recent call last):
web_1 | File "/usr/lib/python3.5/site-packages/gunicorn/arbiter.py", line 557, in spawn_worker
web_1 | worker.init_process()
web_1 | File "/usr/lib/python3.5/site-packages/gunicorn/workers/base.py", line 126, in init_process
web_1 | self.load_wsgi()
web_1 | File "/usr/lib/python3.5/site-packages/gunicorn/workers/base.py", line 136, in load_wsgi
web_1 | self.wsgi = self.app.wsgi()
web_1 | File "/usr/lib/python3.5/site-packages/gunicorn/app/base.py", line 67, in wsgi
web_1 | self.callable = self.load()
web_1 | File "/usr/lib/python3.5/site-packages/gunicorn/app/wsgiapp.py", line 65, in load
web_1 | return self.load_wsgiapp()
Hi Francis,
Thanks for commenting! It is quite hard to troubleshoot your particular problem since we don’t have the overall view of your project layout, but taking into account the error you get (“web_1 | ImportError: No module named ‘mydjango.wsgi’”), the problem might be related to some wrong path/filename/module name in your scripts or code.
Can you please double check that you followed all the steps?
I had the same problem, but added a line in the docker-compose.yml under the web entry to make sure the working_dir was where my manage.py file was sitting. Maybe it has something to do with the newer version of Django. So, now my docker-compose.yml file has the following line:
working_dir: /data/web/mydjango
The alternative is to pass a --pythonpath argument to gunicorn.
Anders
Thank you for this awesome tutorial. I'm looking forward to using this in production.
Well, great article. My suggestions to improve:
1) add a virtualenv to a Django application
2) create a GitHub repository as an example and put a link to it here
We don't need a virtual environment here because we have only one application in our container.
Hi Lospejos,
thank you very much for your suggestions!
We’ll keep it in mind for future occasions! 🙂