Docker, Docker, Docker…

At CAPSiDE we have an amazing operations team and we also develop internal and customer-oriented tools. Being part of the development team, I have a strong opinion I want to share: we don’t talk enough about the benefits of using tools such as Docker for development. In this post, I’ll explain the benefits of adopting Docker and Docker Compose for development, based on our team’s experience.

  1. A new approach
  2. To deploy or not to deploy (containers in production)
  3. What happens if you don’t deploy containers?
  4. How to train your dragon
  5. Prepare your application for its habitat
  6. Bring everything to development
  7. Our life improved using Docker for development

1. A new approach

Traditionally, a developer would run the whole application on a laptop. This approach leads to multiple problems like huge initial setup times and incompatibilities when working on multiple projects.

Those problems were somewhat mitigated using devboxes. Usually, a devbox is a virtual machine with all the tools you need to work on a particular project. It adds an indirection layer but reduces launch time and isolates the projects. Still, we found some problems with this approach.

We decided to go one step further and use Docker for development. Build Docker images not for each application, but for each component of the application. If an application has, for example, an API, a worker and a database we would create three different images. Then, we use Docker Compose to orchestrate them and make each container mount the code from the local disk.
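As a minimal sketch of that setup (service names, build paths and mounted directories are hypothetical, not our actual project layout), a docker-compose.yml for such a three-component application could look like this:

```yaml
# Hypothetical sketch: one image per component, code mounted from local disk.
services:
  api:
    build: ./api
    volumes:
      - ./api/src:/app/src      # edit locally, run inside the container
    ports:
      - "3000:3000"
  worker:
    build: ./worker
    volumes:
      - ./worker/src:/app/src
    depends_on:
      - db
  db:
    image: postgres:9.6         # pin whatever version production runs
    volumes:
      - db-data:/var/lib/postgresql/data

volumes:
  db-data:
```

With the Dockerfiles and this file under version control, `docker-compose up --build` is roughly all it takes to get a working environment.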

It is important to note that the only dependencies we need installed on our laptops are Docker and Docker Compose. We have our Dockerfile and docker-compose.yml under version control. That means we can recreate a dev environment in minutes without human intervention. That’s a huge benefit when onboarding new contributors, when your laptop breaks or when you have to set up testing environments.

Docker for Developers - CAPSiDE

2. To deploy or not to deploy (containers in production)

If you deploy containers to production, it definitely makes sense to work with the same images in development. Those images are your artifacts, your deployment units, and you are now using Docker as a means of code distribution. You would also use the same images in your CI pipeline. You code, test and deploy the same image, using the same tooling.

Most of you won’t have that setup in place. Lots of people think that you can only run containers on orchestration platforms like Kubernetes. These platforms, as amazing as they are, are not a requirement for running your images in production: you can deploy the images to traditional VMs with Docker installed and run your containers there.

Then, you will use the exact same image in production that you had extensively tested during your development.

We use this approach in scenarios where the host machine is, resource-wise, adapted to the application (in CloudLand that’s easy because of the plethora of instance sizes you can choose from). It’s rendering great results for us, so I encourage you to consider it. It allows you to focus on containerizing your application without handling all the extra overhead of the orchestration platform. At least for now 🙂

3. What happens if you don’t deploy containers?

Ok, you still think deploying containers is not an option for you. Should you bother changing your development tools? I say: Yes!

Why is that? Because with Docker Compose you will be able to emulate your production environment. If you deploy your code to a traditional instance or even a physical machine, you can create an image using the same OS and dependencies and use it for development. You’ll have not only an image with all the packages installed in your production environment, but also one without extra packages that aren’t present in production.
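As a sketch of what that image could look like (the OS release and package names here are placeholders, not a recommendation), a Dockerfile mirroring a traditional production host might start like this:

```dockerfile
# Sketch only: use the same OS release your production machines run,
# and install the same system packages. Names below are placeholders.
FROM ubuntu:16.04

RUN apt-get update \
 && apt-get install -y --no-install-recommends \
      curl \
      libpq5 \
 && rm -rf /var/lib/apt/lists/*

# apt also lets you pin exact versions (pkg=version) to match production.
```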


I’ve spent a lot of time in my life trying to catch bugs caused by different versions of apps, dependencies, system libraries, etc. installed in different environments. It’s an annoying thing to do, and you end up feeling stupid for letting it happen in the first place. If you maintain Dockerfiles for your services, you have your dependencies (even system dependencies!) explicitly stated. You’ll keep finding ways to make things work on one machine and not on another, but it’ll be way more difficult.

If you don’t use the containers in production, you’ll have to make sure that your images are as similar as possible to your production environment. If you can, use the same tooling to provision the servers and to build your images. Your images will be more bloated, but you won’t have to keep things in sync manually. Remember that a manual process will fail; you just don’t know when.

You can also keep your development tooling inside containers. If you are strict about it, the whole team will be using exactly the same versions, and that will be one less thing to debug. In any case, try not to rely on tools or configs outside your repo. They will eventually get out of sync and you’ll have a hard time finding the cause. Sometimes you really need to access files outside your repo (for example, a credentials file). In that case, consider adding an automatic check for those files that prevents you from doing anything if they are not present or well-formed.
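As a sketch of such a check (the helper name and the example file paths are hypothetical), a small POSIX-shell function can fail fast before you bring the environment up:

```shell
#!/bin/sh
# Hypothetical pre-flight check: refuse to start if required local files
# (e.g. credentials kept outside the repo) are missing or empty.
check_required_files() {
  for f in "$@"; do
    if [ ! -s "$f" ]; then
      echo "Missing or empty required file: $f" >&2
      return 1
    fi
  done
  return 0
}

# Example usage, e.g. at the top of a wrapper around docker-compose up:
# check_required_files "$HOME/.aws/credentials" .env || exit 1
```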

Making your development environments similar to production allows you to pay a small price up-front to avoid much larger payments in the future. You will catch a lot of issues in your day-to-day development that would otherwise have been detected during a deploy or, worse, by your customers.

4. How to train your dragon

Think about your application as a gorgeous dragon: like applications, all the cool dragons live in the Cloud. Also, it’s a damn shame to see your beautiful product sleeping in a pile of money generated in the past.

Yeah, that makes me feel better about my job, too.


C’mon, Smaug! You can do better!

If your dragon has to live in the Cloud, it makes no sense to keep it in a cage on your desktop until the day it has to fly. If you don’t let it out of the cage early, it will not be able to fly or, even worse, breathe fire. One cannot become the king of the sky without proper training. The Targaryens forgot about that and ended up killing almost all the dragons. Don’t be a mad Targaryen.

5. Prepare your application for its habitat

Using Docker Compose in development adds an important constraint: your services are not on the same machine anymore.

This has a couple of important limitations in the way you code your application:

  1. Services can no longer share the local filesystem.
  2. Services can no longer talk to each other through localhost.

This is one of the cases where limitations are actually a good thing for you. Production-ready applications can’t rely on a shared disk or on localhost for communication between services. How does allowing it in development help you? It does not.

Using localhost or the local filesystem gives you a shortcut to bypass problems that you will face in production, where mistakes are more expensive to solve. You don’t want to take that shortcut. Embracing those limitations will lead you to better architectural decisions. All the components will be built to live isolated, and not bundled together into one “machine”. You will make the communication between components explicit and configurable.
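For example (service names and the environment variable are made up for illustration), you can make the endpoint each component talks to explicit configuration instead of an implicit localhost:

```yaml
# Hypothetical: the web component learns where the API lives through
# configuration, not through a hardcoded localhost address.
services:
  web:
    build: ./web
    environment:
      API_URL: http://api:80   # in production, the real API endpoint goes here
  api:
    build: ./api
```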

Sometimes one of the components of your application is a third-party service. For example, your production stack might use load balancers or databases as a service from your cloud provider. I recommend emulating those services in your development environment, even if not 100%. An alternative is to have conditional code that behaves differently in development and production; I’d strongly advise against that. It means there’s a path in your code that you never execute while developing, which is where you spend most of your time. Instead, you will only catch bugs in that “hidden” part when it really hurts: in production.

If your production infrastructure has an AWS Application LoadBalancer as a front-end for two different services (say, API and web) you can add a service in your docker-compose.yml file that plays the ALB role:

services:
  alb:
    image: nginx:alpine
    ports:
      - "8080:80"
    depends_on:
      - web
      - api
    command: |
      sh -c "cat > /etc/nginx/conf.d/default.conf <<'EON'
      server {
        listen 80;

        location / {
          proxy_pass http://web:80;
        }

        location /api {
          proxy_pass http://api:80;
        }
      }
      EON
      nginx -g 'daemon off;'"

This is a nice trick to inline nginx’s configuration inside the command in docker-compose.yml.

This way, the interaction between components will be closer to the production environment: the web component will never talk directly to the API component. It will go through the alb component instead. This design will help you grow nicely in the future.

If you use an RDS Database, you can add a container running the same database version your cloud provider is offering. This will force you to think about how you bootstrap and upgrade your database while you’re developing. In the end, you will have practised your upgrades hundreds of times in development. That will help you figure out how things will go once the code is pushed to production.
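A sketch of such a database service (the engine, version and bootstrap path are placeholders; pick whatever your RDS instance actually runs):

```yaml
# Hypothetical: run the same engine and version as your managed database.
services:
  db:
    image: postgres:9.6                # match your RDS engine version
    environment:
      POSTGRES_USER: app
      POSTGRES_PASSWORD: dev-only      # never reuse production credentials
      POSTGRES_DB: app
    volumes:
      - ./db/bootstrap:/docker-entrypoint-initdb.d   # schema + seed scripts
```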

6. Bring everything to development

It’s important to keep in mind that your product is not the code that runs on your laptop. Your product is what runs in production. There, you often have other roles involved. Specialists in operations, security or system administration should also shape what the product is.

A common misunderstanding of DevOps is that it seeks to replace those roles. My understanding is that it’s quite the opposite: it’s a great opportunity to talk with specialists sooner, when you are designing a solution, rather than later, when the cost of fixing a problem is greater. Being forced to think about your full product from the very beginning will raise a lot of questions about how the components will interact.

When you have two pieces that should work together, you want to work on the integration first. If you don’t, you will need to make assumptions when developing each component. Your assumptions will be wrong at some point. You will probably need to redo a lot of work, and that hurts. Using Docker Compose, you will be doing integration first and the contracts between the components will be defined from the beginning.


What happens when you don’t integrate first?

Working with all the integrations in mind will help you in different ways:

Two tips when making your images:

Ok, this definitely helps when you are designing a new system. But, what happens when you already have something in place?

In my experience, trying to build a development environment using Docker Compose is a great way to gain knowledge about the system. During the process, you will probably fix a lot of potential issues waiting to explode at the worst possible moment. In the end, your product will be more robust and your gained knowledge will be spread across the whole team.


7. Our life improved using Docker for development

I don’t know your situation, but I can tell you some of the benefits we’ve been experiencing since we decided to use Docker for development:

Maybe you’re already doing well on some of these points, or maybe you have other struggles. Using Docker for development (and Docker Compose) has made the life of our developers easier, and I hope it helps you, too. In any case, give it a thought. I’m sure you’ll find places where you can improve your process.

TAGS: application, automation, development, docker, docker compose

Comments
Ernest | December 15, 2017 9:18 am

Nice post! As a comment, I’d like to point out that the main benefit of Docker is the same as the main benefit of the well-known JVM. Okay, extended to more than just code.

Marc Egea i Sala | December 18, 2017 11:59 am

Thanks Ernest 🙂

I agree that some of the JVM promises are also Docker promises. Particularly, “run everywhere”.

But, IMHO, there are at least a couple of key differences that impact the way we develop:

1. With a classic JVM approach you still need all the tooling replicated in all the dev environments. Different versions of Maven or the JVM itself may lead to errors.

2. The container approach is completely language agnostic. You can use the language you want, and you can also use multiple languages in the same project. This happens more than we realize. For example, web projects heavily rely on JavaScript for the frontend, even if the backend is written in another language.

