With the DevOps mindset comes the need for automation: the need to remove manual intervention (and thus human error) from the deployment pipeline as much as possible, because errors are more likely whenever a task is performed by anyone other than a machine. The database is as much a part of this pipeline as any other component, yet it is often left as an afterthought of the deployment tasks. Or there may be a fear that an application will not handle the details as well as a person would, and that data loss may ensue if things are not done carefully.

But it shouldn’t be like this. No person should be needed to update a schema to the latest version of an application or, conversely, to roll back changes. The gain in speed and reliability, with no supervision required, is what makes continuous deployment possible. Another added value is that schema upgrades can be tested before they make it to production.

Thus, I present to you:

Database automation

Tools to execute automatic schema upgrades are often referred to as “database migration” tools. There are a lot of them out there, and I’d like to comment on one I’ve used in some projects. This does not mean that CAPSiDE endorses or recommends Liquibase. Take a look at the tools that your ORM brings along (many of them already include schema migration tools).

So, on with Liquibase:

It’s a tool to keep the database schema and its evolution under source control by providing an XML (or JSON or YAML) DSL that defines this schema and the changesets that follow the initial creation. This way, a project can keep its application code and its database schema synchronised.

How does it work?

Liquibase is a Java application that can be integrated into Maven/Gradle/Ant projects, or executed standalone on the command line. It can also be integrated into Java EE/Spring apps to execute changes upon application deployment, but Liquibase itself is not restricted to Java at all, and is agnostic of the database (click here for a list of supported databases). Since it can be executed on the command line, it can be integrated into whichever build and project management tools (Rake, Grunt, Module::Build, etc.) and whichever CI server (Jenkins, Travis CI, Team Foundation Server, etc.) you’re using for your project. Only Java 1.6+ is needed to run Liquibase.

To know what to do, Liquibase needs changeset files that describe the schema of your database, and its DSL provides the tools to write them. A simple example will help show what a changeset looks like:

<?xml version="1.0" encoding="UTF-8"?>
<databaseChangeLog
    xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:ext="http://www.liquibase.org/xml/ns/dbchangelog-ext"
    xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-3.0.xsd
        http://www.liquibase.org/xml/ns/dbchangelog-ext http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-ext.xsd">

    <preConditions>
        <runningAs username="liquibase"/>
    </preConditions>

    <changeSet id="1" author="nvoxland">
        <createTable tableName="person">
            <column name="id" type="int" autoIncrement="true">
                <constraints primaryKey="true" nullable="false"/>
            </column>
            <column name="firstname" type="varchar(50)"/>
            <column name="lastname" type="varchar(50)">
                <constraints nullable="false"/>
            </column>
            <column name="state" type="char(2)"/>
        </createTable>
    </changeSet>

    <changeSet id="2" author="nvoxland">
        <addColumn tableName="person">
            <column name="username" type="varchar(8)"/>
        </addColumn>
    </changeSet>

    <changeSet id="3" author="nvoxland">
        <addLookupTable existingTableName="person" existingColumnName="state"
            newTableName="state" newColumnName="id" newColumnDataType="char(2)"/>
    </changeSet>
</databaseChangeLog>


If this excerpt causes you to shudder in light of the XML verbosity, a YAML version exists.

From a changeset master file, Liquibase will take the actions needed to update the database. To know which changes to apply, the application creates a table (databasechangelog) in the schema that keeps track of which changesets were applied and when. It also creates another table (databasechangeloglock) to prevent concurrent execution of two Liquibase commands.
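The bookkeeping idea is easy to see in isolation. Here is a minimal sketch, not Liquibase’s actual code, of how a migration tool can use a tracking table to decide which changesets still need to run, using an in-memory SQLite database for illustration:

```python
import sqlite3

# Illustrative sketch (not Liquibase itself): a tracking table records which
# changeset ids have already been applied to this database.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE databasechangelog (
    id TEXT, author TEXT, dateexecuted TEXT DEFAULT CURRENT_TIMESTAMP)""")

changelog = ["1", "2", "3"]   # changeset ids defined in the changelog file

def pending(conn, changelog):
    """Return the changesets in the changelog that are not yet recorded."""
    applied = {row[0] for row in conn.execute("SELECT id FROM databasechangelog")}
    return [cs for cs in changelog if cs not in applied]

# First run: everything is pending, so each changeset is applied and recorded.
for cs in pending(conn, changelog):
    conn.execute("INSERT INTO databasechangelog (id, author) VALUES (?, 'nvoxland')",
                 (cs,))

# Second run: the tracking table says there is nothing left to do.
print(pending(conn, changelog))   # → []
```

This is why running the same update twice is harmless, and why adding a new changeset to the file later is enough for the tool to pick it up.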

The update command isn’t the only one Liquibase provides. All sorts of commands help manage the database, extract information, roll back changes, generate SQL, and more.

For example, by executing the following command:

liquibase \
      --driver=oracle.jdbc.OracleDriver \
      --classpath=/path/to/classes:jdbcdriver.jar \
      --changeLogFile=db.master_changelog.xml \
      --url="jdbc:oracle:thin:@localhost:1521:oracle" \
      --username=scott \
      --password=tiger \
      update

Assuming db.master_changelog.xml was the previous XML file, it would create a “person” table, then add a “username” column to that table, and then create a lookup table for “person”. (You’re not limited to basic SQL operations, either: with plugins, Liquibase can be extended to output more complex SQL for its operations.)

Also, any option of a Liquibase command can go into a properties file, so as not to have to write every parameter every time.
If the target database already had these operations performed previously, Liquibase would recognise it and would not execute them again.
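For instance, the connection parameters from the command above could live in a liquibase.properties file (the keys mirror the command-line flags):

```properties
driver: oracle.jdbc.OracleDriver
classpath: /path/to/classes:jdbcdriver.jar
changeLogFile: db.master_changelog.xml
url: jdbc:oracle:thin:@localhost:1521:oracle
username: scott
password: tiger
```

With that file in place, the invocation shrinks to just `liquibase update`.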

In case you’re wondering whether you can use Liquibase on a pre-existing database: you can, by using the generateChangeLog command to generate an initial changeset from the current schema.
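As a sketch (connection flags as in the earlier example, file name of your choosing), it could look like:

```shell
# Reverse-engineer the existing schema into a starting changelog
liquibase \
      --changeLogFile=initial_changelog.xml \
      --url="jdbc:oracle:thin:@localhost:1521:oracle" \
      --username=scott \
      --password=tiger \
      generateChangeLog
```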

What does Liquibase really bring me?

So, that’s all nice and dandy, but… what does this tool really bring to the table that just giving the SQL to the operations department doesn’t?

First of all, the DSL. Be it in XML, YAML or JSON (or even in a pseudo-SQL), it gives you four very important things: database independence, changes that can be reviewed before execution, automatically generated rollbacks for many operations and, last of all, a schema history kept under version control.

From the last benefit comes one of the best features of Liquibase: tagging. It’s the same concept as a VCS tag: you mark a point as an anchor to come back to later if need be. The main purpose of tags in Liquibase is to be able to roll back to them; to do so, it rolls back all the changesets that came after the tag.
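On the command line this is a two-command affair (a sketch; connection flags omitted for brevity):

```shell
# Mark the current state of the database, e.g. when releasing version 1.2.0
liquibase tag v1.2.0

# Later, undo every changeset applied after that tag
liquibase rollback v1.2.0
```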

It is therefore important that every changeset has rollback operations defined. If you use the Liquibase DSL, you will probably get them automatically. If not, you can still define rollback operations in the rollback section of a changeset, like so:

- changeSet:
    id: changeRollback
    author: nvoxland
    changes:
      - createTable:
          tableName: changeRollback1
          columns:
            - column:
                name: id
                type: int
    rollback:
      - dropTable:
          tableName: changeRollback1

If you then synchronise your application’s release tags with the Liquibase tags, you get a very easy way to transition from one database version to another.

Another good thing Liquibase provides is project and tech-stack independence. You don’t depend on a particular language, which means database handling can be unified across projects, easing day-to-day operations.

It can’t be this easy… right?

It isn’t. There are some pitfalls when using Liquibase, and some difficulties that have to be overcome in order to use the tool to its fullest.

First of all, in true DevOps spirit, Liquibase can’t be used if either the devs or the ops aren’t buying in. It requires strong discipline from the dev team, since each and every change must be registered in a changeset with a rollback provided. This matters less during development, before the first version is released to production, since the schema can be recreated at any time. After that, though, failing to put every change into a changeset, and to tag each code release, will cause trouble and defeat the purpose of using Liquibase.

For the operations team it’s an easier buy-in, since it actually eases database handling by automating it, removing the need for manual intervention. If automated upgrades to the production environment are not on your list of priorities, Liquibase still provides a host of commands that help manage and execute the changesets, either by applying them itself or by generating the SQL that would be executed, so that it can be reviewed beforehand.
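The updateSQL command covers that review workflow: it writes out the SQL that update would run instead of executing it (a sketch; connection flags as in the earlier example):

```shell
# Print the SQL that `update` would execute, without touching the database
liquibase \
      --changeLogFile=db.master_changelog.xml \
      --url="jdbc:oracle:thin:@localhost:1521:oracle" \
      --username=scott \
      --password=tiger \
      updateSQL > pending_changes.sql
```

The resulting file can go through whatever change-approval process operations already has.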

In conclusion…

This article doesn’t aspire to be a Liquibase tutorial, nor a manual on how to use it. For that, I recommend reading the documentation on the official page. Read up on best practices, since there are some patterns that may bite you later in the project if you fall into them.

What I really wanted to show you is how you can benefit from Liquibase, or the inclusion of any database migration tool into your development process, and thus be able to integrate schema changes into a continuous deployment/delivery system, getting closer to a DevOps style workflow.

TAGS: automated database, database migration, liquibase

Gabriele | February 7, 2018 3:13 pm

Hi, I have a problem integrating liquibase maven execution with a mysql 5.7 database provided by azure as a Service.
the error log is:
Failed to execute goal org.liquibase:liquibase-maven-plugin:3.0.5:status (default-cli) on project — : Error setting up or running Liquibase: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
Last packet sent to the server was 575 ms ago. Remote host closed connection during handshake: SSL peer shut down incorrectly.

this is the connection string I give to liquibase:

Emma Briones | February 8, 2018 9:41 am

Hi Gabriele,

Thank you for reading our labs post!
I’m not sure we can help you as we don’t have all the required information for your case.
Anyway, we suggest you check the JDBC driver documentation, especially the connection string format, and also Azure’s MySQL documentation in your case. 😉

