Guide to managing multiple applications with systemd in a Docker container

There have been a lot of cases where teams need to run multiple commands / applications at runtime in a single Docker container. Quite a few organisations are simply lifting and shifting their legacy applications directly into containers. This is a pain for the DevOps teams, because they end up having to run multiple applications in the same container (which goes completely against the one-process-per-container idea, a bit ironic).

In this tutorial, we’re going to discuss the following topics.

  1. Understanding the problem with multiple applications in a single docker container
  2. How to use systemd to manage multiple applications in a docker container
  3. Example Dockerfile and systemd files to start / stop / restart your containers

Problem Statement

Containers by design are built to run one application (process) per container; that is pretty much what the ENTRYPOINT / CMD instruction in a Dockerfile does. Let’s say you have a website (HTML / CSS) that you want to serve from a Docker container with the Apache (httpd) web server. You can set “CMD apachectl -D FOREGROUND” to ensure that Apache is started at runtime. This is good because Docker can now ensure that Apache is always running: Docker will mark the container as “exited / crashed” if Apache crashes. This is important, because it tells us whether the application is actually working or not.

With that in mind, let’s say you have two applications that you want to run in the same container: the web server we discussed above and a Python Flask application (it could be any other app). You cannot have multiple CMD or ENTRYPOINT instructions in your Dockerfile, so there’s no way for you to ensure that BOTH applications are working. It might occur to you that you could write a simple shell script which starts both applications and call that shell script in CMD. The idea is not wrong, but it won’t help, because in that case the shell script is the process Docker monitors (PID 1). If the applications were to crash / fail, Docker would never realise they are down, because the script that started them is still running.
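To make that failure mode concrete, here is a small runnable sketch. The sleep and sh -c commands are my own stand-ins for real applications, not part of the original setup:

```shell
#!/bin/bash
# This wrapper script is what Docker would watch (PID 1).
# Even after one "application" crashes, the wrapper stays alive,
# so the container still looks healthy to Docker.
sleep 100 & app1=$!          # stands in for the long-running web server
sh -c 'exit 1' & app2=$!     # stands in for an application that crashes
wait "$app2"                 # reap the crashed app and capture its status
echo "app2 exited with status $?, but the wrapper is still running"
kill "$app1"                 # clean up the demo
```

Run it and you’ll see the script happily carries on after the second “application” has died, which is exactly the blind spot described above.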

So the question here is, how do you not only start multiple applications in a Docker container, but also ensure that they are restarted automatically in case they crash / fail?

Here’s one of the workarounds suggested by Docker. I personally prefer supervisord over systemd specifically for Docker containers, but in my case it was more of a “people problem”: the manager was adamant on using systemd even after a few heated counter-arguments. So here we go.

Using systemd inside a Docker container

This assumes that you already know what systemd is and you want to use it in your Docker container to start / manage the applications. We’ll discuss how you can achieve this.

  1. We already have a Docker image with systemd installed in it that we can use. You can find the image at jrei/systemd-ubuntu on Dockerhub.
  2. If you read the documentation, you’d know that this image HAS to run as a privileged container, with the host’s cgroup filesystem mounted into it, so that systemd can function as the init process inside the container.
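For reference, a run command for an image built this way might look like the following (the image / container names are placeholders; the exact flags come from the jrei/systemd-ubuntu documentation, so double-check them there):

```shell
docker run -d --name my-systemd-container \
  --privileged \
  -v /sys/fs/cgroup:/sys/fs/cgroup:ro \
  my-systemd-image:latest
```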

This is what the actual Dockerfile would look like.

FROM jrei/systemd-ubuntu:18.04
COPY myapplication /opt/myapplication
WORKDIR /opt/myapplication
COPY myapplication.service /etc/systemd/system/myapplication.service
RUN ln -s /etc/systemd/system/myapplication.service /etc/systemd/system/multi-user.target.wants/myapplication.service
RUN apt-get update && \
    apt-get install -y <dependencies for your app>
CMD ["/sbin/init"]
  • FROM – we’re instructing Docker to use jrei/systemd-ubuntu:18.04 as the base image. Be aware that you can use the Ubuntu version of your choice, just modify the tag.
  • COPY – copying my application files to my preferred location. This could be any location of your choice and application of your choice.
  • WORKDIR – Setting the workdir
  • COPY service – You will need to set up the systemd service file in order for systemd to recognise the application. The service file can be found below
  • RUN ln -s – Symlinking the unit file into multi-user.target.wants is the manual equivalent of running systemctl enable; without it, systemd will not start the application at boot
  • RUN – This is where you add your own instructions to install the dependencies of your application and setup the environment as required
  • CMD – We’re instructing Docker to start systemd (/sbin/init as PID 1) whenever somebody uses this image to create a container. So essentially, Docker will start systemd, and systemd will start / manage as many applications (service files) as we have set up.
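Once a container from this image is running, you can verify from the host that systemd is actually supervising the application (the container and service names here match this example; adjust them to yours):

```shell
docker exec my-systemd-container systemctl status myapplication
docker exec my-systemd-container journalctl -u myapplication --no-pager
```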

Example service file


[Service]
User=root
ExecStartPre=/bin/bash -c 'source /opt/myapplication/venv/bin/activate'
ExecStart=/bin/bash -c 'cd /opt/myapplication/ && /opt/myapplication/venv/bin/uwsgi --ini uwsgi.ini'
Restart=on-failure

[Install]
WantedBy=multi-user.target


Please be sure to modify this service file as per your preference. If you’re new to it, the most important lines here are “User, ExecStartPre and ExecStart”: that is where you set exactly how to start your application, and any pre-start steps / commands that have to be executed. In my case, I am first activating the Python virtual environment in “ExecStartPre” and then starting the application in “ExecStart”. You can simply replace my commands with yours. The “Restart=on-failure” line is what gets us the automatic restarts we were after: systemd will bring the application back up if it crashes.

If you have multiple applications, you simply create as many service files: one service file for each application. Don’t forget to COPY and symlink (ln -s) all of them in your Dockerfile.
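As a sketch, the relevant Dockerfile lines for two applications could look like this (webserver.service and flaskapp.service are made-up names for illustration):

```dockerfile
COPY webserver.service /etc/systemd/system/webserver.service
COPY flaskapp.service /etc/systemd/system/flaskapp.service
RUN ln -s /etc/systemd/system/webserver.service /etc/systemd/system/multi-user.target.wants/ && \
    ln -s /etc/systemd/system/flaskapp.service /etc/systemd/system/multi-user.target.wants/
```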

Hope this helps and feel free to comment below if you have any questions or concerns.


Dockerfile – Import Postgres Database Dump

I was recently working on dockerizing an application, essentially writing a Dockerfile for it. The application had a Python Flask frontend and PostgreSQL as the backend database. During the planning stage, we’d determined that we were not going to require any container / Docker orchestration (such as Kubernetes / Swarm), because we’d only need one instance of this application running. Logically, we ended up deciding on docker-compose.
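A minimal docker-compose.yml for such a two-service setup could look roughly like this (the service names, ports and build paths are assumptions for illustration, not the actual project layout):

```yaml
version: "3.8"
services:
  db:
    build: ./db            # directory containing the postgres Dockerfile below
    environment:
      POSTGRES_USER: youruser
      POSTGRES_PASSWORD: changeme
      POSTGRES_DB: myapp
  frontend:
    build: ./frontend      # the Flask application
    ports:
      - "5000:5000"
    depends_on:
      - db
```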

As part of the process, I received a database dump that was supposed to be imported into the database for the application to work. The job here was to import the dump into a vanilla PostgreSQL container, which would then be combined with the frontend in docker-compose. While the task is not that complex, I guess it’ll help someone out.

Here’s the Dockerfile that was used, along with the explanation.

FROM postgres:12.3
COPY my_database_dump.sql /home/tempdir/my_dump.sql
# "import_dump.sh" is an assumed name for the one-liner script shown below
COPY import_dump.sh /docker-entrypoint-initdb.d/

Here’s the one-liner

#!/bin/bash
set -e
psql -U youruser -d myapp < /home/tempdir/my_dump.sql

Now the first question is, why a shell script? I mean, we could just put it in the Dockerfile.

The reason to use a shell script is that Docker uses /bin/sh by default, and in some cases sh is known to have issues with modern shell constructs. You could use the “SHELL” instruction in the Dockerfile as an alternative, but in this case I just figured we could write a shell one-liner with a bash shebang and ensure that it runs under /bin/bash instead of sh.
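As one concrete example of the sh / bash gap (my own illustration, not something from this project): “set -o pipefail” is a bash option that a plain POSIX sh such as dash rejects outright.

```shell
#!/bin/bash
# Under bash with pipefail, a pipeline fails if ANY command in it
# fails, not just the last one. dash (/bin/sh on Debian/Ubuntu)
# does not support this option at all.
set -o pipefail
false | true
echo "pipeline exit status: $?"
```

Run the same two lines under /bin/sh on a Debian-based system and the script errors out before the pipeline even runs.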

The second question is, how do you actually ensure that the database dump is imported AFTER the database container is created? If you read the documentation of the postgres image on Docker Hub, it says that you can put your scripts in “/docker-entrypoint-initdb.d/” and they will be executed at runtime (on the container’s first start, when the data directory is empty) and not at build time. So that’s what we did.

I hope this clarified any doubts you had and happy learning!