
Guide to managing multiple applications with systemd in a Docker container

There have been a lot of cases where teams were looking to run multiple commands / applications at runtime in a Docker container. Quite a few organisations are simply lifting and shifting their legacy applications directly into containers. This is a pain for the DevOps teams, because they end up having to run multiple applications in the same container (which, a bit ironically, goes totally against the idea of containers).

In this tutorial, we’re going to discuss the following topics.

  1. Understanding the problem with multiple applications in a single docker container
  2. How to use systemd to manage multiple applications in a docker container
  3. Example Dockerfile and systemd service files to start / stop / restart your applications

Problem Statement

Containers by design are built to run one application (process) per container; that is pretty much what the ENTRYPOINT / CMD instruction in a Dockerfile does. Let’s say you have a website (HTML / CSS) that you would want to run in a Docker container with the help of the Apache (httpd) web server. You can set “CMD apachectl -D FOREGROUND” in order to ensure that Apache is started at runtime. This is good because now Docker can ensure that Apache is always running: Docker will mark the container as “exited” if Apache crashes. This is important, because it is how we know whether the application is actually working or not.
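
As a quick illustration, a minimal Dockerfile for that single-application case might look something like this (a sketch; on Ubuntu the control script is apache2ctl, and ./my-site is just a placeholder for your own files):

FROM ubuntu:18.04
RUN apt-get update && apt-get install -y apache2
# Copy the static site into Apache's default document root
COPY ./my-site/ /var/www/html/
# Run Apache in the foreground so it stays PID 1 and Docker can track it
CMD ["apache2ctl", "-D", "FOREGROUND"]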

With that in mind, let’s say you have two applications that you’d want to run in the same container: the web server we discussed above and a Python Flask application (it could be any other app). You cannot have multiple CMD or ENTRYPOINT instructions in your Dockerfile, so there’s no way for you to ensure that BOTH applications are working. It might occur to you that you could write a simple shell script that starts both applications and call that script in CMD. The idea is not wrong, but it won’t help, because now the shell script is what Docker is watching, not your applications. If they were to crash / fail, Docker would never realise it: the script that started them is still running, so as far as Docker is concerned the container is perfectly healthy.
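
To make that concrete, the naive script would look roughly like this (the file name and app paths are hypothetical):

#!/bin/bash
# start.sh - starts both apps, but Docker only ever watches this script
apachectl -D FOREGROUND &            # web server, pushed to the background
python3 /opt/flaskapp/app.py &       # Flask app, pushed to the background
wait                                 # block so this script (PID 1) stays alive

If the Flask app dies, wait simply keeps blocking on Apache (and vice versa), so the container stays “up” and Docker is none the wiser.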

So the question here is, how do you not only start multiple applications in a Docker container, but also ensure that they are restarted automatically in case they crash / fail?

Here’s one of the workarounds suggested by Docker itself: use a process manager inside the container. I personally like supervisord over systemd specifically for Docker containers, but in my case it was more of a “people problem”. The “manager” was adamant on using systemd even after a few heated counter-arguments. So here we go.

Using systemd inside a Docker container

This assumes that you already know what systemd is and you want to use it in your Docker container to start / manage the applications. We’ll discuss how you can achieve this.

  1. We already have a Docker image with systemd installed in it that we can use: jrei/systemd-ubuntu on Docker Hub.
  2. If you read its documentation, you’d know that this image would HAVE to run as a privileged container, with the host’s cgroup filesystem (/sys/fs/cgroup) mounted into it; that is what gives systemd the access it needs to manage services. An example run command follows this list.
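
For reference, running a container from such an image typically looks something like this (the names here are examples, and depending on your Docker version / cgroup setup you may need additional flags, so do check the image’s documentation):

docker run -d --name myapplication \
    --privileged \
    -v /sys/fs/cgroup:/sys/fs/cgroup:ro \
    myapplication-image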

This is what the actual Dockerfile would look like.

FROM jrei/systemd-ubuntu:18.04
COPY myapplication /opt/myapplication
WORKDIR /opt/myapplication
COPY myapplication.service /etc/systemd/system/myapplication.service
RUN ln -s /etc/systemd/system/myapplication.service /etc/systemd/system/multi-user.target.wants/myapplication.service
RUN apt-get update && \
    apt-get install -y python3 python3-venv   # replace with your app's dependencies
CMD ["/sbin/init"]
  • FROM – we’re instructing Docker to use jrei/systemd-ubuntu:18.04 as the base image. You can use the Ubuntu version of your choice; just modify the tag.
  • COPY – copying my application files to my preferred location. This could be any location and any application of your choice.
  • WORKDIR – setting the working directory for the instructions that follow.
  • COPY service – you need to set up the systemd service file so that systemd recognises the application. An example service file can be found below.
  • RUN ln -s – this symlink is what enables the service (it is the manual equivalent of systemctl enable), so systemd starts your application when the container boots. Without it, the service simply won’t start.
  • RUN – this is where you add your own instructions to install your application’s dependencies and set up the environment as required.
  • CMD – we’re instructing Docker to start systemd (/sbin/init) whenever somebody uses this image to create a container. So essentially, Docker starts systemd, and systemd starts / manages as many applications (service files) as we have set up.
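
Once you’ve built the image (docker build -t myapplication-image .) and started a container (see the docker run example above), you can verify that systemd is actually supervising your application. A quick sketch, assuming the container is named myapplication:

docker exec myapplication systemctl status myapplication.service
docker exec myapplication journalctl -u myapplication.service --no-pager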

Example service file

[Unit]
Description=myapplication
After=network.target

[Service]
WorkingDirectory=/opt/myapplication
Type=simple
Restart=always
RestartSec=30
User=myuser
ExecStartPre=/bin/bash -c 'source /opt/myapplication/venv/bin/activate'
ExecStart=/bin/bash -c 'cd /opt/myapplication/ && /opt/myapplication/venv/bin/uwsgi --ini uwsgi.ini'

[Install]
WantedBy=multi-user.target

Please be sure to modify this service file as per your preference. If you’re new to systemd, the most important lines here are “User, ExecStartPre and ExecStart”; that is where you set exactly how to start your application and any pre-start steps / commands that have to be executed. In my case, I source the Python virtual environment in “ExecStartPre” and then start the application in “ExecStart”. One caveat: systemd runs each Exec directive in its own process, so the environment set up by that source does not actually carry over to ExecStart; the unit works because ExecStart calls the venv’s uwsgi binary by its absolute path. You can basically replace my commands with yours.
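
In fact, since WorkingDirectory is already set and the venv’s uwsgi is referenced by absolute path, a leaner equivalent ExecStart (no bash wrapper, no cd) would be:

ExecStart=/opt/myapplication/venv/bin/uwsgi --ini uwsgi.ini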

If you have multiple applications, you would simply have as many service files: one service file for each application. Don’t forget to copy and symlink (ln -s) every one of them in your Dockerfile, as in the snippet below.
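
For example, with two hypothetical apps, app1 and app2, the relevant Dockerfile lines would be:

COPY app1.service /etc/systemd/system/app1.service
COPY app2.service /etc/systemd/system/app2.service
RUN ln -s /etc/systemd/system/app1.service /etc/systemd/system/multi-user.target.wants/app1.service && \
    ln -s /etc/systemd/system/app2.service /etc/systemd/system/multi-user.target.wants/app2.service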

Hope this helps and feel free to comment below if you have any questions or concerns.
