A few years back (2013-2016) I was working as a C++ Software Development Engineer at Intel on a monolithic product with a backend written in C++ and a web frontend written in Java. The product was shipped complete with hardware and as a VMware image.
Internally we kept ISO CD images on a shared server for every released or QA-approved version of the product. Built into the product was a very clever issue-reporting mechanism that allowed us developers to replicate a customer’s setup. To reproduce bugs, we had a very handy web frontend that constructed a running virtual machine from a product version combined with the customer’s “feedback”.
The only downside was that it took some time (approx. 20 minutes) before we could use the virtual product and do testing. The bottleneck was that our test servers had to install a complete operating system bundled with our software and then fetch some definition files for the product.
It was about that time that developers were summoned internally to think outside the box and come up with new ideas on how to increase our SaaS market share. Just a few weeks earlier, my brother-in-law had told me how fascinated he was by Linux containers (LXC). As a developer quite new to a team that had existed for over a decade, I dared to ask if we were “allowed to change the virtual machine” – I was obviously thinking about containers. And yes, I was allowed to, but I had no clue what impact this would have.
What followed were some weeks of exploration, but quite early on I decided to go with Docker instead of managing Linux containers manually.
I looked into how to create your own base operating system image with Docker, because we shipped a customized operating system (RHEL) with our product. What followed was, in essence, an automated process that took every released ISO of our product and converted it into a ready-to-use Docker image, which got pushed to our internal Docker registry. As a result, we reduced the time it took to set up a ready-to-go machine from ~20 minutes to ~2 minutes.
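The post doesn’t show the actual conversion pipeline, but the documented way to create a base image in Docker is to start `FROM scratch` and add a root-filesystem tarball extracted from the installation media. A minimal sketch, assuming a hypothetical `rhel-rootfs.tar.gz` produced from the product ISO (all file names here are illustrative, not from the original setup):

```dockerfile
# Hypothetical sketch of a base image built from an ISO's root filesystem.
# "rhel-rootfs.tar.gz" stands in for a tarball extracted from the product ISO.
FROM scratch

# ADD auto-extracts local tar archives into the image's filesystem root.
ADD rhel-rootfs.tar.gz /

# Default to a shell so the image is usable interactively.
CMD ["/bin/bash"]
```

An automated pipeline would then build, tag, and push one such image per released ISO, e.g. `docker build -t registry.internal/product:3.2.1 . && docker push registry.internal/product:3.2.1` (registry name and tag scheme are assumptions).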
It goes without saying that I received a lot of help and support from my colleagues. Especially during the first days of that exploratory journey, I asked them a ton of questions so weird that at first they didn’t know why I would care about such low-level, OS-related things.
Up to that point, I had converted our monolithic product into a monolithic Docker image, and I knew that this was not how to run containers by the book. Nevertheless, you have to consider that nobody had to change a single line of source code to run the product in Docker, and this was true for already-released versions as well. On its own, this was a very convincing selling point for everybody in development and QA.
Now that we had improved our internal test and verification pipeline, the team responsible for bundling our product with the OS tried out different orchestration strategies and in the end went with Ansible and a more or less custom implementation. This was done mostly to stay in control and to meet some very specific customer requirements (e.g. global and region-based routing).
This post was originally published here.