Building application containers
Drew Leske

For one of our projects (ZooDB) we recently ran into a problem when adding new Python libraries to our dependencies, as several of the libraries required build tools during installation. These tools were not available in the application’s base image nor were they installed as part of the build, so they had to be added.
Alpine Linux
Our application image for this project is currently based on Alpine Linux, which describes itself as “a security-oriented, lightweight Linux distribution”–sounds like a good fit for a containerized app, and it’s a popular choice. Like riding a fixed-gear bike, though, not having all the niceties you’re used to will mean having to adjust how you operate.
First off, there’s less in there to begin with. It’s more likely that a basic tool or library will not be present. “Lightweight” is a typical goal for any container image, though, so this isn’t limited to Alpine, and it’ll likely be an adjustment for anybody moving from a standard OS distribution to the same OS’s standard container offering. For example, the Debian container image also doesn’t provide Git, GCC or Make.
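This is easy to check for yourself, assuming Docker is installed; the loop below is just a quick probe, and with the stock debian image it will report all three tools missing:
$ docker run --rm debian sh -c 'for t in git gcc make; do command -v "$t" || echo "$t: not found"; done'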
The trickier part at first is that it’s less likely you’re using Alpine in your development environment. In our team we use either Mac or Windows laptops and VMs running Ubuntu, so there’s a different package system (apk) which must be learned and a different place to look for missing packages. (Ubuntu also has a handy suggestion feature where, if you try to invoke a utility that isn’t installed, it can suggest what you might need.)
The main things I’ve found essential in building container images based on Alpine Linux:
- apk update creates/updates the package indices; without it you won’t be able to install anything
- apk upgrade updates the current install, and we all know to do that as the first step in any newly built machine, VM or container, right?
- apk add installs packages
- pkgs.alpinelinux.org is invaluable for finding packages by name or contents
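Put together, that’s the sequence you’d typically run in an interactive shell (the package name here is just an example):
$ apk update
$ apk upgrade
$ apk add curl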
Other options
For a Python-based project, the Docker Official Image for Python is a good choice. It’s based on Debian and provides what a Python project needs, including the build tools for installing non-binary dependencies (ones with source components that must be built).
So back to the problem
Bhavy installed some additional Python packages whose dependencies required build tools, which were not available in the base image. Builds wound up here:
× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> [3 lines of output]
<string>:86: DeprecationWarning: The 'warn' function is deprecated, use 'warning' instead
WARNING:root:Failed to get options via gdal-config: [Errno 2] No such file or directory: 'gdal-config'
CRITICAL:root:A GDAL API version must be specified. Provide a path to gdal-config using a GDAL_CONFIG environment variable or use a GDAL_VERSION environment variable.
[end of output]
We can ignore the warning for now, especially since it’s in a dependency. The problem is the missing gdal-config.
Bhavy did the regular thing, which was to search the web, and that helped a bit but didn’t get him all the way home. He mentioned it to me and I said I’d take a look, knowing from experience that he could spend a lot of time flailing at libraries and missing tools. Also, and this is huge: fixing this sort of thing by updating a Dockerfile and rebuilding is the really long and sad way to resolve this problem, but that’s what Bhavy had in his toolbox at this point. He’d commented out the other jobs in the CI pipeline, so updates weren’t going through the entire CI/CD pipeline, but it’s still slower. (Instead of commenting out jobs, we could update our CI config such that changes to the Dockerfile only trigger the container builds, which would make sense, but we haven’t had a need for that before.)
Interactive use of Docker
The first step was to build the container image locally, to the point where it’s known to work. The part that fails is the installation of the Python requirements, so I temporarily reverted the updates to the requirements.txt file and built the image, which happened without any trouble. Let’s call it zoodb.
(I had also built the image using a different base, python:3.11, and that mostly went off without a hitch, but that image is 3x the size. Also I’m kind of stubborn sometimes.)
The container images we’re using here provide a shell (not all container images include one, and for security and space it’s good to avoid it if you can, at least in production), so it’s possible to connect to a running container or start a new container running the shell. I want the latter:
$ docker run -it zoodb:alpine /bin/sh
This ran into problems because the Dockerfile defines an ENTRYPOINT, which is not automatically overridden by the given command (contrast with CMD). I had to do the following after a brief web search of my own:
$ docker run -it --entrypoint="" zoodb:alpine /bin/sh
This means “instantiate the zoodb:alpine image with an interactive terminal, ignore the configured entrypoint, and run the shell”. Unlike bigger, beefier OS images, Alpine provides a basic shell (from BusyBox) which is a bit more fundamental than Bash or Zsh, so if you try to run this with /bin/bash as the command, it’ll fail.
With the above, though, I get an interactive shell into an instantiation of the image I’m trying to build. Right off the bat, I run into an issue when I try to install new software: the configured user does not have permission. We design our images such that they do not need to run as a privileged user, but for development this can be overridden, as follows:
$ docker run -it --user=root --entrypoint="" zoodb:alpine /bin/sh
Now, once in the container, I can install packages.
Fixing the problem
This is basically an iterative process: try to install the dependencies, and fix whatever problems come up. The problems were:
- could not find some library’s config. The first error message was symptomatic of this problem. Development packages often have a script mylib-config (for some library mylib) that provides information about the package and its libraries. I guessed, based on experience, that the package I needed was gdal-dev, so I installed the package, re-ran pip install -r requirements.txt, and got to the next problem.
- a failure on a missing library. This will fail on a linking step when building from source and will look like Could not find libwhatever.so. In obvious cases, or ones where you’ve seen the error before, you may know to install whatever-dev.
- a failure on a missing header. This will fail on a compilation step, and will look like Could not find whatever.h (maybe). Sometimes the header file won’t be as obvious, in which case you can search for the error message or use the distribution’s package tools to find what package provides that file.
- a missing tool. One of the dependencies required g++, which was not available, so I installed that.
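In shell terms, inside the root shell in the container, the loop looked roughly like this. The two packages shown (gdal-dev and g++) are the ones named above; the rest is a sketch, and your missing packages will vary:
$ apk add gdal-dev
$ pip install -r requirements.txt    # fails further along: g++ not found
$ apk add g++
$ pip install -r requirements.txt    # repeat until it completes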
It was an exercise in incremental improvements. In the end, I had everything installed, and was able to start up the application, still within the running container’s shell.
As I made these increments, I added the packages to the Dockerfile so I did not lose track, and once finished, I rebuilt from scratch to ensure the solution worked. It’s a good idea to leave the container running in case you forgot to add something to the specification, so you can look back at your command history. I might have had to do that.
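Rebuilding from scratch is a matter of bypassing the build cache; something like this, using the tag from earlier:
$ docker build --no-cache -t zoodb:alpine .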
Tightening up the image
At that point, I had an image that had everything it needed to run the application, but it also included a C++ compiler and the development libraries required to build the Python package we now had in our image. We need the Python package; we may or may not need the libraries; we certainly don’t need the compiler.
We can remove unneeded packages using apk del. This was also an interactive, incremental process: remove something, run the application; if it works, great; if not, add it back in.
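For example (removing g++ is real enough, but the second line is a hypothetical stand-in for however you exercise your own application):
$ apk del g++
$ python app.py    # hypothetical smoke test: if the app still runs, g++ wasn’t needed at runtime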
At the end, however, the image was still 1.33 GB, whereas the one built from the Python base image was only a bit bigger at 1.44 GB, so I was thinking something must be up.
Image layers and build commands
Docker images are built of layers, and every significant line in a Dockerfile results in a new layer. If you add something in one line and remove it in a subsequent line, it won’t be in the latter layer, but it’ll still be in the earlier one: obscured and basically unavailable in the image, but still there, taking up space. The way around this is to install the build dependencies, install the stuff you want, and then remove the build dependencies, all in the same command, so only the stuff you want makes it into the layer.
Previously, we had something like this:
RUN apk update && apk upgrade
RUN apk add dep1 dep2 dep3
RUN pip install -r requirements.txt
RUN apk del dep1 dep2 dep3
This worked, but it left all the cruft somewhere in the image layers. We actually needed:
RUN apk update && apk upgrade
RUN apk add dep1 dep2 dep3 \
    && pip install -r requirements.txt \
    && apk del dep1 dep2 dep3
And this is how I got the image size down to 485 MB.
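If you want to see where the space goes, Docker can report per-layer and per-image sizes:
$ docker history zoodb:alpine    # size contributed by each layer
$ docker images                  # total size of each image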
Virtual packages
I’ve glossed over an improvement that was already present in the Dockerfile, but I left it out so the above explanation is less cluttered. The apk add command can take a --virtual parameter which defines a “virtual package” consisting of the specified packages. It’s then simple to remove the virtual package and, with it, its packages. This is easier to maintain because the packages are only listed explicitly once. So the above looks more like this:
RUN apk update && apk upgrade
RUN apk add need-this-later need-this-too
RUN apk add --virtual .build-deps dep1 dep2 dep3 \
    && pip install -r requirements.txt \
    && apk del .build-deps
Tidy, no? Now the dependencies are only listed once, so if I need to add another one, I don’t have to remember to add it to the del line as well. Some libraries are needed by the Python code at runtime, so I’ve left those outside the virtual package and don’t delete them.
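Putting it all together, the relevant part of the Dockerfile might look something like this. The base image and package names are illustrative rather than our actual configuration; gdal and gdal-dev are the Alpine packages for the GDAL example above:
FROM python:3.11-alpine

COPY requirements.txt .

# Runtime libraries the application needs; these stay in the image.
RUN apk update && apk upgrade \
    && apk add gdal

# Build-time dependencies go into a virtual package and are removed in
# the same RUN, so they never persist in any layer.
RUN apk add --virtual .build-deps gdal-dev g++ musl-dev \
    && pip install -r requirements.txt \
    && apk del .build-deps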
References
- Dockerfile reference
- Alpine Linux
- StackOverflow question on virtual packages in Alpine Linux
- BusyBox