Shrinking the Size of our Teamscale Docker Image
Over the last few years, Docker has been widely adopted to ease the deployment of applications. One of its key benefits is bundling all required dependencies of an application into a single image that can be used right away, without significant installation and configuration overhead. Hence, we use Docker for running our own Teamscale instances, as well as for Teamscale instances at customer sites.
Docker containers are also handy for testing and reviewing new features before they are merged back into master. Especially on a local developer machine, a small Docker image is beneficial, because you don't want to waste your SSD space on gigabytes of Docker image data.
Over time, the Teamscale Docker image grew quite large, recently hitting the 1 GB mark. This prompted me to inspect the image in order to find a way to reduce its size. After some (surprisingly) minor adjustments, the size of the resulting image was cut by more than half.
Reducing the size of Teamscale's Docker image
Before going into further detail on how I reduced the size of the image, let me explain the layering of the Teamscale Docker image: Ubuntu is used as the base image, and Java is installed as the application runtime environment. As Teamscale integrates the results of several third-party checkers, Mono has to be installed as well, along with TSLint and ESLint via NPM. So a somewhat larger base image is to be expected.
Finally, the Teamscale distribution is added to the Docker image. The distribution zip file alone is around 300 MB, so this adds several hundred megabytes on top.
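To make this layering more concrete, here is a minimal sketch of what such a Dockerfile could look like. The package names, paths, and base image tag are assumptions for illustration, not taken from the actual Teamscale Dockerfile:

```dockerfile
# Illustrative sketch of the layering described above (names are assumptions)
FROM ubuntu:18.04

# Java runtime plus third-party analysis tools (Mono, NodeJS/NPM for the linters)
RUN apt-get update && \
    apt-get install -y openjdk-8-jdk mono-complete nodejs npm && \
    npm install -g eslint tslint && \
    rm -rf /var/lib/apt/lists/*

# Finally, the ~300 MB Teamscale distribution itself
COPY teamscale/ /opt/teamscale/
```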
Something looks dodgy
I was already aware that our image is about 1 GB in size. On a regular pull, however, I expected the amount of downloaded data to be roughly the size of the distribution, as the base layers (Ubuntu, Java, …) usually remain stable. But my console showed something different:
It looks like a large chunk of data is downloaded twice. As this is the last layer, these must be the actual Teamscale distribution files. Inspecting the Dockerfile revealed that the whole `/opt/teamscale` directory is `chown`'ed. While this is a safe operation in a traditional deployment setup, Docker has to add a new image layer containing all the distribution files again with correct ownership information.
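In Dockerfile terms, the pattern looked roughly like this (a simplified sketch; the user and group names are illustrative):

```dockerfile
# The COPY produces one ~300 MB layer; the chown then produces a second layer
# containing the very same files again, just with different ownership.
COPY teamscale/ /opt/teamscale/
RUN chown -R teamscale:teamscale /opt/teamscale
```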
It was easy to get rid of the `chown` call by adjusting the ownership information before the actual Docker build, saving us instantly over 300 MB on the final image. (Remark: since Docker version 17.09 it is also possible to change file ownership during `ADD`/`COPY`.)
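With such a Docker version, the fix boils down to a single instruction (again a sketch; the user and group names are illustrative):

```dockerfile
# Ownership is set while the files are copied, so no extra layer is created.
COPY --chown=teamscale:teamscale teamscale/ /opt/teamscale/
```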
Don’t stop me now
The resulting 670 MB is a good start, but there were still some megabytes to shed:
The first one was obvious: the full-blown OpenJDK is used as the Java runtime, which comes with several development and GUI tools. Replacing it with the headless JRE saved almost 100 MB right away.
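A minimal sketch of this swap, assuming Ubuntu's OpenJDK 8 packaging (the exact package name depends on the Java version in use):

```dockerfile
# Install only the headless JRE instead of the full JDK with its GUI and dev tools.
RUN apt-get update && \
    apt-get install -y --no-install-recommends openjdk-8-jre-headless && \
    rm -rf /var/lib/apt/lists/*
```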
Second, running StyleCop and reading information from .NET PDB files requires Mono, which had been installed via the `mono-complete` package. That package pulls in a ton of dependencies, e.g. a complete MonoDevelop. Being more specific and installing only the required packages (i.e. `mono-runtime`) saved several more megabytes.
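Sketched as a Dockerfile instruction (assuming Ubuntu's Mono packaging):

```dockerfile
# Pull in only the Mono runtime instead of the all-inclusive mono-complete package.
RUN apt-get update && \
    apt-get install -y --no-install-recommends mono-runtime && \
    rm -rf /var/lib/apt/lists/*
```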
Last but not least, ESLint and TSLint require NodeJS as a runtime. Previously, NodeJS and NPM were installed, and then ESLint and TSLint were installed via NPM. While this approach works perfectly well, NPM is not needed at runtime, so I removed the NPM package after installing the linters.
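A sketch of this install-then-remove step; the package setup in the real Dockerfile may differ:

```dockerfile
# NPM is only needed to install the linters, so it is removed again afterwards.
# NodeJS and the globally installed linters remain in the image.
RUN apt-get update && \
    apt-get install -y --no-install-recommends nodejs npm && \
    npm install -g eslint tslint && \
    apt-get purge -y npm && \
    apt-get autoremove -y && \
    rm -rf /var/lib/apt/lists/*
```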
The final result
All in all, the final image size has been reduced from 1 GB down to 480 MB.
As always, there is still some room for further improvements:
- The base image could be switched from Ubuntu to something even more lightweight.
- The StyleCop executor has several dependencies on GUI-related libraries, which could be removed. Alternatively, the runtime could be switched to .NET Core.
- The use of several external libraries within Teamscale could be reviewed.
However, none of these improvements is a quick win, and the gain might be negligible compared to the effort needed to realize them. For now, I am more than satisfied that the size was roughly cut in half, making pulls over mobile network connections significantly faster.