Docker Multi-Arch Images, manifests & mquery

Putting aside the oft-quoted figures about growth and the number of images downloaded, 2017 was quite a year for Docker. It culminated in the DockerCon Europe announcement in Copenhagen that the platform would natively support Kubernetes, and we are now starting to see the fruits of this shift in the Docker support for Kubernetes Beta. Prior to that, however, a couple of 'multi-centric' features were released that started to lower the already negligible barrier to entry and transform how people use Docker.

Multi-Stage Builds

This was the earlier of the two features of interest, emerging during March ahead of general availability in June. Without covering it in any great detail here: this change replaced the earlier 'Builder Pattern', which existed to facilitate the production of smaller images for particular types of application. For a first-class discussion of how multi-stage builds superseded the builder pattern, head over to Alex Ellis' blog.
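To give a flavour of the feature, a minimal multi-stage Dockerfile for a Go application might look like the following. This is an illustrative sketch (the image names and paths are assumptions, not taken from Alex's post): the binary is compiled in the full golang image, then only the resulting artefact is copied into a small final image.

```dockerfile
# Build stage: compile inside the full golang image
FROM golang:1.8.5 as build
WORKDIR /go/src/app
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -o app .

# Final stage: only the static binary is carried forward,
# so the shipped image is a fraction of the size
FROM alpine:3.7
COPY --from=build /go/src/app/app /usr/bin/app
CMD ["app"]
```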

Multi-Architecture Images

During September, Docker announced multi-platform support for their official images, and as someone who runs Docker on ARM32v6, ARM32v7 and x86_64, this news was of particular interest.


Quite simply, it removes complexity. In the pre-multi-arch world, multiple Docker Hub repos would typically need to be visited, namespaces determined, or tags forensically interrogated in order to create bespoke Dockerfiles for each platform targeted by the application. For example, consider a Go application being developed to run on each of the hardware platforms described earlier; three separate Dockerfiles could be required:

  • Dockerfile
FROM golang:1.8.5  
  • Dockerfile.arm32v6
FROM arm32v6/golang:1.8.5  
  • Dockerfile.arm32v7
FROM arm32v7/golang:1.8.5  

The remainder of the Dockerfiles' content would likely remain the same, and users would need to invoke the build command against the Dockerfile relevant to their environment.

  • Dockerfile: docker build .
  • Dockerfile.arm32v6: docker build . -f Dockerfile.arm32v6
  • Dockerfile.arm32v7: docker build . -f Dockerfile.arm32v7

Note the -f option which needs to be supplied if a non-standard Dockerfile name is used.

With the multi-arch approach, the responsibility for finding the appropriate base image is delegated to the Docker client, which greatly improves the UX. Revisiting the earlier example, the three Dockerfiles can be condensed into one:

  • Dockerfile
FROM golang:1.8.5  

Consequently the build command is the same regardless of the user's environment: simply docker build .


Something called a manifest list makes this possible. In (much) earlier versions of Docker the client/server conversation resulting from an invocation of docker pull / docker run would be very brief. Latterly an intermediate step has been introduced to the conversation, during which the client can, where available, receive a list of supported platforms for the requested image. The client subsequently requests an image appropriate for the hardware platform upon which it's running.
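To illustrate, an abridged manifest list, following the Image Manifest V2 Schema 2 media types, looks something like this (digests and sizes elided; the structure is per the registry specification, not taken from a specific image):

```json
{
  "schemaVersion": 2,
  "mediaType": "application/vnd.docker.distribution.manifest.list.v2+json",
  "manifests": [
    {
      "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
      "digest": "sha256:…",
      "platform": { "architecture": "amd64", "os": "linux" }
    },
    {
      "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
      "digest": "sha256:…",
      "platform": { "architecture": "arm", "os": "linux", "variant": "v7" }
    }
  ]
}
```

Each entry points at an ordinary platform-specific image manifest; the client picks the one whose platform block matches its own.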

In order to get a taste of how these manifest lists look, try exploring the work that Docker Captain Phil Estes has done in this area and in particular his mquery tool. As you might expect this has also been made available as a docker image:

$ docker run mplatform/mquery <imageName>
  Image: <imageName>
   * Manifest List: Yes
   * Supported platforms:
     - linux/amd64
     - linux/arm/v7
     - linux/386
     - linux/ppc64le
     - linux/s390x
     - windows/amd64:10.0.14393.2007
     - windows/amd64:10.0.16299.192

Practical Implications

Looking beyond the build phase: prior to multi-arch, with applications where stacks might be in use, standard practice was to assemble separate compose files for each supported hardware platform. For example, a docker-compose.yml for use on x86 with the relevant x86 images specified, and a docker-compose.armhf.yml which would target 32-bit ARM hardware, and so would detail the respective ARM images. With multi-arch images, the same stack can be deployed on each supported hardware platform from a single compose file; the client will invisibly select the relevant image during deployment.
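As a sketch, a single compose file for a hypothetical service need now only reference the multi-arch tag (the service name here is illustrative):

```yaml
version: "3"
services:
  app:
    # golang:1.8.5 is a multi-arch tag; the daemon pulls the
    # image matching the host platform automatically
    image: golang:1.8.5
```

The same file deploys unchanged on x86_64 and on ARM.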

Create your own Manifest List / Multi-Arch

At the time of writing, the CLI feature to create manifest lists is still experimental, so the edge channel version of Docker is required. With 18.02 installed, ensure that the following is added to the config.json file:

{
  "experimental": "enabled"
}
That will enable access to the command in the CLI:

$ docker manifest

Usage:  docker manifest COMMAND

Manage Docker image manifests and manifest lists

Commands:
  annotate    Add additional information to a local image manifest
  create      Create a local manifest list for annotating and pushing to a registry
  inspect     Display an image manifest, or manifest list
  push        Push a manifest list to a repository

Run 'docker manifest COMMAND --help' for more information on a command.  

Create would seem like the obvious place to start. The first argument specifies the name of the manifest - effectively the image name that will be used to access the hardware-specific images - and the subsequent arguments specify those images. Here a manifest containing two images is created: an x86 Linux image and an ARM32v7 equivalent.

$ docker manifest create rgee0/of-mquery:1.0 \
    rgee0/mquery-x86:1.0 \
    rgee0/mquery-arm32v7:1.0

Created manifest list

Once created, the manifest can be annotated to add additional information. It's this information that is extracted by the docker run mplatform/mquery command demonstrated above.

$ docker manifest annotate rgee0/of-mquery:1.0 \
    rgee0/mquery-x86:1.0 --os linux --arch amd64

$ docker manifest annotate rgee0/of-mquery:1.0 \
    rgee0/mquery-arm32v7:1.0 --os linux --arch arm --variant v7

The OS and arch combinations should align with those available from the golang environment options. Notice the --variant option supplied to the ARM annotation; as an example, this is useful in situations where distinct images are available for each of ARM32v6 & ARM32v7.

The final step, which should be familiar from working with images, is to push the manifest:

$ docker manifest push rgee0/of-mquery:1.0

To review the changes, the CLI makes an inspect option available, and of course the earlier mquery command could be provided with the manifest name chosen during create:

$ docker run mplatform/mquery rgee0/of-mquery:1.0
  Image: rgee0/of-mquery:1.0
   * Manifest List: Yes
   * Supported platforms:
     - linux/arm/v7
     - linux/amd64

Now, running docker pull rgee0/of-mquery:1.0 on a Mac or a Raspberry Pi 3 will see the relevant hardware-specific image pulled from Docker Hub.

mquery as a function

Why not try mquery as a function? By adapting Phil's code very slightly it's possible to make it compatible with the OpenFaaS framework and present mquery as a serverless function. OpenFaaS functions take input via stdin, so a small change to swap out os.Args, plus the addition of a helper function to tidy up stdin, prepares the program for use with OpenFaaS. Finally, the addition of a multi-stage Dockerfile produces a small function image ready for deployment. Find the OpenFaaS-ready code over on GitHub.

Since the images were built and pushed during the manifest creation section, those with access to an OpenFaaS instance can dive straight in to deploying the function. Those who don't can head over to the guides and work through the deployment guide for their preferred orchestrator.


One of the core tenets pervading OpenFaaS is that of choice, and deployment activity is no different. Here are two of the deployment options.


Grab the CLI and use deploy against a YAML file:

provider:
  name: faas

functions:
  mquery:
    lang: Dockerfile
    handler: ./mquery
    image: rgee0/of-mquery:1.0

Then run the deploy command:

$ faas-cli deploy -f ./mquery.yml

or, more succinctly, name the yaml file stack.yml and use the CLI alias:

$ faas deploy

The function can then be invoked via the CLI:

$ echo 'rgee0/of-mquery:1.0' | faas invoke mquery

  Manifest List: Yes 
  Supported platforms:
   - linux/arm/v7
   - linux/amd64

Function Store

By far the easiest way to deploy to OpenFaaS is via the OpenFaaS Function Store. The Function Store is a community-curated index of OpenFaaS functions, and deployment is as simple as clicking through to the desired function.
The function created through this article has been accepted into the OpenFaaS Function Store, which means it can be used in just two clicks:

Animated gif showing deploying from the OpenFaaS Store

Once initialised, the function can be invoked through the UI by adding an image name and tag (rgee0/of-mquery:1.0) to the request body and selecting Invoke.


Title Photo by Anthony DELANOIX on Unsplash
