```
[signal SIGSEGV: segmentation violation code=0x1 addr=0x18 pc=0x147ff00]

goroutine 693 [running]:
github.com/docker/docker/vendor/github.com/moby/buildkit/cache.computeBlobChain.func4.1({0x245cca8, 0x4001394960})
        /go/src/github.com/docker/docker/vendor/github.com/moby/buildkit/cache/blobs.go:206 +0xc90
github.com/docker/docker/vendor/github.com/moby/buildkit/util/flightcontrol.(*call).run(0x40013c2240)
        /go/src/github.com/docker/docker/vendor/github.com/moby/buildkit/util/flightcontrol/flightcontrol.go:121 +0x64
sync.(*Once).doSlow(0x0?, 0x4001328240?)
        /usr/local/go/src/sync/once.go:74 +0x100
sync.(*Once).Do(0x4001328240?, 0x0?)
        /usr/local/go/src/sync/once.go:65 +0x24
created by github.com/docker/docker/vendor/github.com/moby/buildkit/util/flightcontrol.(*call).wait
```

After such a crash, the daemon will restart if it is configured to do so (for example, via systemd). The only available mitigation in this release is to avoid performing builds with the inline cache feature enabled.
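For reference, the inline cache feature is typically enabled by passing the `BUILDKIT_INLINE_CACHE` build argument, so builds invoked along these lines (the image name here is illustrative) are the ones to avoid on this release:

```
docker build --build-arg BUILDKIT_INLINE_CACHE=1 --tag myimage:latest .
```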

#### BuildKit with warm cache ([tracking issue](https://github.com/moby/moby/issues/44943))

If an image was built with BuildKit on a previous version of the daemon and is then rebuilt with a 23.0 daemon, previously cached layers will not be restored correctly. The image may appear to build correctly if no lines are changed in the Dockerfile; however, if partial cache invalidation occurs due to changing some lines in the Dockerfile, the still-valid, previously cached layers will not be loaded correctly.

This most often presents as files that should be present in the image being missing from a `RUN` step, or any other step that references files, after changing some lines in the Dockerfile:

```
[+] Building 0.4s (6/6) FINISHED
 => [internal] load build definition from Dockerfile
 => => transferring dockerfile: 102B
 => [internal] load .dockerignore
 => => transferring context: 2B
 => [internal] load metadata for docker.io/library/node:18-alpine
 => [base 1/2] FROM docker.io/library/node:18-alpine@sha256:bc329c7332cffc30c2d4801e38df03cbfa8dcbae2a7a52a449db104794f168a3
 => CACHED [base 2/2] WORKDIR /app
 => ERROR [stage-1 1/1] RUN uname -a
------
 > [stage-1 1/1] RUN uname -a:
#0 0.138 runc run failed: unable to start container process: exec: "/bin/sh": stat /bin/sh: no such file or directory
------
Dockerfile:5
--------------------
   3 |
   4 |     FROM base
   5 | >>> RUN uname -a
   6 |
--------------------
ERROR: failed to solve: process "/bin/sh -c uname -a" did not complete successfully: exit code: 1
```

To mitigate this, the previous build cache must be discarded. `docker builder prune -a` completely empties the build cache, allowing the affected builds to proceed again by removing the mishandled cache layers.
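For example, the following invocation clears the entire build cache non-interactively; `--all` (`-a`) removes all build cache rather than just dangling entries, and `--force` skips the confirmation prompt:

```
docker builder prune --all --force
```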
