The diagram below shows how this is put together with a running container based
on a two-layer image.

When you start a container, the following steps happen in order:
1. The base layer of the image exists on the Docker host as a ZFS filesystem.
2. Each additional image layer is a clone of the dataset hosting the image
layer directly below it.
In the diagram, "Layer 1" is added by taking a ZFS snapshot of the base
layer and then creating a clone from that snapshot. The clone is writable and
consumes space on-demand from the zpool. The snapshot is read-only,
maintaining the base layer as an immutable object.
3. When the container is launched, a writable layer is added above the image.
In the diagram, the container's read-write layer is created by making
a snapshot of the top layer of the image (Layer 1) and creating a clone from
that snapshot.
4. As the container modifies the contents of its writable layer, space is
allocated only for the blocks that change. By default, these blocks are
128 KB in size.
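The steps above can be sketched as a small model of the snapshot/clone chain. This is a conceptual Python sketch, not real ZFS tooling; the dataset names (`zpool-docker/...`) are illustrative assumptions.

```python
# Model of the chain ZFS builds when a container starts on a two-layer image:
# base filesystem -> snapshot -> clone (Layer 1) -> snapshot -> clone (RW layer).

class Dataset:
    """A writable ZFS dataset (a filesystem or a clone)."""
    def __init__(self, name, origin=None):
        self.name = name
        self.origin = origin      # the snapshot this clone was created from
        self.blocks = {}          # blocks allocated locally, on demand

    def snapshot(self, snap_name):
        # A snapshot is read-only and pins the dataset's current state.
        return Snapshot(f"{self.name}@{snap_name}", self)

class Snapshot:
    """A read-only, immutable point-in-time view of a dataset."""
    def __init__(self, name, source):
        self.name = name
        self.source = source

    def clone(self, clone_name):
        # A clone is writable and initially shares all blocks with its origin.
        return Dataset(clone_name, origin=self)

# 1. The base layer exists on the host as a ZFS filesystem.
base = Dataset("zpool-docker/base-layer")
# 2. Layer 1 is a clone of a snapshot of the base layer.
layer1 = base.snapshot("s1").clone("zpool-docker/layer-1")
# 3. The container's writable layer is a clone of a snapshot of Layer 1.
container = layer1.snapshot("s2").clone("zpool-docker/container-rw")

# Walk the chain from the writable layer down to the base layer.
chain = []
d = container
while d is not None:
    chain.append(d.name)
    d = d.origin.source if d.origin else None
print(chain)
```

Walking the `origin` pointers from the container's writable layer reaches every image layer beneath it, which is exactly the structure the read path traverses.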
## How container reads and writes work with `zfs`
### Reading files
Each container's writable layer is a ZFS clone that shares all of its data with
the snapshot it was created from (and, through it, with the snapshots of its
parent layers). Read operations are fast, even if the data being read is from a
deep layer, because no blocks need to be copied.
This diagram illustrates how block sharing works:

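As a rough model of that block sharing, a read checks the container's writable layer first and then falls through each layer below it. This is an illustrative Python sketch; the layer contents are invented for the example.

```python
# Each layer maps block number -> data; the writable layer is searched first,
# then each shared (snapshot-backed) layer beneath it.
container_rw = {}                               # nothing written yet
layer1       = {7: "layer-1 data"}              # block 7 rewritten by Layer 1
base_layer   = {0: "base data", 7: "old data"}  # original image blocks

def read_block(blockno, layers):
    """Return the first copy of blockno found, searching top-down."""
    for layer in layers:
        if blockno in layer:
            return layer[blockno]
    raise FileNotFoundError(blockno)

chain = [container_rw, layer1, base_layer]
print(read_block(0, chain))   # falls through to the base layer
print(read_block(7, chain))   # served from Layer 1; the base copy is shadowed
```

A block that no upper layer has rewritten is read directly from the shared lower-layer blocks, which is why reads from deep layers stay cheap.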
### Writing files
**Writing a new file**: space is allocated on demand from the underlying `zpool`
and the blocks are written directly into the container's writable layer.
**Modifying an existing file**: space is allocated only for the changed blocks,
and those blocks are written into the container's writable layer using a
copy-on-write (CoW) strategy. This minimizes the size of the layer and increases
write performance.
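The block-granular copy-on-write can be sketched as follows. This is a conceptual Python model, assuming the default 128 KB block size; it is not how ZFS is implemented internally.

```python
# Modifying a file allocates space in the writable layer only for the
# blocks that change, not for the whole file.
BLOCK = 128 * 1024   # default ZFS block size

# Three blocks shared from the image layers; the writable layer starts empty.
shared = {0: b"a" * BLOCK, 1: b"b" * BLOCK, 2: b"c" * BLOCK}
writable = {}

def write(offset, data):
    """Copy-on-write an update that fits within a single block."""
    blockno = offset // BLOCK
    # Copy the shared block into the writable layer on first modification.
    block = bytearray(writable.get(blockno, shared[blockno]))
    start = offset % BLOCK
    block[start:start + len(data)] = data
    writable[blockno] = bytes(block)

write(BLOCK + 10, b"changed")   # a small edit inside block 1
print(sorted(writable))         # only block 1 is allocated in the writable layer
```

Even though the "file" spans three blocks, the small edit allocates exactly one 128 KB block in the container's writable layer; blocks 0 and 2 remain shared.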
**Deleting a file or directory**:
- When you delete a file or directory that exists in a lower layer, the ZFS
driver masks the existence of the file or directory in the container's
writable layer, even though the file or directory still exists in the lower