the dataset it was created from (the snapshots of its parent layers). Read
operations are fast, even if the data being read is from a deep layer.
This diagram illustrates how block sharing works:

![ZFS block sharing](images/zpool_blocks.webp?w=450)


### Writing files

**Writing a new file**: space is allocated on demand from the underlying `zpool`
and the blocks are written directly into the container's writable layer.

**Modifying an existing file**: space is allocated only for the changed blocks,
and those blocks are written into the container's writable layer using a
copy-on-write (CoW) strategy. This minimizes the size of the layer and increases
write performance.

**Deleting a file or directory**:
  - When you delete a file or directory that exists in a lower layer, the ZFS
    driver masks the existence of the file or directory in the container's
    writable layer, even though the file or directory still exists in the lower
    read-only layers.
  - If you create and then delete a file or directory within the container's
    writable layer, the blocks are reclaimed by the `zpool`.
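The write and delete semantics above can be sketched as a toy model. This is an illustrative simulation only, assuming nothing about the real driver's internals; the `Layer` class and `WHITEOUT` sentinel are invented names, not part of ZFS or the Docker storage driver API.

```python
# Toy model of copy-on-write layering: lower layers are read-only, and all
# changes (new blocks, modified blocks, deletion masks) land in the top layer.

WHITEOUT = object()  # masks a file that still exists in a lower layer

class Layer:
    def __init__(self, parent=None):
        self.parent = parent  # lower, read-only layer (or None)
        self.files = {}       # path -> content stored in this layer

    def read(self, path):
        layer = self
        while layer is not None:
            if path in layer.files:
                content = layer.files[path]
                return None if content is WHITEOUT else content
            layer = layer.parent  # fall through to the next lower layer
        return None

    def write(self, path, content):
        # New or changed data lands only in this writable layer;
        # lower layers are never modified (copy-on-write).
        self.files[path] = content

    def delete(self, path):
        if self.files.get(path) not in (None, WHITEOUT):
            del self.files[path]  # blocks created here are reclaimed
        if self.parent is not None and self.parent.read(path) is not None:
            self.files[path] = WHITEOUT  # mask the lower-layer copy

image = Layer()                     # read-only image layer
image.files["/etc/conf"] = "v1"
container = Layer(parent=image)     # container's writable layer

container.write("/tmp/scratch", "data")  # new file: writable layer only
container.write("/etc/conf", "v2")       # modify: changed data written on top
container.delete("/etc/conf")            # delete: mask the lower-layer file

assert container.read("/tmp/scratch") == "data"
assert container.read("/etc/conf") is None  # masked in the writable layer
assert image.read("/etc/conf") == "v1"      # lower layer is untouched

container.write("/tmp/x", "y")
container.delete("/tmp/x")
assert "/tmp/x" not in container.files  # created-then-deleted: reclaimed
```

Note how deleting `/etc/conf` leaves the image layer's copy intact; only the container's view changes, which is exactly why image layers can be shared safely between containers.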


## ZFS and Docker performance

There are several factors that influence the performance of Docker using the
`zfs` storage driver.

- **Memory**: Memory has a major impact on ZFS performance. ZFS was originally
  designed for enterprise-grade servers with large amounts of memory.

- **ZFS Features**: ZFS includes a de-duplication feature. Using this feature
  may save disk space, but uses a large amount of memory. It is recommended that
  you disable this feature for the `zpool` you are using with Docker, unless you
  are using SAN, NAS, or other hardware RAID technologies.

- **ZFS Caching**: ZFS caches disk blocks in a memory structure called the
  adaptive replacement cache (ARC). The *Single Copy ARC* feature of ZFS allows
  a single cached copy of a block to be shared by multiple clones of a dataset.
  With this feature, multiple running containers can share a single copy of a
  cached block. This feature makes ZFS a good option for PaaS and other
  high-density use cases.

- **Fragmentation**: Fragmentation is a natural byproduct of copy-on-write
  filesystems like ZFS. ZFS mitigates this by using a small block size of 128k.
  The ZFS intent log (ZIL) and the coalescing of writes (delayed writes) also
  help to reduce fragmentation. You can monitor fragmentation with
  `zpool list` (the `FRAG` column). However, there is no way to defragment ZFS
  without reformatting and restoring the filesystem.

- **Use the native ZFS driver for Linux**: The ZFS FUSE implementation is not
  recommended, due to poor performance.

### Performance best practices

- **Use fast storage**: Solid-state drives (SSDs) provide faster reads and
  writes than spinning disks.

- **Use volumes for write-heavy workloads**: Volumes provide the best and most
  predictable performance for write-heavy workloads. This is because they bypass
  the storage driver and do not incur any of the potential overheads introduced
  by thin provisioning and copy-on-write. Volumes have other benefits, such as
  allowing you to share data among containers and persisting even when no
  running container is using them.
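The volume recommendation can be made concrete with a toy comparison of the two write paths. This is only a sketch under stated assumptions: `CowLayerStore` and `VolumeStore` are invented names, and the allocation counters merely tally work that in real ZFS corresponds to block allocation and metadata updates.

```python
# Toy comparison: writes through a copy-on-write layer allocate new blocks
# on every rewrite, while a volume (bypassing the storage driver) can
# rewrite in place after the first allocation.

class CowLayerStore:
    """Models the storage-driver path: every write allocates fresh blocks."""
    def __init__(self):
        self.allocations = 0
        self.blocks = {}

    def write(self, block_id, data):
        self.allocations += 1          # new block allocated per write (CoW)
        self.blocks[block_id] = data

class VolumeStore:
    """Models the volume path: in-place rewrites, no CoW overhead."""
    def __init__(self):
        self.allocations = 0
        self.blocks = {}

    def write(self, block_id, data):
        if block_id not in self.blocks:
            self.allocations += 1      # allocate only on first write
        self.blocks[block_id] = data

cow, vol = CowLayerStore(), VolumeStore()
for i in range(100):                   # a write-heavy workload: 100 rewrites
    cow.write("blk-0", f"v{i}")
    vol.write("blk-0", f"v{i}")

assert cow.allocations == 100  # every rewrite pays the allocation cost
assert vol.allocations == 1    # in-place rewrites after the first
```

The gap between the two counters is the kind of overhead the paragraph above refers to, which is why write-heavy data belongs on a volume rather than in the container's writable layer.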
