pg_dump <replaceable class="parameter">dbname</replaceable> | gzip > <replaceable class="parameter">filename</replaceable>.gz
</programlisting>
Reload with:
<programlisting>
gunzip -c <replaceable class="parameter">filename</replaceable>.gz | psql <replaceable class="parameter">dbname</replaceable>
</programlisting>
or:
<programlisting>
cat <replaceable class="parameter">filename</replaceable>.gz | gunzip | psql <replaceable class="parameter">dbname</replaceable>
</programlisting>
</para>
</formalpara>
<formalpara>
<title>Use <command>split</command>.</title>
<para>
The <command>split</command> command
allows you to split the output into smaller files that are
acceptable in size to the underlying file system. For example, to
make each chunk 2 gigabytes in size:
<programlisting>
pg_dump <replaceable class="parameter">dbname</replaceable> | split -b 2G - <replaceable class="parameter">filename</replaceable>
</programlisting>
Reload with:
<programlisting>
cat <replaceable class="parameter">filename</replaceable>* | psql <replaceable class="parameter">dbname</replaceable>
</programlisting>
If you are using GNU <application>split</application>, you can
combine it with <application>gzip</application>, for example:
<programlisting>
pg_dump <replaceable class="parameter">dbname</replaceable> | split -b 2G --filter='gzip > $FILE.gz' - <replaceable class="parameter">filename</replaceable>
</programlisting>
It can be restored using <command>zcat</command>, for example:
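<programlisting>
zcat <replaceable class="parameter">filename</replaceable>*.gz | psql <replaceable class="parameter">dbname</replaceable>
</programlisting>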
</para>
</formalpara>
<formalpara>
<title>Use <application>pg_dump</application>'s custom dump format.</title>
<para>
If <productname>PostgreSQL</productname> was built on a system with the
<application>zlib</application> compression library installed, the custom dump
format will compress data as it writes it to the output file. This will
produce dump file sizes similar to using <command>gzip</command>, but it
has the added advantage that tables can be restored selectively. The
following command dumps a database using the custom dump format:
<programlisting>
pg_dump -Fc <replaceable class="parameter">dbname</replaceable> > <replaceable class="parameter">filename</replaceable>
</programlisting>
A custom-format dump is not a script for <application>psql</application>, but
instead must be restored with <application>pg_restore</application>, for example:
<programlisting>
pg_restore -d <replaceable class="parameter">dbname</replaceable> <replaceable class="parameter">filename</replaceable>
</programlisting>
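Because the custom format retains a table of contents, individual tables
can also be restored selectively; for example, to restore just a single
table (the table name <literal>mytable</literal> here is only illustrative):
<programlisting>
pg_restore -d <replaceable class="parameter">dbname</replaceable> -t mytable <replaceable class="parameter">filename</replaceable>
</programlisting>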
See the <xref linkend="app-pgdump"/> and <xref
linkend="app-pgrestore"/> reference pages for details.
</para>
</formalpara>
<para>
For very large databases, you might need to combine <command>split</command>
with one of the other two approaches.
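For example, a custom-format dump can be fed through
<command>split</command> and later reassembled on standard input, which
<application>pg_restore</application> reads when no input file is given.
This is only a sketch; note that parallel restore is not possible from a
pipe:
<programlisting>
pg_dump -Fc <replaceable class="parameter">dbname</replaceable> | split -b 2G - <replaceable class="parameter">filename</replaceable>
cat <replaceable class="parameter">filename</replaceable>* | pg_restore -d <replaceable class="parameter">dbname</replaceable>
</programlisting>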
</para>
<formalpara>
<title>Use <application>pg_dump</application>'s parallel dump feature.</title>
<para>
To speed up the dump of a large database, you can use
<application>pg_dump</application>'s parallel mode. This will dump
multiple tables at the same time. You can control the degree of
parallelism with the <command>-j</command> parameter. Parallel dumps
are only supported for the "directory" archive format.
<programlisting>
pg_dump -j <replaceable class="parameter">num</replaceable> -F d -f <replaceable class="parameter">out.dir</replaceable> <replaceable class="parameter">dbname</replaceable>
</programlisting>
You can use <command>pg_restore -j</command> to restore a dump in parallel.
This will work for any archive in either the "custom" or the "directory"
archive format, whether or not it was created with <command>pg_dump -j</command>.
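For example, to restore the directory-format dump created above:
<programlisting>
pg_restore -j <replaceable class="parameter">num</replaceable> -d <replaceable class="parameter">dbname</replaceable> <replaceable class="parameter">out.dir</replaceable>
</programlisting>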
</para>
</formalpara>
</sect2>
</sect1>
<sect1 id="backup-file">
<title>File System Level Backup</title>
<para>
An alternative backup strategy is to directly copy the files that
<productname>PostgreSQL</productname> uses to store