database, you can use
<application>pg_dump</application>'s parallel mode. This will dump
multiple tables at the same time. You can control the degree of
parallelism with the <command>-j</command> parameter. Parallel dumps
are only supported for the "directory" archive format.
<programlisting>
pg_dump -j <replaceable class="parameter">num</replaceable> -F d -f <replaceable class="parameter">out.dir</replaceable> <replaceable class="parameter">dbname</replaceable>
</programlisting>
You can use <command>pg_restore -j</command> to restore a dump in parallel.
This will work for any archive in either the "custom" or the "directory"
archive format, whether or not it was created with <command>pg_dump -j</command>.
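For example, a parallel restore of the directory-format dump created above
into a target database could look like this:
<programlisting>
pg_restore -j <replaceable class="parameter">num</replaceable> -d <replaceable class="parameter">dbname</replaceable> <replaceable class="parameter">out.dir</replaceable>
</programlisting>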
</para>
</formalpara>
</sect2>
</sect1>
<sect1 id="backup-file">
<title>File System Level Backup</title>
<para>
An alternative backup strategy is to directly copy the files that
<productname>PostgreSQL</productname> uses to store the data in the database;
<xref linkend="creating-cluster"/> explains where these files
are located. You can use whatever method you prefer
for doing file system backups; for example:
<programlisting>
tar -cf backup.tar /usr/local/pgsql/data
</programlisting>
</para>
<para>
There are two restrictions, however, which make this method
impractical, or at least inferior to the <application>pg_dump</application>
method:
<orderedlist>
<listitem>
<para>
The database server <emphasis>must</emphasis> be shut down in order to
get a usable backup. Half-way measures such as disallowing all
connections will <emphasis>not</emphasis> work
(in part because <command>tar</command> and similar tools do not take
an atomic snapshot of the state of the file system,
but also because of internal buffering within the server).
Information about stopping the server can be found in
<xref linkend="server-shutdown"/>. Needless to say, you
also need to shut down the server before restoring the data;
a sketch of such an offline backup appears after this list.
</para>
</listitem>
<listitem>
<para>
If you have dug into the details of the file system layout of the
database, you might be tempted to try to back up or restore only certain
individual tables or databases from their respective files or
directories. This will <emphasis>not</emphasis> work because the
information contained in these files is not usable without
the commit log files,
<filename>pg_xact/*</filename>, which contain the commit status of
all transactions. A table file is only usable with this
information. Of course it is also impossible to restore only a
table and the associated <filename>pg_xact</filename> data
because that would render all other tables in the database
cluster useless. So file system backups only work for complete
backup and restoration of an entire database cluster.
</para>
</listitem>
</orderedlist>
</para>
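<para>
Putting the first restriction into practice, a complete offline file-system
backup might look like the following sketch. It assumes the data directory
from the earlier example (<filename>/usr/local/pgsql/data</filename>) and that
the commands are run as the operating system user that owns that directory:
<programlisting>
pg_ctl stop -D /usr/local/pgsql/data
tar -cf backup.tar /usr/local/pgsql/data
pg_ctl start -D /usr/local/pgsql/data
</programlisting>
</para>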
<para>
An alternative file-system backup approach is to make a
<quote>consistent snapshot</quote> of the data directory, if the
file system supports that functionality (and you are willing to
trust that it is implemented correctly). The typical procedure is
to make a <quote>frozen snapshot</quote> of the volume containing the
database, then copy the whole data directory (not just parts, see
above) from the snapshot to a backup device, then release the frozen
snapshot. This will work even while the database server is running.
However, a backup created in this way saves
the database files in a state as if the database server had not been
properly shut down; therefore, when you start the database server
on the backed-up data, it will think the previous server instance
crashed and will replay the WAL log. This is not a problem; just
be aware of it (and be sure to include the WAL files in your backup).
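As an illustrative sketch only, on a system using LVM the procedure might look
like this (the volume group <literal>vg00</literal>, logical volume
<literal>pgdata</literal>, mount point, and snapshot size are placeholders for
your own setup):
<programlisting>
lvcreate --snapshot --size 10G --name pgsnap /dev/vg00/pgdata
mount /dev/vg00/pgsnap /mnt/pgsnap
tar -cf backup.tar -C /mnt/pgsnap .
umount /mnt/pgsnap
lvremove -f /dev/vg00/pgsnap
</programlisting>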