   <formalpara>
    <title>Use a compressed dump.</title>
    <para>
     You can use your favorite compression program, for example
     <application>gzip</application>:

<programlisting>
pg_dump <replaceable class="parameter">dbname</replaceable> | gzip &gt; <replaceable class="parameter">filename</replaceable>.gz
</programlisting>

     Reload with:

<programlisting>
gunzip -c <replaceable class="parameter">filename</replaceable>.gz | psql <replaceable class="parameter">dbname</replaceable>
</programlisting>

     or:

<programlisting>
cat <replaceable class="parameter">filename</replaceable>.gz | gunzip | psql <replaceable class="parameter">dbname</replaceable>
</programlisting>
    </para>
   </formalpara>

   <formalpara>
    <title>Use <command>split</command>.</title>
    <para>
     The <command>split</command> command
     allows you to split the output into smaller files that are
     acceptable in size to the underlying file system. For example, to
     make 2 gigabyte chunks:

<programlisting>
pg_dump <replaceable class="parameter">dbname</replaceable> | split -b 2G - <replaceable class="parameter">filename</replaceable>
</programlisting>

     Reload with:

<programlisting>
cat <replaceable class="parameter">filename</replaceable>* | psql <replaceable class="parameter">dbname</replaceable>
</programlisting>

     If using GNU <application>split</application>, it is possible to
     use it and <application>gzip</application> together:

<programlisting>
pg_dump <replaceable class="parameter">dbname</replaceable> | split -b 2G --filter='gzip &gt; $FILE.gz'
</programlisting>

     It can be restored using <command>zcat</command>.
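
     For example, a sketch of the corresponding reload, assuming the chunks
     carry <command>split</command>'s default <literal>x</literal> prefix
     (the command above does not set one):

<programlisting>
zcat x*.gz | psql <replaceable class="parameter">dbname</replaceable>
</programlisting>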
    </para>
   </formalpara>

   <formalpara>
    <title>Use <application>pg_dump</application>'s custom dump format.</title>
    <para>
     If <productname>PostgreSQL</productname> was built on a system with the
     <application>zlib</application> compression library installed, the custom dump
     format will compress data as it writes it to the output file. This will
     produce dump file sizes similar to using <command>gzip</command>, but it
     has the added advantage that tables can be restored selectively. The
     following command dumps a database using the custom dump format:

<programlisting>
pg_dump -Fc <replaceable class="parameter">dbname</replaceable> &gt; <replaceable class="parameter">filename</replaceable>
</programlisting>

     A custom-format dump is not a script for <application>psql</application>, but
     instead must be restored with <application>pg_restore</application>, for example:

<programlisting>
pg_restore -d <replaceable class="parameter">dbname</replaceable> <replaceable class="parameter">filename</replaceable>
</programlisting>
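
     Since tables can be restored selectively, a sketch of restoring a
     single table (the name <replaceable class="parameter">tablename</replaceable>
     is a placeholder):

<programlisting>
pg_restore -d <replaceable class="parameter">dbname</replaceable> -t <replaceable class="parameter">tablename</replaceable> <replaceable class="parameter">filename</replaceable>
</programlisting>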

     See the <xref linkend="app-pgdump"/> and <xref
     linkend="app-pgrestore"/> reference pages for details.
    </para>
   </formalpara>

   <para>
    For very large databases, you might need to combine <command>split</command>
    with one of the other two approaches.
   </para>
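
   <para>
    For example, one possible combination (shown as a sketch) pipes a
    custom-format dump through <command>split</command> and concatenates
    the pieces back into <application>pg_restore</application>, which
    reads from standard input when no input file is given:

<programlisting>
pg_dump -Fc <replaceable class="parameter">dbname</replaceable> | split -b 2G - <replaceable class="parameter">filename</replaceable>
cat <replaceable class="parameter">filename</replaceable>* | pg_restore -d <replaceable class="parameter">dbname</replaceable>
</programlisting>
   </para>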

   <formalpara>
    <title>Use <application>pg_dump</application>'s parallel dump feature.</title>
    <para>
     To speed up the dump of a large database, you can use
     <application>pg_dump</application>'s parallel mode. This will dump
     multiple tables at the same time. You can control the degree of
     parallelism with the <command>-j</command> parameter. Parallel dumps
     are supported only for the "directory" archive format.

<programlisting>
pg_dump -j <replaceable class="parameter">num</replaceable> -F d -f <replaceable class="parameter">out.dir</replaceable> <replaceable class="parameter">dbname</replaceable>
</programlisting>

     You can use <command>pg_restore -j</command> to restore a dump in parallel.
     This works for any archive in either the "custom" or the "directory"
     format, whether or not it was created with <command>pg_dump -j</command>.
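
     For example, a sketch of restoring the directory-format dump created
     above with <replaceable class="parameter">num</replaceable> parallel jobs:

<programlisting>
pg_restore -j <replaceable class="parameter">num</replaceable> -d <replaceable class="parameter">dbname</replaceable> <replaceable class="parameter">out.dir</replaceable>
</programlisting>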
    </para>
   </formalpara>
  </sect2>
 </sect1>

 <sect1 id="backup-file">
  <title>File System Level Backup</title>

  <para>
   An alternative backup strategy is to directly copy the files that
   <productname>PostgreSQL</productname> uses to store
