that control resource utilization, such
as <xref linkend="guc-work-mem"/>. Resource limits such as
<varname>work_mem</varname> are applied individually to each worker,
which means the total utilization may be much higher across all
processes than it would normally be for any single process.
For example, a parallel query using 4 workers may use up to 5 times
as much CPU time, memory, I/O bandwidth, and so forth as a query which
uses no workers at all.
</para>
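<para>
As a rough illustration (simple arithmetic on the settings, not a
guarantee of actual usage), the worst-case memory footprint of a
single sort or hash node can be estimated by multiplying
<varname>work_mem</varname> by the number of cooperating processes;
the values below are placeholders:
</para>
<programlisting>
-- With 4 workers plus the leader, a single sort node may allocate up
-- to work_mem in each of 5 processes, i.e. roughly 5 * 64MB = 320MB.
SET max_parallel_workers_per_gather = 4;
SET work_mem = '64MB';
</programlisting>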
<para>
For more information on parallel query, see
<xref linkend="parallel-query"/>.
</para>
</listitem>
</varlistentry>
<varlistentry id="guc-max-parallel-maintenance-workers" xreflabel="max_parallel_maintenance_workers">
<term><varname>max_parallel_maintenance_workers</varname> (<type>integer</type>)
<indexterm>
<primary><varname>max_parallel_maintenance_workers</varname> configuration parameter</primary>
</indexterm>
</term>
<listitem>
<para>
Sets the maximum number of parallel workers that can be
started by a single utility command. Currently, the parallel
utility commands that support the use of parallel workers are
<command>CREATE INDEX</command> when building a B-tree or BRIN index,
and <command>VACUUM</command> without the <literal>FULL</literal>
option. Parallel workers are taken from the pool of processes
established by <xref linkend="guc-max-worker-processes"/>, limited
by <xref linkend="guc-max-parallel-workers"/>. Note that the requested
number of workers may not actually be available at run time.
If this occurs, the utility operation will run with fewer
workers than expected. The default value is 2. Setting this
value to 0 disables the use of parallel workers by utility
commands.
</para>
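<para>
For example, a session might request extra workers for a large B-tree
build as follows (a sketch; the table and index names are illustrative,
and fewer workers may be used if the pool is exhausted):
</para>
<programlisting>
SET max_parallel_maintenance_workers = 4;
-- This B-tree build may use up to 4 parallel workers, subject to
-- max_worker_processes and max_parallel_workers.
CREATE INDEX orders_created_at_idx ON orders (created_at);
</programlisting>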
<para>
Note that parallel utility commands should not consume
substantially more memory than equivalent non-parallel
operations. This strategy differs from that of parallel
query, where resource limits generally apply per worker
process. Parallel utility commands treat the resource limit
<xref linkend="guc-maintenance-work-mem"/> as a limit to be applied to
the entire utility command, regardless of the number of
parallel worker processes. However, parallel utility
commands may still consume substantially more CPU resources
and I/O bandwidth.
</para>
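<para>
A minimal sketch of the difference (the table name and values are
placeholders): raising the worker count here does not multiply the
memory budget, because <varname>maintenance_work_mem</varname> covers
the command as a whole:
</para>
<programlisting>
SET maintenance_work_mem = '1GB';
SET max_parallel_maintenance_workers = 4;
-- The command as a whole stays within roughly 1GB; the budget is
-- shared among the leader and its workers, not 1GB per process.
VACUUM pgbench_accounts;
</programlisting>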
</listitem>
</varlistentry>
<varlistentry id="guc-max-parallel-workers" xreflabel="max_parallel_workers">
<term><varname>max_parallel_workers</varname> (<type>integer</type>)
<indexterm>
<primary><varname>max_parallel_workers</varname> configuration parameter</primary>
</indexterm>
</term>
<listitem>
<para>
Sets the maximum number of workers that the cluster can support for
parallel operations. The default value is 8. When increasing or
decreasing this value, consider also adjusting
<xref linkend="guc-max-parallel-maintenance-workers"/> and
<xref linkend="guc-max-parallel-workers-per-gather"/>.
Also, note that setting this value higher than
<xref linkend="guc-max-worker-processes"/> will have no effect,
since parallel workers are taken from the pool of worker processes
established by that setting.
</para>
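<para>
A hedged configuration sketch (placeholder numbers only) that keeps
the per-query and per-command limits at or below the cluster-wide cap:
</para>
<programlisting>
ALTER SYSTEM SET max_worker_processes = 16;  -- overall pool; needs a server restart
ALTER SYSTEM SET max_parallel_workers = 12;  -- cap on workers used for parallelism
ALTER SYSTEM SET max_parallel_workers_per_gather = 4;
ALTER SYSTEM SET max_parallel_maintenance_workers = 4;
SELECT pg_reload_conf();  -- applies the reloadable settings
</programlisting>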
</listitem>
</varlistentry>
<varlistentry id="guc-parallel-leader-participation" xreflabel="parallel_leader_participation">
<term>
<varname>parallel_leader_participation</varname> (<type>boolean</type>)
<indexterm>
<primary><varname>parallel_leader_participation</varname> configuration parameter</primary>