<varname>bgwriter_lru_maxpages</varname> and
<varname>bgwriter_lru_multiplier</varname> reduce the extra I/O load
caused by the background writer, but make it more likely that server
processes will have to issue writes for themselves, delaying interactive
queries.
</para>
</sect2>
<sect2 id="runtime-config-resource-io">
<title>I/O</title>
<variablelist>
<varlistentry id="guc-backend-flush-after" xreflabel="backend_flush_after">
<term><varname>backend_flush_after</varname> (<type>integer</type>)
<indexterm>
<primary><varname>backend_flush_after</varname> configuration parameter</primary>
</indexterm>
</term>
<listitem>
<para>
Whenever more than this amount of data has
been written by a single backend, attempt to force the OS to issue
these writes to the underlying storage. Doing so will limit the
amount of dirty data in the kernel's page cache, reducing the
likelihood of stalls when an <function>fsync</function> is issued at the end of a
checkpoint, or when the OS writes data back in larger batches in the
background. Often that will result in greatly reduced transaction
latency, but there are also some cases, especially with workloads
that are bigger than <xref linkend="guc-shared-buffers"/>, but smaller
than the OS's page cache, where performance might degrade. This
setting may have no effect on some platforms.
If this value is specified without units, it is taken as blocks,
that is <symbol>BLCKSZ</symbol> bytes, typically 8kB.
The valid range is
between <literal>0</literal>, which disables forced writeback,
and <literal>2MB</literal>. The default is <literal>0</literal>, i.e., no
forced writeback. (If <symbol>BLCKSZ</symbol> is not 8kB,
the maximum value scales proportionally to it.)
</para>
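<para>
As an illustrative sketch only (the value shown is arbitrary, not a
recommendation), the setting can be changed cluster-wide with
<command>ALTER SYSTEM</command> and applied with a configuration reload:
<programlisting>
-- Ask the kernel to start writeback after roughly 512kB of writes
-- from a single backend; setting it back to 0 disables forced writeback.
ALTER SYSTEM SET backend_flush_after = '512kB';
SELECT pg_reload_conf();
</programlisting>
</para>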
</listitem>
</varlistentry>
<varlistentry id="guc-effective-io-concurrency" xreflabel="effective_io_concurrency">
<term><varname>effective_io_concurrency</varname> (<type>integer</type>)
<indexterm>
<primary><varname>effective_io_concurrency</varname> configuration parameter</primary>
</indexterm>
</term>
<listitem>
<para>
Sets the number of concurrent storage I/O operations that
<productname>PostgreSQL</productname> expects can be executed
simultaneously. Raising this value will increase the number of I/O
operations that any individual <productname>PostgreSQL</productname>
session attempts to initiate in parallel. The allowed range is
<literal>1</literal> to <literal>1000</literal>, or
<literal>0</literal> to disable issuance of asynchronous I/O requests.
The default is <literal>16</literal>.
</para>
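<para>
As a minimal illustration (the value <literal>32</literal> is arbitrary),
the current value can be inspected and raised for an individual session:
<programlisting>
SHOW effective_io_concurrency;      -- display the value in effect
SET effective_io_concurrency = 32;  -- raise it for this session only
</programlisting>
</para>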
<para>
Higher values will have the most impact on higher-latency storage
where queries otherwise experience noticeable I/O stalls, and on
devices with high IOPS. Unnecessarily high values may increase I/O
latency for all queries on the system.
</para>
<para>
On systems with prefetch advice support,
<varname>effective_io_concurrency</varname> also controls the
prefetch distance.
</para>
<para>
This value can be overridden for tables in a particular tablespace by
setting the tablespace parameter of the same name (see <xref
linkend="sql-altertablespace"/>).
</para>
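<para>
For example, assuming a hypothetical tablespace named
<literal>fastspace</literal>, the override could be set and removed as
follows:
<programlisting>
-- Use a higher value for tables stored in this tablespace
ALTER TABLESPACE fastspace SET (effective_io_concurrency = 64);
-- Drop the override, falling back to the server-wide setting
ALTER TABLESPACE fastspace RESET (effective_io_concurrency);
</programlisting>
</para>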
</listitem>
</varlistentry>
<varlistentry id="guc-maintenance-io-concurrency" xreflabel="maintenance_io_concurrency">
<term><varname>maintenance_io_concurrency</varname> (<type>integer</type>)
<indexterm>
<primary><varname>maintenance_io_concurrency</varname> configuration parameter</primary>