linkend="guc-shared-buffers"><varname>shared_buffers</varname></link>,
<link linkend="guc-work-mem"><varname>work_mem</varname></link>, and
<link linkend="guc-hash-mem-multiplier"><varname>hash_mem_multiplier</varname></link>.
    In other cases, the problem may be caused by allowing too many
    connections to the database server itself. It is often better to reduce
    <link linkend="guc-max-connections"><varname>max_connections</varname></link>
    and instead make use of external connection-pooling software.
</para>
<para>
It is possible to modify the
kernel's behavior so that it will not <quote>overcommit</quote> memory.
Although this setting will not prevent the <ulink
url="https://lwn.net/Articles/104179/">OOM killer</ulink> from being invoked
altogether, it will lower the chances significantly and will therefore
lead to more robust system behavior. This is done by selecting strict
overcommit mode via <command>sysctl</command>:
<programlisting>
sysctl -w vm.overcommit_memory=2
</programlisting>
or placing an equivalent entry in <filename>/etc/sysctl.conf</filename>.
You might also wish to modify the related setting
<varname>vm.overcommit_ratio</varname>. For details see the kernel documentation
file <ulink url="https://www.kernel.org/doc/Documentation/vm/overcommit-accounting"></ulink>.
</para>
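   <para>
    As an illustrative sketch (the ratio value shown is an assumption, not a
    recommendation), the persistent entries in
    <filename>/etc/sysctl.conf</filename> might look like:
<programlisting>
# Strict overcommit: refuse allocations beyond swap plus
# overcommit_ratio percent of RAM
vm.overcommit_memory = 2
# Example ratio only; choose a value appropriate for your RAM and swap
vm.overcommit_ratio = 80
</programlisting>
    Such entries can be applied without a reboot by running
    <command>sysctl -p</command> as root.
   </para>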
<para>
Another approach, which can be used with or without altering
<varname>vm.overcommit_memory</varname>, is to set the process-specific
<firstterm>OOM score adjustment</firstterm> value for the postmaster process to
<literal>-1000</literal>, thereby guaranteeing it will not be targeted by the OOM
killer. The simplest way to do this is to execute
<programlisting>
echo -1000 > /proc/self/oom_score_adj
</programlisting>
in the <productname>PostgreSQL</productname> startup script just before
invoking <filename>postgres</filename>.
Note that this action must be done as root, or it will have no effect;
so a root-owned startup script is the easiest place to do it. If you
do this, you should also set these environment variables in the startup
script before invoking <filename>postgres</filename>:
<programlisting>
export PG_OOM_ADJUST_FILE=/proc/self/oom_score_adj
export PG_OOM_ADJUST_VALUE=0
</programlisting>
These settings will cause postmaster child processes to run with the
normal OOM score adjustment of zero, so that the OOM killer can still
target them at need. You could use some other value for
<envar>PG_OOM_ADJUST_VALUE</envar> if you want the child processes to run
with some other OOM score adjustment. (<envar>PG_OOM_ADJUST_VALUE</envar>
can also be omitted, in which case it defaults to zero.) If you do not
set <envar>PG_OOM_ADJUST_FILE</envar>, the child processes will run with the
same OOM score adjustment as the postmaster, which is unwise since the
whole point is to ensure that the postmaster has a preferential setting.
</para>
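   <para>
    Putting these pieces together, a root-owned startup script fragment
    might look like the following. This is only a sketch: the installation
    paths and the <literal>postgres</literal> user name are assumptions for
    illustration and will vary by platform and packaging.
<programlisting>
# Run as root: protect the postmaster from the OOM killer
echo -1000 &gt; /proc/self/oom_score_adj

# Have postmaster child processes revert to the normal
# OOM score adjustment of zero
export PG_OOM_ADJUST_FILE=/proc/self/oom_score_adj
export PG_OOM_ADJUST_VALUE=0

# Start the server as the postgres user (paths assumed)
su postgres -c '/usr/local/pgsql/bin/postgres -D /usr/local/pgsql/data &gt;logfile 2&gt;&amp;1 &amp;'
</programlisting>
    Because the adjustment is written before switching users, the server
    process inherits it, while its children reset themselves according to
    the exported variables.
   </para>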
</sect2>
<sect2 id="linux-huge-pages">
<title>Linux Huge Pages</title>
<para>
Using huge pages reduces overhead when using large contiguous chunks of
memory, as <productname>PostgreSQL</productname> does, particularly when
using large values of <xref linkend="guc-shared-buffers"/>. To use this
feature in <productname>PostgreSQL</productname> you need a kernel
with <varname>CONFIG_HUGETLBFS=y</varname> and
<varname>CONFIG_HUGETLB_PAGE=y</varname>. You will also have to configure
the operating system to provide enough huge pages of the desired size.
The runtime-computed parameter
<xref linkend="guc-shared-memory-size-in-huge-pages"/> reports the number
of huge pages required. This parameter can be viewed before starting the
server with a <command>postgres</command> command like:
<programlisting>
$ <userinput>postgres -D $PGDATA -C shared_memory_size_in_huge_pages</userinput>