the root user. The system call
<function>setrlimit</function> is responsible for setting these
parameters. The shell's built-in command <command>ulimit</command>
(Bourne shells) or <command>limit</command> (<application>csh</application>) is
used to control the resource limits from the command line. On
BSD-derived systems the file <filename>/etc/login.conf</filename>
controls the various resource limits set during login. See the
operating system documentation for details. The relevant
parameters are <varname>maxproc</varname>,
<varname>openfiles</varname>, and <varname>datasize</varname>. For
example:
<programlisting>
default:\
...
        :datasize-cur=256M:\
        :maxproc-cur=256:\
        :openfiles-cur=256:\
...
</programlisting>
(<literal>-cur</literal> is the soft limit. Append
<literal>-max</literal> to set the hard limit.)
</para>
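  <para>
   On most Unix-like systems, the limits for the current shell session can
   be inspected and the soft limits raised (up to the hard limit) with
   <command>ulimit</command>. The exact option letters vary between shells
   and platforms, so the following is only an illustration using a
   Bourne-style shell:
<programlisting>
ulimit -a            # show all current soft limits
ulimit -n            # show the soft limit on open files
ulimit -H -n         # show the hard limit on open files
ulimit -S -n 4096    # raise the soft limit on open files for this session
</programlisting>
  </para>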
<para>
Kernels can also have system-wide limits on some resources.
<itemizedlist>
<listitem>
<para>
On <productname>Linux</productname> the kernel parameter
<varname>fs.file-max</varname> determines the maximum number of open
files that the kernel will support. It can be changed with
<literal>sysctl -w fs.file-max=<replaceable>N</replaceable></literal>.
To make the setting persist across reboots, add an assignment
in <filename>/etc/sysctl.conf</filename>, as shown in the example
after this list.
The maximum limit of files per process is fixed at the time the
kernel is compiled; see
<filename>/usr/src/linux/Documentation/proc.txt</filename> for
more information.
</para>
</listitem>
</itemizedlist>
</para>
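  <para>
   For example, on Linux the kernel-wide limit could be raised immediately
   and recorded for later reboots like this (the value shown is only
   illustrative):
<programlisting>
sysctl -w fs.file-max=2097152

# add to /etc/sysctl.conf to persist across reboots:
fs.file-max = 2097152
</programlisting>
   Running <literal>sysctl -p</literal> applies the settings in
   <filename>/etc/sysctl.conf</filename> without a reboot.
  </para>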
<para>
The <productname>PostgreSQL</productname> server uses one process
per connection so you should provide for at least as many processes
as allowed connections, in addition to what you need for the rest
of your system. This is usually not a problem, but if you run
several servers on one machine things might get tight.
</para>
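  <para>
   As a rough check, the per-user process limit can be compared against
   the server's <varname>max_connections</varname> setting (leaving some
   headroom for the background processes the server also starts); for
   example, using <application>bash</application> and
   <application>psql</application>:
<programlisting>
ulimit -u                          # maximum user processes for this user
psql -c 'SHOW max_connections;'    # connection limit of the server
</programlisting>
  </para>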
<para>
The factory default limit on open files is often set to
<quote>socially friendly</quote> values that allow many users to
coexist on a machine without using an inappropriate fraction of
the system resources. If you run many servers on a machine this
is perhaps what you want, but on dedicated servers you might want to
raise this limit.
</para>
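  <para>
   On Linux systems that apply limits through <application>PAM</application>,
   a persistent per-user increase is commonly made in
   <filename>/etc/security/limits.conf</filename> (or a file under
   <filename>/etc/security/limits.d/</filename>). The user name and values
   below are only illustrative:
<programlisting>
postgres   soft   nofile   8192
postgres   hard   nofile   8192
</programlisting>
   Services started by <application>systemd</application> typically do not
   read this file; for them, unit settings such as
   <literal>LimitNOFILE</literal> apply instead.
  </para>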
<para>
On the other side of the coin, some systems allow individual
processes to open large numbers of files; if more than a few
processes do so then the system-wide limit can easily be exceeded.
If you find this happening, and you do not want to alter the
system-wide limit, you can set <productname>PostgreSQL</productname>'s <xref
linkend="guc-max-files-per-process"/> configuration parameter to
limit the consumption of open files.
</para>
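  <para>
   For example, to cap each server process at roughly 500 open files, one
   could set in <filename>postgresql.conf</filename>:
<programlisting>
max_files_per_process = 500
</programlisting>
   This parameter can only be set at server start.
  </para>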
<para>
Another kernel limit that may be of concern when supporting large
numbers of client connections is the maximum socket connection queue
length. If more than that many connection requests arrive within a very
short period, some may get rejected before the <productname>PostgreSQL</productname> server can service
the requests, with those clients receiving unhelpful connection failure
errors such as <quote>Resource temporarily unavailable</quote> or
<quote>Connection refused</quote>. The default queue length limit is 128
on many platforms. To raise it, adjust the appropriate kernel parameter
via <application>sysctl</application>, then restart the <productname>PostgreSQL</productname> server.
The parameter is variously named <varname>net.core.somaxconn</varname>
on Linux, <varname>kern.ipc.soacceptqueue</varname> on newer FreeBSD,
and <varname>kern.ipc.somaxconn</varname> on macOS and other BSD
variants.
</para>
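  <para>
   For example, on Linux the queue length limit could be raised like this
   (the value is only illustrative):
<programlisting>
sysctl -w net.core.somaxconn=1024

# add to /etc/sysctl.conf to persist across reboots:
net.core.somaxconn = 1024
</programlisting>
   As noted above, the <productname>PostgreSQL</productname> server must be
   restarted for the larger queue length to take effect.
  </para>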
</sect2>
<sect2 id="linux-memory-overcommit">
<title>Linux Memory Overcommit</title>
<indexterm>