that any set of successfully committed concurrent
Serializable transactions will have the same effect as if they were run
one at a time means that if you can demonstrate that a single transaction,
as written, will do the right thing when run by itself, you can have
confidence that it will do the right thing in any mix of Serializable
transactions, even without any information about what those other
transactions might do; otherwise, it will not successfully commit. It is
important that an environment which uses this technique have a
generalized way of handling serialization failures (which always return
with an SQLSTATE value of '40001'), because it will be very hard to
predict exactly which transactions might contribute to the read/write
dependencies and need to be rolled back to prevent serialization
anomalies. The monitoring of read/write dependencies has a cost, as does
the restart of transactions which are terminated with a serialization
failure, but balanced against the cost and blocking involved in use of
explicit locks and <literal>SELECT FOR UPDATE</literal> or <literal>SELECT FOR
SHARE</literal>, Serializable transactions are the best performance choice
for some environments.
</para>
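   <para>
    A minimal sketch of where such a failure surfaces, using a
    hypothetical <literal>mytab</literal> table (the retry itself must be
    implemented in the client, which should roll back and re-run the
    whole transaction from the beginning when it sees SQLSTATE '40001'):
<programlisting>
BEGIN ISOLATION LEVEL SERIALIZABLE;
UPDATE mytab SET balance = balance - 100 WHERE acctnum = 1;
UPDATE mytab SET balance = balance + 100 WHERE acctnum = 2;
COMMIT;  -- this, or any earlier statement, may fail with SQLSTATE '40001'
</programlisting>
   </para>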
<para>
While <productname>PostgreSQL</productname>'s Serializable transaction isolation
level only allows concurrent transactions to commit if it can prove there
is a serial order of execution that would produce the same effect, it
doesn't always prevent errors from being raised that would not occur in
true serial execution. In particular, it is possible to see unique
constraint violations caused by conflicts with overlapping Serializable
transactions even after explicitly checking that the key isn't present
before attempting to insert it. This can be avoided by making sure
that <emphasis>all</emphasis> Serializable transactions that insert potentially
conflicting keys explicitly check if they can do so first. For example,
imagine an application that asks the user for a new key and then checks
that it doesn't exist already by trying to select it first, or generates
a new key by selecting the maximum existing key and adding one. If some
Serializable transactions insert new keys directly without following this
protocol, unique constraint violations might be reported even in cases
where they could not occur in a serial execution of the concurrent
transactions.
</para>
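   <para>
    For illustration, a minimal sketch of this protocol, using a
    hypothetical <literal>invoice</literal> table keyed by
    <literal>invoice_no</literal>; because every inserter reads the key
    first, two overlapping transactions inserting the same key develop a
    read/write dependency, and one of them is rolled back with a
    serialization failure rather than a unique constraint violation:
<programlisting>
BEGIN ISOLATION LEVEL SERIALIZABLE;
SELECT 1 FROM invoice WHERE invoice_no = 42;   -- check for the key first
INSERT INTO invoice (invoice_no) VALUES (42);  -- only if the SELECT returned no row
COMMIT;
</programlisting>
   </para>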
<para>
For optimal performance when relying on Serializable transactions for
concurrency control, these issues should be considered:
<itemizedlist>
<listitem>
<para>
Declare transactions as <literal>READ ONLY</literal> when possible.
</para>
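     <para>
      For example (a sketch; the query and table are hypothetical):
<programlisting>
BEGIN ISOLATION LEVEL SERIALIZABLE READ ONLY;
SELECT sum(balance) FROM mytab;  -- reporting query that modifies nothing
COMMIT;
</programlisting>
     </para>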
</listitem>
<listitem>
<para>
Control the number of active connections, using a connection pool if
needed. This is always an important performance consideration, but
it can be particularly important in a busy system using Serializable
transactions.
</para>
</listitem>
<listitem>
<para>
Don't put more into a single transaction than needed for integrity
purposes.
</para>
</listitem>
<listitem>
<para>
Don't leave connections dangling <quote>idle in transaction</quote>
longer than necessary. The configuration parameter
<xref linkend="guc-idle-in-transaction-session-timeout"/> may be used to
automatically disconnect lingering sessions.
</para>
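     <para>
      For example, to terminate sessions that sit idle in a transaction
      for more than a minute (the exact value is workload-dependent):
<programlisting>
SET idle_in_transaction_session_timeout = '60s';
</programlisting>
     </para>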
</listitem>
<listitem>
<para>
Eliminate explicit locks, <literal>SELECT FOR UPDATE</literal>, and
<literal>SELECT FOR SHARE</literal> where no longer needed due to the
protections automatically provided by Serializable transactions.
</para>
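     <para>
      For example, a read-modify-write sequence on a hypothetical
      <literal>mytab</literal> table no longer needs row locks; if a
      concurrent transaction invalidates what was read, one of the
      transactions is rolled back with a serialization failure instead
      of blocking:
<programlisting>
BEGIN ISOLATION LEVEL SERIALIZABLE;
SELECT balance FROM mytab WHERE acctnum = 1;   -- no FOR UPDATE needed
UPDATE mytab SET balance = balance - 100 WHERE acctnum = 1;
COMMIT;
</programlisting>
     </para>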
</listitem>
<listitem>
<para>
When the system is forced to combine multiple page-level