</indexterm>
<indexterm>
<primary>serialization anomaly</primary>
</indexterm>
<para>
The <firstterm>Serializable</firstterm> isolation level provides
the strictest transaction isolation. This level emulates serial
transaction execution for all committed transactions,
as if transactions had been executed one after another, serially,
rather than concurrently. However, like the Repeatable Read level,
applications using this level must
be prepared to retry transactions due to serialization failures.
In fact, this isolation level works exactly the same as Repeatable
Read except that it also monitors for conditions which could make
execution of a concurrent set of serializable transactions behave
in a manner inconsistent with all possible serial (one at a time)
executions of those transactions. This monitoring does not
introduce any blocking beyond that present in Repeatable Read, but
there is some overhead to the monitoring, and detection of the
conditions which could cause a
<firstterm>serialization anomaly</firstterm> will trigger a
<firstterm>serialization failure</firstterm>.
</para>
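<para>
As a minimal sketch, a transaction can request this level explicitly
when it begins, and the application must be prepared to retry it if it
fails with SQLSTATE <literal>40001</literal>
(<literal>serialization_failure</literal>):
<screen>
BEGIN ISOLATION LEVEL SERIALIZABLE;
-- ... reads and writes ...
COMMIT;   -- may fail with a serialization failure; if so, retry the
          -- whole transaction from the beginning
</screen>
</para>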
<para>
As an example,
consider a table <structname>mytab</structname>, initially containing:
<screen>
 class | value
-------+-------
     1 |    10
     1 |    20
     2 |   100
     2 |   200
</screen>
Suppose that serializable transaction A computes:
<screen>
SELECT SUM(value) FROM mytab WHERE class = 1;
</screen>
and then inserts the result (30) as the <structfield>value</structfield> in a
new row with <structfield>class</structfield><literal> = 2</literal>. Concurrently, serializable
transaction B computes:
<screen>
SELECT SUM(value) FROM mytab WHERE class = 2;
</screen>
and obtains the result 300, which it inserts in a new row with
<structfield>class</structfield><literal> = 1</literal>. Then both transactions try to commit.
If either transaction were running at the Repeatable Read isolation level,
both would be allowed to commit; but since there is no serial order of execution
consistent with the result, using Serializable transactions will allow one
transaction to commit and will roll the other back with this message:
<screen>
ERROR: could not serialize access due to read/write dependencies among transactions
</screen>
This is because if A had
executed before B, B would have computed the sum 330, not 300, and
similarly the other order would have resulted in a different sum
computed by A.
</para>
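<para>
One possible interleaving of the two transactions, shown as a sketch
(assuming <structname>mytab</structname> has its columns in the order
shown above; which transaction is rolled back, and whether the error is
raised at commit or earlier, can vary):
<screen>
-- session A
BEGIN ISOLATION LEVEL SERIALIZABLE;
SELECT SUM(value) FROM mytab WHERE class = 1;      -- returns 30
INSERT INTO mytab (class, value) VALUES (2, 30);

-- session B, running concurrently
BEGIN ISOLATION LEVEL SERIALIZABLE;
SELECT SUM(value) FROM mytab WHERE class = 2;      -- returns 300
INSERT INTO mytab (class, value) VALUES (1, 300);
COMMIT;                                            -- succeeds

-- session A again
COMMIT;   -- fails with the serialization error shown above
</screen>
</para>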
<para>
When relying on Serializable transactions to prevent anomalies, it is
important that any data read from a permanent user table not be
considered valid until the transaction which read it has successfully
committed. This is true even for read-only transactions, except that
data read within a <firstterm>deferrable</firstterm> read-only
transaction is known to be valid as soon as it is read, because such a
transaction waits until it can acquire a snapshot guaranteed to be free
from such problems before starting to read any data. In all other cases
applications must not depend on results read during a transaction that
later aborted; instead, they should retry the transaction until it
succeeds.
</para>
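<para>
For instance, a deferrable read-only transaction can be requested as
follows (a sketch reusing the table above; the transaction may wait
before its first read until a suitable snapshot is available):
<screen>
BEGIN ISOLATION LEVEL SERIALIZABLE READ ONLY DEFERRABLE;
SELECT SUM(value) FROM mytab;   -- results are valid as soon as they are read
COMMIT;
</screen>
</para>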
<para>
To guarantee true serializability <productname>PostgreSQL</productname>
uses <firstterm>predicate locking</firstterm>, which means that it keeps locks
which allow it to determine when a write would have had an impact on
the result of a previous read from a concurrent transaction, had it run
first. In <productname>PostgreSQL</productname> these locks do not
cause any blocking and therefore can <emphasis>not</emphasis> play any part in
causing a deadlock. They are used to identify and flag dependencies
among concurrent Serializable transactions which in certain combinations
can lead to serialization