ok
</screen>
In the above queries, the second form is dangerous because the
<literal>LIMIT</literal> is not guaranteed to be applied before the locking
function is executed. This might cause some locks to be acquired
that the application was not expecting, and which it would therefore
fail to release (until the session ends).
From the point of view of the application, such locks
would be dangling, although still viewable in
<structname>pg_locks</structname>.
</para>
<para>
The functions provided to manipulate advisory locks are described in
<xref linkend="functions-advisory-locks"/>.
</para>
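<para>
 For illustration only (the key value <literal>10</literal> is
 arbitrary), a session-level advisory lock might be acquired and
 released like this:
</para>
<programlisting>
SELECT pg_try_advisory_lock(10);   -- returns true if the lock was acquired
-- ... perform work protected by the lock ...
SELECT pg_advisory_unlock(10);     -- release it explicitly; otherwise it is
                                   -- held until the session ends
</programlisting>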
</sect2>
</sect1>
<sect1 id="applevel-consistency">
<title>Data Consistency Checks at the Application Level</title>
<para>
It is very difficult to enforce business rules regarding data integrity
using Read Committed transactions because the view of the data shifts
with each statement, and even a single statement may not restrict
itself to the statement's snapshot if a write conflict occurs.
</para>
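<para>
 For instance, suppose a hypothetical <structname>accounts</structname>
 table (used here only for illustration) holds a row with
 <literal>balance = 100</literal>, and the application rule is that a
 balance may never go negative:
</para>
<programlisting>
-- Session A (Read Committed, the default)
BEGIN;
SELECT balance FROM accounts WHERE id = 1;   -- shows 100; rule looks satisfied

-- meanwhile Session B commits:
--   UPDATE accounts SET balance = 0 WHERE id = 1;

UPDATE accounts SET balance = balance - 50
  WHERE id = 1;                              -- this statement takes a new snapshot
COMMIT;                                      -- and sees B's row: balance is now -50,
                                             -- violating the rule checked above
</programlisting>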
<para>
While a Repeatable Read transaction has a stable view of the data
throughout its execution, there is a subtle issue with using
<acronym>MVCC</acronym> snapshots for data consistency checks, involving
something known as <firstterm>read/write conflicts</firstterm>.
If one transaction writes data and a concurrent transaction attempts
to read the same data (whether before or after the write), it cannot
see the work of the other transaction. The reader then appears to have
executed first regardless of which started first or which committed
first. If that is as far as it goes, there is no problem, but
if the reader also writes data which is read by a concurrent
transaction, there is now a transaction which appears to have run
before either of the previously mentioned transactions. If the
transaction which appears
to have executed last actually commits first, it is very easy for a
cycle to appear in a graph of the order of execution of the transactions.
When such a cycle appears, integrity checks will not work correctly
without some help.
</para>
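<para>
 As a concrete sketch of such a cycle, consider a hypothetical table
 <structname>mytab</structname> with columns
 <structfield>class</structfield> and <structfield>value</structfield>
 (this schedule is illustrative only):
</para>
<programlisting>
-- Session A
BEGIN ISOLATION LEVEL REPEATABLE READ;
SELECT SUM(value) FROM mytab WHERE class = 1;   -- misses B's later insert

-- Session B, concurrently
BEGIN ISOLATION LEVEL REPEATABLE READ;
SELECT SUM(value) FROM mytab WHERE class = 2;   -- misses A's later insert
INSERT INTO mytab VALUES (1, 300);              -- invisible to A's snapshot
COMMIT;

-- Session A, continuing
INSERT INTO mytab VALUES (2, 30);               -- invisible to B's snapshot
COMMIT;   -- both commits succeed at Repeatable Read, yet A appears to
          -- precede B (A did not see B's class-1 row) and B appears to
          -- precede A (B did not see A's class-2 row): a cycle
</programlisting>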
<para>
As mentioned in <xref linkend="xact-serializable"/>, Serializable
transactions are just Repeatable Read transactions which add
nonblocking monitoring for dangerous patterns of read/write conflicts.
When a pattern is detected which could cause a cycle in the apparent
order of execution, one of the transactions involved is rolled back to
break the cycle.
</para>
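<para>
 The transaction chosen for rollback fails with a serialization failure
 (SQLSTATE <literal>40001</literal>); the exact message text can vary
 between server versions, but looks roughly like this:
</para>
<screen>
ERROR:  could not serialize access due to read/write dependencies among transactions
HINT:  The transaction might succeed if retried.
</screen>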
<sect2 id="serializable-consistency">
<title>Enforcing Consistency with Serializable Transactions</title>
<para>
If the Serializable transaction isolation level is used for all writes
and for all reads which need a consistent view of the data, no other
effort is required to ensure consistency. Software from other
environments which is written to use serializable transactions to
ensure consistency should <quote>just work</quote> in this regard in
<productname>PostgreSQL</productname>.
</para>
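<para>
 For example (reusing the hypothetical <structname>accounts</structname>
 table from above), such a transaction is simply opened at this level:
</para>
<programlisting>
BEGIN ISOLATION LEVEL SERIALIZABLE;
-- every read and write below participates in serializability checking
SELECT balance FROM accounts WHERE id = 1;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
COMMIT;
</programlisting>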
<para>
When using this technique, an unnecessary burden on application
programmers can be avoided if the application software goes through a
framework which automatically retries transactions that are rolled
back with a serialization failure. It may be a good idea to set
<literal>default_transaction_isolation</literal> to <literal>serializable</literal>.
It would also be wise to ensure that no other transaction isolation
level is used, either inadvertently or to subvert integrity checks,
for example by checking the transaction isolation level in triggers.
</para>
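<para>
 A minimal sketch of both measures, assuming the hypothetical
 <structname>accounts</structname> table and <application>PL/pgSQL</application>
 (the function and trigger names are illustrative only):
</para>
<programlisting>
-- in postgresql.conf, or set per database/role:
--   default_transaction_isolation = 'serializable'

CREATE FUNCTION enforce_serializable() RETURNS trigger AS $$
BEGIN
  IF current_setting('transaction_isolation') &lt;&gt; 'serializable' THEN
    RAISE EXCEPTION 'writes to % require SERIALIZABLE isolation', TG_TABLE_NAME;
  END IF;
  RETURN NULL;  -- return value is ignored for statement-level triggers
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER require_serializable
  BEFORE INSERT OR UPDATE OR DELETE ON accounts
  FOR EACH STATEMENT EXECUTE FUNCTION enforce_serializable();
</programlisting>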
<para>
See <xref linkend="xact-serializable"/> for performance suggestions.
</para>
<warning>
<title>Serializable Transactions and Data Replication</title>