propagate to the
standby.
</para>
<para>
<function>pg_cancel_backend()</function>
and <function>pg_terminate_backend()</function> will work on user backends,
but not the startup process, which performs
recovery. <structname>pg_stat_activity</structname> does not show
recovering transactions as active. As a result,
<structname>pg_prepared_xacts</structname> is always empty during
recovery. If you wish to resolve in-doubt prepared transactions, view
<structname>pg_prepared_xacts</structname> on the primary and issue commands to
resolve transactions there or resolve them after the end of recovery.
</para>
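<para>
For example, an in-doubt prepared transaction could be resolved on the
primary roughly as follows (the transaction identifier shown here is
hypothetical):
<programlisting>
-- on the primary
SELECT gid, prepared, owner, database FROM pg_prepared_xacts;

-- commit or roll back a specific prepared transaction by its gid
COMMIT PREPARED 'example_gid';
-- or
ROLLBACK PREPARED 'example_gid';
</programlisting>
</para>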
<para>
<structname>pg_locks</structname> will show locks held by backends,
as normal. <structname>pg_locks</structname> also shows
a virtual transaction managed by the startup process that owns all
<literal>AccessExclusiveLocks</literal> held by transactions being replayed by recovery.
Note that the startup process does not acquire locks to
make database changes, and thus locks other than <literal>AccessExclusiveLocks</literal>
do not appear in <structname>pg_locks</structname> for the startup
process; they are just presumed to exist.
</para>
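<para>
As an illustration, the locks held by the startup process can be inspected
with a query along these lines (the <literal>backend_type</literal> value
assumes a reasonably recent server version):
<programlisting>
SELECT locktype, database, relation, mode, virtualtransaction
FROM pg_locks
WHERE pid = (SELECT pid
             FROM pg_stat_activity
             WHERE backend_type = 'startup');
</programlisting>
</para>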
<para>
The <productname>Nagios</productname> plugin <productname>check_pgsql</productname> will
work, because the simple information it checks for exists.
The <productname>check_postgres</productname> monitoring script will also work,
though some reported values could give different or confusing results.
For example, last vacuum time will not be maintained, since no
vacuum occurs on the standby. Vacuums running on the primary
do still send their changes to the standby.
</para>
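<para>
For instance, the vacuum-related columns of
<structname>pg_stat_user_tables</structname> do not advance on the standby,
so a check such as the following reflects only activity local to the server
it is run on:
<programlisting>
SELECT relname, last_vacuum, last_autovacuum
FROM pg_stat_user_tables;
</programlisting>
</para>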
<para>
WAL file control commands will not work during recovery,
e.g., <function>pg_backup_start</function> and <function>pg_switch_wal</function>.
</para>
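<para>
Attempting one of these functions on the standby fails with an error; the
exact wording may vary between releases, but it will look roughly like this:
<screen>
SELECT pg_switch_wal();
ERROR:  recovery is in progress
HINT:  WAL control functions cannot be executed during recovery.
</screen>
</para>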
<para>
Dynamically loadable modules work, including <structname>pg_stat_statements</structname>.
</para>
<para>
Advisory locks work normally in recovery, including deadlock detection.
Note that advisory locks are never WAL logged, so it is impossible for
an advisory lock on either the primary or the standby to conflict with WAL
replay. Nor is it possible to acquire an advisory lock on the primary
and have it initiate a similar advisory lock on the standby. Advisory
locks relate only to the server on which they are acquired.
</para>
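<para>
For example, a session-level advisory lock can be taken and released on the
standby just as on the primary; the key value here is arbitrary:
<programlisting>
SELECT pg_advisory_lock(12345);
-- ... perform work serialized by the lock ...
SELECT pg_advisory_unlock(12345);
</programlisting>
</para>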
<para>
Trigger-based replication systems such as <productname>Slony</productname>,
<productname>Londiste</productname> and <productname>Bucardo</productname> will not run on the
standby at all, though they will run happily on the primary server as
long as the changes are not sent to standby servers to be applied.
WAL replay is not trigger-based so you cannot relay from the
standby to any system that requires additional database writes or
relies on the use of triggers.
</para>
<para>
New OIDs cannot be assigned, though some <acronym>UUID</acronym> generators may still
work as long as they do not rely on writing new status to the database.
</para>
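<para>
For example, <function>gen_random_uuid()</function> (built in since
<productname>PostgreSQL</productname> 13) works on a standby because it does
not write anything to the database:
<programlisting>
SELECT gen_random_uuid();
</programlisting>
</para>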
<para>
Currently, temporary table creation is not allowed during read-only
transactions, so in some cases existing scripts will not run correctly.
This restriction might be relaxed in a later release. This is
both an SQL standard compliance issue and a technical issue.
</para>
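<para>
For example, on a standby (or in any read-only transaction) the following
fails; the exact error text may vary between releases:
<screen>
CREATE TEMPORARY TABLE tmp_example (id int);
ERROR:  cannot execute CREATE TABLE in a read-only transaction
</screen>
</para>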
<para>
<command>DROP TABLESPACE</command> can only succeed if the tablespace is empty.
Some standby users may be actively using the tablespace via their
<varname>temp_tablespaces</varname> parameter. If there are temporary files in the
tablespace, all active queries are canceled to ensure that temporary
files are removed, so the tablespace can be removed and WAL replay
can continue.
</para>
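<para>
As a sketch (the tablespace name here is hypothetical), a standby session
might be using the tablespace for temporary files while the tablespace is
dropped on the primary:
<programlisting>
-- on the standby
SET temp_tablespaces = 'temp_ts';

-- on the primary
DROP TABLESPACE temp_ts;
</programlisting>
When the <command>DROP TABLESPACE</command> is replayed, standby queries
still holding temporary files in <literal>temp_ts</literal> are canceled so
that the temporary files can be removed and replay can continue.
</para>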
<para>
Running <command>DROP DATABASE</command> or <command>ALTER