to indicate that we start
returning the results of the next query). The application must keep track
of the order in which it sent queries, to associate them with their
corresponding results.
Applications will typically use a state machine or a FIFO queue for this.
</para>
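  <para>
   As an illustration only, the following sketch sends a short, fixed list of
   queries and relies on the fact that results come back in the same order in
   which the queries were sent. The <literal>queued_sql</literal> array stands
   in for the application's FIFO queue, and error handling is reduced to early
   returns.
  </para>

<programlisting><![CDATA[
#include <stdio.h>
#include <libpq-fe.h>

/* The queries to pipeline; this array plays the role of the FIFO queue. */
static const char *const queued_sql[] = {
    "SELECT now()",
    "SELECT count(*) FROM pg_class",
};
#define NQUERIES ((int) (sizeof(queued_sql) / sizeof(queued_sql[0])))

static void
send_and_match(PGconn *conn)
{
    PGresult   *res;
    int         i;

    if (PQenterPipelineMode(conn) != 1)
        return;

    /* dispatch every queued query, then mark the end of the pipeline */
    for (i = 0; i < NQUERIES; i++)
        if (PQsendQueryParams(conn, queued_sql[i], 0,
                              NULL, NULL, NULL, NULL, 0) != 1)
            return;
    if (PQpipelineSync(conn) != 1)
        return;

    /* results arrive in the same order in which the queries were sent */
    for (i = 0; i < NQUERIES; i++)
    {
        /* each query's results are terminated by a NULL from PQgetResult */
        while ((res = PQgetResult(conn)) != NULL)
        {
            /* this result belongs to queued_sql[i] */
            printf("%s: %s\n", queued_sql[i],
                   PQresStatus(PQresultStatus(res)));
            PQclear(res);
        }
    }

    /* consume the PGRES_PIPELINE_SYNC result and leave pipeline mode */
    res = PQgetResult(conn);
    PQclear(res);
    PQexitPipelineMode(conn);
}
]]></programlisting>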
</sect3>
<sect3 id="libpq-pipeline-errors">
<title>Error Handling</title>
<para>
From the client's perspective, after <function>PQresultStatus</function>
returns <literal>PGRES_FATAL_ERROR</literal>,
the pipeline is flagged as aborted.
<function>PQresultStatus</function> will report a
<literal>PGRES_PIPELINE_ABORTED</literal> result for each remaining queued
operation in an aborted pipeline. The result for
<function>PQpipelineSync</function> or
<function>PQsendPipelineSync</function> is reported as
<literal>PGRES_PIPELINE_SYNC</literal> to signal the end of the aborted pipeline
and resumption of normal result processing.
</para>
<para>
The client <emphasis>must</emphasis> process results with
<function>PQgetResult</function> during error recovery.
</para>
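  <para>
   The following sketch shows one way to drain an aborted pipeline, assuming
   the application has counted how many queries it sent before the
   synchronization point; the <literal>n_sent</literal> parameter and the
   error reporting are illustrative, not part of libpq.
  </para>

<programlisting><![CDATA[
#include <stdio.h>
#include <libpq-fe.h>

/*
 * Drain the results of n_sent queries, tolerating an aborted pipeline,
 * then consume the synchronization result that ends it.
 */
static void
drain_pipeline(PGconn *conn, int n_sent)
{
    PGresult   *res;
    int         i;

    for (i = 0; i < n_sent; i++)
    {
        /* each query's results are terminated by a NULL from PQgetResult */
        while ((res = PQgetResult(conn)) != NULL)
        {
            switch (PQresultStatus(res))
            {
                case PGRES_FATAL_ERROR:
                    /* this query failed; the rest of the pipeline is aborted */
                    fprintf(stderr, "query %d failed: %s",
                            i, PQresultErrorMessage(res));
                    break;
                case PGRES_PIPELINE_ABORTED:
                    /* skipped because an earlier query in the pipeline failed */
                    break;
                default:
                    /* PGRES_COMMAND_OK, PGRES_TUPLES_OK, ...: handle normally */
                    break;
            }
            PQclear(res);
        }
    }

    /* the sync result marks the end of the aborted pipeline */
    res = PQgetResult(conn);
    if (res == NULL || PQresultStatus(res) != PGRES_PIPELINE_SYNC)
        fprintf(stderr, "unexpected state: %s", PQerrorMessage(conn));
    PQclear(res);
}
]]></programlisting>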
<para>
If the pipeline used an implicit transaction, then operations that have
already executed are rolled back and operations that were queued to follow
the failed operation are skipped entirely. The same behavior holds if the
pipeline starts and commits a single explicit transaction (i.e. the first
statement is <literal>BEGIN</literal> and the last is
<literal>COMMIT</literal>) except that the session remains in an aborted
transaction state at the end of the pipeline. If a pipeline contains
<emphasis>multiple explicit transactions</emphasis>, all transactions that
committed prior to the error remain committed, the currently in-progress
transaction is aborted, and all subsequent operations are skipped completely,
including subsequent transactions. If a pipeline synchronization point
occurs with an explicit transaction block in aborted state, the next pipeline
will become aborted immediately unless the next command puts the transaction
in normal mode with <command>ROLLBACK</command>.
</para>
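  <para>
   As a sketch of that last rule, assuming the previous pipeline's results
   (including its <literal>PGRES_PIPELINE_SYNC</literal>) have already been
   consumed so that <function>PQtransactionStatus</function> reflects the
   server-reported transaction state, the next pipeline might begin like
   this. The helper name and the <literal>SELECT 1</literal> placeholder are
   illustrative only.
  </para>

<programlisting><![CDATA[
#include <libpq-fe.h>

/* Begin the next pipeline, recovering from an aborted explicit transaction. */
static void
start_next_pipeline(PGconn *conn)
{
    /*
     * If the previous pipeline left an explicit transaction in aborted
     * state, roll it back first so the commands that follow run normally.
     */
    if (PQtransactionStatus(conn) == PQTRANS_INERROR)
        PQsendQueryParams(conn, "ROLLBACK", 0, NULL, NULL, NULL, NULL, 0);

    PQsendQueryParams(conn, "SELECT 1", 0, NULL, NULL, NULL, NULL, 0);
    PQpipelineSync(conn);
}
]]></programlisting>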
<note>
<para>
The client must not assume that work is committed when it
<emphasis>sends</emphasis> a <literal>COMMIT</literal> — only when the
corresponding result is received to confirm the commit is complete.
Because errors arrive asynchronously, the application needs to be able to
restart from the last <emphasis>received</emphasis> committed change and
resend work done after that point if something goes wrong.
</para>
</note>
</sect3>
<sect3 id="libpq-pipeline-interleave">
<title>Interleaving Result Processing and Query Dispatch</title>
<para>
To avoid deadlocks on large pipelines, the client should be structured
around a non-blocking event loop using operating system facilities
such as <function>select</function>, <function>poll</function>,
<function>WaitForMultipleObjectsEx</function>, and so on.
</para>
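  <para>
   For example, a loop of that kind might be built on a small waiting
   primitive such as the hypothetical helper below, which blocks until the
   connection's socket is readable and/or writable using
   <function>poll</function>:
  </para>

<programlisting><![CDATA[
#include <poll.h>
#include <libpq-fe.h>

/*
 * Wait until the connection's socket is readable and/or writable.
 * Returns poll()'s result; a real event loop would normally pass a
 * timeout instead of blocking indefinitely.
 */
static int
wait_on_connection(PGconn *conn, int want_read, int want_write)
{
    struct pollfd pfd;

    pfd.fd = PQsocket(conn);
    pfd.events = 0;
    pfd.revents = 0;
    if (pfd.fd < 0)
        return -1;
    if (want_read)
        pfd.events |= POLLIN;
    if (want_write)
        pfd.events |= POLLOUT;

    return poll(&pfd, 1, -1);
}
]]></programlisting>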
<para>
The client application should generally maintain a queue of work
remaining to be dispatched and a queue of work that has been dispatched
but not yet had its results processed. When the socket is writable
it should dispatch more work. When the socket is readable it should
read results and process them, matching them up to the next entry in
its corresponding results queue. Subject to available memory, results
should be read from the socket frequently: there is no need to wait until
the end of the pipeline before reading them. Pipelines should be scoped to logical
units of work, usually (but not necessarily) one transaction per pipeline.
There's no need to exit pipeline mode and re-enter it between pipelines,
or to wait for one pipeline to finish before sending the next.
</para>
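  <para>
   Putting this together, a loop along the following lines is one possible
   shape for such a client. It is only a sketch: it assumes the connection
   has already been put into non-blocking mode with
   <function>PQsetnonblocking</function> and into pipeline mode, the
   <literal>sql</literal> array and the counters stand in for the two queues
   described above, and results are merely printed instead of being matched
   against real work items.
  </para>

<programlisting><![CDATA[
#include <stdio.h>
#include <sys/select.h>
#include <libpq-fe.h>

/*
 * Dispatch nqueries queries and process their results, interleaving the
 * two so that neither side's buffers can fill up.  Returns 0 on success,
 * -1 on error.  Assumes conn is non-blocking and already in pipeline mode.
 */
static int
run_pipeline(PGconn *conn, const char *const *sql, int nqueries)
{
    int         n_sent = 0;         /* next entry of the "to dispatch" queue */
    int         n_pending = 0;      /* dispatched, results not yet processed */
    int         sync_sent = 0;
    int         done = 0;           /* set once PGRES_PIPELINE_SYNC is read */
    int         sock = PQsocket(conn);

    if (sock < 0)
        return -1;

    while (!done)
    {
        fd_set      read_fds;
        fd_set      write_fds;

        FD_ZERO(&read_fds);
        FD_ZERO(&write_fds);
        FD_SET(sock, &read_fds);

        /* wait for writability while data remains buffered or unsent */
        if (PQflush(conn) == 1 || !sync_sent)
            FD_SET(sock, &write_fds);

        if (select(sock + 1, &read_fds, &write_fds, NULL, NULL) < 0)
            return -1;

        /* socket readable: absorb input and process any complete results */
        if (FD_ISSET(sock, &read_fds))
        {
            if (!PQconsumeInput(conn))
                return -1;

            while (!done && n_pending > 0 && !PQisBusy(conn))
            {
                PGresult   *res = PQgetResult(conn);

                if (res == NULL)
                {
                    n_pending--;            /* one query fully processed */
                    continue;
                }
                if (PQresultStatus(res) == PGRES_PIPELINE_SYNC)
                    done = 1;               /* whole pipeline processed */
                else
                    printf("got %s\n", PQresStatus(PQresultStatus(res)));
                PQclear(res);
            }
        }

        /* socket writable: dispatch more work from the queue */
        if (FD_ISSET(sock, &write_fds))
        {
            if (n_sent < nqueries)
            {
                if (PQsendQueryParams(conn, sql[n_sent], 0,
                                      NULL, NULL, NULL, NULL, 0) != 1)
                    return -1;
                n_sent++;
                n_pending++;
            }
            else if (!sync_sent)
            {
                if (PQpipelineSync(conn) != 1)
                    return -1;
                sync_sent = 1;
                n_pending++;        /* the sync produces a result too */
            }
            if (PQflush(conn) < 0)
                return -1;
        }
    }

    return 0;
}
]]></programlisting>

  <para>
   A production client would typically send as much as the socket accepts on
   each wakeup rather than one query per iteration, and would handle
   <literal>PGRES_FATAL_ERROR</literal> and
   <literal>PGRES_PIPELINE_ABORTED</literal> results as described in the
   previous section instead of simply printing result statuses.
  </para>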