gtest_parent_1 (cost=0.00..1.01 rows=1 width=14)
Filter: (f2 = 101)
-> Seq Scan on gtest_child2 gtest_parent_2 (cost=0.00..1.01 rows=1 width=14)
Filter: (f2 = 101)
-> Seq Scan on gtest_child3 gtest_parent_3 (cost=0.00..1.01 rows=1 width=14)
Filter: (f2 = 101)
</screen>
In this example the Update node needs to consider three child tables,
but not the originally-mentioned partitioned table (since that never
stores any data). So there are three input
scanning subplans, one per table. For clarity, the Update node is
annotated to show the specific target tables that will be updated, in the
same order as the corresponding subplans.
</para>
<para>
The <literal>Planning time</literal> shown by <command>EXPLAIN
ANALYZE</command> is the time it took to generate the query plan from the
parsed query and optimize it. It does not include parsing or rewriting.
</para>
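   <para>
    This figure appears as a summary line at the end of the output, just
    before the <literal>Execution time</literal> discussed next. For example
    (using the <literal>tenk1</literal> table from the earlier examples; the
    timings shown are merely illustrative and will differ from system to
    system):
   </para>

<screen>
EXPLAIN ANALYZE SELECT count(*) FROM tenk1;
                                                    QUERY PLAN
-------------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=483.00..483.01 rows=1 width=8) (actual time=3.110..3.111 rows=1 loops=1)
   ->  Seq Scan on tenk1  (cost=0.00..458.00 rows=10000 width=0) (actual time=0.012..1.950 rows=10000 loops=1)
 Planning Time: 0.181 ms
 Execution Time: 3.156 ms
</screen>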
<para>
The <literal>Execution time</literal> shown by <command>EXPLAIN
ANALYZE</command> includes executor start-up and shut-down time, as well
as the time to run any triggers that are fired, but it does not include
parsing, rewriting, or planning time.
Time spent executing <literal>BEFORE</literal> triggers, if any, is included in
the time for the related Insert, Update, or Delete node; but time
spent executing <literal>AFTER</literal> triggers is not counted there because
<literal>AFTER</literal> triggers are fired after completion of the whole plan.
The total time spent in each trigger
(either <literal>BEFORE</literal> or <literal>AFTER</literal>) is also shown separately.
Note that deferred constraint triggers will not be executed
until end of transaction and are thus not considered at all by
<command>EXPLAIN ANALYZE</command>.
</para>
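   <para>
    The per-trigger timing appears as additional <literal>Trigger</literal>
    lines at the end of the output. For instance, a hypothetical
    <literal>UPDATE</literal> on a table <literal>accounts</literal> that has
    an <literal>AFTER</literal> trigger named
    <literal>accounts_audit</literal> might report something like this
    (again, the timings are merely illustrative):
   </para>

<screen>
EXPLAIN ANALYZE UPDATE accounts SET balance = balance + 100.00 WHERE id = 1;
                                                           QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------
 Update on accounts  (cost=0.15..8.17 rows=0 width=0) (actual time=0.110..0.111 rows=0 loops=1)
   ->  Index Scan using accounts_pkey on accounts  (cost=0.15..8.17 rows=1 width=38) (actual time=0.034..0.036 rows=1 loops=1)
         Index Cond: (id = 1)
 Planning Time: 0.123 ms
 Trigger accounts_audit: time=0.246 calls=1
 Execution Time: 0.598 ms
</screen>

   <para>
    Here the time charged to the <literal>Trigger accounts_audit</literal>
    line is not part of the Update node's <literal>actual time</literal>,
    since the trigger is an <literal>AFTER</literal> trigger, but it is
    included in the total <literal>Execution time</literal>.
   </para>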
<para>
The time shown for the top-level node does not include any time needed
to convert the query's output data into displayable form or to send it
to the client. While <command>EXPLAIN ANALYZE</command> will never
send the data to the client, it can be told to convert the query's
output data to displayable form and measure the time needed for that,
by specifying the <literal>SERIALIZE</literal> option. That time will
be shown separately, and it's also included in the
total <literal>Execution time</literal>.
</para>
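   <para>
    A minimal illustration of the <literal>SERIALIZE</literal> option, with
    merely illustrative timings and output volume, might look along these
    lines:
   </para>

<screen>
EXPLAIN (ANALYZE, SERIALIZE) SELECT * FROM tenk1;
                                                  QUERY PLAN
---------------------------------------------------------------------------------------------------------------
 Seq Scan on tenk1  (cost=0.00..458.00 rows=10000 width=244) (actual time=0.018..1.844 rows=10000 loops=1)
 Planning Time: 0.073 ms
 Serialization: time=2.343 ms  output=2544kB  format=text
 Execution Time: 4.821 ms
</screen>

   <para>
    The <literal>Serialization</literal> line reports the time spent
    converting the query's output to displayable form and the amount of data
    produced; that time is also part of the total shown on the
    <literal>Execution Time</literal> line.
   </para>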
</sect2>
<sect2 id="using-explain-caveats">
<title>Caveats</title>
<para>
There are two significant ways in which run times measured by
<command>EXPLAIN ANALYZE</command> can deviate from normal execution of
the same query. First, since no output rows are delivered to the client,
network transmission costs are not included. I/O conversion costs are
not included either unless <literal>SERIALIZE</literal> is specified.
Second, the measurement overhead added by <command>EXPLAIN
ANALYZE</command> can be significant, especially on machines with slow
<function>gettimeofday()</function> operating-system calls. You can use the
<xref linkend="pgtesttiming"/> tool to measure the overhead of timing
on your system.
</para>
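   <para>
    For instance, running the tool with its default settings reports the
    per-call overhead of the timing facility, along these lines (the exact
    figures and output format depend on the platform and on the
    <productname>PostgreSQL</productname> version):
   </para>

<screen>
$ pg_test_timing
Testing timing overhead for 3 seconds.
Per loop time including overhead: 28.96 ns
</screen>

   <para>
    This is followed by a histogram of the observed timing durations, which
    can reveal occasional very slow clock readings.
   </para>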
<para>
<command>EXPLAIN</command> results should not be extrapolated to situations
much different from the one you are actually testing; for example,
results on a toy-sized table cannot be assumed to apply to large tables.
The planner's cost estimates are not linear and so it might choose
a different plan for a larger or smaller table. An extreme example
is that on a table that only occupies one disk page, you'll nearly
always get a sequential scan plan whether indexes are available or not.
The planner realizes that it's going to take one disk page read to
process the table in any case, so there's no value in expending additional
page reads to look at an index. (We saw this happening in the
<literal>polygon_tbl</literal>