I wrote:
> ... PREPARE/EXECUTE work a bit funny though: if you have
> track = all then you get EXECUTE cycles reported against both the
> EXECUTE statement and the underlying PREPARE. This is because when
> PREPARE calls parse_analyze_varparams the post-analyze hook doesn't know
> that this isn't a top-level statement, so it marks the query with a
> queryId. I don't see any way around that part without something like
> what I suggested before. However, this behavior seems to me to be
> considerably less of a POLA violation than the cases involving two
> identical-looking entries for self-contained statements, and it might
> even be thought to be a feature not a bug (since the PREPARE entry will
> accumulate totals for all uses of the prepared statement). So I'm
> satisfied with it for now.
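(For anyone not following along in the code: the hook in question is
pg_stat_statements' post_parse_analyze hook, which in heavily abbreviated
form does something like the sketch below.  pgss_hash_query() is a made-up
stand-in for the real query-jumbling code, and the sticky-entry bookkeeping
is omitted.  The point is that nothing reaching the hook distinguishes
PREPARE's internal parse_analyze_varparams call from a genuinely top-level
statement, so the queryId gets stamped either way.

    static void
    pgss_post_parse_analyze(ParseState *pstate, Query *query)
    {
        if (!pgss_enabled())
            return;

        /* fingerprint the query tree and mark it for the executor hooks */
        query->queryId = pgss_hash_query(query);
    }
)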
Actually, there's an easy hack for that too: we can teach the
ProcessUtility hook to do nothing (and in particular not increment the
nesting level) when the statement is an ExecuteStmt. This will result
in the executor time being blamed on the original PREPARE, whether or
not you have enabled tracking of nested statements. That seems like a
substantial win to me, because right now you get a distinct EXECUTE
entry for each textually-different set of parameter values, which seems
pretty useless. This change would make PREPARE/EXECUTE behave very
nearly the same way in pg_stat_statements as protocol-level prepared
statements do. About the only downside I can see is that the
cycles expended on evaluating the EXECUTE's parameters will not be
charged to any pg_stat_statements entry. Since those can be expressions,
in principle this might be a non-negligible amount of execution time,
but in practice it hardly seems likely that anyone would care about it.
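To make that concrete, the change I have in mind in pgss_ProcessUtility
is roughly as below.  This is a sketch only: the hook parameter list shown
is the older six-argument form and may differ in other branches, and the
existing bookkeeping (nesting-level increment, timing, pgss_store call) is
abbreviated to a comment.  prev_ProcessUtility is the module's existing
saved-hook variable.

    static void
    pgss_ProcessUtility(Node *parsetree, const char *queryString,
                        ParamListInfo params, bool isTopLevel,
                        DestReceiver *dest, char *completionTag)
    {
        /*
         * Do nothing special for EXECUTE: don't bump the nesting level
         * and don't create an entry of our own, just hand it off.  The
         * executor hooks will then charge the execution cycles to the
         * underlying PREPARE's entry, whether or not nested-statement
         * tracking is enabled.
         */
        if (IsA(parsetree, ExecuteStmt))
        {
            if (prev_ProcessUtility)
                prev_ProcessUtility(parsetree, queryString, params,
                                    isTopLevel, dest, completionTag);
            else
                standard_ProcessUtility(parsetree, queryString, params,
                                        isTopLevel, dest, completionTag);
            return;
        }

        /* ... existing code: increment nested_level, time the statement,
         * call pgss_store(), etc. ... */
    }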
Barring objections I'll go fix this, and then this patch can be
considered closed except for possible future tweaking of the
sticky-entry decay rule.
regards, tom lane