Tom Lane-2 wrote
> legrand legrand <legrand_legrand@> writes:
>> Tom Lane-2 wrote
>>> The hard part here is that you have to be really careful what you do in
>>> a PG_CATCH block, because the only thing you know for sure about the
>>> backend's state is that it's not good. Catalog fetches are right out,
>>> and anything that might itself throw an error had best be avoided as
>>> well. (Which, among other things, means that examining executor state
>>> would be a bad idea, and I'm not even sure you'd want to traverse the
>>> plan tree.)
>>> I'm not convinced that it's practical for pg_stat_statements to make a
>>> new shared hashtable entry under those constraints. But figuring out
>>> how to minimize the risks around that is the stumbling block, not lack
>>> of a hook.
>
>> As far as I have been testing this with *cancelled* queries (Cancel,
>> pg_cancel_backend(), statement_timeout, ...), I haven't found any
>> problem. Would limiting the PG_CATCH block to those *cancelled* queries,
>> and *no other error*, be an alternate solution?
>
> I do not see that that would make one iota of difference to the risk that
> the executor state tree is inconsistent at the instant the error is
> thrown. You can't test your way to the conclusion that it's safe, either
> (much less that it'd remain safe); your test cases surely haven't hit
> every CHECK_FOR_INTERRUPTS call in the backend.
>
> regards, tom lane
New try:
Considering that the executor state tree is limited to QueryDesc->estate,
that would mean that the rows-processed count cannot be trusted, but that
queryid, buffers and *duration* (which is the most important one) can
still be used?
Knowing that shared hashtable entries are now (in PG 13) created at
planning time, there is no need to create a new one on execution error:
just update counters (the current ones, or new columns like "errors",
"total_error_time", ... added to the pg_stat_statements view).
Is that better ?
Regards
PAscal