Re: pg_stat_statements : how to catch non successfully finished statements ? - Mailing list pgsql-general

From legrand legrand
Subject Re: pg_stat_statements : how to catch non successfully finished statements ?
Date
Msg-id 1588325173462-0.post@n3.nabble.com
In response to Re: pg_stat_statements : how to catch non successfully finished statements ?  (Tom Lane <tgl@sss.pgh.pa.us>)
List pgsql-general
Tom Lane-2 wrote
> legrand legrand <legrand_legrand@> writes:
>> Tom Lane-2 wrote
>>> The hard part here is that you have to be really careful what you do in
>>> a PG_CATCH block, because the only thing you know for sure about the
>>> backend's state is that it's not good.  Catalog fetches are right out,
>>> and anything that might itself throw an error had best be avoided as
>>> well.  (Which, among other things, means that examining executor state
>>> would be a bad idea, and I'm not even sure you'd want to traverse the
>>> plan tree.)
>>> I'm not convinced that it's practical for pg_stat_statements to make a new
>>> shared hashtable entry under those constraints.  But figuring out how to
>>> minimize the risks around that is the stumbling block, not lack of a
>>> hook.
> 
>> As far as I have been testing this with *cancelled* queries (Cancel, 
>> pg_cancel_backend(), statement_timeout, ...), I haven't found any
>> problem.
>> Would limiting the PG_CATCH block to those *cancelled* queries
>> and *no other error*, be an alternate solution ?
> 
> I do not see that that would make one iota of difference to the risk that
> the executor state tree is inconsistent at the instant the error is
> thrown.  You can't test your way to the conclusion that it's safe, either
> (much less that it'd remain safe); your test cases surely haven't hit
> every CHECK_FOR_INTERRUPTS call in the backend.
> 
>             regards, tom lane


new try:

 Considering that the executor state tree is limited to QueryDesc->estate,
 that would mean that the number of rows processed cannot be trusted, but
 that queryid, buffers and *duration* (which is the most important one)
 can still be used?
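
 To make that concrete, here is a rough, untested sketch of what the
 ExecutorRun hook could look like (the pgss_error_store() helper and the
 overall structure are only my assumptions, not existing pg_stat_statements
 code): on error it reads nothing from the executor state, only the queryId
 of the planned statement and a timestamp taken before execution.

#include "postgres.h"
#include "executor/executor.h"
#include "portability/instr_time.h"

static ExecutorRun_hook_type prev_ExecutorRun = NULL;

/* hypothetical helper: bump error counters of an already existing entry */
static void pgss_error_store(uint64 queryId, double error_time_ms);

static void
pgss_ExecutorRun_with_errors(QueryDesc *queryDesc, ScanDirection direction,
                             uint64 count, bool execute_once)
{
    instr_time  start;

    INSTR_TIME_SET_CURRENT(start);

    PG_TRY();
    {
        if (prev_ExecutorRun)
            prev_ExecutorRun(queryDesc, direction, count, execute_once);
        else
            standard_ExecutorRun(queryDesc, direction, count, execute_once);
    }
    PG_CATCH();
    {
        instr_time  duration;

        INSTR_TIME_SET_CURRENT(duration);
        INSTR_TIME_SUBTRACT(duration, start);

        /*
         * Deliberately no access to queryDesc->estate here: only the
         * queryId from the planned statement and the elapsed time.
         */
        pgss_error_store(queryDesc->plannedstmt->queryId,
                         INSTR_TIME_GET_MILLISEC(duration));

        PG_RE_THROW();
    }
    PG_END_TRY();
}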
  
 Knowing that shared hashtable entries are now (in pg13) created at
 planning time, there is no need to create a new one on execution error:
 just update the counters (the current ones, or new columns like "errors",
 "total_error_time", ... added to the pg_stat_statements view).
 
Is that better?
 
Regards
PAscal



--
Sent from: https://www.postgresql-archive.org/PostgreSQL-general-f1843780.html


