Re: When manual analyze is needed - Mailing list pgsql-general

From veem v
Subject Re: When manual analyze is needed
Msg-id CAB+=1TUNvYwZu0A7GMNc3iEsDZbhGCLFpUe3_zXxMGo8z6x6xA@mail.gmail.com
In response to Re: When manual analyze is needed  (Greg Sabino Mullane <htamfids@gmail.com>)
Responses Re: When manual analyze is needed  (Greg Sabino Mullane <htamfids@gmail.com>)
List pgsql-general

On Mon, 4 Mar 2024 at 21:46, Greg Sabino Mullane <htamfids@gmail.com> wrote:
On Mon, Mar 4, 2024 at 12:23 AM veem v <veema0000@gmail.com> wrote:
Additionally, if a query was working fine but suddenly takes a suboptimal plan because of missing stats, do we have any hash value column in any performance view, associated with the queryid, that we can refer to in order to compare past vs. current plans, identify such issues quickly, and fix them?

You can use auto_explain; nothing else tracks things at that fine a level. You can use pg_stat_statements to track the average and max time for each query. Save and reset periodically to make it more useful.
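Greg's "save and reset periodically" suggestion can be sketched roughly as below. This assumes pg_stat_statements is already loaded via shared_preload_libraries; the history table name `stmt_stats_history` is made up for illustration, and the column names `mean_exec_time`/`max_exec_time` apply to PostgreSQL 13+ (older releases call them `mean_time`/`max_time`).

```sql
-- Hypothetical snapshot table mirroring the columns we want to keep.
CREATE TABLE IF NOT EXISTS stmt_stats_history AS
  SELECT now() AS captured_at, queryid, query, calls,
         mean_exec_time, max_exec_time
  FROM pg_stat_statements
  WITH NO DATA;

-- Periodically (e.g. from cron): snapshot, then reset, so the next
-- interval's averages are not diluted by long-accumulated history.
INSERT INTO stmt_stats_history
  SELECT now(), queryid, query, calls, mean_exec_time, max_exec_time
  FROM pg_stat_statements;

SELECT pg_stat_statements_reset();
```

Comparing a queryid's `mean_exec_time` across snapshots then shows when a statement's timing shifted, which is the closest pg_stat_statements gets to spotting a plan change.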

Thank you so much Greg. That helps.

We were planning to add the auto_explain extension, set log_min_duration to ~5 seconds, and set log_analyze to true, so that all queries running longer than that threshold are logged with detailed information on the exact bottleneck. Would it be a good idea to enable this on a production DB that is highly active? Or should we only have the extension installed, set the parameters while we debug a performance issue, and reset them afterwards once we are done?
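For reference, the setup described above might look roughly like this (a sketch, not a recommendation; the 5s threshold comes from the question, the other values are illustrative). One caveat worth knowing: with log_analyze on, the timing instrumentation runs for every statement, not just the ones that end up logged, which is the main overhead concern on a busy production system; log_timing = off reduces that cost at the price of losing per-node timings.

```sql
-- Session-level sketch; in production these would typically go in
-- postgresql.conf or be set via ALTER SYSTEM, with auto_explain added
-- to session_preload_libraries or shared_preload_libraries.
LOAD 'auto_explain';

SET auto_explain.log_min_duration = '5s';  -- log plans for queries over 5 seconds
SET auto_explain.log_analyze = on;         -- include actual rows/loops, not just estimates
SET auto_explain.log_timing = off;         -- skip per-node timing to cut overhead
SET auto_explain.log_buffers = on;         -- include buffer usage in the logged plan
```

Because these are plain GUCs, they can also be flipped on only while debugging and reset afterwards, which matches the second option in the question.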

