On Sat, Apr 17, 2021 at 04:36:52PM +0200, Pavel Stehule wrote:
> today I worked on a Postgres server used for a critical service. Because the
> application is very specific, we had to do the final tuning on the production
> server. I fixed a lot of queries, but I am not able to detect fast queries
> that do a full scan of middle-sized tables - up to 1M rows. Surely I don't
> want to log all queries. Right now these queries run at roughly 10 per second.
>
> It would be nice to have the possibility to log queries that do a full scan
> and read more tuples than a specified limit, or that do a full scan of
> specified tables.
>
> What do you think about the proposed feature?
Are you able to use auto_explain with auto_explain.log_min_duration?
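Something along these lines (the 10ms threshold is only an example):

| LOAD 'auto_explain';                        -- or preload via shared_preload_libraries
| SET auto_explain.log_min_duration = '10ms'; -- log plans for statements slower than this
| SET auto_explain.log_analyze = on;          -- include actual row counts in the logged plan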
Then you can search the query logs with:
message ~ 'Seq Scan .* \(actual time=[.0-9]* rows=[0-9]{6,} loops=[0-9]*\)'
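For example, if the csvlog output is loaded into a table (postgres_log here is
just an illustrative name, set up as in the file_fdw docs):

| SELECT log_time, message
| FROM postgres_log
| WHERE message ~ 'Seq Scan .* \(actual time=[.0-9]* rows=[0-9]{6,} loops=[0-9]*\)';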
Or can you use pg_stat_all_tables.seq_scan?
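For example, to see which tables are seq-scanned the most and how many tuples
those scans read:

| SELECT schemaname, relname, seq_scan, seq_tup_read
| FROM pg_stat_all_tables
| WHERE seq_scan > 0
| ORDER BY seq_tup_read DESC
| LIMIT 10;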
But it seems to me that filtering on duration would be both a more important
criterion and a more general one than "seq scan with number of rows".
| SELECT .. FROM (SELECT split_part(message, ' ', 2)::float/1000 AS duration, ..) q WHERE duration > 2222;
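Spelled out (again assuming a hypothetical postgres_log table loaded from
csvlog; 2222 is an arbitrary threshold, in seconds, since the log message
reports milliseconds):

| SELECT duration, message
| FROM (SELECT split_part(message, ' ', 2)::float/1000 AS duration, message
|       FROM postgres_log
|       WHERE message LIKE 'duration:%') q  -- lines from log_min_duration_statement/auto_explain
| WHERE duration > 2222
| ORDER BY duration DESC;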
--
Justin