Re: Logging parallel worker draught - Mailing list pgsql-hackers

From Imseih (AWS), Sami
Subject Re: Logging parallel worker draught
Date
Msg-id D04977E3-9F54-452C-A4C4-CDA67F392BD1@amazon.com
Whole thread Raw
In response to Re: Logging parallel worker draught  (Benoit Lobréau <benoit.lobreau@dalibo.com>)
Responses Re: Logging parallel worker draught
List pgsql-hackers
> I believe both cumulative statistics and logs are needed. Logs excel in 
> pinpointing specific queries at precise times, while statistics provide 
> a broader overview of the situation. Additionally, I often encounter 
> situations where clients lack pg_stat_statements and can't restart their 
> production promptly.

I agree that logging will be very useful here. 
Cumulative stats/pg_stat_statements can be handled in a separate discussion.

> log_temp_files exhibits similar behavior when a query involves multiple
> on-disk sorts. I'm uncertain whether this is something we should or need
> to address. I'll explore whether the error message can be made more
> informative.


> [local]:5437 postgres@postgres=# SET work_mem to '125kB';
> [local]:5437 postgres@postgres=# SET log_temp_files TO 0;
> [local]:5437 postgres@postgres=# SET client_min_messages TO log;
> [local]:5437 postgres@postgres=# WITH a AS ( SELECT x FROM
> generate_series(1,10000) AS F(x) ORDER BY 1 ) , b AS (SELECT x FROM
> generate_series(1,10000) AS F(x) ORDER BY 1 ) SELECT * FROM a,b;
> LOG: temporary file: path "base/pgsql_tmp/pgsql_tmp138850.20", size 122880 => First sort
> LOG: temporary file: path "base/pgsql_tmp/pgsql_tmp138850.19", size 140000
> LOG: temporary file: path "base/pgsql_tmp/pgsql_tmp138850.23", size 140000
> LOG: temporary file: path "base/pgsql_tmp/pgsql_tmp138850.22", size 122880 => Second sort
> LOG: temporary file: path "base/pgsql_tmp/pgsql_tmp138850.21", size 140000

That is true.

Users should also be able to control whether they want this logging overhead.
The best answer is a new GUC that is OFF by default.

I am also not sure we want to log draughts only. I think it's important
not only to see which operations are in a parallel worker draught, but also
to log operations that are using 100% of the planned workers.
This will help the DBA tune queries that are eating up the parallel workers.

Regards,

Sami

