On Mon, 14 Jul 2025 17:25:22 -0400
Greg Sabino Mullane <htamfids@gmail.com> wrote:
[…]
> > Other than picking an arbitrary value (i.e. 5000), any thoughts about how
> > to build a case around a specific value ?
>
>
> Do you have actual examples of queries / situations that are harmed by the
> current settings? Let's start there.
I did, mid 2024. The customer environment was PostgreSQL 13 and later
PostgreSQL 16 after a major upgrade.
The application was Nextcloud, ~4000 users per day, 500 per minute, around
4000 queries per second against a database of only 40GB. A typical OLTP workload.
Despite connection poolers in the architecture (one per application node), the
number of process creations per second (procs/s) on the (dedicated) server was
between 60 and 150+ depending on the activity. This was hammering the server
(a small VM with 8 cores and 32GB of memory).
When we set max_parallel_workers_per_gather=0, procs/s fell below 5 and stayed
flat, with no more variation. During high-activity periods, CPU usage (usr+sys)
fell from ~70% to 30%. The %sys almost disappeared, as you might guess. System
load went from 12 to 6.
Unfortunately, production deadlines and constraints didn't allow us to
investigate further or tune the parallel cost parameters more carefully.
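For reference, this is what we changed, plus a sketch of the cost tuning we
would have liked to try next. The cost values below are purely illustrative,
not recommendations:

```sql
-- What we actually did: disable parallelism at the Gather level
ALTER SYSTEM SET max_parallel_workers_per_gather = 0;
SELECT pg_reload_conf();

-- What we would have preferred, time permitting: raise the parallel cost
-- GUCs so the planner only considers parallelism for larger plans
-- (illustrative values; defaults are 1000 and 0.1 respectively)
ALTER SYSTEM SET parallel_setup_cost = 5000;
ALTER SYSTEM SET parallel_tuple_cost = 0.5;
SELECT pg_reload_conf();
```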
This exact experience is _one_ motivation (though not the original one) for our
effort to add statistics about parallel query in core.
Regards,