Re: [HACKERS] Effect of changing the value for PARALLEL_TUPLE_QUEUE_SIZE - Mailing list pgsql-hackers

From Andres Freund
Subject Re: [HACKERS] Effect of changing the value for PARALLEL_TUPLE_QUEUE_SIZE
Date
Msg-id 20170530162617.ex5lxepgwp3bezpd@alap3.anarazel.de
In response to Re: [HACKERS] Effect of changing the value for PARALLEL_TUPLE_QUEUE_SIZE  (Robert Haas <robertmhaas@gmail.com>)
List pgsql-hackers
On 2017-05-30 07:27:12 -0400, Robert Haas wrote:
> The other is that I figured 64k was small enough that nobody would
> care about the memory utilization.  I'm not sure we can assume the
> same thing if we make this bigger.  It's probably fine to use a 6.4M
> tuple queue for each worker if work_mem is set to something big, but
> maybe not if work_mem is set to the default of 4MB.

Probably not.  It might also end up being detrimental performance-wise,
because we start touching more memory.  I guess it'd make sense to set
it in the planner, based on a) the size of work_mem and b) the number
of expected tuples.
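
Something like the following, as a standalone C sketch; the function
name, the 24-byte per-tuple overhead, and the quarter-of-work_mem
budget are all made-up assumptions for illustration, not anything the
planner does today:

    #include <stddef.h>

    static int work_mem = 4096;     /* kB; PostgreSQL's default is 4MB */

    #define TUPLE_QUEUE_MIN_SIZE ((size_t) 64 * 1024)   /* today's 64kB */
    #define TUPLE_QUEUE_MAX_SIZE ((size_t) 6400 * 1024) /* ~6.4MB cap */

    /*
     * Pick a per-worker tuple queue size from a) work_mem and b) the
     * planner's row estimate.  Heuristics are purely illustrative.
     */
    static size_t
    choose_tuple_queue_size(double expected_tuples, int tuple_width,
                            int nworkers)
    {
        /* Generous estimate of bytes flowing through the queue. */
        size_t  want = (size_t) (expected_tuples * (tuple_width + 24));

        /* Keep the combined queues within a quarter of work_mem. */
        size_t  budget = ((size_t) work_mem * 1024) / 4 /
            (nworkers > 0 ? nworkers : 1);

        size_t  size = want < budget ? want : budget;

        if (size < TUPLE_QUEUE_MIN_SIZE)
            size = TUPLE_QUEUE_MIN_SIZE;
        if (size > TUPLE_QUEUE_MAX_SIZE)
            size = TUPLE_QUEUE_MAX_SIZE;
        return size;
    }

Clamping to the current 64kB as the floor means small or low-estimate
queries wouldn't pay anything extra over today's behaviour.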

I do wonder whether the larger size fixes some scheduling issue
(i.e. while some backend is scheduled out, the other side of the queue
can continue), or whether it's largely triggered by fixable contention
inside the queue.  I'd guess it's a bit of both.  It should be
measurable in some cases, by comparing the amount of time spent
blocking on reads from the queue (or continuing because the queue is
empty), blocking on writes to the queue (where a full queue should
always result in blocking), and waiting for the spinlock.
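
A rough sketch of what that accounting could look like; the struct,
the clock_gettime() timing, and reader_wait() below are illustrative
assumptions, not shm_mq's actual layout or instrumentation:

    #include <stdint.h>
    #include <time.h>

    /* Per-queue wait counters; illustrative only. */
    typedef struct QueueWaitStats
    {
        uint64_t    read_block_ns;  /* blocked reading: queue empty */
        uint64_t    write_block_ns; /* blocked writing: queue full */
        uint64_t    spin_wait_ns;   /* waited for the queue spinlock */
    } QueueWaitStats;

    static uint64_t
    now_ns(void)
    {
        struct timespec ts;

        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (uint64_t) ts.tv_sec * 1000000000u + (uint64_t) ts.tv_nsec;
    }

    /*
     * The same pattern wraps every blocking point: take a timestamp,
     * block, and charge the elapsed time to the matching counter.
     */
    static void
    reader_wait(QueueWaitStats *stats)
    {
        uint64_t    start = now_ns();

        /* ... sleep here until the writer signals new data ... */

        stats->read_block_ns += now_ns() - start;
    }

Comparing the three counters across queue sizes would show whether the
win comes from fewer scheduling stalls or from reduced lock contention.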

- Andres


