Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions - Mailing list pgsql-hackers

From Tomas Vondra
Subject Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions
Date
Msg-id 20190926212455.p6lp3q3lg7jske57@development
In response to Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions  (Alvaro Herrera <alvherre@2ndquadrant.com>)
List pgsql-hackers
On Thu, Sep 26, 2019 at 04:36:20PM -0300, Alvaro Herrera wrote:
>On 2019-Sep-26, Alvaro Herrera wrote:
>
>> How certain are you about the approach to measure memory used by a
>> reorderbuffer transaction ... does it not cause a measurable performance
>> drop?  I wonder if it would make more sense to use a separate context
>> per transaction and use context-level accounting (per the patch Jeff
>> Davis posted elsewhere for hash joins ... though I see now that that
>> only works for aset.c, not other memcxt implementations), or something
>> like that.
>
>Oh, I just noticed that that patch was posted separately in its own
>thread, and that that improved version does include support for other
>memory context implementations.  Excellent.
>

Unfortunately, that won't fly, for two simple reasons:

1) The memory accounting patch is known to perform poorly with many
child contexts - this is why array_agg/string_agg used to be problematic,
before we rewrote them not to create a memory context for each group.

It could be done differently (eager accounting, where every allocation
updates all the parent contexts), but then the overhead for the
regular/common case (with just a couple of contexts) is higher. So that
seems like a much inferior option.
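
To illustrate the trade-off, here's a minimal sketch of the two
strategies, with a purely hypothetical context struct (this is not the
actual memcxt API):

#include <stddef.h>

/*
 * Hypothetical sketch of the two accounting strategies discussed above;
 * the names are illustrative, not PostgreSQL's memory-context API.
 */
typedef struct AccountedContext
{
    struct AccountedContext *parent;
    struct AccountedContext *firstchild;
    struct AccountedContext *nextsibling;
    size_t      self_allocated;     /* bytes allocated directly in this context */
    size_t      total_allocated;    /* eager variant: self + all descendants */
} AccountedContext;

/*
 * Lazy accounting: alloc/free stay cheap, but reading the total has to
 * walk every child context.  With a context per transaction (or per
 * group, in the old array_agg/string_agg case) that walk is what hurts.
 */
static size_t
context_total_lazy(const AccountedContext *cxt)
{
    size_t      total = cxt->self_allocated;

    for (const AccountedContext *c = cxt->firstchild; c != NULL; c = c->nextsibling)
        total += context_total_lazy(c);
    return total;
}

/*
 * Eager accounting: reading the total is O(1), but every allocation (and
 * free) now updates all ancestors - overhead paid even in the common case
 * with just a couple of contexts.
 */
static void
account_alloc_eager(AccountedContext *cxt, size_t size)
{
    cxt->self_allocated += size;
    for (AccountedContext *c = cxt; c != NULL; c = c->parent)
        c->total_allocated += size;
}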

2) We can't actually have a single context per transaction anyway. Some
parts of a transaction (the REORDER_BUFFER_CHANGE_INTERNAL_TUPLECID
changes) are never evicted, so we'd have to keep them in a separate
context.
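
Just to spell out what that would mean, a per-transaction arrangement
would have to look roughly like this (hypothetical struct and helper,
not the actual ReorderBufferTXN code), i.e. at least two contexts per
transaction:

#include "postgres.h"
#include "utils/memutils.h"

/*
 * Hypothetical per-transaction layout, NOT the actual ReorderBufferTXN.
 * Even "one context per transaction" really means two, because the
 * INTERNAL_TUPLECID changes have to survive eviction of regular changes.
 */
typedef struct HypotheticalTxnMemory
{
    MemoryContext change_cxt;   /* regular changes; could be reset when the
                                 * transaction gets serialized to disk */
    MemoryContext tuplecid_cxt; /* INTERNAL_TUPLECID changes; never evicted,
                                 * so they can't share change_cxt */
} HypotheticalTxnMemory;

/* And creating the contexts for every transaction is not free either. */
static void
hypothetical_txn_memory_init(HypotheticalTxnMemory *mem, MemoryContext parent)
{
    mem->change_cxt = AllocSetContextCreate(parent, "txn changes",
                                            ALLOCSET_DEFAULT_SIZES);
    mem->tuplecid_cxt = AllocSetContextCreate(parent, "txn tuplecids",
                                              ALLOCSET_DEFAULT_SIZES);
}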

It'd also mean higher allocation overhead, because with the current
shared contexts we can reuse chunks across transactions - when one
transaction commits or gets serialized, its chunks get reused for
something else. With per-transaction contexts we'd lose some of that
benefit - we could only reuse chunks within a transaction (i.e. in large
transactions that get spilled to disk), not across commits.
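
Roughly speaking, the reuse we get from the shared contexts works like
this (a purely hypothetical chunk pool, not the actual reorderbuffer
code):

#include <stddef.h>
#include <stdlib.h>

/*
 * Hypothetical shared chunk pool, loosely in the spirit of the shared
 * slab-style contexts; this is not the actual reorderbuffer code.
 */
typedef struct Chunk
{
    struct Chunk *next;
} Chunk;

typedef struct ChunkPool
{
    Chunk      *freelist;       /* chunks released by any transaction */
    size_t      chunk_size;     /* fixed chunk size, >= sizeof(Chunk) */
} ChunkPool;

/* Allocate a chunk, preferring chunks freed by earlier transactions. */
static void *
pool_alloc(ChunkPool *pool)
{
    if (pool->freelist != NULL)
    {
        Chunk      *c = pool->freelist;

        pool->freelist = c->next;
        return c;               /* reused across transactions */
    }
    return malloc(pool->chunk_size);
}

/*
 * On commit/serialization the chunks go back to the shared freelist and
 * the next transaction picks them up.  With a context per transaction,
 * resetting/deleting that context returns the memory instead, and the
 * next transaction has to allocate fresh chunks.
 */
static void
pool_free(ChunkPool *pool, void *chunk)
{
    Chunk      *c = (Chunk *) chunk;

    c->next = pool->freelist;
    pool->freelist = c;
}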

I don't have any numbers, of course, but I wouldn't be surprised if the
impact was significant, e.g. for small transactions that never get
spilled. And creating/destroying the contexts is not free either, I think.


regards

-- 
Tomas Vondra                  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services 


