Re: [HACKERS] Parallel Hash take II - Mailing list pgsql-hackers

From Robert Haas
Subject Re: [HACKERS] Parallel Hash take II
Date
Msg-id CA+TgmoYPGASbPCWcgjDe0sSBX=cNem=djsRk-GCGjrMrjiQ4fA@mail.gmail.com
In response to Re: [HACKERS] Parallel Hash take II  (Andres Freund <andres@anarazel.de>)
Responses Re: [HACKERS] Parallel Hash take II  (Andres Freund <andres@anarazel.de>)
List pgsql-hackers
On Tue, Nov 14, 2017 at 4:24 PM, Andres Freund <andres@anarazel.de> wrote:
>> I agree, and I am interested in that subject.  In the meantime, I
>> think it'd be pretty unfair if parallel-oblivious hash join and
>> sort-merge join and every other parallel plan get to use work_mem * p
>> (and in some cases waste it with duplicate data), but Parallel Hash
>> isn't allowed to do the same (and put it to good use).
>
> I'm not sure I care about fairness between pieces of code ;)

I realize you're sort of joking here, but I think it's necessary to
care about fairness between pieces of code.

I mean, the very first version of this patch that Thomas submitted was
benchmarked by Rafia and had phenomenally good performance
characteristics.  That turned out to be because it wasn't respecting
work_mem; you can often do a lot better with more memory, and
generally you can't do nearly as well with less.  To make comparisons
meaningful, they have to be comparisons between algorithms that use
the same amount of memory.  And it's not just about testing.  If we
add an algorithm that will run twice as fast with equal memory but
only allow it half as much, it will probably never get picked and the
whole patch is a waste of time.
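[Editor's note: the budget asymmetry being argued about can be sketched numerically. This is a hedged illustration, not PostgreSQL code; the function name and figures are hypothetical. The premise from the thread: with p participants, a parallel-oblivious hash join builds p private copies of the hash table, each capped at work_mem, while Parallel Hash builds one shared table, so capping the shared table at a single work_mem would give it 1/p the total budget.]

```python
def total_hash_memory(work_mem_mb: int, participants: int, shared: bool) -> int:
    """Total memory a hash join may consume across all participants.

    Hypothetical model of the argument in this thread, not the planner's
    actual accounting.
    """
    if shared:
        # One shared hash table, capped at a single work_mem.
        return work_mem_mb
    # Each participant holds its own private copy of the same data,
    # each copy individually capped at work_mem.
    return work_mem_mb * participants


work_mem = 64  # MB, illustrative
p = 4          # number of participating processes

# Parallel-oblivious: 4 duplicated copies, 256 MB total.
print(total_hash_memory(work_mem, p, shared=False))
# Shared Parallel Hash under a single-work_mem cap: 64 MB total.
print(total_hash_memory(work_mem, p, shared=True))
```

Under this model the shared table gets a quarter of the aggregate budget for holding one copy of the data, which is the unfairness the message argues against: at equal total memory the shared build would often win, but at work_mem vs. work_mem * p it may never be chosen.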

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

