
From: Robert Haas
Subject: Re: bad estimation together with large work_mem generates terrible slow hash joins
Date:
Msg-id: CA+TgmoYsXfrFeFoyz7SCqA7gi6nF6+qH8OGMvZM7_yovouWQrw@mail.gmail.com
In response to: Re: bad estimation together with large work_mem generates terrible slow hash joins (Heikki Linnakangas <hlinnakangas@vmware.com>)
Responses: Re: bad estimation together with large work_mem generates terrible slow hash joins (Heikki Linnakangas <hlinnakangas@vmware.com>)
           Re: bad estimation together with large work_mem generates terrible slow hash joins (Tomas Vondra <tv@fuzzy.cz>)
List: pgsql-hackers
On Wed, Sep 10, 2014 at 2:25 PM, Heikki Linnakangas
<hlinnakangas@vmware.com> wrote:
> The dense-alloc-v5.patch looks good to me. I have committed that with minor
> cleanup (more comments below). I have not looked at the second patch.

Gah.  I was in the middle of doing this.  Sigh.

>> * the chunk size is 32kB (instead of 16kB), and we're using a 1/4
>>    threshold for 'oversized' items
>>
>>    We need the threshold to be >=8kB, to trigger the special case
>>    within AllocSet. The 1/4 rule is consistent with ALLOC_CHUNK_FRACTION.
>
> Should we care about the fact that if there are only a few tuples, we will
> nevertheless waste 32kB of memory for the chunk? I guess not, but I thought
> I'd mention it. The smallest allowed value for work_mem is 64kB.

I think we should change the threshold here to 1/8th.  The worst-case
memory wastage as-is is ~32kB/5 > 6kB: a tuple just over 32kB/5 fits
only four times into a 32kB chunk, so nearly a fifth of each chunk can
end up unused.
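
To spell the arithmetic out (my own back-of-the-envelope check, not
anything from the patch -- the divisor generalization is mine):

#include <stdio.h>

#define CHUNK_SIZE (32 * 1024)

/*
 * With tuples capped at CHUNK_SIZE/divisor, the worst case is a tuple
 * just over CHUNK_SIZE/(divisor + 1): one more copy almost fits, so
 * nearly CHUNK_SIZE/(divisor + 1) bytes die at the end of every chunk.
 */
static double
worst_case_waste(int divisor)
{
    return (double) CHUNK_SIZE / (divisor + 1);
}

int
main(void)
{
    printf("1/4 threshold: ~%.0f wasted bytes per chunk (32k/5 > 6k)\n",
           worst_case_waste(4));
    printf("1/8 threshold: ~%.0f wasted bytes per chunk (32k/9 < 4k)\n",
           worst_case_waste(8));
    return 0;
}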

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


