Re: A better way than tweaking NTUP_PER_BUCKET - Mailing list pgsql-hackers

From: Heikki Linnakangas
Subject: Re: A better way than tweaking NTUP_PER_BUCKET
Msg-id: 51C71013.2080802@vmware.com
In response to: Re: A better way than tweaking NTUP_PER_BUCKET (Simon Riggs <simon@2ndQuadrant.com>)
Responses: Re: A better way than tweaking NTUP_PER_BUCKET
List: pgsql-hackers
On 23.06.2013 01:48, Simon Riggs wrote:
> On 22 June 2013 21:40, Stephen Frost <sfrost@snowman.net> wrote:
>
>> I'm actually not a huge fan of this, as it's certainly not cheap to do. If it
>> can be shown to be better than an improved heuristic then perhaps it would
>> work, but I'm not convinced.
>
> We need two heuristics, it would seem:
>
> * an initial heuristic to overestimate the number of buckets when we
> have sufficient memory to do so
>
> * a heuristic to determine whether it is cheaper to rebuild a dense
> hash table into a better one.
>
> Although I like Heikki's rebuild approach, we can't do this on every 2x
> overstretch. Given that large underestimates exist, we'll end up rehashing
> 5-12 times, which seems bad.

It's not very expensive. The hash values of all tuples have already been 
calculated, so rebuilding just means moving the tuples to the right bins.
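
To make that concrete, here is a minimal sketch of what the rebuild amounts
to when the hash value is stored with each tuple. The struct and function
names are only illustrative, not the executor's actual code: the point is
that growing the bucket array is just re-linking tuples into a larger array
of chains, with no hash function calls and no copying of tuple data.

#include <stdlib.h>

typedef struct HashTuple
{
    struct HashTuple *next;         /* next tuple in the same bucket chain */
    unsigned int      hashvalue;    /* hash computed when the tuple was inserted */
    /* ... tuple data would follow here ... */
} HashTuple;

typedef struct HashTable
{
    HashTuple **buckets;    /* array of bucket chain heads */
    int         nbuckets;   /* current number of buckets (a power of two) */
} HashTable;

/*
 * Grow the bucket array to new_nbuckets (a larger power of two) and move
 * every tuple into its new bin, reusing the stored hash values.
 */
static int
rebuild_buckets(HashTable *ht, int new_nbuckets)
{
    HashTuple **newbuckets = calloc(new_nbuckets, sizeof(HashTuple *));
    int         i;

    if (newbuckets == NULL)
        return -1;

    for (i = 0; i < ht->nbuckets; i++)
    {
        HashTuple *tuple = ht->buckets[i];

        while (tuple != NULL)
        {
            HashTuple  *next = tuple->next;
            int         bucketno = tuple->hashvalue & (new_nbuckets - 1);

            /* re-link the tuple into its new chain; no rehashing needed */
            tuple->next = newbuckets[bucketno];
            newbuckets[bucketno] = tuple;
            tuple = next;
        }
    }

    free(ht->buckets);
    ht->buckets = newbuckets;
    ht->nbuckets = new_nbuckets;
    return 0;
}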

> Better to let the hash table build and
> then re-hash once, if we can see it will be useful.

That sounds even less expensive, though.
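
A one-shot decision after the build could look roughly like the sketch
below; NTUP_PER_BUCKET's value and the function names are assumptions for
illustration only. Once the true tuple count is known, we can compute the
bucket count we would have wanted and rebuild once if it exceeds what we
actually allocated.

#define NTUP_PER_BUCKET 10      /* target tuples per bucket; illustrative value */

/* Smallest power of two >= n (for n >= 1). */
static int
next_power_of_two(int n)
{
    int p = 1;

    while (p < n)
        p <<= 1;
    return p;
}

/*
 * Called once the build is finished and the real tuple count is known.
 * Returns the bucket count to rebuild with, or 0 to leave the table alone.
 */
static int
choose_rebuild_nbuckets(double ntuples, int nbuckets)
{
    int wanted = next_power_of_two((int) (ntuples / NTUP_PER_BUCKET) + 1);

    /* rebuild only if the finished table exceeds the target load factor */
    if (wanted > nbuckets)
        return wanted;
    return 0;
}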

- Heikki


