Re: Macro customizable hashtable / bitmapscan & aggregation perf - Mailing list pgsql-hackers

From Andres Freund
Subject Re: Macro customizable hashtable / bitmapscan & aggregation perf
Msg-id 20161002001758.GA14611@awork2
In response to Re: Macro customizable hashtable / bitmapscan & aggregation perf  (Tomas Vondra <tomas.vondra@2ndquadrant.com>)
Responses Re: Macro customizable hashtable / bitmapscan & aggregation perf
List pgsql-hackers
Hi,


On 2016-10-02 01:37:35 +0200, Tomas Vondra wrote:
> On 10/01/2016 09:59 PM, Andres Freund wrote:
> >>What about running a bigger benchmark - say, TPC-DS - and evaluating
> >>the impact?
> >
> >Worthwhile, although obviously the impact will be a lot smaller,
> >since they're not entirely bottlenecked on hash-aggs and bitmap
> >scans.
> >
> 
> Sure, the improvement won't be as significant as on the simple queries, but
> it's interesting IMHO.

Agreed.


> >>I think a crucial part of the benchmarking will be identifying and
> >>measuring corner cases, e.g. a lot of rows with the same key, etc.
> >>Although that probably is not a major issue for the two places
> >>switched to the new implementation (e.g. execGrouping merges the
> >>duplicates to a single group, by definition).
> >
> >Hm. I don't really see a case where that's going to cause issues - all
> >the execGrouping.c users store only one entry per key, and either ignore
> >later duplicates, or add their data to the initial tuple of the group.  I
> >don't really see any case where repeated keys should cause an issue for a
> >hash table?
> >
> >
> 
> Yep, that's pretty much what I suggested. I was wondering more about the
> other places where this hash table might be used - I've been thinking about
> hashjoins, but now that I think of it, that's fairly specialized and tuned
> code. In any case, let's not complicate the discussion for now.

My point is that I don't really see any potential use case where that's
an issue.


> A few comments, after quickly skimming through the first two patches:
> 
> 1) SH_CREATE does this:
> 
>     /* round up size to the next power of 2, eases lookups */
>     if (size < 2)
>         size = 2;
>     else
>         size = sh_pow2_int(size);
> 
> I very much doubt a hash table with 2 buckets is very useful. What about
> using some reasonable lower boundary - IIRC hashjoins use 1024 buckets,
> which might be a bit too much, but perhaps 256 would work?

For some cases that'll waste a good bit of space. Consider
e.g. catcache.c. Callers can (and do) specify more appropriate sizes for
the individual uses.
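
To make the sizing concrete, here's a rough standalone sketch - sh_pow2_int
is just the name from the quoted snippet, the body is illustrative and not
the actual patch:

    #include <stdint.h>

    /* round v up to the next power of two (v must be > 1) */
    static inline uint64_t
    sh_pow2_int(uint64_t v)
    {
        v--;
        v |= v >> 1;
        v |= v >> 2;
        v |= v >> 4;
        v |= v >> 8;
        v |= v >> 16;
        v |= v >> 32;
        return v + 1;
    }

    /* sizing as done in the quoted SH_CREATE excerpt */
    static uint64_t
    sh_initial_size(uint64_t requested)
    {
        if (requested < 2)
            return 2;
        return sh_pow2_int(requested);
    }

That way a caller asking for, say, 8 buckets really gets 8, rather than
being padded up to a 256 or 1024 bucket minimum.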


> 2) tb->resize_above
> 
> I'd say 'resize_threshold' is a better name. Also, 0.8 should be defined as
> a constant somewhere (not sure if it makes sense to make it part of SH_TYPE,
> so that hash tables may use different load factors).

Fair point.
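
Roughly something like the following - SH_FILLFACTOR and the struct layout
here are made up for illustration, the actual patch may spell things
differently:

    #include <stdint.h>

    #define SH_FILLFACTOR 0.8       /* hypothetical name for the 0.8 constant */

    typedef struct sh_table
    {
        uint64_t    size;               /* number of buckets, power of two */
        uint64_t    members;            /* number of entries stored */
        uint64_t    resize_threshold;   /* grow once members exceeds this */
    } sh_table;

    /* recompute the grow threshold whenever the table is (re)sized */
    static void
    sh_update_threshold(sh_table *tb)
    {
        tb->resize_threshold = (uint64_t) (tb->size * SH_FILLFACTOR);
    }

Making the constant a parameter of the template, rather than a field of
SH_TYPE, would keep the per-table struct small while still allowing
different load factors per instantiation.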


> 3) 'size' is a rather poor name for a parameter (e.g. in SH_CREATE or
> SH_RESIZE), as it's unclear whether it means 'element size in bytes', 'total
> size in bytes' or 'number of entries'.

Ok.


> 4) SH_CONTAINS sounds more like a function checking whether a hash table
> contains a key, not like a 'type of hash table entry'. What about
> SH_ENTRY_TYPE instead? Also, SH_KEYTYPE should be SH_KEY_TYPE, I guess.

Hm, ok.


> 5) SH_RESIZE
> 
> Do we expect to shrink hash tables?

Not at the moment. Should be doable, but I'm not sure it's worth
developing and testing - I don't see any use cases in pg atm.


> It's not entirely clear why it is guaranteed that there's always an element
> at its optimal position (when the load factor is < 1). Adding an explanation
> or a link to a paper would be helpful, I guess.

As the load factor always has to be below 1, there always has to be an
empty bucket. The entry directly following an empty bucket has to be at
its optimal position.
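
To illustrate (this is a stripped-down linear probe, not the patch's
SH_INSERT, and it ignores the robin hood displacement): an entry is only
ever placed at or after its optimal bucket and never skips over an empty
one, so the entry directly after an empty bucket cannot have been displaced.

    #include <stdint.h>
    #include <stdbool.h>

    #define NBUCKETS 8                  /* power of two */

    typedef struct bucket
    {
        bool        used;
        uint32_t    key;
    } bucket;

    /* place key in the first free bucket at or after its optimal position */
    static void
    insert(bucket *tab, uint32_t key, uint32_t hash)
    {
        uint32_t    cur = hash & (NBUCKETS - 1);    /* optimal position */

        /* a load factor below 1 guarantees an empty bucket, so this ends */
        while (tab[cur].used)
            cur = (cur + 1) & (NBUCKETS - 1);

        tab[cur].used = true;
        tab[cur].key = key;
    }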

> 
> 6) SH_STAT
> 
> This bit is a bit suspicious:
> 
>     uint32 max_collisions = 0;
> 
>     ...
> 
>     /* single contained element is not a collision */
>     curcoll--;
>     total_collisions += curcoll;
>     if (curcoll > max_collisions)
>         max_collisions = curcoll - 1;
> 
> Shouldn't the last line be just "max_collisions = curcoll"?

Hm. That's probably a refactoring artefact. Nice catch.
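
For the archives, the intended accounting presumably amounts to something
like this - the variable names follow the quoted snippet, the surrounding
loop is paraphrased:

    #include <stdint.h>

    /* illustrative only: accumulate collision stats from per-key chain
     * lengths, the way SH_STAT appears to intend */
    static void
    collision_stats(const uint32_t *chain_lengths, uint32_t nchains,
                    uint32_t *total_collisions, uint32_t *max_collisions)
    {
        uint32_t    i;

        *total_collisions = 0;
        *max_collisions = 0;

        for (i = 0; i < nchains; i++)
        {
            uint32_t    curcoll = chain_lengths[i];

            if (curcoll == 0)
                continue;

            /* single contained element is not a collision */
            curcoll--;
            *total_collisions += curcoll;
            if (curcoll > *max_collisions)
                *max_collisions = curcoll;  /* not curcoll - 1 */
        }
    }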


> 7) Should hash agg size estimation for the planner consider the fillfactor?
> 
> I think we should account for fillfactor - we should account for the
> allocated memory, even if there's free space in the hash table. After all,
> that's what hashjoins do.

We didn't really do that for dynahash (as it basically assumed a fillfactor
of 1 all the time), and I'm not sure that changing it now won't also cause
issues.
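
If we did account for it, I'd imagine the adjustment to look roughly like
this - SH_FILLFACTOR and the helper name are made up, this is only a sketch:

    #include <stddef.h>

    #define SH_FILLFACTOR 0.8       /* hypothetical, same constant as above */

    /* estimate the memory the hash table actually allocates for ngroups
     * entries, rather than assuming every bucket ends up occupied */
    static size_t
    estimate_hashtable_size(size_t ngroups, size_t entry_size)
    {
        double      nbuckets = (double) ngroups / SH_FILLFACTOR;

        return (size_t) (nbuckets * entry_size);
    }

Rounding nbuckets up to the next power of two, as SH_CREATE does, would
make the estimate larger still.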


Thanks!


Greetings,

Andres Freund


