Re: Avoiding hash join batch explosions with extreme skew and weird stats - Mailing list pgsql-hackers

From David Kimura
Subject Re: Avoiding hash join batch explosions with extreme skew and weird stats
Msg-id CAHnPFjQiYN83NjQ4KvjX19Wti==uzyw8D24va56zJKzOt+B51A@mail.gmail.com
In response to Re: Avoiding hash join batch explosions with extreme skew and weird stats  (David Kimura <david.g.kimura@gmail.com>)
List pgsql-hackers
On Wed, Apr 29, 2020 at 4:44 PM David Kimura <david.g.kimura@gmail.com> wrote:
>
> The following patch adds logic to create a batch 0 file for serial hash join so
> that even in the pathological case we do not need to exceed work_mem.

Updated the patch to spill batch 0 tuples after the batch is marked as fallback.

A couple questions from looking more at serial code:

1) Does the current pattern of repartitioning batches *after* the previous
   hashtable insert has exceeded work_mem still make sense?

   In that case we allow ourselves to exceed work_mem by one tuple. If that no
   longer seems correct, then I think we can move the space-exceeded check in
   ExecHashTableInsert() *before* the actual hashtable insert (see the first
   sketch below).

2) After batch 0 is marked fallback, does the logic to insert into its batch
   file fit better in MultiExecPrivateHash() or ExecHashTableInsert()?

   The latter already has logic to decide whether a tuple is inserted into the
   hashtable or written to a batch file (see the second sketch below).
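
To make question 1 concrete, here is a rough standalone sketch of the two
orderings of the space check. The ModelHashTable type and the function names
are invented for illustration; this is not the actual nodeHash.c code:

#include <stdbool.h>
#include <stddef.h>

typedef struct ModelHashTable
{
    size_t space_used;      /* bytes currently held in the in-memory hashtable */
    size_t space_allowed;   /* work_mem-equivalent budget */
} ModelHashTable;

/* Current pattern: insert first, then notice the budget was exceeded. */
bool
insert_then_check(ModelHashTable *ht, size_t tuple_size)
{
    ht->space_used += tuple_size;               /* may overshoot by one tuple */
    return ht->space_used > ht->space_allowed;  /* caller repartitions if true */
}

/* Alternative: check first, so the budget is never exceeded. */
bool
check_then_insert(ModelHashTable *ht, size_t tuple_size)
{
    if (ht->space_used + tuple_size > ht->space_allowed)
        return false;                           /* caller spills the tuple to a batch file */
    ht->space_used += tuple_size;
    return true;
}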
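
For question 2, a similarly simplified sketch of the routing decision, shaped
like ExecHashTableInsert() but again with invented types, showing where a
fallback spill for batch 0 could slot into the existing hashtable-vs-batch-file
logic:

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

typedef struct ModelHashJoinTable
{
    int    curbatch;          /* batch currently built in memory (0 during build) */
    bool   batch0_fallback;   /* batch 0 has been marked as a fallback batch */
    size_t space_used;
    size_t space_allowed;
    FILE **batch_files;       /* one temp file per batch */
} ModelHashJoinTable;

static void
put_in_memory(ModelHashJoinTable *ht, const void *tuple, size_t len)
{
    /* stand-in for building the tuple into the in-memory hashtable */
    ht->space_used += len;
}

static void
put_in_batch_file(ModelHashJoinTable *ht, int batchno, const void *tuple, size_t len)
{
    /* stand-in for spilling the tuple to that batch's temp file */
    fwrite(tuple, len, 1, ht->batch_files[batchno]);
}

/*
 * Tuples belonging to a later batch always go to that batch's file; tuples
 * for the current batch normally go into memory, but once batch 0 is a
 * fallback batch and the budget is full they spill to batch 0's file too.
 */
void
model_hash_table_insert(ModelHashJoinTable *ht, int batchno, const void *tuple, size_t len)
{
    if (batchno != ht->curbatch)
    {
        put_in_batch_file(ht, batchno, tuple, len);   /* future batch */
        return;
    }

    if (ht->batch0_fallback && ht->space_used + len > ht->space_allowed)
        put_in_batch_file(ht, batchno, tuple, len);   /* fallback spill for batch 0 */
    else
        put_in_memory(ht, tuple, len);
}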

Thanks,
David
