From: Tom Lane
Subject: Re: BUG #18909: Query creates millions of temporary files and stalls
Msg-id: 2177960.1746289626@sss.pgh.pa.us
In response to: Re: BUG #18909: Query creates millions of temporary files and stalls (Sergey Koposov <Sergey.Koposov@ed.ac.uk>)
Responses: Re: BUG #18909: Query creates millions of temporary files and stalls
List: pgsql-bugs
Sergey Koposov <Sergey.Koposov@ed.ac.uk> writes:
> #8  0x00005615d84f6a59 in ExecHashTableInsert (hashtable=0x5615da85e5c0, slot=0x5615da823378, hashvalue=2415356794)
>     at nodeHash.c:1714
>         shouldFree = true
>         tuple = 0x5615da85f5e8
>         bucketno = 32992122
>         batchno = 3521863

Yeah, this confirms the idea that the hashtable has exploded into an
unreasonable number of buckets and batches.  I don't know why a
parallel hash join would be more prone to do that than a non-parallel
one, though.  I'm hoping some of the folks who worked on PHJ will
look at this.
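
For context, here is a minimal sketch of the usual power-of-two scheme
for splitting a hash value into a bucket number and a batch number (not
a quote of nodeHash.c; the function name and exact bit-twiddling are
simplified).  Given bucketno = 32992122 and batchno = 3521863 in the
backtrace above, nbuckets and nbatch must both be in the millions:

    #include <stdint.h>

    /*
     * Simplified split of a 32-bit hash value into bucket and batch
     * numbers; nbuckets and nbatch are assumed to be powers of two,
     * with nbuckets == 1 << log2_nbuckets.
     */
    void
    get_bucket_and_batch(uint32_t hashvalue,
                         uint32_t nbuckets, int log2_nbuckets,
                         uint32_t nbatch,
                         uint32_t *bucketno, uint32_t *batchno)
    {
        /* low-order bits pick the bucket within the in-memory table */
        *bucketno = hashvalue & (nbuckets - 1);

        /* higher-order bits pick the batch (0 when there is only one) */
        if (nbatch > 1)
            *batchno = (hashvalue >> log2_nbuckets) & (nbatch - 1);
        else
            *batchno = 0;
    }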

What have you got work_mem set to?  I hope it's fairly large, if
you need to join such large tables.
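
For a ballpark feel for how work_mem drives the batch count, here is a
back-of-the-envelope sketch (not ExecChooseHashTableSize itself, and the
sizes below are made up): the executor needs roughly
inner-relation-bytes / work_mem batches, rounded up to a power of two,
and each batch beyond the first gets its own temporary files, which is
how a small work_mem against a huge inner relation ends up creating
millions of them:

    #include <stdint.h>
    #include <stdio.h>

    /* round inner_bytes / work_mem_bytes up to the next power of two */
    uint64_t
    estimate_nbatch(uint64_t inner_bytes, uint64_t work_mem_bytes)
    {
        uint64_t nbatch = 1;

        while (nbatch * work_mem_bytes < inner_bytes)
            nbatch *= 2;
        return nbatch;
    }

    int
    main(void)
    {
        /* hypothetical: ~1 TB of inner-side data with work_mem = 4MB */
        uint64_t nbatch = estimate_nbatch(1024ULL * 1024 * 1024 * 1024,
                                          4ULL * 1024 * 1024);

        printf("approx. batches: %llu\n", (unsigned long long) nbatch);
        return 0;
    }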

            regards, tom lane


