Hi,
thanks a lot, I just tried with work_mem set to 128MB on PG16 and it worked fine:
https://explain.depesz.com/s/7Zan
Likewise on PG18:
https://explain.depesz.com/s/H15B
And with enable_memoize=0 (PG18, 128MB):
https://explain.depesz.com/s/SaVI
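(For anyone wanting to reproduce, the settings above can be applied per-session; a rough sketch, with the actual query elided:)

```sql
-- Sketch only: session-local settings for the test runs above.
SET work_mem = '128MB';        -- workaround for the batching blow-up
SET enable_memoize = off;      -- only for the third run
EXPLAIN (ANALYZE, BUFFERS) SELECT ...;  -- the affected query
RESET enable_memoize;
RESET work_mem;
```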
So increasing work_mem seems like a good workaround for when we upgrade
our production DB. But I guess there's still a bug somewhere that results in
the wrong estimate?
Cheers,
Adrian
> Hi,
> On 2026-04-02 13:04:46 +0000, PG Bug reporting form wrote:
>> This is extreme both in general and compared to the performance we got on
>> 14/15, where the same
>> query took just a few seconds.
>>
>> Here are EXPLAIN ANALYZE outputs from when I tested this a few weeks ago on
>> 14 and 16
>> using our real production database.
>> https://explain.depesz.com/s/17Fp
>> https://explain.depesz.com/s/0dHI
> A lot of time is wasted due to batching in the hash join in 16, seemingly due
> to a mis-estimate in how much batching we would need:
> -> Parallel Hash  (cost=323037.00..323037.00 rows=1075136 width=10) (actual time=3267572.432..3267575.016 rows=1023098 loops=3)
>      Buckets: 262144 (originally 262144)  Batches: 262144 (originally 32)  Memory Usage: 18912kB
> (note the 262144 batches, when 32 were originally assumed)
> I'd suggest trying to run the query with a larger work_mem. Not because
> that should be necessary to avoid regressions, but because it will be useful
> to narrow down whether that's related to the issue...
> However, even on 14, you do look to be losing a fair bit of performance due
> to batching, so it might also be worth running the query on 14 with a larger
> work_mem, to see what performance you get there.
> It also looks like the choice of using memoize might not be working out
> entirely here. Although I don't think it's determinative for performance, it
> might still be worth checking what plan you get with
> SET enable_memoize = 0;
> Greetings,
> Andres Freund