From: David Rowley
Subject: Re: BUG #19076: Generic query plan is extremely slow
Msg-id: CAApHDvrtYT_eoqgGtR-6Z=s0upNv1NeiptdRdEEnVm2=osD8jg@mail.gmail.com
In response to: Re: BUG #19076: Generic query plan is extremely slow (Laurenz Albe <laurenz.albe@cybertec.at>)
List: pgsql-bugs
On Wed, 8 Oct 2025 at 21:21, Laurenz Albe <laurenz.albe@cybertec.at> wrote:
> Either don't use a prepared statement for this statement, or make sure
> that "plan_cache_mode" is set to "force_custom_plan" for that single
> prepared statement.

That might be the best idea.  Going by the following fragment, I
suspect that it's a multi-tenanted system and that the query is
running on a larger-than-average tenant. When planning the generic
plan, the best the planner can do is assume there will be <estimated
rows in table> divided by <number of distinct tenant_ids in table>
rows matching "tenant_id = $1".

                        ->  Bitmap Index Scan on ix_contacts_tenant_id  (cost=0.00..9.74 rows=709 width=0) (actual time=32.114..32.114 rows=966884 loops=1)
                              Index Cond: (tenant_id = $1)

In this case that estimate comes out to 709 rows, but in reality, for
this tenant, it's 966884. That results in the parameterized nested
loop having to perform over 1000x (966884 / 709 ≈ 1364) more inner
scans than planned.
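
For what it's worth, on PostgreSQL 16 or later the generic plan's
estimates can be inspected without executing the query (a sketch; the
exact query shape is guessed from the plan fragment above):

    -- GENERIC_PLAN (PostgreSQL 16+) plans the statement with $1 left
    -- unbound, showing the estimates a cached generic plan would use.
    EXPLAIN (GENERIC_PLAN)
    SELECT * FROM contacts WHERE tenant_id = $1;

The rows=709 on the bitmap index scan should show up there too: it is
roughly the table's estimated row count divided by the planner's
n_distinct estimate for tenant_id.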

Another option would be to check whether pg_stats reports a roughly
accurate n_distinct for this column (see if it's close to "select
count(distinct tenant_id) from contacts;"). If that estimate is too
high then lowering it might help. Artificially lowering it could also
be an option, but that could lead to bad plans in other areas, so the
plan_cache_mode idea might be better.
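
Concretely, the checks and both knobs might look like this (a sketch;
the table and column names are taken from the query above, and the
500 is a made-up placeholder value):

    -- What the planner believes vs. what is true. In pg_stats, a
    -- negative n_distinct means a fraction of the row count.
    SELECT n_distinct FROM pg_stats
    WHERE tablename = 'contacts' AND attname = 'tenant_id';
    SELECT count(DISTINCT tenant_id) FROM contacts;

    -- Override the estimate manually; takes effect at the next ANALYZE.
    ALTER TABLE contacts ALTER COLUMN tenant_id SET (n_distinct = 500);
    ANALYZE contacts;

    -- Or avoid generic plans for the session running this statement:
    SET plan_cache_mode = force_custom_plan;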

David


