PG Bug reporting form <noreply@postgresql.org> writes:
> Issue description:
> After some minimal activity in database first plan changes to the second.
> ANALYZE on table1 does not help (tried with various default_statistics_target
> values).
> Recreating the content_idx index helps for a while, but several minutes later
> the plan degrades back to the second one.
> The only thing that helped (surprisingly) is VACUUM. It also helps for a
> while, but once the number of dead tuples reaches something like 300-500,
> the plan changes back to the second one.
Surely it is *not* a bug that dead tuples affect the plan choice.
The density of live tuples is an important factor in the relative
costs of different table scan techniques.
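As a hedged illustration of keeping dead-tuple density in check (the table name table1 is taken from the report; the threshold values are examples only, not recommendations), one can watch pg_stat_user_tables and tighten autovacuum per table so VACUUM runs before the plan flips:

```sql
-- Check live vs. dead tuple counts for the table from the report.
SELECT relname, n_live_tup, n_dead_tup, last_autovacuum
FROM pg_stat_user_tables
WHERE relname = 'table1';

-- If dead tuples accumulate faster than autovacuum reclaims them,
-- lower per-table thresholds make autovacuum trigger sooner.
-- (Values here are illustrative.)
ALTER TABLE table1 SET (
    autovacuum_vacuum_scale_factor = 0.01,
    autovacuum_vacuum_threshold = 200
);
```

This does not change the planner's behavior; it just keeps the live-tuple density, and hence the scan-cost estimates, closer to the freshly vacuumed state the reporter observed good plans under.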
In the case at hand, I wonder why your rowcount estimate is off
by a factor of 50:
-> Bitmap Index Scan on content_idx
     (cost=0.00..155.07 rows=409 width=0)
     (actual time=4.932..4.932 rows=21952 loops=1)
Index Cond: (content @> '{"anotherjsonkey": {"values": ["13"]}}'::jsonb)
Buffers: shared hit=48
If you can't improve that, you're not likely to get a good plan, and
futzing around with cost factors to make this particular query do
"the right thing" anyway is inevitably going to make things worse
for other queries. Maybe a larger stats target for the content column
would help, but I fear that this @> condition is just beyond
the ability of the planner to estimate. You might need to redesign
the data representation to make it a bit more SQL-friendly.
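For concreteness, a sketch of both suggestions (identifiers follow the report; the stats target value and the index name are hypothetical, and whether either helps depends on the data distribution):

```sql
-- Raise the stats target for the jsonb column and re-analyze.
ALTER TABLE table1 ALTER COLUMN content SET STATISTICS 10000;
ANALYZE table1;

-- If the @> estimate is still bad, an expression index narrows the
-- problem to the key actually being queried: ANALYZE collects
-- statistics on the indexed expression itself.
CREATE INDEX content_values_idx ON table1
    USING gin ((content #> '{anotherjsonkey,values}'));
ANALYZE table1;

-- Queries must then use the same expression to match the index:
-- SELECT ... FROM table1
-- WHERE content #> '{anotherjsonkey,values}' @> '["13"]'::jsonb;
```

Extracting the hot key into an ordinary column (or an expression the planner has statistics for) is the "more SQL-friendly representation" idea: per-value statistics on a plain column are something the planner estimates well, unlike containment on an opaque jsonb blob.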
regards, tom lane