Peter Geoghegan <pg@bowt.ie> writes:
> Perhaps Tom can weigh in here. I removed code that generated these
> alternative index paths from the planner because its original
> justification (see bugfix commit a4523c5a, a follow-up to bugfix
> commit 807a40c5) no longer applied. Perhaps this should be revisited
> now, or perhaps the issue should be ameliorated on the nbtree side. Or
> maybe we should just do nothing -- the issue can be worked around in
> the application itself.
Well, maybe it was a mistake to no longer consider such plans, but
this example doesn't prove it. Quoting the submitted readme file,
we selected this plan in v15:
-> Index Scan using zsf_pkey on zsf sf (cost=1.49..1.51 rows=1 width=24) (actual time=0.001..0.001 rows=1 loops=47089)
   Index Cond: (id = sdo.sfi)
   Filter: (cid = ANY ('{...}'::bigint[]))
versus this in v17:
-> Index Only Scan using zsf_id_fpi_cid_key on zsf sf (cost=0.29..0.31 rows=1 width=24) (actual time=0.023..0.023 rows=1 loops=47089)
   Index Cond: ((id = sdo.sfi) AND (cid = ANY ('{...}'::bigint[])))
IIUC you're saying the planner no longer even considers the first
case --- but if it did, it'd surely still pick the second one,
because the estimated cost is a lot less. So undoing that choice
would not help the blackduck folks.
I do think we should do something about this, though. My suggestion
is that we should always presort in the planner if the SAOP argument
is a Const array, and then skip the run-time sort if the executor
sees the argument is a Const. Yes, there will be cases where the
plan-time sort is wasted effort, but not too darn many.
An alternative thought is that maybe the run-time sort is expensive
enough that the planner ought to account for it in its estimates.
However, that's a bit of a research project, and I don't think we'd
dare shove it into v17 at this point even if it turns out to fix
this particular case. But a pre-sort seems like a pretty safe change.
regards, tom lane