On Fri, 13 Feb 2004, Tom Lane wrote:
> Stephan Szabo <sszabo@megazone.bigpanda.com> writes:
> > One thing is that IIRC we're going to ask for only one row when we do
> > the SPI_execp_current. However, unless I misremember, the behavior of
> > FOR UPDATE and LIMIT means that saying LIMIT 1 is potentially unsafe
> > (if you block on a row that goes away). Is there any way for us to let
> > the planner know this?
>
> I was looking at that last night. It seems like we could add a LIMIT at
> least in some contexts. In the case at hand, we're just going to error
> out immediately if we find a matching row, and so there's no need for
> FOR UPDATE, is there?
I think there still is, because a not-yet-committed transaction could have
deleted them all, in which case I think the correct behavior is to wait:
if that transaction commits, allow the action, and if it rolls back, raise
the error.
Really we'd want a different behavior where we block in these cases only
if all the matching rows are locked by other transactions.
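To make that concrete, the query the trigger runs against the referencing
table is roughly of this shape (names made up here, and simplified from
what ri_triggers.c actually builds):

    SELECT 1 FROM ONLY fktable x
     WHERE x.fkcol = $1        -- $1 is the old PK value
       FOR UPDATE OF x;

The FOR UPDATE is what makes us block on rows an uncommitted transaction
has deleted, which gives the wait-then-decide behavior above.  The trouble
with tacking on LIMIT 1 is that, as I understand it, the limit is applied
before the row lock is taken, so if the one row we stopped at goes away
while we wait, the query can return nothing even though other matching
rows still exist, and the check would incorrectly pass.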
> However, I'm not sure it would help the OP anyway. With the stats he
> had, the planner would still take a seqscan, because it's going to
> expect that it can find a match by probing the first ten or so rows of
> the first page. With anything close to the normal cost parameters,
> that's going to look more expensive than an index probe. Possibly if
> the table had a few more values it would work.
Hmm, that's true. It also doesn't help the real actions (CASCADE, SET
NULL/SET DEFAULT), since those really do need to get at all the matching
rows, but it probably helps in a reasonable number of cases.
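As a made-up illustration of the costing issue (the plan is what I'd
expect with default cost settings, not from an actual run):

    CREATE TABLE fktable (fkcol int);
    -- imagine ~100k rows but only a handful of distinct fkcol values
    ANALYZE fktable;
    EXPLAIN SELECT 1 FROM fktable WHERE fkcol = 42 LIMIT 1;
    -- the planner expects a match within the first few heap rows, so a
    -- seqscan under the limit is costed cheaper than an index probe

So even with the LIMIT in place we'd likely still see the seqscan for the
OP's data distribution.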