Robert Haas <robertmhaas@gmail.com> writes:
> On Wed, Feb 27, 2013 at 8:58 AM, Stephen Frost <sfrost@snowman.net> wrote:
>> It's still entirely possible to get 99% done and then hit that last
>> tuple that you need a lock on and just tip over the lock_timeout_stmt
>> limit due to prior waiting, wasting a bunch of work; hence I'm not
>> entirely sure that this is that much better than statement_timeout.
> I tend to agree that this should be based on the length of any
> individual lock wait, not the cumulative duration of lock waits.
> Otherwise, it seems like it'll be very hard to set this to a
> meaningful value. For example, if you set this to 1 minute, and that
> means the length of any single wait, then you basically know that
> it'll only kick in if there is some other, long-running transaction
> that's holding the lock. But if it means the cumulative length of all
> waits, it's not so clear, because now you might also have this kick in
> if you wait for 100ms on 600 different occasions. In other words,
> complex queries that lock or update many tuples may get killed even if
> they never wait very long at all for any single lock. That seems like
> it will be almost indistinguishable from random, unprincipled query
> cancellations.
Yeah. I'm also unconvinced that there's really much use-case territory
here that statement_timeout doesn't cover well enough. To have a case
that statement-level lock timeout covers and statement_timeout doesn't,
you need to suppose that you know how long the query can realistically
wait for all locks together, but *not* how long it's going to run in the
absence of lock delays. That seems a bit far-fetched, particularly when
thinking of row-level locks, whose cumulative timeout would presumably
need to scale with the number of rows the query will visit.
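To make the distinction concrete, a rough sketch of the knobs under
discussion (lock_timeout_stmt is just the name used upthread for the
proposed cumulative variant, a per-wait lock_timeout is equally
hypothetical at this point, and statement_timeout is what we already
ship):

    SET lock_timeout = '1min';       -- hypothetical per-wait limit: cancel only if
                                     -- a single lock acquisition blocks longer than
                                     -- one minute
    SET lock_timeout_stmt = '1min';  -- proposed cumulative limit: cancel once all
                                     -- lock waits in the statement add up to one
                                     -- minute, e.g. 600 waits of 100ms each
    SET statement_timeout = '5min';  -- existing behavior: cancel when total runtime
                                     -- exceeds five minutes, whether that time was
                                     -- spent waiting on locks or doing work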
If statement-level lock timeouts were cheap to add, that would be one
thing; but given that they're complicating the code materially, I think
we need a more convincing argument for them.
regards, tom lane