On Wed, Feb 27, 2013 at 8:58 AM, Stephen Frost <sfrost@snowman.net> wrote:
> * Boszormenyi Zoltan (zb@cybertec.at) wrote:
>> But unlike statement_timeout,
>> with lock_timeout_stmt the statement can still finish after this limit
>> as it does useful work besides waiting for locks.
>
> It's still entirely possible to get 99% done, hit that last tuple
> that you need a lock on, and tip over the lock_timeout_stmt limit
> due to prior waiting, wasting a bunch of work, which is why I'm not
> entirely sure that this is much better than statement_timeout.
I tend to agree that this should be based on the length of any
individual lock wait, not the cumulative duration of lock waits.
Otherwise, it seems like it'll be very hard to set this to a
meaningful value. For example, if you set this to 1 minute, and that
means the length of any single wait, then you basically know that
it'll only kick in if there is some other, long-running transaction
that's holding the lock. But if it means the cumulative length of all
waits, it's not so clear, because now you might also have this kick in
if you wait for 100ms on 600 different occasions. In other words,
complex queries that lock or update many tuples may get killed even if
they never wait very long at all for any single lock. That seems like
it will be almost indistinguishable from random, unprincipled query
cancellations.
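To make the difference concrete, here is a minimal sketch of the two
accounting policies (the names and structure are invented for
illustration; this is not code from the patch):

#include <stdbool.h>

typedef struct
{
    int lock_timeout_ms;   /* the GUC value, e.g. 60000 for one minute */
    int total_waited_ms;   /* accumulated across the whole statement */
} LockWaitState;

/* Per-wait semantics: only a single long wait can trip the timeout. */
static bool
timed_out_per_wait(const LockWaitState *s, int this_wait_ms)
{
    return this_wait_ms > s->lock_timeout_ms;
}

/* Cumulative semantics: short waits add up, so a complex query can be
 * cancelled without ever having blocked for long on any one lock. */
static bool
timed_out_cumulative(LockWaitState *s, int this_wait_ms)
{
    s->total_waited_ms += this_wait_ms;
    return s->total_waited_ms > s->lock_timeout_ms;
}

With the limit at 60000ms, 600 waits of 100ms each never trip the
per-wait check, but the cumulative check fires as soon as the waits
total more than a minute.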
Now, you could try to work around that by varying the setting based on
the complexity of the query you're about to run, but that seems like a
pain in the neck, to put it mildly. And it might still not give you
the behavior that you want. Personally, I'd think a big part of the
appeal of this is to make sure that you don't hang waiting for a
RELATION level lock for too long before giving up. And for that,
scaling with the complexity of the query would be exactly the wrong
thing to do, even if you could figure out some system for it.
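The per-wait form is also the easy one to reason about for that use
case. As a sketch, assuming per-wait semantics and using lock_timeout
as the GUC name (the table is just an example):

SET lock_timeout = '1min';

-- If another session holds a conflicting relation-level lock for more
-- than a minute, fail fast instead of hanging indefinitely:
ALTER TABLE accounts ADD COLUMN note text;

Here one minute bounds the single wait for the AccessExclusiveLock on
the table, regardless of how complex the statement is.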
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company