The best solution would be to fix the web app, but...
Could you set deadlock_timeout in postgresql.conf? According to the
documentation, won't that abort one of the transactions holding the lock
on the table or row?
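
For reference, the setting looks something like this in postgresql.conf
(the value below is only an example; older releases take plain
milliseconds, newer ones also accept units such as '1s'):

    # Time to wait on a lock before checking for a deadlock condition.
    # If a deadlock is found, one of the transactions involved is aborted.
    deadlock_timeout = 1000        # milliseconds
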
On Fri, 2004-07-09 at 09:36, Konstantin Pelepelin wrote:
> Hi!
>
> In situations where the client application is unstable, there is a very
> big chance that it will enter a transaction, acquire very restrictive
> locks and then hang. This means the connection is not lost, but nothing
> happens, and any transaction that conflicts with those locks will hang
> too.
> statement_timeout can help to detect this situation, but not to solve
> it: most of the applications still fail. Currently the only solution I
> see is to kill the locked postgres backend, or (more difficult) to find
> the hanging client among hundreds of similar ones.
> I work with a web application (Apache-mod_php-Postgres) where PHP hangs
> every 100000 requests or so, which means from a day to a few weeks
> between whole-system hangs.
> I could set up a cron job that kills every postgres backend "idle in
> transaction" that started more than 10 minutes ago, but that could
> accidentally break some long-running tasks.
> The best solution I see is having a (per-session)
> idle_transaction_timeout or idle_connection_timeout setting.
> But there is no such thing in postgres!
> Scanning through the list archives, I have not found any workaround.
> Looking at the TODO, I don't see any plans for it.
> Is it too difficult to implement, have I missed something, or are there
> workarounds?
>
> Hoping for your response,
> Konstantin Pelepelin
>
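
Regarding the cron-job workaround mentioned above, here is a rough
sketch, assuming a PostgreSQL version where pg_stat_activity exposes
pid, state and state_change and where pg_terminate_backend() exists
(column and function names differ on older releases):

    -- Terminate backends that have sat idle in a transaction
    -- for more than 10 minutes.
    SELECT pg_terminate_backend(pid)
      FROM pg_stat_activity
     WHERE state = 'idle in transaction'
       AND state_change < now() - interval '10 minutes';

Run it from cron via psql as a superuser; the 10-minute threshold has
to be long enough not to catch your legitimate long-running
transactions.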