Alvaro Herrera wrote:
> On Sun, May 01, 2005 at 03:09:37PM +0300, adnandursun@asrinbilisim.com.tr wrote:
> > Process A starts to update / insert some rows in a table
> > and then the connection of process A to PostgreSQL is lost
> > before it sends commit or rollback. Other processes want to
> > update the same rows or SELECT ... FOR UPDATE the same
> > rows. Now these processes are stuck waiting on their SELECT,
> > or their query is cancelled if statement_timeout was set.
> > Imagine the number of these processes keeps growing. What
> > will we do now? Restart the backend, or find process A and
> > kill it?
>
> Well, if process A loses the connection to the client, then the
> transaction will be rolled back and other processes will be able to
> continue.
The problem, as I understand it, is that if you have a long-running
query and the client process disappears, the query keeps running and
holds whatever resources it may have until it finishes. In fact, it
keeps sending data to the client and keeps ignoring the SIGPIPE it
gets (in the case of a Unix-domain socket connection).
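
To make the failure mode concrete, here is a minimal, self-contained
sketch (not PostgreSQL source, just an illustration that uses a
socketpair to stand in for the client connection): once SIGPIPE is
ignored, a write to a vanished client fails with EPIPE, and unless
every write's return value is checked, the loop happily keeps
producing and "sending" rows until the whole "query" finishes.

    /*
     * Hypothetical sketch: server-side loop with SIGPIPE ignored.
     * The peer end of the socketpair is closed to simulate the
     * client disappearing mid-query.
     */
    #include <errno.h>
    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* Send one row; return -1 if the client is gone. */
    static int
    send_row(int fd, const char *row)
    {
        ssize_t n = send(fd, row, strlen(row), 0);

        if (n < 0 && (errno == EPIPE || errno == ECONNRESET))
            return -1;          /* peer has disconnected */
        return 0;
    }

    int
    main(void)
    {
        int sv[2];

        /*
         * Ignore SIGPIPE so writing to a closed socket returns EPIPE
         * instead of killing the process; the error then has to be
         * checked explicitly on every write.
         */
        signal(SIGPIPE, SIG_IGN);

        if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0)
            return 1;

        close(sv[1]);           /* simulate the client disappearing */

        for (long i = 0; i < 1000000; i++)
        {
            char row[64];

            snprintf(row, sizeof(row), "row %ld\n", i);
            if (send_row(sv[0], row) < 0)
            {
                /*
                 * A loop that skipped this check would keep running
                 * the "query" to completion, holding its resources.
                 */
                fprintf(stderr, "client gone after %ld rows, aborting\n", i);
                break;
            }
        }
        return 0;
    }

The only point of the sketch is that, with SIGPIPE ignored, noticing
a dead client is entirely up to whoever checks the result of each
write; in this sketch, a loop that is busy computing and never writes
would not notice at all.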
Now of course this has nothing to do with "high availability" and does
not warrant hijacking a thread about the release schedule, but it may
be worth investigating.
--
Peter Eisentraut
http://developer.postgresql.org/~petere/