Killing off old postgres processes in a friendly way? - Mailing list pgsql-admin

From Rainer Mager
Subject Killing off old postgres processes in a friendly way?
Date
Msg-id NEBBJBCAFMMNIHGDLFKGEECBCBAA.rmager@vgkk.co.jp
In response to Backend closed the channel unexpectedly  ("Marco A. Bravo" <marco@ife.org.mx>)
List pgsql-admin
Hi all,

    I believe something like this question has been asked here before but I
don't remember seeing an answer.

    Briefly, the problem we are having is that we sometimes open connections
(JDBC) to our database and then do not properly close them. The odd thing is
that postgres itself does not EVER seem to time them out and close them.
We've had processes over 2 weeks old that just sat there doing nothing.
Finally we restarted postgres to fix the problem.

    So, is there a setting for postgres (postmaster) so that it will time out
old, unused connections?
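[Editor's note: PostgreSQL of this era (7.x) had no such server-side timeout. Later releases added settings that address exactly this problem: TCP keepalive probes to detect clients that vanished without closing the socket, and explicit idle-session timeouts. A sketch of the relevant `postgresql.conf` lines, assuming a modern server (keepalive settings from 8.0+, the two timeouts from 9.6 and 14 respectively):]

```ini
# Detect clients that disappeared without closing the socket:
tcp_keepalives_idle = 60       # seconds of inactivity before the first probe
tcp_keepalives_interval = 10   # seconds between probes
tcp_keepalives_count = 5       # failed probes before the connection is dropped

# Disconnect sessions that sit idle (9.6+ and 14+ respectively):
idle_in_transaction_session_timeout = '10min'
idle_session_timeout = '30min'
```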

    In more detail:

    We have a Java application that uses JDBC to connect to a postgres
database. The app uses a connection pool to improve performance. When the
app starts up it creates some number of connections for this pool (e.g.,
10). During development, we are often debugging and killing the app in
mid-run. This means that it dies immediately without ever properly
closing the connections.
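[Editor's note: on the client side, a JVM shutdown hook can at least close the pool on a normal exit. The sketch below is hypothetical (`ConnectionPool` and `FakeConnection` are stand-in names, not JDBC API; a real pool would hold `java.sql.Connection` objects) and, importantly, a shutdown hook does not fire when the process is killed outright from a debugger, which is why a server-side timeout is still wanted:]

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for java.sql.Connection so the sketch runs
// without a database; a real pool would hold JDBC connections.
class FakeConnection implements AutoCloseable {
    boolean closed = false;
    @Override public void close() { closed = true; }
}

// Minimal pool that registers a JVM shutdown hook to close its
// connections. The hook only fires on a normal JVM exit -- it does
// NOT help when the process is killed hard (e.g. from a debugger).
class ConnectionPool {
    final List<FakeConnection> connections = new ArrayList<>();

    ConnectionPool(int size) {
        for (int i = 0; i < size; i++) {
            connections.add(new FakeConnection());
        }
        Runtime.getRuntime().addShutdownHook(new Thread(this::closeAll));
    }

    void closeAll() {
        for (FakeConnection c : connections) {
            c.close();
        }
    }
}

public class PoolCleanup {
    public static void main(String[] args) {
        ConnectionPool pool = new ConnectionPool(10);
        pool.closeAll();
        System.out.println("all closed: "
            + pool.connections.stream().allMatch(c -> c.closed));
    }
}
```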
    The result is that the processes exist on the postgres server machine
until we restart postmaster. It appears that this is not a problem in our
production system because the app is not debugged and killed there. However,
we would like to find a setting for postgres so that it proactively cleans up
old connections.
    How can this be done?
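[Editor's note: on modern PostgreSQL (9.2 or later, which has the `state` and `pid` columns in `pg_stat_activity` shown here), stale backends can also be cleaned up on demand without restarting postmaster. A sketch, terminating backends idle for more than an hour:]

```sql
-- Requires superuser or pg_signal_backend privileges.
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE state = 'idle'
  AND state_change < now() - interval '1 hour'
  AND pid <> pg_backend_pid();
```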


--Rainer

