On 06/19/2012 09:29 AM, Mark Rostron wrote:
> hi
>
> we are running out of database connections.
>
> we are using PostgreSQL 9.0.6 on Linux (CentOS 5.7, 64-bit).
> we are not using any intermediate connection pooler such as pgbouncer or
> pgpool - connections go directly from client to database.
> the connection setup on the client (Java) is the default, providing only
> (user, password, dbhost, dbname).
>
> we have about 10 developers writing Java through IDEA who start/stop
> the local Tomcat server frequently.
> I have observed that Tomcat doesn't disconnect from PostgreSQL cleanly
> when they cycle, and the server processes persist for a long time.
> I have had them reduce their local connection factory pool size to 1
> (this helped) and increased our max_connections value to 1000,
> yet the problem persists.
>
> I have noticed that the server processes do die "after some time" -
> due to inactivity?
> we are looking for a way to control server processes better than we
> are doing now.
>
> thanks for your time.
> mr
>
>
I am not aware of any server setting like max_connection_idle_time (though
it might be a useful addition). I have not had to adjust keepalive settings
myself, but you might be able to tune them (PostgreSQL's tcp_keepalives_*
parameters, or the equivalent OS sysctls) to reduce the delay before a
backend whose client has vanished gets terminated. Note that keepalives
only apply to TCP connections, not to local Unix-socket connections.
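For reference, the PostgreSQL-level knobs look roughly like this; the
numbers below are illustrative placeholders, not recommendations:

    # postgresql.conf -- keepalive tuning, values are examples only
    tcp_keepalives_idle = 300        # seconds of inactivity before the first probe
    tcp_keepalives_interval = 30     # seconds between probes
    tcp_keepalives_count = 3         # failed probes before the connection is dropped

The corresponding Linux sysctls are net.ipv4.tcp_keepalive_time,
net.ipv4.tcp_keepalive_intvl and net.ipv4.tcp_keepalive_probes.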
You could hack together a tailored solution by having cron run a script
that queries pg_stat_activity for backends whose current_query is '<IDLE>'
and whose backend_start age is greater than whatever you find reasonable,
and then executes pg_terminate_backend() on those PIDs (a sketch follows
below). You could even keep a table of developer IP addresses and, by
matching against client_addr, terminate only those backends. Alternately,
if the Tomcat instances connected on a different port you could kill only
those.
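Something along these lines, untested and with placeholder names - the
one-hour threshold, the postgres user/database, and the developer_hosts
table are all things you would substitute yourself (and the role running
this needs to be a superuser to call pg_terminate_backend):

    -- kill_idle.sql: terminate backends idle longer than the threshold
    SELECT pg_terminate_backend(procpid)
      FROM pg_stat_activity
     WHERE current_query = '<IDLE>'                    -- idle backends only
       AND backend_start < now() - interval '1 hour'   -- adjust to taste
       AND procpid <> pg_backend_pid();                -- never kill ourselves

    # crontab entry, e.g. run every 10 minutes
    */10 * * * *  psql -U postgres -d postgres -f /path/to/kill_idle.sql

Restricting it to the developers' machines would just be an extra
"AND client_addr IN (SELECT addr FROM developer_hosts)" against a table
you maintain yourself.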
Cheers,
Steve