Thread: Backends created by ODBC live forever ...

Backends created by ODBC live forever ...

From: Hans-Jürgen Schönig
Date:
We have an application which syncs Access databases with PostgreSQL (I 
know that this is VERY ugly). It is a simple script-based Access 
application. People wanted that application because they are familiar 
with Microsoft stuff. When Access establishes a connection to PostgreSQL 
everything is just fine. If Access is closed properly everything is 
still fine - the server-side backend dies just the way it is supposed to 
die. However, if Access crashes or if the dial-up connection goes down, 
the connection is not closed explicitly and the server-side backend 
stays alive forever - even if the Windows client is rebooted.

As you can easily imagine, there are MANY dead backends around at 
the end of the day. Is there a way for PostgreSQL to find out that the 
connection is lost? This problem occurs in combination with ODBC - I 
haven't seen it anywhere else up to now.

Did anybody encounter similar problems?
   Regards,
       Hans
<http://kernel.cybertec.at>



Re: Backends created by ODBC live forever ...

From: Tom Lane
Date:
Hans-Jürgen Schönig <postgres@cybertec.at> writes:
> ... However, if Access crashes or if the dial-up connection goes down 
> the connection is not closed explicitly and the server-side backend 
> stays alive forever - even if the Windows client is rebooted.

It should time out and exit after an hour or so, I'd expect.  It sounds
like the TCP_KEEPALIVE code on the server side is failing to detect loss
of connection.  What is the server-side platform, exactly?
        regards, tom lane
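[Editor's note: the TCP keepalive mechanism Tom refers to can be sketched at the socket level. This is a minimal illustration, not PostgreSQL's actual code.]

```python
import socket

# Minimal sketch (not PostgreSQL's actual code): enabling TCP keepalive
# on a socket.  With SO_KEEPALIVE set, the kernel periodically probes an
# idle peer; if the probes go unanswered, the next read on the socket
# fails instead of blocking forever.  The catch is the default timing:
# typically around two hours of idleness before the first probe is sent,
# which matches the "hour or so" timeout described above.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)

print(sock.getsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE))  # nonzero once enabled
```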


Re: Backends created by ODBC live forever ...

From: Tom Lane
Date:
Hans-Jürgen Schönig <hs@cybertec.at> writes:
> As far as I have seen it did not time out - at least it seems that it 
> doesn't. Also, if we were running a system with many unreliable clients 
> we'd run out of memory soon. It is also a potential security problem.

I do not see a security issue.  There should be a TCP timeout, but it
may well be long enough (order of hours) to not be very useful to you.

> If we had something like: "max_connections_from IP" or "backend_timeout 
> TIME" we could solve the problem more easily. How about that?

max_connections_from_IP would not solve your problem: it would amount to
denying service to individual clients until whenever the TCP stack did
finally time out.

We've considered and rejected a backend-side timeout before.  I'd be
willing to consider a timeout that only runs while the backend is idle
and not inside a transaction block, but I'm not sure that solves your
problem either.  Clients that send BEGIN and then go to sleep would
defeat such a timeout.  Clients that crash mid-transaction would be a
problem too --- although I believe the kernel TCP stack will time out much
more quickly if it is trying to push unsent data than when it thinks the
connection is idle, so clients that crash while receiving data may not
be too much of a problem.

The ideal solution would really be to get your kernel to issue TCP
KEEPALIVE queries at some shorter interval than what the TCP RFCs call
for, like after a few minutes of inactivity instead of an hour or more.
I dunno whether this is adjustable in the Linux kernel, but I'd suggest
looking there first.  I don't feel very comfortable with having Postgres
second-guess the transport mechanism about whether a connection is still
alive.
        regards, tom lane
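[Editor's note: the kernel tuning Tom suggests can be sketched as follows. System-wide on Linux the defaults live in /proc/sys/net/ipv4/tcp_keepalive_time, tcp_keepalive_intvl, and tcp_keepalive_probes; later kernels also allow per-socket overrides, shown here. The specific numbers are illustrative, not a recommendation.]

```python
import socket

# Hypothetical sketch of shortening the keepalive timers so a vanished
# client is noticed after minutes rather than hours.  With these
# illustrative values (180 s idle before the first probe, 30 s between
# probes, 4 probes), a dead peer is detected after at most
# 180 + 4 * 30 = 300 seconds.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)

if hasattr(socket, "TCP_KEEPIDLE"):  # these constants are Linux-only
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 180)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 30)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 4)
```

Per-socket tuning avoids touching the system-wide defaults, which other services on the same host may depend on.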