Thread: How to close dead connections immediately
My ODBC client uses a single connection to a Postgres 8.1.3 server on W2K.
In case of a TCP error it closes the connection and re-opens it.
Unfortunately, in the server status window I see a number of connections
from the same IP address. In the log file I have lines like

2006-04-24 01:24:41 oksana 188.16.34.194 idle LOG:  could not receive data from client: No connection could be made because the target machine actively refused it.
2006-04-24 01:24:41 oksana 188.16.34.194 idle LOG:  unexpected EOF on client connection
2006-04-24 01:24:41 oksana 188.16.34.194 idle LOG:  disconnection: session time: 6:19:20.54  user=oksana database=mydb host=188.16.34.194 port=33625

This occurs at 1:24 at night, although the user exited my application a
number of hours earlier. How can I prevent dead connections?

My server and clients are on Windows, and the TCP_KEEPALIVE parameters
have their default settings. Is it possible to force Postgres to close
dead connections immediately, not after 6 hours?

Andrus.
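P.S. For reference, the server-side keepalive settings can be inspected
with SHOW. A minimal check, assuming the tcp_keepalives_* parameters of
8.1 are available in this build (a value of 0 means the operating-system
default is in effect):

    -- inspect the server-side keepalive settings (0 = OS default)
    SHOW tcp_keepalives_idle;
    SHOW tcp_keepalives_interval;
    SHOW tcp_keepalives_count;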
"Andrus" <eetasoft@online.ee> writes: > My ODBC client uses single connection to Postgres 8.1.3 server in W2K > In case of TCP error it closes connection and re-opens it. > ... > Is it possible to force postgres to close dead connections immediately, not > after 6 hours ? PG *will* close the connection immediately if it receives any notification of connection drop from the client. If it's not seeing one, that suggests something wrong in your network infrastructure. I'd suggest fixing the problem rather than kluging the symptom. You can reduce the TCP timeout settings if you are using PG 8.1 and an operating system that supports it (I have no idea if Windows does or not). I wouldn't recommend trying to make it "immediate" since then any network instability breaks your application. 5 minutes or so might be reasonable though. As you've noticed, the default timeouts are usually upwards of an hour. (You should however ask yourself if you really know more about TCP than the authors of the TCP specifications do.) regards, tom lane
> PG *will* close the connection immediately if it receives any
> notification of connection drop from the client. If it's not seeing
> one, that suggests something wrong in your network infrastructure.
> I'd suggest fixing the problem rather than kluging the symptom.

I have a W2K server that also runs Exchange and other applications. I
haven't seen any problems with the other applications.

Every morning, 8 persons from different locations each open a single
TCP connection over the internet to this Postgres 8.1.3 server using
ODBC. They close my application at midnight. Every day some clients
randomly receive an error from the ODBC driver:

SQL state: 08S01
Error number: 27
Message: "Error while executing the query;
server closed the connection unexpectedly
	This probably means the server terminated abnormally
	before or while processing the request."

In fewer cases the error message is "Could not send query to
backend;Could not send query to backend".

In this case my application sends a disconnect command to the ODBC
driver and re-connects. The re-connect is OK. After the first error I
see that there are 2 Postgres processes running for that user.

The clients use ADSL connections over phone lines maintained by the
phone company, and I have no control over this. Due to a design
requirement, my application checks for duplicate connections, and those
duplicate connections prevent it from working.

Any idea what I should change?

> You can reduce the TCP timeout settings if you are using PG 8.1 and an
> operating system that supports it (I have no idea if Windows does or
> not). I wouldn't recommend trying to make it "immediate", since then
> any network instability breaks your application; 5 minutes or so might
> be reasonable, though. As you've noticed, the default timeouts are
> usually upwards of an hour.

My queries never take more than 5 minutes. In case of connection loss
the application re-connects automatically. What are reasonable values
for the 3 TCP_KEEPALIVE_* parameters in this case?

Andrus.
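P.S. If Postgres on Windows turns out not to honour the tcp_keepalives_*
settings, the same timers can apparently be shortened at the
operating-system level instead. On W2K they live in the registry, are
given in milliseconds, apply machine-wide to every connection that has
SO_KEEPALIVE enabled, and need a reboot to take effect. The values below
are an illustrative sketch only, not a recommendation (on W2K the number
of probes is governed separately by TcpMaxDataRetransmissions, default 5):

    Windows Registry Editor Version 5.00

    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters]
    ; 300000 ms = 5 minutes before the first probe (default 7200000 = 2 hours)
    "KeepAliveTime"=dword:000493e0
    ; 10000 ms between unanswered probes (default 1000)
    "KeepAliveInterval"=dword:00002710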