Connection timeout issues and JDBC - Mailing list pgsql-general

From Kelvin Lau
Subject Connection timeout issues and JDBC
Date
Msg-id 20d676e7-a2d4-da48-891e-0a1dd4e9048d@hku.hk
Whole thread Raw
Responses Re: Connection timeout issues and JDBC
List pgsql-general

Hello psql community,

I have been using Python for CRUD operations against the database. I ran into problems with long-running queries (both SELECT and COPY, since the data is fairly large): the connection gets dropped around the 2~3 hour mark, and I have no idea what is wrong. I also don't know the details of how my workstation is routed to the server.

But I managed to work around the issue by passing a few extra parameters to psycopg2:

import psycopg2

conn = psycopg2.connect(host="someserver.hk",
                        port=12345,
                        dbname="ohdsi",
                        user="admin",
                        password="admin1",
                        options="-c search_path=" + schema,
                        # it seems the lines below are needed to keep the connection alive
                        connect_timeout=10,
                        keepalives=1,
                        keepalives_idle=5,
                        keepalives_interval=2,
                        keepalives_count=5)

It looks like those keepalives* parameters keep the connection alive, so the long queries can run day and night.
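As I understand it, psycopg2 passes these keepalives* options through libpq, which just sets the standard TCP keepalive options on the client socket. A minimal Python sketch of what gets set under the hood (option names are Linux's; other platforms spell the probe-timing options differently, hence the hasattr guards):

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# keepalives=1: enable TCP keepalive probes on the socket
s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
# keepalives_idle / keepalives_interval / keepalives_count equivalents,
# applied only where the platform exposes the option name
for name, val in (("TCP_KEEPIDLE", 5),
                  ("TCP_KEEPINTVL", 2),
                  ("TCP_KEEPCNT", 5)):
    if hasattr(socket, name):
        s.setsockopt(socket.IPPROTO_TCP, getattr(socket, name), val)
```

So the probes are what stop some firewall/NAT in between from timing out the idle connection while the server is still busy with the query.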

The problem now is that I am forced to use R and JDBC for a body of code, because a lot of the analyses are written in R. The same issue, where a long query gets dropped around the 2~3 hour mark, has shown up again with R/JDBC. How can I work around it?

I have tried putting tcpKeepAlive=true in the connection URL, but it seems to give mixed results. Do I also have to set tcp_keepalives_interval or tcp_keepalives_count? What are some recommended values for these parameters?
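In case it matters: my understanding is that tcpKeepAlive is a pgJDBC connection parameter that goes in the URL (e.g. jdbc:postgresql://someserver.hk:12345/ohdsi?tcpKeepAlive=true), while tcp_keepalives_idle/interval/count are server-side settings that can also be issued per session after connecting. A sketch of what I was considering (the values are just guesses, and 0 means "use the OS default"):

```sql
-- Server-side TCP keepalive settings, set for the current session
SET tcp_keepalives_idle = 60;      -- seconds idle before the first probe
SET tcp_keepalives_interval = 10;  -- seconds between probes
SET tcp_keepalives_count = 5;      -- unanswered probes before the server gives up
```

Is that the right way to combine the client-side and server-side knobs?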

Are there any other possible solutions?

Thanks

