Hello,
I have problems with dropped database connections, as described in the
online FAQ (quoted below).
While the explanation is interesting, it's not really up to snuff for
most of us. I don't know how to set up an SSH tunnel, nor should I have
to just to get pgAdmin working. It would be nice if pgAdmin could be as
robust as possible, even in the less-than-ideal environments where most
of us work in the real world.
I ran into the same problem in my Java applications. I solved it as
follows: whenever my app wants to talk to the database and the
connection turns out to have been closed unexpectedly, it opens a new
connection and tries again. If THAT doesn't work, well then, it has a
real error to report. Otherwise, things silently "just work" for the
user. It works like a charm in my applications.
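Roughly, the core of it looks like the sketch below (plain JDBC; the
URL and credentials are placeholders, Connection.isValid() needs a
JDBC 4.0 driver, and a production version would also close the dead
connection, so treat this as an illustration rather than finished code):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class RetryingDb {
    // Placeholders -- substitute your own server, database and credentials.
    private static final String URL  = "jdbc:postgresql://dbhost:5432/mydb";
    private static final String USER = "bob";
    private static final String PASS = "secret";

    private Connection conn;

    // Hand back the cached connection, reconnecting if it was never
    // opened or has silently died (isValid() pings the server).
    private Connection connection() throws SQLException {
        if (conn == null || !conn.isValid(2)) {
            conn = DriverManager.getConnection(URL, USER, PASS);
        }
        return conn;
    }

    // Run a query; if it fails, assume the connection was dropped,
    // reconnect, and retry exactly once. A second failure is a real error.
    public String serverVersion() throws SQLException {
        try {
            return queryVersion();
        } catch (SQLException e) {
            conn = null;           // discard the (presumably dead) connection
            return queryVersion(); // if THIS fails, report the error
        }
    }

    private String queryVersion() throws SQLException {
        try (Statement st = connection().createStatement();
             ResultSet rs = st.executeQuery("SELECT version()")) {
            rs.next();
            return rs.getString(1);
        }
    }
}

The important property is that a silently dropped connection costs the
user nothing: the retry happens once, behind the scenes, and only a
second consecutive failure surfaces as an error.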
I understand that closing and re-opening a connection loses cursors,
temporary tables and other connection-based state. But almost all the
time, that state isn't being used in pgAdmin anyway. For the most part,
people (myself included) use pgAdmin to look at table structures and
fire off sample SQL queries. So this fix would work 95% of the time and
make our lives a whole lot easier. It's certainly much better than the
current state of affairs, in which the user is forced either to
needlessly re-click through the database tree on the left, or to save
the contents of an SQL window only to re-open a new window and paste
the contents back in.
In other words --- EVEN IF the DB connection is closed by accident,
pgAdmin should not require the user to do extra work to open it back up
again.
Sincerely,
--- Bob
Connection to database dropped
I'm connecting to the database server through a firewall. After some
minutes of inactivity, the connection to the database is dropped. Some
admins report orphaned backend processes that are never terminated.
Unfortunately, some network administrators or default firewall settings
apply rules meant for external web server access to internal database
traffic as well. After some minutes of inactivity, the TCP/IP
connection is interrupted without notifying either side. As a result,
the backend doesn't know that the connection to the client (in this
case pgAdmin III, but any other PostgreSQL client would be affected in
the same way) is gone; it thinks the connection is merely idle, and
will continue waiting for a next query that never arrives.
The PostgreSQL interface uses the TCP/IP protocol stack, which by
design provides a reliable connection between the two end points: if
the connection is interrupted, the protocol stack notices and notifies
the interface. In contrast, a firewall that silently stops forwarding
packets belonging to a connection after some arbitrary timeout violates
basic TCP/IP principles.
While this might be perfectly reasonable for browser connections to web
servers, which are designed to cope with undetectable interruptions, it
is not acceptable for database connections carrying sensitive data.
Unfortunately, many network administrators don't understand this vital
difference, and others can't change the firewall configuration because
of hardware, software, or policy restrictions. The officially
recommended solution is to ask your network administrator to vastly
increase, or better yet disable, the TCP/IP connection watchdog timeout
on the PostgreSQL port (usually 5432), restoring RFC-compliant protocol
behaviour at the firewall.
If there's absolutely no way to accomplish this, you can use an SSH
tunnel for PostgreSQL traffic. SSH can be configured to keep the
channel open at all times, so that database traffic still passes even
after a prolonged period of inactivity. For details on how to configure
this, consult your SSH package's documentation on "tunneling". [AP]
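For example, with OpenSSH the tunnel could be set up like this (the
user name, host name, and local port 5433 are placeholders):

    ssh -N -L 5433:localhost:5432 -o ServerAliveInterval=60 user@dbhost

pgAdmin then connects to localhost, port 5433, instead of to the
database server directly. The -N option forwards ports without running
a remote command, and ServerAliveInterval makes the SSH client exchange
a keepalive message with the server every 60 seconds, so the firewall
keeps seeing traffic on the connection.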