Thread: Many connections lingering

Many connections lingering

From: Slavisa Garic

Hi all,

I've just noticed an interesting behaviour with PGSQL. My software is
made up of a few different modules that interact through a PGSQL database.
Almost every query they run is an individual transaction, and there is a
good reason for that: after every query there is some processing done
by those modules, and I didn't want to lock the database in a single
transaction while that processing is happening. Now, the
interesting behaviour is this. I've run netstat on the machine where
my software is running and I searched for TCP connections to my PGSQL
server. What I found was hundreds of lines like this:

tcp        0      0 remus.dstc.monash:43001 remus.dstc.monash:39504 TIME_WAIT
tcp        0      0 remus.dstc.monash:43001 remus.dstc.monash:40720 TIME_WAIT
tcp        0      0 remus.dstc.monash:43001 remus.dstc.monash:39135 TIME_WAIT
tcp        0      0 remus.dstc.monash:43002 remus.dstc.monash:41631 TIME_WAIT
tcp        0      0 remus.dstc.monash:43001 remus.dstc.monash:41119 TIME_WAIT
tcp        0      0 remus.dstc.monash:43001 remus.dstc.monash:41311 TIME_WAIT
tcp        0      0 remus.dstc.monash.:8649 remus.dstc.monash:41369 TIME_WAIT
tcp        0      0 remus.dstc.monash:43001 remus.dstc.monash:40479 TIME_WAIT
tcp        0      0 remus.dstc.monash:43001 remus.dstc.monash:39454 TIME_WAIT
tcp        0      0 remus.dstc.monash:43001 remus.dstc.monash:39133 TIME_WAIT
tcp        0      0 remus.dstc.monash:43002 remus.dstc.monash:41501 TIME_WAIT
tcp        0      0 remus.dstc.monash:43001 remus.dstc.monash:39132 TIME_WAIT
tcp        0      0 remus.dstc.monash:43001 remus.dstc.monash:41308 TIME_WAIT
tcp        0      0 remus.dstc.monash:43002 remus.dstc.monash:40667 TIME_WAIT
tcp        0      0 remus.dstc.monash:43001 remus.dstc.monash:41179 TIME_WAIT
tcp        0      0 remus.dstc.monash:43001 remus.dstc.monash:39323 TIME_WAIT
tcp        0      0 remus.dstc.monash:43001 remus.dstc.monash:41434 TIME_WAIT
tcp        0      0 remus.dstc.monash:43002 remus.dstc.monash:40282 TIME_WAIT
tcp        0      0 remus.dstc.monash:43001 remus.dstc.monash:41050 TIME_WAIT
tcp        0      0 remus.dstc.monash:43001 remus.dstc.monash:41177 TIME_WAIT
tcp        0      0 remus.dstc.monash:43001 remus.dstc.monash:39001 TIME_WAIT
tcp        0      0 remus.dstc.monash:43001 remus.dstc.monash:41305 TIME_WAIT
tcp        0      0 remus.dstc.monash:43001 remus.dstc.monash:38937 TIME_WAIT
tcp        0      0 remus.dstc.monash:43001 remus.dstc.monash:39128 TIME_WAIT
tcp        0      0 remus.dstc.monash:43001 remus.dstc.monash:40600 TIME_WAIT
tcp        0      0 remus.dstc.monash:43002 remus.dstc.monash:41624 TIME_WAIT
tcp        0      0 remus.dstc.monash:43002 remus.dstc.monash:39000 TIME_WAIT

Now, could someone explain to me what this really means and what effect
it might have on the machine (the same machine where I ran this
query)? Would there eventually be a shortage of available ports if
this kept growing? The reason I am asking is that one of my
modules was raising an exception saying that a TCP connection could not
be established to a server it needed to connect to. This may sound
confusing, so I'll try to explain.

We have this scenario: there is a PGSQL server (postmaster) running on
machine A. Then there is a custom server called DBServer running on
machine B. This server accepts connections from a client called an
Agent. An Agent may run on any machine out there and connect back to
the DBServer asking for some information. The communication between
the two is in the form of SQL queries. When an Agent sends a query to
the DBServer, the DBServer passes that query to the postmaster on
machine A and then passes the result back to that Agent. The
connection problem I mentioned above happens when an Agent tries to
connect to the DBServer.

So the only question I have here is: would those lingering socket
connections above have any effect on the problem I am having? If not,
I am sorry for bothering you all with this; if yes, I would like to
know what I could do to avoid it.

Any help would be appreciated,
Regards,
Slavisa

Re: Many connections lingering

From: Tom Lane

Slavisa Garic <sgaric@gmail.com> writes:
> ... Now, the
> interesting behaviour is this. I've run netstat on the machine where
> my software is running and I searched for TCP connections to my PGSQL
> server. What I found was hundreds of lines like this:

> tcp        0      0 remus.dstc.monash:43001 remus.dstc.monash:39504 TIME_WAIT
> tcp        0      0 remus.dstc.monash:43001 remus.dstc.monash:40720 TIME_WAIT
> tcp        0      0 remus.dstc.monash:43001 remus.dstc.monash:39135 TIME_WAIT

This is a network-level issue: the TCP stack on your machine knows the
connection has been closed, but it hasn't seen an acknowledgement of
that fact from the other machine, and so it's remembering the connection
number so that it can definitively say "that connection is closed" if
the other machine asks.  I'd guess that either you have a flaky network
or there's something bogus about the TCP stack on the client machine.
An occasional dropped FIN packet is no surprise, but hundreds of 'em
are suspicious.

> Now, could someone explain to me what this really means and what effect
> it might have on the machine (the same machine where I ran this
> query)? Would there eventually be a shortage of available ports if
> this kept growing? The reason I am asking is that one of my
> modules was raising an exception saying that a TCP connection could not
> be established to a server it needed to connect to.

That kinda sounds like "flaky network" to me, but I could be wrong.
In any case, you'd have better luck asking kernel or network hackers
about this than database weenies ;-)

            regards, tom lane

Re: [PERFORM] Many connections lingering

From: Greg Stark

Tom Lane <tgl@sss.pgh.pa.us> writes:

> Slavisa Garic <sgaric@gmail.com> writes:
> > ... Now, the
> > interesting behaviour is this. I've run netstat on the machine where
> > my software is running and I searched for TCP connections to my PGSQL
> > server. What I found was hundreds of lines like this:
>
> > tcp        0      0 remus.dstc.monash:43001 remus.dstc.monash:39504 TIME_WAIT
> > tcp        0      0 remus.dstc.monash:43001 remus.dstc.monash:40720 TIME_WAIT
> > tcp        0      0 remus.dstc.monash:43001 remus.dstc.monash:39135 TIME_WAIT
>
> This is a network-level issue: the TCP stack on your machine knows the
> connection has been closed, but it hasn't seen an acknowledgement of
> that fact from the other machine, and so it's remembering the connection
> number so that it can definitively say "that connection is closed" if
> the other machine asks.  I'd guess that either you have a flaky network
> or there's something bogus about the TCP stack on the client machine.
> An occasional dropped FIN packet is no surprise, but hundreds of 'em
> are suspicious.

No, what Tom's describing is a different pair of states called FIN_WAIT_1 and
FIN_WAIT_2. TIME_WAIT isn't waiting for a packet, just a timeout. This is to
prevent any delayed packets from earlier in the connection causing problems
with a subsequent good connection. Otherwise you could get data from the old
connection mixed in the data for later ones.

> > Now, could someone explain to me what this really means and what effect
> > it might have on the machine (the same machine where I ran this
> > query)? Would there eventually be a shortage of available ports if
> > this kept growing? The reason I am asking is that one of my
> > modules was raising an exception saying that a TCP connection could not
> > be established to a server it needed to connect to.

What it does indicate is that each query you're making is probably not just a
separate transaction but a separate TCP connection. That's probably not
necessary. If you have a single long-lived process you could just keep the TCP
connection open and issue a COMMIT after each transaction. That's what I would
recommend doing.
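
To make that concrete, here is a rough sketch, assuming the modules are
Python talking to Postgres through psycopg2 (which is just a guess on my
part; the connection details and query are placeholders):

# Sketch only: open the connection once, then run each query as its own
# short transaction. Assumes Python + psycopg2; host/dbname/user and the
# query are made-up placeholders.
import psycopg2

conn = psycopg2.connect(host="dbhost", dbname="mydb", user="agent")
cur = conn.cursor()

for item in ("a", "b", "c"):              # stand-in for the real work loop
    cur.execute("SELECT %s;", (item,))    # placeholder query
    rows = cur.fetchall()
    conn.commit()     # the transaction ends here and locks are released,
                      # but the TCP connection stays open
    # ... per-query processing happens here, outside any transaction ...

cur.close()
conn.close()          # only this close leaves a socket in TIME_WAIT

Done that way, the TIME_WAIT entries should only show up when a module
exits, not once per query.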


Unless you have thousands of these TIME_WAIT connections they probably aren't
actually directly the cause of your failure to establish connections. But yes
it can happen.

What's more likely happening here is that you're stressing the server by
issuing so many connection attempts that you're triggering some bug, either in
the TCP stack or in Postgres, that is causing some connection attempts not
to be handled properly.

I'm skeptical that there's a bug in Postgres since lots of people do in fact
run web servers configured to open a new connection for every page. But this
wouldn't happen to be a Windows server would it? Perhaps the networking code
in that port doesn't do the right thing in this case?

--
greg

Re: [PERFORM] Many connections lingering

From: Tom Lane

Greg Stark <gsstark@mit.edu> writes:
> Tom Lane <tgl@sss.pgh.pa.us> writes:
>> This is a network-level issue: the TCP stack on your machine knows the
>> connection has been closed, but it hasn't seen an acknowledgement of
>> that fact from the other machine, and so it's remembering the connection
>> number so that it can definitively say "that connection is closed" if
>> the other machine asks.

> No, what Tom's describing is a different pair of states called FIN_WAIT_1 and
> FIN_WAIT_2. TIME_WAIT isn't waiting for a packet, just a timeout.

D'oh, obviously it's been too many years since I read Stevens ;-)

So AFAICS this status report doesn't actually indicate any problem,
other than massively profligate use of separate connections.  Greg's
correct that there's some risk of resource exhaustion at the TCP level,
but it's not very likely.  I'd be more concerned about the amount of
resources wasted in starting a separate Postgres backend for each
connection.  PG backends are fairly heavyweight objects --- if you
are at all concerned about performance, you want to get a decent number
of queries done in each connection.  Consider using a connection pooler.
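
As an illustration only (and assuming the client side were Python with
psycopg2, which I'm just guessing at), even the pool class that ships
with the driver would be a reasonable starting point; the connection
details here are placeholders:

# Rough sketch, not a recommendation of any particular pooler.
from psycopg2 import pool

db_pool = pool.ThreadedConnectionPool(
    minconn=2, maxconn=10,
    host="dbhost", dbname="mydb", user="agent")

def with_connection(work):
    """Borrow a backend that is already running, use it, hand it back."""
    conn = db_pool.getconn()
    try:
        result = work(conn)
        conn.commit()
        return result
    finally:
        db_pool.putconn(conn)   # returned to the pool, not closed

The point is simply that the backend startup cost gets paid once per
pooled connection rather than once per query.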

            regards, tom lane

Re: [PERFORM] Many connections lingering

From: Slavisa Garic

Hi Greg,

This is not a Windows server. Both the server and the client are on
the same machine (done for testing purposes), which is a Fedora Core 2
machine. This also happens with a Debian server and client, in which
case they were two separate machines.

There are thousands (2+) of these waiting around, and each one of them
disappears after 50-ish seconds. I tried the psql command line and
monitored that connection in netstat. After I did a graceful exit
(\quit) the connection changed to TIME_WAIT and it sat there
for around 50 seconds. I could do what you suggested and keep one
connection open, making each query a full BEGIN/QUERY/COMMIT
transaction, but I was hoping to avoid that :).

This is a serious problem for me as there are multiple users using our
software on our server and I would want to avoid having connections
open for a long time. In the scenario mentioned below I haven't
explained the magnitude of the communications happening between Agents
and DBServer. There could possibly be 100 or more Agents per
experiment, per user running on remote machines at the same time,
hence we need short transactions/pgsql connections. Agents need a
reliable connection because failure to connect could mean a loss of
computation results that were gathered over long periods of time.

Thanks for the help by the way :),
Regards,
Slavisa

On 12 Apr 2005 23:27:09 -0400, Greg Stark <gsstark@mit.edu> wrote:
>
> Tom Lane <tgl@sss.pgh.pa.us> writes:
>
> > Slavisa Garic <sgaric@gmail.com> writes:
> > > ... Now, the
> > > interesting behaviour is this. I've run netstat on the machine where
> > > my software is running and I searched for TCP connections to my PGSQL
> > > server. What I found was hundreds of lines like this:
> >
> > > tcp        0      0 remus.dstc.monash:43001 remus.dstc.monash:39504 TIME_WAIT
> > > tcp        0      0 remus.dstc.monash:43001 remus.dstc.monash:40720 TIME_WAIT
> > > tcp        0      0 remus.dstc.monash:43001 remus.dstc.monash:39135 TIME_WAIT
> >
> > This is a network-level issue: the TCP stack on your machine knows the
> > connection has been closed, but it hasn't seen an acknowledgement of
> > that fact from the other machine, and so it's remembering the connection
> > number so that it can definitively say "that connection is closed" if
> > the other machine asks.  I'd guess that either you have a flaky network
> > or there's something bogus about the TCP stack on the client machine.
> > An occasional dropped FIN packet is no surprise, but hundreds of 'em
> > are suspicious.
>
> No, what Tom's describing is a different pair of states called FIN_WAIT_1 and
> FIN_WAIT_2. TIME_WAIT isn't waiting for a packet, just a timeout. This is to
> prevent any delayed packets from earlier in the connection causing problems
> with a subsequent good connection. Otherwise you could get data from the old
> connection mixed in the data for later ones.
>
> > > Now, could someone explain to me what this really means and what effect
> > > it might have on the machine (the same machine where I ran this
> > > query)? Would there eventually be a shortage of available ports if
> > > this kept growing? The reason I am asking is that one of my
> > > modules was raising an exception saying that a TCP connection could not
> > > be established to a server it needed to connect to.
>
> What it does indicate is that each query you're making is probably not just a
> separate transaction but a separate TCP connection. That's probably not
> necessary. If you have a single long-lived process you could just keep the TCP
> connection open and issue a COMMIT after each transaction. That's what I would
> recommend doing.
>
> Unless you have thousands of these TIME_WAIT connections they probably aren't
> actually directly the cause of your failure to establish connections. But yes
> it can happen.
>
> What's more likely happening here is that you're stressing the server by
> issuing so many connection attempts that you're triggering some bug, either in
> the TCP stack or in Postgres, that is causing some connection attempts not
> to be handled properly.
>
> I'm skeptical that there's a bug in Postgres since lots of people do in fact
> run web servers configured to open a new connection for every page. But this
> wouldn't happen to be a Windows server would it? Perhaps the networking code
> in that port doesn't do the right thing in this case?
>
> --
> greg
>
>

Re: [PERFORM] Many connections lingering

From: John DeSoi

On Apr 13, 2005, at 1:09 AM, Slavisa Garic wrote:

> This is not a Windows server. Both the server and the client are on
> the same machine (done for testing purposes), which is a Fedora Core 2
> machine. This also happens with a Debian server and client, in which
> case they were two separate machines.
>
> There are thousands (2+) of these waiting around, and each one of them
> disappears after 50-ish seconds. I tried the psql command line and
> monitored that connection in netstat. After I did a graceful exit
> (\quit) the connection changed to TIME_WAIT and it sat there
> for around 50 seconds. I could do what you suggested and keep one
> connection open, making each query a full BEGIN/QUERY/COMMIT
> transaction, but I was hoping to avoid that :).


If you do a bit of searching on TIME_WAIT you'll find this is a common
TCP/IP related problem, but the behavior is within the specs of the
protocol.  I don't know how to do it on Linux, but you should be able
to change TIME_WAIT to a shorter value. For the archives, here is a
pointer on changing TIME_WAIT on Windows:

http://www.winguides.com/registry/display.php/878/
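
On Linux I'd at least keep an eye on how many sockets are actually stuck
in TIME_WAIT and on the sysctls that affect ephemeral-port pressure. A
rough Python sketch, assuming the usual /proc layout:

# Diagnostic sketch only (Linux): count TIME_WAIT sockets and print the
# kernel settings that matter when ephemeral ports start to run out.
TIME_WAIT_STATE = "06"   # connection state code used in /proc/net/tcp

def count_time_wait():
    with open("/proc/net/tcp") as f:
        next(f)                 # skip the header line
        return sum(1 for line in f if line.split()[3] == TIME_WAIT_STATE)

def show(path):
    with open(path) as f:
        print(path, "=", f.read().strip())

print("TIME_WAIT sockets:", count_time_wait())
show("/proc/sys/net/ipv4/ip_local_port_range")  # ephemeral port range
show("/proc/sys/net/ipv4/tcp_tw_reuse")   # reuse TIME_WAIT ports for new outgoing connections
show("/proc/sys/net/ipv4/tcp_fin_timeout")   # FIN_WAIT_2 timeout, not the TIME_WAIT length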


John DeSoi, Ph.D.
http://pgedit.com/
Power Tools for PostgreSQL


Re: [PERFORM] Many connections lingering

From: Slavisa Garic

Hi Mark,

My DBServer module already serves as a broker. At the moment it opens
a new connection for every incoming Agent connection. I did it this
way because I wanted to leave synchronisation to PGSQL. I might have
to modify it a bit and use a shared, single connection for all agents.
I guess that is not a bad option; I just have to ensure that the code
is not below par :).
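
Roughly what I have in mind, sketched in Python with psycopg2 purely for
illustration (the names and DSN are placeholders, not our real code):

# Shared-connection sketch: one backend connection, one transaction at a
# time, with a lock so concurrent Agent requests arriving at DBServer
# don't interleave on the same connection.
import threading
import psycopg2

class SharedConnection:
    def __init__(self, dsn):
        self._conn = psycopg2.connect(dsn)
        self._lock = threading.Lock()

    def run(self, sql, params=None):
        with self._lock:          # serialize Agent queries
            cur = self._conn.cursor()
            try:
                cur.execute(sql, params)
                rows = cur.fetchall() if cur.description else None
                self._conn.commit()
                return rows
            except Exception:
                self._conn.rollback()   # abort cleanly so the next query can run
                raise
            finally:
                cur.close()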

Also, thanks for the postgresql.conf hint; that limit was pretty low on
our server, so this might help a bit.

Regards,
Slavisa

On 4/14/05, Mark Lewis <mark.lewis@mir3.com> wrote:
> If there are potentially hundreds of clients at a time, then you may be
> running into the maximum connection limit.
>
> In postgresql.conf, there is a max_connections setting which IIRC
> defaults to 100.  If you try to open more concurrent connections to the
> backend than that, you will get a connection refused.
>
> If your DB is fairly gnarly and your performance needs are minimal it
> should be safe to increase max_connections.  An alternative approach
> would be to add some kind of database broker program.  Instead of each
> agent connecting directly to the database, they could pass their data to
> a broker, which could then implement connection pooling.
>
> -- Mark Lewis
>
> On Tue, 2005-04-12 at 22:09, Slavisa Garic wrote:
> > This is a serious problem for me as there are multiple users using our
> > software on our server and I would want to avoid having connections
> > open for a long time. In the scenario mentioned below I haven't
> > explained the magnitude of the communications happening between Agents
> > and DBServer. There could possibly be 100 or more Agents per
> > experiment, per user running on remote machines at the same time,
> > hence we need short transactions/pgsql connections. Agents need a
> > reliable connection because failure to connect could mean a loss of
> > computation results that were gathered over long periods of time.
>
>

Re: [PERFORM] Many connections lingering

From: Mark Lewis

If there are potentially hundreds of clients at a time, then you may be
running into the maximum connection limit.

In postgresql.conf, there is a max_connections setting which IIRC
defaults to 100.  If you try to open more concurrent connections to the
backend than that, you will get a connection refused.
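
(If the client side happens to be Python with psycopg2, which is only a
guess, a quick check along these lines would show whether you are bumping
into the limit; the connection details are placeholders.)

# Sketch: compare the configured limit with the number of backends in use.
import psycopg2

conn = psycopg2.connect(host="dbhost", dbname="mydb", user="agent")
cur = conn.cursor()

cur.execute("SHOW max_connections;")
print("max_connections =", cur.fetchone()[0])

cur.execute("SELECT count(*) FROM pg_stat_activity;")
print("backends in use =", cur.fetchone()[0])

cur.close()
conn.close()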

If your DB is fairly gnarly and your performance needs are minimal it
should be safe to increase max_connections.  An alternative approach
would be to add some kind of database broker program.  Instead of each
agent connecting directly to the database, they could pass their data to
a broker, which could then implement connection pooling.

-- Mark Lewis

On Tue, 2005-04-12 at 22:09, Slavisa Garic wrote:
> This is a serious problem for me as there are multiple users using our
> software on our server and I would want to avoid having connections
> open for a long time. In the scenario mentioned below I haven't
> explained the magnitude of the communications happening between Agents
> and DBServer. There could possibly be 100 or more Agents per
> experiment, per user running on remote machines at the same time,
> hence we need short transactions/pgsql connections. Agents need a
> reliable connection because failure to connect could mean a loss of
> computation results that were gathered over long periods of time.