Thread: Persistent Connections

Persistent Connections

From
"Bee.Lists"
Date:
I have an issue with a server (v10) that’s seeing increasing connections until it’s maxed out.

max_connections for my 4-core server is set to 12.

I’ve installed pg_stat_activity and pg_stat_statements.

I access this server through a web app, and another client on two machines.  I also send queries through SSH on the
LAN. psql queries indicate there are too many clients already.  I’ve been getting errors from my workstation through a
Rubygem that says dropped connections happen.  Essentially, the server isn’t giving up connections and clients from all
sides are showing some issues.

pg_stat_activity has shown a handful of idle queries that are quite simple.  I’ve tested those queries and they seem
fine on a workstation client.  I’m assuming these queries somehow don’t finish and leave the connection open, but I
could be wrong.  All of this is new to me.

It was suggested on Slack that it sounds like my workstation had some TCP issues with these connections, and that it
was a firewall.  I disabled the firewall and the same happened.  The firewall has been restarted.

I am running no connection pool tool of any sort, and I’ve never used one.

At this point I don’t know what to even be watching for.  Connections increase and connections “drop”, according to the
gem I’m using.  I have simple queries that are idle and won’t disappear for some reason.

How can I figure this out so the connections remain within the max_connections limit, and connections are not dropped?


Any insight appreciated.


Cheers, Bee







Re: Persistent Connections

From
Michael Lewis
Date:
On Tue, Jun 23, 2020 at 2:29 PM Bee.Lists <bee.lists@gmail.com> wrote:
I have an issue with a server (v10) that’s seeing increasing connections until it’s maxed out. 

max_connections for my 4-core server is set to 12. 

I’ve installed pg_stat_activity and pg_stat_statements.

Do you see anything in pg_stat_activity that stays idle for a while and then *does* disappear on its own? Perhaps some types of connections are doing client side/application stuff before telling the DB to close the connection.
 
pg_stat_activity has shown a handful of idle queries that are quite simple.  I’ve tested those queries and they seem fine on a workstation client.  I’m assuming these queries somehow don’t finish and leave the connection open, but I could be wrong.  All of this is new to me. 

Idle means the query finished and that was the last query run. It isn't active or waiting on another process, that connection is open but idle.
 
I am running no connection pool tool of any sort, and I’ve never used one. 

It sounds like a good time to set one up.
 
How can I figure this out so the connections remain within the max_connections limit, and connections are not dropped? 

I would increase the limit directly, or with a pooler and research which connections are behaving, and which are taking too long to close or not closing at all. You could set up a process to snapshot pg_stat_activity every minute or 5 and trace which pids are terminating properly, and/or make logging very verbose.
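
For example, a rough sketch of such a snapshot script in Ruby using the pg gem (untested here; the database name, log path and cron schedule are placeholders to adapt):

    # snapshot_activity.rb -- run from cron every few minutes, appends one snapshot per run
    require 'pg'

    sql = <<~SQL
      SELECT now() AS snapshot_at, pid, usename, application_name,
             client_addr, state, state_change, left(query, 80) AS last_query
      FROM pg_stat_activity
      WHERE datname = current_database()
    SQL

    conn = PG.connect(dbname: 'mydbname')                 # placeholder connection settings
    File.open('/var/log/pg_activity.log', 'a') do |log|
      conn.exec(sql).each { |row| log.puts(row.values.join(' | ')) }
    end
    conn.close

Diffing successive snapshots shows which pids linger and which terminate on their own.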

Re: Persistent Connections

From
"Bee.Lists"
Date:
>
> On Jun 23, 2020, at 4:51 PM, Michael Lewis <mlewis@entrata.com> wrote:
>
> Do you see anything in pg_stat_activity that stays idle for a while and then *does* disappear on its own? Perhaps
some types of connections are doing client side/application stuff before telling the DB to close the connection.

I’m finding those queries sticking around.  These queries are very simple.  Last login type of stuff.

> Idle means the query finished and that was the last query run. It isn't active or waiting on another process, that
connection is open but idle.

OK.  The page that I load up is a dashboard and has a handful of queries.  From the looks of it, it looks like they’re
still working, but idle.  But you’re saying they’re just open connections?  Why would they remain open?

I check for numbackends this way:

pgconns='psql -c "select datid, datname, numbackends, xact_commit, stats_reset from pg_stat_database where datname in ('\''mydbname'\'');"'

> It sounds like a good time to set one up.

OK, some further questions:

Who do the connections belong to?  Not the client, not the server (apparently).  Is there one that’s independent and
behaves as the front end of connection management?

> I would increase the limit directly, or with a pooler and research which connections are behaving, and which are
taking too long to close or not closing at all. You could set up a process to snapshot pg_stat_activity every minute or
5 and trace which pids are terminating properly, and/or make logging very verbose.

How do I go about researching connection behaviour?  I guess a pooler should be investigated first.  I have that
pgconns already logging, so I’ll do one for pg_stat_activity.

Once I find culprits, what options do I have?  Not sure why new connections are made when these idle past connections
seem valid and usable.

There is agreement that ORMs shouldn’t be managing a connection pool, and this doesn’t achieve to do that.  I’ll be
looking into a pooler.  This client (the gem is Sequel, btw) uses what it assumes are valid connections, but that’s
where it fails as the database apparently disconnects prematurely.  The gem has a procedure to check how long since the
last pool was investigated for legit connections, but I think that’s irrelevant.  It’s finding what it’s told are legit
connections, which are not.  It’s been lied to.


Cheers, Bee


Re: Persistent Connections

From
"David G. Johnston"
Date:
Why is there now a second thread for this topic?

On Tue, Jun 23, 2020 at 3:21 PM Bee.Lists <bee.lists@gmail.com> wrote:
>
> On Jun 23, 2020, at 4:51 PM, Michael Lewis <mlewis@entrata.com> wrote:
>
> Do you see anything in pg_stat_activity that stays idle for a while and then *does* disappear on its own? Perhaps some types of connections are doing client side/application stuff before telling the DB to close the connection.

I’m finding those queries sticking around.  These queries are very simple.  Last login type of stuff. 

> Idle means the query finished and that was the last query run. It isn't active or waiting on another process, that connection is open but idle.

OK.  The page that I load up is a dashboard and has a handful of queries.  From the looks of it, it looks like they’re still working, but idle.  But you’re saying they’re just open connections?  Why would they remain open?

"they (queries) are still working, but idle" - your terminology is problematic and it is probably affecting your understanding.  As I said on the other thread you should probably post the actual output you are commenting on if you want to actually move this discussion forward.

> It sounds like a good time to set one up.

OK, some further questions:

Who do the connections belong to?  Not the client, not the server (apparently).  Is there one that’s independent and behaves as the front end of connection management?

As I asked on the other thread: a connection is a link between two parties.  What does it mean to "belong to" in this context?  You have mis-interpreted Tom's answer from the other thread.

> I would increase the limit directly, or with a pooler and research which connections are behaving, and which are taking too long to close or not closing at all. You could set up a process to snapshot pg_stat_activity every minute or 5 and trace which pids are terminating properly, and/or make logging very verbose.

How do I go about researching connection behaviour?  I guess a pooler should be investigated first.

Until you get a better grasp of the basics you should not be introducing any more moving parts.  If anything you need to remove some in order to figure out which one of the existing parts is causing your problem.
 
Once I find culprits, what options do I have?  Not sure why new connections are made when these idle past connections seem valid and usable. 

Not sure how you expect an answer to "how do I fix the problem" without an understanding of what the problem is.

There is agreement that ORMs shouldn’t be managing a connection pool, and this doesn’t achieve to do that.  I’ll be looking into a pooler.  This client (the gem is Sequel, btw) uses what it assumes are valid connections, but that’s where it fails as the database apparently disconnects prematurely.  The gem has a procedure to check how long since the last pool was investigated for legit connections, but I think that’s irrelevant.  It’s finding what it’s told are legit connections, which are not.  It’s been lied to. 

That the pooler in your application is being lied to is probably the most likely answer, as Tom said in the other thread.  But the rest of what you are saying here just sounds like nonsense.  "I'll be looking into a pooler. ... The gem has a procedure to check how long since the last pool was investigated for legit connections ...".  I'd like to point out that you seem to be saying that you simultaneously have a connection pool and don't have a connection pool involved here...

As I suggested on the other thread, and repeated above, you need to build up a script that can reproduce the problem.  Something that can open a connection to the server and then wait for a period of time before executing a query against it to see if that session gets dropped on the client side while still remaining visible on the server.
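
Something along these lines would do it (a sketch only; connection settings and the idle interval are placeholders), using the same pg gem the application already sits on:

    # repro.rb -- open a connection, let it idle, then check whether it still works
    require 'pg'

    conn = PG.connect(dbname: 'mydbname')        # placeholder settings
    puts "connected, server backend pid #{conn.backend_pid}"

    sleep 600                                    # idle for 10 minutes; adjust to match the failure window

    begin
      conn.exec('SELECT 1')
      puts 'connection still usable'
    rescue PG::Error => e
      puts "connection failed after idling: #{e.class}: #{e.message}"
    end

While it sleeps, watch pg_stat_activity on the server to see whether the session is still listed there.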

David J.

Re: Persistent Connections

From
Tim Cross
Date:
Bee.Lists <bee.lists@gmail.com> writes:

> I have an issue with a server (v10) that’s seeing increasing connections until it’s maxed out.
>
> max_connections for my 4-core server is set to 12.
>
> I’ve installed pg_stat_activity and pg_stat_statements.
>
> I access this server through a web app, and another client on two machines.  I also send queries through SSH on the
LAN. psql queries indicate there are too many clients already.  I’ve been getting errors from my workstation through a
Rubygem that says dropped connections happen.  Essentially, the server isn’t giving up connections and clients from all
sides are showing some issues. 
>
> pg_stat_activity has shown a handful of idle queries that are quite simple.  I’ve tested those queries and they seem
fine on a workstation client.  I’m assuming these queries somehow don’t finish and leave the connection open, but I
could be wrong.  All of this is new to me.   
>
> It was suggested on Slack that it sounds like my workstation had some TCP issues with these connections, and that it
was a firewall.  I disabled the firewall and the same happened.  The firewall has been restarted.   
>
> I am running no connection pool tool of any sort, and I’ve never used one.
>
> At this point I don’t know what to even be watching for.  Connections increase and connections “drop”, according to
the gem I’m using.  I have simple queries that are idle and won’t disappear for some reason.   
>
> How can I figure this out so the connections remain within the max_connections limit, and connections are not
dropped?  
>
> Any insight appreciated.
>

Sounds like your web app may not be closing connections once it has
finished with them? The fact you're seeing idle connections would seem to
support this. I would be verifying there isn't a code path in your
client application which is failing to close a connection correctly. Seeing
connections go up and down in the gem may not be telling you the full story - could
be that your client connection objects are being destroyed in your app,
but are not closing the connection correctly so PG is unaware the client
has finished.

Typically, due to the overhead of making a connection, you don't want
your client app/web app to create a new connection for every query.
Instead, you would use some type of connection pool. Many development
languages support some form of pooling as part of their db connection
library (don't know about Ruby, but JS, Python, PHP, Java, Perl all do)
and there are external 3rd party solutions like pgbouncer which sit
between your client and the database and work a bit like a connection
broker which will manage a pool of connections.

From the description of what you're doing, I would first look to see what
level of connection pooling your development language supports. This
will likely be easier to configure and use than setting up a whole
additional bit of infrastructure which is probably far more powerful
than you need.

I would also go through your code and make sure that every time you
create a database connection, there is some code which is closing that
connection once you're finished with it. This is a frequent source of
problems, especially during development when your code might throw an
error and the code you have to disconnect does not get executed or you
simply forget to issue a disconnect when you're finished.
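
In Ruby that usually means a begin/ensure (or a block helper) so the close runs even when the query raises. A minimal sketch, with placeholder connection settings and an illustrative table name:

    require 'pg'

    def with_connection
      conn = PG.connect(dbname: 'mydbname')      # placeholder settings
      yield conn
    ensure
      conn&.close                                # runs even if the block raised
    end

    with_connection do |conn|
      res = conn.exec('SELECT count(*) FROM sessions')   # illustrative query
      puts res.getvalue(0, 0)
    end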

A connection pool can help in tracking down such issues as well. Most
pooling solutions will allow you to set a max pool size. In addition to
enabling you to 'reserve' a set number of connections for a client, you
will know which client seems to be running out of connections, helping
to identify the culprit.

--
Tim Cross



Re: Persistent Connections

From
"Peter J. Holzer"
Date:
On 2020-06-23 16:29:46 -0400, Bee.Lists wrote:
> I have an issue with a server (v10) that’s seeing increasing
> connections until it’s maxed out.
>
> max_connections for my 4-core server is set to 12.

The default is 100. What was your reason for reducing it to such a low
value?


> I’ve installed pg_stat_activity and pg_stat_statements.
>
> I access this server through a web app,

So that's probably a handful of connections already.

> and another client on two machines.

Maybe two or more (depending on what the non-specified client does).

> I also send queries through SSH on the LAN.

And maybe some more connections.

I can see that this could easily reach 12 connections.

        hp

--
   _  | Peter J. Holzer    | Story must make more sense than reality.
|_|_) |                    |
| |   | hjp@hjp.at         |    -- Charles Stross, "Creative writing
__/   | http://www.hjp.at/ |       challenge!"


Re: Persistent Connections

From
"Bee.Lists"
Date:
> On Jun 23, 2020, at 8:09 PM, Tim Cross <theophilusx@gmail.com> wrote:
>
> Sounds like your web app may not be closing connections once it has
> finished with them? The fact you're seeing idle connections would seem to
> support this. I would be verifying there isn't a code path in your
> client application which is failing to close a connection correctly. Seeing
> connections go up and down in the gem may not be telling you the full story - could
> be that your client connection objects are being destroyed in your app,
> but are not closing the connection correctly so PG is unaware the client
> has finished.

Hi Tim.  I can’t speak for the gem.  I’m assuming its garbage collection is working.  But yes, it does look that way.
I found someone else who was having similar issues as myself: 

https://stackoverflow.com/questions/60843123/djangopostgres-fatal-sorry-too-many-clients-already

I’m also seeing the connection count rise overnight from crontabs.

For some clarity, the gem is Sequel, which is on top of the PG gem (Ruby).  I’ve spoken to the Sequel author and he
says everything is fine.  I have some evidence it’s a connection issue and the gem is doing its job, as I’m seeing it’s
happening elsewhere with crontabs and other clients.   


> Typically, due to the overhead of making a connection, you don't want
> your client app/web app to create a new connection for every query.
> Instead, you would use some type of connection pool. Many development
> languages support some form of pooling as part of their db connection
> library (don't know about Ruby, but JS, Python, PHP, Java, Perl all do)
> and there are external 3rd party solutions like pgbouncer which sit
> between your client and the database and work a bit like a connection
> broker which will manage a pool of connections.

That’s why I’m thinking installing a connection pooler would solve all of this.  pgbouncer is what I’m looking at now.

>> From the description of what you're doing, I would first look to see what
> level of connection pooling your development language supports. This
> will likely be easier to configure and use than setting up a whole
> additional bit of infrastructure which is probably far more powerful
> than you need.

I will do that.  This is all new.

> I would also go through your code and make sure that every time you
> create a database connection, there is some code which is closing that
>> connection once you're finished with it. This is a frequent source of
> problems, especially during development when your code might throw an
> error and the code you have to disconnect does not get executed or you
>> simply forget to issue a disconnect when you're finished.

The issue here is that it’s all in the gem.  The gem is actually an ORM, built on top of the PG gem.  So all the
database connection stuff is inside there.  It’s so abstracted that I don’t know when the actual calls are made.  It’s a
DSL so the workings aren’t evident when making queries.  It was suggested I install a logger to see what and when
queries are actually made.   

> A connection pool can help in tracking down such issues as well. Most
> pooling solutions will allow you to set a max pool size. In addition to
> enabling you to 'reserve' a set number of connections for a client, you
> will know which client seems to be running out of connections, helping
> to identify the culprit.

That SO link above suggests I’m not alone.  So installing a connection pooler seems the right thing to do.  Good to
know about the reservations.  That is better feedback. 


Cheers, Bee







Re: Persistent Connections

From
"Bee.Lists"
Date:
> On Jun 24, 2020, at 6:47 AM, Peter J. Holzer <hjp-pgsql@hjp.at> wrote:
>
> The default is 100. What was your reason for reducing it to such a low
> value?

“PostgreSQL 9 High Availability” recommended core count * 3.

>
>> I’ve installed pg_stat_activity and pg_stat_statements.
>>
>> I access this server through a web app,
>
> So that's probably a handful of connections already.

Looks like 5 queries.

As a comparison I have 37 queries asking for last login of a single user.  No errors.
(I increased the max_connections yesterday)

>> and another client on two machines.
>
> Maybe two or more (depending on what the non-specified client does).

I’m running crons on two machines, Postico, web apps on both machines.

>> I also send queries through SSH on the LAN.
>
> And maybe some more connections.

Yep.

> I can see that this could easily reach 12 connections.

Yep.  But I can’t see all this going over 12 when connections close.  Hence the confusion and issue.

At least it’s a good thing I’m finding it’s not the gem alone.


Cheers, Bee







Re: Persistent Connections

From
"Peter J. Holzer"
Date:
On 2020-06-24 13:45:21 -0400, Bee.Lists wrote:
> > Sounds like your web app may not be closing connections once it has
> > finished with them? The fact you're seeing idle connections would seem to
> > support this. I would be verifying there isn't a code path in your
> > client application which is failing to close a connection correctly. Seeing
> > connections go up and down in the gem may not be telling you the full story - could
> > be that your client connection objects are being destroyed in your app,
> > but are not closing the connection correctly so PG is unaware the client
> > has finished.
>
> Hi Tim.  I can’t speak for the gem.  I’m assuming its garbage
> collection is working.  But yes, it does look that way.  I found
> someone else who was having similar issues as myself:
>
> https://stackoverflow.com/questions/60843123/djangopostgres-fatal-sorry-too-many-clients-already
>
> I’m also seeing the connection count rise overnight from crontabs.

Unlike a web-server which runs continuously and can therefore keep a
connection open indefinitely, a cron job normally has a finite lifetime.

So if the number of connections from your cron jobs keeps rising that
means that your cron jobs don't finish (or at least some of them). You
should investigate why they are stuck.

> The issue here is that it’s all in the gem.  The gem is actually an
> ORM, built on top of the PG gem.  So all the database connection stuff
> is inside there.  It’s so abstracted that I don’t know when the actual
> calls are made.

Yeah, that's one reason why I'm not overly fond of ORMs.

> It was suggested I install a logger to see what and when queries are
> actually made.

That would probably help. Alternatively you can tell PostgreSQL to log
that stuff.
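
With Sequel the client-side logger is a one-liner (a sketch; the connection URL and table name are placeholders):

    require 'sequel'
    require 'logger'

    DB = Sequel.connect('postgres://user:pass@localhost/mydbname')
    DB.loggers << Logger.new('sequel_queries.log')   # every SQL statement Sequel issues gets logged

    DB[:users].where(id: 1).first                    # the generated SELECT now appears in the log

On the server side, log_connections, log_disconnections and log_statement in postgresql.conf cover much the same ground.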


> > A connection pool can help in tracking down such issues as well. Most
> > pooling solutions will allow you to set a max pool size. In addition to
> > enabling you to 'reserve' a set number of connections for a client, you
> > will know which client seems to be running out of connections, helping
> > to identify the culprit.
>
> That SO link above suggests I’m not alone.

Well, yes. Just about everyone who has ever run a web-application
against a database has run into this problem. It's part of the learning
process :-). What you should learn from this is how to analyze the
situation and how to mitigate it. The solution might be a setting in the
webserver or gem configuration. It might be a connection pooler. It
might even be resetting max_connections to the default and not caring about a
few idle connections.

What you shouldn't learn from this is that a pooler will make your
problems magically go away. Because it won't.

        hp

--
   _  | Peter J. Holzer    | Story must make more sense than reality.
|_|_) |                    |
| |   | hjp@hjp.at         |    -- Charles Stross, "Creative writing
__/   | http://www.hjp.at/ |       challenge!"


Re: Persistent Connections

From
"Peter J. Holzer"
Date:
On 2020-06-24 13:55:00 -0400, Bee.Lists wrote:
> On Jun 24, 2020, at 6:47 AM, Peter J. Holzer <hjp-pgsql@hjp.at> wrote:
> > The default is 100. What was your reason for reducing it to such a low
> > value?
>
> “PostgreSQL 9 High Availability” recommended core count * 3.

I suspected something like that.

I don't have that book, but I assume that the author meant that as a
limit on parallel active queries. An idle connection uses (almost) no
CPU, so it doesn't make sense to include it in the count.

The question is, how do you enforce that limit? Setting max_connections
on the database is no good: It will prevent new connections after the
limit is reached, but the application doesn't know why it's getting
errors.

Instead you have to limit the application. If all/most of your database
accesses come from a single web application, configure that application
to open at most 12 connections (there may be an explicit pool size
configuration setting, or you may have to limit that implicitly through
the number of threads and/or processes). If you have several apps, you
might want to configure each to a lower limit so that the sum doesn't
exceed 12. But if you have configured your web app to 12 connections,
you still want max_connections to be higher: You want to be able to
connect with psql even when your web app is running at peak capacity.
You want to run your cron jobs. So always leave some head room.
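
With Sequel the cap is the :max_connections option on its pool; a sketch with illustrative numbers and a placeholder URL:

    require 'sequel'

    # e.g. server max_connections = 20, web app capped at 12,
    # leaving headroom for psql, cron jobs and other clients
    DB = Sequel.connect('postgres://user:pass@localhost/mydbname',
                        max_connections: 12)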


> >> I’ve installed pg_stat_activity and pg_stat_statements.
> >>
> >> I access this server through a web app,
> >
> > So that's probably a handful of connections already.
>
> Looks like 5 queries.
>
> As a comparison I have 37 queries asking for last login of a single user.  No errors.
> (I increased the max_connections yesterday)

Does "I have 37 queries" mean you have seen 37 queries of this type in
some time window (e.g. the last day or hour) or does it mean you are
currently seeing 37 connections where the last query was of this type?

If it's the latter, you very obviously have at least 37 (more likely
37 + 5 = 42) connections. So your web app is configured to open dozens of
connections concurrently. You might want to look into that.
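
A quick way to see that split directly instead of inferring it (a sketch; the database name is a placeholder):

    require 'pg'

    sql = <<~SQL
      SELECT usename, application_name, client_addr, state, count(*)
      FROM pg_stat_activity
      WHERE datname = current_database()
      GROUP BY 1, 2, 3, 4
      ORDER BY count(*) DESC
    SQL

    conn = PG.connect(dbname: 'mydbname')
    conn.exec(sql).each { |row| puts row.inspect }
    conn.close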

        hp

--
   _  | Peter J. Holzer    | Story must make more sense than reality.
|_|_) |                    |
| |   | hjp@hjp.at         |    -- Charles Stross, "Creative writing
__/   | http://www.hjp.at/ |       challenge!"


Re: Persistent Connections

From
Tim Cross
Date:
Bee.Lists <bee.lists@gmail.com> writes:

>> On Jun 23, 2020, at 8:09 PM, Tim Cross <theophilusx@gmail.com> wrote:
>>
>> Sounds like your web app may not be closing connections once it has
>> finished with them? The fact you're seeing idle connections would seem to
>> support this. I would be verifying there isn't a code path in your
>> client application which is failing to close a connection correctly. Seeing
>> connections go up and down in the gem may not be telling you the full story - could
>> be that your client connection objects are being destroyed in your app,
>> but are not closing the connection correctly so PG is unaware the client
>> has finished.
>
> Hi Tim.  I can’t speak for the gem.  I’m assuming its garbage collection is working.  But yes, it does look that way.
I found someone else who was having similar issues as myself: 
>
> https://stackoverflow.com/questions/60843123/djangopostgres-fatal-sorry-too-many-clients-already
>
> I’m also seeing the connection count rise overnight from crontabs.
>
> For some clarity, the gem is Sequel, which is on top of the PG gem (Ruby).  I’ve spoken to the Sequel author and he
says everything is fine.  I have some evidence it’s a connection issue and the gem is doing its job, as I’m seeing it’s
happening elsewhere with crontabs and other clients.   
>
Rather than a problem with the libraries, I would be looking for a
problem with the code which uses those libraries. If it was a problem
with either the Sequel or PG gems (or with Postgres for that matter), it
would be a lot more widespread and you would be seeing a lot more
reports.

I'm not familiar with Sequel and haven't used Ruby for nearly 20 years,
but have used plenty of other PG libraries. You mention garbage
collection and I'm sure that is working fine in Ruby. However, you
cannot rely on that to correctly cleanup your PG connections. Somewhere
in your code, there has to be code the developer writes which tells the
library you are finished with the connection. For example, the JS PG
package has the command 'disconnect'. Essentially, your code needs to
tell the remote PG server you have finished with the connection so that
it knows it can clean up things on its end. If your code is not issuing
explicit disconnect commands, what is happening is that the connections
on your client side are being cleaned up when the connection objects go
out of scope and the garbage collector kicks in. However, on the PG
side, the connections hang around until PG times them out, which takes
much longer and could easily cause you to hit the limit, especially as
you have such a small limit.  I think you need to verify that in
all your client code, somewhere there is explicit code being called
which is telling PG you are disconnecting the connection. Don't assume
this is automagically happening as part of GC.
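
In Sequel terms that can look like this (a sketch; URLs and the table name are placeholders): either the block form of Sequel.connect, which disconnects when the block exits, or an explicit disconnect in an ensure:

    require 'sequel'

    # Block form: the pool is disconnected automatically when the block ends.
    Sequel.connect('postgres://user:pass@localhost/mydbname') do |db|
      puts db[:users].count                # illustrative table name
    end

    # Long-lived handle: disconnect explicitly when the work is done.
    DB = Sequel.connect('postgres://user:pass@localhost/mydbname')
    begin
      puts DB.fetch('SELECT now()').first
    ensure
      DB.disconnect                        # closes the pool's connections so the server sees them end
    end
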
>
>> Typically, due to the overhead of making a connection, you don't want
>> your client app/web app to create a new connection for every query.
>> Instead, you would use some type of connection pool. Many development
>> languages support some form of pooling as part of their db connection
>> library (don't know about Ruby, but JS, Python, PHP, Java, Perl all do)
>> and there are external 3rd party solutions like pgbouncer which sit
>> between your client and the database and work a bit like a connection
>> broker which will manage a pool of connections.
>
> That’s why I’m thinking installing a connection pooler would solve all of this.  pgbouncer is what I’m looking at
now. 
>

I doubt this will solve your problem. It might hide the problem or it
might lengthen the time between failures, but it is very unlikely to
solve the problem. It may help identify the source of the problem.

Have you verified the PG gem doesn't support pooling? If it does, that
would be far easier to configure and use than installing
pgbouncer. Based on your description of the app and the small number of
connections you have PG configured for, adding pgbouncer is like putting
a fighter jet engine in a family SUV.

>> From the description of what you're doing, I would first look to see what
>> level of connection pooling your development language supports. This
>> will likely be easier to configure and use than setting up a whole
>> additional bit of infrastructure which is probably far more powerful
>> than you need.
>
> I will do that.  This is all new.
>
>> I would also go through your code and make sure that every time you
>> create a database connection, there is some code which is closing that
>> connection once you're finished with it. This is a frequent source of
>> problems, especially during development when your code might throw an
>> error and the code you have to disconnect does not get executed or you
>> simply forget to issue a disconnect when you're finished.
>
> The issue here is that it’s all in the gem.  The gem is actually an ORM, built on top of the PG gem.  So all the
database connection stuff is inside there.  It’s so abstracted that I don’t know when the actual calls are made.  It’s a
DSL so the workings aren’t evident when making queries.  It was suggested I install a logger to see what and when
queries are actually made.   
>

I have used ORMs in the past. Personally, I hate them as they always
introduce limitations. However, I would strongly suggest checking the
Sequel API and examples.

I just had a very quick look at the Sequel API. Things I immediately
noticed are

1. It does support connection pooling
2. There is a DB.database#disconnect method. You need to verify your
client code is calling that method before the objects are GC'd.

>> A connection pool can help in tracking down such issues as well. Most
>> pooling solutions will allow you to set a max pool size. In addition to
>> enabling you to 'reserve' a set number of connections for a client, you
>> will know which client seems to be running out of connections, helping
>> to identify the culprit.
>
> That SO link above suggests I’m not alone.  So installing a connection pooler seems the right thing to do.  Good to
know about the reservations.  That is better feedback. 
>
>

Use the connection pool provided by the Sequel gem first and see how
that goes. I'm pretty confident the issue will turn out to be in your
code. I would also strongly consider increasing the PG max
connections. What you have configured is extremely low. Default is
100. 50 or 25 would possibly be more reasonable in your case. It is
quite possible your Ruby scripts are keeping the connections open
in between HTTP requests if you're using something which keeps the
scripts loaded in memory within the web server (common setup to reduce
spin up time on requests).

--
Tim Cross



Re: Persistent Connections

From
Tim Cross
Date:
Peter J. Holzer <hjp-pgsql@hjp.at> writes:

> On 2020-06-24 13:55:00 -0400, Bee.Lists wrote:
>> On Jun 24, 2020, at 6:47 AM, Peter J. Holzer <hjp-pgsql@hjp.at> wrote:
> Does "I have 37 queries" mean you have seen 37 queries of this type in
> some time window (e.g. the last day or hour) or does it mean you are
> currently seeing 37 connections where the last query was of this type?
>
> If it's the latter, you very obviously have at least 37 (more likely
> 37 + 5 = 42) connections. So your web app is configured to open dozens of
> connections concurrently. You might want to look into that.
>

I just had another thought. Based on a VERY brief scan of the Sequel
API, I suspect it uses a connection pool by default. So it is quite
likely that the expectations on when the connections are closed is
incorrect. It could easily be that the web app creates a connection pool
as soon as it is started and keeps that pool open until either the web
server is closed down or a pool timeout kicks in (some connection pools
use a max lifetime setting for connections and will close a connection
after that period, replacing it with a new connection).

It is also quite likely that the Sequel GEM creates a connection pool
with a default number of connections if not explicitly defined by the
developer. This default could be close to or even exceed the number set
for max connections within PG (especially as the OP has indicated it is
very small).

I have to say, I do hate ORMs. They always reduce the flexibility and
power offered by SQL, tend to result in code where more processing is
done in the client which would have been faster and more efficiently
done by the SQL engine, and hide details which make troubleshooting even
harder. However, the worst issue is that they also result in developers
not understanding the power of the underlying RDBMS and encourage poor
DB schema design. Those who support such technologies typically point to
the benefits of database neutrality such systems can provide. In over 30
years of DB work, I have yet to see such neutrality actually work. It is
a pipe dream. 
-- 
Tim Cross