Re: Out of memory - Mailing list pgsql-general

From Alex Adriaanse
Subject Re: Out of memory
Date
Msg-id 47ED6101.2060907@innovacomputing.com
In response to Re: Out of memory  ("Scott Marlowe" <scott.marlowe@gmail.com>)
List pgsql-general
Scott Marlowe wrote:
> Just because you can set max_connections to 2000 doesn't mean it's a
> good idea.  If your client needs 1000 persistent connections, then put
> a connection pooler between your app (I'm guessing PHP, since it
> operates this way) and the database.
>
> Running 1000 connections is a LOT, and if you need 1000 active
> connections, then you're likely gonna need a bigger machine than one
> with 8 cores and 16 gigs of RAM.  OTOH, if you are actively servicing
> less than 10% of those connections at a time, then you're wasting
> memory on the backends that are started up and doing nothing.  Each
> one consumes some amount of memory on its own, usually in the 5 to 10
> meg range, just to sit there and do nothing.
>
> Plus you've got issues with thundering herd type situations that can
> show up as you increase connections.
>
> Pooling is the answer here.
>
Thanks for the suggestion.  I fully agree with you, and the client is
already making plans to move from persistent connections to connection
pooling (they will probably use PgBouncer).  I'm just trying to figure
out the exact cause of this memory issue so that we can make sure it
doesn't happen again, even after they switch to connection pooling.
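
For what it's worth, a minimal pgbouncer.ini along those lines might
look like the sketch below.  The database name and file paths are
placeholders, and the pool sizes are illustrative, not a recommendation:

  [databases]
  ; "appdb" is a placeholder; point it at the real database
  appdb = host=127.0.0.1 port=5432 dbname=appdb

  [pgbouncer]
  listen_addr = 127.0.0.1
  listen_port = 6432
  auth_type = md5
  auth_file = /etc/pgbouncer/userlist.txt

  ; transaction pooling lets ~1000 mostly-idle client connections
  ; share a much smaller set of real backends
  pool_mode = transaction
  max_client_conn = 1000
  default_pool_size = 50

With transaction pooling, the client's persistent connections only
occupy a real backend while a transaction is actually running, which
addresses exactly the idle-backend overhead Scott described.  (One
caveat: transaction pooling is incompatible with session-level features
like prepared statements and advisory locks.)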
According to my rough calculations, if the server were to max out at
2,000 connections, there should still be around 2GB of memory free -
unless all of them were running active queries, which is doubtful since
I did not see unusual changes in the load patterns.  And from what I
could tell, the server did not tap into swap when PostgreSQL ran out of
memory.
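
In case anyone wants to sanity-check that arithmetic, the settings that
dominate per-backend memory use can be read straight off the server
from psql:

  SHOW max_connections;        -- 2000 in this case
  SHOW shared_buffers;         -- fixed shared allocation, paid once
  SHOW work_mem;               -- per sort/hash, possibly several per query
  SHOW maintenance_work_mem;   -- per maintenance operation (VACUUM, etc.)

A rough upper bound is shared_buffers, plus max_connections times the
per-backend overhead (the 5 to 10 meg Scott mentioned), plus work_mem
times however many sorts/hashes the active queries run at once - which
is why idle backends alone shouldn't be able to exhaust 16GB.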

Thanks,

Alex

