Thread: Postgres memory question

Postgres memory question

From: Kobus Wolvaardt
Hi,

We have software deployed on our network that needs Postgres; we have a server that hosts the database, and all worked fine until we crossed about 200 users. The application is written so that it makes a connection right at the start and keeps it alive for the duration of the app. The app is written in Delphi. The Postgres server runs on a Windows 2008 server with a quad-core CPU and 4 GB of RAM.

The problem after +-200 connections is that the server runs out of memory, but most of these connections are idle... each one only gets used every 20 minutes to capture a transaction.

It looks like every idle connection uses about 10 MB of RAM, which seems high, but I cannot find a config option to limit it.

I tried pgbouncer for connection pooling, but for each connection to pgbouncer one connection is made to the server, which results in exactly the same number of connections. If I run it in transaction pooling mode it works for simple queries, but something gets lost, says the programmer (views that were set up, or something).
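
For reference, the pgbouncer setup I tried looked roughly like this (names, paths, and values here are placeholders, not our exact config):

    ; pgbouncer.ini -- sketch only, not our real file
    [databases]
    appdb = host=127.0.0.1 port=5432 dbname=appdb

    [pgbouncer]
    listen_port = 6432
    auth_type = md5
    auth_file = userlist.txt
    ; session pooling keeps one server connection per client for the
    ; whole session, so persistent clients see no savings; transaction
    ; pooling shares server connections between transactions
    pool_mode = transaction
    max_client_conn = 300
    default_pool_size = 20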

Any help or pointers would be nice, either on how to make usage less, or on how to get pooling to work.

Thanks,
Kobus Wolvaardt

P.S. We are growing the user base by another 20% soon, and that will result in massive issues. I don't mind slower operation for now; I just need to keep it working.

Re: Postgres memory question

From: Scott Marlowe
On Sun, Aug 9, 2009 at 4:06 AM, Kobus Wolvaardt<kobuswolf@gmail.com> wrote:
> Hi,
>
> We have software deployed on our network that needs Postgres; we have a
> server that hosts the database, and all worked fine until we crossed about
> 200 users. The application is written so that it makes a connection right
> at the start and keeps it alive for the duration of the app. The app is
> written in Delphi. The Postgres server runs on a Windows 2008 server with
> a quad-core CPU and 4 GB of RAM.

Is this an app you can fix yourself, or are you stuck with this
mis-step in design?

> The problem after +-200 connections is that the server runs out of memory,
> but most of these connections are idle... each one only gets used every 20
> minutes to capture a transaction.
>
> It looks like every idle connection uses about 10 MB of RAM, which seems
> high, but I cannot find a config option to limit it.
>
> I tried pgbouncer for connection pooling, but for each connection to
> pgbouncer one connection is made to the server, which results in exactly
> the same number of connections. If I run it in transaction pooling mode it
> works for simple queries, but something gets lost, says the programmer
> (views that were set up, or something).

Are each of these connections quite different from each other or
something?  I'm not that familiar with pgbouncer so I don't know if
this behaviour is normal.  Can you get by with pgpool for this?  Does
it work any better?

> Any help or pointers would be nice, either on how to make usage less, or
> on how to get pooling to work.
>
> P.S. We are growing the user base by another 20% soon, and that will
> result in massive issues. I don't mind slower operation for now; I just
> need to keep it working.

If another pooling solution won't fix this, then you need more memory
and a bigger server.  pg on Windows is 32-bit, so you might have some
problems running it well on a larger Windows machine; if that's the
case, then it would likely help if you could run this on 64-bit Linux
with 8+ gigs of RAM.  This solution would allow you to grow to several
hundred more connections before you'd have issues.  Also, performance
might be better on Linux with this many connections, but I have no
empirical evidence to support that belief.

Re: Postgres memory question

From: Rainer Bauer
Kobus Wolvaardt wrote:

>We have software deployed on our network that needs Postgres; we have a server
>that hosts the database, and all worked fine until we crossed about 200 users.
>The application is written so that it makes a connection right at the start
>and keeps it alive for the duration of the app. The app is written in
>Delphi. The Postgres server runs on a Windows 2008 server with a quad-core CPU
>and 4 GB of RAM.

Have you tried increasing the Desktop Heap as described here:

<http://wiki.postgresql.org/wiki/Running_%26_Installing_PostgreSQL_On_Native_Windows#I_cannot_run_with_more_than_about_125_connections_at_once.2C_despite_having_capable_hardware>
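
In short, the page suggests raising the third SharedSection value (the
non-interactive desktop heap, in KB, of which each backend consumes a
slice) in the registry entry below, then rebooting. The numbers shown
are only an example, not a recommendation:

    Key:   HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\
           Session Manager\SubSystems
    Value: Windows (REG_EXPAND_SZ), containing a fragment like
           ... SharedSection=1024,20480,768 ...
           (the third number is the non-interactive desktop heap)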

Rainer

Re: Postgres memory question

From: John R Pierce
Kobus Wolvaardt wrote:
> Hi,
>
> We have software deployed on our network that needs Postgres; we have
> a server that hosts the database, and all worked fine until we crossed
> about 200 users. The application is written so that it makes a
> connection right at the start and keeps it alive for the duration of
> the app. The app is written in Delphi. The Postgres server runs on a
> Windows 2008 server with a quad-core CPU and 4 GB of RAM.
>
> The problem after +-200 connections is that the server runs out of
> memory, but most of these connections are idle... each one only gets
> used every 20 minutes to capture a transaction.
>
> It looks like every idle connection uses about 10 MB of RAM, which
> seems high, but I cannot find a config option to limit it.
>
> I tried pgbouncer for connection pooling, but for each connection to
> pgbouncer one connection is made to the server, which results in
> exactly the same number of connections. If I run it in transaction
> pooling mode it works for simple queries, but something gets lost,
> says the programmer (views that were set up, or something).

Views are stored in the database and are not connection-specific.  Things
that are connection-specific are the schema search path, language
encoding options, and so forth.  If you set up the database properly,
these shouldn't be an issue.
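
For example (object names here are made up): the view below survives
transaction pooling because it lives in the database, while the rest is
per-connection state that a transaction pooler will not preserve, since
your next transaction may run on a different server connection:

    -- Stored in the database: visible to every connection, pooled or not.
    CREATE VIEW active_users AS
        SELECT * FROM users WHERE active;

    -- Per-connection state: gone if the pooler hands your next
    -- transaction to a different server connection.
    SET search_path TO myapp, public;        -- session setting
    CREATE TEMP TABLE scratch (id int);      -- temporary table
    PREPARE get_user (int) AS                -- prepared statement
        SELECT * FROM users WHERE id = $1;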

I would seriously plan on rewriting the app to use pgpool or similar,
and fetch a connection from a much smaller pool of actual database
connections, use it, then release it back to the pool.  For this to
work, all your database connections need the same persistent settings.



Re: Postgres memory question

From: Kobus Wolvaardt


2009/8/9 Scott Marlowe <scott.marlowe@gmail.com>
> On Sun, Aug 9, 2009 at 4:06 AM, Kobus Wolvaardt<kobuswolf@gmail.com> wrote:
>> We have software deployed on our network that needs Postgres; we have a
>> server that hosts the database, and all worked fine until we crossed about
>> 200 users. The application is written so that it makes a connection right
>> at the start and keeps it alive for the duration of the app. The app is
>> written in Delphi. The Postgres server runs on a Windows 2008 server with
>> a quad-core CPU and 4 GB of RAM.
>
> Is this an app you can fix yourself, or are you stuck with this
> mis-step in design?

It is our app, but it is not going to be easy to change. It will get changed, but the time frame is a bit long, and we need a solution to hold us over.
 


>> The problem after +-200 connections is that the server runs out of memory,
>> but most of these connections are idle... each one only gets used every 20
>> minutes to capture a transaction.
>>
>> It looks like every idle connection uses about 10 MB of RAM, which seems
>> high, but I cannot find a config option to limit it.
>>
>> I tried pgbouncer for connection pooling, but for each connection to
>> pgbouncer one connection is made to the server, which results in exactly
>> the same number of connections. If I run it in transaction pooling mode it
>> works for simple queries, but something gets lost, says the programmer
>> (views that were set up, or something).

> Are each of these connections quite different from each other or
> something?  I'm not that familiar with pgbouncer so I don't know if
> this behaviour is normal.  Can you get by with pgpool for this?  Does
> it work any better?

When using pgbouncer (it is supposed to be a lightweight connection pooler similar to pgpool), it makes one server connection for each client connection. All our connections are to the same DB with the same username, so pooling really should work perfectly. Should views and temporary tables and such work over poolers? Do you need to indicate the end of a session for the pooler to reuse the connection? If I make a connection at the start of an app, leave it open right to the end, and use it now and then, will that get pooled?
 


>> Any help or pointers would be nice, either on how to make usage less, or
>> on how to get pooling to work.
>>
>> P.S. We are growing the user base by another 20% soon, and that will
>> result in massive issues. I don't mind slower operation for now; I just
>> need to keep it working.

> If another pooling solution won't fix this, then you need more memory
> and a bigger server.  pg on Windows is 32-bit, so you might have some
> problems running it well on a larger Windows machine; if that's the
> case, then it would likely help if you could run this on 64-bit Linux
> with 8+ gigs of RAM.  This solution would allow you to grow to several
> hundred more connections before you'd have issues.  Also, performance
> might be better on Linux with this many connections, but I have no
> empirical evidence to support that belief.

Yep, I think that might be the way forward, and for all our other clients we are already doing this, but the largest one, the one giving us the problems, unfortunately has less flexible IT people. Still, if it is the only solution, they will have to budge.

Thanks,
Kobus

Re: Postgres memory question

From: John R Pierce
Kobus Wolvaardt wrote:
> We have similar connection settings, same app, same DB, same username,
> same settings. What apart from installing pgpool do you need to do to
> get it to seamlessly share connections? Do you need to do anything in
> code? Do a special query to indicate the end of a transaction?

Your app would need to be modified to open a connection, use it, and
close it before going idle for a long period.  It could do this with or
without a connection pool; the pool just speeds up the process by
keeping the actual database connections open.  If you have 200 threads
that are idle 95% of the time, you likely wouldn't need more than 20
actual database connections.
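
A rough sketch of that pattern, in Python with psycopg2 just for
illustration (connection details and table name are made up); the
Delphi code would follow the same shape:

    from psycopg2 import pool

    # 200 clients idle 95% of the time average only ~10 active
    # connections, so a cap of 20 leaves comfortable headroom.
    db_pool = pool.ThreadedConnectionPool(
        minconn=2, maxconn=20,
        host="dbserver", dbname="appdb", user="appuser", password="secret")

    def capture_transaction(payload):
        conn = db_pool.getconn()        # borrow from the small shared pool
        try:
            with conn.cursor() as cur:
                cur.execute("INSERT INTO transactions (payload) VALUES (%s)",
                            (payload,))
            conn.commit()
        finally:
            db_pool.putconn(conn)       # return it instead of holding it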



Re: Postgres memory question

From: Bill Moran
In response to Kobus Wolvaardt <kobuswolf@gmail.com>:

> 2009/8/9 Scott Marlowe <scott.marlowe@gmail.com>
>
>> On Sun, Aug 9, 2009 at 4:06 AM, Kobus Wolvaardt<kobuswolf@gmail.com>
>> wrote:
>>> We have software deployed on our network that needs Postgres; we have
>>> a server that hosts the database, and all worked fine until we crossed
>>> about 200 users. The application is written so that it makes a
>>> connection right at the start and keeps it alive for the duration of
>>> the app. The app is written in Delphi. The Postgres server runs on a
>>> Windows 2008 server with a quad-core CPU and 4 GB of RAM.
>>
>> Is this an app you can fix yourself, or are you stuck with this
>> mis-step in design?
>
> It is our app, but it is not going to be easy to change. It will get
> changed, but the time frame is a bit long, and we need a solution to
> hold us over.

We have servers using about 200 connections on average ... it climbs up
to 300+ during busy use.  I've seen it peak as high as 450, and we've seen
no performance issues.

This is a quad core with 4G of RAM.  Of course the OS isn't windows, it's
64 bit FreeBSD.

However, my point is that your server _can_ be tuned to handle loads like
this.  I can't say for sure how much the OS makes a difference in this case,
but I expect any POSIX system is going to scale better than Windows.

As far as tuning goes, I just went through the config file and tuned
everything logically based on published best practices.  Aside from the
FSM settings, I don't think I've had to fine-tune anything else since.
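
(For reference, the FSM settings are the pre-8.4 free-space-map knobs
in postgresql.conf; the values below are purely illustrative, since the
right numbers depend on database size and update churn:)

    max_fsm_pages = 1000000       # pages with free space to remember
    max_fsm_relations = 2000      # tables/indexes tracked in the map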

And for those who may want to jump in -- we have investigated pgpool
several times; we just can't justify the added complexity when the
system just works as is, but we're ready to add it on quickly should
problems arise.

--
Bill Moran
http://www.potentialtech.com
http://people.collaborativefusion.com/~wmoran/

Re: Postgres memory question

From: Vick Khera
On Mon, Aug 10, 2009 at 9:46 AM, Bill Moran<wmoran@potentialtech.com> wrote:
> We have servers using about 200 connections on average ... it climbs up
> to 300+ during busy use.  I've seen it peak as high as 450, and we've seen
> no performance issues.
>
> This is a quad core with 4G of RAM.  Of course the OS isn't windows, it's
> 64 bit FreeBSD.

I too run 64-bit FreeBSD 7.2.  However, in my primary use case, anything
short of 20GB of RAM makes the application very unresponsive when many
customers are online.  We usually don't have more than about 45 to 50
simultaneous connections.  My general-use Pg server has 4GB, and that
is more than adequate for the miscellaneous uses of blogs, ad servers,
and Drupal installations.

Re: Postgres memory question

From: Bill Moran
On Mon, 10 Aug 2009 13:49:02 -0400
Vick Khera <vivek@khera.org> wrote:

> On Mon, Aug 10, 2009 at 9:46 AM, Bill Moran<wmoran@potentialtech.com> wrote:
> > We have servers using about 200 connections on average ... it climbs up
> > to 300+ during busy use.  I've seen it peak as high as 450, and we've seen
> > no performance issues.
> >
> > This is a quad core with 4G of RAM.  Of course the OS isn't windows, it's
> > 64 bit FreeBSD.
>
> I too run 64-bit FreeBSD 7.2.  However, in my primary use case, anything
> short of 20GB of RAM makes the application very unresponsive when many
> customers are online.  We usually don't have more than about 45 to 50
> simultaneous connections.  My general-use Pg server has 4GB, and that
> is more than adequate for the miscellaneous uses of blogs, ad servers,
> and Drupal installations.

Results will obviously vary by installation, usage, and application type.

Are you saying you have performance issues with the application when there
are many idle connections?  Because that was the original discussion.
Performance under heavy concurrent load is another topic, and a much more
complex one.

-Bill

Re: Postgres memory question

From: Allan Kamau

Hi Kobus,

My simple (perhaps wrong) understanding of how DB connection pooling
works is that active connections cannot be shared. The connection
pool manager may, however, assign a free connection from the pool to
service a connection request; a connection is usually declared free
when the application holding it closes it (or returns it to the pool
in a pool-specific manner).
If this is correct, having an application request a connection at the
start of a session and then hold on to it for the duration of the
session will yield at least as many connections to the DB as there are
sessions. It may then be advisable to (re)write the application to
open (request a connection from the pool) and close (return a
connection to the pool) DB connections for each logical data
transaction (or group of closely associated transactions). Since in
your case a code rewrite seems unfavourable (due to time pressure),
the DB connection pooling option may not be viable, as you may have
noticed.

Allan.

Re: Postgres memory question

From: Scott Marlowe
On Mon, Aug 10, 2009 at 3:16 PM, Allan Kamau<kamauallan@gmail.com> wrote:
> My simple (perhaps wrong) understanding of how DB connection pooling
> works is that active connections cannot be shared. The connection
> pool manager may, however, assign a free connection from the pool to
> service a connection request; a connection is usually declared free
> when the application holding it closes it (or returns it to the pool
> in a pool-specific manner).
> If this is correct, having an application request a connection at the
> start of a session and then hold on to it for the duration of the
> session will yield at least as many connections to the DB as there are
> sessions. It may then be advisable to (re)write the application to
> open (request a connection from the pool) and close (return a
> connection to the pool) DB connections for each logical data
> transaction (or group of closely associated transactions). Since in
> your case a code rewrite seems unfavourable (due to time pressure),
> the DB connection pooling option may not be viable, as you may have
> noticed.

Yeah, from what the OP stated as requirements, I'm guessing the one
they can push back on the easiest is to get more memory and run
Linux / BSD / OpenSolaris instead of Windows Server.

Re: Postgres memory question

From: Kobus Wolvaardt
Hi,

Thanks for all the answers.

I think I need to first look into the Windows connection issue (where some memory pool gets used up and then new connections cannot be made). If that does not help, upping the RAM a bit might help (at least buy some time). And I should look into turning down the options that drive per-connection memory use (does anybody know which options lower per-connection memory usage? The only one that seems clear to me is work_mem).
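
For what it's worth, these are the candidates I have turned up so far (values only illustrative; corrections welcome):

    # per-connection (per-backend) memory knobs in postgresql.conf
    work_mem = 1MB          # per sort/hash step in a query; one backend
                            # can use several at once
    temp_buffers = 8MB      # per-session cap for temp tables, allocated
                            # only if temp tables are actually used
    max_connections = 250   # each allowed slot also reserves some
                            # shared memory up front
    # (shared_buffers is shared by all backends, not per connection)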

But ultimately we need to rewrite the code to not keep connections open, or to avoid queries that upset transaction pooling (a la pgbouncer). And we need to get that client onto Linux so that memory increases can be taken advantage of.

If there are other suggestions that would be much appreciated.

Regards,
Kobus

