Re: Increasing number of PG connections. - Mailing list pgsql-performance

From Kevin Barnard
Subject Re: Increasing number of PG connections.
Date
Msg-id 401E8493.30458.332D5CEF@localhost
In response to Re: Increasing number of PG connections.  ("scott.marlowe" <scott.marlowe@ihs.com>)
List pgsql-performance
On 2 Feb 2004 at 13:58, scott.marlowe wrote:

> what do you mean at 2 GB?  Is that how much is in kernel cache plus
> buffer, plus used, plus etc???  Could you give us the top of top output to
> make sure?  If most of that is kernel cache, then that's fine.

2GB was total system memory.  We upgraded to 4GB prior to increasing the
number of connections.

Here's the top of top:

 16:14:17  up 2 days, 16:15,  1 user,  load average: 7.60, 6.56, 4.61
730 processes: 721 sleeping, 9 running, 0 zombie, 0 stopped
CPU states:  cpu    user    nice  system    irq  softirq  iowait    idle
           total    0.0%    0.0%    0.0%   0.0%     0.0%    0.0%    0.0%
           cpu00    0.0%    0.0%    0.0%   0.0%     0.0%    0.0%    0.0%
           cpu01    0.0%    0.0%    0.0%   0.0%     0.0%    0.0%    0.0%
           cpu02    0.0%    0.0%    0.0%   0.0%     0.0%    0.0%    0.0%
           cpu03    0.0%    0.0%    0.0%   0.0%     0.0%    0.0%    0.0%
Mem:  3747644k av, 3298344k used,  449300k free,       0k shrd,  147880k buff
      2158532k active,             760040k inactive
Swap: 1048088k av,       0k used, 1048088k free                 2262156k cached
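
A rough reading of that Mem/Swap line, to answer the kernel-cache question: of the
3298344k shown as used, about 2262156k is kernel page cache and 147880k is buffers,
so only roughly 890MB is actually held by processes, and nothing has hit swap.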


The DB is pretty close to max_connections at this point in time.  I don't know why
the CPU shows 0% in every bucket.  It looks like I can increase the number of
connections a little from here.  This is a fairly standard Fedora install running
kernel 2.4.22.  Postgres is compiled from source, version 7.4.1.

> experience has been that individual postgresql backends only weigh in at a
> mega byte at most, and they share buffer, so 700 connections can be
> anywhere from 300meg to 1 gig.  the rest would be buffer memory.  It's not
> a good idea to give up too much to shared buffers, as the database isn't
> as good at caching as the kernel.

OK, I take this to mean I should keep shared_buffers at around 2x connections then, correct?
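(For concreteness, assuming the default 8KB page size: 2 x 700 connections = 1400
buffers, which is only about 11MB of shared memory; my current setting of 1500 is
roughly that.)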

>
> What do you have in postgresql.conf?  sort_mem, shared_buffers, etc???

Here is what I have that is not set from the defaults.

max_connections = 700
shared_buffers = 1500
sort_mem = 512
random_page_cost = 2
stats_start_collector = true
stats_command_string = true
stats_block_level = true
stats_row_level = true
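
(In case the units matter here: with the default 8KB page size, shared_buffers = 1500
works out to about 12MB, and sort_mem = 512 means 512KB per sort, so even 700
backends sorting at once would only claim on the order of 350MB.)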


> sort_mem can be a real killer if it lets the processes chew up too much
> memory.  Once sort_mem gets high enough to make the machine start swapping
> it is doing more harm than good being that high, and should usually be
> lowered a fair bit.

I dropped it down to 512, as you can see.  Should I be running with all of the stats on?
I am no longer using pg_autovacuum.  I seem to be getting better results with an
hourly VACUUM ANALYZE.
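
The hourly job is nothing fancy; a minimal sketch of the crontab entry (run as the
postgres user, with a made-up database name of mydb) would be:

0 * * * * vacuumdb --analyze mydb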

> How many disks in your RAID5?  The more the better.  Is it hardware with
> battery backed cache?  If you write much to it it will help to have
> battery backed cache on board.  If it's a megaraid / LSI board, get the
> megaraid2 driver, it's supposedly much faster.

4 disk IBM ServeRAID 5i with battery backed cache.

> You may find it hard to get postgresql to use any more memory than you
> have, as 32 bit apps can only address 2 gigs anyway, but the extra can
> certainly be used by the kernel as cache, which will help.

Isn't that only true for each individual process's address space?  Shouldn't each process
have access to at most 2GB?  If each backend is in its own process space, is this really a
limit, since all of my queries are pretty small?

I have been monitoring the system as it gets up to load.  Most of the time the
system sits around 100-300 connections.  Once it ramps up, it ramps up hard.  Top
starts cycling between 0 and 133% CPU for irq, softirq, and iowait.  The system stays
at 700 connections until users give up.  I can watch bandwidth utilization drop to
almost nothing right before the DB catches up.

--
Kevin Barnard
Speed Fulfillment and Call Center
kbarnard@speedfc.com
214-258-0120
