Re: shared-buffers set to 24GB but the RAM only use 4-5 GB average - Mailing list pgsql-performance

From Merlin Moncure
Subject Re: shared-buffers set to 24GB but the RAM only use 4-5 GB average
Date
Msg-id CAHyXU0wuP84JucOujSrkY1V3MkY9P37G9peGyqG8pW8S6f7CBg@mail.gmail.com
In response to Re: shared-buffers set to 24GB but the RAM only use 4-5 GB average  (Scott Marlowe <scott.marlowe@gmail.com>)
List pgsql-performance
On Tue, Oct 6, 2015 at 10:10 AM, Scott Marlowe <scott.marlowe@gmail.com> wrote:
> On Tue, Oct 6, 2015 at 3:33 AM, FattahRozzaq <ssoorruu@gmail.com> wrote:
>> @Merlin Moncure, I got the calculation using pg_tune. And I modified
>> the shared_buffers=24GB and the effective_cache_size=64GB
>>
>> @Igor Neyman,
>> Yes, I had a performance problem: with exactly the same query the
>> response time was sometimes 11ms and sometimes 100ms, and it seems
>> to fluctuate randomly.
>>
>> Any idea on how I should configure postgres to effectively utilize the
>> hardware and reduce the response time to be quicker?
>> *(RAM=128GB, CPU=24cores, RAID-1+0:SSD)
>
> OK I'm gonna copy and paste some stuff from previous messages since
> top-posting kinda messed up the formatting.
>
> First, this line:
>
>>> max_connections = 9999
>
> When you are experiencing this problem, how many connections are
> there? There's a bell shaped curve for performance, and the peak is
> WAY less than 9999. The IPC / shared memory performance etc will drop
> off very quickly after a few dozen or at most a hundred or so
> connections. If your application layer needs to keep more than a
> couple dozen connections open, then it's a REAL good idea to throw a
> connection pooler between the app and the db. I recommend pgbouncer as
> it's very easy to set up.
>
> BUT more important than that, it appears you're looking for a "go
> faster" knob, and there may or may not be one for what you're doing.
>
> I'd recommend profiling your db server under load to see what's going
> on. What does iostat, iotop, top, etc show you when this is happening?
> Are you running out of IO? Memory, CPU? What does "explain analyze
> slowquerygoeshere" tell you?
>
> I would recommend you consider reducing shared_buffers unless you have
> some concrete proof that 24GB is helping. Big shared_buffers have
> maintenance costs that affect write speeds, and slow writing can make
> everything kind of back up behind it.  Typically something under 1GB
> is fine. PostgreSQL relies on the OS to cache most read data. So
> trying to crank up shared_buffers to do the same job is often either
> counter-productive or of no real gain.

This is spot on.  9999 max_connections is gross overconfiguration
(unless your server has 9999 cores....).   If you need to support a
large number of hopefully idle clients, you need to immediately
explore pgbouncer.
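
For reference, a minimal pgbouncer configuration might look something
like this; the database name, file paths, and pool size are placeholder
values you'd adjust for your own environment:

```ini
; pgbouncer.ini -- hypothetical values, adjust for your setup
[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
; transaction pooling lets many idle clients share few server connections
pool_mode = transaction
; keep the server-side connection count near the performance sweet spot
default_pool_size = 24
; clients connect here instead of to postgres directly
max_client_conn = 9999
```

The point being that the application can still open thousands of client
connections to the pooler, while PostgreSQL itself only ever sees a few
dozen.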

Also, OP did not answer my previous question correctly: "how did you
measure that?" was asking how you determined that the server was only
using 4-5GB.  Reason for that question is that measuring shared memory
usage is a little more complex than it looks on the surface.  It's
pretty common for sysadmins unfamiliar with it to under- or over-
count usage.  We need to confirm your measurements with some
diagnostics from utilities like 'top'.
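
To make the over-counting pitfall concrete, here is a small
illustration (with made-up numbers, not measurements from the OP's
server) of why summing per-backend RES figures from 'top' inflates the
apparent memory usage: each backend's RES already includes the shared
pages (SHR) it has touched, so the shared_buffers region gets counted
once per backend.

```python
# Illustrative only: sample top-style figures in KiB, hypothetical values.
# (pid, RES_kib, SHR_kib) for three PostgreSQL backends.
backends = [
    (1001, 2_200_000, 2_100_000),
    (1002, 1_900_000, 1_850_000),
    (1003, 2_050_000, 2_000_000),
]

# Naive approach: sum RES across backends -- over-counts shared memory.
naive_total = sum(res for _, res, _ in backends)

# Better estimate: private memory per backend (RES - SHR), plus the
# shared region counted only once.
private_total = sum(res - shr for _, res, shr in backends)
shared_once = max(shr for _, _, shr in backends)
better_total = private_total + shared_once

print(f"naive sum of RES:              {naive_total / 1024:.0f} MiB")
print(f"private + shared counted once: {better_total / 1024:.0f} MiB")
```

With these sample numbers the naive sum is nearly three times the
better estimate, which is exactly the kind of discrepancy that makes
"the RAM only uses 4-5 GB" claims worth double-checking in both
directions.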

If your performance issues are in fact cache-related (which really
means storage), you have the traditional mitigation strategies: prewarm
the cache, buy faster storage, etc.
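
For the prewarming option, the pg_prewarm contrib module (shipped since
PostgreSQL 9.4) can pull a relation into cache after a restart; the
table name below is a placeholder:

```sql
-- Hypothetical table name; run once per hot relation after a restart.
CREATE EXTENSION IF NOT EXISTS pg_prewarm;
SELECT pg_prewarm('my_hot_table');              -- load into shared_buffers
SELECT pg_prewarm('my_hot_table', 'prefetch');  -- or just hint the OS cache
```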

merlin

