Re: postgresql.conf recommendations - Mailing list pgsql-performance

From Charles Gomes
Subject Re: postgresql.conf recommendations
Date
Msg-id BLU002-W33BBF2489B3A231AC13AD1AB0A0@phx.gbl
In response to Re: postgresql.conf recommendations  (Scott Marlowe <scott.marlowe@gmail.com>)
Responses Re: postgresql.conf recommendations  (Scott Marlowe <scott.marlowe@gmail.com>)
List pgsql-performance


> Date: Sat, 9 Feb 2013 14:03:35 -0700
> Subject: Re: [PERFORM] postgresql.conf recommendations
> From: scott.marlowe@gmail.com
> To: jeff.janes@gmail.com
> CC: charlesrg@outlook.com; strahinjak@nordeus.com; kgrittn@ymail.com; johnnydtan@gmail.com; ac@hsk.hk; jkrupka@gmail.com; alex@paperlesspost.com; pgsql-performance@postgresql.org
>
> On Sat, Feb 9, 2013 at 1:16 PM, Jeff Janes <jeff.janes@gmail.com> wrote:
> > On Sat, Feb 9, 2013 at 6:51 AM, Scott Marlowe <scott.marlowe@gmail.com> wrote:
> >> On Thu, Feb 7, 2013 at 7:41 AM, Charles Gomes <charlesrg@outlook.com> wrote:
> >>> I've benchmarked shared_buffers with high and low settings; on a server
> >>> dedicated to postgres with 48GB of RAM my settings are:
> >>> shared_buffers = 37GB
> >>> effective_cache_size = 38GB
> >>>
> >>> Having a small number and depending on OS caching is unpredictable; if the
> >>> server is dedicated to postgres you want to make sure postgres has the memory.
> >>> A random unrelated process doing a cat /dev/sda1 should not destroy postgres
> >>> buffers.
> >>> I agree your problem is most likely related to dirty_background_ratio, where
> >>> the buffers are read-only and have nothing to do with disk writes.
> >>
> >> You make an assertion here but do not tell us of your benchmarking
> >> methods.
> >
> > Well, he is not the only one committing that sin.
>
> I'm not asking for a complete low-level view, but it would be nice to
> know if he's benchmarking heavy read or write loads, lots of users, a
> few users, something. All we get is "I've benchmarked a lot" followed
> by "don't let the OS do the caching." At least with my testing I was
> using a large transactional system (heavy write) and there I KNOW from
> testing that large shared_buffers do nothing but get in the way.
>
> All the rest of the stuff you mention is why we have effective_cache_size,
> which tells postgresql about how much of the data CAN be cached.
> In short, postgresql is designed to use and/or rely on the OS cache.
>
Hello Scott

I've tested using 8 bulk writers on an 8-core machine (16 threads).

I've loaded a database with 17 partitions, 900 million rows in total, and later executed single queries against it.
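
Roughly, the load harness looked like this (a sketch only; the database, table, and chunk-file names are placeholders, not my exact script):

    #!/usr/bin/env bash
    # Launch 8 parallel bulk writers, one per core; each COPYs its own
    # pre-split chunk into the partitioned table.
    for i in $(seq 1 8); do
        psql -d benchdb -c "\copy bench_data FROM '/data/chunk_${i}.csv' WITH CSV" &
    done
    wait    # block until all eight writers have finished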

In my case the main point of having postgres manage the memory is that postgres is the single most important application running on the server.


If Linux managed the cache, it would not know what is important and what should be discarded; it would simply discard the oldest, least-accessed entries.

Let's say a DBA logs in to the server and copies a 20GB file. If you leave the decision to Linux, it will decide that the 20GB file is more important than the older, less heavily accessed postgres entries.
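
You can actually watch this happen; a rough illustration (the paths here are made up):

    # Compare how much RAM the page cache holds before and after the copy:
    grep -E 'Cached|Dirty' /proc/meminfo
    cp /backups/dump_20gb.bin /var/tmp/
    grep -E 'Cached|Dirty' /proc/meminfo

    # If the copy really has to happen, O_DIRECT keeps it out of the page cache:
    dd if=/backups/dump_20gb.bin of=/var/tmp/dump_20gb.bin bs=1M iflag=direct oflag=direct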


This has to be looked at case by case; in my case I need PostgreSQL to perform FAST, and I also don't want cron jobs (locate, logrotate, prelink, makewhatis, for example) taking my cache out.
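
On a RHEL-style box those all live under /etc/cron.daily, so it's easy to check what will walk the filesystem behind your back (the listing is illustrative; names vary by distro):

    ls /etc/cron.daily/
    #   logrotate  makewhatis.cron  mlocate.cron  prelink  ...
    # mlocate's updatedb scans every file it can reach, dragging the whole
    # filesystem through the page cache in the process.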


If postgres were unable to manage 40GB of RAM, we would be in major trouble, because nowadays it's normal to buy 64GB servers, and many of us have dealt with 512GB RAM servers.


By the way, I've tested this same scenario with Postgres, MySQL and Oracle, and Postgres has given the best results overall, especially with symmetric replication turned on.
