Thread: [GENERAL] RAM, the more the merrier?

[GENERAL] RAM, the more the merrier?

From: Willy-Bas Loos
Hi,

We have a PostgreSQL database that is now 1.4TB on disk and slowly growing.
In the past, we had (read) performance trouble with this database, and the solution was to buy a server that could fit the db into memory. It had 0.5 TB of RAM and at the time it could hold all of the data easily.
Those servers are now old, the db has outgrown the RAM, and we are doing more reads and writes too (but the problem has not yet returned).

So I am looking into buying new servers. I'm thinking of equipping them with 1TB of RAM and room to expand. So the database will not fit completely, but largely anyway. Also, if we can afford it, they will have SSDs instead of RAID10 SAS spindles.

But I've read that there is some kind of maximum for shared_buffers, beyond which increasing it would actually decrease performance.
Is 1TB of RAM, or even 2TB, always a good thing?
And is there anything special that I should look out for when configuring such a server?
Or would it be much better to buy 2 smaller servers and tie them together somehow? (partitioning, replication, ...)

--
Willy-Bas Loos

Re: [GENERAL] RAM, the more the merrier?

From: Achilleas Mantzios
On 29/06/2017 17:19, Willy-Bas Loos wrote:
> Hi,
>
> We have a postgresql database that is now 1.4TB in disksize and slowly growing.
> In the past, we've had (read) performance trouble with this database and the solution was to buy a server that can fit the db into memory. It had 0.5 TB of RAM and at the time it could hold all of the data easily.
> Those servers are now old and the db has outgrown the RAM and we are doing more reads and writes too (but the problem has not yet returned).
>
> So i am looking into buying new servers. I'm thinking of equipping it with 1TB of RAM and room to expand. So the database will not fit completely, but largely anyway. Also, if we can afford it, it will have SSDs instead of RAID10 SAS spindles.
>
> But I've read that there is some kind of maximum to the shared_buffers, where increasing it would actually decrease performance.
> Is 1TB of RAM, or even 2TB always a good thing?
> And is there anything special that I should look out for when configuring such a server?
> Or would it be much better to buy 2 smaller servers and tie them together somehow? (partitioning, replication, ...)

Our DB is also in the 1TB+ range. It is hosted on a cloud VM with only 32GB of RAM but ultra-fast SSD disks. No problems.
IIRC, "fit the DB into RAM" was a trend many years back. The current recommendation for shared_buffers is about 25% of RAM, leaving the rest to be utilized mainly by the kernel page cache and by other programs on the system.
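As a concrete sketch of that rule of thumb for a 1TB-RAM machine (the numbers are illustrative assumptions, not tuned recommendations for any particular workload):

```ini
# postgresql.conf sketch for a server with 1TB of RAM (illustrative values only)
shared_buffers = 256GB          # ~25% of RAM
effective_cache_size = 768GB    # planner hint: RAM likely available as cache
maintenance_work_mem = 2GB      # for VACUUM, CREATE INDEX, etc.
work_mem = 64MB                 # per sort/hash node, per backend; keep conservative
```

Note that work_mem is allocated per operation per backend, so large values multiply quickly under high concurrency.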


-- 
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt

Re: [GENERAL] RAM, the more the merrier?

From: Andreas Kretschmer
Am 29. Juni 2017 16:19:41 MESZ schrieb Willy-Bas Loos <willybas@gmail.com>:
>Hi,
>
>We have a postgresql database that is now 1.4TB in disksize and slowly
>growing.
>In the past, we've had (read) performance trouble with this database
>and
>the solution was to buy a server that can fit the db into memory. It
>had
>0.5 TB of RAM and at the time it could hold all of the data easily.
>Those servers are now old and the db has outgrown the RAM and we are
>doing
>more reads and writes too (but the problem has not yet returned).
>
>So i am looking into buying new servers. I'm thinking of equipping it
>with
>1TB of RAM and room to expand. So the database will not fit completely,
>but
>largely anyway. Also, if we can afford it, it will have SSDs instead of
>RAID10 SAS spindles.
>
>But I've read that there is some kind of maximum to the shared_buffers,
>where increasing it would actually decrease performance.
>Is 1TB of RAM, or even 2TB always a good thing?
>And is there anything special that I should look out for when
>configuring
>such a server?
>Or would it be much better to buy 2 smaller servers and tie them
>together
>somehow? (partitioning, replication, ...)


With current versions you can set shared_buffers to, for instance, 40% of RAM without a problem. Also tune the checkpointer.
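A minimal sketch of checkpointer smoothing for a write-heavy box with lots of RAM (values are illustrative assumptions, not recommendations):

```ini
# Spread checkpoint I/O out instead of taking periodic write bursts
checkpoint_timeout = 30min            # fewer, longer checkpoint cycles
max_wal_size = 64GB                   # avoid frequent WAL-volume-driven checkpoints
checkpoint_completion_target = 0.9    # pace writes across most of the interval
```

The trade-off: longer intervals mean more WAL to replay after a crash, so recovery time grows as these values do.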


Regards, Andreas.


--
2ndQuadrant - The PostgreSQL Support Company


Re: [GENERAL] RAM, the more the merrier?

From: "Joshua D. Drake"
On 06/29/2017 07:19 AM, Willy-Bas Loos wrote:
> Hi,
>
> We have a postgresql database that is now 1.4TB in disksize and slowly
> growing.
> In the past, we've had (read) performance trouble with this database and
> the solution was to buy a server that can fit the db into memory. It had
> 0.5 TB of RAM and at the time it could hold all of the data easily.
> Those servers are now old and the db has outgrown the RAM and we are
> doing more reads and writes too (but the problem has not yet returned).

That's OK. Although it is awesome if the database can fit in memory, it
is not required by any means. What you really want to aim for is having
the active data set in RAM and/or the file cache. Though with SSDs, even
that isn't nearly as important anymore.
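One rough way to check how well the active set is already being cached (a sketch; this only counts shared_buffers hits, so it cannot see the OS page cache and will understate total caching):

```sql
-- Buffer-cache hit ratio across all databases, from pg_stat_database.
-- Values near 1.0 suggest the active data set fits in shared_buffers.
SELECT sum(blks_hit)::float
       / nullif(sum(blks_hit) + sum(blks_read), 0) AS cache_hit_ratio
FROM pg_stat_database;
```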

>
> So i am looking into buying new servers. I'm thinking of equipping it
> with 1TB of RAM and room to expand. So the database will not fit
> completely, but largely anyway. Also, if we can afford it, it will have
> SSDs instead of RAID10 SAS spindles.

An average single SSD equates to roughly 8 spinning disks in terms of raw
performance, and even that figure undersells it, because SSDs are also
better at general multitasking (concurrent, random I/O).

>
> But I've read that there is some kind of maximum to the shared_buffers,
> where increasing it would actually decrease performance.
> Is 1TB of RAM, or even 2TB always a good thing?

This can be true, but it is a matter of proper tuning. If you are on a
reasonably new version of Postgres (9.3+), shared_buffers can sometimes
be as high as 40-50% of RAM (I have seen 90% without issue).

> And is there anything special that I should look out for when
> configuring such a server?

You are going to have to tune huge pages and your checkpoints/background
writer.
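Back-of-the-envelope huge-page sizing (the 2MB page size and 400GB shared memory figure are assumptions for illustration; check your postmaster's actual peak memory before reserving pages):

```shell
# Hypothetical sizing: with ~400GB of shared memory and 2MB huge pages,
# the kernel must reserve at least 400*1024/2 pages (plus some headroom).
shared_gb=400
hugepage_mb=2
pages=$(( shared_gb * 1024 / hugepage_mb ))
echo "reserve at least $pages huge pages"
# As root, something like:
#   sysctl -w vm.nr_hugepages=$pages
# and in postgresql.conf:
#   huge_pages = try
```

With huge_pages = try, PostgreSQL falls back to regular pages if the reservation is insufficient, so it is a safe starting point.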


> Or would it be much better to buy 2 smaller servers and tie them
> together somehow? (partitioning, replication, ...)

That is a big "if". Frankly, a server the size of the one you already
have, with upgraded hardware (SSDs, cores), may work just fine, but it
depends. You could also set up a master->slave (hot standby) pair and
load-balance reads between them.
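A minimal sketch of the hot-standby option, in the 9.x-era syntax current for this thread (host name and user are hypothetical; from v12 onward the recovery.conf settings moved into postgresql.conf):

```ini
# On the primary (postgresql.conf):
wal_level = replica            # "hot_standby" on 9.5 and older
max_wal_senders = 5

# On the standby, in recovery.conf:
standby_mode = 'on'
primary_conninfo = 'host=primary.example.com user=replicator'

# And in the standby's postgresql.conf, to allow read-only queries:
hot_standby = on
```

The read load balancing itself happens outside Postgres, e.g. in the application or a pooler that routes SELECTs to the standby.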

Hope this was helpful and thanks,

JD


--
Command Prompt, Inc. || http://the.postgres.company/ || @cmdpromptinc

PostgreSQL Centered full stack support, consulting and development.
Advocate: @amplifypostgres || Learn: https://pgconf.us
*****     Unless otherwise stated, opinions are my own.   *****