Thread: Hardware recommendations?

Hardware recommendations?

From: Steve Atkins
I'm looking for generic advice on hardware to use for "mid-sized" postgresql servers, $5k or a bit more.

There are several good documents from the 9.0 era, but hardware has moved on since then, particularly with changes in SSD pricing.

Has anyone seen a more recent discussion of what someone might want for PostgreSQL in 2017?

Cheers,
  Steve



Re: Hardware recommendations?

From: "Joshua D. Drake"
On 11/02/2016 10:03 AM, Steve Atkins wrote:
> I'm looking for generic advice on hardware to use for "mid-sized" postgresql servers, $5k or a bit more.
>
> There are several good documents from the 9.0 era, but hardware has moved on since then, particularly with changes in SSD pricing.
>
> Has anyone seen a more recent discussion of what someone might want for PostgreSQL in 2017?

The rules haven't changed much: more cores (even if a bit slower) are
better than fewer, as much RAM as the budget will allow, and:

SSD

But make sure you get datacenter/enterprise SSDs. Consider that even a
slow datacenter/enterprise SSD can do 500MB/s of random writes, and
reads just as fast if not faster. That means for most installations, a
RAID1 is more than enough.
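
To put rough numbers on that, here's a back-of-envelope sketch in
Python (illustrative figures, not from any particular drive's
datasheet) of how many 8kB page writes a single such drive can absorb:

    # Back-of-envelope: 8 kB page writes per second that one "slow"
    # 500 MB/s enterprise SSD can absorb. Figures are illustrative.
    drive_mb_per_s = 500  # sustained random write throughput
    page_kb = 8           # PostgreSQL's default block size
    pages_per_s = drive_mb_per_s * 1024 / page_kb
    print(f"~{pages_per_s:,.0f} page writes/s per drive")  # ~64,000

Most mid-sized workloads never get anywhere near that.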

JD


--
Command Prompt, Inc.                  http://the.postgres.company/
                         +1-503-667-4564
PostgreSQL Centered full stack support, consulting and development.
Everyone appreciates your honesty, until you are honest with them.
Unless otherwise stated, opinions are my own.


Re: Hardware recommendations?

From: Scott Marlowe
On Wed, Nov 2, 2016 at 11:40 AM, Joshua D. Drake <jd@commandprompt.com> wrote:
> On 11/02/2016 10:03 AM, Steve Atkins wrote:
>>
>> I'm looking for generic advice on hardware to use for "mid-sized"
>> postgresql servers, $5k or a bit more.
>>
>> There are several good documents from the 9.0 era, but hardware has moved
>> on since then, particularly with changes in SSD pricing.
>>
>> Has anyone seen a more recent discussion of what someone might want for
>> PostgreSQL in 2017?
>
>
> The rules haven't changed much: more cores (even if a bit slower) are
> better than fewer, as much RAM as the budget will allow, and:
>
> SSD
>
> But make sure you get datacenter/enterprise SSDs. Consider that even a
> slow datacenter/enterprise SSD can do 500MB/s of random writes, and
> reads just as fast if not faster. That means for most installations, a
> RAID1 is more than enough.

Just to add that many setups utilizing SSDs are as fast or faster
using kernel-level RAID as they are with a hardware RAID controller,
especially if the RAID controller has caching enabled. We went from
3k-5k tps to 15k-18k tps by turning off caching on modern LSI MegaRAID
controllers running RAID5.
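
If you want to check this on your own hardware, a quick pgbench run
before and after flipping the controller's cache setting will show the
difference. A minimal sketch, assuming a database named "bench" that
has already been initialized with "pgbench -i -s 100 bench":

    # Minimal sketch: wrap pgbench so tps can be compared with the
    # RAID controller cache on vs. off. Assumes a database "bench"
    # initialized beforehand with "pgbench -i -s 100 bench".
    import subprocess

    def run_pgbench(clients=10, threads=2, seconds=60, db="bench"):
        """Run pgbench and return its output, which ends with tps lines."""
        result = subprocess.run(
            ["pgbench", "-c", str(clients), "-j", str(threads),
             "-T", str(seconds), db],
            capture_output=True, text=True, check=True)
        return result.stdout

    print(run_pgbench())  # look for the "tps = ..." lines in the output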


Re: Hardware recommendations?

From: Steve Crawford
After much cogitation I eventually went RAID-less. Why? The only option for hardware RAID was SAS SSDs, and since they aren't built on electro-mechanical spinning-rust technology, the RAID card seemed like just another point of solid-state failure. On top of that, the RAID card limited me to the relatively slow SAS data-transfer rates, which are blown away by what you get from something like an Intel NVMe SSD plugged into the PCIe bus. RAIDing those could be done in software, plus $$$ for the NVMe SSDs, but I already have data redundancy through a combination of regular backups and streaming replication to identically equipped machines that rarely lag the master by more than a second.
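
If you want to keep an eye on that lag, a quick check from each
standby does the job. A minimal sketch with psycopg2 (the host name is
a placeholder for your own replica):

    # Minimal sketch: check streaming-replication lag from a standby.
    # "replica1" is a placeholder host; adjust the DSN for your setup.
    # Note: this value grows while the master is idle, so read it with
    # that in mind.
    import psycopg2

    conn = psycopg2.connect("host=replica1 dbname=postgres")
    cur = conn.cursor()
    cur.execute("SELECT now() - pg_last_xact_replay_timestamp() AS lag")
    print("replication lag:", cur.fetchone()[0])
    conn.close()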

Cheers,
Steve


Re: Hardware recommendations?

From: John R Pierce
On 11/2/2016 3:01 PM, Steve Crawford wrote:
> After much cogitation I eventually went RAID-less. Why? The only
> option for hardware RAID was SAS SSDs, and since they aren't built on
> electro-mechanical spinning-rust technology, the RAID card seemed
> like just another point of solid-state failure. On top of that, the
> RAID card limited me to the relatively slow SAS data-transfer rates,
> which are blown away by what you get from something like an Intel
> NVMe SSD plugged into the PCIe bus. RAIDing those could be done in
> software, plus $$$ for the NVMe SSDs, but I already have data
> redundancy through a combination of regular backups and streaming
> replication to identically equipped machines that rarely lag the
> master by more than a second.

Just track the write wear life remaining on those NVMe cards, and
maintain a realistic estimate of lifetime remaining in months, so you
can budget for replacements. The complication with PCI NVMe is how to
manage a replacement when the card is nearing EOL. The best solution
is probably failing over to a replication slave database, then
replacing the worn-out card on the original server and bringing it up
from scratch as a new slave; this can be done with minimal service
interruptions. Note your slaves will be getting nearly as many writes
as the masters, so they will likely need replacing in the same time
frame.
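
Here's a minimal sketch of that tracking, using nvme-cli's smart-log
(requires root; the drive-age figure is an assumption you'd fill in
from your own deployment records):

    # Minimal sketch: read percentage_used from the NVMe SMART log via
    # nvme-cli and project remaining rated endurance in months.
    # Requires nvme-cli and root. drive_age_months is an assumption
    # filled in from your own purchase/deployment records.
    import re
    import subprocess

    def percentage_used(device="/dev/nvme0"):
        out = subprocess.run(["nvme", "smart-log", device],
                             capture_output=True, text=True,
                             check=True).stdout
        m = re.search(r"percentage_used\s*:\s*(\d+)", out)
        return int(m.group(1))  # 0-100; 100 = rated endurance consumed

    drive_age_months = 12
    used = percentage_used()
    if used:
        left = drive_age_months * (100 - used) / used
        print(f"~{left:.0f} months of rated endurance remaining")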



--
john r pierce, recycling bits in santa cruz



Re: Hardware recommendations?

From: Scott Marlowe
On Wed, Nov 2, 2016 at 4:19 PM, John R Pierce <pierce@hogranch.com> wrote:
> On 11/2/2016 3:01 PM, Steve Crawford wrote:
>>
>> After much cogitation I eventually went RAID-less. Why? The only
>> option for hardware RAID was SAS SSDs, and since they aren't built on
>> electro-mechanical spinning-rust technology, the RAID card seemed
>> like just another point of solid-state failure. On top of that, the
>> RAID card limited me to the relatively slow SAS data-transfer rates,
>> which are blown away by what you get from something like an Intel
>> NVMe SSD plugged into the PCIe bus. RAIDing those could be done in
>> software, plus $$$ for the NVMe SSDs, but I already have data
>> redundancy through a combination of regular backups and streaming
>> replication to identically equipped machines that rarely lag the
>> master by more than a second.
>
>
> Just track the write wear life remaining on those NVMe cards, and
> maintain a realistic estimate of lifetime remaining in months, so you
> can budget for replacements. The complication with PCI NVMe is how to
> manage a replacement when the card is nearing EOL. The best solution
> is probably failing over to a replication slave database, then
> replacing the worn-out card on the original server and bringing it up
> from scratch as a new slave; this can be done with minimal service
> interruptions. Note your slaves will be getting nearly as many writes
> as the masters, so they will likely need replacing in the same time
> frame.

Yeah, the last thing you want is to have all your SSDs fail at once due
to write-cycle end of life. Where I used to work we had pretty
hard-working machines doing something like 500 to 1000 writes/s, and
after a year they were at ~90% write life left. YMMV depending on the
SSD, etc.

A common trick is to overprovision if possible. Need 100GB of storage
for a fast transactional db? Use 10% of a bunch of 800GB drives to make
an array, and you now have a BUNCH of spare write cycles per device for
extra-long life.
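
The arithmetic behind that, as a sketch with illustrative numbers (not
from any specific drive's datasheet):

    # Illustrative arithmetic for the overprovisioning trick: endurance
    # is rated against the whole drive, so spreading a small working
    # set across mostly-empty drives multiplies effective write cycles.
    raw_capacity_gb = 800  # whole drive
    used_fraction = 0.10   # only partition/use 10% of it
    rated_dwpd = 1.0       # drive writes per day, whole-drive rating

    working_set_gb = raw_capacity_gb * used_fraction         # 80 GB
    write_budget_gb_per_day = raw_capacity_gb * rated_dwpd   # 800 GB/day
    effective_dwpd = write_budget_gb_per_day / working_set_gb
    print(f"effective DWPD on the working set: {effective_dwpd:.0f}")  # 10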