Thread: Hardware recommendations
I need to build a new high performance server to replace our current production database server.

The current server is a SuperMicro 1U with 2 RAID-1 containers (one for data, one for log, SAS - data is 600GB, logs 144GB), 16GB of RAM, running 2 quad-core processors (E5405 @ 2GHz) and an Adaptec 5405 controller with BBU. I am already having serious I/O bottlenecks, with iostat -x showing extended periods where the disk subsystem on the data partition (the one with all the random I/O) is at over 85% busy. The system is running FreeBSD 7.2 amd64 and PostgreSQL 8.4.4 on amd64-portbld-freebsd7.2, compiled by GCC cc (GCC) 4.2.1 20070719 [FreeBSD], 64-bit. Currently I have about 4GB of shared memory allocated to PostgreSQL. The database is currently about 80GB, with about 60GB being in partitioned tables which get rotated nightly to purge old data (sort of like a circular buffer of historic data).

I was looking at one of the machines which Aberdeen has (the X438), and was planning on something along the lines of 96GB RAM with 16 SAS drives (15K). If I create a RAID 10 (stripe of mirrors), leaving 2 hot spares, should I still place the logs in a separate RAID-1 mirror, or can they be left on the same RAID-10 container?

On the processor front, are there advantages to going to X-series processors as opposed to the E series (especially since I am I/O bound)? Is anyone running this type of hardware, especially on FreeBSD? Any opinions, especially concerning the Areca controllers which they use?

The new box would ideally be built with the latest released version of FreeBSD and PG 9.x. Also, is anyone running the 8.x series of FreeBSD with PG 9 in a high-throughput production environment? I will be upgrading one of our test servers in one week to this same configuration to test it out, but just wanted to make sure there aren't any caveats others have experienced, especially as it pertains to the autovacuum not launching worker processes, which I have experienced.

Best regards,

Benjamin
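The iostat symptom Benjamin describes can be watched for programmatically. Below is a minimal Python sketch; the column layout, device names, and sample figures are assumptions modeled on FreeBSD-style `iostat -x` output, not taken from his actual system:

```python
# Minimal sketch: flag devices whose %busy exceeds a threshold in
# `iostat -x` output. The column layout below is an assumption based on
# FreeBSD-style extended output; adjust the field index for your platform.

SAMPLE = """\
device     r/s   w/s    kr/s    kw/s wait svc_t  %b
mfid0     12.0 310.5   96.0  2484.0    4  11.2  88
mfid1      0.5  45.0    4.0   360.0    0   1.3  12
"""

def busy_devices(iostat_output, threshold=85.0):
    """Return (device, %busy) pairs above `threshold`."""
    hot = []
    for line in iostat_output.splitlines():
        fields = line.split()
        if not fields or fields[0] == "device":
            continue
        device, pct_busy = fields[0], float(fields[-1])  # %b is the last column
        if pct_busy > threshold:
            hot.append((device, pct_busy))
    return hot

print(busy_devices(SAMPLE))  # in the sample, the data volume is saturated, not the log volume
```

In practice this would be fed from a periodic `iostat -x` sample rather than a hard-coded string.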
If you are IO-bound, you might want to consider using SSD. A single SSD could easily give you more IOPS than 16 15k SAS drives in RAID 10.

--- On Wed, 12/8/10, Benjamin Krajmalnik <kraj@servoyant.com> wrote:

> From: Benjamin Krajmalnik <kraj@servoyant.com>
> Subject: [PERFORM] Hardware recommendations
> To: pgsql-performance@postgresql.org
> Date: Wednesday, December 8, 2010, 6:03 PM
> [...]
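Andy's claim is easy to sanity-check with rule-of-thumb arithmetic. All figures in this sketch are illustrative assumptions (per-drive IOPS, the read/write mix, the SSD spec), not measurements from the thread:

```python
# Rough sanity check of "one SSD can out-IOPS 16 15k SAS drives in RAID 10".
# Rule-of-thumb assumptions: ~175 random IOPS per 15k RPM SAS drive, RAID 10
# write penalty of 2, ~50,000 IOPS for a contemporary SLC SSD (vendor spec,
# best case).

SAS_DRIVE_IOPS = 175
RAID10_WRITE_PENALTY = 2

def raid10_iops(n_drives, read_fraction):
    """Approximate random IOPS for a RAID 10 set under a mixed workload."""
    reads = n_drives * SAS_DRIVE_IOPS              # all spindles can serve reads
    writes = n_drives * SAS_DRIVE_IOPS / RAID10_WRITE_PENALTY
    return read_fraction * reads + (1 - read_fraction) * writes

# 16 bays minus 2 hot spares -> 14 active drives; assume a write-heavy 30/70 mix.
array = raid10_iops(14, read_fraction=0.3)
print(int(array))       # well under 2,000 random IOPS for the whole array
print(50_000 > array)   # the SSD's spec'd IOPS dwarfs that
```

The exact numbers matter less than the order of magnitude: a spinning-disk array in the proposed size class sits in the low thousands of random IOPS, while the SSDs discussed later in the thread are specified in the tens of thousands.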
Sent from my android device.
-----Original Message-----
From: Benjamin Krajmalnik <kraj@servoyant.com>
To: pgsql-performance@postgresql.org
Sent: Wed, 08 Dec 2010 17:14
Subject: [PERFORM] Hardware recommendations
Received: from mx2.hub.org [200.46.204.254] by mail.pengdows.com with SMTP (EHLO mx2.hub.org)
(ArGoSoft Mail Server Pro for WinNT/2000/XP, Version 1.8 (1.8.9.4)); Wed, 8 Dec 2010 23:14:07
Received: from postgresql.org (mail.postgresql.org [200.46.204.86])
by mx2.hub.org (Postfix) with ESMTP id C1EAD3EAD610;
Wed, 8 Dec 2010 19:16:09 -0400 (AST)
Received: from maia.hub.org (maia-3.hub.org [200.46.204.243])
by mail.postgresql.org (Postfix) with ESMTP id BEF461337B83
for <pgsql-performance-postgresql.org@mail.postgresql.org>; Wed, 8 Dec 2010 19:16:02 -0400 (AST)
Received: from mail.postgresql.org ([200.46.204.86])
by maia.hub.org (mx1.hub.org [200.46.204.243]) (amavisd-maia, port 10024)
with ESMTP id 69961-09
for <pgsql-performance-postgresql.org@mail.postgresql.org>;
Wed, 8 Dec 2010 23:15:55 +0000 (UTC)
X-Greylist: delayed 00:12:11.193596 by SQLgrey-1.7.6
Received: from mail.illumen.com (unknown [64.207.29.137])
by mail.postgresql.org (Postfix) with ESMTP id 69A021337B8C
for <pgsql-performance@postgresql.org>; Wed, 8 Dec 2010 19:15:55 -0400 (AST)
X-MimeOLE: Produced By Microsoft Exchange V6.5
Content-class: urn:content-classes:message
MIME-Version: 1.0
Content-Type: text/plain;
charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Subject: [PERFORM] Hardware recommendations
Date: Wed, 8 Dec 2010 16:03:43 -0700
Message-ID: <F4E6A2751A2823418A21D4A160B689887B0A4D@fletch.stackdump.local>
In-Reply-To: <F4E6A2751A2823418A21D4A160B689887B0A4C@fletch.stackdump.local>
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Thread-Topic: Hardware recommendations
Thread-Index: AcuXJy2x5aJ1UxfPTAK6bTXXH/raOgAABuAQ
References: <F4E6A2751A2823418A21D4A160B689887B0A4C@fletch.stackdump.local>
From: "Benjamin Krajmalnik" <kraj@servoyant.com>
To: <pgsql-performance@postgresql.org>
X-Virus-Scanned: Maia Mailguard 1.0.1
X-Spam-Status: No, hits=-1.107 tagged_above=0 required=5
	tests=BAYES_00=-1.9, RDNS_NONE=0.793
X-Spam-Level:
X-Mailing-List: pgsql-performance
List-Archive: <http://archives.postgresql.org/pgsql-performance>
List-Help: <mailto:majordomo@postgresql.org?body=help>
List-ID: <pgsql-performance.postgresql.org>
List-Owner: <mailto:pgsql-performance-owner@postgresql.org>
List-Post: <mailto:pgsql-per
John,

The platform is a network monitoring system, so we have quite a lot of inserts/updates (every data point has at least one record insert as well as at least 3 record updates). The management GUI has a lot of selects. We are refactoring the database to some degree to aid in the performance, since the performance degradations are correlated to the number of users viewing the system GUI.

My biggest concern with SSD drives is their life expectancy, as well as our need for relatively high capacity. From a purely scalability perspective, this setup will need to support terabytes of data. I suppose I could use tablespaces to put the most accessed data on SSD drives and the rest on regular drives.

As I stated, I am moving to RAID 10, and was just wondering if the logs should still be moved off to different spindles, or will leaving them on the RAID 10 be fine and not affect performance.

> -----Original Message-----
> From: John W Strange [mailto:john.w.strange@jpmchase.com]
> Sent: Wednesday, December 08, 2010 4:32 PM
> To: Benjamin Krajmalnik; pgsql-performance@postgresql.org
> Subject: RE: Hardware recommendations
> [...]
Ben,

It would help if you could tell us a bit more about the read/write mix and transaction requirements. *IF* you are heavy on writes I would suggest moving off the RAID 1 configuration to a RAID 10 setup. I would highly suggest looking at SLC-based solid state drives or, if your budget has legs, look at FusionIO drives.

We currently have several setups with two FusionIO Duo cards that produce > 2GB/sec reads, and over 1GB/sec writes. They are pricey but, long term, cheaper for me than putting a SAN in place that can meet that sort of performance.

It all really depends on your workload:

http://www.fusionio.com/products/iodrive/ - BEST in slot currently IMHO.
http://www.intel.com/design/flash/nand/extreme/index.htm?wapkw=(X25-E) - not a bad alternative.

There are other SSD controllers on the market but I have experience with both so I can recommend both as well.

- John

-----Original Message-----
From: pgsql-performance-owner@postgresql.org [mailto:pgsql-performance-owner@postgresql.org] On Behalf Of Benjamin Krajmalnik
Sent: Wednesday, December 08, 2010 5:04 PM
To: pgsql-performance@postgresql.org
Subject: [PERFORM] Hardware recommendations
[...]

This communication is for informational purposes only. It is not intended as an offer or solicitation for the purchase or sale of any financial instrument or as an official confirmation of any transaction. All market prices, data and other information are not warranted as to completeness or accuracy and are subject to change without notice. Any comments or statements made herein do not necessarily reflect those of JPMorgan Chase & Co., its subsidiaries and affiliates.

This transmission may contain information that is privileged, confidential, legally privileged, and/or exempt from disclosure under applicable law. If you are not the intended recipient, you are hereby notified that any disclosure, copying, distribution, or use of the information contained herein (including any reliance thereon) is STRICTLY PROHIBITED. Although this transmission and any attachments are believed to be free of any virus or other defect that might affect any computer system into which it is received and opened, it is the responsibility of the recipient to ensure that it is virus free and no responsibility is accepted by JPMorgan Chase & Co., its subsidiaries and affiliates, as applicable, for any loss or damage arising in any way from its use. If you received this transmission in error, please immediately contact the sender and destroy the material in its entirety, whether in electronic or hard copy format. Thank you.

Please refer to http://www.jpmorgan.com/pages/disclosures for disclosures relating to European legal entities.
On Thu, Dec 9, 2010 at 01:26, Andy <angelflow@yahoo.com> wrote:
> If you are IO-bound, you might want to consider using SSD.
>
> A single SSD could easily give you more IOPS than 16 15k SAS in RAID 10.

Are there any that don't risk your data on power loss, AND are cheaper
than SAS RAID 10?

Regards,
Marti
>> If you are IO-bound, you might want to consider using SSD.
>>
>> A single SSD could easily give you more IOPS than 16 15k SAS in RAID 10.
>
> Are there any that don't risk your data on power loss, AND are cheaper
> than SAS RAID 10?

Vertex 2 Pro has a built-in supercapacitor to save data on power loss. It's spec'd at 50K IOPS and a 200GB one costs around $1,000.
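For weighing the IOPS argument against Benjamin's capacity concern, a quick cost breakdown helps. The SSD figures are the ones quoted just above; the SAS price and per-drive IOPS are assumed 2010-era ballpark values for illustration only:

```python
# Back-of-envelope $/GB and $/IOPS. SSD numbers are from the thread
# (200 GB Vertex 2 Pro at ~$1,000, spec'd 50K IOPS); the SAS drive's price,
# capacity, and IOPS are assumed values, not from the thread.

ssd = {"price": 1000, "gb": 200, "iops": 50_000}
sas = {"price": 250, "gb": 146, "iops": 175}

for name, d in (("ssd", ssd), ("sas", sas)):
    print(name,
          round(d["price"] / d["gb"], 2), "$/GB,",
          round(d["price"] / d["iops"], 4), "$/IOPS")
```

Under these assumptions the SSD wins decisively on $/IOPS while the SAS drive wins on $/GB, which is exactly the tension between the random-I/O bottleneck and the "terabytes of data" requirement.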
On Wed, Dec 8, 2010 at 5:03 PM, Benjamin Krajmalnik <kraj@servoyant.com> wrote:
> John,
>
> The platform is a network monitoring system, so we have quite a lot of
> inserts/updates (every data point has at least one record insert as well
> as at least 3 record updates). The management GUI has a lot of selects.
> We are refactoring the database to some degree to aid in the performance,
> since the performance degradations are correlated to the number of users
> viewing the system GUI.

Scalability here may be better addressed by having something like hot read-only slaves for the users who want to view data.

> My biggest concern with SSD drives is their life expectancy,

Generally that's not a big issue, especially as the SSDs get larger. Being able to survive a power loss without corruption is more of an issue, so if you go SSD, get ones with a supercapacitor that can write out the data before power down.

> as well as our need for relatively high capacity.

Ahhh, capacity is where SSDs start to lose out quickly. Cheap 10k SAS drives, and less so 15k drives, are way less per gigabyte than SSDs, and you can only fit so many SSDs onto a single controller / into a single cage before you're broke.

> From a purely scalability perspective, this setup will need to support
> terabytes of data. I suppose I could use tablespaces to put the most
> accessed data on SSD drives and the rest on regular drives.
>
> As I stated, I am moving to RAID 10, and was just wondering if the logs
> should still be moved off to different spindles, or will leaving them on
> the RAID 10 be fine and not affect performance.

With a battery-backed caching RAID controller, it's more important that you have the pg_xlog files on a different partition than on a different RAID set. I.e. you can have one big RAID set and set aside the first 100G or so for pg_xlog. This has to do with fsync behaviour. In Linux this is a known issue; I'm not sure how much so it would be in BSD, but you should test for fsync contention.
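The "test for fsync contention" suggestion can be approximated with a small probe. The sketch below (plain Python, a crude stand-in for PostgreSQL's contrib test_fsync tool, which does this far more thoroughly) times synchronous 8 kB writes so the data partition and the intended pg_xlog partition can be compared; the probe path is arbitrary:

```python
# Crude fsync latency probe: average time for one synchronous 8 kB write,
# which is the access pattern WAL flushing resembles. Run the probe on each
# candidate partition and compare the results.

import os
import tempfile
import time

def fsync_latency(path, iterations=100, block=b"x" * 8192):
    """Average seconds per write+fsync of one 8 kB block at `path`."""
    fd = os.open(path, os.O_CREAT | os.O_WRONLY | os.O_TRUNC, 0o600)
    try:
        start = time.time()
        for _ in range(iterations):
            os.write(fd, block)
            os.fsync(fd)
        return (time.time() - start) / iterations
    finally:
        os.close(fd)
        os.unlink(path)

# Point this at a file on each partition under test; a large gap between
# partitions suggests fsync contention is worth designing around.
probe = os.path.join(tempfile.gettempdir(), "fsync_probe.dat")
print(fsync_latency(probe, iterations=50))
```

Note the absolute numbers depend heavily on the filesystem and write cache, so only the comparison between partitions on the same box is meaningful.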
As for the Areca controllers, I haven't tested them with the latest
drivers or firmware, but we would routinely get 180 to 460 days of
uptime between lockups on the 1680s we installed 2.5 or so years ago.
Of the two brand new LSI 8888 controllers we installed this summer,
we've had one fail already; however, the database didn't get corrupted,
so not too bad. My preference still leans towards the Areca, but no RAID
controller is perfect and infallible. Performance-wise the Areca is
still faster than the LSI 8888, and the newer, faster LSI just didn't
work with our quad 12-core AMD mobo. Note that all of that hardware was
brand new, so things may have improved by now. I have to say Aberdeen
took great care of us in getting the systems up and running.

As for CPUs, almost any modern CPU will do fine.
-----Original Message-----
From: pgsql-performance-owner@postgresql.org
[mailto:pgsql-performance-owner@postgresql.org] On Behalf Of Andy
Sent: Wednesday, December 08, 2010 5:24 PM
To: Marti Raudsepp
Cc: pgsql-performance@postgresql.org; Benjamin Krajmalnik
Subject: Re: [PERFORM] Hardware recommendations

>>> If you are IO-bound, you might want to consider using SSD.
>>>
>>> A single SSD could easily give you more IOPS than 16 15k SAS in RAID 10.
>>
>> Are there any that don't risk your data on power loss, AND are cheaper
>> than SAS RAID 10?
>
> Vertex 2 Pro has a built-in supercapacitor to save data on power loss.
> It's spec'd at 50K IOPS and a 200GB one costs around $1,000.

Viking offers 6Gbps SAS physical connector SSD drives as well - with a
supercapacitor. I have not seen any official pricing yet, but I would
suspect it would be in the same ballpark. I am currently begging to get
some for eval. I will let everyone know if I swing that and can post
numbers.

-mark
On Thu, Dec 9, 2010 at 04:28, Scott Marlowe <scott.marlowe@gmail.com> wrote:
> On Wed, Dec 8, 2010 at 5:03 PM, Benjamin Krajmalnik <kraj@servoyant.com> wrote:
>> My biggest concern with SSD drives is their life expectancy,
>
> Generally that's not a big issue, especially as the SSDs get larger.
> Being able to survive a power loss without corruption is more of an
> issue, so if you go SSD get ones with a supercapacitor that can write
> out the data before power down.

I agree with Benjamin here. Even if you put multiple SSD drives into a
RAID array, all the drives get approximately the same write load and
thus will likely wear out and fail at the same time!

> As for the Areca controllers, I haven't tested them with the latest
> drivers or firmware, but we would routinely get 180 to 460 days of
> uptime between lockups

That sucks!

But does a BBU even help with SSDs? The flash erase block is larger than
the RAID cache unit size anyway, so as far as I can tell, it might not
save you in the case of a power loss.

Any thoughts on whether software RAID on SSD is a good idea?

Regards,
Marti
If you are worried about wearing out the SSDs long term, get a larger
SSD and create the partition smaller than the disk; this will reduce the
write amplification and extend the life of the drive.

TRIM support also helps lower write amplification by not requiring as
many pages for the writes, and improves performance as well!

As a test I bought 4 cheap 40GB drives in a RAID 0 software stripe, and
I have run it for almost a year now with a lot of random IO. I
partitioned them as 30GB drives, leaving an extra 25% spare area to
reduce the write amplification, and I can still get over 600MB/sec on
these for a whopping cost of $400 and a little of my time.

SSDs can be very useful, but you have to be aware of the shortcomings
and how to overcome them.

- John
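To make the spare-area arithmetic in that setup concrete (the sizes are
taken from the example above; the percentage is just the derived figure):

```shell
# 40GB raw drives partitioned down to 30GB, per the example above.
RAW_GB=40
PART_GB=30
SPARE_GB=$((RAW_GB - PART_GB))
SPARE_PCT=$((SPARE_GB * 100 / RAW_GB))
echo "spare area: ${SPARE_GB}GB per drive (${SPARE_PCT}% of raw capacity)"
```

The controller can use that never-partitioned 25% freely for wear
leveling and garbage collection, which is what drives the write
amplification down.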
On Wed, Dec 8, 2010 at 3:03 PM, Benjamin Krajmalnik <kraj@servoyant.com> wrote:
> I need to build a new high performance server to replace our current
> production database server.

We run FreeBSD 8.1 with PG 8.4 (soon to upgrade to PG 9). Hardware is:

Supermicro 2U 6026T-NTR+
2x Intel Xeon E5520 Nehalem 2.26GHz Quad-Core (8 cores total)
48GB RAM

We use ZFS and use SSDs for both the log device and L2ARC. All disks and
SSDs are behind a 3ware with BBU in single-disk mode. This has given us
the capacity of the spinning disks with (mostly) the performance of the
SSDs.

The main issue we've had is that if the server is rebooted, performance
is horrible for a few minutes until the various memory and ZFS caches
are warmed up. Luckily, that doesn't happen very often.
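That layout can be sketched with zpool commands (a hedged illustration
only: the pool name and device names are assumptions, and these require
root on a ZFS-capable system, so they are not meant to be run verbatim):

```sh
# Hypothetical devices: da0-da3 are the spinning disks (exported as
# single disks by the 3ware), da4/da5 are the SSDs.
zpool create tank mirror da0 da1 mirror da2 da3   # bulk capacity on spindles
zpool add tank log da4     # ZIL (intent log) on SSD: fast synchronous writes
zpool add tank cache da5   # L2ARC on SSD: large read cache behind the RAM ARC
```

The split mirrors give the capacity, the SSD log absorbs the synchronous
write latency, and the L2ARC keeps the hot working set on flash.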
> We use ZFS and use SSDs for both the log device and L2ARC. All disks
> and SSDs are behind a 3ware with BBU in single disk mode.

Out of curiosity, why do you put your log on SSD? Log is all sequential
IO, an area in which SSD is not any faster than HDD. So I'd think
putting the log on SSD wouldn't give you any performance boost.
On 10-12-2010 14:58 Andy wrote:
>> We use ZFS and use SSDs for both the log device and L2ARC. All
>> disks and SSDs are behind a 3ware with BBU in single disk mode.
>
> Out of curiosity why do you put your log on SSD? Log is all
> sequential IOs, an area in which SSD is not any faster than HDD. So
> I'd think putting log on SSD wouldn't give you any performance
> boost.

The "common knowledge" you based that comment on may actually not be
very up to date anymore. Current consumer-grade SSDs can achieve up to
200MB/sec when writing sequentially, and they can probably sustain that
a lot more consistently than a hard disk.

Have a look here: http://www.anandtech.com/show/2829/21
The sequential-write graphs consistently put several SSDs at twice the
performance of the VelociRaptor 300GB 10k rpm disk, and that's a test
from over a year ago; current SSDs have increased in performance,
whereas I'm not so sure there has been much improvement in platter-based
disks lately.

Apart from that, I'd guess that log devices benefit from reduced
latencies. It's actually the recommended approach from Sun to add a pair
of (small, SLC-based) SSD log devices to increase performance
(especially for NFS scenarios where a lot of synchronous writes occur),
and they offer it as an option for most of their "Unified Storage"
appliances.

Best regards,
Arjen
On 10-12-2010 18:57 Arjen van der Meijden wrote:
> Have a look here: http://www.anandtech.com/show/2829/21
> The sequential writes-graphs consistently put several SSD's at twice the
> performance of the VelociRaptor 300GB 10k rpm disk and that's a test
> from over a year old, current SSD's have increased in performance,
> whereas I'm not so sure there was much improvement in platter based
> disks lately?

Here's a more recent test:
http://www.anandtech.com/show/4020/ocz-vertex-plus-preview-introducing-the-indilinx-martini/3

That shows several consumer-grade SSDs and a 600GB VelociRaptor; it's
200+ vs 140MB/sec. I'm not sure how recent 15k rpm SAS disks would do,
nor do I know how recent server-grade SSDs would behave. But if we
assume similar gains for both, it's still in favor of the SSDs. :-)

Best regards,
Arjen
On Fri, Dec 10, 2010 at 11:05 AM, Arjen van der Meijden
<acmmailing@tweakers.net> wrote:
> On 10-12-2010 18:57 Arjen van der Meijden wrote:
>> Have a look here: http://www.anandtech.com/show/2829/21
>> The sequential writes-graphs consistently put several SSD's at twice the
>> performance of the VelociRaptor 300GB 10k rpm disk and that's a test
>> from over a year old, current SSD's have increased in performance,
>> whereas I'm not so sure there was much improvement in platter based
>> disks lately?
>
> Here's a more recent test:
> http://www.anandtech.com/show/4020/ocz-vertex-plus-preview-introducing-the-indilinx-martini/3
>
> That shows several consumer grade SSD's and a 600GB VelociRaptor, its 200+
> vs 140MB/sec. I'm not sure how recent 15k rpm sas disks would do, nor do I
> know how recent server grade SSD's would behave. But if we assume similar
> gains for both, its still in favor of SSD's :-)

The latest Seagate Cheetahs (15k.7) can do 122 to 204MB/sec depending on
what part of the drive you're writing to.
> The "common knowledge" you based that comment on, may > actually not be very up-to-date anymore. Current > consumer-grade SSD's can achieve up to 200MB/sec when > writing sequentially and they can probably do that a lot > more consistent than a hard disk. > > Have a look here: http://www.anandtech.com/show/2829/21 > The sequential writes-graphs consistently put several SSD's > at twice the performance of the VelociRaptor 300GB 10k rpm > disk and that's a test from over a year old, current SSD's > have increased in performance, whereas I'm not so sure there > was much improvement in platter based disks lately? The sequential IO performance of SSD may be twice faster than HDD, but the random IO performance of SSD is at least an orderof magnitude faster. I'd think it'd make more sense to take advantage of SSD's greatest strength, which is random IO. The same website you linked, anandtech, also benchmarked various configurations of utilizing SSD: http://www.anandtech.com/show/2739/11 According to their benchmarks putting logs on SSD results in no performance improvements, while putting data on SSD leadsto massive improvement. They used MySQL for the benchmarks. So perhaps Postgresql is different in this regard?
John W Strange wrote:
> http://www.fusionio.com/products/iodrive/ - BEST in slot currently IMHO.
> http://www.intel.com/design/flash/nand/extreme/index.htm?wapkw=(X25-E) - not a bad alternative.

The FusionIO drives are OK, so long as you don't mind the possibility
that your system will be down for >15 minutes after any unexpected
crash. They can do a pretty time-consuming verification process on the
next boot if you didn't shut the server down properly before mounting.

Intel's drives have been so bad about respecting OS cache-flush calls
that I can't recommend them for any PostgreSQL use, due to their
tendency for the database to get corrupted in the same sort of
post-crash situation. See http://wiki.postgresql.org/wiki/Reliable_Writes
for more background. If you get one of the models that can be set up for
reliability, those are so slow it's not even worth the trouble.

--
Greg Smith  2ndQuadrant US  greg@2ndQuadrant.com  Baltimore, MD
PostgreSQL Training, Services and Support  www.2ndQuadrant.us
"PostgreSQL 9.0 High Performance": http://www.2ndQuadrant.com/books
Benjamin Krajmalnik wrote:
> I am already having serious I/O bottlenecks with iostat -x showing
> extended periods where the disk subsystem on the data partition (the
> one with all the random i/o) is at over 85% busy. The system is running
> FreeBSD 7.2 amd64 and PostgreSQL 8.4.4 on amd64-portbld-freebsd7.2,
> compiled by GCC cc (GCC) 4.2.1 20070719 [FreeBSD], 64-bit.
>
> Currently I have about 4GB of shared memory allocated to PostgreSQL.
> Database is currently about 80GB, with about 60GB being in partitioned
> tables which get rotated nightly to purge old data (sort of like a
> circular buffer of historic data).

What sort of total read/write rates are you seeing when iostat is
showing the system 85% busy? That's a useful number to note as an
estimate of just how random the workload is.

Have you increased checkpoint parameters like checkpoint_segments? You
need to avoid having checkpoints too often if you're going to try and
use 4GB of memory for shared_buffers.

> I was looking at one of the machines which Aberdeen has (the X438), and
> was planning on something along the lines of 96GB RAM with 16 SAS
> drives (15K). If I create a RAID 10 (stripe of mirrors), leaving 2 hot
> spares, should I still place the logs in a separate RAID-1 mirror, or
> can they be left on the same RAID-10 container?

It's nice to put the logs onto a separate disk because it lets you
measure exactly how much I/O is going to them, relative to the database.
It's not really necessary though; with 14 disks you'll be at the range
where you can mix them together and things should still be fine.

> On the processor front, are there advantages to going to X series
> processors as opposed to the E series (especially since I am I/O
> bound)? Is anyone running this type of hardware, specially on FreeBSD?
> Any opinions, especially concerning the Areca controllers which they
> use?

It sounds like you should be saving your hardware dollars for more RAM
and disks, not getting faster processors.
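The checkpoint knobs mentioned above look roughly like this in
postgresql.conf (a hedged sketch: the values are illustrative 8.4-era
starting points, not tuned recommendations for this workload):

```
shared_buffers = 4GB                 # roughly what the original poster runs
checkpoint_segments = 64             # more segments = less frequent checkpoints
checkpoint_timeout = 15min           # upper bound on time between checkpoints
checkpoint_completion_target = 0.9   # spread checkpoint writes over the interval
```

The idea is that with a large shared_buffers, each checkpoint has a lot
of dirty data to flush, so you want checkpoints to be infrequent and
their writes smeared across the interval rather than bursty.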
The Areca controllers are fast and pretty reliable under Linux. I'm not aware of anyone using them for PostgreSQL in production on FreeBSD. Aberdeen may have enough customers doing that to give you a good opinion on how stable that is likely to be; they're pretty straight as vendors go. You'd want to make sure to stress test that hardware/software combo as early as possible regardless, it's generally a good idea and you wouldn't be running a really popular combination. -- Greg Smith 2ndQuadrant US greg@2ndQuadrant.com Baltimore, MD PostgreSQL Training, Services and Support www.2ndQuadrant.us "PostgreSQL 9.0 High Performance": http://www.2ndQuadrant.com/books
> -----Original Message-----
> From: Greg Smith [mailto:greg@2ndquadrant.com]
> Sent: Saturday, December 11, 2010 2:18 AM
> To: Benjamin Krajmalnik
> Cc: pgsql-performance@postgresql.org
> Subject: Re: [PERFORM] Hardware recommendations
>
> What sort of total read/write rates are you seeing when iostat is
> showing the system 85% busy? That's a useful number to note as an
> estimate of just how random the workload is.

I did a vacuum full of the highly bloated, constantly accessed tables,
which has improved the situation significantly. I am not seeing over
75% busy right now, but these are some values for the high-busy periods
at present:

71% 344 w/s  7644 kw/s
81% 392 w/s  8880 kw/s
79% 393 w/s  9526 kw/s
75% 443 w/s 10245 kw/s
80% 436 w/s 10157 kw/s
76% 392 w/s  8438 kw/s

> Have you increased checkpoint parameters like checkpoint_segments? You
> need to avoid having checkpoints too often if you're going to try and
> use 4GB of memory for shared_buffers.

Yes, I have it configured at 1024 checkpoint_segments, a 5 min timeout,
and a 0.9 completion_target.

> It's nice to put the logs onto a separate disk because it lets you
> measure exactly how much I/O is going to them, relative to the
> database. It's not really necessary though; with 14 disks you'll be at
> the range where you can mix them together and things should still be
> fine.

Thx. I will place them in their own RAID 1 (or mirror if I end up going
to ZFS).

> > On the processor front, are there advantages to going to X series
> > processors as opposed to the E series (especially since I am I/O
> > bound)? Is anyone running this type of hardware, specially on
> > FreeBSD? Any opinions, especially concerning the Areca controllers
> > which they use?
>
> It sounds like you should be saving your hardware dollars for more RAM
> and disks, not getting faster processors. The Areca controllers are
> fast and pretty reliable under Linux. I'm not aware of anyone using
> them for PostgreSQL in production on FreeBSD.
> Aberdeen may have enough customers doing that to give you a good
> opinion on how stable that is likely to be; they're pretty straight as
> vendors go. You'd want to make sure to stress test that
> hardware/software combo as early as possible regardless, it's generally
> a good idea and you wouldn't be running a really popular combination.

Thx. That was my overall plan - that's why I am opting for more drives,
cache on the controller, and memory.

> --
> Greg Smith 2ndQuadrant US greg@2ndQuadrant.com Baltimore, MD
> PostgreSQL Training, Services and Support www.2ndQuadrant.us
> "PostgreSQL 9.0 High Performance": http://www.2ndQuadrant.com/books
> -----Original Message-----
> From: pgsql-performance-owner@postgresql.org [mailto:pgsql-performance-
> owner@postgresql.org] On Behalf Of Benjamin Krajmalnik
> Sent: Monday, December 13, 2010 1:45 PM
> To: Greg Smith
> Cc: pgsql-performance@postgresql.org
> Subject: Re: [PERFORM] Hardware recommendations
>
> > Have you increased checkpoint parameters like checkpoint_segments?
> > You need to avoid having checkpoints too often if you're going to
> > try and use 4GB of memory for shared_buffers.
>
> Yes, I have it configured at 1024 checkpoint_segments, a 5 min timeout,
> and a 0.9 completion_target.

I would consider bumping that checkpoint timeout duration a bit longer
and seeing if that helps any, if you are still looking for knobs to
fiddle with. YMMV.

-Mark