High volume inserts - more disks or more CPUs? - Mailing list pgsql-general

From Guy Rouillier
Subject High volume inserts - more disks or more CPUs?
Msg-id CC1CF380F4D70844B01D45982E671B2348E496@mtxexch01.add0.masergy.com
Responses Re: High volume inserts - more disks or more CPUs?  (Richard Huxton <dev@archonet.com>)
List pgsql-general
Seeking advice on system configuration (and I have read the techdocs).
We are converting a data collection system from Oracle to PostgreSQL
8.0.  We are currently getting about 64 million rows per month; data is
put into a new table each month.  The number of simultaneous connections
is very small: one that does all these inserts, and < 5 others that
read.
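
To make the scheme concrete, the monthly tables look roughly like the sketch below; table and column names here are invented for illustration, and the real schema is wider:

    -- illustrative only: one new table per month, filled by the single
    -- insert connection mentioned above
    CREATE TABLE samples_200502 (
        collected_at  timestamptz NOT NULL,
        device_id     integer     NOT NULL,
        reading       numeric
    );

    INSERT INTO samples_200502 (collected_at, device_id, reading)
        VALUES (now(), 42, 17.5);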

We are trying to identify a server for this.  The options are a 4-way
Opteron with 4 SCSI disks, or a 2-way Opteron with 6 SCSI disks.  The
4-CPU box currently has 16 GB of memory and the 2-CPU box 4 GB, but we
can move that memory around as necessary.

(1) Would we be better off with more CPUs and fewer disks or fewer CPUs
and more disks?

(2) The techdocs suggest starting with 10% of available memory for
shared buffers, which would be 1.6 GB on the 4-way.  But I've seen posts
here saying that anything more than 10,000 shared buffers (80 MB)
provides little or no improvement.  Where should we start?
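
For reference, the kind of postgresql.conf fragment we have in mind as a starting point is below; the values are illustrative starting points, not tuned recommendations:

    # shared_buffers is a count of 8 kB pages in 8.0, so 10000 is about 80 MB
    shared_buffers = 10000
    # planner hint for how much the OS is likely to cache, also in 8 kB pages
    effective_cache_size = 200000
    # more WAL segments between checkpoints helps a sustained insert load
    checkpoint_segments = 16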

(3) If we go with more disks, should we attempt to split tables and
indexes onto different drives (i.e., tablespaces), or just put all the
disks in hardware RAID5 and use a single tablespace?
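
By splitting onto different drives I mean something along these lines; the directory paths and names are invented for illustration:

    -- illustrative only: put heap data and indexes on separate spindles
    CREATE TABLESPACE ts_data  LOCATION '/disk1/pgdata';
    CREATE TABLESPACE ts_index LOCATION '/disk2/pgdata';

    CREATE TABLE samples_200503 (
        collected_at  timestamptz NOT NULL,
        device_id     integer     NOT NULL,
        reading       numeric
    ) TABLESPACE ts_data;

    CREATE INDEX samples_200503_collected_at_idx
        ON samples_200503 (collected_at) TABLESPACE ts_index;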

I appreciate all suggestions.

--
Guy Rouillier
