On Jan 28, 2008 7:54 AM, Alex Hochberger <alex@dsgi.us> wrote:
> We are trying to optimize our Database server without spending a
> fortune on hardware. Here is our current setup.
>
> Main drive array: 8x 750 GB SATA II drives in a RAID 10
> configuration; this stores the OS, applications, and PostgreSQL
> data. 3 TB array, with a 2 TB partition for PostgreSQL.
> Secondary drive array: 2x 36 GB 15,000 RPM SAS drives in a RAID 1
> configuration, holding the pg_xlog directory. Checkpoints are set
> to use about 18 GB max, so that massive numbers of small writes
> don't slow the system down. A drive failure loses no data.
> Checkpoint spikes are another matter; we hope to keep them under
> control with bgwriter tweaking.
>
SNIP
> However, joins of two 50 GB tables really just can't be done in
> RAM without spilling to disk. My question is, can hardware speed
> that up? Would putting in a 400 GB SAS drive (15,000 RPM) just to
> handle PostgreSQL temp files help? Considering it would store "in
> process" queries and not "completed transactions", I see no reason
> to mirror the drive. If it fails, we'd simply unmount it, replace
> it, and remount it; PostgreSQL could use the SATA space in the
> meantime.
>
> Would that speed things up, and if so, where in the drive mappings
> should that partition go?
Do you have a maintenance window to experiment in? During one, try
putting the temp files on the pg_xlog array and see whether it speeds
up the selects; then you'll know. I'm thinking it will help a little,
but there's only so much you can do with 50 GB result sets.
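If you do try a dedicated drive, here's roughly how you'd point the
temp files at it. On 8.2 and earlier, sort/hash spill files live
under $PGDATA/base/pgsql_tmp, so a symlink does the job. This is just
a sketch -- the paths are made up, adjust to your layout, run it as
the postgres user, and do it with the server stopped:

    pg_ctl -D /var/lib/pgsql/data stop
    mkdir -p /mnt/sas_temp/pgsql_tmp           # new home on the fast drive
    rm -rf /var/lib/pgsql/data/base/pgsql_tmp  # only spill files live here;
                                               # safe to drop with server down
    ln -s /mnt/sas_temp/pgsql_tmp /var/lib/pgsql/data/base/pgsql_tmp
    pg_ctl -D /var/lib/pgsql/data start

On 8.3 you can skip the symlink entirely: create a tablespace on the
new drive and point the new temp_tablespaces setting at it.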
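As an aside, on the "18 GB max" for checkpoints: on 8.2 the
steady-state pg_xlog footprint is roughly
(2 * checkpoint_segments + 1) * 16 MB, so if you sized that by hand
the conf lines would look something like this (illustrative numbers,
not a recommendation):

    # postgresql.conf -- sizing sketch, assuming 8.2's
    # (2 * checkpoint_segments + 1) * 16 MB retention formula
    checkpoint_segments = 575   # (2*575 + 1) * 16 MB ~= 18 GB
    checkpoint_timeout = 30min  # spread forced checkpoints out

The bgwriter_* knobs are where you'd chase the checkpoint spikes from
there.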