Or, more importantly, the bus? I take it you're using a SCSI subsystem,
most likely with a RAID container and plenty of cache, RAID level 5?
After all, regardless of how powerful your CPU is or how fast your disks
are, if you're trying to squeeze it all down a straw, that won't help.
We've experienced DB slowdowns before when joining tables whose keys
were long strings (GUID/OID-like).
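One workaround for that is to add a compact integer surrogate key and join on it instead of the GUID. A minimal sketch (Python's sqlite3 here purely as a stand-in; the table and column names are invented for illustration):

```python
import sqlite3
import uuid

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Instead of joining on a 36-character GUID string on both sides, give
# the parent table a small integer surrogate key and have child rows
# reference that; keep the GUID only for external lookups.
cur.execute("CREATE TABLE parent (id INTEGER PRIMARY KEY, guid TEXT UNIQUE, name TEXT)")
cur.execute("CREATE TABLE child (parent_id INTEGER REFERENCES parent(id), detail TEXT)")

guid = str(uuid.uuid4())
cur.execute("INSERT INTO parent (guid, name) VALUES (?, ?)", (guid, "acct-1"))
parent_id = cur.lastrowid
cur.execute("INSERT INTO child (parent_id, detail) VALUES (?, ?)", (parent_id, "txn"))

# The join now compares small integers rather than long strings.
cur.execute(
    "SELECT p.name, c.detail FROM parent p JOIN child c ON c.parent_id = p.id"
)
rows = cur.fetchall()
```

The GUID column stays unique and indexed for the occasional external lookup; only the frequent joins move to the integer key.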
Do you really need all the data in the tables available? Have you
considered an archiving process to a second database for data
warehousing? For example, most banks keep only a month's worth of live
data in the actual system and archive nightly, on a sliding window, out
to a secondary database; that is then archived off to tape for long-term
storage. This keeps the system at a steady size for performance. Often
historical information can also be kept in a summary (optimised) form
rather than generated from the raw data.
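A rough sketch of that nightly sliding window, again with sqlite3 standing in for both databases (the 30-day cutoff, table names, and summary columns are all made up for the example):

```python
import sqlite3
import datetime

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE txns (day TEXT, amount INTEGER)")
# Stands in for the secondary (warehouse) database.
cur.execute("CREATE TABLE archive_txns (day TEXT, amount INTEGER)")
cur.execute("CREATE TABLE daily_summary (day TEXT PRIMARY KEY, total INTEGER)")

today = datetime.date(2002, 12, 11)
sample = [((today - datetime.timedelta(days=d)).isoformat(), 100 + d)
          for d in range(60)]
cur.executemany("INSERT INTO txns VALUES (?, ?)", sample)

cutoff = (today - datetime.timedelta(days=30)).isoformat()

# Summarise the rows about to leave the live table...
cur.execute(
    "INSERT INTO daily_summary "
    "SELECT day, SUM(amount) FROM txns WHERE day < ? GROUP BY day",
    (cutoff,),
)
# ...copy them out to the archive, then purge them from the live table.
cur.execute("INSERT INTO archive_txns SELECT * FROM txns WHERE day < ?", (cutoff,))
cur.execute("DELETE FROM txns WHERE day < ?", (cutoff,))
conn.commit()

live_count = cur.execute("SELECT COUNT(*) FROM txns").fetchone()[0]
archived = cur.execute("SELECT COUNT(*) FROM archive_txns").fetchone()[0]
```

Reports against old periods then read the small daily_summary table instead of scanning the raw archive.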
Also, when doing HUGE INSERT sets, it can often be more efficient to
drop the indexes (or, if the DB engine supports it, switch to deferred
indexing) and then rebuild them after the inserts.
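In rough form, sketched once more against sqlite3 (in Postgres this would be DROP INDEX / CREATE INDEX around a COPY or batched INSERTs; the table and index names are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE facts (k INTEGER, v TEXT)")
cur.execute("CREATE INDEX facts_k_idx ON facts (k)")

# For a huge load, drop the index first so each row skips the
# per-insert index maintenance...
cur.execute("DROP INDEX facts_k_idx")
cur.executemany("INSERT INTO facts VALUES (?, ?)",
                [(i, "row-%d" % i) for i in range(10000)])
# ...then rebuild it in a single pass once the load is done.
cur.execute("CREATE INDEX facts_k_idx ON facts (k)")
conn.commit()

count = cur.execute("SELECT COUNT(*) FROM facts").fetchone()[0]
index_names = [r[0] for r in cur.execute(
    "SELECT name FROM sqlite_master WHERE type = 'index'")]
```

Whether this wins depends on the ratio of new rows to existing rows; for a small trickle into a big table the rebuild costs more than it saves.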
Some ideas that you may have already covered, but, hey! They're free
;-)
Hadley
On Wed, 2002-12-11 at 16:21, Joseph Shraibman wrote:
> Is it blocking on cpu or disk? Perhaps simply buying faster disks will work.
--
Hadley Willan > Systems Development > Deeper Design Limited.
hadley@deeper.co.nz > www.deeperdesign.com > +64 (21) 28 41 463
Level 1, 4 Tamamutu St, PO Box 90, TAUPO 2730, New Zealand.