Queries against multi-million record tables. - Mailing list pgsql-admin

From: Michael Miyabara-McCaskey
Subject: Queries against multi-million record tables.
Msg-id: 000b01c08b3c$9ca08cc0$c700a8c0@ncc1701e
List: pgsql-admin
Hello all,

I am in the midst of taking a development DB into production, but the
performance has not been very good so far.

The DB is a decision-support system that currently runs queries against tables
of up to 20 million records (3GB table sizes), and at this point about a
25GB DB in total. (Further down the road, up to 60 million records and a DB of
up to 150GB are planned.)

As I understand it, Oracle has a product called "parallel query", which
splits the queried table into 10 pieces, works on each piece across as many
CPUs as possible, and then puts the results back together.
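
(To illustrate what I mean, here is roughly how I imagine that idea could be
mimicked by hand, by splitting one big query into key ranges and running each
range on its own connection; the table "sales" and its integer key "id" are
just made-up names:)

    -- Each of these could run on a separate connection, concurrently:
    SELECT sum(amount) FROM sales WHERE id BETWEEN        1 AND  5000000;
    SELECT sum(amount) FROM sales WHERE id BETWEEN  5000001 AND 10000000;
    SELECT sum(amount) FROM sales WHERE id BETWEEN 10000001 AND 15000000;
    SELECT sum(amount) FROM sales WHERE id BETWEEN 15000001 AND 20000000;
    -- ...and the client would add the four partial sums back together.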

So my question is this: based upon the messages I have read here, it appears
that PostgreSQL does not spread a single query across multiple CPUs, but only
hands each new query off to a processor according to operating system
scheduling rules.

Therefore, what are some good ways to handle such large amounts of
information using PostgreSQL?
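
For instance, would manually splitting a big table into smaller child tables
(so that queries against the parent fan out over the children) be a reasonable
approach? A rough sketch of what I mean, again with made-up names:

    -- Hypothetical manual range partitioning via table inheritance:
    CREATE TABLE sales (
        id     integer,
        amount numeric
    );
    CREATE TABLE sales_low  (CHECK (id <  10000000)) INHERITS (sales);
    CREATE TABLE sales_high (CHECK (id >= 10000000)) INHERITS (sales);
    -- A query against the parent scans both children:
    SELECT sum(amount) FROM sales;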

Michael Miyabara-McCaskey
Email: mykarz@miyabara.com
Web: http://www.miyabara.com/mykarz/
Mobile: +1 408 504 9014

