pgsql and large tables - Mailing list pgsql-general

From Gurupartap Davis
Subject pgsql and large tables
Date
Msg-id 096901c16c84$bd1dbc50$0f00a8c0@marlows
List pgsql-general
I've got a large database, currently about 40 million rows in the biggest table.  I'm trying to set it up with PostgreSQL 7.1.3, but I've got some questions...
 
I initially created the table with a primary key (5 fields: char(4), varchar(32), date, int, int) and a couple of additional indexes: one that just reorders the primary key columns, and one on the date.  Then I imported 40 million rows and tried some queries, but discovered that none of my queries were using the indexes, causing them to take forever.
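To make that concrete, the schema looks roughly like this (table and column names are placeholders I'm using for this message, not the real ones):

    CREATE TABLE readings (
        site_code   char(4),
        sensor_name varchar(32),
        read_date   date,
        val1        int,
        val2        int,
        PRIMARY KEY (site_code, sensor_name, read_date, val1, val2)
    );

    -- same columns as the primary key, different order
    CREATE INDEX readings_alt_idx  ON readings (read_date, site_code, sensor_name, val1, val2);
    -- for sorting on the date
    CREATE INDEX readings_date_idx ON readings (read_date);

I could see the indexes weren't being used by running EXPLAIN on the queries, e.g.:

    EXPLAIN SELECT * FROM readings WHERE read_date = '2001-11-01';
    -- shows a Seq Scan on readings instead of an Index Scan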
 
So I read somewhere in the archives that you need to VACUUM a table regularly for indexes to work properly.  I tried that, but aborted after about 5 hours.  I can't use pgsql if I have to take the db down for more than 10-15 minutes a day.
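For what it's worth, the command I was running was plain VACUUM; as I understand it, it's VACUUM ANALYZE that also updates the statistics the planner needs before it will choose an index scan:

    -- reclaims space AND collects planner statistics
    VACUUM ANALYZE readings;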
 
Then I read somewhere else that you should drop your indexes before VACUUMing and re-create them afterwards.  I tried that, and VACUUM finished in about 10 minutes.  Kewl... but now I've been trying to recreate my primary key for the last 18 hours... not so good.
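In case I did something dumb, here's the sequence I followed, using the placeholder names from above (I believe the primary key is backed by a unique index named readings_pkey):

    DROP INDEX readings_alt_idx;
    DROP INDEX readings_date_idx;
    DROP INDEX readings_pkey;        -- the index behind the primary key
    VACUUM readings;                 -- ~10 minutes with no indexes around
    -- rebuilding the 5-column unique index is the part that's been
    -- running for 18 hours:
    CREATE UNIQUE INDEX readings_pkey
        ON readings (site_code, sensor_name, read_date, val1, val2);
    CREATE INDEX readings_alt_idx  ON readings (read_date, site_code, sensor_name, val1, val2);
    CREATE INDEX readings_date_idx ON readings (read_date);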
 
Should I have dropped all indexes *except* the primary key?  Or would VACUUM still take forever that way?  Should I make an artificial primary key with a serial type to simplify things (something like the sketch below)?  Anyone have any hints at all for me?
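If the serial idea is sane, I'm picturing something like this (again with the placeholder names):

    CREATE TABLE readings (
        id          serial PRIMARY KEY,   -- artificial single-column key
        site_code   char(4),
        sensor_name varchar(32),
        read_date   date,
        val1        int,
        val2        int
    );
    -- plain (non-unique, cheaper to build) index on the old key columns
    CREATE INDEX readings_key_idx ON readings (site_code, sensor_name, read_date);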
 
thanks,
Partap Davis
 
 
