Best approach for large table maintenance - Mailing list pgsql-general

From Vanole, Mike
Subject Best approach for large table maintenance
Msg-id C9C075DB3961464180CE3DEF766B4A2C07EB4376@ad01msxmb007.US.Cingular.Net
Responses Re: Best approach for large table maintenance  (Decibel! <decibel@decibel.org>)
List pgsql-general
Hi,

I have an application where, each day, I drop and recreate a 1 million row
table, reload it, and recreate its indexes. I do this to avoid having to
run vacuum on the table, as I would if I applied the daily deltas with
DELETEs or UPDATEs instead.

It seems that running vacuum still has value even with the above approach,
because it still reports that index row versions were removed. I do not
explicitly drop the indexes, since they are dropped along with the table.

In considering the use of TRUNCATE, I still have several indexes that, if
left in place, would slow down the data load.

My question is, what is the best way to manage a large table that gets
reloaded each day?

Drop
Create Table
Load (copy or insert/select)
Create Indexes
Vacuum anyway?
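
In SQL terms the first option is roughly the following (the table, column,
index, and file names here are just placeholders for my real objects):

    DROP TABLE IF EXISTS big_table;
    CREATE TABLE big_table (id integer, payload text);
    COPY big_table FROM '/tmp/daily_extract.csv' CSV;
    CREATE INDEX big_table_id_idx ON big_table (id);
    VACUUM ANALYZE big_table;  -- worth running on a freshly loaded table?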

Or...

DROP indexes
Truncate
Load (copy or insert/select)
Create Indexes
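
And the second option, again with the same placeholder names, would be
roughly:

    DROP INDEX big_table_id_idx;
    TRUNCATE big_table;
    COPY big_table FROM '/tmp/daily_extract.csv' CSV;
    CREATE INDEX big_table_id_idx ON big_table (id);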

And is vacuum still going to be needed?

Many Thanks,
Mike


