Thread: Best approach for large table maintenance

From: "Vanole, Mike"
Hi,

I have an application where, each day, I drop, recreate, and reload a
1-million-row table, then recreate its indexes. I do this to avoid having
to run vacuum on the table, which would be necessary if I instead applied
DELETEs or UPDATEs for the deltas.

It seems that running vacuum still has value even with the above approach,
because vacuum still reports that index row versions were removed. I do not
explicitly drop the indexes, because they are dropped along with the table.

If I were to use TRUNCATE instead, I still have several indexes that, if
left in place, would slow down the data load.

My question is, what is the best way to manage a large table that gets
reloaded each day?

Drop
Create Table
Load (copy or insert/select)
Create Indexes
Vacuum anyway?

Or...

DROP indexes
Truncate
Load (copy or insert/select)
Create Indexes

And is vacuum still going to be needed?

Many Thanks,
Mike



Re: Best approach for large table maintenance

From: Decibel!
On Apr 22, 2008, at 12:04 PM, Vanole, Mike wrote:
> It seems that running vacuum still has value in the above approach
> because I still see index row versions were removed.


That means either part of the import failed (transaction aborted), or
you're updating or deleting rows between the load and the vacuum.
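
One way to check whether dead row versions are actually accumulating
between loads is the statistics view (a sketch; `daily_table` is a
stand-in name, and the `n_dead_tup` column in `pg_stat_user_tables`
requires PostgreSQL 8.3 or later):

```sql
-- Dead-tuple count and last (auto)vacuum time for the reloaded table.
SELECT relname, n_live_tup, n_dead_tup, last_vacuum, last_autovacuum
FROM pg_stat_user_tables
WHERE relname = 'daily_table';
```

A nonzero `n_dead_tup` right after a clean load would point at an aborted
import or at updates/deletes happening between the load and the vacuum.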
--
Decibel!, aka Jim C. Nasby, Database Architect  decibel@decibel.org
Give your computer some brain candy! www.distributed.net Team #1828


