Re: Big delete on big table... now what? - Mailing list pgsql-performance

From Kevin Grittner
Subject Re: Big delete on big table... now what?
Date
Msg-id 48AEE9C3.EE98.0025.0@wicourts.gov
In response to Big delete on big table... now what?  ("Fernando Hevia" <fhevia@ip-tel.com.ar>)
List pgsql-performance
>>> "Fernando Hevia" <fhevia@ip-tel.com.ar> wrote:

> I have a table with over 30 million rows. Performance was dropping
> steadily, so I moved old data not needed online to an historic table.
> Now the table has about 14 million rows. I don't need the disk space
> returned to the OS, but I do need to improve performance. Will a plain
> vacuum do, or is a vacuum full necessary?
> Would a vacuum full improve performance at all?

If this database can be out of production for long enough to run it
(possibly a few hours, depending on hardware, configuration, table
width, and indexes), your best option might be to CLUSTER and ANALYZE
the table.  It gets more complicated if you can't tolerate down-time.
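For what it's worth, a minimal sketch of that maintenance pass, assuming a
hypothetical table named big_table with an index big_table_pkey (substitute
your real table and index names):

```sql
-- CLUSTER rewrites the table in index order, discarding the dead rows
-- left behind by the big delete and compacting the heap.  It takes an
-- exclusive lock on the table for the duration, hence the down-time.
CLUSTER big_table_pkey ON big_table;

-- Refresh planner statistics, since the rewrite invalidates them.
ANALYZE big_table;
```

A plain VACUUM would only mark the dead space reusable; CLUSTER (or VACUUM
FULL) actually shrinks the table so scans read fewer pages.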

-Kevin
