vacuum - reclaims the space held by dead (deleted or updated) rows in
the table's disk files (it may do more)
vacuum analyze - does the above and also updates the table's statistics
used for query planning.  -DEJ
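
For reference, the commands themselves are just the following
("mytable" is only a placeholder name, not a table from the thread):

    -- reclaim space held by dead (deleted/updated) rows in one table
    VACUUM mytable;

    -- do the same and also refresh the planner's statistics
    VACUUM ANALYZE mytable;

    -- VERBOSE reports what was done, which can help show where the
    -- time is going
    VACUUM VERBOSE ANALYZE mytable;
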
> Hello everyone,
>
> in a substantial multi-location development project we are using PG
> to create a large information system. After loading data on the
> order of one million records, a vacuum took 50h of CPU time on a
> 350MHz PII (RedHat 5.2, pg 6.4.2). There were no deletes or anything
> like that.
>
> I made a test database when I was deciding whether PostgreSQL was
> going to be up to the task. It just so happens that it had a million
> records in it as well. The first vacuum took a very, very long time;
> subsequent vacuums, however, took much less time.
>
> What does vacuum do that takes so long, and is there a way to speed
> this up?
>
> Apparently it does some "database stuff." Yikes, someone else will
> have to answer that question. I just know how to get data in and out
> of PostgreSQL.
>
> Jason Earl