Ed Loehr wrote:
> ... it is a well-known "postgresqlism"
> that you should consider running vacuum analyze at least nightly, possibly
> more frequently. [I run it hourly.]
I think there must be something wrong with the optimizer if it is a
"postgresqlism" that you must run vacuum analyze frequently. Just as an
example, for Clipper (a dBase compiler), it is a "Clipperism" that you must
re-index whenever you cannot locate some records, just because the indexing
module screws up.
For large 24x7 installations, it's impossible to vacuum nightly, because
while postgresql is vacuuming, the table is locked; to the end user the
database has effectively hung.
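For reference, the nightly run people describe usually amounts to nothing more than a cron entry like the following (the database name and schedule here are assumptions, not anything from the FAQ):

```
# crontab fragment: run VACUUM ANALYZE on mydb every night at 02:00
# (database name and time are placeholders)
0 2 * * *  psql -d mydb -c 'VACUUM ANALYZE;'
```

Which, of course, does nothing to help if the resulting lock makes the table unusable for the duration.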
There has been effort to speed up the vacuuming process, but that isn't the
cure. I believe the fault lies with the optimizer.
For example, in Bruce Momjian's FAQ, item 4.9:

    PostgreSQL does not automatically maintain statistics. One has to make
    an explicit vacuum call to update the statistics. After statistics are
    updated, the optimizer knows how many rows are in the table, and can
    better decide if it should use indices. Note that the optimizer does
    not use indices in cases when the table is small because a sequential
    scan would be faster.
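To illustrate the FAQ's point, here is a minimal sketch (the table, index, and query are hypothetical; the actual plans depend on your data and statistics):

```sql
-- hypothetical table with an index on the lookup column
CREATE TABLE orders (id int, customer int);
CREATE INDEX orders_customer_idx ON orders (customer);

-- after bulk-loading many rows, the statistics are still those of the
-- original (empty) table, so the planner may pick a sequential scan:
EXPLAIN SELECT * FROM orders WHERE customer = 42;

-- refresh the statistics; only now can the planner see the table has
-- grown and consider the index:
VACUUM ANALYZE orders;
EXPLAIN SELECT * FROM orders WHERE customer = 42;
```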
Why save microseconds by using a sequential scan while the table is small,
only to 'forget' that the table has since grown because you didn't run
vacuum analyze? Why can't the optimizer simply use indexes whenever they
exist, instead of 'optimizing' for the special case of a small table just
to save microseconds?
Thomas