"Ed L." <pgsql@bluepolka.net> writes:
> If it were indeed the case that we'd leaked a lot of disk space, then after
> bumping max_fsm_pages up to a much higher number (4M), will these pages
> gradually be "remembered" as they are touched by autovacuum and/or queries,
> etc.? Or is a dump/reload or 'vacuum full' the only way? Trying to avoid
> downtime...
The next vacuum will add the "leaked" space back into the FSM, once
there's space there to remember it. You don't need to do anything
drastic, unless you observe that the amount of wasted space is so large
that a vacuum full is needed.
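A minimal sketch of that workflow (pre-8.4, where the FSM is a fixed-size shared-memory area sized by max_fsm_pages; the setting requires a restart, and the table name here is purely illustrative):

```sql
-- In postgresql.conf, then restart the server:
--   max_fsm_pages = 4000000

-- Let the next vacuum repopulate the now-larger FSM:
VACUUM VERBOSE;

-- A database-wide VACUUM VERBOSE ends with a summary along the lines of
-- "free space map contains N pages in M relations", which you can compare
-- against max_fsm_pages to judge whether the new setting is big enough.
```

If the reported page count is still near the limit, raise max_fsm_pages further; the FSM only tracks free space it has room to remember.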
BTW, these days, a CLUSTER is a good alternative to a VACUUM FULL; it's
likely to be faster if the VACUUM would involve moving most of the live
data anyway.
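A hedged sketch of the CLUSTER route (table and index names are illustrative; CLUSTER takes an exclusive lock for the duration and needs an index to order by, so it is not a zero-downtime operation either):

```sql
-- Rewrites the table in index order into a fresh file, reclaiming all
-- dead space, and rebuilds the table's indexes as a side effect.
-- (8.3+ syntax; older releases spell it "CLUSTER indexname ON tablename".)
CLUSTER mytable USING mytable_pkey;

-- Planner statistics are not carried over, so re-analyze afterwards:
ANALYZE mytable;
```

The win over VACUUM FULL is that CLUSTER writes each live tuple once into a new file instead of shuffling tuples around inside the old one, which is why it tends to be faster when most of the table would have to move anyway.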
regards, tom lane