Hi All
Thanks for all your valuable inputs.
Here is some more data:
Although we have about 150 GB of free space spread across 500 tables, the
database is still growing at roughly 1 GB every other day.
We also have a manual vacuum job scheduled on a weekly basis, so it seems
the freed space is not being reused all the time?
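For what it's worth, one way to check whether dead rows are actually piling
up between the weekly runs is something like the following (just a quick
sanity check, not part of the plan):

    -- Tables with the most dead tuples and their last vacuum times.
    SELECT schemaname, relname, n_live_tup, n_dead_tup,
           last_vacuum, last_autovacuum
    FROM pg_stat_user_tables
    ORDER BY n_dead_tup DESC
    LIMIT 10;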
So, to conclude the requirement here: the only way to get parallelism is to
run multiple scripts, and there is no need to run REINDEX separately
(VACUUM FULL already rebuilds the indexes).
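In case it is useful, here is a minimal sketch of how the per-script table
lists could be generated; the bucket count of 4 and the restriction to user
tables are assumptions, not part of the actual plan:

    -- Distribute user tables round-robin into 4 buckets by size (the bucket
    -- count is arbitrary), so each worker script rewrites a similar total
    -- volume of data.
    SELECT (row_number() OVER (ORDER BY pg_total_relation_size(relid) DESC)) % 4
               AS bucket,
           format('VACUUM FULL ANALYZE %I.%I;', schemaname, relname) AS cmd
    FROM pg_stat_user_tables
    ORDER BY bucket, pg_total_relation_size(relid) DESC;

Each bucket's commands can then be saved into its own .sql file and run in
a separate psql session. Depending on the PostgreSQL version, vacuumdb
--full --jobs=N may achieve the same thing, although the documentation
warns that the combination can deadlock on system catalogs.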
Question: do we need to consider table dependencies while preparing the
scripts, in order to avoid table locks during VACUUM FULL?
At present, maintenance_work_mem is set to 20 GB.
Question: do we need to tweak any other parameters?
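If it helps, this is roughly what the top of each maintenance script could
look like; the values and the table name below are placeholders, not
recommendations:

    -- Hypothetical per-session overrides for the maintenance window.
    SET maintenance_work_mem = '2GB';  -- used by the index rebuilds VACUUM FULL performs
    SET vacuum_cost_delay = 0;         -- no cost-based throttling needed during downtime
    VACUUM FULL ANALYZE public.some_big_table;  -- placeholder table name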
Note:
We are planning this activity during application downtime only.
Let me know if I missed anything.
Regards,
Raj
> And future updates can reuse it, too (an update is very similar to an
> insert+delete).
Hm, then it's strange that our DB takes six times as much space as a
freshly restored one (only the public schema is considered).
> Not if autovacuum has a chance to run between updates.
Ours runs regularly, although we had to tune it down so it would not
interfere with normal database activity, so each run on that table now
takes several hours. We did that by setting
autovacuum_vacuum_scale_factor = 0.05 instead of the default 0.2.
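For reference, the same threshold can also be scoped to just the one large
table instead of the whole cluster; "big_table" here is only a placeholder
name:

    -- Per-table autovacuum storage parameter (placeholder table name).
    ALTER TABLE public.big_table
        SET (autovacuum_vacuum_scale_factor = 0.05);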