Stephan Szabo wrote:
>
>Unfortunately, yes. VACUUM FULL is effectively the 7.1 and earlier
>vacuum. Normal vacuum wouldn't have helped you in this case since it
>wouldn't compress the table (IIRC it only moves tuples within a block
>rather than between them, so all the dead blocks at the beginning are
>still there).
>
Aha! This (finally) gives me some hope! If I read you correctly, are
you saying that once my 'vacuum full' is finished, I'm finally back in
business?
Thank you very much!
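(Just so I'm sure I understand the distinction being discussed -- on a
hypothetical table, which I'll call big_table here, the two commands
would be:

    VACUUM big_table;       -- reclaims dead space inside pages; the file stays the same size
    VACUUM FULL big_table;  -- compacts the table and gives the space back to the OS

Is that roughly the difference?)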
If you are not tired of me yet, could you also give me an idea of the
extent of this problem while you are at it? I mean, how bad is it really?
From now on I will never update the whole table (I realize now that
it's much cheaper to just recreate it)...
But in general, if I put this database into production, I am going to
routinely update about 10-15% of all the rows in that table every
month... How bad is that?
Am I going to need a 'vacuum full' after every update? Or how often
would it be practical to do that?
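(For concreteness, what I have in mind for routine maintenance is a
plain vacuum on just that table, run on some regular schedule --
nightly, say -- along the lines of:

    VACUUM ANALYZE big_table;  -- plain vacuum plus a statistics refresh; no exclusive lock like VACUUM FULL

with big_table again being just a placeholder name.)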
Also, do you have any idea why vacuum takes so long for me (24 hours
sounds a little excessive, doesn't it)? I've got a decent machine (4
CPUs, 750 MHz, 8 GB of RAM), so hardware shouldn't be the problem...
Could there be something wrong with my configuration?
Here is my postgresql.conf (with all the comments removed - just the
stuff I changed from the default)...
Does anything look wrong to you here?
tcpip_socket = true
max_connections = 100
shared_buffers = 64000
max_locks_per_transaction = 640
wal_buffers = 80
sort_mem = 10240
wal_files = 64
checkpoint_segments = 20
checkpoint_timeout = 600
stats_command_string = true
stats_row_level = true
stats_block_level = true
deadlock_timeout = 300000
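(In case it helps, I can also double-check what the running server has
actually picked up from this file with SHOW, e.g.:

    SHOW shared_buffers;
    SHOW sort_mem;

just to rule out the config file not being read at all.)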
Thanks again for giving me my hope back! :-)
I really appreciate your response!
Dima