Hi,
I have a table ('stato') with an indexed bigint column ('Id') and 5 bytea fields ('d0','d1',...,'d4').
I populated the table with 10000 rows, each d0..d4 field initialized with 20 bytes.
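In essence, the schema and the initial load were along these lines (a simplified sketch; names as above, the exact DDL and byte values may differ):

CREATE TABLE stato (
    id bigint PRIMARY KEY,   -- the indexed bigint
    d0 bytea,
    d1 bytea,
    d2 bytea,
    d3 bytea,
    d4 bytea
);

INSERT INTO stato (id, d0, d1, d2, d3, d4)
SELECT g,
       decode(repeat('ab', 20), 'hex'),  -- 20 bytes per field
       decode(repeat('ab', 20), 'hex'),
       decode(repeat('ab', 20), 'hex'),
       decode(repeat('ab', 20), 'hex'),
       decode(repeat('ab', 20), 'hex')
FROM generate_series(1, 10000) AS g;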
Reported table size is 1.5MB. OK.
Then I ran 1000 update cycles, each one updating a different set of 2000 rows and changing the d0 field while keeping the same length (2,000,000 row updates in total); at the end of all of them I issued VACUUM.
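The update phase was, in essence, something like this (again a sketch; the real test was driven from a client application, 1000 batches of 2000 rows each):

DO $$
BEGIN
  FOR batch IN 0..999 LOOP
    -- overwrite d0 with a fresh 20-byte value (16 bytes of md5 + 4 zero bytes),
    -- touching a rotating slice of 2000 ids each iteration
    UPDATE stato
       SET d0 = decode(md5(batch::text), 'hex') || '\x00000000'::bytea
     WHERE id >  (batch * 2000) % 10000
       AND id <= (batch * 2000) % 10000 + 2000;
  END LOOP;
END
$$;

VACUUM stato;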
Now the table size is 29MB.
Why so big? What is an upper bound for estimating a table's occupation on disk?
The same test, redone with each dX field 200 bytes long instead of 20, reports:
Size before UPDATES = 11MB. OK.
Size after UPDATES = 1.7GB. Why?
Attached is a txt file with the details of the statistics commands I issued (max row size, row count, etc.).
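The sizes and row statistics above were read with queries along these lines (the exact commands and their output are in the attachment):

SELECT pg_size_pretty(pg_relation_size('stato'))       AS heap_size,
       pg_size_pretty(pg_total_relation_size('stato')) AS total_size;

SELECT count(*)                     AS row_count,
       max(pg_column_size(stato.*)) AS max_row_size
FROM stato;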
Regards
Pupillo