Tom DalPozzo <t.dalpozzo@gmail.com> wrote:
> Hi,
> I have a table ('stato') with an indexed bigint ('Id') and 5 bytea fields
> ('d0','d1',...,'d4').
> I populated the table with 10000 rows; each d.. field was initialized
> with 20 bytes.
> Reported table size is 1.5MB. OK.
> Then, 1000 times over, I updated 2000 different rows each time, changing
> the d0 field but keeping the same length, and at the end of it all I
> issued VACUUM.
> Now table size is 29MB.
>
> Why so big? What is an upper bound to estimate a table occupation on disk?
Every (!) UPDATE creates a new row version and marks the old one as dead,
but does not delete it. A VACUUM marks old row versions as reusable - if
there is no running transaction that can still see them. That's how MVCC
works in PostgreSQL.
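
As a back-of-envelope worst case (a rough sketch only, assuming every
updated row version survives until the final VACUUM, and using the initial
table size to estimate bytes per row including tuple and page overhead):

```python
# Worst-case on-disk size under MVCC if no space is ever reused:
# every UPDATE leaves a dead row version behind until VACUUM runs.

initial_rows = 10_000
initial_size = 1.5 * 1024 * 1024            # 1.5 MB reported after the load

bytes_per_row = initial_size / initial_rows  # ~157 bytes incl. overhead

updates = 1000 * 2000                        # 1000 batches x 2000 rows each
max_versions = initial_rows + updates        # live rows + dead versions

upper_bound_mb = max_versions * bytes_per_row / (1024 * 1024)
print(f"{upper_bound_mb:.1f} MB")            # prints "301.5 MB"
```

Your observed 29MB is far below that worst case, most likely because
autovacuum ran between your update batches and later updates recycled the
space it marked reusable.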
Regards, Andreas Kretschmer
--
Andreas Kretschmer
http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services