Hi Tom
On Sat, Dec 10, 2016 at 1:15 PM, Tom DalPozzo <t.dalpozzo@gmail.com> wrote:
...
> Reported table size is 1.5MB. OK.
That's 150 bytes per row, pretty normal.
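You can check those figures yourself; a quick sketch, where 'yourtable' is
just a placeholder for your table name and reltuples is only an estimate
until the table has been analyzed:

  SELECT pg_size_pretty(pg_relation_size('yourtable')) AS table_size,
         reltuples::bigint AS estimated_rows,
         pg_relation_size('yourtable') / nullif(reltuples::bigint, 0)
           AS bytes_per_row
  FROM pg_class WHERE relname = 'yourtable';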
> Now, for 1000 times, I update 2000 different rows each time, changing d0
> field, keeping the same length, and at the end of all, I issued VACUUM.
And probably autovacuum or something similar kicked in meanwhile. 2M
updates on a table of about 10k rows is 200 updates per row; that's
pretty heavy traffic, and many tables do not see that in their whole
lifetime.
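If you want to see whether autovacuum did run and how much dead space it
left behind, the stats views show it (again, 'yourtable' is a placeholder):

  SELECT n_live_tup, n_dead_tup, last_vacuum, last_autovacuum
  FROM pg_stat_user_tables
  WHERE relname = 'yourtable';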
> Now table size is 29MB.
> Why so big? What is an upper bound to estimate a table occupation on disk?
Strictly, you could probably calculate an upper bound as row
size * number of insertions, given that an update approximates an
insertion plus a deletion. The original insertion used 1.5MB and the
updates rewrote every row about 200 more times, so I would estimate
201*1.5 = 301.5MB as an upper bound, but I doubt that's of any use.
That many updates probably left your table badly fragmented, with few
live rows per page. Under normal usage you do not need to worry, as
periodic vacuum would mark the space for reuse and the table would not
grow that big. But issuing 1000 rounds of updates, each touching 20% of
the table, is hardly normal usage; if you really need this kind of usage
maybe you should rethink your strategy.
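One such strategy, since your updates keep the row length the same, is to
leave free space in each page so new row versions can stay on the same
page (HOT updates) instead of spilling onto new pages. Just a sketch; the
table name and the fillfactor value are placeholders you would have to
tune:

  -- how many 8kB pages vs. how many rows (planner estimates)
  SELECT relpages, reltuples FROM pg_class WHERE relname = 'yourtable';

  -- keep pages half empty so same-size updates can reuse the page
  ALTER TABLE yourtable SET (fillfactor = 50);

Note the new fillfactor only applies to pages written from now on; the
already bloated data needs a table rewrite to be repacked.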
VACUUM FULL will probably pack the table and 'recover' the space; it
should be fast with just 29MB on disk. Not knowing your intended usage
nothing more can be recommended, but I've had some usage patterns where a
heavy update plus VACUUM FULL was used successfully.
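Something along these lines (placeholder table name again) shows the
effect. Keep in mind that VACUUM FULL takes an exclusive lock and
rewrites the whole table, which is fine at 29MB but worth remembering if
it grows:

  SELECT pg_size_pretty(pg_relation_size('yourtable'));  -- before
  VACUUM FULL yourtable;
  SELECT pg_size_pretty(pg_relation_size('yourtable'));  -- after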
Francisco Olarte.