Thread: vacuum full, why multiple times ?

vacuum full, why multiple times ?

From: Gaetano Mendola

Hi all,
this is the third email that I post, but I do not see it in the archives;
I believe it was too long, so this time I will trim the output.
Basically I'm noticing that a simple VACUUM FULL is not enough to
completely shrink the table:

# vacuum full verbose url;
INFO:  vacuuming "public.url"
INFO:  "url": found 268392 removable, 21286 nonremovable row versions in 8563 pages
DETAIL:  22 dead row versions cannot be removed yet.
Nonremovable row versions range from 104 to 860 bytes long.
There were 13924 unused item pointers.
Total free space (including removable row versions) is 63818404 bytes.
4959 pages are or will become empty, including 7 at the end of the table.
8296 pages containing 63753840 free bytes are potential move destinations.
CPU 0.33s/0.12u sec elapsed 9.55 sec.

[SNIPPED]

INFO:  "url": moved 2 row versions, truncated 8563 to 8550 pages


and after 4 VACUUM FULL runs:



empdb=# vacuum full verbose url;
INFO:  vacuuming "public.url"
INFO:  "url": found 13 removable, 21264 nonremovable row versions in 8504 pages
DETAIL:  0 dead row versions cannot be removed yet.
Nonremovable row versions range from 104 to 860 bytes long.
There were 280528 unused item pointers.
Total free space (including removable row versions) is 63349188 bytes.
4913 pages are or will become empty, including 0 at the end of the table.
8234 pages containing 63340628 free bytes are potential move destinations.
CPU 0.17s/0.04u sec elapsed 0.49 sec.

[SNIPPED]

INFO:  "url": moved 5666 row versions, truncated 8504 to 621 pages




Does anyone know why? I had the same behaviour with a 46000-row table
occupying 46000 pages! It was reduced to 3000 pages only after 7 VACUUM
FULL runs.


Regards
Gaetano Mendola




Re: vacuum full, why multiple times ?

From: Michael Fuhr
On Wed, Mar 09, 2005 at 02:02:13AM +0100, Gaetano Mendola wrote:

> Basically I'm noticing that a simple VACUUM FULL is not enough to
> completely shrink the table:
>
> # vacuum full verbose url;
> INFO:  vacuuming "public.url"
> INFO:  "url": found 268392 removable, 21286 nonremovable row versions in 8563 pages
> DETAIL:  22 dead row versions cannot be removed yet.

How busy is the database?  I'd guess that each time you run VACUUM,
there are still open transactions that have visibility to the dead
rows, so VACUUM doesn't touch them.  Those transactions eventually
complete, and eventually VACUUM FULL does what you're expecting.
I don't know if that's the only possible cause, but I get results
similar to yours if I have transactions open when I run VACUUM.
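
If you want to reproduce the effect in isolation, a sketch along these
lines shows it (the table name, row count and payload here are made up for
illustration, and generate_series needs 8.0 or later; run the two sessions
in separate psql connections):

-- throwaway table for the demonstration
CREATE TABLE vac_demo (id int, payload text);
INSERT INTO vac_demo SELECT g, repeat('x', 100) FROM generate_series(1, 10000) g;

-- session 1: open a transaction, take a snapshot, then leave it idle;
-- SERIALIZABLE makes the whole transaction keep the snapshot taken at
-- its first query
BEGIN;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
SELECT count(*) FROM vac_demo;
-- (do not COMMIT yet)

-- session 2: delete half the rows and try to reclaim the space
DELETE FROM vac_demo WHERE id <= 5000;
VACUUM FULL VERBOSE vac_demo;
-- the DETAIL line should say the deleted row versions "cannot be removed
-- yet"; once session 1 commits or rolls back, a second VACUUM FULL can
-- remove them and truncate the file

You can also look for such long-lived transactions in pg_stat_activity
(column names as of the 8.0-era catalog; newer releases call them pid
and query):

SELECT procpid, usename, query_start, current_query FROM pg_stat_activity;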

--
Michael Fuhr
http://www.fuhr.org/~mfuhr/

Re: vacuum full, why multiple times ?

From: Gaetano Mendola
Michael Fuhr wrote:
> On Wed, Mar 09, 2005 at 02:02:13AM +0100, Gaetano Mendola wrote:
>
>
>>Basically I'm noticing that a simple VACUUM FULL is not enough to
>>completely shrink the table:
>>
>># vacuum full verbose url;
>>INFO:  vacuuming "public.url"
>>INFO:  "url": found 268392 removable, 21286 nonremovable row versions in 8563 pages
>>DETAIL:  22 dead row versions cannot be removed yet.
>
>
> How busy is the database?  I'd guess that each time you run VACUUM,
> there are still open transactions that have visibility to the dead
> rows, so VACUUM doesn't touch them.  Those transactions eventually
> complete, and eventually VACUUM FULL does what you're expecting.
> I don't know if that's the only possible cause, but I get results
> similar to yours if I have transactions open when I run VACUUM.
>

That was my first thought too, but it seems strange that 2 dead rows were
holding on to 7883 pages, don't you think?


# vacuum full verbose url;
INFO:  vacuuming "public.url"
INFO:  "url": found 74 removable, 21266 nonremovable row versions in 8550 pages
DETAIL:  2 dead row versions cannot be removed yet.
[SNIP]
INFO:  "url": moved 11 row versions, truncated 8550 to 8504 pages


and in the next run:


# vacuum full verbose url;
INFO:  vacuuming "public.url"
INFO:  "url": found 13 removable, 21264 nonremovable row versions in 8504 pages
DETAIL:  0 dead row versions cannot be removed yet.
[SNIP]
INFO:  "url": moved 5666 row versions, truncated 8504 to 621 pages




Regards
Gaetano Mendola





Re: vacuum full, why multiple times ?

From: Richard Huxton
Gaetano Mendola wrote:
>
> # vacuum full verbose url;
> INFO:  vacuuming "public.url"
> INFO:  "url": found 74 removable, 21266 nonremovable row versions in 8550 pages
> DETAIL:  2 dead row versions cannot be removed yet.
> [SNIP]
> INFO:  "url": moved 11 row versions, truncated 8550 to 8504 pages
>
>
> and in the next run:
>
>
> # vacuum full verbose url;
> INFO:  vacuuming "public.url"
> INFO:  "url": found 13 removable, 21264 nonremovable row versions in 8504 pages
> DETAIL:  0 dead row versions cannot be removed yet.
> [SNIP]
> INFO:  "url": moved 5666 row versions, truncated 8504 to 621 pages

If page number 8549 was the one being held, I don't think vacuum can
truncate the file. The empty space can be re-used, but the rows can't be
moved to a lower page while a transaction is using them.
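
A quick way to check which pages still hold visible row versions is the
ctid system column, the physical (block, item) address of each row; this
is just a rough sanity check, not a fix:

-- ctid prints as "(block,item)".  Note: ORDER BY ctid needs a server with
-- ordering operators for the tid type; on old releases, scan the unordered
-- output of "SELECT ctid FROM url" instead.
SELECT ctid FROM url ORDER BY ctid DESC LIMIT 5;
-- if a block number close to 8549 shows up and that row version can
-- neither be removed nor moved lower, VACUUM FULL cannot truncate the
-- file past it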

--
   Richard Huxton
   Archonet Ltd

Re: vacuum full, why multiple times ?

From: Gaetano Mendola

Richard Huxton wrote:
> Gaetano Mendola wrote:
>
>>
>> # vacuum full verbose url;
>> INFO:  vacuuming "public.url"
>> INFO:  "url": found 74 removable, 21266 nonremovable row versions in
>> 8550 pages
>> DETAIL:  2 dead row versions cannot be removed yet.
>> [SNIP]
>> INFO:  "url": moved 11 row versions, truncated 8550 to 8504 pages
>>
>>
>> and in the next run:
>>
>>
>> # vacuum full verbose url;
>> INFO:  vacuuming "public.url"
>> INFO:  "url": found 13 removable, 21264 nonremovable row versions in
>> 8504 pages
>> DETAIL:  0 dead row versions cannot be removed yet.
>> [SNIP]
>> INFO:  "url": moved 5666 row versions, truncated 8504 to 621 pages
>
>
> If page number 8549 was the one being held, I don't think vacuum can
> truncate the file. The empty space can be re-used, but the rows can't be
> moved to a lower page while a transaction is using them.

It's clear now.


Regards
Gaetano Mendola


Re: vacuum full, why multiple times ?

From: Tom Lane
Gaetano Mendola <mendola@bigfoot.com> writes:
> Richard Huxton wrote:
>> If page number 8549 was the one being held, I don't think vacuum can
>> truncate the file. The empty space can be re-used, but the rows can't be
>> moved to a lower page while a transaction is using them.

> It's clear now.

Not entirely.  VACUUM FULL doesn't really worry about whether anyone
else "is using" the table --- it knows no one else is, because it holds
exclusive lock on the table.  However it must preserve dead tuples that
would still be visible to any existing transaction, because that other
transaction could come along and look at the table after VACUUM
finishes and releases the lock.

What really drives the process is that VACUUM FULL moves tuples in order
to make the file shorter (release empty pages at the end) --- and not
for any other reason.  So it could stop when there is still plenty of
dead space in the table.  It stops when the last nonempty page contains
a tuple that it can't find room for in any earlier page.

What I suppose you saw was that page 8503 contained a tuple so large it
wouldn't fit in the free space on any earlier page.  By the time of the
second vacuum, either this tuple was deleted, or deletion of some other
tuples had made a hole big enough for it to fit in.

The extent of the truncation in the second vacuum says that you had
quite a lot of free space, so it's a bit surprising that there wasn't
enough room in any one page for such a tuple to be moved, but that seems
to be what happened.
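
If you want to see how much reclaimable space is really left after each
pass, the contrib pgstattuple module reports it directly (assuming it is
installed; it ships in contrib/ and has to be loaded into the database by
hand):

SELECT * FROM pgstattuple('url');
-- returns table_len, tuple_count/tuple_len (live rows),
-- dead_tuple_count/dead_tuple_len (dead rows) and free_space/free_percent;
-- comparing the numbers before and after each VACUUM FULL shows how much
-- of the file is dead space versus free space that simply could not be
-- given back to the OS yet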

            regards, tom lane

Re: vacuum full, why multiple times ?

From: Gaetano Mendola
Tom Lane wrote:
> Gaetano Mendola <mendola@bigfoot.com> writes:
>
>>Richard Huxton wrote:
>>
>>>If page number 8549 was the one being held, I don't think vacuum can
>>>truncate the file. The empty space can be re-used, but the rows can't be
>>>moved to a lower page while a transaction is using them.
>
>
>>It's clear now.
>
>
> Not entirely.  VACUUM FULL doesn't really worry about whether anyone
> else "is using" the table --- it knows no one else is, because it holds
> exclusive lock on the table.  However it must preserve dead tuples that
> would still be visible to any existing transaction, because that other
> transaction could come along and look at the table after VACUUM
> finishes and releases the lock.
>
> What really drives the process is that VACUUM FULL moves tuples in order
> to make the file shorter (release empty pages at the end) --- and not
> for any other reason.  So it could stop when there is still plenty of
> dead space in the table.  It stops when the last nonempty page contains
> a tuple that it can't find room for in any earlier page.
>
> What I suppose you saw was that page 8503 contained a tuple so large it
> wouldn't fit in the free space on any earlier page.  By the time of the
> second vacuum, either this tuple was deleted, or deletion of some other
> tuples had made a hole big enough for it to fit in.
>
> The extent of the truncation in the second vacuum says that you had
> quite a lot of free space, so it's a bit surprising that there wasn't
> enough room in any one page for such a tuple to be moved, but that seems
> to be what happened.

All rows of that table are almost the same size, so that is not the reason,
and no rows were deleted either.
Maybe page 8503 contained a dead row?

I can send you another VACUUM FULL sequence off-list if you need it; I sent
it to the list, but apparently it was too large and none of you saw it.

Regards
Gaetano Mendola