Thread: Bloated tables and why is vacuum full the only option

Bloated tables and why is vacuum full the only option

From
Claudio Freire
Date:
Hello list.

I know all the theory about vacuuming. I've got log tables that get
periodically pruned. The pruning is... quirky, though. It's not so
much deleting data, as summarizing many (thousands) of rows into one
single row. For that, a combination of deletes and updates is used.

In essence, the tables are write-only except for the summarization
step for old data.
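
For concreteness, a hypothetical sketch of what such a summarization
step might look like; the grouping by day and the clicks/is_summary
columns are only illustrative, the real schema isn't shown:

BEGIN;
-- turn the oldest row of each old day into the one summary row
-- (clicks and is_summary are invented columns for illustration)
UPDATE click_logs c
   SET clicks = s.clicks, is_summary = true
  FROM (SELECT min(id) AS keep_id, count(*) AS clicks
          FROM click_logs
         WHERE created < now() - interval '30 days'
         GROUP BY date_trunc('day', created)) s
 WHERE c.id = s.keep_id;
-- ...then delete the thousands of detail rows it replaces
DELETE FROM click_logs
 WHERE created < now() - interval '30 days'
   AND is_summary IS NOT TRUE;
COMMIT;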

Many tables are becoming increasingly bloated, which is somewhat
expected due to this usage pattern: I had expected table size to be
about constant, holding recent data plus archived old data (which is
really small compared to full recent logs), with some constant-sized
bloat due to daily summarization updates/deletes.

What I'm seeing, though, is not that, but bloat proportional to table
size (always stuck at about 65% bloat). What's weird is that vacuum
full does the trick of reducing table size and bloat back to 0%. I
haven't had time yet to verify whether it goes back to 65% after
vacuum full (that will take time, maybe a month).

Question is... why isn't all that free space being used? The table
grows in size even though there's plenty (65%) of free space.

I've got autovacuum severely crippled and that could be a reason, but
I do perform regular vacuum runs weekly that always run to completion.
I also do routine reindexing to stop index bloat in its tracks, yet
freshly-reindexed indexes get considerably reduced in size with vacuum
full.

Is there some case in which regular vacuum would fail to reclaim space
but vacuum full would not?

I'm running PostgreSQL 9.2.5. If this were 8.x, I'd suspect the free
space map, but AFAIK there's no such limit in 9.2. Relevant
non-default settings are:

"archive_mode";"on"
"autovacuum_analyze_scale_factor";"0.05"
"autovacuum_analyze_threshold";"500"
"autovacuum_max_workers";"1"
"autovacuum_naptime";"900"
"autovacuum_vacuum_cost_delay";"80"
"autovacuum_vacuum_cost_limit";"100"
"autovacuum_vacuum_scale_factor";"0.1"
"autovacuum_vacuum_threshold";"500"
"bgwriter_lru_maxpages";"200"
"checkpoint_completion_target";"0.9"
"checkpoint_segments";"64"
"checkpoint_timeout";"1800"
"maintenance_work_mem";"1048576"
"vacuum_cost_delay";"20"
"wal_level";"archive"


[0] http://labs.omniti.com/labs/pgtreats/browser/trunk/tools/pg_bloat_report.pl


Re: Bloated tables and why is vacuum full the only option

From
Tomas Vondra
Date:
On 7.2.2014 19:47, Claudio Freire wrote:
>
> Question is... why isn't all that free space being used? The table
> grows in size even though there's plenty (65%) of free space.
>
> I've got autovacuum severely crippled and that could be a reason, but
> I do perform regular vacuum runs weekly that always run to completion.
> I also do routine reindexing to stop index bloat on its tracks, yet
> freshly-reindexed indexes get considerably reduced in size with vacuum
> full.

Are you logging autovacuum actions? I.e. what is

    log_autovacuum_min_duration

set to? If it's set to -1 you won't get any messages because of
conflicting locks or stuff like that, which might be the culprit here.

Also, when you're running the weekly VACUUM, do VACUUM (VERBOSE) and
post it here. That might at least help us eliminate some of the usual
suspects.
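
A minimal sketch of both suggestions (the 5000 ms threshold and the table
name below are only illustrative):

-- in postgresql.conf, then reload -- it's not a per-session setting:
--   log_autovacuum_min_duration = 5000
SELECT pg_reload_conf();
SHOW log_autovacuum_min_duration;

-- and the weekly run, by hand, with the verbose output captured:
VACUUM (VERBOSE, ANALYZE) some_log_table;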

regards
Tomas


Re: Bloated tables and why is vacuum full the only option

From
Claudio Freire
Date:
On Sun, Feb 9, 2014 at 12:50 PM, Tomas Vondra <tv@fuzzy.cz> wrote:
> On 7.2.2014 19:47, Claudio Freire wrote:
>>
>> Question is... why isn't all that free space being used? The table
>> grows in size even though there's plenty (65%) of free space.
>>
>> I've got autovacuum severely crippled and that could be a reason, but
>> I do perform regular vacuum runs weekly that always run to completion.
>> I also do routine reindexing to stop index bloat on its tracks, yet
>> freshly-reindexed indexes get considerably reduced in size with vacuum
>> full.
>
> Are you logging autovacuum actions? I.e. what is
>
>     log_autovacuum_min_duration
>
> set to? If it's set to -1 you won't get any messages because of
> conflicting locks or stuff like that, which might be the culprit here.

It was set to -1. I set it to 5000 and I'll be keeping an eye on the logs.

> Also, when you're running the weekly VACUUM, do VACUUM (VERBOSE) and
> post it here. That might at least help us eliminate some of the usual
> suspects.

I'm running this from a cron job; I'll see about dumping the results to a
log file and will post them when it's done.


Re: Bloated tables and why is vacuum full the only option

From
Tom Lane
Date:
Claudio Freire <klaussfreire@gmail.com> writes:
>>> I also do routine reindexing to stop index bloat on its tracks, yet
>>> freshly-reindexed indexes get considerably reduced in size with vacuum
>>> full.

AFAIK there's no reason for vacuum full to produce a different result
from reindex.  Did you mean to say that the indexes get smaller than
what they had been after some normal operation?  If so it's worth noting
this comment from the btree index building code (nbtsort.c):

 * It is not wise to pack the pages entirely full, since then *any*
 * insertion would cause a split (and not only of the leaf page; the need
 * for a split would cascade right up the tree).  The steady-state load
 * factor for btrees is usually estimated at 70%.  We choose to pack leaf
 * pages to the user-controllable fill factor (default 90%) while upper pages
 * are always packed to 70%.  This gives us reasonable density (there aren't
 * many upper pages if the keys are reasonable-size) without risking a lot of
 * cascading splits during early insertions.

As the comment notes, the initial state of a freshly-built index is packed
more densely than what you can expect after a lot of insertions/updates
have occurred.  That's not a bug, it's just a fact of life.
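
If you want to see the actual density rather than guess, the pgstattuple
contrib module can report it per index (the index name below is just an
example):

CREATE EXTENSION pgstattuple;
-- roughly 90% right after a rebuild (the leaf fillfactor), drifting
-- toward ~70% under steady-state insert/update load
SELECT avg_leaf_density, leaf_fragmentation
  FROM pgstatindex('some_log_table_pkey');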

Also, there are certain usage patterns that can result in btree indexes
having densities much lower than the conventional-wisdom 70%.  The main
one I've seen in practice is "decimation", where you delete say 99 out
of every 100 entries in index order.  This leaves just a few live entries
in each leaf page --- but our btree code doesn't reclaim an index page
for recycling until it's totally empty.  So you can end up with a very
low load factor after doing something like that, and a reindex is the
only good way to fix it.
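
Since plain REINDEX blocks writes to the table while it runs, the usual
lower-impact way to rebuild such an index on 9.2 is the create-new/drop-old
dance (names are illustrative; constraint-backing indexes such as primary
keys need extra steps):

CREATE INDEX CONCURRENTLY some_log_table_created_idx_new
    ON some_log_table (created);
DROP INDEX CONCURRENTLY some_log_table_created_idx;
ALTER INDEX some_log_table_created_idx_new
    RENAME TO some_log_table_created_idx;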

            regards, tom lane


Re: Bloated tables and why is vacuum full the only option

From
Claudio Freire
Date:
On Sun, Feb 9, 2014 at 4:40 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Claudio Freire <klaussfreire@gmail.com> writes:
>>>> I also do routine reindexing to stop index bloat on its tracks, yet
>>>> freshly-reindexed indexes get considerably reduced in size with vacuum
>>>> full.
>
> AFAIK there's no reason for vacuum full to produce a different result
> from reindex.  Did you mean to say that the indexes get smaller than
> what they had been after some normal operation?  If so it's worth noting
> this comment from the btree index building code (nbtsort.c):

Smaller than after reindex. It was a surprise to me too.

> Also, there are certain usage patterns that can result in btree indexes
> having densities much lower than the conventional-wisdom 70%.  The main
> one I've seen in practice is "decimation", where you delete say 99 out
> of every 100 entries in index order.  This leaves just a few live entries
> in each leaf page --- but our btree code doesn't reclaim an index page
> for recycling until it's totally empty.  So you can end up with a very
> low load factor after doing something like that, and a reindex is the
> only good way to fix it.

That's exactly the kind of pattern the "archival" step results in;
that's why I do routine reindexing.


Re: Bloated tables and why is vacuum full the only option

From
Sergey Konoplev
Date:
On Fri, Feb 7, 2014 at 10:47 AM, Claudio Freire <klaussfreire@gmail.com> wrote:
> What I'm seeing, though, is not that, but bloat proportional to table
> size (always stuck at about 65% bloat). What's weird, is that vacuum
> full does the trick of reducing table size and bloat back to 0%. I
> haven't had time yet to verify whether it goes back to 65% after
> vacuum full (that will take time, maybe a month).

Try pgcompact; it was designed particularly for cases like yours:
https://github.com/grayhemp/pgtoolkit.

--
Kind regards,
Sergey Konoplev
PostgreSQL Consultant and DBA

http://www.linkedin.com/in/grayhemp
+1 (415) 867-9984, +7 (901) 903-0499, +7 (988) 888-1979
gray.ru@gmail.com


Re: Bloated tables and why is vacuum full the only option

From
Claudio Freire
Date:
On Sun, Feb 9, 2014 at 7:32 PM, Sergey Konoplev <gray.ru@gmail.com> wrote:
> On Fri, Feb 7, 2014 at 10:47 AM, Claudio Freire <klaussfreire@gmail.com> wrote:
>> What I'm seeing, though, is not that, but bloat proportional to table
>> size (always stuck at about 65% bloat). What's weird, is that vacuum
>> full does the trick of reducing table size and bloat back to 0%. I
>> haven't had time yet to verify whether it goes back to 65% after
>> vacuum full (that will take time, maybe a month).
>
> Try pgcompact, it was designed particularily for such cases like yours
> https://github.com/grayhemp/pgtoolkit.

It's a pity that it requires several sequential scans of the tables.
For my case, that's probably as intrusive as the exclusive locks.

I noticed I didn't mention, but the tables involved are around 20-50GB in size.


Re: Bloated tables and why is vacuum full the only option

From
Sergey Konoplev
Date:
On Sun, Feb 9, 2014 at 2:58 PM, Claudio Freire <klaussfreire@gmail.com> wrote:
> On Sun, Feb 9, 2014 at 7:32 PM, Sergey Konoplev <gray.ru@gmail.com> wrote:
>> Try pgcompact, it was designed particularily for such cases like yours
>> https://github.com/grayhemp/pgtoolkit.
>
> It's a pity that that requires several sequential scans of the tables.
> For my case, that's probably as intrusive as the exclusive locks.

You should probably run it with --no-pgstattuple if it's those seq
scans you're worried about. If your tables are not TOASTed, the
approximation method of gathering statistics should work pretty well
for you.
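
If exact numbers are wanted at some point, the pgstattuple contrib
function reports live/dead/free space directly -- it does scan the whole
table, though (the table name below is illustrative):

CREATE EXTENSION pgstattuple;
SELECT table_len, tuple_percent, dead_tuple_percent, free_percent
  FROM pgstattuple('some_log_table');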

> I noticed I didn't mention, but the tables involved are around 20-50GB in size.

That's not something I would worry about. I regularly use it on even
bigger tables.

--
Kind regards,
Sergey Konoplev
PostgreSQL Consultant and DBA

http://www.linkedin.com/in/grayhemp
+1 (415) 867-9984, +7 (901) 903-0499, +7 (988) 888-1979
gray.ru@gmail.com


Re: Bloated tables and why is vacuum full the only option

From
Jeff Janes
Date:
On Fri, Feb 7, 2014 at 10:47 AM, Claudio Freire <klaussfreire@gmail.com> wrote:
> Hello list.
>
> I know all the theory about vacuuming. I've got log tables that get
> periodically pruned. The pruning is... quirky, though. It's not so
> much deleting data, as summarizing many (thousands) of rows into one
> single row. For that, a combination of deletes and updates are used.
>
> In essence, the tables are write-only except for the summarization
> step for old data.
>
> Many tables are becoming increasingly bloated, which is somewhat
> expected due to this usage pattern: I had expected table size to be
> about constant, holding recent data plus archived old data (which is
> really small compared to full recent logs), with some constant-sized
> bloat due to daily summarization updates/deletes.
>
> What I'm seeing, though, is not that, but bloat proportional to table
> size (always stuck at about 65% bloat). What's weird, is that vacuum
> full does the trick of reducing table size and bloat back to 0%. I
> haven't had time yet to verify whether it goes back to 65% after
> vacuum full (that will take time, maybe a month).
>
> Question is... why isn't all that free space being used? The table
> grows in size even though there's plenty (65%) of free space.

What does this look like with the pg_bloat_report.pl you linked to?

Does pg_freespace agree that that space is reusable?

SELECT avail,count(*) FROM pg_freespace('pgbench_accounts') group by avail;

Cheers,

Jeff

Re: Bloated tables and why is vacuum full the only option

From
Claudio Freire
Date:
On Tue, Feb 11, 2014 at 8:29 PM, Jeff Janes <jeff.janes@gmail.com> wrote:
> On Fri, Feb 7, 2014 at 10:47 AM, Claudio Freire <klaussfreire@gmail.com>
> wrote:
>>
>> Hello list.
>>
>> I know all the theory about vacuuming. I've got log tables that get
>> periodically pruned. The pruning is... quirky, though. It's not so
>> much deleting data, as summarizing many (thousands) of rows into one
>> single row. For that, a combination of deletes and updates are used.
>>
>> In essence, the tables are write-only except for the summarization
>> step for old data.
>>
>> Many tables are becoming increasingly bloated, which is somewhat
>> expected due to this usage pattern: I had expected table size to be
>> about constant, holding recent data plus archived old data (which is
>> really small compared to full recent logs), with some constant-sized
>> bloat due to daily summarization updates/deletes.
>>
>> What I'm seeing, though, is not that, but bloat proportional to table
>> size (always stuck at about 65% bloat). What's weird, is that vacuum
>> full does the trick of reducing table size and bloat back to 0%. I
>> haven't had time yet to verify whether it goes back to 65% after
>> vacuum full (that will take time, maybe a month).
>>
>> Question is... why isn't all that free space being used? The table
>> grows in size even though there's plenty (65%) of free space.
>
>
> What does this look like with the pg_bloat_report.pl you linked to?
>
> Does pg_freespace agree that that space is reusable?
>
> SELECT avail,count(*) FROM pg_freespace('pgbench_accounts') group by avail;

I don't have pg_freespacemap installed on that server.

I guess I'll take the next opportunity to install it.
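
For reference, a minimal sketch of what that would look like once the
contrib extension is in place (the table name is illustrative):

CREATE EXTENSION pg_freespacemap;
-- free space recorded per page; lots of pages showing large "avail"
-- that still never get reused would point at the FSM itself
SELECT avail, count(*)
  FROM pg_freespace('some_log_table')
 GROUP BY avail
 ORDER BY avail;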


Re: Bloated tables and why is vacuum full the only option

From
Claudio Freire
Date:
On Sun, Feb 9, 2014 at 12:50 PM, Tomas Vondra <tv@fuzzy.cz> wrote:
> Also, when you're running the weekly VACUUM, do VACUUM (VERBOSE) and
> post it here. That might at least help us eliminate some of the usual
> suspects.

I have the full dump if relevant. Here is the relevant extract:

Report:
  1. public.full_bid_logs
2551219 of 4417505 pages wasted (57.8%), 19 GB of 34 GB.
  2. public.non_bid_logs
814076 of 1441728 pages wasted (56.5%), 6360 MB of 11 GB.
  3. public.click_logs
631630 of 2252999 pages wasted (28.0%), 4935 MB of 17 GB.
  4. public.full_impression_logs
282298 of  762238 pages wasted (37.0%), 2205 MB of 5955 MB.

Most of these tables appear in the log with cancelled autovacuums (on
Wednesdays, which suggests it's the manual vacuum itself doing the
cancelling, since those runs happen on Wednesdays):

/media/data/pgsql9/data/pg_log/postgresql-Wed.log:LOG:  sending cancel
to blocking autovacuum PID 30000
/media/data/pgsql9/data/pg_log/postgresql-Wed.log:ERROR:  canceling
autovacuum task
/media/data/pgsql9/data/pg_log/postgresql-Wed.log:CONTEXT:  automatic
vacuum of table "mat.public.aggregated_tracks_hourly_full"
/media/data/pgsql9/data/pg_log/postgresql-Wed.log:LOG:  sending cancel
to blocking autovacuum PID 30000
/media/data/pgsql9/data/pg_log/postgresql-Wed.log:ERROR:  canceling
autovacuum task
/media/data/pgsql9/data/pg_log/postgresql-Wed.log:CONTEXT:  automatic
vacuum of table "mat.public.bid_logs"
/media/data/pgsql9/data/pg_log/postgresql-Wed.log:LOG:  sending cancel
to blocking autovacuum PID 30000
/media/data/pgsql9/data/pg_log/postgresql-Wed.log:ERROR:  canceling
autovacuum task
/media/data/pgsql9/data/pg_log/postgresql-Wed.log:CONTEXT:  automatic
vacuum of table "mat.public.full_impression_logs"
/media/data/pgsql9/data/pg_log/postgresql-Wed.log:LOG:  sending cancel
to blocking autovacuum PID 30000
/media/data/pgsql9/data/pg_log/postgresql-Wed.log:ERROR:  canceling
autovacuum task
/media/data/pgsql9/data/pg_log/postgresql-Wed.log:CONTEXT:  automatic
vacuum of table "mat.public.non_bid_logs"

Note: bid_logs above is the one I already vacuumed full, which is why
it doesn't show up in the bloat report.
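
As a quick cross-check on whether autovacuum ever gets to finish on these
tables, the cumulative stats view can be queried (columns as of 9.2):

SELECT relname, last_vacuum, last_autovacuum, n_live_tup, n_dead_tup
  FROM pg_stat_user_tables
 WHERE relname IN ('full_bid_logs', 'non_bid_logs',
                   'click_logs', 'full_impression_logs');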

(manual) Vacuum verbose follows:

...
INFO:  vacuuming "public.full_bid_logs"
INFO:  scanned index "full_bid_logs_pkey" to remove 11416990 row versions
DETAIL:  CPU 16.49s/133.42u sec elapsed 2660.01 sec.
INFO:  "full_bid_logs": removed 11416990 row versions in 171209 pages
DETAIL:  CPU 5.28s/5.18u sec elapsed 610.86 sec.
INFO:  index "full_bid_logs_pkey" now contains 267583488 row versions
in 2207958 pages
DETAIL:  10206852 index row versions were removed.
3852 index pages have been deleted, 3010 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.04 sec.
INFO:  "full_bid_logs": found 4625843 removable, 29252597 nonremovable
row versions in 402582 out of 4417505 pages
DETAIL:  0 dead row versions cannot be removed yet.
There were 2408853 unused item pointers.
0 pages are entirely empty.
CPU 30.10s/145.42u sec elapsed 4317.03 sec.
INFO:  analyzing "public.full_bid_logs"
INFO:  "full_bid_logs": scanned 30000 of 4417505 pages, containing
1802104 live rows and 0 dead rows; 30000 rows in sample, 211394515
estimated total rows
...
INFO:  vacuuming "public.non_bid_logs"
INFO:  scanned index "non_bid_logs_pkey" to remove 10137502 row versions
DETAIL:  CPU 1.55s/23.23u sec elapsed 325.36 sec.
INFO:  scanned index "non_bid_logs_created_idx" to remove 10137502 row versions
DETAIL:  CPU 1.60s/23.40u sec elapsed 332.35 sec.
INFO:  "non_bid_logs": removed 10137502 row versions in 116848 pages
DETAIL:  CPU 3.60s/1.36u sec elapsed 403.18 sec.
INFO:  index "non_bid_logs_pkey" now contains 41372799 row versions in
176455 pages
DETAIL:  10106519 index row versions were removed.
29944 index pages have been deleted, 11887 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO:  index "non_bid_logs_created_idx" now contains 41384156 row
versions in 184580 pages
DETAIL:  10137502 index row versions were removed.
45754 index pages have been deleted, 18310 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO:  "non_bid_logs": found 1378107 removable, 4283511 nonremovable
row versions in 163702 out of 1451616 pages
DETAIL:  0 dead row versions cannot be removed yet.
There were 1345761 unused item pointers.
0 pages are entirely empty.
CPU 9.28s/49.42u sec elapsed 1378.98 sec.
INFO:  "non_bid_logs": suspending truncate due to conflicting lock request
INFO:  "non_bid_logs": truncated 1451616 to 1441728 pages
DETAIL:  CPU 0.18s/0.20u sec elapsed 6.98 sec.
INFO:  "non_bid_logs": stopping truncate due to conflicting lock request
INFO:  analyzing "public.non_bid_logs"
INFO:  "non_bid_logs": scanned 30000 of 1441728 pages, containing
874070 live rows and 0 dead rows; 30000 rows in sample, 47720462
estimated total rows
...
INFO:  vacuuming "public.click_logs"
INFO:  scanned index "click_logs_pkey" to remove 9594881 row versions
DETAIL:  CPU 7.34s/67.36u sec elapsed 1194.09 sec.
INFO:  "click_logs": removed 9594881 row versions in 135032 pages
DETAIL:  CPU 4.36s/2.06u sec elapsed 485.08 sec.
INFO:  index "click_logs_pkey" now contains 155456958 row versions in
968791 pages
DETAIL:  9579492 index row versions were removed.
2283 index pages have been deleted, 1008 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.63 sec.
INFO:  "click_logs": found 5502098 removable, 24781508 nonremovable
row versions in 398918 out of 2252999 pages
DETAIL:  0 dead row versions cannot be removed yet.
There were 1447127 unused item pointers.
0 pages are entirely empty.
CPU 20.54s/74.54u sec elapsed 2796.24 sec.
INFO:  "click_logs": stopping truncate due to conflicting lock request
INFO:  vacuuming "pg_toast.pg_toast_16563"
INFO:  index "pg_toast_16563_index" now contains 0 row versions in 1 pages
DETAIL:  0 index row versions were removed.
0 index pages have been deleted, 0 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO:  "pg_toast_16563": found 0 removable, 0 nonremovable row
versions in 0 out of 0 pages
DETAIL:  0 dead row versions cannot be removed yet.
There were 0 unused item pointers.
0 pages are entirely empty.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO:  analyzing "public.click_logs"
INFO:  "click_logs": scanned 30000 of 2252999 pages, containing
2057564 live rows and 0 dead rows; 30000 rows in sample, 113618340
estimated total rows
...
INFO:  vacuuming "public.full_impression_logs"
INFO:  scanned index "full_impression_logs_pkey" to remove 4982049 row versions
DETAIL:  CPU 4.50s/27.49u sec elapsed 658.99 sec.
INFO:  "full_impression_logs": removed 4982049 row versions in 45983 pages
DETAIL:  CPU 1.41s/1.00u sec elapsed 168.53 sec.
INFO:  index "full_impression_logs_pkey" now contains 61634251 row
versions in 531050 pages
DETAIL:  759542 index row versions were removed.
29 index pages have been deleted, 29 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.01 sec.
INFO:  "full_impression_logs": found 0 removable, 758433 nonremovable
row versions in 48383 out of 763267 pages
DETAIL:  0 dead row versions cannot be removed yet.
There were 20689 unused item pointers.
0 pages are entirely empty.
CPU 6.40s/28.80u sec elapsed 879.32 sec.
INFO:  "full_impression_logs": truncated 763267 to 762238 pages
DETAIL:  CPU 0.00s/0.00u sec elapsed 0.05 sec.
INFO:  analyzing "public.full_impression_logs"
INFO:  "full_impression_logs": scanned 30000 of 762238 pages,
containing 2415147 live rows and 0 dead rows; 30000 rows in sample,
41836806 estimated total rows