Re: [HACKERS] autovacuum can't keep up, bloat just continues to rise - Mailing list pgsql-hackers

From Joshua D. Drake
Subject Re: [HACKERS] autovacuum can't keep up, bloat just continues to rise
Date
Msg-id 67f31b47-9886-b59a-17b7-1cdbbe8975ac@commandprompt.com
In response to Re: [HACKERS] autovacuum can't keep up, bloat just continues to rise  (Tom Lane <tgl@sss.pgh.pa.us>)
Responses Re: [HACKERS] autovacuum can't keep up, bloat just continues to rise
List pgsql-hackers
On 07/19/2017 07:57 PM, Tom Lane wrote:
> Peter Geoghegan <pg@bowt.ie> writes:
>> My argument for the importance of index bloat to the more general
>> bloat problem is simple: any bloat that accumulates, that cannot be
>> cleaned up, will probably accumulate until it impacts performance
>> quite noticeably.
> 
> But that just begs the question: *does* it accumulate indefinitely, or
> does it eventually reach a more-or-less steady state?  The traditional
> wisdom about btrees, for instance, is that no matter how full you pack
> them to start with, the steady state is going to involve something like
> 1/3rd free space.  You can call that bloat if you want, but it's not
> likely that you'll be able to reduce the number significantly without
> paying exorbitant costs.
> 
> I'm not claiming that we don't have any problems, but I do think it's
> important to draw a distinction between bloat and normal operating
> overhead.

Agreed, but I don't think we're talking about 30% here. Here is where I am 
at; the tests only finished 30 minutes ago:
                name                 |  setting
-------------------------------------+-----------
 autovacuum                          | on
 autovacuum_analyze_scale_factor     | 0.1
 autovacuum_analyze_threshold        | 50
 autovacuum_freeze_max_age           | 200000000
 autovacuum_max_workers              | 3
 autovacuum_multixact_freeze_max_age | 400000000
 autovacuum_naptime                  | 60
 autovacuum_vacuum_cost_delay        | 20
 autovacuum_vacuum_cost_limit        | -1
 autovacuum_vacuum_scale_factor      | 0.2
 autovacuum_vacuum_threshold         | 50
 autovacuum_work_mem                 | -1
 log_autovacuum_min_duration         | -1
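
For anyone who wants to reproduce that listing, it is essentially the output
of a query along these lines against pg_settings (just a sketch):

-- Pull the autovacuum-related GUCs shown above.
SELECT name, setting
  FROM pg_settings
 WHERE name LIKE 'autovacuum%'
    OR name = 'log_autovacuum_min_duration'
 ORDER BY name;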


Test 1: 55G    /srv/main
TPS:    955

Test 2: 112G    /srv/main
TPS:    531 (Not sure what happened here, long checkpoint?)

Test 3: 109G    /srv/main
TPS:    868

Test 4: 143G
TPS:    840

Test 5: 154G
TPS:     722
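
The /srv/main numbers above lump the heap and indexes together; to see which
side is actually growing between runs, something like this (just a sketch
against the catalogs) breaks the space out per relation:

-- Top 10 relations by on-disk size, with relkind so heap ('r') and
-- index ('i') growth can be compared between test runs.
SELECT c.relname,
       c.relkind,
       pg_size_pretty(pg_relation_size(c.oid)) AS size
  FROM pg_class c
  JOIN pg_namespace n ON n.oid = c.relnamespace
 WHERE n.nspname NOT IN ('pg_catalog', 'information_schema', 'pg_toast')
   AND c.relkind IN ('r', 'i')
 ORDER BY pg_relation_size(c.oid) DESC
 LIMIT 10;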

I am running the query here:

https://wiki.postgresql.org/wiki/Index_Maintenance#Summarize_keyspace_of_a_B-Tree_index

And I will post a follow-up. Once the query finishes I am going to launch 
the tests with autovacuum_vacuum_cost_limit set to 5000. Is there anything 
else you folks would like me to change?
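
For the cost-limit change, a minimal sketch of one way to apply it without a
restart (the parameter only needs a reload):

-- Raise the autovacuum cost limit cluster-wide and reload the config
-- (ALTER SYSTEM requires superuser).
ALTER SYSTEM SET autovacuum_vacuum_cost_limit = 5000;
SELECT pg_reload_conf();

-- Confirm the new value took effect.
SHOW autovacuum_vacuum_cost_limit;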

JD




-- 
Command Prompt, Inc. || http://the.postgres.company/ || @cmdpromptinc

PostgreSQL Centered full stack support, consulting and development.
Advocate: @amplifypostgres || Learn: https://pgconf.us
*****     Unless otherwise stated, opinions are my own.   *****


