Re: [HACKERS] autovacuum can't keep up, bloat just continues to rise - Mailing list pgsql-hackers
From: Sokolov Yura
Subject: Re: [HACKERS] autovacuum can't keep up, bloat just continues to rise
Msg-id: 652bdcacda269c749b3542f10fd1d7c3@postgrespro.ru
In response to: Re: [HACKERS] autovacuum can't keep up, bloat just continues to rise ("Joshua D. Drake" <jd@commandprompt.com>)
List: pgsql-hackers
On 2017-07-21 06:58, Joshua D. Drake wrote:
> On 07/19/2017 07:57 PM, Tom Lane wrote:
>> Peter Geoghegan <pg@bowt.ie> writes:
>>> My argument for the importance of index bloat to the more general
>>> bloat problem is simple: any bloat that accumulates, that cannot be
>>> cleaned up, will probably accumulate until it impacts performance
>>> quite noticeably.
>>
>> But that just begs the question: *does* it accumulate indefinitely, or
>> does it eventually reach a more-or-less steady state? The traditional
>> wisdom about btrees, for instance, is that no matter how full you pack
>> them to start with, the steady state is going to involve something like
>> 1/3rd free space. You can call that bloat if you want, but it's not
>> likely that you'll be able to reduce the number significantly without
>> paying exorbitant costs.
>>
>> I'm not claiming that we don't have any problems, but I do think it's
>> important to draw a distinction between bloat and normal operating
>> overhead.
>
> Agreed, but we aren't talking about 30%, I don't think. Here is where I
> am at. It took until 30 minutes ago for the tests to finish:
>
>                  name                 |  setting
> -------------------------------------+-----------
>  autovacuum                          | on
>  autovacuum_analyze_scale_factor     | 0.1
>  autovacuum_analyze_threshold        | 50
>  autovacuum_freeze_max_age           | 200000000
>  autovacuum_max_workers              | 3
>  autovacuum_multixact_freeze_max_age | 400000000
>  autovacuum_naptime                  | 60
>  autovacuum_vacuum_cost_delay        | 20
>  autovacuum_vacuum_cost_limit        | -1
>  autovacuum_vacuum_scale_factor      | 0.2
>  autovacuum_vacuum_threshold         | 50
>  autovacuum_work_mem                 | -1
>  log_autovacuum_min_duration         | -1
>
> Test 1: 55G  /srv/main
> TPS: 955
>
> Test 2: 112G /srv/main
> TPS: 531 (Not sure what happened here, long checkpoint?)
>
> Test 3: 109G /srv/main
> TPS: 868
>
> Test 4: 143G
> TPS: 840
>
> Test 5: 154G
> TPS: 722
>
> I am running the query here:
>
> https://wiki.postgresql.org/wiki/Index_Maintenance#Summarize_keyspace_of_a_B-Tree_index
>
> And will post a followup. Once the query finishes I am going to launch
> the tests with autovacuum_vacuum_cost_limit of 5000. Is there anything
> else you folks would like me to change?
>
> JD
>
> --
> Command Prompt, Inc. || http://the.postgres.company/ || @cmdpromptinc
>
> PostgreSQL Centered full stack support, consulting and development.
> Advocate: @amplifypostgres || Learn: https://pgconf.us
> ***** Unless otherwise stated, opinions are my own. *****

Have you measured with an increased vacuum ring buffer? That will
require recompilation, though (see the sketch below).

With regards,

--
Sokolov Yura
Postgres Professional: https://postgrespro.ru
The Russian Postgres Company
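The vacuum ring buffer size is a hard-coded constant in the backend, so
trying a larger ring really does mean editing the source and rebuilding.
A minimal sketch of where it lives, based on the GetAccessStrategy()
switch in src/backend/storage/buffer/freelist.c (the exact layout varies
between versions, and the 16 MB figure below is only a hypothetical
experiment value, not a recommendation):

    /*
     * Approximate excerpt from GetAccessStrategy() in
     * src/backend/storage/buffer/freelist.c: the vacuum ring buffer is
     * sized here at compile time, hence the need to rebuild.
     */
    case BAS_VACUUM:
        ring_size = 256 * 1024 / BLCKSZ;        /* stock 256 kB ring */
        /* hypothetical experiment: enlarge the ring to 16 MB */
        /* ring_size = 16 * 1024 * 1024 / BLCKSZ; */
        break;

After changing the constant, rebuilding and reinstalling the server
binaries ("make" then "make install") and restarting picks up the new
ring size.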