Thread: toast table growing indefinitely? Known problems?
Is there a known problem with TOAST tables growing indefinitely in 7.2? We have a database that has large text chunks inserted and deleted regularly, and despite regular vacuums the toast table continues to grow. We cannot do a VACUUM FULL since it's a 24x7 system, but we do VACUUM frequently. Based on other threads we have tried using REINDEX on the base table, but we have confirmed that the toast index is not the main consumer of space, and the REINDEX does not seem to help.

On a separate issue, we have seen toast indexes growing indefinitely on 7.3 despite VACUUM FULL. Attempting a reindex results in:

    # reindex table tt;
    WARNING:  table "tt" wasn't reindexed
    REINDEX

Any help and/or pointers would be appreciated.

----------------------------------------------------------------
Philip Warner
Albatross Consulting Pty. Ltd. (A.B.N. 75 008 659 498)
Tel: (+61) 0500 83 82 81  |  Fax: (+61) 03 5330 3172
Http://www.rhyme.com.au
PGP key available upon request, and from pgp5.ai.mit.edu:11371
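One way to double-check which relation is actually consuming the pages is to query the system catalogs; a sketch only (relpages is only as fresh as the last VACUUM or ANALYZE, so treat the numbers as approximate):

    -- Ten largest relations by page count: lets you compare the base
    -- table, its toast table, and the toast index directly.
    SELECT relname, relkind, relpages
    FROM pg_class
    ORDER BY relpages DESC
    LIMIT 10;

Multiplying relpages by the block size (8192 bytes by default) gives a rough on-disk size for each relation.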
Philip Warner <pjw@rhyme.com.au> writes:
> Is there a known problem with TOAST tables growing indefinitely in 7.2?

No. The index on the toast table may well bloat, because it's tracking a moving range of toast-item OIDs. But the table itself should be okay so long as it's vacuumed regularly. If you're seeing bloat while using plain (not full) vacuums, my guess would be that you need to enlarge the FSM parameters in postgresql.conf.

> On a separate issue, we have seen toast indexes growing indefinitely on 7.3
> despite VACUUM FULL. Attempting a reindex results in:
> # reindex table tt;
> WARNING:  table "tt" wasn't reindexed
> REINDEX

AFAIK that will only reindex tt's own indexes. To reindex the toast table would require naming same, with something like:

    regression=# reindex table pg_toast.pg_toast_1675403;
    REINDEX

or you could specify the index:

    regression=# reindex index pg_toast.pg_toast_1675403_index;
    REINDEX

The number that appears in these names is the OID of the table owning the toast table (ie, tt's oid).

			regards, tom lane
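Rather than constructing the toast table's name by hand from the owning table's OID, it can be looked up through the reltoastrelid column of pg_class; a minimal sketch, assuming the base table is named tt:

    -- Find the toast table that belongs to "tt" by following the
    -- reltoastrelid link in the system catalogs.
    SELECT t.relname
    FROM pg_class c
    JOIN pg_class t ON t.oid = c.reltoastrelid
    WHERE c.relname = 'tt';

The name this returns is exactly what goes into the REINDEX TABLE pg_toast.* commands shown above.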
At 12:37 AM 2/12/2002 -0500, Tom Lane wrote:
>If you're seeing bloat while using plain (not full) vacuums, my guess
>would be that you need to enlarge the FSM parameters in postgresql.conf.

Good call; they are still at the default (10000 pages) and we are seeing 26000 pages per day being updated:

    NOTICE:  Removed 102226 tuples in 26002 pages.
            CPU 2.02s/1.87u sec elapsed 69.85 sec.
    NOTICE:  Pages 201819: Changed 26128, Empty 0; Tup 315227: Vac 102226,
            Keep 0, UnUsed 393793.
            Total CPU 9.57s/3.07u sec elapsed 189.32 sec.

I've increased the value to 40000 since we have much busier days, and I'll see what happens.

Is there any way (other than VACUUM FULL) to recover the current lost space?
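The change described above amounts to editing postgresql.conf and restarting the postmaster; an illustrative fragment (the values come from this thread and are not a general recommendation):

    # postgresql.conf -- free space map sizing; takes effect at server restart
    max_fsm_pages = 40000       # was the default 10000; daily churn here is ~26000 pages
    # max_fsm_relations may also need raising if many tables and indexes
    # are vacuumed, since pages are tracked per relation.

As a rule of thumb, max_fsm_pages should comfortably exceed the number of pages with reclaimable space reported by VACUUM VERBOSE between vacuum runs, or the overflow is simply forgotten and the table bloats.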
Philip Warner <pjw@rhyme.com.au> writes:
> Is there any way (other than VACUUM FULL) to recover the current lost space?

Probably not. Plain VACUUM will reclaim any empty pages it happens to see at the end of the table, but without a VACUUM FULL you won't get any proactive effort to make the end-pages empty.

			regards, tom lane
> Good call; they are still at the default (10000 pages) and we are seeing
> 26000 pages per day being updated:
>
>     NOTICE:  Removed 102226 tuples in 26002 pages.
>             CPU 2.02s/1.87u sec elapsed 69.85 sec.
>     NOTICE:  Pages 201819: Changed 26128, Empty 0; Tup 315227: Vac 102226,
>             Keep 0, UnUsed 393793.
>             Total CPU 9.57s/3.07u sec elapsed 189.32 sec.
>
> I've increased the value to 40000 since we have much busier days, and I'll
> see what happens.

Hang on - so when vacuum runs it knows exactly how many pages it cleaned up, but if the fsm value is low, it doesn't help? Why don't we just automatically set the FSM value to the max that vacuum has to clean up? Why don't we introduce max_fsm_pages = auto or something?

Chris
> > I've increased the value to 40000 since we have much busier days, and
> > I'll see what happens.
>
> Hang on - so when vacuum runs it knows exactly how many pages it cleaned
> up, but if the fsm value is low, it doesn't help? Why don't we just
> automatically set the FSM value to the max that vacuum has to clean up?
> Why don't we introduce max_fsm_pages = auto or something?

I assume it uses shared memory, so I doubt it's very easy to increase on the fly -- without having to reduce something else, anyway.

-- 
Rod Taylor <rbt@rbt.ca>
PGP Key: http://www.rbt.ca/rbtpub.asc
"Christopher Kings-Lynne" <chriskl@familyhealth.com.au> writes: > Hang on - so when vacuum runs it knows exactly how many pages it cleaned up > but if the fsm value is low, it doesn't help? Why don't we just > automaticlaly set the FSM value to the max that vacuum has to clean > up? Because we can't resize shared memory on-the-fly. regards, tom lane