Thread: DB becomes enormous with continuous insert and update
Does anyone know how to solve the problem of a DB that grew from 700 MB to 16 GB in 10 days and keeps growing?
VACUUM doesn't solve the problem.
VACUUM FULL is too invasive; the only way to shrink the DB is a dump and restore, but we cannot stop the service...

I know that in a similar DB with a lot of updates the problem was solved, but I don't know how...
Does anyone know this problem?

Thanks :)
--
View this message in context: http://postgresql.1045698.n5.nabble.com/DB-become-enormous-with-continuos-insert-and-update-tp3237814p3237814.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.
On Tue, 2010-10-26 at 14:01 -0700, Hfe80 wrote:
> Does anyone know how to solve the problem of a DB that grew from 700 MB
> to 16 GB in 10 days and keeps growing?
> VACUUM doesn't solve the problem.
> VACUUM FULL is too invasive; the only way to shrink the DB is a dump and
> restore, but we cannot stop the service...
>
> I know that in a similar DB with a lot of updates the problem was solved,
> but I don't know how...
> Does anyone know this problem?

You are not vacuuming enough. If you vacuum more often you should be able to keep bloat down.

Joshua D. Drake

--
PostgreSQL.org Major Contributor
Command Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579
Consulting, Training, Support, Custom Development, Engineering
http://twitter.com/cmdpromptinc | http://identi.ca/commandprompt
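[If autovacuum cannot keep up, a simple stop-gap is to run a manual VACUUM on the busiest tables on a schedule. A minimal sketch follows; "orders" is a made-up table name used only for illustration, and VERBOSE just reports how many dead row versions were found and reclaimed:

    -- Reclaim dead row versions and refresh planner statistics for one table
    -- (old-style syntax, works on 8.x as well as newer releases)
    VACUUM VERBOSE ANALYZE orders;
]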
Hi,

On 26/10/10 23:01, Hfe80 wrote:
> Does anyone know how to solve the problem of a DB that grew from 700 MB
> to 16 GB in 10 days and keeps growing?

Which version of PostgreSQL are you using?

> Does anyone know this problem?

Yes. Welcome to the "Bloating" club. :)

Ciao,
Gabriele

--
Gabriele Bartolini - 2ndQuadrant Italia
PostgreSQL Training, Services and Support
gabriele.bartolini@2ndQuadrant.it | www.2ndQuadrant.it
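[For reference, a quick way to check which version the server is running, from any psql session (either statement works on all recent releases):

    SELECT version();
    -- or
    SHOW server_version;
]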
On 10/26/10 2:01 PM, Hfe80 wrote:
> Does anyone know how to solve the problem of a DB that grew from 700 MB
> to 16 GB in 10 days and keeps growing?
> VACUUM doesn't solve the problem.
> VACUUM FULL is too invasive; the only way to shrink the DB is a dump and
> restore, but we cannot stop the service...
>
> I know that in a similar DB with a lot of updates the problem was solved,
> but I don't know how...
> Does anyone know this problem?

If you have a lot of updates of small non-indexed fields, use a fillfactor of something like 60 or 70% so that HOT can do its job (assuming 8.3+).

Do be sure to have autovacuum running and cranked up sufficiently that it's keeping up with the workload.

Never do VACUUM FULL. Rather, use CLUSTER to rebuild heavily used tables in order of the most frequently used key (typically the PK). However, this takes an exclusive lock on the table for the duration, so it should only be done when your application is relatively inactive.
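[A sketch of the fillfactor and CLUSTER suggestions above. The table and index names ("orders", "orders_pkey") are made up for illustration; also note that a lowered fillfactor only affects newly written pages, so a rewrite such as CLUSTER is needed for it to apply to existing data:

    -- Leave ~30% free space in each heap page so HOT updates can stay on-page
    -- (ALTER TABLE ... SET (fillfactor) needs 8.2+, HOT itself needs 8.3+)
    ALTER TABLE orders SET (fillfactor = 70);

    -- Rebuild the table compactly, ordered by its primary-key index;
    -- takes an exclusive lock on "orders" for the duration.
    -- (8.4+ syntax; older releases use: CLUSTER orders_pkey ON orders;)
    CLUSTER orders USING orders_pkey;
]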
On 26 October 2010 22:01, Hfe80 <federico.mo@gmail.com> wrote:
> Does anyone know how to solve the problem of a DB that grew from 700 MB
> to 16 GB in 10 days and keeps growing?
> VACUUM doesn't solve the problem.
> VACUUM FULL is too invasive; the only way to shrink the DB is a dump and
> restore, but we cannot stop the service...
>
> I know that in a similar DB with a lot of updates the problem was solved,
> but I don't know how...
> Does anyone know this problem?

This article describes why you should almost never use VACUUM FULL:
http://it.toolbox.com/blogs/database-soup/getting-rid-of-vacuum-full-feedback-needed-33959

You need to vacuum more aggressively.

--
Regards,
Peter Geoghegan
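[One way to vacuum more aggressively, as a rough sketch. The values below are illustrative only, not tuned for this workload:

    # postgresql.conf -- make autovacuum trigger sooner and work harder
    autovacuum = on
    autovacuum_naptime = 15s                # check tables more often (default 1min)
    autovacuum_vacuum_scale_factor = 0.05   # vacuum after ~5% dead rows (default 0.2)
    autovacuum_vacuum_cost_delay = 10ms     # throttle vacuum less (default 20ms)

From 8.4 on, the same thresholds can also be set per table with ALTER TABLE ... SET (autovacuum_vacuum_scale_factor = ...), which is handy when only a few tables churn heavily.]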
We tried everything... vacuum works.
The problem is that updates need more space, because the data is not overwritten in place.
It seems that when there is a huge update, the DB grows to create the necessary space (700 MB every time), and then vacuum works, but only until the next update! And the DB grows 700 MB more, and 700 MB more...

It seems like a bug.

Sorry for my English, which isn't so good :(
--
View this message in context: http://postgresql.1045698.n5.nabble.com/DB-become-enormous-with-continuos-insert-and-update-tp3237814p3237903.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.
On 27 Oct 2010, at 24:22, Hfe80 wrote:
> We tried everything... vacuum works.
> The problem is that updates need more space, because the data is not
> overwritten in place.
> It seems that when there is a huge update, the DB grows to create the
> necessary space (700 MB every time), and then vacuum works, but only until
> the next update! And the DB grows 700 MB more, and 700 MB more...
>
> It seems like a bug.
>
> Sorry for my English, which isn't so good :(

Do you have any long-running transactions? Rows can't be reclaimed while there are transactions active in which those rows are still visible. So if you have long-running transactions, those could be "locking" the deleted rows' space and preventing vacuum from re-using that space for new rows.

That's one reason that could explain your database growth.

Alban Hertroys

--
If you can't see the forest for the trees,
cut the trees and you'll see there is no forest.
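[To look for long-running transactions, a query along these lines can help. A minimal sketch; the column names shown are those of PostgreSQL 9.2 and later (on 8.3-9.1 use procpid and current_query instead of pid and query, and xact_start itself requires 8.3+):

    -- List sessions with an open transaction, oldest first
    SELECT pid, usename, now() - xact_start AS xact_age, state, query
    FROM pg_stat_activity
    WHERE xact_start IS NOT NULL
    ORDER BY xact_start;
]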
Hi,

On 27/10/10 00:22, Hfe80 wrote:
> The problem is that updates need more space, because the data is not
> overwritten in place.

As I said earlier, we need to know which PostgreSQL version you are using. PostgreSQL 8.3 introduced Heap-Only Tuple (HOT) updates. Is it at least 8.3 (or 8.4, or 9.0)?

> It seems that when there is a huge update, the DB grows to create the
> necessary space (700 MB every time), and then vacuum works, but only until
> the next update! And the DB grows 700 MB more, and 700 MB more...

As John said earlier, you might want to look at the fillfactor attribute of a table and lower it (see http://www.postgresql.org/docs/current/static/sql-createtable.html - for 9.0), and make sure there is not a long transaction keeping those rows alive.

Another good place to look for more information is the PostgreSQL catalog (the pg_stat_user_tables view); see the sketch after this message.

> It seems like a bug

It is definitely not a bug, but expected behaviour deriving from the MVCC architecture: an UPDATE writes a new row version and leaves the old one to be reclaimed by vacuum later.

Ciao,
Gabriele

--
Gabriele Bartolini - 2ndQuadrant Italia
PostgreSQL Training, Services and Support
gabriele.bartolini@2ndQuadrant.it | www.2ndQuadrant.it
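[For instance, something like the following shows which tables are accumulating dead row versions and when they were last vacuumed. A sketch only; the n_live_tup/n_dead_tup columns exist from 8.3 on:

    -- Tables with the most dead (not yet reclaimed) row versions
    SELECT relname, n_live_tup, n_dead_tup, last_vacuum, last_autovacuum
    FROM pg_stat_user_tables
    ORDER BY n_dead_tup DESC
    LIMIT 10;
]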
On Tue, Oct 26, 2010 at 5:55 PM, John R Pierce <pierce@hogranch.com> wrote:
> Never do VACUUM FULL. Rather, use CLUSTER to rebuild heavily used tables
> in order of the most frequently used key (typically the PK). However, this
> takes an exclusive lock on the table for the duration, so it should only be
> done when your application is relatively inactive.

Another trick I like to use is to alter a table column from type integer to type integer (i.e. to the same type it already has). This causes pg to rewrite the table without the sorting that CLUSTER implies, gives basically the same end result, and is MVCC-safe to boot.
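[Something like this, for a hypothetical table "orders" with an integer column "id". Note that it rewrites the whole table under an exclusive lock, and that newer releases may skip the rewrite when the type does not actually change, so this is mainly a trick for the 8.x/9.0 era this thread refers to:

    -- Force a full table rewrite, compacting out dead space
    ALTER TABLE orders ALTER COLUMN id TYPE integer;
]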
Thanks to all. I couldn't respond yesterday, but we had already tried all your suggestions in the past weeks, and nothing really changed.

Now we have finally resolved the problem!!!

It was a bug of version 8.1. We solved all our problems simply by upgrading to 8.4!!!

It's incredible: the DB doesn't change size even after 1 million inserts and updates. The average CPU load has fallen drastically! Before, the 16-core server had an incredible average load of 2 to 16; now it has nothing to do :) the average load is 0.1 :)

NB: Thank you to our friends at http://www.nikos.it who gave us the right suggestion.
--
View this message in context: http://postgresql.1045698.n5.nabble.com/DB-become-enormous-with-continuos-insert-and-update-tp3237814p3240118.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.
Hello,

> It was a bug of version 8.1.
> We solved all our problems simply by upgrading to 8.4!!!

Well... it was not a bug, but rather a feature (HOT updates) that had not yet been implemented in 8.1. My first question in this thread, about which version of PostgreSQL you were using, was exactly for that purpose, but you never replied to it. Next time, please post the version you are using and, if possible, the operating system.

> It's incredible: the DB doesn't change size even after 1 million inserts
> and updates. The average CPU load has fallen drastically!

Glad you fixed it.

Ciao,
Gabriele

--
Gabriele Bartolini - 2ndQuadrant Italia
PostgreSQL Training, Services and Support
Gabriele.Bartolini@2ndQuadrant.it - www.2ndQuadrant.it
http://www.2ndquadrant.com/books/