Re: best practice to avoid table bloat? - Mailing list pgsql-performance

From Anibal David Acosta
Subject Re: best practice to avoid table bloat?
Date
Msg-id 00c201cd7bf3$93bc4130$bb34c390$@devshock.com
In response to Re: best practice to avoid table bloat?  ("Kevin Grittner" <Kevin.Grittner@wicourts.gov>)
Responses Re: best practice to avoid table bloat?  ("Kevin Grittner" <Kevin.Grittner@wicourts.gov>)
List pgsql-performance
Thanks Kevin.
Postgres version is 9.1.4 (latest).

Every day the table receives about 7 million new rows.
The table holds data for 60 days, so the total should be around 420 million
rows.
Every night a delete process runs and removes rows older than 60 days.

So the space used by Postgres should not increase drastically, because the
7 million rows that arrive each day are matched by roughly the same number
deleted, yet my disk runs out of space every 4 months.
I have to copy the table off the server, drop the local table and create it
again; after that process I have enough space for about another 4 months.
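
For reference, one way to check whether dead rows are piling up faster than
autovacuum reclaims them is a query like the following (a minimal sketch;
"my_big_table" is a placeholder for the real table name):

    -- Dead vs. live rows, and the last time (auto)vacuum touched the table
    SELECT relname, n_live_tup, n_dead_tup, last_vacuum, last_autovacuum
    FROM pg_stat_user_tables
    WHERE relname = 'my_big_table';

If n_dead_tup stays high and last_autovacuum is old, autovacuum is not
keeping up with the nightly deletes.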

Maybe it is a wrong autovacuum config, but it is really complicated to
figure out which values are correct to avoid a performance penalty while
keeping the table in good shape.

I think the autovacuum configuration should have something like an
"auto-config" mode that recalculates every day which settings are best for
the server's current conditions.
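
As an illustration only (not a tuned recommendation for this workload),
autovacuum can also be adjusted per table with storage parameters, e.g. on
the placeholder table "my_big_table":

    -- Make autovacuum trigger earlier and throttle it less on this one table
    -- (example values only; the right numbers depend on the workload)
    ALTER TABLE my_big_table SET (
        autovacuum_vacuum_scale_factor = 0.01,  -- vacuum at ~1% dead rows instead of the 20% default
        autovacuum_vacuum_cost_delay   = 10     -- default is 20ms; lower means less throttling
    );

That at least allows one large table to be tuned without changing the
global settings.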

Thanks!


-----Original Message-----
From: Kevin Grittner [mailto:Kevin.Grittner@wicourts.gov]
Sent: Thursday, August 16, 2012 4:52 PM
To: Anibal David Acosta; pgsql-performance@postgresql.org
Subject: Re: [PERFORM] best practice to avoid table bloat?

"Anibal David Acosta" <aa@devshock.com> wrote:

> If I have a table from which about 8 million rows are deleted every
> night (the table has maybe 9 million), is it recommended to do a vacuum
> analyze after the delete completes, or can I leave this job to autovacuum?

Deleting a high percentage of the rows should cause autovacuum to deal with
the table the next time it wakes up, so an explicit VACUUM ANALYZE shouldn't
be needed.
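
If you did want to run it explicitly right after the nightly purge, it
would look something like this (the table and column names here are
placeholders):

    -- Nightly retention purge followed by an explicit cleanup
    DELETE FROM my_big_table
    WHERE created_at < now() - interval '60 days';

    -- Makes the dead-row space reusable and refreshes planner statistics;
    -- note that plain VACUUM does not normally shrink the file on disk.
    VACUUM ANALYZE my_big_table;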

> For some reason, for the same amount of data, every day Postgres consumes
> a little more space.

How are you measuring the data and how are you measuring the space?
And what version of PostgreSQL is this?
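
For example, table and database sizes can be checked from within Postgres
(again using a placeholder table name):

    -- On-disk size of the table including its indexes and TOAST data
    SELECT pg_size_pretty(pg_total_relation_size('my_big_table'));

    -- Size of the whole database
    SELECT pg_size_pretty(pg_database_size(current_database()));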

-Kevin


