Re: Postgres performance slowly gets worse over a month - Mailing list pgsql-admin

From: Joshua Daniel Franklin
Subject: Re: Postgres performance slowly gets worse over a month
Msg-id: 20020726144537.92735.qmail@web20002.mail.yahoo.com
In response to: Re: Postgres performance slowly gets worse over a month ("Michael G. Martin" <michael@vpmonline.com>)
Responses: Re: Postgres performance slowly gets worse over a month (Tom Lane <tgl@sss.pgh.pa.us>)
List: pgsql-admin

Perhaps I wasn't clear. There really aren't any (daily) "no-longer needed
tuples", just added ones. I am under the impression that vacuum is just for
freeing up tuples to be re-used, so the only time it needs to be run is after
the 6-monthly tuple massacre, at which time I would also need to set
max_fsm_pages to a huge number.
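
To make that concrete, the twice-a-year cleanup I have in mind would be roughly
this (the table/column names and the max_fsm_pages figure are only placeholders,
not values I've tested):

    -- postgresql.conf, bumped before the big delete (a guess; needs tuning):
    --   max_fsm_pages = 1000000

    -- Lop off the oldest 6 months:
    DELETE FROM daily_records WHERE record_date < now() - interval '6 months';

    -- Reclaim the space; note VACUUM FULL takes an exclusive lock on the table:
    VACUUM FULL ANALYZE daily_records;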

--- "Michael G. Martin" <michael@vpmonline.com> wrote:
> I believe the more frequently you vacuum, the faster it will go, so that
> may be the driving factor in deciding.  Personally, each day, I'd add
> the new tuples then remove the no-longer needed tuples, make sure
> max_fsm_pages is large enough to handle all the pages removed in the
> largest table, then run a vacuum analyze on the table or entire
> database.  Run it each night and it will be nice and fast and you
> shouldn't ever need to worry about locking the entire table with a
> vacuum full or spend time re-creating the table and indices.
>
> That's what I do, which I think is the most automated, maintenance-free
> solution.  I currently run a lazy vacuum analyze each night after making
> my large changes.  My tables don't change enough during the day to
> require mid-day vacuums.
>
> --Michael
>
> Joshua Daniel Franklin wrote:
> >
> >>In addition, max_fsm_pages has an impact on how many pages will be
> >>available to be marked as re-usable.  If you have a huge table and
> >>changes are impacting more than the default 10,000 pages this is set to,
> >>you will want to bump this number up.  My problem was I saw my UnUsed
> >>tuples always growing and not being re-used until I bumped this value
> >up.  As I watched the vacuum verbose output each run, I noticed more
> >>than 10k pages were in fact changing between vacuums.
> >>
> >This has made me think about something we've been doing. We've got one
> >db that is used basically read-only; every day ~15000 records are added,
> >but very rarely are any deleted. What we've been doing is just letting it
> >sit until it gets close to too big for the filesystem, then lopping off
> >the earliest 6 months' worth of records. The question is, is it best
> >to do this then set the max_fsm_pages to a huge number and vacuum full?
> >Or should I change it so scripts remove the oldest day and vacuum before
> >adding the next day's?
> >
> >Or just rebuild the db every time. :)
> >
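
For the nightly route Michael describes above, I suppose the maintenance would
boil down to something like this run from cron after the daily load finishes
(database and table names are placeholders):

    -- Plain (lazy) vacuum each night, e.g. via: psql -d mydb -c '...'
    VACUUM ANALYZE daily_records;

    -- While sizing max_fsm_pages, VACUUM VERBOSE shows the page/tuple counts
    -- Michael mentions watching between runs:
    VACUUM VERBOSE ANALYZE daily_records;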


