Mischa,
Thanks. Yes, I understand that not having a large
enough max_fsm_pages is a problem and I think that it
is most likely the case for the client. What I wasn't
sure of was whether the index bloat we're seeing is the
result of the "bleeding" you're talking about or
something else.
If I deleted 75% of the rows but had a max_fsm_pages
setting that still exceeded the pages required (as
indicated in the VACUUM output), would that solve my
indexing problem, or would I still need to REINDEX
after such a purge?
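For what it's worth, here's a rough sanity-check sketch I've been
using to compare the two numbers from the FSM summary lines you
quoted below (it assumes the 8.x-era VACUUM VERBOSE wording; the
sample text and the function name are just mine, not anything
PostgreSQL ships):

```python
import re

# Tail of VACUUM VERBOSE output, in the format quoted below.
vacuum_tail = """INFO:  free space map: 55 relations, 88416 pages stored; 89184 total pages needed
DETAIL:  Allocated FSM size: 1000 relations + 1000000 pages = 5920 kB shared memory."""

def fsm_headroom(text):
    """Parse (total pages needed, max_fsm_pages) from the FSM summary."""
    needed = int(re.search(r"(\d+) total pages needed", text).group(1))
    allocated = int(re.search(r"\+ (\d+) pages", text).group(1))
    return needed, allocated

needed, allocated = fsm_headroom(vacuum_tail)
if needed > allocated:
    print(f"FSM too small: need {needed}, have {allocated} -- bleeding")
else:
    print(f"FSM ok: need {needed} of {allocated} pages")
```

With the numbers quoted below it reports the FSM as big enough
(89184 needed vs. 1000000 allocated), which is the case I'm trying
to reason about.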
regards,
Bill
--- Mischa Sandberg <mischa.sandberg@telus.net> wrote:
> Quoting Bill Chandler <billybobc1210@yahoo.com>:
>
> > ... The normal activity is to delete 3-5% of the
> rows per day,
> > followed by a VACUUM ANALYZE.
> ...
> > However, on occasion, deleting 75% of rows is a
> > legitimate action for the client to take.
>
> > > In case nobody else has asked: is your
> max_fsm_pages
> > > big enough to handle all the deleted pages,
> > > across ALL tables hit by the purge?
>
> > This parameter is most likely set incorrectly. So
> > that could be causing problems. Could that be a
> > culprit for the index bloat, though?
>
> Look at the last few lines of vacuum verbose output.
> It will say something like:
>
> free space map: 55 relations, 88416 pages stored;
> 89184 total pages needed
> Allocated FSM size: 1000 relations + 1000000 pages
> = 5920 kB shared memory.
>
> "1000000" here is [max_fsm_pages] from my
> postgresql.conf.
> If the "total pages needed" is bigger than the pages
> fsm is allocated for, then you are bleeding.
> --
> "Dreams come true, not free." -- S.Sondheim, ITW