On Sat, Jul 31, 2010 at 1:00 AM, Gnanakumar <gnanam@zoniac.com> wrote:
> Hi Scott,
>
> Thanks for your valuable suggestion. Our production server is running
> PostgreSQL 8.2.3 on Red Hat 4.1.1-52.
Definitely look at updating to the latest 8.2 release; there have been a
lot of bug fixes since 8.2.3.
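
If it helps, you can confirm exactly what you're running with a quick
query from any client:

  SELECT version();

A minor 8.2.x update only swaps the binaries, so no dump/reload is
needed.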
>
>> Esp in 8.3 and before where blowing out the free space map is quite easy
>> to do and it can go unnoticed for some time.
>
> As you rightly pointed out, recently on our production server a warning
> has been showing up in VACUUM ANALYZE VERBOSE:
> WARNING: relation "public.oneofmytable" contains more than "max_fsm_pages"
> pages with useful free space
> HINT: Consider compacting this relation or increasing the configuration
> parameter "max_fsm_pages".
>
> Currently, there are 439 tables and 743 indexes, adding up to 1182
> relations. What values would you recommend for the "max_fsm_pages" and
> "max_fsm_relations" parameters?
Usually I set max_fsm_pages to 2x or more of whatever VACUUM VERBOSE says
it needs. As for max_fsm_relations, it only needs to be big enough
to hold all tables and indexes, so if you've got 1182, then 2000 or so
would be fine. I work with one db that has 50k or more tables and
indexes, and on that one we have it set to something like 500k so we
don't hit the limit.
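
For illustration only, a sketch of the relevant postgresql.conf lines
(the numbers are placeholders; size max_fsm_pages from your own VACUUM
VERBOSE totals, and note both settings need a server restart to take
effect):

  max_fsm_pages = 10000000      # roughly 2x what VACUUM VERBOSE reports needing
  max_fsm_relations = 2000      # comfortably above your 1182 tables + indexes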
>> Where I work we use about 2.5M entries but have our fsm set to 10M so we
>> don't have to worry about blowing it out overnight or anything.
> If you don't mind, could you clarify that for me: is the 2.5M entries in a
> single table, or is it something else?
That's max_fsm_pages set to 10M; the 2.5M entries are in use across the
whole database, not a single table.
>> Lastly, make sure your IO subsystem can keep up. If you're on the hairy
>> edge, then vacuum may never be able to keep up.
> How do I confirm/make sure that the IO subsystem can keep up?
Keep an eye on your system with tools like iostat, for instance:

  iostat -xd 10 /dev/sdb

if your db is on /dev/sdb. Watch the %util column; if it's always at
100% for hours on end, then your IO subsystem is likely maxed out.
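
If you're not sure which device the database lives on, running df on the
data directory will tell you (the path here is just a guess; use your
actual data directory):

  df /var/lib/pgsql/data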
--
To understand recursion, one must first understand recursion.