Thread: Poor Performance on a table

Poor Performance on a table

From: Pallav Kalva
Hi,

    I have a table in my production database with about 500k rows. In
pg_class, "relpages" for this table is around 750k, while the same table
copied to a test database shows "relpages" of only about 35k. I run
vacuumdb on the whole database (not on the table individually, but the
whole database) daily. I think this is why most of the queries that used
to run much faster are now slowing down.
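
    For reference, the page counts came from a query along these lines
('mytable' here is just a placeholder for the real table name):

    SELECT relname, relpages, reltuples
      FROM pg_class
     WHERE relname = 'mytable';
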
  Is there any way to fix this problem?

Thanks!
Pallav


Re: Poor Performance on a table

From: Frank Wiles
On Thu, 02 Dec 2004 14:11:46 -0500
Pallav Kalva <pkalva@deg.cc> wrote:

> Hi,
>
>     I have a table in my production database with about 500k rows. In
> pg_class, "relpages" for this table is around 750k, while the same table
> copied to a test database shows "relpages" of only about 35k. I run
> vacuumdb on the whole database (not on the table individually, but the
> whole database) daily. I think this is why most of the queries that used
> to run much faster are now slowing down.
>   Is there any way to fix this problem?

  Try a VACUUM FULL; this will clean up unused space.  You might also
  want to adjust your free space map so that you don't have to do FULL
  vacuums as often (or at all).  It is controlled by max_fsm_pages
  and max_fsm_relations.
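
  For example, something along these lines ('mytable' stands in for the
  bloated table; keep in mind that VACUUM FULL holds an exclusive lock
  on the table while it runs):

    -- rewrite the table to reclaim the dead space
    VACUUM FULL VERBOSE mytable;

    -- relpages should then drop back toward the figure seen on the test copy
    SELECT relpages, reltuples FROM pg_class WHERE relname = 'mytable';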

 ---------------------------------
   Frank Wiles <frank@wiles.org>
   http://www.wiles.org
 ---------------------------------


Re: Poor Performance on a table

From: Pallav Kalva
Hi Frank,

    Thanks for the quick reply! Here are my current default FSM settings:
max_fsm_pages = 20000 and max_fsm_relations = 1000.
    What are appropriate settings for these parameters? Are there any
guidelines? The Postgres docs don't give much information on setting
these values.

Thanks!
Pallav

Frank Wiles wrote:

>  Try a VACUUM FULL; this will clean up unused space.  You might also
>  want to adjust your free space map so that you don't have to do FULL
>  vacuums as often (or at all).  It is controlled by max_fsm_pages
>  and max_fsm_relations.
>



Re: Poor Performance on a table

From: Tom Lane
Pallav Kalva <pkalva@deg.cc> writes:
>     I have a table in my production database with about 500k rows. In
> pg_class, "relpages" for this table is around 750k, while the same table
> copied to a test database shows "relpages" of only about 35k. I run
> vacuumdb on the whole database (not on the table individually, but the
> whole database) daily.

You're obviously suffering serious table bloat :-(.  Depending on how
heavy the update traffic on that table is, it might be that once-a-day
vacuum is simply not often enough.  Another likely problem is that you
need to increase the FSM settings (how big is your whole database?)

>   Is there any way to fix this problem?

VACUUM FULL will fix the immediate problem.  You might well find CLUSTER
to be a faster alternative, though.
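
For instance (the index and table names here are only placeholders; like
VACUUM FULL, CLUSTER takes an exclusive lock on the table while it
rewrites it):

    -- rewrite the table in the order of one of its indexes;
    -- this rebuilds the heap and all of its indexes compactly
    CLUSTER mytable_pkey ON mytable;
    ANALYZE mytable;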

            regards, tom lane

Re: Poor Performance on a table

From: Frank Wiles
On Thu, 02 Dec 2004 14:32:53 -0500
Pallav Kalva <pkalva@deg.cc> wrote:

> Hi Frank,
>
>     Thanks for the quick reply! Here are my current default FSM settings:
> max_fsm_pages = 20000 and max_fsm_relations = 1000.
>     What are appropriate settings for these parameters? Are there any
> guidelines? The Postgres docs don't give much information on setting
> these values.

  There really aren't any guidelines on these because it really depends
  on your data and how you use the database.  If you insert/update 99%
  of the time and only delete 1% of the time, the defaults are probably
  perfect for you, probably up to an 80% insert/update, 20% delete
  ratio.

  If, however, you're constantly deleting entries from your database, I
  would suggest slowly raising those values in step with each other
  over the course of a few weeks and seeing where you're at.  It is
  really a matter of trial and error.
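
  For example, a first step up from the defaults might look like this in
  postgresql.conf (illustrative numbers only, not a recommendation; both
  settings take effect only after a server restart):

    max_fsm_pages     = 40000    # doubled from the default of 20000
    max_fsm_relations = 2000     # doubled from the default of 1000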

  With my databases, I can afford to do VACUUM FULLs fairly often
  so I typically don't need to increase my fsm values.

 ---------------------------------
   Frank Wiles <frank@wiles.org>
   http://www.wiles.org
 ---------------------------------


Re: Poor Performance on a table

From: Pallav Kalva
Tom Lane wrote:

>You're obviously suffering serious table bloat :-(.  Depending on how
>heavy the update traffic on that table is, it might be that once-a-day
>vacuum is simply not often enough.  Another likely problem is that you
>need to increase the FSM settings (how big is your whole database?)
>
 Yes, you are right, this table is heavily updated. The whole database
is about 1.5 GB in size. Right now I have the default FSM settings; how
much should I increase max_fsm_pages and max_fsm_relations to?

>VACUUM FULL will fix the immediate problem.  You might well find CLUSTER
>to be a faster alternative, though.

I am hesitant to do a VACUUM FULL on this table because it is one of the
crucial tables in our application, and we can't afford to hold an
exclusive lock on it for a long time.  We can afford to go without
writes and updates, but we need at least reads on this table.
How does CLUSTER benefit me? Excuse me, I am new to this feature.




Re: Poor Performance on a table

From: Tom Lane
Pallav Kalva <pkalva@deg.cc> writes:
> Tom Lane wrote:
>> Another likely problem is that you
>> need to increase the FSM settings (how big is your whole database?)
>  Yes, you are right, this table is heavily updated. The whole database
> is about 1.5 GB in size. Right now I have the default FSM settings; how
> much should I increase max_fsm_pages and max_fsm_relations to?

A lot --- factor of 10 at least.  Try "vacuum verbose" and look at the
last couple lines of output.
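
For instance:

    -- run it database-wide; the summary at the very end reports how many
    -- pages the free space map needs to track, which you can compare
    -- against the current max_fsm_pages setting
    VACUUM VERBOSE;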

>> VACUUM FULL will fix the immediate problem.  You might well find CLUSTER
>> to be a faster alternative, though.

> How does CLUSTER benefit me?

It'll probably take less time to rebuild the table.  VACUUM FULL is
really optimized for the case of moving a relatively small fraction
of the table around, but it sounds like you need a complete rebuild.

            regards, tom lane

Re: Poor Performance on a table

From: Josh Berkus
Pallav,

>  Yes, you are right, this table is heavily updated. The whole database
> is about 1.5 GB in size. Right now I have the default FSM settings; how
> much should I increase max_fsm_pages and max_fsm_relations to?

1) fix the table (see below)
2) run the system for another day
3) run VACUUM FULL ANALYZE VERBOSE
4) if you're running 7.4 or better, at the end you'll see a total of FSM pages
needed.   If you're running something earlier, you'll need to get out a
calculator and do the math yourself.

Of course, if you're getting heavy update/delete activity, vacuuming more
often might be wise.  Post the output of the above command if you have
questions.

> I am hesitant to do a VACUUM FULL on this table because it is one of the
> crucial tables in our application, and we can't afford to hold an
> exclusive lock on it for a long time.  We can afford to go without
> writes and updates, but we need at least reads on this table.

You're going to have to do at least one, or the table will just keep
getting worse.  Schedule it for 3 AM.  Once you've set the FSM correctly
and are vacuuming with the right frequency, the need to run VACUUM FULL
will go away.

Oh, and it's likely that any indexes on the table need to be REINDEXed.
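
Something along these lines ('mytable' is a placeholder; REINDEX blocks
writes to the table while it runs, so schedule it together with the
VACUUM FULL):

    -- rebuild all of the table's indexes in one pass
    REINDEX TABLE mytable;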


--
Josh Berkus
Aglio Database Solutions
San Francisco