Re: Bad performance for a 3000 rows table updated - Mailing list pgsql-novice

From Frédéric Jolliton <fred-pg@jolliton.com>
Subject Re: Bad performance for a 3000 rows table updated
Date
Msg-id 868yuofwvi.fsf@mau.localdomain
In response to Re: Bad performance for a 3000 rows table updated permanently  (Manfred Koizar <mkoi-pg@aon.at>)
Responses 7.2 search/replace pl/pgsql  (Nabil Sayegh <postgresql@e-trolley.de>)
Re: Bad performance for a 3000 rows table updated  (Manfred Koizar <mkoi-pg@aon.at>)
List pgsql-novice
>On Sat, 05 Apr 2003 16:39:42 +0200, fred-pg@jolliton.com wrote:
>>I have a table with 3000 rows (this number is almost constant, and
>>never decreases)
>
>>A stored procedure (PL/pgSQL) is called an average of 14 times per
>>second and, 99% of the time, this results in one SELECT followed by
>>an UPDATE on table "data".
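
(For context, the function is basically of this shape. This is only a
simplified sketch: the real code does a bit more, and the column names
id and value are just placeholders.)

CREATE OR REPLACE FUNCTION update_data(integer, integer) RETURNS integer AS '
DECLARE
    old_value integer;
BEGIN
    -- read the current value for this row
    SELECT INTO old_value value FROM data WHERE id = $1;
    -- write the new value back; rows are never inserted or deleted
    UPDATE data SET value = $2 WHERE id = $1;
    RETURN old_value;
END;
' LANGUAGE 'plpgsql';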

Manfred Koizar <mkoi-pg@aon.at> writes:
> So there are almost 900 updates per minute.
>
> Do a VACUUM FULL once and then a VACUUM every minute.

Well, doing a VACUUM every minute is really fast (<1 s), and the test
query now takes about 4 ms! I think I misunderstood how VACUUM works.

So, this is a great improvement!
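
For the record, here is what I ended up doing (a minimal version; I
schedule the second command from cron, once per minute):

VACUUM FULL data;  -- once, to reclaim the space already wasted
VACUUM data;       -- then every minute, to keep the table from growing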

> From time to time do ANALYSE or VACUUM ANALYSE.  MAX_FSM_RELATIONS
> should be no problem, but make sure that MAX_FSM_PAGES is not too
> low.

I don't know exactly how to pick a good value for MAX_FSM_PAGES. I'm
not familiar with pages and how they are used, so I'm rereading the
Administrator's Guide. I've noticed the following:

SELECT relname,relpages
  FROM pg_class
 WHERE relkind = 'r'
   AND relname = 'data';

gives 156 pages for the main table when doing a VACUUM every minute,
and 52 right after a VACUUM FULL (with an initial value of 10 when
benchmarking from a clean database).
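
If I understand the documentation correctly, MAX_FSM_PAGES has to cover
the pages of all tables that can hold free space at the same time, so I
suppose something like this gives an upper bound to start from:

SELECT sum(relpages) AS total_pages
  FROM pg_class
 WHERE relkind = 'r';

(Only an upper bound, of course, since not every page has free space at
any given time; but then max_fsm_pages in postgresql.conf just has to be
comfortably above the real number.)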

Thanks for your tips, they helped me a lot.

--
Frédéric Jolliton

