Re: [GENERAL] Upgrade to dual processor machine? - Mailing list pgsql-performance
From | Cedric Dufour (Cogito Ergo Soft) |
Subject | Re: [GENERAL] Upgrade to dual processor machine? |
Msg-id | NDBBIFNBODNADCAOFDOAOEEMCEAA.cedric.dufour@cogito-ergo-soft.com |
In response to | Re: [GENERAL] Upgrade to dual processor machine? ("Shridhar Daithankar" <shridhar_daithankar@persistent.co.in>) |
Responses | Re: [GENERAL] Upgrade to dual processor machine? |
List | pgsql-performance |
Concerning the VACUUM issue: in order to test my DB performance, I made a script that populates it with test data (about a million rows to start with). Each INSERT into one of the tables triggers an UPDATE in 3 related tables, which means the updated row size is about 50 bytes. I found out that it was *essential* to VACUUM the updated tables every 500 INSERTs or so to keep the performance from *heavily* dropping. That's about every 73 kB updated or so.

Now, I guess this memory "limit" depends on PG's configuration and the OS characteristics. Is there any setting in the conf file related to this VACUUM and dead-tuples issue? Could the "free-space map" settings be related (I never understood what these settings were)?

BTW, thanks to all of you participating in this thread. Nice to have such a complete overview of PG's performance tuning and related OS issues.

Cedric D.

> -----Original Message-----
> From: pgsql-general-owner@postgresql.org
> [mailto:pgsql-general-owner@postgresql.org]On Behalf Of Shridhar Daithankar
> Sent: Friday, November 15, 2002 08:10
> To: pgsql-general@postgresql.org; pgsql-performance@postgresql.org
> Subject: Re: [PERFORM] [GENERAL] Upgrade to dual processor machine?
>
> On 14 Nov 2002 at 21:36, Henrik Steffen wrote:
>
> > do you seriously think that I should vacuum frequently updated/inserted
> > tables every 120 seconds ?
>
> It's not about 120 seconds. It's about how many new and dead tuples
> your server is generating.
>
> Here is a quick summary:
>
> insert: new tuple: vacuum analyse updates the statistics.
> update: causes a dead tuple: vacuum analyse marks the dead tuple for
> reuse, saving buffer space.
> delete: causes a dead, unusable tuple: vacuum full is required to
> reclaim the space on the disk.
>
> Vacuum analyse is non-blocking and vacuum full is blocking.
>
> If you are generating 10 dead pages, i.e. 80K of data, in a matter of
> minutes, vacuum is warranted for optimal performance.
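(A quick sanity check of the "every 73 kB" figure above, assuming 500 INSERTs each touching 3 related tables at roughly 50 bytes of updated row data per table:)

```shell
# Back-of-the-envelope estimate of dead-tuple volume between VACUUMs.
# The 500 / 3 / 50-byte figures come from the test setup described above.
dead_bytes=$((500 * 3 * 50))      # 75000 bytes of dead tuple data
dead_kb=$((dead_bytes / 1024))    # integer division: 73 kB
echo "${dead_kb} kB of dead tuples per 500 INSERTs"
```

So vacuuming every 500 INSERTs does indeed correspond to roughly 73 kB of dead tuple data.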
> > I have many UPDATEs and INSERTs on my log-statistics. For each http-request
> > there will be an INSERT into the logfile. And if certain customer pages
> > are downloaded there will even be an UPDATE in a customer-statistics table
> > causing a hits column to be set to hits+1... I didn't think this was a
> > dramatic change so far.
>
> OK.. Schedule a cron job that would vacuum analyse every 5/10 minutes, and
> see if that gives you an overall increase in throughput.
>
> > Still sure to run VACUUM ANALYZE on these tables so often?
>
> IMO you should.
>
> Also have a look at http://gborg.postgresql.org/project/pgavd/projdisplay.php.
> I have written it but I don't know of anybody using it. If you use it, I can
> help you with any bugfixes required. I haven't done too much testing on it.
> It vacuums things based on traffic rather than time, so your database
> performance should ideally be maintained automatically. Let me know if you
> need anything on this. And use the CVS version, please.
>
> Bye
>  Shridhar
>
> --
> love, n.:  When, if asked to choose between your lover and happiness,
> you'd skip happiness in a heartbeat.

---------------------------(end of broadcast)---------------------------
TIP 4: Don't 'kill -9' the postmaster
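(For the archive: the cron job Shridhar suggests could look like the sketch below, using the stock vacuumdb utility that ships with PostgreSQL. The database and table names are placeholders, not from this thread.)

```shell
# Hypothetical crontab entry: plain VACUUM ANALYZE on the hot tables
# every 5 minutes. Adjust names and interval to your own setup.
*/5 * * * *  vacuumdb --analyze --table logfile --table customer_stats mydb
```

Because vacuumdb runs a plain (non-blocking) VACUUM ANALYZE, this can run while the site keeps serving requests; only a VACUUM FULL would take blocking locks.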