Thread: Re: [GENERAL] Upgrade to dual processor machine?

Re: [GENERAL] Upgrade to dual processor machine?

From
"Shridhar Daithankar"
Date:
On 14 Nov 2002 at 11:03, Henrik Steffen wrote:
> vmstat 1 5:
>    procs                      memory    swap          io     system         cpu
>  r  b  w   swpd   free   buff  cache  si  so    bi    bo   in    cs  us  sy  id
>  1  8  1     60   4964   5888 309684   0   0   176    74   16    32  25   9  66
>  0  6  3     60   4964   5932 308772   0   0  6264   256  347   347  13   9  78
>  0  5  1     60   4964   5900 309364   0   0  9312   224  380   309  11   6  83
>  1  4  1     60   5272   5940 309152   0   0 10320   116  397   429  17   6  77
>  1  4  1     60   4964   5896 309512   0   0 11020   152  451   456  14  10  76
> w:
> 12:04pm  up 2 days, 17:44,  1 user,  load average: 10.28, 7.22, 3.88
> USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU  WHAT
> root     pts/0    condor.city-map. 11:46am  0.00s  0.09s  0.01s  w
> this is when things begin to go more slowly....

Two things are immediately noticeable: the load average and the block-ins.

Either your disk write bandwidth is saturated or the CPU is overloaded, which I believe is
the case. Have you ever seen write performance faster than, say, 12K blocks? Disk
bandwidth may be the bottleneck here. Are they IDE disks?

Besides, almost all your transactions are inserts/updates, and you have about 11K blocks
per second to write. I suggest you VACUUM ANALYZE the most heavily used tables once a
minute or so. Decide the best frequency by trial and error; a good starting point is
double the time the vacuum itself takes, i.e. if VACUUM ANALYZE takes 60 sec. to finish,
leave a gap of 120 sec. between two runs of vacuum.

You only need to vacuum those tables which are heavily updated. This will also make
the vacuum itself faster.
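The rule of thumb above (gap = twice the vacuum's own duration) could be scripted roughly like this; a sketch only, where the table name, database name, and the 10-second floor are placeholders I chose, not details from the thread:

```shell
#!/bin/sh
# Repeatedly VACUUM ANALYZE one hot table, leaving a gap of twice
# the time the vacuum itself took.  TABLE and DB are hypothetical.
TABLE=${TABLE:-logfile}
DB=${DB:-mydb}

gap_for() {
    # Gap between runs = double the vacuum duration (floor of 10s).
    d=$((2 * $1))
    [ "$d" -lt 10 ] && d=10
    echo "$d"
}

run_loop() {
    while :; do
        start=$(date +%s)
        psql -c "VACUUM ANALYZE $TABLE;" "$DB" || break
        took=$(( $(date +%s) - start ))
        sleep "$(gap_for "$took")"
    done
}

# Uncomment to actually run against a live database:
# run_loop
```

Run it from cron or a shell session; if the vacuum takes 60 seconds, the next run starts 120 seconds after it finishes.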

HTH
Bye
 Shridhar

--
Nouvelle cuisine, n.: French for "not enough food". Continental breakfast, n.:
English for "not enough food". Tapas, n.: Spanish for "not enough food". Dim Sum,
n.: Chinese for more food than you've ever seen in your entire life.


Re: [GENERAL] Upgrade to dual processor machine?

From
"Henrik Steffen"
Date:
Hi Shridhar,

do you seriously think that I should vacuum frequently updated/inserted
tables every 120 seconds?

This is what it says in the manual and what I have been doing until today:

"You should run VACUUM periodically to clean out expired rows. For tables that are heavily modified, it is useful to
run VACUUM every night in an automated manner. For tables with few modifications, VACUUM should be run less
frequently. The command exclusively locks the table while processing."

And:

"You should run VACUUM ANALYZE when a table is initially loaded and when a table's data changes dramatically. "


I have many UPDATEs and INSERTs on my log statistics. For each HTTP request
there will be an INSERT into the logfile, and if certain customer pages
are downloaded there will even be an UPDATE in a customer-statistics table,
setting a hits column to hits+1... I didn't consider this a
dramatic change so far.
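Concretely, the two statements would look something like this (table and column names here are illustrative guesses, not the actual schema):

```sql
-- One INSERT per HTTP request (hypothetical log table):
INSERT INTO logfile (ts, url, client_ip)
VALUES (now(), '/some/page', '10.0.0.1');

-- For certain customer pages, bump a hit counter; in PostgreSQL
-- every such UPDATE leaves a dead row version behind:
UPDATE customer_stats SET hits = hits + 1 WHERE customer_id = 42;
```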

Are you still sure I should run VACUUM ANALYZE on these tables so often?

VACUUM ANALYZE takes about 30 seconds on one of these tables and is run once
every night automatically.


> Besides almost transactions are insert/update.. And if you have 11K blocks per
> second to write.. I suggest you vacuum analyse most used table one in a minute
> or so. Decide the best frequency by trial and error. A good start is double the
> time it takes for vacuum. i.e. if vacuum analyse takes 60 sec to finish, leave
> a gap of 120 sec. between two runs of vacuum.

--

Mit freundlichem Gruß

Henrik Steffen
Geschäftsführer

top concepts Internetmarketing GmbH
Am Steinkamp 7 - D-21684 Stade - Germany
--------------------------------------------------------
http://www.topconcepts.com          Tel. +49 4141 991230
mail: steffen@topconcepts.com       Fax. +49 4141 991233
--------------------------------------------------------
24h-Support Hotline:  +49 1908 34697 (EUR 1.86/Min,topc)
--------------------------------------------------------
Ihr SMS-Gateway: JETZT NEU unter: http://sms.city-map.de
System-Partner gesucht: http://www.franchise.city-map.de
--------------------------------------------------------
Handelsregister: AG Stade HRB 5811 - UstId: DE 213645563
--------------------------------------------------------


Re: [GENERAL] Upgrade to dual processor machine?

From
Doug McNaught
Date:
"Henrik Steffen" <steffen@city-map.de> writes:

> This is what it says in the manual and what I have been doing until today:
>
> "You should run VACUUM periodically to clean out expired rows. For tables that are heavily modified, it is useful to
> run VACUUM every night in an automated manner. For tables with few modifications, VACUUM should be run less
> frequently. The command exclusively locks the table while processing."

The "exclusive lock" part is no longer true as of 7.2.X--it is now
much cheaper to run VACUUM.  What version were you running again?

-Doug

Re: [GENERAL] Upgrade to dual processor machine?

From
"Josh Berkus"
Date:
Henrik,

> do you seriously think that I should vacuum frequently
> updated/inserted
> tables every 120 seconds ?
>
> This is what it says in the manual and what I have been doing until
> today:
>
> "You should run VACUUM periodically to clean out expired rows. For
> tables that are heavily modified, it is useful to run VACUUM
> every night in an automated manner. For tables with few
> modifications, VACUUM should be run less frequently. The command
> exclusively
> locks the table while processing. "
>
> And:
>
> "You should run VACUUM ANALYZE when a table is initially loaded and
> when a table's data changes dramatically. "

That's the postgres *7.1* manual you're reading.   You need to read the
7.2 online manual; VACUUM has changed substantially.

-Josh Berkus



Re: [GENERAL] Upgrade to dual processor machine?

From
Medi Montaseri
Date:
I use PG's async-query technique to run tasks such as vacuuming.


Re: [GENERAL] Upgrade to dual processor machine?

From
Neil Conway
Date:
"Henrik Steffen" <steffen@city-map.de> writes:
> I have many UPDATEs and INSERTs on my log-statistics. For each
> http-request there will be an INSERT into the logfile. And if
> certain customer pages are downloaded there will even be an UPDATE
> in a customer-statistics table causing a hits column to be set to
> hits+1... I didn't think this was a dramatical change so far.

Just to clarify, INSERT does not create dead rows -- tables that have
lots of INSERTs don't need to be vacuumed particularly often. In
contrast, an UPDATE is really a DELETE plus an INSERT, so it *will*
create dead rows.

To get an idea of how many dead tuples there are in a table, try
contrib/pgstattuple (maybe it's only in 7.3's contrib/, not sure).
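For what it's worth, once the module is installed the call looks roughly like this; the exact output columns have varied between releases, and the table name is a placeholder, so treat it as a sketch:

```sql
-- Assumes contrib/pgstattuple has been loaded into the database:
SELECT * FROM pgstattuple('customer_stats');
-- The result includes dead-tuple counts and percentages, which show
-- how much reclaimable space a VACUUM would find in the table.
```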

> Still sure to run VACUUM ANALYZE on these tables so often?

Well, the ANALYZE part is probably rarely needed, as I wouldn't think
the statistical distribution of the data in the table changes very
frequently -- so maybe run a database-wide ANALYZE once per day? But
if a table is updated frequently, VACUUM frequently is definitely a
good idea.

Cheers,

Neil

--
Neil Conway <neilc@samurai.com> || PGP Key ID: DB3C29FC

Re: [GENERAL] Upgrade to dual processor machine?

From
"Shridhar Daithankar"
Date:
On 14 Nov 2002 at 21:36, Henrik Steffen wrote:

> do you seriously think that I should vacuum frequently updated/inserted
> tables every 120 seconds ?

It's not about 120 seconds. It's about how many new and dead tuples your server
is generating.

Here is a quick summary:

insert: creates a new tuple; VACUUM ANALYZE updates the statistics.
update: leaves a dead tuple behind; VACUUM ANALYZE marks the dead tuple for reuse, saving
buffer space.
delete: leaves a dead, unusable tuple; VACUUM FULL is required to reclaim the
space on disk.

VACUUM ANALYZE is non-blocking; VACUUM FULL is blocking.

If you are generating 10 dead pages, i.e. 80K of data, in a matter of minutes,
vacuum is warranted for optimal performance.

> I have many UPDATEs and INSERTs on my log-statistics. For each http-request
> there will be an INSERT into the logfile. And if certain customer pages
> are downloaded there will even be an UPDATE in a customer-statistics table
> causing a hits column to be set to hits+1... I didn't think this was a
> dramatical change so far.

OK. Schedule a cron job that runs VACUUM ANALYZE every 5-10 minutes, and see
if that gives you an overall increase in throughput.

> Still sure to run VACUUM ANALYZE on these tables so often?

IMO you should..

Also have a look at http://gborg.postgresql.org/project/pgavd/projdisplay.php.

I have written it, but I don't know of anybody using it. If you use it, I can help
you with any bugfixes required; I haven't done too much testing on it. It
vacuums things based on traffic rather than time, so your database performance
should ideally be maintained automatically.

Let me know if you need anything on this, and please use the CVS version.


Bye
 Shridhar

--
love, n.: When, if asked to choose between your lover and happiness, you'd skip
happiness in a heartbeat.


Re: [GENERAL] Upgrade to dual processor machine?

From
Cedric Dufour (Cogito Ergo Soft)
Date:
Concerning the VACUUM issue:

In order to test my DB performance, I made a script that populates it with
test data (about a million rows to start with). Each INSERT into one of the
tables triggers an UPDATE in 3 related tables, whose mean row size is about
50 bytes. I found out that it was *essential* to VACUUM the updated tables
every 500 INSERTs or so to keep the performance from *heavily* dropping.
That's about every 73 kB updated. Now, I guess this memory "limit" depends
on PG's configuration and the OS characteristics.

Is there any setting in the conf file that is related to this VACUUM and
dead-tuple issue? Could the "free space map" settings be related (I never
understood what these settings were)?

BTW, thanks to all of you participating in this thread. Nice to have such a
complete overview of PG's performance tuning and related OS issues.

    Cedric D.




Re: [GENERAL] Upgrade to dual processor machine?

From
"Henrik Steffen"
Date:
running 7.2.1 here

--

Mit freundlichem Gruß

Henrik Steffen
Geschäftsführer


----- Original Message -----
From: "Doug McNaught" <doug@mcnaught.org>
To: "Henrik Steffen" <steffen@city-map.de>
Cc: <shridhar_daithankar@persistent.co.in>; <pgsql-general@postgresql.org>; <pgsql-performance@postgresql.org>
Sent: Thursday, November 14, 2002 9:50 PM
Subject: Re: [PERFORM] [GENERAL] Upgrade to dual processor machine?


> "Henrik Steffen" <steffen@city-map.de> writes:
>
> > This is what it says in the manual and what I have been doing until today:
> >
> > "You should run VACUUM periodically to clean out expired rows. For tables that are heavily modified, it is useful
> > to run VACUUM every night in an automated manner. For tables with few modifications, VACUUM should be run less
> > frequently. The command exclusively locks the table while processing."
>
> The "exclusive lock" part is no longer true as of 7.2.X--it is now
> much cheaper to run VACUUM.  What version were you running again?
>
> -Doug
>


Re: [GENERAL] Upgrade to dual processor machine?

From
Andrew Sullivan
Date:
On Fri, Nov 15, 2002 at 09:47:23AM +0100, Cedric Dufour (Cogito Ergo Soft) wrote:
> Is there any setting in the conf file that is related to this VACUUM and
> dead tuples issue ? Could the "free-space map" settings be related (I never
> understood what were these settings) ?

Yes.  That's what those settings are.

A

--
----
Andrew Sullivan                         204-4141 Yonge Street
Liberty RMS                           Toronto, Ontario Canada
<andrew@libertyrms.info>                              M2P 2A8
                                         +1 416 646 3304 x110


Re: [GENERAL] Upgrade to dual processor machine?

From
Cedric Dufour (Cogito Ergo Soft)
Date:
> -----Original Message-----
> From: pgsql-performance-owner@postgresql.org
> [mailto:pgsql-performance-owner@postgresql.org]On Behalf Of Andrew
> Sullivan
> Sent: Friday, November 15, 2002 13:59
> To: pgsql-performance@postgresql.org
> Subject: Re: [PERFORM] [GENERAL] Upgrade to dual processor machine?
>
>
> On Fri, Nov 15, 2002 at 09:47:23AM +0100, Cedric Dufour (Cogito
> Ergo Soft) wrote:
> > Is there any setting in the conf file that is related to this VACUUM and
> > dead tuples issue ? Could the "free-space map" settings be
> related (I never
> > understood what were these settings) ?
>
> Yes.  That's what those settings are.
>

The 'Runtime configuration / General operation' part of the docs is quite
short on the subject.

Are there any other places to look for more details on this FSM? What policy
should drive changes to the FSM settings?

I guess allowing larger FSM values might improve UPDATE performance (requiring
VACUUM less often) but consume RAM that may be more useful elsewhere. Am I
right?

Has anyone gathered experience on this matter, and what conclusions were drawn?
In other words, shall we try to alter the FSM settings for better
performance, or is it better to stick to a regular (short-interval) VACUUM
schedule?

    Cedric



Re: [GENERAL] Upgrade to dual processor machine?

From
Andrew Sullivan
Date:
On Fri, Nov 15, 2002 at 02:37:01PM +0100, Cedric Dufour (Cogito Ergo Soft) wrote:

> The 'Runtime configuration / General operation' part of the doc is quite
> short on the subject.

I'm afraid the setting was new in 7.2, and people don't have a great
deal of experience with it yet, so it's difficult to make
recommendations.

> Is there any other places to look for more details on this FSM ? What policy
> should drive changes to the FSM settings ?

If your tables change a lot between VACUUMs, the FSM may fill up. The
problem is that the system is only able to keep "in mind" so much
information about how many pages can be freed. This affects the
re-use of disk space.

> I guess allowing larger FSM values might improve UPDATE performance (require
> VACUUM less often) but consume RAM that may be more useful elsewhere. Am I
> right ?

Not really. Your better bet is to perform VACUUM often; but if you
don't do that, then VACUUM will be able to reclaim more space in the
table if your FSM is larger. Is that clear-ish?

You can estimate the correct value, apparently, by doing some
calculations about disk space and turnover. I think it requires some
cycles where you do VACUUM FULL. There was a discussion on the
-general list about it some months ago, IIRC.
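For reference, the settings in question live in postgresql.conf; the parameter names are the real 7.2 ones, but the values below are only illustrative, not recommendations:

```ini
# postgresql.conf -- free space map sizing (new in 7.2);
# values are examples, not tuning advice
max_fsm_relations = 1000   # how many tables/indexes the FSM tracks
max_fsm_pages = 20000      # how many pages with free space it remembers
```

Each tracked page reportedly costs only a few bytes of shared memory, so raising these is cheap compared with the table bloat you get when the map overflows between vacuums.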

A


Re: [GENERAL] Upgrade to dual processor machine?

From
Stephan Szabo
Date:
On Fri, 15 Nov 2002, Shridhar Daithankar wrote:

> On 14 Nov 2002 at 21:36, Henrik Steffen wrote:
>
> > do you seriously think that I should vacuum frequently updated/inserted
> > tables every 120 seconds ?
>
> Its not about 120 seconds. Its about how many new and dead tuples your server
> is generating.
>
> Here is a quick summary
>
> insert: New tuple:vacuum analyse updates that statistics.
> update: Causes a dead tuple: Vacuum analyse marks dead tuple for reuse saving
> buffer space.
> delete: Causes a dead unusable tuple: Vacuum full is required to reclaim the
> space on the disk.

AFAIK, the delete line above is wrong. Deleted heap space should be
reclaimable with normal vacuums, within the limitations of the free
space map, etc...