Re: [COMMITTERS] pgsql: Reduce the size of memory allocations by lazy vacuum when - Mailing list pgsql-hackers

From: Gregory Stark
Subject: Re: [COMMITTERS] pgsql: Reduce the size of memory allocations by lazy vacuum when
Msg-id: 87myvcffny.fsf@oxford.xeocode.com
In response to: Re: [COMMITTERS] pgsql: Reduce the size of memory allocations by lazy vacuum when ("Heikki Linnakangas" <heikki@enterprisedb.com>)
List: pgsql-hackers
"Heikki Linnakangas" <heikki@enterprisedb.com> writes:

> Simon Riggs wrote:
>> On Mon, 2007-09-24 at 10:02 +0100, Heikki Linnakangas wrote:
>>> How about just using MaxHeapTuplesPerPage? With the default 8K block
>>> size, it's not that much more than 200, but makes the above gripes
>>> completely go away. That seems like the safest option at this point.
>> 
>> It would be much better to use a value for each table. Any constant
>> value will be sub-optimal in many cases. 
>
> Allocating extra memory doesn't usually do much harm, as long as you
> don't actually use it. The reason we're now limiting it is to avoid Out
> Of Memory errors if you're running with overcommit turned off, and
> autovacuum triggers a vacuum on multiple tables at the same time.

For reference, MaxHeapTuplesPerPage on an 8k block is 291. That ceiling is only
reachable by tuples carrying no data at all (HOT updates that have been pruned,
or rows of eight or fewer columns, all of them null). Any tuple that actually
carries data brings the maximum down to 255 rows per page.

For the small difference between 200 and 291 it seems safer to just use
MaxHeapTuplesPerPage.


Block size   MaxHeapTuplesPerPage   Max w/data
----------   --------------------   ----------
      4096                    145          127
      8192                    291          255
     16384                    584          511
     32768                   1169         1023
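The table falls out of a little page-layout arithmetic. The sketch below
reproduces the numbers under stated assumptions: a 24-byte page header, a
23-byte heap tuple header, a 4-byte line pointer per tuple, and 4-byte
MAXALIGN (as on a 32-bit platform; MAXALIGN(23) is 24 either way, so the
MaxHeapTuplesPerPage column is unaffected by the alignment choice). The
function names here are illustrative, not PostgreSQL's.

```python
PAGE_HEADER = 24    # SizeOfPageHeaderData
TUPLE_HEADER = 23   # HeapTupleHeaderData, before alignment
ITEMID = 4          # sizeof(ItemIdData), the per-tuple line pointer

def maxalign(n, align=4):
    # round n up to the next multiple of `align`
    # (assuming 4-byte MAXALIGN, i.e. a 32-bit platform)
    return (n + align - 1) & ~(align - 1)

def max_heap_tuples_per_page(blcksz):
    # MaxHeapTuplesPerPage: usable space divided by the footprint of the
    # smallest possible tuple -- a line pointer plus an aligned bare header
    return (blcksz - PAGE_HEADER) // (ITEMID + maxalign(TUPLE_HEADER))

def max_tuples_with_data(blcksz):
    # a tuple that carries any data needs at least one byte past the
    # aligned header, pushing its aligned footprint up one MAXALIGN step
    tuple_len = maxalign(maxalign(TUPLE_HEADER) + 1)
    return (blcksz - PAGE_HEADER) // (ITEMID + tuple_len)

for bs in (4096, 8192, 16384, 32768):
    print(bs, max_heap_tuples_per_page(bs), max_tuples_with_data(bs))
```

For an 8k block this gives (8192 - 24) // 28 = 291 and (8192 - 24) // 32 = 255,
matching the table above.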

--
  Gregory Stark
  EnterpriseDB          http://www.enterprisedb.com

