Re: Freeze avoidance of very large table.

From: Jim Nasby
Subject: Re: Freeze avoidance of very large table.
Msg-id: 553FC922.8060908@BlueTreble.com
In response to: Re: Freeze avoidance of very large table. (Robert Haas <robertmhaas@gmail.com>)
List: pgsql-hackers
On 4/28/15 7:11 AM, Robert Haas wrote:
> On Fri, Apr 24, 2015 at 4:09 PM, Jim Nasby <Jim.Nasby@bluetreble.com> wrote:
>>>> When I read that I think about something configurable at
>>>> relation-level. There are cases where you may want to have more
>>>> granularity of this information at block level by having the VM slots
>>>> to track less blocks than 32, and vice-versa.
>>>
>>> What are those cases?  To me that sounds like making things
>>> complicated to no obvious benefit.
>>
>> Tables that get few/no dead tuples, like bulk insert tables. You'll have
>> large sections of blocks with the same visibility.
>
> I don't see any reason why that would require different granularity.

Because in those cases it would be trivial to drop XMIN out of the tuple 
headers. For a warehouse with narrow rows that could be a significant 
win. Moreover, we could also move XMAX to the page level, if we accept 
that invalidating any one tuple means moving all of them. In a warehouse 
situation that's probably OK as well.
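
To make that concrete, a minimal sketch of where the bytes would go -- 
every name in it is made up for illustration, and it's not a proposed 
on-disk layout; the point is just one xmin/xmax pair per page instead 
of 8 bytes in every tuple header:

#include <stdint.h>

typedef uint32_t TransactionId;   /* stand-in for the real typedef */

/*
 * Hypothetical sketch only.  For pages where every tuple was inserted
 * by the same long-committed transaction and nothing has been deleted,
 * keep one xmin/xmax pair at page level rather than per tuple.
 */
typedef struct PageVisibilityHeader
{
    TransactionId page_xmin;   /* inserting XID shared by all tuples */
    TransactionId page_xmax;   /* deleting XID; 0 (invalid) while live */
    uint16_t      flags;       /* e.g. PV_ALL_SAME_XMIN, PV_ALL_DEAD */
} PageVisibilityHeader;

Back of the envelope: the heap tuple header is 23 bytes today, and xmin 
and xmax are 4 bytes each, so on a ~40-byte warehouse tuple that's 
roughly a 20% saving. The cost is the coarse invalidation above: 
touching any one tuple means stamping page_xmax for the whole page, or 
migrating the page back to per-tuple headers.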

That said, I don't think this is the first place to focus for shrinking 
our on-disk format; reducing cleanup bloat would probably be a lot more 
useful.

Did you or Jan have more detailed info from the test he ran about where 
our 80% overhead was ending up? That would remove a lot of speculation 
here...
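
Going back to the granularity point in the quoted exchange, here's a 
minimal sketch of what I mean -- BLOCKS_PER_VM_SLOT and both function 
names are hypothetical; today's visibility map just keeps one 
all-visible bit per heap block:

#include <stdbool.h>
#include <stdint.h>

typedef uint32_t BlockNumber;     /* stand-in for the real typedef */

#define BLOCKS_PER_VM_SLOT 32     /* hypothetical per-relation knob */

/* Which VM slot covers a given heap block at this granularity. */
static inline uint32_t
vm_slot_for_block(BlockNumber blkno)
{
    return blkno / BLOCKS_PER_VM_SLOT;
}

/*
 * A coarse slot can only be marked all-visible when every block it
 * covers qualifies.
 */
static inline bool
vm_slot_all_visible(const bool *block_all_visible,
                    uint32_t slot, BlockNumber nblocks)
{
    BlockNumber first = (BlockNumber) slot * BLOCKS_PER_VM_SLOT;

    for (BlockNumber i = 0; i < BLOCKS_PER_VM_SLOT; i++)
    {
        if (first + i >= nblocks)
            break;                /* past the end of the relation */
        if (!block_all_visible[first + i])
            return false;
    }
    return true;
}

The trade-off is right there in vm_slot_all_visible(): one dirty block 
clears the slot for the whole 32-block run, which is why coarse slots 
only pay off for tables with long same-visibility stretches, like the 
bulk-insert case above.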
-- 
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com


