Re: Freeze avoidance of very large table. - Mailing list pgsql-hackers

From Robert Haas
Subject Re: Freeze avoidance of very large table.
Date
Msg-id CA+TgmoYuZQoZPB4+8fppsU4M=JrhvWU5u8HHc1RHV50s3zujYg@mail.gmail.com
In response to Re: Freeze avoidance of very large table.  (Michael Paquier <michael.paquier@gmail.com>)
Responses Re: Freeze avoidance of very large table.
List pgsql-hackers
On Thu, Apr 23, 2015 at 9:03 PM, Michael Paquier
<michael.paquier@gmail.com> wrote:
> On Thu, Apr 23, 2015 at 10:42 PM, Robert Haas wrote:
>> On Thu, Apr 23, 2015 at 4:19 AM, Simon Riggs  wrote:
>>> We only need a freeze/backup map for larger relations. So if we map 1000
>>> blocks per map page, we skip having a map at all when size < 1000.
>>
>> Agreed.  We might also want to map multiple blocks per map slot - e.g.
>> one slot per 32 blocks.  That would keep the map quite small even for
>> very large relations, and would not compromise efficiency that much
>> since reading 256kB sequentially probably takes only a little longer
>> than reading 8kB.
>>
>> I think the idea of integrating the freeze map into the VM fork is
>> also worth considering.  Then, the incremental backup map could be
>> optional; if you don't want incremental backup, you can shut it off
>> and have less overhead.
>
> When I read that I think about something configurable at
> relation-level. There are cases where you may want more granularity
> of this information at the block level by having the VM slots track
> fewer blocks than 32, and vice versa.

What are those cases?  To me that sounds like making things
complicated to no obvious benefit.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


