Re: [PATCH] Incremental backup: add backup profile to base backup - Mailing list pgsql-hackers

From: Robert Haas
Subject: Re: [PATCH] Incremental backup: add backup profile to base backup
Msg-id: CA+TgmobMgZeL6e3kCcgCke5RQ87z4qN=pD7X-sSgWDHTbN6-NQ@mail.gmail.com
In response to: Re: [PATCH] Incremental backup: add backup profile to base backup (Claudio Freire <klaussfreire@gmail.com>)
List: pgsql-hackers
On Wed, Aug 20, 2014 at 7:33 PM, Claudio Freire <klaussfreire@gmail.com> wrote:
> On Wed, Aug 20, 2014 at 8:24 PM, Bruce Momjian <bruce@momjian.us> wrote:
>> On Mon, Aug 18, 2014 at 04:05:07PM +0300, Heikki Linnakangas wrote:
>>> But more to the point, I thought the consensus was to use the
>>> highest LSN of all the blocks in the file, no? That's essentially
>>> free to calculate (if you have to read all the data anyway), and
>>> isn't vulnerable to collisions.
>>
>> The highest-LSN approach lets you read only part of each 8k block,
>> since the LSN lives at the start of the page header.  Assuming
>> 512-byte storage sector sizes, you only have to read 1/8 of the file.
>>
>> Now, the problem is that you lose kernel prefetch, but maybe
>> posix_fadvise() would fix that.
>
> Sequentially reading 512-byte blocks or 8k blocks takes the same amount
> of time on rotating media (if the reads are scheduled right).  Maybe not
> on SSDs.
>
> On top of that, the kernel reads in 4k blocks rather than 8k ones (at
> least on Linux), so at best you halve the I/O.
>
> So the benefit is dubious.

Agreed.  But there could be a CPU benefit, too: pulling the LSN out
of a block is probably a lot cheaper than checksumming the whole
thing.
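
For concreteness, here's a rough standalone sketch (illustration only,
not code from the patch) of the per-file highest-LSN scan, assuming the
standard page layout where pd_lsn sits at the very start of each 8k
page header, and that the file was written by the local architecture
(pd_lsn is stored in host byte order):

#define _POSIX_C_SOURCE 200112L    /* for posix_fadvise() and fileno() */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define BLCKSZ 8192

/*
 * Compute the highest page LSN in one relation segment file, reading
 * just the first 8 bytes (pd_lsn) of every 8k block.
 */
static uint64_t
max_page_lsn(FILE *fp)
{
    unsigned char hdr[8];       /* pd_lsn: xlogid + xrecoff, 4 bytes each */
    uint64_t    max_lsn = 0;
    long        blkno;

    /* Hint that we will walk the file front to back (the upthread idea). */
    (void) posix_fadvise(fileno(fp), 0, 0, POSIX_FADV_SEQUENTIAL);

    for (blkno = 0;; blkno++)
    {
        uint32_t    xlogid;
        uint32_t    xrecoff;
        uint64_t    lsn;

        if (fseek(fp, blkno * (long) BLCKSZ, SEEK_SET) != 0)
            break;
        if (fread(hdr, 1, sizeof(hdr), fp) != sizeof(hdr))
            break;              /* EOF or short read: done */

        /* pd_lsn is the first field of the page header, host byte order */
        memcpy(&xlogid, hdr, 4);
        memcpy(&xrecoff, hdr + 4, 4);
        lsn = ((uint64_t) xlogid << 32) | xrecoff;

        if (lsn > max_lsn)
            max_lsn = lsn;
    }

    return max_lsn;
}

int
main(int argc, char **argv)
{
    FILE       *fp;
    uint64_t    lsn;

    if (argc != 2)
    {
        fprintf(stderr, "usage: %s <relation segment file>\n", argv[0]);
        return 1;
    }
    if ((fp = fopen(argv[1], "rb")) == NULL)
    {
        perror(argv[1]);
        return 1;
    }
    lsn = max_page_lsn(fp);
    fclose(fp);
    printf("highest page LSN: %X/%X\n",
           (unsigned int) (lsn >> 32), (unsigned int) lsn);
    return 0;
}

The posix_fadvise() call is just the prefetch hint mentioned upthread;
whether reading 8 bytes per block actually beats reading whole blocks
sequentially is exactly the open question, but the LSN extraction itself
is clearly cheaper than checksumming 8k of data.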

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


