Re: block-level incremental backup - Mailing list pgsql-hackers

From vignesh C
Subject Re: block-level incremental backup
Date
Msg-id CALDaNm1ZViXrJbZ2BB1wU0EeVBc_ahuRU-y=cWryPt72ZGX0RA@mail.gmail.com
In response to Re: block-level incremental backup  (Jeevan Chalke <jeevan.chalke@enterprisedb.com>)
Responses Re: block-level incremental backup
List pgsql-hackers
On Mon, Sep 9, 2019 at 4:51 PM Jeevan Chalke <jeevan.chalke@enterprisedb.com> wrote:
>
>
>
> On Tue, Aug 27, 2019 at 4:46 PM vignesh C <vignesh21@gmail.com> wrote:
>>
>> Few comments:
>> Comment:
>> + buf = (char *) malloc(statbuf->st_size);
>> + if (buf == NULL)
>> +     ereport(ERROR,
>> +             (errcode(ERRCODE_OUT_OF_MEMORY),
>> +              errmsg("out of memory")));
>> +
>> + if ((cnt = fread(buf, 1, statbuf->st_size, fp)) > 0)
>> + {
>> +     Bitmapset  *mod_blocks = NULL;
>> +     int         nmodblocks = 0;
>> +
>> +     if (cnt % BLCKSZ != 0)
>> +     {
>>
>> We can read the file using the full page size (BLCKSZ) as the unit.
>> After pg_start_backup, full-page writes will be enabled.
>> Reading in page-sized units maintains data consistency.
>
>
> Can you please explain which size you mean?
> The aim here is to read the entire file into memory, which is why statbuf->st_size is used.
>
Instead of reading the whole file here, we can read the file page by page. Reading it in units other than whole pages risks data inconsistency, whereas reading page by page keeps the data consistent because full-page writes are enabled at this point.
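
Just to illustrate the idea, here is a rough standalone sketch (not the patch's code; the helper scan_file_by_page() and its callback are made up for this example): each fread() pulls exactly one BLCKSZ-sized page, and a trailing partial page is reported as an error, mirroring the cnt % BLCKSZ check quoted above.

#include <stdio.h>

#define BLCKSZ 8192             /* PostgreSQL's default block size */

/*
 * Read "path" one BLCKSZ page at a time and hand each complete page to
 * process_page().  Returns the number of pages read, or -1 on error.
 * (Hypothetical helper for illustration only; the patch itself works
 * inside the backend with its own buffer and Bitmapset machinery.)
 */
static long
scan_file_by_page(const char *path,
                  void (*process_page)(long blkno, const char *page))
{
    FILE   *fp = fopen(path, "rb");
    char    page[BLCKSZ];
    size_t  cnt;
    long    blkno = 0;

    if (fp == NULL)
        return -1;

    while ((cnt = fread(page, 1, BLCKSZ, fp)) == BLCKSZ)
        process_page(blkno++, page);

    if (cnt != 0)
    {
        /* trailing partial page: file size is not a multiple of BLCKSZ */
        fprintf(stderr, "%s: last page is only %zu bytes\n", path, cnt);
        fclose(fp);
        return -1;
    }

    fclose(fp);
    return blkno;
}

/*
 * Example page handler: here it only counts pages; the incremental-backup
 * code would instead check the page against the modified-block information
 * to decide whether the block needs to be included.
 */
static void
count_page(long blkno, const char *page)
{
    (void) blkno;
    (void) page;
}

int
main(int argc, char **argv)
{
    long    npages;

    if (argc != 2)
    {
        fprintf(stderr, "usage: %s relation-file\n", argv[0]);
        return 1;
    }

    npages = scan_file_by_page(argv[1], count_page);
    if (npages < 0)
    {
        fprintf(stderr, "failed to read %s page by page\n", argv[1]);
        return 1;
    }

    printf("%s: %ld pages of %d bytes\n", argv[1], npages, BLCKSZ);
    return 0;
}

The point is only that every unit handed to the modified-block logic is a whole page, which full-page writes keep consistent while the backup is running.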

Regards,
Vignesh
EnterpriseDB: http://www.enterprisedb.com
