[ sorry for not noticing this thread sooner; thanks to Andres for
pointing me to it ]
On Mon, Dec 15, 2025 at 9:01 AM Amul Sul <sulamul@gmail.com> wrote:
> Thanks for the reproducer; I can see the reported issue, but I am not
> quite sure the proposed fix is correct, and it might break other cases
> (I haven't tried constructing such a case yet). There is a comment
> detailing that case just before the point where you are planning to
> make the changes:
>
> /*
>  * The truncation block length is the minimum length of the reconstructed
>  * file. Any block numbers below this threshold that are not present in
>  * the backup need to be fetched from the prior backup. At or above this
>  * threshold, blocks should only be included in the result if they are
>  * present in the backup. (This may require inserting zero blocks if the
>  * blocks included in the backup are non-consecutive.)
>  */
>
> IIUC, we might need the original assignment logic as it is. But we
> need to ensure that truncation_block_length is not set to a value that
> exceeds RELSEG_SIZE.
I think you're right. By way of example, let's say that the current
length of the file is 200 blocks, but the limit block is 100 blocks
into the current segment. That means that the only blocks that we can
get from any previous backup are blocks 0-99. Blocks 100-199 of the
current segment are either mentioned in the WAL summaries we're using
for this backup, or they're all zeroes. We can't set the
truncation_block_length to a value greater than 100, or we'll go
looking for the contents of any zero-filled blocks in previous
backups, which will either fail or produce the wrong answer. But Oleg
is correct that we also shouldn't set it to a value greater than
RELSEG_SIZE. So my guess is that the correct fix might be something
like the attached (untested, for discussion).
--
Robert Haas
EDB: http://www.enterprisedb.com