pg_basebackup failed to back up large file - Mailing list pgsql-hackers

From Fujii Masao
Subject pg_basebackup failed to back up large file
Date
Msg-id CAHGQGwH0OKZ6cKpJKCWOjGa3ejwfFm1eNrmRO3dkdoTeaai-eg@mail.gmail.com
Responses Re: pg_basebackup failed to back up large file  (Andres Freund <andres@2ndquadrant.com>)
List pgsql-hackers
Hi,

I received an off-list bug report that pg_basebackup fails with an error
when there is a large file (e.g., 4GB) in the database cluster. The
problem is easy to reproduce:

$ dd if=/dev/zero of=$PGDATA/test bs=1G count=4
$ pg_basebackup -D hoge -c fast
pg_basebackup: invalid tar block header size: 32768

2014-06-03 22:56:50 JST data LOG:  could not send data to client: Broken pipe
2014-06-03 22:56:50 JST data ERROR:  base backup could not send data,
aborting backup
2014-06-03 22:56:50 JST data FATAL:  connection to client lost

The cause of this problem is that pg_basebackup uses a plain int to
store the size of each file it receives from the server, so an integer
overflow can occur when a file is very large. I think pg_basebackup
should handle even such large files properly, because they can
legitimately exist in the database cluster; for example, a server log
file under $PGDATA/pg_log can grow that large. The attached patch
changes pg_basebackup to store the file size in a uint64, so the
overflow no longer occurs.

Thoughts?

Regards,

--
Fujii Masao

Attachment
