Hi,
I received an off-list bug report that pg_basebackup fails with an error
when there is a large file (e.g., 4GB) in the database cluster. The
problem is easy to reproduce:
$ dd if=/dev/zero of=$PGDATA/test bs=1G count=4
$ pg_basebackup -D hoge -c fast
pg_basebackup: invalid tar block header size: 32768
2014-06-03 22:56:50 JST data LOG: could not send data to client: Broken pipe
2014-06-03 22:56:50 JST data ERROR: base backup could not send data,
aborting backup
2014-06-03 22:56:50 JST data FATAL: connection to client lost
The cause of this problem is that pg_basebackup uses an integer to
store the size of the file to be received from the server, so an
integer overflow can happen when the file is very large. I think
pg_basebackup should handle such large files properly because they can
legitimately exist in the database cluster; for example, a server log
file under $PGDATA/pg_log can grow that large.
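
To illustrate the overflow, here is a small standalone sketch (not the
pg_basebackup source itself): the 4GB size that a tar header reports in
its octal size field is preserved in a 64-bit variable, but collapses
to 0 when kept in 32 bits.

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>
#include <stdlib.h>

int
main(void)
{
    /* size field of a tar header for a 4GB member: 11 octal digits */
    const char *size_field = "40000000000";

    /* parsed into 64 bits the value is intact: 4294967296 */
    uint64_t full_size = strtoull(size_field, NULL, 8);

    /* kept in 32 bits the value wraps around to 0 */
    uint32_t truncated = (uint32_t) full_size;

    printf("64-bit size: %" PRIu64 "\n", full_size);
    printf("32-bit size: %" PRIu32 "\n", truncated);

    return 0;
}
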
The attached patch changes pg_basebackup so that it uses uint64 to
store the file size, avoiding the integer overflow.
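
If it helps to visualize the direction of the fix, here is an
illustrative standalone sketch (not the patch itself, and not the
pg_basebackup source): the 12-byte octal size field at offset 124 of
the tar header block is parsed into a 64-bit variable, so sizes of 4GB
and above survive intact. With a 32-bit integer in that role, the same
header would yield a truncated size.

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>
#include <stdlib.h>
#include <string.h>

/*
 * Return the member size recorded in a 512-byte tar header block.
 * The size field is 12 bytes of octal ASCII at offset 124; returning
 * a 64-bit value keeps sizes of 4GB and above intact.
 */
static uint64_t
tar_header_size(const char *header)
{
    char field[13];

    memcpy(field, header + 124, 12);
    field[12] = '\0';
    return strtoull(field, NULL, 8);
}

int
main(void)
{
    char header[512];

    /* build a synthetic header whose size field says "4GB" */
    memset(header, 0, sizeof(header));
    memcpy(header + 124, "40000000000 ", 12);

    printf("member size: %" PRIu64 " bytes\n", tar_header_size(header));
    return 0;
}
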
Thoughts?
Regards,
--
Fujii Masao