Thread: pg_basebackup cannot compress to STDOUT

pg_basebackup cannot compress to STDOUT

From: Support
Date: 5/8/2020 12:14 PM
Hi,

Despite the --help saying that it's possible to gzip to STDOUT and 
pipe it to another process, pg_basebackup fails, saying that it's not 
possible to gzip to STDOUT.

Who to believe then?



Re: pg_basebackup cannot compress to STDOUT

From: Adrian Klaver
Date: 5/8/2020 12:18 PM
On 5/8/20 12:14 PM, Support wrote:
> Hi,
> 
> Despite the --help saying that it's possible to gzip to STDOUT and 
> pipe it to another process, pg_basebackup fails, saying that it's not 
> possible to gzip to STDOUT.

1) Postgres version?

2) Command run?

3) Error reported?

> 
> Who to believe then?
> 
> 


-- 
Adrian Klaver
adrian.klaver@aklaver.com



Re: pg_basebackup cannot compress to STDOUT

From: Support
Date: 5/8/2020 12:31 PM
On 5/8/2020 12:18 PM, Adrian Klaver wrote:
> On 5/8/20 12:14 PM, Support wrote:
>> Hi,
>>
>> Despite the --help saying that it's possible to gzip to STDOUT and 
>> pipe it to another process, pg_basebackup fails, saying that it's not 
>> possible to gzip to STDOUT.
>
> 1) Postgres version?
>
> 2) Command run?
>
> 3) Error reported?
>
>>
>> Who to believe then?
>>
>>
1) Postgres version?
12.2, self-compiled with
./configure --with-perl --enable-integer-datetimes --enable-depend 
--with-pam --with-systemd --enable-nls --with-libxslt --with-libxml 
--with-llvm --with-python --with-icu --with-gssapi --with-openssl

2) Command run?
ssh postgres@nodeXXX "pg_basebackup -h /run/postgresql -Ft -D- | pigz -c 
-p2 " | pigz -cd -p2 | tar -xf- -C /usr/local/pgsql/data

3) Error reported?
pg_basebackup: error: cannot stream write-ahead logs in tar mode to stdout
Try "pg_basebackup --help" for more information.
tar: This does not look like a tar archive
tar: Exiting with failure status due to previous errors

Thanks!



Re: pg_basebackup cannot compress to STDOUT

From: Adrian Klaver
Date: 5/8/2020 1:24 PM
On 5/8/20 12:31 PM, Support wrote:
> 
> On 5/8/2020 12:18 PM, Adrian Klaver wrote:
>> On 5/8/20 12:14 PM, Support wrote:
>>> Hi,
>>>
>>> Despite the --help saying that it's possible to gzip to STDOUT and 
>>> pipe it to another process, pg_basebackup fails, saying that it's 
>>> not possible to gzip to STDOUT.
>>
>> 1) Postgres version?
>>
>> 2) Command run?
>>
>> 3) Error reported?
>>
>>>
>>> Who to believe then?
>>>
>>>
> 1) Postgres version?
> 12.2, self-compiled with
> ./configure --with-perl --enable-integer-datetimes --enable-depend 
> --with-pam --with-systemd --enable-nls --with-libxslt --with-libxml 
> --with-llvm --with-python --with-icu --with-gssapi --with-openssl
> 
> 2) Command run?
> ssh postgres@nodeXXX "pg_basebackup -h /run/postgresql -Ft -D- | pigz -c 
> -p2 " | pigz -cd -p2 | tar -xf- -C /usr/local/pgsql/data
> 
> 3) Error reported?
> pg_basebackup: error: cannot stream write-ahead logs in tar mode to stdout

https://www.postgresql.org/docs/12/app-pgbasebackup.html
"t
tar

     Write the output as tar files in the target directory. The main 
data directory will be written to a file named base.tar, and all other 
tablespaces will be named after the tablespace OID.

     If the value - (dash) is specified as target directory, the tar 
contents will be written to standard output, suitable for piping to for 
example gzip. This is only possible if the cluster has no additional 
tablespaces and WAL streaming is not used.
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^
"

So use -X fetch or -X none.
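
For example, something like this (an untested sketch; it's just your 
original command with -X fetch added, so the WAL is included in the tar 
at the end of the backup instead of being streamed):

ssh postgres@nodeXXX "pg_basebackup -h /run/postgresql -Ft -X fetch -D- | pigz -c -p2" | pigz -cd -p2 | tar -xf- -C /usr/local/pgsql/data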


> Try "pg_basebackup --help" for more information.
> tar: This does not look like a tar archive
> tar: Exiting with failure status due to previous errors
> 
> Thanks!
> 
> 


-- 
Adrian Klaver
adrian.klaver@aklaver.com



Re: pg_basebackup cannot compress to STDOUT

From: Support
Date: 5/8/2020 1:42 PM

On 5/8/2020 1:24 PM, Adrian Klaver wrote:
> On 5/8/20 12:31 PM, Support wrote:
>>
>> On 5/8/2020 12:18 PM, Adrian Klaver wrote:
>>> On 5/8/20 12:14 PM, Support wrote:
>>>> Hi,
>>>>
>>>> Despite the --help saying that it's possible to gzip to STDOUT 
>>>> and pipe it to another process, pg_basebackup fails, saying that 
>>>> it's not possible to gzip to STDOUT.
>>>
>>> 1) Postgres version?
>>>
>>> 2) Command run?
>>>
>>> 3) Error reported?
>>>
>>>>
>>>> Who to believe then?
>>>>
>>>>
>> 1) Postgres version?
>> 12.2, self-compiled with
>> ./configure --with-perl --enable-integer-datetimes --enable-depend 
>> --with-pam --with-systemd --enable-nls --with-libxslt --with-libxml 
>> --with-llvm --with-python --with-icu --with-gssapi --with-openssl
>>
>> 2) Command run?
>> ssh postgres@nodeXXX "pg_basebackup -h /run/postgresql -Ft -D- | pigz 
>> -c -p2 " | pigz -cd -p2 | tar -xf- -C /usr/local/pgsql/data
>>
>> 3) Error reported?
>> pg_basebackup: error: cannot stream write-ahead logs in tar mode to 
>> stdout
>
> https://www.postgresql.org/docs/12/app-pgbasebackup.html
> "t
> tar
>
>     Write the output as tar files in the target directory. The main 
> data directory will be written to a file named base.tar, and all other 
> tablespaces will be named after the tablespace OID.
>
>     If the value - (dash) is specified as target directory, the tar 
> contents will be written to standard output, suitable for piping to 
> for example gzip. This is only possible if the cluster has no 
> additional tablespaces and WAL streaming is not used.
>                 ^^^^^^^^^^^^^^^^^^^^^^^^^^
> "
>
> So use -X fetch or -X none.
>
>
>> Try "pg_basebackup --help" or more information.
>> tar: This does not look like a tar archive
>> tar: Exiting with failure status due to previous errors
>>
>> Thanks!
>>
>>

Good catch, thank you!



Re: pg_basebackup cannot compress to STDOUT

From: Adrian Klaver
Date:
On 5/8/20 1:42 PM, Support wrote:
> 
> 
> On 5/8/2020 1:24 PM, Adrian Klaver wrote:
>> On 5/8/20 12:31 PM, Support wrote:
>>>
>>> On 5/8/2020 12:18 PM, Adrian Klaver wrote:
>>>> On 5/8/20 12:14 PM, Support wrote:
>>>>> Hi,
>>>>>
>>>>> Despite the --help saying that it's possible to gzip to STDOUT 
>>>>> and pipe it to another process, pg_basebackup fails, saying that 
>>>>> it's not possible to gzip to STDOUT.
>>>>
>>>> 1) Postgres version?
>>>>
>>>> 2) Command run?
>>>>
>>>> 3) Error reported?
>>>>
>>>>>
>>>>> Who to believe then?
>>>>>
>>>>>
>>> 1) Postgres version?
>>> 12.2, self-compiled with
>>> ./configure --with-perl --enable-integer-datetimes --enable-depend 
>>> --with-pam --with-systemd --enable-nls --with-libxslt --with-libxml 
>>> --with-llvm --with-python --with-icu --with-gssapi --with-openssl
>>>
>>> 2) Command run?
>>> ssh postgres@nodeXXX "pg_basebackup -h /run/postgresql -Ft -D- | pigz 
>>> -c -p2 " | pigz -cd -p2 | tar -xf- -C /usr/local/pgsql/data
>>>
>>> 3) Error reported?
>>> pg_basebackup: error: cannot stream write-ahead logs in tar mode to 
>>> stdout
>>
>> https://www.postgresql.org/docs/12/app-pgbasebackup.html
>> "t
>> tar
>>
>>     Write the output as tar files in the target directory. The main 
>> data directory will be written to a file named base.tar, and all other 
>> tablespaces will be named after the tablespace OID.
>>
>>     If the value - (dash) is specified as target directory, the tar 
>> contents will be written to standard output, suitable for piping to 
>> for example gzip. This is only possible if the cluster has no 
>> additional tablespaces and WAL streaming is not used.
>>                 ^^^^^^^^^^^^^^^^^^^^^^^^^^
>> "
>>
>> So use -X fetch or -X none.
>>
>>
>>> Try "pg_basebackup --help" or more information.
>>> tar: This does not look like a tar archive
>>> tar: Exiting with failure status due to previous errors
>>>
>>> Thanks!
>>>
>>>
> 
> Good catch, thank you!

It was not much of a catch; the error message supplied the info:

"pg_basebackup: error: cannot stream write-ahead logs in tar mode to 
stdout "


> 
> 


-- 
Adrian Klaver
adrian.klaver@aklaver.com



Re: pg_basebackup cannot compress to STDOUT

From: Paul Förster
Date: 5/8/2020 11:51 PM
Hi Admin,

> On 08. May, 2020, at 21:31, Support <admin@e-blokos.com> wrote:
> 2) Command run?
> ssh postgres@nodeXXX "pg_basebackup -h /run/postgresql -Ft -D- | pigz -c -p2 " | pigz -cd -p2 | tar -xf- -C /usr/local/pgsql/data

I don't get it, sorry. Do I understand you correctly here that you want 
an online backup of a *remotely* running PostgreSQL instance on your 
local machine?

If so, why not just let pg_basebackup connect remotely and let it do its magic? Something like this:

$ mkdir -p /usr/local/pgsql/data
$ cd /usr/local/pgsql/data
$ pg_basebackup -D /usr/local/pgsql/data -Fp -P -v -h nodeXXX -p 5432 -U replicator
$ pg_ctl start

You'd have to have a role with replication privileges (or superuser), and you'd have to adapt the port, of course.

No need to take care of any WALs manually; it is all taken care of by 
pg_basebackup. The only real drawback is that if you have tablespaces, 
you'd have to create all the tablespace directories beforehand, which is 
why we removed them again after initially having tried the feature.

That's basically how I create async replicas on our site, which is why I additionally add -R to the above command.
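
For example (same hypothetical host and role names as above; -R makes 
pg_basebackup write standby.signal and the primary_conninfo setting for 
you):

$ pg_basebackup -D /usr/local/pgsql/data -Fp -P -v -R -h nodeXXX -p 5432 -U replicator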

Cheers,
Paul


Re: pg_basebackup cannot compress to STDOUT

From: Support
Date:
On 5/8/2020 11:51 PM, Paul Förster wrote:

> Hi Admin,
>
>> On 08. May, 2020, at 21:31, Support <admin@e-blokos.com> wrote:
>> 2) Command run?
>> ssh postgres@nodeXXX "pg_basebackup -h /run/postgresql -Ft -D- | pigz -c -p2 " | pigz -cd -p2 | tar -xf- -C /usr/local/pgsql/data
> I don't get it, sorry. Do I understand you correctly here that you 
> want an online backup of a *remotely* running PostgreSQL instance on 
> your local machine?
>
> If so, why not just let pg_basebackup connect remotely and let it do its magic? Something like this:
>
> $ mkdir -p /usr/local/pgsql/data
> $ cd /usr/local/pgsql/data
> $ pg_basebackup -D /usr/local/pgsql/data -Fp -P -v -h nodeXXX -p 5432 -U replicator
> $ pg_ctl start
>
> You'd have to have a role with replication privileges (or superuser), and you'd have to adapt the port, of course.
>
> No need to take care of any WALs manually; it is all taken care of by 
> pg_basebackup. The only real drawback is that if you have tablespaces, 
> you'd have to create all the tablespace directories beforehand, which 
> is why we removed them again after initially having tried the feature.
>
> That's basically how I create async replicas on our site, which is why I additionally add -R to the above command.
>
> Cheers,
> Paul
>
The trick in my command above is to speed up the transfer by sending 
everything over the network as a single compressed stream.
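
(For completeness: once -X fetch is in place, the built-in gzip that 
--help mentions should also work to stdout, e.g.

ssh postgres@nodeXXX "pg_basebackup -h /run/postgresql -Ft -X fetch -z -D-" | pigz -cd -p2 | tar -xf- -C /usr/local/pgsql/data

though that compresses single-threaded on the server, which is why I 
prefer pigz here.)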