From: Magnus Hagander
Subject: Re: pg_basebackup for streaming base backups
Msg-id: AANLkTinL-UG-BXFdYByGrE2kW8s1=KGtuTFonNtiJYmi@mail.gmail.com
In response to: Re: pg_basebackup for streaming base backups (Fujii Masao <masao.fujii@gmail.com>)
List: pgsql-hackers
On Tue, Jan 18, 2011 at 10:49, Fujii Masao <masao.fujii@gmail.com> wrote:
> On Mon, Jan 17, 2011 at 10:30 PM, Fujii Masao <masao.fujii@gmail.com> wrote:
>> Though I haven't seen the core part of the patch (i.e.,
>> ReceiveTarFile, etc..) yet,
>> here is the comments against others.
>
> Here are another comments:

Thanks! These are all good and useful comments!


> When I untar the tar file taken by pg_basebackup, I got the following
> messages:
>
>    $ tar xf base.tar
>    tar: Skipping to next header
>    tar: Archive contains obsolescent base-64 headers
>    tar: Error exit delayed from previous errors
>
> Is this a bug? This happens only when I create $PGDATA by using
> initdb -X (i.e., I relocated the pg_xlog directory elsewhere than
> $PGDATA).

Interesting. What version of tar and what platform? I can't reproduce
that here...

It certainly is a bug, that should not happen.


> +               if (compresslevel > 0)
> +               {
> +                       snprintf(fn, sizeof(fn), "%s/%s.tar.gz", tardir,
> +                                PQgetvalue(res, rownum, 0));
> +                       ztarfile = gzopen(fn, "wb");
>
> Though I'm not familiar with zlib, isn't gzsetparams() required
> here?

Uh. It certainly is! I clearly forgot it there...
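Something along these lines on top of the snippet above, I'd think (rough
sketch, untested; the error message wording is just for illustration):

    if (compresslevel > 0)
    {
        snprintf(fn, sizeof(fn), "%s/%s.tar.gz", tardir,
                 PQgetvalue(res, rownum, 0));
        ztarfile = gzopen(fn, "wb");
        if (ztarfile &&
            gzsetparams(ztarfile, compresslevel, Z_DEFAULT_STRATEGY) != Z_OK)
        {
            /* gzopen() succeeded but setting the level failed */
            fprintf(stderr, _("%s: could not set compression level %d: %s\n"),
                    progname, compresslevel, get_gz_error(ztarfile));
            exit(1);
        }
    }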


> +#ifdef HAVE_LIBZ
> +       if (!tarfile && !ztarfile)
> +#else
> +       if (!tarfile)
> +#endif
> +       {
> +               fprintf(stderr, _("%s: could not create file \"%s\": %s\n"),
> +                               progname, fn, strerror(errno));
>
> Instead of strerror, get_gz_error seems required when using zlib.

Indeed it is. I think it needs to be this:
#ifdef HAVE_LIBZ
    if (compresslevel > 0 && !ztarfile)
    {
        /* Compression is in use */
        fprintf(stderr, _("%s: could not create compressed file \"%s\": %s\n"),
                progname, fn, get_gz_error(ztarfile));
        exit(1);
    }
    else
#endif
    {
        /* Either no zlib support, or zlib support but compresslevel = 0 */
        if (!tarfile)
        {
            fprintf(stderr, _("%s: could not create file \"%s\": %s\n"),
                    progname, fn, strerror(errno));
            exit(1);
        }
    }


> +       if (!res || PQresultStatus(res) != PGRES_COPY_OUT)
>
> The check for "!res" is not needed since PQresultStatus checks that.

Hah. I still keep doing that from old habit. I know you've pointed
that out before, with libpqwalreceiver :-)


> +               r = PQgetCopyData(conn, &copybuf, 0);
> +               if (r == -1)
>
> Since -1 of PQgetCopyData might indicate an error, in this case,
> we would need to call PQgetResult?.

Uh, -1 means end of data, no? -2 means error?
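That's at least my reading of the libpq docs, i.e. roughly this shape of
loop (just a sketch, not the patch code, and the actual write step is
elided - and calling PQgetResult() at the end to pick up the final status
is a good point either way):

    char       *copybuf = NULL;

    for (;;)
    {
        int         r = PQgetCopyData(conn, &copybuf, 0);

        if (r == -1)
            break;              /* -1: end of the COPY data stream */
        if (r == -2)
        {
            fprintf(stderr, _("%s: could not read COPY data: %s"),
                    progname, PQerrorMessage(conn));
            exit(1);
        }

        /* ... write the r bytes in copybuf to the tar file here ... */

        PQfreemem(copybuf);
        copybuf = NULL;
    }

    /* after -1, PQgetResult() still gives us the final command status */
    res = PQgetResult(conn);
    if (PQresultStatus(res) != PGRES_COMMAND_OK)
    {
        fprintf(stderr, _("%s: final receive failed: %s"),
                progname, PQerrorMessage(conn));
        exit(1);
    }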


> ReceiveTarFile seems refactorable by using GZWRITE and GZCLOSE
> macros.

You mean the ones from pg_dump? I don't think so. We can't use
gzwrite() with compression level 0 on the tar output, because it will
still write a gz header. With pg_dump, that is ok because it's our
format, but with a .tar (without .gz) I don't think it is.

At least that's how I interpreted the function.
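To make that concrete, the write path has to branch something like this
(illustration only: the helper name is made up, and it leans on progname,
_() and get_gz_error() from the surrounding file):

    static void
    writeTarData(
    #ifdef HAVE_LIBZ
                 gzFile ztarfile,
    #endif
                 FILE *tarfile, char *buf, int r, char *current_file)
    {
    #ifdef HAVE_LIBZ
        if (ztarfile != NULL)
        {
            /* compressed output: gzwrite() adds the gzip framing, which is fine here */
            if (gzwrite(ztarfile, buf, r) != r)
            {
                fprintf(stderr, _("%s: could not write to compressed file \"%s\": %s\n"),
                        progname, current_file, get_gz_error(ztarfile));
                exit(1);
            }
        }
        else
    #endif
        {
            /* plain .tar output: plain fwrite(), so no gzip header sneaks in */
            if (fwrite(buf, r, 1, tarfile) != 1)
            {
                fprintf(stderr, _("%s: could not write to file \"%s\": %s\n"),
                        progname, current_file, strerror(errno));
                exit(1);
            }
        }
    }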


> +                                       fprintf(stderr, _("%s: could not write to file '%s': %m\n"),
>
> %m in fprintf is portable?

Hmm. I just assumed it was because we use it elsewhere, but I now see
we only really use it for ereport() stuff. Bottom line is, I don't
know - perhaps it needs to be changed to use strerror()?


> Can't you change '%s' to \"%s\" for consistency?

Yeah, absolutely. Clearly I was way inconsistent, and it got worse
with some copy/paste :-(
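Assuming we go with strerror(), they'd all end up in one consistent shape,
e.g. (with "filename" standing in for whichever name variable is in scope):

    fprintf(stderr, _("%s: could not write to file \"%s\": %s\n"),
            progname, filename, strerror(errno));
    exit(1);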

> +       /*
> +        * Make sure we're unpacking into an empty directory
> +        */
> +       verify_dir_is_empty_or_create(current_path);
>
> Can pg_basebackup take a backup of $PGDATA including a tablespace
> directory, without an error? The above code seems to prevent that....

Uh, how do you mean it would prevent that? It requires that the
directory you're writing the tablespace to is empty or nonexistent,
but that shouldn't prevent a backup, no? It will prevent you from
overwriting things with your backup, but that's intentional - if you
don't need the old dir, just remove it.


> +                               if (compresslevel <= 0)
> +                               {
> +                                       fprintf(stderr, _("%s: invalid compression level \"%s\"\n"),
>
> It's better to check "compresslevel > 9" here.

Agreed (well, check for both of them of course)
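I.e. something like this where the option is parsed (sketch; assuming the
raw value still arrives via optarg):

    compresslevel = atoi(optarg);
    if (compresslevel <= 0 || compresslevel > 9)
    {
        fprintf(stderr, _("%s: invalid compression level \"%s\"\n"),
                progname, optarg);
        exit(1);
    }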


> +/*
> + * Print a progress report based on the global variables. If verbose output
> + * is disabled, also print the current file name.
>
> Typo: s/disabled/enabled

Indeed.


> I request new option which specifies whether pg_start_backup
> executes immediate checkpoint or not. Currently it always executes
> immediate one. But I'd like to run smoothed one in busy system.
> What's your opinion?

Yeah, that sounds like a good idea. Shouldn't be too hard to do (will
require a backend patch as well, of course). Should we use "-f" for
fast? Though that may be an unfortunate overload of the usual use case
for -f, so maybe -c <fast|slow> for "checkpoint fast/slow"?
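On the pg_basebackup side I'm picturing something along these lines (just a
sketch - the variable name is made up, and the option letter and the backend
interface are still completely open):

    bool        fastcheckpoint = false;     /* default to a smoothed checkpoint */

    /* ... inside the getopt loop ... */
    case 'c':
        if (strcmp(optarg, "fast") == 0)
            fastcheckpoint = true;          /* immediate checkpoint */
        else if (strcmp(optarg, "slow") == 0)
            fastcheckpoint = false;         /* spread/smoothed checkpoint */
        else
        {
            fprintf(stderr, _("%s: invalid checkpoint argument \"%s\", must be \"fast\" or \"slow\"\n"),
                    progname, optarg);
            exit(1);
        }
        break;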


I've updated my git branch with the simple fixes, will get the bigger
ones in there as soon as I've done them.
--
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/

