Re: block-level incremental backup - Mailing list pgsql-hackers
From: vignesh C
Subject: Re: block-level incremental backup
Msg-id: CALDaNm1M57g+rq2chHrB+O+n3Wt-QZorhke6dfSp1BuGVZ9GCQ@mail.gmail.com
In response to: Re: block-level incremental backup (Jeevan Chalke <jeevan.chalke@enterprisedb.com>)
List: pgsql-hackers
On Thu, Aug 1, 2019 at 5:06 PM Jeevan Chalke
<jeevan.chalke@enterprisedb.com> wrote:
>
> On Tue, Jul 30, 2019 at 9:39 AM Jeevan Chalke
> <jeevan.chalke@enterprisedb.com> wrote:
>>
>> I am almost done writing the patch for pg_combinebackup and will post soon.
>
> Attached is the patch which implements the pg_combinebackup utility, used
> to combine a full basebackup with one or more incremental backups.
>
> I have tested it manually and it works for all best cases.
>
> Let me know if you have any inputs/suggestions/review comments?

Some comments:

1) There will be some link files created for tablespaces; we might require
special handling for them.

2)
+       while (numretries <= maxretries)
+       {
+               rc = system(copycmd);
+               if (rc == 0)
+                       return;
+
+               pg_log_info("could not copy, retrying after %d seconds",
+                                       sleeptime);
+               pg_usleep(numretries++ * sleeptime * 1000000L);
+       }

Retry functionality is handled only for copying of full files; should we
handle retries for copying of partial files as well?

3)
+                               maxretries = atoi(optarg);
+                               if (maxretries < 0)
+                               {
+                                       pg_log_error("invalid value for maxretries");
+                                       fprintf(stderr, _("%s: -r maxretries must be >= 0\n"), progname);
+                                       exit(1);
+                               }
+                               break;
+                       case 's':
+                               sleeptime = atoi(optarg);
+                               if (sleeptime <= 0 || sleeptime > 60)
+                               {
+                                       pg_log_error("invalid value for sleeptime");
+                                       fprintf(stderr, _("%s: -s sleeptime must be between 1 and 60\n"), progname);
+                                       exit(1);
+                               }
+                               break;

We could enforce a bounded range for maxretries as well, similar to sleeptime.

4)
+       fp = fopen(filename, "r");
+       if (fp == NULL)
+       {
+               pg_log_error("could not read file \"%s\": %m", filename);
+               exit(1);
+       }
+
+       labelfile = malloc(statbuf.st_size + 1);
+       if (fread(labelfile, 1, statbuf.st_size, fp) != statbuf.st_size)
+       {
+               pg_log_error("corrupted file \"%s\": %m", filename);
+               free(labelfile);
+               exit(1);
+       }

Should we check for malloc failure?

5) Should we display progress, since combining backups may take some time?
This can be added as an enhancement; we can get others' opinions on this.

6)
+                       if (nIncrDir == MAX_INCR_BK_COUNT)
+                       {
+                               pg_log_error("too many incremental backups to combine");
+                               fprintf(stderr, _("Try \"%s --help\" for more information.\n"), progname);
+                               exit(1);
+                       }
+
+                       IncrDirs[nIncrDir] = optarg;
+                       nIncrDir++;
+                       break;

If the backup count increases, providing each input may become difficult.
Could the user instead provide all the incremental backups via a single
parent folder, with the ordering of the incremental backups handled
internally? A rough sketch of this idea follows.
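Something like the below (collect_incr_backups() and compare_dirnames() are
made-up names, not from the patch; it assumes the backup directories under
the parent folder sort chronologically by name, e.g. when named by
timestamp):

#include "postgres_fe.h"

#include <dirent.h>

#include "common/fe_memutils.h"
#include "common/logging.h"

static int
compare_dirnames(const void *a, const void *b)
{
        return strcmp(*(const char *const *) a, *(const char *const *) b);
}

/*
 * Collect the incremental backup directories found under "parentdir" into
 * "dirs" (caller-allocated, capacity "maxdirs") and return their count.
 * A real implementation would also stat() each entry to keep only
 * directories, and might order by backup start time instead of by name.
 */
static int
collect_incr_backups(const char *parentdir, char *dirs[], int maxdirs)
{
        DIR                *dir;
        struct dirent *de;
        int                     n = 0;

        dir = opendir(parentdir);
        if (dir == NULL)
        {
                pg_log_error("could not open directory \"%s\": %m", parentdir);
                exit(1);
        }

        while ((de = readdir(dir)) != NULL)
        {
                /* skip "." and ".." */
                if (strcmp(de->d_name, ".") == 0 || strcmp(de->d_name, "..") == 0)
                        continue;

                if (n >= maxdirs)
                {
                        pg_log_error("too many incremental backups to combine");
                        exit(1);
                }
                dirs[n++] = pg_strdup(de->d_name);
        }
        closedir(dir);

        /* order the incremental backups oldest first before combining */
        qsort(dirs, n, sizeof(char *), compare_dirnames);

        return n;
}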
7)
+       if (isPartialFile)
+       {
+               if (verbose)
+                       pg_log_info("combining partial file \"%s.partial\"", fn);
+
+               combine_partial_files(fn, IncrDirs, nIncrDir, subdirpath, outfn);
+       }
+       else
+               copy_whole_file(infn, outfn);

Add a verbose message for copying the whole file as well.

8) We could also check whether approximately enough disk space is available
before starting to combine the backups; this can be added as an enhancement.
We can get others' opinions on this. A sketch of such a check is at the end
of this mail.

9)
+       printf(_("  -i, --incr-backup=DIRECTORY incremental backup directory (maximum %d)\n"), MAX_INCR_BK_COUNT);
+       printf(_("  -o, --output-dir=DIRECTORY  combine backup into directory\n"));
+       printf(_("\nGeneral options:\n"));
+       printf(_("  -n, --no-clean              do not clean up after errors\n"));

"combine backup into directory" could be "combined backup directory".

10)
+/* Max number of incremental backups to be combined. */
+#define MAX_INCR_BK_COUNT      10
+
+/* magic number in incremental backup's .partial file */

MAX_INCR_BK_COUNT could be increased a little: some applications take one
full backup at the beginning of the month and incremental backups for the
remaining 30 days of the month.

Regards,
Vignesh
EnterpriseDB: http://www.enterprisedb.com
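PS: A minimal sketch of the disk-space check from comment 8
(check_disk_space() is a made-up name, not from the patch; statvfs() is
POSIX, so a Windows port would need GetDiskFreeSpaceEx() instead):

#include <sys/statvfs.h>

/*
 * Fail early when the filesystem holding "outputdir" has less free space
 * than "required" bytes.  The caller would estimate "required", e.g. by
 * summing the sizes of the full backup and all .partial files.
 */
static void
check_disk_space(const char *outputdir, uint64 required)
{
        struct statvfs fs;
        uint64          avail;

        if (statvfs(outputdir, &fs) != 0)
        {
                pg_log_error("could not determine free space for \"%s\": %m", outputdir);
                exit(1);
        }

        avail = (uint64) fs.f_bavail * fs.f_frsize;
        if (avail < required)
        {
                pg_log_error("not enough disk space in \"%s\": need " UINT64_FORMAT
                                         " bytes, have " UINT64_FORMAT, outputdir, required, avail);
                exit(1);
        }
}

This would only be approximate, of course, since the combined size is not
known exactly up front.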