Thread: PATCH: Exclude unlogged tables from base backups
Including unlogged relations in base backups takes up space and is wasteful, since they are truncated during backup recovery.

The attached patches exclude unlogged relations from base backups, except for the init fork, which is required to recreate the main fork during recovery.

* exclude-unlogged-v1-01.patch

Some refactoring of reinit.c was required to reduce code duplication, but the coverage report showed that most of the interesting parts of reinit.c were not being tested. This patch adds test coverage for reinit.c.

* exclude-unlogged-v1-02.patch

Refactor reinit.c to allow other modules to identify and work with unlogged relation forks.

* exclude-unlogged-v1-03.patch

Exclude unlogged relation forks (except init) from pg_basebackup to save space (and time).

I decided not to try to document unlogged exclusions in the continuous backup documentation yet (they are noted in the protocol docs). I would like some input on whether the community thinks this is a good idea. It's a non-trivial procedure that would be easy to misunderstand, and it does not affect the quality of the backup other than using less space. Thoughts?

I'll add these patches to the next CF.

--
-David
david@pgmasters.net
Attachment
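For readers following the thread, the on-disk files for an unlogged table look roughly like this (the OIDs below are made up):

    base/16384/16385         main fork        <- excluded by these patches
    base/16384/16385_fsm     free space map   <- excluded
    base/16384/16385_vm      visibility map   <- excluded
    base/16384/16385_init    init fork        <- kept; recreates the main fork during recovery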
Hi, On 2017-12-12 17:49:54 -0500, David Steele wrote: > Including unlogged relations in base backups takes up space and is wasteful > since they are truncated during backup recovery. > > The attached patches exclude unlogged relations from base backups except for > the init fork, which is required to recreate the main fork during recovery. How do you reliably identify unlogged relations while writes are going on? Without locks that sounds, uh, nontrivial? > I decided not to try and document unlogged exclusions in the continuous > backup documentation yet (they are noted in the protocol docs). I would > like to get some input on whether the community thinks this is a good idea. > It's a non-trivial procedure that would be easy to misunderstand and does > not affect the quality of the backup other than using less space. Thoughts? Think it's a good idea, I've serious concerns about practicability of a correct implementation though. - Andres
Hi Andres, On 12/12/17 5:52 PM, Andres Freund wrote: > On 2017-12-12 17:49:54 -0500, David Steele wrote: >> Including unlogged relations in base backups takes up space and is wasteful >> since they are truncated during backup recovery. >> >> The attached patches exclude unlogged relations from base backups except for >> the init fork, which is required to recreate the main fork during recovery. > > How do you reliably identify unlogged relations while writes are going > on? Without locks that sounds, uh, nontrivial? I don't think this is an issue. If the init fork exists it should be OK if it is torn since it will be recreated from WAL. If the forks are written out of order (i.e. main before init), which is definitely possible, then I think worst case is some files will be backed up that don't need to be. The main fork is unlikely to be very large at that point so it doesn't seem like a big deal. I don't see this as any different than what happens during recovery. The unlogged forks are cleaned / re-inited before replay starts which is the same thing we are doing here. >> I decided not to try and document unlogged exclusions in the continuous >> backup documentation yet (they are noted in the protocol docs). I would >> like to get some input on whether the community thinks this is a good idea. >> It's a non-trivial procedure that would be easy to misunderstand and does >> not affect the quality of the backup other than using less space. Thoughts? > > Think it's a good idea, I've serious concerns about practicability of a > correct implementation though. Well, I would be happy if you had a look! Thanks. -- -David david@pgmasters.net
Hi, On 2017-12-12 18:04:44 -0500, David Steele wrote: > On 12/12/17 5:52 PM, Andres Freund wrote: > > On 2017-12-12 17:49:54 -0500, David Steele wrote: > > > Including unlogged relations in base backups takes up space and is wasteful > > > since they are truncated during backup recovery. > > > > > > The attached patches exclude unlogged relations from base backups except for > > > the init fork, which is required to recreate the main fork during recovery. > > > > How do you reliably identify unlogged relations while writes are going > > on? Without locks that sounds, uh, nontrivial? > > I don't think this is an issue. If the init fork exists it should be OK if > it is torn since it will be recreated from WAL. I'm not worried about torn pages. > If the forks are written out of order (i.e. main before init), which is > definitely possible, then I think worst case is some files will be backed up > that don't need to be. The main fork is unlikely to be very large at that > point so it doesn't seem like a big deal. > > I don't see this as any different than what happens during recovery. The > unlogged forks are cleaned / re-inited before replay starts which is the > same thing we are doing here. It's quite different - in the recovery case there's no other write activity going on. But on a normally running cluster the persistence of existing tables can get changed, and oids can get recycled. What guarantees that between the time you checked for the init fork the table hasn't been dropped, the oid reused and now a permanent relation is in its place? Greetings, Andres Freund
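A timeline sketch of the race being described here (timing, OIDs, and file names are purely illustrative):

    /*
     * t0: backup scans the directory and sees base/16384/16500_init,
     *     so it treats relfilenode 16500 as unlogged
     * t1: the table is dropped and OID/relfilenode 16500 is recycled
     *     by a new permanent table
     * t2: backup reaches base/16384/16500, which now holds permanent
     *     data, and wrongly skips it based on the stale check from t0
     */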
On Wed, Dec 13, 2017 at 8:04 AM, David Steele <david@pgmasters.net> wrote: > On 12/12/17 5:52 PM, Andres Freund wrote: >> On 2017-12-12 17:49:54 -0500, David Steele wrote: >>> >>> Including unlogged relations in base backups takes up space and is >>> wasteful >>> since they are truncated during backup recovery. >>> >>> The attached patches exclude unlogged relations from base backups except >>> for >>> the init fork, which is required to recreate the main fork during >>> recovery. >> >> >> How do you reliably identify unlogged relations while writes are going >> on? Without locks that sounds, uh, nontrivial? > > > I don't think this is an issue. If the init fork exists it should be OK if > it is torn since it will be recreated from WAL. Yeah, I was just typing that until I saw your message. > If the forks are written out of order (i.e. main before init), which is > definitely possible, then I think worst case is some files will be backed up > that don't need to be. The main fork is unlikely to be very large at that > point so it doesn't seem like a big deal. As far as I recall the init forks are logged before the main forks. I don't think that we should rely on that assumption though to be always satisfied. >>> I decided not to try and document unlogged exclusions in the continuous >>> backup documentation yet (they are noted in the protocol docs). I would >>> like to get some input on whether the community thinks this is a good >>> idea. >>> It's a non-trivial procedure that would be easy to misunderstand and does >>> not affect the quality of the backup other than using less space. >>> Thoughts? >> >> >> Think it's a good idea, I've serious concerns about practicability of a >> correct implementation though. > > Well, I would be happy if you had a look! You can count me in. I think that this patch has value for some dedicated workloads. It is a waste to backup stuff that will be removed at recovery anyway. -- Michael
On 12/12/17 6:07 PM, Andres Freund wrote: >> >> I don't see this as any different than what happens during recovery. The >> unlogged forks are cleaned / re-inited before replay starts which is the >> same thing we are doing here. > > It's quite different - in the recovery case there's no other write > activity going on. But on a normally running cluster the persistence of > existing tables can get changed, and oids can get recycled. What > guarantees that between the time you checked for the init fork the table > hasn't been dropped, the oid reused and now a permanent relation is in > its place? Well, that's a good point! How about rechecking the presence of the init fork after a main/other fork has been found? Is it possible for an init fork to still be lying around after an oid has been recycled? Seems like it could be... -- -David david@pgmasters.net
On 2017-12-12 18:18:09 -0500, David Steele wrote: > On 12/12/17 6:07 PM, Andres Freund wrote: > > > > > > I don't see this as any different than what happens during recovery. The > > > unlogged forks are cleaned / re-inited before replay starts which is the > > > same thing we are doing here. > > > > It's quite different - in the recovery case there's no other write > > activity going on. But on a normally running cluster the persistence of > > existing tables can get changed, and oids can get recycled. What > > guarantees that between the time you checked for the init fork the table > > hasn't been dropped, the oid reused and now a permanent relation is in > > its place? > > Well, that's a good point! > > How about rechecking the presence of the init fork after a main/other fork > has been found? Is it possible for an init fork to still be lying around > after an oid has been recycled? Seems like it could be... I don't see how that'd help. You could just have gone through this cycle multiple times by the time you get to rechecking. All not very likely, but I don't want us to rely on luck here... If we had a way to prevent relfilenode reuse across multiple checkpoints this'd be easier, although ALTER TABLE SET UNLOGGED still'd complicate. I guess we could have the basebackup create placeholder files that prevent relfilenode reuse, but that seems darned ugly. Greetings, Andres Freund
Hi Michael, On 12/12/17 6:08 PM, Michael Paquier wrote: > >> If the forks are written out of order (i.e. main before init), which is >> definitely possible, then I think worst case is some files will be backed up >> that don't need to be. The main fork is unlikely to be very large at that >> point so it doesn't seem like a big deal. > > As far as I recall the init forks are logged before the main forks. I > don't think that we should rely on that assumption though to be always > satisfied. Indeed, nothing is sure until a checkpoint. Until then we must assume writes are random. >> Well, I would be happy if you had a look! > > You can count me in. I think that this patch has value for some > dedicated workloads. Thanks! > It is a waste to backup stuff that will be > removed at recovery anyway. It also causes confusion when the recovered database is smaller than the backup. I can't tell you how many times I have answered this question... -- -David david@pgmasters.net
On 12/12/17 6:21 PM, Andres Freund wrote: > On 2017-12-12 18:18:09 -0500, David Steele wrote: >> On 12/12/17 6:07 PM, Andres Freund wrote: >>> >>> It's quite different - in the recovery case there's no other write >>> activity going on. But on a normally running cluster the persistence of >>> existing tables can get changed, and oids can get recycled. What >>> guarantees that between the time you checked for the init fork the table >>> hasn't been dropped, the oid reused and now a permanent relation is in >>> its place? >> >> Well, that's a good point! >> >> How about rechecking the presence of the init fork after a main/other fork >> has been found? Is it possible for an init fork to still be lying around >> after an oid has been recycled? Seems like it could be... > > I don't see how that'd help. You could just have gone through this cycle > multiple times by the time you get to rechecking. All not very likely, > but I don't want us to rely on luck here... Definitely not. > If we had a way to prevent relfilenode reuse across multiple checkpoints > this'd be easier, although ALTER TABLE SET UNLOGGED still'd complicate. Or error the backup if there is wraparound? We already have an error if a standby is promoted during backup -- so there is some precedent. > I guess we could have the basebackup create placeholder files that > prevent relfilenode reuse, but that seems darned ugly. Yes, very ugly. -- -David david@pgmasters.net
Hi, On 2017-12-12 18:30:47 -0500, David Steele wrote: > > If we had a way to prevent relfilenode reuse across multiple checkpoints > > this'd be easier, although ALTER TABLE SET UNLOGGED still'd complicate. > > Or error the backup if there is wraparound? That seems entirely unacceptable to me. On a machine with lots of toasting etc going on an oid wraparound doesn't take a long time. We've only one oid counter for all tables, and relfilenodes are inferred from that .... Greetings, Andres Freund
On 12/12/17 6:33 PM, Andres Freund wrote: > > On 2017-12-12 18:30:47 -0500, David Steele wrote: >>> If we had a way to prevent relfilenode reuse across multiple checkpoints >>> this'd be easier, although ALTER TABLE SET UNLOGGED still'd complicate. >> >> Or error the backup if there is wraparound? > > That seems entirely unacceptable to me. On a machine with lots of > toasting etc going on an oid wraparound doesn't take a long time. We've > only one oid counter for all tables, and relfilenodes are inferred from > that .... Fair enough. I'll think on it. -- -David david@pgmasters.net
Andres, * Andres Freund (andres@anarazel.de) wrote: > On 2017-12-12 18:04:44 -0500, David Steele wrote: > > If the forks are written out of order (i.e. main before init), which is > > definitely possible, then I think worst case is some files will be backed up > > that don't need to be. The main fork is unlikely to be very large at that > > point so it doesn't seem like a big deal. > > > > I don't see this as any different than what happens during recovery. The > > unlogged forks are cleaned / re-inited before replay starts which is the > > same thing we are doing here. > > It's quite different - in the recovery case there's no other write > activity going on. But on a normally running cluster the persistence of > existing tables can get changed, and oids can get recycled. What > guarantees that between the time you checked for the init fork the table > hasn't been dropped, the oid reused and now a permanent relation is in > its place? We *are* actually talking about the recovery case here because this is a backup that's happening and WAL replay will be happening after the pg_basebackup is done and then the backup restored somewhere and PG started up again. If the persistence is changed then the table will be written into the WAL, no? All of the WAL generated during a backup (which is what we're talking about here) has to be replayed after the restore is done and is before the database is considered consistent, so none of this matters, as far as I can see, because the drop table or alter table logged or anything else will be in the WAL that ends up getting replayed. If that's not correct, then isn't there a live issue here with how backups are happening today with unlogged tables and online backups? I don't think there is, because, as David points out, the unlogged tables are cleaned up first and then WAL replay happens during recovery, so the init fork will cause the relation to be overwritten, but then later the logged 'drop table' and subsequent re-use of the relfilenode to create a new table (or persistence change) will all be in the WAL and will be replayed over top and will take care of this. Thanks! Stephen
Attachment
On 12/12/17 8:48 PM, Stephen Frost wrote: > Andres, > > * Andres Freund (andres@anarazel.de) wrote: >> On 2017-12-12 18:04:44 -0500, David Steele wrote: >>> If the forks are written out of order (i.e. main before init), which is >>> definitely possible, then I think worst case is some files will be backed up >>> that don't need to be. The main fork is unlikely to be very large at that >>> point so it doesn't seem like a big deal. >>> >>> I don't see this as any different than what happens during recovery. The >>> unlogged forks are cleaned / re-inited before replay starts which is the >>> same thing we are doing here. >> >> It's quite different - in the recovery case there's no other write >> activity going on. But on a normally running cluster the persistence of >> existing tables can get changed, and oids can get recycled. What >> guarantees that between the time you checked for the init fork the table >> hasn't been dropped, the oid reused and now a permanent relation is in >> its place? > > We *are* actually talking about the recovery case here because this is a > backup that's happening and WAL replay will be happening after the > pg_basebackup is done and then the backup restored somewhere and PG > started up again. > > If the persistence is changed then the table will be written into the > WAL, no? All of the WAL generated during a backup (which is what we're > talking about here) has to be replayed after the restore is done and is > before the database is considered consistent, so none of this matters, > as far as I can see, because the drop table or alter table logged or > anything else will be in the WAL that ends up getting replayed. Yes - that's the way I see it. At least when I'm not tired from a day of coding like I was last night... > I don't think there is, because, as David points out, the unlogged > tables are cleaned up first and then WAL replay happens during recovery, > so the init fork will cause the relation to be overwritten, but then > later the logged 'drop table' and subsequent re-use of the relfilenode > to create a new table (or persistence change) will all be in the WAL and > will be replayed over top and will take care of this. Files can be copied in any order, so if an OID is recycled the backup could copy its first, second, or nth incarnation. It doesn't really matter since all of it will be clobbered by WAL replay. The new base backup code just does the non-init fork removal in advance, following the same rules that would apply on recovery given the same file set. -- -David david@pgmasters.net
Attachment
David, * David Steele (david@pgmasters.net) wrote: > On 12/12/17 8:48 PM, Stephen Frost wrote: > > I don't think there is, because, as David points out, the unlogged > > tables are cleaned up first and then WAL replay happens during recovery, > > so the init fork will cause the relation to be overwritten, but then > > later the logged 'drop table' and subsequent re-use of the relfilenode > > to create a new table (or persistence change) will all be in the WAL and > > will be replayed over top and will take care of this. > > Files can be copied in any order, so if an OID is recycled the backup > could copy its first, second, or nth incarnation. It doesn't really > matter since all of it will be clobbered by WAL replay. > > The new base backup code just does the non-init fork removal in advance, > following the same rules that would apply on recovery given the same > file set. Just to be clear- the new base backup code doesn't actually *do* the non-init fork removal, it simply doesn't include the non-init fork in the backup when there is an init fork, right? We certainly wouldn't want a basebackup actually running around removing the main fork for unlogged tables on a running and otherwise healthy system. ;) Thanks! Stephen
Attachment
On 12/13/17 10:04 AM, Stephen Frost wrote: > > Just to be clear- the new base backup code doesn't actually *do* the > non-init fork removal, it simply doesn't include the non-init fork in > the backup when there is an init fork, right? It does *not* do the unlogged non-init fork removal. The code I refactored in reinit.c is about identifying the forks, not removing them. That code is reused to determine what to exclude from the backup. I added the regression tests to ensure that the behavior of reinit.c is unchanged after the refactor. > We certainly wouldn't want a basebackup actually running around removing > the main fork for unlogged tables on a running and otherwise healthy > system. ;) That would not be good. -- -David david@pgmasters.net
On Tue, Dec 12, 2017 at 8:48 PM, Stephen Frost <sfrost@snowman.net> wrote: > If the persistence is changed then the table will be written into the > WAL, no? All of the WAL generated during a backup (which is what we're > talking about here) has to be replayed after the restore is done and is > before the database is considered consistent, so none of this matters, > as far as I can see, because the drop table or alter table logged or > anything else will be in the WAL that ends up getting replayed. I can't see a hole in this argument. If we copy the init fork and skip copying the main fork, then either we skipped copying the right file, or the file we skipped copying will be recreated with the correct contents during WAL replay anyway. We could have a problem if wal_level=minimal, because then the new file might not have been WAL-logged; but taking an online backup with wal_level=minimal isn't supported precisely because we won't have WAL replay to fix things up. We would also have a problem if the missing file caused something in recovery to croak on the grounds that the file was expected to be there, but I don't think anything works that way; I think we just assume missing files are an expected failure mode and silently do nothing if asked to remove them. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company
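For context, the settings an online base backup already depends on look roughly like this (real parameter names, illustrative values), which is why the WAL-replay argument above always holds:

    # postgresql.conf
    wal_level = replica       # 'minimal' cannot support online base backups
    max_wal_senders = 2       # pg_basebackup connects via the replication protocol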
All, I have reviewed and tested these patches. The patches applied cleanly in order against master at (90947674fc). I ran the provided regression tests and a 'check-world'. All tests succeeded. Marking ready for committer. -Adam
On Thu, Dec 14, 2017 at 11:58 PM, Robert Haas <robertmhaas@gmail.com> wrote:
> On Tue, Dec 12, 2017 at 8:48 PM, Stephen Frost <sfrost@snowman.net> wrote:
>> If the persistence is changed then the table will be written into the
>> WAL, no? All of the WAL generated during a backup (which is what we're
>> talking about here) has to be replayed after the restore is done and is
>> before the database is considered consistent, so none of this matters,
>> as far as I can see, because the drop table or alter table logged or
>> anything else will be in the WAL that ends up getting replayed.
>
> I can't see a hole in this argument. If we copy the init fork and
> skip copying the main fork, then either we skipped copying the right
> file, or the file we skipped copying will be recreated with the
> correct contents during WAL replay anyway.
>
> We could have a problem if wal_level=minimal, because then the new
> file might not have been WAL-logged; but taking an online backup with
> wal_level=minimal isn't supported precisely because we won't have WAL
> replay to fix things up.
>
> We would also have a problem if the missing file caused something in
> recovery to croak on the grounds that the file was expected to be
> there, but I don't think anything works that way; I think we just
> assume missing files are an expected failure mode and silently do
> nothing if asked to remove them.

I also couldn't see a problem in this approach.

Here are the first review comments.

+ unloggedDelim = strrchr(path, '/');

I don't think this works correctly on Windows. How about using last_dir_separator() instead?

----
+ * Find all unlogged relations in the specified directory and return their OIDs.

What ResetUnloggedRelationsHash() actually returns is a hash table, so the comment on this function seems inappropriate.

----
+ /* Part of path that contains the parent directory. */
+ int parentPathLen = unloggedDelim - path;
+
+ /*
+  * Build the unlogged relation hash if the parent path is either
+  * $PGDATA/base or a tablespace version path.
+  */
+ if (strncmp(path, "./base", parentPathLen) == 0 ||
+     (parentPathLen >= (sizeof(TABLESPACE_VERSION_DIRECTORY) - 1) &&
+      strncmp(unloggedDelim - (sizeof(TABLESPACE_VERSION_DIRECTORY) - 1),
+              TABLESPACE_VERSION_DIRECTORY,
+              sizeof(TABLESPACE_VERSION_DIRECTORY) - 1) == 0))
+     unloggedHash = ResetUnloggedRelationsHash(path);
+ }

How about using get_parent_directory() to get the parent directory name? Also, I think it's better to destroy the unloggedHash after use.

----
+ /* Exclude all forks for unlogged tables except the init fork. */
+ if (unloggedHash && ResetUnloggedRelationsMatch(
+         unloggedHash, de->d_name) == unloggedOther)
+ {
+     elog(DEBUG2, "unlogged relation file \"%s\" excluded from backup",
+          de->d_name);
+     continue;
+ }

I think it's better to log this debug message at DEBUG2 level for consistency with other messages.

----
+ ok(!-f "$tempdir/tbackup/tblspc1/$tblspc1UnloggedBackupPath",
+     'unlogged imain fork not in tablespace backup');

s/imain/main/

----
If a new unlogged relation is created after the unloggedHash is constructed but before the files are sent, we cannot exclude that relation. That would not be a problem if taking the backup is quick, since the new unlogged relation is unlikely to become very large. However, if taking a backup takes a long time, we could include a large main fork in the backup.

Regards,

--
Masahiko Sawada
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center
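For reference, a minimal sketch of the suggested substitution; last_dir_separator() is declared in src/include/port.h and handles both '/' and '\' separators:

    /* before: unloggedDelim = strrchr(path, '/'); */
    unloggedDelim = last_dir_separator(path);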
Hi Masahiko,

Thanks for the review!

On 1/22/18 3:14 AM, Masahiko Sawada wrote:
> On Thu, Dec 14, 2017 at 11:58 PM, Robert Haas <robertmhaas@gmail.com> wrote:
>>
>> We would also have a problem if the missing file caused something in
>> recovery to croak on the grounds that the file was expected to be
>> there, but I don't think anything works that way; I think we just
>> assume missing files are an expected failure mode and silently do
>> nothing if asked to remove them.
>
> I also couldn't see a problem in this approach.
>
> Here are the first review comments.
>
> + unloggedDelim = strrchr(path, '/');
>
> I don't think this works correctly on Windows. How about using
> last_dir_separator() instead?

I think this function is OK on Windows -- we use it quite a bit. However, last_dir_separator() is clearer, so I have changed it.

> ----
> + * Find all unlogged relations in the specified directory and return
> their OIDs.
>
> What ResetUnloggedRelationsHash() actually returns is a hash table, so
> the comment on this function seems inappropriate.

Fixed.

> + /* Part of path that contains the parent directory. */
> + int parentPathLen = unloggedDelim - path;
> +
> + /*
> +  * Build the unlogged relation hash if the parent path is either
> +  * $PGDATA/base or a tablespace version path.
> +  */
> + if (strncmp(path, "./base", parentPathLen) == 0 ||
> +     (parentPathLen >= (sizeof(TABLESPACE_VERSION_DIRECTORY) - 1) &&
> +      strncmp(unloggedDelim - (sizeof(TABLESPACE_VERSION_DIRECTORY) - 1),
> +              TABLESPACE_VERSION_DIRECTORY,
> +              sizeof(TABLESPACE_VERSION_DIRECTORY) - 1) == 0))
> +     unloggedHash = ResetUnloggedRelationsHash(path);
> + }
>
> How about using get_parent_directory() to get the parent directory name?

get_parent_directory() munges the string that is passed to it, which I was trying to avoid (we'd need a copy) - and I don't think it makes the rest of the logic any simpler without constructing yet another string to hold the tablespace path.

I know performance isn't the most important thing here, so if the argument is for clarity perhaps it makes sense. Otherwise I don't know if it's worth it.

> Also, I think it's better to destroy the unloggedHash after use.

Whoops! Fixed.

> + /* Exclude all forks for unlogged tables except the init fork. */
> + if (unloggedHash && ResetUnloggedRelationsMatch(
> +         unloggedHash, de->d_name) == unloggedOther)
> + {
> +     elog(DEBUG2, "unlogged relation file \"%s\" excluded from backup",
> +          de->d_name);
> +     continue;
> + }
>
> I think it's better to log this debug message at DEBUG2 level for
> consistency with other messages.

I think you mean DEBUG1? It's already at DEBUG2.

I considered using DEBUG1 but decided against it. The other exclusions will produce a limited amount of output because there are only a few of them. In the case of unlogged tables there could be any number of exclusions, and I thought that was too noisy for DEBUG1.

> + ok(!-f "$tempdir/tbackup/tblspc1/$tblspc1UnloggedBackupPath",
> +     'unlogged imain fork not in tablespace backup');
>
> s/imain/main/

Fixed.

> If a new unlogged relation is created after the unloggedHash is
> constructed but before the files are sent, we cannot exclude that
> relation. That would not be a problem if taking the backup is quick,
> since the new unlogged relation is unlikely to become very large.
> However, if taking a backup takes a long time, we could include a
> large main fork in the backup.

This is a good point. It's per database directory, which makes it a little better, but maybe not by much.

Three options here:

1) Leave it as is, knowing that unlogged relations created during the backup may be copied, and document it that way.

2) Construct a list for SendDir() to work against so the gap between creating that and creating the unlogged hash is as small as possible. The downside here is that the list may be very large and take up a lot of memory.

3) Check each file that looks like a relation in the loop to see if it has an init fork. This might affect performance since an opendir/readdir loop would be required for every relation.

Personally, I'm in favor of #1, at least for the time being. I've updated the docs as indicated in case you and Adam agree.

New patches attached.

Thanks!
--
-David
david@pgmasters.net
Attachment
>> If a new unlogged relation is created after the unloggedHash is
>> constructed but before the files are sent, we cannot exclude that
>> relation. That would not be a problem if taking the backup is quick,
>> since the new unlogged relation is unlikely to become very large.
>> However, if taking a backup takes a long time, we could include a
>> large main fork in the backup.
>
> This is a good point. It's per database directory, which makes it a
> little better, but maybe not by much.
>
> Three options here:
>
> 1) Leave it as is, knowing that unlogged relations created during the
> backup may be copied, and document it that way.
>
> 2) Construct a list for SendDir() to work against so the gap between
> creating that and creating the unlogged hash is as small as possible.
> The downside here is that the list may be very large and take up a lot
> of memory.
>
> 3) Check each file that looks like a relation in the loop to see if it
> has an init fork. This might affect performance since an
> opendir/readdir loop would be required for every relation.
>
> Personally, I'm in favor of #1, at least for the time being. I've
> updated the docs as indicated in case you and Adam agree.

I agree with #1 and feel the updated docs are reasonable and sufficient to address this case for now.

I have retested these patches against master at d6ab720360.

All tests succeed.

Marking "Ready for Committer".

-Adam
> I agree with #1 and feel the updated docs are reasonable and > sufficient to address this case for now. > > I have retested these patches against master at d6ab720360. > > All test succeed. > > Marking "Ready for Committer". Actually, marked it "Ready for Review" to wait for Masahiko to comment/agree. Masahiko, If you agree with the above, would you mind updating the status accordingly? -Adam
On 1/24/18 4:02 PM, Adam Brightwell wrote:
>>> If a new unlogged relation is created after the unloggedHash is
>>> constructed but before the files are sent, we cannot exclude that
>>> relation. That would not be a problem if taking the backup is quick,
>>> since the new unlogged relation is unlikely to become very large.
>>> However, if taking a backup takes a long time, we could include a
>>> large main fork in the backup.
>>
>> This is a good point. It's per database directory, which makes it a
>> little better, but maybe not by much.
>>
>> Three options here:
>>
>> 1) Leave it as is, knowing that unlogged relations created during the
>> backup may be copied, and document it that way.
>>
>> 2) Construct a list for SendDir() to work against so the gap between
>> creating that and creating the unlogged hash is as small as possible.
>> The downside here is that the list may be very large and take up a lot
>> of memory.
>>
>> 3) Check each file that looks like a relation in the loop to see if it
>> has an init fork. This might affect performance since an
>> opendir/readdir loop would be required for every relation.
>>
>> Personally, I'm in favor of #1, at least for the time being. I've
>> updated the docs as indicated in case you and Adam agree.
>
> I agree with #1 and feel the updated docs are reasonable and
> sufficient to address this case for now.
>
> I have retested these patches against master at d6ab720360.
>
> All tests succeed.
>
> Marking "Ready for Committer".

Thanks, Adam!

Actually, I was talking to Stephen about this, and it seems like #3 would be more practical if we just stat'd the init fork for each relation file found. I doubt the stat would add a lot of overhead, and we can track each unlogged relation in a hash table to reduce overhead even more.

I'll look at that tomorrow and see if I can work out something practical.

--
-David
david@pgmasters.net
On Thu, Jan 25, 2018 at 3:25 AM, David Steele <david@pgmasters.net> wrote:
> Hi Masahiko,
>
> Thanks for the review!
>
> On 1/22/18 3:14 AM, Masahiko Sawada wrote:
>> On Thu, Dec 14, 2017 at 11:58 PM, Robert Haas <robertmhaas@gmail.com> wrote:
>>>
>>> We would also have a problem if the missing file caused something in
>>> recovery to croak on the grounds that the file was expected to be
>>> there, but I don't think anything works that way; I think we just
>>> assume missing files are an expected failure mode and silently do
>>> nothing if asked to remove them.
>>
>> I also couldn't see a problem in this approach.
>>
>> Here are the first review comments.
>>
>> + unloggedDelim = strrchr(path, '/');
>>
>> I don't think this works correctly on Windows. How about using
>> last_dir_separator() instead?
>
> I think this function is OK on Windows -- we use it quite a bit.
> However, last_dir_separator() is clearer, so I have changed it.

Thank you for updating this. I was concerned that a separator character '/' might not work on Windows.

>> ----
>> + * Find all unlogged relations in the specified directory and return
>> their OIDs.
>>
>> What ResetUnloggedRelationsHash() actually returns is a hash table, so
>> the comment on this function seems inappropriate.
>
> Fixed.
>
>> + /* Part of path that contains the parent directory. */
>> + int parentPathLen = unloggedDelim - path;
>> +
>> + /*
>> +  * Build the unlogged relation hash if the parent path is either
>> +  * $PGDATA/base or a tablespace version path.
>> +  */
>> + if (strncmp(path, "./base", parentPathLen) == 0 ||
>> +     (parentPathLen >= (sizeof(TABLESPACE_VERSION_DIRECTORY) - 1) &&
>> +      strncmp(unloggedDelim - (sizeof(TABLESPACE_VERSION_DIRECTORY) - 1),
>> +              TABLESPACE_VERSION_DIRECTORY,
>> +              sizeof(TABLESPACE_VERSION_DIRECTORY) - 1) == 0))
>> +     unloggedHash = ResetUnloggedRelationsHash(path);
>> + }
>>
>> How about using get_parent_directory() to get the parent directory name?
>
> get_parent_directory() munges the string that is passed to it, which I
> was trying to avoid (we'd need a copy) - and I don't think it makes the
> rest of the logic any simpler without constructing yet another string to
> hold the tablespace path.

Agreed.

> I know performance isn't the most important thing here, so if the
> argument is for clarity perhaps it makes sense. Otherwise I don't know
> if it's worth it.
>
>> Also, I think it's better to destroy the unloggedHash after use.
>
> Whoops! Fixed.
>
>> + /* Exclude all forks for unlogged tables except the init fork. */
>> + if (unloggedHash && ResetUnloggedRelationsMatch(
>> +         unloggedHash, de->d_name) == unloggedOther)
>> + {
>> +     elog(DEBUG2, "unlogged relation file \"%s\" excluded from backup",
>> +          de->d_name);
>> +     continue;
>> + }
>>
>> I think it's better to log this debug message at DEBUG2 level for
>> consistency with other messages.
>
> I think you mean DEBUG1? It's already at DEBUG2.

Oops, yes I meant DEBUG1.

> I considered using DEBUG1 but decided against it. The other exclusions
> will produce a limited amount of output because there are only a few of
> them. In the case of unlogged tables there could be any number of
> exclusions, and I thought that was too noisy for DEBUG1.

IMO it's okay to output many unlogged tables for debugging purposes, but I see your point.

>> + ok(!-f "$tempdir/tbackup/tblspc1/$tblspc1UnloggedBackupPath",
>> +     'unlogged imain fork not in tablespace backup');
>>
>> s/imain/main/
>
> Fixed.

>> If a new unlogged relation is created after the unloggedHash is
>> constructed but before the files are sent, we cannot exclude that
>> relation. That would not be a problem if taking the backup is quick,
>> since the new unlogged relation is unlikely to become very large.
>> However, if taking a backup takes a long time, we could include a
>> large main fork in the backup.
>
> This is a good point. It's per database directory, which makes it a
> little better, but maybe not by much.
>
> Three options here:
>
> 1) Leave it as is, knowing that unlogged relations created during the
> backup may be copied, and document it that way.
>
> 2) Construct a list for SendDir() to work against so the gap between
> creating that and creating the unlogged hash is as small as possible.
> The downside here is that the list may be very large and take up a lot
> of memory.
>
> 3) Check each file that looks like a relation in the loop to see if it
> has an init fork. This might affect performance since an
> opendir/readdir loop would be required for every relation.
>
> Personally, I'm in favor of #1, at least for the time being. I've
> updated the docs as indicated in case you and Adam agree.

See below comment.

On Thu, Jan 25, 2018 at 6:23 AM, David Steele <david@pgmasters.net> wrote:
> On 1/24/18 4:02 PM, Adam Brightwell wrote:
> Actually, I was talking to Stephen about this, and it seems like #3 would be
> more practical if we just stat'd the init fork for each relation file
> found. I doubt the stat would add a lot of overhead, and we can track
> each unlogged relation in a hash table to reduce overhead even more.

Can readdir() handle files that are added during the loop? I think we still cannot exclude a new unlogged relation if the relation is added after we execute readdir() the first time. To completely eliminate that, we would need a sort of lock that prevents current backends from creating new unlogged relations, or we would need to do the readdir() loop multiple times to see whether any new relations were added while sending files.

If you're updating the patch to implement #3, this patch should be marked as "Waiting on Author". Once it's updated I'll review it again.

Regards,

--
Masahiko Sawada
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center
On 1/25/18 12:31 AM, Masahiko Sawada wrote:
> On Thu, Jan 25, 2018 at 3:25 AM, David Steele <david@pgmasters.net> wrote:
>>>
>>> Here are the first review comments.
>>>
>>> + unloggedDelim = strrchr(path, '/');
>>>
>>> I don't think this works correctly on Windows. How about using
>>> last_dir_separator() instead?
>>
>> I think this function is OK on Windows -- we use it quite a bit.
>> However, last_dir_separator() is clearer, so I have changed it.
>
> Thank you for updating this. I was concerned that a separator
> character '/' might not work on Windows.

Ah yes, I see what you mean now.

> On Thu, Jan 25, 2018 at 6:23 AM, David Steele <david@pgmasters.net> wrote:
>> On 1/24/18 4:02 PM, Adam Brightwell wrote:
>> Actually, I was talking to Stephen about this, and it seems like #3 would be
>> more practical if we just stat'd the init fork for each relation file
>> found. I doubt the stat would add a lot of overhead, and we can track
>> each unlogged relation in a hash table to reduce overhead even more.
>
> Can readdir() handle files that are added during the loop? I think we
> still cannot exclude a new unlogged relation if the relation is added
> after we execute readdir() the first time. To completely eliminate that,
> we would need a sort of lock that prevents current backends from creating
> new unlogged relations, or we would need to do the readdir() loop multiple
> times to see whether any new relations were added while sending files.

As far as I know, readdir() is platform-dependent in terms of how it scans the dir and whether files created after the opendir() will appear.

It shouldn't matter, though, since WAL replay will recreate those files.

> If you're updating the patch to implement #3, this patch should be
> marked as "Waiting on Author". Once it's updated I'll review it again.

Attached is a new patch that uses stat() to determine if the init fork for a relation file exists. I decided not to build a hash table, as it could use considerable memory and I didn't think it would be much faster than a simple stat() call.

The reinit.c refactor has been removed since it was no longer needed. I'll submit the tests I wrote for reinit.c as a separate patch for the next CF.

Thanks,
--
-David
david@pgmasters.net
Attachment
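A minimal sketch of the stat()-based check described above, not the actual patch; it would sit inside SendDir()'s readdir loop, and relNumber stands for the already-parsed relfilenode portion of the file name (an assumption here, with the fork/segment suffix parsing omitted):

    char        initfork[MAXPGPATH * 2];
    struct stat st;

    /*
     * If "<relfilenode>_init" exists alongside this file, the relation
     * is unlogged, so every fork except the init fork itself can be
     * skipped.
     */
    snprintf(initfork, sizeof(initfork), "%s/%s_init", path, relNumber);
    if (stat(initfork, &st) == 0)
        continue;           /* excluded from the backup */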
On Wed, Jan 24, 2018 at 1:25 PM, David Steele <david@pgmasters.net> wrote: > I think you mean DEBUG1? It's already at DEBUG2. > > I considered using DEBUG1 but decided against it. The other exclusions > will produce a limited amount of output because there are only a few of > them. In the case of unlogged tables there could be any number of > exclusions and I thought that was too noisy for DEBUG1. +1. Even DEBUG2 seems pretty chatty for a message that just tells you that something is working in an entirely expected fashion; consider DEBUG3. Fortunately, base backups are not so common that this should cause enormous log spam either way, but keeping the amount of debug output down to a reasonable level is an important goal. Before a43f1939d5dcd02f4df1604a68392332168e4be0, it wasn't really practical to run a production server with log_min_messages lower than DEBUG2, because you'd get so much log spam it would cause performance problems (and maybe fill up the disk). -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company
On Fri, Jan 26, 2018 at 4:58 AM, David Steele <david@pgmasters.net> wrote:
> On 1/25/18 12:31 AM, Masahiko Sawada wrote:
>> On Thu, Jan 25, 2018 at 3:25 AM, David Steele <david@pgmasters.net> wrote:
>>>>
>>>> Here are the first review comments.
>>>>
>>>> + unloggedDelim = strrchr(path, '/');
>>>>
>>>> I don't think this works correctly on Windows. How about using
>>>> last_dir_separator() instead?
>>>
>>> I think this function is OK on Windows -- we use it quite a bit.
>>> However, last_dir_separator() is clearer, so I have changed it.
>>
>> Thank you for updating this. I was concerned that a separator
>> character '/' might not work on Windows.
>
> Ah yes, I see what you mean now.
>
>> On Thu, Jan 25, 2018 at 6:23 AM, David Steele <david@pgmasters.net> wrote:
>>> On 1/24/18 4:02 PM, Adam Brightwell wrote:
>>> Actually, I was talking to Stephen about this, and it seems like #3 would be
>>> more practical if we just stat'd the init fork for each relation file
>>> found. I doubt the stat would add a lot of overhead, and we can track
>>> each unlogged relation in a hash table to reduce overhead even more.
>>
>> Can readdir() handle files that are added during the loop? I think we
>> still cannot exclude a new unlogged relation if the relation is added
>> after we execute readdir() the first time. To completely eliminate that,
>> we would need a sort of lock that prevents current backends from creating
>> new unlogged relations, or we would need to do the readdir() loop multiple
>> times to see whether any new relations were added while sending files.
>
> As far as I know, readdir() is platform-dependent in terms of how it
> scans the dir and whether files created after the opendir() will appear.
>
> It shouldn't matter, though, since WAL replay will recreate those files.

Yea, agreed.

>> If you're updating the patch to implement #3, this patch should be
>> marked as "Waiting on Author". Once it's updated I'll review it again.
>
> Attached is a new patch that uses stat() to determine if the init fork
> for a relation file exists. I decided not to build a hash table, as it
> could use considerable memory and I didn't think it would be much faster
> than a simple stat() call.
>
> The reinit.c refactor has been removed since it was no longer needed.
> I'll submit the tests I wrote for reinit.c as a separate patch for the
> next CF.

Thank you for updating the patch! The patch looks good to me. But I have a question: can we exclude temp tables as well? pg_basebackup includes even temp tables, but I don't think they are necessary for backups.

Regards,

--
Masahiko Sawada
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center
On Mon, Jan 29, 2018 at 07:28:22PM +0900, Masahiko Sawada wrote:
> Thank you for updating the patch! The patch looks good to me. But I
> have a question: can we exclude temp tables as well? pg_basebackup
> includes even temp tables, but I don't think they are necessary for
> backups.

They are not needed in base backups. Note that RemovePgTempFiles() does not remove temporary relfilenodes after a crash, per the comments at its top. I have not looked at the patch in detail, but if you end up not including those files in what's proposed, there is much refactoring possible.
--
Michael
Attachment
On 1/29/18 5:28 AM, Masahiko Sawada wrote:
> On Fri, Jan 26, 2018 at 4:58 AM, David Steele <david@pgmasters.net> wrote:
>>
>> Attached is a new patch that uses stat() to determine if the init fork
>> for a relation file exists. I decided not to build a hash table, as it
>> could use considerable memory and I didn't think it would be much faster
>> than a simple stat() call.
>>
>> The reinit.c refactor has been removed since it was no longer needed.
>> I'll submit the tests I wrote for reinit.c as a separate patch for the
>> next CF.
>
> Thank you for updating the patch! The patch looks good to me. But I
> have a question: can we exclude temp tables as well? pg_basebackup
> includes even temp tables, but I don't think they are necessary for
> backups.

Thank you for having another look at the patch.

Temp tables should be excluded by this code, which is already in basebackup.c:

    /* Skip temporary files */
    if (strncmp(de->d_name,
                PG_TEMP_FILE_PREFIX,
                strlen(PG_TEMP_FILE_PREFIX)) == 0)
        continue;

This looks right to me.

Thanks,
--
-David
david@pgmasters.net
On 1/29/18 9:13 AM, David Steele wrote:
> On 1/29/18 5:28 AM, Masahiko Sawada wrote:
>> But I have a question: can we exclude temp tables as well? pg_basebackup
>> includes even temp tables, but I don't think they are necessary for
>> backups.
>
> Thank you for having another look at the patch.
>
> Temp tables should be excluded by this code, which is already in
> basebackup.c:
>
>     /* Skip temporary files */
>     if (strncmp(de->d_name,
>                 PG_TEMP_FILE_PREFIX,
>                 strlen(PG_TEMP_FILE_PREFIX)) == 0)
>         continue;
>
> This looks right to me.

Whoops, my bad. Temp relations are stored in the db directories with a "t" prefix. Looks like we can take care of those easily enough, but I think it should be a separate patch.

I'll plan to submit that for CF 2018-03.

Thanks!
--
-David
david@pgmasters.net
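For reference, temp relations live in the database directories under names like t3_16450 (tBACKENDID_RELFILENODE; numbers made up), unlike temp files, which carry the PG_TEMP_FILE_PREFIX ("pgsql_tmp") matched by the check quoted above. A hedged sketch of what the follow-up exclusion might look like (not the committed patch):

    /* sketch: skip temp relations, named "t<backendID>_<relfilenode>" */
    if (de->d_name[0] == 't' && isdigit((unsigned char) de->d_name[1]))
        continue;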
On Mon, Jan 29, 2018 at 1:17 PM, David Steele <david@pgmasters.net> wrote:
> On 1/29/18 9:13 AM, David Steele wrote:
>> On 1/29/18 5:28 AM, Masahiko Sawada wrote:
>>> But I have a question: can we exclude temp tables as well? pg_basebackup
>>> includes even temp tables, but I don't think they are necessary for
>>> backups.
>> Thank you for having another look at the patch.
>>
>> Temp tables should be excluded by this code, which is already in
>> basebackup.c:
>>
>>     /* Skip temporary files */
>>     if (strncmp(de->d_name,
>>                 PG_TEMP_FILE_PREFIX,
>>                 strlen(PG_TEMP_FILE_PREFIX)) == 0)
>>         continue;
>>
>> This looks right to me.
>
> Whoops, my bad. Temp relations are stored in the db directories with a
> "t" prefix. Looks like we can take care of those easily enough, but I
> think it should be a separate patch.
>
> I'll plan to submit that for CF 2018-03.

I agree, I believe this should be a separate patch.

As for the latest patch above, I have reviewed, applied, and tested it.

It looks good to me. As well, it applies cleanly against master at (97d4445a03). All tests passed when running 'check-world'.

If it is agreed that the temp file exclusion should be submitted as a separate patch, then I will mark 'ready for committer'.

-Adam
On Tue, Jan 30, 2018 at 5:45 AM, Adam Brightwell <adam.brightwell@crunchydata.com> wrote:
> On Mon, Jan 29, 2018 at 1:17 PM, David Steele <david@pgmasters.net> wrote:
>> On 1/29/18 9:13 AM, David Steele wrote:
>>> On 1/29/18 5:28 AM, Masahiko Sawada wrote:
>>>> But I have a question: can we exclude temp tables as well? pg_basebackup
>>>> includes even temp tables, but I don't think they are necessary for
>>>> backups.
>>> Thank you for having another look at the patch.
>>>
>>> Temp tables should be excluded by this code, which is already in
>>> basebackup.c:
>>>
>>>     /* Skip temporary files */
>>>     if (strncmp(de->d_name,
>>>                 PG_TEMP_FILE_PREFIX,
>>>                 strlen(PG_TEMP_FILE_PREFIX)) == 0)
>>>         continue;
>>>
>>> This looks right to me.
>>
>> Whoops, my bad. Temp relations are stored in the db directories with a
>> "t" prefix. Looks like we can take care of those easily enough, but I
>> think it should be a separate patch.
>>
>> I'll plan to submit that for CF 2018-03.

+1

> I agree, I believe this should be a separate patch.
>
> As for the latest patch above, I have reviewed, applied, and tested it.
>
> It looks good to me. As well, it applies cleanly against master at
> (97d4445a03). All tests passed when running 'check-world'.
>
> If it is agreed that the temp file exclusion should be submitted as a
> separate patch, then I will mark 'ready for committer'.

Agreed, please mark this patch as "Ready for Committer".

Regards,

--
Masahiko Sawada
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center
On 1/29/18 8:10 PM, Masahiko Sawada wrote: > On Tue, Jan 30, 2018 at 5:45 AM, Adam Brightwell > <adam.brightwell@crunchydata.com> wrote: >> On Mon, Jan 29, 2018 at 1:17 PM, David Steele <david@pgmasters.net> wrote: >>> >>> Whoops, my bad. Temp relations are stored in the db directories with a >>> "t" prefix. Looks like we can take care of those easily enough but I >>> think it should be a separate patch. >>> >>> I'll plan to submit that for CF 2018-03. > > +1 > >> >> I agree, I believe this should be a separate patch. >> >> As for the latest patch above, I have reviewed, applied and tested it. >> >> It looks good to me. As well, it applies cleanly against master at >> (97d4445a03). All tests passed when running 'check-world'. >> >> If it is agreed that the temp file exclusion should be submitted as a >> separate patch, then I will mark 'ready for committer'. > > Agreed, please mark this patch as "Ready for Committer". I marked it just in case some enterprising committer from another time zone swoops in and picks it up. Fingers crossed! -- -David david@pgmasters.net
On 1/29/18 8:10 PM, Masahiko Sawada wrote: > On Tue, Jan 30, 2018 at 5:45 AM, Adam Brightwell >> >> If it is agreed that the temp file exclusion should be submitted as a >> separate patch, then I will mark 'ready for committer'. > > Agreed, please mark this patch as "Ready for Committer". Attached is a rebased patch that applies cleanly. Thanks, -- -David david@pgmasters.net
Attachment
Thank you, pushed

David Steele wrote:
> On 1/29/18 8:10 PM, Masahiko Sawada wrote:
>> On Tue, Jan 30, 2018 at 5:45 AM, Adam Brightwell
>>>
>>> If it is agreed that the temp file exclusion should be submitted as a
>>> separate patch, then I will mark 'ready for committer'.
>>
>> Agreed, please mark this patch as "Ready for Committer".
>
> Attached is a rebased patch that applies cleanly.
>
> Thanks,

--
Teodor Sigaev
E-mail: teodor@sigaev.ru
WWW: http://www.sigaev.ru/
On 3/23/18 12:14 PM, Teodor Sigaev wrote: > > Thank you, pushed Thank you, Teodor! I'll rebase the temp table exclusion patch and provide an updated patch soon. -- -David david@pgmasters.net
On Fri, Mar 23, 2018 at 9:51 PM, David Steele <david@pgmasters.net> wrote:
> On 3/23/18 12:14 PM, Teodor Sigaev wrote:
>>
>> Thank you, pushed

Is it just me, or is the newly added test in 010_pg_basebackup.pl failing for others too?

# Failed test 'unlogged main fork not in backup'
# at t/010_pg_basebackup.pl line 112.
t/010_pg_basebackup.pl ... 86/87 # Looks like you failed 1 test of 87.

I manually ran pg_basebackup and it correctly excludes the main fork of an unlogged table from the backup, but it consistently copies the main fork while running "make check" and thus fails the test.
Thanks,
Pavan
Pavan Deolasee http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
On Mon, Mar 26, 2018 at 1:03 PM, Pavan Deolasee <pavan.deolasee@gmail.com> wrote:
> On Fri, Mar 23, 2018 at 9:51 PM, David Steele <david@pgmasters.net> wrote:
>> On 3/23/18 12:14 PM, Teodor Sigaev wrote:
>>>
>>> Thank you, pushed
>
> Is it just me, or is the newly added test in 010_pg_basebackup.pl failing
> for others too?
>
> # Failed test 'unlogged main fork not in backup'
> # at t/010_pg_basebackup.pl line 112.
> t/010_pg_basebackup.pl ... 86/87 # Looks like you failed 1 test of 87.
This one-liner patch fixes it for me.
Thanks,
Pavan
Pavan Deolasee http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
Attachment
On Mon, Mar 26, 2018 at 4:52 PM, Pavan Deolasee <pavan.deolasee@gmail.com> wrote: > > > On Mon, Mar 26, 2018 at 1:03 PM, Pavan Deolasee <pavan.deolasee@gmail.com> > wrote: >> >> On Fri, Mar 23, 2018 at 9:51 PM, David Steele <david@pgmasters.net> wrote: >>> >>> On 3/23/18 12:14 PM, Teodor Sigaev wrote: >>> > >>> > Thank you, pushed >>> >> >> Is it just me or the newly added test in 010_pg_basebackup.pl failing for >> others too? >> >> # Failed test 'unlogged main fork not in backup' >> # at t/010_pg_basebackup.pl line 112. >> t/010_pg_basebackup.pl ... 86/87 # Looks like you failed 1 test of 87. >> > > This one-liner patch fixes it for me. > Isn't this issue already fixed by commit d0c0c894533f906b13b79813f02b2982ac675074? Regards, -- Masahiko Sawada NIPPON TELEGRAPH AND TELEPHONE CORPORATION NTT Open Source Software Center
On Mon, Mar 26, 2018 at 5:16 PM, Masahiko Sawada <sawada.mshk@gmail.com> wrote:
> On Mon, Mar 26, 2018 at 4:52 PM, Pavan Deolasee
> <pavan.deolasee@gmail.com> wrote:
>>
>> This one-liner patch fixes it for me.
>>
> Isn't this issue already fixed by commit
> d0c0c894533f906b13b79813f02b2982ac675074?
Thanks,
Pavan
--
Pavan Deolasee http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services