Thread: archive modules loose ends
Andres recently reminded me of some loose ends in archive modules [0], so
I'm starting a dedicated thread to address his feedback.

The first one is the requirement that archive module authors create their
own exception handlers if they want to make use of ERROR.  Ideally, there
would be a handler in pgarch.c so that authors wouldn't need to deal with
this.  I do see some previous discussion about this [1] in which I
expressed concerns about memory management.  Looking at this again, I may
have been overthinking it.  IIRC I was thinking about creating a memory
context that would be switched into for only the archiving callback (and
reset afterwards), but that might not be necessary.  Instead, we could rely
on module authors to handle this.  One example is basic_archive, which
maintains its own memory context.  Alternatively, authors could simply
pfree() anything that was allocated.

Furthermore, by moving the exception handling to pgarch.c, module authors
can begin using PG_TRY, etc. in their archiving callbacks, which simplifies
things a bit.  I've attached a work-in-progress patch for this change.

On Fri, Feb 17, 2023 at 11:41:32AM -0800, Andres Freund wrote:
> On 2023-02-16 13:58:10 -0800, Nathan Bossart wrote:
>> On Thu, Feb 16, 2023 at 01:17:54PM -0800, Andres Freund wrote:
>> > I'm quite baffled by:
>> > 	/* Close any files left open by copy_file() or compare_files() */
>> > 	AtEOSubXact_Files(false, InvalidSubTransactionId, InvalidSubTransactionId);
>> >
>> > in basic_archive_file().  It seems *really* off to call
>> > AtEOSubXact_Files() completely outside the context of a transaction
>> > environment.  And it only does the thing you want because you pass
>> > parameters that aren't actually valid in the normal use in
>> > AtEOSubXact_Files().  I really don't understand how that's supposed to
>> > be ok.
>>
>> Hm.  Should copy_file() and compare_files() have PG_FINALLY blocks that
>> attempt to close the files instead?  What would you recommend?
> I don't fully know, it's not entirely clear to me what the goals here
> were.  I think you'd likely need to do a bit of infrastructure work to do
> this sanely.  So far we just didn't have the need to handle files being
> released in a way like you want to do there.
>
> I suspect a good direction would be to use resource owners.  Add a
> separate set of functions that release files on resource owner release.
> Most of the infrastructure is there already, for temporary files
> (c.f. OpenTemporaryFile()).
>
> Then that resource owner could be reset in case of error.
>
> I'm not even sure that erroring out is a reasonable way to implement
> copy_file(), compare_files(), particularly because you want to return via
> a return code from basic_archive_files().

To initialize this thread, I'll provide a bit more background.
basic_archive makes use of copy_file(), and it introduces a function called
compare_files() that is used to check whether two files have the same
content.  These functions make use of OpenTransientFile() and
CloseTransientFile().  In basic_archive's sigsetjmp() block, there's a call
to AtEOSubXact_Files() to make sure we close any files that are open when
there is an ERROR.  IIRC I was following the example set by other processes
that make use of the AtEOXact* functions in their sigsetjmp() blocks.

Looking again, I think AtEOXact_Files() would also work for basic_archive's
use-case.  That would at least avoid the hack of using
InvalidSubTransactionId for the second and third arguments.  From the
feedback quoted above, it sounds like improving this further will require a
bit of infrastructure work.  I haven't looked too deeply into this yet.

[0] https://postgr.es/m/20230216192956.mhi6uiakchkolpki%40awork3.anarazel.de
[1] https://postgr.es/m/20220202224433.GA1036711%40nathanxps13

-- 
Nathan Bossart
Amazon Web Services: https://aws.amazon.com
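[Editor's note: the control flow being proposed, a single exception handler
owned by the caller so that callbacks can raise errors freely, can be
illustrated with a plain-C analogy.  This is not PostgreSQL code: setjmp()
and longjmp() stand in for sigsetjmp() and ereport(ERROR, ...), and all of
the names below are invented for the sketch.]

```c
#include <setjmp.h>

/*
 * The caller (standing in for pgarch.c) owns the jump buffer, so the
 * callback (standing in for an archive module) can "ERROR" without
 * setting up any handler of its own.
 */
static jmp_buf exception_stack;

/* Stand-in for ereport(ERROR, ...): unwind to the caller's handler. */
static void
raise_error(void)
{
    longjmp(exception_stack, 1);
}

/* Stand-in for an archive module's archiving callback. */
static int
archive_file_cb(int fail)
{
    if (fail)
        raise_error();          /* no local handler needed */
    return 1;                   /* archived successfully */
}

/*
 * Stand-in for the archiver: catch the "ERROR" and report failure so the
 * file would be retried, instead of the whole process restarting.
 */
static int
call_archive_callback(int fail)
{
    if (setjmp(exception_stack) != 0)
        return 0;               /* "ERROR" caught: report failure */
    return archive_file_cb(fail);
}
```

With the handler hoisted into the caller like this, a callback that fails
partway through simply unwinds; the open question in the thread is what
cleanup (files, memory, locks) the caller must then perform.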
Attachment
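[Editor's note: the resource-owner direction Andres suggests, releasing any
files still held when an error unwinds, might be sketched as follows.  This
toy does not use PostgreSQL's actual ResourceOwner API; every name in it is
hypothetical.]

```c
#include <stdio.h>

/*
 * Toy "owner" that tracks open files so the error path can release them
 * wholesale, instead of each call site needing its own cleanup handler.
 */
typedef struct ToyOwner
{
    FILE   *files[8];
    int     nfiles;
} ToyOwner;

/* Register an already-opened file with the owner. */
static FILE *
owner_adopt(ToyOwner *owner, FILE *f)
{
    if (f != NULL && owner->nfiles < 8)
        owner->files[owner->nfiles++] = f;
    return f;
}

/*
 * Release everything the owner still holds, e.g. after an error.
 * Returns the number of files closed.
 */
static int
owner_release(ToyOwner *owner)
{
    int     closed = 0;

    while (owner->nfiles > 0)
    {
        fclose(owner->files[--owner->nfiles]);
        closed++;
    }
    return closed;
}
```

In the real system this is roughly what resource owner release does for
temporary files; the suggestion is to extend that to the transient files
used by copy_file() and compare_files().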
There seems to be no interest in this patch, so I plan to withdraw it from
the commitfest system by the end of the month unless such interest
materializes.

On Fri, Feb 17, 2023 at 01:56:24PM -0800, Nathan Bossart wrote:
> The first one is the requirement that archive module authors create their
> own exception handlers if they want to make use of ERROR.  Ideally, there
> would be a handler in pgarch.c so that authors wouldn't need to deal with
> this.  I do see some previous discussion about this [1] in which I
> expressed concerns about memory management.  Looking at this again, I may
> have been overthinking it.  IIRC I was thinking about creating a memory
> context that would be switched into for only the archiving callback (and
> reset afterwards), but that might not be necessary.  Instead, we could
> rely on module authors to handle this.  One example is basic_archive,
> which maintains its own memory context.  Alternatively, authors could
> simply pfree() anything that was allocated.
>
> Furthermore, by moving the exception handling to pgarch.c, module authors
> can begin using PG_TRY, etc. in their archiving callbacks, which
> simplifies things a bit.  I've attached a work-in-progress patch for this
> change.

I took another look at this, and I think I remembered what I was worried
about with memory management.  One example is the built-in shell archiving.
Presently, whenever there is an ERROR during archiving via shell, it gets
bumped up to FATAL because the archiver operates at the bottom of the
exception stack.  Consequently, there's no need to worry about managing
memory contexts to ensure that palloc'd memory is cleaned up after an
error.  With the attached patch, we no longer call the archiving callback
while we're at the bottom of the exception stack, so ERRORs no longer get
bumped up to FATALs, and any palloc'd memory won't be freed.  I see two
main options for dealing with this.
One option is to simply have shell_archive (and any other archive modules
out there) maintain its own memory context like basic_archive does.  This
ends up requiring a whole lot of duplicate code between the two built-in
modules, though.  Another option is to have the archiver manage a memory
context that it resets after every invocation of the archiving callback,
ERROR or not.  This has the advantage of avoiding code duplication and
simplifying things for the built-in modules, but any external modules that
rely on palloc'd state being long-lived would need to be adjusted to manage
their own long-lived context.  (This would need to be appropriately
documented.)  However, I'm not aware of any archive modules that would be
impacted by this.

The attached patch is an attempt at the latter option.  As I noted above,
this probably deserves some discussion in the archive modules
documentation, but I don't intend to spend too much more time on this patch
right now given it is likely going to be withdrawn.

-- 
Nathan Bossart
Amazon Web Services: https://aws.amazon.com
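[Editor's note: the second option, the archiver resetting a per-invocation
context whether or not the callback failed, can be sketched in plain C.
The arena below is only a stand-in for a MemoryContext; none of these
names exist in PostgreSQL, and the fixed-size buffer has no bounds
checking beyond what a toy needs.]

```c
#include <stddef.h>

/* Toy bump allocator standing in for a per-call MemoryContext. */
typedef struct ToyArena
{
    char    buf[1024];
    size_t  used;
} ToyArena;

static void *
arena_alloc(ToyArena *a, size_t size)
{
    void   *p = a->buf + a->used;

    a->used += size;
    return p;
}

static void
arena_reset(ToyArena *a)
{
    a->used = 0;
}

/* Stand-in archiving callback: allocates scratch memory, then succeeds
 * or fails.  On failure it never gets a chance to free anything. */
static int
toy_callback(ToyArena *a, int fail)
{
    (void) arena_alloc(a, 100);
    return fail ? 0 : 1;
}

/* The caller resets the arena whether or not the callback succeeded,
 * so callback allocations can never outlive one invocation. */
static int
call_with_reset(ToyArena *a, int fail)
{
    int     ret = toy_callback(a, fail);

    arena_reset(a);             /* ERROR or not */
    return ret;
}
```

The trade-off discussed above falls out directly: any state the callback
wants to keep across invocations must live somewhere other than this
per-call arena.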
Attachment
Hi,

On 2023-11-13 16:42:31 -0600, Nathan Bossart wrote:
> There seems to be no interest in this patch, so I plan to withdraw it
> from the commitfest system by the end of the month unless such interest
> materializes.

I think it might just have arrived too shortly before the feature freeze to
be worth looking at at the time, and then it didn't really re-raise
attention until now.  I'm so far behind on keeping up with the list that I
rarely end up looking far back for things I'd like to have answered...
Sorry.

I think it's somewhat important to fix this - having a dedicated "recover
from error" implementation in a bunch of extension modules seems quite
likely to cause problems down the line, when another type of resource needs
to be dealt with after errors.  I think many non-toy implementations would
e.g. need to release lwlocks in case of errors (e.g. because they use a
shared hashtable to queue jobs for workers or such).

> On Fri, Feb 17, 2023 at 01:56:24PM -0800, Nathan Bossart wrote:
> > The first one is the requirement that archive module authors create
> > their own exception handlers if they want to make use of ERROR.
> > Ideally, there would be a handler in pgarch.c so that authors wouldn't
> > need to deal with this.  I do see some previous discussion about this
> > [1] in which I expressed concerns about memory management.  Looking at
> > this again, I may have been overthinking it.  IIRC I was thinking about
> > creating a memory context that would be switched into for only the
> > archiving callback (and reset afterwards), but that might not be
> > necessary.  Instead, we could rely on module authors to handle this.
> > One example is basic_archive, which maintains its own memory context.
> > Alternatively, authors could simply pfree() anything that was
> > allocated.
> >
> > Furthermore, by moving the exception handling to pgarch.c, module
> > authors can begin using PG_TRY, etc. in their archiving callbacks,
> > which simplifies things a bit.  I've attached a work-in-progress patch
> > for this change.
>
> I took another look at this, and I think I remembered what I was worried
> about with memory management.  One example is the built-in shell
> archiving.  Presently, whenever there is an ERROR during archiving via
> shell, it gets bumped up to FATAL because the archiver operates at the
> bottom of the exception stack.  Consequently, there's no need to worry
> about managing memory contexts to ensure that palloc'd memory is cleaned
> up after an error.  With the attached patch, we no longer call the
> archiving callback while we're at the bottom of the exception stack, so
> ERRORs no longer get bumped up to FATALs, and any palloc'd memory won't
> be freed.
>
> I see two main options for dealing with this.  One option is to simply
> have shell_archive (and any other archive modules out there) maintain its
> own memory context like basic_archive does.  This ends up requiring a
> whole lot of duplicate code between the two built-in modules, though.
> Another option is to have the archiver manage a memory context that it
> resets after every invocation of the archiving callback, ERROR or not.

I think passing in a short-lived memory context is a lot nicer to deal
with.

> This has the advantage of avoiding code duplication and simplifying
> things for the built-in modules, but any external modules that rely on
> palloc'd state being long-lived would need to be adjusted to manage their
> own long-lived context.  (This would need to be appropriately
> documented.)

Alternatively we could provide a longer-lived memory context in
ArchiveModuleState, set up by the generic infrastructure.  That context
would obviously still need to be explicitly utilized by a module, but no
duplicated setup code would be required.

> /*
>  * check_archive_directory
>  *
> @@ -172,67 +147,19 @@ basic_archive_configured(ArchiveModuleState *state)
>  static bool
>  basic_archive_file(ArchiveModuleState *state, const char *file, const char *path)
>  {
> ...
> +	PG_TRY();
> +	{
> +		/* Archive the file! */
> +		basic_archive_file_internal(file, path);
> +	}
> +	PG_CATCH();
>  	{
> -		/* Since not using PG_TRY, must reset error stack by hand */
> -		error_context_stack = NULL;
> -
> -		/* Prevent interrupts while cleaning up */
> -		HOLD_INTERRUPTS();
> -
> -		/* Report the error and clear ErrorContext for next time */
> -		EmitErrorReport();
> -		FlushErrorState();
> -
>  		/* Close any files left open by copy_file() or compare_files() */
> -		AtEOSubXact_Files(false, InvalidSubTransactionId, InvalidSubTransactionId);
> -
> -		/* Reset our memory context and switch back to the original one */
> -		MemoryContextSwitchTo(oldcontext);
> -		MemoryContextReset(basic_archive_context);
> -
> -		/* Remove our exception handler */
> -		PG_exception_stack = NULL;
> +		AtEOXact_Files(false);
>
> -		/* Now we can allow interrupts again */
> -		RESUME_INTERRUPTS();
> -
> -		/* Report failure so that the archiver retries this file */
> -		return false;
> +		PG_RE_THROW();
>  	}

I think we should just have the AtEOXact_Files() in pgarch.c, then no
PG_TRY/CATCH is needed here.  At the moment I think just about every
possible use of an archive module would require using files, so there
doesn't seem much of a reason to not handle it in pgarch.c.

I'd probably reset a few other subsystems at the same time (there's
probably more):
- disable_all_timeouts()
- LWLockReleaseAll()
- ConditionVariableCancelSleep()
- pgstat_report_wait_end()
- ReleaseAuxProcessResources()

> @@ -511,7 +519,58 @@ pgarch_archiveXlog(char *xlog)
>  	snprintf(activitymsg, sizeof(activitymsg), "archiving %s", xlog);
>  	set_ps_display(activitymsg);
>
> -	ret = ArchiveCallbacks->archive_file_cb(archive_module_state, xlog, pathname);
> +	oldcontext = MemoryContextSwitchTo(archive_context);
> +
> +	/*
> +	 * Since the archiver operates at the bottom of the exception stack,
> +	 * ERRORs turn into FATALs and cause the archiver process to restart.
> +	 * However, using ereport(ERROR, ...) when there are problems is easy
> +	 * to code and maintain.  Therefore, we create our own exception
> +	 * handler to catch ERRORs and return false instead of restarting the
> +	 * archiver whenever there is a failure.
> +	 */
> +	if (sigsetjmp(local_sigjmp_buf, 1) != 0)
> +	{
> +		/* Since not using PG_TRY, must reset error stack by hand */
> +		error_context_stack = NULL;
> +
> +		/* Prevent interrupts while cleaning up */
> +		HOLD_INTERRUPTS();
> +
> +		/* Report the error and clear ErrorContext for next time */
> +		EmitErrorReport();
> +		MemoryContextSwitchTo(oldcontext);
> +		FlushErrorState();
> +
> +		/* Flush any leaked data */
> +		MemoryContextReset(archive_context);
> +
> +		/* Remove our exception handler */
> +		PG_exception_stack = NULL;
> +
> +		/* Now we can allow interrupts again */
> +		RESUME_INTERRUPTS();
> +
> +		/* Report failure so that the archiver retries this file */
> +		ret = false;
> +	}
> +	else
> +	{
> +		/* Enable our exception handler */
> +		PG_exception_stack = &local_sigjmp_buf;
> +
> +		/* Archive the file! */
> +		ret = ArchiveCallbacks->archive_file_cb(archive_module_state,
> +												xlog, pathname);
> +
> +		/* Remove our exception handler */
> +		PG_exception_stack = NULL;
> +
> +		/* Reset our memory context and switch back to the original one */
> +		MemoryContextSwitchTo(oldcontext);
> +		MemoryContextReset(archive_context);
> +	}

It could be worth setting up an errcontext providing the module and file
that's being processed.  I personally find that at least as important as
setting up a ps string detailing the log file...  But I guess that could be
a separate patch.

It'd be nice to add a comment explaining why pgarch_archiveXlog() is the
right place to handle errors.

Greetings,

Andres Freund
On Mon, Nov 13, 2023 at 03:35:28PM -0800, Andres Freund wrote:
> On 2023-11-13 16:42:31 -0600, Nathan Bossart wrote:
>> There seems to be no interest in this patch, so I plan to withdraw it
>> from the commitfest system by the end of the month unless such interest
>> materializes.
>
> I think it might just have arrived too shortly before the feature freeze
> to be worth looking at at the time, and then it didn't really re-raise
> attention until now.  I'm so far behind on keeping up with the list that
> I rarely end up looking far back for things I'd like to have answered...
> Sorry.

No worries.  I appreciate the review.

>> I see two main options for dealing with this.  One option is to simply
>> have shell_archive (and any other archive modules out there) maintain
>> its own memory context like basic_archive does.  This ends up requiring
>> a whole lot of duplicate code between the two built-in modules, though.
>> Another option is to have the archiver manage a memory context that it
>> resets after every invocation of the archiving callback, ERROR or not.
>
> I think passing in a short-lived memory context is a lot nicer to deal
> with.

Cool.

>> This has the advantage of avoiding code duplication and simplifying
>> things for the built-in modules, but any external modules that rely on
>> palloc'd state being long-lived would need to be adjusted to manage
>> their own long-lived context.  (This would need to be appropriately
>> documented.)
>
> Alternatively we could provide a longer-lived memory context in
> ArchiveModuleState, set up by the generic infrastructure.  That context
> would obviously still need to be explicitly utilized by a module, but no
> duplicated setup code would be required.

Sure.  Right now, I'm not sure there's too much need for that.  A module
could just throw stuff in TopMemoryContext, and you probably wouldn't have
any leaks because the archiver just restarts on any ERROR or
archive_library change.
But that's probably not a pattern we want to encourage long-term.  I'll jot
this down for a follow-up patch idea.

> I think we should just have the AtEOXact_Files() in pgarch.c, then no
> PG_TRY/CATCH is needed here.  At the moment I think just about every
> possible use of an archive module would require using files, so there
> doesn't seem much of a reason to not handle it in pgarch.c.

WFM

> I'd probably reset a few other subsystems at the same time (there's
> probably more):
> - disable_all_timeouts()
> - LWLockReleaseAll()
> - ConditionVariableCancelSleep()
> - pgstat_report_wait_end()
> - ReleaseAuxProcessResources()

I looked around a bit and thought AtEOXact_HashTables() belonged here as
well.  I'll probably give this one another pass to see if there's anything
else obvious.

> It could be worth setting up an errcontext providing the module and file
> that's being processed.  I personally find that at least as important as
> setting up a ps string detailing the log file...  But I guess that could
> be a separate patch.

Indeed.  Right now we rely on the module to emit sufficiently-detailed
logs, but it'd be nice if they got that for free.

> It'd be nice to add a comment explaining why pgarch_archiveXlog() is the
> right place to handle errors.

Will do.

-- 
Nathan Bossart
Amazon Web Services: https://aws.amazon.com
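[Editor's note: the list of subsystem resets being discussed could be
pictured as one central table of cleanup hooks run on the archiver's error
path, so no individual module has to remember them.  This is only an
illustration with invented names and counters; the real pgarch.c would
call the actual functions (AtEOXact_Files(), LWLockReleaseAll(),
disable_all_timeouts(), and so on) directly.]

```c
/* Counters standing in for the observable effect of each real reset. */
static int files_closed;
static int locks_released;
static int timeouts_disabled;

static void reset_files(void)    { files_closed++; }
static void reset_locks(void)    { locks_released++; }
static void reset_timeouts(void) { timeouts_disabled++; }

typedef void (*reset_hook)(void);

/* One central list: adding a new subsystem means adding one entry here,
 * not touching every archive module. */
static const reset_hook error_resets[] = {
    reset_files,
    reset_locks,
    reset_timeouts,
};

/* Run every registered reset exactly once on the error path; returns the
 * number of hooks invoked. */
static int
run_error_resets(void)
{
    int     n = (int) (sizeof(error_resets) / sizeof(error_resets[0]));

    for (int i = 0; i < n; i++)
        error_resets[i]();
    return n;
}
```

The design point is the same one made above: cleanup concerns belong in
the archiver's single error handler, not duplicated across modules.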
Here is a new version of the patch with feedback addressed.

-- 
Nathan Bossart
Amazon Web Services: https://aws.amazon.com
Attachment
> On Nov 29, 2023, at 01:18, Nathan Bossart <nathandbossart@gmail.com> wrote:
>
> Here is a new version of the patch with feedback addressed.
>
> --
> Nathan Bossart
> Amazon Web Services: https://aws.amazon.com

Hi Nathan,

The patch looks good to me.  With the context explained in the thread, the
patch is easy to understand.  The patch serves as a refactoring which pulls
up common memory management and error handling concerns into pgarch.c.
With the patch, individual archive callbacks can focus on copying the files
and leave the boilerplate code to pgarch.c.

The patch applies cleanly to HEAD.  “make check-world” also runs cleanly
with no error.

Regards,
Yong
On Mon, Jan 15, 2024 at 12:21:44PM +0000, Li, Yong wrote:
> The patch looks good to me.  With the context explained in the thread,
> the patch is easy to understand.  The patch serves as a refactoring which
> pulls up common memory management and error handling concerns into
> pgarch.c.  With the patch, individual archive callbacks can focus on
> copying the files and leave the boilerplate code to pgarch.c.
>
> The patch applies cleanly to HEAD.  “make check-world” also runs cleanly
> with no error.

Thanks for reviewing.  I've marked this as ready-for-committer, and I'm
hoping to commit it in the near future.

-- 
Nathan Bossart
Amazon Web Services: https://aws.amazon.com
On Mon, Jan 15, 2024 at 08:50:25AM -0600, Nathan Bossart wrote:
> Thanks for reviewing.  I've marked this as ready-for-committer, and I'm
> hoping to commit it in the near future.

This one probably ought to go into v17, but I wanted to do one last call
for feedback prior to committing.

-- 
Nathan Bossart
Amazon Web Services: https://aws.amazon.com
On Tue, Mar 26, 2024 at 02:14:14PM -0500, Nathan Bossart wrote:
> On Mon, Jan 15, 2024 at 08:50:25AM -0600, Nathan Bossart wrote:
>> Thanks for reviewing.  I've marked this as ready-for-committer, and I'm
>> hoping to commit it in the near future.
>
> This one probably ought to go into v17, but I wanted to do one last call
> for feedback prior to committing.

Committed.

-- 
Nathan Bossart
Amazon Web Services: https://aws.amazon.com