Re: Proposing pg_hibernate - Mailing list pgsql-hackers
From: MauMau
Subject: Re: Proposing pg_hibernate
Msg-id: 97D312C9E2B743F3B67CB8FD858CA0F4@maumau
In response to: Re: Proposing pg_hibernate (Gurjeet Singh <gurjeet@singh.im>)
List: pgsql-hackers
Hello,

I've finished reviewing the code. I have already marked this patch as "waiting on author". I'll wait for the revised patch, and then proceed to running the program once the patch seems reasonable.

(12) Like worker_spi, save and restore errno in the signal handlers. (A minimal sketch of the pattern appears after item (26).)

(13) Remove the following comment, and similar ones if any, because this module is to be included in 9.5 and above.

    /*
     * In Postgres version 9.4 and above, we use the dynamic background worker
     * infrastructure for BlockReaders, and the BufferSaver process does the
     * legwork of registering the BlockReader workers.
     */

(14) Expand the bodies of the following functions at their call sites and remove the function definitions, because they are called only once. That makes it straightforward to see the processing where it actually happens.

* DefineGUCs
* CreateDirectory

(15) I don't understand the use case of pg_hibernator.enabled. Is this really necessary? In what situations does this parameter "really" matter? If it's not crucial, why don't we remove it and make the specification simpler?

(16) As mentioned above, you can remove CreateDirectory(). Instead, just mkdir() unconditionally and check the result, without stat()ing first. (See the sketch after item (26).)

(17) Use AllocateDir/ReadDir/FreeDir instead of opendir/readdir/closedir in the server process, and follow the error handling style of other source files that use AllocateDir. (See the sketch after item (26).)

(18) The local variable hibernate_dir appears to be unnecessary because it always holds SAVE_LOCATION as its value. If so, remove it.

(19) I think the severity below would be better as WARNING, but I don't insist.

    ereport(LOG,
            (errmsg("registration of background worker failed")));

(20) "iff" should be "if".

    /* Remove the element from pending list iff we could register a worker successfully. */

(21) How is this true? Does the shared system catalog always have at least one shared buffer?

    /* Make sure the first buffer we save belongs to global object. */
    Assert(buf->database == InvalidOid);
    ...
    * Special case for global objects. The sort brings them to the
    * front of the list.

(22) The format should be %u, not %d.

    ereport(log_level,
            (errmsg("writer: writing block db %d filenode %d forknum %d blocknum %d",
                    database_counter, prev_filenode, prev_forknum, buf->blocknum)));

(23) Why is the first while loop in BlockReaderMain() necessary? Isn't just opening the target save file enough?

(24) Use MemoryContextAllocHuge(); palloc() can only allocate chunks up to 1GB. (See the sketch after item (26).)

     * This is not a concern as of now, so deferred; there's at least one other
     * place that allocates (NBuffers * (much_bigger_struct)), so this seems to
     * be an acceptable practice.
     */
    saved_buffers = (SavedBuffer *) palloc(sizeof(SavedBuffer) * NBuffers);

(25) It would be better for the .save files to be created per tablespace, not per database. Tablespaces are more likely to be placed on different storage devices for I/O distribution and capacity, so it would be more natural to expect that buffers can be restored more quickly by letting multiple Block Readers do parallel I/O against different storage devices.

(26) Reading the description below in the documentation, it seems that Block Readers can exit with 0 upon successful completion, because bgw_restart_time is set to BGW_NEVER_RESTART.

"If bgw_restart_time for a background worker is configured as BGW_NEVER_RESTART, or if it exits with an exit code of 0 or is terminated by TerminateBackgroundWorker, it will be automatically unregistered by the postmaster on exit."
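For point (12), here is a minimal sketch of the errno-preserving handler pattern that worker_spi uses; the handler and flag names are only illustrative, not the patch's actual identifiers:

    /* Sketch only: save and restore errno around any work done in the handler. */
    static volatile sig_atomic_t got_sigterm = false;

    static void
    pg_hibernator_sigterm(SIGNAL_ARGS)
    {
        int     save_errno = errno;

        got_sigterm = true;
        SetLatch(&MyProc->procLatch);   /* wake the main loop */

        errno = save_errno;             /* restore before returning */
    }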
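For point (16), a sketch of the unconditional mkdir() approach, assuming SAVE_LOCATION is the directory name; the error message wording is mine, not the patch's:

    /* Try to create the directory; an already-existing directory is fine. */
    if (mkdir(SAVE_LOCATION, S_IRWXU) < 0 && errno != EEXIST)
        ereport(ERROR,
                (errcode_for_file_access(),
                 errmsg("could not create directory \"%s\": %m",
                        SAVE_LOCATION)));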
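For point (17), the usual backend directory-scan idiom looks roughly like this; the loop body is only a placeholder:

    DIR           *dir;
    struct dirent *dent;

    dir = AllocateDir(SAVE_LOCATION);
    while ((dent = ReadDir(dir, SAVE_LOCATION)) != NULL)
    {
        if (strcmp(dent->d_name, ".") == 0 ||
            strcmp(dent->d_name, "..") == 0)
            continue;

        /* inspect the *.save file and register a Block Reader here */
    }
    FreeDir(dir);

ReadDir() itself reports failures with ereport(), which gives you the error-handling style used elsewhere in the backend.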
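For point (24), the change would be roughly the following; CurrentMemoryContext is used only for illustration:

    /* Avoid the 1GB palloc limit when NBuffers is very large. */
    saved_buffers = (SavedBuffer *)
        MemoryContextAllocHuge(CurrentMemoryContext,
                               sizeof(SavedBuffer) * (Size) NBuffers);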
(27) As others have said, I also think the Buffer Saver should first write to a temporary file, say *.tmp, and then rename it to *.save upon completion. That prevents the Block Reader from reading a possibly corrupt, half-written file that does not represent any hibernated state. (A minimal sketch of this pattern appears in the postscript below.)

Regards
MauMau
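P.S. A sketch of the write-then-rename pattern from point (27); the file names and the database variable are only illustrative assumptions:

    char    tmppath[MAXPGPATH];
    char    savepath[MAXPGPATH];

    snprintf(tmppath, sizeof(tmppath), "%s/%u.save.tmp", SAVE_LOCATION, database);
    snprintf(savepath, sizeof(savepath), "%s/%u.save", SAVE_LOCATION, database);

    /* ... write and fsync the buffer list into tmppath ... */

    /* Publish the file atomically, so readers never see a partial file. */
    if (rename(tmppath, savepath) < 0)
        ereport(ERROR,
                (errcode_for_file_access(),
                 errmsg("could not rename file \"%s\" to \"%s\": %m",
                        tmppath, savepath)));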