Re: [HACKERS] Proposal : For Auto-Prewarm. - Mailing list pgsql-hackers
From: Amit Kapila
Subject: Re: [HACKERS] Proposal : For Auto-Prewarm.
Msg-id: CAA4eK1+1Tb+6gWLhyZsVy4NEqEPhz=u4+V1wQJ9rNMk8nAs7XA@mail.gmail.com
In response to: Re: [HACKERS] Proposal : For Auto-Prewarm. (Mithun Cy <mithun.cy@enterprisedb.com>)
Responses: Re: [HACKERS] Proposal : For Auto-Prewarm.
List: pgsql-hackers
On Mon, Jun 12, 2017 at 6:31 AM, Mithun Cy <mithun.cy@enterprisedb.com> wrote:
> Thanks, Amit,

+   /* Perform autoprewarm's task. */
+   if (todo_task == TASK_PREWARM_BUFFERPOOL &&
+       !state->skip_prewarm_on_restart)

Why have you removed the above comment in the new patch? I am not pointing this out because the comment is especially meaningful, but rather because changing things between versions of a patch without informing the reviewer can increase the time needed to review. I feel you can write a better comment here.

1.
new file mode 100644
index 0000000..6c35fb7
--- /dev/null
+++ b/contrib/pg_prewarm/pg_prewarm--1.1--1.2.sql
@@ -0,0 +1,14 @@
+/* contrib/pg_prewarm/pg_prewarm--1.0--1.1.sql */

The SQL file name in the comment is wrong.

2.
+   /* Define custom GUC variables. */
+   if (process_shared_preload_libraries_in_progress)
+       DefineCustomBoolVariable("pg_prewarm.autoprewarm",
+                                "Enable/Disable auto-prewarm feature.",
+                                NULL,
+                                &autoprewarm,
+                                true,
+                                PGC_POSTMASTER,
+                                0,
+                                NULL,
+                                NULL,
+                                NULL);
+
+   DefineCustomIntVariable("pg_prewarm.dump_interval",
+                           "Sets the maximum time between two buffer pool dumps",
+                           "If set to zero, timer based dumping is disabled."
+                           " If set to -1, stops autoprewarm.",
+                           &dump_interval,
+                           AT_PWARM_DEFAULT_DUMP_INTERVAL,
+                           AT_PWARM_OFF, INT_MAX / 1000,
+                           PGC_SIGHUP,
+                           GUC_UNIT_S,
+                           NULL,
+                           NULL,
+                           NULL);
+
+   EmitWarningsOnPlaceholders("pg_prewarm");
+
+   /* If not run as a preloaded library, nothing more to do. */
+   if (!process_shared_preload_libraries_in_progress)
+       return;

a. You can easily write this code such that process_shared_preload_libraries_in_progress needs to be checked just once. Move the definition of pg_prewarm.dump_interval to the first place, and then check "if (!process_shared_preload_libraries_in_progress) return;".

b. The variable name autoprewarm isn't self-explanatory; also, if you have to search for uses of this variable in the code, it is difficult because a lot of unrelated usages can pop up.
How about naming it start_prewarm_worker or enable_autoprewarm or something like that?

3.
+static AutoPrewarmSharedState *state = NULL;

Again, the name of this variable (state) is not meaningful. How about SharedPrewarmState or something like that?

4.
+   ereport(LOG,
+           (errmsg("autoprewarm has started")));
..
+   ereport(LOG,
+           (errmsg("autoprewarm shutting down")));

How about changing these messages to "autoprewarm worker started" and "autoprewarm worker stopped" respectively?

5.
+void
+dump_block_info_periodically(void)
+{
+   TimestampTz last_dump_time = GetCurrentTimestamp();
..
+   if (TimestampDifferenceExceeds(last_dump_time,
+                                  current_time,
+                                  (dump_interval * 1000)))
+   {
+       dump_now(true);
..

With the above coding, it will not dump the very first time it tries to dump the blocks information. I think it is better if it dumps the first time and then dumps after each dump_interval. It should not be difficult to arrange the above code to do so, if you agree that is the better behavior.

6.
+dump_now(bool is_bgworker)
{
..
+   if (write(fd, buf, buflen) < buflen)
+       ereport(ERROR,
+               (errcode_for_file_access(),
+                errmsg("could not write to file \"%s\" : %m",
+                       transient_dump_file_path)));
..
}

You seem to have forgotten to close and unlink the file in the above code path. There are a lot of places in this function where you have to free memory or close the file in an error condition. You can use multiple labels to define error exit paths, something like what we have done in DecodeXLogRecord.

7.
+   for (i = 0; i < num_blocks; i++)
+   {
+       /* In case of a SIGHUP, just reload the configuration. */
+       if (got_sighup)
+       {
+           got_sighup = false;
+           ProcessConfigFile(PGC_SIGHUP);
+       }
+
+       /* Have we been asked to stop dump?
+        */
+       if (dump_interval == AT_PWARM_OFF)
+       {
+           pfree(block_info_array);
+           CloseTransientFile(fd);
+           unlink(transient_dump_file_path);
+           return 0;
+       }
+
+       buflen = sprintf(buf, "%u,%u,%u,%u,%u\n",
+                        block_info_array[i].database,
+                        block_info_array[i].tablespace,
+                        block_info_array[i].filenode,
+                        (uint32) block_info_array[i].forknum,
+                        block_info_array[i].blocknum);
+
+       if (write(fd, buf, buflen) < buflen)
+       {
+           int save_errno = errno;
+
+           CloseTransientFile(fd);
+           unlink(transient_dump_file_path);
+           errno = save_errno;
+           ereport(ERROR,
+                   (errcode_for_file_access(),
+                    errmsg("could not write to file \"%s\": %m",
+                           transient_dump_file_path)));
+       }

It seems we can club the above writes into 8K-sized writes instead of one write per block's information.

--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com