Thread: add checkpoint stats of snapshot and mapping files of pg_logical dir

add checkpoint stats of snapshot and mapping files of pg_logical dir

From
Bharath Rupireddy
Date:
Hi,

At times, there can be many snapshot and mapping files under the
pg_logical dir that the checkpoint has to delete/fsync based on the
cutoff LSN, which can increase the checkpoint time. The
user/admin/developer might want answers to questions like "what's the
total time spent deleting these files?", "what's the total size of the
files deleted?" or "what's the cutoff LSN with which these files were
deleted?". How about adding these stats to the existing checkpoint
stats? IMO this information will be super useful. Attaching a patch
doing that. The output looks like [1].

Thoughts?

[1]
2021-10-29 18:48:10.731 UTC [615566] LOG:  checkpoint complete: wrote
449 buffers (2.7%); 0 WAL file(s) added, 0 removed, 0 recycled;
write=0.008 s, sync=0.035 s, total=0.081 s; sync files=2,
longest=0.033 s, average=0.018 s; distance=6338 kB, estimate=6338 kB;
snapbuild snapshot file(s) removed=4, size=516 bytes, time=0.001 s,
cutoff LSN=0/14C8E08; logical rewrite mapping file(s)
removed/synced=0, size=0 bytes, time=0.001 s, cutoff LSN=0/14C8E08
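
(For anyone skimming: the bookkeeping behind these numbers boils down to a
handful of counters filled in while the checkpoint walks the pg_logical
snapshots/mappings directories. The sketch below is only an illustration;
the struct name and exact field definitions are in the attached patch, not
here.)

    /* illustrative sketch of the new per-checkpoint counters */
    typedef struct CheckpointLogicalStats
    {
        uint64      repl_snap_files_rmvd_cnt;   /* snapbuild snapshot files removed */
        uint64      repl_snap_files_rmvd_sz;    /* total size of removed snapshot files */
        XLogRecPtr  repl_snap_cutoff_lsn;       /* cutoff LSN used for snapshot removal */
        uint64      repl_map_files_rmvd_cnt;    /* rewrite mapping files removed */
        uint64      repl_map_files_syncd_cnt;   /* rewrite mapping files fsync'd */
        XLogRecPtr  repl_map_cutoff_lsn;        /* cutoff LSN used for mapping files */
    } CheckpointLogicalStats;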

Regards,
Bharath Rupireddy.

Attachment

Re: add checkpoint stats of snapshot and mapping files of pg_logical dir

From
Cary Huang
Date:
The following review has been posted through the commitfest application:
make installcheck-world:  tested, passed
Implements feature:       tested, passed
Spec compliant:           tested, passed
Documentation:            not tested

Hi

The patch applies and tests fine. I don't think there is any harm in having a little extra statistical information about the
checkpoint process. In fact, it could be useful in identifying a bottleneck during the checkpoint process, as the stats
show exactly the time taken to do the file IO in the pg_logical dir. 
 

best

Cary Huang

Re: add checkpoint stats of snapshot and mapping files of pg_logical dir

From
Bharath Rupireddy
Date:
On Tue, Nov 30, 2021 at 6:24 AM Cary Huang <cary.huang@highgo.ca> wrote:
>
> The following review has been posted through the commitfest application:
> make installcheck-world:  tested, passed
> Implements feature:       tested, passed
> Spec compliant:           tested, passed
> Documentation:            not tested
>
> Hi
>
> The patch applies and tests fine. I don't think there is any harm in having a little extra statistical information about
> the checkpoint process. In fact, it could be useful in identifying a bottleneck during the checkpoint process, as the
> stats show exactly the time taken to do the file IO in the pg_logical dir. 

Thanks for the review. Here's the CF entry
https://commitfest.postgresql.org/36/3389/, please feel free to add
yourself as a reviewer.

Regards,
Bharath Rupireddy.



Re: add checkpoint stats of snapshot and mapping files of pg_logical dir

From
Nathan Bossart
Date:
+    CheckpointStats.repl_map_cutoff_lsn = cutoff;

Could we set repl_map_cutoff_lsn closer to where it is calculated?  Right
now, it's set at the bottom of the function just before the directory is
freed.  Is there a strong reason to do it there?

+                        "logical rewrite mapping file(s) removed/synced=" UINT64_FORMAT ", size=%zu bytes, time=%ld.%03ds, cutoff LSN=%X/%X",

Can we split out the "removed" versus "synced" files?  Otherwise, I think
this is basically just telling us how many files are in the mappings
directory.

+    CheckpointStats.repl_snap_cutoff_lsn = cutoff;

I have the same note for this one as I have for repl_map_cutoff_lsn.
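
To illustrate both points, roughly (a sketch with hypothetical placement,
not the exact patch code):

    /* (a) record the cutoff where it is computed, e.g. in CheckPointLogicalRewriteHeap() */
    cutoff = ReplicationSlotsComputeLogicalRestartLSN();
    CheckpointStats.repl_map_cutoff_lsn = cutoff;

    /* (b) count removals and syncs separately while walking pg_logical/mappings */
    if (lsn < cutoff)
        CheckpointStats.repl_map_files_rmvd_cnt++;      /* file was unlinked */
    else
        CheckpointStats.repl_map_files_syncd_cnt++;     /* file was fsync'd */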

-- 
Nathan Bossart
Amazon Web Services: https://aws.amazon.com



Re: add checkpoint stats of snapshot and mapping files of pg_logical dir

From
Bharath Rupireddy
Date:
On Sat, Mar 12, 2022 at 1:35 AM Nathan Bossart <nathandbossart@gmail.com> wrote:
>
> +    CheckpointStats.repl_map_cutoff_lsn = cutoff;
>
> Could we set repl_map_cutoff_lsn closer to where it is calculated?  Right
> now, it's set at the bottom of the function just before the directory is
> freed.  Is there a strong reason to do it there?
>
> +                        "logical rewrite mapping file(s) removed/synced=" UINT64_FORMAT ", size=%zu bytes, time=%ld.%03ds, cutoff LSN=%X/%X",
>
> Can we split out the "removed" versus "synced" files?  Otherwise, I think
> this is basically just telling us how many files are in the mappings
> directory.
>
> +    CheckpointStats.repl_snap_cutoff_lsn = cutoff;
>
> I have the same note for this one as I have for repl_map_cutoff_lsn.

Thanks for reviewing this. I agree with all of the above suggestions
and incorporated them in the v2 patch.

Another thing I added in v2 is to not emit snapshot and mapping files
stats in case of restartpoint as logical decoding isn't supported on
standbys, so it doesn't make sense to emit the stats there and cause
server log to grow unnecessarily. Having said that, I added a note
there to change it whenever logical decoding on standbys is supported.
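
(Roughly, the v2 change amounts to a guard like the one below; just a
sketch, not the exact patch code.)

    /* skip the new stats for restartpoints for now, since logical decoding
     * isn't supported on standbys yet; revisit once it is */
    if (!restartpoint)
    {
        /* append the snapshot/mapping file stats to the message */
    }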

Please review v2.

Regards,
Bharath Rupireddy.

Attachment

Re: add checkpoint stats of snapshot and mapping files of pg_logical dir

From
Nathan Bossart
Date:
On Sun, Mar 13, 2022 at 01:54:10PM +0530, Bharath Rupireddy wrote:
> Thanks for reviewing this. I agree with all of the above suggestions
> and incorporated them in the v2 patch.

Thanks for the new patch.

> Another thing I added in v2 is to not emit snapshot and mapping files
> stats in case of restartpoint as logical decoding isn't supported on
> standbys, so it doesn't make sense to emit the stats there and cause
> server log to grow unnecessarily. Having said that, I added a note
> there to change it whenever logical decoding on standbys is supported.

I think we actually do want to include this information for restartpoints
since some files might be left over from the snapshot that was used to
create the standby.  Also, presumably these functions could do some work
during recovery on a primary.

Another problem I see is that this patch depends on the return value of the
lstat() calls that we are trying to remove in 0001 from another thread [0].
I think the size of the removed/sync'd files is somewhat useful for
understanding disk space usage, but I suspect the time spent performing
these tasks is more closely related to the number of files.  Do you think
reporting the sizes is worth the extra system call?
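
For illustration, if we only count the files we act on, no per-file stat
call is needed at all (a rough sketch using the patch's counter names):

    if (lsn < cutoff)
    {
        if (unlink(path) < 0)
            ereport(ERROR,
                    (errcode_for_file_access(),
                     errmsg("could not remove file \"%s\": %m", path)));
        CheckpointStats.repl_snap_files_rmvd_cnt++;     /* count only, no size */
    }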

[0] https://postgr.es/m/20220215231123.GA2584239%40nathanxps13

-- 
Nathan Bossart
Amazon Web Services: https://aws.amazon.com



Re: add checkpoint stats of snapshot and mapping files of pg_logical dir

From
Michael Paquier
Date:
On Sun, Mar 13, 2022 at 02:58:58PM -0700, Nathan Bossart wrote:
> On Sun, Mar 13, 2022 at 01:54:10PM +0530, Bharath Rupireddy wrote:
>> Another thing I added in v2 is to not emit snapshot and mapping files
>> stats in case of restartpoint as logical decoding isn't supported on
>> standbys, so it doesn't make sense to emit the stats there and cause
>> server log to grow unnecessarily. Having said that, I added a note
>> there to change it whenever logical decoding on standbys is supported.
>
> I think we actually do want to include this information for restartpoints
> since some files might be left over from the snapshot that was used to
> create the standby.  Also, presumably these functions could do some work
> during recovery on a primary.

Yes, I would agree that consistency makes sense here, and it is not
complicated to add the code to support this code path anyway.  There
is a risk that folks working on logical decoding on standbys overlook
this code path otherwise.

> Another problem I see is that this patch depends on the return value of the
> lstat() calls that we are trying to remove in 0001 from another thread [0].
> I think the size of the removed/sync'd files is somewhat useful for
> understanding disk space usage, but I suspect the time spent performing
> these tasks is more closely related to the number of files.  Do you think
> reporting the sizes is worth the extra system call?

We are not talking about files that are large either, are we?

Another thing I am a bit annoyed with in this patch is the fact that
the size of the ereport() call is doubled.  The LOG currently
generated is already bloated, and this does not help matters.
--
Michael

Attachment

Re: add checkpoint stats of snapshot and mapping files of pg_logical dir

From
Bharath Rupireddy
Date:
On Mon, Mar 14, 2022 at 10:45 AM Michael Paquier <michael@paquier.xyz> wrote:
>
> On Sun, Mar 13, 2022 at 02:58:58PM -0700, Nathan Bossart wrote:
> > On Sun, Mar 13, 2022 at 01:54:10PM +0530, Bharath Rupireddy wrote:
> >> Another thing I added in v2 is to not emit snapshot and mapping files
> >> stats in case of restartpoint as logical decoding isn't supported on
> >> standbys, so it doesn't make sense to emit the stats there and cause
> >> server log to grow unnecessarily. Having said that, I added a note
> >> there to change it whenever logical decoding on standbys is supported.
> >
> > I think we actually do want to include this information for restartpoints
> > since some files might be left over from the snapshot that was used to
> > create the standby.  Also, presumably these functions could do some work
> > during recovery on a primary.
>
> Yes, I would agree that consistency makes sense here, and it is not
> complicated to add the code to support this code path anyway.  There
> is a risk that folks working on logical decoding on standbys overlook
> this code path otherwise.

I agree to be consistent and emit the message even in case of restartpoint.

> > Another problem I see is that this patch depends on the return value of the
> > lstat() calls that we are trying to remove in 0001 from another thread [0].
> > I think the size of the removed/sync'd files is somewhat useful for
> > understanding disk space usage, but I suspect the time spent performing
> > these tasks is more closely related to the number of files.  Do you think
> > reporting the sizes is worth the extra system call?
>
> We are not talking about files that are large either, are we?
>
> Another thing I am a bit annoyed with in this patch is the fact that
> the size of the ereport() call is doubled.  The LOG currently
> generated is already bloated, and this does not arrange things.

Yes, this is a concern. Also, when there were no logical replication
slots on a plain server or the server removed or cleaned up all the
snapshot/mappings files, why would anyone want to have these messages
with all 0s in the server log?

Here's what I'm thinking:

Leave the existing "checkpoint/restartpoint complete" messages intact,
add the following in LogCheckpointEnd:

if (CheckpointStats.repl_map_files_rmvd_cnt ||
    CheckpointStats.repl_map_files_syncd_cnt ||
    CheckpointStats.repl_snap_files_rmvd_cnt)
{
    ereport(LOG,
            (errmsg("snapbuild snapshot file(s) removed=" UINT64_FORMAT ", size=%zu bytes, time=%ld.%03d s, cutoff LSN=%X/%X; "
                    "logical rewrite mapping file(s) removed=" UINT64_FORMAT ", size=%zu bytes, synced=" UINT64_FORMAT ", size=%zu bytes, time=%ld.%03d s, cutoff LSN=%X/%X",
                    CheckpointStats.repl_snap_files_rmvd_cnt,
                    CheckpointStats.repl_snap_files_rmvd_sz,
                    repl_snap_msecs / 1000, (int) (repl_snap_msecs % 1000),
                    LSN_FORMAT_ARGS(CheckpointStats.repl_snap_cutoff_lsn),
                    CheckpointStats.repl_map_files_rmvd_cnt,
                    CheckpointStats.repl_map_files_rmvd_sz,
                    CheckpointStats.repl_map_files_syncd_cnt,
                    CheckpointStats.repl_map_files_syncd_sz,
                    repl_map_msecs / 1000, (int) (repl_map_msecs % 1000),
                    LSN_FORMAT_ARGS(CheckpointStats.repl_map_cutoff_lsn))));
}

Thoughts?

Regards,
Bharath Rupireddy.



Re: add checkpoint stats of snapshot and mapping files of pg_logical dir

From
Michael Paquier
Date:
On Mon, Mar 14, 2022 at 10:54:56AM +0530, Bharath Rupireddy wrote:
> Yes, this is a concern. Also, when there were no logical replication
> slots on a plain server or the server removed or cleaned up all the
> snapshot/mappings files, why would anyone want to have these messages
> with all 0s in the server log?

The default settings don't enable that, so making things conditional
roughly as you are suggesting with two different LOG messages sounds
rather fine.

> Here's what I'm thinking:
>
> Leave the existing "checkpoint/restartpoint complete" messages intact,
> add the following in LogCheckpointEnd:

FWIW, I also think that it would be good to check if there are cases
where this information is significant enough that its inclusion makes
sense.  In the top message of the thread, the logs you showed refer to
operations that represent 1~2 ms worth of checkpoint time.  So, if in
most cases this is going to be very quick, adding it to the logs won't
matter because that's not a performance bottleneck.  Perhaps that's
something the patch that works on progress reporting for checkpoints
is able to track?
--
Michael

Attachment

Re: add checkpoint stats of snapshot and mapping files of pg_logical dir

From
Bharath Rupireddy
Date:
On Mon, Mar 14, 2022 at 1:33 PM Michael Paquier <michael@paquier.xyz> wrote:
>
> On Mon, Mar 14, 2022 at 10:54:56AM +0530, Bharath Rupireddy wrote:
> > Yes, this is a concern. Also, when there were no logical replication
> > slots on a plain server or the server removed or cleaned up all the
> > snapshot/mappings files, why would anyone want to have these messages
> > with all 0s in the server log?
>
> The default settings don't enable that, so making things conditional
> roughly as you are suggesting with two different LOG messages sounds
> rather fine.
>
> > Here's what I'm thinking:
> >
> > Leave the existing "checkpoint/restartpoint complete" messages intact,
> > add the following in LogCheckpointEnd:
>
> FWIW, I also think that it would be good to check if there are cases
> where this information is significant enough that its inclusion makes
> sense.  In the top message of the thread, the logs you showed refer to
> operations that represent 1/2ms worth of checkpoint.  So, if in most
> cases this is going to be very quick, adding it to the logs won't
> matter because that's not a performance bottleneck.

At times, the snapshot or mapping files can be large in number, and on
some platforms it takes time for the checkpoint to process all of them.
Having the stats about them in the server logs can help us better analyze
why a checkpoint took a long time and provide a better RCA.

> Perhaps that's
> something the patch that works on progress reporting for checkpoint
> is able to track?

pg_stat_progress_checkpoint will give info about the ongoing
checkpoint, but the info in the server logs will help analyze the
issues after they occur.

Regards,
Bharath Rupireddy.



Re: add checkpoint stats of snapshot and mapping files of pg_logical dir

From
Bharath Rupireddy
Date:
On Mon, Mar 14, 2022 at 1:33 PM Michael Paquier <michael@paquier.xyz> wrote:
>
> On Mon, Mar 14, 2022 at 10:54:56AM +0530, Bharath Rupireddy wrote:
> > Yes, this is a concern. Also, when there were no logical replication
> > slots on a plain server or the server removed or cleaned up all the
> > snapshot/mappings files, why would anyone want to have these messages
> > with all 0s in the server log?
>
> The default settings don't enable that, so making things conditional
> roughly as you are suggesting with two different LOG messages sounds
> rather fine.

Attaching v3 patch which emits logs only when necessary and doesn't
clutter the existing "checkpoint/restartpoint completed" message, see
some sample logs at [1]. Please review it further.

Just another thought - after looking at the sample logs at [1], one can
ask why the checkpoint logs can't be like [2] or [3]?

[1]
2022-03-14 11:20:28.381 UTC [3257102] LOG:  checkpoint starting:
immediate force wait
2022-03-14 11:20:28.444 UTC [3257102] LOG:  checkpoint complete: wrote
0 buffers (0.0%); 0 WAL file(s) added, 0 removed, 0 recycled;
write=0.001 s, sync=0.001 s, total=0.064 s; sync files=0,
longest=0.000 s, average=0.000 s; distance=0 kB, estimate=35717 kB
2022-03-14 11:20:28.444 UTC [3257102] LOG:  snapbuild snapshot file(s)
removed=2, size=280 bytes, time=0.001 s, cutoff LSN=0/40D2F40
2022-03-14 11:20:28.444 UTC [3257102] LOG:  logical rewrite mapping
file(s) removed=16, size=1368 bytes, synced=16, size=1368 bytes,
time=0.047 s, cutoff LSN=0/40D2F40

[2]
2022-03-14 11:20:28.381 UTC [3257102] LOG:  checkpoint starting:
immediate force wait
2022-03-14 11:20:28.444 UTC [3257102] LOG:  wrote 0 buffers (0.0%); 0
WAL file(s) added, 0 removed, 0 recycled; write=0.001 s, sync=0.001 s,
total=0.064 s; sync files=0, longest=0.000 s, average=0.000 s;
distance=0 kB, estimate=35717 kB
2022-03-14 11:20:28.444 UTC [3257102] LOG:  snapbuild snapshot file(s)
removed=2, size=280 bytes, time=0.001 s, cutoff LSN=0/40D2F40
2022-03-14 11:20:28.444 UTC [3257102] LOG:  logical rewrite mapping
file(s) removed=16, size=1368 bytes, synced=16, size=1368 bytes,
time=0.047 s, cutoff LSN=0/40D2F40
2022-03-14 11:20:28.444 UTC [3257102] LOG:  checkpoint complete:
immediate force wait

[3]
2022-03-14 11:20:28.381 UTC [3257102] LOG:  checkpoint starting:
immediate force wait
2022-03-14 11:20:28.444 UTC [3257102] LOG:  wrote 0 buffers (0.0%); 0
WAL file(s) added, 0 removed, 0 recycled; write=0.001 s, sync=0.001 s,
total=0.064 s; sync files=0, longest=0.000 s, average=0.000 s;
distance=0 kB, estimate=35717 kB
2022-03-14 11:20:28.444 UTC [3257102] LOG:  snapbuild snapshot file(s)
removed=2, size=280 bytes, time=0.001 s, cutoff LSN=0/40D2F40
2022-03-14 11:20:28.444 UTC [3257102] LOG:  logical rewrite mapping
file(s) removed=16, size=1368 bytes, synced=16, size=1368 bytes,
time=0.047 s, cutoff LSN=0/40D2F40
2022-03-14 11:20:28.444 UTC [3257102] LOG:  checkpoint complete

Regards,
Bharath Rupireddy.

Attachment

Re: add checkpoint stats of snapshot and mapping files of pg_logical dir

From
Nathan Bossart
Date:
On Mon, Mar 14, 2022 at 05:03:59PM +0530, Bharath Rupireddy wrote:
> On Mon, Mar 14, 2022 at 1:33 PM Michael Paquier <michael@paquier.xyz> wrote:
>> On Mon, Mar 14, 2022 at 10:54:56AM +0530, Bharath Rupireddy wrote:
>> > Yes, this is a concern. Also, when there were no logical replication
>> > slots on a plain server or the server removed or cleaned up all the
>> > snapshot/mappings files, why would anyone want to have these messages
>> > with all 0s in the server log?
>>
>> The default settings don't enable that, so making things conditional
>> roughly as you are suggesting with two different LOG messages sounds
>> rather fine.
> 
> Attaching v3 patch which emits logs only when necessary and doesn't
> clutter the existing "checkpoint/restartpoint completed" message, see
> some sample logs at [1]. Please review it further.

I'm okay with not adding these statistics when the number of files removed
and sync'd is zero.

IMO we shouldn't include the size of the files since it will prevent us
from removing lstat() calls.  As noted upthread, I suspect the time spent
in these tasks is more closely related to the number of files anyway.

I'm -1 on splitting these new statistics to separate LOGs.  In addition to
making it more difficult to discover statistics for a given checkpoint, I
think it actually adds even more bloat to the server log.  If we are
concerned about the amount of information in these LOGs, perhaps we should
adjust the format to make it more human-readable.

-- 
Nathan Bossart
Amazon Web Services: https://aws.amazon.com



Re: add checkpoint stats of snapshot and mapping files of pg_logical dir

From
Michael Paquier
Date:
On Mon, Mar 14, 2022 at 03:54:19PM +0530, Bharath Rupireddy wrote:
> At times, the snapshot or mapping files can be large in number, and on
> some platforms it takes time for the checkpoint to process all of them.
> Having the stats about them in the server logs can help us better analyze
> why a checkpoint took a long time and provide a better RCA.

Do you have any numbers to share regarding that?  Seeing information
about 1k WAL segments being recycled and/or removed by a checkpoint is
useful when the operation takes dozens of seconds to complete, because
we can be talking about hundreds of gigs' worth of files being moved
around.  If we are talking about 100~200 files of up to 10~20kB each
for snapshot and mapping files, the information has less value, being
worth only a fraction of one WAL segment.
--
Michael

Attachment

Re: add checkpoint stats of snapshot and mapping files of pg_logical dir

From
Nathan Bossart
Date:
On Tue, Mar 15, 2022 at 11:04:26AM +0900, Michael Paquier wrote:
> On Mon, Mar 14, 2022 at 03:54:19PM +0530, Bharath Rupireddy wrote:
>> At times, the snapshot or mapping files can be large in number, and on
>> some platforms it takes time for the checkpoint to process all of them.
>> Having the stats about them in the server logs can help us better analyze
>> why a checkpoint took a long time and provide a better RCA.
> 
> Do you have any numbers to share regarding that?  Seeing information
> about 1k WAL segments being recycled and/or removed by a checkpoint
> where the operation takes dozens of seconds to complete because we can
> talk about hundred of gigs worth of files moved around.  If we are
> talking about 100~200 files up to 10~20kB each for snapshot and
> mapping files, the information has less value, worth only a portion of
> one WAL segment.

I don't have specific numbers to share, but as noted elsewhere [0], I
routinely see lengthy checkpoints that spend a lot of time in these cleanup
tasks.

[0] https://postgr.es/m/18ED8B1F-7F5B-4ABF-848D-45916C938BC7%40amazon.com

-- 
Nathan Bossart
Amazon Web Services: https://aws.amazon.com



Re: add checkpoint stats of snapshot and mapping files of pg_logical dir

From
Bharath Rupireddy
Date:
On Wed, Mar 16, 2022 at 1:47 AM Nathan Bossart <nathandbossart@gmail.com> wrote:
>
> On Tue, Mar 15, 2022 at 11:04:26AM +0900, Michael Paquier wrote:
> > On Mon, Mar 14, 2022 at 03:54:19PM +0530, Bharath Rupireddy wrote:
> >> At times, the snapshot or mapping files can be large in number, and on
> >> some platforms it takes time for the checkpoint to process all of them.
> >> Having the stats about them in the server logs can help us better analyze
> >> why a checkpoint took a long time and provide a better RCA.
> >
> > Do you have any numbers to share regarding that?  Seeing information
> > about 1k WAL segments being recycled and/or removed by a checkpoint
> > where the operation takes dozens of seconds to complete because we can
> > talk about hundred of gigs worth of files moved around.  If we are
> > talking about 100~200 files up to 10~20kB each for snapshot and
> > mapping files, the information has less value, worth only a portion of
> > one WAL segment.
>
> I don't have specific numbers to share, but as noted elsewhere [0], I
> routinely see lengthy checkpoints that spend a lot of time in these cleanup
> tasks.
>
> [0] https://postgr.es/m/18ED8B1F-7F5B-4ABF-848D-45916C938BC7%40amazon.com

I too don't have numbers in hand to share. But I have come across
cases where checkpoints spend a good amount of time cleaning
up/processing snapshots or mapping files.

Regards,
Bharath Rupireddy.



Re: add checkpoint stats of snapshot and mapping files of pg_logical dir

From
Bharath Rupireddy
Date:
On Mon, Mar 14, 2022 at 10:34 PM Nathan Bossart
<nathandbossart@gmail.com> wrote:
>
> > Attaching v3 patch which emits logs only when necessary and doesn't
> > clutter the existing "checkpoint/restartpoint completed" message, see
> > some sample logs at [1]. Please review it further.
>
> I'm okay with not adding these statistics when the number of files removed
> and sync'd is zero.

Thanks.

> IMO we shouldn't include the size of the files since it will prevent us
> from removing lstat() calls.  As noted upthread, I suspect the time spent
> in these tasks is more closely related to the number of files anyway.

Yeah. I will remove them.

> I'm -1 on splitting these new statistics to separate LOGs.  In addition to
> making it more difficult to discover statistics for a given checkpoint, I
> think it actually adds even more bloat to the server log.  If we are
> concerned about the amount of information in these LOGs, perhaps we should
> adjust the format to make it more human-readable.

Below are the ways that I can think of. Thoughts?

1)
if (restartpoint)
{
   if (snap_files_rmvd > 0 && map_files_rmvd > 0)
     existing "restartpoint complete" message + "snapshot files
removed cnt, time, cutoff lsn" + "mapping files removed cnt, time,
cutoff lsn"
   else if (snap_files_rmvd > 0)
     existing "restartpoint complete" message + "snapshot files
removed cnt, time, cutoff lsn"
   else if (map_files_rmvd > 0)
     existing "restartpoint complete" message + "mapping files removed
cnt, time, cutoff lsn"
   else
     existing "restartpoint complete" message
}
else
{
   same as above but with "checkpoint complete" instead of "restart complete"
}

2) Use StringInfoData to prepare the message dynamically but this will
create  message translation problems.

3) Emit the  "snapshot files removed cnt, time, cutoff lsn" and
"mapping files removed cnt, time, cutoff lsn" at LOG level if
(log_checkpoints) in CheckPointSnapBuild and
CheckPointLogicalRewriteHeap respectively.

4) Have separate log messages in LogCheckpointEnd:
if (snap_files_rmvd > 0)
   "snapshot files removed cnt, time, cutoff lsn"
if (map_files_rmvd > 0)
   "mapping files removed cnt, time, cutoff lsn"

if (restartpoint)
  existing "restartpoint complete" message
else
  existing "checkpoint complete" message

Regards,
Bharath Rupireddy.



Re: add checkpoint stats of snapshot and mapping files of pg_logical dir

From
Nathan Bossart
Date:
On Wed, Mar 16, 2022 at 03:02:41PM +0530, Bharath Rupireddy wrote:
> On Mon, Mar 14, 2022 at 10:34 PM Nathan Bossart
> <nathandbossart@gmail.com> wrote:
>> I'm -1 on splitting these new statistics to separate LOGs.  In addition to
>> making it more difficult to discover statistics for a given checkpoint, I
>> think it actually adds even more bloat to the server log.  If we are
>> concerned about the amount of information in these LOGs, perhaps we should
>> adjust the format to make it more human-readable.
> 
> Below are the ways that I can think of. Thoughts?

I think I prefer the first option.  If these tasks don't do anything, we
leave the stats out of the message.  If they do some work, we add them.

-- 
Nathan Bossart
Amazon Web Services: https://aws.amazon.com



Re: add checkpoint stats of snapshot and mapping files of pg_logical dir

From
Bharath Rupireddy
Date:
On Thu, Mar 17, 2022 at 12:12 AM Nathan Bossart
<nathandbossart@gmail.com> wrote:
>
> On Wed, Mar 16, 2022 at 03:02:41PM +0530, Bharath Rupireddy wrote:
> > On Mon, Mar 14, 2022 at 10:34 PM Nathan Bossart
> > <nathandbossart@gmail.com> wrote:
> >> I'm -1 on splitting these new statistics to separate LOGs.  In addition to
> >> making it more difficult to discover statistics for a given checkpoint, I
> >> think it actually adds even more bloat to the server log.  If we are
> >> concerned about the amount of information in these LOGs, perhaps we should
> >> adjust the format to make it more human-readable.
> >
> > Below are the ways that I can think of. Thoughts?
>
> I think I prefer the first option.  If these tasks don't do anything, we
> leave the stats out of the message.  If they do some work, we add them.

Thanks Nathan.

> Below are the ways that I can think of. Thoughts?
>
> 1)
> if (restartpoint)
> {
>    if (snap_files_rmvd > 0 && map_files_rmvd > 0)
>      existing "restartpoint complete" message + "snapshot files
> removed cnt, time, cutoff lsn" + "mapping files removed cnt, time,
> cutoff lsn"
>    else if (snap_files_rmvd > 0)
>      existing "restartpoint complete" message + "snapshot files
> removed cnt, time, cutoff lsn"
>    else if (map_files_rmvd > 0)
>      existing "restartpoint complete" message + "mapping files removed
> cnt, time, cutoff lsn"
>    else
>      existing "restartpoint complete" message
> }
> else
> {
>    same as above but with "checkpoint complete" instead of "restart complete"
> }
>
> 2) Use StringInfoData to prepare the message dynamically but this will
> create  message translation problems.
>
> 3) Emit the  "snapshot files removed cnt, time, cutoff lsn" and
> "mapping files removed cnt, time, cutoff lsn" at LOG level if
> (log_checkpoints) in CheckPointSnapBuild and
> CheckPointLogicalRewriteHeap respectively.
>
> 4) Have separate log messages in LogCheckpointEnd:
> if (snap_files_rmvd > 0)
>    "snapshot files removed cnt, time, cutoff lsn"
> if (map_files_rmvd > 0)
>    "mapping files removed cnt, time, cutoff lsn"
>
> if (restartpoint)
>   existing "restartpoint complete" message
> else
>   existing "checkpoint complete" message

FWIW, I'm attaching patches as follows:
v4-0001...patch implements option (4)
v4-0002...txt implements option (3)
v4-0003...txt implements option (1)

Personally, I prefer v4-0001 (option (4)) or v4-0002 (option (3)) over
v4-0003 (option (1)), as the latter is less readable, with most of the
code duplicated, creating a larger diff with previous versions and
maintainability problems. Having said that, I will leave it to the
committer to decide.

Regards,
Bharath Rupireddy.

Attachment

Re: add checkpoint stats of snapshot and mapping files of pg_logical dir

From
Nathan Bossart
Date:
On Thu, Mar 17, 2022 at 06:48:43PM +0530, Bharath Rupireddy wrote:
> Personally, I tend to agree with v4-0001 (option (4)) or v4-0002
> (option (3)) than v4-0003 (option (1)) as it adds more unreadability
> with most of the code duplicated creating more diff with previous
> versions and maintainability problems. Having said that, I will leave
> it to the committer to decide on that.

I don't think v4-0003/option 1 needs to be unreadable.  Perhaps we can use
a StringInfo to build the message.  That might be a net improvement in
readability anyway.

-- 
Nathan Bossart
Amazon Web Services: https://aws.amazon.com



Re: add checkpoint stats of snapshot and mapping files of pg_logical dir

From
Bharath Rupireddy
Date:
On Fri, Mar 18, 2022 at 2:15 AM Nathan Bossart <nathandbossart@gmail.com> wrote:
>
> On Thu, Mar 17, 2022 at 06:48:43PM +0530, Bharath Rupireddy wrote:
> > Personally, I tend to agree with v4-0001 (option (4)) or v4-0002
> > (option (3)) than v4-0003 (option (1)) as it adds more unreadability
> > with most of the code duplicated creating more diff with previous
> > versions and maintainability problems. Having said that, I will leave
> > it to the committer to decide on that.
>
> I don't think v4-0003/option 1 needs to be unreadable.  Perhaps we can use
> a StringInfo to build the message.  That might be a net improvement in
> readability anyway.

Looks like we already use StringInfo in PerformAuthentication for
"connection" related log messages, see [1].

I will give it a try using StringInfo for the "checkpoint/restartpoint
complete" message too.

[1]
    if (Log_connections)
    {
        StringInfoData logmsg;

        initStringInfo(&logmsg);
        if (am_walsender)
            appendStringInfo(&logmsg, _("replication connection
authorized: user=%s"),
                             port->user_name);
        else
            appendStringInfo(&logmsg, _("connection authorized: user=%s"),
                             port->user_name);
        if (!am_walsender)
            appendStringInfo(&logmsg, _(" database=%s"), port->database_name);

Regards,
Bharath Rupireddy.



Re: add checkpoint stats of snapshot and mapping files of pg_logical dir

From
Bharath Rupireddy
Date:
On Fri, Mar 18, 2022 at 11:11 AM Bharath Rupireddy
<bharath.rupireddyforpostgres@gmail.com> wrote:
>
> On Fri, Mar 18, 2022 at 2:15 AM Nathan Bossart <nathandbossart@gmail.com> wrote:
> >
> > On Thu, Mar 17, 2022 at 06:48:43PM +0530, Bharath Rupireddy wrote:
> > > Personally, I tend to agree with v4-0001 (option (4)) or v4-0002
> > > (option (3)) than v4-0003 (option (1)) as it adds more unreadability
> > > with most of the code duplicated creating more diff with previous
> > > versions and maintainability problems. Having said that, I will leave
> > > it to the committer to decide on that.
> >
> > I don't think v4-0003/option 1 needs to be unreadable.  Perhaps we can use
> > a StringInfo to build the message.  That might be a net improvement in
> > readability anyway.
>
> Looks like we already use StringInfo in PerformAuthentication for
> "connection" related log messages, see [1].
>
> I will give it a try using StringInfo for the "checkpoint/restartpoint
> complete" message too.

Well, here's the v5 patch, sample output at [1]. Please have a look at it.

[1]
2022-03-18 07:44:36.102 UTC [1165375] LOG:  checkpoint starting:
immediate force wait
2022-03-18 07:44:36.158 UTC [1165375] LOG:  checkpoint complete: wrote
444 buffers (2.7%); 0 WAL file(s) added, 0 removed, 0 recycled;
write=0.006 s, sync=0.017 s, total=0.057 s; sync files=2,
longest=0.014 s, average=0.009 s; distance=6270 kB, estimate=6270 kB;
logical decoding snapshot file(s) removed=3, time=0.001 s, cutoff
LSN=0/1B97B10

2022-03-18 07:45:43.263 UTC [1165375] LOG:  checkpoint starting:
immediate force wait
2022-03-18 07:45:43.338 UTC [1165375] LOG:  checkpoint complete: wrote
968 buffers (5.9%); 0 WAL file(s) added, 0 removed, 0 recycled;
write=0.012 s, sync=0.025 s, total=0.075 s; sync files=24,
longest=0.020 s, average=0.002 s; distance=6975 kB, estimate=6975 kB;
logical decoding snapshot file(s) removed=3, time=0.001 s, cutoff
LSN=0/1BDD080; logical decoding rewrite mapping file(s) removed=1,
synced=7, time=0.001 s, cutoff LSN=0/1BDD080

Regards,
Bharath Rupireddy.

Attachment

Re: add checkpoint stats of snapshot and mapping files of pg_logical dir

From
Nathan Bossart
Date:
On Fri, Mar 18, 2022 at 01:32:52PM +0530, Bharath Rupireddy wrote:
> Well, here's the v5 patch, sample output at [1]. Please have a look at it.

I think this is on the right track, but I would even take it a step
further by separating each group of information:

    if (restartpoint)
        appendStringInfo(&logmsg, "restartpoint complete: ");
    else
        appendStringInfo(&logmsg, "checkpoint complete: ");

    /* buffer stats */
    appendStringInfo(&logmsg, "wrote %d buffers (%.1f%%); ",
                              CheckpointStats.ckpt_bufs_written,
                              (double) CheckpointStats.ckpt_bufs_written * 100 / NBuffers);

    /* WAL file stats */
    appendStringInfo(&logmsg, "%d WAL file(s) added, %d removed, %d recycled; ",
                              CheckpointStats.ckpt_segs_added,
                              CheckpointStats.ckpt_segs_removed,
                              CheckpointStats.ckpt_segs_recycled);

    ...

-- 
Nathan Bossart
Amazon Web Services: https://aws.amazon.com



Re: add checkpoint stats of snapshot and mapping files of pg_logical dir

From
Andres Freund
Date:
Hi,

On 2022-03-18 13:32:52 +0530, Bharath Rupireddy wrote:
> @@ -6115,46 +6116,81 @@ LogCheckpointEnd(bool restartpoint)
>              CheckpointStats.ckpt_sync_rels;
>      average_msecs = (long) ((average_sync_time + 999) / 1000);
>  
> +    initStringInfo(&logmsg);
> +
>      if (restartpoint)
> -        ereport(LOG,
> -                (errmsg("restartpoint complete: wrote %d buffers (%.1f%%); "
> -                        "%d WAL file(s) added, %d removed, %d recycled; "
> -                        "write=%ld.%03d s, sync=%ld.%03d s, total=%ld.%03d s; "
> -                        "sync files=%d, longest=%ld.%03d s, average=%ld.%03d s; "
> -                        "distance=%d kB, estimate=%d kB",
> -                        CheckpointStats.ckpt_bufs_written,
> -                        (double) CheckpointStats.ckpt_bufs_written * 100 / NBuffers,
> -                        CheckpointStats.ckpt_segs_added,
> -                        CheckpointStats.ckpt_segs_removed,
> -                        CheckpointStats.ckpt_segs_recycled,
> -                        write_msecs / 1000, (int) (write_msecs % 1000),
> -                        sync_msecs / 1000, (int) (sync_msecs % 1000),
> -                        total_msecs / 1000, (int) (total_msecs % 1000),
> -                        CheckpointStats.ckpt_sync_rels,
> -                        longest_msecs / 1000, (int) (longest_msecs % 1000),
> -                        average_msecs / 1000, (int) (average_msecs % 1000),
> -                        (int) (PrevCheckPointDistance / 1024.0),
> -                        (int) (CheckPointDistanceEstimate / 1024.0))));
> +        appendStringInfo(&logmsg,
> +                        _("restartpoint complete: wrote %d buffers (%.1f%%); "
> +                          "%d WAL file(s) added, %d removed, %d recycled; "
> +                          "write=%ld.%03d s, sync=%ld.%03d s, total=%ld.%03d s; "
> +                          "sync files=%d, longest=%ld.%03d s, average=%ld.%03d s; "
> +                          "distance=%d kB, estimate=%d kB"),
> +                          CheckpointStats.ckpt_bufs_written,
> +                          (double) CheckpointStats.ckpt_bufs_written * 100 / NBuffers,
> +                          CheckpointStats.ckpt_segs_added,
> +                          CheckpointStats.ckpt_segs_removed,
> +                          CheckpointStats.ckpt_segs_recycled,
> +                          write_msecs / 1000, (int) (write_msecs % 1000),
> +                          sync_msecs / 1000, (int) (sync_msecs % 1000),
> +                          total_msecs / 1000, (int) (total_msecs % 1000),
> +                          CheckpointStats.ckpt_sync_rels,
> +                          longest_msecs / 1000, (int) (longest_msecs % 1000),
> +                          average_msecs / 1000, (int) (average_msecs % 1000),
> +                          (int) (PrevCheckPointDistance / 1024.0),
> +                          (int) (CheckPointDistanceEstimate / 1024.0));
>      else
> -        ereport(LOG,
> -                (errmsg("checkpoint complete: wrote %d buffers (%.1f%%); "
> -                        "%d WAL file(s) added, %d removed, %d recycled; "
> -                        "write=%ld.%03d s, sync=%ld.%03d s, total=%ld.%03d s; "
> -                        "sync files=%d, longest=%ld.%03d s, average=%ld.%03d s; "
> -                        "distance=%d kB, estimate=%d kB",
> -                        CheckpointStats.ckpt_bufs_written,
> -                        (double) CheckpointStats.ckpt_bufs_written * 100 / NBuffers,
> -                        CheckpointStats.ckpt_segs_added,
> -                        CheckpointStats.ckpt_segs_removed,
> -                        CheckpointStats.ckpt_segs_recycled,
> -                        write_msecs / 1000, (int) (write_msecs % 1000),
> -                        sync_msecs / 1000, (int) (sync_msecs % 1000),
> -                        total_msecs / 1000, (int) (total_msecs % 1000),
> -                        CheckpointStats.ckpt_sync_rels,
> -                        longest_msecs / 1000, (int) (longest_msecs % 1000),
> -                        average_msecs / 1000, (int) (average_msecs % 1000),
> -                        (int) (PrevCheckPointDistance / 1024.0),
> -                        (int) (CheckPointDistanceEstimate / 1024.0))));
> +        appendStringInfo(&logmsg,
> +                        _("checkpoint complete: wrote %d buffers (%.1f%%); "
> +                          "%d WAL file(s) added, %d removed, %d recycled; "
> +                          "write=%ld.%03d s, sync=%ld.%03d s, total=%ld.%03d s; "
> +                          "sync files=%d, longest=%ld.%03d s, average=%ld.%03d s; "
> +                          "distance=%d kB, estimate=%d kB"),
> +                          CheckpointStats.ckpt_bufs_written,
> +                          (double) CheckpointStats.ckpt_bufs_written * 100 / NBuffers,
> +                          CheckpointStats.ckpt_segs_added,
> +                          CheckpointStats.ckpt_segs_removed,
> +                          CheckpointStats.ckpt_segs_recycled,
> +                          write_msecs / 1000, (int) (write_msecs % 1000),
> +                          sync_msecs / 1000, (int) (sync_msecs % 1000),
> +                          total_msecs / 1000, (int) (total_msecs % 1000),
> +                          CheckpointStats.ckpt_sync_rels,
> +                          longest_msecs / 1000, (int) (longest_msecs % 1000),
> +                          average_msecs / 1000, (int) (average_msecs % 1000),
> +                          (int) (PrevCheckPointDistance / 1024.0),
> +                          (int) (CheckPointDistanceEstimate / 1024.0));

before we further change this (as done in this patch) we should deduplicate
these huge statements in a separate patch / commit.


> +    if (CheckpointStats.repl_snap_files_rmvd_cnt > 0)
> +    {
> +        long t_msecs;
> +
> +        t_msecs = TimestampDifferenceMilliseconds(CheckpointStats.repl_snap_start_t,
> +                                                  CheckpointStats.repl_snap_end_t);
> +
> +        appendStringInfo(&logmsg,
> +                        _("; logical decoding snapshot file(s) removed=" UINT64_FORMAT ", time=%ld.%03d s, cutoff LSN=%X/%X"),
> +                          CheckpointStats.repl_snap_files_rmvd_cnt,
> +                          t_msecs / 1000, (int) (t_msecs % 1000),
> +                          LSN_FORMAT_ARGS(CheckpointStats.repl_snap_cutoff_lsn));
> +    }

As discussed in a bunch of other threads, we typically prefer to cast to long
long and use %llu instead of UINT64_FORMAT these days.


> +    if (CheckpointStats.repl_map_files_rmvd_cnt ||
> +        CheckpointStats.repl_map_files_syncd_cnt > 0)
> +    {
> +        long t_msecs;
> +
> +        t_msecs = TimestampDifferenceMilliseconds(CheckpointStats.repl_snap_start_t,
> +                                                  CheckpointStats.repl_snap_end_t);
> +
> +        appendStringInfo(&logmsg,
> +                        _("; logical decoding rewrite mapping file(s) removed=" UINT64_FORMAT ", synced=" UINT64_FORMAT ", time=%ld.%03d s, cutoff LSN=%X/%X"),
> +                          CheckpointStats.repl_map_files_rmvd_cnt,
> +                          CheckpointStats.repl_map_files_syncd_cnt,
> +                          t_msecs / 1000, (int) (t_msecs % 1000),
> +                          LSN_FORMAT_ARGS(CheckpointStats.repl_map_cutoff_lsn));
> +    }
> +
> +    ereport(LOG, errmsg_internal("%s", logmsg.data));
> +    pfree(logmsg.data);
>  }

This practically doubles the size of the log message - doesn't that seem a bit
disproportionate? Can we make this more dense? "logical decoding rewrite
mapping file(s) removed=" and "logical decoding snapshot file(s) removed=" is
quite long.


Greetings,

Andres Freund



Re: add checkpoint stats of snapshot and mapping files of pg_logical dir

From
Bharath Rupireddy
Date:
On Sat, Mar 19, 2022 at 4:39 AM Andres Freund <andres@anarazel.de> wrote:
>
> before we further change this (as done in this patch) we should deduplicate
> these huge statements in a separate patch / commit.

Something like attached
v6-0001-Deduplicate-checkpoint-restartpoint-complete-mess.patch?

> As discussed in a bunch of other threads, we typically prefer to cast to long
> long and use %ll instead of using UINT64_FORMAT these days.

Changed.

> > +     if (CheckpointStats.repl_map_files_rmvd_cnt ||
> > +             CheckpointStats.repl_map_files_syncd_cnt > 0)
> > +     {
> > +             long t_msecs;
> > +
> > +             t_msecs = TimestampDifferenceMilliseconds(CheckpointStats.repl_snap_start_t,
> > +                                                       CheckpointStats.repl_snap_end_t);
> > +
> > +             appendStringInfo(&logmsg,
> > +                              _("; logical decoding rewrite mapping file(s) removed=" UINT64_FORMAT ", synced=" UINT64_FORMAT ", time=%ld.%03d s, cutoff LSN=%X/%X"),
> > +                                               CheckpointStats.repl_map_files_rmvd_cnt,
> > +                                               CheckpointStats.repl_map_files_syncd_cnt,
> > +                                               t_msecs / 1000, (int) (t_msecs % 1000),
> > +                                               LSN_FORMAT_ARGS(CheckpointStats.repl_map_cutoff_lsn));
> > +     }
> > +
> > +     ereport(LOG, errmsg_internal("%s", logmsg.data));
> > +     pfree(logmsg.data);
> >  }
>
> This practically doubles the size of the log message - doesn't that seem a bit
> disproportionate? Can we make this more dense? "logical decoding rewrite
> mapping file(s) removed=" and "logical decoding snapshot file(s) removed=" is
> quite long.

Do you suggest something like below? Or some other better wording, like
"logical decoding rewrite map files" and "logical decoding snapshot
files", or "logical rewrite map files" and "logical snapshot files", or
just "rewrite mapping files" and "snapshot files" ...?

    if (repl_snap_files_rmvd_cnt > 0 && (repl_map_files_rmvd_cnt ||
        repl_map_files_syncd_cnt > 0))
        appendStringInfo(&logmsg,
                         _("; logical decoding snapshot file(s) removed=%lu, time=%ld.%03d s, cutoff LSN=%X/%X; "
                           "rewrite mapping file(s) removed=%lu, synced=%lu, time=%ld.%03d s, cutoff LSN=%X/%X"),
                         ...);
    else if (repl_snap_files_rmvd_cnt > 0)
        appendStringInfo(&logmsg,
                         _("; logical decoding snapshot file(s) removed=%lu, time=%ld.%03d s, cutoff LSN=%X/%X"),
                         ...);
    else if (repl_map_files_rmvd_cnt || repl_map_files_syncd_cnt > 0)
        appendStringInfo(&logmsg,
                         _("; logical decoding rewrite mapping file(s) removed=%lu, synced=%lu, time=%ld.%03d s, cutoff LSN=%X/%X"),
                         ...);

On Sat, Mar 19, 2022 at 3:16 AM Nathan Bossart <nathandbossart@gmail.com> wrote:
>         /* buffer stats */
>         appendStringInfo(&logmsg, "wrote %d buffers (%.1f%%); ",
>                          CheckpointStats.ckpt_bufs_written,
>                          (double) CheckpointStats.ckpt_bufs_written * 100 / NBuffers);
>
>         /* WAL file stats */
>         appendStringInfo(&logmsg, "%d WAL file(s) added, %d removed, %d recycled; ",
>                          CheckpointStats.ckpt_segs_added,
>                          CheckpointStats.ckpt_segs_removed,
>                          CheckpointStats.ckpt_segs_recycled);

Do we actually need to granularize the message like this? I actually
don't think so, because we print the stats even if they are 0 (buffers
written is 0 or WAL segments added is 0 and so on).

Regards,
Bharath Rupireddy.

Attachment

Re: add checkpoint stats of snapshot and mapping files of pg_logical dir

From
Andres Freund
Date:
Hi,

On 2022-03-21 11:27:15 +0530, Bharath Rupireddy wrote:
> On Sat, Mar 19, 2022 at 4:39 AM Andres Freund <andres@anarazel.de> wrote:
> >
> > before we further change this (as done in this patch) we should deduplicate
> > these huge statements in a separate patch / commit.
>
> Something like attached
> v6-0001-Deduplicate-checkpoint-restartpoint-complete-mess.patch?

Mostly. I don't see a reason for the use of the stringinfo. And I think
LogCheckpointStart() should be dealt with similarly.

I'd just make it a restartpoint ? _("restartpoint") : _("checkpoint") or such
in the argument? Then translators don't need to translate the two messages
separately.

Or we could just not translate restartpoint/checkpoint - after all we don't
translate the flags in LogCheckpointStart() either. But on balance I'd lean
towards the above.


> > This practically doubles the size of the log message - doesn't that seem a bit
> > disproportionate? Can we make this more dense? "logical decoding rewrite
> > mapping file(s) removed=" and "logical decoding snapshot file(s) removed=" is
> > quite long.
>
> Do you suggest something like below? Or some other better wording like
> "logical decoding rewrite map files" and "logical decoding snapshot
> files" or "logical rewrite map files" and "logical snapshot files" or
> just "rewrite mapping files" or "snapshot files" .... ?

Both seem still very long. I still am doubtful this level of detail is
appropriate. Seems more like a thing for a tracepoint or such. How about just
printing the time for the logical decoding operations in aggregate, without
breaking down into files, adding LSNs etc?
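
To be concrete, I'm thinking of something as small as this (a sketch only;
repl_total_msecs is a hypothetical aggregate of the time spent in
CheckPointSnapBuild() and CheckPointLogicalRewriteHeap()):

    /* one aggregate duration, no per-file breakdown, sizes or LSNs */
    if (repl_total_msecs > 0)
        appendStringInfo(&logmsg,
                         _("; logical decoding file processing=%ld.%03d s"),
                         repl_total_msecs / 1000, (int) (repl_total_msecs % 1000));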

Greetings,

Andres Freund



Re: add checkpoint stats of snapshot and mapping files of pg_logical dir

From
Nathan Bossart
Date:
On Mon, Mar 21, 2022 at 11:27:15AM +0530, Bharath Rupireddy wrote:
> On Sat, Mar 19, 2022 at 3:16 AM Nathan Bossart <nathandbossart@gmail.com> wrote:
>>         /* buffer stats */
>>         appendStringInfo(&logmsg, "wrote %d buffers (%.1f%%); ",
>>                          CheckpointStats.ckpt_bufs_written,
>>                          (double) CheckpointStats.ckpt_bufs_written * 100 / NBuffers);
>>
>>         /* WAL file stats */
>>         appendStringInfo(&logmsg, "%d WAL file(s) added, %d removed, %d recycled; ",
>>                          CheckpointStats.ckpt_segs_added,
>>                          CheckpointStats.ckpt_segs_removed,
>>                          CheckpointStats.ckpt_segs_recycled);
> 
> Do we actually need to granularize the message like this? I actually
> don't think so, because we print the stats even if they are 0 (buffers
> written is 0 or WAL segments added is 0 and so on).

I suggested it because I thought it would improve readability and simplify
optionally adding new stats to the message.  If there is another way to
accomplish those things, that is fine by me.

-- 
Nathan Bossart
Amazon Web Services: https://aws.amazon.com



Re: add checkpoint stats of snapshot and mapping files of pg_logical dir

From
Bharath Rupireddy
Date:
On Tue, Mar 22, 2022 at 12:20 AM Andres Freund <andres@anarazel.de> wrote:
> > Something like attached
> > v6-0001-Deduplicate-checkpoint-restartpoint-complete-mess.patch?
>
> Mostly. I don't see a reason for the use of the stringinfo. And I think
> LogCheckpointStart() should be dealt with similarly.
>
> I'd just make it a restartpoint ? _("restartpoint") : _("checkpoint") or such
> in the argument? Then translators don't need to translate the two messages
> separately.

Do you mean like this?
    ereport(LOG,
    /* translator: the placeholders show checkpoint options */
            (errmsg("%s starting:%s%s%s%s%s%s%s%s",
                    restartpoint ? _("restartpoint") : _("checkpoint"),
                    (flags & CHECKPOINT_IS_SHUTDOWN) ? " shutdown" : "",
                    (flags & CHECKPOINT_END_OF_RECOVERY) ? " end-of-recovery" : "",
                    (flags & CHECKPOINT_IMMEDIATE) ? " immediate" : "",
                    (flags & CHECKPOINT_FORCE) ? " force" : "",
                    (flags & CHECKPOINT_WAIT) ? " wait" : "",
                    (flags & CHECKPOINT_CAUSE_XLOG) ? " wal" : "",
                    (flags & CHECKPOINT_CAUSE_TIME) ? " time" : "",
                    (flags & CHECKPOINT_FLUSH_ALL) ? " flush-all" : "")));

Are we allowed to use _("foo") with errmsg? Isn't "foo" going to get
translated twice that way?

> Or we could just not translate restartpoint/checkpoint - after all we don't
> translate the flags in LogCheckpointStart() either. But on balance I'd lean
> towards the above.

I'm not sure if it's correct to translate some parts of the message
and leave others. What exactly is the reason for this? I think the
reason in this case might be that some flag names with hyphens and
spaces before words may not have the right/matching words in all
languages. What happens if we choose to translate/not translate the
entire message?

How about just doing this for LogCheckpointStart?
    StringInfoData logmsg;
    initStringInfo(&logmsg);
    appendStringInfo(&logmsg,
    /* translator: the placeholders show checkpoint options */
                     restartpoint ? _("restartpoint starting:%s%s%s%s%s%s%s%s") :
                                    _("checkpoint starting:%s%s%s%s%s%s%s%s"),
                     (flags & CHECKPOINT_IS_SHUTDOWN) ? " shutdown" : "",
                     (flags & CHECKPOINT_END_OF_RECOVERY) ? " end-of-recovery" : "",
                     (flags & CHECKPOINT_IMMEDIATE) ? " immediate" : "",
                     (flags & CHECKPOINT_FORCE) ? " force" : "",
                     (flags & CHECKPOINT_WAIT) ? " wait" : "",
                     (flags & CHECKPOINT_CAUSE_XLOG) ? " wal" : "",
                     (flags & CHECKPOINT_CAUSE_TIME) ? " time" : "",
                     (flags & CHECKPOINT_FLUSH_ALL) ? " flush-all" : "");
    ereport(LOG, errmsg_internal("%s", logmsg.data));
    pfree(logmsg.data);

Similarly, for LogCheckpointEnd:
    StringInfoData logmsg;
    initStringInfo(&logmsg);
    appendStringInfo(&logmsg,
                     restartpoint ? _("restartpoint complete: ") :
                                    _("checkpoint complete: "));
    appendStringInfo(&logmsg,
    /* translator: the placeholders show restartpoint/checkpoint stats */
                     _("wrote %d buffers (%.1f%%); "
                       "%d WAL file(s) added, %d removed, %d recycled; "
                       "write=%ld.%03d s, sync=%ld.%03d s, total=%ld.%03d s; "
                       "sync files=%d, longest=%ld.%03d s, average=%ld.%03d s; "
                       "distance=%d kB, estimate=%d kB"),
                     CheckpointStats.ckpt_bufs_written,
                     (double) CheckpointStats.ckpt_bufs_written * 100 / NBuffers,
                     CheckpointStats.ckpt_segs_added,
                     CheckpointStats.ckpt_segs_removed,
                     CheckpointStats.ckpt_segs_recycled,
                     write_msecs / 1000, (int) (write_msecs % 1000),
                     sync_msecs / 1000, (int) (sync_msecs % 1000),
                     total_msecs / 1000, (int) (total_msecs % 1000),
                     CheckpointStats.ckpt_sync_rels,
                     longest_msecs / 1000, (int) (longest_msecs % 1000),
                     average_msecs / 1000, (int) (average_msecs % 1000),
                     (int) (PrevCheckPointDistance / 1024.0),
                     (int) (CheckPointDistanceEstimate / 1024.0));
    if (CheckpointStats.repl_snap_files_rmvd_cnt > 0)
    {
        long t_msecs;

        t_msecs = TimestampDifferenceMilliseconds(CheckpointStats.repl_snap_start_t,
                                                  CheckpointStats.repl_snap_end_t);

        appendStringInfo(&logmsg,
                         _("; logical decoding snapshot file(s) removed=%lu, time=%ld.%03d s, cutoff LSN=%X/%X"),
                         CheckpointStats.repl_snap_files_rmvd_cnt,
                         t_msecs / 1000, (int) (t_msecs % 1000),
                         LSN_FORMAT_ARGS(CheckpointStats.repl_snap_cutoff_lsn));
    }
    if (CheckpointStats.repl_map_files_rmvd_cnt ||
        CheckpointStats.repl_map_files_syncd_cnt > 0)
    {
        long t_msecs;

        t_msecs = TimestampDifferenceMilliseconds(CheckpointStats.repl_snap_start_t,
                                                  CheckpointStats.repl_snap_end_t);

        appendStringInfo(&logmsg,
                         _("; logical decoding rewrite mapping file(s) removed=%lu, synced=%lu, time=%ld.%03d s, cutoff LSN=%X/%X"),
                         CheckpointStats.repl_map_files_rmvd_cnt,
                         CheckpointStats.repl_map_files_syncd_cnt,
                         t_msecs / 1000, (int) (t_msecs % 1000),
                         LSN_FORMAT_ARGS(CheckpointStats.repl_map_cutoff_lsn));
    }
    ereport(LOG, errmsg_internal("%s", logmsg.data));
    pfree(logmsg.data);

> > Do you suggest something like below? Or some other better wording like
> > "logical decoding rewrite map files" and "logical decoding snapshot
> > files" or "logical rewrite map files" and "logical snapshot files" or
> > just "rewrite mapping files" or "snapshot files" .... ?
>
> Both seem still very long. I still am doubtful this level of detail is
> appropriate. Seems more like a thing for a tracepoint or such. How about just
> printing the time for the logical decoding operations in aggregate, without
> breaking down into files, adding LSNs etc?

The distinction that the patch makes right now is between snapshot and
rewrite mapping files, and it makes sense to keep them separate. The
cutoff LSNs might be needed sometimes, but I'm okay with dropping them
from the message. Therefore, I'm planning to do something like this:
    if (CheckpointStats.repl_snap_files_rmvd_cnt > 0)
    {
        long t_msecs;

        t_msecs = TimestampDifferenceMilliseconds(CheckpointStats.repl_snap_start_t,
                                                  CheckpointStats.repl_snap_end_t);

        appendStringInfo(&logmsg,
                         _("; logical decoding snapshot file(s) removed=%lu, time=%ld.%03d s"),
                         CheckpointStats.repl_snap_files_rmvd_cnt,
                         t_msecs / 1000, (int) (t_msecs % 1000));
    }
    if (CheckpointStats.repl_map_files_rmvd_cnt ||
        CheckpointStats.repl_map_files_syncd_cnt > 0)
    {
        long t_msecs;

        t_msecs = TimestampDifferenceMilliseconds(CheckpointStats.repl_snap_start_t,
                                                  CheckpointStats.repl_snap_end_t);

        appendStringInfo(&logmsg,
                         _("; logical decoding rewrite mapping file(s) removed=%lu, synced=%lu, time=%ld.%03d s"),
                         CheckpointStats.repl_map_files_rmvd_cnt,
                         CheckpointStats.repl_map_files_syncd_cnt,
                         t_msecs / 1000, (int) (t_msecs % 1000));
    }

PS: I'm sorry for posting code snippets embedded in the message here,
but please bear with me until a final conclusion on the formatting is
reached, then I will post all of them as a single patch.

Regards,
Bharath Rupireddy.



Re: add checkpoint stats of snapshot and mapping files of pg_logical dir

From
Andres Freund
Date:
Hi,

On 2022-03-22 16:32:24 +0530, Bharath Rupireddy wrote:
> On Tue, Mar 22, 2022 at 12:20 AM Andres Freund <andres@anarazel.de> wrote:
> > > Something like attached
> > > v6-0001-Deduplicate-checkpoint-restartpoint-complete-mess.patch?
> >
> > Mostly. I don't see a reason for the use of the stringinfo. And I think
> > LogCheckpointStart() should be dealt with similarly.
> >
> > I'd just make it a restartpoint ? _("restartpoint") : _("checkpoint") or such
> > in the argument? Then translators don't need to translate the two messages
> > separately.
> 
> Do you mean like this?
>     ereport(LOG,
>     /* translator: the placeholders show checkpoint options */
>             (errmsg("%s starting:%s%s%s%s%s%s%s%s",
>                     restartpoint ? _("restartpoint") : _("checkpoint"),
>                     (flags & CHECKPOINT_IS_SHUTDOWN) ? " shutdown" : "",
>                     (flags & CHECKPOINT_END_OF_RECOVERY) ? "
> end-of-recovery" : "",
>                     (flags & CHECKPOINT_IMMEDIATE) ? " immediate" : "",
>                     (flags & CHECKPOINT_FORCE) ? " force" : "",
>                     (flags & CHECKPOINT_WAIT) ? " wait" : "",
>                     (flags & CHECKPOINT_CAUSE_XLOG) ? " wal" : "",
>                     (flags & CHECKPOINT_CAUSE_TIME) ? " time" : "",
>                     (flags & CHECKPOINT_FLUSH_ALL) ? " flush-all" : "")));

Yes.


> Are we allowed to use _("foo") with errmsg? Isn't "foo" going to get
> translated twice that way?

The format string gets translated, arguments don't. Choosing the German
translation (don't speak other languages :(), you can for example see:

#. translator: the placeholders show checkpoint options
#: access/transam/xlog.c:8692
#, c-format
msgid "checkpoint starting:%s%s%s%s%s%s%s%s"
msgstr "Checkpoint beginnt:%s%s%s%s%s%s%s%s"

you can see that the first words are translated, but the arguments are the
same. So there shouldn't be any double translation.


> > Or we could just not translate restartpoint/checkpoint - after all we don't
> > translate the flags in LogCheckpointStart() either. But on balance I'd lean
> > towards the above.
> 
> I'm not sure if it's correct to translate some parts of the message
> and leave others. What exactly is the reason for this?

Yea, it's a bit odd. I'd still separate combining the messages (and thus
translating "restartpoint" and "checkpoint" separately) from translating
flags.


> I think the reason in this case might be that some flag names with hyphens
> and spaces before words may not have the right/matching words in all
> languages. What happens if we choose to translate/not translate the entire
> message?

If individual words aren't translated the "original" word would be used.


> How about just doing this for LogCheckpointStart?
>     StringInfoData logmsg;
>     initStringInfo(&logmsg);
>     appendStringInfo(&logmsg,
>     /* translator: the placeholders show checkpoint options */
>                      restartpoint ? _("restartpoint starting:%s%s%s%s%s%s%s%s") :
>                                     _("checkpoint starting:%s%s%s%s%s%s%s%s"),
>                      (flags & CHECKPOINT_IS_SHUTDOWN) ? " shutdown" : "",
>                      (flags & CHECKPOINT_END_OF_RECOVERY) ? " end-of-recovery" : "",
>                      (flags & CHECKPOINT_IMMEDIATE) ? " immediate" : "",
>                      (flags & CHECKPOINT_FORCE) ? " force" : "",
>                      (flags & CHECKPOINT_WAIT) ? " wait" : "",
>                      (flags & CHECKPOINT_CAUSE_XLOG) ? " wal" : "",
>                      (flags & CHECKPOINT_CAUSE_TIME) ? " time" : "",
>                      (flags & CHECKPOINT_FLUSH_ALL) ? " flush-all" : "");
>     ereport(LOG, errmsg_internal("%s", logmsg.data));
>     pfree(logmsg.data);

Why do you insist on using stringinfos? It doesn't add *anything* here over
just a plain ereport()? And is considerably more verbose?




> > Both seem still very long. I still am doubtful this level of detail is
> > appropriate. Seems more like a thing for a tracepoint or such. How about just
> > printing the time for the logical decoding operations in aggregate, without
> > breaking down into files, adding LSNs etc?
> 
> The distinction that the patch makes right now is for snapshot and
> rewrite mapping files and it makes sense to have them separately.

-1. The line also needs to be readable...

Greetings,

Andres Freund



Re: add checkpoint stats of snapshot and mapping files of pg_logical dir

From
Bharath Rupireddy
Date:
On Tue, Mar 22, 2022 at 8:12 PM Andres Freund <andres@anarazel.de> wrote:
> > Do you mean like this?
> >     ereport(LOG,
> >     /* translator: the placeholders show checkpoint options */
> >             (errmsg("%s starting:%s%s%s%s%s%s%s%s",
> >                     restartpoint ? _("restartpoint") : _("checkpoint"),
> >                     (flags & CHECKPOINT_IS_SHUTDOWN) ? " shutdown" : "",
> >                     (flags & CHECKPOINT_END_OF_RECOVERY) ? " end-of-recovery" : "",
> >                     (flags & CHECKPOINT_IMMEDIATE) ? " immediate" : "",
> >                     (flags & CHECKPOINT_FORCE) ? " force" : "",
> >                     (flags & CHECKPOINT_WAIT) ? " wait" : "",
> >                     (flags & CHECKPOINT_CAUSE_XLOG) ? " wal" : "",
> >                     (flags & CHECKPOINT_CAUSE_TIME) ? " time" : "",
> >                     (flags & CHECKPOINT_FLUSH_ALL) ? " flush-all" : "")));
>
> Yes.

Done that way, see
v7-0001-Deduplicate-checkpoint-restartpoint-starting-comp.patch.

> > I think the reason in this case might be that some flag names with hyphens
> > and spaces before words may not have the right/matching words in all
> > languages. What happens if we choose to translate/not translate the entire
> > message?
>
> If individual words aren't translated the "original" word would be used.

Interestingly, the translated message for "checkpoint/restart
complete" is empty. Maybe because it has untranslatable strings?

#: access/transam/xlog.c:8752
#, c-format
msgid "restartpoint complete: wrote %d buffers (%.1f%%); %d WAL
file(s) added, %d removed, %d recycled; write=%ld.%03d s,
sync=%ld.%03d s, total=%ld.%03d s; sync files=%d, longest=%ld.%03d s,
average=%ld.%03d s; distance=%d kB, estimate=%d kB"
msgstr ""

> > > Both seem still very long. I still am doubtful this level of detail is
> > > appropriate. Seems more like a thing for a tracepoint or such. How about just
> > > printing the time for the logical decoding operations in aggregate, without
> > > breaking down into files, adding LSNs etc?
> >
> > The distinction that the patch makes right now is for snapshot and
> > rewrite mapping files and it makes sense to have them separately.
>
> -1. The line also needs to be readable...

IMHO, that's subjective. Even now, the existing
"checkpoint/restartpoint complete" message has a good amount of info
which makes it unreadable for some.

The number of logical decoding files (snapshot and mapping) the
checkpoint processed is a good metric to have in server logs along
with the time it took for removing/syncing them. Thoughts?

[1]
2022-03-23 04:13:06.050 UTC [1322043] LOG:  checkpoint complete: wrote
506 buffers (3.1%); 0 WAL file(s) added, 0 removed, 0 recycled;
write=0.010 s, sync=0.193 s, total=0.287 s; sync files=8,
longest=0.189 s, average=0.025 s; distance=12866 kB, estimate=12866
kB; logical snapshot file(s) removed=3, time=0.001 s
2022-03-23 04:14:33.937 UTC [1322043] LOG:  checkpoint complete: wrote
37 buffers (0.2%); 0 WAL file(s) added, 0 removed, 1 recycled;
write=0.004 s, sync=0.012 s, total=0.060 s; sync files=21,
longest=0.007 s, average=0.001 s; distance=1857 kB, estimate=11765 kB;
logical snapshot file(s) removed=3, time=0.001 s; logical rewrite
mapping file(s) removed=0, synced=28, time=0.001 s
2022-03-23 04:15:04.306 UTC [1322043] LOG:  checkpoint complete: wrote
32 buffers (0.2%); 0 WAL file(s) added, 0 removed, 0 recycled;
write=0.004 s, sync=0.008 s, total=0.078 s; sync files=11,
longest=0.006 s, average=0.001 s; distance=109 kB, estimate=10600 kB;
logical snapshot file(s) removed=4, time=0.001 s; logical rewrite
mapping file(s) removed=28, synced=28, time=0.001 s

Regards,
Bharath Rupireddy.

Attachment

Re: add checkpoint stats of snapshot and mapping files of pg_logical dir

From
Bharath Rupireddy
Date:
On Wed, Mar 23, 2022 at 10:16 AM Bharath Rupireddy
<bharath.rupireddyforpostgres@gmail.com> wrote:
>
> On Tue, Mar 22, 2022 at 8:12 PM Andres Freund <andres@anarazel.de> wrote:
> > > Do you mean like this?
> > >     ereport(LOG,
> > >     /* translator: the placeholders show checkpoint options */
> > >             (errmsg("%s starting:%s%s%s%s%s%s%s%s",
> > >                     restartpoint ? _("restartpoint") : _("checkpoint"),
> > >                     (flags & CHECKPOINT_IS_SHUTDOWN) ? " shutdown" : "",
> > >                     (flags & CHECKPOINT_END_OF_RECOVERY) ? " end-of-recovery" : "",
> > >                     (flags & CHECKPOINT_IMMEDIATE) ? " immediate" : "",
> > >                     (flags & CHECKPOINT_FORCE) ? " force" : "",
> > >                     (flags & CHECKPOINT_WAIT) ? " wait" : "",
> > >                     (flags & CHECKPOINT_CAUSE_XLOG) ? " wal" : "",
> > >                     (flags & CHECKPOINT_CAUSE_TIME) ? " time" : "",
> > >                     (flags & CHECKPOINT_FLUSH_ALL) ? " flush-all" : "")));
> >
> > Yes.
>
> Done that way, see
> v7-0001-Deduplicate-checkpoint-restartpoint-starting-comp.patch.
>
> > > I think the reason in this case might be that some flag names with hyphens
> > > and spaces before words may not have the right/matching words in all
> > > languages. What happens if we choose to translate/not translate the entire
> > > message?
> >
> > If individual words aren't translated the "original" word would be used.
>
> Interestingly, the translated message for "checkpoint/restart
> complete" is empty. Maybe because it has untranslatable strings?
>
> #: access/transam/xlog.c:8752
> #, c-format
> msgid "restartpoint complete: wrote %d buffers (%.1f%%); %d WAL
> file(s) added, %d removed, %d recycled; write=%ld.%03d s,
> sync=%ld.%03d s, total=%ld.%03d s; sync files=%d, longest=%ld.%03d s,
> average=%ld.%03d s; distance=%d kB, estimate=%d kB"
> msgstr ""
>
> > > > Both seem still very long. I still am doubtful this level of detail is
> > > > appropriate. Seems more like a thing for a tracepoint or such. How about just
> > > > printing the time for the logical decoding operations in aggregate, without
> > > > breaking down into files, adding LSNs etc?
> > >
> > > The distinction that the patch makes right now is for snapshot and
> > > rewrite mapping files and it makes sense to have them separately.
> >
> > -1. The line also needs to be readable...
>
> IMHO, that's subjective. Even now, the existing
> "checkpoint/restartpoint complete" message has a good amount of info
> which makes it unreadable for some.
>
> The number of logical decoding files (snapshot and mapping) the
> checkpoint processed is a good metric to have in server logs along
> with the time it took for removing/syncing them. Thoughts?

Realized that the CF bot is applying patches in the alphabetical order
(I've sent out both v7 patches as v7-0001). Attaching v8 patch-set
with v8-0001 and v8-0002 names. Apart from this, no change in v8.

Regards,
Bharath Rupireddy.

Attachment

Re: add checkpoint stats of snapshot and mapping files of pg_logical dir

From
Bharath Rupireddy
Date:
On Thu, Mar 24, 2022 at 8:58 AM Bharath Rupireddy
<bharath.rupireddyforpostgres@gmail.com> wrote:
>
> On Wed, Mar 23, 2022 at 10:16 AM Bharath Rupireddy
> <bharath.rupireddyforpostgres@gmail.com> wrote:
> >
> > On Tue, Mar 22, 2022 at 8:12 PM Andres Freund <andres@anarazel.de> wrote:
> > > > Do you mean like this?
> > > >     ereport(LOG,
> > > >     /* translator: the placeholders show checkpoint options */
> > > >             (errmsg("%s starting:%s%s%s%s%s%s%s%s",
> > > >                     restartpoint ? _("restartpoint") : _("checkpoint"),
> > > >                     (flags & CHECKPOINT_IS_SHUTDOWN) ? " shutdown" : "",
> > > >                     (flags & CHECKPOINT_END_OF_RECOVERY) ? " end-of-recovery" : "",
> > > >                     (flags & CHECKPOINT_IMMEDIATE) ? " immediate" : "",
> > > >                     (flags & CHECKPOINT_FORCE) ? " force" : "",
> > > >                     (flags & CHECKPOINT_WAIT) ? " wait" : "",
> > > >                     (flags & CHECKPOINT_CAUSE_XLOG) ? " wal" : "",
> > > >                     (flags & CHECKPOINT_CAUSE_TIME) ? " time" : "",
> > > >                     (flags & CHECKPOINT_FLUSH_ALL) ? " flush-all" : "")));
> > >
> > > Yes.
> >
> > Done that way, see
> > v7-0001-Deduplicate-checkpoint-restartpoint-starting-comp.patch.
> >
> > > > I think the reason in this case might be that some flag names with hyphens
> > > > and spaces before words may not have the right/matching words in all
> > > > languages. What happens if we choose to translate/not translate the entire
> > > > message?
> > >
> > > If individual words aren't translated the "original" word would be used.
> >
> > Interestingly, the translated message for "checkpoint/restart
> > complete" is empty. Maybe because it has untranslatable strings?
> >
> > #: access/transam/xlog.c:8752
> > #, c-format
> > msgid "restartpoint complete: wrote %d buffers (%.1f%%); %d WAL
> > file(s) added, %d removed, %d recycled; write=%ld.%03d s,
> > sync=%ld.%03d s, total=%ld.%03d s; sync files=%d, longest=%ld.%03d s,
> > average=%ld.%03d s; distance=%d kB, estimate=%d kB"
> > msgstr ""
> >
> > > > > Both seem still very long. I still am doubtful this level of detail is
> > > > > appropriate. Seems more like a thing for a tracepoint or such. How about just
> > > > > printing the time for the logical decoding operations in aggregate, without
> > > > > breaking down into files, adding LSNs etc?
> > > >
> > > > The distinction that the patch makes right now is for snapshot and
> > > > rewrite mapping files and it makes sense to have them separately.
> > >
> > > -1. The line also needs to be readable...
> >
> > IMHO, that's subjective. Even now, the existing
> > "checkpoint/restartpoint complete" message has a good amount of info
> > which makes it unreadable for some.
> >
> > The number of logical decoding files (snapshot and mapping) the
> > checkpoint processed is a good metric to have in server logs along
> > with the time it took for removing/syncing them. Thoughts?
>
> Realized that the CF bot is applying patches in the alphabetical order
> (I've sent out both v7 patches as v7-0001). Attaching v8 patch-set
> with v8-0001 and v8-0002 names. Apart from this, no change in v8.

CF bot is complaining
https://cirrus-ci.com/task/6458067389775872?logs=mingw_cross_warning#L399.

Attaching v9 patch-set to fix it.

Regards,
Bharath Rupireddy.

Attachment

Re: add checkpoint stats of snapshot and mapping files of pg_logical dir

From
Nathan Bossart
Date:
On Thu, Mar 24, 2022 at 03:22:11PM +0530, Bharath Rupireddy wrote:
>> > > > > Both seem still very long. I still am doubtful this level of detail is
>> > > > > appropriate. Seems more like a thing for a tracepoint or such. How about just
>> > > > > printing the time for the logical decoding operations in aggregate, without
>> > > > > breaking down into files, adding LSNs etc?
>> > > >
>> > > > The distinction that the patch makes right now is for snapshot and
>> > > > rewrite mapping files and it makes sense to have them separately.
>> > >
>> > > -1. The line also needs to be readable...
>> >
>> > IMHO, that's subjective. Even now, the existing
>> > "checkpoint/restartpoint complete" message has a good amount of info
>> > which makes it unreadable for some.
>> >
>> > The number of logical decoding files(snapshot and mapping) the
>> > checkpoint processed is a good metric to have in server logs along
>> > with the time it took for removing/syncing them. Thoughts?

I took another look at the example output, and I think I agree that logging
the total time for logical decoding operations is probably the best path
forward.  This information would be enough to clue an administrator into
the possible causes of lengthy checkpoints, but it also wouldn't disrupt
the readability of the log statement too much.

-- 
Nathan Bossart
Amazon Web Services: https://aws.amazon.com



Re: add checkpoint stats of snapshot and mapping files of pg_logical dir

From
Michael Paquier
Date:
On Mon, Apr 25, 2022 at 01:34:38PM -0700, Nathan Bossart wrote:
> I took another look at the example output, and I think I agree that logging
> the total time for logical decoding operations is probably the best path
> forward.  This information would be enough to clue an administrator into
> the possible causes of lengthy checkpoints, but it also wouldn't disrupt
> the readability of the log statement too much.

+       /* translator: the placeholders after first %s show restartpoint/checkpoint options */
+                       (errmsg("%s starting:%s%s%s%s%s%s%s%s",
+                                       restartpoint ? _("restartpoint") : _("checkpoint"),

0001 breaks translation, as "checkpoint/restartpoint" and "starting"
would be treated as separate terms to translate.  That would not matter
for English, but it does in French where we'd say "début du
checkpoint".  You could fix that by adding "starting" to each
refactored term or by building a string.  0002 does the latter, so my take
is that you should begin using a StringInfo in 0001.
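
To make the first option concrete, a rough sketch (illustrative only, not
what either patch does) would fold "starting" into each translated term, so
the whole phrase stays a single translation unit:

    ereport(LOG,
            /* translator: the placeholders after the first %s show checkpoint options */
            (errmsg("%s:%s%s%s%s%s%s%s%s",
                    restartpoint ? _("restartpoint starting") : _("checkpoint starting"),
                    (flags & CHECKPOINT_IS_SHUTDOWN) ? " shutdown" : "",
                    (flags & CHECKPOINT_END_OF_RECOVERY) ? " end-of-recovery" : "",
                    (flags & CHECKPOINT_IMMEDIATE) ? " immediate" : "",
                    (flags & CHECKPOINT_FORCE) ? " force" : "",
                    (flags & CHECKPOINT_WAIT) ? " wait" : "",
                    (flags & CHECKPOINT_CAUSE_XLOG) ? " wal" : "",
                    (flags & CHECKPOINT_CAUSE_TIME) ? " time" : "",
                    (flags & CHECKPOINT_FLUSH_ALL) ? " flush-all" : "")));

That way "restartpoint starting" and "checkpoint starting" each get their own
msgid, and a translator can reorder the words as needed.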
--
Michael

Attachment

Re: add checkpoint stats of snapshot and mapping files of pg_logical dir

From
Bharath Rupireddy
Date:
On Tue, Apr 26, 2022 at 6:31 AM Michael Paquier <michael@paquier.xyz> wrote:
>
> On Mon, Apr 25, 2022 at 01:34:38PM -0700, Nathan Bossart wrote:
> > I took another look at the example output, and I think I agree that logging
> > the total time for logical decoding operations is probably the best path
> > forward.  This information would be enough to clue an administrator into
> > the possible causes of lengthy checkpoints, but it also wouldn't disrupt
> > the readability of the log statement too much.
>
> +       /* translator: the placeholders after first %s show restartpoint/checkpoint options */
> +                       (errmsg("%s starting:%s%s%s%s%s%s%s%s",
> +                                       restartpoint ? _("restartpoint") : _("checkpoint"),
>
> 0001 breaks translation, as "checkpoint/restartpoint" and "starting"
> would be treated as separate terms to translate.  That would not matter
> for English, but it does in French where we'd say "début du
> checkpoint".  You could fix that by adding "starting" to each
> refactored term or by building a string.  0002 does the latter, so my take
> is that you should begin using a StringInfo in 0001.

Thanks for reviewing. I've addressed the review comments, PSA v10
patch. Note that we can't use a StringInfo, as the checkpointer memory
context doesn't allow pallocs in a critical section and the checkpoint
can sometimes be run inside a critical section.
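
(Purely as an illustration of that constraint, and not necessarily what v10
does: one palloc-free way is to format the optional part into a small stack
buffer and tack it onto the main message via a trailing %s. The buffer name
and size below are made up.)

    char        ldsuffix[256];

    ldsuffix[0] = '\0';
    if (CheckpointStats.repl_snap_files_rmvd_cnt > 0)
    {
        long        t_msecs;

        t_msecs = TimestampDifferenceMilliseconds(CheckpointStats.repl_snap_start_t,
                                                  CheckpointStats.repl_snap_end_t);
        /* snprintf into a stack buffer, so no palloc even in a critical section */
        snprintf(ldsuffix, sizeof(ldsuffix),
                 _("; logical decoding snapshot file(s) removed=%lu, time=%ld.%03d s"),
                 CheckpointStats.repl_snap_files_rmvd_cnt,
                 t_msecs / 1000, (int) (t_msecs % 1000));
    }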

I've also added the total number of WAL files a checkpoint has
processed (i.e., scanned in the pg_wal directory) while removing old
WAL files. This helps estimate the pg_wal disk space at the time of a
particular checkpoint, which is especially useful for debugging issues.

[1] sample output:
2022-07-19 10:33:45.378 UTC [3027866] LOG:  checkpoint starting: wal
2022-07-19 10:33:51.434 UTC [3027866] LOG:  checkpoint complete: wrote
50 buffers (0.3%); 0 WAL file(s) added, 12 removed, 35 recycled, 76
processed; write=3.651 s, sync=0.011 s, total=6.136 s; sync files=11,
longest=0.004 s, average=0.001 s; distance=770045 kB, estimate=770045
kB; lsn=0/95000260, redo lsn=0/79000068; logical decoding file(s)
processing=0.007 s

Regards,
Bharath Rupireddy.

Attachment

Re: add checkpoint stats of snapshot and mapping files of pg_logical dir

From
Nathan Bossart
Date:
On Tue, Jul 19, 2022 at 05:29:10PM +0530, Bharath Rupireddy wrote:
> I've also added the total number of WAL files a checkpoint has
> processed (scanned the pg_wal directory) while removing old WAL files.
> This helps to estimate the pg_wal disk space at the time of a
> particular checkpoint, especially useful for debugging issues.

I don't think it's clear what "processed" means here.  In any case, I think
this part belongs in a separate patch or maybe even a new thread.

-- 
Nathan Bossart
Amazon Web Services: https://aws.amazon.com



Re: add checkpoint stats of snapshot and mapping files of pg_logical dir

From
Bharath Rupireddy
Date:
On Wed, Aug 17, 2022 at 2:52 AM Nathan Bossart <nathandbossart@gmail.com> wrote:
>
> On Tue, Jul 19, 2022 at 05:29:10PM +0530, Bharath Rupireddy wrote:
> > I've also added the total number of WAL files a checkpoint has
> > processed (scanned the pg_wal directory) while removing old WAL files.
> > This helps to estimate the pg_wal disk space at the time of a
> > particular checkpoint, especially useful for debugging issues.
>
> I don't think it's clear what "processed" means here.  In any case, I think
> this part belongs in a separate patch or maybe even a new thread.

Agreed. PSA v11 patch.

--
Bharath Rupireddy
RDS Open Source Databases: https://aws.amazon.com/rds/postgresql/

Attachment

Re: add checkpoint stats of snapshot and mapping files of pg_logical dir

From
Nathan Bossart
Date:
On Wed, Aug 17, 2022 at 11:17:24AM +0530, Bharath Rupireddy wrote:
> +                        "logical decoding file(s) processing time=%ld.%03d s",

I would suggest shortening this to something like "logical decoding
processing" or "logical replication processing."

>      CheckPointRelationMap();
>      CheckPointReplicationSlots();
> +
> +    CheckpointStats.l_dec_ops_start_t = GetCurrentTimestamp();
>      CheckPointSnapBuild();
>      CheckPointLogicalRewriteHeap();
> +    CheckpointStats.l_dec_ops_end_t = GetCurrentTimestamp();
> +
>      CheckPointReplicationOrigin();

Shouldn't we include CheckPointReplicationSlots() and
CheckPointReplicationOrigin() in this new stat?

-- 
Nathan Bossart
Amazon Web Services: https://aws.amazon.com



Re: add checkpoint stats of snapshot and mapping files of pg_logical dir

From
Kyotaro Horiguchi
Date:
At Tue, 23 Aug 2022 10:49:40 -0700, Nathan Bossart <nathandbossart@gmail.com> wrote in 
> On Wed, Aug 17, 2022 at 11:17:24AM +0530, Bharath Rupireddy wrote:
> > +                        "logical decoding file(s) processing time=%ld.%03d s",
> 
> I would suggest shortening this to something like "logical decoding
> processing" or "logical replication processing."
> 
> >      CheckPointRelationMap();
> >      CheckPointReplicationSlots();
> > +
> > +    CheckpointStats.l_dec_ops_start_t = GetCurrentTimestamp();
> >      CheckPointSnapBuild();
> >      CheckPointLogicalRewriteHeap();
> > +    CheckpointStats.l_dec_ops_end_t = GetCurrentTimestamp();
> > +
> >      CheckPointReplicationOrigin();
> 
> Shouldn't we include CheckPointReplicationSlots() and
> CheckPointReplicationOrigin() in this new stat?

By the way, I think we use INSTR_TIME_* macros to measure internal
durations (mainly for the monotonic clock characteristics, and to
reduce performance degradation on Windows?). I'm not sure that's
crucial here but I don't think there's any reason to use
GetCurrentTimestamp() instead.
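
For example, a minimal sketch of what that would look like (the
l_dec_ops_msecs field is hypothetical, just to show the macros; they live
in src/include/portability/instr_time.h):

    instr_time  ld_start,
                ld_duration;

    INSTR_TIME_SET_CURRENT(ld_start);
    CheckPointSnapBuild();
    CheckPointLogicalRewriteHeap();
    INSTR_TIME_SET_CURRENT(ld_duration);
    INSTR_TIME_SUBTRACT(ld_duration, ld_start);

    /* milliseconds as a double; hypothetical counter, not in the patch */
    CheckpointStats.l_dec_ops_msecs = INSTR_TIME_GET_MILLISEC(ld_duration);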

regards.

-- 
Kyotaro Horiguchi
NTT Open Source Software Center



Re: add checkpoint stats of snapshot and mapping files of pg_logical dir

From
Michael Paquier
Date:
On Wed, Aug 24, 2022 at 10:48:01AM +0900, Kyotaro Horiguchi wrote:
> By the way, I think we use INSTR_TIME_* macros to measure internal
> durations (mainly for the monotonic clock characteristics, and to
> reduce performance degradation on Windows?). I'm not sure that's
> crucial here but I don't think there's any reason to use
> GetCurrentTimestamp() instead.

This implies two calls of gettimeofday(), but that does not worry me
much in this code path.  There is some consistency with
CheckpointGuts() where we take timestamps for the sync requests.
--
Michael

Attachment

Re: add checkpoint stats of snapshot and mapping files of pg_logical dir

From
Bharath Rupireddy
Date:
On Tue, Aug 23, 2022 at 11:19 PM Nathan Bossart
<nathandbossart@gmail.com> wrote:
>
> On Wed, Aug 17, 2022 at 11:17:24AM +0530, Bharath Rupireddy wrote:
> > +                                             "logical decoding file(s) processing time=%ld.%03d s",
>
> I would suggest shortening this to something like "logical decoding
> processing" or "logical replication processing."

"logical decoding processing" seems fine to me, see [1]. I've found
other error messages using "logical decoding".

                 errmsg("logical decoding requires wal_level >= logical")));
                 errmsg("logical decoding requires a database connection")));
                 errmsg("logical decoding cannot be used while in recovery")));
                 errmsg("logical decoding cannot be used while in recovery")));
                (errmsg("logical decoding found consistent point at %X/%X",
                (errmsg("logical decoding found initial starting point
at %X/%X",
                (errmsg("logical decoding found initial consistent
point at %X/%X",

> >       CheckPointRelationMap();
> >       CheckPointReplicationSlots();
> > +
> > +     CheckpointStats.l_dec_ops_start_t = GetCurrentTimestamp();
> >       CheckPointSnapBuild();
> >       CheckPointLogicalRewriteHeap();
> > +     CheckpointStats.l_dec_ops_end_t = GetCurrentTimestamp();
> > +
> >       CheckPointReplicationOrigin();
>
> Shouldn't we include CheckPointReplicationSlots() and
> CheckPointReplicationOrigin() in this new stat?

In my experience, I haven't seen those functions take a long time.
CheckPointReplicationSlots() runs a for loop over all slots, not just
the logical ones, and shouldn't take much time. CheckPointReplicationOrigin(),
on the other hand, does deal with logical replication and has some
heavyweight operations such as XLogFlush() for all logical slots, hence
counting it is reasonable.
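
Concretely, counting CheckPointReplicationOrigin() would only mean moving
the end timestamp down by one call, roughly like this (reusing the
l_dec_ops_* fields from the hunk quoted above; just a sketch):

    CheckPointRelationMap();
    CheckPointReplicationSlots();

    CheckpointStats.l_dec_ops_start_t = GetCurrentTimestamp();
    CheckPointSnapBuild();
    CheckPointLogicalRewriteHeap();
    CheckPointReplicationOrigin();
    CheckpointStats.l_dec_ops_end_t = GetCurrentTimestamp();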

On Wed, Aug 24, 2022 at 10:04 AM Michael Paquier <michael@paquier.xyz> wrote:
>
> On Wed, Aug 24, 2022 at 10:48:01AM +0900, Kyotaro Horiguchi wrote:
> > By the way, I think we use INSTR_TIME_* macros to measure internal
> > durations (mainly for the monotonic clock characteristics, and to
> > reduce performance degradation on Windows?). I'm not sure that's
> > crucial here but I don't think there's any reason to use
> > GetCurrentTimestamp() instead.
>
> This implies two calls of gettimeofday(), but that does not worry me
> much in this code path.  There is some consistency with
> CheckpointGuts() where we take timestamps for the sync requests.

+1 to use GetCurrentTimestamp() for consistency.

I've also improved the commit message a bit.

Please review the v12 patch attached.

[1] LOG:  checkpoint complete: wrote 3 buffers (0.0%); 0 WAL file(s)
added, 0 removed, 0 recycled; write=0.003 s, sync=0.002 s, total=0.011
s; sync files=2, longest=0.001 s, average=0.001 s; distance=0 kB,
estimate=0 kB; lsn=0/1496528, redo lsn=0/14964F0; logical decoding
processing=0.005 s

-- 
Bharath Rupireddy
RDS Open Source Databases: https://aws.amazon.com/rds/postgresql/

Attachment

Re: add checkpoint stats of snapshot and mapping files of pg_logical dir

From
Kyotaro Horiguchi
Date:
At Wed, 24 Aug 2022 13:34:54 +0900, Michael Paquier <michael@paquier.xyz> wrote in 
> On Wed, Aug 24, 2022 at 10:48:01AM +0900, Kyotaro Horiguchi wrote:
> > By the way, I think we use INSTR_TIME_* macros to do masure internal
> > durations (mainly for the monotonic clock characteristics, and to
> > reduce performance degradation on Windows?). I'm not sure that's
> > crutial here but I don't think there's any reason to use
> > GetCurrentTimestamp() instead.
> 
> This implies two calls of gettimeofday(), but that does not worry me
> much in this code path.  There is some consistency with
> CheckpointGuts() where we take timestamps for the sync requests.

Mmm. heap_vacuum_rel() does the same. Looking at it from the other
direction, those two are the only uses of GetCurrentTimestamp() for this
purpose. Still, I'm fine with that. Thanks for the info.

regards.

-- 
Kyotaro Horiguchi
NTT Open Source Software Center



Re: add checkpoint stats of snapshot and mapping files of pg_logical dir

From
Nathan Bossart
Date:
On Thu, Aug 25, 2022 at 10:12:00AM +0530, Bharath Rupireddy wrote:
> Please review the v12 patch attached.

LGTM.  I've marked this as ready-for-committer.

-- 
Nathan Bossart
Amazon Web Services: https://aws.amazon.com



Re: add checkpoint stats of snapshot and mapping files of pg_logical dir

From
Hamid Akhtar
Date:


On Sun, 28 Aug 2022 at 17:30, Nathan Bossart <nathandbossart@gmail.com> wrote:
On Thu, Aug 25, 2022 at 10:12:00AM +0530, Bharath Rupireddy wrote:
> Please review the v12 patch attached.

LGTM.  I've marked this as ready-for-committer.

This patch needs an update. It is failing on Windows for the test case
"33/238 postgresql:pg_upgrade / pg_upgrade/002_pg_upgrade" with an
"exit status 2" error.

The details are available on the cfbot.cputube.org.
https://cirrus-ci.com/task/5709014662119424?logs=check_world#L267


--
Hamid Akhtar,
Percona LLC, www.percona.com

Re: add checkpoint stats of snapshot and mapping files of pg_logical dir

From
Justin Pryzby
Date:
On Sun, Sep 25, 2022 at 04:50:54PM -0400, Hamid Akhtar wrote:
> On Sun, 28 Aug 2022 at 17:30, Nathan Bossart <nathandbossart@gmail.com> wrote:
> 
> > On Thu, Aug 25, 2022 at 10:12:00AM +0530, Bharath Rupireddy wrote:
> > > Please review the v12 patch attached.
> >
> > LGTM.  I've marked this as ready-for-committer.
> >
> 
> This patch needs an update. It is failing on Windows for the test case
> "33/238 postgresql:pg_upgrade / pg_upgrade/002_pg_upgrade with an
> "exit status 2" error.
> 
> The details are available on the cfbot.cputube.org.
> https://cirrus-ci.com/task/5709014662119424?logs=check_world#L267

It failed like:

https://api.cirrus-ci.com/v1/artifact/task/5709014662119424/testrun/build/testrun/pg_upgrade/002_pg_upgrade/log/regress_log_002_pg_upgrade
[19:35:31.017](76.307s) ok 10 - run of pg_upgrade for new instance
[19:35:31.017](0.000s) not ok 11 - pg_upgrade_output.d/ removed after pg_upgrade success
[19:35:31.018](0.001s) #   Failed test 'pg_upgrade_output.d/ removed after pg_upgrade success'
#   at C:/cirrus/src/bin/pg_upgrade/t/002_pg_upgrade.pl line 261.

This is a transient failure; it normally passes:
https://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest/39/3389

Actually, it's the same problem as reported here:
20220919213217.ptqfdlcc5idk5xup@awork3.anarazel.de

I don't think there's anything in place to help scrape the Cirrus CI
logs, but I suppose other patches have had/will have this same failure.

I don't know, but maybe meson made this worse by exercising/stressing
concurrent filesystem operations.

-- 
Justin



Re: add checkpoint stats of snapshot and mapping files of pg_logical dir

From
Bharath Rupireddy
Date:
On Mon, Sep 26, 2022 at 3:21 AM Justin Pryzby <pryzby@telsasoft.com> wrote:
>
> > This patch needs an update. It is failing on Windows for the test case
> > "33/238 postgresql:pg_upgrade / pg_upgrade/002_pg_upgrade with an
> > "exit status 2" error.
> >
> > The details are available on the cfbot.cputue.org.
> > https://cirrus-ci.com/task/5709014662119424?logs=check_world#L267
>
> It failed like:
>
> https://api.cirrus-ci.com/v1/artifact/task/5709014662119424/testrun/build/testrun/pg_upgrade/002_pg_upgrade/log/regress_log_002_pg_upgrade
> [19:35:31.017](76.307s) ok 10 - run of pg_upgrade for new instance
> [19:35:31.017](0.000s) not ok 11 - pg_upgrade_output.d/ removed after pg_upgrade success
> [19:35:31.018](0.001s) #   Failed test 'pg_upgrade_output.d/ removed after pg_upgrade success'
> #   at C:/cirrus/src/bin/pg_upgrade/t/002_pg_upgrade.pl line 261.
>
> This is a transient failure; it normally passes:
> https://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest/39/3389
>
> Actually, it's the same problem as reported here:
> 20220919213217.ptqfdlcc5idk5xup@awork3.anarazel.de

Thanks, Justin, for identifying and linking the issue. I've sent a note
in the related thread [1], just for the record.

I've changed the CF entry for this thread
https://commitfest.postgresql.org/39/3389/ back to RfC as the issue is
unrelated to this patch.

[1] https://www.postgresql.org/message-id/CALj2ACXAQqwwfPzbQr0C6spKq6f9rR6PYHWwm-_o%3D_pMuJ2oxg%40mail.gmail.com

--
Bharath Rupireddy
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com