Thanks Laurenz, that confirms what I was assuming. Archiving is via pgbackrest to a backup server, over SSH. Approx 750ms to archive each segment is crazy -- I'll check compression parameters too.
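For reference, here's roughly what I plan to try first in pgbackrest.conf. The option names are from the pgbackrest docs, but the values are only a starting guess for our setup, not a recommendation:

    [global]
    # lighter compression: lz4 costs far less CPU than the default gzip
    compress-type=lz4
    compress-level=1
    # push several segments in parallel rather than one at a time
    process-max=4
    # asynchronous archiving batches segments; spool-path is required for it
    archive-async=y
    spool-path=/var/spool/pgbackrest

My understanding is that if the per-segment SSH/startup overhead is the real cost, archive-async plus process-max should help the most, since the overhead gets amortised over a whole batch of segments.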
Any reason not to bump it up to 1GB? Or is that overkill?
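In case it's useful for the discussion, this is how I understand the mechanics of changing it (the paths are placeholders, and we would obviously rehearse it on a clone first). The segment size must be a power of two and is capped at 1GB; it can be set at initdb time for a new cluster, or rewritten on an existing cluster with pg_resetwal after a clean shutdown:

    # new cluster: size in MB, power of two, maximum 1024 (= 1GB)
    initdb --wal-segsize=1024 -D /pgdata/new

    # existing cluster: only on a cleanly shut down data directory.
    # pg_resetwal discards the current WAL, so we'd take a fresh full
    # backup (and re-check the pgbackrest stanza) right afterwards.
    pg_ctl stop -D /pgdata -m fast
    pg_resetwal --wal-segsize=1024 /pgdata
    pg_ctl start -D /pgdata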
> On Wed, 2025-12-17 at 16:13 +0100, Colin 't Hart wrote:
> > I see very little advice on tuning WAL segment size.
> >
> > One of my clients has a few data warehouses at around 8 - 16 TB.
> >
> > On one of the nodes there are approx 15000 WAL segments of 16MB each,
> > totalling approx 230GB. The archiver is archiving approx one per
> > second, so approx 4 hours to clear.
> >
> > Would we gain anything by bumping the WAL segment size?
>
> Very likely yes, if the problem is the overhead of starting the
> archive_command.
>
> Another thing that can slow down archiving is if you compress these
> segments too aggressively.