Thread: VacuumDB generating huge WAL files
Hi All
When we are running vacuumdb on our 1.5TB database, it generates approximately 60GB of WAL files. Is there any way we can reduce this WAL generation?
Regards
On Wednesday, December 11, 2024, SASIKUMAR Devaraj <sashikumard@yahoo.com> wrote:
When we are running vacuumdb on our 1.5TB database, it generates approximately 60GB of WAL files. Is there any way we can reduce this WAL generation?
Set a much more aggressive autovacuum so you don’t accumulate as many dead tuples between runs?
David J.
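For illustration, "more aggressive" could mean lowering the per-table scale factor so autovacuum runs before many dead tuples pile up; the table name and thresholds below are placeholders, not recommendations:

    -- Hypothetical example: trigger autovacuum after roughly 1% dead tuples
    -- (plus a small fixed threshold) instead of the 20% default.
    ALTER TABLE my_big_table SET (
        autovacuum_vacuum_scale_factor = 0.01,
        autovacuum_vacuum_threshold    = 1000
    );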
On Wed, 2024-12-11 at 23:25 -0700, David G. Johnston wrote:
> On Wednesday, December 11, 2024, SASIKUMAR Devaraj <sashikumard@yahoo.com> wrote:
> > When we are running vacuumdb on our 1.5TB database, it generates approximately 60GB
> > of WAL files. Is there any way we can reduce this WAL generation?
>
> Set a much more aggressive autovacuum so you don’t accumulate as many dead tuples between runs?

You can also activate "wal_compression" for smaller full-page images
and increase "max_wal_size" to get fewer of them.

But 60GB of WAL shouldn't be a problem. Why are they a problem for you?

Yours,
Laurenz Albe
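As a rough sketch of those two suggestions, both parameters can be changed without a restart; the 16GB value is only an example, not a recommendation:

    -- Compress full-page images and allow more WAL between checkpoints.
    ALTER SYSTEM SET wal_compression = on;
    ALTER SYSTEM SET max_wal_size = '16GB';
    SELECT pg_reload_conf();  -- both settings take effect on reload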
Hi
Since it generates 60GB in a few minutes, my replication lag increases up to 10GB for around 10 minutes, which could mean data loss if the primary node fails.
Regards
Sasi
On Thursday, December 12, 2024, 12:06 PM, Laurenz Albe <laurenz.albe@cybertec.at> wrote:
On Wed, 2024-12-11 at 23:25 -0700, David G. Johnston wrote:
> On Wednesday, December 11, 2024, SASIKUMAR Devaraj <sashikumard@yahoo.com> wrote:
> > When we are running vacuumdb on our 1.5TB database, it generates approximately 60GB
> > of WAL files. Is there any way we can reduce this WAL generation?
>
> Set a much more aggressive autovacuum so you don’t accumulate as many dead tuples between runs?
You can also activate "wal_compression" for smaller full-page images
and increase "max_wal_size" to get fewer of them.
But 60GB of WAL shouldn't be a problem. Why are they a problem for you?
Yours,
Laurenz Albe
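If the concern is how far the standby falls behind while the vacuum runs, the lag can be watched from the primary; a minimal sketch, assuming physical streaming replication:

    -- Byte lag of each standby, measured on the primary.
    SELECT application_name,
           pg_wal_lsn_diff(pg_current_wal_lsn(), sent_lsn)   AS send_lag_bytes,
           pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) AS replay_lag_bytes
    FROM pg_stat_replication;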
Check the max_wal_size setting in postgresql.conf, and also checkpoint_completion_target (the default is 0.9), which spreads checkpoint writes over the checkpoint interval; more frequent checkpoints add to WAL generation through full-page writes.
First do a table-level vacuum and see what is causing it to write more WAL files.
From: SASIKUMAR Devaraj <sashikumard@yahoo.com>
Sent: Thursday, December 12, 2024 11:38:26 AM
To: Pgsql-admin <pgsql-admin@lists.postgresql.org>
Subject: VacuumDB generating huge WAL files
Hi All
When we are running vacuumdb on our 1.5TB database, it generates approximately 60GB of WAL files. Is there any way we can reduce this WAL generation?
Regards
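To illustrate the two checks above, the settings can be read with SHOW and a single suspect table vacuumed with VERBOSE; the table name is a placeholder, and the "WAL usage" summary line only appears on recent PostgreSQL releases:

    SHOW max_wal_size;
    SHOW checkpoint_completion_target;

    -- Vacuum one suspect table and inspect the verbose output,
    -- including the WAL usage summary on recent releases.
    VACUUM (VERBOSE) my_big_table;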
We were able to identify the table that generates around 40GB in a few minutes during the vacuum process. It is a partitioned table with less than 0.5 percent dead tuples. What should the next steps be?
On Thursday, December 12, 2024, 8:37 PM, vignesh kumar <vigneshkumar.venugopal@outlook.com> wrote:
Check the max_wal_size setting in postgresql.conf, and also checkpoint_completion_target (the default is 0.9), which spreads checkpoint writes over the checkpoint interval; more frequent checkpoints add to WAL generation through full-page writes.
First do a table-level vacuum and see what is causing it to write more WAL files.
From: SASIKUMAR Devaraj <sashikumard@yahoo.com>
Sent: Thursday, December 12, 2024 11:38:26 AM
To: Pgsql-admin <pgsql-admin@lists.postgresql.org>
Subject: VacuumDB generating huge WAL files
Hi All
When we are running vacuumdb on our 1.5TB database, it generates approximately 60GB of WAL files. Is there any way we can reduce this WAL generation?
Regards
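For reference, the per-partition dead-tuple ratio and size can be double-checked with a query along these lines (the name pattern is a placeholder):

    SELECT relname,
           n_live_tup,
           n_dead_tup,
           round(100.0 * n_dead_tup / nullif(n_live_tup + n_dead_tup, 0), 2) AS dead_pct,
           pg_size_pretty(pg_total_relation_size(relid)) AS total_size
    FROM pg_stat_user_tables
    WHERE relname LIKE 'my_big_table%'
    ORDER BY n_dead_tup DESC;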
On Fri, 2024-12-13 at 10:03 +0000, SASIKUMAR Devaraj wrote:
> > When we are running vacuumdb on our 1.5TB database, it
> > generates approximately 60GB of WAL files. Is there any way we can
> > reduce this WAL generation?
>
> We were able to identify the table that generates around 40GB in
> a few minutes during the vacuum process. It is a partitioned table
> with less than 0.5 percent dead tuples. What should the next steps be?

I think I told you: increase "max_wal_size" and enable "wal_compression".
That should reduce the amount of WAL.

Yours,
Laurenz Albe
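To see whether full-page images really are what drives the volume, on PostgreSQL 14 or later the cluster-wide WAL statistics can be checked; a rough sketch:

    -- WAL generated since the last statistics reset (PostgreSQL 14+).
    SELECT wal_records, wal_fpi, pg_size_pretty(wal_bytes) AS wal_volume
    FROM pg_stat_wal;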