Re: PostgreSQL and a Catch-22 Issue related to dead rows - Mailing list pgsql-performance

From Lars Aksel Opsahl
Subject Re: PostgreSQL and a Catch-22 Issue related to dead rows
Msg-id AM7P189MB102832DD646EF0D81EB710DB9D3D2@AM7P189MB1028.EURP189.PROD.OUTLOOK.COM
In response to Re: PostgreSQL and a Catch-22 Issue related to dead rows  (Rick Otten <rottenwindfish@gmail.com>)
List pgsql-performance
From: Rick Otten <rottenwindfish@gmail.com>
Sent: Monday, December 9, 2024 3:25 PM
To: Lars Aksel Opsahl <Lars.Opsahl@nibio.no>
Cc: pgsql-performance@lists.postgresql.org <pgsql-performance@lists.postgresql.org>
Subject: Re: PostgreSQL and a Catch-22 Issue related to dead rows
 

Yes, there are very good reasons for the way removal of dead rows works now, but is there any chance of adding an option, when creating a table, to disable this behavior, for instance for unlogged tables?
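
(For illustration, a minimal sketch of the behavior in question; the table and column names are made up. Even an UNLOGGED table, which skips WAL, keeps dead row versions around until a vacuum runs:)

```sql
-- Hypothetical example: UNLOGGED skips WAL logging, but MVCC still
-- retains old row versions after an UPDATE.
CREATE UNLOGGED TABLE topo_work (id int PRIMARY KEY, geom text);
INSERT INTO topo_work VALUES (1, 'LINESTRING(0 0, 1 1)');
UPDATE topo_work SET geom = geom WHERE id = 1;  -- leaves a dead row version

-- The dead version keeps burdening scans until it is reclaimed:
VACUUM topo_work;
```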

Are you saying your job is I/O bound (not memory or CPU)? And that you can only improve I/O performance by committing more frequently, because the commit removes dead tuples which you have no other means to clear? Is your WAL already on your fastest disk?

All of your parallel jobs are operating on the same set of rows?  So partitioning the table wouldn't help?
 

The problem is not I/O or CPU bound, or related to WAL files, but that "dead rows" are impacting the SQL queries. About partitioning: at this stage the data are split into about 750 different topology structures. We have many workers working in parallel on these different structures, but only one worker at a time on the same structure.
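
(As an aside, a quick way to confirm that dead-row buildup is what is slowing the queries is the cumulative statistics view; the LIMIT and column choice here are just illustrative:)

```sql
-- Inspect per-table dead-tuple counts and last autovacuum time
SELECT relname, n_live_tup, n_dead_tup, last_autovacuum
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 10;
```

Tables where n_dead_tup is large relative to n_live_tup, and last_autovacuum is old or NULL, are the ones whose scans are paying for the dead rows.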

Thanks

Lars
