Re: Deleting older versions in unique indexes to avoid page splits - Mailing list pgsql-hackers

From Amit Kapila
Subject Re: Deleting older versions in unique indexes to avoid page splits
Date
Msg-id CAA4eK1JFOFsx+Ma6M5zeH1H0mBL=QdOw8f4vJy81gtmRmShi=Q@mail.gmail.com
In response to Re: Deleting older versions in unique indexes to avoid page splits  (Andres Freund <andres@anarazel.de>)
List pgsql-hackers
On Wed, Jan 20, 2021 at 10:50 AM Andres Freund <andres@anarazel.de> wrote:
>
> Hi,
>
> On 2021-01-20 09:24:35 +0530, Amit Kapila wrote:
> > I feel extending the deletion mechanism based on the number of LP_DEAD
> > items sounds more favorable than giving preference to duplicate
> > items. Sure, it will give equally good or better results if there are
> > no long-standing open transactions.
>
> There are a lot of workloads that never set LP_DEAD because all scans are
> bitmap index scans. And there's no obvious way to address that. So I
> don't think it's wise to rely purely on LP_DEAD.
>

Right, I understand this point. The point I was trying to make is
that with this new technique we might not be able to delete any
tuples (or only very few) if there are long-running open transactions
in the system, yet still incur CPU and I/O costs. I am completely in
favor of this technique and patch, so don't get me wrong. As
mentioned in my reply to Peter, I am just trying to see if there are
more ways we can use this optimization and reduce the chances of
regression (if there is any).

-- 
With Regards,
Amit Kapila.
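
As a hedged illustration of the behavior Andres describes (a sketch
against a standard PostgreSQL setup; the table name, row counts, and
predicate here are hypothetical): plain index scans can mark
known-dead index entries with LP_DEAD via the kill_prior_tuple
mechanism, but bitmap index scans do not, so a workload whose plans
are all bitmap scans never accumulates LP_DEAD hints for cleanup.
Forcing bitmap scans with the planner GUCs makes this easy to
observe:

```sql
-- Hypothetical session: create old row versions, then force bitmap
-- scans so the index AM never gets a chance to set LP_DEAD bits.
CREATE TABLE t (id int PRIMARY KEY, val int);
INSERT INTO t SELECT g, g FROM generate_series(1, 100000) g;
UPDATE t SET val = val + 1;   -- leaves dead versions behind in the index

SET enable_indexscan = off;   -- disallow plain index scans
SET enable_seqscan = off;     -- disallow sequential scans
EXPLAIN (COSTS OFF) SELECT * FROM t WHERE id = 42;
-- Plan uses a Bitmap Heap Scan over a Bitmap Index Scan; unlike a
-- plain Index Scan, this path does not set LP_DEAD on dead entries.
```

This is why a deletion heuristic that relies solely on LP_DEAD items
can leave such workloads without any index cleanup, which is the gap
the duplicate-driven (bottom-up) deletion in the patch addresses.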


