Thread: [PATCH] Microvacuum for gist.

[PATCH] Microvacuum for gist.

From
Anastasia Lubennikova
Date:
Hi,

I have written microvacuum support for the GiST access method.
Briefly, microvacuum involves two steps:
1. When a search finds that a tuple is invisible to all transactions, it is marked LP_DEAD and the page is marked as "has dead tuples".
2. Then, when an insert lands on a full page that has dead tuples, it performs microvacuum instead of splitting the page.
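The two steps could be sketched with a toy model (simplified, hypothetical structures only to illustrate the idea, not the actual patch code):

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* Toy model of a fixed-capacity leaf page (hypothetical, not patch code). */
#define PAGE_CAPACITY 4

typedef struct
{
    int     tids[PAGE_CAPACITY];
    bool    dead[PAGE_CAPACITY];    /* LP_DEAD hints */
    int     ntuples;
    bool    has_garbage;            /* "page has dead tuples" flag */
} ToyPage;

/* Step 1: a scan notices a tuple is invisible to everyone and hints it. */
static void
mark_dead(ToyPage *p, int idx)
{
    p->dead[idx] = true;
    p->has_garbage = true;
}

/* Microvacuum: physically remove hinted tuples, compacting the page. */
static void
microvacuum(ToyPage *p)
{
    int     dst = 0;

    for (int src = 0; src < p->ntuples; src++)
        if (!p->dead[src])
            p->tids[dst++] = p->tids[src];
    memset(p->dead, 0, sizeof(p->dead));
    p->ntuples = dst;
    p->has_garbage = false;
}

/* Step 2: on insert into a full page, try microvacuum before splitting. */
static bool
insert_tuple(ToyPage *p, int tid)
{
    if (p->ntuples == PAGE_CAPACITY && p->has_garbage)
        microvacuum(p);
    if (p->ntuples == PAGE_CAPACITY)
        return false;           /* still full: caller must split the page */
    p->tids[p->ntuples++] = tid;
    return true;
}
```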
You can find a kind of review here [1].

The patch is attached. Please review it.

-- 
Best regards,
Lubennikova Anastasia
Attachment

Re: [PATCH] Microvacuum for gist.

From
Alexander Korotkov
Date:
Hi!

On Thu, Jul 30, 2015 at 2:51 PM, Anastasia Lubennikova <lubennikovaav@gmail.com> wrote:
I have written microvacuum support for the GiST access method.
Briefly, microvacuum involves two steps:
1. When a search finds that a tuple is invisible to all transactions, it is marked LP_DEAD and the page is marked as "has dead tuples".
2. Then, when an insert lands on a full page that has dead tuples, it performs microvacuum instead of splitting the page.
You can find a kind of review here [1].

The patch is attached. Please review it.

Nice!

Some notes about this patch.

1) Could you give some test case demonstrating that microvacuum really works with the patch? In the end, the index should grow less with microvacuum.

2) Generating notices for every dead tuple would be too noisy. I suggest replacing the notice with one of the debug levels.

elog(NOTICE, "gistkillitems. Mark Item Dead offnum %hd, blkno %d", offnum, BufferGetBlockNumber(buffer));


3) Please recheck the coding style. For instance, this line needs more spaces, and the open brace should be on the next line.

+ if ((scan->kill_prior_tuple)&&(so->curPageData > 0)&&(so->curPageData == so->nPageData)) {

------
Alexander Korotkov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company
 

Re: [PATCH] Microvacuum for gist.

From
Jim Nasby
Date:
On 7/30/15 7:33 AM, Alexander Korotkov wrote:
> 2) Generating notices for every dead tuple would be too noisy. I suggest
> replacing the notice with one of the debug levels.
>
> + elog(NOTICE, "gistkillitems. Mark Item Dead offnum %hd, blkno %d",
> offnum, BufferGetBlockNumber(buffer));

Even that seems like pretty serious overkill. vacuumlazy.c doesn't have 
anything like that, and I don't think the BTree code does either. If you 
were debugging something and actually needed it I'd say drop in a 
temporary printf().
-- 
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Data in Trouble? Get it in Treble! http://BlueTreble.com



Re: [PATCH] Microvacuum for gist.

From
Anastasia Lubennikova
Date:


30.07.2015 16:33, Alexander Korotkov wrote:
Hi!

On Thu, Jul 30, 2015 at 2:51 PM, Anastasia Lubennikova <lubennikovaav@gmail.com> wrote:
I have written microvacuum support for the GiST access method.
Briefly, microvacuum involves two steps:
1. When a search finds that a tuple is invisible to all transactions, it is marked LP_DEAD and the page is marked as "has dead tuples".
2. Then, when an insert lands on a full page that has dead tuples, it performs microvacuum instead of splitting the page.
You can find a kind of review here [1].

The patch is attached. Please review it.

Nice!

Some notes about this patch.

1) Could you give some test case demonstrating that microvacuum really works with the patch? In the end, the index should grow less with microvacuum.

2) Generating notices for every dead tuple would be too noisy. I suggest replacing the notice with one of the debug levels.

elog(NOTICE, "gistkillitems. Mark Item Dead offnum %hd, blkno %d", offnum, BufferGetBlockNumber(buffer));


3) Please recheck the coding style. For instance, this line needs more spaces, and the open brace should be on the next line.

+ if ((scan->kill_prior_tuple)&&(so->curPageData > 0)&&(so->curPageData == so->nPageData)) {

------
Alexander Korotkov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company
 
1) A test and its results are in the attachments. Everything seems to work as expected.
2) I dropped these notices; they were only for debugging. An updated patch is attached.
3) Fixed.
-- 
Anastasia Lubennikova
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company
Attachment

Re: [PATCH] Microvacuum for gist.

From
Alexander Korotkov
Date:
On Mon, Aug 3, 2015 at 12:27 PM, Anastasia Lubennikova <a.lubennikova@postgrespro.ru> wrote:
1) A test and its results are in the attachments. Everything seems to work as expected.
2) I dropped these notices; they were only for debugging. An updated patch is attached.
3) Fixed.

Good! Another couple of notes from me:
1) I think gistvacuumpage() and gistkillitems() need function-level comments.
2) ItemIdIsDead() can be used in the index scan, as is done in _bt_checkkeys().

------
Alexander Korotkov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company
 

Re: [PATCH] Microvacuum for gist.

From
Anastasia Lubennikova
Date:

On Mon, Aug 3, 2015 at 12:27 PM, Anastasia Lubennikova <a.lubennikova@postgrespro.ru> wrote:
1) A test and its results are in the attachments. Everything seems to work as expected.
2) I dropped these notices; they were only for debugging. An updated patch is attached.
3) Fixed.

Good! Another couple of notes from me:
1) I think gistvacuumpage() and gistkillitems() need function-level comments.
2) ItemIdIsDead() can be used in the index scan, as is done in _bt_checkkeys().

------
Alexander Korotkov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company
 
I've added some comments.
ItemIdIsDead() is used now (dead tuples are simply skipped, as if they didn't match the quals).
And there is one more check of the LSN in gistkillitems() to make sure that the page was not changed between reads.
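The dead-tuple skip amounts to something like this toy model (hypothetical names, only to illustrate the idea, not the patch code):

```c
#include <assert.h>
#include <stdbool.h>

/* Toy line pointer with an LP_DEAD-style flag (hypothetical model). */
typedef struct
{
    int     tid;
    bool    lp_dead;
} ToyItem;

/*
 * Collect the TIDs a scan would return, skipping items already hinted
 * dead: they are treated exactly as if they failed the quals, so no
 * heap access is needed for them.
 */
static int
scan_page(const ToyItem *items, int nitems, int *out)
{
    int     nout = 0;

    for (int i = 0; i < nitems; i++)
    {
        if (items[i].lp_dead)
            continue;           /* dead hint: skip without heap access */
        out[nout++] = items[i].tid;
    }
    return nout;
}
```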
-- 
Anastasia Lubennikova
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company
Attachment

Re: [PATCH] Microvacuum for gist.

From
Andres Freund
Date:
Hi,

I don't know too much about GiST, but I did a quick read-through, mostly
spotting some stylistic issues. Please fix those to make it easier for
the next reviewer.

> *** a/src/backend/access/gist/gist.c
> --- b/src/backend/access/gist/gist.c
> ***************
> *** 36,42 **** static bool gistinserttuples(GISTInsertState *state, GISTInsertStack *stack,
>                    bool unlockbuf, bool unlockleftchild);
>   static void gistfinishsplit(GISTInsertState *state, GISTInsertStack *stack,
>                   GISTSTATE *giststate, List *splitinfo, bool releasebuf);
> ! 

>   #define ROTATEDIST(d) do { \
>       SplitedPageLayout *tmp=(SplitedPageLayout*)palloc(sizeof(SplitedPageLayout)); \
> --- 36,42 ----
>                    bool unlockbuf, bool unlockleftchild);
>   static void gistfinishsplit(GISTInsertState *state, GISTInsertStack *stack,
>                   GISTSTATE *giststate, List *splitinfo, bool releasebuf);
> ! static void gistvacuumpage(Relation rel, Page page, Buffer buffer);
>   
>   #define ROTATEDIST(d) do { \
>       SplitedPageLayout
>       *tmp=(SplitedPageLayout*)palloc(sizeof(SplitedPageLayout)); \

Newline removed.

> +     /*
> +      * If leaf page is full, try at first to delete dead tuples. And then
> +      * check again.
> +      */
> +     if ((is_split) && GistPageIsLeaf(page) && GistPageHasGarbage(page))

superfluous parens around is_split
> + /*
> +  * gistkillitems() -- set LP_DEAD state for items an indexscan caller has
> +  * told us were killed.
> +  *
> +  * We match items by heap TID before mark them. If an item has moved off
> +  * the current page due to a split, we'll fail to find it and do nothing
> +  * (this is not an error case --- we assume the item will eventually get
> +  * marked in a future indexscan).
> +  *
> +  * We re-read page here, so it's significant to check page LSN. If page
> +  * has been modified since the last read (as determined by LSN), we dare not
> +  * flag any antries because it is possible that the old entry was vacuumed
> +  * away and the TID was re-used by a completely different heap tuple.

s/significant/important/?.
s/If page/If the page/
s/dare not/cannot/

> +  */
> + static void
> + gistkillitems(IndexScanDesc scan)
> + {
> +     GISTScanOpaque so = (GISTScanOpaque) scan->opaque;
> +     Buffer        buffer;
> +     Page        page;
> +     OffsetNumber minoff;
> +     OffsetNumber maxoff;
> +     int            i;
> +     bool        killedsomething = false;
> + 
> +     Assert(so->curBlkno != InvalidBlockNumber);
> + 
> +     buffer = ReadBuffer(scan->indexRelation, so->curBlkno);
> +     if (!BufferIsValid(buffer))
> +         return;
> + 
> +     LockBuffer(buffer, GIST_SHARE);
> +     gistcheckpage(scan->indexRelation, buffer);
> +     page = BufferGetPage(buffer);
> + 
> +     /*
> +      * If page LSN differs it means that the page was modified since the last read.
> +      * killedItemes could be not valid so LP_DEAD hints applying is not safe.
> +      */
> +     if(PageGetLSN(page) != so->curPageLSN)
> +     {
> +         UnlockReleaseBuffer(buffer);
> +         so->numKilled = 0; /* reset counter */
> +         return;
> +     }
> + 
> +     minoff = FirstOffsetNumber;
> +     maxoff = PageGetMaxOffsetNumber(page);
> + 
> +     maxoff = PageGetMaxOffsetNumber(page);

duplicated line.

> +     for (i = 0; i < so->numKilled; i++)
> +     {
> +         if (so->killedItems != NULL)
> +         {
> +             OffsetNumber offnum = FirstOffsetNumber;
> + 
> +             while (offnum <= maxoff)
> +             {

This nested loop deserves a comment.

> +                 ItemId        iid = PageGetItemId(page, offnum);
> +                 IndexTuple    ituple = (IndexTuple) PageGetItem(page, iid);
> + 
> +                 if (ItemPointerEquals(&ituple->t_tid, &(so->killedItems[i])))
> +                 {
> +                     /* found the item */
> +                     ItemIdMarkDead(iid);
> +                     killedsomething = true;
> +                     break;        /* out of inner search loop */
> +                 }
> +                 offnum = OffsetNumberNext(offnum);
> +             }
> +         }
> +     }

I know it's the same approach nbtree takes, but if there's a significant
number of deleted items this takes me as a rather suboptimal
approach. The constants are small, but this still essentially is O(n^2).
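For illustration, sorting the killed TIDs once and binary-searching them while walking the page would bring the matching down from O(n * k) to O(n log k). This is only a toy sketch with plain ints standing in for ItemPointerData, not a proposed patch:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>

/* Toy TIDs as plain ints; real code would compare ItemPointerData. */
static int
cmp_int(const void *a, const void *b)
{
    int     x = *(const int *) a;
    int     y = *(const int *) b;

    return (x > y) - (x < y);
}

/*
 * Mark dead every page item whose TID appears in killed[]. Sorting
 * killed[] once makes each lookup O(log k) instead of O(k), so the
 * whole pass over the page is O(n log k) rather than O(n * k).
 * Returns the number of items marked.
 */
static int
kill_items(const int *page_tids, bool *dead, int nitems,
           int *killed, int nkilled)
{
    int     nmarked = 0;

    qsort(killed, nkilled, sizeof(int), cmp_int);
    for (int i = 0; i < nitems; i++)
    {
        if (bsearch(&page_tids[i], killed, nkilled, sizeof(int), cmp_int))
        {
            dead[i] = true;
            nmarked++;
        }
    }
    return nmarked;
}
```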

> ***************
> *** 451,456 **** getNextNearest(IndexScanDesc scan)
> --- 553,575 ----
>   
>       if (scan->xs_itup)
>       {
> +         /*
> +          * If previously returned index tuple is not visible save it into
> +          * so->killedItems. And at the end of the page scan mark all saved
> +          * tuples as dead.
> +          */
> +         if (scan->kill_prior_tuple)
> +         {
> +             if (so->killedItems == NULL)
> +             {
> +                 MemoryContext oldCxt2 = MemoryContextSwitchTo(so->giststate->scanCxt);
> + 
> +                 so->killedItems = (ItemPointerData *) palloc(MaxIndexTuplesPerPage * sizeof(ItemPointerData));
> +                 MemoryContextSwitchTo(oldCxt2);
> +             }

oldCxt2?

> +             if ((so->numKilled < MaxIndexTuplesPerPage))
> +                 so->killedItems[so->numKilled++] = scan->xs_ctup.t_self;
> +         }

superfluous parens.

> +             if ((so->curBlkno != InvalidBlockNumber) && (so->numKilled > 0))
> +                 gistkillitems(scan);

superfluous parens.

> +                 if ((scan->kill_prior_tuple) && (so->curPageData > 0))
> +                 {

superfluous parens.

>   
> +                     if (so->killedItems == NULL)
> +                     {
> +                         MemoryContext oldCxt = MemoryContextSwitchTo(so->giststate->scanCxt);
> + 
> +                         so->killedItems = (ItemPointerData *) palloc(MaxIndexTuplesPerPage *
sizeof(ItemPointerData));
> +                         MemoryContextSwitchTo(oldCxt);
> +                     }
> +                     if (so->numKilled < MaxIndexTuplesPerPage)
> +                         so->killedItems[so->numKilled++] = so->pageData[so->curPageData - 1].heapPtr;
> +                 }
>                   /* continuing to return tuples from a leaf page */
>                   scan->xs_ctup.t_self = so->pageData[so->curPageData].heapPtr;
>                   scan->xs_recheck = so->pageData[so->curPageData].recheck;
> ***************

overlong lines.

> *** 586,594 **** gistgettuple(PG_FUNCTION_ARGS)
> --- 723,751 ----
>                   PG_RETURN_BOOL(true);
>               }
>   
> +             /*
> +              * Check the last returned tuple and add it to killitems if
> +              * necessary
> +              */
> +             if ((scan->kill_prior_tuple) && (so->curPageData > 0) && (so->curPageData == so->nPageData))
> +             {

superfluous parens galore.

> +                 if (so->killedItems == NULL)
> +                 {
> +                     MemoryContext oldCxt = MemoryContextSwitchTo(so->giststate->scanCxt);
> + 
> +                     so->killedItems = (ItemPointerData *) palloc(MaxIndexTuplesPerPage * sizeof(ItemPointerData));
> +                     MemoryContextSwitchTo(oldCxt);
> +                 }
> +                 if ((so->numKilled < MaxIndexTuplesPerPage))
> +                     so->killedItems[so->numKilled++] = so->pageData[so->curPageData - 1].heapPtr;
> +             }
>               /* find and process the next index page */
>               do
>               {
> +                 if ((so->curBlkno != InvalidBlockNumber) && (so->numKilled > 0))
> +                     gistkillitems(scan);
> + 
>                   GISTSearchItem *item = getNextGISTSearchItem(so);
>   
>                   if (!item)

Too long lines.


Greetings,

Andres Freund



Re: [PATCH] Microvacuum for gist.

From
Anastasia Lubennikova
Date:
> Hi,
>
> I don't know too much about GiST, but I did a quick read-through, mostly
> spotting some stylistic issues. Please fix those to make it easier for
> the next reviewer.
Thank you for the review! All the mentioned issues are fixed.

--
Anastasia Lubennikova
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company


Attachment

Re: [PATCH] Microvacuum for gist.

From
Teodor Sigaev
Date:
Some notes:

1 gistvacuumpage(): OffsetNumber deletable[MaxOffsetNumber]; It seems MaxOffsetNumber is too much, MaxIndexTuplesPerPage is enough.

2 The loop in gistkillitems() that searches for the heap pointer: it seems to me that it could be a performance problem. To fix it, we need to remember the index tuple's offset number somewhere near GISTScanOpaqueData->killedItems. Then gistkillitems() would loop over the offsets and compare the heap pointer from killedItems against the index tuple; if they don't match, it just skips that index tuple.

3 Connected with the previous point, could you show some performance tests?
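The offset-based scheme from note 2 could look roughly like this toy model (hypothetical names, 0-based array indices standing in for real OffsetNumbers):

```c
#include <assert.h>
#include <stdbool.h>

#define MAX_ITEMS 8

/* Toy page and scan state (hypothetical, loosely modeled on the idea). */
typedef struct
{
    bool        dead[MAX_ITEMS];
    unsigned    lsn;            /* bumped on every page modification */
} ToyPage;

typedef struct
{
    int         killed_offsets[MAX_ITEMS];  /* remembered during the scan */
    int         nkilled;
    unsigned    page_lsn_at_read;           /* LSN seen when items were read */
} ToyScan;

/*
 * Apply LP_DEAD hints by offset. If the page changed since we read it
 * (the LSN differs), the remembered offsets may point at different
 * tuples, so do nothing. With an unchanged LSN this is a single O(k)
 * pass, with no searching at all. Returns the number of items marked.
 */
static int
kill_items_by_offset(ToyPage *page, ToyScan *scan)
{
    int     nmarked = 0;

    if (page->lsn != scan->page_lsn_at_read)
        return 0;               /* page modified: hints no longer safe */
    for (int i = 0; i < scan->nkilled; i++)
    {
        page->dead[scan->killed_offsets[i]] = true;
        nmarked++;
    }
    return nmarked;
}
```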


-- 
Teodor Sigaev                                   E-mail: teodor@sigaev.ru
  WWW: http://www.sigaev.ru/
 



Re: [PATCH] Microvacuum for gist.

From
Anastasia Lubennikova
Date:
04.09.2015 15:11, Teodor Sigaev:
> Some notices
>
> 1 gistvacuumpage():
>     OffsetNumber deletable[MaxOffsetNumber];
>   Seems, MaxOffsetNumber is too much, MaxIndexTuplesPerPage is enough

Fixed.

> 2 The loop in gistkillitems() for searching the heap pointer: it seems to me
> that it could be a performance problem. To fix it, we need to remember the
> index tuple's offset number somewhere near
> GISTScanOpaqueData->killedItems. Then
> gistkillitems() would loop over the offsets and compare the heap pointer from
> killedItems against the index tuple; if they don't match, it just skips that
> index tuple.
Thanks for the suggestion. I've rewritten this function. Now killedItems[]
contains only the OffsetNumbers of the tuples we are going to delete.
The PageLSN check is enough to ensure that nothing has changed on the
page, so rechecking the heap pointer is unnecessary. (It's really
important for btree, where a tuple can be inserted into the middle of a
page, but that situation cannot occur on a GiST index page.)
It works 50% faster than before.

> 3 Connected with previous, could you show some performance tests?

A performance test is attached.
The test is as follows: a portion of the tuples is deleted and after
that selected several times.

Without microvacuum, all 'select' queries take about the same time:
Time: 360,468 ms
Time: 243,197 ms
Time: 238,310 ms
Time: 238,018 ms

With microvacuum, the first 'select' invokes gistkillitems(), so it is
a bit slower than before. But the following queries are significantly
faster than without microvacuum:
Time: 368,780 ms
Time: 69,769 ms
Time: 9,545 ms
Time: 12,427 ms

Please review the patch again; I could have missed something.

P.S. Do I need to write any documentation update?

--
Anastasia Lubennikova
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company


Attachment

Re: [PATCH] Microvacuum for gist.

From
Teodor Sigaev
Date:
Something has gone wrong...

gist.c:1465:5: warning: unused variable 'minoff' [-Wunused-variable]
gistget.c:37:1: warning: unused function 'gistkillitems' [-Wunused-function]
gistkillitems(IndexScanDesc scan)

> Without microvacuum. All 'select' queries are executed at about same time
> Time: 360,468 ms
> Time: 243,197 ms
> Time: 238,310 ms
> Time: 238,018 ms
>
> With microvacuum. First 'select' invokes gistkillitems(). It's executed a bit
> slower than before.
> But following queries are executed significantly faster than without microvacuum.
> Time: 368,780 ms
> Time: 69,769 ms
> Time: 9,545 ms
> Time: 12,427 ms
That's perfect, but I can't reproduce it. I suppose that's because of the
unused function 'gistkillitems'.

-- 
Teodor Sigaev                                   E-mail: teodor@sigaev.ru
  WWW: http://www.sigaev.ru/
 



Re: [PATCH] Microvacuum for gist.

From
Teodor Sigaev
Date:
BTW, I slightly modified your test to provide more stable results.


--
Teodor Sigaev                                   E-mail: teodor@sigaev.ru
                                                    WWW: http://www.sigaev.ru/

Attachment

Re: [PATCH] Microvacuum for gist.

From
Anastasia Lubennikova
Date:
Fixed patch is attached.

08.09.2015 13:47, Teodor Sigaev:
> BTW, I slightly modified your test to provide more stable results.

Thank you, I tried it. The results are nearly the same.

Without microvacuum
Time: 312,955 ms
Time: 264,597 ms
Time: 243,286 ms
Time: 243,679 ms

With microvacuum:
Time: 354,097 ms
Time: 82,206 ms
Time: 11,714 ms
Time: 11,277 ms


--
Anastasia Lubennikova
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company


Attachment

Re: [PATCH] Microvacuum for gist.

From
Thom Brown
Date:
On 8 September 2015 at 22:35, Anastasia Lubennikova <a.lubennikova@postgrespro.ru> wrote:

Fixed patch is attached.

08.09.2015 13:47, Teodor Sigaev:
BTW, I slightly modified your test to provide more stable results.

Thank you, I tried it. The results are nearly the same.

Without microvacuum
Time: 312,955 ms
Time: 264,597 ms
Time: 243,286 ms
Time: 243,679 ms

With microvacuum:
Time: 354,097 ms
Time: 82,206 ms
Time: 11,714 ms
Time: 11,277 ms

Looks good to me (except for the initial hit):

Without microvacuum: 1st run | 2nd run
Time: 259.996 ms | 246.831 ms
Time: 169.820 ms | 169.501 ms
Time: 176.045 ms | 166.845 ms
Time: 169.230 ms | 167.637 ms

With microvacuum: 1st run | 2nd run
Time: 625.883 ms | 425.231 ms
Time: 10.891 ms | 10.603 ms
Time: 10.002 ms | 10.971 ms
Time: 11.613 ms | 11.643 ms

--
Thom

Re: [PATCH] Microvacuum for gist.

From
Jeff Janes
Date:
On Tue, Sep 8, 2015 at 2:35 PM, Anastasia Lubennikova <a.lubennikova@postgrespro.ru> wrote:

Fixed patch is attached.


The commit of this patch seems to have created a bug in which updated tuples can disappear from the index, while remaining in the table.

It looks like the bug depends on going through a crash-recovery cycle, but I am not sure of that yet.

I've looked through the commit diff and don't see anything obviously wrong.  I notice index tuples are marked dead with only a buffer content share lock, and the page is defragmented with only a buffer exclusive lock (as opposed to a super-exclusive buffer clean up lock).  But as far as I can tell, both of those should be safe on an index.  Also, if that was the bug, it should happen without crash-recovery.

The test is pretty simple.  I create a 10,000 row table with a unique-by-construction id column with a btree_gist index on it and a counter column, and fire single-row updates of the counter for random ids in high concurrency (8 processes running flat out).  I force the server to crash frequently with simulated torn-page writes in which md.c writes a partial page and then PANICs.  Eventually (1 to 3 hours) the updates start indicating they updated 0 rows.  At that point, a forced table scan will find the row, but the index doesn't.

Any hints on how to proceed with debugging this?  If I can't get it to reproduce the problem in the absence of crash-recovery cycles with an overnight run, then I think my next step will be to run it over hot-standby and see if WAL replay in the absence of crashes might be broken as well.

Cheers,

Jeff

Re: [PATCH] Microvacuum for gist.

From
Anastasia Lubennikova
Date:


16.09.2015 07:30, Jeff Janes:

The commit of this patch seems to have created a bug in which updated tuples can disappear from the index, while remaining in the table.

It looks like the bug depends on going through a crash-recovery cycle, but I am not sure of that yet.

I've looked through the commit diff and don't see anything obviously wrong.  I notice index tuples are marked dead with only a buffer content share lock, and the page is defragmented with only a buffer exclusive lock (as opposed to a super-exclusive buffer clean up lock).  But as far as I can tell, both of those should be safe on an index.  Also, if that was the bug, it should happen without crash-recovery.

The test is pretty simple.  I create a 10,000 row table with a unique-by-construction id column with a btree_gist index on it and a counter column, and fire single-row updates of the counter for random ids in high concurrency (8 processes running flat out).  I force the server to crash frequently with simulated torn-page writes in which md.c writes a partial page and then PANICs.  Eventually (1 to 3 hours) the updates start indicating they updated 0 rows.  At that point, a forced table scan will find the row, but the index doesn't.

Any hints on how to proceed with debugging this?  If I can't get it to reproduce the problem in the absence of crash-recovery cycles with an overnight run, then I think my next step will be to run it over hot-standby and see if WAL replay in the absence of crashes might be broken as well.


I've found the bug. It's caused by mixing calls to
PageIndexMultiDelete() in gistvacuumpage() and
PageIndexTupleDelete() in gistRedoPageUpdateRecord().
These two functions conflict with each other.

I've fixed my patch by changing the MultiDelete to a TupleDelete in gistvacuumpage(). The patch is attached.
But it seems to me that it would be better to change all uses of TupleDelete to MultiDelete in the GiST code.
I'm working on it.

-- 
Anastasia Lubennikova
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company
Attachment

Re: [PATCH] Microvacuum for gist.

From
Teodor Sigaev
Date:
> But it seems to me that it would be better to change all uses of
> TupleDelete to MultiDelete in the GiST code.

Sure. The patch is attached, and it changes the WAL format, so be careful with testing.
Please have a look.

Also attached are scripts that reproduce the bug from Jeff's report:
g.pl - creates and fills the test table
w.pl - worker; can be run in several sessions

Usage:
perl g.pl | psql contrib_regression
perl w.pl | psql contrib_regression | grep 'UPDATE 0'

Then killall -9 postgres while w.pl is running. Recovery will fail with high
probability.

Thank you, Jeff, for report.
--
Teodor Sigaev                                   E-mail: teodor@sigaev.ru
                                                    WWW: http://www.sigaev.ru/

Attachment

Re: [PATCH] Microvacuum for gist.

From
Jeff Janes
Date:
On Wed, Sep 16, 2015 at 8:36 AM, Teodor Sigaev <teodor@sigaev.ru> wrote:
But it seems to me that it would be better to change all uses of
TupleDelete to MultiDelete in the GiST code.

Sure. The patch is attached, and it changes the WAL format, so be careful with testing.
Please have a look.

Also attached are scripts that reproduce the bug from Jeff's report:
g.pl - creates and fills the test table
w.pl - worker; can be run in several sessions

Usage:
perl g.pl | psql contrib_regression
perl w.pl | psql contrib_regression | grep 'UPDATE 0'

Then killall -9 postgres while w.pl is running. Recovery will fail with high probability.

Thank you, Jeff, for report.

Thanks, that seems to have fixed it.

But I don't understand this comment:

+               /*
+                * While we delete only one tuple at once we could mix calls
+                * PageIndexTupleDelete() here and PageIndexMultiDelete() in
+                * gistRedoPageUpdateRecord()
+                */

Does this mean:

Since we delete only one tuple per WAL record here, we can call PageIndexTupleDelete() here and re-play it with PageIndexMultiDelete() in gistRedoPageUpdateRecord()

Thanks,

Jeff

Re: [PATCH] Microvacuum for gist.

From
Teodor Sigaev
Date:
> But I don't understand this comment:
>
> +               /*
> +                * While we delete only one tuple at once we could mix calls
> +                * PageIndexTupleDelete() here and PageIndexMultiDelete() in
> +                * gistRedoPageUpdateRecord()
> +                */
>
> Does this mean:
>
> Since we delete only one tuple per WAL record here, we can call
> PageIndexTupleDelete() here and re-play it with PageIndexMultiDelete() in
> gistRedoPageUpdateRecord()

Yes. The problem was this: when we delete tuples at offsets (2,4,6) with
PageIndexMultiDelete(), exactly the pointed-to tuples are deleted. But if we
delete the tuple at offset 2 with PageIndexTupleDelete(), then the 4th tuple
moves to offset 3 and the 6th becomes the 5th. So the next tuple to delete is
now at offset 3, and we should call PageIndexTupleDelete(3), and so on. And
the bug was: we deleted tuples in gistvacuumpage() with the help of
PageIndexMultiDelete() and wrote its argument to WAL, but the recovery
process used PageIndexTupleDelete() without correcting the offsets.
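The offset correction can be demonstrated with a small toy model (plain ints standing in for index tuples, 1-based offsets as on a page; not the actual server code):

```c
#include <assert.h>
#include <string.h>

/* Delete one item at 1-based offset off, shifting later items down. */
static void
tuple_delete(int *items, int *n, int off)
{
    memmove(&items[off - 1], &items[off], (*n - off) * sizeof(int));
    (*n)--;
}

/*
 * Replay a multi-delete of ascending 1-based offsets using single
 * deletes: each earlier deletion shifts every later offset down by one,
 * so the i-th offset must be corrected by i before deleting.
 */
static void
replay_multidelete(int *items, int *n, const int *offs, int noffs)
{
    for (int i = 0; i < noffs; i++)
        tuple_delete(items, n, offs[i] - i);    /* correct for prior shifts */
}
```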

-- 
Teodor Sigaev                                   E-mail: teodor@sigaev.ru
  WWW: http://www.sigaev.ru/