
From: Alexander Korotkov
Subject: Re: WIP: Covering + unique indexes.
Msg-id: CAPpHfdupuDaJFdhUMAQKhgLYqAzQ4OSc19GP+wsTcghFezo1YA@mail.gmail.com
In response to: Re: WIP: Covering + unique indexes. (Peter Geoghegan <pg@bowt.ie>)
List: pgsql-hackers
On Mon, Apr 16, 2018 at 1:05 AM, Peter Geoghegan <pg@bowt.ie> wrote:
> Attached patch makes the changes that I talked about, and a few
> others. The commit message has full details. The general direction of
> the patch is that it documents our assumptions, and verifies them in
> more cases. Most of the changes I've made are clear improvements,
> though in a few cases I've made changes that are perhaps more
> debatable.

Great, thank you very much!
 
> These other, more debatable cases are:
>
> * The comments added to _bt_isequal() about suffix truncation may not
> be to your taste. The same is true of the way that I restored the
> previous _bt_isequal() function signature. (Yes, I want to change it
> back despite the fact that I was the person that originally suggested
> we change _bt_isequal().)
 
Hmm, what do you think about making BTreeTupGetNAtts() take a tupledesc
argument rather than a relation?  It doesn't need the number of key
attributes anyway, only the total number of attributes.  Then _bt_isequal()
would be able to use BTreeTupGetNAtts().
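
For illustration, a rough sketch of what that could look like (the mask
names and the TupleDesc field below follow my reading of the WIP patch and
of itup.h/nbtree.h, so treat this as a sketch rather than the final macro):

    /*
     * Sketch only: BTreeTupGetNAtts() taking a TupleDesc rather than a
     * Relation.  A tuple with the alternative TID bit set stores its
     * attribute count in the offset field of t_tid; otherwise fall back
     * to the descriptor's total attribute count.
     */
    #define BTreeTupGetNAtts(itup, tupdesc) \
        ( \
            ((itup)->t_info & INDEX_ALT_TID_MASK) ? \
            (ItemPointerGetOffsetNumber(&(itup)->t_tid) & BT_N_KEYS_OFFSET_MASK) : \
            (tupdesc)->natts \
        )

    /* ... which would let _bt_isequal() use its existing itupdesc: */
    /* natts = BTreeTupGetNAtts(itup, itupdesc); */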

> * I added BTreeTupSetNAtts() calls to a few places that don't truly
> need them, such as the point where we generate a dummy 0-attribute
> high key within _bt_mark_page_halfdead(). I think that we should try
> to be as consistent as possible about using BTreeTupSetNAtts(), to set
> a good example. I don't think it's necessary to use BTreeTupSetNAtts()
> for pivot tuples when the number of key attributes matches indnatts
> (it seems inconvenient to have to palloc() our own scratch buffer to
> do this when we don't have to), but that doesn't apply to these
> now-covered cases.

+1 
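
To make the consistency point concrete, the dummy high key case in
_bt_mark_page_halfdead() would end up looking roughly like this (a sketch
based on the discussion; the variable names and the error message wording
are illustrative, not quoted from the patch):

    IndexTupleData trunctuple;

    /*
     * Sketch: build a dummy high key with zero attributes and mark that
     * explicitly with BTreeTupSetNAtts(), even though nothing strictly
     * needs to read the count here, so that every pivot tuple is set up
     * the same way.
     */
    MemSet(&trunctuple, 0, sizeof(IndexTupleData));
    trunctuple.t_info = sizeof(IndexTupleData);
    BTreeTupSetNAtts(&trunctuple, 0);
    if (PageAddItem(page, (Item) &trunctuple, sizeof(IndexTupleData),
                    P_HIKEY, false, false) == InvalidOffsetNumber)
        elog(ERROR, "could not add dummy high key to half-dead page");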

> > I think, we need move _bt_check_natts() and its call under
> > USE_ASSERT_CHECKING to prevent performance degradation. Users shouldn't pay
> > for unused feature.

> I eventually decided that you were right about this, and made the
> _bt_compare() call to _bt_check_natts() a simple assertion without
> waiting to hear more opinions on the matter. Concurrency isn't a
> factor here, so adding a check to standard release builds isn't
> particularly likely to detect bugs. Besides, there is really only a
> small number of places that need to do truncation for themselves. And,
> if you want to be sure that the structure is consistent in the field,
> there is always amcheck, which can check _bt_check_natts() (while also
> checking other things that we care about just as much).

Good point, the risk of performance degradation caused by _bt_check_natts()
in _bt_compare() is high.  So, let's move it into an assertion.
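
Concretely, assuming _bt_check_natts() keeps a (rel, page, offnum)
signature, the verification in _bt_compare() then boils down to a single
line that compiles away in non-assert builds:

    /* Sketch: free in release builds, catches bad natts in assert builds */
    Assert(_bt_check_natts(rel, page, offnum));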

> Note that I removed some dead code from _bt_insertonpg() that wasn't
> added by the INCLUDE patch. It confused matters for this patch, since
> we don't want to consider what's supposed to happen when there is a
> retail insertion of a new, second negative infinity item -- clearly,
> that should simply never happen (I thought about adding a
> BTreeTupSetNAtts() call, but then decided to just remove the dead code
> and add a new "can't happen" elog error).

I think it's completely OK to fix broken things when you have to touch
them.  Perhaps Teodor will decide to do that as a separate commit, so it's
up to him.
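
For readers without the patch in front of them, the "can't happen" case is
a retail insertion landing at the negative-infinity position of an internal
page.  The new guard in _bt_insertonpg() is presumably something along
these lines (a sketch; the local variable names and message wording are my
guesses, not quoted from the patch):

    /*
     * Sketch: only page splits and new root creation may add an item at
     * the first data offset of an internal page (the "negative infinity"
     * item), so a retail insert there indicates corruption or a bug.
     */
    if (!P_ISLEAF(lpageop) && newitemoff == P_FIRSTDATAKEY(lpageop))
        elog(ERROR, "cannot insert second negative infinity item in index \"%s\"",
             RelationGetRelationName(rel));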
 
> Finally, I made sure that we
> don't drop all tables in the regression tests, so that we have some
> pg_dump coverage for INCLUDE indexes, per a request from Tom.

Makes sense, given that this has already turned out to be broken once.

------
Alexander Korotkov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company 
