Re: Optimize single tuple fetch from nbtree index - Mailing list pgsql-hackers

From Tom Lane
Subject Re: Optimize single tuple fetch from nbtree index
Date
Msg-id 26641.1564778586@sss.pgh.pa.us
Whole thread Raw
In response to Optimize single tuple fetch from nbtree index  (Floris Van Nee <florisvannee@Optiver.com>)
Responses Re: Optimize single tuple fetch from nbtree index  (Floris Van Nee <florisvannee@Optiver.com>)
Re: Optimize single tuple fetch from nbtree index  (Peter Geoghegan <pg@bowt.ie>)
List pgsql-hackers
Floris Van Nee <florisvannee@Optiver.com> writes:
> Every time the index scan is done, all tuples from the leaf page are
> read in nbtsearch.c:_bt_readpage. The idea of this patch is to make an
> exception for this *only* the first time amgettuple gets called.

Regardless of whether there's actually a LIMIT 1?  That seems disastrous
for every other case than the narrow one where the optimization wins.
Because every other case is now going to have to touch the index page
twice.  That's more CPU and about double the contention --- if you could
not measure any degradation from that, you're not measuring the right
thing.

In principle, you could pass down knowledge of whether there's a LIMIT,
using the same mechanism used to enable top-N sorting.  But it'd have to
also go through the AM interface layer, so I'm not sure how messy this
would be.

> This calls _bt_first in nbtsearch.c, which will, if there are scankeys, descend the tree to a leaf page and read just
thefirst (or possibly two) tuples. It won't touch the rest of the page yet. If indeed just one tuple was required,
therewon't be a call to _bt_next and we're done. If we do need more than one tuple, _bt_next will resume reading tuples
fromthe index page at the point where we left off. 

How do you know how many index entries you have to fetch to get a tuple
that's live/visible to the query?

> - We need to take into account page splits, insertions and vacuums while we do not have the read-lock in between
_bt_firstand the first call to _bt_next. This made my patch quite a bit more complicated than my initial
implementation.

Meh.  I think the odds that you got this 100% right are small, and the
odds that it would be maintainable are smaller.  There's too much that
can happen if you're not holding any lock --- and there's a lot of active
work on btree indexes, which could break whatever assumptions you might
make today.

I'm not unalterably opposed to doing something like this, but my sense
is that the complexity and probable negative performance impact on other
cases are not going to look like a good trade-off for optimizing the
case at hand.

BTW, you haven't even really made the case that optimizing a query that
behaves this way is the right thing to be doing ... maybe some other
plan shape that isn't a nestloop around a LIMIT query would be a better
solution.

            regards, tom lane



pgsql-hackers by date:

Previous
From: Andres Freund
Date:
Subject: Re: partition routing layering in nodeModifyTable.c
Next
From: Ibrar Ahmed
Date:
Subject: Re: SQL:2011 PERIODS vs Postgres Ranges?