On Mon, 21 Nov 2022 at 22:15, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>
> Andres Freund <andres@anarazel.de> writes:
> > On 2022-11-21 16:17:56 -0500, Robert Haas wrote:
> >> But ... what if they're not? Could the index contain a large number of
> >> pages containing just 1 tuple each, or no tuples at all? If so, maybe
> >> we can read ten bazillion index pages trying to find each heap tuple
> >> and still end up in trouble.
>
> > ISTM that if you have an index in such a poor condition that a single
> > value lookup reads thousands of pages inside the index, planner
> > estimates taking long is going to be the smallest of your worries...
>
> Yeah, that sort of situation is going to make any operation on the
> index slow, not only get_actual_variable_endpoint().

That was also my conclusion: this is a common antipattern with our
indexes, not anything specific to the planner.

In another recent situation, I saw very bad performance with a "queue
table". In that use case, rows are inserted at the head and removed
from the tail. Searching for the next item to remove from the queue
involves an increasingly long tail search when a long-running
transaction prevents us from marking the index entries killed. Many
tables exhibit this usage pattern, for example the new_order table in
TPC-C.
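
Schematically (names and numbers invented for illustration, not taken
from the actual case), the pattern looks like this:

-- A queue table: rows arrive at the head of the index and are
-- consumed from the tail.
CREATE TABLE job_queue (id bigserial PRIMARY KEY, payload text);

-- Session 1: a long-running transaction holds back the horizon, so
-- index entries for rows deleted later cannot be marked killed.
BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
SELECT count(*) FROM job_queue;        -- take a snapshot, keep it open

-- Session 2: normal queue traffic.
INSERT INTO job_queue (payload)
  SELECT 'item' FROM generate_series(1, 1000000);
DELETE FROM job_queue WHERE id <= 999000;    -- dequeue from the tail

-- The next dequeue has to step over a long run of leftmost leaf
-- entries that are dead but cannot be cleaned up while the snapshot
-- in session 1 stays open. The planner's endpoint probe for range
-- predicates on id walks the same run of pages.
EXPLAIN (ANALYZE, BUFFERS)
  SELECT id FROM job_queue ORDER BY id LIMIT 1;
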
We optimized the case of frequent insertions into the rightmost index
page; now we also need to optimize the case of a long series of
deletions from the leftmost index pages. Not sure how, just framing
the problem.
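
If it helps to quantify how bad the leading edge of such an index
gets, contrib/pgstattuple can show it (index name taken from the
sketch above, so also illustrative):

CREATE EXTENSION IF NOT EXISTS pgstattuple;
SELECT leaf_pages, empty_pages, deleted_pages, avg_leaf_density
  FROM pgstatindex('job_queue_pkey');
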
--
Simon Riggs http://www.EnterpriseDB.com/