Re: Poor performance with row wise comparisons - Mailing list pgsql-performance

From: Tom Lane
Subject: Re: Poor performance with row wise comparisons
Date:
Msg-id: 2065760.1739140833@sss.pgh.pa.us
In response to: Poor performance with row wise comparisons (Jon Emord <jon@peregrine.io>)
List: pgsql-performance
Jon Emord <jon@peregrine.io> writes:
>    ->  Index Only Scan using entity_data_model_id_primary_key_uniq on entity  (cost=0.70..873753.60 rows=15581254 width=31) (actual time=0.093..2712.836 rows=100 loops=1)
>          Index Cond: ((ROW(data_model_id, primary_key) >= ROW(123, 'ABC'::text)) AND (ROW(data_model_id, primary_key) <= ROW(123, 'DEF'::text)))
>          Heap Fetches: 4
>          Buffers: shared hit=97259

>   2. data_model_id = 123 is the 15th most common value of data_model_id, with 10.8 million records

Hm.  I think your answer is in this comment in nbtree's
key-preprocessing logic:

 * Row comparison keys are currently also treated without any smarts:
 * we just transfer them into the preprocessed array without any
 * editorialization.  We can treat them the same as an ordinary inequality
 * comparison on the row's first index column, for the purposes of the logic
 * about required keys.

That is, for the purposes of deciding when the index scan can stop,
the "<= ROW" condition acts like "data_model_id <= 123".  So it will
run through all of the data_model_id = 123 entries before stopping.
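
Since both of your row bounds pin data_model_id to the same value, one
possible workaround is to spell out that equality and compare only
primary_key with ordinary inequalities.  A sketch, guessing at the
select list from the quoted plan (your real query and any LIMIT aren't
shown):

    -- data_model_id is fixed to 123 by both row bounds, so an
    -- equivalent filter compares only primary_key:
    SELECT primary_key          -- actual select list is a guess
    FROM entity
    WHERE data_model_id = 123
      AND primary_key >= 'ABC'
      AND primary_key <= 'DEF';

Written that way, the scan can stop as soon as primary_key passes 'DEF'
within data_model_id = 123, rather than reading all of the
data_model_id = 123 index entries.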

            regards, tom lane


