Thread: Odd (slow) plan choice with min/max

Odd (slow) plan choice with min/max

From
Paul McGarry
Date:
Hi all,

I have a query where PostgreSQL (11.9 at the moment) is making an odd plan choice, choosing to use index scans which require filtering out millions of rows, rather than "just" doing an aggregate over the rows the where clause targets, which is much faster.
AFAICT it isn't a statistics problem, at least increasing the stats target and analyzing the table doesn't seem to fix the problem.

The query looks like:

======
 explain analyze select min(risk_id),max(risk_id) from risk where time>='2020-01-20 15:00:07+00' and time < '2020-01-21 15:00:08+00';
                                                                              QUERY PLAN                                                                              
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Result  (cost=217.80..217.81 rows=1 width=16) (actual time=99722.685..99722.687 rows=1 loops=1)
   InitPlan 1 (returns $0)
     ->  Limit  (cost=0.57..108.90 rows=1 width=8) (actual time=38454.537..38454.538 rows=1 loops=1)
           ->  Index Scan using risk_risk_id_key on risk  (cost=0.57..9280362.29 rows=85668 width=8) (actual time=38454.535..38454.536 rows=1 loops=1)
                 Index Cond: (risk_id IS NOT NULL)
                 Filter: (("time" >= '2020-01-20 15:00:07+00'::timestamp with time zone) AND ("time" < '2020-01-21 15:00:08+00'::timestamp with time zone))
                 Rows Removed by Filter: 161048697
   InitPlan 2 (returns $1)
     ->  Limit  (cost=0.57..108.90 rows=1 width=8) (actual time=61268.140..61268.140 rows=1 loops=1)
           ->  Index Scan Backward using risk_risk_id_key on risk risk_1  (cost=0.57..9280362.29 rows=85668 width=8) (actual time=61268.138..61268.139 rows=1 loops=1)
                 Index Cond: (risk_id IS NOT NULL)
                 Filter: (("time" >= '2020-01-20 15:00:07+00'::timestamp with time zone) AND ("time" < '2020-01-21 15:00:08+00'::timestamp with time zone))
                 Rows Removed by Filter: 41746396
 Planning Time: 0.173 ms
 Execution Time: 99722.716 ms
(15 rows)
======

If I add a count(*), so the planner has to consider all rows in the range for that part of the query and doesn't consider using the other index for a min/max "shortcut", then the query is fast.
======
explain analyze select min(risk_id),max(risk_id), count(*) from risk where time>='2020-01-20 15:00:07+00' and time < '2020-01-21 15:00:08+00';
                                                                       QUERY PLAN                                                                      
--------------------------------------------------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=4376.67..4376.68 rows=1 width=24) (actual time=30.011..30.012 rows=1 loops=1)
   ->  Index Scan using risk_time_idx on risk  (cost=0.57..3734.17 rows=85667 width=8) (actual time=0.018..22.441 rows=90973 loops=1)
         Index Cond: (("time" >= '2020-01-20 15:00:07+00'::timestamp with time zone) AND ("time" < '2020-01-21 15:00:08+00'::timestamp with time zone))
 Planning Time: 0.091 ms
 Execution Time: 30.045 ms
(5 rows)
======

My count() hack works around my immediate problem, but I'm trying to get my head round why Postgres chooses the plan it does without it, in case there is some general problem with my configuration that may negatively affect other areas, or there's something else I am missing.

Any ideas?

Paul McGarry

Re: Odd (slow) plan choice with min/max

From
Justin Pryzby
Date:
On Tue, Mar 23, 2021 at 03:00:38PM +1100, Paul McGarry wrote:
> I have a query where Postgresql (11.9 at the moment) is making an odd plan
> choice, choosing to use index scans which require filtering out millions of
> rows, rather than "just" doing an aggregate over the rows the where clause
> targets which is much faster.
> AFAICT it isn't a statistics problem, at least increasing the stats target
> and analyzing the table doesn't seem to fix the problem.

>  explain analyze select min(risk_id),max(risk_id) from risk where
> time>='2020-01-20 15:00:07+00' and time < '2020-01-21 15:00:08+00';

I'm guessing the time and ID columns are highly correlated...

So the planner thinks it can get the smallest ID by scanning the ID index, but
then ends up rejecting the first 161e6 rows for which the time is too low, and
fails the >= condition.

And thinks it can get the greatest ID by backward scanning the ID idx, but ends
up rejecting/filtering the first 41e6 rows, for which the time is too high,
failing the < condition.

This is easy to reproduce:
postgres=# DROP TABLE t;
postgres=# CREATE TABLE t AS SELECT a i, a j FROM generate_series(1,999999) a;
postgres=# CREATE INDEX ON t(j);
postgres=# ANALYZE t;
postgres=# explain analyze SELECT min(j), max(j) FROM t WHERE i BETWEEN 9999 AND 99999;

One solution seems to be to create an index on (i,j), but I don't know if
there's a better way.
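
For example (untested sketch; the index on the real table and its name are my guesses, with the filter column first and the aggregated column second):

======
-- on the repro table above
CREATE INDEX ON t (i, j);

-- the equivalent on the real table (hypothetical index name)
CREATE INDEX risk_time_risk_id_idx ON risk ("time", risk_id);
======

With a composite index like that the planner can get the min/max by scanning only the index entries inside the time range, instead of walking the risk_id index from either end.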

-- 
Justin



Re: Odd (slow) plan choice with min/max

From
Paul McGarry
Date:


On Tue, 23 Mar 2021 at 16:13, Justin Pryzby <pryzby@telsasoft.com> wrote:
On Tue, Mar 23, 2021 at 03:00:38PM +1100, Paul McGarry wrote:
> I have a query where Postgresql (11.9 at the moment) is making an odd plan
> choice, choosing to use index scans which require filtering out millions of
> rows, rather than "just" doing an aggregate over the rows the where clause
> targets which is much faster.
> AFAICT it isn't a statistics problem, at least increasing the stats target
> and analyzing the table doesn't seem to fix the problem.

>  explain analyze select min(risk_id),max(risk_id) from risk where
> time>='2020-01-20 15:00:07+00' and time < '2020-01-21 15:00:08+00';

I'm guessing the time and ID columns are highly correlated...

So the planner thinks it can get the smallest ID by scanning the ID index, but
then ends up rejecting the first 161e6 rows for which the time is too low, and
fails the >= condition.

And thinks it can get the greatest ID by backward scanning the ID idx, but ends
up rejecting/filtering the first 41e6 rows, for which the time is too high,
failing the < condition.

Yes, the columns are highly correlated, but that alone doesn't seem like it should be sufficient criteria to choose this plan.
Ie the selection criteria (1 day of data about a year ago) has a year+ worth of data after it and probably a decade of data before it, so anything walking a correlated index from top or bottom is going to have to walk past a lot of data before it gets to data that fits the criteria.
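
(For what it's worth, the per-column ordering correlation the planner has collected is visible in pg_stats, e.g.:

======
-- each column's correlation with physical row order, as of the last ANALYZE
SELECT tablename, attname, correlation
FROM pg_stats
WHERE tablename = 'risk'
  AND attname IN ('risk_id', 'time');
======

but as far as I know that only describes each column's correlation with the table's physical order. The planner doesn't know that risk_id and time rise together, so it assumes the matching day's rows are spread evenly through the risk_id index and estimates it will find one after only a handful of entries, hence the cheap-looking Limit nodes.)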
 
One solution seems to be to create an index on (i,j), but I don't know if
there's a better way.


Adding the count() stops the planner considering the option so that will work for now.
My colleague has pointed out that we had the same issue in November and I came up with the count() workaround then too, but somehow seem to have forgotten it in the meantime and reinvented it today. I wonder if I posted to pgsql-performance then too.....
 
Maybe time for me to read the PG12 release notes.... 

Paul

Re: Odd (slow) plan choice with min/max

From
Rick Otten
Date:


On Tue, Mar 23, 2021 at 2:52 AM Paul McGarry <paul@paulmcgarry.com> wrote:


On Tue, 23 Mar 2021 at 16:13, Justin Pryzby <pryzby@telsasoft.com> wrote:
On Tue, Mar 23, 2021 at 03:00:38PM +1100, Paul McGarry wrote:
> I have a query where Postgresql (11.9 at the moment) is making an odd plan
> choice, choosing to use index scans which require filtering out millions of
> rows, rather than "just" doing an aggregate over the rows the where clause
> targets which is much faster.
> AFAICT it isn't a statistics problem, at least increasing the stats target
> and analyzing the table doesn't seem to fix the problem.

>  explain analyze select min(risk_id),max(risk_id) from risk where
> time>='2020-01-20 15:00:07+00' and time < '2020-01-21 15:00:08+00';

I'm guessing the time and ID columns are highly correlated...

So the planner thinks it can get the smallest ID by scanning the ID index, but
then ends up rejecting the first 161e6 rows for which the time is too low, and
fails the >= condition.

And thinks it can get the greatest ID by backward scanning the ID idx, but ends
up rejecting/filtering the first 41e6 rows, for which the time is too high,
failing the < condition.

Yes, the columns are highly correlated, but that alone doesn't seem like it should be sufficient criteria to choose this plan.
Ie the selection criteria (1 day of data about a year ago) has a year+ worth of data after it and probably a decade of data before it, so anything walking a correlated index from top or bottom is going to have to walk past a lot of data before it gets to data that fits the criteria.


I assume you have a statistic on the correlated columns, ie `create statistic` ?

If you can't use partitions on your date column, can you use partial indexes instead?   Or a functional index with min() over day and max() over day?
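
Something along these lines (a hypothetical per-day partial index; the query's WHERE clause has to imply the index predicate for the planner to consider it):

======
-- one narrow partial index per day being archived (example day only)
CREATE INDEX risk_20200120_risk_id_idx ON risk (risk_id)
WHERE "time" >= '2020-01-20 15:00:07+00'
  AND "time" <  '2020-01-21 15:00:08+00';
======

Then the min/max shortcut only ever walks entries that already satisfy the time filter.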

 

Re: Odd (slow) plan choice with min/max

From
Paul McGarry
Date:


On Wed, 24 Mar 2021 at 00:07, Rick Otten <rottenwindfish@gmail.com> wrote:

Yes, the columns are highly correlated, but that alone doesn't seem like it should be sufficient criteria to choose this plan.
Ie the selection criteria (1 day of data about a year ago) has a year+ worth of data after it and probably a decade of data before it, so anything walking a correlated index from top or bottom is going to have to walk past a lot of data before it gets to data that fits the criteria.


I assume you have a statistic on the correlated columns, ie `create statistic` ?

I didn't, but adding
======
CREATE STATISTICS risk_risk_id_time_correlation_stats ON risk_id,time FROM risk;
analyze risk;
======
doesn't seem to help.
I get the same plan before/after. Second run was faster, but just because data was hot.

 
If you can't use partitions on your date column, can you use partial indexes instead?   Or a functional index with min() over day and max() over day?

I don't particularly want to add more weird indexes to solve this one particular query, as the existing time index should make it efficient enough, if only the planner chose to use it. This is part of an archiving job identifying sections of historical data, so it's not a query that needs to be super optimised, but essentially doing a full table scan backwards/forwards, as it does now, is a lot of unnecessary IO that would be best left free for more time-sensitive queries. My count() workaround works, so we can use that.
I'm more interested in understanding why the planner makes what seems to be an obviously bad choice.

Paul 

Re: Odd (slow) plan choice with min/max

From
Yoan SULTAN
Date:
Another workaround could be :

explain analyze select min(risk_id),max(risk_id) from (select * from risk where time>='2020-01-20 15:00:07+00' and time < '2020-01-21 15:00:08+00') t;  

in order to force the planner to use first the timestamp index.
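
(One caveat, from memory: the planner may pull a plain sub-select like this up into the outer query, in which case the plan doesn't change. Adding OFFSET 0 is a commonly used fence to prevent that, e.g.:

======
explain analyze
select min(risk_id), max(risk_id)
from (select risk_id
      from risk
      where time >= '2020-01-20 15:00:07+00'
        and time <  '2020-01-21 15:00:08+00'
      offset 0  -- optimization fence: keeps the subquery from being flattened
     ) t;
======
)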

However, I agree with you; we are seeing bad planner behavior here.

Regards,
Yoan SULTAN

Le mar. 23 mars 2021 à 22:38, Paul McGarry <paul@paulmcgarry.com> a écrit :


On Wed, 24 Mar 2021 at 00:07, Rick Otten <rottenwindfish@gmail.com> wrote:

Yes, the columns are highly correlated, but that alone doesn't seem like it should be sufficient criteria to choose this plan.
Ie the selection criteria (1 day of data about a year ago) has a year+ worth of data after it and probably a decade of data before it, so anything walking a correlated index from top or bottom is going to have to walk past a lot of data before it gets to data that fits the criteria.


I assume you have a statistic on the correlated columns, ie `create statistic` ?

I didn't, but adding
======
CREATE STATISTICS risk_risk_id_time_correlation_stats ON risk_id,time FROM risk;
analyze risk;
======
doesn't seem to help.
I get the same plan before/after. Second run was faster, but just because data was hot.

 
If you can't use partitions on your date column, can you use partial indexes instead?   Or a functional index with min() over day and max() over day?

I don't particularly want to add more weird indexes to solve this one particular query, as the existing time index should make it efficient enough, if only the planner chose to use it. This is part of an archiving job identifying sections of historical data, so it's not a query that needs to be super optimised, but essentially doing a full table scan backwards/forwards, as it does now, is a lot of unnecessary IO that would be best left free for more time-sensitive queries. My count() workaround works, so we can use that.
I'm more interested in understanding why the planner makes what seems to be an obviously bad choice.

Paul 


--
Regards,
Yo.