Thread: simple query with radically different plan after 9.0 -> 9.2 upgrade

simple query with radically different plan after 9.0 -> 9.2 upgrade

From: Kevin Goess

Given this table

articles_1=> \d hits_user_daily_count
            Table "public.hits_user_daily_count"
          Column           |  Type   | Modifiers
---------------------------+---------+-----------
 userid                    | integer | not null
 date                      | date    | not null
 num_hits                  | integer | default 0
 num_cover_page_hits       | integer | default 0
 num_additional_files_hits | integer | default 0
Indexes:
    "hits_user_daily_count_pkey" PRIMARY KEY, btree (userid, date)
    "hits_user_daily_count_date" btree (date)

whose data looks like this:

articles_1=> select * from hits_user_daily_count limit 5;
 userid  |    date    | num_hits | num_cover_page_hits | num_additional_files_hits
---------+------------+----------+---------------------+---------------------------
 1590185 | 2013-07-10 |        3 |                   4 |                         0
  391907 | 2013-07-10 |       16 |                  12 |                         0
 1739541 | 2013-08-03 |        1 |                   0 |                         0
 1798435 | 2013-07-10 |        0 |                   1 |                         0
 1521468 | 2013-07-10 |        2 |                   0 |                         0

We noticed a big change after upgrading from 9.0 to 9.2. For *yesterday's* date, the query plan is fine, like you'd expect:

articles_1=> explain (analyze, buffers) select 1
        from hits_user_daily_count
        where userid = 1590185 and date = '2013-07-30';
                                                           QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------
 Index Only Scan using hits_user_daily_count_pkey on hits_user_daily_count  (cost=0.00..8.02 rows=1 width=0) (actual time=0.02
   Index Cond: ((userid = 1590185) AND (date = '2013-07-30'::date))
   Heap Fetches: 1
   Buffers: shared hit=5
 Total runtime: 0.044 ms

but for *today's* date it looks like it's reading all the rows for that date, 15,277 buffers:

articles_1=> explain (analyze, buffers) select 1
        from hits_user_daily_count
        where userid = 1590185 and date = '2013-08-01';
                                                           QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------
 Index Scan using hits_user_daily_count_date on hits_user_daily_count  (cost=0.00..7.92 rows=1 width=0) (actual time=11.957..1
   Index Cond: (date = '2013-08-01'::date)
   Filter: (userid = 1590185)
   Rows Removed by Filter: 20149
   Buffers: shared hit=15277
 Total runtime: 17.924 ms

(The dates in the queries are old because I've had this email in draft for a while, but the behavior is still the same.)

We've addressed the problem by running ANALYZE on the table every day at about 1:30am. Buffer hits on that table go from about 1,000/sec to 70,000/sec between midnight and that analyze job, and then go back down to 1,000/sec and stay flat until midnight rolls around again.
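
The nightly job itself is nothing fancy; it boils down to a crontab entry along these lines (the exact invocation shown here is approximate, not copied from our system):

30 1 * * * psql -d articles_1 -c 'ANALYZE hits_user_daily_count;'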

Before the 9.0 -> 9.2 upgrade, the behavior would be flat all day.

Any ideas what would be causing that problem?

Re: simple query with radically different plan after 9.0 -> 9.2 upgrade

From: Tom Lane

Kevin Goess <kgoess@bepress.com> writes:
> We noticed a big change after upgrading from 9.0 to 9.2. For
> *yesterday's* date, the query plan is fine, like you'd expect

> articles_1=> explain (analyze, buffers) select 1
>         from hits_user_daily_count
>         where userid = 1590185 and date = '2013-07-30';

> QUERY PLAN
> ------------------------------------------------------------------------------------------------------------------------------
>  Index Only Scan using hits_user_daily_count_pkey on hits_user_daily_count  (cost=0.00..8.02 rows=1 width=0) (actual time=0.02
>    Index Cond: ((userid = 1590185) AND (date = '2013-07-30'::date))
>    Heap Fetches: 1
>    Buffers: shared hit=5
>  Total runtime: 0.044 ms

> but for *today's* date it looks like it's reading all the rows for that
> date, 15,277 buffers:

> articles_1=> explain (analyze, buffers) select 1
>         from hits_user_daily_count
>         where userid = 1590185 and date = '2013-08-01';

> QUERY PLAN
> ------------------------------------------------------------------------------------------------------------------------------
>  Index Scan using hits_user_daily_count_date on hits_user_daily_count  (cost=0.00..7.92 rows=1 width=0) (actual time=11.957..1
>    Index Cond: (date = '2013-08-01'::date)
>    Filter: (userid = 1590185)
>    Rows Removed by Filter: 20149
>    Buffers: shared hit=15277
>  Total runtime: 17.924 ms

Hm.  I can reproduce this fairly easily, per attached script --- but for
me, every PG release back to 8.4 does the same thing, so I'm a bit
mystified as to why it changed for you between 9.0 and 9.2.

The issue as I'm seeing it is that if ANALYZE didn't find any rows with
today's date, the planner will estimate the condition date = 'today'::date
as having zero selectivity, which makes an indexscan using just that
condition look as cheap as an indexscan using both columns.  In fact,
cheaper, because the index on just date is smaller than the pkey index.
So it goes for what looks like the cheaper plan (notice the cost estimates
in your examples above).
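
To see that size difference concretely, one can compare the two indexes; a quick sketch against pg_class, using the index names from the original \d output:

select relname, pg_size_pretty(pg_relation_size(oid)) as size
  from pg_class
 where relname in ('hits_user_daily_count_pkey',
                   'hits_user_daily_count_date');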

Now, the only way to get to a zero selectivity estimate for var = const
is if the planner believes that the pg_stats most-common-values list
for the column is complete, and the constant is nowhere in the list.
So one plausible explanation for the change in behavior is that you
jacked up the statistics target for the date column enough so that
it includes all of the date values you keep in that column.  Am I right
in guessing that you drop old data from this table?  How far back?
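
You can inspect what ANALYZE captured via the standard pg_stats view; a quick sketch:

select n_distinct, most_common_vals, most_common_freqs
  from pg_stats
 where tablename = 'hits_user_daily_count'
   and attname = 'date';

If every date still present in the table appears in most_common_vals, the planner treats the list as complete, and any date not in it gets a zero-row estimate.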

> We've addressed the problem by running 'analyze' on the table every day ate
> about 1:30am. Buffer hits on that table go from about 1,000/sec to
> 70,000/sec between midnight and that analyze job, and then go back down to
> 1,000/sec and stay flat until midnight rolls around again.

Yeah, as soon as ANALYZE sees a few rows with the newer date, the
selectivity estimate will move up enough to discourage use of the
single-column index.
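
So once that first post-midnight ANALYZE has run, the plan should flip back to the two-column index; a quick way to confirm (sketch):

analyze hits_user_daily_count;
explain select 1
        from hits_user_daily_count
        where userid = 1590185 and date = current_date;
-- expect: Index Only Scan using hits_user_daily_count_pkey ...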

            regards, tom lane

drop table hits_user_daily_count;

create table hits_user_daily_count (
 userid                     integer  not null,
 date                       date     not null,
 num_hits                   integer  default 0,
 num_cover_page_hits        integer  default 0,
 num_additional_files_hits  integer  default 0,
 primary key (userid, date));
create index hits_user_daily_count_date on hits_user_daily_count(date);

create or replace function fill_for_date(d date, n int) returns void
language plpgsql as $$
begin
  -- insert n rows for date d, scattered in random userid order;
  -- num_additional_files_hits is left to its default of 0
  insert into hits_user_daily_count
    select uid, d, random()*10, random()*10 from
      generate_series(1,n) uid order by random();
end
$$;

select fill_for_date('today'::date - 5, 100000);
select fill_for_date('today'::date - 4, 100000);
select fill_for_date('today'::date - 3, 100000);
select fill_for_date('today'::date - 2, 100000);
select fill_for_date('today'::date - 1, 100000);

analyze hits_user_daily_count;

-- If you include this step, the query for today actually takes a long time;
-- but you risk auto-analyze changing the stats and making the problem go away.
-- select fill_for_date('today'::date - 0, 20000);

explain analyze select 1
        from hits_user_daily_count
        where userid = 15901 and date = 'yesterday'::date;

explain analyze select 1
        from hits_user_daily_count
        where userid = 15901 and date = 'today'::date;

Re: simple query with radically different plan after 9.0 -> 9.2 upgrade

From: Kevin Goess

Thanks for the reply!  

Your analysis matches everything I see here, so what you say is probably the case. As to why it changed for us with the 9.0 -> 9.2 upgrade, I don't know either; the change was pretty dramatic, though. Since we've compensated for it, and since you say the current behavior is actually what's expected, I'm happy.

But since you went to the trouble to reply:

> Now, the only way to get to a zero selectivity estimate for var = const
> is if the planner believes that the pg_stats most-common-values list
> for the column is complete, and the constant is nowhere in the list.
> So one plausible explanation for the change in behavior is that you
> jacked up the statistics target for the date column enough so that
> it includes all of the date values you keep in that column.

I'm not following you there, but I'm not a full-time database guy.  Attached is the pg_stats for that column in case you find that interesting or helpful.
 
> Am I right
> in guessing that you drop old data from this table?  How far back?
 
That's right, we store 90 days and roll up data older than that into a different table. 
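
Conceptually, the expiry side of that job is just something like this (an illustrative sketch, not our actual code; the rollup into the other table happens first):

delete from hits_user_daily_count
 where date < current_date - 90;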


--
Kevin M. Goess
Software Engineer
Berkeley Electronic Press
kgoess@bepress.com

510-665-1200 x179
www.bepress.com

bepress: sustainable scholarly publishing  

Re: simple query with radically different plan after 9.0 -> 9.2 upgrade

From: Tom Lane

Kevin Goess <kgoess@bepress.com> writes:
>> Now, the only way to get to a zero selectivity estimate for var = const
>> is if the planner believes that the pg_stats most-common-values list
>> for the column is complete, and the constant is nowhere in the list.
>> So one plausible explanation for the change in behavior is that you
>> jacked up the statistics target for the date column enough so that
>> it includes all of the date values you keep in that column.
>> Am I right
>> in guessing that you drop old data from this table?  How far back?

> That's right, we store 90 days and roll up data older than that into a
> different table.

Ah-hah.  The default statistics target is 100, so indeed ANALYZE is going
to be able to fit every date entry in the table into the
most-common-values list.  In this situation, you'd rather that there were
some uncertainty left.  Given that the distribution of the date column is
(I assume) pretty level, you don't really need full information about this
column.  I'd try backing off the stats target for the date column (and
only the date column --- see ALTER TABLE SET STATISTICS) to 50 or even
less.
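
Concretely, that suggestion comes out as the following (per the ALTER TABLE reference Tom points to; re-run ANALYZE afterward so the new target takes effect):

alter table hits_user_daily_count alter column date set statistics 50;
analyze hits_user_daily_count;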

Still bemused by the change from 9.0 to 9.2.  But there were some small
changes in the cost estimation equations for indexscans, so maybe on your
real data instead of my toy example the pkey index still managed to look
cheaper to 9.0 but not so much to 9.2.

            regards, tom lane


Re: simple query with radically different plan after 9.0 -> 9.2 upgrade

From: Kevin Goess

On Tue, Nov 12, 2013 at 2:47 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> That's right, we store 90 days and roll up data older than that into a
>> different table.

> Ah-hah.  The default statistics target is 100, so indeed ANALYZE is going
> to be able to fit every date entry in the table into the
> most-common-values list.  In this situation, you'd rather that there were
> some uncertainty left.  Given that the distribution of the date column is
> (I assume) pretty level, you don't really need full information about this
> column.  I'd try backing off the stats target for the date column (and
> only the date column --- see ALTER TABLE SET STATISTICS) to 50 or even
> less.

That was it!  I set it to 50 on all the 90-day tables, and now we no longer see that regular increase in disk reads between midnight of the new day and the 1:37am re-analyze.

Thanks!