Thread: Planner choice & tuning

Planner choice & tuning

From: "Ed L."
The planner is choosing a sequential scan for my query.  I am trying
to understand why since it is clearly not the fastest choice, and
what the proper tuning dial is to adjust here.  Here's the query:

    select count(1) from measurement where measure_id = 98;

TIA.  Here are the details:

% psql -c "\d measurement"
                                     Table "public.measurement"
   Column   |           Type           |                          Modifiers
------------+--------------------------+-------------------------------------------------------------
 id         | integer                  | not null default nextval('public.measurement_id_seq'::text)
 host_id    | integer                  | not null
 measure_id | integer                  | not null
 value      | double precision         | not null
 duration   | double precision         | not null default 0.0
 moment     | timestamp with time zone | not null default ('now'::text)::timestamp(6) with time zone
 source_id  | integer                  | not null default 1
Indexes: measurement_duration_idx btree (duration),
         measurement_host_idx btree (host_id),
         measurement_measure_idx btree (measure_id),
         measurement_moment_idx btree (moment),
         measurement_source_idx btree (source_id),
         measurement_value_idx btree (value)
Foreign Key constraints: $2 FOREIGN KEY (measure_id) REFERENCES measure(id) ON UPDATE NO ACTION ON DELETE NO ACTION,
                         $1 FOREIGN KEY (source_id) REFERENCES measurement_source(id) ON UPDATE NO ACTION ON DELETE NO ACTION

% psql -c "analyze measurement"
ANALYZE

% psql -c "select count(1) from measurement"
  count
---------
 1800866
(1 row)


% psql -c "select count(1) from measurement where measure_id = 98"
 count
-------
 38862
(1 row)

% time psql -c "set enable_seqscan=no; explain analyze select count(1) from measurement where measure_id = 98"
                                                              QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=215208.66..215208.66 rows=1 width=0) (actual time=904.58..904.58 rows=1 loops=1)
   ->  Index Scan using measurement_measure_idx on measurement  (cost=0.00..215062.64 rows=58408 width=0) (actual time=0.17..843.78 rows=38866 loops=1)
         Index Cond: (measure_id = 98)
 Total runtime: 904.77 msec
(4 rows)


real    0m1.298s
user    0m0.010s
sys     0m0.000s

% time psql -c "explain analyze select count(1) from measurement where measure_id = 98"
                                                        QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=97857.09..97857.09 rows=1 width=0) (actual time=2493.90..2493.90 rows=1 loops=1)
   ->  Seq Scan on measurement  (cost=0.00..97711.07 rows=58408 width=0) (actual time=12.94..2430.08 rows=38866 loops=1)
         Filter: (measure_id = 98)
 Total runtime: 2494.11 msec
(4 rows)


real    0m2.885s
user    0m0.000s
sys     0m0.000s


This seems to be saying the planner thinks it's less expensive to do the
sequential scan, but why?

Including pg_stats data in case it is relevant here.

% psql -c "select * from pg_stats where tablename = 'measurement' and attname = 'measure_id'"
 schemaname |  tablename  |  attname   | null_frac | avg_width | n_distinct |             most_common_vals              |                                most_common_freqs                                  |            histogram_bounds             | correlation
------------+-------------+------------+-----------+-----------+------------+-------------------------------------------+------------------------------------------------------------------------------------+------------------------------------------+-------------
 public     | measurement | measure_id |         0 |         4 |         52 | {104,108,113,124,106,110,101,107,112,109} | {0.084,0.0556667,0.052,0.05,0.048,0.0473333,0.0383333,0.0363333,0.034,0.0326667}  | {23,36,39,43,85,89,100,111,120,122,128}  |    0.232087
(1 row)

% psql -c "select name, setting from pg_settings where name like 'random%'"
       name       | setting
------------------+---------
 random_page_cost | 4
(1 row)




Re: Planner choice & tuning

From: "Scott Marlowe"
On Tue, 2004-06-22 at 16:47, Ed L. wrote:
> The planner is choosing a sequential scan for my query.  I am trying
> to understand why since it is clearly not the fastest choice, and
> what the proper tuning dial is to adjust here.  Here's the query:
>
>     select count(1) from measurement where measure_id = 98;
>
> [schema, row counts, EXPLAIN ANALYZE output, and pg_stats details snipped]
>
> This seems to be saying the planner thinks it's less expensive to do the
> sequential scan, but why?
>
> % psql -c "select name, setting from pg_settings where name like 'random%'"
>        name       | setting
> ------------------+---------
>  random_page_cost | 4
> (1 row)

I'd say your random_page_cost is too high for your setup.  While there's
a slight misestimate by the planner in the number of rows (38k versus 58k),
it's not that big.  Try values between 1.2 and 2.0.  Most larger servers
with plenty of memory work well around 1.3 to 1.5.  Also, make sure your
effective_cache_size is properly set.
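
For instance, you can test a lower value in just your current session
before committing anything to postgresql.conf (1.5 here is purely
illustrative; the right number depends on your hardware and memory):

    -- session-only experiment, does not affect other backends
    SET random_page_cost = 1.5;
    EXPLAIN ANALYZE SELECT count(1) FROM measurement WHERE measure_id = 98;

    -- see what the planner currently assumes about the OS/buffer cache
    SHOW effective_cache_size;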


Re: Planner choice & tuning

From: Tom Lane
"Ed L." <pgsql@bluepolka.net> writes:
> This seems to be saying the planner thinks it's less expensive to do the
> sequential scan, but why?

Because it thinks it's less expensive ;-)

There are a couple issues here: one is overestimation of the number of
rows matching the query.  That is a statistical issue and should be
fixable by increasing the statistics target for the column.  With an
accurate rows estimate the seqscan cost estimate would not change but
the indexscan cost estimate would decrease approximately proportionally.
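
A sketch of what that looks like (100 is just an illustrative target; the
default is 10), followed by a fresh ANALYZE so the new target takes effect:

    ALTER TABLE measurement ALTER COLUMN measure_id SET STATISTICS 100;
    ANALYZE measurement;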

Allowing for the row estimation error, the indexscan cost estimate is
still about 4x what it ought to be, which means that you could bring the
estimated cost (for this query anyway) in line with reality by setting
random_page_cost = 1.
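
You can check that for this one query without touching the configuration;
the setting below lasts only for the current session:

    SET random_page_cost = 1;
    EXPLAIN ANALYZE SELECT count(1) FROM measurement WHERE measure_id = 98;
    RESET random_page_cost;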

Note however that we are defining "reality" as "the results of this one
single experiment".  You should realize in particular that the table is
probably fully cached in memory during your test, which is a scenario
in which random_page_cost actually is 1.0 (given reasonable assumptions
about the behavior of the kernel's cache, anyway).  If you optimize for
this case then you are likely to be pessimizing the behavior for larger
tables that don't fit in memory.

My suggestion would be to lower random_page_cost to 3 or so, which would
be enough to tip the decision to indexscan for this case, though not to
make the estimated cost really truly correct.  You should however run a
larger set of experiments before doing anything, and realize that any
random_page_cost setting is a compromise because the model doesn't take
all the variables into account.
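
Once a larger set of experiments does settle on a value, the change can be
made persistent in postgresql.conf and picked up with a reload (the value
and the data directory path below are only placeholders):

    # postgresql.conf
    random_page_cost = 3

    % pg_ctl reload -D /path/to/data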

You can find much more about this issue in the pgsql-performance archives.

            regards, tom lane