Thread: Postgres Optimizer is not smart enough?

From: Litao Wu

Hi All,

Here is my test comparing the Postgres (7.3.2) optimizer with the
Oracle (10g) optimizer.

It seems to me that the Postgres optimizer is not smart enough.

Did I miss anything?

Thanks,

In Postgres:
============
drop table test;
create table test (
 module        character varying(50),
 action_deny   integer,
 created       timestamp with time zone,
 customer_id   integer,
 domain        character varying(255));
create or replace function insert_rows () returns
integer as '
BEGIN
   for i in 1 .. 500000 loop
     insert into test values (i, 2, now(), 100,  i);
   end loop;
   return 1;
END;
' LANGUAGE 'plpgsql';

select insert_rows();

create index test_id1 on test (customer_id, created, domain);

analyze test;

explain analyze
SELECT module,  sum(action_deny)
FROM test
WHERE  created >= ('now'::timestamptz - '1 day'::interval)
  AND  customer_id='100'
  AND  domain='100'
GROUP BY module;


                          QUERY PLAN
----------------------------------------------------------------------------------------------------
 Aggregate  (cost=3.12..3.13 rows=1 width=9) (actual time=91.05..91.05 rows=1 loops=1)
   ->  Group  (cost=3.12..3.12 rows=1 width=9) (actual time=91.04..91.04 rows=1 loops=1)
         ->  Sort  (cost=3.12..3.12 rows=1 width=9) (actual time=91.03..91.03 rows=1 loops=1)
               Sort Key: module
               ->  Index Scan using test_id1 on test  (cost=0.00..3.11 rows=1 width=9) (actual time=0.03..91.00 rows=1 loops=1)
                     Index Cond: ((customer_id = 100) AND (created >= '2005-01-11 14:48:44.832552-07'::timestamp with time zone) AND ("domain" = '100'::character varying))
 Total runtime: 91.13 msec
(7 rows)

create index test_id2 on test(domain);
analyze test;

explain analyze
SELECT module,  sum(action_deny)
FROM test
WHERE  created >= ('now'::timestamptz - '1 day'::interval)
  AND  customer_id='100'
  AND  domain='100'
GROUP BY module;


                          QUERY PLAN
----------------------------------------------------------------------------------------------------
 Aggregate  (cost=3.12..3.13 rows=1 width=9) (actual time=90.30..90.30 rows=1 loops=1)
   ->  Group  (cost=3.12..3.12 rows=1 width=9) (actual time=90.29..90.30 rows=1 loops=1)
         ->  Sort  (cost=3.12..3.12 rows=1 width=9) (actual time=90.29..90.29 rows=1 loops=1)
               Sort Key: module
               ->  Index Scan using test_id1 on test  (cost=0.00..3.11 rows=1 width=9) (actual time=0.03..90.25 rows=1 loops=1)
                     Index Cond: ((customer_id = 100) AND (created >= '2005-01-11 14:51:09.555974-07'::timestamp with time zone) AND ("domain" = '100'::character varying))
 Total runtime: 90.38 msec
(7 rows)

WHY DOES PG STILL CHOOSE INDEX test_id1???
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
THE QUERY WILL RUN MUCH FASTER USING test_id2!!!
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
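
One way to compare the two plans without permanently losing the index
is to wrap the drop in a transaction (a sketch; DDL is transactional
in Postgres, so the rollback puts test_id1 back):

begin;
drop index test_id1;   -- hide test_id1 from the planner
explain analyze
SELECT module, sum(action_deny)
FROM test
WHERE created >= ('now'::timestamptz - '1 day'::interval)
  AND customer_id='100'
  AND domain='100'
GROUP BY module;
rollback;              -- test_id1 is restored

Dropping it outright shows the same plan change: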

drop index test_id1;
explain analyze
SELECT module,  sum(action_deny)
FROM test
WHERE  created >= ('now'::timestamptz - '1 day'::interval)
  AND  customer_id='100'
  AND  domain='100'
GROUP BY module;

                          QUERY PLAN
----------------------------------------------------------------------------------------------------
 Aggregate  (cost=3.12..3.13 rows=1 width=9) (actual time=0.08..0.08 rows=1 loops=1)
   ->  Group  (cost=3.12..3.13 rows=1 width=9) (actual time=0.08..0.08 rows=1 loops=1)
         ->  Sort  (cost=3.12..3.13 rows=1 width=9) (actual time=0.07..0.07 rows=1 loops=1)
               Sort Key: module
               ->  Index Scan using test_id2 on test  (cost=0.00..3.11 rows=1 width=9) (actual time=0.04..0.05 rows=1 loops=1)
                     Index Cond: ("domain" = '100'::character varying)
                     Filter: ((created >= '2005-01-11 14:53:58.806364-07'::timestamp with time zone) AND (customer_id = 100))
 Total runtime: 0.14 msec
(8 rows)

In Oracle:
==========
drop table test;
create table test (
 module        character varying(50),
 action_deny   integer,
 created       timestamp with time zone,
 customer_id   integer,
 domain        character varying(255));

begin
   for i in 1..500000 loop
     insert into test values (i, 2, current_timestamp, 100, i);
   end loop;
end;
/

create index test_id1 on test (customer_id, created, domain);

analyze table test compute statistics;

set autot on
set timing on

SELECT module,  sum(action_deny)
FROM test
WHERE  created >= (current_timestamp - interval '1' day)
  AND  customer_id=100
  AND  domain='100'
GROUP BY module
/

MODULE                                             SUM(ACTION_DENY)
-------------------------------------------------- ----------------
100                                                               2

Elapsed: 00:00:00.67

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=ALL_ROWS (Cost=25 Card=1 Bytes=29)
   1    0   SORT (GROUP BY) (Cost=25 Card=1 Bytes=29)
   2    1     TABLE ACCESS (BY INDEX ROWID) OF 'TEST' (TABLE) (Cost=24 Card=1 Bytes=29)
   3    2       INDEX (RANGE SCAN) OF 'TEST_ID1' (INDEX) (Cost=23 Card=4500)

Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
       2292  consistent gets
       2291  physical reads
          0  redo size
        461  bytes sent via SQL*Net to client
        508  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
          1  rows processed

create index test_id2 on test (domain);

SELECT module,  sum(action_deny)
FROM test
WHERE  created >= (current_timestamp - interval '1' day)
  AND  customer_id=100
  AND  domain='100'
GROUP BY module
/

MODULE                                             SUM(ACTION_DENY)
-------------------------------------------------- ----------------
100                                                               2

Elapsed: 00:00:00.03

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=ALL_ROWS (Cost=5 Card=1 Bytes=29)
   1    0   SORT (GROUP BY) (Cost=5 Card=1 Bytes=29)
   2    1     TABLE ACCESS (BY INDEX ROWID) OF 'TEST' (TABLE) (Cost=4 Card=1 Bytes=29)
   3    2       INDEX (RANGE SCAN) OF 'TEST_ID2' (INDEX) (Cost=3 Card=1)

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          4  consistent gets
          0  physical reads
          0  redo size
        461  bytes sent via SQL*Net to client
        508  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
          1  rows processed






Re: Postgres Optimizer is not smart enough?

From: Mike Mascari

Litao Wu wrote:
> Hi All,
>
> Here is my test comparison between Postgres (7.3.2)
> optimizer vs Oracle (10g) optimizer.
>
> It seems to me that Postgres optimizer is not smart
> enough.
>
> Did I miss anything?

Yeah, 7.4.

7.3.2 is *ancient*. Here's output from 7.4:

[test@ferrari] explain analyze
test-# SELECT module,  sum(action_deny)
test-# FROM test
test-# WHERE  created >= ('now'::timestamptz - '1
test'# day'::interval) AND customer_id='100'
test-#   AND  domain='100'
test-# GROUP BY module;

  QUERY PLAN
----------------------------------------------------------------------------------------------------
  HashAggregate  (cost=5.69..5.69 rows=1 width=13) (actual time=715.058..715.060 rows=1 loops=1)
    ->  Index Scan using test_id1 on test  (cost=0.00..5.68 rows=1 width=13) (actual time=0.688..690.459 rows=1 loops=1)
          Index Cond: ((customer_id = 100) AND (created >= '2005-01-11 17:52:22.364145-05'::timestamp with time zone) AND (("domain")::text = '100'::text))
  Total runtime: 717.546 ms
(4 rows)

[test@ferrari] create index test_id2 on test(domain);
CREATE INDEX
[test@ferrari] analyze test;
ANALYZE
[test@ferrari]
[test@ferrari] explain analyze
test-# SELECT module,  sum(action_deny)
test-# FROM test
test-# WHERE  created >= ('now'::timestamptz - '1
test'# day'::interval) AND customer_id='100'
test-#   AND  domain='100'
test-# GROUP BY module;
  QUERY PLAN
----------------------------------------------------------------------------------------------------
  HashAggregate  (cost=5.68..5.69 rows=1 width=13) (actual time=10.778..10.780 rows=1 loops=1)
    ->  Index Scan using test_id2 on test  (cost=0.00..5.68 rows=1 width=13) (actual time=10.702..10.721 rows=1 loops=1)
          Index Cond: (("domain")::text = '100'::text)
          Filter: ((created >= '2005-01-11 17:53:16.720749-05'::timestamp with time zone) AND (customer_id = 100))
  Total runtime: 11.039 ms
(5 rows)

[test@ferrari] select version();
  PostgreSQL 7.4.5 on i686-pc-linux-gnu, compiled by GCC
i686-pc-linux-gnu-gcc (GCC) 3.4.0 20040204 (prerelease)
(1 row)

Hope that helps,

Mike Mascari

Re: Postgres Optimizer is not smart enough?

From: Mark Kirkwood

Litao Wu wrote:
> explain analyze
> SELECT module,  sum(action_deny)
> FROM test
> WHERE  created >= ('now'::timestamptz - '1 day'::interval)
>   AND  customer_id='100'
>   AND  domain='100'
> GROUP BY module;

Here is my output for this query:

                                                               QUERY PLAN
----------------------------------------------------------------------------------------------------
  HashAggregate  (cost=3.03..3.03 rows=1 width=13) (actual time=0.132..0.135 rows=1 loops=1)
    ->  Index Scan using test_id2 on test  (cost=0.00..3.02 rows=1 width=13) (actual time=0.085..0.096 rows=1 loops=1)
          Index Cond: (("domain")::text = '100'::text)
          Filter: ((created >= ('2005-01-13 11:57:34.673833+13'::timestamp with time zone - '1 day'::interval)) AND (customer_id = 100))
  Total runtime: 0.337 ms
(5 rows)

Time: 8.424 ms


The version is:
PostgreSQL 8.0.0rc5 on i386-unknown-freebsd5.3, compiled by GCC gcc
(GCC) 3.4.2 [FreeBSD] 20040728


I have random_page_cost = 0.8 in my postgresql.conf. Setting it back to
the default (4) results in a plan using test_id1. A little
experimentation showed that for my system random_page_cost=1 was where
it changed from using test_id1 to test_id2.

So changing this parameter may be helpful.
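
For experimenting, the parameter does not have to go into
postgresql.conf; it can be changed per session (a sketch, using the
same query as above):

set random_page_cost = 1;
explain analyze
SELECT module, sum(action_deny)
FROM test
WHERE created >= ('now'::timestamptz - '1 day'::interval)
  AND customer_id='100'
  AND domain='100'
GROUP BY module;
reset random_page_cost;   -- back to the configured value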

I happen to have some debugging code enabled for the optimizer, and the
issue appears to be that the costs of paths using these indexes are
quite similar, and so are quite sensitive to (some) parameter values.

regards

Mark

P.S.: 7.3.2 is quite old.


Re: Postgres Optimizer is not smart enough?

From: Ragnar Hafstað

On Thu, 2005-01-13 at 12:14 +1300, Mark Kirkwood wrote:

[snip some explains]

>
> I have random_page_cost = 0.8 in my postgresql.conf. Setting it back to
> the default (4) results in a plan using test_id1.

it is not rational to have random_page_cost < 1.

if you see improvement with such a setting, it is as likely that
something else is wrong, such as higher statistics targets being needed,
or a much too low effective_cache_size setting.
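
for example (a sketch, using the test table from the original post;
the target value 100 is just illustrative):

-- raise the statistics target for the skewed column and re-analyze
alter table test alter column domain set statistics 100;
analyze test;

-- and sanity-check that effective_cache_size roughly matches reality
show effective_cache_size;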

gnari



Re: Postgres Optimizer is not smart enough?

From: Mark Kirkwood

Ragnar Hafstað wrote:
> it is not rational to have random_page_cost < 1.

I agree, in theory one should never *need* to set it < 1. However, in
cases where the optimizer's understanding of things is a little off,
compensation may be required to achieve better plans (e.g. encouraging
index scans on data with funny distributions or correlations).

> if you see improvement with such a setting, it is as likely that
> something else is wrong, such as higher statistics targets being needed,
> or a much too low effective_cache_size setting.

Although this is good advice, it is not always sufficient. For instance,
I have effective_cache_size=20000. The machine has 512MB of RAM, and
right now cache+buf+free is about 100MB, with shared_buffers=2000. So in
fact I probably have it a bit high :-).
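
(At the default 8kB page size, effective_cache_size=20000 works out to
about 160MB, which is indeed somewhat more than the ~100MB actually free
for caching.)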

Increasing the stats target can make the situation either better or
worse: a better sample of the data is obtained for analysis, but this is
not *guaranteed* to lead to a faster execution plan, even if it usually
does.

cheers

Mark


Re: Postgres Optimizer is not smart enough?

From: Tom Lane

Mark Kirkwood <markir@coretech.co.nz> writes:
> I happen to have some debugging code enabled for the optimizer, and the
> issue appears to be that the costs of paths using these indexes are
> quite similar, so are quite sensitive to (some) parameter values.

They'll be exactly the same, actually, as long as the thing predicts
exactly one row retrieved.  So it's quasi-random which plan you get.

btcostestimate needs to be improved to understand that in multicolumn
index searches with inequality conditions, we may have to scan through
tuples that don't meet all the qualifications.  It's not accounting for
that cost at the moment, which is why the estimates are the same.
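
To illustrate with the example upthread: the inequality on created means
the scan of test_id1 (customer_id, created, domain) is bounded only by
customer_id and created, and every index entry in that range must be
walked just to check domain. A possible workaround is to lead with the
equality columns so the range column comes last (a sketch; test_id3 is a
hypothetical index, not from the original post):

create index test_id3 on test (customer_id, domain, created);
analyze test;

explain analyze
SELECT module, sum(action_deny)
FROM test
WHERE created >= ('now'::timestamptz - '1 day'::interval)
  AND customer_id='100'
  AND domain='100'
GROUP BY module;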

            regards, tom lane

Re: Postgres Optimizer is not smart enough?

From: Mark Kirkwood

Tom Lane wrote:
> Mark Kirkwood <markir@coretech.co.nz> writes:
>> the costs of paths using these indexes are
>> quite similar, so are quite sensitive to (some) parameter values.
>
> They'll be exactly the same, actually, as long as the thing predicts
> exactly one row retrieved.  So it's quasi-random which plan you get.
>
> btcostestimate needs to be improved to understand that in multicolumn
> index searches with inequality conditions, we may have to scan through
> tuples that don't meet all the qualifications.  It's not accounting for
> that cost at the moment, which is why the estimates are the same.
>
I see some small differences in the numbers. I am thinking that these
are due to the calculations etc. in cost_index(), e.g.:

create_index_paths : index oid 12616389 (test_id2)
cost_index : cost=2.839112 (startup_cost=0.000000 run_cost=2.839112)
                    : tuples=1.000000 cpu_per_tuple=0.017500
                    : selectivity=0.000002
                    : run_index_tot_cost=2.003500 run_io_cost=0.818112

create_index_paths : index oid 12616388 (test_id1)
cost_index : cost=2.933462 (startup_cost=0.002500 run_cost=2.930962)
                    : tuples=1.000000 cpu_per_tuple=0.010000
                    : selectivity=0.000002
                    : run_index_tot_cost=2.008500 run_io_cost=0.912462


Where:

run_index_tot_cost=indexTotalCost - indexStartupCost;
run_io_cost=max_IO_cost + csquared * (min_IO_cost - max_IO_cost)
selectivity=indexSelectivity
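
As a cross-check, those pieces reconcile with the printed totals,
assuming run_cost = run_index_tot_cost + run_io_cost + cpu_per_tuple * tuples:

  test_id2: 2.003500 + 0.818112 + 0.017500 * 1.0 = 2.839112
  test_id1: 2.008500 + 0.912462 + 0.010000 * 1.0 = 2.930962

which match the run_cost values reported for each index above.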

Hmmm ... so it's only the selectivity that is the same (sourced from
index->amcostestimate which I am guessing points to btcostestimate), is
that correct?

cheers

Mark



Re: Postgres Optimizer is not smart enough?

From: Tom Lane

Mark Kirkwood <markir@coretech.co.nz> writes:
> Hmmm ... so it's only the selectivity that is the same (sourced from
> index->amcostestimate which I am guessing points to btcostestimate), is
> that correct?

No, the point is that btcostestimate will compute not only the same
selectivities but the identical index access cost values, because it
thinks that only one index entry will be fetched in both cases.  It
needs to account for the fact that the inequality condition will cause a
scan over a larger range of the index than is actually returned.  See
_bt_preprocess_keys() and _bt_checkkeys().

The small differences you are showing have to do with different
assumptions about where the now() function will get evaluated (once per
row or once at scan start).  That's not the effect that I'm worried
about.
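
A quick way to see the difference (a sketch):

select 'now'::timestamptz;  -- the literal is folded to a constant when
                            -- the expression is first evaluated
select now();               -- stable: one value for the whole query, so
                            -- it can be computed once at scan start
select timeofday();         -- wall-clock time that advances even within
                            -- a transaction, unlike now()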

            regards, tom lane