Thread: Bad plan on a view

Bad plan on a view

From: PFC
Date:
I have a table with a few small numeric fields and several text fields,
on PostgreSQL 8.1.2.

The numeric fields are used for searching (category_id, price, etc).
The text fields are just a description of the item, comments, email
address, telephone, etc.

So, in order to speed up requests which need a full table scan, I wanted
to put the text fields in another table, and use a view to make it look
like nothing happened. Also, the small table used for searching is a lot
more likely to fit in RAM than the big table with all the text which is
only used for display.

However, the query plan for the view is sometimes very bad (see below).

Here is a simplified version of my schema, with only two search columns
(price, category) and a single text column:

CREATE TABLE items (
   id SERIAL PRIMARY KEY,
   price FLOAT NULL,
   category INTEGER NOT NULL,
   description TEXT
);

CREATE TABLE items_data (
   id SERIAL PRIMARY KEY,
   price FLOAT NULL,
   category INTEGER NOT NULL
);

CREATE TABLE items_desc (
   id INTEGER NOT NULL REFERENCES items_data(id) ON DELETE CASCADE,
   description TEXT,
   PRIMARY KEY (id)
);

items is then populated with about 100K rows, and the data is copied
into the two split tables:
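
(For anyone wanting to reproduce this without my production data,
something like the following could stand in; the value distributions are
invented and will not match the exact row counts in the plans below.)

-- hypothetical test data: most rows get a price, plus some text
INSERT INTO items (price, category, description)
SELECT CASE WHEN random() < 0.97 THEN random() * 1000 END,
       (random() * 50)::integer + 1,
       repeat('some item description text ', (random() * 8)::integer + 1)
FROM generate_series(1, 100000);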

INSERT INTO items_data (id,price,category) SELECT id,price,category FROM items;
INSERT INTO items_desc (id,description) SELECT id,description FROM items;
alter table items_data ALTER price set statistics 100;
alter table items_data ALTER category set statistics 100;
VACUUM ANALYZE;

CREATE VIEW items_view1 AS SELECT a.id, a.price, a.category, b.description
 FROM items_data a, items_desc b WHERE a.id=b.id;
CREATE VIEW items_view2 AS SELECT a.id, a.price, a.category, b.description
 FROM items_data a LEFT JOIN items_desc b ON a.id=b.id;

Now, an example query:

** From the plain table

EXPLAIN ANALYZE SELECT * FROM items WHERE price IS NOT NULL AND category=1
ORDER BY price DESC LIMIT 10;
                                                        QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------
  Limit  (cost=10308.21..10308.23 rows=10 width=229) (actual time=391.373..391.379 rows=10 loops=1)
    ->  Sort  (cost=10308.21..10409.37 rows=40466 width=229) (actual time=391.371..391.375 rows=10 loops=1)
          Sort Key: price
          ->  Seq Scan on items  (cost=0.00..4549.57 rows=40466 width=229) (actual time=0.652..91.125 rows=42845 loops=1)
                Filter: ((price IS NOT NULL) AND (category = 1))
  Total runtime: 399.511 ms

** From the data-only table (no descriptions)

EXPLAIN ANALYZE SELECT * FROM items_data WHERE price IS NOT NULL AND
category=1 ORDER BY price DESC LIMIT 10;
                                                          QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------
  Limit  (cost=5250.92..5250.95 rows=10 width=16) (actual time=275.765..275.769 rows=10 loops=1)
    ->  Sort  (cost=5250.92..5357.83 rows=42763 width=16) (actual time=275.763..275.766 rows=10 loops=1)
          Sort Key: price
          ->  Seq Scan on items_data  (cost=0.00..1961.58 rows=42763 width=16) (actual time=0.411..57.610 rows=42845 loops=1)
                Filter: ((price IS NOT NULL) AND (category = 1))
  Total runtime: 278.023 ms

It is faster to access the smaller table. Note that this example splits
off only the description column. With all the other display-only columns
of my production table (telephone, email, etc.), the query takes about
1.2 seconds, simply because the table is a lot larger (yes, it still
fits in RAM... for now).

Now, let's check out the two views: the plans are exactly the same for both.

EXPLAIN ANALYZE SELECT * FROM items_view2 WHERE price IS NOT NULL AND
category=1 ORDER BY price DESC LIMIT 10;
                                                                           QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------------------------------------
  Limit  (cost=13827.38..13827.41 rows=10 width=222) (actual time=584.704..584.712 rows=10 loops=1)
    ->  Sort  (cost=13827.38..13934.29 rows=42763 width=222) (actual time=584.703..584.709 rows=10 loops=1)
          Sort Key: a.price
          ->  Merge Left Join  (cost=0.00..7808.02 rows=42763 width=222) (actual time=1.708..285.663 rows=42845 loops=1)
                Merge Cond: ("outer".id = "inner".id)
                ->  Index Scan using items_data_pkey on items_data a  (cost=0.00..2439.74 rows=42763 width=16) (actual time=0.692..86.330 rows=42845 loops=1)
                      Filter: ((price IS NOT NULL) AND (category = 1))
                ->  Index Scan using items_desc_pkey on items_desc b  (cost=0.00..4585.83 rows=99166 width=210) (actual time=0.038..104.957 rows=99165 loops=1)
  Total runtime: 593.068 ms

Wow. This is a lot slower, because it performs the big join BEFORE
applying the sort and limit.

Here is the expanded query that the view generates:

SELECT a.id, a.price, a.category, b.description
FROM items_data a LEFT JOIN items_desc b ON a.id=b.id
WHERE price IS NOT NULL AND category=1
ORDER BY price DESC LIMIT 10;

I would have expected the planner to rewrite it like this:

EXPLAIN ANALYZE SELECT foo.*, b.description
FROM (SELECT * FROM items_data a
      WHERE price IS NOT NULL AND category=1
      ORDER BY price DESC LIMIT 10) AS foo
LEFT JOIN items_desc b ON foo.id=b.id
ORDER BY price DESC LIMIT 10;

This query should be equivalent to the view with LEFT JOIN. I am aware it
is not equivalent to the view with a simple join.

                                                                  QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------------------
  Limit  (cost=5250.92..5281.31 rows=10 width=222) (actual time=273.300..273.363 rows=10 loops=1)
    ->  Nested Loop Left Join  (cost=5250.92..5281.31 rows=10 width=222) (actual time=273.299..273.361 rows=10 loops=1)
          ->  Limit  (cost=5250.92..5250.95 rows=10 width=16) (actual time=273.267..273.269 rows=10 loops=1)
                ->  Sort  (cost=5250.92..5357.83 rows=42763 width=16) (actual time=273.266..273.267 rows=10 loops=1)
                      Sort Key: a.price
                      ->  Seq Scan on items_data a  (cost=0.00..1961.58 rows=42763 width=16) (actual time=0.423..67.149 rows=42845 loops=1)
                            Filter: ((price IS NOT NULL) AND (category = 1))
          ->  Index Scan using items_desc_pkey on items_desc b  (cost=0.00..3.01 rows=1 width=210) (actual time=0.006..0.007 rows=1 loops=10)
                Index Cond: ("outer".id = b.id)
  Total runtime: 275.608 ms

The second form is faster, but more importantly, it does nearly all of
its I/O in the small table and fetches only the 10 needed rows from the
large table. So even if the large table is not in the disk cache, the
cost stays low, which is the whole point of splitting the table behind
a view.
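
To see why this rewrite is only valid for the LEFT JOIN view, consider a
constructed case (every id above did get a description row, so this does
not occur in the data as loaded):

-- suppose the cheapest item in category 1 loses its description row
DELETE FROM items_desc
 WHERE id = (SELECT id FROM items_data
              WHERE price IS NOT NULL AND category=1
              ORDER BY price DESC LIMIT 1);
-- items_view2 (LEFT JOIN) still returns that item, with a NULL
-- description, so limiting items_data first and joining afterwards
-- yields the same rows. items_view1 (plain join) drops the item
-- entirely: the rewrite would return only 9 rows, while the view
-- returns 10, with the 11th cheapest item taking the last slot.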

With indexes, fast plans are picked, but they all perform the join
before doing the sort+limit. The index is only used for the sort when it
matches the ORDER BY column. Bitmap index scans also come in to save the
day (I love bitmap index scans).

However, I will have a lot of searchable columns and many ORDER BY
options. Ideally I would like to create a few indexes for the common
searches and sorts, but I would prefer not to create about 15 indexes on
this table, because that would slow down updates. Besides, some of the
ORDER BY clauses are expressions.

A seq scan or an index scan of the small table, followed by a sort and
limit, then a join to the other table, would be more logical.

Suppose I create an index on price and one on category.
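
The definitions would be something like this (inferred from the index
names that appear in the plans below):

CREATE INDEX item_data_price ON items_data (price);
CREATE INDEX item_data_cat ON items_data (category);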

EXPLAIN ANALYZE SELECT * FROM items_view2 WHERE price IS NOT NULL AND
category IN (4,32) ORDER BY price LIMIT 10;
                                                                     QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------------------------
  Limit  (cost=0.00..31.54 rows=10 width=224) (actual time=0.737..0.964 rows=10 loops=1)
    ->  Nested Loop Left Join  (cost=0.00..112594.96 rows=35700 width=224) (actual time=0.735..0.958 rows=10 loops=1)
          ->  Index Scan using item_data_price on items_data a  (cost=0.00..4566.76 rows=35700 width=16) (actual time=0.696..0.753 rows=10 loops=1)
                Filter: ((price IS NOT NULL) AND ((category = 4) OR (category = 32)))
          ->  Index Scan using items_desc_pkey on items_desc b  (cost=0.00..3.01 rows=1 width=212) (actual time=0.018..0.018 rows=1 loops=10)
                Index Cond: ("outer".id = b.id)
  Total runtime: 0.817 ms

Now, with a subtly different ORDER BY:

EXPLAIN ANALYZE SELECT * FROM items_view2 WHERE price IS NOT NULL AND
category IN (4,32) ORDER BY price,category LIMIT 10;
                                                                           QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------------------------------------
  Limit  (cost=12931.79..12931.82 rows=10 width=224) (actual time=1121.426..1121.433 rows=10 loops=1)
    ->  Sort  (cost=12931.79..13021.04 rows=35700 width=224) (actual time=1121.424..1121.428 rows=10 loops=1)
          Sort Key: a.price, a.category
          ->  Merge Left Join  (cost=0.00..7967.65 rows=35700 width=224) (actual time=0.060..530.815 rows=36705 loops=1)
                Merge Cond: ("outer".id = "inner".id)
                ->  Index Scan using items_data_pkey on items_data a  (cost=0.00..2687.66 rows=35700 width=16) (actual time=0.051..116.995 rows=36705 loops=1)
                      Filter: ((price IS NOT NULL) AND ((category = 4) OR (category = 32)))
                ->  Index Scan using items_desc_pkey on items_desc b  (cost=0.00..4585.83 rows=99166 width=212) (actual time=0.003..205.652 rows=95842 loops=1)
  Total runtime: 1128.972 ms

ORDER BY price, category prevents the index from being used for the
sort, so the large join is performed again. With the rewritten query:

EXPLAIN ANALYZE SELECT foo.*, b.description
FROM (SELECT * FROM items_data a
      WHERE price IS NOT NULL AND category IN (4,32)
      ORDER BY price, category DESC LIMIT 10) AS foo
LEFT JOIN items_desc b ON foo.id=b.id
ORDER BY price, category DESC LIMIT 10;
                                                                          QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------------------------------
  Limit  (cost=4229.26..4259.64 rows=10 width=224) (actual time=222.353..222.410 rows=10 loops=1)
    ->  Nested Loop Left Join  (cost=4229.26..4259.64 rows=10 width=224) (actual time=222.352..222.405 rows=10 loops=1)
          ->  Limit  (cost=4229.26..4229.28 rows=10 width=16) (actual time=222.318..222.324 rows=10 loops=1)
                ->  Sort  (cost=4229.26..4318.51 rows=35700 width=16) (actual time=222.317..222.322 rows=10 loops=1)
                      Sort Key: a.price, a.category
                      ->  Bitmap Heap Scan on items_data a  (cost=239.56..1529.69 rows=35700 width=16) (actual time=6.926..34.018 rows=36705 loops=1)
                            Recheck Cond: ((category = 4) OR (category = 32))
                            Filter: (price IS NOT NULL)
                            ->  BitmapOr  (cost=239.56..239.56 rows=37875 width=0) (actual time=6.778..6.778 rows=0 loops=1)
                                  ->  Bitmap Index Scan on item_data_cat  (cost=0.00..229.61 rows=36460 width=0) (actual time=6.295..6.295 rows=36400 loops=1)
                                        Index Cond: (category = 4)
                                  ->  Bitmap Index Scan on item_data_cat  (cost=0.00..9.95 rows=1415 width=0) (actual time=0.482..0.482 rows=1340 loops=1)
                                        Index Cond: (category = 32)
          ->  Index Scan using items_desc_pkey on items_desc b  (cost=0.00..3.01 rows=1 width=212) (actual time=0.006..0.006 rows=1 loops=10)
                Index Cond: ("outer".id = b.id)
  Total runtime: 224.476 ms

It is not very fast (the sort still takes most of the time), but at
224 ms instead of 1129 ms it is a lot faster!

Now, what should I do?...

Re: Bad plan on a view

From: Tom Lane
Date:
PFC <lists@peufeu.com> writes:
> So, in order to speed up requests which need a full table scan, I wanted
> to put the text fields in another table, and use a view to make it look
> like nothing happened. Also, the small table used for searching is a lot
> more likely to fit in RAM than the big table with all the text which is
> only used for display.

Aren't you going to a lot of work to reinvent something that TOAST
already does for you?  (At least, in the cases where the text fields
are wide enough that it really matters.)

            regards, tom lane

Re: Bad plan on a view

From: PFC
Date:
> Aren't you going to a lot of work to reinvent something that TOAST
> already does for you?  (At least, in the cases where the text fields
> are wide enough that it really matters.)

    I know. But I have several text fields in the 20 to 200 character
range, which is too small for TOAST, but large enough to make up about
90% of the table size, which makes it problematic RAM-wise, especially
since it's gonna grow. Now, if I had one big text field, it would be
TOASTed and I would be happy ;)
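
    To put rough numbers on it (assumed figures): ten such fields
averaging 100 bytes each put about 1 kB of text in every row, well under
the ~2 kB (BLKSZ/4) threshold at which the toaster starts moving data
out of line; yet over 100K rows that is on the order of 100 MB of
display-only data that every seq scan has to read.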


Re: Bad plan on a view

From: Greg Stark
Date:
Tom Lane <tgl@sss.pgh.pa.us> writes:

> PFC <lists@peufeu.com> writes:
> > So, in order to speed up requests which need a full table scan, I wanted
> > to put the text fields in another table, and use a view to make it look
> > like nothing happened. Also, the small table used for searching is a lot
> > more likely to fit in RAM than the big table with all the text which is
> > only used for display.
>
> Aren't you going to a lot of work to reinvent something that TOAST
> already does for you?  (At least, in the cases where the text fields
> are wide enough that it really matters.)

I think this is a fairly common data modelling trick actually. And it's not a
terribly large amount of work either.

While TOAST has a similar goal I don't think it has enough AI to completely
replace this manual process. It suffers in a number of use cases:

1) When you have a large number of moderate-sized text fields instead of
   a single very large text field. This is probably the case here.

2) When you know exactly which fields you'll be searching on and which you
   won't be. Often many speed-sensitive queries don't need to access the
   extended information at all.

   Instead of making the decision on a per-record basis, you can *always*
   move the data to the other table, saving even more space, even in cases
   where you gain very little per record. Totalled across the entire scan,
   you still gain a lot by scanning just the dense integer fields.


Also, is the optimizer capable of coming up with merge join type plans for
TOAST tables when necessary?


--
greg

Re: Bad plan on a view

From: PFC
Date:

> While TOAST has a similar goal I don't think it has enough AI to
> completely replace this manual process. It suffers in a number of use
> cases:
>
> 1) When you have a large number of moderate-sized text fields instead
>    of a single very large text field. This is probably the case here.

    Exactly.

> 2) When you know exactly which fields you'll be searching on and
>    which you won't be. Often many speed-sensitive queries don't need
>    to access the extended information at all.

    Also true. I only need the large fields to display the few rows
which survive the LIMIT...

    Here's another example of the same kind: although the subselect has
no influence on the WHERE condition, 97021 subselects are computed, and
only 10 are kept... This data also bloats the sort (if the subselect
yields a large text field instead of an int, the sort time doubles).

explain analyze select raw_annonce_id, price, rooms, surface, terrain,
    contact_telephones, description,
    (SELECT price FROM raw_annonces r WHERE r.id=raw_annonce_id)
from annonces where price is not null order by price desc limit 10;
                                                                         QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------------------------------
  Limit  (cost=459568.37..459568.40 rows=10 width=272) (actual time=1967.360..1967.368 rows=10 loops=1)
    ->  Sort  (cost=459568.37..459812.60 rows=97689 width=272) (actual time=1967.357..1967.361 rows=10 loops=1)
          Sort Key: price
          ->  Seq Scan on annonces  (cost=0.00..443102.59 rows=97689 width=272) (actual time=0.059..949.507 rows=97021 loops=1)
                Filter: (price IS NOT NULL)
                SubPlan
                  ->  Index Scan using raw_annonces_pkey on raw_annonces r  (cost=0.00..4.46 rows=1 width=8) (actual time=0.005..0.006 rows=1 loops=97021)
                        Index Cond: (id = $0)
  Total runtime: 1988.786 ms
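
The same pre-limit rewrite as above would help here; a sketch (untested
against this schema):

select foo.*,
       (SELECT price FROM raw_annonces r WHERE r.id=foo.raw_annonce_id)
from (select raw_annonce_id, price, rooms, surface, terrain,
             contact_telephones, description
      from annonces
      where price is not null
      order by price desc limit 10) as foo
order by foo.price desc;

This way the correlated subselect runs only 10 times instead of 97021.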

Re: Bad plan on a view

From: "Jim C. Nasby"
Date:
On Wed, Mar 01, 2006 at 04:43:53PM +0100, PFC wrote:
>
> >Aren't you going to a lot of work to reinvent something that TOAST
> >already does for you?  (At least, in the cases where the text fields
> >are wide enough that it really matters.)
>
>     I know. But I have several text fields in the 20 to 200 character
> range, which is too small for TOAST, but large enough to make up about
> 90% of the table size, which makes it problematic RAM-wise, especially
> since it's gonna grow. Now, if I had one big text field, it would be
> TOASTed and I would be happy ;)

Cases like this are why I really wish we had the ability to specify
something other than BLKSZ/4 as the threshold that triggers TOAST. In
many cases the text fields are seldom referred to, so getting them out
of the main heap is a big win.
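
For illustration, a column's storage strategy can already be changed,
but that only affects how a value is stored once the toaster fires, not
the threshold itself, so it would not help with many short fields:

ALTER TABLE annonces ALTER COLUMN description SET STORAGE EXTERNAL;
-- EXTERNAL stores this column out of line, uncompressed, but the
-- toaster still only kicks in when the whole tuple exceeds ~BLKSZ/4
-- (about 2 kB with 8 kB pages); rows made of many short text fields
-- stay in the main heap regardless.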
--
Jim C. Nasby, Sr. Engineering Consultant      jnasby@pervasive.com
Pervasive Software      http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf       cell: 512-569-9461