Thread: Query with "like" is really slow

Query with "like" is really slow

From
Christian Schröder
Date:
Hi list,
please have a look at the following query:

SELECT DISTINCT a FROM table1
INNER JOIN table2 USING (b)
INNER JOIN view1 USING (a)
WHERE c like '1131%'
AND d IS NOT NULL
AND e IS NOT NULL;

Unfortunately, I have not been able to construct a suitable test case,
so I took the query from our production system and replaced all table
and field names, because our table layout is considered confidential.
The tables and views are explained at the end of this mail.

This query is really slow, so I had a look at the query plan:


QUERY PLAN

-------------------------------------------------------------------------------------------------------------------------------------------------------------
 Unique  (cost=14960.17..14960.18 rows=2 width=16) (actual
time=532691.185..532692.714 rows=1625 loops=1)
   ->  Sort  (cost=14960.17..14960.17 rows=2 width=16) (actual
time=532691.183..532691.632 rows=1625 loops=1)
         Sort Key: table1.a
         ->  Nested Loop  (cost=0.00..14960.16 rows=2 width=16) (actual
time=145.972..532673.631 rows=1625 loops=1)
               Join Filter: ((table1.a)::bpchar = (table3.a)::bpchar)
               ->  Nested Loop  (cost=0.00..1673.61 rows=2 width=16)
(actual time=0.094..31.692 rows=1627 loops=1)
                     ->  Seq Scan on table2  (cost=0.00..186.64 rows=2
width=4) (actual time=0.052..2.259 rows=42 loops=1)
                           Filter: (c ~~ '1131%'::text)
                     ->  Index Scan using "table1_b_index" on table1
(cost=0.00..743.39 rows=8 width=20) (actual time=0.047..0.625 rows=39
loops=42)
                           Index Cond: (table1.b = table2.b)
                           Filter: ((d IS NOT NULL) AND (e IS NOT NULL))
               ->  Index Scan using "table3_f" on table3
(cost=0.00..3737.05 rows=232498 width=16) (actual time=0.092..243.643
rows=225893 loops=1627)
                     Index Cond: (f = 'foo'::bpchar)
 Total runtime: 532693.200 ms
(14 rows)

If I replace "where c like '1131%'" with "where substring(c from 1 for
4) = '1131'" (which to me seems to be exactly equivalent) I get a
completely different query plan which is much faster:


QUERY PLAN

-------------------------------------------------------------------------------------------------------------------------------------------------------------
 Unique  (cost=30626.05..30626.11 rows=13 width=16) (actual
time=378.237..379.773 rows=1625 loops=1)
   ->  Sort  (cost=30626.05..30626.08 rows=13 width=16) (actual
time=378.236..378.685 rows=1625 loops=1)
         Sort Key: table1.a
         ->  Hash Join  (cost=9507.43..30625.80 rows=13 width=16)
(actual time=22.189..368.361 rows=1625 loops=1)
               Hash Cond: ((table3.a)::bpchar = (table1.a)::bpchar)
               ->  Seq Scan on table3  (cost=0.00..20246.38 rows=232498
width=16) (actual time=0.012..253.335 rows=225893 loops=1)
                     Filter: (f = 'foo'::bpchar)
               ->  Hash  (cost=9507.22..9507.22 rows=17 width=16)
(actual time=20.921..20.921 rows=1627 loops=1)
                     ->  Nested Loop  (cost=0.00..9507.22 rows=17
width=16) (actual time=0.121..19.837 rows=1627 loops=1)
                           ->  Seq Scan on table2  (cost=0.00..200.89
rows=14 width=4) (actual time=0.084..3.419 rows=42 loops=1)
                                 Filter: ("substring"((c)::text, 1, 4) =
'1131'::text)
                           ->  Index Scan using "table1_b_index" on
table1  (cost=0.00..664.64 rows=8 width=20) (actual time=0.024..0.364
rows=39 loops=42)
                                 Index Cond: (table1.b = table2.b)
                                 Filter: ((d IS NOT NULL) AND (e IS NOT
NULL))
 Total runtime: 380.259 ms
(15 rows)

My question is: Why do I have to optimize my query (use "substring"
instead of "like") instead of having the database do this for me? Or is
there a difference between both queries which I cannot see?
What can I do to get better results when using "like"? Do I have to add
indices?
And last question: I do not really understand the first query plan. The
actual time for the outer nested loop is 532673.631 ms. As far as I have
understood the docs this includes the child nodes. But I cannot find the
time-consuming child node. I only see two child nodes: The inner nested
loop (which took 31.692 ms) and the index scan (which took 243.643 ms).
Or do I have to multiply the 243.643 ms by 1627 (the number of loops)? But
even then I get 396407.161 ms, which is still far from the
532673.631 ms in the parent node.

Thanks for any help!

Regards,
    Christian

Appendix:

The tables and views look like this (I have left out the fields and
constraints that don't participate in the queries):

               Table "table1"
 Column |       Type       | Modifiers
--------+------------------+-------------------------------
 a      | character(12)    | not null
 b      | integer          | not null
 d      | double precision |
 e      | double precision |
 ...    | ...              | ...
Indexes:
    "table1_pkey" PRIMARY KEY, btree (a)
    "table1_b_index" btree (b)

               Table "table2"
 Column |       Type       | Modifiers
--------+------------------+-------------------------------
 b      | integer          | not null
 c      | character(10)    | not null
 ...    | ...              | ...
Indexes:
    "table2_pkey" PRIMARY KEY, btree (b)

               View "view1"
 Column |       Type       | Modifiers
--------+------------------+-------------------------------
 a      | character(12)    |
 g      | double precision | not null
 ...    | ...              | ...
View definition:
 SELECT table3.a, table3.g
 FROM ONLY table3
 WHERE f = 'foo'::bpchar;

               Table "table3"
 Column |       Type       | Modifiers
--------+------------------+-------------------------------
 id     | integer          | not null
 a      | character(12)    | not null
 f      | character(20)    | not null
 g      | double precision |
 ...    | ...              | ...
Indexes:
    "table3_pkey" PRIMARY KEY, btree (id)
    "table3_f" btree (f)



--
Deriva GmbH                         Tel.: +49 551 489500-42
Financial IT and Consulting         Fax:  +49 551 489500-91
Hans-Böckler-Straße 2                  http://www.deriva.de
D-37079 Göttingen

Deriva CA Certificate: http://www.deriva.de/deriva-ca.cer



Re: Query with "like" is really slow

From
Gregory Stark
Date:
Christian Schröder <cs@deriva.de> writes:

...
> ->  Seq Scan on table2  (cost=0.00..186.64 rows=2 width=4) (actual time=0.052..2.259 rows=42 loops=1)
>       Filter: (c ~~ '1131%'::text)
...
> ->  Seq Scan on table2  (cost=0.00..200.89 rows=14 width=4) (actual time=0.084..3.419 rows=42 loops=1)
>       Filter: ("substring"((c)::text, 1, 4) = 1131'::text)
...

> My question is: Why do I have to optimize my query (use "substring" instead
> of "like") instead of having the database do this for me? Or is there a
> difference between both queries which I cannot see?

The only difference is that the optimizer understands LIKE better than it does
substring and so it tries harder to come up with a good estimate of how many
rows will match. In this case it seems its estimate is actually better (by
pure luck) with the substring() call. But it's still not very good.

Have these tables been analyzed recently? If so, try raising the statistics
target on the "c" column. If the estimated row count goes up from 2 to the 14
it's estimating with substring(), then you'll get the better plan. Hopefully it
would be even better than that, though.
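
For reference, raising the per-column target and refreshing the statistics
would look something like this (the target value 100 is only an example, and
the names follow the anonymized schema from the original mail):

-- raise the statistics target for table2.c and re-collect statistics
ALTER TABLE table2 ALTER COLUMN c SET STATISTICS 100;
ANALYZE table2;

-- then re-check the planner's row estimate for the LIKE condition
EXPLAIN SELECT b FROM table2 WHERE c LIKE '1131%';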

> What can I do to get better results when using "like"? Do I have to add
> indices?

You could add an index on "c" (using the varchar_pattern_ops opclass if you're
not in C locale) but I don't know how much it would help. Most of the time is
being spent in the index lookups as you suspected below. So speeding up this
side of the join might not help much. On the other hand it might let the
optimizer do a sequential scan on the other side and use index lookups on this
side instead.
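
A sketch of such an index (since "c" is declared character(10), the matching
opclass would be bpchar_pattern_ops rather than varchar_pattern_ops; the index
name is made up):

-- pattern-ops index so that "c LIKE '1131%'" can use an index scan
-- even in a non-C locale
CREATE INDEX table2_c_like_idx ON table2 (c bpchar_pattern_ops);
ANALYZE table2;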

Note that the optimizer is unable to use an index for the substring() approach,
so I really think you're better off getting the estimates right for LIKE.

> And last question: I do not really understand the first query plan. The actual
> time for the outer nested loop is 532673.631 ms. As far as I have understood
> the docs this includes the child nodes. But I cannot find the time-consuming
> child node. I only see two child nodes: The inner nested loop (which took
> 31.692 ms) and the index scan (which took 243.643 ms). Or do I have to multiply
> the 243.643 ms with 1627 (number of loops)? But even then I get 396407.161 ms,
> which is still far away from the 532673.631 ms in the parent node.

The nested loop itself still has to do some work. Actually, it's quite possible
that the extra overhead in the nested loop is largely gettimeofday() calls for
the explain analyze. Does the query take less time when run without explain
analyze than when run with it?
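
One quick way to compare is to time the bare query in psql against the
instrumented run, e.g.:

\timing

-- bare query: only the total wall-clock time is reported
SELECT DISTINCT a FROM table1
INNER JOIN table2 USING (b)
INNER JOIN view1 USING (a)
WHERE c LIKE '1131%' AND d IS NOT NULL AND e IS NOT NULL;

-- instrumented run: per-node timing adds measurement overhead
EXPLAIN ANALYZE SELECT DISTINCT a FROM table1
INNER JOIN table2 USING (b)
INNER JOIN view1 USING (a)
WHERE c LIKE '1131%' AND d IS NOT NULL AND e IS NOT NULL;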

--
  Gregory Stark
  EnterpriseDB          http://www.enterprisedb.com

Re: Query with "like" is really slow

From
"Carlo Stonebanks"
Date:
<<Actually it's quite possible that
that extra overhead in nested loop is largely gettimeofday() calls for the
explain analyze. Does the query take less time run without explain analyze
than it does run with it?
>>

Do you mean that an EXPLAIN ANALYZE that runs longer than the actual query
is a symptom of time being lost to a function call?


Re: Query with "like" is really slow

From
Alvaro Herrera
Date:
Carlo Stonebanks wrote:
> <<Actually it's quite possible that
> that extra overhead in nested loop is largely gettimeofday() calls for the
> explain analyze. Does the query take less time run without explain analyze
> than it does run with it?
>>>
>
> Do you mean that an EXPLAIN ANALYZE that runs longer than the actual query
> is a symptom of time being lost to a function call?

It means that measuring alters the reality.

--
Alvaro Herrera                         http://www.flickr.com/photos/alvherre/
"I must say, I am absolutely impressed with what pgsql's implementation of
VALUES allows me to do. It's kind of ridiculous how much "work" goes away in
my code.  Too bad I can't do this at work (Oracle 8/9)."       (Tom Allison)
           http://archives.postgresql.org/pgsql-general/2007-06/msg00016.php

Re: Query with "like" is really slow

From
"Carlo Stonebanks"
Date:
>> It means that measuring alters the reality.

Heisenberg would agree!




Re: Query with "like" is really slow

From
"Scott Marlowe"
Date:
On 9/7/07, Carlo Stonebanks <stonec.register@sympatico.ca> wrote:
> >> It means that measuring alters the reality.
>
> Heisenberg would agree!

I'm not entirely sure what Schrödinger would think about it.

Re: Query with "like" is really slow

From
"Andrej Ricnik-Bay"
Date:
On 9/8/07, Scott Marlowe <scott.marlowe@gmail.com> wrote:

> > Heisenberg would agree!
> I'm not entirely sure what Schrödinger would think about it.
He'd kill Heisenberg's cat?

Re: Query with "like" is really slow

From
"Carlo Stonebanks"
Date:
>> I'm not entirely sure what Schrödinger would think about it.

I know what the cat thinks:

"What do you mean 'no observer'? What the #&*! am I??? LET ME OUT!"



Re: Query with "like" is really slow

From
Christian Schröder
Date:
Gregory Stark wrote:
> Christian Schröder <cs@deriva.de> writes:
> ...
>> My question is: Why do I have to optimize my query (use "substring" instead
>> of "like") instead of having the database do this for me? Or is there a
>> difference between both queries which I cannot see?
>
> The only difference is that the optimizer understands LIKE better than it does
> substring and so it tries harder to come up with a good estimate of how many
> rows will match. In this case it seems its estimate is actually better (by
> pure luck) with the substring() call. But it's still not very good.
>
> Have these tables been analyzed recently? If so, try raising the statistics
> target on the "c" column. If the estimated row count goes up from 2 to the 14
> it's estimating with substring(), then you'll get the better plan. Hopefully it
> would be even better than that, though.

Yes, all tables are "vacuum analyzed" twice per day. (I did not have time to
configure the auto-vacuum feature.)

But after increasing the statistics target of the column to 20 and re-analyzing
the table, the query planner chose the better plan and the query got sped up
dramatically. You seem to have found the problem!
I have now increased the default statistics target from 10 to 20 and the
statistics target of this column to 500. We have about 190 distinct values in
this column, so with a statistics target of 500 the statistics should be as
exact as possible. (At least if I have understood correctly what this parameter
means.) Since many of our queries rely on this column, it seems to be a good
idea to have the best statistics for it. I cannot see any disadvantage of this
approach, at least if I do it only for one single column. Or do I overlook
anything?
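
For reference, what the planner has actually collected for this column can be
inspected in the pg_stats view, something like:

-- with ~190 distinct values and a statistics target of 500, n_distinct and
-- most_common_vals should cover essentially every value of "c"
SELECT n_distinct, most_common_vals, most_common_freqs
FROM pg_stats
WHERE tablename = 'table2' AND attname = 'c';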

>> And last question: I do not really understand the first query plan. The actual
>> time for the outer nested loop is 532673.631 ms. As far as I have understood
>> the docs this includes the child nodes. But I cannot find the time-consuming
>> child node. I only see two child nodes: The inner nested loop (which took
>> 31.692 ms) and the index scan (which took 243.643 ms). Or do I have to multiply
>> the 243.643 ms by 1627 (the number of loops)? But even then I get 396407.161 ms,
>> which is still far from the 532673.631 ms in the parent node.
>
> The nested loop itself still has to do some work. Actually, it's quite possible
> that the extra overhead in the nested loop is largely gettimeofday() calls for
> the explain analyze. Does the query take less time when run without explain
> analyze than when run with it?

You seem to be right with your assumption that most of the extra time is spent
in the gettimeofday() calls: without "explain analyze" the query took about
6 minutes, which is close to the roughly 396 seconds I get from multiplying the
number of loops (1627) by the actual time per loop (243.643 ms).

Many thanks for your very helpful explanations!

Regards,
    Christian

--
Deriva GmbH                         Tel.: +49 551 489500-42
Financial IT and Consulting         Fax:  +49 551 489500-91
Hans-Böckler-Straße 2                  http://www.deriva.de
D-37079 Göttingen

Deriva CA Certificate: http://www.deriva.de/deriva-ca.cer