Thread: 512,600ms query becomes 7500ms... but why? Postgres 8.3 query planner quirk?

From:
Bryce Nesbitt
Date:

I've got a very slow query, which I can make faster by doing something
seemingly trivial.
The query has been trouble for years (always slow, sometimes taking hours):

 512,600ms Original, filter on articles.indexed (622 results)
   7,500ms Remove "AND articles.indexed" (726 results, undesirable).
   7,675ms Extra join for "AND articles.indexed" (622 results, same as original).

The server is a Sunfire X4240 running Postgres 8.3 under Debian Lenny, with a
fresh ANALYZE. What I don't understand is why the improvement happens. Is the
second way of doing things actually superior, or is this just a query
planner edge case?


Original (512,600ms)
---------------------
EXPLAIN ANALYZE
SELECT contexts.context_key FROM contexts
JOIN articles ON (articles.context_key=contexts.context_key)
JOIN matview_82034 ON (contexts.context_key=matview_82034.context_key)
WHERE contexts.context_key IN
  (SELECT context_key FROM article_words JOIN words using (word_key) WHERE word = 'insider'
   INTERSECT
   SELECT context_key FROM article_words JOIN words using (word_key) WHERE word = 'trading')
AND contexts.context_key IN
  (SELECT a.context_key FROM virtual_ancestors a
   JOIN bp_categories ON (a.ancestor_key = bp_categories.context_key)
   WHERE lower(bp_categories.category) = 'law')
AND articles.indexed;

Extra join (7,675ms)
---------------------------------------------
EXPLAIN ANALYZE
SELECT contexts.context_key FROM contexts JOIN articles using (context_key)
WHERE contexts.context_key IN
(
  SELECT contexts.context_key FROM contexts
  JOIN matview_82034 ON (contexts.context_key=matview_82034.context_key)
  WHERE contexts.context_key IN
    (SELECT context_key FROM article_words JOIN words using (word_key) WHERE word = 'insider'
     INTERSECT
     SELECT context_key FROM article_words JOIN words using (word_key) WHERE word = 'trading')
  AND contexts.context_key IN
    (SELECT a.context_key FROM virtual_ancestors a
     JOIN bp_categories ON (a.ancestor_key = bp_categories.context_key)
     WHERE lower(bp_categories.category) = 'law')
)
AND articles.indexed;



# select indexed,count(*) from articles group by indexed;
 indexed | count
---------+--------
 t       | 354605
 f       | 513552


QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Hash Join  (cost=131663.39..140005.40 rows=4769 width=4) (actual time=511893.241..512599.348 rows=622 loops=1)
   Hash Cond: (matview_82034.context_key = articles.context_key)
   ->  Seq Scan on matview_82034  (cost=0.00..6596.00 rows=465600 width=4) (actual time=0.019..463.278 rows=438220 loops=1)
   ->  Hash  (cost=131663.38..131663.38 rows=1 width=16) (actual time=511659.671..511659.671 rows=622 loops=1)
         ->  Nested Loop IN Join  (cost=46757.70..131663.38 rows=1 width=16) (actual time=1142.789..511656.211 rows=622 loops=1)
               Join Filter: (a.context_key = articles.context_key)
               ->  Nested Loop  (cost=46757.70..46789.06 rows=2 width=12) (actual time=688.057..839.297 rows=1472 loops=1)
                     ->  Nested Loop  (cost=46757.70..46780.26 rows=2 width=8) (actual time=688.022..799.945 rows=1472 loops=1)
                           ->  Subquery Scan "IN_subquery"  (cost=46757.70..46757.97 rows=5 width=4) (actual time=687.963..743.587 rows=1652 loops=1)
                                 ->  SetOp Intersect  (cost=46757.70..46757.93 rows=5 width=4) (actual time=687.961..738.955 rows=1652 loops=1)
                                       ->  Sort  (cost=46757.70..46757.81 rows=46 width=4) (actual time=687.943..709.972 rows=19527 loops=1)
                                             Sort Key: "*SELECT* 1".context_key
                                             Sort Method:  quicksort  Memory: 1684kB
                                             ->  Append  (cost=0.00..46756.43 rows=46 width=4) (actual time=8.385..657.839 rows=19527 loops=1)
                                                   ->  Subquery Scan "*SELECT* 1"  (cost=0.00..23378.21 rows=23 width=4) (actual time=8.383..215.613 rows=4002 loops=1)
                                                         ->  Nested Loop  (cost=0.00..23377.98 rows=23 width=4) (actual time=8.380..207.499 rows=4002 loops=1)
                                                               ->  Index Scan using words_word on words  (cost=0.00..5.47 rows=1 width=4) (actual time=0.102..0.105 rows=1 loops=1)
                                                                     Index Cond: ((word)::text = 'insider'::text)
                                                               ->  Index Scan using article_words_wc on article_words  (cost=0.00..23219.17 rows=12268 width=8) (actual time=8.272..199.224 rows=4002 loops=1)
                                                                     Index Cond: (public.article_words.word_key = public.words.word_key)
                                                   ->  Subquery Scan "*SELECT* 2"  (cost=0.00..23378.21 rows=23 width=4) (actual time=5.397..404.164 rows=15525 loops=1)
                                                         ->  Nested Loop  (cost=0.00..23377.98 rows=23 width=4) (actual time=5.394..372.883 rows=15525 loops=1)
                                                               ->  Index Scan using words_word on words  (cost=0.00..5.47 rows=1 width=4) (actual time=0.054..0.056 rows=1 loops=1)
                                                                     Index Cond: ((word)::text = 'trading'::text)
                                                               ->  Index Scan using article_words_wc on article_words  (cost=0.00..23219.17 rows=12268 width=8) (actual time=5.331..341.535 rows=15525 loops=1)
                                                                     Index Cond: (public.article_words.word_key = public.words.word_key)
                           ->  Index Scan using article_key_idx on articles  (cost=0.00..4.44 rows=1 width=4) (actual time=0.026..0.029 rows=1 loops=1652)
                                 Index Cond: (articles.context_key = "IN_subquery".context_key)
                                 Filter: articles.indexed
                     ->  Index Scan using contexts_pkey on contexts  (cost=0.00..4.39 rows=1 width=4) (actual time=0.018..0.021 rows=1 loops=1472)
                           Index Cond: (contexts.context_key = articles.context_key)
               ->  Nested Loop  (cost=0.00..111539.51 rows=1261757 width=4) (actual time=0.019..306.679 rows=39189 loops=1472)
                     ->  Seq Scan on bp_categories  (cost=0.00..1315.59 rows=16613 width=4) (actual time=0.008..57.960 rows=14529 loops=1472)
                           Filter: (lower(category) = 'law'::text)
                     ->  Index Scan using virtual_ancestor_key_idx on virtual_ancestors a  (cost=0.00..5.18 rows=116 width=8) (actual time=0.005..0.010 rows=3 loops=21386112)
                           Index Cond: (a.ancestor_key = bp_categories.context_key)
 Total runtime: 512600.354 ms
(37 rows)


From:
Bryce Nesbitt
Date:

Or, if you want to actually read that query plan, try:
http://explain.depesz.com/s/qYq


From:
Pavel Stehule
Date:

2010/2/10 Bryce Nesbitt <>:
> Or, if you want to actually read that query plan, try:
> http://explain.depesz.com/s/qYq
>

hello,

check your work_mem settings. The hash join is very slow in your case.

Pavel

>
> --
> Sent via pgsql-performance mailing list ()
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-performance
>
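
Pavel's suggestion can be tried without a server restart; a minimal sketch (the 64MB value is only an illustrative guess for the experiment, not a recommendation from this thread):

```sql
-- Check the current setting, then raise it for this session only
-- and re-run the slow EXPLAIN ANALYZE to see if the plan changes.
SHOW work_mem;
SET work_mem = '64MB';   -- illustrative value; tune for your workload
-- EXPLAIN ANALYZE <the original query>;
-- To make a change permanent, edit work_mem in postgresql.conf and reload.
```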

From:
Robert Haas
Date:

On Wed, Feb 10, 2010 at 3:29 AM, Bryce Nesbitt <> wrote:
> Or, if you want to actually read that query plan, try:
> http://explain.depesz.com/s/qYq

Much better, though I prefer a text attachment...  anyhow, I think the
root of the problem may be that both of the subquery scans under the
append node are seeing hundreds of times more rows than they're
expecting, which is causing the planner to choose nested loops higher
up that it otherwise might have preferred to implement in some other
way.  I'm not quite sure why, though.

...Robert

From:
"Jorge Montero"
Date:

That sure looks like the source of the problem to me too. I've seen similar behavior in queries not very different from
that. It's hard to guess what the problem is exactly without more knowledge of the data distribution in
article_words, though.

Given the results of ANALYZE, I'd run the deepest subquery and try to get the estimate to match reality,
either by altering statistics targets, or by tweaking the query to give more information to the planner.

For example, I'd check whether the number of expected rows from

SELECT context_key FROM article_words JOIN words using (word_key) WHERE word = 'insider'

is much less accurate than the estimate for

SELECT context_key FROM article_words WHERE word_key = (whatever the actual word_key for insider is)
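To run Jorge's second form you first have to look up the literal word_key; a sketch of the comparison (3675 is the key Bryce later reports for 'insider' elsewhere in this thread):

```sql
-- Find the actual key for the word, then compare the planner's row
-- estimate for the join form vs. the direct-key form.
SELECT word_key FROM words WHERE word = 'insider';

EXPLAIN SELECT context_key FROM article_words
        JOIN words USING (word_key) WHERE word = 'insider';

EXPLAIN SELECT context_key FROM article_words
        WHERE word_key = 3675;  -- substitute the key returned above
```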


>>> Robert Haas <> 02/10/10 2:31 PM >>>
> [Robert's reply, quoted in full above]


From:
Bryce Nesbitt
Date:

If you guys succeed in making this class of query perform, you'll have beaten out the professional consulting firm we hired, which was all but useless!  The query is usually slow, but particular combinations of words seem to make it obscenely slow.

The query plans are now attached (sorry I did not start there: many lists reject attachments). Or you can click on "text" at the query planner analysis site http://explain.depesz.com/s/qYq


Here's typical server load:
Tasks: 166 total,   1 running, 165 sleeping,   0 stopped,   0 zombie
Cpu(s):  1.2%us,  0.9%sy,  0.0%ni, 86.5%id, 11.2%wa,  0.0%hi,  0.1%si,  0.0%st
Mem:  32966936k total, 32873860k used,    93076k free,     2080k buffers
Swap: 33554424k total,      472k used, 33553952k free, 30572904k cached



Configurations modified from Postgres 8.3 default are:

listen_addresses = '10.100.2.11, 10.101.2.11'   # what IP address(es) to listen on;
port = 5432                             # (change requires restart)
max_connections = 400                   # (change requires restart)
shared_buffers = 4096MB                 # min 128kB or max_connections*16kB
work_mem = 16MB                         # min 64kB
max_fsm_pages =  500000                 # default:20000 min:max_fsm_relations*16,6 bytes see:MAIN-5740
max_fsm_relations = 2700                # min 100, ~70 bytes each
checkpoint_segments = 20                # in logfile segments, min 1, 16MB each
random_page_cost = 2.0                  # same scale as above
effective_cache_size = 28672MB
default_statistics_target = 150         # range 1-1000
log_destination = 'syslog'              # Valid values are combinations of
log_min_error_statement = error # values in order of decreasing detail:
log_min_duration_statement = 5000       # -1 is disabled, 0 logs all statements
log_checkpoints = on                    # default off
autovacuum_naptime = 5min               # time between autovacuum runs
escape_string_warning = off             # default:on  (See bepress MAIN-4857)
standard_conforming_strings = off       # default:off (See bepress MAIN-4857)



production=# EXPLAIN ANALYZE SELECT context_key FROM article_words
JOIN words using (word_key) WHERE word = 'insider';
-------------------------------------------------------------------------------------------------------------------------------------------------
 Nested Loop  (cost=0.00..23393.15 rows=23 width=4) (actual time=0.077..15.637 rows=4003 loops=1)
   ->  Index Scan using words_word on words  (cost=0.00..5.47 rows=1 width=4) (actual time=0.049..0.051 rows=1 loops=1)
         Index Cond: ((word)::text = 'insider'::text)
   ->  Index Scan using article_words_wc on article_words  (cost=0.00..23234.38 rows=12264 width=8) (actual time=0.020..7.237 rows=4003 loops=1)
         Index Cond: (article_words.word_key = words.word_key)
 Total runtime: 19.776 ms

production=# EXPLAIN ANALYZE SELECT context_key FROM article_words
WHERE word_key = 3675;
-------------------------------------------------------------------------------------------------------------------------------------------
 Index Scan using article_words_wc on article_words  (cost=0.00..21433.53 rows=11309 width=4) (actual time=0.025..7.579 rows=4003 loops=1)
   Index Cond: (word_key = 3675)
 Total runtime: 11.704 ms



production=# explain analyze select count(*) from words;
Aggregate  (cost=263831.63..263831.64 rows=1 width=0) (actual time=35851.654..35851.655 rows=1 loops=1)
   ->  Seq Scan on words  (cost=0.00..229311.30 rows=13808130 width=0) (actual time=0.043..21281.124 rows=13808184 loops=1)
 Total runtime: 35851.723 ms

production=# select count(*) from words;
13,808,184


production=# explain analyze select count(*) from article_words;
Aggregate  (cost=5453242.40..5453242.41 rows=1 width=0) (actual time=776504.017..776504.018 rows=1 loops=1)
   ->  Seq Scan on article_words  (cost=0.00..4653453.52 rows=319915552 width=0) (actual time=0.034..438969.347 rows=319956663 loops=1)
 Total runtime: 776504.177 ms

production=# select count(*) from article_words;
319,956,720

From:
Tom Lane
Date:

Bryce Nesbitt <> writes:
> The query plans are now attached (sorry I did not start there: many
> lists reject attachments). Or you can click on "text" at the query
> planner analysis site http://explain.depesz.com/s/qYq

At least some of the problem is the terrible quality of the rowcount
estimates in the IN subquery, as you extracted here:

>  Nested Loop  (cost=0.00..23393.15 rows=23 width=4) (actual time=0.077..15.637 rows=4003 loops=1)
>    ->  Index Scan using words_word on words  (cost=0.00..5.47 rows=1 width=4) (actual time=0.049..0.051 rows=1 loops=1)
>          Index Cond: ((word)::text = 'insider'::text)
>    ->  Index Scan using article_words_wc on article_words  (cost=0.00..23234.38 rows=12264 width=8) (actual time=0.020..7.237 rows=4003 loops=1)
>          Index Cond: (article_words.word_key = words.word_key)
>  Total runtime: 19.776 ms

Given that it estimated 1 row out of "words" (quite correctly) and 12264
rows out of each scan on article_words, you'd think that the join size
estimate would be 12264, which would be off by "only" a factor of 3 from
the true result.  Instead it's 23, off by a factor of 200 :-(.

Running a roughly similar test case here, I see that 8.4 gives
significantly saner estimates, which I think is because of this patch:
http://archives.postgresql.org/pgsql-committers/2008-10/msg00191.php

At the time I didn't want to risk back-patching it, because there
were a lot of other changes in the same general area in 8.4.  But
it would be interesting to see what happens with your example if
you patch 8.3 similarly.  (Note: I think only the first diff hunk
is relevant to 8.3.)

            regards, tom lane

From:
Robert Haas
Date:

On Wed, Feb 10, 2010 at 8:52 PM, Bryce Nesbitt <> wrote:
> If you guys succeed in making this class of query perform, you'll have beat
> out the professional consulting firm we hired, which was all but useless!
> The query is usually slow, but particular combinations of words seem to make
> it obscenely slow.

Heh heh heh professional consulting firm.

> production=# EXPLAIN ANALYZE SELECT context_key FROM article_words
> WHERE word_key = 3675;
>
-------------------------------------------------------------------------------------------------------------------------------------------
>  Index Scan using article_words_wc on article_words  (cost=0.00..21433.53
> rows=11309 width=4) (actual time=0.025..7.579 rows=4003 loops=1)
>    Index Cond: (word_key = 3675)
>  Total runtime: 11.704 ms

That's surprisingly inaccurate.  Since this table is large:

> production=# explain analyze select count(*) from article_words;
> Aggregate  (cost=263831.63..263831.64 rows=1 width=0) (actual
> time=35851.654..35851.655 rows=1 loops=1)
>    ->  Seq Scan on words  (cost=0.00..229311.30 rows=13808130 width=0)
> (actual time=0.043..21281.124 rows=13808184 loops=1)
>  Total runtime: 35851.723 ms

...you may need to crank up the statistics target.  I would probably
try cranking it all the way up to the max, though there is a risk that
might backfire, in which case you'll need to decrease it again.

ALTER TABLE article_words ALTER COLUMN word_key SET STATISTICS 1000;

That's probably not going to fix your whole problem, but it should be
interesting to see whether it makes things better or worse and by how
much.
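
One detail worth noting about this suggestion: the new target takes effect only after the table is re-analyzed. A minimal sketch:

```sql
-- Raise the per-column statistics target, then re-gather statistics;
-- the planner sees the new histogram only after the ANALYZE.
ALTER TABLE article_words ALTER COLUMN word_key SET STATISTICS 1000;
ANALYZE article_words;
-- To revert: SET STATISTICS -1 restores the default target, then ANALYZE again.
```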

...Robert

From:
"Kevin Grittner"
Date:

Bryce Nesbitt <> wrote:

> I've got a very slow query, which I can make faster by doing
> something seemingly trivial.

Out of curiosity, what kind of performance do you get with?:

EXPLAIN ANALYZE
SELECT contexts.context_key
  FROM contexts
  JOIN articles ON (articles.context_key = contexts.context_key)
  JOIN matview_82034 ON (matview_82034.context_key =
                         contexts.context_key)
  WHERE EXISTS
        (
          SELECT *
            FROM article_words
            JOIN words using (word_key)
            WHERE context_key = contexts.context_key
              AND word = 'insider'
        )
    AND EXISTS
        (
          SELECT *
            FROM article_words
            JOIN words using (word_key)
            WHERE context_key = contexts.context_key
              AND word = 'trading'
        )
    AND EXISTS
        (
          SELECT *
            FROM virtual_ancestors a
            JOIN bp_categories ON (bp_categories.context_key =
                                   a.ancestor_key)
            WHERE a.context_key = contexts.context_key
              AND lower(bp_categories.category) = 'law'
        )
    AND articles.indexed
;

(You may have to add some table aliases in the subqueries.)

If you are able to make a copy on 8.4 and test the various forms,
that would also be interesting.  I suspect that the above might do
pretty well in 8.4.

-Kevin

From:
Karl Denninger
Date:

"Exists" can be quite slow.  So can "not exists"

See if you can re-write it using a sub-select - just replace the "exists ...." with "(select ...) is not null"

Surprisingly this often results in a MUCH better query plan under Postgresql.  Why the planner evaluates it "better" eludes me (it shouldn't) but the differences are often STRIKING - I've seen factor-of-10 differences in execution performance.
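
The rewrite Karl describes, sketched using the post/forumlog tables from his later example (column names assumed; note the scalar subquery must return at most one row, e.g. via a unique index, or it errors at runtime):

```sql
-- EXISTS form:
SELECT p.* FROM post p
WHERE EXISTS (SELECT 1 FROM forumlog f
              WHERE f.login = 'xxx' AND f.forum = p.forum);

-- Scalar-subquery form Karl describes:
SELECT p.* FROM post p
WHERE (SELECT f.login FROM forumlog f
       WHERE f.login = 'xxx' AND f.forum = p.forum) IS NOT NULL;
```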


Kevin Grittner wrote:
> Out of curiosity, what kind of performance do you get with?:
> [the EXISTS-based query, quoted in full above]
> If you are able to make a copy on 8.4 and test the various forms,
> that would also be interesting.  I suspect that the above might do
> pretty well in 8.4.
> -Kevin
From:
"Kevin Grittner"
Date:

Karl Denninger <> wrote:
Kevin Grittner wrote:

>> I suspect that the above might do pretty well in 8.4.

> "Exists" can be quite slow.  So can "not exists"
>
> See if you can re-write it using a sub-select - just replace the
> "exists ...." with "(select ...) is not null"
>
> Surprisingly this often results in a MUCH better query plan under
> Postgresql.  Why the planner evaluates it "better" eludes me (it
> shouldn't) but the differences are often STRIKING - I've seen
> factor-of-10 differences in execution performance.

Have you seen such a difference under 8.4?  Can you provide a
self-contained example?

-Kevin

From:
Karl Denninger
Date:

Yes:

select forum, * from post where
    marked is not true
    and toppost = 1
    and (select login from ignore_thread where login='xxx' and number=post.number) is null
    and (replied > now() - '30 days'::interval)
    and (replied > (select lastview from forumlog where login='xxx' and forum=post.forum and number is null)) is not false
    and (replied > (select lastview from forumlog where login='xxx' and forum=post.forum and number=post.number)) is not false
    and ((post.forum = (select who from excludenew where who='xxx' and forum_name = post.forum)) or (select who from excludenew where who='xxx' and forum_name = post.forum) is null)
   order by pinned desc, replied desc offset 0 limit 100

Returns the following query plan:
                                                              QUERY PLAN                                                                                                                           
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=5301575.63..5301575.63 rows=1 width=433) (actual time=771.000..771.455 rows=100 loops=1)
   ->  Sort  (cost=5301575.63..5301575.63 rows=1 width=433) (actual time=770.996..771.141 rows=100 loops=1)
         Sort Key: post.pinned, post.replied
         Sort Method:  top-N heapsort  Memory: 120kB
         ->  Index Scan using post_top on post  (cost=0.00..5301575.62 rows=1 width=433) (actual time=0.664..752.994 rows=3905 loops=1)
               Index Cond: (toppost = 1)
               Filter: ((marked IS NOT TRUE) AND (replied > (now() - '30 days'::interval)) AND ((SubPlan 1) IS NULL) AND ((replied > (SubPlan 2)) IS NOT FALSE) AND ((replied > (SubPlan 3)) IS NOT FALSE) AND ((forum = (SubPlan 4)) OR ((SubPlan 5) IS NULL)))
               SubPlan 1
                 ->  Seq Scan on ignore_thread  (cost=0.00..5.45 rows=1 width=7) (actual time=0.037..0.037 rows=0 loops=3905)
                       Filter: ((login = 'xxx'::text) AND (number = $0))
               SubPlan 2
                 ->  Index Scan using forumlog_composite on forumlog  (cost=0.00..9.50 rows=1 width=8) (actual time=0.008..0.008 rows=0 loops=3905)
                       Index Cond: ((login = 'xxx'::text) AND (forum = $1) AND (number IS NULL))
               SubPlan 3
                 ->  Index Scan using forumlog_composite on forumlog  (cost=0.00..9.50 rows=1 width=8) (actual time=0.006..0.006 rows=0 loops=3905)
                       Index Cond: ((login = 'xxx'::text) AND (forum = $1) AND (number = $0))
               SubPlan 4
                 ->  Index Scan using excludenew_pkey on excludenew  (cost=0.00..8.27 rows=1 width=9) (actual time=0.004..0.004 rows=0 loops=3905)
                       Index Cond: ((who = 'xxx'::text) AND (forum_name = $1))
               SubPlan 5
                 ->  Index Scan using excludenew_pkey on excludenew  (cost=0.00..8.27 rows=1 width=9) (actual time=0.004..0.004 rows=0 loops=3905)
                       Index Cond: ((who = 'xxx'::text) AND (forum_name = $1))
 Total runtime: 771.907 ms
(23 rows)

The alternative:

select forum, * from post where
    marked is not true
    and toppost = 1
    and not exists (select login from ignore_thread where login='xxx' and number=post.number)
    and (replied > now() - '30 days'::interval)
    and (replied > (select lastview from forumlog where login='xxx' and forum=post.forum and number is null)) is not false
    and (replied > (select lastview from forumlog where login='xxx' and forum=post.forum and number=post.number)) is not false
    and ((post.forum = (select who from excludenew where who='xxx' and forum_name = post.forum)) or (select who from excludenew where who='xxx' and forum_name = post.forum) is null)
   order by pinned desc, replied desc offset 0 limit 100

goes nuts.

(Yes, I know, most of those others which are "not false" could be "Exists" too)

EXPLAIN ANALYZE on the alternative CLAIMS the same query time (within a few milliseconds).  But if I replace the executing code with one that has the alternative ("not exists") syntax in it, the system load goes to crap instantly and the execution times "in the wild" go bananas.

I don't know why it does - I just know THAT it does.  When I first added that top clause in there (to allow people an individual "ignore thread" list) the system load went bananas immediately and forced me to back it out.   When I re-wrote the query as above the performance was (and remains) fine.

I'm running 8.4.2.

I agree (in advance) it shouldn't trash performance - all I know is that it does and forced me to re-write the query.


Kevin Grittner wrote:
> Karl Denninger <> wrote:
> > [Karl's "exists" advice, quoted above]
> Have you seen such a difference under 8.4?  Can you provide a
> self-contained example?
> -Kevin

From:
"Kevin Grittner"
Date:

Karl Denninger <> wrote:
> Kevin Grittner wrote:

>> Have you seen such a difference under 8.4?  Can you provide a
>> self-contained example?

> Yes:
>
> [query and EXPLAIN ANALYZE of fast query]

> The alternative:
>
> [query with no other information]
>
> goes nuts.

Which means what?  Could you post an EXPLAIN ANALYZE, or at least an
EXPLAIN, of the slow version?  Can you post the table structure,
including indexes?

-Kevin

From:
Tom Lane
Date:

Karl Denninger <> writes:
> Explain Analyze on the alternative CLAIMS the same query planner time
> (within a few milliseconds) with explain analyze.  But if I replace the
> executing code with one that has the alternative ("not exists") syntax
> in it, the system load goes to crap instantly and the execution times
> "in the wild" go bananas.

Could we see the actual explain analyze output, and not some handwaving?

What I would expect 8.4 to do with the NOT EXISTS version is to convert
it to an antijoin --- probably a hash antijoin given that the subtable
is apparently small.  That should be a significant win compared to
repeated seqscans as you have now.  The only way I could see for it to
be a loss is that that join would probably be performed after the other
subplan tests instead of before.  However, the rowcounts for your
original query suggest that all the subplans get executed the same
number of times; so at least on the test values you used here, all
those conditions succeed.  Maybe your test values were not
representative of "in the wild" cases, and in the real usage it's
important to make this test before the others.

If that's what it is, you might see what happens when all of the
sub-selects are converted to exists/not exists style, instead of
having a mishmash...

            regards, tom lane

From:
Bryce Nesbitt
Date:



Kevin Grittner wrote:
> Out of curiosity, what kind of performance do you get with?:
> [the EXISTS-based query, quoted in full above]
The 512,600ms query becomes 225,976ms.  Twice as fast, but
definitely not beating the 7,500ms version.
PostgreSQL 8.3.4




From:
Bryce Nesbitt
Date:

Tom Lane wrote:
> Given that it estimated 1 row out of "words" (quite correctly) and 12264
> rows out of each scan on article_words, you'd think that the join size
> estimate would be 12264, which would be off by "only" a factor of 3 from
> the true result.  Instead it's 23, off by a factor of 200 :-(.
>
Has anyone ever proposed a "learning" query planner?  One that
eventually clues in to estimate mismatches like this?

From:
Bryce Nesbitt
Date:

So as the op, back to the original posting....

In the real world, what should I do?  Does it make sense to pull the "AND articles.indexed" clause into an outer query?  Will that query simply perform poorly on other arbitrary combinations of words?


I'm happy to test any given query against the same set of servers.  (If it involves a persistent change,
it has to run on a test server.)  For example, the Robert Haas method:
# ...
Total runtime: 254207.857 ms

# ALTER TABLE article_words ALTER COLUMN word_key SET STATISTICS 1000;
# ANALYZE VERBOSE article_words
INFO:  analyzing "public.article_words"
INFO:  "article_words": scanned 300000 of 1342374 pages, containing 64534899 live rows and 3264839 dead rows; 300000 rows in sample, 288766568 estimated total rows
ANALYZE
# ...
Total runtime: 200591.751 ms

# ALTER TABLE article_words ALTER COLUMN word_key SET STATISTICS 50;
# ANALYZE VERBOSE article_words
# ...
Total runtime: 201204.972 ms


Sadly, it made essentially zero difference.  Attached.


From:
Bryce Nesbitt
Date:

So how about the removal of the "AND articles.indexed" clause? On a test server, this
drops the query from 201,204 ms to 438 ms.
Is this just random, or is it a real solution that might apply to any
arbitrary combination of words?

Attached are three test runs:
Total runtime: 201204.972 ms
Total runtime: 437.766 ms
Total runtime: 341.727 ms

preproduction-20091214=# ALTER TABLE article_words ALTER COLUMN word_key SET STATISTICS 50;
ALTER TABLE
preproduction-20091214=# ANALYZE VERBOSE article_words;
INFO:  analyzing "public.article_words"
INFO:  "article_words": scanned 75000 of 1342374 pages, containing 16124750 live rows and 825250 dead rows; 75000 rows
insample, 288605935 estimated total rows 
ANALYZE


preproduction-20091214=# EXPLAIN ANALYZE
SELECT contexts.context_key FROM contexts
JOIN articles ON (articles.context_key=contexts.context_key)
JOIN matview_82034 ON (contexts.context_key=matview_82034.context_key)
WHERE contexts.context_key IN
    (SELECT context_key FROM article_words JOIN words using (word_key) WHERE word = 'insider'
     INTERSECT
     SELECT context_key FROM article_words JOIN words using (word_key) WHERE word = 'trading')
AND contexts.context_key IN
         (SELECT a.context_key FROM virtual_ancestors a JOIN bp_categories ON (a.ancestor_key =
bp_categories.context_key)
         WHERE lower(bp_categories.category) = 'law') AND articles.indexed;
                                                                                              QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Nested Loop  (cost=12665.41..61953.71 rows=2 width=4) (actual time=140.799..201204.025 rows=546 loops=1)
   ->  Nested Loop IN Join  (cost=12665.41..61951.82 rows=1 width=16) (actual time=140.786..201196.526 rows=546 loops=1)
         Join Filter: (a.context_key = articles.context_key)
         ->  Nested Loop  (cost=12665.41..12678.98 rows=2 width=12) (actual time=55.453..116.790 rows=1306 loops=1)
               ->  Nested Loop  (cost=12665.41..12675.18 rows=2 width=8) (actual time=55.439..98.132 rows=1306 loops=1)
                     ->  Subquery Scan "IN_subquery"  (cost=12665.41..12665.68 rows=5 width=4) (actual time=55.415..76.025 rows=1473 loops=1)
                           ->  SetOp Intersect  (cost=12665.41..12665.64 rows=5 width=4) (actual time=55.413..73.885 rows=1473 loops=1)
                                 ->  Sort  (cost=12665.41..12665.52 rows=46 width=4) (actual time=55.406..62.222 rows=17892 loops=1)
                                       Sort Key: "*SELECT* 1".context_key
                                       Sort Method:  quicksort  Memory: 1607kB
                                       ->  Append  (cost=0.00..12664.14 rows=46 width=4) (actual time=0.060..42.065 rows=17892 loops=1)
                                             ->  Subquery Scan "*SELECT* 1"  (cost=0.00..6332.07 rows=23 width=4) (actual time=0.059..6.962 rows=3583 loops=1)
                                                   ->  Nested Loop  (cost=0.00..6331.84 rows=23 width=4) (actual time=0.058..5.148 rows=3583 loops=1)
                                                         ->  Index Scan using words_word on words  (cost=0.00..2.22 rows=1 width=4) (actual time=0.018..0.019 rows=1 loops=1)
                                                               Index Cond: ((word)::text = 'insider'::text)
                                                         ->  Index Scan using article_words_wc on article_words  (cost=0.00..6227.18 rows=8195 width=8) (actual time=0.036..3.275 rows=3583 loops=1)
                                                               Index Cond: (public.article_words.word_key = public.words.word_key)
                                             ->  Subquery Scan "*SELECT* 2"  (cost=0.00..6332.07 rows=23 width=4) (actual time=0.037..27.136 rows=14309 loops=1)
                                                   ->  Nested Loop  (cost=0.00..6331.84 rows=23 width=4) (actual time=0.035..19.912 rows=14309 loops=1)
                                                         ->  Index Scan using words_word on words  (cost=0.00..2.22 rows=1 width=4) (actual time=0.015..0.017 rows=1 loops=1)
                                                               Index Cond: ((word)::text = 'trading'::text)
                                                         ->  Index Scan using article_words_wc on article_words  (cost=0.00..6227.18 rows=8195 width=8) (actual time=0.018..12.464 rows=14309 loops=1)
                                                               Index Cond: (public.article_words.word_key = public.words.word_key)
                     ->  Index Scan using article_key_idx on articles  (cost=0.00..1.89 rows=1 width=4) (actual time=0.011..0.012 rows=1 loops=1473)
                           Index Cond: (articles.context_key = "IN_subquery".context_key)
                           Filter: articles.indexed
               ->  Index Scan using contexts_pkey on contexts  (cost=0.00..1.89 rows=1 width=4) (actual time=0.010..0.011 rows=1 loops=1306)
                     Index Cond: (contexts.context_key = articles.context_key)
         ->  Nested Loop  (cost=0.00..59848.83 rows=1124834 width=4) (actual time=0.012..142.771 rows=39193 loops=1306)
               ->  Seq Scan on bp_categories  (cost=0.00..1231.49 rows=16669 width=4) (actual time=0.006..19.719 rows=14552 loops=1306)
                     Filter: (lower(category) = 'law'::text)
               ->  Index Scan using virtual_ancestor_key_idx on virtual_ancestors a  (cost=0.00..2.57 rows=76 width=8) (actual time=0.004..0.006 rows=3 loops=19004304)
                     Index Cond: (a.ancestor_key = bp_categories.context_key)
   ->  Index Scan using matview_82034_ck on matview_82034  (cost=0.00..1.88 rows=1 width=4) (actual time=0.010..0.011 rows=1 loops=546)
         Index Cond: (matview_82034.context_key = articles.context_key)
 Total runtime: 201204.972 ms
(36 rows)





preproduction-20091214=# EXPLAIN ANALYZE
SELECT contexts.context_key FROM contexts
JOIN articles ON (articles.context_key=contexts.context_key)
JOIN matview_82034 ON (contexts.context_key=matview_82034.context_key)
WHERE contexts.context_key IN
    (SELECT context_key FROM article_words JOIN words using (word_key) WHERE word = 'insider'
     INTERSECT
     SELECT context_key FROM article_words JOIN words using (word_key) WHERE word = 'trading')
AND contexts.context_key IN
         (SELECT a.context_key FROM virtual_ancestors a JOIN bp_categories ON (a.ancestor_key =
bp_categories.context_key)
         WHERE lower(bp_categories.category) = 'law');

                                                                                              QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Nested Loop  (cost=90968.66..95382.31 rows=5 width=4) (actual time=284.789..437.241 rows=648 loops=1)
   ->  Nested Loop  (cost=90968.66..95378.53 rows=2 width=16) (actual time=284.778..338.030 rows=649 loops=1)
         ->  Hash IN Join  (cost=90968.66..95374.73 rows=2 width=12) (actual time=284.764..333.145 rows=649 loops=1)
               Hash Cond: (contexts.context_key = a.context_key)
               ->  Nested Loop  (cost=12665.41..12675.20 rows=5 width=8) (actual time=55.557..81.206 rows=1473 loops=1)
                     ->  Subquery Scan "IN_subquery"  (cost=12665.41..12665.68 rows=5 width=4) (actual time=55.537..69.892 rows=1473 loops=1)
                           ->  SetOp Intersect  (cost=12665.41..12665.64 rows=5 width=4) (actual time=55.535..69.075 rows=1473 loops=1)
                                 ->  Sort  (cost=12665.41..12665.52 rows=46 width=4) (actual time=55.528..60.439 rows=17892 loops=1)
                                       Sort Key: "*SELECT* 1".context_key
                                       Sort Method:  quicksort  Memory: 1607kB
                                       ->  Append  (cost=0.00..12664.14 rows=46 width=4) (actual time=0.040..42.014 rows=17892 loops=1)
                                             ->  Subquery Scan "*SELECT* 1"  (cost=0.00..6332.07 rows=23 width=4) (actual time=0.038..6.953 rows=3583 loops=1)
                                                   ->  Nested Loop  (cost=0.00..6331.84 rows=23 width=4) (actual time=0.038..5.129 rows=3583 loops=1)
                                                         ->  Index Scan using words_word on words  (cost=0.00..2.22 rows=1 width=4) (actual time=0.019..0.020 rows=1 loops=1)
                                                               Index Cond: ((word)::text = 'insider'::text)
                                                         ->  Index Scan using article_words_wc on article_words  (cost=0.00..6227.18 rows=8195 width=8) (actual time=0.015..3.292 rows=3583 loops=1)
                                                               Index Cond: (public.article_words.word_key = public.words.word_key)
                                             ->  Subquery Scan "*SELECT* 2"  (cost=0.00..6332.07 rows=23 width=4) (actual time=0.037..27.131 rows=14309 loops=1)
                                                   ->  Nested Loop  (cost=0.00..6331.84 rows=23 width=4) (actual time=0.036..19.717 rows=14309 loops=1)
                                                         ->  Index Scan using words_word on words  (cost=0.00..2.22 rows=1 width=4) (actual time=0.016..0.018 rows=1 loops=1)
                                                               Index Cond: ((word)::text = 'trading'::text)
                                                         ->  Index Scan using article_words_wc on article_words  (cost=0.00..6227.18 rows=8195 width=8) (actual time=0.017..12.360 rows=14309 loops=1)
                                                               Index Cond: (public.article_words.word_key = public.words.word_key)
                     ->  Index Scan using contexts_pkey on contexts  (cost=0.00..1.89 rows=1 width=4) (actual time=0.006..0.006 rows=1 loops=1473)
                           Index Cond: (contexts.context_key = "IN_subquery".context_key)
               ->  Hash  (cost=59848.83..59848.83 rows=1124834 width=4) (actual time=220.171..220.171 rows=48207 loops=1)
                     ->  Nested Loop  (cost=0.00..59848.83 rows=1124834 width=4) (actual time=0.029..175.278 rows=48207 loops=1)
                           ->  Seq Scan on bp_categories  (cost=0.00..1231.49 rows=16669 width=4) (actual time=0.015..23.290 rows=16669 loops=1)
                                 Filter: (lower(category) = 'law'::text)
                           ->  Index Scan using virtual_ancestor_key_idx on virtual_ancestors a  (cost=0.00..2.57 rows=76 width=8) (actual time=0.004..0.007 rows=3 loops=16669)
                                 Index Cond: (a.ancestor_key = bp_categories.context_key)
         ->  Index Scan using article_key_idx on articles  (cost=0.00..1.89 rows=1 width=4) (actual time=0.006..0.006 rows=1 loops=649)
               Index Cond: (articles.context_key = contexts.context_key)
   ->  Index Scan using matview_82034_ck on matview_82034  (cost=0.00..1.88 rows=1 width=4) (actual time=0.151..0.152 rows=1 loops=649)
         Index Cond: (matview_82034.context_key = articles.context_key)
 Total runtime: 437.766 ms
(36 rows)


preproduction-20091214=# EXPLAIN ANALYZE
SELECT contexts.context_key FROM articles JOIN contexts using (context_key)
WHERE articles.context_key IN
(
SELECT contexts.context_key FROM contexts
JOIN matview_82034 ON (contexts.context_key=matview_82034.context_key)
WHERE contexts.context_key IN
    (SELECT context_key FROM article_words JOIN words using (word_key) WHERE word = 'insider'
     INTERSECT
     SELECT context_key FROM article_words JOIN words using (word_key) WHERE word = 'trading')
AND contexts.context_key IN
    (SELECT a.context_key FROM virtual_ancestors a JOIN bp_categories ON (a.ancestor_key = bp_categories.context_key)
     WHERE lower(bp_categories.category) = 'law')
)
AND articles.indexed;
                                                                                                    QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Nested Loop  (cost=95378.52..95391.87 rows=2 width=4) (actual time=332.545..341.194 rows=546 loops=1)
   ->  Nested Loop  (cost=95378.52..95388.07 rows=2 width=20) (actual time=332.531..337.526 rows=546 loops=1)
         ->  HashAggregate  (cost=95378.52..95378.57 rows=5 width=16) (actual time=332.514..332.830 rows=648 loops=1)
               ->  Nested Loop  (cost=90968.66..95378.51 rows=5 width=16) (actual time=285.869..331.943 rows=648 loops=1)
                     ->  Hash IN Join  (cost=90968.66..95374.73 rows=2 width=12) (actual time=285.858..327.625 rows=649 loops=1)
                           Hash Cond: (public.contexts.context_key = a.context_key)
                           ->  Nested Loop  (cost=12665.41..12675.20 rows=5 width=8) (actual time=55.625..79.734 rows=1473 loops=1)
                                 ->  Subquery Scan "IN_subquery"  (cost=12665.41..12665.68 rows=5 width=4) (actual time=55.602..69.942 rows=1473 loops=1)
                                       ->  SetOp Intersect  (cost=12665.41..12665.64 rows=5 width=4) (actual time=55.601..69.115 rows=1473 loops=1)
                                             ->  Sort  (cost=12665.41..12665.52 rows=46 width=4) (actual time=55.593..60.422 rows=17892 loops=1)
                                                   Sort Key: "*SELECT* 1".context_key
                                                   Sort Method:  quicksort  Memory: 1607kB
                                                   ->  Append  (cost=0.00..12664.14 rows=46 width=4) (actual time=0.039..41.913 rows=17892 loops=1)
                                                         ->  Subquery Scan "*SELECT* 1"  (cost=0.00..6332.07 rows=23 width=4) (actual time=0.039..6.911 rows=3583 loops=1)
                                                               ->  Nested Loop  (cost=0.00..6331.84 rows=23 width=4) (actual time=0.037..5.043 rows=3583 loops=1)
                                                                     ->  Index Scan using words_word on words  (cost=0.00..2.22 rows=1 width=4) (actual time=0.018..0.019 rows=1 loops=1)
                                                                           Index Cond: ((word)::text = 'insider'::text)
                                                                     ->  Index Scan using article_words_wc on article_words  (cost=0.00..6227.18 rows=8195 width=8) (actual time=0.015..3.202 rows=3583 loops=1)
                                                                           Index Cond: (public.article_words.word_key = public.words.word_key)
                                                         ->  Subquery Scan "*SELECT* 2"  (cost=0.00..6332.07 rows=23 width=4) (actual time=0.035..27.132 rows=14309 loops=1)
                                                               ->  Nested Loop  (cost=0.00..6331.84 rows=23 width=4) (actual time=0.034..20.008 rows=14309 loops=1)
                                                                     ->  Index Scan using words_word on words  (cost=0.00..2.22 rows=1 width=4) (actual time=0.014..0.015 rows=1 loops=1)
                                                                           Index Cond: ((word)::text = 'trading'::text)
                                                                     ->  Index Scan using article_words_wc on article_words  (cost=0.00..6227.18 rows=8195 width=8) (actual time=0.018..12.502 rows=14309 loops=1)
                                                                           Index Cond: (public.article_words.word_key = public.words.word_key)
                                 ->  Index Scan using contexts_pkey on contexts  (cost=0.00..1.89 rows=1 width=4) (actual time=0.005..0.005 rows=1 loops=1473)
                                       Index Cond: (public.contexts.context_key = "IN_subquery".context_key)
                           ->  Hash  (cost=59848.83..59848.83 rows=1124834 width=4) (actual time=221.465..221.465 rows=48207 loops=1)
                                 ->  Nested Loop  (cost=0.00..59848.83 rows=1124834 width=4) (actual time=0.030..174.941 rows=48207 loops=1)
                                       ->  Seq Scan on bp_categories  (cost=0.00..1231.49 rows=16669 width=4) (actual time=0.015..23.377 rows=16669 loops=1)
                                             Filter: (lower(category) = 'law'::text)
                                       ->  Index Scan using virtual_ancestor_key_idx on virtual_ancestors a  (cost=0.00..2.57 rows=76 width=8) (actual time=0.004..0.007 rows=3 loops=16669)
                                             Index Cond: (a.ancestor_key = bp_categories.context_key)
                     ->  Index Scan using matview_82034_ck on matview_82034  (cost=0.00..1.88 rows=1 width=4) (actual time=0.005..0.006 rows=1 loops=649)
                           Index Cond: (matview_82034.context_key = public.contexts.context_key)
         ->  Index Scan using article_key_idx on articles  (cost=0.00..1.89 rows=1 width=4) (actual time=0.006..0.006 rows=1 loops=648)
               Index Cond: (articles.context_key = public.contexts.context_key)
               Filter: articles.indexed
   ->  Index Scan using contexts_pkey on contexts  (cost=0.00..1.89 rows=1 width=4) (actual time=0.005..0.006 rows=1 loops=546)
         Index Cond: (public.contexts.context_key = articles.context_key)
 Total runtime: 341.727 ms
(41 rows)




preproduction-20091214=# select count(*),indexed from articles group by indexed;
 count  | indexed
--------+---------
 499730 | f
 341930 | t
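(A side note, not something tried in the thread: since the slow plan applies "Filter: articles.indexed" row by row, a partial index is one cheap experiment. The index name below is hypothetical; with roughly 40% of articles indexed it may or may not pay off, but it is easy to test and drop.)

```sql
-- Hypothetical partial index: only rows satisfying "indexed" are stored,
-- so a lookup by context_key implicitly carries the boolean restriction.
CREATE INDEX articles_indexed_key_idx
    ON articles (context_key)
    WHERE indexed;
ANALYZE articles;
```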

From:
Pierre Frédéric Caillaud
Date:

Your Query :

SELECT contexts.context_key FROM contexts
JOIN articles ON (articles.context_key=contexts.context_key)
JOIN matview_82034 ON (contexts.context_key=matview_82034.context_key)
WHERE contexts.context_key IN
     (SELECT context_key FROM article_words JOIN words using (word_key)
WHERE word = 'insider'
      INTERSECT
      SELECT context_key FROM article_words JOIN words using (word_key)
WHERE word = 'trading')
AND contexts.context_key IN
          (SELECT a.context_key FROM virtual_ancestors a JOIN bp_categories
ON (a.ancestor_key = bp_categories.context_key)
          WHERE lower(bp_categories.category) = 'law') AND articles.indexed;


I guess this is some form of keyword search, like :
- search for article
- with keywords "insider" and "trading"
- and belongs to a subcategory of "law"

The way you do it is exactly how the phpBB forum software implements
keyword search when the underlying database doesn't support full text
search. It is a fallback mechanism meant only for small forums on old
versions of MySQL, because it is extremely slow.

Even your faster timing (7500 ms) is extremely slow.

Option 1 :

a) Instead of building your own keywords table, use Postgres' fulltext
search, which is a lot smarter about combining keywords than using
INTERSECT.
You can either index the entire article, or use a separate keyword field,
or both.
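A minimal sketch of (a), assuming the article text lives in a column called articles.body (the column name is an assumption, not from the thread; text search is built into core as of 8.3):

```sql
-- Build a tsvector once, index it with GIN; in production you would
-- maintain body_tsv with a trigger.
ALTER TABLE articles ADD COLUMN body_tsv tsvector;
UPDATE articles SET body_tsv = to_tsvector('english', body);
CREATE INDEX articles_body_tsv_idx ON articles USING gin (body_tsv);

-- Both keywords become a single indexable condition, replacing the
-- INTERSECT of two keyword subqueries:
SELECT context_key
  FROM articles
 WHERE body_tsv @@ to_tsquery('english', 'insider & trading')
   AND indexed;
```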

b) If an article belongs to only one category, use an integer field. If,
as is most often the case, an article can belong to several categories,
use an integer array with a GiST index: when an article belongs to
categories 1, 2 and 3, set a column article_categories to
'{1,2,3}'::INTEGER[], then put a GiST index on that column.
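A sketch of (b); it relies on the contrib intarray module (installed on 8.3 by running the contrib _int.sql script), and the column and index names are assumptions:

```sql
-- Assumed column: articles.article_categories integer[].
ALTER TABLE articles ADD COLUMN article_categories integer[];
CREATE INDEX articles_categories_gist
    ON articles USING gist (article_categories gist__int_ops);

-- "In any of the listed categories" becomes an indexable array-overlap
-- test (&&); use @> instead to require all of the listed categories.
SELECT context_key
  FROM articles
 WHERE article_categories && '{82034}'::integer[];
```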

You can then do a SELECT from articles (only one table) using an AND on
the intersection of article_categories with an array of the required
categories, and using Postgres' full text search on keywords.

This will most likely result in a Bitmap Scan, which will do the ANDing
much faster than any other solution.

Alternately, you can use keywords like category_1234, stuff everything
into your keywords column, and use only full text search.
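The category-as-keyword trick, sketched with an assumed tsvector column (called body_tsv here, not from the thread's schema); one index then serves both the keyword and the category filter:

```sql
-- Append a synthetic lexeme per category to the assumed body_tsv column.
-- The 'simple' config avoids stemming the synthetic token.
UPDATE articles
   SET body_tsv = body_tsv || to_tsvector('simple', 'category82034')
 WHERE context_key IN (SELECT context_key FROM matview_82034);

-- One bitmap-indexable condition covers keywords and category:
SELECT context_key
  FROM articles
 WHERE body_tsv @@ to_tsquery('simple', 'insider & trading & category82034');
```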

You should try this solution first; it works really well. When the data
set becomes quite a bit larger than your RAM, it can get slow, though.

Option 2 :

Postgres' full text search is perfectly integrated and has real benefits:
it is fast, allows high write concurrency, etc. However, full text search
can be made much faster still if you accept some compromises.

For instance, I have tried Xapian: it is a lot faster than Postgres for
full text search (and more powerful too), but the price you pay is:
- a bit of work to integrate it
    - I suggest using triggers and a Python indexer script running in the
      background to update the index
    - you can't query it with SQL, so you need some interfacing
- updates are not concurrent (single-writer)

So, if you don't make lots of updates, Xapian may work for you. Its
performance is unbelievable, even on huge datasets.

From:
Robert Haas
Date:

On Sat, Feb 13, 2010 at 2:58 AM, Bryce Nesbitt <> wrote:
> So as the op, back to the original posting....
>
> In the real world, what should I do?  Does it make sense to pull the "AND
> articles.indexed" clause into an outer query?  Will that query simply
> perform poorly on other arbitrary combinations of words?

It's really hard to say whether a query that performs better is going
to always perform better on every combination of words.  My suggestion
is - try it and see.  It's my experience that rewriting queries is a
pretty effective way of speeding them up, but I can't vouch for what
will happen in your particular case.  It's depressing to see that
increasing the statistics target didn't help much; but that makes me
think that the problem is that your join selectivity estimates are
off, as Tom and Jorge said upthread.  Rewriting the query or trying an
upgrade are probably your only options.

...Robert