Query plan from hell - Mailing list pgsql-performance

From: PFC
Subject: Query plan from hell
Msg-id: op.s6xwlb1mcigqcu@apollo13
In response to: Re: Performance problems with multiple layers of functions (Tom Lane <tgl@sss.pgh.pa.us>)
Responses: Re: Query plan from hell
List: pgsql-performance
    Whoa!

    bookmark_delta contains very few rows but is inserted into and deleted from
very often... the effect on the planner is spectacular!
    I guess I'll have to VACUUM ANALYZE this table every minute...
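
One quick way to check whether the planner is working from stale statistics (a minimal sketch, assuming access to the same database; not part of the session below) is to compare the row estimate stored in pg_class with the table's real size:

-- pg_class.reltuples is only refreshed by VACUUM/ANALYZE, so on a small,
-- heavily churned table it can drift far from the true row count.
SELECT c.reltuples AS estimated_rows,
       (SELECT count(*) FROM bookmark_delta) AS actual_rows
FROM pg_class c
WHERE c.relname = 'bookmark_delta';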


annonces=# EXPLAIN ANALYZE SELECT id, priority FROM annonces WHERE id IN (SELECT annonce_id FROM bookmark_delta);
                                                        QUERY PLAN
------------------------------------------------------------------------------------------------------------------------
  Hash IN Join  (cost=32.12..8607.08 rows=1770 width=6) (actual time=387.011..387.569 rows=1 loops=1)
    Hash Cond: ("outer".id = "inner".annonce_id)
    ->  Seq Scan on annonces  (cost=0.00..7796.00 rows=101500 width=6) (actual time=0.022..164.369 rows=101470 loops=1)
    ->  Hash  (cost=27.70..27.70 rows=1770 width=4) (actual time=0.013..0.013 rows=5 loops=1)
          ->  Seq Scan on bookmark_delta  (cost=0.00..27.70 rows=1770 width=4) (actual time=0.004..0.010 rows=5 loops=1)
  Total runtime: 387.627 ms
(6 rows)

annonces=# EXPLAIN ANALYZE SELECT id, priority FROM annonces a, (SELECT annonce_id FROM bookmark_delta GROUP BY annonce_id) foo WHERE a.id=foo.annonce_id;
                                                            QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------
  Nested Loop  (cost=32.12..10409.31 rows=1770 width=6) (actual time=0.081..0.084 rows=1 loops=1)
    ->  HashAggregate  (cost=32.12..49.83 rows=1770 width=4) (actual time=0.038..0.040 rows=1 loops=1)
          ->  Seq Scan on bookmark_delta  (cost=0.00..27.70 rows=1770 width=4) (actual time=0.024..0.027 rows=5 loops=1)
    ->  Index Scan using annonces_pkey on annonces a  (cost=0.00..5.83 rows=1 width=6) (actual time=0.039..0.040 rows=1 loops=1)
          Index Cond: (a.id = "outer".annonce_id)
  Total runtime: 0.163 ms
(6 rows)
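
The GROUP BY subquery above de-duplicates bookmark_delta by hand, which lets the planner probe annonces_pkey once per distinct annonce_id instead of hashing all of annonces. An equivalent way to express the same semi-join (a sketch, not taken from the original session) is EXISTS:

-- Semi-join: each matching annonces row is returned once, no matter how
-- many times its id appears in bookmark_delta.
SELECT a.id, a.priority
FROM annonces a
WHERE EXISTS (SELECT 1 FROM bookmark_delta b WHERE b.annonce_id = a.id);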

annonces=# vacuum bookmark_delta ;
VACUUM
annonces=# EXPLAIN ANALYZE SELECT id, priority FROM annonces WHERE id IN (SELECT annonce_id FROM bookmark_delta);
                                                        QUERY PLAN
------------------------------------------------------------------------------------------------------------------------
  Hash IN Join  (cost=32.12..8607.08 rows=1770 width=6) (actual time=195.284..196.063 rows=1 loops=1)
    Hash Cond: ("outer".id = "inner".annonce_id)
    ->  Seq Scan on annonces  (cost=0.00..7796.00 rows=101500 width=6) (actual time=0.014..165.626 rows=101470 loops=1)
    ->  Hash  (cost=27.70..27.70 rows=1770 width=4) (actual time=0.008..0.008 rows=2 loops=1)
          ->  Seq Scan on bookmark_delta  (cost=0.00..27.70 rows=1770 width=4) (actual time=0.003..0.004 rows=2 loops=1)
  Total runtime: 196.122 ms
(6 rows)
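
As the unchanged plan shows, plain VACUUM alone did not give the planner better numbers here; it is the ANALYZE step (run next) that refreshes the per-table statistics. When reclaiming space is not the concern, ANALYZE on its own is enough and cheap to run often on a tiny table (a sketch):

-- Refresh planner statistics for the table without a VACUUM pass.
ANALYZE bookmark_delta;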

annonces=# vacuum analyze bookmark_delta ;
VACUUM
annonces=# EXPLAIN ANALYZE SELECT id, priority FROM annonces WHERE id IN (SELECT annonce_id FROM bookmark_delta);
                                                           QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------
  Nested Loop  (cost=1.02..6.88 rows=1 width=6) (actual time=0.025..0.027 rows=1 loops=1)
    ->  HashAggregate  (cost=1.02..1.03 rows=1 width=4) (actual time=0.011..0.012 rows=1 loops=1)
          ->  Seq Scan on bookmark_delta  (cost=0.00..1.02 rows=2 width=4) (actual time=0.004..0.006 rows=2 loops=1)
    ->  Index Scan using annonces_pkey on annonces  (cost=0.00..5.83 rows=1 width=6) (actual time=0.009..0.010 rows=1 loops=1)
          Index Cond: (annonces.id = "outer".annonce_id)
  Total runtime: 0.104 ms
(6 rows)
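
Instead of hand-scheduling a VACUUM ANALYZE every minute, one possible alternative (a sketch; it assumes a PostgreSQL version with per-table autovacuum storage parameters, which postdates this thread, and the thresholds are only illustrative) is to tell autovacuum to analyze the table very aggressively:

-- Trigger an automatic ANALYZE after roughly every 50 row changes,
-- regardless of the table's size.
ALTER TABLE bookmark_delta
    SET (autovacuum_analyze_scale_factor = 0.0,
         autovacuum_analyze_threshold = 50);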
