Thread: Re: Help Me Understand Why I'm Getting a Bad Query Plan

Re: Help Me Understand Why I'm Getting a Bad Query Plan

From: Bryan Murphy
I hate to nag, but could anybody help me with this?  We have a few
related queries that are causing noticeable service delays in our
production system.  I've tried a number of different things, but I'm
running out of ideas and don't know what to do next.

Thanks,
Bryan

On Mon, Mar 23, 2009 at 2:03 PM, Bryan Murphy <bmurphy1976@gmail.com> wrote:
> Hey Guys,
>
> I've got a query on our production system that isn't choosing a good
> plan.  I can't see why it's choosing to do a sequential scan on the
> ItemExperienceLog table.  That table is about 800MB and has about 2.5
> million records.  This example query only returns 4 records.  I've
> tried upping the statistics for ItemExperienceLog.VisitorId and
> ItemExperienceLog.ItemId to 1000 (from our default of 100) with no
> success.
>
> Our primary keys are GUIDs stored as char(32) not null.  Our database
> uses UTF-8 encoding and is currently at version 8.3.5.
>
> The query (the commented-out statements are things I've been experimenting with):
>
>
>
> --SET enable_seqscan = off
> --SET enable_seqscan = on
>
> --ALTER TABLE ItemExperienceLog ALTER COLUMN VisitorId SET STATISTICS 1000
> --ALTER TABLE ItemExperienceLog ALTER COLUMN ItemId SET STATISTICS 1000
> --ANALYZE ItemExperienceLog
>
> SELECT MAX(l.Id) as Id, l.ItemId
> FROM ItemExperienceLog l
> INNER JOIN Items_Primary p ON p.Id = l.ItemId
> INNER JOIN Feeds f ON f.Id = p.FeedId
> INNER JOIN Visitors v ON v.Id = l.VisitorId
> WHERE
>        v.UserId = 'fbe2537f21d94f519605612c0bf7c2c5'
>        AND LOWER(f.Slug) = LOWER('Wealth_Building_by_NightingaleConant')
> GROUP BY l.ItemId
>
>
>
> Explain verbose output (set enable_seqscan = on):
>
>
>
> HashAggregate  (cost=124392.54..124392.65 rows=9 width=37) (actual time=7765.650..7765.654 rows=4 loops=1)
>   ->  Nested Loop  (cost=2417.68..124392.49 rows=9 width=37) (actual time=1706.703..7765.611 rows=11 loops=1)
>         ->  Nested Loop  (cost=2417.68..123868.75 rows=1807 width=70) (actual time=36.374..7706.677 rows=3174 loops=1)
>               ->  Hash Join  (cost=2417.68..119679.50 rows=1807 width=37) (actual time=36.319..7602.221 rows=3174 loops=1)
>                     Hash Cond: (l.visitorid = v.id)
>                     ->  Seq Scan on itemexperiencelog l  (cost=0.00..107563.09 rows=2581509 width=70) (actual time=0.010..4191.251 rows=2579880 loops=1)
>                     ->  Hash  (cost=2401.43..2401.43 rows=1300 width=33) (actual time=3.673..3.673 rows=897 loops=1)
>                           ->  Bitmap Heap Scan on visitors v  (cost=22.48..2401.43 rows=1300 width=33) (actual time=0.448..2.523 rows=897 loops=1)
>                                 Recheck Cond: (userid = 'fbe2537f21d94f519605612c0bf7c2c5'::bpchar)
>                                 ->  Bitmap Index Scan on visitors_userid_index2  (cost=0.00..22.16 rows=1300 width=0) (actual time=0.322..0.322 rows=897 loops=1)
>                                       Index Cond: (userid = 'fbe2537f21d94f519605612c0bf7c2c5'::bpchar)
>               ->  Index Scan using items_primary_pkey on items_primary p  (cost=0.00..2.31 rows=1 width=66) (actual time=0.027..0.029 rows=1 loops=3174)
>                     Index Cond: (p.id = l.itemid)
>         ->  Index Scan using feeds_pkey on feeds f  (cost=0.00..0.28 rows=1 width=33) (actual time=0.016..0.016 rows=0 loops=3174)
>               Index Cond: (f.id = p.feedid)
>               Filter: (lower((f.slug)::text) = 'wealth_building_by_nightingaleconant'::text)
> Total runtime: 7765.767 ms
>
>
>
> Explain verbose output (set enable_seqscan = off):
>
>
>
> HashAggregate  (cost=185274.71..185274.82 rows=9 width=37) (actual time=185.024..185.028 rows=4 loops=1)
>   ->  Nested Loop  (cost=0.00..185274.67 rows=9 width=37) (actual time=0.252..184.989 rows=11 loops=1)
>         ->  Nested Loop  (cost=0.00..184751.21 rows=1806 width=70) (actual time=0.223..134.943 rows=3174 loops=1)
>               ->  Nested Loop  (cost=0.00..180564.28 rows=1806 width=37) (actual time=0.192..60.214 rows=3174 loops=1)
>                     ->  Index Scan using visitors_userid_index2 on visitors v  (cost=0.00..2580.97 rows=1300 width=33) (actual time=0.052..2.342 rows=897 loops=1)
>                           Index Cond: (userid = 'fbe2537f21d94f519605612c0bf7c2c5'::bpchar)
>                     ->  Index Scan using itemexperiencelog__index__visitorid on itemexperiencelog l  (cost=0.00..134.04 rows=230 width=70) (actual time=0.013..0.040 rows=4 loops=897)
>                           Index Cond: (l.visitorid = v.id)
>               ->  Index Scan using items_primary_pkey on items_primary p  (cost=0.00..2.31 rows=1 width=66) (actual time=0.019..0.020 rows=1 loops=3174)
>                     Index Cond: (p.id = l.itemid)
>         ->  Index Scan using feeds_pkey on feeds f  (cost=0.00..0.28 rows=1 width=33) (actual time=0.014..0.014 rows=0 loops=3174)
>               Index Cond: (f.id = p.feedid)
>               Filter: (lower((f.slug)::text) = 'wealth_building_by_nightingaleconant'::text)
> Total runtime: 185.117 ms
>
>
>
> The relevant portions of postgresql.conf:
>
>
> max_connections = 100
> shared_buffers = 2GB
> temp_buffers = 32MB
> work_mem = 64MB
> maintenance_work_mem = 256MB
> max_stack_depth = 8MB
> wal_buffers = 1MB
> checkpoint_segments = 32
> random_page_cost = 2.0
> effective_cache_size = 12GB
> default_statistics_target = 100
>
>
>
> The tables and the indexes that matter:
>
>
>
> CREATE TABLE itemexperiencelog
> (
>  id integer NOT NULL DEFAULT nextval('itemexperiencelog__sequence'::regclass),
>  visitorid character(32) NOT NULL,
>  itemid character(32) NOT NULL,
>  created timestamp without time zone NOT NULL,
>  modified timestamp without time zone NOT NULL,
>  "position" integer NOT NULL DEFAULT 0,
>  itemlength integer NOT NULL DEFAULT 0,
>  usercomplete boolean NOT NULL DEFAULT false,
>  contentcomplete boolean NOT NULL DEFAULT false,
>  source character varying(32) NOT NULL,
>  sessionid character(32) NOT NULL,
>  authenticatedatcreation boolean NOT NULL DEFAULT false,
>  CONSTRAINT itemexperiencelog_pkey PRIMARY KEY (id),
>  CONSTRAINT itemexperiencelog_itemid_fkey FOREIGN KEY (itemid)
>      REFERENCES items_primary (id) MATCH SIMPLE
>      ON UPDATE NO ACTION ON DELETE NO ACTION
> )
>
> CREATE INDEX itemexperiencelog__index__itemid
>  ON itemexperiencelog
>  USING btree
>  (itemid);
>
> CREATE INDEX itemexperiencelog__index__visitorid
>  ON itemexperiencelog
>  USING btree
>  (visitorid);
>
>
>
> CREATE TABLE items_primary
> (
>  id character(32) NOT NULL,
>  feedid character(32),
>  slug character varying(255),
>  pubdate timestamp without time zone,
>  isvisible boolean,
>  deleted integer,
>  itemgroupid integer,
>  lockedflags integer NOT NULL,
>  CONSTRAINT items_primary_pkey PRIMARY KEY (id),
>  CONSTRAINT items_itemgroupid_fkey FOREIGN KEY (itemgroupid)
>      REFERENCES itemgroups (id) MATCH SIMPLE
>      ON UPDATE NO ACTION ON DELETE NO ACTION,
>  CONSTRAINT items_primary_feedid_fkey FOREIGN KEY (feedid)
>      REFERENCES feeds (id) MATCH SIMPLE
>      ON UPDATE NO ACTION ON DELETE NO ACTION
> )
> WITH (OIDS=FALSE);
>
> CREATE INDEX items_primary_feedid_index
>  ON items_primary
>  USING btree
>  (feedid);
>
>
>
> CREATE TABLE feeds
> (
>  id character(32) NOT NULL,
>  rssupdateinterval integer,
>  lastrssupdate timestamp without time zone,
>  nextrssupdate timestamp without time zone,
>  url text NOT NULL,
>  title text NOT NULL,
>  subtitle text,
>  link text,
>  slug character varying(2048) NOT NULL,
>  description text,
>  lang character varying(255),
>  copyright text,
>  pubdate timestamp without time zone,
>  lastbuilddate character varying(255),
>  docs text,
>  generator character varying(255),
>  managingeditor character varying(255),
>  webmaster character varying(255),
>  status integer,
>  ttl character varying(255),
>  image_title text,
>  image_url text,
>  image_link text,
>  image_description text,
>  image_width integer,
>  image_height integer,
>  rating character varying(255),
>  skiphours character varying(255),
>  skipdays character varying(255),
>  genretagid character(32),
>  sourceuserid character(32),
>  created timestamp without time zone NOT NULL,
>  deleted integer NOT NULL,
>  cloud_domain character varying(255),
>  cloud_port character varying(255),
>  cloud_path character varying(255),
>  cloud_registerprocedure character varying(255),
>  cloud_protocol character varying(255),
>  itunesauthor character varying(255),
>  itunesblock character varying(255),
>  itunescategories text,
>  itunesimage character varying(255),
>  itunesexplicit character varying(255),
>  ituneskeywords text,
>  itunesnewfeedurl character varying(255),
>  itunesowner_name character varying(255),
>  itunesowner_email character varying(255),
>  itunessubtitle text,
>  itunessummary text,
>  yahooimage text,
>  modified timestamp without time zone NOT NULL,
>  mediatype integer,
>  isvisible boolean NOT NULL DEFAULT false,
>  mediaplayertype integer NOT NULL,
>  episodebrowsecount integer,
>  comments text NOT NULL DEFAULT ''::text,
>  lockedflags integer NOT NULL DEFAULT 0,
>  sequenceid integer NOT NULL DEFAULT nextval('feeds_sequence'::regclass),
>  feedgroupid character(32),
>  bannerurl text,
>  categories text NOT NULL,
>  publisher text,
>  episodefetchstrategy integer NOT NULL,
>  averageuserrating real,
>  userratingscount integer,
>  lastupdate timestamp without time zone NOT NULL,
>  userratingstotal double precision NOT NULL,
>  CONSTRAINT feeds_pkey PRIMARY KEY (id),
>  CONSTRAINT feeds_feedgroupid_fkey FOREIGN KEY (feedgroupid)
>      REFERENCES feedgroups (id) MATCH SIMPLE
>      ON UPDATE NO ACTION ON DELETE NO ACTION,
>  CONSTRAINT fk5cb07d0e1080d15d FOREIGN KEY (sourceuserid)
>      REFERENCES users (id) MATCH SIMPLE
>      ON UPDATE NO ACTION ON DELETE NO ACTION,
>  CONSTRAINT fk5cb07d0eb113835f FOREIGN KEY (genretagid)
>      REFERENCES tags (id) MATCH SIMPLE
>      ON UPDATE NO ACTION ON DELETE NO ACTION,
>  CONSTRAINT feeds_slug_key UNIQUE (slug)
> )
> WITH (OIDS=FALSE);
>
> CREATE UNIQUE INDEX feeds_slug_unique
>  ON feeds
>  USING btree
>  (lower(slug::text));
>
>
>
> CREATE TABLE visitors
> (
>  id character(32) NOT NULL,
>  visitorid character(32) NOT NULL,
>  userid character(32),
>  sessionid text NOT NULL,
>  created timestamp without time zone NOT NULL,
>  modified timestamp without time zone NOT NULL,
>  deleted integer NOT NULL,
>  sequenceid integer DEFAULT nextval('visitors_sequence'::regclass),
>  CONSTRAINT visitors_pkey PRIMARY KEY (id)
> )
> WITH (OIDS=FALSE);
>
> CREATE INDEX visitors_userid_index2
>  ON visitors
>  USING btree
>  (userid);
>
> CREATE INDEX visitors_visitorid_index2
>  ON visitors
>  USING btree
>  (visitorid);
>

Re: Help Me Understand Why I'm Getting a Bad Query Plan

From: marcin mank
There is one thing I don't understand:

              ->  Nested Loop  (cost=0.00..180564.28 rows=1806 width=37) (actual time=0.192..60.214 rows=3174 loops=1)
                    ->  Index Scan using visitors_userid_index2 on visitors v  (cost=0.00..2580.97 rows=1300 width=33) (actual time=0.052..2.342 rows=897 loops=1)
                          Index Cond: (userid = 'fbe2537f21d94f519605612c0bf7c2c5'::bpchar)
                    ->  Index Scan using itemexperiencelog__index__visitorid on itemexperiencelog l  (cost=0.00..134.04 rows=230 width=70) (actual time=0.013..0.040 rows=4 loops=897)
                          Index Cond: (l.visitorid = v.id)

If it expects 1300 visitors with the userid, and for each of them to
have 230 entries in itemexperiencelog, how can it come up with 1806
returned rows (and be about right!)?

Greetings
Marcin

Re: Help Me Understand Why I'm Getting a Bad Query Plan

From: Bryan Murphy
On Tue, Mar 24, 2009 at 10:04 PM, marcin mank <marcin.mank@gmail.com> wrote:
> There is one thing I don't understand:
>
>               ->  Nested Loop  (cost=0.00..180564.28 rows=1806 width=37) (actual time=0.192..60.214 rows=3174 loops=1)
>                     ->  Index Scan using visitors_userid_index2 on visitors v  (cost=0.00..2580.97 rows=1300 width=33) (actual time=0.052..2.342 rows=897 loops=1)
>                           Index Cond: (userid = 'fbe2537f21d94f519605612c0bf7c2c5'::bpchar)
>                     ->  Index Scan using itemexperiencelog__index__visitorid on itemexperiencelog l  (cost=0.00..134.04 rows=230 width=70) (actual time=0.013..0.040 rows=4 loops=897)
>                           Index Cond: (l.visitorid = v.id)
>
> If it expects 1300 visitors with the userid, and for each of them to
> have 230 entries in itemexperiencelog, how can it come up with 1806
> returned rows (and be about right!)?

I'm not sure I follow what you're saying.

One thing to keep in mind, due to a lapse in our judgement at the
time, this itemexperiencelog table serves as both a current state
table, and a log table.  Therefore, it potentially has multiple
redundant entries, but we typically only look at the most recent entry
to figure out the state of the current item.
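
A sketch of that "most recent entry wins" lookup (illustrative only --
this is not our production query, and the column list and visitorid
literal are made up):

SELECT DISTINCT ON (itemid) itemid, "position", usercomplete
FROM itemexperiencelog
WHERE visitorid = '00000000000000000000000000000000'
ORDER BY itemid, id DESC;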

We're in the process of refactoring this now, as well as
denormalizing some of the tables to eliminate unnecessary joins, but I
keep running into these problems and need to understand what is going
on so that I know we're fixing the correct things.

Thanks,
Bryan

Re: Help Me Understand Why I'm Getting a Bad Query Plan

From: Josh Berkus
Bryan,

> One thing to keep in mind, due to a lapse in our judgement at the
> time, this itemexperiencelog table serves as both a current state
> table, and a log table.  Therefore, it potentially has multiple
> redundant entries, but we typically only look at the most recent entry
> to figure out the state of the current item.

Oh, I see.  It thinks that it'll need to pull 260,000 redundant rows in
order to get 1800 unique ones.  Only it's wrong; you're only pulling
about 4000.
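
If you want to see what the planner currently believes about those
columns, a quick look at the pg_stats view can help (a sketch; run it
after ANALYZE):

SELECT attname, n_distinct, correlation
FROM pg_stats
WHERE tablename = 'itemexperiencelog'
  AND attname IN ('visitorid', 'itemid');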

Try increasing some stats still further: itemexperiencelog.visitorid and
visitors.userid both to 500.

--Josh


Re: Help Me Understand Why I'm Getting a Bad Query Plan

From: Bryan Murphy
On Wed, Mar 25, 2009 at 4:55 PM, Josh Berkus <josh@agliodbs.com> wrote:
> Oh, I see.  It thinks that it'll need to pull 260,000 redundant rows in
> order to get 1800 unique ones.  Only it's wrong; you're only pulling about
> 4000.
>
> Try increasing some stats still further: itemexperiencelog.visitorid and
> visitors.userid both to 500.

I tried that already, but I decided to try again in case I messed up
something last time.  Here's what I ran.  As you can see, it still
chooses to do a sequential scan.  Am I changing the stats for those
columns correctly?

Thanks,
Bryan


First, the query:


SELECT MAX(l.Id) as Id, l.ItemId
FROM ItemExperienceLog l
INNER JOIN Items_Primary p ON p.Id = l.ItemId
INNER JOIN Feeds f ON f.Id = p.FeedId
INNER JOIN Visitors v ON v.Id = l.VisitorId
WHERE
    v.UserId = 'fbe2537f21d94f519605612c0bf7c2c5'
    AND LOWER(f.Slug) = LOWER('Wealth_Building_by_NightingaleConant')
GROUP BY l.ItemId


The query plan:


HashAggregate  (cost=130291.23..130291.35 rows=9 width=37) (actual time=8385.428..8385.433 rows=4 loops=1)
  ->  Nested Loop  (cost=2649.02..130291.19 rows=9 width=37) (actual time=3707.336..8385.388 rows=11 loops=1)
        ->  Nested Loop  (cost=2649.02..129744.01 rows=1888 width=70) (actual time=8.881..8322.029 rows=3210 loops=1)
              ->  Hash Join  (cost=2649.02..125273.81 rows=1888 width=37) (actual time=8.836..8196.469 rows=3210 loops=1)
                    Hash Cond: (l.visitorid = v.id)
                    ->  Seq Scan on itemexperiencelog l  (cost=0.00..112491.03 rows=2697303 width=70) (actual time=0.048..4459.139 rows=2646177 loops=1)
                    ->  Hash  (cost=2631.24..2631.24 rows=1422 width=33) (actual time=7.751..7.751 rows=899 loops=1)
                          ->  Bitmap Heap Scan on visitors v  (cost=23.44..2631.24 rows=1422 width=33) (actual time=0.577..6.347 rows=899 loops=1)
                                Recheck Cond: (userid = 'fbe2537f21d94f519605612c0bf7c2c5'::bpchar)
                                ->  Bitmap Index Scan on visitors_userid_index2  (cost=0.00..23.08 rows=1422 width=0) (actual time=0.419..0.419 rows=899 loops=1)
                                      Index Cond: (userid = 'fbe2537f21d94f519605612c0bf7c2c5'::bpchar)
              ->  Index Scan using items_primary_pkey on items_primary p  (cost=0.00..2.36 rows=1 width=66) (actual time=0.024..0.025 rows=1 loops=3210)
                    Index Cond: (p.id = l.itemid)
        ->  Index Scan using feeds_pkey on feeds f  (cost=0.00..0.28 rows=1 width=33) (actual time=0.018..0.018 rows=0 loops=3210)
              Index Cond: (f.id = p.feedid)
              Filter: (lower((f.slug)::text) = 'wealth_building_by_nightingaleconant'::text)
Total runtime: 8385.538 ms


Bump up the stats:


ALTER TABLE ItemExperienceLog ALTER COLUMN VisitorId SET STATISTICS 500;
ALTER TABLE ItemExperienceLog ALTER COLUMN ItemId SET STATISTICS 500;
ANALYZE ItemExperienceLog;

ALTER TABLE Visitors ALTER COLUMN UserId SET STATISTICS 500;
ALTER TABLE Visitors ALTER COLUMN Id SET STATISTICS 500;
ANALYZE Visitors;
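
A sanity check that the new targets actually stuck (a sketch reading
pg_attribute, whose attstattarget column holds the per-column setting;
-1 means the default is in effect):

SELECT attname, attstattarget
FROM pg_attribute
WHERE attrelid = 'itemexperiencelog'::regclass
  AND attname IN ('visitorid', 'itemid');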


The new query plan:


HashAggregate  (cost=127301.63..127301.72 rows=7 width=37) (actual time=11447.033..11447.037 rows=4 loops=1)
  ->  Nested Loop  (cost=1874.67..127301.60 rows=7 width=37) (actual time=4717.880..11446.987 rows=11 loops=1)
        ->  Nested Loop  (cost=1874.67..126923.09 rows=1306 width=70) (actual time=20.565..11345.756 rows=3210 loops=1)
              ->  Hash Join  (cost=1874.67..123822.53 rows=1306 width=37) (actual time=20.445..8292.235 rows=3210 loops=1)
                    Hash Cond: (l.visitorid = v.id)
                    ->  Seq Scan on itemexperiencelog l  (cost=0.00..112010.04 rows=2646604 width=70) (actual time=0.065..4438.481 rows=2646549 loops=1)
                    ->  Hash  (cost=1862.32..1862.32 rows=988 width=33) (actual time=19.360..19.360 rows=899 loops=1)
                          ->  Bitmap Heap Scan on visitors v  (cost=18.08..1862.32 rows=988 width=33) (actual time=0.666..17.788 rows=899 loops=1)
                                Recheck Cond: (userid = 'fbe2537f21d94f519605612c0bf7c2c5'::bpchar)
                                ->  Bitmap Index Scan on visitors_userid_index2  (cost=0.00..17.83 rows=988 width=0) (actual time=0.520..0.520 rows=899 loops=1)
                                      Index Cond: (userid = 'fbe2537f21d94f519605612c0bf7c2c5'::bpchar)
              ->  Index Scan using items_primary_pkey on items_primary p  (cost=0.00..2.36 rows=1 width=66) (actual time=0.944..0.945 rows=1 loops=3210)
                    Index Cond: (p.id = l.itemid)
        ->  Index Scan using feeds_pkey on feeds f  (cost=0.00..0.28 rows=1 width=33) (actual time=0.029..0.029 rows=0 loops=3210)
              Index Cond: (f.id = p.feedid)
              Filter: (lower((f.slug)::text) = 'wealth_building_by_nightingaleconant'::text)
Total runtime: 11447.155 ms

Re: Help Me Understand Why I'm Getting a Bad Query Plan

From: Tom Lane
Bryan Murphy <bmurphy1976@gmail.com> writes:
> I tried that already, but I decided to try again in case I messed up
> something last time.  Here's what I ran.  As you can see, it still
> chooses to do a sequential scan.  Am I changing the stats for those
> columns correctly?

I think what you should be doing is messing with the cost parameters
... and not in the direction you tried before.  I gather from
    effective_cache_size = 12GB
that you have plenty of RAM on this machine.  If the tables involved
are less than 1GB then it's likely that you are operating in a fully
cached condition, and the default cost parameters are not set up for
that.  You want to be charging a lot less for page accesses relative to
CPU effort.  Try reducing both seq_page_cost and random_page_cost to 0.5
or even 0.1.  You'll need to watch your other queries to make sure
nothing gets radically worse though ...
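
For instance, as a session-local experiment (both parameters can be
SET per session, so nothing else is affected while you test):

SET seq_page_cost = 0.5;
SET random_page_cost = 0.5;
-- then re-run EXPLAIN ANALYZE on the problem query and compare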

            regards, tom lane

Re: Help Me Understand Why I'm Getting a Bad Query Plan

From: Bryan Murphy
On Wed, Mar 25, 2009 at 9:15 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> I think what you should be doing is messing with the cost parameters
> ... and not in the direction you tried before.  I gather from
>        effective_cache_size = 12GB
> that you have plenty of RAM on this machine.  If the tables involved
> are less than 1GB then it's likely that you are operating in a fully
> cached condition, and the default cost parameters are not set up for
> that.  You want to be charging a lot less for page accesses relative to
> CPU effort.  Try reducing both seq_page_cost and random_page_cost to 0.5
> or even 0.1.  You'll need to watch your other queries to make sure
> nothing gets radically worse though ...
>
>                        regards, tom lane

Thanks Tom, I think that did the trick.  I'm going to have to keep an
eye on the database for a few days to make sure there are no
unintended consequences, but it looks good.  Here's the new query
plan:


HashAggregate  (cost=40906.58..40906.67 rows=7 width=37) (actual time=204.661..204.665 rows=4 loops=1)
  ->  Nested Loop  (cost=0.00..40906.55 rows=7 width=37) (actual time=0.293..204.628 rows=11 loops=1)
        ->  Nested Loop  (cost=0.00..40531.61 rows=1310 width=70) (actual time=0.261..113.576 rows=3210 loops=1)
              ->  Nested Loop  (cost=0.00..39475.97 rows=1310 width=37) (actual time=0.232..29.484 rows=3210 loops=1)
                    ->  Index Scan using visitors_userid_index2 on visitors v  (cost=0.00..513.83 rows=1002 width=33) (actual time=0.056..2.307 rows=899 loops=1)
                          Index Cond: (userid = 'fbe2537f21d94f519605612c0bf7c2c5'::bpchar)
                    ->  Index Scan using itemexperiencelog__index__visitorid on itemexperiencelog l  (cost=0.00..37.43 rows=116 width=70) (actual time=0.013..0.021 rows=4 loops=899)
                          Index Cond: (l.visitorid = v.id)
              ->  Index Scan using items_primary_pkey on items_primary p  (cost=0.00..0.79 rows=1 width=66) (actual time=0.018..0.019 rows=1 loops=3210)
                    Index Cond: (p.id = l.itemid)
        ->  Index Scan using feeds_pkey on feeds f  (cost=0.00..0.27 rows=1 width=33) (actual time=0.023..0.023 rows=0 loops=3210)
              Index Cond: (f.id = p.feedid)
              Filter: (lower((f.slug)::text) = 'wealth_building_by_nightingaleconant'::text)
Total runtime: 204.759 ms


What I did was change seq_page_cost back to 1.0 and then changed
random_page_cost to 0.5

This also makes logical sense to me.  We've completely rewritten our
caching layer over the last three weeks and introduced Slony into our
architecture, so our usage patterns have transformed overnight.
Previously we were very I/O bound; now most of the actively used data
is actually in memory.  Just a few weeks ago there was so much churn
that almost nothing stayed cached for long.

This is great, thanks guys!

Bryan

Re: Help Me Understand Why I'm Getting a Bad Query Plan

From: Tom Lane
Bryan Murphy <bmurphy1976@gmail.com> writes:
> What I did was change seq_page_cost back to 1.0 and then changed
> random_page_cost to 0.5

[ squint... ]  It makes no physical sense for random_page_cost to be
less than seq_page_cost.  Please set them the same.

            regards, tom lane

Re: Help Me Understand Why I'm Getting a Bad Query Plan

From: Bryan Murphy
On Wed, Mar 25, 2009 at 10:28 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Bryan Murphy <bmurphy1976@gmail.com> writes:
>> What I did was change seq_page_cost back to 1.0 and then changed
>> random_page_cost to 0.5
>
> [ squint... ]  It makes no physical sense for random_page_cost to be
> less than seq_page_cost.  Please set them the same.
>
>                        regards, tom lane

Done.  Just saw the tip in the docs as well.  Both are set to 0.5 now
and I'm still getting the good query plan.
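
For anyone following along, the corresponding postgresql.conf lines
would presumably be:

seq_page_cost = 0.5
random_page_cost = 0.5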

Thanks!
Bryan