Thread: Re: Sequential scan where Index scan expected (update)

Re: Sequential scan where Index scan expected (update)

From: "Gregory S. Williamson"
Date: Thu, 2 Mar 2006 23:37:53 -0800
Perhaps it is clogged with dead tuples -- has it been vacuumed recently with enough FSM space ?
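On a 7.4-era server the quickest check is VACUUM VERBOSE, whose output reports dead row versions per table; a sketch (table name taken from the thread):

```sql
-- Report dead tuples and reclaimed space for the table in question.
VACUUM VERBOSE xx_thing_event;
-- Free space map sizing is controlled by max_fsm_pages / max_fsm_relations
-- in postgresql.conf (7.x-era settings).
```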

It seems unlikely, but maybe try an explicit cast for the thing_event_id comparison, e.g.
explain update xx_thing_event set thing_color='foo' where
thing_event_id=10000::bigint;

It may also be that 5842 rows is enough that the planner decides it is faster to do a sequential scan than the busier
index scan (read index, get data row, versus just reading all the necessary pages in one faster sequential scan).

If you set the sequential scan parameter (enable_seqscan = off) in the config file and reload postgres, does the same
query get faster? (Not suggesting this for real runtime use, but it can be useful for diagnosing issues.)
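The same test can be done per-session without touching the config file (diagnostic only):

```sql
SET enable_seqscan = off;   -- discourage sequential scans for this session
EXPLAIN UPDATE xx_thing_event SET thing_color = 'foo'
 WHERE thing_event_id = 10000;
SET enable_seqscan = on;    -- restore the default
```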

Greg Williamson
DBA
GlobeXplorer LLC


-----Original Message-----
From:    pgsql-sql-owner@postgresql.org on behalf of Bryce Nesbitt
Sent:    Thu 3/2/2006 11:28 PM
To:    pgsql-sql@postgresql.org
Cc:
Subject:    [SQL] Sequential scan where Index scan expected (update)
I'm getting sequential scans (and poor performance), on scans using my
primary keys.  This is an older postgres.
Can anyone help figure out why?


demo=# \d xx_thing
             Table "public.xx_thing"
         Column          |            Type             | Modifiers
-------------------------+-----------------------------+-----------
 thing_id                | bigint                      | not null
 thing_model             | character varying(128)      |
 thing_color             | character varying(128)      |
 thing_year              | integer                     |
Indexes:
    "xx_thing_pkey" primary key, btree (thing_id)


demo=# analyze verbose xx_thing_event;
INFO:  analyzing "public.xx_thing_event"
INFO:  "xx_thing_event": 3374 pages, 3000 rows sampled, 197478 estimated
total rows


demo=# explain update xx_thing_event set thing_color='foo' where thing_event_id=10000;
                             QUERY PLAN
---------------------------------------------------------------------
 Seq Scan on xx_thing_event  (cost=0.00..5842.48 rows=1 width=110)
   Filter: (thing_event_id = 10000)
(2 rows)



demo=# select * from version();
                                                version
----------------------------------------------------------------------------------------------------------
 PostgreSQL 7.4.1 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 3.2.3 20030502 (Red Hat Linux 3.2.3-20)
(1 row)








Re: Sequential scan where Index scan expected (update)

From: Michael Fuhr
Date:
On Thu, Mar 02, 2006 at 11:37:53PM -0800, Gregory S. Williamson wrote:
> It seems unlikely, but maybe try an explicit cast for the thing_event_id comparison, e.g.
> explain update xx_thing_event set thing_color='foo' where
> thing_event_id=10000::bigint;

The server is pre-8.0 so it's likely that this is indeed the problem.
Unfortunately the table that was shown in the \d output (xx_thing)
isn't the same table as in the update command (xx_thing_event) so
we can only guess that thing_event_id is a bigint based on what
xx_thing looks like.

> It may also be that 5842 rows is enough that the planner decides
> it is faster to do a sequential scan that the busier index scan
> (read index, get data row, versus just reading all the necessary
> pages in one faster sequential scan).
[...]
>                              QUERY PLAN
> ---------------------------------------------------------------------
>  Seq Scan on xx_thing_event  (cost=0.00..5842.48 rows=1 width=110)
>    Filter: (thing_event_id = 10000)
> (2 rows)

The 5842.48 figure isn't a row count, it's a cost estimate that
includes the number of pages and rows in the file.  The row count
estimate is 1 row; if the cost estimate for a sequential scan is
5842.48 then an index scan to fetch one row would surely be faster.

If thing_event_id is a bigint and has an index then casting the
value to bigint should result in an index scan.  Another way would
be to quote the value:

UPDATE xx_thing_event SET thing_color = 'foo' WHERE thing_event_id = '10000';
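Both workarounds have the same effect: a quoted literal (or an explicit cast) lets the comparison resolve to the column's type, whereas a bare 10000 is typed as integer, which a pre-8.0 planner will not match against a bigint index. For example (assuming thing_event_id is the bigint primary key, as guessed above):

```sql
-- Either form should allow an index scan on a pre-8.0 server:
UPDATE xx_thing_event SET thing_color = 'foo' WHERE thing_event_id = '10000';
UPDATE xx_thing_event SET thing_color = 'foo' WHERE thing_event_id = 10000::bigint;
```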

-- 
Michael Fuhr