On Thu, Mar 02, 2006 at 11:37:53PM -0800, Gregory S. Williamson wrote:
> It seems unlikely but maybe try an explicit cast for the thing_id call, e.g.
> explain update xx_thing_event set thing_color='foo' where
> thing_event_id=10000::bigint;
The server is pre-8.0, so it's likely that this is indeed the
problem: before 8.0 the planner wouldn't use an index on a bigint
column when the comparison value was a plain integer literal (which
is treated as int4).  Unfortunately the table shown in the \d output
(xx_thing) isn't the same table as the one in the UPDATE command
(xx_thing_event), so we can only guess that thing_event_id is a
bigint based on what xx_thing looks like.
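A quick way to confirm would be to look at the table the UPDATE
actually touches, e.g. in psql:

  \d xx_thing_event

That would show whether thing_event_id really is a bigint and
whether it has an index on it.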
> It may also be that 5842 rows is enough that the planner decides
> it is faster to do a sequential scan than the busier index scan
> (read index, get data row, versus just reading all the necessary
> pages in one faster sequential scan).
[...]
> QUERY PLAN
> ---------------------------------------------------------------------
> Seq Scan on xx_thing_event (cost=0.00..5842.48 rows=1 width=110)
> Filter: (thing_event_id = 10000)
> (2 rows)
The 5842.48 figure isn't a row count; it's a cost estimate that
takes into account the number of pages and rows in the table.  The
row count estimate is 1 row, and if the cost of a sequential scan is
5842.48 then an index scan to fetch that one row would surely be
faster.
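For what it's worth, the planner's sequential scan cost is roughly
(assuming default settings on a server of that vintage):

  cost ~= (pages in table) * 1.0 + (rows in table) * cpu_tuple_cost

with cpu_tuple_cost defaulting to 0.01, plus a small per-row charge
for evaluating the filter, so the figure grows with the size of the
table even when only one row actually matches.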
If thing_event_id is a bigint and has an index, then casting the
value to bigint should result in an index scan.  Another way would
be to quote the value, which leaves it as an untyped literal that
the parser can resolve to bigint:
UPDATE xx_thing_event SET thing_color = 'foo' WHERE thing_event_id = '10000';
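Or, with the explicit cast Gregory suggested:

UPDATE xx_thing_event SET thing_color = 'foo' WHERE thing_event_id = 10000::bigint;

Either form lets the planner match the bigint index; upgrading to
8.0 or later should remove the need for the workaround.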
--
Michael Fuhr