On Thu, 14 Feb 2008, Michael Lorenz wrote:
> When offsetting up to about 90K records, the EXPLAIN ANALYZE is similar to the following:
> Limit (cost=15357.06..15387.77 rows=20 width=35) (actual time=19.235..19.276 rows=20 loops=1)
>   -> Index Scan using account_objectname on "object" o (cost=0.00..1151102.10 rows=749559 width=35) (actual time=0.086..14.981 rows=10020 loops=1)
It looks like the planner thinks that index scan will have to go through
749559 rows, but there are actually only 10020 there. Is this table
getting ANALYZE'd usefully? VACUUM FULL doesn't do that. If the row
estimates are so far off, that might explain why it thinks the index scan
is going to be so huge it might as well just walk the whole thing.
Actually, VACUUM FULL can be its own problem--you probably want a plain,
regularly scheduled VACUUM instead.
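A quick way to check whether the statistics are current is to look at the
stats views and then run a manual ANALYZE. Just a sketch--the timestamp
columns assume you're on 8.2 or later, and "object" is the table from your
plan:

SELECT relname, last_vacuum, last_autovacuum, last_analyze, last_autoanalyze
FROM pg_stat_user_tables WHERE relname = 'object';

ANALYZE "object";

If last_analyze/last_autoanalyze are old or empty, that 749559 estimate is
likely stale and a fresh ANALYZE should pull it back toward reality.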
> Is the shared_buffers = 2000 setting way too low?
Quite; with 4GB of RAM that could easily be 100,000+ instead. I wouldn't
make that whole jump at once, but 2000 is a mere 16MB of memory
dedicated to the database. Also, be sure to set effective_cache_size to
something that reflects your total memory minus what the applications and
OS use, as it also has an impact here. You've probably got that set
extremely low as well; if this server is mostly for PostgreSQL, a good
starting point would be something like 300000 (=2.4GB).
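In postgresql.conf that might look something like the following (just a
sketch using the numbers above, assuming a mostly dedicated 4GB box; both
values are in 8kB pages here, and shared_buffers only changes after a
restart):

shared_buffers = 100000          # roughly 800MB, up from the current 16MB
effective_cache_size = 300000    # roughly 2.4GB left for OS-level caching

Note that going to a shared_buffers value that large may also require
raising the kernel's shared memory limit (SHMMAX). You can confirm what the
server is actually running with SHOW shared_buffers; and SHOW
effective_cache_size;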
> Are there any other settings I can change to get back to that super-fast
> index scan?
Well, you can try to turn off sequential scans for the query. You can
test if that makes a difference like this:
SET enable_seqscan to off;
EXPLAIN ANALYZE <x>;
SET enable_seqscan to on;
It's also possible to tweak parameters like random_page_cost to similarly
prefer indexes. It's far better, though, to fix the true underlying issues
described above.
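If you do experiment with random_page_cost, something along these lines
works for a single session (a sketch only; the default is 4.0, and values
closer to 1-2 tell the planner random index reads are relatively cheap,
which is reasonable when the data is mostly cached):

SET random_page_cost = 2;
EXPLAIN ANALYZE <x>;
RESET random_page_cost;

That only affects your session; changing it in postgresql.conf makes it
global.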
--
* Greg Smith gsmith@gregsmith.com http://www.gregsmith.com Baltimore, MD