Re: Getting rid of a seq scan in query on a large table - Mailing list pgsql-performance

From Kevin Grittner
Subject Re: Getting rid of a seq scan in query on a large table
Date
Msg-id 4E084DAF020000250003EC54@gw.wicourts.gov
In response to Getting rid of a seq scan in query on a large table  (Jens Hoffrichter <jens.hoffrichter@gmail.com>)
List pgsql-performance
Jens Hoffrichter <jens.hoffrichter@gmail.com> wrote:

> I'm having trouble getting rid of a sequential scan on a table
> with roughly 120k entries in it.

Please post your configuration information and some information
about your hardware and OS.

http://wiki.postgresql.org/wiki/SlowQueryQuestions

Since the table scan went through about 120000 rows in 60 ms, it is
clear that your data is heavily cached, so random_page_cost should
probably be close to or equal to seq_page_cost, and both should
probably be somewhere around 0.1 to 0.5.  You should have
effective_cache_size set to the sum of shared_buffers plus whatever
your OS cache is.  I have sometimes found that I get faster plans
with cpu_tuple_cost increased.
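
For example, you could experiment with session-level settings before
touching postgresql.conf; the specific numbers below are just
assumptions to start from, not recommendations for your hardware:

  SET seq_page_cost = 0.1;
  SET random_page_cost = 0.1;        -- near seq_page_cost for cached data
  SET cpu_tuple_cost = 0.05;         -- assumed value; the default is 0.01
  SET effective_cache_size = '3GB';  -- assumed: shared_buffers + OS cache
  EXPLAIN ANALYZE ... ;              -- re-run the problem query here

Once you find settings that give a good plan, they can go into
postgresql.conf so they apply to every connection.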

If such tuning does cause it to choose the plan you expect, be sure
to time it against what you have been getting.  If the new plan is
slower, you've taken the adjustments too far.
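
While testing in a session, RESET puts any of these back to the
configured value, which makes it easy to compare plans side by side:

  RESET random_page_cost;  -- back to the value from postgresql.conf
  RESET seq_page_cost;
  RESET cpu_tuple_cost;
  RESET effective_cache_size;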

-Kevin
