My current setting is 22G. According to some documentation, effective_cache_size should be set to the OS disk cache plus shared_buffers. In this case, I have 4 quad-core processors with 512K cache (8G) and my shared_buffers is 7680M, so my effective_cache_size should be approximately 16G? Most of our other ETL processes are running fine; however, I'm curious whether I could see a significant performance boost by reducing effective_cache_size.
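One low-risk way to test this, before editing postgresql.conf, is to override the setting for a single session and compare plans. A minimal sketch (the 16GB value is just the estimate from the arithmetic above, and the query is a placeholder for the actual ETL query in question):

```sql
-- Applies only to the current session; the server-wide setting is untouched.
SET effective_cache_size = '16GB';

-- Re-run the problem query and compare the plan/cost to the 22G setting:
EXPLAIN ANALYZE SELECT ...;

-- Revert this session to the server default:
RESET effective_cache_size;
```

If the planner picks the faster plan at the lower value, that supports Tom's suggestion that 22G is overstating how much of the table is likely cached.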
On Wed, 2011-06-08 at 13:03 -0400, Tom Lane wrote:
> Tony Capobianco <tcapobianco@prospectiv.com> writes:
> > Well, this ran much better. However, I'm not sure if it's because of
> > set enable_nestloop = 0, or because I'm executing the query twice in a
> > row, where previous results may be cached. I will try this setting in
> > my code for when this process runs later today and see what the result
> > is.
>
> If the performance differential holds up, you should look at adjusting
> your cost parameters so that the planner isn't so wrong about which one
> is faster. Hacking enable_nestloop is a band-aid, not something you
> want to use in production.
>
> Looking at the values you gave earlier, I wonder whether the
> effective_cache_size setting isn't unreasonably high. That's reducing
> the estimated cost of accessing the large table via indexscans, and
> I'm thinking it reduced it too much.
>
> regards, tom lane
>