Re: significant slow down with various LIMIT

From: Kevin Grittner
Subject: Re: significant slow down with various LIMIT
Date:
Msg-id: 4BC34A340200002500030759@gw.wicourts.gov
In response to: Re: significant slow down with various LIMIT  (norn)
List: pgsql-performance

norn <> wrote:

> I restarted Postgresql with new settings and got no performance
> improvements in this particular query...

The cost settings help the optimizer make good decisions about plan
choice.  I guess I don't have much reason to believe, at this point,
that there is a better plan for it to choose for this query.  Do you
think you see one?  What would that be?  (We might be able to force
that plan and find out if you're right, which can be a valuable
diagnostic step, even if the way it gets forced isn't a
production-quality solution.)
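(For reference, a common way to force an alternate plan for diagnosis is to turn off the planner's preferred technique for just the one session using the enable_* settings. A sketch, with the specific setting and query left as placeholders:

```sql
-- Diagnostic only, not a production fix: disable the scan/join type
-- the planner currently favors so it must pick an alternative plan,
-- then compare actual timings with EXPLAIN ANALYZE.
BEGIN;
SET LOCAL enable_seqscan = off;   -- or enable_nestloop, enable_sort, ...
EXPLAIN ANALYZE
SELECT ... ;                      -- the slow query goes here
ROLLBACK;                         -- SET LOCAL reverts with the transaction
```

Using SET LOCAL inside a transaction keeps the override from leaking into other sessions' plans.)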

Are you able to share the table descriptions?  (That might help us
suggest an index or some such which might help.)

> Do you have ideas how much random_page_cost and seq_page_cost
> should be decreased?

It really depends on how much of your active data set is cached.  If
it is effectively fully cached, you might want to go to 0.01 for
both (or even lower).  Many of our databases perform best with
seq_page_cost = 1 and random_page_cost = 2.  With some, either of
those "extremes" causes some queries to optimize poorly, and we've
had luck with 0.3 and 0.5.  This is one worth testing with your
workload, because you can make some queries faster at the expense of
others; sometimes it comes down to which needs better response time
to keep your users happy.
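(You can try candidate cost settings without touching postgresql.conf by setting them for the current session and re-running the query; the numbers below are illustrative, not recommendations:

```sql
-- Session-only experiment: lower the page cost estimates to reflect
-- a mostly-cached data set, then see whether the plan or timing changes.
SET seq_page_cost = 0.3;
SET random_page_cost = 0.5;
EXPLAIN ANALYZE
SELECT ... ;   -- the problem query goes here
```

Once a combination tests well across your workload, it can be made permanent in postgresql.conf and applied with a reload.)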

-Kevin

