Re: Large # of rows in query extremely slow, not using index - Mailing list pgsql-performance

From: Stephen Crowley
Subject: Re: Large # of rows in query extremely slow, not using index
Date:
Msg-id: 3f71fdf104092316367c5f3052@mail.gmail.com
In response to: Re: Large # of rows in query extremely slow, not using index (Kris Jurka <books@ejurka.com>)
List: pgsql-performance
Thanks for the explanation. So what sort of changes need to be made to
the client/server protocol to fix this problem?



On Thu, 23 Sep 2004 18:22:15 -0500 (EST), Kris Jurka <books@ejurka.com> wrote:
>
>
> On Tue, 14 Sep 2004, Stephen Crowley wrote:
>
> > Problem solved. I set the fetchSize to a reasonable value instead of
> > the default of unlimited in the PreparedStatement and now the query
> > is fast. After some searching it seems this is a common problem. Would
> > it make sense to change the default value to something other than 0 in
> > the JDBC driver?
>
> In the JDBC driver, setting the fetch size to a non-zero value means that
> the query will be run using what the frontend/backend protocol calls a
> named statement.  What this means on the backend is that the planner will
> not be able to use the values from the query parameters to generate the
> optimum query plan and must use generic placeholders and create a generic
> plan.  For this reason we have decided not to default to a non-zero
> fetch size.  The default could be made settable by a URL parameter if
> you think that is really required.
>
> Kris Jurka
>
>
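
For reference, a minimal sketch of the workaround discussed above: setting a non-zero fetch size on the PreparedStatement so the driver fetches rows in batches rather than buffering the entire result set. The connection URL, credentials, table, and column names are placeholders, and the batch size of 1000 is just an example; also note that the driver generally honors the fetch size only when autocommit is disabled.

import java.sql.*;

public class FetchSizeExample {
    public static void main(String[] args) throws SQLException {
        // Placeholder connection details.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/test", "user", "password")) {

            // The fetch size is only honored with autocommit off, so the
            // driver can keep a cursor open across successive fetches.
            conn.setAutoCommit(false);

            try (PreparedStatement stmt = conn.prepareStatement(
                    "SELECT * FROM big_table WHERE category = ?")) {
                stmt.setString(1, "example");

                // Non-zero fetch size: rows arrive in batches of 1000
                // instead of the whole result set being held in memory.
                stmt.setFetchSize(1000);

                try (ResultSet rs = stmt.executeQuery()) {
                    while (rs.next()) {
                        // process each row here
                    }
                }
            }
            conn.commit();
        }
    }
}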
