Re: Database performance problem - Mailing list pgsql-general

From: Tom Lane
Subject: Re: Database performance problem
Date:
Msg-id: 27859.1181688303@sss.pgh.pa.us
In response to: Re: Database performance problem  ("Porell, Chris" <Chris.Porell@ceridian.com>)
List: pgsql-general

"Porell, Chris" <Chris.Porell@ceridian.com> writes:
> NEW NEWS... turning off "enable_seqscan" made the query run in about .25
> seconds!!!

[ squint... ]  It was not the seqscans that were killing you, and
changing just that setting wouldn't have moved the rowcount estimates
one millimeter.  I suppose this made it switch to some differently
shaped plan that happened not to evaluate the functionscan so many
times, but you should realize that what you got there is luck, not a
trustworthy fix.
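
(For reference, this kind of before/after comparison is normally done by flipping the planner setting for the current session only and re-running EXPLAIN ANALYZE; a minimal sketch follows, with placeholder table and function names rather than the actual schema from this thread.)

    -- Placeholder names throughout; the real query isn't quoted in this message.
    EXPLAIN ANALYZE
    SELECT t.*
    FROM   some_table t
           JOIN some_srf() AS f(aaaa) ON f.aaaa = t.id
    WHERE  f.aaaa >= 25;

    SET enable_seqscan = off;    -- session-local planner toggle, not a persistent fix

    EXPLAIN ANALYZE
    SELECT t.*
    FROM   some_table t
           JOIN some_srf() AS f(aaaa) ON f.aaaa = t.id
    WHERE  f.aaaa >= 25;

    RESET enable_seqscan;        -- put the default back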

BTW, on reflection the functionscan in itself shouldn't have been
really expensive, because it would have materialized the function result
in a tuplestore and then just rescanned that N times.  I think the
actual expense came from evaluating the (aaaa >= 25::numeric) filter
condition over and over --- not only is numeric arithmetic pretty slow,
but it might have done it vastly more than 11306 * 4816 times.  We can't
tell from this output how selective the filter was, but there could have
been a lot more than 11306 rows in the raw function output.  You might
want to think about adjusting the function definition so that the
min-value filtering happens inside the function instead of outside.
For that matter, do you really need the value to be numeric rather than
plain integer?
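
A sketch of that suggestion, using made-up names since the original function definition isn't quoted in this thread, might look like:

    -- Hypothetical rewrite: do the min-value filtering inside the set-returning
    -- function and return plain integer rather than numeric.  Names are invented.
    CREATE OR REPLACE FUNCTION filtered_srf(min_val integer)
    RETURNS SETOF integer AS $$
        SELECT aaaa::integer
        FROM   some_source_table
        WHERE  aaaa >= $1        -- min-value filtering happens inside the function
    $$ LANGUAGE sql STABLE;

    -- The caller no longer repeats the numeric comparison on every rescan:
    SELECT * FROM filtered_srf(25);

Whether plain integer is actually appropriate depends on the data, of course.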

            regards, tom lane
