>>> On Thu, Mar 23, 2006 at 11:27 am, in message
<10223.1143134839@sss.pgh.pa.us>,
Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> The run time of the NOT IN query, as measured by elapsed time
>> between SELECT CURRENT_TIMESTAMP executions, increased by 31 ms.
>
> Works out to about 30 microsec per node execution, which seems a bit
> high for modern machines ... and the coarse quantization of the
> CURRENT_TIMESTAMP results is odd too. What platform is this on
> exactly?
This is a smaller machine with a copy of the full production database:
a single 3.6 GHz Xeon with 4 GB RAM running Windows Server 2003. It was
being used to test update scripts before applying them to the production
machines. I stumbled across a costing issue I thought worth posting, and
in the course of gathering data I noticed this time difference that I
didn't understand.
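
For reference, the timing was done roughly like this in psql (a sketch
only; the table and column names below are made up, not the actual
query from our database):

    SELECT CURRENT_TIMESTAMP;
    -- the NOT IN query under test (hypothetical names)
    SELECT count(*) FROM orders o
      WHERE o.id NOT IN (SELECT order_id FROM purged_orders);
    SELECT CURRENT_TIMESTAMP;
    -- with autocommit, each statement is its own transaction, so the
    -- difference between the two CURRENT_TIMESTAMP values approximates
    -- the run time of the statement between them
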
>> What is the best way to see where this time is going?
>
> Profiling with gprof or some such tool might be educational.
Our builds are all done with --enable-debug, but this machine doesn't
even have msys installed. I'll try to put together some way to profile
it on this machine or another one. (It might be easier to move the
database to a Linux machine, confirm the problem there, and then
profile.)
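
If I do get it onto a Linux box, I would expect the profiling run to
look something like this (just a sketch; the paths and flags here are
my assumptions, not something we have tried yet):

    # build with gprof instrumentation (gcc)
    ./configure --enable-debug CFLAGS="-O2 -pg"
    make && make install
    # start the server, run the slow query in one session, then
    # disconnect; the exiting backend writes gmon.out into its working
    # directory (normally the data directory)
    gprof /usr/local/pgsql/bin/postgres $PGDATA/gmon.out > notin-profile.txt
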
Thanks,
-Kevin