Re: Feature Request --- was: PostgreSQL Performance Tuning - Mailing list pgsql-performance

From Carlos Moreno
Subject Re: Feature Request --- was: PostgreSQL Performance Tuning
Date
Msg-id 463A88AF.9040207@mochima.com
In response to Re: Feature Request --- was: PostgreSQL Performance Tuning  (david@lang.hm)
Responses Re: Feature Request --- was: PostgreSQL Performance Tuning  (david@lang.hm)
List pgsql-performance
>> That would be a valid argument if the extra precision came at a
>> considerable cost  (well, or at whatever cost, considerable or not).
>
> the cost I am seeing is the cost of portability (getting similarly
> accurate info from all the different operating systems)

Fair enough --- as I mentioned, I was arguing under the premise that
there would be a fairly similar solution for all the Unix flavours (and
hopefully an equivalent --- and equivalently simple --- one for Windows).
Whether or not that premise actually holds, I wouldn't bet either way.
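
Just to make that premise concrete, here is the kind of thing I have in
mind --- a rough sketch only, with helper names of my own invention,
nothing that exists anywhere in PG.  The two measurements below are
plain POSIX, so they should look essentially the same on Linux, the BSDs,
Solaris, and so on;  Windows would need its own branch (something like
QueryPerformanceCounter / GetProcessTimes), which is exactly the
portability cost you're pointing at:

#include <time.h>
#include <sys/time.h>
#include <sys/resource.h>

/* Wall-clock seconds from a monotonic clock (immune to clock adjustments). */
static double
wall_seconds(void)
{
    struct timespec ts;

    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

/* CPU seconds actually charged to this process (user + system). */
static double
cpu_seconds(void)
{
    struct rusage ru;

    getrusage(RUSAGE_SELF, &ru);
    return ru.ru_utime.tv_sec + ru.ru_utime.tv_usec / 1e6
         + ru.ru_stime.tv_sec + ru.ru_stime.tv_usec / 1e6;
}

Both calls have been around on the Unix side for ages --- the Windows
side is where the "equivalently simple" hope gets tested.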

>> Part of my claim is that measuring real-time you could get an
>> error like this or even a hundred times this!!   Most of the time
>> you wouldn't, and definitely if the user is careful it would not
>> happen --- but it *could* happen!!!  (and when I say could, I
>> really mean:  trust me, I have actually seen it happen)
>
> if you have errors of several orders of magnitude in the number of
> loops it can run in a given time period then you don't have something
> that you can measure to any accuracy (and it wouldn't matter anyway,
> if your loops are that variable, your code execution would be as well)

Not necessarily --- operating conditions may change drastically from
one second to the next;  that does not mean your system is useless,
only that the measuring mechanism is far too vulnerable to the
particular operating conditions at the exact moment it runs.
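
To make that concrete --- and again this is only a sketch of mine, not a
proposal for actual PG code --- the obvious defence is to take several
short samples of the calibration loop and keep the median, so that one
unlucky moment skews a sample or two instead of the whole estimate:

#include <stdlib.h>
#include <time.h>

#define SAMPLES    9
#define LOOP_ITERS 10000000L

static volatile long sink;      /* keeps the compiler from dropping the loop */

/* Wall-clock time of one run of a fixed busy-loop. */
static double
one_sample(void)
{
    struct timespec t0, t1;
    long        i;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (i = 0; i < LOOP_ITERS; i++)
        sink += i;
    clock_gettime(CLOCK_MONOTONIC, &t1);

    return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
}

static int
cmp_double(const void *a, const void *b)
{
    double      d = *(const double *) a - *(const double *) b;

    return (d > 0) - (d < 0);
}

/* Median of several samples:  a momentary burst of other activity ruins
 * at most a few of them, not the estimate itself. */
static double
calibrate(void)
{
    double      s[SAMPLES];
    int         i;

    for (i = 0; i < SAMPLES; i++)
        s[i] = one_sample();
    qsort(s, SAMPLES, sizeof(double), cmp_double);
    return s[SAMPLES / 2];
}

Nine samples of a ten-million-iteration loop is still only a fraction of
a second on anything recent, so the extra robustness comes essentially
for free.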

I'm not sure whether it was intentional, but you bring up an interesting
issue --- in any case, your comment made me drastically rethink my
whole argument:  do we *want* to measure the exact speed, or rather
the effective speed under normal operating conditions on the target
machine?

I know the latter is almost impossible --- we're talking about estimating
a parameter of a random process (and we need to do it in a short period
of time) ...  But the argument goes more or less like this:  if you have a
machine that runs at  1000 MIPS, but it's usually busy running things
that on average consume 500 of those 1000 MIPS, would we want PG's
configuration file to be derived from 1000 or from 500 MIPS???
After all, the CPU is, as far as PostgreSQL will be able to see, 500 MIPS
fast, *not* 1000.
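
Purely to illustrate the distinction (the code and the numbers are mine,
not a proposal):  time the same fixed loop both in CPU time and in
wall-clock time while the machine goes about its usual business.  The
CPU-time figure is roughly the "1000 MIPS" the hardware could give PG if
it had the box to itself;  the wall-clock figure is roughly the
"500 MIPS" PG will actually experience:

#include <stdio.h>
#include <time.h>
#include <sys/time.h>
#include <sys/resource.h>

#define LOOP_ITERS 100000000L

static volatile long sink;

int
main(void)
{
    struct timespec w0, w1;
    struct rusage   r0, r1;
    double      wall, cpu;
    long        i;

    clock_gettime(CLOCK_MONOTONIC, &w0);
    getrusage(RUSAGE_SELF, &r0);

    for (i = 0; i < LOOP_ITERS; i++)
        sink += i;

    getrusage(RUSAGE_SELF, &r1);
    clock_gettime(CLOCK_MONOTONIC, &w1);

    wall = (w1.tv_sec - w0.tv_sec) + (w1.tv_nsec - w0.tv_nsec) / 1e9;
    cpu = (r1.ru_utime.tv_sec - r0.ru_utime.tv_sec)
        + (r1.ru_utime.tv_usec - r0.ru_utime.tv_usec) / 1e6;

    /* "Raw" speed:  what PG would get with the CPU to itself. */
    printf("CPU-time rate:  %.1f M iterations/s\n", LOOP_ITERS / cpu / 1e6);
    /* "Effective" speed:  what PG actually gets on this machine, right now. */
    printf("wall-time rate: %.1f M iterations/s\n", LOOP_ITERS / wall / 1e6);

    return 0;
}

On an idle box the two numbers come out nearly identical;  on a box that
is already half busy the wall-clock rate drops accordingly --- and it is
that second number, not the first, that I would want the configuration
to be based on.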

I think I'd better stop, if we want to have any hope that the PG team will
ever actually implement this feature (or something like it) ...  We're
probably just scaring them!!  :-)

Carlos
--

