On Thu, 3 May 2007, Carlos Moreno wrote:
>> I don't think it's that hard to get system time to a reasonable level of
>> accuracy (if this config tuner needs to run for a minute or two to
>> generate numbers, that's acceptable; it's only run once)
>>
>> but I don't think that the results are really that critical.
>
> Still --- this does not provide a valid argument against my claim.
>
> Ok, we don't need precision --- but do we *need* to have less
> precision?? I mean, you seem to be proposing that we deliberately
> go out of our way to discard a solution with higher precision and
> choose the one with lower precision --- just because we do not
> have a critical requirement for the extra precision.
>
> That would be a valid argument if the extra precision came at a
> considerable cost (well, or at whatever cost, considerable or not).
the cost I am seeing is the cost of portability (getting similarly
accurate info from all the different operating systems)
> But my point is still that obtaining the time in the right ballpark
> and obtaining the time with good precision are two things that
> have, from any conceivable point of view (programming effort,
> resource consumption when executing it, etc. etc.), the exact
> same cost --- why not pick the one that gives us the better results?
>
> Mostly when you consider that:
>
>> I'd argue that we don't even care about 1,000,000 times per second vs
>> 1,100,000 times per second, what we care about is 1,000,000 times per
>> second vs 100,000 times per second
>
> Part of my claim is that when measuring real time you could get an
> error like this or even a hundred times this!! Most of the time
> you wouldn't, and definitely if the user is careful it would not
> happen --- but it *could* happen!!! (and when I say could, I
> really mean: trust me, I have actually seen it happen)
if you have errors of several orders of magnitude in the number of loops
it can run in a given time period, then you don't have something that you
can measure to any accuracy (and it wouldn't matter anyway: if your loops
are that variable, your code execution would be as well)
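
to make the disagreement concrete, here's a rough, untested sketch of
the kind of calibration loop we're talking about, timed both ways. I'm
guessing at the relevant POSIX calls: gettimeofday() for wall-clock
time, getrusage() for CPU time actually consumed. on an unloaded
machine the two rates roughly agree; on a loaded one the wall-clock
figure drops while the CPU-time figure stays put.

  /* calibration-loop sketch (POSIX assumed, untested) */
  #include <stdio.h>
  #include <sys/time.h>
  #include <sys/resource.h>

  static double wall_seconds(void)
  {
      struct timeval tv;
      gettimeofday(&tv, NULL);
      return tv.tv_sec + tv.tv_usec / 1e6;
  }

  static double cpu_seconds(void)
  {
      struct rusage ru;                    /* user CPU time only, for brevity */
      getrusage(RUSAGE_SELF, &ru);
      return ru.ru_utime.tv_sec + ru.ru_utime.tv_usec / 1e6;
  }

  int main(void)
  {
      volatile long sink = 0;              /* keeps the loop from being optimized away */
      long loops = 0;
      double w0 = wall_seconds(), c0 = cpu_seconds();

      while (wall_seconds() - w0 < 1.0) {  /* run for ~1s of wall time */
          for (int i = 0; i < 1000; i++)   /* batch the work so the clock */
              sink += i;                   /* isn't read every iteration  */
          loops += 1000;
      }

      double w = wall_seconds() - w0, c = cpu_seconds() - c0;
      printf("%ld loops: %.0f/sec by wall clock, %.0f/sec by CPU time\n",
             loops, loops / w, loops / c);
      return 0;
  }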
> Why not just use an *extremely simple* solution that gets
> information from the kernel about the actual CPU time that
> has been used???
>
> Of course, this rests on the premise that on all platforms there
> is a simple solution like the one on Linux (the exact name
> of the API function still eludes me, but I have used it in the past,
> and I recall that it was just three or five lines of code).
I think the problem is that it's a _different_ 3-5 lines of code for each
OS.
if I'm wrong and it's the same for the different operating systems, then I
agree that we should use the most accurate clock we can get. I just don't
think we have that.
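
for illustration, here's roughly what I mean by a different 3-5 lines per
OS (a sketch, untested --- getrusage(2) is my guess at the POSIX/Linux
call Carlos has in mind; Windows spells the same idea GetProcessTimes()):

  /* process CPU time, one implementation per platform (sketch) */
  #ifdef _WIN32
  #include <windows.h>

  double cpu_seconds(void)
  {
      FILETIME creation, exit_time, kernel, user;
      GetProcessTimes(GetCurrentProcess(),
                      &creation, &exit_time, &kernel, &user);
      ULARGE_INTEGER u;
      u.LowPart  = user.dwLowDateTime;     /* user CPU time only */
      u.HighPart = user.dwHighDateTime;
      return u.QuadPart / 1e7;             /* FILETIME ticks are 100 ns */
  }
  #else                                    /* POSIX: Linux, BSD, Solaris, ... */
  #include <sys/resource.h>

  double cpu_seconds(void)
  {
      struct rusage ru;
      getrusage(RUSAGE_SELF, &ru);
      return ru.ru_utime.tv_sec + ru.ru_utime.tv_usec / 1e6;
  }
  #endif

the POSIX branch is at least nominally the same across the Unix-like
systems, but the clock resolution behind it still varies from OS to OS.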
David Lang