Re: Feature Request --- was: PostgreSQL Performance Tuning - Mailing list pgsql-performance

From david@lang.hm
Subject Re: Feature Request --- was: PostgreSQL Performance Tuning
Date
Msg-id Pine.LNX.4.64.0705031544120.26172@asgard.lang.hm
In response to Re: Feature Request --- was: PostgreSQL Performance Tuning  (Carlos Moreno <moreno_pg@mochima.com>)
Responses Re: Feature Request --- was: PostgreSQL Performance Tuning
List pgsql-performance
On Thu, 3 May 2007, Carlos Moreno wrote:

>> >  been just being naive) --- I can't remember the exact name, but I
>> >  remember using (on some Linux flavor) an API call that fills a
>> >  struct with data on the resource usage for the process, including
>> >  CPU time; I assume measured with precision (that is, immune to
>> >  issues of other applications running simultaneously, or other
>> >  random events causing the measurement to be polluted by random
>> >  noise).
>>
>>  since what we are looking for here is a reasonable first approximation,
>>  not perfection, I don't think we should worry much about pollution of
>>  the value.
>
> Well, it's not as much worrying as it is choosing the better among two
> equally difficult options --- what I mean is that obtaining the *real*
> resource usage as reported by the kernel is, from what I remember,
> equally hard as obtaining the time with milli- or micro-second
> resolution.
>
> So, why not choose this option? (In fact, if we wanted to do it "the
> scripted way", I guess we could still use "time test_cpuspeed_loop"
> and read the report from the time command, specifying CPU time and
> system-call time.)

I don't think it's that hard to get the system time to a reasonable level
(if this config tuner needs to run for a minute or two to generate its
numbers, that's acceptable --- it's only run once).

But I don't think the results are really that critical.

Do we really care if the loop runs 1,000,000 times per second or 1,001,000
times per second? I'd argue that we don't even care about 1,000,000 vs
1,100,000 times per second; what we care about is 1,000,000 vs 100,000
times per second. If you do a 10-second test and it runs for 11 seconds,
you are still in the right ballpark (i.e. close enough that you really
need to move to the stage-2 tuning to figure out the exact values).

>> >  As for 32/64 bit --- doesn't PG already know that information?  I mean,
>> >  ./configure does gather that information --- does it not?
>>
>>  we're not talking about compiling PG, we're talking about getting sane
>>  defaults for a pre-compiled binary. if it's a 32-bit binary assume a
>>  32-bit cpu, if it's a 64-bit binary assume a 64-bit cpu (all hardcoded
>>  into the binary at compile time)
>
> Right --- I was thinking that configure, which as I understand generates
> the Makefiles to compile applications including initdb, could plug those
> values in as compile-time constants, so that initdb (or a hypothetical
> additional utility that would do what we're discussing in this thread)
> already has them. Anyway, yes, that would go for the binaries as well ---
> we're pretty much saying the same thing  :-)

I'm thinking along the lines of a script or pre-compiled binary (_not_
initdb) that you could run to generate a new config file with values that
are within about an order of magnitude of being correct.

David Lang
