Re: performance-test farm - Mailing list pgsql-hackers

From Andrew Dunstan
Subject Re: performance-test farm
Date
Msg-id 4DCB1339.1080305@dunslane.net
In response to Re: performance-test farm  ("Kevin Grittner" <Kevin.Grittner@wicourts.gov>)
Responses Re: performance-test farm
List pgsql-hackers

On 05/11/2011 06:21 PM, Kevin Grittner wrote:
> Tomas Vondra <tv@fuzzy.cz> wrote:
>> On 11.5.2011 23:41, Kevin Grittner wrote:
>>> Tomas Vondra <tv@fuzzy.cz> wrote:


First up, you guys should be aware that Greg Smith at least is working 
on this. Let's not duplicate effort.


>>>
>>>> 1) Is there something that might serve as a model?
>>>
>>> I've been assuming that we would use the PostgreSQL Buildfarm as
>>> a model.
>>>
>>> http://buildfarm.postgresql.org/
>> Yes, I was thinking about that too, but
>>
>> 1) A buildfarm used for regular building / unit testing IMHO may
>>     not be the right place to do performance testing (not sure how
>>     isolated the benchmarks can be etc.).
>
> I'm not saying that we should use the existing buildfarm, or expect
> current buildfarm machines to support this; just that the pattern of
> people volunteering hardware in a similar way would be good.


Some buildfarm members might well be suitable for it.

I recently added support for running optional steps, and made the SCM 
module totally generic. Soon I'm hoping to provide for more radical 
extensibility by having addon modules, which will register themselves 
with the framework and then have their tests run. I'm currently working 
on an API for such modules. This was inspired by Mike Fowler's work on a 
module to test JDBC builds, which his buildfarm member is currently 
running; see, for example, 
<http://www.pgbuildfarm.org/cgi-bin/show_log.pl?nm=piapiac&dt=2011-05-11%2000%3A00%3A02>. 
Obvious candidate modules might be other client libraries (e.g. Perl's 
DBD::Pg), non-committed patches, non-standard tests, and performance 
testing.
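
To give a feel for the shape I have in mind, here is a rough sketch of 
such a module (illustrative only -- the hook mechanism and names like 
register_module_hooks are not settled): a module would be a small Perl 
package that the client loads, whose setup() registers callbacks for 
the steps it wants to run.

    package PGBuild::Modules::PerfTest;   # hypothetical module name

    use strict;
    use warnings;

    # the steps this module wants to hook into
    my $hooks = { 'check' => \&check };

    sub setup
    {
        my ($buildroot, $branch, $conf, $pgsql) = @_;

        my $self = {
            buildroot => $buildroot,
            pgbranch  => $branch,
            bfconf    => $conf,
            pgsql     => $pgsql,
        };
        bless $self, __PACKAGE__;

        # tell the framework which callbacks belong to this module
        main::register_module_hooks($self, $hooks);
        return;
    }

    sub check
    {
        my $self = shift;

        # run the module's own tests here; the framework would
        # capture the output and report it like any other step
        print "running module tests for branch $self->{pgbranch}\n";
        return;
    }

    1;

The point is that the client itself wouldn't need to know anything 
about JDBC, DBD::Pg, or benchmarks; it would just run whatever hooks 
the registered modules hand it.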

>> 2) Not sure how open this might be for the developers (if they
>>     could issue their own builds etc.).
>
> I haven't done it, but I understand that you can create a "local"
> buildfarm instance which isn't reporting its results.  Again,
> something similar might be good.


You can certainly create a client that doesn't report its results (just 
run it in --test mode). And you can create your own private server 
(that's been done by at least two organizations I know of).

But to test your own stuff, what we really need is a module to run 
non-committed patches, I think (see above).

The buildfarm client does have a mode (--from-source) that lets you 
test your own code without reporting the results, but I don't see that 
it would be useful here.

>
>> 3) If this should be part of the current buildfarm, then I'm
>>     afraid I can't do much about it.
>

Sure you can. Contribute to the efforts mentioned above.


> Not part of the current buildfarm; just using a similar overall
> pattern.  Others may have different ideas; I'm just speaking for
> myself here about what seems like a good idea to me.


The buildfarm server is a pretty generic reporting framework. Sure, we 
could build another, but it seems a bit redundant.

>
>>>> 2) How would you use it? What procedure would you expect?
>>>
>>> People who had suitable test environments could sign up to
>>> periodically build and performance test using the predetermined
>>> test suite, and report results back for a consolidated status
>>> display. That would spot regressions.
>> So it would be a 'distributed farm'? Not sure if that's a good
>> idea, as to get reliable benchmark results you need a proper
>> environment (not influenced by other jobs, hardware changes, etc.).


You are not going to get a useful performance farm except in a 
distributed way. We don't own any labs, nor have we any way of 
assembling the dozens or hundreds of machines to represent the spectrum 
of platforms that we want tested in one spot. Knowing that we have 
suddenly caused a performance regression on, say, FreeBSD 8.1 running on 
AMD64, is a critical requirement.

>
> Yeah, accurate benchmarking is not easy.  We would have to make sure
> people understood that the machine should be dedicated to the
> benchmark while it is running, which is not a requirement for the
> buildfarm.  Maybe provide some way to annotate HW or OS changes?
> So if one machine goes to a new kernel and performance changes
> radically, but other machines which didn't change their kernel
> continue on a level graph, we'd know to suspect the kernel rather
> than some change in PostgreSQL code.
>


Indeed, there are lots of moving pieces.
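
To make the annotation idea concrete, here is a purely illustrative 
sketch (not an existing format) of a report record that carries 
environment notes alongside the numbers, using a JSON encoder such as 
JSON::PP; every field name and value below is made up:

    use strict;
    use warnings;
    use JSON::PP;

    # hypothetical report record -- all names and numbers invented
    my $report = {
        animal   => 'examplefish',            # member name
        snapshot => '2011-05-11 00:00:02',    # when the run started
        branch   => 'HEAD',
        results  => { pgbench_tps => 1234.5 },
        annotations => [
            { date => '2011-05-01',
              note => 'kernel upgrade; expect a step in the graph' },
        ],
    };

    print JSON::PP->new->pretty->canonical->encode($report);

Given something like that, the server could draw markers on the graphs 
wherever an annotation falls, so a kernel change doesn't get mistaken 
for a PostgreSQL regression.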

cheers

andrew

