Paul Thomas wrote:
>
> On 11/01/2004 22:40 Oliver Jowett wrote:
>
>> [snip]
>> I'm still in favour of an "undefined behaviour" interpretation here.
>> There's not much benefit to application code in nailing down one
>> behaviour or the other, and leaving it undefined gives the driver the
>> flexibility to do whichever is a better implementation for the DB in
>> question.
>
>
> Having followed this very interesting thread, I'm still wondering
> exactly how much measurable improvement could be achieved. I read an
> article on IBM developerWorks (sorry can't remember the URL) which
> stated that, on modern VMs, things like object creation aren't the
> performance bogeys that they once were. So I'm thinking that before we
> make a decision about committing to a change which might break someone's
> app, is there any way by which we could measure the effects of the
> proposed change?
The problem is that it's very specific to the application workload; what
case do we measure?
The reason I'm interested in doing this is not the direct CPU
overhead of object creation (none of the JDBC code is on our main
execution path), but the effect that object creation has on GC interval
and pause. We're running a low-latency app where an extra 50ms pause due
to GC has a large impact on our latency figures .. so the less garbage
we generate in the first place, the better. We could farm the JDBC work
out to a separate VM, but that gets complex quite fast.
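
As for measuring it, something rough like the sketch below is probably
enough for a first-order comparison (untested; the driver class and query
are placeholders, and heap-delta numbers are only meaningful on an
otherwise quiet VM). Run the same fetch loop against a stock build and a
patched build and compare the heap growth, then watch -verbose:gc output
during a longer run to see the effect on collection frequency.

import java.sql.*;

public class FetchAllocationEstimate {

    // Rough measure of used heap; only meaningful after forcing
    // collections on an otherwise idle VM.
    static long usedHeap() {
        Runtime rt = Runtime.getRuntime();
        return rt.totalMemory() - rt.freeMemory();
    }

    public static void main(String[] args) throws Exception {
        Class.forName("org.postgresql.Driver");
        Connection conn = DriverManager.getConnection(args[0]); // JDBC URL

        // Placeholder query -- substitute whatever workload is of interest.
        PreparedStatement ps =
            conn.prepareStatement("SELECT data FROM test_table");

        System.gc();
        System.gc();
        long before = usedHeap();

        ResultSet rs = ps.executeQuery();
        long rows = 0;
        while (rs.next()) {
            rs.getBytes(1); // the per-field access whose copying is at issue
            ++rows;
        }
        rs.close();

        long after = usedHeap();
        System.out.println(rows + " rows, ~" + (after - before)
                           + " bytes of heap growth during the fetch loop");
        conn.close();
    }
}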
Aside from the behavioural change itself, the change I'm proposing has no
real downside that I can see -- it's not a CPU-vs-memory tradeoff; we're
just generating fewer intermediate copies altogether. It's more of a
code-complexity-vs-runtime-cost tradeoff, I suppose.
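
To be concrete about what I mean by "fewer intermediate copies", the
allocation pattern looks something like this (illustration only -- the
names and structure are made up, not the actual driver internals):

public class CopyIllustration {

    // With defensive copying: every field access carves a fresh array out
    // of the row buffer, and that array becomes garbage as soon as the
    // caller is done with it.
    static byte[] fieldWithCopy(byte[] rowBuffer, int offset, int len) {
        byte[] copy = new byte[len];
        System.arraycopy(rowBuffer, offset, copy, 0, len);
        return copy;
    }

    // Without copying: hand back the bytes we already hold and treat
    // modification by the caller as undefined behaviour. No allocation,
    // no arraycopy, no extra garbage.
    static byte[] fieldWithoutCopy(byte[] fieldBuffer) {
        return fieldBuffer;
    }

    public static void main(String[] args) {
        byte[] row = "some row data".getBytes();

        // A million field accesses: the first loop creates a million
        // short-lived arrays for the collector to deal with, the second
        // creates none.
        for (int i = 0; i < 1000000; ++i) fieldWithCopy(row, 0, 4);
        for (int i = 0; i < 1000000; ++i) fieldWithoutCopy(row);
    }
}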
-O