Re: Re: JDBC Performance - Mailing list pgsql-general

From Gunnar Rønning
Subject Re: Re: JDBC Performance
Date
Msg-id x6og17c5t2.fsf@thor.candleweb.no
In response to Re: Re: JDBC Performance  ("Keith L. Musser" <kmusser@idisys.com>)
List pgsql-general
"Keith L. Musser" <kmusser@idisys.com> writes:

> I'm thinking caching byte arrays on a per-connection basis is the way to
> go.
>
> However, how much difference do you expect this to make?  How many byte
> arrays to you allocate and destroy per SQL statement?  And how big are
> the arrays?  How much memory will they occupy per open connection?
>

The current algorithm is greedy and never frees anything, so the number of
arrays that get cached depends on the size of the result set. A result set
requires one byte array for every value in every column.
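A greedy per-connection cache of this kind might look roughly like the sketch below. The class and method names are hypothetical, not taken from the actual driver patch; the point is only that arrays are pooled by length and never released back to the garbage collector, so the pool grows to match the largest result set seen on the connection.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of a greedy per-connection byte-array cache.
// Arrays are pooled by length and never freed, so the cache grows to
// fit the largest result set handled on this connection.
public class ByteArrayCache {
    // Pool of reusable arrays, keyed by array length.
    private final Map<Integer, List<byte[]>> pool = new HashMap<>();

    // Hand out a cached array of the requested size, or allocate a new one.
    public byte[] acquire(int size) {
        List<byte[]> bucket = pool.get(size);
        if (bucket != null && !bucket.isEmpty()) {
            return bucket.remove(bucket.size() - 1);
        }
        return new byte[size];
    }

    // Return an array to the pool instead of letting it be collected.
    public void release(byte[] array) {
        pool.computeIfAbsent(array.length, k -> new ArrayList<>()).add(array);
    }

    public static void main(String[] args) {
        ByteArrayCache cache = new ByteArrayCache();
        byte[] a = cache.acquire(128);
        cache.release(a);
        byte[] b = cache.acquire(128);
        // The released array is handed out again rather than reallocated.
        System.out.println(a == b); // prints "true"
    }
}
```

The memory cost per open connection is then bounded by the widest and tallest result set that connection has ever produced, which matches the "greedy, never frees" behaviour described above.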

> Will this really make a big difference?

My web application improved its throughput/execution speed by 50%. I think
that is quite good, considering that JDBC is not the only bottleneck of my
application. I also saw a complete shift in where the JDBC part of the
application spent its time. Earlier the most significant part was the
allocation of byte arrays; in the new implementation this part is reduced
dramatically, and the new bottlenecks are byte-to-char conversions (done when
you retrieve values from the result set) and reading data from the
database. I don't think the reading can be made much faster; maybe cursored
results could help in situations where you don't actually need the
entire result set. But cursors might also add overhead for other queries,
and I know too little about cursors in Postgres yet to make any qualified
statement on that.
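The idea behind cursored results is that rows are fetched in batches instead of materializing the whole result set at once, so only one batch needs byte arrays at any given time. A real implementation would use a server-side cursor (or, in JDBC terms, a statement fetch size); this plain-Java illustration, with made-up names, just shows the batching pattern and the memory trade-off:

```java
import java.util.Arrays;
import java.util.List;

// Illustration only: deliver rows in fixed-size batches, the way a
// cursored result set would, instead of holding every row at once.
public class BatchedFetch {
    // Return the batch starting at 'offset', at most 'fetchSize' rows long.
    static List<String> fetchBatch(List<String> allRows, int offset, int fetchSize) {
        int end = Math.min(offset + fetchSize, allRows.size());
        return allRows.subList(offset, end);
    }

    public static void main(String[] args) {
        List<String> rows = Arrays.asList("r1", "r2", "r3", "r4", "r5");
        int fetchSize = 2;
        for (int offset = 0; offset < rows.size(); offset += fetchSize) {
            // Only 'fetchSize' rows need buffers at any one time.
            System.out.println(fetchBatch(rows, offset, fetchSize));
        }
    }
}
```

The overhead concern in the paragraph above is the flip side of this: each batch costs an extra round trip to the server, which hurts queries that would have fit comfortably in a single result set anyway.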

Regards,

    Gunnar
