On Wed, Jul 11, 2012 at 4:46 AM, Andy Halsall <halsall_andy@hotmail.com> wrote:
> Version.....
> PostgreSQL 9.1.2 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.5.2,
> 64-bit
>
> Server.....
> Server: RX800 S2 (8 x Xeon 7040 3GHz dual-core processors, 32GB memory)
> O/S: SLES11 SP1 64-bit
I don't really know how to compare these, but I've got:
Intel(R) Core(TM)2 CPU 6400 @ 2.13GHz
>
> Scenario.....
> Legacy application with bespoke but very efficient interface to its
> persistent data. We're looking to replace the application and use
> PostgreSQL to hold the data. Performance measures on the legacy application
> on the same server show that it can perform a particular read operation in
> ~215 microseconds (averaged), which includes processing the request and
> getting the result out.
>
> Question......
> I've written an IMMUTABLE stored procedure that takes no parameters and
> returns a fixed value, to try to determine the round-trip overhead of a
> query to PostgreSQL. The call to the sp is made using libpq. We're all
> local and using UNIX domain sockets.
>
> Client measures suggest ~150-200 microseconds to call the sp and get the
> answer back.
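(For readers without the rest of the thread: a no-op function matching the
description above might look like the minimal sketch below. Only the name
sp_select_no_op appears in the thread; the body here is a guess.)

CREATE OR REPLACE FUNCTION sp_select_no_op() RETURNS integer AS $$
BEGIN
    -- No table access; return a fixed value so the timing measures
    -- round-trip overhead rather than any query work.
    RETURN 1;
END;
$$ LANGUAGE plpgsql IMMUTABLE;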
Using the plpgsql function you provided down-thread:
cat dummy2.sql
select sp_select_no_op();
pgbench -f dummy2.sql -T300
tps = 18703.309132 (excluding connections establishing)
So that comes out to 53.5 microseconds/call.
If I use a prepared statement:
pgbench -M prepared -f dummy2.sql -T300
tps = 30289.908144 (excluding connections establishing)
or 33 us/call.
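Since your client uses libpq, it is worth checking whether it goes through
protocol-level prepared statements, which is what -M prepared exercises. A
minimal sketch of that path (the connection string, loop count, and error
handling are illustrative only; the function name is from above):

#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    /* Connect over the local UNIX domain socket (default host). */
    PGconn *conn = PQconnectdb("dbname=postgres");
    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        return 1;
    }

    /* Parse and plan the statement once... */
    PGresult *res = PQprepare(conn, "noop", "SELECT sp_select_no_op()",
                              0, NULL);
    if (PQresultStatus(res) != PGRES_COMMAND_OK)
        fprintf(stderr, "prepare failed: %s", PQerrorMessage(conn));
    PQclear(res);

    /* ...then execute it repeatedly without re-parsing. */
    for (int i = 0; i < 100000; i++)
    {
        res = PQexecPrepared(conn, "noop", 0, NULL, NULL, NULL, 0);
        if (PQresultStatus(res) != PGRES_TUPLES_OK)
            fprintf(stderr, "query failed: %s", PQerrorMessage(conn));
        PQclear(res);
    }

    PQfinish(conn);
    return 0;
}

A client that sends the full query text on every call (PQexec) pays the
parse overhead each time, which is consistent with the gap between the two
pgbench numbers above.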
So unless your server is a lot slower than mine, I think your client
may be the bottleneck. What is your client program? What does "top"
show as the relative CPU usage of your client program vs. the "postgres
... [local]" process to which it is connected?
Cheers,
Jeff