Thread: query overhead

query overhead

From
Andy Halsall
Date:
Version.....
PostgreSQL 9.1.2 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.5.2, 64-bit
 
Server.....
Server: RX800 S2 (8 x Xeon 7040 3GHz dual-core processors, 32GB memory)
O/S: SLES11 SP1 64-bit
 
Scenario.....
Legacy application with bespoke but very efficient interface to its persistent data. We're looking to replace the application and use
PostgreSQL to hold the data. Performance measurements on the legacy application on the same server show that it can perform a particular read operation in ~215 microseconds (averaged), which includes processing the request and getting the result out.
 
Question......
I've written an Immutable stored procedure that takes no parameters and returns a fixed value to try and determine the round trip overhead of a query to PostgreSQL. Call to sp is made using libpq. We're all local and using UNIX domain sockets.
 
Client measures are suggesting ~150-200 microseconds to call sp and get the answer back
 
ping to loopback returns in ~20 microseconds (I assume domain sockets are equivalent).
 
An strace of the server process, I think, confirms the time at the server to be ~150-200 microsecs. For example:
11:17:50.109936 recvfrom(6, "P\0\0\0'\0SELECT * FROM sp_select_no"..., 8192, 0, NULL, NULL) = 77 <0.000018>
11:17:50.110098 sendto(6, "1\0\0\0\0042\0\0\0\4T\0\0\0(\0\1sp_select_no_op"..., 86, 0, NULL, 0) = 86 <0.000034>

So it looks like a local no-op overhead of at least 150 microseconds, which would leave us struggling.
Could someone please let me know if this is usual and if so where the time's spent?
Short of getting a faster server, is there anything I can do to influence this?
 
Thanks,
Andy

Re: query overhead

From
Tom Lane
Date:
Andy Halsall <halsall_andy@hotmail.com> writes:
> I've written an Immutable stored procedure that takes no parameters and returns a fixed value to try and determine
> the round trip overhead of a query to PostgreSQL. Call to sp is made using libpq. We're all local and using UNIX
> domain sockets.

> Client measures are suggesting ~150-200 microseconds to call sp and get the answer back

That doesn't sound out of line for what you're doing, which appears to
include parsing/planning a SELECT command.  Some of that overhead could
probably be avoided by using a prepared statement instead of a plain
query.  Or you could try using the "fast path" API (see libpq's PQfn)
to invoke the function directly without any SQL query involved.
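
For what it's worth, a rough sketch of the fast-path route might look like the following (error handling abbreviated; looking the function's OID up through a regprocedure cast is just one way to find it):

    #include <stdio.h>
    #include <stdlib.h>
    #include <libpq-fe.h>

    /* Sketch only.  sp_select_no_op() is the no-op function from the
       original post. */
    int main(void)
    {
        PGconn *conn = PQconnectdb("");     /* defaults: local UNIX socket */
        if (PQstatus(conn) != CONNECTION_OK)
            return 1;

        /* Look the function's OID up once, at startup */
        PGresult *res = PQexec(conn,
            "SELECT 'sp_select_no_op()'::regprocedure::oid");
        int fn_oid = atoi(PQgetvalue(res, 0, 0));
        PQclear(res);

        /* Fast-path call: no SQL text is parsed or planned per call */
        int result_val = 0;
        int result_len = 0;
        res = PQfn(conn, fn_oid, &result_val, &result_len,
                   1 /* expect a 4-byte integer result */, NULL, 0);
        if (PQresultStatus(res) == PGRES_COMMAND_OK)
            printf("sp_select_no_op() = %d\n", result_val);
        PQclear(res);

        PQfinish(conn);
        return 0;
    }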

Really, however, the way to make things fly is to get rid of the round
trip overhead in the first place by migrating more of your application
logic into the stored procedure.  I realize that that might require
pretty significant rewrites, but if you can't tolerate per-query
overheads in the 100+ usec range, that's where you're going to end up.

If you don't like any of those answers, maybe Postgres isn't the
solution for you.  You might consider an embeddable database such
as SQLite.

            regards, tom lane

Re: query overhead

From
Craig Ringer
Date:
On 07/11/2012 07:46 PM, Andy Halsall wrote:

I've written an Immutable stored procedure that takes no parameters and returns a fixed value to try and determine the round trip overhead of a query to PostgreSQL. Call to sp is made using libpq. We're all local and using UNIX domain sockets.

PL/PgSQL or SQL stored proc? There's a definite calling overhead for PL/PgSQL compared to plain SQL functions. SQL functions in turn cost more than a direct statement.

These costs aren't big. They're massively outweighed by any kind of disk access or any non-trivial query. They start to add up if you have a lot of procs that wrap a simple "SELECT * FROM x WHERE x.id = $1" though.
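
If you want to see the difference on your own hardware, one rough approach (function names invented here) is to create the variants side by side and time the same client loop against each, plus a bare SELECT 1:

    #include <libpq-fe.h>

    /* Sketch only: create a PL/PgSQL no-op and an equivalent plain-SQL
       no-op, then time "SELECT noop_plpgsql()", "SELECT noop_sql()" and
       "SELECT 1" from the same client loop. */
    static void create_noop_variants(PGconn *conn)
    {
        PQclear(PQexec(conn,
            "CREATE OR REPLACE FUNCTION noop_plpgsql() RETURNS integer AS "
            "'begin return 1; end' LANGUAGE plpgsql IMMUTABLE"));

        PQclear(PQexec(conn,
            "CREATE OR REPLACE FUNCTION noop_sql() RETURNS integer AS "
            "'SELECT 1' LANGUAGE sql IMMUTABLE"));
    }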

Client measures are suggesting ~150-200 microseconds to call sp and get the answer back
0.15 to 0.2 milliseconds?

That's ... um ... fast. Presumably that's during a loop where your no-op is run repeatedly without connection setup costs, etc.

 
ping to loopback returns in ~20 microseconds (I assume domain sockets are equivalent).
UNIX domain sockets are typically at least as fast and somewhat lower overhead.

So it looks like a local no-op overhead of at least 150 microseconds which would leave us struggling.
Could someone please let me know if this is usual and if so where the time's spent?
Short of getting a faster server, is there anything I can do to influence this?

I'm not sure how much a faster server would help with single-query response time. It'll help with response to many parallel queries, but may not dramatically speed up a single query, especially a tiny lightweight one.

The Xeon 7040:
     http://ark.intel.com/products/27226/Intel-Xeon-Processor-7040-(4M-Cache-3_00-GHz-667-MHz-FSB)
is not the newest beast out there, but it's not exactly slow.

Honestly, PostgreSQL's focus is on performance with bigger units of work, not on being able to return a response to a tiny query in microseconds. If you are converting an application that has microsecond response time requirements and hammers its database with millions of tiny queries, PostgreSQL is probably not going to be your best bet.

If you're able to adapt the app to use set-oriented access patterns instead of looping, eg instead of (pseudocode):

customer_ids = [ ... array from somewhere ... ]
for customer_id in customer_ids:
    c = SELECT * FROM customer WHERE customer_id = :customer_id
    if c.some_var:
        UPDATE customer SET some_other_var = 't' WHERE customer_id = :customer_id

you can do:

UPDATE customer SET some_other_var = [expression] WHERE [expression]

then you'll get much better results from Pg.

--
Craig Ringer

Re: query overhead

From
Craig Ringer
Date:
On 07/16/2012 06:13 PM, Andy Halsall wrote:
Thanks for the responses. I take the points - the times we're dealing with are very small. Sorry, but I'm a bit confused by the suggestions around function types / prepared statements; I probably haven't been clear in my question: I'm invoking a PL/PgSQL function from libpq, for example the no_op mentioned in the first post does:
 
    CREATE OR REPLACE FUNCTION sp_select_no_op() RETURNS integer AS
    '
    begin
         return 1;
    end
    '
    language 'plpgsql' IMMUTABLE;
 
My understanding was that the plan for this would be prepared once and reused. So no advantage in a prepared statement? Also no advantage in making this a plain SQL function as these don't get cached?
AFAIK SQL functions don't get cached plans - though I'm not 100% on this. They can be lots cheaper for wrapping simple operations, though.

I'm just questioning why you're going immediately to PL/PgSQL - or stored procs at all. It might be a bigger hammer than you need.

What sorts of operations will your application be performing? Is there any reason it can't directly use simple INSERT, UPDATE, DELETE and SELECT statements, possibly with PREPARE and EXECUTE at libpq level?

If you're trying to go as fast as humanly possible in emulating an ISAM-like access model with lots of small fast accesses, PQprepare of simple S/I/U/D statements, then proper use of PQexecPrepared, is likely to be hard to beat.
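
As a very rough sketch of that pattern (statement and table names invented for illustration), prepare once straight after connecting, so that each call only ships parameter values:

    #include <libpq-fe.h>

    /* Sketch only: names invented, error checks abbreviated. */

    /* Prepare once, right after connecting */
    static void prepare_statements(PGconn *conn)
    {
        PGresult *res = PQprepare(conn, "get_customer",
            "SELECT * FROM customer WHERE customer_id = $1",
            1, NULL);
        /* real code should verify PQresultStatus(res) == PGRES_COMMAND_OK */
        PQclear(res);
    }

    /* Each lookup then skips parse/plan and just binds the key */
    static PGresult *get_customer(PGconn *conn, const char *customer_id)
    {
        const char *params[1] = { customer_id };

        return PQexecPrepared(conn, "get_customer", 1, params,
                              NULL, NULL, 0 /* text-format results */);
    }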

If you're working with ISAM-like access though, cursors may well be very helpful for you. It's a pity for your app that Pg doesn't support cursors that see changes committed after cursor creation, since these are ideal when emulating ISAM "next record" / "previous record" access models. They're still suitable for tasks where you know the app doesn't need to see concurrently modified data, though.
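
A rough sketch of that style of access through libpq (cursor, table and ordering invented for illustration; remember the cursor only sees data as of its snapshot):

    #include <libpq-fe.h>

    /* Sketch only: names invented, error handling omitted. */
    static void walk_records(PGconn *conn)
    {
        PGresult *row;

        /* Cursors live inside a transaction unless declared WITH HOLD */
        PQclear(PQexec(conn, "BEGIN"));
        PQclear(PQexec(conn,
            "DECLARE rec_cur SCROLL CURSOR FOR "
            "SELECT * FROM customer ORDER BY customer_id"));

        /* "get next record" */
        row = PQexec(conn, "FETCH NEXT FROM rec_cur");
        /* ... PQntuples(row) == 0 means end of set ... */
        PQclear(row);

        /* "get previous record" */
        row = PQexec(conn, "FETCH PRIOR FROM rec_cur");
        PQclear(row);

        PQclear(PQexec(conn, "CLOSE rec_cur"));
        PQclear(PQexec(conn, "COMMIT"));
    }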

Can you show a typical sequence of operations for your DB?

Also, out of interest, are you migrating from a traditional shared-file ISAM-derived database system, or something funkier?

 
An embedded database such as SQLite is a good idea, except that we'll be multi-process and my understanding is that it locks the full database on any write, which is off-putting.

Write concurrency in SQLite is miserable, yeah, but you get very fast shared-access reads as a trade-off and it's much closer to your app's old DB design. It depends a lot on your workload.

--
Craig Ringer

Re: query overhead

From
Craig Ringer
Date:
On 07/17/2012 11:33 PM, Andy Halsall wrote:

>
>  If you're working with ISAM-like access though, cursors may well be
> very helpful for you. It's a pity for your app that Pg doesn't support
> cursors that see changes committed after cursor creation, since these
> are ideal when emulating ISAM "next record" / "previous record" access
> models. They're still suitable for tasks where you know the app
> doesn't need to see concurrently modified data, though.
>
> > That's right, that would've been ideal behaviour for us. We're going
> to manage our own shared cache in the application layer to give
> similar functionality. We have lots of reads but fewer writes.

How have you gone with this? I'm curious.

By the way, when replying it's the convention to indent the text written
by the person you're replying to, not indent your own text. It's kind of
hard to read.


> > In the context of what we've been talking about, we're reading a set
> of information which is ordered in a reasonably complex way. Set is
> about 10000 records and requires a table join. This sort takes a while
> as it heap scans - couldn't persuade it to use indexes.
>
> > Having read the set, the application "gets next" until the end. To
> start with we were re-establishing the set (minus the previous record)
> and choosing the first (LIMIT 1) on each "get next" - obviously a
> non-starter. We moved to caching the record keys for the set and only
> visiting the database for the specific records on each "get next" -
> hence the questions about round trip overhead for small queries.
Given that pattern, why aren't you using a cursor? Do you need to see
concurrent changes? Is the cursor just held open too long, affecting
autovacuum?

--
Craig Ringer



Re: query overhead

From
Jeff Janes
Date:
On Wed, Jul 11, 2012 at 4:46 AM, Andy Halsall <halsall_andy@hotmail.com> wrote:
> Version.....
> PostgreSQL 9.1.2 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.5.2,
> 64-bit
>
> Server.....
> Server: RX800 S2 (8 x Xeon 7040 3GHz dual-core processors, 32GB memory
> O/S: SLES11 SP1 64-bit

I don't really know how to compare these, but I've got:

Intel(R) Core(TM)2 CPU   6400  @ 2.13GHz

>
> Scenario.....
> Legacy application with bespoke but very efficient interface to its
> persistent data. We're looking to replace the application and use
> PostgreSQL to hold the data. Performance measures on the legacy application
> on the same server shows that it can perform a particular read operation in
> ~215 microseconds (averaged) which includes processing the request and
> getting the result out.
>
> Question......
> I've written an Immutable stored procedure that takes no parameters and
> returns a fixed value to try and determine the round trip overhead of a
> query to PostgreSQL. Call to sp is made using libpq. We're all local and
> using UNIX domain sockets.
>
> Client measures are suggesting ~150-200 microseconds to call sp and get the
> answer back

using the plpgsql function you provided down thread:

cat dummy2.sql
select sp_select_no_op();

pgbench -f dummy2.sql -T300
tps = 18703.309132 (excluding connections establishing)

So that comes out to 53.5 microseconds/call.

If I use a prepared statement:

pgbench -M prepared -f dummy2.sql -T300
tps = 30289.908144 (excluding connections establishing)

or 33 us/call.

So unless your server is a lot slower than mine, I think your client
may be the bottleneck.  What is your client program?  What does "top"
show as the relative CPU usage of your client program vs the "postgres
... [local]" program to which it is connected?

Cheers,

Jeff