Re: TCP Overhead on Local Loopback - Mailing list pgsql-performance

From Claudio Freire
Subject Re: TCP Overhead on Local Loopback
Msg-id CAGTBQpZyPw2Y=JWOZf=tgWS7V_cw9_FMc1d90cg0ZbiPF0yPig@mail.gmail.com
In response to Re: TCP Overhead on Local Loopback  (Ofer Israeli <oferi@checkpoint.com>)
List pgsql-performance
On Tue, Apr 3, 2012 at 12:24 PM, Ofer Israeli <oferi@checkpoint.com> wrote:
> On Sun, Apr 2, 2012 at 11:25 AM, Samuel Gendler <sgendler@ideasculptor.com> wrote:
>> But suggesting moving away from TCP/IP with no actual evidence that it is network overhead that is the problem is a little premature, regardless.
>
> Agreed, that's why I'd like to understand what tools / methodologies are available in order to test whether TCP is the issue.

As was pointed out already, if you perform 60,000 x (5+1+2) "SELECT 1"
queries you'll effectively be measuring TCP overhead, since planning and
execution time for such a trivial query is negligible.
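
A minimal sketch of that kind of measurement, assuming a Python client
with psycopg2 (the thread doesn't say which client stack is actually in
use):

    # Time a large number of trivial queries over a loopback TCP connection.
    # The query itself costs almost nothing, so the per-iteration time is
    # dominated by client/server round-trip overhead.
    import time
    import psycopg2

    conn = psycopg2.connect("host=127.0.0.1 dbname=postgres")
    cur = conn.cursor()

    n = 10000
    start = time.time()
    for _ in range(n):
        cur.execute("SELECT 1")
        cur.fetchone()
    elapsed = time.time() - start
    print("%.0f us per round trip" % (elapsed / n * 1e6))

    cur.close()
    conn.close()

Running the same loop over a Unix-domain socket (where available) gives a
direct measure of how much of that per-query time is specific to the TCP
layer.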

>> What, exactly, are the set of operations that each update is performing and is there any way to batch them into fewer statements
>> within the transaction.  For example, could you insert all 60,000 records into a temporary table via COPY, then run just a couple of queries to do
>> bulk inserts and bulk updates into the destination table via joins to the temp table?
>
> I don't see how a COPY can be faster here as I would need to both run the COPY into the temp table and then UPDATE all the columns in the real table.
> Are you referring to saving the time where all the UPDATEs would be performed via a stored procedure strictly in the db domain without networking back and forth?

You'll be saving a lot of planning and parsing time, as COPY is
significantly simpler to plan and parse, and the complex UPDATEs and
INSERTs required to move data from the temp table will only incur a
one-time planning cost. In general, doing it that way is significantly
faster than 480,000 separate queries. But it does depend on the
operations themselves.
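
A rough sketch of that pattern, again assuming psycopg2 and with made-up
table and column names (an "events" table keyed by id):

    # COPY the whole batch into a temp table, then update the real table
    # with one set-based statement, so parse/plan and round-trip costs are
    # paid once per batch instead of once per row. Names are illustrative.
    import io
    import psycopg2

    conn = psycopg2.connect("host=127.0.0.1 dbname=postgres")
    cur = conn.cursor()

    cur.execute(
        "CREATE TEMP TABLE events_in (id int, col1 text, col2 int) ON COMMIT DROP"
    )

    rows = [(1, "a", 10), (2, "b", 20)]  # stand-in for the 60,000-row batch
    buf = io.StringIO()
    for rid, c1, c2 in rows:
        buf.write("%d\t%s\t%d\n" % (rid, c1, c2))
    buf.seek(0)
    cur.copy_from(buf, "events_in", columns=("id", "col1", "col2"))

    # One UPDATE joined against the temp table instead of one per row.
    cur.execute("""
        UPDATE events AS e
           SET col1 = i.col1,
               col2 = i.col2
          FROM events_in AS i
         WHERE e.id = i.id
    """)
    conn.commit()

The per-event DELETEs and INSERTs can be folded into the same batch the
same way (DELETE ... USING events_in, INSERT ... SELECT ... FROM events_in).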

>> 60,000 rows updated with 25 columns, 1 indexed in 3ms is not exactly slow.  That's a not insignificant quantity of data which must be transferred from client to server,
>> parsed, and then written to disk, regardless of TCP overhead.  That is happening via at least 60,000 individual SQL statements that are not even prepared statements.  I don't
>> imagine that TCP overhead is really the problem here.  Regardless, you can reduce both statement parse time and TCP overhead by doing bulk inserts
>> (COPY) followed by multi-row selects/updates into the final table.  I don't know how much below 3ms you are going to get, but that's going to be as fast
>> as you can possibly do it on your hardware, assuming the rest of your configuration is as efficient as possible.
>
> The 3ms is per event processed, not for the whole 60K batch.  Each event's processing includes:
> 5 SELECTs
> 1 DELETE
> 2 UPDATEs
> where each query performed involves TCP connections, that is, the queries are not grouped in a stored procedure or such.

If you run the 480,000 queries in a single transaction, you already use
a single connection. So you only have transmission overhead, without the
TCP handshake. You still might gain a bit by disabling Nagle's algorithm
(if that's possible on Windows), which is the main source of latency for
TCP. But that's very low-level tinkering.
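
For reference, a sketch of how a Python/psycopg2 client could force
TCP_NODELAY on its side of the connection. Note that libpq and the server
generally request TCP_NODELAY on their sockets already, so treat this as
a way to verify the setting rather than a guaranteed win, and the
file-descriptor handling below is Unix-flavoured; the mechanics differ on
Windows:

    # Check/force TCP_NODELAY (disable Nagle) on an existing connection.
    import socket
    import psycopg2

    conn = psycopg2.connect("host=127.0.0.1 dbname=postgres")

    # fromfd() duplicates the descriptor, but socket options are shared
    # with the original, so setting NODELAY here affects the live connection.
    sock = socket.fromfd(conn.fileno(), socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    print("TCP_NODELAY:", sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY))
    sock.close()  # closes only the duplicate, not the connection itself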

> For all these queries does 3ms sound like a reasonable time?  If so, do you have an estimation of how long the network portion would be here?

You perform a minimum of 8 round trips per event, so at 3ms per event
that's about 375us per query. It doesn't look like much. That's probably
Nagle and task-switching time; I don't think you can get it much lower
than that without issuing fewer queries (i.e. using the COPY method).
