Re: Flushing large data immediately in pqcomm - Mailing list pgsql-hackers

From Robert Haas
Subject Re: Flushing large data immediately in pqcomm
Date
Msg-id CA+TgmoZYS32e_t10Br3KD50vjUgr4i5yEtmcBKMeaX5AacYJDA@mail.gmail.com
In response to Re: Flushing large data immediately in pqcomm  (Melih Mutlu <m.melihmutlu@gmail.com>)
Responses Re: Flushing large data immediately in pqcomm
List pgsql-hackers
On Tue, Jan 30, 2024 at 12:58 PM Melih Mutlu <m.melihmutlu@gmail.com> wrote:
> Sounds like it's difficult to come up with a heuristic that would work
> well enough for most cases.
> One thing with sending data instead of copying it if the buffer is
> empty is that initially the buffer is empty. I believe it will stay
> empty forever if we do not copy anything when the buffer is empty. We
> can maybe simply set the threshold to the buffer size/2 (4kB) and hope
> that will work better. Or copy the data only if it fits into the
> remaining space in the buffer. What do you think?
>
> An additional note while I mentioned pq_putmessage_noblock(): I've been
> testing sending input data immediately in pq_putmessage_noblock()
> without blocking, and copying the data into PqSendBuffer only if the
> socket would block and cannot send it. Unfortunately, I don't have
> strong numbers to demonstrate any improvement in perf or timing yet.
> But I'd still like to know what you think about it.
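
As a concrete reading of the two rules proposed above, here is a rough
sketch of the half-buffer-threshold variant. The names (send_buffer,
send_pointer, flush_pending(), send_direct()) are simplified stand-ins
for pqcomm.c's PqSendBuffer, PqSendPointer, internal_flush() and
secure_write(); this is only meant to pin down the rule being discussed,
not to be the actual patch:

/*
 * Sketch only: buffer small payloads as today, but flush and send
 * anything at or above half the buffer size directly, skipping the copy.
 */
#include <stddef.h>
#include <string.h>

#define SEND_BUFFER_SIZE 8192

static char send_buffer[SEND_BUFFER_SIZE];
static size_t send_pointer;     /* next free byte in send_buffer */

/* Assumed stand-ins: flush_pending() writes out send_buffer[0 .. send_pointer). */
extern int flush_pending(void);
extern int send_direct(const char *s, size_t len);

static int
putbytes_sketch(const char *s, size_t len)
{
    if (len >= SEND_BUFFER_SIZE / 2)
    {
        /* Large payload: flush whatever is pending, then bypass the buffer. */
        if (send_pointer > 0 && flush_pending() != 0)
            return -1;
        send_pointer = 0;
        return send_direct(s, len);
    }

    /* Small payload: copy into the buffer, flushing whenever it fills up. */
    while (len > 0)
    {
        size_t avail = SEND_BUFFER_SIZE - send_pointer;

        if (avail == 0)
        {
            if (flush_pending() != 0)
                return -1;
            send_pointer = 0;
            avail = SEND_BUFFER_SIZE;
        }
        if (avail > len)
            avail = len;
        memcpy(send_buffer + send_pointer, s, avail);
        send_pointer += avail;
        s += avail;
        len -= avail;
    }
    return 0;
}

The "copy only if it fits" variant would instead compare len against the
remaining space (SEND_BUFFER_SIZE - send_pointer), in which case the
copy branch never needs to flush mid-message.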

I think this is an area where it's very difficult to foresee on
theoretical grounds what will be right in practice. The problem is
that the best algorithm probably depends on what usage patterns are
common in real workloads. I think one common pattern will be a bunch of
roughly equal-sized messages in a row, like CopyData or DataRow
messages -- but those messages won't have a consistent width. It would
probably be worth testing what behavior you see in such cases -- start
with, say, a stream of 100-byte messages, then gradually increase the
message size and see how the behavior evolves.
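
One cheap way to get a first look at that before touching the server is
a throwaway simulation: push streams of fixed-size messages of gradually
increasing size through a model of the 8kB buffer using the "copy only
if it fits in the remaining space" rule, and print every simulated
socket write. Everything below is a toy (the names and sizes are made
up; none of it is PostgreSQL code), but it makes the resulting
write-size pattern easy to eyeball:

/*
 * Toy model: an 8kB send buffer, messages are copied only when they fit
 * in the remaining space; otherwise pending bytes are flushed and the
 * message is "sent" directly.  Prints every simulated socket write.
 */
#include <stdio.h>
#include <stddef.h>

#define BUF_SIZE 8192

static size_t pending;          /* bytes currently sitting in the buffer */

static void
socket_write(size_t len)
{
    printf("write %zu bytes\n", len);
}

static void
flush_buf(void)
{
    if (pending > 0)
    {
        socket_write(pending);
        pending = 0;
    }
}

static void
put_message(size_t len)
{
    if (len <= BUF_SIZE - pending)
        pending += len;         /* fits: would be memcpy'd into the buffer */
    else
    {
        flush_buf();            /* doesn't fit: flush, then send directly */
        socket_write(len);
    }
}

int
main(void)
{
    /* Streams of equal-sized messages, growing from 100 bytes upward. */
    for (size_t msglen = 100; msglen <= 16000; msglen += 500)
    {
        printf("-- message size %zu --\n", msglen);
        for (int i = 0; i < 30; i++)
            put_message(msglen);
        flush_buf();
    }
    return 0;
}

Skimming the output for each message size shows how large the flushes
get under this rule and how often a lone message ends up being pushed
out on its own.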

But you can also have other patterns, with messages of different sizes
interleaved. In the case of FE-to-BE traffic, the extended query
protocol might be a good example of that: the Parse message could be
quite long, or not, but the Bind, Describe, Execute, and Sync messages
that follow are probably all short. That case doesn't arise in this
direction, but I can't think offhand of exactly which cases do. It seems
like someone would need to play around and try some different cases
and maybe log the sizes of the secure_write() calls with various
algorithms, and then try to figure out what's best. For example, if
the alternating short-write, long-write behavior that Heikki mentioned
is happening, and I do think that particular thing is a very real
risk, then you haven't got it figured out yet...
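
For the logging part, one hypothetical way to collect the data is a
throwaway helper compiled into the backend and called as the first
statement of secure_write() in src/backend/libpq/be-secure.c. The helper
name and output format below are made up for illustration; it is
debugging scaffolding, not something to commit:

#include <stdio.h>
#include <stddef.h>

/*
 * Hypothetical instrumentation (not part of PostgreSQL): called as
 * log_write_size(len) at the top of secure_write(), it dumps every
 * requested write size to stderr, which lands in the server log under
 * the usual logging setup; the output can then be bucketed offline to
 * compare buffering strategies.
 */
static void
log_write_size(size_t len)
{
    fprintf(stderr, "secure_write: requested %zu bytes\n", len);
}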

--
Robert Haas
EDB: http://www.enterprisedb.com


