Re: Flushing large data immediately in pqcomm - Mailing list pgsql-hackers

From Robert Haas
Subject Re: Flushing large data immediately in pqcomm
Date
Msg-id CA+TgmobpDvzyjQYX7_Y+gJFW=1_TLZt_EB71y30b0gtLBwfgAQ@mail.gmail.com
In response to Re: Flushing large data immediately in pqcomm  (Jelte Fennema-Nio <postgres@jeltef.nl>)
List pgsql-hackers
On Tue, Jan 30, 2024 at 6:39 PM Jelte Fennema-Nio <postgres@jeltef.nl> wrote:
> I agree that it's hard to prove that such heuristics will always be
> better in practice than the status quo. But I feel like we shouldn't
> let perfect be the enemy of good here.

Sure, I agree.

> I think one approach that is a clear
> improvement over the status quo is:
> 1. If the buffer is empty AND the data we are trying to send is larger
> than the buffer size, then don't use the buffer.
> 2. If not, fill up the buffer first (just like we do now) and then
> send that. If the leftover data is still larger than the buffer,
> the buffer is now empty, so rule 1 applies.

That seems like it might be a useful refinement of Melih Mutlu's
original proposal, but consider a message stream that consists of
messages exactly 8kB in size. If that message stream begins when the
buffer is empty, all messages are sent directly. If it begins when
there are any number of bytes in the buffer, we buffer every message
forever. That's kind of an odd artifact, but maybe it's fine in
practice. I say again that it's good to test out a bunch of scenarios
and see what shakes out.
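
A throwaway simulation of those two rules shows the effect (standalone
toy code, not anything from the tree): with an 8kB buffer and a stream
of exactly-8kB messages, starting with an empty buffer gives all direct
writes, while a single byte already queued makes every message get
split and copied through the buffer.

#include <stdio.h>

#define BUFSZ 8192

static int  buffered = 0;        /* bytes currently in the send buffer */
static int  direct_sends = 0;    /* writes that bypassed the buffer */
static int  buffered_copies = 0; /* copies into the send buffer */

static void
send_message(int len)
{
    while (len > 0)
    {
        if (buffered == 0 && len >= BUFSZ)
        {
            /* rule 1: bypass the buffer */
            direct_sends++;
            len = 0;
        }
        else
        {
            /* rule 2: copy into the buffer, flushing when it fills */
            int         room = BUFSZ - buffered;
            int         chunk = (len < room) ? len : room;

            buffered += chunk;
            len -= chunk;
            buffered_copies++;
            if (buffered == BUFSZ)
                buffered = 0;   /* flush */
        }
    }
}

int
main(void)
{
    int         i;

    /* 1000 messages of exactly 8kB, starting with an empty buffer */
    for (i = 0; i < 1000; i++)
        send_message(BUFSZ);
    printf("empty start:   %d direct, %d copies\n",
           direct_sends, buffered_copies);

    /* same stream, but with one byte already sitting in the buffer */
    direct_sends = buffered_copies = 0;
    buffered = 1;
    for (i = 0; i < 1000; i++)
        send_message(BUFSZ);
    printf("1 byte queued: %d direct, %d copies\n",
           direct_sends, buffered_copies);

    return 0;
}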

--
Robert Haas
EDB: http://www.enterprisedb.com


