Re: Flushing large data immediately in pqcomm - Mailing list pgsql-hackers

From Robert Haas
Subject Re: Flushing large data immediately in pqcomm
Msg-id CA+TgmobbAbOS7CPPC7Tj1bRy=bN738GxbxBc19RX6bw7Y02OLQ@mail.gmail.com
In response to Re: Flushing large data immediately in pqcomm  (David Rowley <dgrowleyml@gmail.com>)
List pgsql-hackers
On Wed, Mar 27, 2024 at 7:39 AM David Rowley <dgrowleyml@gmail.com> wrote:
> Robert, I understand you'd like a bit more from this patch. I'm
> wondering if you planning on blocking another committer from going
> ahead with this? Or if you have a reason why the current state of the
> patch is not a meaningful enough improvement that would justify
> possibly not getting any improvements in this area for PG17?

So, I think that the first version of the patch, when it got a big
chunk of data, would just flush whatever was already in the buffer and
then send the rest without copying. The current version, as I
understand it, only does that if the buffer is empty; otherwise, it
copies as much data as it can into the partially-filled buffer. I
think that change addresses most of my concern about the approach; the
old way could, I believe, lead to an increased total number of flushes
with the right usage pattern, but I don't believe that's possible with
the revised approach. I do kind of wonder whether there is some more
fine-tuning of the approach that would improve things further, but I
realize that we have very limited time to figure this out, and there's
no sense letting the perfect be the enemy of the good.
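[For readers following along, here is a minimal sketch of the two strategies
being compared. This is illustrative pseudocode, not PostgreSQL's actual
pqcomm.c implementation; the class and method names are invented, and the
buffer size is shrunk for demonstration.]

```python
BUF_SIZE = 8  # illustrative; the real pqcomm buffer is much larger (8kB)

class SendBuffer:
    """Toy model of a socket send buffer, counting socket writes."""

    def __init__(self):
        self.buf = bytearray()
        self.writes = 0  # number of writes to the (imaginary) socket

    def _flush(self):
        # Write out whatever is buffered, if anything.
        if self.buf:
            self.writes += 1
            self.buf.clear()

    def send_v1(self, data: bytes):
        """First version of the patch: on a large chunk, flush whatever is
        already buffered, then send the chunk directly without copying."""
        if len(data) >= BUF_SIZE:
            self._flush()      # possibly a short write of the partial buffer
            self.writes += 1   # direct, zero-copy send of the large chunk
            return
        self.buf.extend(data)
        if len(self.buf) >= BUF_SIZE:
            self._flush()

    def send_v2(self, data: bytes):
        """Revised version: send directly only when the buffer is empty;
        otherwise top up the partially-filled buffer first, flushing it
        when full, then reconsider what remains."""
        while data:
            if not self.buf and len(data) >= BUF_SIZE:
                self.writes += 1  # buffer empty: direct send, no copy
                return
            room = BUF_SIZE - len(self.buf)
            self.buf.extend(data[:room])
            data = data[room:]
            if len(self.buf) >= BUF_SIZE:
                self._flush()
```

With a partially-filled buffer (say 4 bytes) followed by an 8-byte chunk,
`send_v1` performs two socket writes (one short flush plus one direct send),
while `send_v2` performs a single full-buffer write and leaves the remainder
buffered for later coalescing. That is the kind of extra-flush pattern the
revision avoids.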

So in short... no, I don't have big concerns at this point. Melih's
latest benchmarks look fairly promising to me, too.

--
Robert Haas
EDB: http://www.enterprisedb.com
