Andres Freund <andres@anarazel.de> writes:
> On 2019-07-27 18:34:50 -0400, Tom Lane wrote:
>> Yeah. The existing commentary about that is basically justifying 8K
>> as being large enough to avoid performance issues; if somebody can
>> show that that's not true, I wouldn't have any hesitation about
>> kicking it up.
> You think that unnecessary fragmentation, which I did show, isn't good
> enough? That does have a cost at the network level, even if it possibly
> doesn't show up that much in timing.
I think it is worth doing some testing rather than just blindly changing
the buffer size, because we don't know how much we'd have to change it
to have any useful effect.
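
For anyone not looking at the code: the knob in question is just a
compile-time constant near the top of src/backend/libpq/pqcomm.c
(quoting from memory, so check the tree):

#define PQ_SEND_BUFFER_SIZE 8192
#define PQ_RECV_BUFFER_SIZE 8192

So "testing" here would mean rebuilding with PQ_SEND_BUFFER_SIZE kicked
up to 16K, 32K, etc. and looking both at timings and at the packet
traces Andres is worried about.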
> Additionally we perhaps ought to just not use the send buffer when
> internal_putbytes() is called with more data than can fit in the
> buffer. We should fill it with as much data as fits (so that pending
> data, like the message header or smaller previous messages, is flushed
> out in the largest possible chunk), and then just call secure_write()
> directly on the rest. It's not free to memcpy all that data around
> when it's already sitting in a perfectly good buffer.
Maybe, but how often does a single putbytes call transfer more than 16K?
(If you fill the existing buffer, but don't have a full bufferload
left to transfer, I doubt you want to shove the fractional bufferload
directly to the kernel.) Perhaps this added complexity will pay for
itself, but I don't think we should just assume that.
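
Just so we're arguing about the same thing, here's roughly what I take
Andres to be proposing, sketched against the pqcomm.c statics
(PqSendBuffer, PqSendStart, PqSendPointer, PqSendBufferSize) and with
the above caveat folded in.  Completely untested, and it hand-waves the
nonblocking/retry handling that internal_flush() actually has to do:

static int
internal_putbytes(const char *s, size_t len)
{
	size_t		amount;

	while (len > 0)
	{
		/* If buffer is full, then flush it out */
		if (PqSendPointer >= PqSendBufferSize)
		{
			socket_set_nonblocking(false);
			if (internal_flush())
				return EOF;
		}

		/*
		 * If the buffer is now empty and at least a full bufferload
		 * remains, skip the memcpy and hand the data straight to
		 * secure_write().  A fractional trailing bufferload still goes
		 * through the buffer, so we don't shove small dribbles at the
		 * kernel.
		 */
		if (PqSendStart == PqSendPointer &&
			len >= (size_t) PqSendBufferSize)
		{
			ssize_t		n;

			socket_set_nonblocking(false);
			n = secure_write(MyProcPort, (char *) s, len);
			if (n <= 0)
				return EOF;		/* real code must cope with EINTR etc. */
			s += n;
			len -= n;
			continue;
		}

		amount = PqSendBufferSize - PqSendPointer;
		if (amount > len)
			amount = len;
		memcpy(PqSendBuffer + PqSendPointer, s, amount);
		PqSendPointer += amount;
		s += amount;
		len -= amount;
	}
	return 0;
}

The interesting question is whether the memcpy saved by the middle
branch is actually measurable, given how rarely a single putbytes call
exceeds a couple of bufferloads.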
> While the receive side is statically allocated, I don't think it ends up
> in the process image as-is - as the contents aren't initialized, it ends
> up in .bss.
Right, but then we pay for COW when a child process first touches it,
no? Maybe the kernel is smart about pages that started as BSS, but
I wouldn't bet on it.
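
To be clear about what's being claimed, the distinction is roughly this
(illustrative only, not the actual pqcomm.c declarations):

/* no initializer: lands in .bss, zero-filled lazily by the kernel */
static char recv_buf[8192];

/* initialized: lands in .data, so all 8K is stored in the executable */
static char send_buf[8192] = {1};

The .bss case keeps the binary small, but whether a fork()ed child's
first write to such a page costs a page copy or merely a fault that
hands it a fresh zeroed page is exactly the part I wouldn't want to
assume without checking.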
regards, tom lane