On Sat, Apr 26, 2025 at 5:43 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Andres Freund <andres@anarazel.de> writes:
> > It's kinda sad to not have any test that tests a larger
> > io_combine_limit/io_max_combine_limit - as evidenced by this bug that'd be
> > good. However, not all platforms have PG_IOV_MAX > 16, so it seems like it'd
> > be somewhat painful to test?
>
> Maybe just skip the test if the maximum value of the GUC isn't
> high enough?
We could also change IOV_MAX, and thus PG_IOV_MAX, to (say) 32 on
Windows if it's useful for testing. It's not real; I just made that
number up when writing the pwritev/preadv replacements, and POSIX says
that 16 is the minimum a system must support. I have patches lined
up to add real vectored I/O for Windows, and then the number will
change anyway, probably harmonizing so that it works out to 1MB
everywhere in practice. If it's useful to change it now for a test
then I don't know any reason why not. The idea of the maximally
conservative 16 was not to encourage people to set it to high numbers
while it's emulated, but it's not especially important.
Unixen have converged on IOV_MAX == 1024, most of them decades ago. I
think AIX might be a hold-out, but we don't currently care about that,
and Solaris only moved from 16 to 1024 recently. If we change the fake numbers
made up for Windows, say 16->32, then I suspect that would leave just
one single machine in the 'farm that would skip the test if I
understood the proposal correctly: margay, a Solaris box not receiving
OS updates and thus missing "SRU72".
Sorry for screwing up the GUC; it looks like I completely goofed on
the contract for GUC assign functions! They aren't in charge of
assigning; they're just called *on* assignment. Whoops. And thanks
for fixing it.