Thread: Re: Allow io_combine_limit up to 1MB
Hi,

On 2025-02-11 13:12:17 +1300, Thomas Munro wrote:
> Tomas queried[1] the limit of 256kB (or really 32 blocks) for
> io_combine_limit. Yeah, I think we should increase it and allow
> experimentation with larger numbers. Note that real hardware and
> protocols have segment and size limits that can force the kernel to
> split your I/Os, so it's not at all a given that it'll help much or at
> all to use very large sizes, but YMMV.

FWIW, I see substantial performance *regressions* with *big* IO sizes using
fio. Just looking at cached buffered IO.

for s in 4 8 16 32 64 128 256 512 1024 2048 4096 8192;do echo -ne "$s\t\t"; numactl --physcpubind 3 fio --directory /srv/dev/fio/ --size=32GiB --overwrite 1 --time_based=0 --runtime=10 --name test --rw read --buffered 0 --ioengine psync --buffered 1 --invalidate 0 --output-format json --bs=$((1024*${s})) |jq '.jobs[] | .read.bw_mean';done

io size kB      throughput in MB/s
4               6752
8               9297
16              11082
32              14392
64              15967
128             16658
256             16864
512             19114
1024            12874
2048            11770
4096            11781
8192            11744

I.e. throughput peaks at 19GB/s and drops off fairly noticeably after that.

I've measured this on a number of different AMD and Intel systems, with
similar results, albeit with different inflection points. On the Intel
systems I have access to, the point where things slow down seems typically
to be earlier than on AMD.

It's worth noting that if I boot with mitigations=off clearcpuid=smap I get
*vastly* better performance:

io size kB      throughput in MB/s
4               12054
8               13872
16              16709
32              20564
64              22559
128             23133
256             23317
512             25829
1024            15912
2048            15213
4096            14129
8192            13795

Most of the gain isn't due to mitigations=off but clearcpuid=smap. Apparently
SMAP, which requires explicit code to allow kernel space to access userspace
memory in order to make exploitation harder, reacts badly to copying lots of
memory.

This seems absolutely bonkers to me.


> I was originally cautious because I didn't want to make a few stack buffers
> too big, but arrays of BlockNumber, struct iovec, and pointer don't seem too
> excessive at say 128 (cf whole blocks on the stack, a thing we do, which
> would still be many times larger than the relevant arrays). I was also
> anticipating future code that would need to multiply that number by other
> terms to allocate shared memory, but after some off-list discussion, that
> seems OK: such code should be able to deal with that using GUCs instead of
> maximally pessimal allocation. 128 gives a nice round number of 1M as a
> maximum transfer size, and comparable systems seem to have upper limits
> around that mark. Patch attached.

To make that possible we'd need two different io_combine_limit GUCs, one
PGC_POSTMASTER that defines a hard max, and one that can be changed at
runtime, up to the PGC_POSTMASTER one.

It's somewhat painful to have such GUCs, because we don't have real
infrastructure for interdependent GUCs. Typically the easiest way is to just
do a Min() at runtime between the two GUCs. But looking at the number of
references to io_combine_limit in read_stream.c, that doesn't look like fun.

Do you have a good idea how to keep read_stream.c readable?

Greetings,

Andres Freund
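To make the "Min() at runtime" concern concrete, here is a minimal standalone
sketch (not PostgreSQL source; the variable names and values are illustrative
assumptions) of clamping a user-settable GUC against a PGC_POSTMASTER cap at
a use site:

/*
 * Standalone sketch, not PostgreSQL source: a PGC_POSTMASTER hard cap plus
 * a user-settable GUC, clamped wherever the combined value is needed.
 */
#include <stdio.h>

#define Min(a, b) ((a) < (b) ? (a) : (b))

static int io_max_combine_limit = 16;   /* hypothetical cap, fixed at startup */
static int io_combine_limit = 32;       /* user setting, may exceed the cap */

int
main(void)
{
	/* every use site would need to repeat this clamp */
	int		combine = Min(io_max_combine_limit, io_combine_limit);

	printf("effective combine limit: %d blocks\n", combine);
	return 0;
}

The pain described above is that read_stream.c has many such use sites, which
is what motivates maintaining a single pre-clamped variable instead, as
discussed in the following messages.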
Hi,

On 2025-02-12 13:59:21 +1300, Thomas Munro wrote:
> How about just maintaining it in a new variable
> effective_io_combine_limit, whenever either of them is assigned?

Yea, that's probably the least bad way.

I wonder if we should just name that variable io_combine_limit and have the
GUC be _raw or _guc or such? There's gonna be a fair number of references to
the variable in code...

Greetings,

Andres Freund
Hi,

On 2025-02-12 15:24:21 +1300, Thomas Munro wrote:
> On Wed, Feb 12, 2025 at 3:22 PM Andres Freund <andres@anarazel.de> wrote:
> > On 2025-02-12 13:59:21 +1300, Thomas Munro wrote:
> > > How about just maintaining it in a new variable
> > > effective_io_combine_limit, whenever either of them is assigned?
> >
> > Yea, that's probably the least bad way.
> >
> > I wonder if we should just name that variable io_combine_limit and have the
> > GUC be _raw or _guc or such? There's gonna be a fair number of references to
> > the variable in code...
>
> Alternatively, we could compute that as stream->io_combine_limit and
> use that. That has the advantage that it's fixed for the life of the
> stream, even if you change it (eg between fetches from a CURSOR that
> has streams). Pretty sure it won't break anything today, but it might
> just run out of queue space limiting concurrency arbitrarily if you
> increase it, which is a bit weird now that I focus on that. Capturing
> the value we'll use up front seems better on that front.

> On the other hand, other future code might also have to remember to compute
> that too (write streams, ...), a tiny bit of duplication.

Yep, was also "worried" about that.

> Or ... I guess we could do both things?

Maybe that'd be the best approach? Not sure.

> From 8cfa23a370a4564a0369991e2b0068b48983a0f6 Mon Sep 17 00:00:00 2001
> From: Thomas Munro <thomas.munro@gmail.com>
> Date: Wed, 12 Feb 2025 13:52:22 +1300
> Subject: [PATCH v2] Introduce max_io_combine_limit.
>
> The existing io_combine_limit parameter can be set by users. The new
> max_io_combine_limit parameter can be set only at server startup time.
> Code that combines I/Os should respect both of them by taking the
> smaller value.
>
> This allows the administrator to cap the total I/O size system-wide, but
> also provides a way for proposed patches to know what the maximum could
> possibly be in cases where it is multiplied by other GUCs to allocate
> shared memory, without having to assume that it's as high as the
> compile-time MAX_IO_COMBINE_LIMIT value.
>
> The read_stream.c code is changed to compute the minimum value up front
> as stream->io_combine_limit instead of using io_combine_limit directly.
> That has the extra benefit of remaining stable throughout the lifetime
> of the stream even if the user changes it (eg while consuming from a
> CURSOR). As previously coded, a mid-stream increase could limit
> concurrency artificially just because we run out of queue space too
> soon.
> ---
>  doc/src/sgml/config.sgml                      | 23 ++++++++++++++-
>  src/backend/commands/variable.c               |  1 -
>  src/backend/storage/aio/read_stream.c         | 29 ++++++++++++-------
>  src/backend/storage/buffer/bufmgr.c           |  5 ++--
>  src/backend/utils/misc/postgresql.conf.sample |  2 ++
>  src/include/storage/bufmgr.h                  |  1 +
>  6 files changed, 47 insertions(+), 14 deletions(-)
>
> diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
> index 3b557ecabfb..c6de8b9e236 100644
> --- a/doc/src/sgml/config.sgml
> +++ b/doc/src/sgml/config.sgml
> @@ -2605,6 +2605,24 @@ include_dir 'conf.d'
>         </listitem>
>        </varlistentry>
>
> +     <varlistentry id="guc-max-io-combine-limit" xreflabel="max_io_combine_limit">
> +      <term><varname>max_io_combine_limit</varname> (<type>integer</type>)
> +      <indexterm>
> +       <primary><varname>max_io_combine_limit</varname> configuration parameter</primary>
> +      </indexterm>
> +      </term>
> +      <listitem>
> +       <para>
> +        Controls the largest I/O size in operations that combine I/O, and silently
> +        limits the user-settable parameter <varname>io_combine_limit</varname>.
> +        This parameter can only be set in
> +        the <filename>postgresql.conf</filename> file or on the server
> +        command line.
> +        The default is 128kB.
> +       </para>
> +      </listitem>
> +     </varlistentry>

I can see an argument for having the max be slightly higher than the default,
but still less than MAX_IO_COMBINE_LIMIT. But I think just about anything is
fine for now.


> --- a/src/backend/commands/variable.c
> +++ b/src/backend/commands/variable.c
> @@ -1156,7 +1156,6 @@ assign_maintenance_io_concurrency(int newval, void *extra)
>  #endif
>  }
>
> -
>  /*
>   * These show hooks just exist because we want to show the values in octal.
>   */

Bogus hunk?


> @@ -402,6 +403,7 @@ read_stream_begin_impl(int flags,
>  						size_t per_buffer_data_size)
>  {
>  	ReadStream *stream;
> +	int			effective_io_combine_limit;
>  	size_t		size;
>  	int16		queue_size;
>  	int			max_ios;
> @@ -409,6 +411,12 @@ read_stream_begin_impl(int flags,
>  	uint32		max_pinned_buffers;
>  	Oid			tablespace_id;
>
> +	/*
> +	 * Respect both the system-wide limit and the user-settable limit on I/O
> +	 * combining size.
> +	 */
> +	effective_io_combine_limit = Min(max_io_combine_limit, io_combine_limit);
> +
>  	/*
>  	 * Decide how many I/Os we will allow to run at the same time. That
>  	 * currently means advice to the kernel to tell it that we will soon read.

Heh, somehow effective_* now gives me hives almost immediately :)

Greetings,

Andres Freund
On Wed, Feb 12, 2025 at 1:03 AM Andres Freund <andres@anarazel.de> wrote:
>
> Hi,
>
> On 2025-02-11 13:12:17 +1300, Thomas Munro wrote:
> > Tomas queried[1] the limit of 256kB (or really 32 blocks) for
> > io_combine_limit. Yeah, I think we should increase it and allow
> > experimentation with larger numbers. Note that real hardware and
> > protocols have segment and size limits that can force the kernel to
> > split your I/Os, so it's not at all a given that it'll help much or at
> > all to use very large sizes, but YMMV.

+0.02 to the initiative, I've always been wondering why the IOs were so
capped, I know :)

> FWIW, I see substantial performance *regressions* with *big* IO sizes using
> fio. Just looking at cached buffered IO.
>
> for s in 4 8 16 32 64 128 256 512 1024 2048 4096 8192;do echo -ne "$s\t\t"; numactl --physcpubind 3 fio --directory /srv/dev/fio/ --size=32GiB --overwrite 1 --time_based=0 --runtime=10 --name test --rw read --buffered 0 --ioengine psync --buffered 1 --invalidate 0 --output-format json --bs=$((1024*${s})) |jq '.jobs[] | .read.bw_mean';done
>
> io size kB      throughput in MB/s
[..]
> 256             16864
> 512             19114
> 1024            12874
[..]

> It's worth noting that if I boot with mitigations=off clearcpuid=smap I get
> *vastly* better performance:
>
> io size kB      throughput in MB/s
[..]
> 128             23133
> 256             23317
> 512             25829
> 1024            15912
[..]

> Most of the gain isn't due to mitigations=off but clearcpuid=smap. Apparently
> SMAP, which requires explicit code to allow kernel space to access userspace
> memory in order to make exploitation harder, reacts badly to copying lots of
> memory.
>
> This seems absolutely bonkers to me.

There are two bizarre things there: a +35% perf boost just like that due to
security drama, and io_size=512kB being so special as to give a 10-13% boost
in your case? Any ideas why?

I've got this on an Lsv2 individual MS NVMe under Hyper-V, on ext4, which
seems to be a much more real-world and average-Joe situation, and it is much
slower, but it is not showing an advantage for block sizes beyond, let's say,
128:

io size kB      throughput in MB/s
4               1070
8               1117
16              1231
32              1264
64              1249
128             1313
256             1323
512             1257
1024            1216
2048            1271
4096            1304
8192            1214

Top hitters are of course stuff like clear_page_rep [k] and
rep_movs_alternative [k] (that was with mitigations=on).

-J.
Hi,

On 2025-02-14 09:32:32 +0100, Jakub Wartak wrote:
> On Wed, Feb 12, 2025 at 1:03 AM Andres Freund <andres@anarazel.de> wrote:
> > FWIW, I see substantial performance *regressions* with *big* IO sizes using
> > fio. Just looking at cached buffered IO.
> >
> > for s in 4 8 16 32 64 128 256 512 1024 2048 4096 8192;do echo -ne "$s\t\t"; numactl --physcpubind 3 fio --directory /srv/dev/fio/ --size=32GiB --overwrite 1 --time_based=0 --runtime=10 --name test --rw read --buffered 0 --ioengine psync --buffered 1 --invalidate 0 --output-format json --bs=$((1024*${s})) |jq '.jobs[] | .read.bw_mean';done
> >
> > io size kB      throughput in MB/s
> [..]
> > 256             16864
> > 512             19114
> > 1024            12874
> [..]
>
> > It's worth noting that if I boot with mitigations=off clearcpuid=smap I get
> > *vastly* better performance:
> >
> > io size kB      throughput in MB/s
> [..]
> > 128             23133
> > 256             23317
> > 512             25829
> > 1024            15912
> [..]
> > Most of the gain isn't due to mitigations=off but clearcpuid=smap. Apparently
> > SMAP, which requires explicit code to allow kernel space to access userspace
> > memory in order to make exploitation harder, reacts badly to copying lots of
> > memory.
> >
> > This seems absolutely bonkers to me.
>
> There are two bizarre things there: a +35% perf boost just like that due to
> security drama, and io_size=512kB being so special as to give a 10-13% boost
> in your case? Any ideas why?

I think there are a few overlapping "cost factors" and that turns out to be
the global minimum:

- syscall overhead: the fewer the better
- memory copy cost: higher for small-ish amounts, then lower
- smap costs: seems to increase with larger amounts of memory
- CPU cache: copying less than L3 cache will be faster, as otherwise memory
  bandwidth plays a role

> I've got this on an Lsv2 individual MS NVMe under Hyper-V, on ext4, which
> seems to be a much more real-world and average-Joe situation, and it is much
> slower, but it is not showing an advantage for block sizes beyond, let's
> say, 128:
>
> io size kB      throughput in MB/s
> 4               1070
> 8               1117
> 16              1231
> 32              1264
> 64              1249
> 128             1313
> 256             1323
> 512             1257
> 1024            1216
> 2048            1271
> 4096            1304
> 8192            1214
>
> Top hitters are of course stuff like clear_page_rep [k] and
> rep_movs_alternative [k] (that was with mitigations=on).

I think you're measuring something different than I was. I was purposefully
measuring a fully-cached workload, which worked with that recipe, because I
have more than 32GB of RAM available. But I assume you're running this in a
VM that doesn't have that much, and thus you're actually benchmarking reading
data from disk and - probably more influential in this case - finding buffers
to put the newly read data in.

Greetings,

Andres Freund
On Wed, Feb 12, 2025 at 3:24 PM Thomas Munro <thomas.munro@gmail.com> wrote:
> On Wed, Feb 12, 2025 at 3:22 PM Andres Freund <andres@anarazel.de> wrote:
> > On 2025-02-12 13:59:21 +1300, Thomas Munro wrote:
> > > How about just maintaining it in a new variable
> > > effective_io_combine_limit, whenever either of them is assigned?
> >
> > Yea, that's probably the least bad way.
> >
> > I wonder if we should just name that variable io_combine_limit and have the
> > GUC be _raw or _guc or such? There's gonna be a fair number of references to
> > the variable in code...
>
> Alternatively, we could compute that as stream->io_combine_limit and
> use that. That has the advantage that it's fixed for the life of the
> stream, even if you change it (eg between fetches from a CURSOR that
> has streams). Pretty sure it won't break anything today,

Unfortunately that turned out to be untrue. :-(

0001 is a patch to address that, which should be back-patched. It's hard to
come up with a repro for an actual problem, but I'm sure it's possible, will
try...

0002 and 0003 are new versions of the patches to add max_io_combine_limit
and increase MAX_IO_COMBINE_LIMIT to 1MB. This time using the name
io_combine_limit_guc, so that io_combine_limit remains as the name referred
to in the rest of the tree.

I imagine/hope that we'll one day figure out how to make at least the easy
case (?) of dependencies on PGC_POSTMASTER GUCs work for PGC_USERSET values,
and then io_combine_limit_guc could be deleted...
Here's a new version that also adjusts the code that just landed in da722699:

-	/*
-	 * Each IO handle can have an PG_IOV_MAX long iovec.
-	 *
-	 * XXX: Right now the amount of space available for each IO is PG_IOV_MAX.
-	 * While it's tempting to use the io_combine_limit GUC, that's
-	 * PGC_USERSET, so we can't allocate shared memory based on that.
-	 */
+	/* each IO handle can have up to io_max_combine_limit iovec objects */
 	return mul_size(sizeof(struct iovec),
-					mul_size(mul_size(PG_IOV_MAX, AioProcs()),
+					mul_size(mul_size(io_max_combine_limit, AioProcs()),
 							 io_max_concurrency));

While doing that, this juxtaposition jumped out at me:

+	uint32		per_backend_iovecs = io_max_concurrency * max_io_combine_limit;

Surely one of these names has to give! First commit wins, so the new name in
this version is "io_max_combine_limit".

I was contemplating making the same change to the MAX_IO_COMBINE_LIMIT macro
when it occurred to me that we could just delete it. It has few references
and they could just use PG_IOV_MAX instead; it's perhaps a little less clear
as names go, but it's also pretty confusing that there's a macro with the
same name as a GUC...
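For a rough sense of scale of that shared-memory term, here is a standalone
back-of-the-envelope calculation; the io_max_concurrency and process-count
values below are illustrative assumptions rather than PostgreSQL defaults,
and sizeof(struct iovec) is taken as 16 bytes on LP64 Linux:

/*
 * Standalone illustration of the iovec shared-memory sizing above.
 * 128 iovecs per IO handle corresponds to the proposed 1MB cap with 8kB
 * blocks; 64 handles per backend and 32 AIO-participating processes are
 * made-up example numbers.
 */
#include <stdio.h>
#include <sys/uio.h>

int
main(void)
{
	size_t	io_max_combine_limit = 128;	/* iovecs per IO handle */
	size_t	io_max_concurrency = 64;	/* IO handles per backend (example) */
	size_t	aio_procs = 32;			/* backends + aux processes (example) */

	size_t	bytes = sizeof(struct iovec) *
		io_max_combine_limit * io_max_concurrency * aio_procs;

	/* With a 16-byte struct iovec this comes to 4 MiB of shared memory. */
	printf("iovec arrays: %zu bytes\n", bytes);
	return 0;
}

Sizing this with io_max_combine_limit rather than the compile-time maximum is
what lets an installation that configures a smaller cap get a correspondingly
smaller allocation.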
Hi,

On 2025-03-18 16:18:17 +1300, Thomas Munro wrote:
> Here's a new version that also adjusts the code that just landed in
> da722699:

Something isn't quite right with this code. If I just add -c
io_combine_limit=32 to the options and do a seqscan, I get odd failures.
Mostly assertion failures about buffers not being valid in
CheckReadBuffersOperation().

Sure glad I added CheckReadBuffersOperation(), that'd have been much nastier
to figure out without those assertion failures.

A bit of debugging later: I figured out that this is because io_combine_limit
is bigger than io_max_combine_limit, so the iovecs of one IO overlap with
another. With predictably hilarious results.

I think it might be a small thing:

Since our GUC system doesn't support dependencies or cross-checks between
GUCs, the user-settable one now assigns a "raw" value to io_combine_limit_guc,
and the lower of io_combine_limit_guc and io_max_combine_limit is maintained
in io_combine_limit.

However, the IO combine limit GUC still references io_combine_limit:

	{
		{"io_combine_limit",
			PGC_USERSET,
			RESOURCES_IO,
			gettext_noop("Limit on the size of data reads and writes."),
			NULL,
			GUC_UNIT_BLOCKS
		},
		&io_combine_limit,
		DEFAULT_IO_COMBINE_LIMIT,
		1, MAX_IO_COMBINE_LIMIT,
		NULL, assign_io_combine_limit, NULL
	},

Therefore the GUC machinery undoes the work on io_combine_limit done in
assign_io_combine_limit:

/*
 * GUC assign hooks that recompute io_combine_limit whenever
 * io_combine_limit_guc and io_max_combine_limit are changed. These are needed
 * because the GUC subsystem doesn't support dependencies between GUCs, and
 * they may be assigned in either order.
 */
void
assign_io_max_combine_limit(int newval, void *extra)
{
	io_max_combine_limit = newval;
	io_combine_limit = Min(io_max_combine_limit, io_combine_limit_guc);
}
void
assign_io_combine_limit(int newval, void *extra)
{
	io_combine_limit_guc = newval;
	io_combine_limit = Min(io_max_combine_limit, io_combine_limit_guc);
}

So we end up with a larger io_combine_limit than io_max_combine_limit.
Hilarity ensues.

Besides the obvious fix, I think we should also add
  Assert(len <= io_max_combine_limit);
to pgaio_io_set_handle_data_32/64, that'd have made the bug much more
obvious.

Greetings,

Andres Freund
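The interaction is easy to model outside the server. The following standalone
sketch is not the actual guc.c logic; it just compresses "run the assign hook,
then store newval into the GUC's registered variable" into one function, using
the 128kB (16-block) default cap quoted earlier in the thread:

/*
 * Standalone model (not guc.c) of why the clamping in assign_io_combine_limit
 * was being undone: the GUC machinery runs the assign hook and then stores
 * newval into the GUC's own variable, which here is io_combine_limit itself.
 */
#include <stdio.h>

#define Min(a, b) ((a) < (b) ? (a) : (b))

static int io_max_combine_limit = 16;	/* PGC_POSTMASTER cap */
static int io_combine_limit_guc = 16;	/* raw user setting */
static int io_combine_limit = 16;	/* value the rest of the code uses */

static void
assign_io_combine_limit(int newval)
{
	io_combine_limit_guc = newval;
	io_combine_limit = Min(io_max_combine_limit, io_combine_limit_guc);
}

/* Simplified stand-in for what the GUC machinery does on SET. */
static void
set_guc(int newval)
{
	assign_io_combine_limit(newval);
	io_combine_limit = newval;	/* guc.c stores into the GUC variable too */
}

int
main(void)
{
	set_guc(32);			/* -c io_combine_limit=32 */
	printf("io_combine_limit = %d (cap is %d)\n",
		   io_combine_limit, io_max_combine_limit);	/* prints 32: oops */
	return 0;
}

Because the GUC's registered variable is io_combine_limit itself, the clamp
computed inside the hook is immediately overwritten, which is exactly what the
rewrite suggested in the next message avoids.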
Andres Freund <andres@anarazel.de> writes:
> void
> assign_io_max_combine_limit(int newval, void *extra)
> {
> 	io_max_combine_limit = newval;
> 	io_combine_limit = Min(io_max_combine_limit, io_combine_limit_guc);
> }
> void
> assign_io_combine_limit(int newval, void *extra)
> {
> 	io_combine_limit_guc = newval;
> 	io_combine_limit = Min(io_max_combine_limit, io_combine_limit_guc);
> }

> So we end up with a larger io_combine_limit than
> io_max_combine_limit. Hilarity ensues.

There's another thing that's rather tin-eared about these assign hooks:
the hook is not supposed to be setting the GUC variable by itself.
guc.c will do that. It's relatively harmless for these int-valued GUCs
to be assigned twice, but I think it might be problematic if this coding
pattern were followed for a string GUC.

I suggest instead

void
assign_io_max_combine_limit(int newval, void *extra)
{
	io_combine_limit = Min(newval, io_combine_limit_guc);
}

void
assign_io_combine_limit(int newval, void *extra)
{
	io_combine_limit = Min(io_max_combine_limit, newval);
}

> Besides the obvious fix, I think we should also add
>   Assert(len <= io_max_combine_limit);

+1

			regards, tom lane
Hi,

On 2025-04-25 10:26:09 -0400, Tom Lane wrote:
> Andres Freund <andres@anarazel.de> writes:
> > void
> > assign_io_max_combine_limit(int newval, void *extra)
> > {
> > 	io_max_combine_limit = newval;
> > 	io_combine_limit = Min(io_max_combine_limit, io_combine_limit_guc);
> > }
> > void
> > assign_io_combine_limit(int newval, void *extra)
> > {
> > 	io_combine_limit_guc = newval;
> > 	io_combine_limit = Min(io_max_combine_limit, io_combine_limit_guc);
> > }
>
> > So we end up with a larger io_combine_limit than
> > io_max_combine_limit. Hilarity ensues.
>
> There's another thing that's rather tin-eared about these assign
> hooks: the hook is not supposed to be setting the GUC variable by
> itself.

Agreed. Without that the bug would have been more apparent...

Pushed the fix, alongside the change you outlined.

It's kinda sad to not have any test that tests a larger
io_combine_limit/io_max_combine_limit - as evidenced by this bug that'd be
good. However, not all platforms have PG_IOV_MAX > 16, so it seems like it'd
be somewhat painful to test?

Greetings,

Andres Freund
Andres Freund <andres@anarazel.de> writes:
> It's kinda sad to not have any test that tests a larger
> io_combine_limit/io_max_combine_limit - as evidenced by this bug that'd be
> good. However, not all platforms have PG_IOV_MAX > 16, so it seems like it'd
> be somewhat painful to test?

Maybe just skip the test if the maximum value of the GUC isn't
high enough?

			regards, tom lane
On Sat, Apr 26, 2025 at 5:43 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Andres Freund <andres@anarazel.de> writes:
> > It's kinda sad to not have any test that tests a larger
> > io_combine_limit/io_max_combine_limit - as evidenced by this bug that'd be
> > good. However, not all platforms have PG_IOV_MAX > 16, so it seems like it'd
> > be somewhat painful to test?
>
> Maybe just skip the test if the maximum value of the GUC isn't
> high enough?

We could also change IOV_MAX and thus PG_IOV_MAX to (say) 32 on Windows, if
it's useful for testing. It's not real, I just made that number up when
writing pwritev/preadv replacements, and POSIX says that 16 is the minimum a
system should support. I have patches lined up to add real vectored I/O for
Windows, and then the number will change anyway, probably harmonizing so that
it works out to 1MB everywhere in practice. If it's useful to change it now
for a test then I don't know any reason why not. The idea of the maximally
conservative 16 was not to encourage people to set it to high numbers while
it's emulated, but it's not especially important.

Unixen have converged on IOV_MAX == 1024, most decades ago. I think AIX
might be a hold-out, but we don't currently care about that, and Solaris only
moved 16->1024 recently. If we change the fake numbers made up for Windows,
say 16->32, then I suspect that would leave just one single machine in the
'farm that would skip the test if I understood the proposal correctly:
margay, a Solaris box not receiving OS updates and thus missing "SRU72".

Sorry for screwing up the GUC, it looks like I completely goofed on the
contract for GUC assign functions! They aren't in charge of assigning,
they're just called *on* assignment. Whoops. And thanks for fixing it.
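For readers unfamiliar with the emulation being discussed, here is a hedged
sketch of the kind of fallback involved; the HAVE_SYS_UIO_H guard, macro
names, and exact constants are assumptions for illustration, not a copy of
src/include/port/pg_iovec.h:

/*
 * Sketch only: platforms without <sys/uio.h> get an emulated preadv/pwritev
 * and a made-up IOV_MAX, which in turn caps PG_IOV_MAX and therefore the
 * largest possible io_max_combine_limit.
 */
#include <stddef.h>

#define Min(x, y) ((x) < (y) ? (x) : (y))

#ifdef HAVE_SYS_UIO_H
#include <sys/uio.h>
#else
struct iovec
{
	void	   *iov_base;
	size_t		iov_len;
};
/* POSIX only guarantees 16; bumping this to, say, 32 is equally arbitrary. */
#define IOV_MAX 16
#endif

/* Keep the per-I/O iovec array small enough to be safe on the stack. */
#define PG_IOV_MAX Min(IOV_MAX, 128)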