Re: MultiXact\SLRU buffers configuration - Mailing list pgsql-hackers
From:           Tomas Vondra
Subject:        Re: MultiXact\SLRU buffers configuration
Msg-id:         20201028013651.de5cj2xadgmba5nf@development
In response to: Re: MultiXact\SLRU buffers configuration (Alexander Korotkov <aekorotkov@gmail.com>)
List:           pgsql-hackers
On Tue, Oct 27, 2020 at 08:23:26PM +0300, Alexander Korotkov wrote:
>On Tue, Oct 27, 2020 at 8:02 PM Alexander Korotkov
><aekorotkov@gmail.com> wrote:
>> On Mon, Oct 26, 2020 at 6:45 PM Andrey Borodin <x4mmm@yandex-team.ru> wrote:
>> > Thanks for your review, Alexander!
>> > +1 for avoiding double locking in SimpleLruReadPage_ReadOnly().
>> > Other changes seem correct to me too.
>> >
>> > I've tried to find optimal value for cache size and it seems to me
>> > that it affects multixact scalability much less than sizes of
>> > offsets\members buffers. I concur that patch 2 of the patchset does
>> > not seem documented enough.
>>
>> Thank you. I've made a few more minor adjustments to the patchset.
>> I'm going to push 0001 and 0003 if there are no objections.
>
>I get that patchset v5 doesn't pass the tests due to typo in assert.
>The fixed version is attached.

I did a quick review on this patch series. A couple of comments:

0001
----

This looks quite suspicious to me - SimpleLruReadPage_ReadOnly is
changed to return information about which lock was used, merely to
allow the callers to do an Assert() that the value is not LW_NONE. IMO
we could achieve exactly the same thing by passing a simple flag that
would say 'make sure we got a lock' or something like that.

In fact, aren't all callers doing the assert? That'd mean we could just
do the check always, without the flag. (I see GetMultiXactIdMembers
does two calls and only checks the second result, but I wonder whether
that's intended or an omission.)

In any case, it'd make the lwlock.c changes unnecessary, I think.

0002
----

    Specifies the number cached MultiXact by backend. Any SLRU lookup ...

should be 'number of cached ...'

0003
----

    * Conditional variable for waiting till the filling of the next multixact
    * will be finished. See GetMultiXactIdMembers() and RecordNewMultiXact()
    * for details.

Perhaps 'till the next multixact is filled' or 'gets full' would be
better. Not sure.
This thread started with a discussion about making the SLRU sizes
configurable, but this patch version only adds a local cache. Does this
achieve the same goal, or would we still gain something by having GUCs
for the SLRUs?

If we're claiming this improves performance, it'd be good to have some
workload demonstrating that, along with measurements. I don't see
anything like that in this thread, so it's a bit hand-wavy. Can someone
share details of such a workload (even a synthetic one) and some basic
measurements?

regards

-- 
Tomas Vondra                  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services