Thread: truncating pg_multixact/members
I started looking at bug #8673 some days ago, and I identified three
separate issues that need fixing:

1. slru.c doesn't consider file names longer than 4 hexadecimal chars.

2. pg_multixact/members truncation requires more intelligence to avoid
removing files that are still needed.  Right now we use modulo-2^32
arithmetic, but this doesn't work because the useful range can span
longer than what we can keep within that range.

3. New pg_multixact/members generation requires more intelligence to
avoid stomping on files from the previous wraparound cycle.  Right now
there is no defense against this at all.

Fixing (1) is simple: we can have each SLRU user declare how many digits
to have in file names.  All existing users but pg_multixact/members
should declare 4 digits; that one should declare 5.  That way, the
correct number of zeroes are allocated at the start point and we get
nice, equal-width file names.  Eventually, predicate.c can change to
wider file names and get rid of some strange code it has for dealing
with overrun.  For 9.3, I propose we skip this and tweak the code to
consider files whose names are 4 or 5 chars in length, to remain
compatible with existing installations that have pg_multixact/members
containing a mixture of 4-char and 5-char file names.

For (2) a simple-minded proposal is to have a new SlruScanDirectory
callback that knows to delete only files within a certain range.  Then,
at truncate time, collect the existing valid range (i.e. files
containing multixacts between oldestMulti and nextMulti) and delete
files outside this range.  However there is a race condition: if
pg_multixact/members grows concurrently while the truncation is
happening, the new files would be outside the range and would be
deleted.  There is no bound on how much the directory can grow, so it
doesn't seem reasonable to just add some arbitrary safety limit.
I see three possible fixes:

#2a Interlock directory truncation with new file generation: in
GetNewMultiXactId and TruncateMultiXact grab a lock (perhaps a boolean
in MultiXactState, or perhaps just a new LWLock) to exclude each from
the other.  That way, truncation can obtain a range that will continue
to be meaningful until truncation is complete, and no new files will be
erased.

#2b During truncation, first obtain a directory listing, *then* compute
the range of files to keep, then delete files outside that range, but
only if they are present in the listing previously obtained.  That way,
files created during truncation are not removed.

#2c At start of truncation, save the end-of-range in MultiXactState.
This state is updated by GetNewMultiXactId as new files are created.
That way, before each new file is created, the truncation routine knows
to skip it.

I don't like #2a because of the loss of concurrency, and I don't like
#2b because it seems ugly and potentially slow.  #2c seems the most
reasonable way to attack this problem, but if somebody has a differing
opinion please voice it.

For (3) there is a hand-wavy idea that we can compare the oldest offset
to the next offset, and avoid enlarging (raise an ERROR) if an overrun
would occur; but we don't have the oldest offset stored anywhere
convenient.  We would have to scan pg_multixact/offsets at start of
service to determine it, and then perhaps keep it in MultiXactState.
Arguably this info should be part of pg_control, but we don't have that
in 9.3 so we'll have to find some other idea.

--
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
Alvaro Herrera <alvherre@2ndquadrant.com> wrote:

> 1. slru.c doesn't consider file names longer than 4 hexadecimal chars.

> Fixing (1) is simple: we can have each SLRU user declare how many digits
> to have in file names.  All existing users but pg_multixact/members
> should declare 4 digits; that one should declare 5.  That way, the
> correct number of zeroes are allocated at the start point and we get
> nice, equal-width file names.  Eventually, predicate.c can change to
> wider file names and get rid of some strange code it has to deal with
> overrun.

That would be nice.

There would be the issue of how to deal with pg_upgrade, though.  If I
remember correctly, there is no strong reason not to blow away any
existing files in the pg_serial subdirectory at startup (the way NOTIFY
code does), and at one point I had code to do that.  I think we took
that code out because the files would be deleted "soon enough" anyway.
Barring objection, deleting them at startup seems like a sane way to
handle pg_upgrade issues when we do increase the filename size.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Kevin Grittner wrote:
> Alvaro Herrera <alvherre@2ndquadrant.com> wrote:
>
> > 1. slru.c doesn't consider file names longer than 4 hexadecimal chars.
>
> > Fixing (1) is simple: we can have each SLRU user declare how many digits
> > to have in file names.  All existing users but pg_multixact/members
> > should declare 4 digits; that one should declare 5.  That way, the
> > correct number of zeroes are allocated at the start point and we get
> > nice, equal-width file names.  Eventually, predicate.c can change to
> > wider file names and get rid of some strange code it has to deal with
> > overrun.
>
> That would be nice.
>
> There would be the issue of how to deal with pg_upgrade, though.  If
> I remember correctly, there is no strong reason not to blow away
> any existing files in the pg_serial subdirectory at startup (the
> way NOTIFY code does), and at one point I had code to do that.  I
> think we took that code out because the files would be deleted
> "soon enough" anyway.  Barring objection, deleting them at startup
> seems like a sane way to handle pg_upgrade issues when we do
> increase the filename size.

Agreed.  It's easy to have the files deleted at startup now that the
truncation stuff uses a callback.  There is already a callback that's
used to delete all files, so you won't need to write any code to make
it behave that way.

FWIW for pg_multixact/members during pg_upgrade from 9.3 to 9.4 we will
need to rename existing files, prepending a zero to each file whose
name is four chars in length.

--
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
Alvaro Herrera wrote:

> 1. slru.c doesn't consider file names longer than 4 hexadecimal chars.

> For 9.3, I propose we skip this and tweak the code to consider files
> whose names are 4 or 5 chars in length, to remain compatible with
> existing installations that have pg_multixact/member having a mixture of
> 4-char and 5-char file names.

Attached is a patch for this.

> 2. pg_multixact/members truncation requires more intelligence to avoid
> removing files that are still needed.  Right now we use modulo-2^32
> arithmetic, but this doesn't work because the useful range can span
> longer than what we can keep within that range.

> #2c At start of truncation, save end-of-range in MultiXactState.  This
> state is updated by GetNewMultiXactId as new files are created.  That
> way, before each new file is created, the truncation routine knows to
> skip it.

Attached is a patch implementing this.

I also attach a patch implementing a "burn multixact" utility, initially
coded by Andres Freund, tweaked by me.  I used it to run a bunch of
wraparound cycles and everything seems to behave as expected.  (I don't
recommend applying this patch; I'm posting it merely because it's a very
useful debugging tool.)

One problem I see is the length of time before multis are frozen: they
live for far too long, causing the SLRU files to eat way too much disk
space.  I ran burnmulti in a loop, creating multis of 3 members each,
with a min freeze age of 50 million, and this leads to ~770 files in
pg_multixact/offsets and ~2900 files in pg_multixact/members.  Each file
is 32 pages long, so 256kB apiece.  Probably enough to be bothersome.

I think for computing the freezing point for multis, we should slash
min_freeze_age by 10 or something like that.  Or just set a hardcoded
one million.

> 3. New pg_multixact/members generation requires more intelligence to
> avoid stomping on files from the previous wraparound cycle.  Right now
> there is no defense against this at all.

I still have no idea how to attack this.
--
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
On Mon, Dec 30, 2013 at 10:59 PM, Alvaro Herrera
<alvherre@2ndquadrant.com> wrote:
> One problem I see is length of time before freezing multis: they live
> for far too long, causing the SLRU files to eat way too much disk space.
> I ran burnmulti in a loop, creating multis of 3 members each, with a min
> freeze age of 50 million, and this leads to ~770 files in
> pg_multixact/offsets and ~2900 files in pg_multixact/members.  Each file
> is 32 pages long.  256kB apiece.  Probably enough to be bothersome.
>
> I think for computing the freezing point for multis, we should slash
> min_freeze_age by 10 or something like that.  Or just set a hardcoded
> one million.

Yeah.  Since we expect mxids to be composed at a much lower rate than
xids, we can keep pg_multixact small without needing to increase the
rate of full table scans.  However, it seems to me that we ought to
have GUCs for mxid_freeze_table_age and mxid_freeze_min_age.  There's
no principled way to derive those values from the corresponding values
for XIDs, and I can't see any reason to suppose that we know how to
auto-tune brand new values better than we know how to auto-tune their
XID equivalents that we've had for years.

One million is probably a reasonable default for mxid_freeze_min_age,
though.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Robert Haas escribió:
> On Mon, Dec 30, 2013 at 10:59 PM, Alvaro Herrera
> <alvherre@2ndquadrant.com> wrote:
> > One problem I see is length of time before freezing multis: they live
> > for far too long, causing the SLRU files to eat way too much disk space.
> > I ran burnmulti in a loop, creating multis of 3 members each, with a min
> > freeze age of 50 million, and this leads to ~770 files in
> > pg_multixact/offsets and ~2900 files in pg_multixact/members.  Each file
> > is 32 pages long.  256kB apiece.  Probably enough to be bothersome.
> >
> > I think for computing the freezing point for multis, we should slash
> > min_freeze_age by 10 or something like that.  Or just set a hardcoded
> > one million.
>
> Yeah.  Since we expect mxids to be composed at a much lower rate than
> xids, we can keep pg_multixact small without needing to increase the
> rate of full table scans.  However, it seems to me that we ought to
> have GUCs for mxid_freeze_table_age and mxid_freeze_min_age.  There's
> no principled way to derive those values from the corresponding values
> for XIDs, and I can't see any reason to suppose that we know how to
> auto-tune brand new values better than we know how to auto-tune their
> XID equivalents that we've had for years.
>
> One million is probably a reasonable default for mxid_freeze_min_age, though.

I didn't want to propose having new GUCs, but if there's no love for my
idea of deriving it from the Xid freeze policy, I guess it's the only
solution.  Just keep in mind we will need to back-patch these new GUCs
to 9.3.  Are there objections to this?

Also, what would be good names?  Peter E. complained recently about the
word MultiXactId being exposed in some error messages; maybe "mxid" is
too short an abbreviation of that.  Perhaps

    multixactid_freeze_min_age = 1 million
    multixactid_freeze_table_age = 3 million

?

I imagine this stuff would be described somewhere in the docs, perhaps
within the "routine maintenance" section somewhere.
FWIW the idea of having a glossary sounds good to me.

--
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
Hi,

On 2014-01-03 11:11:13 -0300, Alvaro Herrera wrote:
> > Yeah.  Since we expect mxids to be composed at a much lower rate than
> > xids, we can keep pg_multixact small without needing to increase the
> > rate of full table scans.

I don't think that's necessarily true - there have been several
pg_controldata outputs posted lately which had more multis used than
xids.  In workloads using explicit row locking or heavily used FKs
that's not that surprising.

> > However, it seems to me that we ought to
> > have GUCs for mxid_freeze_table_age and mxid_freeze_min_age.  There's
> > no principled way to derive those values from the corresponding values
> > for XIDs, and I can't see any reason to suppose that we know how to
> > auto-tune brand new values better than we know how to auto-tune their
> > XID equivalents that we've had for years.
> >
> > One million is probably a reasonable default for mxid_freeze_min_age, though.

I think setting mxid_freeze_min_age to something lower is fair game;
I'd even start at 100k or so.  What I think is important is that we do
*not* set mxid_freeze_table_age to something very low.  People
justifiably hate anti-wraparound vacuums.

What's your thought about the autovacuum_freeze_max_age equivalent?

I am not sure about introducing new GUCs in the back branches; I don't
have a problem with it, but I am also not sure it's necessary.  Fixing
the wraparound of members into itself seems more important, and once we
trigger vacuums via that, it doesn't seem too important to have low
settings.

> Also, what would be good names?  Peter E. complained recently about the
> word MultiXactId being exposed in some error messages; maybe "mxid" is
> too short an abbreviation of that.  Perhaps
> multixactid_freeze_min_age = 1 million
> multixactid_freeze_table_age = 3 million
> ?

I personally am fine with mxid - we use xid in other settings after all.
Greetings,

Andres Freund

--
Andres Freund                     http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
On Fri, Jan 3, 2014 at 9:11 AM, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:
> Robert Haas escribió:
>> On Mon, Dec 30, 2013 at 10:59 PM, Alvaro Herrera
>> <alvherre@2ndquadrant.com> wrote:
>> > One problem I see is length of time before freezing multis: they live
>> > for far too long, causing the SLRU files to eat way too much disk space.
>> > I ran burnmulti in a loop, creating multis of 3 members each, with a min
>> > freeze age of 50 million, and this leads to ~770 files in
>> > pg_multixact/offsets and ~2900 files in pg_multixact/members.  Each file
>> > is 32 pages long.  256kB apiece.  Probably enough to be bothersome.
>> >
>> > I think for computing the freezing point for multis, we should slash
>> > min_freeze_age by 10 or something like that.  Or just set a hardcoded
>> > one million.
>>
>> Yeah.  Since we expect mxids to be composed at a much lower rate than
>> xids, we can keep pg_multixact small without needing to increase the
>> rate of full table scans.  However, it seems to me that we ought to
>> have GUCs for mxid_freeze_table_age and mxid_freeze_min_age.  There's
>> no principled way to derive those values from the corresponding values
>> for XIDs, and I can't see any reason to suppose that we know how to
>> auto-tune brand new values better than we know how to auto-tune their
>> XID equivalents that we've had for years.
>>
>> One million is probably a reasonable default for mxid_freeze_min_age, though.
>
> I didn't want to propose having new GUCs, but if there's no love for my
> idea of deriving it from the Xid freeze policy, I guess it's the only
> solution.  Just keep in mind we will need to back-patch these new GUCs
> to 9.3.  Are there objections to this?
>
> Also, what would be good names?  Peter E. complained recently about the
> word MultiXactId being exposed in some error messages; maybe "mxid" is
> too short an abbreviation of that.  Perhaps
> multixactid_freeze_min_age = 1 million
> multixactid_freeze_table_age = 3 million
> ?
> I imagine this stuff would be described somewhere in the docs, perhaps
> within the "routine maintenance" section somewhere.

Yeah, this stuff is definitely underdocumented relative to vacuum right
now.

As far as back-patching the GUCs, my thought would be to back-patch them
but mark them GUC_NOT_IN_SAMPLE in 9.3, so we don't have to touch the
default postgresql.conf.

Also, while multixactid_freeze_min_age should be low, perhaps a million
as you suggest, multixactid_freeze_table_age should NOT be lowered to 3
million or anything like it.  If you do that, people who are actually
doing lots of row locking will start getting many more full-table scans.
We want to avoid that at all cost.  I'd probably make the default the
same as for vacuum_freeze_table_age, so that mxids only cause extra
full-table scans if they're being used more quickly than xids.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Robert Haas <robertmhaas@gmail.com> writes:
> As far as back-patching the GUCs, my thought would be to back-patch
> them but mark them GUC_NOT_IN_SAMPLE in 9.3, so we don't have to touch
> the default postgresql.conf.

That seems bizarre and pointless.

Keep in mind that 9.3 is still wet behind the ears and many many people
haven't adopted it yet.  If we do what you're suggesting then we're
creating a completely useless inconsistency that will nonetheless affect
all those future adopters ... while accomplishing nothing much for those
who have already installed 9.3.  The latter are not going to have these
GUCs in their existing postgresql.conf, true, but there's nothing we can
do about that.  (Hint: GUC_NOT_IN_SAMPLE doesn't actually *do* anything,
other than prevent the variable from being shown by SHOW ALL, which is
not exactly helpful here.)

			regards, tom lane
On Sat, Jan 4, 2014 at 12:38 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Robert Haas <robertmhaas@gmail.com> writes:
>> As far as back-patching the GUCs, my thought would be to back-patch
>> them but mark them GUC_NOT_IN_SAMPLE in 9.3, so we don't have to touch
>> the default postgresql.conf.
>
> That seems bizarre and pointless.
>
> Keep in mind that 9.3 is still wet behind the ears and many many people
> haven't adopted it yet.  If we do what you're suggesting then we're
> creating a completely useless inconsistency that will nonetheless affect
> all those future adopters ... while accomplishing nothing much for those
> who have already installed 9.3.  The latter are not going to have these
> GUCs in their existing postgresql.conf, true, but there's nothing we can
> do about that.  (Hint: GUC_NOT_IN_SAMPLE doesn't actually *do* anything,
> other than prevent the variable from being shown by SHOW ALL, which is not
> exactly helpful here.)

Well, I guess what I'm really wondering is whether we should refrain
from patching postgresql.conf.sample in 9.3, even if we add the GUC,
just because people may have existing configuration files that they've
already modified, and it could perhaps create confusion.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Robert Haas <robertmhaas@gmail.com> writes:
> On Sat, Jan 4, 2014 at 12:38 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> Keep in mind that 9.3 is still wet behind the ears and many many people
>> haven't adopted it yet.  If we do what you're suggesting then we're
>> creating a completely useless inconsistency that will nonetheless affect
>> all those future adopters ... while accomplishing nothing much for those
>> who have already installed 9.3.  The latter are not going to have these
>> GUCs in their existing postgresql.conf, true, but there's nothing we can
>> do about that.  (Hint: GUC_NOT_IN_SAMPLE doesn't actually *do* anything,
>> other than prevent the variable from being shown by SHOW ALL, which is not
>> exactly helpful here.)

> Well, I guess what I'm really wondering is whether we should refrain
> from patching postgresql.conf.sample in 9.3, even if we add the GUC,
> just because people may have existing configuration files that they've
> already modified, and it could perhaps create confusion.

If we don't update postgresql.conf.sample then we'll just be creating
different confusion.  My argument above is that many more people are
likely to be affected in the future by an omission in
postgresql.conf.sample than would be affected now by an inconsistency
between postgresql.conf.sample and their actual conf file.

			regards, tom lane
On Mon, Jan 6, 2014 at 2:53 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Robert Haas <robertmhaas@gmail.com> writes:
>> On Sat, Jan 4, 2014 at 12:38 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>>> Keep in mind that 9.3 is still wet behind the ears and many many people
>>> haven't adopted it yet.  If we do what you're suggesting then we're
>>> creating a completely useless inconsistency that will nonetheless affect
>>> all those future adopters ... while accomplishing nothing much for those
>>> who have already installed 9.3.  The latter are not going to have these
>>> GUCs in their existing postgresql.conf, true, but there's nothing we can
>>> do about that.  (Hint: GUC_NOT_IN_SAMPLE doesn't actually *do* anything,
>>> other than prevent the variable from being shown by SHOW ALL, which is not
>>> exactly helpful here.)
>
>> Well, I guess what I'm really wondering is whether we should refrain
>> from patching postgresql.conf.sample in 9.3, even if we add the GUC,
>> just because people may have existing configuration files that they've
>> already modified, and it could perhaps create confusion.
>
> If we don't update postgresql.conf.sample then we'll just be creating
> different confusion.  My argument above is that many more people are
> likely to be affected in the future by an omission in
> postgresql.conf.sample than would be affected now by an inconsistency
> between postgresql.conf.sample and their actual conf file.

I don't really have a horse in the race, so I'm OK with that if that's
the consensus.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On 1/4/14, 8:19 AM, Robert Haas wrote:
> Also, while multixactid_freeze_min_age should be low, perhaps a
> million as you suggest, multixactid_freeze_table_age should NOT be
> lowered to 3 million or anything like it.  If you do that, people who
> are actually doing lots of row locking will start getting many more
> full-table scans.  We want to avoid that at all cost.  I'd probably
> make the default the same as for vacuum_freeze_table_age, so that
> mxids only cause extra full-table scans if they're being used more
> quickly than xids.

Same default as vacuum_freeze_table_age, or default TO
vacuum_freeze_table_age?  I'm thinking the latter makes more sense...

--
Jim C. Nasby, Data Architect                   jim@nasby.net
512.569.9461 (cell)                            http://jim.nasby.net
On Mon, Jan 6, 2014 at 7:50 PM, Jim Nasby <jim@nasby.net> wrote:
> On 1/4/14, 8:19 AM, Robert Haas wrote:
>> Also, while multixactid_freeze_min_age should be low, perhaps a
>> million as you suggest, multixactid_freeze_table_age should NOT be
>> lowered to 3 million or anything like it.  If you do that, people who
>> are actually doing lots of row locking will start getting many more
>> full-table scans.  We want to avoid that at all cost.  I'd probably
>> make the default the same as for vacuum_freeze_table_age, so that
>> mxids only cause extra full-table scans if they're being used more
>> quickly than xids.
>
> Same default as vacuum_freeze_table_age, or default TO
> vacuum_freeze_table_age?  I'm thinking the latter makes more sense...

Same default.  I think it's a mistake to keep leading people to think
that the sensible values for one set of parameters are somehow related
to a sensible set of values for the other set.  They're really quite
different things.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On 2014-01-06 20:51:57 -0500, Robert Haas wrote:
> On Mon, Jan 6, 2014 at 7:50 PM, Jim Nasby <jim@nasby.net> wrote:
> > On 1/4/14, 8:19 AM, Robert Haas wrote:
> >> Also, while multixactid_freeze_min_age should be low, perhaps a
> >> million as you suggest, multixactid_freeze_table_age should NOT be
> >> lowered to 3 million or anything like it.  If you do that, people who
> >> are actually doing lots of row locking will start getting many more
> >> full-table scans.  We want to avoid that at all cost.  I'd probably
> >> make the default the same as for vacuum_freeze_table_age, so that
> >> mxids only cause extra full-table scans if they're being used more
> >> quickly than xids.
> >
> > Same default as vacuum_freeze_table_age, or default TO
> > vacuum_freeze_table_age?  I'm thinking the latter makes more sense...
>
> Same default.  I think it's a mistake to keep leading people to think
> that the sensible values for one set of parameters are somehow related
> to a sensible set of values for the other set.  They're really quite
> different things.

Valid argument - on the other hand, defaulting to the current variable's
value has the advantage of being less likely to cause pain when doing a
minor version upgrade, because suddenly full table vacuums are much more
frequent.

Greetings,

Andres Freund

--
Andres Freund                     http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
Robert Haas escribió:
> On Fri, Jan 3, 2014 at 9:11 AM, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:

> Yeah, this stuff is definitely underdocumented relative to vacuum right now.

I have added a paragraph or two.  It's a (probably insufficient) start.
I would like to add a sample query to monitor usage, but I just realized
we don't have a function such as age(xid) to expose this info usefully.
We can't introduce one in 9.3 now, but probably we should do so in HEAD.

> Also, while multixactid_freeze_min_age should be low, perhaps a
> million as you suggest, multixactid_freeze_table_age should NOT be
> lowered to 3 million or anything like it.  If you do that, people who
> are actually doing lots of row locking will start getting many more
> full-table scans.  We want to avoid that at all cost.  I'd probably
> make the default the same as for vacuum_freeze_table_age, so that
> mxids only cause extra full-table scans if they're being used more
> quickly than xids.

I agree that the freeze_table limit should not be low, but 150 million
seems too high.  Not really sure what's a good value here.

Here's a first cut at this.  Note I have omitted a setting equivalent to
autovacuum_freeze_max_age, but I think we should have one too.

--
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
Alvaro Herrera escribió:

> Here's a first cut at this.  Note I have omitted a setting equivalent to
> autovacuum_freeze_max_age, but I think we should have one too.

Some more comments on the patch:

* I haven't introduced settings to tweak this per table for autovacuum.
  I don't think those are needed.  It's not hard to do, however; if
  people opine against this, I will implement that.

* The multixact_freeze_table_age value has been set to 5 million.  I
  feel this is a big enough number that it shouldn't cause too much
  vacuuming churn, while at the same time not leaving excessive storage
  occupied by pg_multixact/members, which amplifies the space used by
  the average number of members in each multi.

  (A bit of math: each Xid uses 2 bits.  Therefore for the default 200
  million transactions of vacuum_freeze_table_age we use 50 million
  bytes, or about 48 MB of space, plus some room for per-page LSNs.
  For each multi we use 4 bytes in offsets plus 5 bytes per member; if
  we consider 2 members per multi on average, that totals 70 million
  bytes for the default multixact_freeze_table_age, so about 66 MB of
  space.)

* I have named the parameters by simply replacing "vacuum" with
  "multixact".  I could instead have added the "multixact" word in the
  middle:
    vacuum_multixact_freeze_min_age
  but this doesn't seem an improvement.

* In the word "Multixact" in the docs I left the X as lowercase.  I used
  uppercase first but that looked pretty odd.  In the middle of a
  sentence, the M is also lowercase.

I reworded the paragraph in maintenance.sgml a bit.  If there are
suggestions, please shout.

   <para>
    Similar to transaction IDs, Multixact IDs are implemented as a
    32-bit counter and corresponding storage which requires careful
    aging management, storage cleanup, and wraparound handling.
    Multixacts are used to implement row locking by multiple
    transactions: since there is limited space in the tuple header to
    store lock information, that information is stored separately and
    only a reference to it is in the <structfield>xmax</> field in the
    tuple header.
   </para>

   <para>
    As with transaction IDs, <command>VACUUM</> is in charge of removing
    old values.  Each <command>VACUUM</> run sets
    <structname>pg_class</>.<structfield>relminmxid</> indicating the
    oldest possible value still stored in that table; every time this
    value is older than <xref linkend="guc-multixact-freeze-table-age">,
    a full-table scan is forced.  During any table scan (either partial
    or full-table), any multixact older than
    <xref linkend="guc-multixact-freeze-min-age"> is replaced by
    something else, which can be the zero value, a single transaction
    ID, or a newer multixact.  Eventually, as all tables in all
    databases are scanned and their oldest multixact values are
    advanced, on-disk storage for older multixacts can be removed.
   </para>
  </sect3>

--
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
Hi,

On 2014-01-20 15:39:33 -0300, Alvaro Herrera wrote:
> * The multixact_freeze_table_age value has been set to 5 million.
>   I feel this is a big enough number that shouldn't cause too much
>   vacuuming churn, while at the same time not leaving excessive storage
>   occupied by pg_multixact/members, which amplifies the space used by
>   the average number of member in each multi.

That seems *far* too low to me.  In some workloads - remember, we've
seen pg_controldata outputs with a far higher next multi than next xid
- that will cause excessive full table scans.  I really think that we
shouldn't change the default of freeze_table_age for multis at all.  I
think we should have a lower value for the vacuum_freeze_min_age
equivalent, but that's it.

> (A bit of math: each Xid uses 2 bits.  Therefore for the default 200
> million transactions of vacuum_freeze_table_age we use 50 million bytes,
> or about 27 MB of space, plus some room for per-page LSNs.  For each
> Multi we use 4 bytes in offset plus 5 bytes per member; if we consider 2
> members per multi in average, that totals 70 million bytes for the
> default multixact_freeze_table_age, so 66 MB of space.)

That doesn't seem sufficient cause to change the default to me.

> * I have named the parameters by simply replacing "vacuum" with
>   "multixact".  I could instead have added the "multixact" word in the
>   middle:
>     vacuum_multixact_freeze_min_age
>   but this doesn't seem an improvement.

I vote for the longer version.  Right now you can get all relevant
vacuum parameters by grepping/searching for vacuum; we shouldn't give
up on that.  If we consider vacuum_multixact_freeze_min_age to be too
long, I'd rather vote for replacing multixact by mxid or such.

Greetings,

Andres Freund

--
Andres Freund                     http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
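For concreteness, the naming Andres favors would look like this in
postgresql.conf.  Both the names and the values were still under
discussion at this point in the thread, so this is only a sketch of the
proposal, not settled syntax:

```ini
# Hypothetical settings, following the vacuum_multixact_* naming
# suggested above; defaults were still being debated.
vacuum_multixact_freeze_min_age = 1000000       # 1 million, per earlier discussion
vacuum_multixact_freeze_table_age = 150000000   # same as vacuum_freeze_table_age
```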
On Mon, Jan 20, 2014 at 1:39 PM, Alvaro Herrera
<alvherre@2ndquadrant.com> wrote:
> * I haven't introduced settings to tweak this per table for
>   autovacuum.  I don't think those are needed.  It's not hard to do,
>   however; if people opine against this, I will implement that.

I can't think of any reason to believe that it will be less important
to tune these values on a per-table basis than it is to be able to do
the same with the autovacuum parameters.  Indeed, all the discussion on
this thread suggests precisely that we have no real idea how to set
these values yet, so more configurability is good.  Even if you reject
that argument, I think it's a bad idea to start making xmax vacuuming
and xmin vacuuming less than parallel; such decisions confuse users.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Robert Haas escribió:
> On Mon, Jan 20, 2014 at 1:39 PM, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:
> > * I haven't introduced settings to tweak this per table for autovacuum. I don't think those are needed. It's not hard to do, however; if people opine against this, I will implement that.
>
> I can't think of any reason to believe that it will be less important to tune these values on a per-table basis than it is to be able to do the same with the autovacuum parameters. Indeed, all the discussion on this thread suggests precisely that we have no real idea how to set these values yet, so more configurability is good. Even if you reject that argument, I think it's a bad idea to start making xmax vacuuming and xmin vacuuming less than parallel; such decisions confuse users.

Yeah, I can relate to this argument. I have added per-table configurability to this, and also added an equivalent of autovacuum_freeze_max_age to force a for-wraparound full scan of a table based on multixacts.

I haven't really tested this beyond ensuring that it compiles, and I haven't changed the default values, but here it is in case someone wants to have a look and comment, particularly on the doc additions.

--
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
Attachment
On Tue, Feb 11, 2014 at 5:16 PM, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:
> Robert Haas escribió:
>> On Mon, Jan 20, 2014 at 1:39 PM, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:
>> > * I haven't introduced settings to tweak this per table for autovacuum. I don't think those are needed. It's not hard to do, however; if people opine against this, I will implement that.
>>
>> I can't think of any reason to believe that it will be less important to tune these values on a per-table basis than it is to be able to do the same with the autovacuum parameters. Indeed, all the discussion on this thread suggests precisely that we have no real idea how to set these values yet, so more configurability is good. Even if you reject that argument, I think it's a bad idea to start making xmax vacuuming and xmin vacuuming less than parallel; such decisions confuse users.
>
> Yeah, I can relate to this argument. I have added per-table configurability to this, and also added an equivalent of autovacuum_freeze_max_age to force a for-wraparound full scan of a table based on multixacts.
>
> I haven't really tested this beyond ensuring that it compiles, and I haven't changed the default values, but here it is in case someone wants to have a look and comment --- particularly on the doc additions.

Using Multixact capitalized just so seems odd to me. Probably should be lower case (multiple places).

This part needs some copy-editing:

+ <para>
+ Vacuum also allows removal of old files from the
+ <filename>pg_multixact/members</> and <filename>pg_multixact/offsets</>
+ subdirectories, which is why the default is a relatively low
+ 50 million transactions.

Vacuuming multixacts also allows...? And: 50 million multixacts, not transactions.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
In this new version, I added a couple of fields to the VacuumStmt node. How strongly do we feel this would cause an ABI break? Would we be more comfortable if I put them at the end of the struct for 9.3 instead? Do we expect third-party code to be calling vacuum()?

Also, AutoVacOpts (used as part of reloptions) gained three extra fields. Since this is in the middle of StdRdOptions, it'd be somewhat more involved to put these at the end of that struct. This might be a problem if somebody has a module calling RelationIsSecurityView(). If anyone thinks we should be concerned about such an ABI change, please shout quickly.

Here is patch v3, which should be final or close to it. Changes from the previous version:

Robert Haas wrote:
> Using Multixact capitalized just so seems odd to me. Probably should be lower case (multiple places).

Changed it to be all lower case. Originally the X was also upper case, which looked even odder.

> This part needs some copy-editing:
>
> + <para>
> + Vacuum also allows removal of old files from the
> + <filename>pg_multixact/members</> and <filename>pg_multixact/offsets</>
> + subdirectories, which is why the default is a relatively low
> + 50 million transactions.
>
> Vacuuming multixacts also allows...? And: 50 million multixacts, not transactions.

I reworded this rather completely.

I was missing a change to SetMultiXactIdLimit to use the multixact value instead of the one for Xids, and passing the values computed by autovacuum to vacuum(). Per discussion, the new default values are 150 million for vacuum_multixact_freeze_table_age (same as the one for Xids) and 5 million for vacuum_multixact_freeze_min_age. I decided to raise autovacuum_multixact_freeze_max_age to 400 million, i.e. double the one for Xids; so there should be no more emergency vacuuming than before unless multixact consumption is more than double that for Xids.
(Now that I re-read this, the same rationale would have me setting the default for vacuum_multixact_freeze_table_age to 300 million. Any votes on that?)

I adjusted the default values everywhere (docs and sample config), and fixed one or two typos in the docco for Xid vacuuming that I happened to notice as well. postgresql.conf.sample contained a couple of spaces-before-tab, which I removed.

(I thought about using a struct to pass all four values around in multiple routines rather than 4 ints (vacuum_set_xid_limits, cluster_rel, rebuild_relation, copy_heap_data). Decided not to for the time being. Perhaps a patch for HEAD only.)

--
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
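For reference, the defaults being discussed would look roughly like this in postgresql.conf (a sketch using only the parameter names and values named in this thread; the vacuum_multixact_freeze_table_age value is still under debate):

```
vacuum_multixact_freeze_min_age = 5000000         # 5 million
vacuum_multixact_freeze_table_age = 150000000     # 150 million; 300 million also floated
autovacuum_multixact_freeze_max_age = 400000000   # 400 million, double the Xid default
```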
Attachment
Alvaro Herrera <alvherre@2ndquadrant.com> writes:
> In this new version, I added a couple of fields to the VacuumStmt node. How strongly do we feel this would cause an ABI break? Would we be more comfortable if I put them at the end of the struct for 9.3 instead?

In the past we've usually added such members at the end of the struct in back branches (but put them in the logical place in HEAD). I'd recommend doing that just on principle.

> Also, AutoVacOpts (used as part of reloptions) gained three extra fields. Since this is in the middle of StdRdOptions, it'd be somewhat more involved to put these at the end of that struct. This might be a problem if somebody has a module calling RelationIsSecurityView(). If anyone thinks we should be concerned about such an ABI change, please shout quickly.

That sounds problematic --- surely StdRdOptions might be something extensions are making use of?

			regards, tom lane
Tom Lane escribió:
> Alvaro Herrera <alvherre@2ndquadrant.com> writes:
> > In this new version, I added a couple of fields to the VacuumStmt node. How strongly do we feel this would cause an ABI break? Would we be more comfortable if I put them at the end of the struct for 9.3 instead?
>
> In the past we've usually added such members at the end of the struct in back branches (but put them in the logical place in HEAD). I'd recommend doing that just on principle.

Okay.

> > Also, AutoVacOpts (used as part of reloptions) gained three extra fields. Since this is in the middle of StdRdOptions, it'd be somewhat more involved to put these at the end of that struct. This might be a problem if somebody has a module calling RelationIsSecurityView(). If anyone thinks we should be concerned about such an ABI change, please shout quickly.
>
> That sounds problematic --- surely StdRdOptions might be something extensions are making use of?

So can we assume that security_barrier is the only thing to be concerned about? If so, the attached patch should work around the issue by placing it in the same physical location. I guess if there are modules that add extra stuff beyond StdRdOptions, this wouldn't work, but I'm not really sure how likely that is, given that our reloptions design hasn't proven to be the most extensible thing in the world.

--
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
Attachment
On 2014-02-12 17:40:44 -0300, Alvaro Herrera wrote:
> > > Also, AutoVacOpts (used as part of reloptions) gained three extra fields. Since this is in the middle of StdRdOptions, it'd be somewhat more involved to put these at the end of that struct. This might be a problem if somebody has a module calling RelationIsSecurityView(). If anyone thinks we should be concerned about such an ABI change, please shout quickly.
> >
> > That sounds problematic --- surely StdRdOptions might be something extensions are making use of?
>
> So can we assume that security_barrier is the only thing to be concerned about? If so, the attached patch should work around the issue by placing it in the same physical location.

Aw. How about instead temporarily introducing AutoVacMXactOpts or something? Changing the name of the member variable sounds just as likely to break things.

> I guess if there are modules that add extra stuff beyond StdRdOptions, this wouldn't work, but I'm not really sure how likely this is given that our reloptions design hasn't proven to be the most extensible thing in the world.

Hm, I don't see how it'd be problematic, even if they do. I don't really understand the design of the reloptions code, but AFAICS they shouldn't do so by casting around rd_options but by parsing it anew, right?

Greetings,

Andres Freund

--
Andres Freund                     http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
Andres Freund escribió:
> On 2014-02-12 17:40:44 -0300, Alvaro Herrera wrote:
> > > > Also, AutoVacOpts (used as part of reloptions) gained three extra fields. Since this is in the middle of StdRdOptions, it'd be somewhat more involved to put these at the end of that struct. This might be a problem if somebody has a module calling RelationIsSecurityView(). If anyone thinks we should be concerned about such an ABI change, please shout quickly.
> > >
> > > That sounds problematic --- surely StdRdOptions might be something extensions are making use of?
> >
> > So can we assume that security_barrier is the only thing to be concerned about? If so, the attached patch should work around the issue by placing it in the same physical location.
>
> Aw. How about instead temporarily introducing AutoVacMXactOpts or something? Changing the name of the member variable sounds just as likely to break things.

Yes, that's what I did --- see the attached patch, which I would apply on top of the code for master, and which would go only into 9.3. The idea here is to keep the existing bits of StdRdOptions identical, so that macros such as RelationIsSecurityView() that were compiled against the old rel.h continue to work unchanged and without requiring a recompile.

> > I guess if there are modules that add extra stuff beyond StdRdOptions, this wouldn't work, but I'm not really sure how likely this is given that our reloptions design hasn't proven to be the most extensible thing in the world.
>
> Hm, I don't see how it'd be problematic, even if they do. I don't really understand the design of the reloptions code, but AFAICS they shouldn't do so by casting around rd_options but by parsing it anew, right?

Now that I think about it, I don't think adding stuff at the end of StdRdOptions has anything to do with adding nonstandard options. So if we extend that struct, we're not breaking any ABI contract.
--
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
Alvaro Herrera escribió:
> Yes, that's what I did --- see the attached patch, which I would apply on top of the code for master and would be only in 9.3.

(Of course, these changes affect other parts of the code, in particular autovacuum.c and reloptions.c. But that's not important here.)

--
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
Attachment
On 2014-02-13 14:40:39 -0300, Alvaro Herrera wrote:
> Andres Freund escribió:
> > On 2014-02-12 17:40:44 -0300, Alvaro Herrera wrote:
> > > > > Also, AutoVacOpts (used as part of reloptions) gained three extra fields. Since this is in the middle of StdRdOptions, it'd be somewhat more involved to put these at the end of that struct. This might be a problem if somebody has a module calling RelationIsSecurityView(). If anyone thinks we should be concerned about such an ABI change, please shout quickly.
> > > >
> > > > That sounds problematic --- surely StdRdOptions might be something extensions are making use of?
> > >
> > > So can we assume that security_barrier is the only thing to be concerned about? If so, the attached patch should work around the issue by placing it in the same physical location.
> >
> > Aw. How about instead temporarily introducing AutoVacMXactOpts or something? Changing the name of the member variable sounds just as likely to break things.
>
> Yes, that's what I did --- see the attached patch, which I would apply on top of the code for master and would be only in 9.3. The idea here is to keep the existing bits of StdRdOptions identical, so that macros such as RelationIsSecurityView() that were compiled with the old rel.h continue to work unchanged and without requiring a recompile.

What I mean is that earlier code using StdRdOptions->security_barrier directly now won't compile anymore. So you've changed an ABI breakage into an API break. That's why I suggest adding the new options in a separate struct at the end of StdRdOptions; that won't break anything.

Greetings,

Andres Freund

--
Andres Freund                     http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
Andres Freund escribió:
> On 2014-02-12 17:40:44 -0300, Alvaro Herrera wrote:
> > > > Also, AutoVacOpts (used as part of reloptions) gained three extra fields. Since this is in the middle of StdRdOptions, it'd be somewhat more involved to put these at the end of that struct. This might be a problem if somebody has a module calling RelationIsSecurityView(). If anyone thinks we should be concerned about such an ABI change, please shout quickly.
> > >
> > > That sounds problematic --- surely StdRdOptions might be something extensions are making use of?
> >
> > So can we assume that security_barrier is the only thing to be concerned about? If so, the attached patch should work around the issue by placing it in the same physical location.
>
> Aw. How about instead temporarily introducing AutoVacMXactOpts or something? Changing the name of the member variable sounds just as likely to break things.

So here are two patches: the first one, for 9.3 and HEAD, introduces the new aging variables and uses them throughout vacuum and autovacuum, including per-table options; the second one adjusts the struct declarations to avoid the ABI break in VacuumStmt and StdRdOptions.

(Actually, for HEAD I needed to fix a failed merge due to the removal of the freeze age params to cluster_rel in commit 3cff1879f, but there's nothing interesting there so I'm not posting that part.)

--
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
Attachment
Alvaro Herrera escribió:
> So here are two patches -- the first one, for 9.3 and HEAD, introduce the new aging variables and use them throughout vacuum and autovacuum, including per-table options; the second one adjusts the struct declarations to avoid the ABI break in VacuumStmt and StdRdOptions.

I forgot to ask: what opinions are there about vacuum_multixact_freeze_table_age's default value? Right now I have 150 million, the same as for Xids. However, it might make sense to use 300 million, so that whole-table scans are not forced earlier than for Xids unless the consumption rate for multixacts is double the one for Xids.

I already have set autovacuum_multixact_freeze_max_age to 400 million, i.e. double that for Xids. This means emergency vacuums will not take place for multis unless the consumption rate is double that for Xids. This seems pretty reasonable to me.

--
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
Alvaro Herrera escribió:
> So here are two patches -- the first one, for 9.3 and HEAD, introduce the new aging variables and use them throughout vacuum and autovacuum, including per-table options; the second one adjusts the struct declarations to avoid the ABI break in VacuumStmt and StdRdOptions.

I have pushed this for both 9.3 and master.

--
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services