Thread: Re: [HACKERS] Concurrent ALTER SEQUENCE RESTART Regression
On Fri, Apr 28, 2017 at 04:55:45PM +0900, Michael Paquier wrote:
> On Thu, Apr 27, 2017 at 4:10 PM, Andres Freund <andres@anarazel.de> wrote:
> > On April 27, 2017 12:06:55 AM PDT, Michael Paquier <michael.paquier@gmail.com> wrote:
> >> On Thu, Apr 27, 2017 at 3:23 PM, Andres Freund <andres@anarazel.de> wrote:
> >>> More fun:
> >>>
> >>> A: CREATE SEQUENCE someseq;
> >>> A: BEGIN;
> >>> A: ALTER SEQUENCE someseq MAXVALUE 10;
> >>> B: SELECT nextval('someseq') FROM generate_series(1, 1000);
> >>>
> >>> => ignores maxvalue
> >>
> >> Well, for this one that's because the catalog change is
> >> transactional...
> >
> > Or because the locking model is borked.
>
> The operation actually relies heavily on the fact that the exclusive
> lock on the buffer of pg_sequence is held until the end of the catalog
> update.  And using heap_inplace_update() seems mandatory to me as the
> metadata update should be non-transactional, giving the attached.  I
> have added some isolation tests.  Thoughts?  The attached makes HEAD
> map with the pre-9.6 behavior.

[Action required within three days.  This is a generic notification.]

The above-described topic is currently a PostgreSQL 10 open item.  Peter,
since you committed the patch believed to have created it, you own this
open item.  If some other commit is more relevant or if this does not
belong as a v10 open item, please let us know.  Otherwise, please observe
the policy on open item ownership[1] and send a status update within
three calendar days of this message.  Include a date for your subsequent
status update.  Testers may discover new open items at any time, and I
want to plan to get them all fixed well in advance of shipping v10.
Consequently, I will appreciate your efforts toward speedy resolution.
Thanks.

[1] https://www.postgresql.org/message-id/20170404140717.GA2675809%40tornado.leadboat.com
On 4/30/17 04:05, Noah Misch wrote:
> The above-described topic is currently a PostgreSQL 10 open item.  Peter,
> since you committed the patch believed to have created it, you own this
> open item.  If some other commit is more relevant or if this does not
> belong as a v10 open item, please let us know.  Otherwise, please
> observe the policy on open item ownership[1] and send a status update
> within three calendar days of this message.  Include a date for your
> subsequent status update.  Testers may discover new open items at any
> time, and I want to plan to get them all fixed well in advance of
> shipping v10.  Consequently, I will appreciate your efforts toward
> speedy resolution.  Thanks.

I'm working on this and will report on Friday.

--
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On Wed, May 03, 2017 at 11:29:29PM -0400, Peter Eisentraut wrote:
> On 4/30/17 04:05, Noah Misch wrote:
> > The above-described topic is currently a PostgreSQL 10 open item. [...]
>
> I'm working on this and will report on Friday.

This PostgreSQL 10 open item is past due for your status update.  Kindly
send a status update within 24 hours, and include a date for your
subsequent status update.  Refer to the policy on open item ownership:
https://www.postgresql.org/message-id/20170404140717.GA2675809%40tornado.leadboat.com
Hi,

Moving discussion to -hackers, this isn't really a bug, it's an
architectural issue with the new in-tree approach.

Short recap: With the patch applied in [1] ff, sequences partially behave
transactionally (because pg_sequence is updated transactionally),
partially non-transactionally (because there's no locking enforcing it,
and it's been stated as undesirable to change that).  This leads to
issues like the one described in [2].  For more context, read the whole
discussion.

On 2017-05-03 23:29:29 -0400, Peter Eisentraut wrote:
> I'm working on this and will report on Friday.

What's the plan here?  I think the problem is that the code as it stands
is trying to marry two incompatible things: You're trying to make
nextval() not block, but have ALTER SEQUENCE be transactional.  Given
MAXVALUE, INCREMENT, etc., those two simply aren't compatible.

I think there are three basic ways to a resolution here:

1. Revert the entire patch for now.  That's going to be very messy
   because of the number of follow-up patches & features.

2. Keep the catalog, but only ever update the records using
   heap_inplace_update().  That'll make the transactional behaviour very
   similar to before.  Medium-term it'd also allow moving the actual
   sequence block into the pg_sequence catalog itself.

3. Keep the catalog, make ALTER properly transactional, blocking
   concurrent nextval()s.  This resolves the issue that nextval() can't
   see the changed definition of the sequence.

I think this really needs design agreement from multiple people, because
any of these choices will have significant impact.  To me 3. seems like
the best approach long-term, because both the current and the <v10
behaviour are certainly confusing and inconsistent (but in different
ways).  But it'll be quite noticeable to users.

- Andres

[1] http://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=1753b1b027035029c2a2a1649065762fafbf63f3
[2] http://archives.postgresql.org/message-id/20170427062304.gxh3rczhh6vblrwh%40alap3.anarazel.de
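[For concreteness, a minimal sketch of the <v10 inconsistency referred to
above (sequence name illustrative, exact error text approximate; as noted
downthread, in 9.6 the ALTER is fully non-transactional, so a ROLLBACK
does not undo it):

    CREATE SEQUENCE someseq;
    BEGIN;
    ALTER SEQUENCE someseq MAXVALUE 5;
    ROLLBACK;
    SELECT nextval('someseq') FROM generate_series(1, 10);
    ERROR:  nextval: reached maximum value of sequence "someseq" (5)

The rolled-back MAXVALUE limit applies anyway.]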
On Mon, May 8, 2017 at 8:43 AM, Andres Freund <andres@anarazel.de> wrote:
> Moving discussion to -hackers, this isn't really a bug, it's an
> architectural issue with the new in-tree approach.
>
> Short recap: With the patch applied in [1] ff, sequences partially
> behave transactionally (because pg_sequence is updated transactionally),
> partially non-transactionally (because there's no locking enforcing it,
> and it's been stated as undesirable to change that).  This leads to
> issues like the one described in [2].  For more context, read the whole
> discussion.

Thanks for summarizing.

> On 2017-05-03 23:29:29 -0400, Peter Eisentraut wrote:
>> I'm working on this and will report on Friday.
>
> What's the plan here?  I think the problem is that the code as it stands
> is trying to marry two incompatible things: You're trying to make
> nextval() not block, but have ALTER SEQUENCE be transactional.  Given
> MAXVALUE, INCREMENT, etc., those two simply aren't compatible.
>
> I think there are three basic ways to a resolution here:
>
> 1. Revert the entire patch for now.  That's going to be very messy
>    because of the number of follow-up patches & features.
>
> 2. Keep the catalog, but only ever update the records using
>    heap_inplace_update().  That'll make the transactional behaviour very
>    similar to before.  Medium-term it'd also allow moving the actual
>    sequence block into the pg_sequence catalog itself.

In this case it is enough to use ShareUpdateExclusiveLock on the sequence
object, not pg_sequence.

> 3. Keep the catalog, make ALTER properly transactional, blocking
>    concurrent nextval()s.  This resolves the issue that nextval() can't
>    see the changed definition of the sequence.

This will need an even stronger lock, AccessExclusiveLock, to make this
work properly.

> I think this really needs design agreement from multiple people, because
> any of these choices will have significant impact.  To me 3. seems like
> the best approach long-term, because both the current and the <v10
> behaviour are certainly confusing and inconsistent (but in different
> ways).  But it'll be quite noticeable to users.

Count me in for the 3. bucket.  This loses the property of having
nextval() available to users while a parallel ALTER SEQUENCE is running,
but consistency matters even more.  I think that the final patch closing
this thread should also have proper isolation tests.
--
Michael
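[For reference, an isolation spec along the lines Michael asks for might
look like the following minimal sketch.  File, step, and object names are
hypothetical and need not match what was eventually committed:

    setup
    {
        CREATE SEQUENCE seq1 MAXVALUE 10;
    }

    teardown
    {
        DROP SEQUENCE seq1;
    }

    session "s1"
    setup           { BEGIN; }
    step "s1alter"  { ALTER SEQUENCE seq1 MAXVALUE 5; }
    step "s1commit" { COMMIT; }

    session "s2"
    step "s2next"   { SELECT nextval('seq1'); }

    # nextval() must block until the concurrent ALTER SEQUENCE commits
    permutation "s1alter" "s2next" "s1commit"
]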
On Sat, May 06, 2017 at 07:02:46PM +0000, Noah Misch wrote:
> On Wed, May 03, 2017 at 11:29:29PM -0400, Peter Eisentraut wrote:
> > On 4/30/17 04:05, Noah Misch wrote:
> > > The above-described topic is currently a PostgreSQL 10 open item. [...]
> >
> > I'm working on this and will report on Friday.
>
> This PostgreSQL 10 open item is past due for your status update.  Kindly
> send a status update within 24 hours, and include a date for your
> subsequent status update.  Refer to the policy on open item ownership:
> https://www.postgresql.org/message-id/20170404140717.GA2675809%40tornado.leadboat.com

IMMEDIATE ATTENTION REQUIRED.  This PostgreSQL 10 open item is long past
due for your status update.  Please reacquaint yourself with the policy
on open item ownership[1] and then reply immediately.  If I do not hear
from you by 2017-05-09 07:00 UTC, I will transfer this item to release
management team ownership without further notice.

[1] https://www.postgresql.org/message-id/20170404140717.GA2675809%40tornado.leadboat.com
On 5/8/17 02:58, Noah Misch wrote:
> IMMEDIATE ATTENTION REQUIRED.  This PostgreSQL 10 open item is long past
> due for your status update.  Please reacquaint yourself with the policy
> on open item ownership[1] and then reply immediately.  If I do not hear
> from you by 2017-05-09 07:00 UTC, I will transfer this item to release
> management team ownership without further notice.

I got side-tracked by the minor releases preparation.  I will work on
this now and report on Wednesday.

--
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On 2017-05-08 12:28:38 -0400, Peter Eisentraut wrote:
> On 5/8/17 02:58, Noah Misch wrote:
> > IMMEDIATE ATTENTION REQUIRED.  This PostgreSQL 10 open item is long
> > past due for your status update. [...]
>
> I got side-tracked by the minor releases preparation.  I will work on
> this now and report on Wednesday.

This first needs discussion, rather than work on a patch.  Could you
please reply to
http://archives.postgresql.org/message-id/20170507234334.yimkp6aq2o43wtt2%40alap3.anarazel.de

You've so far not actually replied to the concerns raised in this thread
in a more than superficial way, not responded to questions, and it's been
two weeks.  This is an architectural issue, not just a bug.  We can't
even think of going to beta before it's resolved.  I understand you not
having time to properly resolve it immediately, but we need to be able to
have a discussion about where to go from here.

- Andres
On 5/7/17 19:43, Andres Freund wrote:
> 3. Keep the catalog, make ALTER properly transactional, blocking
>    concurrent nextval()s.  This resolves the issue that nextval() can't
>    see the changed definition of the sequence.

This was the intended choice.

I think I have more clarity about the different, overlapping issues:

1. Concurrent ALTER SEQUENCE causes "tuple concurrently updated" errors,
   caused by updates to the pg_sequence catalog.  This can be fixed by
   taking a self-exclusive lock somewhere.

   A typical answer for other catalogs is to use ShareRowExclusiveLock.
   But in another context it was argued that that is undesirable for
   other reasons, and that it's better to stick with RowExclusiveLock and
   deal with the occasional consequences.  But this question is obsoleted
   by #2.

2. There is some inconsistency or disagreement about what to lock when
   modifying relation metadata.  When you alter a non-relation object,
   you just have the respective catalog to lock, and you have to make the
   trade-offs described in #1.  When you alter a relation object, you can
   lock the catalog or the actual relation, or both.  I had gone with
   locking the catalog, but existing practice in tablecmds.c is to lock
   the relation with the most appropriate lock level, and use
   RowExclusiveLock for the catalog.  We can make sequences do that, too.

3. Sequence WAL writes and catalog updates are not protected by the same
   lock.  We can fix that by holding a lock longer.  If per #2 we make
   the lock on the sequence the "main" one, then we just hold that one
   across the catalog update.

4. Some concerns about AlterSequence() opening the pg_sequence catalog
   while holding the sequence buffer lock.  Move some things around to
   fix that.

5. nextval() disregarding uncommitted ALTER SEQUENCE changes.  In <PG10,
   it would read the uncommitted metadata and observe it.  Currently, it
   goes ahead even if there is an uncommitted ALTER SEQUENCE pending that
   would prohibit what it's about to do (e.g., a MAXVALUE change).

   I think the correct fix is to have nextval() and ALTER SEQUENCE use
   sensible lock levels so that they block each other.  Since nextval()
   currently uses AccessShareLock, the suggestion was for ALTER SEQUENCE
   to therefore use AccessExclusiveLock.  But I think a better idea would
   be for nextval() to use RowExclusiveLock (analogous to UPDATE) and
   ALTER SEQUENCE to use ShareRowExclusiveLock, which would also satisfy
   issue #1.

Attached patches:

0001  Adds isolation tests for sequence behavior, in particular regarding
      point #5.  The expected files in 0001 show how it behaves in 9.6,
      and if you apply it to 10 without the subsequent fixes, it will
      show some problems.

0002  Locking changes discussed above, with test changes.

0003  (optional) Reduce locking if only RESTART is specified (because
      that does not do a catalog update, so it can run concurrently with
      nextval).

--
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Attachment
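[To illustrate the proposed lock levels, a hypothetical two-session
transcript (sequence name illustrative).  ShareRowExclusiveLock conflicts
with RowExclusiveLock and with itself, while RowExclusiveLock is
self-compatible, so concurrent nextval() calls would still not block each
other:

    S1: BEGIN;
    S1: ALTER SEQUENCE seq1 MAXVALUE 100;  -- takes ShareRowExclusiveLock
    S2: SELECT nextval('seq1');            -- takes RowExclusiveLock: blocks
    S1: COMMIT;                            -- S2 proceeds, seeing the new MAXVALUE
]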
On Wed, May 10, 2017 at 2:09 PM, Peter Eisentraut
<peter.eisentraut@2ndquadrant.com> wrote:
> I think I have more clarity about the different, overlapping issues:
>
> 1. Concurrent ALTER SEQUENCE causes "tuple concurrently updated" errors,
>    caused by updates to the pg_sequence catalog.  This can be fixed by
>    taking a self-exclusive lock somewhere.
>
>    A typical answer for other catalogs is to use ShareRowExclusiveLock.
>    But in another context it was argued that that is undesirable for
>    other reasons, and that it's better to stick with RowExclusiveLock
>    and deal with the occasional consequences.  But this question is
>    obsoleted by #2.

Yes.

> 2. There is some inconsistency or disagreement about what to lock when
>    modifying relation metadata. [...] We can make sequences do that,
>    too.

Agreed, sequences are relations as well at the end.  Consistency sounds
good to me.

> 3. Sequence WAL writes and catalog updates are not protected by the same
>    lock. [...]
>
> 4. Some concerns about AlterSequence() opening the pg_sequence catalog
>    while holding the sequence buffer lock. [...]
>
> 5. nextval() disregarding uncommitted ALTER SEQUENCE changes. [...]

Yes, and we want to block all nextval() calls until concurrent ALTER
SEQUENCE commands are committed.

>    I think the correct fix is to have nextval() and ALTER SEQUENCE use
>    sensible lock levels so that they block each other. [...]

No objections to that.

> Attached patches:
>
> 0001  Adds isolation tests for sequence behavior, in particular
>       regarding point #5. [...]
>
> 0002  Locking changes discussed above, with test changes.
>
> 0003  (optional) Reduce locking if only RESTART is specified (because
>       that does not do a catalog update, so it can run concurrently with
>       nextval).

Looking at 0001 and 0002...  So you are correctly blocking nextval() when
ALTER SEQUENCE holds a lock on the sequence object.  And concurrent calls
of nextval() don't conflict.  As far as I can see this matches the
implementation of 3.

Here are some minor comments.

+   /* lock page' buffer and read tuple into new sequence structure */

Nit: there is a single quote in this comment.

 /*
+ * TODO rename for lock level change
+ *
  * Open the sequence and acquire AccessShareLock if needed
  *

Some of the functions calling init_sequence() reference AccessShareLock
in comments.  Those will need an update as well.

+++ b/src/test/isolation/expected/sequence-nextval.out
@@ -0,0 +1,35 @@
+Parsed test spec with 2 sessions
+
+starting permutation: s1a s1b s2a s2b s1c s1d s2c s2d
+step s1a: SELECT nextval('seq1');
+nextval

I am not sure that this brings much value...
--
Michael
On 10/05/17 07:09, Peter Eisentraut wrote:
> On 5/7/17 19:43, Andres Freund wrote:
>> 3. Keep the catalog, make ALTER properly transactional, blocking
>>    concurrent nextval()s.  This resolves the issue that nextval() can't
>>    see the changed definition of the sequence.
>
> This was the intended choice.
>
> [...]
>
> 5. nextval() disregarding uncommitted ALTER SEQUENCE changes.  In <PG10,
>    it would read the uncommitted metadata and observe it.  Currently, it
>    goes ahead even if there is an uncommitted ALTER SEQUENCE pending
>    that would prohibit what it's about to do (e.g., a MAXVALUE change).
>
>    I think the correct fix is to have nextval() and ALTER SEQUENCE use
>    sensible lock levels so that they block each other.  Since nextval()
>    currently uses AccessShareLock, the suggestion was for ALTER SEQUENCE
>    to therefore use AccessExclusiveLock.  But I think a better idea
>    would be for nextval() to use RowExclusiveLock (analogous to UPDATE)
>    and ALTER SEQUENCE to use ShareRowExclusiveLock, which would also
>    satisfy issue #1.

When I proposed this upthread, Andres raised a concern about the
performance of nextval() if we do this; did you try to run any benchmark
on this?

Looking at the changes to open_share_lock(), I wonder if we need to have
the lock level as a parameter so that lastval() is not blocked.

--
Petr Jelinek  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
Petr Jelinek <petr.jelinek@2ndquadrant.com> writes:
> On 10/05/17 07:09, Peter Eisentraut wrote:
>> I think the correct fix is to have nextval() and ALTER SEQUENCE use
>> sensible lock levels so that they block each other.  Since nextval()
>> currently uses AccessShareLock, the suggestion was for ALTER SEQUENCE
>> to therefore use AccessExclusiveLock.  But I think a better idea would
>> be for nextval() to use RowExclusiveLock (analogous to UPDATE) and
>> ALTER SEQUENCE to use ShareRowExclusiveLock, which would also satisfy
>> issue #1.

> When I proposed this upthread, Andres raised a concern about the
> performance of nextval() if we do this; did you try to run any benchmark
> on this?

As long as it doesn't block, the change in lock strength doesn't actually
make any speed difference, does it?

If we were taking AccessExclusiveLock somewhere, I'd be worried about the
cost of WAL-logging those; but this proposal does not include any.

			regards, tom lane
On 2017-05-10 10:29:02 -0400, Tom Lane wrote:
> Petr Jelinek <petr.jelinek@2ndquadrant.com> writes:
> > When I proposed this upthread, Andres raised a concern about the
> > performance of nextval() if we do this; did you try to run any
> > benchmark on this?
>
> As long as it doesn't block, the change in lock strength doesn't
> actually make any speed difference, does it?

The issue isn't the strength, but that we currently have this weird
hackery around open_share_lock():

/*
 * Open the sequence and acquire AccessShareLock if needed
 *
 * If we haven't touched the sequence already in this transaction,
 * we need to acquire AccessShareLock.  We arrange for the lock to
 * be owned by the top transaction, so that we don't need to do it
 * more than once per xact.
 */

This'd probably need to be removed, as we'd otherwise get very weird
semantics around aborted subxacts.  Upthread I theorized whether that's
actually still meaningful given fastpath locking and such, but I guess
we'll have to evaluate that.

Greetings,

Andres Freund
Andres Freund <andres@anarazel.de> writes:
> On 2017-05-10 10:29:02 -0400, Tom Lane wrote:
>> As long as it doesn't block, the change in lock strength doesn't
>> actually make any speed difference, does it?

> The issue isn't the strength, but that we currently have this weird
> hackery around open_share_lock():

Oh!  I'd forgotten about that.  Yes, if we change that then we'd need to
do some performance checking.

			regards, tom lane
On 5/10/17 12:24, Andres Freund wrote:
> The issue isn't the strength, but that we currently have this weird
> hackery around open_share_lock():
>
> /*
>  * Open the sequence and acquire AccessShareLock if needed
>  *
>  * If we haven't touched the sequence already in this transaction,
>  * we need to acquire AccessShareLock.  We arrange for the lock to
>  * be owned by the top transaction, so that we don't need to do it
>  * more than once per xact.
>  */
>
> This'd probably need to be removed, as we'd otherwise get very weird
> semantics around aborted subxacts.

Can you explain in more detail what you mean by this?

--
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On 5/10/17 12:24, Andres Freund wrote:
> Upthread I theorized whether that's actually still meaningful given
> fastpath locking and such, but I guess we'll have to evaluate that.

I did some testing.  I ran this script

CREATE SEQUENCE seq1;

DO LANGUAGE plpythonu $$
plan = plpy.prepare("SELECT nextval('seq1')")
for i in range(0, 10000000):
    plpy.execute(plan)
$$;

and timed the "DO".  I compared 9.1 (pre fast-path locking) with 9.2
(with fast-path locking).  I compared the stock releases with a patched
version that replaces the body of open_share_lock() with just

    return relation_open(seq->relid, AccessShareLock);

First, a single session:

9.1 unpatched
Time: 57634.098 ms

9.1 patched
Time: 58674.655 ms

(These were within each other's variance over several runs.)

9.2 unpatched
Time: 64868.305 ms

9.2 patched
Time: 60585.317 ms

(So without contention, fast-path locking beats the extra dance that
open_share_lock() does.)

Then, with four concurrent sessions (one timed, three just running):

9.1 unpatched
Time: 73123.661 ms

9.1 patched
Time: 78019.079 ms

So the "hack" avoids the slowdown from contention here.

9.2 unpatched
Time: 73308.873 ms

9.2 patched
Time: 70353.971 ms

Here, fast-path locking beats out the "hack".

If I add more concurrent sessions, everything gets much slower, but the
advantage of the patched version stays about the same.  But I can't
reliably test that on this hardware.

(One wonders whether these differences remain relevant if this is run
within the context of an INSERT or COPY instead of just a tight loop.)

--
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On 2017-05-11 11:35:22 -0400, Peter Eisentraut wrote:
> On 5/10/17 12:24, Andres Freund wrote:
> > The issue isn't the strength, but that we currently have this weird
> > hackery around open_share_lock(): [...]
> >
> > This'd probably need to be removed, as we'd otherwise get very weird
> > semantics around aborted subxacts.
>
> Can you explain in more detail what you mean by this?

Well, right now we don't do proper lock-tracking for sequences, always
assigning the locks to the toplevel transaction.  But that doesn't seem
proper when nextval() would conflict with ALTER SEQUENCE et al, because
then locks would continue to be held by aborted savepoints.

- Andres
Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:
> On 5/10/17 12:24, Andres Freund wrote:
>> Upthread I theorized whether that's actually still meaningful given
>> fastpath locking and such, but I guess we'll have to evaluate that.

> [ with or without contention, fast-path locking beats the extra dance
>   that open_share_lock() does. ]

That is pretty cool.  It would be good to verify the same on master, but
assuming it holds up, I think it's OK to remove open_share_lock().

			regards, tom lane
Hi,

On 2017-05-11 16:27:48 -0400, Peter Eisentraut wrote:
> On 5/10/17 12:24, Andres Freund wrote:
> > Upthread I theorized whether that's actually still meaningful given
> > fastpath locking and such, but I guess we'll have to evaluate that.
>
> I did some testing.

That's with the open_share_lock stuff ripped out entirely, replaced by a
plain lock acquisition within the current subxact?

> (These were within each other's variance over several runs.)
>
> 9.2 unpatched
> Time: 64868.305 ms
>
> 9.2 patched
> Time: 60585.317 ms
>
> (So without contention, fast-path locking beats the extra dance that
> open_share_lock() does.)

That's kind of surprising, I really wouldn't have thought it'd be faster
without.  I guess it's the overhead of sigsetjmp().  Cool.

- Andres
Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:
> I ran this script
>
> CREATE SEQUENCE seq1;
>
> DO LANGUAGE plpythonu $$
> plan = plpy.prepare("SELECT nextval('seq1')")
> for i in range(0, 10000000):
>     plpy.execute(plan)
> $$;
>
> and timed the "DO".

It occurred to me that plpy.execute is going to run a subtransaction for
each call, which makes this kind of a dubious test case, both because of
the extra subtransaction overhead and because subtransaction lock
ownership effects will possibly skew the results.  So I tried to
reproduce the test using plain plpgsql:

CREATE SEQUENCE IF NOT EXISTS seq1;

\timing on

do $$
declare x int;
begin
  for i in 0 .. 10000000 loop
    x := nextval('seq1');
  end loop;
end
$$;

On HEAD, building without cassert, I got a fairly reproducible result of
about 10.6 seconds.  I doubt my machine is 6X faster than yours, so this
indicates that the subtransaction overhead is pretty real.

> I compared the stock releases with a patched version that replaces the
> body of open_share_lock() with just
>     return relation_open(seq->relid, AccessShareLock);

Hm.  I don't think that's a sufficient code change, because if you do it
like that then the lock remains held after nextval() returns.  This means
that (a) subsequent calls aren't hitting the shared lock manager at all,
they're just incrementing a local lock counter, and whether it would be
fastpath or not is irrelevant; and (b) this means that the semantic
issues Andres is worried about remain in place, because we will hold the
lock till transaction end.

I experimented with something similar, just replacing open_share_lock as
above, and I got runtimes of just about 12 seconds, which surprised me a
bit.  I'd have thought the locallock-already-exists code path would be
faster than that.

I then further changed the code so that nextval_internal ends with
"relation_close(seqrel, AccessShareLock);" rather than NoLock, so that
the lock is actually released between calls.  This boosted the runtime up
to 15.5 seconds, or a 50% penalty over HEAD.

My conclusion is that in low-overhead cases, such as using a sequence to
assign default values during COPY IN, the percentage overhead from
acquiring and releasing the lock could be pretty dire.  Still, we might
not have much choice if we want nice semantics.

			regards, tom lane
Andres Freund <andres@anarazel.de> writes:
> On 2017-05-11 16:27:48 -0400, Peter Eisentraut wrote:
>> (So without contention, fast-path locking beats the extra dance that
>> open_share_lock() does.)

> That's kind of surprising, I really wouldn't have thought it'd be
> faster without.  I guess it's the overhead of sigsetjmp().  Cool.

My results (posted nearby) lead me to suspect that the improvement Peter
sees from 9.1 to 9.2 has little to do with fastpath locking and a lot to
do with some improvement or other in subtransaction lock management.

			regards, tom lane
On 2017-05-11 17:21:18 -0400, Tom Lane wrote:
> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:
> > I ran this script [...] and timed the "DO".
>
> It occurred to me that plpy.execute is going to run a subtransaction
> for each call, which makes this kind of a dubious test case [...]
>
> On HEAD, building without cassert, I got a fairly reproducible result
> of about 10.6 seconds.  I doubt my machine is 6X faster than yours, so
> this indicates that the subtransaction overhead is pretty real.

Isn't that pretty much the point?  The whole open_share_lock()
optimization looks like it really only can make a difference with
subtransactions?

> > I compared the stock releases with a patched version that replaces
> > the body of open_share_lock() with just
> >     return relation_open(seq->relid, AccessShareLock);
>
> Hm.  I don't think that's a sufficient code change, because if you do
> it like that then the lock remains held after nextval() returns.

Hm?  That's not new, is it?  We previously did a
LockRelationOid(seq->relid) and then relation_open(seq->relid, NoLock)?

> This means that (a) subsequent calls aren't hitting the shared lock
> manager at all, they're just incrementing a local lock counter, and
> whether it would be fastpath or not is irrelevant; and (b) this means
> that the semantic issues Andres is worried about remain in place,
> because we will hold the lock till transaction end.

My concern was aborted subtransactions, and those should release their
lock via the resowner stuff at abort?

> I experimented with something similar, just replacing open_share_lock
> as above, and I got runtimes of just about 12 seconds, which surprised
> me a bit.

Yea, I suspect we'd need something like the current open_share_lock,
except with the resowner trickery removed.

> I'd have thought the locallock-already-exists code path would be faster
> than that.

It's a hash-table lookup, via dynahash no less; that's definitely going
to be more expensive than a single "seq->lxid != thislxid" test...

Greetings,

Andres Freund
Andres Freund <andres@anarazel.de> writes:
> On 2017-05-11 17:21:18 -0400, Tom Lane wrote:
>> I doubt my machine is 6X faster than yours, so this indicates that the
>> subtransaction overhead is pretty real.

> Isn't that pretty much the point?  The whole open_share_lock()
> optimization looks like it really only can make a difference with
> subtransactions?

Uh, no; I'm pretty sure that that code is older than subtransactions.
The point of it is to avoid taking and releasing a lock over and over
within a single transaction.

>> Hm.  I don't think that's a sufficient code change, because if you do
>> it like that then the lock remains held after nextval() returns.

> Hm?  That's not new, is it?  We previously did a
> LockRelationOid(seq->relid) and then relation_open(seq->relid, NoLock)?

Right, but the existing code is *designed* to hold the lock till end of
top-level transaction, regardless of what happens in any subtransaction.
My understanding of your complaint is that you do not think that's OK for
any lock stronger than AccessShareLock.

			regards, tom lane
On 5/11/17 16:34, Andres Freund wrote:
>>> This'd probably need to be removed, as we'd otherwise get very weird
>>> semantics around aborted subxacts.
>> Can you explain in more detail what you mean by this?
> Well, right now we don't do proper lock-tracking for sequences, always
> assigning the locks to the toplevel transaction.  But that doesn't seem
> proper when nextval() would conflict with ALTER SEQUENCE et al, because
> then locks would continue to be held by aborted savepoints.

I see what you mean here.  We already have this issue with DROP SEQUENCE.
While it would be nice to normalize this, I think it's quite esoteric.
I doubt users have any specific expectations about how sequences behave
in aborted subtransactions.

--
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
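[A sketch of the existing DROP SEQUENCE case referred to above
(hypothetical transcript; because the lock taken by nextval() is owned by
the top-level transaction, it survives the rolled-back savepoint):

    S1: BEGIN;
    S1: SAVEPOINT sp;
    S1: SELECT nextval('seq1');
    S1: ROLLBACK TO SAVEPOINT sp;
    S2: DROP SEQUENCE seq1;   -- blocks until S1's transaction ends
]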
On 5/11/17 17:28, Andres Freund wrote:
> Isn't that pretty much the point?  The whole open_share_lock()
> optimization looks like it really only can make a difference with
> subtransactions?

Yeah, that confused me too.  That keep-the-lock-for-the-whole-transaction
logic was introduced in a2597ef17958e75e7ba26507dc407249cc9e7134 around
7.3, way before subtransactions (8.0).  The point there was apparently
just to avoid hitting the lock manager on subsequent calls.

That code has since been moved around and refactored many times.  When
subtransactions were introduced, the current logic of keeping the lock
across subtransactions was added in order to keep the original intent.

--
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On 5/11/17 23:59, Tom Lane wrote:
> Right, but the existing code is *designed* to hold the lock till end of
> top-level transaction, regardless of what happens in any subtransaction.
> My understanding of your complaint is that you do not think that's OK
> for any lock stronger than AccessShareLock.

What he is saying (I believe) is: Because of that custom logic to hold
the lock until the end of the transaction across subtransactions, you
will keep holding the lock even if a subtransaction aborts, which is
nonstandard behavior.  Previously, nobody cared, because nobody else was
locking sequences.  But now we're proposing to make ALTER SEQUENCE lock
the sequence, so this behavior would be visible:

S1: BEGIN;
S1: SAVEPOINT s1;
S1: SELECT nextval('seq1');
S1: ROLLBACK TO SAVEPOINT s1;
S2: ALTER SEQUENCE seq1 MAXVALUE 100;  -- would now wait for S1 commit

My amendment to that is that "previously nobody cared" is not quite
correct, because the same already happens currently with DROP SEQUENCE.

--
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On Mon, May 08, 2017 at 12:28:38PM -0400, Peter Eisentraut wrote:
> On 5/8/17 02:58, Noah Misch wrote:
> > IMMEDIATE ATTENTION REQUIRED.  This PostgreSQL 10 open item is long
> > past due for your status update. [...]
>
> I got side-tracked by the minor releases preparation.  I will work on
> this now and report on Wednesday.

IMMEDIATE ATTENTION REQUIRED.  This PostgreSQL 10 open item is long past
due for your status update.  Please reacquaint yourself with the policy
on open item ownership[1] and then reply immediately.  If I do not hear
from you by 2017-05-16 04:00 UTC, I will transfer this item to release
management team ownership without further notice.

[1] https://www.postgresql.org/message-id/20170404140717.GA2675809%40tornado.leadboat.com
On 5/10/17 09:12, Michael Paquier wrote:
> Looking at 0001 and 0002...  So you are correctly blocking nextval()
> when ALTER SEQUENCE holds a lock on the sequence object.  And concurrent
> calls of nextval() don't conflict.  As far as I can see this matches the
> implementation of 3.
>
> Here are some minor comments.

Committed after working in your comments.  Thanks!

--
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On 2017-05-15 10:34:02 -0400, Peter Eisentraut wrote:
> On 5/10/17 09:12, Michael Paquier wrote:
> > Looking at 0001 and 0002...  So you are correctly blocking nextval()
> > when ALTER SEQUENCE holds a lock on the sequence object.  And
> > concurrent calls of nextval() don't conflict.  As far as I can see
> > this matches the implementation of 3.
> >
> > Here are some minor comments.
>
> Committed after working in your comments.  Thanks!

There's still weird behaviour, unfortunately.  If you do an ALTER
SEQUENCE changing minval/maxval w/ restart in a transaction, and abort,
you'll a) quite possibly not be able to use the sequence anymore, because
it may of bounds b) DDL still isn't transactional.

At the very least that'd need to be documented.

- Andres
On Thu, May 18, 2017 at 4:54 PM, Andres Freund <andres@anarazel.de> wrote:
> There's still weird behaviour, unfortunately.  If you do an ALTER
> SEQUENCE changing minval/maxval w/ restart in a transaction, and abort,
> you'll a) quite possibly not be able to use the sequence anymore,
> because it may of bounds b) DDL still isn't transactional.

Your emails would be a bit easier to understand if you included a few
more words.  I'm guessing "may of bounds" is supposed to say "may be out
of bounds"?

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On 2017-05-19 08:31:15 -0400, Robert Haas wrote:
> On Thu, May 18, 2017 at 4:54 PM, Andres Freund <andres@anarazel.de> wrote:
> > There's still weird behaviour, unfortunately.  If you do an ALTER
> > SEQUENCE changing minval/maxval w/ restart in a transaction, and
> > abort, you'll a) quite possibly not be able to use the sequence
> > anymore, because it may of bounds b) DDL still isn't transactional.
>
> Your emails would be a bit easier to understand if you included a few
> more words.

Yea - I'd explained this one already somewhere upthread, and I'd hoped
it'd be enough, but I probably was wrong.

> I'm guessing "may of bounds" is supposed to say "may be out of bounds"?

Yes.  Consider a scenario like:

S1: CREATE SEQUENCE oobounds MINVALUE 1 START 1;
S1: SELECT nextval('oobounds'); -> 1
S2: BEGIN;
S2: ALTER SEQUENCE oobounds MAXVALUE -10 START -10 MINVALUE -1000
    INCREMENT BY -1 RESTART;
S2: SELECT nextval('oobounds'); -> -10
S2: ROLLBACK;
S1: SELECT * FROM pg_sequence WHERE seqrelid = 'oobounds'::regclass;

┌──────────┬──────────┬──────────┬──────────────┬─────────────────────┬────────┬──────────┬──────────┐
│ seqrelid │ seqtypid │ seqstart │ seqincrement │       seqmax        │ seqmin │ seqcache │ seqcycle │
├──────────┼──────────┼──────────┼──────────────┼─────────────────────┼────────┼──────────┼──────────┤
│   203401 │       20 │        1 │            1 │ 9223372036854775807 │      1 │        1 │ f        │
└──────────┴──────────┴──────────┴──────────────┴─────────────────────┴────────┴──────────┴──────────┘

S1: SELECT nextval('oobounds'); -> -9

Ooops.

Two issues: Firstly, we get a value smaller than seqmin, obviously that's
not ok.  But even if we'd error out, it'd imo still not be ok, because we
have a command that behaves partially transactionally (keeping the
seqmax/min transactionally), partially not (keeping the current sequence
state at -9).

- Andres
On Mon, May 22, 2017 at 11:42 AM, Andres Freund <andres@anarazel.de> wrote:
> Ooops.
>
> Two issues: Firstly, we get a value smaller than seqmin, obviously
> that's not ok.  But even if we'd error out, it'd imo still not be ok,
> because we have a command that behaves partially transactionally
> (keeping the seqmax/min transactionally), partially not (keeping the
> current sequence state at -9).

I don't really agree that this is broken.  In 9.6, the update appears to
be fully non-transactional, which you could argue is more consistent, but
I don't think I'd agree.  In other cases where we can't perform an
operation transactionally, we call PreventTransactionChain(), but in 9.6,

ALTER SEQUENCE oobounds MAXVALUE -10 START -10 MINVALUE -1000 INCREMENT BY -1 RESTART

seems to be allowed in a transaction but a subsequent ROLLBACK has no
effect, which seems fairly awful.

I think it's important to keep in mind that nextval() more or less has to
be non-transactional.  From a certain point of view, that's why we have
sequences.  If nextval() didn't notice actions performed by uncommitted
transactions, then sequences wouldn't deliver unique values; if it
blocked waiting to see whether the other transaction committed and didn't
return a value until it either committed or aborted, then sequences would
have terrible performance.  However, just because nextval() has to be
non-transactional doesn't mean that ALL sequence operations have to be
non-transactional.

I don't really know whether the behavior Peter has chosen here is the
best possible choice, so I'm not intending to mount any sort of vigorous
defense of it.  However, this thread started with the complaint about
occasional "ERROR: tuple concurrently updated" messages; in my opinion,
any such behavior in new code is unacceptable, and now it's been fixed.
"Totally broken" != "not the behavior I prefer".

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
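[As a minimal sketch of that non-transactionality (values illustrative):
a value handed out by nextval() is not given back on ROLLBACK, which is
precisely what lets concurrent sessions obtain unique values cheaply:

    CREATE SEQUENCE s;
    BEGIN;
    SELECT nextval('s');   -- returns 1
    ROLLBACK;
    SELECT nextval('s');   -- returns 2; the rolled-back call's value is not reused
]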
On 2017-05-23 22:47:07 -0400, Robert Haas wrote:
> On Mon, May 22, 2017 at 11:42 AM, Andres Freund <andres@anarazel.de> wrote:
> > Ooops.
> >
> > Two issues: Firstly, we get a value smaller than seqmin, obviously
> > that's not ok.  But even if we'd error out, it'd imo still not be ok,
> > because we have a command that behaves partially transactionally
> > (keeping the seqmax/min transactionally), partially not (keeping the
> > current sequence state at -9).
>
> I don't really agree that this is broken.

Just a quick clarification question: You did notice that nextval() in S1
after the rollback returned -9, despite seqmin being 1?  I can see
erroring out being acceptable, but returning flat out wrong values....?

- Andres
On Tue, May 23, 2017 at 11:25 PM, Andres Freund <andres@anarazel.de> wrote:
> On 2017-05-23 22:47:07 -0400, Robert Haas wrote:
>> On Mon, May 22, 2017 at 11:42 AM, Andres Freund <andres@anarazel.de> wrote:
>>> Ooops.
>>>
>>> Two issues: Firstly, we get a value smaller than seqmin, obviously
>>> that's not ok.  But even if we'd error out, it'd imo still not be ok,
>>> because we have a command that behaves partially transactionally
>>> (keeping the seqmax/min transactionally), partially not (keeping the
>>> current sequence state at -9).
>>
>> I don't really agree that this is broken.
>
> Just a quick clarification question: You did notice that nextval() in S1
> after the rollback returned -9, despite seqmin being 1?  I can see
> erroring out being acceptable, but returning flat out wrong values....?

I did see that.  I'm unclear what you think it should do instead.  I
mean, it could notice that cur_val < min_val and return min_val, but that
doesn't have a very good chance of being correct either.  I suppose the
aborted transaction could try to restore the old cur_val when it rolls
back, but that's clearly the wrong thing when there's no DDL involved
(plus I'm not sure it would even be safe to try to do that).  Do you have
an idea?

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On May 24, 2017 6:57:08 AM EDT, Robert Haas <robertmhaas@gmail.com> wrote:
> On Tue, May 23, 2017 at 11:25 PM, Andres Freund <andres@anarazel.de> wrote:
>> Just a quick clarification question: You did notice that nextval() in
>> S1 after the rollback returned -9, despite seqmin being 1?  I can see
>> erroring out being acceptable, but returning flat out wrong
>> values....?
>
> I did see that.  I'm unclear what you think it should do instead.  I
> mean, it could notice that cur_val < min_val and return min_val, but
> that doesn't have a very good chance of being correct either.  I
> suppose the aborted transaction could try to restore the old cur_val
> when it rolls back, but that's clearly the wrong thing when there's no
> DDL involved (plus I'm not sure it would even be safe to try to do
> that).  Do you have an idea?

At the very least we'll have to error out.  That's still not nice
usability wise, but it sure beats returning flat out wrong values.

I suspect that the proper fix would be to use a different relfilenode
after DDL, when changing the seq file itself (i.e. setval and restart).
That seems like it'd be architecturally more appropriate, but also some
work.

Andres
--
Sent from my Android device with K-9 Mail. Please excuse my brevity.
On Wed, May 24, 2017 at 9:04 AM, Andres Freund <andres@anarazel.de> wrote:
> At the very least we'll have to error out.  That's still not nice
> usability wise, but it sure beats returning flat out wrong values.

I'm not sure.  That seems like it might often be worse.  Now you need
manual intervention before anything even has a hope of working.

> I suspect that the proper fix would be to use a different relfilenode
> after DDL, when changing the seq file itself (i.e. setval and restart).
> That seems like it'd be architecturally more appropriate, but also some
> work.

I can see some advantages to that, but it seems far too late to think
about doing that in v10.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On 2017-05-24 10:24:19 -0400, Robert Haas wrote:
> On Wed, May 24, 2017 at 9:04 AM, Andres Freund <andres@anarazel.de> wrote:
> > At the very least we'll have to error out.  That's still not nice
> > usability wise, but it sure beats returning flat out wrong values.
>
> I'm not sure.  That seems like it might often be worse.  Now you need
> manual intervention before anything even has a hope of working.

Well, but then we should just remove minval/maxval if we can't rely on
it.

> > I suspect that the proper fix would be to use a different relfilenode
> > after DDL, when changing the seq file itself (i.e. setval and
> > restart).  That seems like it'd be architecturally more appropriate,
> > but also some work.
>
> I can see some advantages to that, but it seems far too late to think
> about doing that in v10.

I wonder if that's not actually very little new code, and I think we
might end up regretting having yet another inconsistent set of semantics
in v10, which we'll then end up changing again in v11.

- Andres
On Wed, May 24, 2017 at 10:32 AM, Andres Freund <andres@anarazel.de> wrote:
> Well, but then we should just remove minval/maxval if we can't rely on
> it.

That seems like a drastic overreaction to me.

> I wonder if that's not actually very little new code, and I think we
> might end up regretting having yet another inconsistent set of semantics
> in v10, which we'll then end up changing again in v11.

I'm not exercised enough about it to spend time on it or to demand that
Peter do so, but feel free to propose something.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On 2017-05-24 10:52:37 -0400, Robert Haas wrote:
> On Wed, May 24, 2017 at 10:32 AM, Andres Freund <andres@anarazel.de> wrote:
> > Well, but then we should just remove minval/maxval if we can't rely on
> > it.
>
> That seems like a drastic overreaction to me.

Well, either they work, or they don't.  But since it turns out to be easy
enough to fix anyway...

> > I wonder if that's not actually very little new code, and I think we
> > might end up regretting having yet another inconsistent set of
> > semantics in v10, which we'll then end up changing again in v11.
>
> I'm not exercised enough about it to spend time on it or to demand that
> Peter do so, but feel free to propose something.

This turns out to be a fairly simple patch.  We already do precisely what
I describe when resetting a sequence for TRUNCATE, so it's an already
tested codepath.  It also follows a lot more established practice around
transactional schema changes, so I think that's architecturally better
too.  Peter, to my understanding, agreed with that reasoning at pgcon.

I just have two questions:

1) We've introduced, in 3d092fe540, infrastructure to avoid unnecessary
   catalog updates, in an attempt to fix the "concurrently updated"
   error.  But that turned out to not be sufficient anyway, and it bulks
   up the code.  I'd vote for just removing that piece of logic; it
   doesn't buy us anything meaningful, and it's bulky.

2) There's currently logic that takes a lower-level lock for ALTER
   SEQUENCE RESTART without other options.  I think that's a bit
   confusing with the proposed change, because then it's unclear when
   ALTER SEQUENCE is transactional and when not.  I think it's actually a
   lot easier to understand if we keep nextval()/setval() as
   non-transactional, and ALTER SEQUENCE as transactional.

Comments?

- Andres
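[To make the relfilenode approach concrete, a hypothetical transcript
(relfilenode values illustrative): a transactional ALTER SEQUENCE would
rewrite the sequence data into a new file, so an abort simply leaves the
old file, and thus the old sequence state, in place:

    SELECT relfilenode FROM pg_class WHERE oid = 'seq1'::regclass;  -- say, 16390
    BEGIN;
    ALTER SEQUENCE seq1 RESTART;  -- would write the sequence into a new relfilenode
    ROLLBACK;                     -- catalog rolls back; old relfilenode stays in use
    SELECT relfilenode FROM pg_class WHERE oid = 'seq1'::regclass;  -- still 16390
]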
On 2017-05-31 14:24:38 -0700, Andres Freund wrote:
> On 2017-05-24 10:52:37 -0400, Robert Haas wrote:
> > I'm not exercised enough about it to spend time on it or to demand
> > that Peter do so, but feel free to propose something.
>
> This turns out to be a fairly simple patch.  We already do precisely
> what I describe when resetting a sequence for TRUNCATE, so it's an
> already tested codepath.  It also follows a lot more established
> practice around transactional schema changes, so I think that's
> architecturally better too.  Peter, to my understanding, agreed with
> that reasoning at pgcon.
>
> I just have two questions:
>
> 1) We've introduced, in 3d092fe540, infrastructure to avoid unnecessary
>    catalog updates, in an attempt to fix the "concurrently updated"
>    error.  But that turned out to not be sufficient anyway, and it
>    bulks up the code.  I'd vote for just removing that piece of logic;
>    it doesn't buy us anything meaningful, and it's bulky.
>
> 2) There's currently logic that takes a lower-level lock for ALTER
>    SEQUENCE RESTART without other options.  I think that's a bit
>    confusing with the proposed change, because then it's unclear when
>    ALTER SEQUENCE is transactional and when not.  I think it's actually
>    a lot easier to understand if we keep nextval()/setval() as
>    non-transactional, and ALTER SEQUENCE as transactional.
>
> Comments?

Here's a patch doing what I suggested above.  The second patch addresses
an independent oversight where the post alter hook was invoked before
doing the catalog update.

- Andres
Attachment
On Wed, May 31, 2017 at 8:07 PM, Andres Freund <andres@anarazel.de> wrote:
> Here's a patch doing what I suggested above.  The second patch addresses
> an independent oversight where the post alter hook was invoked before
> doing the catalog update.

0002 is a slam-dunk.  I don't object to 0001 but haven't reviewed it
carefully.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On 01/06/17 16:51, Robert Haas wrote:
> On Wed, May 31, 2017 at 8:07 PM, Andres Freund <andres@anarazel.de> wrote:
>> Here's a patch doing what I suggested above.  The second patch
>> addresses an independent oversight where the post alter hook was
>> invoked before doing the catalog update.
>
> 0002 is a slam-dunk.  I don't object to 0001 but haven't reviewed it
> carefully.

I did go over the code (i.e., didn't do actual testing), and I like it
much better than the current state, both in terms of making the behavior
more consistent and in that the code is simpler.

So +1 to go ahead with both patches.

--
Petr Jelinek  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
On 2017-06-01 19:08:33 +0200, Petr Jelinek wrote:
> On 01/06/17 16:51, Robert Haas wrote:
> > 0002 is a slam-dunk.  I don't object to 0001 but haven't reviewed it
> > carefully.
>
> I did go over the code (i.e., didn't do actual testing), and I like it
> much better than the current state, both in terms of making the behavior
> more consistent and in that the code is simpler.
>
> So +1 to go ahead with both patches.

Thanks for the look!  I unfortunately forgot to credit you as a reviewer,
sorry for that.

Pushed both.

- Andres