Thread: locks in CREATE TRIGGER, ADD FK

locks in CREATE TRIGGER, ADD FK

From
Neil Conway
Date:
AndrewSN pointed out on IRC that ALTER TABLE ... ADD FOREIGN KEY and 
CREATE TRIGGER both acquire AccessExclusiveLocks on the table they are 
adding triggers to (the PK table, in the case of ALTER TABLE). Is this 
necessary? I don't see why we can't allow SELECT queries on the table to 
proceed while the trigger is being added.
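
For illustration (table names here are hypothetical, and pk_tab(id) is assumed
to be the primary key), simply leaving the ADD FOREIGN KEY uncommitted in one
session is enough to block even a plain SELECT on the referenced table:

    -- session 1
    BEGIN;
    ALTER TABLE fk_tab ADD FOREIGN KEY (pk_id) REFERENCES pk_tab (id);
    -- AccessExclusiveLock is now held on pk_tab until commit/abort

    -- session 2
    SELECT count(*) FROM pk_tab;   -- blocks until session 1 finishes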

-Neil


Re: locks in CREATE TRIGGER, ADD FK

From
Neil Conway
Date:
Neil Conway wrote:
> AndrewSN pointed out on IRC that ALTER TABLE ... ADD FOREIGN KEY and 
> CREATE TRIGGER both acquire AccessExclusiveLocks on the table they are 
> adding triggers to (the PK table, in the case of ALTER TABLE). Is this 
> necessary? I don't see why we can't allow SELECT queries on the table to 
> proceed while the trigger is being added.

Sorry, I forgot to mention: I think RowExclusiveLock or ExclusiveLock 
would be sufficient instead.

-Neil


Re: locks in CREATE TRIGGER, ADD FK

From
Neil Conway
Date:
Neil Conway wrote:
> AndrewSN pointed out on IRC that ALTER TABLE ... ADD FOREIGN KEY and
> CREATE TRIGGER both acquire AccessExclusiveLocks on the table they are
> adding triggers to (the PK table, in the case of ALTER TABLE). Is this
> necessary? I don't see why we can't allow SELECT queries on the table to
> proceed while the trigger is being added.

Attached is a patch that changes both to use ShareRowExclusiveLock, and
updates the documentation accordingly. I'll apply this later today,
barring any objections.
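
For reference, ShareRowExclusiveLock still blocks writers and other
trigger/FK additions, but lets readers through. A rough sketch of the
behavior, using a hypothetical table t and an explicit LOCK TABLE to take
the same lock mode the patched commands use:

    BEGIN;
    LOCK TABLE t IN SHARE ROW EXCLUSIVE MODE;
    -- other sessions meanwhile:
    --   SELECT * FROM t;                          -- proceeds (ACCESS SHARE)
    --   SELECT * FROM t FOR UPDATE;               -- proceeds (ROW SHARE)
    --   INSERT INTO t DEFAULT VALUES;             -- blocks (ROW EXCLUSIVE)
    --   LOCK TABLE t IN SHARE ROW EXCLUSIVE MODE; -- blocks (self-conflicting)
    COMMIT;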

-Neil
Index: doc/src/sgml/mvcc.sgml
===================================================================
RCS file: /var/lib/cvs/pgsql/doc/src/sgml/mvcc.sgml,v
retrieving revision 2.47
diff -c -r2.47 mvcc.sgml
*** doc/src/sgml/mvcc.sgml    26 Feb 2005 18:37:17 -0000    2.47
--- doc/src/sgml/mvcc.sgml    23 Mar 2005 00:46:36 -0000
***************
*** 654,661 ****
      </para>

      <para>
!          This lock mode is not automatically acquired by any
!          <productname>PostgreSQL</productname> command.
      </para>
         </listitem>
        </varlistentry>
--- 654,661 ----
      </para>

      <para>
!      Acquired by <command>CREATE TRIGGER</command> and <command>ALTER
!      TABLE ADD FOREIGN KEY</command>.
      </para>
         </listitem>
        </varlistentry>
Index: src/backend/commands/tablecmds.c
===================================================================
RCS file: /var/lib/cvs/pgsql/src/backend/commands/tablecmds.c,v
retrieving revision 1.148
diff -c -r1.148 tablecmds.c
*** src/backend/commands/tablecmds.c    20 Mar 2005 22:00:52 -0000    1.148
--- src/backend/commands/tablecmds.c    23 Mar 2005 00:49:08 -0000
***************
*** 3829,3841 ****
      Oid            constrOid;

      /*
!      * Grab an exclusive lock on the pk table, so that someone doesn't
!      * delete rows out from under us. (Although a lesser lock would do for
!      * that purpose, we'll need exclusive lock anyway to add triggers to
!      * the pk table; trying to start with a lesser lock will just create a
!      * risk of deadlock.)
       */
!     pkrel = heap_openrv(fkconstraint->pktable, AccessExclusiveLock);

      /*
       * Validity and permissions checks
--- 3829,3839 ----
      Oid            constrOid;

      /*
!      * Grab a lock on the pk table, so that someone doesn't delete
!      * rows out from under us; ShareRowExclusive should be good
!      * enough.
       */
!     pkrel = heap_openrv(fkconstraint->pktable, ShareRowExclusiveLock);

      /*
       * Validity and permissions checks
Index: src/backend/commands/trigger.c
===================================================================
RCS file: /var/lib/cvs/pgsql/src/backend/commands/trigger.c,v
retrieving revision 1.178
diff -c -r1.178 trigger.c
*** src/backend/commands/trigger.c    20 Mar 2005 23:40:24 -0000    1.178
--- src/backend/commands/trigger.c    23 Mar 2005 00:49:53 -0000
***************
*** 87,93 ****
      ObjectAddress myself,
                  referenced;

!     rel = heap_openrv(stmt->relation, AccessExclusiveLock);

      if (stmt->constrrel != NULL)
          constrrelid = RangeVarGetRelid(stmt->constrrel, false);
--- 87,98 ----
      ObjectAddress myself,
                  referenced;

!     /*
!      * We need to prevent concurrent CREATE TRIGGER commands, as well
!      * as concurrent table modifications (INSERT, DELETE, UPDATE), so
!      * ShareRowExclusive is sufficient.
!      */
!     rel = heap_openrv(stmt->relation, ShareRowExclusiveLock);

      if (stmt->constrrel != NULL)
          constrrelid = RangeVarGetRelid(stmt->constrrel, false);

Re: locks in CREATE TRIGGER, ADD FK

From
Christopher Kings-Lynne
Date:
If you want to be my friend forever, then fix CLUSTER so that it uses 
sharerowexclusive as well :D

Chris

Neil Conway wrote:
> Neil Conway wrote:
> 
>> AndrewSN pointed out on IRC that ALTER TABLE ... ADD FOREIGN KEY and 
>> CREATE TRIGGER both acquire AccessExclusiveLocks on the table they are 
>> adding triggers to (the PK table, in the case of ALTER TABLE). Is this 
>> necessary? I don't see why we can't allow SELECT queries on the table 
>> to proceed while the trigger is being added.
> 
> 
> Attached is a patch that changes both to use ShareRowExclusiveLock, and 
> updates the documentation accordingly. I'll apply this later today, 
> barring any objections.


Re: locks in CREATE TRIGGER, ADD FK

From
Russell Smith
Date:
On Wed, 23 Mar 2005 12:40 pm, Christopher Kings-Lynne wrote:
> If you want to be my friend forever, then fix CLUSTER so that it uses 
> sharerowexclusive as well :D
> 
I don't think it's as easy as that, because you have to move tuples
around in the cluster operation.  Same sort of issue as vacuum full I would suggest.

Russell Smith

> Chris
> 
> Neil Conway wrote:
> > Neil Conway wrote:
> > 
> >> AndrewSN pointed out on IRC that ALTER TABLE ... ADD FOREIGN KEY and 
> >> CREATE TRIGGER both acquire AccessExclusiveLocks on the table they are 
> >> adding triggers to (the PK table, in the case of ALTER TABLE). Is this 
> >> necessary? I don't see why we can't allow SELECT queries on the table 
> >> to proceed while the trigger is being added.
> > 
> > 
> > Attached is a patch that changes both to use ShareRowExclusiveLock, and 
> > updates the documentation accordingly. I'll apply this later today, 
> > barring any objections.
> 
> ---------------------------(end of broadcast)---------------------------
> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org
> 
> 


Re: locks in CREATE TRIGGER, ADD FK

From
Christopher Kings-Lynne
Date:
>>If you want to be my friend forever, then fix CLUSTER so that it uses 
>>sharerowexclusive as well :D
> 
> I don't think it's as easy as that, because you have to move tuples
> around in the cluster operation.  Same sort of issue as vacuum full I would suggest.

Cluster doesn't move rows...

I didn't say it was easy.  It would involve changing how cluster works.
It would keep the old table around while building the new, then grab
an exclusive lock to swap the two.

Chris


Re: locks in CREATE TRIGGER, ADD FK

From
Alvaro Herrera
Date:
On Wed, Mar 23, 2005 at 10:42:01AM +0800, Christopher Kings-Lynne wrote:
> >>If you want to be my friend forever, then fix CLUSTER so that it uses 
> >>sharerowexclusive as well :D
> >
> >I don't think it's as easy as that, because you have to move tuples
> >around in the cluster operation.  Same sort of issue as vacuum full I 
> >would suggest.
> 
> Cluster doesn't move rows...
> 
> I didn't say it was easy.  It would involve changing how cluster works. 
>  It would keep the old table around while building the new, then grab 
> an exclusive lock to swap the two.

Huh, cluster already does that.

I don't remember what the rationale was for locking the table, leaving
even simple SELECTs out.  (In fact, IIRC the decision wasn't made by me,
and it wasn't discussed at all.)

-- 
Alvaro Herrera (<alvherre[@]dcc.uchile.cl>)
"I would rather have GNU than GNOT."  (ccchips, lwn.net/Articles/37595/)


Re: locks in CREATE TRIGGER, ADD FK

From
Christopher Kings-Lynne
Date:
> Huh, cluster already does that.

It does and it doesn't.  IIRC, about the first thing it does is muck
with the old table's relfilenode, meaning that the old table immediately
stops working.

Chris


Re: locks in CREATE TRIGGER, ADD FK

From
Neil Conway
Date:
Christopher Kings-Lynne wrote:
> If you want to be my friend forever, then fix CLUSTER so that it uses 
> sharerowexclusive as well :D

Hmm, this might be possible as well. During a CLUSTER, we currently

- lock the heap relation with AccessExclusiveLock
- lock the index we're clustering on with AccessExclusiveLock
- create a temporary heap relation
- fill with data from the old heap relation, via an index scan
- swap the relfilenodes of the old and temporary heap relations
- rebuild indexes

We certainly can't allow concurrent modifications to either the table or 
the clustered index while this is happening. Allowing index scans 
*should* be safe -- an index scan could result in modifications to the 
index (e.g. updating "tuple is killed" bits), but those shouldn't be 
essential. We might also want to disallow SELECT FOR UPDATE, since we 
would end up invoking heap_mark4update() on the old heap relation. Not 
sure offhand how serious that would be.

So I think it should be possible to lock both the heap relation and the 
index with ExclusiveLock, which would allow SELECTs on them. This would 
apply to both the single relation and multiple relation variants of 
CLUSTER (since we do each individual clustering in its own transaction).

... except that when we rebuild the relation's indexes, we acquire an 
AccessExclusiveLock on the index. This would introduce the risk of 
deadlock. It seems necessary to acquire an AccessExclusiveLock when 
rebuilding shared indexes, since we do the index build in-place, but I 
think we can get by with an ExclusiveLock in the non-shared case, for 
similar reasons as above: we build the new index and then swap relfilenodes.
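
To spell out what ExclusiveLock would permit, here is a rough sketch with a
hypothetical table t (it's the lock mode, not the command, that matters):

    BEGIN;
    LOCK TABLE t IN EXCLUSIVE MODE;
    -- other sessions meanwhile:
    --   SELECT * FROM t;                 -- proceeds (ACCESS SHARE)
    --   SELECT * FROM t FOR UPDATE;      -- blocks (ROW SHARE)
    --   UPDATE t SET ...;                -- blocks (ROW EXCLUSIVE)
    COMMIT;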

-Neil


Re: locks in CREATE TRIGGER, ADD FK

From
Bruce Momjian
Date:
Neil Conway wrote:
> So I think it should be possible to lock both the heap relation and the 
> index with ExclusiveLock, which would allow SELECTs on them. This would 
> apply to both the single relation and multiple relation variants of 
> CLUSTER (since we do each individual clustering in its own transaction).
> 
> ... except that when we rebuild the relation's indexes, we acquire an 
> AccessExclusiveLock on the index. This would introduce the risk of 
> deadlock. It seems necessary to acquire an AccessExclusiveLock when 
> rebuilding shared indexes, since we do the index build in-place, but I 
> think we can get by with an ExclusiveLock in the non-shared case, for 
> similar reasons as above: we build the new index and then swap relfilenodes.

Certainly we need to upgrade to an exclusive table lock to replace the
heap table.  Do we want to get a shared lock and possibly starve waiting
for an exclusive lock on the table to swap the new one in?  Do we do
such escalation anywhere else?

-- 
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 359-1001
  +  If your life is a hard drive,     |  13 Roberts Road
  +  Christ can be your backup.        |  Newtown Square, Pennsylvania 19073


Re: locks in CREATE TRIGGER, ADD FK

From
Neil Conway
Date:
Bruce Momjian wrote:
> Certainly we need to upgrade to an exclusive table lock to replace the
> heap table.

Well, we will be holding an ExclusiveLock on the heap relation 
regardless. We "replace" the heap table by swapping its relfilenode, so 
ISTM we needn't hold an AccessExclusiveLock.

> Do we want to get a shared lock and possibly starve waiting
> for an exclusive lock on the table to swap the new one in?

What I'm saying is that REINDEX on non-shared indexes need only acquire 
an ExclusiveLock, and hence not need to escalate its lock.

-Neil


Re: locks in CREATE TRIGGER, ADD FK

From
Tom Lane
Date:
Neil Conway <neilc@samurai.com> writes:
>> AndrewSN pointed out on IRC that ALTER TABLE ... ADD FOREIGN KEY and 
>> CREATE TRIGGER both acquire AccessExclusiveLocks on the table they are 
>> adding triggers to (the PK table, in the case of ALTER TABLE). Is this 
>> necessary? I don't see why we can't allow SELECT queries on the table to 
>> proceed while the trigger is being added.

> Attached is a patch that changes both to use ShareRowExclusiveLock, and 
> updates the documentation accordingly. I'll apply this later today, 
> barring any objections.

I don't think this has been adequately thought through at all ... but
at least make it ExclusiveLock.  What is the use-case for allowing
SELECT FOR UPDATE in parallel with this?  One may suppose that someone
doing SELECT FOR UPDATE intends an UPDATE.  (No, don't tell me about
foreign keys.  Alvaro is going to fix that.)

As Chris suggests nearby, this is really only the tip of the iceberg.
I would prefer to see someone do a survey of all our DDL commands and
put forward a coherent proposal for minimum required locks for all of
them.
        regards, tom lane


Re: locks in CREATE TRIGGER, ADD FK

From
Tom Lane
Date:
Neil Conway <neilc@samurai.com> writes: 
>       /*
> !      * Grab an exclusive lock on the pk table, so that someone doesn't
> !      * delete rows out from under us. (Although a lesser lock would do for
> !      * that purpose, we'll need exclusive lock anyway to add triggers to
> !      * the pk table; trying to start with a lesser lock will just create a
> !      * risk of deadlock.)
>        */
> !     pkrel = heap_openrv(fkconstraint->pktable, AccessExclusiveLock); 
>       /*
>        * Validity and permissions checks
> --- 3829,3839 ----
>       Oid            constrOid; 
>       /*
> !      * Grab a lock on the pk table, so that someone doesn't delete
> !      * rows out from under us; ShareRowExclusive should be good
> !      * enough.
>        */


BTW, the above comment change is seriously inadequate, because it
removes the explanation of *why* that is the minimum required lock.
        regards, tom lane


Re: locks in CREATE TRIGGER, ADD FK

From
Tom Lane
Date:
Christopher Kings-Lynne <chriskl@familyhealth.com.au> writes:
>   It would keep the old table around while building the new, then grab 
> an exclusive lock to swap the two.

Lock upgrading is right out.
        regards, tom lane


Re: locks in CREATE TRIGGER, ADD FK

From
Neil Conway
Date:
Neil Conway wrote:
> ... except that when we rebuild the relation's indexes, we acquire an 
> AccessExclusiveLock on the index. This would introduce the risk of 
> deadlock. It seems necessary to acquire an AccessExclusiveLock when 
> rebuilding shared indexes, since we do the index build in-place, but I 
> think we can get by with an ExclusiveLock in the non-shared case, for 
> similar reasons as above: we build the new index and then swap 
> relfilenodes.

From looking at the code, it should be quite possible to do this.

Further points from discussion on IRC:

- TRUNCATE suffers from the same behavior (it acquires an 
AccessExclusiveLock where really an ExclusiveLock or similar should be 
good enough)

- if we make these changes, we will need some way to delete a 
no-longer-visible relfilenode. It should be sufficient to delete a 
relfilenode when the expired pg_class row that refers to it is no longer 
visible to any transactions -- but this isn't necessarily going to be 
true when the transaction that executed the REINDEX/CLUSTER/TRUNCATE 
commits. We could perform this check in some kind of periodic process, 
perhaps -- like the bgwriter, at checkpoint time.

-Neil


Re: locks in CREATE TRIGGER, ADD FK

From
Tom Lane
Date:
Neil Conway <neilc@samurai.com> writes:
> Well, we will be holding an ExclusiveLock on the heap relation 
> regardless. We "replace" the heap table by swapping its relfilenode, so 
> ISTM we needn't hold an AccessExclusiveLock.

Utterly wrong.  When you commit you will physically drop the old table.
If there is a SELECT running against the old table it will be quite
unhappy after that.
        regards, tom lane


Re: locks in CREATE TRIGGER, ADD FK

From
Neil Conway
Date:
Tom Lane wrote:
> Utterly wrong.  When you commit you will physically drop the old table.
> If there is a SELECT running against the old table it will be quite
> unhappy after that.

How can we drop the file at commit, given that a serializable 
transaction's snapshot should still be able to see old relfilenode's 
content?

(If the serializable transaction has already acquired a read lock before 
the TRUNCATE begins, it will block the TRUNCATE -- but there is no 
guarantee that the operations will be ordered like that.)

-Neil


swapping relfilenodes (was: Re: locks in CREATE TRIGGER, ADD FK)

From
Andrew - Supernews
Date:
On 2005-03-23, Neil Conway <neilc@samurai.com> wrote:
> - swap the relfilenodes of the old and temporary heap relations

While discussing this one further on IRC, I noticed the following:

Everywhere I could find that currently replaces the relfilenode of a
relation does so while holding an AccessExclusive lock, and assumes that
this is sufficient to ensure that the old relfilenode can be killed when
the transaction commits. This is not correct.

Example:
 - backend A begins a serializable transaction
 - backend B truncates a table (and commits)
 - backend A, still in the same transaction, accesses the truncated table

Currently backend A sees the truncated table as empty, which is obviously
not right. This is obviously related to any attempt to weaken the locking
on other operations that modify relfilenodes, because doing it right implies
a mechanism to defer the removals past the commit of the modifying
transaction and up to the point where the old data can no longer be seen by
a live transaction.
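
Spelled out as a concrete session sketch (table names hypothetical; backend A
has to run at least one query first so that its serializable snapshot is
actually taken, and must not touch t yet or the TRUNCATE would block):

    -- backend A
    BEGIN;
    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
    SELECT count(*) FROM other_tab;   -- snapshot taken; t not touched yet

    -- backend B
    TRUNCATE t;                       -- not blocked, commits immediately

    -- backend A, same transaction
    SELECT count(*) FROM t;           -- returns 0: sees the new, empty
                                      -- relfilenode its snapshot predates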

-- 
Andrew, Supernews
http://www.supernews.com - individual and corporate NNTP services


Re: locks in CREATE TRIGGER, ADD FK

From
Bruce Momjian
Date:
Neil Conway wrote:
> Tom Lane wrote:
> > Utterly wrong.  When you commit you will physically drop the old table.
> > If there is a SELECT running against the old table it will be quite
> > unhappy after that.
> 
> How can we drop the file at commit, given that a serializable 
> transaction's snapshot should still be able to see old relfilenode's 
> content?

Vacuum will not remove any old rows because of the transaction xid, so
why does it care if the table is clustered/reindexed?  It doesn't have
the table open yet.

-- 
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 359-1001
  +  If your life is a hard drive,     |  13 Roberts Road
  +  Christ can be your backup.        |  Newtown Square, Pennsylvania 19073


Re: swapping relfilenodes (was: Re: locks in CREATE TRIGGER,

From
Bruce Momjian
Date:
Andrew - Supernews wrote:
> On 2005-03-23, Neil Conway <neilc@samurai.com> wrote:
> > - swap the relfilenodes of the old and temporary heap relations
> 
> While discussing this one further on IRC, I noticed the following:
> 
> Everywhere I could find that currently replaces the relfilenode of a
> relation does so while holding an AccessExclusive lock, and assumes that
> this is sufficient to ensure that the old relfilenode can be killed when
> the transaction commits. This is not correct.
> 
> Example:
> 
>   - backend A begins a serializable transaction
>   - backend B truncates a table (and commits)
>   - backend A, still in the same transaction, accesses the truncated table
> 
> Currently backend A sees the truncated table as empty, which is obviously
> not right. This is obviously related to any attempt to weaken the locking
> on other operations that modify relfilenodes, because doing it right implies
> a mechanism to defer the removals past the commit of the modifying
> transaction and up to the point where the old data can no longer be seen by
> a live transaction.

This is a good point.  While DELETE keeps the old rows around and VACUUM
preserves them until the serializable transaction commits, TRUNCATE does
not keep the old rows around.

In fact, would a truncate during a backup cause the backup to be
inconsistent because it wouldn't be a true snapshot of the database at
backup start time?  Seems so.

The docs mention:
      TRUNCATE cannot be used if there are foreign-key references
      to the table from other tables.  Checking validity in such
      cases would require table scans, and the whole point is not
      to do one.

so it doesn't make the referential integrity inconsistent.

Perhaps we should document this.

-- 
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 359-1001
  +  If your life is a hard drive,     |  13 Roberts Road
  +  Christ can be your backup.        |  Newtown Square, Pennsylvania 19073


Re: locks in CREATE TRIGGER, ADD FK

From
Tom Lane
Date:
Neil Conway <neilc@samurai.com> writes:
> How can we drop the file at commit, given that a serializable 
> transaction's snapshot should still be able to see old relfilenode's 
> content?

It isn't 100% MVCC, I agree.  But it works because system catalog
lookups are SnapshotNow, and so when another session comes and wants to
look at the table it will see the committed new version of the pg_class
row pointing at the new relfilenode file.  What you have to prevent is
somebody accessing the table *while* the changeover happens ... and
that's why your lock has to be AccessExclusive.

If you want to complain about MVCC violations in CLUSTER, think about
the fact that it scans the table with SnapshotNow, and therefore loses
rows that are committed-dead but might still be visible to somebody.
        regards, tom lane


Re: swapping relfilenodes (was: Re: locks in CREATE TRIGGER,

From
Tom Lane
Date:
Bruce Momjian <pgman@candle.pha.pa.us> writes:
> In fact, would a truncate during a backup cause the backup to be
> inconsistent because it wouldn't be a true snapshot of the database at
> backup start time?  Seems so.

No, because pg_dump holds AccessShareLock on every table that it intends
to dump, thereby ensuring that TRUNCATE/CLUSTER/etc are held off.  The
proposal to weaken the locks that those operations take would in fact
break pg_dump.
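
The effect is easy to see by hand (table name hypothetical; a plain SELECT
takes the same AccessShareLock that pg_dump's LOCK TABLE does, and holds it
to end of transaction):

    -- session 1, standing in for pg_dump
    BEGIN;
    SELECT count(*) FROM t;   -- AccessShareLock on t, held until commit

    -- session 2
    TRUNCATE t;               -- needs AccessExclusiveLock, so it waits
                              -- until session 1 commits
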
        regards, tom lane


Re: swapping relfilenodes (was: Re: locks in CREATE TRIGGER,

From
Bruce Momjian
Date:
Tom Lane wrote:
> Bruce Momjian <pgman@candle.pha.pa.us> writes:
> > In fact, would a truncate during a backup cause the backup to be
> > inconsistent because it wouldn't be a true snapshot of the database at
> > backup start time?  Seems so.
> 
> No, because pg_dump holds AccessShareLock on every table that it intends
> to dump, thereby ensuring that TRUNCATE/CLUSTER/etc are held off.  The
> proposal to weaken the locks that those operations take would in fact
> break pg_dump.

Oh, it pre-locks.  I didn't know that.

-- 
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 359-1001
  +  If your life is a hard drive,     |  13 Roberts Road
  +  Christ can be your backup.        |  Newtown Square, Pennsylvania 19073


Re: locks in CREATE TRIGGER, ADD FK

From
Tom Lane
Date:
Neil Conway <neilc@samurai.com> writes:
> - if we make these changes, we will need some way to delete a 
> no-longer-visible relfilenode.

This is presuming that we abandon the notion that system catalog
accesses use SnapshotNow.  Which opens the question of what they should
use instead ... to which "transaction snapshot" isn't the answer,
because we have to be able to do system catalog accesses before
we've set the snapshot.  (Else forget issuing LOCK TABLE before
the snapshot is set.)

I really think that you haven't the faintest idea of the size of
the can of worms you are opening here :-(
        regards, tom lane


Re: locks in CREATE TRIGGER, ADD FK

From
Neil Conway
Date:
Tom Lane wrote:
> It isn't 100% MVCC, I agree.  But it works because system catalog
> lookups are SnapshotNow, and so when another session comes and wants to
> look at the table it will see the committed new version of the pg_class
> row pointing at the new relfilenode file.

If by "works", you mean "provides correct transactional semantics", then 
that simply isn't true. Not making CLUSTER and similar DDL commands MVCC 
compliant isn't the end of the world, I agree, but that doesn't make it 
correct, either.

> If you want to complain about MVCC violations in CLUSTER, think about
> the fact that it scans the table with SnapshotNow, and therefore loses
> rows that are committed-dead but might still be visible to somebody.

This seems like another facet of the same problem (a serializable 
transaction's snapshot effectively includes the relfilenodes that were 
visible when the snapshot was taken, and swapping in another relfilenode 
under its nose is asking for trouble).

We could fix the CLUSTER bug, although not the TRUNCATE bug, by scanning 
the old relation with SnapshotAny (or ideally, "the snapshot such that 
we can see all tuples visible to any currently running transaction", if 
we can produce such a snapshot easily). Not sure if that's worth doing; 
it would be nice to solve the root problem (scanning system catalogs 
with SnapshotNow, per discussion elsewhere in thread).

-Neil


Re: locks in CREATE TRIGGER, ADD FK

From
Neil Conway
Date:
Tom Lane wrote:
> This is presuming that we abandon the notion that system catalog
> access use SnapshotNow.  Which opens the question of what they should
> use instead ... to which "transaction snapshot" isn't the answer,
> because we have to be able to do system catalog accesses before
> we've set the snapshot.

I wonder if it would be possible to use SnapshotNow before the 
transaction's snapshot has been established, and the transaction's 
snapshot subsequently. Although it definitely makes me nervous to use 
multiple snapshots over the life of a single transaction...

-Neil


Re: locks in CREATE TRIGGER, ADD FK

From
Neil Conway
Date:
Tom Lane wrote:
> I don't think this has been adequately thought through at all ... but
> at least make it ExclusiveLock.  What is the use-case for allowing
> SELECT FOR UPDATE in parallel with this?

Ok, patch applied -- I adjusted it to use ExclusiveLock, and fleshed out
some of the comments.

-Neil
Index: doc/src/sgml/mvcc.sgml
===================================================================
RCS file: /var/lib/cvs/pgsql/doc/src/sgml/mvcc.sgml,v
retrieving revision 2.47
diff -c -r2.47 mvcc.sgml
*** doc/src/sgml/mvcc.sgml    26 Feb 2005 18:37:17 -0000    2.47
--- doc/src/sgml/mvcc.sgml    23 Mar 2005 07:33:36 -0000
***************
*** 677,685 ****
      </para>

      <para>
!          This lock mode is not automatically acquired on user tables by any
!          <productname>PostgreSQL</productname> command.  However it is
!          acquired on certain system catalogs in some operations.
      </para>
         </listitem>
        </varlistentry>
--- 677,686 ----
      </para>

      <para>
!          Acquired by <command>CREATE TRIGGER</command> and
!          <command>ALTER TABLE ADD FOREIGN KEY</command>. This lock
!          mode can also be acquired on certain system catalogs in some
!          operations.
      </para>
         </listitem>
        </varlistentry>
Index: src/backend/commands/tablecmds.c
===================================================================
RCS file: /var/lib/cvs/pgsql/src/backend/commands/tablecmds.c,v
retrieving revision 1.148
diff -c -r1.148 tablecmds.c
*** src/backend/commands/tablecmds.c    20 Mar 2005 22:00:52 -0000    1.148
--- src/backend/commands/tablecmds.c    23 Mar 2005 07:27:09 -0000
***************
*** 3829,3841 ****
      Oid            constrOid;

      /*
!      * Grab an exclusive lock on the pk table, so that someone doesn't
!      * delete rows out from under us. (Although a lesser lock would do for
!      * that purpose, we'll need exclusive lock anyway to add triggers to
!      * the pk table; trying to start with a lesser lock will just create a
!      * risk of deadlock.)
       */
!     pkrel = heap_openrv(fkconstraint->pktable, AccessExclusiveLock);

      /*
       * Validity and permissions checks
--- 3829,3841 ----
      Oid            constrOid;

      /*
!      * Grab a lock on the pk table, so that someone doesn't delete
!      * rows out from under us. We will eventually need to add triggers
!      * to the table, at which point we'll need an ExclusiveLock --
!      * therefore we grab an ExclusiveLock now to prevent possible
!      * deadlock.
       */
!     pkrel = heap_openrv(fkconstraint->pktable, ExclusiveLock);

      /*
       * Validity and permissions checks
Index: src/backend/commands/trigger.c
===================================================================
RCS file: /var/lib/cvs/pgsql/src/backend/commands/trigger.c,v
retrieving revision 1.178
diff -c -r1.178 trigger.c
*** src/backend/commands/trigger.c    20 Mar 2005 23:40:24 -0000    1.178
--- src/backend/commands/trigger.c    23 Mar 2005 07:25:21 -0000
***************
*** 87,93 ****
      ObjectAddress myself,
                  referenced;

!     rel = heap_openrv(stmt->relation, AccessExclusiveLock);

      if (stmt->constrrel != NULL)
          constrrelid = RangeVarGetRelid(stmt->constrrel, false);
--- 87,100 ----
      ObjectAddress myself,
                  referenced;

!     /*
!      * We need to prevent concurrent CREATE TRIGGER commands, as well
!      * as concurrent table modifications (INSERT, DELETE, UPDATE), so
!      * acquire an ExclusiveLock -- it should be fine to allow SELECTs
!      * to proceed. We could perhaps acquire ShareRowExclusiveLock, but
!      * there seems little gain in allowing SELECT FOR UPDATE.
!      */
!     rel = heap_openrv(stmt->relation, ExclusiveLock);

      if (stmt->constrrel != NULL)
          constrrelid = RangeVarGetRelid(stmt->constrrel, false);

Re: locks in CREATE TRIGGER, ADD FK

From
"Zeugswetter Andreas DAZ SD"
Date:
> >   It would keep the old table around while building the new, then grab
> > an exclusive lock to swap the two.
>
> Lock upgrading is right out.

It would need a whole new family of "intent" locks, with different rules.

Andreas


Re: locks in CREATE TRIGGER, ADD FK

From
Greg Stark
Date:
Neil Conway <neilc@samurai.com> writes:

> Tom Lane wrote:
> > It isn't 100% MVCC, I agree.  But it works because system catalog
> > lookups are SnapshotNow, and so when another session comes and wants to
> > look at the table it will see the committed new version of the pg_class
> > row pointing at the new relfilenode file.
> 
> If by "works", you mean "provides correct transactional semantics", then that
> simply isn't true. Not making CLUSTER and similar DDL commands MVCC compliant
> isn't the end of the world, I agree, but that doesn't make it correct, either.

I think he means it works because it doesn't matter whether the serializable
transaction sees the old table or the new one. As soon as the CLUSTER commits
the serializable transaction can start using the new one since it's
functionally identical to the old one (at least it's supposed to be, Tom
points out it isn't).

> > If you want to complain about MVCC violations in CLUSTER, think about
> > the fact that it scans the table with SnapshotNow, and therefore loses
> > rows that are committed-dead but might still be visible to somebody.

Ouch. That's, er, a problem. I guess currently it's fine for any transaction
using READ COMMITTED but it's already wrong for serializable transactions. And
it'll be wrong for READ COMMITTED if CLUSTER is changed not to take an
exclusive lock. 

-- 
greg



Re: swapping relfilenodes (was: Re: locks in CREATE TRIGGER,

From
Greg Stark
Date:
Tom Lane <tgl@sss.pgh.pa.us> writes:

> Bruce Momjian <pgman@candle.pha.pa.us> writes:
> > In fact, would a truncate during a backup cause the backup to be
> > inconsistent because it wouldn't be a true snapshot of the database at
> > backup start time?  Seems so.
> 
> No, because pg_dump holds AccessShareLock on every table that it intends
> to dump, thereby ensuring that TRUNCATE/CLUSTER/etc are held off.  The
> proposal to weaken the locks that those operations take would in fact
> break pg_dump.

It seems like that would be true for TRUNCATE but not CLUSTER.

Though pg_dump works in READ COMMITTED mode doesn't it? So it doesn't really
get a consistent view of the database exactly anyways. If it tried to go in
SERIALIZABLE mode I suspect it would rarely complete though.

-- 
greg



Re: locks in CREATE TRIGGER, ADD FK

From
Tom Lane
Date:
Neil Conway <neilc@samurai.com> writes:
> Tom Lane wrote:
>> I don't think this has been adequately thought through at all ... but
>> at least make it ExclusiveLock.  What is the use-case for allowing
>> SELECT FOR UPDATE in parallel with this?

> Ok, patch applied -- I adjusted it to use ExclusiveLock, and fleshed out 
> some of the comments.

I think last night's discussion makes it crystal-clear why I felt that
this hasn't been sufficiently thought through.  Please revert until the
discussion comes to a conclusion.
        regards, tom lane


Re: locks in CREATE TRIGGER, ADD FK

From
Tom Lane
Date:
Neil Conway <neilc@samurai.com> writes:
> Tom Lane wrote:
>> It isn't 100% MVCC, I agree.  But it works because system catalog
>> lookups are SnapshotNow, and so when another session comes and wants to
>> look at the table it will see the committed new version of the pg_class
>> row pointing at the new relfilenode file.

> If by "works", you mean "provides correct transactional semantics", then 
> that simply isn't true. Not making CLUSTER and similar DDL commands MVCC 
> compliant isn't the end of the world, I agree, but that doesn't make it 
> correct, either.

I agree that we aren't MVCC with respect to DDL operations (and for this
purpose CLUSTER is DDL).  Trying to become so would open a can of worms
far larger than it's worth, though, IMHO.

Let me give you a couple of illustrations of why things are the way they
are.  Consider

    Transaction 1             Transaction 2

    BEGIN;
    ...                       BEGIN;
    ...                       INSERT INTO a;
    CLUSTER a;                ...

Currently, because T2 continues to hold a write lock on A until it
commits, T1's CLUSTER will block until T2 commits; therefore CLUSTER's
use of SnapshotNow is sufficient to not lose any live tuples.  Were T1
to use a transaction-aware snapshot to scan A, this would not be so.
(SnapshotAny would work, but CLUSTER would have to become an order of
magnitude harder than it is now, because it would have to preserve
UPDATE chain link relationships ... compare VACUUM FULL's treatment of
tuple chains.)

    Transaction 1             Transaction 2

    BEGIN;
    ...                       CLUSTER a;
    INSERT INTO a;

Were T1 using a transaction-aware snapshot to read pg_class, it would
insert its new tuple into the wrong relfilenode for A, causing either
immediate failure or eventual loss of a live tuple.

    Transaction 1             Transaction 2

    BEGIN;
    ...                       CREATE TRIGGER ... ON INSERT TO a ...
    INSERT INTO a;

Were T1 using a transaction-aware snapshot to read pg_trigger, it would
fail to fire the already-committed insert trigger.  (Variants of this
include failing to insert index entries into a new index, failing to
apply a new constraint, etc.)  You don't have to assume that T1 is
in Serializable mode, either.  It might be using Read Committed, but
the INSERT starts and sets its snapshot while T2 is still in progress;
then of course blocks until T2 commits; then does the wrong thing
because it is still paying attention to pre-T2 catalog entries.  This is
why LockRelation accepts SI inval messages immediately after each lock
acquisition; it's to ensure that we do see the effects of T2.

    Transaction 1             Transaction 2

    BEGIN;
    SELECT FROM a;
    ...                       CREATE TRIGGER ... ON INSERT TO a ...
    INSERT INTO a;

Ordinarily, once T1 has touched relation A, it can be sure that A's
schema will not change until end of transaction.  The change you
committed last night removes that guarantee, at least for the
limited case of triggers, and makes the above interleaving possible.
While I haven't come up with a clear failure case after a few minutes'
thought, I'm not convinced that there isn't one.
        regards, tom lane


Re: swapping relfilenodes (was: Re: locks in CREATE TRIGGER,

From
Tom Lane
Date:
Greg Stark <gsstark@mit.edu> writes:
> Though pg_dump works in READ COMMITTED mode doesn't it?

Certainly not.
        regards, tom lane


Re: swapping relfilenodes (was: Re: locks in CREATE TRIGGER,

From
Greg Stark
Date:
Tom Lane <tgl@sss.pgh.pa.us> writes:

> Greg Stark <gsstark@mit.edu> writes:
> > Though pg_dump works in READ COMMITTED mode doesn't it?
> 
> Certainly not.

It works in serializable mode? I guess the only reason we don't see
"transaction failed" ever is because it's not doing any updates?

-- 
greg



Re: swapping relfilenodes (was: Re: locks in CREATE TRIGGER,

From
Tom Lane
Date:
Greg Stark <gsstark@mit.edu> writes:
> Tom Lane <tgl@sss.pgh.pa.us> writes:
>> Greg Stark <gsstark@mit.edu> writes:
>>> Though pg_dump works in READ COMMITTED mode doesn't it?
>> 
>> Certainly not.

> It works in serializable mode? I guess the only reason we don't see
> "transaction failed" ever is because it's not doing any updates?

Right.
        regards, tom lane


Re: locks in CREATE TRIGGER, ADD FK

From
Neil Conway
Date:
Tom Lane wrote:
> I think last night's discussion makes it crystal-clear why I felt that
> this hasn't been sufficiently thought through.  Please revert until the
> discussion comes to a conclusion.

I applied the patch because I don't think it is very closely related to 
the discussion. But if you'd prefer, I'm happy to wait until we're done, 
so I've reverted the patch.

-Neil


Re: locks in CREATE TRIGGER, ADD FK

From
Neil Conway
Date:
Tom Lane wrote:
> I agree that we aren't MVCC with respect to DDL operations (and for this
> purpose CLUSTER is DDL).  Trying to become so would open a can of worms
> far larger than it's worth, though, IMHO.

I think if we can come up with a reasonable way to handle all the 
consequences, it's worth doing. And yes, I realize there are a lot of 
consequences, so it may well not be possible.

>     Transaction 1        Transaction 2
> 
>     BEGIN;
> 
>     ...            BEGIN;
> 
>     ...            INSERT INTO a;
> 
>     CLUSTER a;        ...
> 
> Currently, because T2 continues to hold a write lock on A until it
> commits, T1's CLUSTER will block until T2 commits; therefore CLUSTER's
> use of SnapshotNow is sufficient to not lose any live tuples.  Were T1
> to use a transaction-aware snapshot to scan A, this would not be so.

I think this is somewhat tangential: we're discussing changing the 
snapshot used to scan system catalogs, not user relations like A. The 
only reason that CLUSTER's use of SnapshotNow is a problem at the moment 
is the same reason that TRUNCATE is a problem -- a concurrent 
serializable transaction will use the new relfilenode, not the old one.

>     Transaction 1        Transaction 2
> 
>     BEGIN;
> 
>     ...            CLUSTER a;
> 
>     INSERT INTO a;
> 
> Were T1 using a transaction-aware snapshot to read pg_class, it would
> insert its new tuple into the wrong relfilenode for A, causing either
> immediate failure or eventual loss of a live tuple.

Yes, definitely a problem :( The same applies to TRUNCATE, naturally. 
The only somewhat reasonable behavior I can think of is to cause 
modifications to the old relfilenode to fail in a concurrent serializable 
transaction. The behavior would be somewhat analogous to an UPDATE in a 
serializable transaction failing because of a concurrent data 
modification, although in this case we would error out on any 
modification (e.g. INSERT).
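
For comparison, the existing serializable UPDATE failure looks like this
(hypothetical table t with at least one row):

    -- session 1
    BEGIN;
    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
    SELECT count(*) FROM t;        -- snapshot taken

    -- session 2
    UPDATE t SET x = x + 1;        -- commits

    -- session 1
    UPDATE t SET x = x + 1;
    -- ERROR:  could not serialize access due to concurrent update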

-Neil


Re: locks in CREATE TRIGGER, ADD FK

From
Bruce Momjian
Date:
Alvaro Herrera wrote:
> On Wed, Mar 23, 2005 at 10:42:01AM +0800, Christopher Kings-Lynne wrote:
> > >>If you want to be my friend forever, then fix CLUSTER so that it uses 
> > >>sharerowexclusive as well :D
> > >
> > >I don't think it's as easy as that, because you have to move tuples
> > >around in the cluster operation.  Same sort of issue as vacuum full I 
> > >would suggest.
> > 
> > Cluster doesn't move rows...
> > 
> > I didn't say it was easy.  It would involve changing how cluster works. 
> >  It would keep the old table around while building the new, then grab 
> > an exclusive lock to swap the two.
> 
> Huh, cluster already does that.
> 
> I don't remember what the rationale was for locking the table, leaving
> even simple SELECTs out.  (In fact, IIRC the decision wasn't made by me,
> and it wasn't discussed at all.)
> 

The issue is that we would have to escalate the shared lock to exclusive
to swap out the files, and lock escalation could lead to
deadlock/starvation.

-- 
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 359-1001
  +  If your life is a hard drive,     |  13 Roberts Road
  +  Christ can be your backup.        |  Newtown Square, Pennsylvania 19073


Re: locks in CREATE TRIGGER, ADD FK

From
Neil Conway
Date:
On Wed, 2005-03-23 at 10:04 -0500, Tom Lane wrote:
> I think last night's discussion makes it crystal-clear why I felt that
> this hasn't been sufficiently thought through.  Please revert until the
> discussion comes to a conclusion.

Are there any remaining objections to reapplying this patch?

The original commit message is here:

http://archives.postgresql.org/pgsql-committers/2005-03/msg00316.php

The archives of the -hackers thread are here:

http://archives.postgresql.org/pgsql-hackers/2005-03/msg00764.php

-Neil




Re: locks in CREATE TRIGGER, ADD FK

From
Tom Lane
Date:
Neil Conway <neilc@samurai.com> writes:
> Are there any remaining objections to reapplying this patch?
> The original commit message is here:
> http://archives.postgresql.org/pgsql-committers/2005-03/msg00316.php
> The archives of the -hackers thread are here:
> http://archives.postgresql.org/pgsql-hackers/2005-03/msg00764.php

I'm still concerned about the last example I raised in
http://archives.postgresql.org/pgsql-hackers/2005-03/msg00840.php
which was:

>>         Transaction 1           Transaction 2
>> 
>>         BEGIN;
>> 
>>         SELECT FROM a;
>> 
>>         ...                     CREATE TRIGGER ... ON INSERT TO a ...
>> 
>>         INSERT INTO a;
>> 
>> Ordinarily, once T1 has touched relation A, it can be sure that A's
>> schema will not change until end of transaction.  The change you
>> committed last night removes that guarantee, at least for the
>> limited case of triggers, and makes the above interleaving possible.
>> While I haven't come up with a clear failure case after a few minutes'
>> thought, I'm not convinced that there isn't one.

It's possible that this is safe for triggers only, but I'm not 100%
convinced of that, and I'm not sure I see the point of relaxing the
general rule "schema changes require AccessExclusiveLock" for just
this one operation.
        regards, tom lane


Re: locks in CREATE TRIGGER, ADD FK

From
Andrew - Supernews
Date:
On 2005-05-30, Neil Conway <neilc@samurai.com> wrote:
> On Wed, 2005-03-23 at 10:04 -0500, Tom Lane wrote:
>> I think last night's discussion makes it crystal-clear why I felt that
>> this hasn't been sufficiently thought through.  Please revert until the
>> discussion comes to a conclusion.
>
> Are there any remaining objections to reapplying this patch?

I've run into a few questions recently that might be relevant to the
issue of DDL locking in general and therefore possibly this change in
particular.

The most significant one is to do with the pg_get_*def functions and
their consistency, or otherwise, with explicit scans of the system
catalogs. The problem here of course is that the pg_get_*def functions
mostly (but not exclusively) use the syscache and therefore see data
relative to SnapshotNow, whereas the queries that are invoking them
are likely to be doing explicit scans of the catalog tables within the
transaction's active snapshot (and for a long-running serializable
transaction such as pg_dump, these may be some way apart).

The obvious place to look for failure modes is to see whether pg_dump
can be made to fail by deleting something (index, perhaps?) that it is
expecting to find, and see whether it chokes (pg_get_indexdef will elog
if the index doesn't exist in SnapshotNow). Dropping a view might be
another case where this can be made to fail.
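
A sketch of the sort of failure mode in question (object names hypothetical;
the explicit catalog scan uses the transaction's snapshot, while
pg_get_indexdef goes through the syscache):

    -- session 1, a long-running serializable transaction
    BEGIN;
    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
    SELECT count(*) FROM pg_class;                  -- snapshot taken

    -- session 2
    DROP INDEX some_idx;                            -- commits

    -- session 1: its snapshot still sees the pg_index row, but the
    -- SnapshotNow syscache lookup inside pg_get_indexdef does not
    SELECT pg_get_indexdef(indexrelid) FROM pg_index;   -- elogs, per the above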

-- 
Andrew, Supernews
http://www.supernews.com - individual and corporate NNTP services