Thread: relation ### modified while in use

relation ### modified while in use

From
Alex Pilosov
Date:
I'm having the error 'relation <number> modified while in use' fairly
often. It is the same relation that's always giving a problem. Usually
after all currently-running backends die away with that error, the error
disappears. If I shut down, run ipcclean, and start up postgres again, it also
disappears.


What causes this? I have a feeling that it has to do with referential
integrity (the table in question is referenced by almost every other
table), and possibly with a leak of reference counts?

This is all with pg7.0.2 on i386.

-alex



Re: relation ### modified while in use

From
Tom Lane
Date:
Alex Pilosov <alex@pilosoft.com> writes:
> I'm having the error 'relation <number> modified while in use' fairly
> often. It is the same relation that's always giving a problem.

Hmm, could we see the full schema dump for that relation?
(pg_dump -s -t tablename dbname will do)

If you are not actively modifying the schema, then in theory you should
not see this message, but...
        regards, tom lane


Re: relation ### modified while in use

From
Alex Pilosov
Date:
I think this happens after I create/modify tables which reference this
table. This is spontaneous, and doesn't _always_ happen...

Anything I could do next time it craps up to help track the problem down?

-alex

----
CREATE TABLE "customers" (       "cust_id" int4 DEFAULT nextval('customers_cust_id_seq'::text) NOT 
NULL,       "phone_npa" character(3) NOT NULL,       "phone_nxx" character(3) NOT NULL,       "phone_rest" character(4)
NOTNULL,       "e_mail" character varying(30),       "daytime_npa" character(3),       "daytime_nxx" character(3),
"daytime_rest" character(4),       "is_business" bool DEFAULT 'f' NOT NULL,       PRIMARY KEY ("cust_id") );
 

CREATE CONSTRAINT TRIGGER "<unnamed>" AFTER DELETE ON "customers"  NOT
DEFERRABLE INITIALLY IMMEDIATE FOR EACH ROW EXECUTE PROCEDURE
"RI_FKey_noaction_del" ('<unnamed>', 'cc_charges', 'customers',
'UNSPECIFIED', 'cust_id', 'cust_id');

CREATE CONSTRAINT TRIGGER "<unnamed>" AFTER UPDATE ON "customers"  NOT
DEFERRABLE INITIALLY IMMEDIATE FOR EACH ROW EXECUTE PROCEDURE
"RI_FKey_noaction_upd" ('<unnamed>', 'cc_charges', 'customers',
'UNSPECIFIED', 'cust_id', 'cust_id');
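
The two unnamed constraint triggers above appear to be pg_dump's representation
of a foreign-key reference from cc_charges to customers. For illustration only,
a hypothetical definition of the referencing table (its other columns are
assumed) would look roughly like:

    CREATE TABLE "cc_charges" (
        "charge_id" int4 PRIMARY KEY,   -- assumed column, for illustration
        "cust_id" int4 REFERENCES "customers" ("cust_id")
    );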


On Sun, 22 Oct 2000, Tom Lane wrote:

> Alex Pilosov <alex@pilosoft.com> writes:
> > I'm having the error 'relation <number> modified while in use' fairly
> > often. It is the same relation that's always giving a problem.
> 
> Hmm, could we see the full schema dump for that relation?
> (pg_dump -s -t tablename dbname will do)
> 
> If you are not actively modifying the schema, then in theory you should
> not see this message, but...
> 
>             regards, tom lane
> 
> 



Re: relation ### modified while in use

From
Tom Lane
Date:
Alex Pilosov <alex@pilosoft.com> writes:
> I think this happens after I create/modify tables which reference this
> table. This is spontaneous, and doesn't _always_ happen...

Um.  I was hoping it was something more easily fixable :-(.  What's
causing the relcache to decide that the rel has been modified is the
addition or removal of foreign-key triggers on the rel.  Which seems
legitimate.  (It's barely possible that we could get away with allowing
triggers to be added or deleted mid-transaction, but that doesn't feel
right to me.)

There are two distinct known bugs that allow the error to be reported.
These have been discussed before, but to recap:

1. relcache will complain if the notification of cache invalidation
arrives after transaction start and before first use of the referenced
rel (when there was already a relcache entry left over from a prior
transaction).  In this situation we should allow the change to occur
without complaint, ISTM.  But the relcache doesn't currently have any
concept of first reference versus later references.

2. Even with #1 fixed, you could still get this error, because we are
way too willing to release locks on rels that have been referenced.
Therefore you can get this sequence:

Session 1                               Session 2

begin;

select * from foo;
-- LockRelation(AccessShareLock);
-- UnLockRelation(AccessShareLock);

                                        ALTER foo ADD CONSTRAINT;
                                        -- LockRelation(AccessExclusiveLock);
                                        -- lock released at commit

select * from foo;
-- LockRelation(AccessShareLock);
-- table schema update is detected, error must be reported


I think that we should hold at least AccessShareLock on any relation
that a transaction has touched, all the way to end of transaction.
This creates the potential for deadlocks that did not use to happen;
for example, if we have two transactions that concurrently both do
	begin;
	select * from foo;  -- gets AccessShareLock
	LOCK TABLE foo;     -- gets AccessExclusiveLock
	...
	end;

this will work currently because the SELECT releases AccessShareLock
when done, but it will deadlock if SELECT does not release that lock.
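
For illustration, a minimal sketch of the interleaving that deadlocks once the
AccessShareLock is held to end of transaction (two sessions, same table as in
the example above):

    -- Session A                   -- Session B
    begin;                         begin;
    select * from foo;             select * from foo;
    -- A holds AccessShareLock     -- B holds AccessShareLock
    LOCK TABLE foo;                LOCK TABLE foo;
    -- A waits for B's share       -- B waits for A's share
    -- lock to be released         -- lock to be released: deadlock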

That's annoying but I see no way around it, if we are to allow
concurrent transactions to do schema modifications of tables that other
transactions are using.

Comments anyone?
        regards, tom lane


Re: relation ### modified while in use

From
Alex Pilosov
Date:
On Mon, 23 Oct 2000, Tom Lane wrote:

> when done, but it will deadlock if SELECT does not release that lock.
> 
> That's annoying but I see no way around it, if we are to allow
> concurrent transactions to do schema modifications of tables that other
> transactions are using.

I might be in over my head, but maybe this is the time for yet another type
of lock? A "do-not-modify-this-table-under-me" lock, which would persist
until the transaction commits and conflict only with the ALTER TABLE
lock (AccessExclusiveLock)?

I realise we already have many lock types, but this seems like the proper
solution to me...

In a related vein: is there a way to see who (at least which process id) is
holding locks on tables?



Re: relation ### modified while in use

From
Tom Lane
Date:
Alex Pilosov <alex@pilosoft.com> writes:
> I might be in above my head, but maybe this is time for yet another type
> of lock?

Wouldn't help --- it's still a deadlock.
        regards, tom lane


Re: relation ### modified while in use

From
Alex Pilosov
Date:
On Mon, 23 Oct 2000, Alex Pilosov wrote:

> On Mon, 23 Oct 2000, Tom Lane wrote:
> 
> > when done, but it will deadlock if SELECT does not release that lock.
> > 
> > That's annoying but I see no way around it, if we are to allow
> > concurrent transactions to do schema modifications of tables that other
> > transactions are using.
> 
> I might be in above my head, but maybe this is time for yet another type
> of lock? "Do-not-modify-this-table-under-me" lock, which shall persist
> until transaction commits, and will conflict only with alter table
> lock/AccessExclusiveLock?

I just realised that I _am_ in over my head, and the above makes no
sense, and is identical to holding AccessShareLock.

Sorry ;)

-alex




Re: relation ### modified while in use

From
Alex Pilosov
Date:
On Mon, 23 Oct 2000, Tom Lane wrote:

>     begin;
>     select * from foo;  -- gets AccessShareLock
>     LOCK TABLE foo;        -- gets AccessExclusiveLock
>     ...
>     end;
> 
> this will work currently because the SELECT releases AccessShareLock
> when done, but it will deadlock if SELECT does not release that lock.
Probably a silly question, but since this is the same transaction,
couldn't the lock be 'upgraded' without a problem? 

Or does postgres not currently have any notion of lock upgrades...?

-alex





Re: relation ### modified while in use

From
Tom Lane
Date:
Alex Pilosov <alex@pilosoft.com> writes:
> On Mon, 23 Oct 2000, Tom Lane wrote:
>> begin;
>> select * from foo;  -- gets AccessShareLock
>> LOCK TABLE foo;        -- gets AccessExclusiveLock
>> ...
>> end;
>> 
>> this will work currently because the SELECT releases AccessShareLock
>> when done, but it will deadlock if SELECT does not release that lock.

> Probably a silly question, but since this is the same transaction,
> couldn't the lock be 'upgraded' without a problem? 

No, the problem happens when two transactions do the above at about
the same time.  After the SELECTs, both transactions are holding
AccessShareLock, and both are waiting for the other to let go so's they
can get AccessExclusiveLock.  AFAIK any concept of "lock upgrade" falls
afoul of this basic deadlock risk.

We do have a need to be careful that the system doesn't try to do
lock upgrades internally.  For example, in
	LOCK TABLE foo;
the parsing step had better not grab AccessShareLock on foo in
advance of the main execution step asking for AccessExclusiveLock.
        regards, tom lane


Re: relation ### modified while in use

From
Philip Warner
Date:
At 01:01 23/10/00 -0400, Tom Lane wrote:
>(It's barely possible that we could get away with allowing
>triggers to be added or deleted mid-transaction, but that doesn't feel
>right to me.)
>

A little OT, but the above is a useful feature for managing data; it's not
common, but the following sequence is essential to managing a database safely:

- Start TX
- Drop a few triggers, constraints etc
- Add/change data to fix erroneous/no longer accurate business rules
(audited, of course)
- Reapply the triggers, constraints
- Make sure it looks right
- Commit/Rollback based on the above check

It is very undesirable to drop the triggers/constraints in a separate
transaction, since a communications failure could leave them unapplied. When
it is all done in one TX, at least, the recovery process should back out the TX.
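
A minimal SQL sketch of that sequence; the trigger, table, and column names
here are hypothetical, purely for illustration:

    BEGIN;
    DROP TRIGGER "check_balance" ON accounts;           -- drop the old business rule
    UPDATE accounts SET balance = 0 WHERE closed;       -- fix data the old rule would reject
    CREATE TRIGGER "check_balance" BEFORE UPDATE ON accounts
        FOR EACH ROW EXECUTE PROCEDURE check_balance(); -- reapply the rule
    SELECT count(*) FROM accounts WHERE balance < 0;    -- make sure it looks right
    COMMIT;   -- or ROLLBACK, based on the check above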


----------------------------------------------------------------
Philip Warner                    |     __---_____
Albatross Consulting Pty. Ltd.   |----/       -  \
(A.B.N. 75 008 659 498)          |          /(@)   ______---_
Tel: (+61) 0500 83 82 81         |                 _________  \
Fax: (+61) 0500 83 82 82         |                 ___________ |
Http://www.rhyme.com.au          |                /           \|
                                 |    --________--
PGP key available upon request,  |  /
and from pgp5.ai.mit.edu:11371   |/


Re: relation ### modified while in use

From
Tom Lane
Date:
Philip Warner <pjw@rhyme.com.au> writes:
> At 01:01 23/10/00 -0400, Tom Lane wrote:
>> (It's barely possible that we could get away with allowing
>> triggers to be added or deleted mid-transaction, but that doesn't feel
>> right to me.)

> A little OT, but the above is a useful feature for managing data; it's not
> common, but the following sequence is essential to managing a database safely:

> - Start TX
> - Drop a few triggers, constraints etc
> - Add/change data to fix erroneous/no longer accurate business rules
> (audited, of course)
> - Reapply the triggers, constraints
> - Make sure it looks right
> - Commit/Rollback based on the above check

There is nothing wrong with the above as long as you hold exclusive
lock on the tables being modified for the duration of the transaction.

The scenario I'm worried about is on the other side, ie, a transaction
that has already done some things to a table is notified of a change to
that table's triggers/constraints/etc being committed by another
transaction.  Can it deal with that consistently?  I don't think it can
in general.  What I'm proposing is that once an xact has touched a
table, other xacts should not be able to apply schema updates to that
table until the first xact commits.
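
A sketch of the behaviour being proposed here, using hypothetical table and
constraint names:

    -- Session A                  -- Session B
    BEGIN;
    SELECT * FROM orders;         -- A has now touched orders
                                  ALTER TABLE orders
                                      ADD CONSTRAINT orders_cust_fk
                                      FOREIGN KEY (cust_id)
                                      REFERENCES customers (cust_id);
                                  -- under the proposal this schema change
                                  -- waits here ...
    COMMIT;                       -- ... and proceeds only once A's xact ends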
        regards, tom lane


Re: relation ### modified while in use

From
Philip Warner
Date:
At 01:37 23/10/00 -0400, Tom Lane wrote:
>
>What I'm proposing is that once an xact has touched a
>table, other xacts should not be able to apply schema updates to that
>table until the first xact commits.

Totally agree. You may want to go further and say that metadata changes
cannot be made while that *connection* exists: if the client has prepared a
query against a table, will it cause a problem when the query is run?



----------------------------------------------------------------
Philip Warner                    |     __---_____
Albatross Consulting Pty. Ltd.   |----/       -  \
(A.B.N. 75 008 659 498)          |          /(@)   ______---_
Tel: (+61) 0500 83 82 81         |                 _________  \
Fax: (+61) 0500 83 82 82         |                 ___________ |
Http://www.rhyme.com.au          |                /           \|
                                 |    --________--
PGP key available upon request,  |  /
and from pgp5.ai.mit.edu:11371   |/


Re: relation ### modified while in use

From
Hiroshi Inoue
Date:

Tom Lane wrote:

> Philip Warner <pjw@rhyme.com.au> writes:
> > At 01:01 23/10/00 -0400, Tom Lane wrote:
> >> (It's barely possible that we could get away with allowing
> >> triggers to be added or deleted mid-transaction, but that doesn't feel
> >> right to me.)
>
> > A little OT, but the above is a useful feature for managing data; it's not
> > common, but the following sequence is essential to managing a database safely:
>
> > - Start TX
> > - Drop a few triggers, constraints etc
> > - Add/change data to fix erroneous/no longer accurate business rules
> > (audited, of course)
> > - Reapply the triggers, constraints
> > - Make sure it looks right
> > - Commit/Rollback based on the above check
>
> There is nothing wrong with the above as long as you hold exclusive
> lock on the tables being modified for the duration of the transaction.
>
> The scenario I'm worried about is on the other side, ie, a transaction
> that has already done some things to a table is notified of a change to
> that table's triggers/constraints/etc being committed by another
> transaction.  Can it deal with that consistently?  I don't think it can
> in general.  What I'm proposing is that once an xact has touched a
> table, other xacts should not be able to apply schema updates to that
> table until the first xact commits.
>

I agree with you.
I've wondered why AccessShareLock is a short term lock.

If we had a mechanism to acquire a share lock on a tuple, we
could use it for managing system info generally. However, the
only lock allowed on a tuple is exclusive.  Access(Share/Exclusive)
Lock on tables would give us a restricted solution for pg_class
tuples.

There's a possibility of deadlock in any case, but there are few
cases where AccessExclusiveLock is really needed, and we could
acquire an AccessExclusiveLock manually from the start if
necessary.

I'm not sure about the use of AccessShareLock in the parse-analyze-
optimize phase, however.

Regards.
Hiroshi Inoue




Re: relation ### modified while in use

From
Philip Warner
Date:
At 15:29 23/10/00 +0900, Hiroshi Inoue wrote:
>
>If we have a mechanism to acquire a share lock on a tuple,we
>could use it for managing system info generally. However the
>only allowed lock on a tuple is exclusive.  Access(Share/Exclusive)
>Lock on tables would give us a restricted solution about pg_class
>tuples.
>

Don't we have this ability? What about taking a RowShare lock on the
pg_class tuple whenever you read from the table; then requiring schema
updates take a RowExclusive lock on the pg_class tuple?

As you say, it won't prevent deadlocks, but it seems like a reasonable
thing to do.


----------------------------------------------------------------
Philip Warner                    |     __---_____
Albatross Consulting Pty. Ltd.   |----/       -  \
(A.B.N. 75 008 659 498)          |          /(@)   ______---_
Tel: (+61) 0500 83 82 81         |                 _________  \
Fax: (+61) 0500 83 82 82         |                 ___________ |
Http://www.rhyme.com.au          |                /           \|
                                 |    --________--
PGP key available upon request,  |  /
and from pgp5.ai.mit.edu:11371   |/


Re: relation ### modified while in use

From
Tom Lane
Date:
Hiroshi Inoue <Inoue@tpf.co.jp> writes:
> I'm not sure about the use of AccessShareLock in parse-analyze-
> optimize phase however.

That's something we'll have to clean up while fixing this.  Currently
the system may acquire and release AccessShareLock multiple times while
parsing/rewriting/planning.  We need to make sure that an appropriate
lock is grabbed at *first* use and then held.

Should save a few cycles as well as being more correct ...
        regards, tom lane


Re: relation ### modified while in use

From
Tom Lane
Date:
Philip Warner <pjw@rhyme.com.au> writes:
> Don't we have this ability? What about taking a RowShare lock on the
> pg_class tuple whenever you read from the table; then requiring schema
> updates take a RowExclusive lock on the pg_class tuple?

How is that different from taking locks on the table itself?

In any case, we don't have the ability to hold multiple classes of locks
on individual tuples, AFAIK.  UPDATE and SELECT FOR UPDATE use a
different mechanism that involves setting fields in the header of the
affected tuple.  There's no room there for more than one kind of lock;
what's worse, checking and waiting for that lock is far slower than
normal lock-manager operations.  (But on the plus side, you can be
holding locks on any number of tuples without risking overflowing the
lock manager table, and releasing the locks at commit takes no cycles.)
        regards, tom lane


Re: relation ### modified while in use

From
Philip Warner
Date:
At 10:45 23/10/00 -0400, Tom Lane wrote:
>Philip Warner <pjw@rhyme.com.au> writes:
>> Don't we have this ability? What about taking a RowShare lock on the
>> pg_class tuple whenever you read from the table; then requiring schema
>> updates take a RowExclusive lock on the pg_class tuple?
>
>How is that different from taking locks on the table itself?

Only slightly; one interpretation of a table lock is that it locks all of
the data in the table; and a lock on the pg_class row locks the metadata. I
must admit that I am having a little difficulty thinking of a case where
the distinction would be useful...


>In any case, we don't have the ability to hold multiple classes of locks
>on individual tuples, AFAIK.  UPDATE and SELECT FOR UPDATE use a
>different mechanism that involves setting fields in the header of the
>affected tuple.  There's no room there for more than one kind of lock;
>what's worse, checking and waiting for that lock is far slower than
>normal lock-manager operations. 

So where do
    SELECT FOR UPDATE IN ROW SHARE MODE
and
    LOCK TABLE IN ROW EXCLUSIVE MODE
statements fit in?

They *seem* to provide differing levels of row locking.



----------------------------------------------------------------
Philip Warner                    |     __---_____
Albatross Consulting Pty. Ltd.   |----/       -  \
(A.B.N. 75 008 659 498)          |          /(@)   ______---_
Tel: (+61) 0500 83 82 81         |                 _________  \
Fax: (+61) 0500 83 82 82         |                 ___________ |
Http://www.rhyme.com.au          |                /           \|
                                 |    --________--
PGP key available upon request,  |  /
and from pgp5.ai.mit.edu:11371   |/


Re: relation ### modified while in use

From
"Vadim Mikheev"
Date:
> > in general.  What I'm proposing is that once an xact has touched a
> > table, other xacts should not be able to apply schema updates to that
> > table until the first xact commits.
> >
> 
> I agree with you.

I don't know. We discussed this issue just after 6.5 and decided to
allow concurrent schema modifications.
Oracle has dictionary locks but runs each DDL statement in a separate
xaction, so there is no deadlock condition there. OTOH, I wouldn't worry
about deadlock - one just has to follow common anti-deadlock rules.

> I've wondered why AccessShareLock is a short term lock.

MUST BE. AccessShare-/Exclusive-Locks are *data* locks.
If one wants to protect the schema then new schema share/excl locks
must be introduced. There is no conflict between data and
schema locks - they are orthogonal.

We use AccessShare-/Exclusive-Locks for schema because...
we allow concurrent schema modifications and no true schema
locks were required.

> If we have a mechanism to acquire a share lock on a tuple,we
> could use it for managing system info generally. However the
> only allowed lock on a tuple is exclusive.  Access(Share/Exclusive)

Actually, just look at lock.h:LTAG structure - lock manager supports
locking of "some objects" inside tables:

typedef struct LTAG
{
    Oid        relId;
    Oid        dbId;
    union
    {
        BlockNumber    blkno;
        Transaction        xid;
    } objId;
    ...

- we could add oid to union above and lock tables by acquiring lock
on pg_class with objId.oid = table' oid. Same way we could lock indices
and whatever we want... if we want -:)

> Lock on tables would give us a restricted solution about pg_class
> tuples.
> 
> Thers'a possibility of deadlock in any case but there are few
> cases when AccessExclusiveLock is really needed and we could
> acquire an AccessExclusiveLock manually from the first if
> necessary.
> 
> I'm not sure about the use of AccessShareLock in parse-analyze-
> optimize phase however.

There is a notion of breakable (parser) locks in the Oracle documentation -:)

Vadim




Re: relation ### modified while in use

From
Tom Lane
Date:
Philip Warner <pjw@rhyme.com.au> writes:
> Only slightly; one interpretation of a table lock is that it locks all of
> the data in the table; and a lock on the pg_class row locks the metadata. I
> must admit that I am having a little difficulty thinking of a case where
> the distinction would be useful...

I can't see any value in locking the data without locking the metadata.
Given that, the other way round is sort of moot...

> So where do
>     SELECT FOR UPDATE IN ROW SHARE MODE 

We don't support that (never heard of it before, in fact)

> and 
>     LOCK TABLE IN ROW EXCLUSIVE MODE statements. 
> fit in? 

That one is just a table lock (RowExclusiveLock).  All the variants
of LOCK TABLE are table-level locks.
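
For illustration, a rough sketch of the distinction being made here (the table
and column names are assumed):

    BEGIN;
    LOCK TABLE foo IN ROW EXCLUSIVE MODE;        -- table-level lock (RowExclusiveLock),
                                                 -- tracked in the shared lock manager
    SELECT * FROM foo WHERE id = 1 FOR UPDATE;   -- tuple-level lock, recorded in the
                                                 -- tuple header rather than the lock manager
    COMMIT;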
        regards, tom lane


Re: relation ### modified while in use

From
Hiroshi Inoue
Date:

Philip Warner wrote:

> At 15:29 23/10/00 +0900, Hiroshi Inoue wrote:
> >
> >If we have a mechanism to acquire a share lock on a tuple,we
> >could use it for managing system info generally. However the
> >only allowed lock on a tuple is exclusive.  Access(Share/Exclusive)
> >Lock on tables would give us a restricted solution about pg_class
> >tuples.
> >
>
> Don't we have this ability? What about taking a RowShare lock on the
> pg_class tuple whenever you read from the table; then requiring schema
> updates take a RowExclusive lock on the pg_class tuple?
>

Both RowShare and RowExclusive locks are table-level
locks. The implementation of tuple-level locking is
quite different from that of table-level locking.
The information for table-level locking is held in shared
memory. OTOH, the information for tuple-level locking
is held in the tuple itself, i.e. a transaction (t_xmax) is
updating/deleting/selecting for update the tuple....
If other backends are about to update/delete/select
for update a tuple, they check the information in the
tuple, and if the tuple is being updated/... they wait until
the end of that transaction (t_xmax).
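
A minimal sketch of that waiting behaviour (table and column names are
hypothetical):

    -- Session A                                 -- Session B
    BEGIN;
    SELECT * FROM foo WHERE id = 1 FOR UPDATE;   -- A's xid is stored in the tuple (t_xmax)
                                                 BEGIN;
                                                 SELECT * FROM foo WHERE id = 1 FOR UPDATE;
                                                 -- B sees the tuple is locked and waits
                                                 -- for transaction A to end
    COMMIT;                                      -- B's SELECT now returns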

Regards.
Hiroshi Inoue



Re: relation ### modified while in use

From
Hiroshi Inoue
Date:

Vadim Mikheev wrote:

> > > in general.  What I'm proposing is that once an xact has touched a
> > > table, other xacts should not be able to apply schema updates to that
> > > table until the first xact commits.
> > >
> >
> > I agree with you.
>
> I don't know. We discussed this issue just after 6.5 and decided to
> allow concurrent schema modifications.
> Oracle has disctionary locks but run each DDL statement in separate
> xaction, so - no deadlock condition here. OTOH, I wouldn't worry
> about deadlock - one just had to follow common anti-deadlock rules.
>
> > I've wondered why AccessShareLock is a short term lock.
>
> MUST BE. AccessShare-/Exclusive-Locks are *data* locks.
> If one want to protect schema then new schema share/excl locks
> must be inroduced. There is no conflict between data and
> schema locks - they are orthogonal.
>

Oracle doesn't have Access...Lock locks.
In my understanding, the locking levels you provided contain
an implicit share/exclusive lock on the corresponding
pg_class tuple, i.e. AccessExclusiveLock acquires an
exclusive lock on the corresponding pg_class tuple and
the other locks acquire a share lock. Is that right?

>
> We use AccessShare-/Exclusive-Locks for schema because of...
> we allow concurrent schema modifications and no true schema
> locks were required.
>
> > If we have a mechanism to acquire a share lock on a tuple,we
> > could use it for managing system info generally. However the
> > only allowed lock on a tuple is exclusive.  Access(Share/Exclusive)
>
> Actually, just look at lock.h:LTAG structure - lock manager supports
> locking of "some objects" inside tables:
>
> typedef struct LTAG
> {
>     Oid        relId;
>     Oid        dbId;
>     union
>     {
>         BlockNumber    blkno;
>         Transaction        xid;
>     } objId;
>     ...
>  - we could add oid to union above and lock tables by acquiring lock
> on pg_class with objId.oid = table' oid. Same way we could lock indices
> and whatever we want... if we want -:)
>

As you know well, this implementation has the flaw that we have
to worry about running short of shared memory.


> > Lock on tables would give us a restricted solution about pg_class
> > tuples.
> >
> > Thers'a possibility of deadlock in any case but there are few
> > cases when AccessExclusiveLock is really needed and we could
> > acquire an AccessExclusiveLock manually from the first if
> > necessary.
> >
> > I'm not sure about the use of AccessShareLock in parse-analyze-
> > optimize phase however.
>
> There is notion about breakable (parser) locks in Oracle documentation -:)
>

I've known about it too, but I don't know how to realize a similar
concept in PostgreSQL.

Regards.
Hiroshi Inoue



RE: relation ### modified while in use

From
"Mikheev, Vadim"
Date:
> > > I've wondered why AccessShareLock is a short term lock.
> >
> > MUST BE. AccessShare-/Exclusive-Locks are *data* locks.
> > If one want to protect schema then new schema share/excl locks
> > must be inroduced. There is no conflict between data and
> > schema locks - they are orthogonal.
> >
> 
> Oracle doesn't have Access...Lock locks.

Oracle has no vacuum. We need AccessExclusiveLock to
support vacuum - to stop any concurrent scans over a table.

But maybe I'm trying to make things more complex without
good reason - a long-term AccessShareLock would just
block vacuum till transaction end (in addition to blocking the
concurrent DDL statements we are discussing now) - probably
not a big inconvenience.
So ok, I have no strong objection to using
Access...Locks as schema locks.
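
A sketch of that interaction, assuming the long-term AccessShareLock under
discussion (the table name is hypothetical):

    -- Session A                 -- Session B
    BEGIN;
    SELECT * FROM foo;           -- A would now hold AccessShareLock until end of xact
                                 VACUUM foo;   -- needs AccessExclusiveLock,
                                               -- so it waits for A to finish
    COMMIT;                      -- vacuum can now proceed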

> In my understanding,locking levels you provided contains
> an implicit share/exclusive lock on the corrsponding
> pg_class tuple i.e. AccessExclusive Lock acquires an
> exclusive lock on the corresping pg_class tuple and
> other locks acquire a share lock, Is it right ?

No. Access...Locks are acquired over the target table
(the table's oid is used as the key for the lmgr hash table),
not over the corresponding pg_class tuple, in which case
we would use pg_class' oid + the table's oid as the key
(the possibility I've described below).

> > > If we have a mechanism to acquire a share lock on a tuple,we
^^^^^^^^^^^^^^^^^^^^^
> > > could use it for managing system info generally. However the
> > > only allowed lock on a tuple is exclusive.  
> > > Access(Share/Exclusive)
> >
...
> >  - we could add oid to union above and lock tables by acquiring lock
> > on pg_class with objId.oid = table' oid. Same way we could 
> > lock indices and whatever we want... if we want -:)
> 
> As you know well,this implemenation has a flaw that we have
> to be anxious about the shortage of shared memory.

Didn't you ask about a share lock on a tuple?
Share locks may be kept in memory only.
I've just pointed out that we have such a mechanism -:)
Another possible answer is - Shared Catalog Cache.

Vadim


Re: relation ### modified while in use

From
Hiroshi Inoue
Date:

"Mikheev, Vadim" wrote:

> > > > I've wondered why AccessShareLock is a short term lock.
> > >
> > > MUST BE. AccessShare-/Exclusive-Locks are *data* locks.
> > > If one want to protect schema then new schema share/excl locks
> > > must be inroduced. There is no conflict between data and
> > > schema locks - they are orthogonal.
> > >
> >
> > Oracle doesn't have Access...Lock locks.
>
> Oracle has no vacuum. We need in AccessExclusiveLock to
> support vacuum - to stop any concurrent scans over table.
>
> But maybe I try to make things more complex without
> good reason - long term AccessShareLock would just
> block vacuum till transaction end (in addition to blocked
> concurrent DDL statements we discuss now) - not big
> inconvenience probably.
> So ok, I have no strong objection against using
> Access...Locks as schema locks.
>
> > In my understanding,locking levels you provided contains
> > an implicit share/exclusive lock on the corrsponding
> > pg_class tuple i.e. AccessExclusive Lock acquires an
> > exclusive lock on the corresping pg_class tuple and
> > other locks acquire a share lock, Is it right ?
>
> No. Access...Locks are acquired over target table
> (table' oid is used as key for lmgr hash table),
> not over corresponding pg_class tuple, in what case
> we would use pg_clas' oid + table' oid as key
> (possibility I've described below).
>

Yes, I know that "lock table" doesn't touch the corresponding
pg_class tuple at all.  However, isn't it equivalent?
At least

>
> > > > If we have a mechanism to acquire a share lock on a tuple,we
>

need Access(Share/Exclusive)Lock ?


> ...
> > >  - we could add oid to union above and lock tables by acquiring lock
> > > on pg_class with objId.oid = table' oid. Same way we could
> > > lock indices and whatever we want... if we want -:)
> >
> > As you know well,this implemenation has a flaw that we have
> > to be anxious about the shortage of shared memory.
>
> Didn't you asked about share lock on a tuple?
> Share locks may be kept in memory only.
> I've just pointed that we have such mechanism -:)

Hmm, I remember you referred to a SHARE lock on tuples once.
I wasn't able to imagine how you would implement it at the time.
I've also thought about enhancing the current locking
mechanism, which had been used for page-level locking, but I
have always been discouraged by the shmem-shortage flaw.

Regards.
Hiroshi Inoue



Mailing list archives available?

From
Krzysztof Kowalczyk
Date:
Are the mailing list archives of the various postgresql mailing lists
available anywhere?

I know they were some time ago but I couldn't find any link on
www.postgresql.org now. I subscribed to a list mainly because I want to
monitor the progress but the amount of messages kills my inbox. It would
be really convenient for me if I could just browse the archives on web
once in a while.

Krzysztof Kowalczyk


Re: Mailing list archives available?

From
The Hermit Hacker
Date:
http://www.postgresql.org/mhonarc has them all listed .. not sure how to
get there from the Web site ... Vince?

On Mon, 23 Oct 2000, Krzysztof Kowalczyk wrote:

> Are mailing list archives of various postgresql mailing list available
> anywhere?
> 
> I know they were some time ago but I couldn't find any link on
> www.postgresql.org now. I subscribed to a list mainly because I want to
> monitor the progress but the amount of messages kills my inbox. It would
> be really convenient for me if I could just browse the archives on web
> once in a while.
> 
> Krzysztof Kowalczyk
> 
> 

Marc G. Fournier                   ICQ#7615664               IRC Nick: Scrappy
Systems Administrator @ hub.org 
primary: scrappy@hub.org           secondary: scrappy@{freebsd|postgresql}.org 



Re: Mailing list archives available?

From
Vince Vielhaber
Date:
On Tue, 24 Oct 2000, The Hermit Hacker wrote:

> 
> http://www.postgresql.org/mhonarc has them all listed .. not sure how to
> get there from the Web site ... Vince?

There are links from both the Developer's Corner and User's Lounge ->
General Info.

Vince.


> 
> On Mon, 23 Oct 2000, Krzysztof Kowalczyk wrote:
> 
> > Are mailing list archives of various postgresql mailing list available
> > anywhere?
> > 
> > I know they were some time ago but I couldn't find any link on
> > www.postgresql.org now. I subscribed to a list mainly because I want to
> > monitor the progress but the amount of messages kills my inbox. It would
> > be really convenient for me if I could just browse the archives on web
> > once in a while.
> > 
> > Krzysztof Kowalczyk
> > 
> > 
> 
> Marc G. Fournier                   ICQ#7615664               IRC Nick: Scrappy
> Systems Administrator @ hub.org 
> primary: scrappy@hub.org           secondary: scrappy@{freebsd|postgresql}.org 
> 
> 

-- 
==========================================================================
Vince Vielhaber -- KA8CSH    email: vev@michvhf.com    http://www.pop4.net
 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking
        Online Campground Directory    http://www.camping-usa.com
       Online Giftshop Superstore    http://www.cloudninegifts.com
==========================================================================





Re: Mailing list archives available?

From
The Hermit Hacker
Date:
On Tue, 24 Oct 2000, Vince Vielhaber wrote:

> On Tue, 24 Oct 2000, The Hermit Hacker wrote:
> 
> > 
> > http://www.postgresql.org/mhonarc has them all listed .. not sure how to
> > get there from the Web site ... Vince?
> 
> There are links from both the Developer's Corner and User's Lounge ->
> General Info.

Ya know, I've gone in and looked several times and my eye always gets drawn
down to the section titled ' Mailing Lists '? :)  Can you put links from
the 'pgsql-{admin,announce,general,etc}' in that section to the archives
as well, so it's a bit easier to find?  And maybe 'bold' the words "mailing
lists" in the General Info section, so that it stands out a bit more? :)

> > Vince.
> 
> 
> > 
> > On Mon, 23 Oct 2000, Krzysztof Kowalczyk wrote:
> > 
> > > Are mailing list archives of various postgresql mailing list available
> > > anywhere?
> > > 
> > > I know they were some time ago but I couldn't find any link on
> > > www.postgresql.org now. I subscribed to a list mainly because I want to
> > > monitor the progress but the amount of messages kills my inbox. It would
> > > be really convenient for me if I could just browse the archives on web
> > > once in a while.
> > > 
> > > Krzysztof Kowalczyk
> > > 
> > > 
> > 
> > Marc G. Fournier                   ICQ#7615664               IRC Nick: Scrappy
> > Systems Administrator @ hub.org 
> > primary: scrappy@hub.org           secondary: scrappy@{freebsd|postgresql}.org 
> > 
> > 
> 
> -- 
> ==========================================================================
> Vince Vielhaber -- KA8CSH    email: vev@michvhf.com    http://www.pop4.net
>  128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking
>         Online Campground Directory    http://www.camping-usa.com
>        Online Giftshop Superstore    http://www.cloudninegifts.com
> ==========================================================================
> 
> 
> 
> 

Marc G. Fournier                   ICQ#7615664               IRC Nick: Scrappy
Systems Administrator @ hub.org 
primary: scrappy@hub.org           secondary: scrappy@{freebsd|postgresql}.org 



Re: relation ### modified while in use

From
"Vadim Mikheev"
Date:
> > > In my understanding,locking levels you provided contains
> > > an implicit share/exclusive lock on the corrsponding
> > > pg_class tuple i.e. AccessExclusive Lock acquires an
> > > exclusive lock on the corresping pg_class tuple and
> > > other locks acquire a share lock, Is it right ?
> >
> > No. Access...Locks are acquired over target table
> > (table' oid is used as key for lmgr hash table),
> > not over corresponding pg_class tuple, in what case
> > we would use pg_clas' oid + table' oid as key
> > (possibility I've described below).
> >
> 
> Yes,I know that "lock table" doesn't touch the correpon
> ding pg_class tuple at all.  However isn't it equivalent ?

From what POV?
Lock manager will allow two simultaneous exclusive locks using these
different methods (keys) and so we can interpret (use) them differently.

Vadim




Re: relation ### modified while in use

From
Hiroshi Inoue
Date:

Vadim Mikheev wrote:

> > > > In my understanding,locking levels you provided contains
> > > > an implicit share/exclusive lock on the corrsponding
> > > > pg_class tuple i.e. AccessExclusive Lock acquires an
> > > > exclusive lock on the corresping pg_class tuple and
> > > > other locks acquire a share lock, Is it right ?
> > >
> > > No. Access...Locks are acquired over target table
> > > (table' oid is used as key for lmgr hash table),
> > > not over corresponding pg_class tuple, in what case
> > > we would use pg_clas' oid + table' oid as key
> > > (possibility I've described below).
> > >
> >
> > Yes,I know that "lock table" doesn't touch the correpon
> > ding pg_class tuple at all.  However isn't it equivalent ?
>
> From what POV?
> Lock manager will allow two simultaneous exclusive locks using these
> different methods (keys) and so we can interpret (use) them differently.
>

Seems my first explanation was really bad, sorry.

When I saw Access(Share/Exclusive)Lock for the first time,
I wondered what they were for.
For VACUUM? Yes. For DROP TABLE? Yes. For ALTER TABLE?
Maybe yes...........
Oracle doesn't have VACUUM and probably handles the other
cases using its dictionary lock mechanism.
Unfortunately we've had no dictionary lock mechanism.
Don't the Access(..)Lock locks compensate for the lack of a
dictionary lock mechanism?

Regards.
Hiroshi Inoue



Re: Mailing list archives available?

From
Vince Vielhaber
Date:
I sure hope this is a rerun 'cuze I did it yesterday.

Vince.


On Tue, 24 Oct 2000, The Hermit Hacker wrote:

> On Tue, 24 Oct 2000, Vince Vielhaber wrote:
> 
> > On Tue, 24 Oct 2000, The Hermit Hacker wrote:
> > 
> > > 
> > > http://www.postgresql.org/mhonarc has them all listed .. not sure how to
> > > get there from the Web site ... Vince?
> > 
> > There are links from both the Developer's Corner and User's Lounge ->
> > General Info.
> 
> Ya know, I've gone in and looked several times and my eye always gets draw
> down to the section titled ' Mailing Lists '? :)  Can you put lnks from
> the 'pgsql-{admin,announce,general,etc}' in that section to the archives
> as well, so its a bit easier to find?  And maybe 'bold' the words "mailing
> lists" in the General Info section, so that it stands out a bit more? :)
> 
> 
>  > > Vince.
> > 
> > 
> > > 
> > > On Mon, 23 Oct 2000, Krzysztof Kowalczyk wrote:
> > > 
> > > > Are mailing list archives of various postgresql mailing list available
> > > > anywhere?
> > > > 
> > > > I know they were some time ago but I couldn't find any link on
> > > > www.postgresql.org now. I subscribed to a list mainly because I want to
> > > > monitor the progress but the amount of messages kills my inbox. It would
> > > > be really convenient for me if I could just browse the archives on web
> > > > once in a while.
> > > > 
> > > > Krzysztof Kowalczyk
> > > > 
> > > > 
> > > 
> > > Marc G. Fournier                   ICQ#7615664               IRC Nick: Scrappy
> > > Systems Administrator @ hub.org 
> > > primary: scrappy@hub.org           secondary: scrappy@{freebsd|postgresql}.org 
> > > 
> > > 
> > 
> > -- 
> > ==========================================================================
> > Vince Vielhaber -- KA8CSH    email: vev@michvhf.com    http://www.pop4.net
> >  128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking
> >         Online Campground Directory    http://www.camping-usa.com
> >        Online Giftshop Superstore    http://www.cloudninegifts.com
> > ==========================================================================
> > 
> > 
> > 
> > 
> 
> Marc G. Fournier                   ICQ#7615664               IRC Nick: Scrappy
> Systems Administrator @ hub.org 
> primary: scrappy@hub.org           secondary: scrappy@{freebsd|postgresql}.org 
> 
> 

-- 
==========================================================================
Vince Vielhaber -- KA8CSH    email: vev@michvhf.com    http://www.pop4.net
 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking
        Online Campground Directory    http://www.camping-usa.com
       Online Giftshop Superstore    http://www.cloudninegifts.com
==========================================================================





Re: relation ### modified while in use

From
Hiroshi Inoue
Date:
Tom Lane wrote:

> Alex Pilosov <alex@pilosoft.com> writes:
> > I think this happens after I create/modify tables which reference this
> > table. This is spontaneous, and doesn't _always_ happen...
>
> Um.  I was hoping it was something more easily fixable :-(.  What's
> causing the relcache to decide that the rel has been modified is the
> addition or removal of foreign-key triggers on the rel.  Which seems
> legitimate.  (It's barely possible that we could get away with allowing
> triggers to be added or deleted mid-transaction, but that doesn't feel
> right to me.)
>
> There are two distinct known bugs that allow the error to be reported.
> These have been discussed before, but to recap:
>
> 1. relcache will complain if the notification of cache invalidation
> arrives after transaction start and before first use of the referenced
> rel (when there was already a relcache entry left over from a prior
> transaction).  In this situation we should allow the change to occur
> without complaint, ISTM.  But the relcache doesn't currently have any
> concept of first reference versus later references.
>

Do we have a conclusion about this thread?
If not, how about changing heap_open(r) so that they allocate
Relation descriptors after acquiring a lock on the table?
We would no longer use LockRelation().

Comments ?

Regards.
Hiroshi Inoue



Re: relation ### modified while in use

From
Tom Lane
Date:
Hiroshi Inoue <Inoue@tpf.co.jp> writes:
> Do we have a conclusion about this thread ?
> If no,how about changing heap_open(r) so that they allocate
> Relation descriptors after acquiring a lock on the table ?
> We would use LockRelation() no longer.

That won't do by itself, because that will open us up to failures when
a relcache invalidation arrives mid-transaction and we don't happen to
have the relation open at the time.  We could still have parse/plan
results that depend on the old relation definition.

Really we need to fix things so that a lock is held from first use to
end of transaction, independently of heap_open/heap_close.
        regards, tom lane


RE: relation ### modified while in use

From
"Hiroshi Inoue"
Date:
> -----Original Message-----
> From: Tom Lane [mailto:tgl@sss.pgh.pa.us]
> 
> Hiroshi Inoue <Inoue@tpf.co.jp> writes:
> > Do we have a conclusion about this thread ?
> > If no,how about changing heap_open(r) so that they allocate
> > Relation descriptors after acquiring a lock on the table ?
> > We would use LockRelation() no longer.
> 
> That won't do by itself,

Doesn't current heap_open() have a flaw that even the first 
use of a relation in a transaction may cause an error
"relation ### modified while in use" ?

> because that will open us up to failures when
> a relcache invalidation arrives mid-transaction and we don't happen to
> have the relation open at the time.  We could still have parse/plan
> results that depend on the old relation definition.
> 

PL/pgSQL already prepares a plan at the first execution
time and executes the plan repeatedly after that.
We will have a general PREPARE/EXECUTE feature in the
near future. IMHO another mechanism to detect plan
invalidation is needed.
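
For illustration, a hypothetical sketch of the cached-plan problem with
PL/pgSQL (the table and function names are made up, and the exact failure
mode is assumed):

    CREATE TABLE t (id int4);

    CREATE FUNCTION count_t() RETURNS int4 AS '
    DECLARE
        n int4;
    BEGIN
        SELECT count(*) INTO n FROM t;
        RETURN n;
    END;
    ' LANGUAGE 'plpgsql';

    SELECT count_t();   -- first call plans "SELECT count(*) FROM t" and caches it

    DROP TABLE t;
    CREATE TABLE t (id int4);

    SELECT count_t();   -- in the same session the cached plan still refers to
                        -- the old table, so this call may fail until the
                        -- function (or the session) is reloaded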

BTW, I sometimes see
    ERROR:  SearchSysCache: recursive use of cache 10(16)
under a small MAXNUMMESSAGES environment.
I'm not sure about the cause, but I am suspicious whether sufficiently
many system relations are nailed for "cache state reset".

Regards.
Hiroshi Inoue


Re: relation ### modified while in use

From
Tom Lane
Date:
"Hiroshi Inoue" <Inoue@tpf.co.jp> writes:
> Doesn't current heap_open() have a flaw that even the first 
> use of a relation in a transaction may cause an error
> "relation ### modified while in use" ?

Sure, that was the starting point of the discussion.

>> because that will open us up to failures when
>> a relcache invalidation arrives mid-transaction and we don't happen to
>> have the relation open at the time.  We could still have parse/plan
>> results that depend on the old relation definition.

> PL/pgSQL already prepares a plan at the first execution
> time and executes the plan repeatedly after that.
> We would have general PREPARE/EXECUTE feature in the
> near fututre. IMHO another mechanism to detect plan invali
> dation is needed.

Yes, we need the ability to invalidate cached plans.  But that doesn't
have anything to do with this issue, IMHO.  The problem at hand is that
a plan may be invalidated before it is even finished building.  Do you
expect the parse-rewrite-plan-execute pipeline to be prepared to back up
and restart if we notice a relation schema change report halfway down the
process?  How will we even *know* whether the schema change invalidates
what we've done so far, unless we have a first-use-in-transaction flag?

> BTW,I sometimes see 
>     ERROR:  SearchSysCache: recursive use of cache 10(16)
> under small MAXNUMMESSAGES environment.
> I'm not sure about the cause but suspicious if sufficiently
> many system relations are nailed for "cache state reset".

Does this occur after a prior error message?  I have been suspicious
because there isn't a mechanism to clear the syscache-busy flags during
xact abort.  If we elog() out of a syscache fill operation, seems like
the busy flag will be left set, leading to exactly the above error on
later xacts' attempts to use that syscache.  I think we need an
AtEOXact_Syscache routine that runs around and clears the busy flags.
(In the commit case, perhaps it should issue debug notices if it finds
any that are set.)
        regards, tom lane


RE: relation ### modified while in use

From
Alex Pilosov
Date:
On Fri, 3 Nov 2000, Hiroshi Inoue wrote:

> PL/pgSQL already prepares a plan at the first execution
> time and executes the plan repeatedly after that.
> We would have general PREPARE/EXECUTE feature in the
> near fututre. IMHO another mechanism to detect plan invali
> dation is needed.
Excellent point. While for now I don't consider it too inconvenient to reload
all my stored procedures after I change the database structure, in the future
I'd love for it to be handled by postgres itself.

Possibly, plpgsql (or postgresql itself) could have a 'dependency' list of
objects that the current object depends on?

This would additionally help dump/restore (the old one, I'm not talking
about the newfangled way to do it), since, for restore, you need to dump
the objects in the order of their dependency, and plpgsql procedure can
potentially depend on an object that has a higher OID...

-alex




RE: relation ### modified while in use

From
"Hiroshi Inoue"
Date:
> -----Original Message-----
> From: Tom Lane [mailto:tgl@sss.pgh.pa.us]
> 
> "Hiroshi Inoue" <Inoue@tpf.co.jp> writes:
> > Doesn't current heap_open() have a flaw that even the first 
> > use of a relation in a transaction may cause an error
> > "relation ### modified while in use" ?
> 
> Sure, that was the starting point of the discussion.
>

At least my proposal resolves this flaw.
> >> because that will open us up to failures when
> >> a relcache invalidation arrives mid-transaction and we don't happen to
> >> have the relation open at the time.  We could still have parse/plan
> >> results that depend on the old relation definition.
> 
> > PL/pgSQL already prepares a plan at the first execution
> > time and executes the plan repeatedly after that.
> > We would have general PREPARE/EXECUTE feature in the
> > near fututre. IMHO another mechanism to detect plan invali
> > dation is needed.
> 
> Yes, we need the ability to invalidate cached plans.  But that doesn't
> have anything to do with this issue, IMHO.  The problem at hand is that
> a plan may be invalidated before it is even finished building.  Do you
> expect the parse-rewrite-plan-execute pipeline to be prepared to back up
> and restart if we notice a relation schema change report halfway down the
> process? 

IMHO executor should re-parse-rewrite-plan if the target plan
is no longer valid.

> How will we even *know* whether the schema change invalidates
> what we've done so far, unless we have a first-use-in-transaction flag?
> 

Regards.
Hiroshi Inoue 


RE: relation ### modified while in use

From
"Hiroshi Inoue"
Date:
> -----Original Message-----
> From: Tom Lane [mailto:tgl@sss.pgh.pa.us]
>
> > BTW,I sometimes see
> >     ERROR:  SearchSysCache: recursive use of cache 10(16)
> > under small MAXNUMMESSAGES environment.
> > I'm not sure about the cause but suspicious if sufficiently
> > many system relations are nailed for "cache state reset".
>
> Does this occur after a prior error message?  I have been suspicious
> because there isn't a mechanism to clear the syscache-busy flags during
> xact abort.  If we elog() out of a syscache fill operation, seems like
> the busy flag will be left set, leading to exactly the above error on
> later xacts' attempts to use that syscache.  I think we need an
> AtEOXact_Syscache routine that runs around and clears the busy flags.
> (In the commit case, perhaps it should issue debug notices if it finds
> any that are set.)
>

I don't know if I've seen the cases you pointed out.
I have the following gdb back trace. Obviously it calls
SearchSysCache() for cacheId 10 twice. I was able
to get another gdb back trace but discarded it by
mistake.  Though I've added pause() just after detecting
recursive use of the cache, backends continue execution
in most cases, unfortunately.
I've not examined the backtrace yet. But don't we have
to nail more system relation descriptors than we do now?
A "cache state reset" could arrive at any heap_open().

Note that #0 corresponds to pause(), and line numbers may
be different from yours.

#0  0x40163db7 in __libc_pause ()
#1  0x8141ade in SearchSysCache (cache=0x825b89c, v1=17113, v2=0, v3=0, v4=0) at catcache.c:1026
#2  0x8145bd0 in SearchSysCacheTuple (cacheId=10, key1=17113, key2=0, key3=0, key4=0) at syscache.c:505
#3  0x807a100 in IndexSupportInitialize (indexStrategy=0x829d230, indexSupport=0x829ab2c, isUnique=0x829cf26 "", indexObjectId=17113, accessMethodObjectId=403, maxStrategyNumber=5, maxSupportNumber=1, maxAttributeNumber=2) at istrat.c:561
#4  0x81437cd in IndexedAccessMethodInitialize (relation=0x829cf10) at relcache.c:1180
#5  0x8143599 in RelationBuildDesc (buildinfo={infotype = 1, i = {info_id = 17113, info_name = 0x42d9 <Address 0x42d9 out of bounds>}}, oldrelation=0x829cf10) at relcache.c:1095
#6  0x8143f8d in RelationClearRelation (relation=0x829cf10, rebuildIt=1 '\001') at relcache.c:1687
#7  0x81440fa in RelationFlushRelation (relationPtr=0x8246f8c, skipLocalRelations=1) at relcache.c:1789
#8  0x80d02e3 in HashTableWalk (hashtable=0x823941c, function=0x81440d0 <RelationFlushRelation>, arg=1) at hasht.c:47
#9  0x81442b5 in RelationCacheInvalidate () at relcache.c:1922
#10 0x81421bd in ResetSystemCaches () at inval.c:559
#11 0x810302b in InvalidateSharedInvalid (invalFunction=0x8142150 <CacheIdInvalidate>, resetFunction=0x81421b0 <ResetSystemCaches>) at sinval.c:153
#12 0x8142332 in DiscardInvalid () at inval.c:722
#13 0x8104a9f in LockRelation (relation=0x8280134, lockmode=1) at lmgr.c:151
#14 0x807427d in heap_open (relationId=16580, lockmode=1) at heapam.c:638
#15 0x8141b54 in SearchSysCache (cache=0x825b89c, v1=17116, v2=0, v3=0, v4=0) at catcache.c:1049
#16 0x8145bd0 in SearchSysCacheTuple (cacheId=10, key1=17116, key2=0, key3=0, key4=0) at syscache.c:505
#17 0x80921d5 in CatalogIndexInsert (idescs=0xbfffeaac, nIndices=2, heapRelation=0x82443d0, heapTuple=0x827a4c8) at indexing.c:156
#18 0x808e6e7 in AddNewAttributeTuples (new_rel_oid=137741, tupdesc=0x8279904) at heap.c:659
#19 0x808e9c3 in heap_create_with_catalog (relname=0x82a02c4 "bprime", tupdesc=0x8279904, relkind=114 'r', istemp=0 '\000', allow_system_table_mods=0 '\000') at heap.c:911
#20 0x80c320d in InitPlan (operation=CMD_SELECT, parseTree=0x8288100, plan=0x8277d70, estate=0x8277dfc) at execMain.c:729
#21 0x80c2af1 in ExecutorStart (queryDesc=0x8278c14, estate=0x8277dfc) at execMain.c:131
#22 0x810c327 in ProcessQuery (parsetree=0x8288100, plan=0x8277d70, dest=Remote) at pquery.c:260
#23 0x810aeb5 in pg_exec_query_string (query_string=0x8287c58 "SELECT *\n   INTO TABLE Bprime\n   FROM tenk1\n   WHERE unique2 < 1000;", dest=Remote, parse_context=0x822efb4) at postgres.c:820
#24 0x810be42 in PostgresMain (argc=4, argv=0xbfffed74, real_argc=4, real_argv=0xbffff654, username=0x823c881 "reindex") at postgres.c:1808
#25 0x80f3913 in DoBackend (port=0x823c618) at postmaster.c:1963
#26 0x80f34e6 in BackendStartup (port=0x823c618) at postmaster.c:1732
#27 0x80f285a in ServerLoop () at postmaster.c:978
#28 0x80f22f4 in PostmasterMain (argc=4, argv=0xbffff654) at postmaster.c:669
#29 0x80d41bd in main (argc=4, argv=0xbffff654) at main.c:112

Regards.
Hirsohi Inoue



ResetSystemCaches(was Re: relation ### modified while in use)

From
Hiroshi Inoue
Date:
Hi

RelationCacheInvalidate() is called from ResetSystemCaches()
and calls RelationFlushRelation() for all relation descriptors
except some nailed system relations.
I'm wondering why nailed relations can be exceptions.
Conversely, why must RelationCacheInvalidate() call
RelationFlushRelation() for the other system relations?
Isn't it sufficient to call smgrclose() and replace the rd_rel
member of system relations with the latest one instead
of calling RelationFlushRelation()?
There's a -O option of postmaster (postgres) which allows
system table structure modification.  I'm doubtful whether
it has been used properly before.

Comments ?

Regards.
Hiroshi Inoue



Re: ResetSystemCaches(was Re: relation ### modified while in use)

From
Tom Lane
Date:
Hiroshi Inoue <Inoue@tpf.co.jp> writes:
> RelationCacheInvalidate() is called from ResetSystemCaches()
> and calles RelationFlushRelation() for all relation descriptors
> except some nailed system relations.
> I'm wondering why nailed relations could be exceptions.
> Conversely why must RelationCacheInvalidate() call
> RelationFlushRelation() for other system relations ?
> Isn't it sufficient to call smgrclose() and replace rd_rel
> member of system relations by the latest ones instead
> of calling RelationFlushRelation() ?

Possibly you could do fixrdesc() instead of just ignoring the report
entirely for nailed-in relations.  Not sure it's worth worrying about
though --- in practice, what is this going to make possible?  You can't
change the structure of a nailed-in system catalog, nor will adding
triggers or rules to it work very well, so I'm not quite seeing the
point.

BTW, don't forget that there are nailed-in indexes as well as tables.
Not sure if that matters to this code, but it might.
        regards, tom lane


Re: ResetSystemCaches(was Re: relation ### modified while in use)

From
Hiroshi Inoue
Date:
Tom Lane wrote:

> Hiroshi Inoue <Inoue@tpf.co.jp> writes:
> > RelationCacheInvalidate() is called from ResetSystemCaches()
> > and calles RelationFlushRelation() for all relation descriptors
> > except some nailed system relations.
> > I'm wondering why nailed relations could be exceptions.
> > Conversely why must RelationCacheInvalidate() call
> > RelationFlushRelation() for other system relations ?
> > Isn't it sufficient to call smgrclose() and replace rd_rel
> > member of system relations by the latest ones instead
> > of calling RelationFlushRelation() ?
>
> Possibly you could do fixrdesc() instead of just ignoring the report
> entirely for nailed-in relations.  Not sure it's worth worrying about
> though --- in practice, what is this going to make possible?  You can't
> change the structure of a nailed-in system catalog, nor will adding
> triggers or rules to it work very well, so I'm not quite seeing the
> point.
>

Hmm, my point is about system relations (indexes) that are not
nailed, not about already-nailed relations.
Couldn't we skip system relations (indexes) in
RelationCacheInvalidate()?

Regards.
Hiroshi Inoue



Recursive use of syscaches (was: relation ### modified while in use)

From
Tom Lane
Date:
"Hiroshi Inoue" <Inoue@tpf.co.jp> writes:
>> Does this occur after a prior error message?  I have been suspicious
>> because there isn't a mechanism to clear the syscache-busy flags during
>> xact abort.

> I don't know if I've seen the cases you pointed out.
> I have the following gdb back trace. Obviously it calls
> SearchSysCache() for cacheId 10 twice. I was able
> to get another gdb back trace but discarded it by
> mistake.  Though I've added pause() just after detecting
> recursive use of cache, backends continue the execution
> in most cases, unfortunately.
> I've not examined the backtrace yet. But don't we have
> to nail system relation descriptors more than we do now?

I don't think that's the solution; nailing more descriptors than we
absolutely must is not a pretty approach, and I don't think it solves
this problem anyway.  Your example demonstrates that recursive use
of a syscache is perfectly possible when a cache inval message arrives
just as we are about to search for a syscache entry.  Consider
the following path:

1. We are doing index_open and ensuing relcache entry load for some user
index.  In the middle of this, we need to fetch a not-currently-cached
pg_amop entry that is referenced by the index.

2. As we open pg_amop, we receive an SI message for some other user
index that is referenced in the current query and so currently has
positive refcnt.  We therefore attempt to rebuild that index's relcache
entry.

3. At this point we have recursive invocation of relcache load, which
may well lead to a recursive attempt to fetch the very same pg_amop
entry that the outer relcache load is trying to fetch.
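
A toy illustration of that path (everything here is a simplified
stand-in, not the real syscache/relcache code): the outer lookup marks
the cache busy, an invalidation arrives mid-lookup, and servicing it
needs another lookup in the same cache, so the busy flag fires even
though nothing is actually wrong.

#include <stdio.h>
#include <stdbool.h>

static bool cache_busy = false;       /* the existing re-entrancy check */
static bool pending_inval = true;     /* an SI message is waiting */

static int cache_lookup(int key);

/* servicing the invalidation rebuilds another relcache entry,
 * which needs its own lookup in the same cache (step 3) */
static void process_invalidation(void)
{
    pending_inval = false;
    (void) cache_lookup(99);
}

static int cache_lookup(int key)
{
    if (cache_busy)
        printf("false alarm: recursive use of cache (key %d)\n", key);

    cache_busy = true;
    if (pending_inval)                /* step 2: SI arrives mid-lookup */
        process_invalidation();
    cache_busy = false;

    return key;                       /* pretend the entry was found */
}

int main(void)
{
    (void) cache_lookup(10);          /* step 1: the outer lookup */
    return 0;
}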

Therefore, the current error test of checking for re-entrant lookups in
the same syscache is bogus.  It would still be bogus even if we refined
it to notice whether the exact same entry is being sought.

On top of that, we have the issue I was concerned about that there is
no mechanism for clearing the cache-busy flags during xact abort.

Rather than trying to fix this stuff, I propose that we simply remove
the test for recursive use of a syscache.  AFAICS it will never catch
any real bugs in production.  It might catch bugs in development (ie,
someone messes up the startup sequence in a way that causes a truly
circular cache lookup) but I think a stack overflow crash is a
perfectly OK result then.
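
For contrast, a truly circular lookup (again only a toy, not the real
code) just recurses without bound; built without optimization it
quickly overflows the stack, which is the failure mode accepted above:

#include <stdio.h>

/* toy: building the entry for a key requires looking up that same key */
static int circular_lookup(int key)
{
    printf("looking up %d\n", key);
    return 1 + circular_lookup(key);  /* unbounded recursion */
}

int main(void)
{
    return circular_lookup(7);        /* never returns normally */
}
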
        regards, tom lane


Re: Recursive use of syscaches (was: relation ### modified while in use)

From
Bruce Momjian
Date:
> Rather than trying to fix this stuff, I propose that we simply remove
> the test for recursive use of a syscache.  AFAICS it will never catch
> any real bugs in production.  It might catch bugs in development (ie,
> someone messes up the startup sequence in a way that causes a truly
> circular cache lookup) but I think a stack overflow crash is a
> perfectly OK result then.

Agreed.


--
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026


Re: Recursive use of syscaches (was: relation ### modified while in use)

From
Tom Lane
Date:
I wrote:
> On top of that, we have the issue I was concerned about that there is
> no mechanism for clearing the cache-busy flags during xact abort.

Hmm, brain cells must be fading fast.  On looking into the code I
find that there *is* such a mechanism --- installed by yours truly,
only three months ago.

Still, I think getting rid of the test altogether is a better answer.
        regards, tom lane


Re: Recursive use of syscaches (was: relation ### modified while in use)

From
Hiroshi Inoue
Date:
Tom Lane wrote:

> "Hiroshi Inoue" <Inoue@tpf.co.jp> writes:
> >> Does this occur after a prior error message?  I have been suspicious
> >> because there isn't a mechanism to clear the syscache-busy flags during
> >> xact abort.
>
> > I don't know if I've seen the cases you pointed out.
> > I have the following gdb back trace. Obviously it calls
> > SearchSysCache() for cacheId 10 twice. I was able
> > to get another gdb back trace but discarded it by
> > mistake.  Though I've added pause() just after detecting
> > recursive use of cache, backends continue the execution
> > in most cases, unfortunately.
> > I've not examined the backtrace yet. But don't we have
> > to nail system relation descriptors more than we do now?
>
> I don't think that's the solution; nailing more descriptors than we
> absolutely must is not a pretty approach,

I don't object to removing the 'recursive use of cache' check,
because it's not a real check for recursion.
My concern is the robustness of the relcache.
It seems pretty dangerous to discard system relation
descriptors used by the cache mechanism, especially in
case of error recovery.
It also seems pretty dangerous to reconstruct relation
descriptors, especially in case of error recovery.

Regards.
Hiroshi Inoue




Re: Recursive use of syscaches (was: relation ### modified while in use)

From
Tom Lane
Date:
Hiroshi Inoue <Inoue@tpf.co.jp> writes:
> My concern is the robustness of the relcache.
> It seems pretty dangerous to discard system relation
> descriptors used by the cache mechanism, especially in
> case of error recovery.
> It also seems pretty dangerous to reconstruct relation
> descriptors, especially in case of error recovery.

Why?  We are able to construct all the non-nailed relcache entries
from scratch during backend startup.  That seems a sufficient
proof that we can reconstruct any or all of them on demand.

Until the changes I made today, there was a flaw in that logic,
namely that the specific order that relcache entries are built in
during startup might be somehow magic, ie, building them in another
order might cause a recursive syscache call.  But now, that doesn't
matter.
        regards, tom lane


Re: Re: Recursive use of syscaches (was: relation ### modified while in use)

From
Hiroshi Inoue
Date:
Tom Lane wrote:

> Hiroshi Inoue <Inoue@tpf.co.jp> writes:
> > My concern is the robustness of the relcache.
> > It seems pretty dangerous to discard system relation
> > descriptors used by the cache mechanism, especially in
> > case of error recovery.
> > It also seems pretty dangerous to reconstruct relation
> > descriptors, especially in case of error recovery.
>
> Why?  We are able to construct all the non-nailed relcache entries
> from scratch during backend startup.  That seems a sufficient
> proof that we can reconstruct any or all of them on demand.
>


Hmm, why is it sufficient?
At backend startup there are no relcache entries except
some nailed rels.  When a 'reset system cache' message
arrives, there would be many relcache entries, and
some of them may be in use.
In addition, there could be some inconsistency in the db
in the middle of the transaction.  Is it safe to
reconstruct the relcache under that inconsistency?

Regards.
Hiroshi Inoue



Re: Re: Recursive use of syscaches (was: relation ### modified while in use)

From
Tom Lane
Date:
Hiroshi Inoue <Inoue@tpf.co.jp> writes:
> Tom Lane wrote:
>> Why?  We are able to construct all the non-nailed relcache entries
>> from scratch during backend startup.  That seems a sufficient
>> proof that we can reconstruct any or all of them on demand.

> Hmm, why is it sufficient?
> At backend startup there are no relcache entries except
> some nailed rels.  When a 'reset system cache' message
> arrives, there would be many relcache entries, and
> some of them may be in use.

Doesn't bother me.  The ones that are in use will get rebuilt.
That might trigger recursive rebuilding of system-table relcache
entries, and consequently recursive syscache lookups, but so what?
That already happens during backend startup: some relcache entries
are loaded as a byproduct of attempts to build other ones.
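
A minimal sketch of the "in-use entries get rebuilt" idea (the struct
and helpers are illustrative stand-ins, not PostgreSQL source): an
entry with a positive reference count is refreshed in place, so anyone
holding a pointer to it keeps a valid descriptor, while unreferenced
entries can simply be dropped.

#include <stdio.h>

typedef struct CacheEntry {
    int refcount;        /* > 0 means some open scan still uses it */
    int catalog_data;    /* stand-in for the cached catalog row */
} CacheEntry;

/* pretend to re-read the catalog row after an invalidation */
static int reread_catalog(void)
{
    return 2;            /* the "latest" version of the data */
}

/* invalidate one entry: rebuild in place if referenced, else drop it */
static void invalidate(CacheEntry *entry, int *dropped)
{
    if (entry->refcount > 0)
        entry->catalog_data = reread_catalog();
    else
        *dropped = 1;    /* safe to discard; nobody points at it */
}

int main(void)
{
    CacheEntry in_use = { .refcount = 1, .catalog_data = 1 };
    CacheEntry idle   = { .refcount = 0, .catalog_data = 1 };
    int dropped_in_use = 0, dropped_idle = 0;

    invalidate(&in_use, &dropped_in_use);
    invalidate(&idle, &dropped_idle);

    printf("in-use entry: data=%d dropped=%d\n",
           in_use.catalog_data, dropped_in_use);
    printf("idle entry:   data=%d dropped=%d\n",
           idle.catalog_data, dropped_idle);
    return 0;
}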

> In addition, there could be some inconsistency in the db
> in the middle of the transaction.  Is it safe to
> reconstruct the relcache under that inconsistency?

No worse than trying to start up while other transactions are
running.  We don't support on-the-fly modification of schemas
for system catalogs anyway, so I don't see the issue.
        regards, tom lane