Thread: too many trigger records found for relation "item" - what's that about??
From: "Lenorovitz, Joel"
Date:
Greetings,

I've had a strange error crop up recently on a table 'Item', which contains about 60 rows and lives in a development database I'm currently working on. Since the DB was last freshly created from a dump file several days ago, I've added/dropped/altered a few tables (not necessarily 'Item', though), modified some data, and run many queries against this and other tables. Now, all of a sudden, if I try to run a query against 'Item' I get the error shown below about too many trigger records. Any idea what this means, how this came to be, and, most of all, how to correct it?

Below is the buffer from a recent session with a \d on Item, and the only other thing I can offer is that several tables have Item.id as a foreign key. Please advise, and thanks in advance for the help.

- Joel

postgres=# select * from item;
ERROR:  too many trigger records found for relation "item"
postgres=# \d item
                              Table "public_test.item"
               Column                |          Type          |        Modifiers
-------------------------------------+------------------------+--------------------------------------------------------
 id                                  | bigint                 | not null default nextval('item_sequence_id'::regclass)
 name                                | character varying(100) | not null
 manufacturer_organization_id        | bigint                 |
 model                               | character varying(100) |
 version                             | character varying(100) |
 size                                | character varying(100) |
 quantity_measurement_parameter_enum | bigint                 | not null
 color_enum                          | bigint                 |
 batch_unit_enum                     | bigint                 |
 is_consumable                       | boolean                | not null
 is_persistent                       | boolean                | not null
Indexes:
    "item_pkey_id" PRIMARY KEY, btree (id)
Foreign-key constraints:
    "item_fkey_batch_unit_enum" FOREIGN KEY (batch_unit_enum) REFERENCES enum_value(id) ON UPDATE CASCADE ON DELETE RESTRICT
    "item_fkey_color_enum" FOREIGN KEY (color_enum) REFERENCES enum_value(id) ON UPDATE CASCADE ON DELETE RESTRICT
    "item_fkey_manufacturer_organization_id" FOREIGN KEY (manufacturer_organization_id) REFERENCES organization(id) ON UPDATE CASCADE ON DELETE CASCADE
    "item_fkey_quantity_measurement_parameter_enum" FOREIGN KEY (quantity_measurement_parameter_enum) REFERENCES enum_value(id) ON UPDATE CASCADE ON DELETE RESTRICT

postgres=# select * from actual_inventory a join item b on a.item_id = b.id;
ERROR:  too many trigger records found for relation "item"
postgres=#
"Lenorovitz, Joel" <Joel.Lenorovitz@usap.gov> writes:
> postgres=# select * from item;
> ERROR: too many trigger records found for relation "item"

You could reset the pg_class.reltriggers entry for "item" to be however many pg_trigger entries there actually are for the table.

I'm curious how you got into this state though... what PG version is this, and have you been doing any hand manipulation of triggers?

			regards, tom lane
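A minimal sketch of the repair Tom describes, assuming a pre-8.3 server (pg_class.reltriggers was replaced by the boolean relhastriggers in later releases) and superuser access; "item" is the table from this thread:

```sql
-- How many trigger records actually exist for the table?
SELECT count(*) FROM pg_trigger
WHERE tgrelid = 'item'::regclass;

-- Reset the cached count in pg_class to match the real number
-- of pg_trigger entries (direct catalog surgery; take a backup first).
UPDATE pg_catalog.pg_class
SET reltriggers = (SELECT count(*) FROM pg_trigger
                   WHERE tgrelid = 'item'::regclass)
WHERE oid = 'item'::regclass;
```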
On Mon, 2007-01-22 at 20:56, Lenorovitz, Joel wrote:
[snip]
> ERROR: too many trigger records found for relation "item"

I've got this error on a development database where we were continuously creating new child tables referencing the same parent table. The responsible code is in src/backend/commands/trigger.c, and I think it only happens if you manage to create/drop a new trigger (which could also be an FK trigger created by a new foreign key referencing that table, as in our case) exactly in the window between when that code reads the count of the triggers and when it processes them.

In any case it should be a transient error, i.e. it should only happen while you heavily create/drop triggers... our integration test case was actually creating new child tables heavily, so that's how it happened for us. In a production scenario I won't be creating new triggers all the time in parallel with other heavy activities, so it doesn't bother me.

Cheers,
Csaba.
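For context: a foreign key implicitly creates referential-integrity triggers on *both* tables involved, which is why creating child tables touches the parent's trigger records at all. A sketch (table names are illustrative; the internal RI trigger names vary by PostgreSQL version):

```sql
CREATE TABLE test_parent (a bigint PRIMARY KEY);
CREATE TABLE test_child_0 (a bigint PRIMARY KEY REFERENCES test_parent(a));

-- The FK adds internal RI triggers to the parent as well as the child,
-- so pg_trigger gains rows for test_parent too:
SELECT tgrelid::regclass AS rel, tgname
FROM pg_trigger
WHERE tgrelid IN ('test_parent'::regclass, 'test_child_0'::regclass);
```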
Nevertheless, the database should be able to handle any combination of syntactically correct SQL statements without throwing errors, while maintaining the database in a consistent state. If what you're saying is right, the error thrown here is not a user configuration error but an RDBMS implementation error.

A development database is still obviously an important role for PostgreSQL to function in (as far as PostgreSQL's dev team is concerned, a development database *is* a "production" use, since one of *their* end-users experiences the problem), and it needs to be able to handle cases such as this with no problems. And no matter how unlikely it is to happen in a production environment, *someone* will try to modify their schema dynamically like this.

I'm wondering if there is a race condition in CREATE or DROP with respect to triggers and foreign keys. If that's the case, it's going to affect someone eventually.

--
Brandon Aiken
CS/IT Systems Engineer
On Tue, 2007-01-23 at 14:49, Brandon Aiken wrote:
> Nevertheless, the database should be able to handle any combination of
> syntactically correct SQL statements without throwing errors and
> maintaining the database in a consistent state. If what you're saying
> is right, the error thrown here is not a user configuration error, but
> an RDBMS implementation error.
>
> A development database is still obviously an important role for
> PostgreSQL to function in (as far as PostgreSQL's dev team is concerned,
> a development database *is* a "production" use since one of *their*
> end-users experiences the problem) and it needs to be able to handle
> cases such as this with no problems. And no matter how unlikely it is
> to be in a production environment, *someone* will try to modify their
> schema dynamically like this.
>
> I'm wondering if there is a race condition in CREATE or DROP with
> respect to triggers and foreign keys. If that's the case, it's going to
> affect someone eventually.

When I said it doesn't bother me, I meant literally me, not implying the Postgres community in any way :-)

And I did report it at the time (I can't find the mail, but I think Tom had a look at it and probably decided it was not top priority).

Cheers,
Csaba.
Csaba Nagy <nagy@ecircle-ag.com> writes:
> The responsible code is in src/backend/commands/trigger.c, and I
> think it only happens if you manage to create/drop a new trigger (which
> also could be a FK trigger created by a new foreign key referencing that
> table, as in our case) exactly between that code gets the count of the
> triggers and processes them.

All such code takes exclusive lock on the table, so the above explanation is impossible.

			regards, tom lane
On Tue, 2007-01-23 at 15:43, Tom Lane wrote:
> Csaba Nagy <nagy@ecircle-ag.com> writes:
> > The responsible code is in src/backend/commands/trigger.c, and I
> > think it only happens if you manage to create/drop a new trigger (which
> > also could be a FK trigger created by a new foreign key referencing that
> > table, as in our case) exactly between that code gets the count of the
> > triggers and processes them.
>
> All such code takes exclusive lock on the table, so the above
> explanation is impossible.

Well, in that case it must be some other bug, as it is readily reproducible here. My nightly integration run hits this error each night. The reason I don't panic (although I thought I reported it, but I can't find the mail) is that rerunning the failed things succeeds, and the failed operation is a table creation, which is never critical for us in the sense that it can be retried as many times as necessary.

The test database is an 8.1.3 installation. The queries failing in the last run were:

- an insert into the parent table;
- a create table which was creating a child table of the same parent table the other query was inserting into.

I'm not sure if the 2 failures were connected or not. I also can't confirm whether it also happens on 8.2, as my integration still doesn't run through on 8.2...

Cheers,
Csaba.
Csaba Nagy <nagy@ecircle-ag.com> writes:
> On Tue, 2007-01-23 at 15:43, Tom Lane wrote:
>> All such code takes exclusive lock on the table, so the above
>> explanation is impossible.

> Well, in that case it must be some other bug as it is readily
> reproducible here. My nightly integration has this error each night.

Well, if you can show a reproducible test case, I'd like to look at it.

			regards, tom lane
[Update: the post didn't make it to the list, probably due to the attachment, so I resend it inlined... and I was not able to trigger the same behavior on 8.2, so it might have been already fixed.]

[snip]
> Well, if you can show a reproducible test case, I'd like to look at it.

OK, I have a test case which has a ~90% success rate in triggering the issue on my box. It is written in Java; I hope you can run it, but in any case you'll get the idea of how to reproduce the issue. The code is attached, and I list here some typical output from runs against an 8.1.3 postgres installation.

The first exception is strange on its own; it was produced after a few runs, and might be caused by another issue with creating/dropping tables (I think I have seen this too some time ago). I'll go and run it against 8.2 and see if the issue is still there. My problems on the integration box turned out to be postgres logging set to too high a level and running out of disk space due to the log volume...

Cheers,
Csaba.

Error executing sql: CREATE TABLE test_child_0 (a bigint primary key references test_parent(a))
org.postgresql.util.PSQLException: ERROR: duplicate key violates unique constraint "pg_type_typname_nsp_index"
    at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:1548)
    at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1316)
    at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:191)
    at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:452)
    at org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:337)
    at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:329)
    at com.domeus.trials.TestChildTableCreationIndependent.executeQuery(TestChildTableCreationIndependent.java:155)
    at com.domeus.trials.TestChildTableCreationIndependent.access$200(TestChildTableCreationIndependent.java:12)
    at com.domeus.trials.TestChildTableCreationIndependent$WorkerThread.doWork1(TestChildTableCreationIndependent.java:91)
    at com.domeus.trials.TestChildTableCreationIndependent$WorkerThread.run(TestChildTableCreationIndependent.java:76)
Error executing sql: DROP TABLE test_child_0
com.domeus.trials.TestChildTableCreationIndependent$MissingTableException
    at com.domeus.trials.TestChildTableCreationIndependent.executeQuery(TestChildTableCreationIndependent.java:158)
    at com.domeus.trials.TestChildTableCreationIndependent.access$200(TestChildTableCreationIndependent.java:12)
    at com.domeus.trials.TestChildTableCreationIndependent$WorkerThread.doWork1(TestChildTableCreationIndependent.java:98)
    at com.domeus.trials.TestChildTableCreationIndependent$WorkerThread.run(TestChildTableCreationIndependent.java:76)
Error executing sql: DROP TABLE test_child_251
org.postgresql.util.PSQLException: ERROR: 2 trigger record(s) not found for relation "test_parent"
    at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:1548)
    at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1316)
    at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:191)
    at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:452)
    at org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:337)
    at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:329)
    at com.domeus.trials.TestChildTableCreationIndependent.executeQuery(TestChildTableCreationIndependent.java:155)
    at com.domeus.trials.TestChildTableCreationIndependent.access$200(TestChildTableCreationIndependent.java:12)
    at com.domeus.trials.TestChildTableCreationIndependent$WorkerThread.doWork1(TestChildTableCreationIndependent.java:98)
    at com.domeus.trials.TestChildTableCreationIndependent$WorkerThread.run(TestChildTableCreationIndependent.java:76)
Error executing sql: DROP TABLE test_child_258
org.postgresql.util.PSQLException: ERROR: too many trigger records found for relation "test_parent"
    at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:1548)
    at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1316)
    at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:191)
    at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:452)
    at org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:337)
    at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:329)
    at com.domeus.trials.TestChildTableCreationIndependent.executeQuery(TestChildTableCreationIndependent.java:155)
    at com.domeus.trials.TestChildTableCreationIndependent.access$200(TestChildTableCreationIndependent.java:12)
    at com.domeus.trials.TestChildTableCreationIndependent$WorkerThread.doWork1(TestChildTableCreationIndependent.java:98)
    at com.domeus.trials.TestChildTableCreationIndependent$WorkerThread.run(TestChildTableCreationIndependent.java:76)
Error executing sql: DROP TABLE test_child_262
org.postgresql.util.PSQLException: ERROR: too many trigger records found for relation "test_parent"
    at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:1548)
    at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1316)
    at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:191)
    at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:452)
    at org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:337)
    at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:329)
    at com.domeus.trials.TestChildTableCreationIndependent.executeQuery(TestChildTableCreationIndependent.java:155)
    at com.domeus.trials.TestChildTableCreationIndependent.access$200(TestChildTableCreationIndependent.java:12)
    at com.domeus.trials.TestChildTableCreationIndependent$WorkerThread.doWork1(TestChildTableCreationIndependent.java:98)
    at com.domeus.trials.TestChildTableCreationIndependent$WorkerThread.run(TestChildTableCreationIndependent.java:76)

From another run:

Error executing sql: insert into test_parent values (96)
org.postgresql.util.PSQLException: ERROR: 2 trigger record(s) not found for relation "test_parent"
    at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:1548)
    at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1316)
    at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:191)
    at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:452)
    at org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:337)
    at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:329)
    at com.domeus.trials.TestChildTableCreationIndependent.executeQuery(TestChildTableCreationIndependent.java:155)
    at com.domeus.trials.TestChildTableCreationIndependent.access$200(TestChildTableCreationIndependent.java:12)
    at com.domeus.trials.TestChildTableCreationIndependent$WorkerThread.doWork2(TestChildTableCreationIndependent.java:108)
    at com.domeus.trials.TestChildTableCreationIndependent$WorkerThread.run(TestChildTableCreationIndependent.java:78)
Error executing sql: insert into test_parent values (215)
org.postgresql.util.PSQLException: ERROR: too many trigger records found for relation "test_parent"
    at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:1548)
    at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1316)
    at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:191)
    at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:452)
    at org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:337)
    at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:329)
    at com.domeus.trials.TestChildTableCreationIndependent.executeQuery(TestChildTableCreationIndependent.java:155)
    at com.domeus.trials.TestChildTableCreationIndependent.access$200(TestChildTableCreationIndependent.java:12)
    at com.domeus.trials.TestChildTableCreationIndependent$WorkerThread.doWork2(TestChildTableCreationIndependent.java:108)
    at com.domeus.trials.TestChildTableCreationIndependent$WorkerThread.run(TestChildTableCreationIndependent.java:78)

(I tried first attaching the file, but it didn't make it to the list, so I inline it)

************************************************

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

public class TestChildTableCreationIndependent {

    private String url;
    private String user;
    private String password;

    TestChildTableCreationIndependent() throws Exception {
        Class.forName("org.postgresql.Driver");
        // replace these with your values
        url = "jdbc:postgresql://<host>:5432/<dbname>";
        user = "<user>";
        password = "<pass>";
    }

    public static void main(String[] args) throws Exception {
        new TestChildTableCreationIndependent().testChildTableCreation();
    }

    public void testChildTableCreation() throws Exception {
        final int THREAD_COUNT = 20;
        final int EXECUTION_COUNT = 30;
        final Connection connection = getConnection();
        dropTables(connection, THREAD_COUNT * EXECUTION_COUNT);
        Thread[] threads = new Thread[THREAD_COUNT];
        executeQuery(connection, "create table test_parent(a bigint primary key)");
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new TestChildTableCreationIndependent.WorkerThread(EXECUTION_COUNT);
        }
        for (int i = 0; i < threads.length; i++) {
            threads[i].start();
        }
        for (int i = 0; i < threads.length; i++) {
            threads[i].join();
        }
        dropTables(connection, THREAD_COUNT * EXECUTION_COUNT);
    }

    private Connection getConnection() throws SQLException {
        return DriverManager.getConnection(url, user, password);
    }

    private class WorkerThread extends Thread {
        private final Connection connection;
        private final int executionCount;

        public WorkerThread(int executionCount) throws SQLException {
            this.executionCount = executionCount;
            connection = getConnection();
        }

        @Override
        public void run() {
            for (int i = 0; i < executionCount; i++) {
                try {
                    if (Math.random() < 0.5) {
                        doWork1();
                    } else {
                        doWork2();
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        }

        private void doWork1() {
            final String tableName = getChildTableName();
            String sql = "CREATE TABLE " + tableName
                    + " (a bigint primary key references test_parent(a))";
            try {
                executeQuery(connection, sql);
            } catch (Throwable e) {
                System.err.println("Error executing sql: " + sql);
                e.printStackTrace();
            }
            sql = "DROP TABLE " + tableName;
            try {
                executeQuery(connection, sql);
            } catch (Throwable e) {
                System.err.println("Error executing sql: " + sql);
                e.printStackTrace();
            }
        }

        private void doWork2() {
            final String sql = "insert into test_parent values (" + idCounter++ + ")";
            try {
                executeQuery(connection, sql);
            } catch (Throwable e) {
                System.err.println("Error executing sql: " + sql);
                e.printStackTrace();
            }
        }
    }

    private void dropTables(Connection connection, int childCount) throws SQLException {
        childCounter = 0;
        for (int i = 0; i < childCount; i++) {
            final String tableName = TestChildTableCreationIndependent.getChildTableName();
            try {
                executeQuery(connection, "DROP TABLE " + tableName);
            } catch (MissingTableException e) {
                // ignore, it was already dropped
            } catch (SQLException e) {
                e.printStackTrace();
                throw e;
            }
        }
        try {
            executeQuery(connection, "DROP TABLE test_parent");
        } catch (MissingTableException e) {
            // ignore, it was already dropped
        } catch (SQLException e) {
            e.printStackTrace();
            throw e;
        }
        childCounter = 0;
    }

    private static volatile int childCounter = 0;
    private static volatile int idCounter = 0;

    private static String getChildTableName() {
        // this is good enough on my box to not need synchronisation, YMMV
        // - synchronize this method if it won't work
        return "test_child_" + TestChildTableCreationIndependent.childCounter++;
    }

    private void executeQuery(Connection connection, String sql)
            throws SQLException, MissingTableException {
        Statement statement = connection.createStatement();
        try {
            statement.execute(sql);
        } catch (SQLException e) {
            String message = e.getMessage();
            if (message.indexOf("does not exist") != -1
                    && message.indexOf("table") != -1) {
                throw new MissingTableException();
            }
            throw e;
        }
    }

    private static class MissingTableException extends Exception {}
}
Csaba Nagy <nagy@ecircle-ag.com> writes:
>> Well, if you can show a reproducible test case, I'd like to look at it.

> OK, I have a test case which has ~ 90% success rate in triggering the
> issue on my box. It is written in Java, hope you can run it, in any case
> you'll get the idea how to reproduce the issue.

Hm, well the trigger-related complaints are pretty obviously from a known race condition: pre-8.2 we'd read the pg_class row for a table before obtaining any lock on the table. So if someone else was concurrently adding or deleting triggers, then the value of pg_class.reltriggers could be wrong by the time we'd managed to acquire any lock. I believe this is fixed as of 8.2 --- can you duplicate it there? (No, backpatching the fix is not practical.)

> The code is attached, and I list here some typical output run against an
> 8.1.3 postgres installation. The first exception is strange on it's own,
> it was produced after a few runs, might be caused by another issue with
> creating/dropping tables (I think I have seen this too some time ago).

How sure are you about that uninterlocked getChildTableName() thing? It's possible to get a failure complaining about duplicate type name instead of duplicate relation name during CREATE TABLE, if the timing is just right.

			regards, tom lane
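The read-before-lock race Tom describes can be illustrated with a toy analogy in Python. This is not the actual backend code, just a sketch of the pattern: the cached trigger count is read before the table lock is taken, so a concurrent trigger creation makes the later scan find more trigger records than expected.

```python
import threading

# Stand-ins for the catalogs: a cached per-table trigger count (like
# pg_class.reltriggers) and the trigger records themselves (like pg_trigger).
catalog = {"reltriggers": 0}
triggers = []
table_lock = threading.Lock()

def add_fk_trigger():
    """Another backend adds an FK trigger, keeping both catalogs in sync."""
    with table_lock:
        triggers.append("RI_ConstraintTrigger")
        catalog["reltriggers"] += 1

def scan_triggers(expected):
    """Under the lock, scan the trigger records and compare them to a
    count that may have been read before the lock was acquired."""
    with table_lock:
        found = len(triggers)
        if found > expected:
            raise RuntimeError("too many trigger records found for relation")
        return found

# Deterministic replay of the bad interleaving:
expected = catalog["reltriggers"]  # pre-8.2: count read BEFORE locking
add_fk_trigger()                   # concurrent backend creates an FK trigger
try:
    scan_triggers(expected)
except RuntimeError as e:
    print(e)  # too many trigger records found for relation
```

Reading the count under the same lock as the scan (as 8.2 effectively does by locking the table first) removes the window entirely.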
> [snip] I believe this is fixed as of 8.2 --- can you duplicate it
> there? (No, backpatching the fix is not practical.)

No, I was not able to duplicate it on 8.2, so I think it's fixed (given that on 8.1 the errors are triggered in almost 100% of the runs).

> How sure are you about that uninterlocked getChildTableName() thing?
> It's possible to get a failure complaining about duplicate type name
> instead of duplicate relation name during CREATE TABLE, if the timing
> is just right.

Not sure at all (I left it deliberately unsynchronized to go as fast as it can, even if it sometimes errors on duplicate tables), so that might be an explanation. The error is a bit misleading though, or rather inconsistent: if I had to detect the duplicate-table error condition in my code so that I could take corrective steps, I would need to look for 2 error types instead of 1 --- if only I knew that I had to.

And BTW, I have seen something similar while creating temporary tables, which should not conflict even with the same table name, I think...

Cheers,
Csaba.
I had the same problem yesterday. I got this error when I tried to disable the trigger via pg_catalog:

UPDATE pg_catalog.pg_class SET reltriggers = 0
WHERE oid = 'foobar'::pg_catalog.regclass;

But if I disable the trigger using this syntax:

ALTER TABLE tablename DISABLE TRIGGER triggername;

everything is OK.

----- Original Message -----
From: "Csaba Nagy" <nagy@ecircle-ag.com>
To: "Postgres general mailing list" <pgsql-general@postgresql.org>
Sent: Friday, January 26, 2007 10:06 AM
Subject: Re: [GENERAL] too many trigger records found for relation "item"
Csaba Nagy <nagy@ecircle-ag.com> writes:
> And BTW, I have seen something similar while creating temporary tables
> which should not conflict even with the same table name I think...

I've heard reports of that, but never been able to duplicate it ...

			regards, tom lane
On Fri, Jan 26, 2007 at 02:33:05PM +0100, Furesz Peter wrote:
> I have the same problem yesterday. I got this error when I try to disable
> the trigger in pg_catalog:
>
> UPDATE pg_catalog.pg_class SET reltriggers = 0 WHERE oid =
> 'foobar'::pg_catalog.regclass;

Well duh. The error is precisely complaining about the fact that the reltriggers field doesn't match the number of actual triggers. What this tells you is that this is the wrong way to disable triggers.

> But if I disabling the trigger using this syntax:
>
> "ALTER TABLE tablename DISABLE TRIGGER triggername"
>
> everything ok.

And this is the right way...

Have a nice day,
--
Martijn van Oosterhout <kleptog@svana.org> http://svana.org/kleptog/
> From each according to his ability. To each according to his ability to litigate.
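A sketch of the supported approach Martijn recommends (the table and trigger names below are placeholders; ALTER TABLE ... DISABLE/ENABLE TRIGGER is available in 8.1 and later, and keeps the catalogs consistent because the server does its own bookkeeping):

```sql
-- Disable and later re-enable one trigger by name
-- (item_audit_trigger is a hypothetical trigger name):
ALTER TABLE item DISABLE TRIGGER item_audit_trigger;
ALTER TABLE item ENABLE TRIGGER item_audit_trigger;

-- Or all triggers on the table at once. Note that ALL includes the
-- internal FK triggers, so referential integrity is not enforced
-- while they are off:
ALTER TABLE item DISABLE TRIGGER ALL;
ALTER TABLE item ENABLE TRIGGER ALL;
```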