Re: trouble creating log table with rules - Mailing list pgsql-hackers

From:      Wayne Piekarski
Subject:   Re: trouble creating log table with rules
Date:
Msg-id:    199906250652.QAA29938@helpdesk.senet.com.au
Responses: Re: [HACKERS] Re: trouble creating log table with rules
List:      pgsql-hackers
> Date: Thu, 24 Jun 1999 01:53:05 -0500
> From: Jim Rowan <jmr@computing.com>
> Subject: trouble creating log table with rules
>
> I've read the docs in the programmers manual, and can create rules like this:
>
> CREATE RULE "m_log_change" AS ON UPDATE TO "machine"
>     do (
>         INSERT INTO machine_log (who, date, machnum, col, newval)
>             SELECT getpgusername(), 'now'::text, old.machnum,
>                    'host', new.host
>             WHERE (new.host != old.host) or
>                   (old.host IS NOT NULL and new.host IS NULL) or
>                   (old.host IS NULL and new.host IS NOT NULL);
>
>         INSERT INTO machine_log (who, date, machnum, col, newval)
>             SELECT getpgusername(), 'now'::text, old.machnum,
>                    'serial_num_cpu', new.serial_num_cpu
>             WHERE (new.serial_num_cpu != old.serial_num_cpu) or
>                   (old.serial_num_cpu IS NOT NULL and new.serial_num_cpu IS NULL) or
>                   (old.serial_num_cpu IS NULL and new.serial_num_cpu IS NOT NULL);
>     );
>
> My big problem is that if I replicate this enough times to cover the fields
> I want, I get this error:
>
> pqReadData() -- backend closed the channel unexpectedly.
>     This probably means the backend terminated abnormally
>     before or while processing the request.
> We have lost the connection to the backend, so further processing is
> impossible. Terminating.
>
> Is there a way I can avoid this error? Is there a better way to code these
> rules?

Hi,

I've seen similar kinds of behaviour in 6.4.2 with triggers/rules/procedures and so on, where the backend would die randomly. I'm not sure if it is still in 6.5; I haven't used it yet.

Have a look at the error log from the postmaster and see if there is anything interesting in there, and I might be able to help you some more. Sometimes you might get a BTP_CHAIN fault, or another one (I can't remember - I haven't seen it in a while).

The solution I found was, just before adding your procedures or whatever, to do a VACUUM ANALYZE pg_proc, which will vacuum one of the internal system tables, and then it would work.
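The workaround above can be sketched as follows, run in a psql session on the affected database before reloading the rules (the rule and table names are taken from the quoted example; the exact DROP/CREATE sequence is an assumption about how the rules get reloaded):

```sql
-- Vacuum the system catalog that stores procedures/rules first;
-- per the report above, this avoids the backend dying during reload.
VACUUM ANALYZE pg_proc;

-- Then recreate the rules as usual, e.g.:
-- DROP RULE "m_log_change";
-- CREATE RULE "m_log_change" AS ON UPDATE TO "machine" do ( ... );
```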
I found that without the vacuum, postgres would die every third or fourth time I tried to reload my triggers, etc.

Also, I haven't reported this yet (because I can't reproduce it), but every so often you'll do the vacuum and it will return "Blowaway_relation_buffers returned -2" and the vacuum dies. This is really bad: normally you would dump the data and reload, but you can't do this for pg_proc, so the dbms is screwed and you have to reload the whole thing. It turns out that one of the indices or the table itself has this BTP_CHAIN problem. I did some experiments trying to trick postgres into allowing me to dump and reload it (ie, create a new table called pg_proc_2 with the same data and indices, and move it into place), but it won't let you do it, to protect itself. The worst part with this kind of death is that my database is about 1.1 Gb on disk, and so reloading is NOT something I want to have to do :)

Anyone got any advice for this or know of a problem? As mentioned in another email posted to the hackers list, I am getting lots of BTP_CHAIN problems and having to reload tables, which is not something I want to do during the day when staff are trying to use the database and I have to shut it down. I've heard there is a patch for this but I haven't got anything back on whether it's ok to use it or not.

bye,
Wayne

------------------------------------------------------------------------------
Wayne Piekarski                            Tel:   (08) 8221 5221
Research & Development Manager             Fax:   (08) 8221 5220
SE Network Access Pty Ltd                  Mob:   0407 395 889
222 Grote Street                           Email: wayne@senet.com.au
Adelaide SA 5000                           WWW:   http://www.senet.com.au