Thread: trouble creating log table with rules

trouble creating log table with rules

From
Jim Rowan
Date:
I'm trying to create a table (machine_log) that logs changes to another table
(machine).  My goal is to track changes to most of the fields, and to also
track inserts and deletes in the machine table.  I want to know who/when/what
for each change.  I've had some success but not total...

The machine table has a gazillion fields in it (it's a legacy thing; we're
stuck with it for the moment) -- it's certainly not very elegant.  Here are
the tables:

CREATE SEQUENCE "machine_machnum_seq" start 1 increment 1 maxvalue 2147483647 minvalue 1 cache 1;

CREATE TABLE "machine" (
    "host" text NOT NULL, "serial_num_cpu" text, "asset_tag_cpu" text, "risc_tag" text, "dept" text,
    "date_purch" date DEFAULT '1/1/0000', "mach_state" text, "owner" text, "cube" text, "mach_user" text,
    "update_id" text, "date_added" date DEFAULT text 'now', "date_touched" date DEFAULT text 'now',
    "date_updated" date DEFAULT text 'now', "notes" text, "po_num" text, "project" text, "asset_type" text,
    "net_intf" text, "ip_addr" text, "net_num" int2, "e_net" text, "cpu" text, "os_rev" text, "memory" int2,
    "swap" float4, "b_systype" text, "hostid" text NOT NULL, "sys_type" text, "disk_total" int4,
    "n_cpus" int2, "kernel_id" text, "disk_used" int4, "frame_buff" text, "bkup_status" text,
    "disk_avail" int4, "machnum" int4 DEFAULT nextval('"machine_machnum_seq"') NOT NULL, "y2k" text);
 

CREATE TABLE "machine_log" ("who" text, "date" datetime, "machnum" int4,
    "col" text,       -- which column in machine was changed
    "newval" text);   -- and its new value
 
CREATE UNIQUE INDEX "machine_host_key" on "machine" using btree ( "host" "text_ops" );
CREATE UNIQUE INDEX "machine_machnum_key" on "machine" using btree ( "machnum" "int4_ops" );


I've read the docs in the programmers manual, and can create rules like this:

CREATE RULE "m_log_change" AS ON UPDATE TO "machine"
do (
  INSERT INTO machine_log (who, date, machnum, col, newval)
    SELECT getpgusername(), 'now'::text, old.machnum, 'host', new.host
    WHERE (new.host != old.host) or
          (old.host IS NOT NULL and new.host IS NULL) or
          (old.host IS NULL and new.host IS NOT NULL);

  INSERT INTO machine_log (who, date, machnum, col, newval)
    SELECT getpgusername(), 'now'::text, old.machnum, 'serial_num_cpu', new.serial_num_cpu
    WHERE (new.serial_num_cpu != old.serial_num_cpu) or
          (old.serial_num_cpu IS NOT NULL and new.serial_num_cpu IS NULL) or
          (old.serial_num_cpu IS NULL and new.serial_num_cpu IS NOT NULL);
);

So far so good, and this works for any update to these particular fields.
(Side question: why do I have to do the explicit tests to see whether one of
the values is NULL?  If I leave them out, the rule doesn't fire when either
the old or the new value is NULL.)
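
For illustration, here's one of the actions with only the simple comparison;
written this way it never fires when host changes to or from NULL (presumably
because the != comparison comes out NULL rather than true):

  INSERT INTO machine_log (who, date, machnum, col, newval)
    SELECT getpgusername(), 'now'::text, old.machnum, 'host', new.host
    WHERE new.host != old.host;    -- never true when old.host or new.host is NULL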

My big problem is that if I replicate this enough times to cover the fields I 
want, I get this error:

pqReadData() -- backend closed the channel unexpectedly.
        This probably means the backend terminated abnormally
        before or while processing the request.
We have lost the connection to the backend, so further processing is impossible.  Terminating.


Is there a way I can avoid this error?  Is there a better way to code these
rules? 

(part 2:)
Is there a halfway elegant way to code a rule that will log inserts/deletes?
(I suppose I can name all the fields in machine and the who/when
fields .. is that the only way?)  
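
For example, spelling out each field in a delete rule the same way as in the
update rule -- an untested sketch covering just the first two columns (the
insert rule would be the same shape, with new in place of old):

CREATE RULE "m_log_delete" AS ON DELETE TO "machine"
do (
  INSERT INTO machine_log (who, date, machnum, col, newval)
    SELECT getpgusername(), 'now'::text, old.machnum, 'host', old.host;

  INSERT INTO machine_log (who, date, machnum, col, newval)
    SELECT getpgusername(), 'now'::text, old.machnum, 'serial_num_cpu', old.serial_num_cpu;
);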

I'd rather use SELECT * from machine, but how can I add my extra two fields
that tell me who/when?


Jim Rowan            DCSI            DCE/DFS/Sysadmin Consulting
jmr@computing.com                                            (512) 374-1143


Re: trouble creating log table with rules

From
Wayne Piekarski
Date:
> Date: Thu, 24 Jun 1999 01:53:05 -0500
> From: Jim Rowan <jmr@computing.com> 
> Subject: trouble creating log table with rules
> 
> I've read the docs in the programmers manual, and can create rules like this:
> 
> CREATE RULE "m_log_change" AS ON UPDATE TO "machine"
> do (
>   INSERT INTO machine_log (who, date, machnum, col, newval)
>     SELECT getpgusername(), 'now'::text, old.machnum,
>      'host', new.host
>     WHERE (new.host != old.host) or 
>     (old.host IS NOT NULL and new.host IS NULL) or
>         (old.host IS NULL and new.host IS NOT NULL);
> 
>   INSERT INTO machine_log (who, date, machnum, col, newval)
>     SELECT getpgusername(), 'now'::text, old.machnum,
>      'serial_num_cpu', new.serial_num_cpu
>     WHERE (new.serial_num_cpu != old.serial_num_cpu) or 
>     (old.serial_num_cpu IS NOT NULL and new.serial_num_cpu IS NULL) or
>         (old.serial_num_cpu IS NULL and new.serial_num_cpu IS NOT NULL);
> );
> 
> My big problem is that if I replicate this enough times to cover the fields I 
> want, I get this error:
> 
> pqReadData() -- backend closed the channel unexpectedly.
>         This probably means the backend terminated abnormally
>         before or while processing the request.
> We have lost the connection to the backend, so further processing is impossible.  Terminating.
> 
> Is there a way I can avoid this error?  Is there a better way to code these
> rules? 

Hi,

I've seen similar kinds of behaviour in 6.4.2 with triggers/rules/procedures
and so on, where the backend would die randomly.  I'm not sure if it's still
there in 6.5; I haven't used it yet.  Have a look at the error log from the
postmaster and see if there is anything interesting in there, and I might be
able to help you some more.  Sometimes you might get a BTP_CHAIN fault, or
another one (I can't remember -- I haven't seen it in a while).  The solution
I found was, just before adding your procedures or whatever, to do a
VACUUM ANALYZE pg_proc, which vacuums one of the internal system tables; after
that it would work.  I found that without the vacuum, postgres would die every
third or fourth time I tried to reload my triggers, etc.
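
In other words, something like this at the top of whatever script (re)creates
your rules and procedures (pg_proc is the system catalog that holds the
function/procedure entries):

VACUUM ANALYZE pg_proc;
-- ... followed by the CREATE RULE / CREATE FUNCTION statements as usual ...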

Also, I haven't reported this yet (because I can't reproduce it) but every
so often, I've found that you'll do the vacuum, and then it will return
"Blowaway_relation_buffers returned -2" and the vacuum dies. This is
really bad, and so you would normally dump the data and reload, but you
can't do this for pg_proc. So the dbms is screwed and you have to reload
the whole thing. It turns out that one of the indices or the table itself
has this BTP_CHAIN problem. 

I did some experiments trying to trick postgres into letting me dump and
reload it (ie, create a new table called pg_proc_2 with the same data and
indices, and move it into place), but it won't let you do that, to protect
itself.

The worst part with this kind of death is that my database is about 1.1 Gb
on disk, and so reloading is NOT something I want to have to do :)

Has anyone got any advice on this, or is it a known problem?  As mentioned in
another email posted to the hackers list, I am running into lots of BTP_CHAIN
problems and having to reload tables, which is not something I want to do
during the day when staff are trying to use the database and I have to shut
it down.  I've heard there is a patch for this, but I haven't heard back on
whether it's OK to use it or not.

bye,
Wayne

------------------------------------------------------------------------------
Wayne Piekarski                               Tel:     (08) 8221 5221
Research & Development Manager                Fax:     (08) 8221 5220
SE Network Access Pty Ltd                     Mob:     0407 395 889
222 Grote Street                              Email:   wayne@senet.com.au
Adelaide SA 5000                              WWW:     http://www.senet.com.au



Re: [HACKERS] Re: trouble creating log table with rules

From
wieck@debis.com (Jan Wieck)
Date:
>
> > Date: Thu, 24 Jun 1999 01:53:05 -0500
> > From: Jim Rowan <jmr@computing.com>
> > Subject: trouble creating log table with rules
> >
> > I've read the docs in the programmers manual, and can create rules like this:
> >
> > CREATE RULE "m_log_change" AS ON UPDATE TO "machine"
> > do (
> >   INSERT INTO machine_log (who, date, machnum, col, newval)
> >     SELECT getpgusername(), 'now'::text, old.machnum,
> >   'host', new.host
> >     WHERE (new.host != old.host) or
> >  (old.host IS NOT NULL and new.host IS NULL) or
> >       (old.host IS NULL and new.host IS NOT NULL);
> >
> >   INSERT INTO machine_log (who, date, machnum, col, newval)
> >     SELECT getpgusername(), 'now'::text, old.machnum,
> >   'serial_num_cpu', new.serial_num_cpu
> >     WHERE (new.serial_num_cpu != old.serial_num_cpu) or
> >  (old.serial_num_cpu IS NOT NULL and new.serial_num_cpu IS NULL) or
> >       (old.serial_num_cpu IS NULL and new.serial_num_cpu IS NOT NULL);
> > );
> >
> > My big problem is that if I replicate this enough times to cover the fields I
> > want, I get this error:
> >
> > pqReadData() -- backend closed the channel unexpectedly.
> >         This probably means the backend terminated abnormally
> >         before or while processing the request.
> > We have lost the connection to the backend, so further processing is impossible.  Terminating.
> >

    You didn't tell us which version of PostgreSQL you are running
    and (more important) whether the error occurs during the CREATE
    RULE or when updating machine.

    If it occurs during the CREATE RULE (which I hope, for your
    sake), it doesn't happen in the rewriter itself.  For the rule
    actions in the example above it isn't important in which order
    they are processed, so you could set up single-action rules per
    field and get (mostly) the same results.

    If you can create the entire multi-action rule but get the
    backend crash during the UPDATE of machine, then it's a problem
    in the rewriter, which I cannot imagine looking at your rules.
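
    A single-action version of your first action would look roughly
    like this (untested; the rule name is just an example):

    CREATE RULE "m_log_change_host" AS ON UPDATE TO "machine"
        DO INSERT INTO machine_log (who, date, machnum, col, newval)
           SELECT getpgusername(), 'now'::text, old.machnum, 'host', new.host
           WHERE (new.host != old.host) or
                 (old.host IS NOT NULL and new.host IS NULL) or
                 (old.host IS NULL and new.host IS NOT NULL);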


Jan

--

#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.                                  #
#========================================= wieck@debis.com (Jan Wieck) #

Re: [HACKERS] Re: trouble creating log table with rules

From
Jim Rowan
Date:
I said:

> > CREATE RULE "m_log_change" AS ON UPDATE TO "machine"
> > do (
> >   INSERT INTO machine_log (who, date, machnum, col, newval)
> >     SELECT getpgusername(), 'now'::text, old.machnum,
> >   'host', new.host
> >     WHERE (new.host != old.host) or
> >  (old.host IS NOT NULL and new.host IS NULL) or
> >       (old.host IS NULL and new.host IS NOT NULL);
> >
> >   INSERT INTO machine_log (who, date, machnum, col, newval)
> >     SELECT getpgusername(), 'now'::text, old.machnum,
> >   'serial_num_cpu', new.serial_num_cpu
> >     WHERE (new.serial_num_cpu != old.serial_num_cpu) or
> >  (old.serial_num_cpu IS NOT NULL and new.serial_num_cpu IS NULL) or
> >       (old.serial_num_cpu IS NULL and new.serial_num_cpu IS NOT NULL);
> > );
> > My big problem is that if I replicate this enough times to cover the fields I
> > want, I get this error:
> >
> > pqReadData() -- backend closed the channel unexpectedly.
> >         This probably means the backend terminated abnormally
> >         before or while processing the request.
> > We have lost the connection to the backend, so further processing is impossible.  Terminating.
> >

wieck>     You didn't tell us which version of PostgreSQL and (more
wieck> important) if the error occurs during CREATE RULE or when updating
wieck> machine.

Duhhh. sorry!
PostgreSQL 6.5; FreeBSD 3.2-STABLE (recent).

The error occurs during CREATE RULE.

wieck> So you could setup  single  action  rules  per field to get (mostly)
wieck> the same results. 

I had previously tried to do the same thing with many (more than 10) distinct
single-action rules (sorry, I don't have the exact syntax of what I used, but
it was very similar to this example).

In that case, the CREATE RULE worked properly, but at update time it bombed
out (again, I don't have the details anymore).  The error message indicated
that it thought there was a loop in my rules -- something about "more than
10"...  In that case, as I remember, the backend did not crash; it just
declined to execute the update.

I'll try multiple multi-action rules to see if I can do what I want..
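
For instance, something along these lines (untested) -- a handful of fields
per rule rather than every field in one giant rule:

CREATE RULE "m_log_change_1" AS ON UPDATE TO "machine"
do (
  INSERT INTO machine_log (who, date, machnum, col, newval)
    SELECT getpgusername(), 'now'::text, old.machnum, 'host', new.host
    WHERE (new.host != old.host) or
          (old.host IS NOT NULL and new.host IS NULL) or
          (old.host IS NULL and new.host IS NOT NULL);
  -- ... a few more fields in this rule ...
);

CREATE RULE "m_log_change_2" AS ON UPDATE TO "machine"
do (
  INSERT INTO machine_log (who, date, machnum, col, newval)
    SELECT getpgusername(), 'now'::text, old.machnum, 'serial_num_cpu', new.serial_num_cpu
    WHERE (new.serial_num_cpu != old.serial_num_cpu) or
          (old.serial_num_cpu IS NOT NULL and new.serial_num_cpu IS NULL) or
          (old.serial_num_cpu IS NULL and new.serial_num_cpu IS NOT NULL);
  -- ... the next few fields ...
);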

Is this (the way I'm writing the rules) the best approach?


Jim Rowan            DCSI            DCE/DFS/Sysadmin Consulting
jmr@computing.com                                            (512) 374-1143