Thread: Help with trigger that updates a row prior to a potentially aborted deletion?

Help with trigger that updates a row prior to a potentially aborted deletion?

From
"Simon Kinsella"
Date:
Hi all,

I have a situation where a DELETE operation may (correctly) fail due to a
RESTRICT FK constraint. If so, I need to set a flag in the row indicating
that it has been marked for deletion so that I can disregard it in subsequent
queries.

I'm trying to achieve this with a BEFORE DELETE trigger, which would set the
'marked_for_deletion' field to TRUE before attempting the delete proper.
Then if the DELETE fails the row would still be tagged and I'd be happy.
Problem is, when the DELETE operation fails with a RESTRICT VIOLATION error
the entire operation is rolled back, including the BEFORE triggers, leaving
me back where I started.

Is there any way to get the DELETE operation, or more specifically the FK
constraint, to fail silently, i.e. to skip over the failed operation and not
throw an exception? I'm really racking my brains on this one but not really
getting anywhere!

I have sample data defs to play with if that would be helpful.

Thanks in advance!

Simon Kinsella




Re: Help with trigger that updates a row prior to a potentially aborted deletion?

From
"Simon Kinsella"
Date:
Hello Achilleus

Thanks for your feedback. On changing the return to NULL:
According to the docs, if I return NULL in the BEFORE trigger itself, all
subsequent triggers and the row-level op itself (the actual delete) will be
skipped completely, which is no good. I will confirm this to make sure
though.

On your suggestion of manually updating, I have been trying something like
this with interesting (but undesirable!) results:

CREATE OR REPLACE FUNCTION fn_trg_mark_ref_as_deleted() RETURNS TRIGGER AS
$$
BEGIN
    UPDATE ref_table SET deleted = TRUE
        WHERE ref_id = OLD.ref_id;
    RETURN OLD;
END;
$$
LANGUAGE 'plpgsql' VOLATILE;

CREATE TRIGGER mark_ref_as_deleted BEFORE DELETE ON ref_table
    FOR EACH ROW EXECUTE PROCEDURE fn_trg_mark_ref_as_deleted();

(I'm returning OLD for the reason above).  Oddly, this does indeed set the
soft-delete flag but never deletes the row, even if there are no constraint
dependencies.  

I'm going to keep playing but any other suggestions would be very welcome :)

Here are some sample schema and defs for anyone who's interested:

--------------------------------------------------------------------------------
-- SCHEMA DEFS:
BEGIN;
CREATE TABLE ref_table
(
    ref_id  INTEGER NOT NULL,
    deleted BOOLEAN DEFAULT FALSE,
    CONSTRAINT ref_table_pkey PRIMARY KEY (ref_id)
);

CREATE TABLE dep_table
(
    dep_id INTEGER NOT NULL,
    ref_id INTEGER NOT NULL,
    CONSTRAINT dep_table_pkey PRIMARY KEY (dep_id)
);

ALTER TABLE dep_table ADD CONSTRAINT dep_table_depends_on_ref_table
    FOREIGN KEY (ref_id) REFERENCES ref_table (ref_id)
    MATCH FULL ON DELETE NO ACTION ON UPDATE CASCADE NOT DEFERRABLE;
 

CREATE OR REPLACE FUNCTION fn_trg_mark_ref_as_deleted() RETURNS TRIGGER AS
$$
BEGIN
    UPDATE ref_table SET deleted = TRUE
        WHERE ref_id = OLD.ref_id;
    RETURN OLD;
END;
$$
LANGUAGE 'plpgsql' VOLATILE;

CREATE TRIGGER mark_ref_as_deleted BEFORE DELETE ON ref_table
    FOR EACH ROW EXECUTE PROCEDURE fn_trg_mark_ref_as_deleted();

COMMIT;

--------------------------------------------------------------------------------
-- SAMPLE DATA:
BEGIN;
DELETE FROM dep_table;
DELETE FROM ref_table;

INSERT INTO ref_table (ref_id) VALUES (1);
INSERT INTO ref_table (ref_id) VALUES (2);
INSERT INTO ref_table (ref_id) VALUES (3);
INSERT INTO ref_table (ref_id) VALUES (4);
INSERT INTO ref_table (ref_id) VALUES (5);

INSERT INTO dep_table (dep_id,ref_id) VALUES (100,1);    
INSERT INTO dep_table (dep_id,ref_id) VALUES (101,1);    
INSERT INTO dep_table (dep_id,ref_id) VALUES (102,2);    
INSERT INTO dep_table (dep_id,ref_id) VALUES (103,2);    
INSERT INTO dep_table (dep_id,ref_id) VALUES (104,3);    
COMMIT;
--------------------------------------------------------------------------------
-- SAMPLE QUERIES (which don't do what I would like!):

DELETE FROM ref_table WHERE ref_id = 1;  -- Ideally should set the 'deleted'
                                         -- flag and not remove the row. (works OK)

DELETE FROM ref_table WHERE ref_id = 5;  -- Ideally should remove the row
                                         -- completely. (does not work)
--------------------------------------------------------------------------------

Thanks all, 

Simon Kinsella








Re: Help with trigger that updates a row prior to a potentially aborted deletion?

From
Stephan Szabo
Date:
On Wed, 1 Mar 2006, Simon Kinsella wrote:

> Hi all,
>
> I have a situation where a DELETE operation may (correctly) fail due to a
> RESTRICT FK constraint. If so, I need to set a flag in the row indicating
> that it has been marked for deletion so that I can disregard it in subsequent
> queries.
>
> I'm trying to achieve this with a BEFORE DELETE trigger, which would set the
> 'marked_for_deletion' field to TRUE before attempting the delete proper.
> Then if the DELETE fails the row would still be tagged and I'd be happy.
> Problem is, when the DELETE operation fails with a RESTRICT VIOLATION error
> the entire operation is rolled back, including the BEFORE triggers, leaving
> me back where I started.
>

> Is there any way to get the DELETE operation, or more specifically the FK
> constraint, to fail silently, i.e. to skip over the failed operation and not
> throw an exception?

Not with the standard constraint trigger, no. In general, constraint
checks happen after the action and as such can't skip over an operation
since it's already happened.

You might be able to do this within a function, however, if you do the
update and then start an exception-checking block to do the delete.
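A minimal sketch of that suggestion, using the table and column names from
Simon's sample schema (the function name `delete_or_mark` is made up for
illustration, and it assumes no BEFORE DELETE trigger is installed on
ref_table): mark the row first, then attempt the delete inside an exception
block, so a foreign-key violation leaves the UPDATE in place.

```sql
CREATE OR REPLACE FUNCTION delete_or_mark(p_ref_id INTEGER) RETURNS VOID AS
$$
BEGIN
    UPDATE ref_table SET deleted = TRUE WHERE ref_id = p_ref_id;
    BEGIN
        DELETE FROM ref_table WHERE ref_id = p_ref_id;
    EXCEPTION
        WHEN foreign_key_violation THEN
            -- dependents exist: the row stays, flagged as deleted
            NULL;
    END;
END;
$$
LANGUAGE plpgsql VOLATILE;

-- called in place of a plain DELETE:
-- SELECT delete_or_mark(1);
```

The exception block is a subtransaction, so only the failed DELETE is rolled
back, not the UPDATE before it.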



Re: Help with trigger that updates a row prior to a potentially aborted deletion?

From
"Simon Kinsella"
Date:
Yes I originally started working on a function based approach like you
suggest, but realised it wouldn't cover the situation where the delete
operation is fired as a result of a CASCADE ON DELETE constraint from a
parent table, rather than as a manual query. I suppose I could ditch that
particular cascading constraint and replace it with a trigger function that
calls my custom delete function. Not sure if I like that though ;)
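The replacement Simon alludes to could look roughly like this. It is an
untested sketch: `parent_table`, its `parent_id` key, and a `parent_id`
column on ref_table are all hypothetical, and the mark-then-try-delete
logic is inlined so the example is self-contained.

```sql
-- stands in for "ref_table.parent_id REFERENCES parent_table ... ON DELETE CASCADE"
CREATE OR REPLACE FUNCTION fn_trg_parent_cascade() RETURNS TRIGGER AS
$$
DECLARE
    r RECORD;
BEGIN
    FOR r IN SELECT ref_id FROM ref_table
             WHERE parent_id = OLD.parent_id LOOP
        UPDATE ref_table SET deleted = TRUE WHERE ref_id = r.ref_id;
        BEGIN
            DELETE FROM ref_table WHERE ref_id = r.ref_id;
        EXCEPTION WHEN foreign_key_violation THEN
            NULL;  -- this row still has dependents; leave it flagged
        END;
    END LOOP;
    RETURN OLD;
END;
$$
LANGUAGE plpgsql;

CREATE TRIGGER parent_cascade AFTER DELETE ON parent_table
    FOR EACH ROW EXECUTE PROCEDURE fn_trg_parent_cascade();
```

Looping row by row means one undeletable child doesn't block the deletion of
its siblings.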

Thanks for your feedback,

Simon Kinsella

-----Original Message-----
From: Stephan Szabo [mailto:sszabo@megazone.bigpanda.com]
Sent: Wednesday, March 01, 2006 2:31 PM
To: Simon Kinsella
Cc: pgsql-sql@postgresql.org
Subject: Re: [SQL] Help with trigger that updates a row prior to a
potentially aborted deletion?


On Wed, 1 Mar 2006, Simon Kinsella wrote:

> Hi all,
>
> I have a situation where a DELETE operation may (correctly) fail due 
> to a RESTRICT FK constraint. If so, I need to set a flag in the row 
> indicating that it has been marked for deletion so that I can 
> disregard it in subsequent queries.
>
> I'm trying to achieve this with a BEFORE DELETE trigger, which would 
> set the 'marked_for_deletion' field to TRUE before attempting the delete
proper.
> Then if the DELETE fails the row would still be tagged and I'd be happy.
> Problem is, when the DELETE operation fails with a RESTRICT VIOLATION 
> error the entire operation is rolled back, including the BEFORE 
> triggers, leaving me back where I started.
>

> Is there any way to get the DELETE operation, or more specifically the 
> FK constraint, to fail silently, i.e. to skip over the failed 
> operation and not throw an exception?

Not with the standard constraint trigger, no. In general, constraint checks
happen after the action and as such can't skip over an operation since it's
already happened.

You might be able to do this within a function, however, if you do the update
and then start an exception-checking block to do the delete.





Replication - state of the art?

From
Bryce Nesbitt
Date:
I'm interested in creating a mirror database, for use in case our
primary machine goes down.  Can people here help sort out which of the
several replication projects is most viable?

As far as I can tell, the winner is slony1 at
http://gborg.postgresql.org/project/slony1/projdisplay.php , but there
are several contenders.
                  ------------------------------------------

The needs here are very relaxed.  We have one main postmaster which runs
24x7x365.  There's another machine nearby that can accept a log or
journal of some sort.  The alternate never needs DB access, except in
case of main machine failure,  and then we can take up to 15 minutes to
switch over and rebuild the DB.  "No-lost transaction" is far more
important than switch time.

Anyone here using replication or transaction journaling?  Has it proved
reliable, easy to maintain?
                           Bryce Nesbitt


Re: Help with trigger that updates a row prior to a potentially aborted deletion?

From
Andrew Sullivan
Date:
On Wed, Mar 01, 2006 at 01:30:23PM -0000, Simon Kinsella wrote:
> I'm trying to achieve this with a BEFORE DELETE trigger, which would set the
> 'marked_for_deletion' field to TRUE before attempting the delete proper.
> Then if the DELETE fails the row would still be tagged and I'd be happy.
> Problem is, when the DELETE operation fails with a RESTRICT VIOLATION error
> the entire operation is rolled back, including the BEFORE triggers, leaving
> me back where I started.

Yes.  In 8.1, you could use a subtransaction for the DELETE, which I
think would allow you to rollback at that point and still leave the
UPDATE in place.
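At the SQL level the idea can be sketched with an explicit savepoint
(savepoints are the SQL interface to subtransactions; the conditional
rollback shown in the comment would be driven by the application's error
handling, and the ref_table names come from Simon's sample schema):

```sql
BEGIN;
UPDATE ref_table SET deleted = TRUE WHERE ref_id = 1;

SAVEPOINT try_delete;
DELETE FROM ref_table WHERE ref_id = 1;
-- only if the DELETE raised a foreign-key violation does the
-- application issue this, instead of aborting the whole transaction:
ROLLBACK TO SAVEPOINT try_delete;

COMMIT;  -- the UPDATE survives either way
```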

A


-- 
Andrew Sullivan  | ajs@crankycanuck.ca
The fact that technology doesn't work is no bar to success in the marketplace.    --Philip Greenspun


Re: Replication - state of the art?

From
Andrew Sullivan
Date:
On Wed, Mar 01, 2006 at 09:51:46AM -0800, Bryce Nesbitt wrote:
> switch over and rebuild the DB.  "No-lost transaction" is far more
> important than switch time.

You can't guarantee that without two phase commit, no matter what you
do.  Log shipping doesn't require you to have an active database
running on the origin (slony-1 does, which is one of its potential
drawbacks).  But that won't help you if a transaction committed at
the instant an earthquake hit your datacentre, wiping it out.  You
can't get the data off the failed origin no matter what. 

> Anyone here using replication or transaction journaling?  Has it proved
> reliable, easy to maintain?

Define "easy".  Every possible replication system is going to have
slightly grotty corners into which you find yourself wandering.  The
question is merely whether the room is octagonal or merely
rectangular.

A

-- 
Andrew Sullivan  | ajs@crankycanuck.ca
I remember when computers were frustrating because they *did* exactly what 
you told them to.  That actually seems sort of quaint now.    --J.D. Baldwin


Re: Replication - state of the art?

From
Bryce Nesbitt
Date:
Andrew Sullivan wrote:
> On Wed, Mar 01, 2006 at 09:51:46AM -0800, Bryce Nesbitt wrote:
>   
>> switch over and rebuild the DB.  "No-lost transaction" is far more
>> important than switch time.
>>     
>
> You can't guarantee that without two phase commit, no matter what you
> do.  Log shipping doesn't require you to have an active database
> running on the origin (slony-1 does, which is one of its potential
> drawbacks).  But that won't help you if a transaction committed at
> the instant an earthquake hit your datacentre, wiping it out.  You
> can't get the data off the failed origin no matter what. 
>   
Actually let me loosen that a bit:  we don't need two phase commit.  We
can lose the most recent transaction, or even the last few seconds of
transactions.  What we can't survive is -- on the day of the emergency
-- a long and complicated DB rebuild with mistakes and hard-to-debug
data issues.

>> Anyone here using replication or transaction journaling?  Has it proved
>> reliable, easy to maintain?
>>     
>
> Define "easy".  Every possible replication system is going to have
> slightly grotty corners into which you find yourself wandering.  The
> question is merely whether the room is octagonal or merely
> rectangular.
>   
There's no fire creating demand for replication, so there is little time
budget.
So is there a sort of padded, no-sharp-corners, playroom that gets us
90% of the way there?

We're looking to reduce what's now a 24-hour window on data loss (since
the most recent nightly backup) to something more reasonable (like 500
milliseconds).  But risk -- of data corruption -- or too much time spent
will kill the project.




Re: Replication - state of the art?

From
Andrew Sullivan
Date:
On Wed, Mar 01, 2006 at 11:28:06AM -0800, Bryce Nesbitt wrote:
> Actually let me loosen that a bit:  we don't need two phase commit.  We
> can lose the most recent transaction, or even the last few seconds of
> transactions.  What we can't survive is -- on the day of the emergency
> -- a long and complicated DB rebuild with mistakes and hard-to-debug
> data issues.

Then I suggest you use Slony-I.  While it is not plug and play, the
thing it _is_ designed to handle reasonably well is failover and
(better) switchover.  Most systems plan to solve that piece of
functionality later, with a script or something, at which point it is
apparent that setting up failover or swichover to be anything
approaching safe is actually very tricky.  (Log shipping is probably
not in this category, but AFAIK the promote-to-live support for a
standby database copy is still not all built by anyone.  If you like
rolling your own, however, it might be your answer.)

> There's no fire creating demand for replication, so there is little time
> budget.
> So is there a sort of padded, no-sharp-corners, playroom that gets us
> 90% of the way there?

The "no budget" remark here is what makes me strike CMD's Mammoth
Replicator off the list.  But I'm sure their administration tools are
far sweeter than the admittedly hackish ones that Slony currently
delivers out of the box.  

> nightly) into something more reasonable (like 500 milliseconds).  But
> risk -- of data corruption --
> and time --too much-- will can the project.

Another big reason to use a live-standby system like Slony is that
once you have the extra database online, you suddenly think of all
sorts of nifty queries you can move there without destroying your
production performance.  Be careful not to get addicted, is all.

A

-- 
Andrew Sullivan  | ajs@crankycanuck.ca
Information security isn't a technological problem.  It's an economics
problem.    --Bruce Schneier


Re: Replication - state of the art?

From
Scott Marlowe
Date:
On Wed, 2006-03-01 at 11:51, Bryce Nesbitt wrote:
> I'm interested in creating a mirror database, for use in case our
> primary machine goes down.  Can people here help sort out which of the
> several replication projects is most viable?
> 
> As far as I can tell, the winner is slony1 at
> http://gborg.postgresql.org/project/slony1/projdisplay.php , but there
> are several contenders.
> 
>                    ------------------------------------------
> 
> The needs here are very relaxed.  We have one main postmaster which runs
> 24x7x365.  There's another machine nearby that can accept a log or
> journal of some sort.  The alternate never needs DB access, except in
> case of main machine failure,  and then we can take up to 15 minutes to
> switch over and rebuild the DB.  "No-lost transaction" is far more
> important than switch time.
> 
> Anyone here using replication or transaction journaling?  Has it proved
> reliable, easy to maintain?

You might want to look at pgpool in mirror replication mode as well. 
It's got some limitations due to its query-shipping nature, but may
give you what you need.  It knows how to switch off from the dead server
and keep running on the one good one.  It's solid software though.  

Mammoth may be a better option for you.  It's not that horribly
expensive, and setup is supposed to be a snap.

If you use slony, you might want to look at frontending it with pgpool
which makes switching the servers around a little easier, as you can do
it in pgpool instead of your app layer.  

Lots of choices.  Hard to say which is right.

I really like slony, and use it at work.  I'm quite happy with it.


Re: Help with trigger that updates a row prior to a potentially aborted deletion?

From
"Simon Kinsella"
Date:
Ok thanks, will check this out. Is that the same as savepoints, or something
different? (am using 8.1.2)

At the moment I'm investigating using a rule (rewrite the DELETE as an
UPDATE to set the flag, then use an AFTER UPDATE trigger to attempt to
delete the row if the flag was set). Not sure if it's going to work but if
so I'll post back.
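In outline, that plan could look something like this against the sample
schema from earlier in the thread. This is an untested sketch, not
necessarily what Simon ended up with; the trick is that the rule is
conditional, so once the flag is set a plain DELETE passes through, which
is what lets the trigger's own DELETE reach the row:

```sql
-- rewrite DELETE of an unflagged row into setting the flag
CREATE RULE ref_soft_delete AS ON DELETE TO ref_table
    WHERE NOT OLD.deleted
    DO INSTEAD UPDATE ref_table SET deleted = TRUE
        WHERE ref_id = OLD.ref_id;

-- after the flag goes up, try the real delete; trap FK failures
CREATE OR REPLACE FUNCTION fn_trg_try_real_delete() RETURNS TRIGGER AS
$$
BEGIN
    IF NEW.deleted THEN
        BEGIN
            -- OLD.deleted is now TRUE, so the rule no longer rewrites this
            DELETE FROM ref_table WHERE ref_id = NEW.ref_id;
        EXCEPTION WHEN foreign_key_violation THEN
            NULL;  -- dependents exist; keep the flagged row
        END;
    END IF;
    RETURN NULL;
END;
$$
LANGUAGE plpgsql;

CREATE TRIGGER try_real_delete AFTER UPDATE ON ref_table
    FOR EACH ROW EXECUTE PROCEDURE fn_trg_try_real_delete();
```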

Thanks!

Simon

-----Original Message-----
From: pgsql-sql-owner@postgresql.org [mailto:pgsql-sql-owner@postgresql.org]
On Behalf Of Andrew Sullivan
Sent: Wednesday, March 01, 2006 6:24 PM
To: pgsql-sql@postgresql.org
Subject: Re: [SQL] Help with trigger that updates a row prior to a
potentially aborted deletion?

On Wed, Mar 01, 2006 at 01:30:23PM -0000, Simon Kinsella wrote:
> I'm trying to achieve this with a BEFORE DELETE trigger, which would 
> set the 'marked_for_deletion' field to TRUE before attempting the delete
proper.
> Then if the DELETE fails the row would still be tagged and I'd be happy.
> Problem is, when the DELETE operation fails with a RESTRICT VIOLATION 
> error the entire operation is rolled back, including the BEFORE 
> triggers, leaving me back where I started.

Yes.  In 8.1, you could use a subtransaction for the DELETE, which I think
would allow you to rollback at that point and still leave the UPDATE in
place.

A


--
Andrew Sullivan  | ajs@crankycanuck.ca
The fact that technology doesn't work is no bar to success in the
marketplace.    --Philip Greenspun

---------------------------(end of broadcast)---------------------------
TIP 3: Have you checked our extensive FAQ?
              http://www.postgresql.org/docs/faq




Re: Replication - state of the art?

From
"Jim C. Nasby"
Date:
You could also use WAL shipping and some PITR trickery to keep a 'warm
standby' database up to date. How far behind it falls is up to you,
since you'll be periodically syncing the current WAL file to the backup
machine. Do the sync once a minute, and at most you lose 60 seconds of
data.
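A rough sketch of that 8.1-era recipe (all paths and hostnames are
placeholders; the once-a-minute copy of the in-progress segment is the
"trickery" part and needs care around segment switches and restore
scripting on the standby):

```
# postgresql.conf on the primary: ship each completed WAL segment
archive_command = 'rsync -a %p standby:/var/lib/pgsql/wal_archive/%f'

# cron on the primary, once a minute: also push the current, partially
# written segment, so at most ~60 seconds of transactions are exposed
* * * * *  rsync -a /var/lib/pgsql/data/pg_xlog/ standby:/var/lib/pgsql/wal_archive/
```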

On Wed, Mar 01, 2006 at 02:49:18PM -0500, Andrew Sullivan wrote:
> On Wed, Mar 01, 2006 at 11:28:06AM -0800, Bryce Nesbitt wrote:
> > Actually let me loosen that a bit:  we don't need two phase commit.  We
> > can lose the most recent transaction, or even the last few seconds of
> > transactions.  What we can't survive is -- on the day of the emergency
> > -- a long and complicated DB rebuild with mistakes and hard-to-debug
> > data issues.
> 
> Then I suggest you use Slony-I.  While it is not plug and play, the
> thing it _is_ designed to handle reasonably well is failover and
> (better) switchover.  Most systems plan to solve that piece of
> functionality later, with a script or something, at which point it is
> apparent that setting up failover or switchover to be anything
> approaching safe is actually very tricky.  (Log shipping is probably
> not in this category, but AFAIK the promote-to-live support for a
> standby database copy is still not all built by anyone.  If you like
> rolling your own, however, it might be your answer.)
> 
> > There's no fire creating demand for replication, so there is little time
> > budget.
> > So is there a sort of padded, no-sharp-corners, playroom that gets us
> > 90% of the way there?
> 
> The "no budget" remark here is what makes me strike CMD's Mammoth
> Replicator off the list.  But I'm sure their administration tools are
> far sweeter than the admittedly hackish ones that Slony currently
> delivers out of the box.  
> 
> > nightly) into something more reasonable (like 500 milliseconds).  But
> > risk -- of data corruption --
> > and time --too much-- will can the project.
> 
> Another big reason to use a live-standby system like Slony is that
> once you have the extra database online, you suddenly think of all
> sorts of nifty queries you can move there without destroying your
> production performance.  Be careful not to get addicted, is all.
> 
> A
> 
> -- 
> Andrew Sullivan  | ajs@crankycanuck.ca
> Information security isn't a technological problem.  It's an economics
> problem.
>         --Bruce Schneier
> 
> 

-- 
Jim C. Nasby, Sr. Engineering Consultant      jnasby@pervasive.com
Pervasive Software      http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf       cell: 512-569-9461


Re: Replication - state of the art?

From
Andrew Sullivan
Date:
On Wed, Mar 01, 2006 at 04:15:18PM -0600, Jim C. Nasby wrote:
> You could also use WAL shipping and some PITR trickery to keep a 'warm
> standby' database up to date. How far behind it falls is up to you,
> since you'll be periodically syncing the current WAL file to the backup
> machine. Do the sync once a minute, and at most you lose 60 seconds of
> data.

Right.  But you have to write all of that, and write the failover
scripts, and such like.  And the OP suggested that there wasn't time
budget for that.  But it'd work.

A

-- 
Andrew Sullivan  | ajs@crankycanuck.ca
The whole tendency of modern prose is away from concreteness.    --George Orwell


Re: Help with trigger that updates a row prior to a potentially aborted deletion?

From
Andrew Sullivan
Date:
On Wed, Mar 01, 2006 at 08:41:20PM -0000, Simon Kinsella wrote:
> Ok thanks, will check this out. Is that the same as savepoints, or something
> different? (am using 8.1.2)

Yes, same thing.

> At the moment I'm investigating using a rule (rewrite the DELETE as an
> UPDATE to set the flag, then use an AFTER UPDATE trigger to attempt to
> delete the row if the flag was set). Not sure if it's going to work but if
> so I'll post back.

In a message you sent that I read after I sent mine, you also said
you had to be able to handle deletes from the table with CASCADE.  My
suggestion won't work for that, I don't think (but it might be worth
a try).  The only other thing I can think of is just flag everything,
and use a daemon to go around and perform the actual deletes for you.
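That last idea can be sketched against the sample schema from earlier in
the thread: the application only ever sets the flag, and a periodic job
performs the real deletes, using NOT EXISTS to skip rows that still have
dependents rather than trapping FK errors:

```sql
-- what the application does instead of a DELETE:
UPDATE ref_table SET deleted = TRUE WHERE ref_id = 1;

-- the sweep the daemon runs periodically:
DELETE FROM ref_table r
WHERE r.deleted
  AND NOT EXISTS (SELECT 1 FROM dep_table d WHERE d.ref_id = r.ref_id);
```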

A

-- 
Andrew Sullivan  | ajs@crankycanuck.ca
The plural of anecdote is not data.    --Roger Brinner


Re: Help with trigger that updates a row prior to a potentially aborted deletion?

From
"Simon Kinsella"
Date:
Hi Andrew,

I think I may have cracked this problem with a RULE ON DELETE that calls
a function instead of the standard DELETE op.  No triggers. It was a
real nightmare as it was my first attempt at a rule and it kept ending up in
circular self-references. In the end though it boiled down to a pretty
compact solution. It's getting late now but I'll be testing it out properly
tomorrow.  I can post it up then if you're interested.

Thanks for your help,

Simon

-----Original Message-----
From: pgsql-sql-owner@postgresql.org [mailto:pgsql-sql-owner@postgresql.org]
On Behalf Of Andrew Sullivan
Sent: Thursday, March 02, 2006 12:20 PM
To: pgsql-sql@postgresql.org
Subject: Re: [SQL] Help with trigger that updates a row prior to a
potentially aborted deletion?

On Wed, Mar 01, 2006 at 08:41:20PM -0000, Simon Kinsella wrote:
> Ok thanks, will check this out. Is that the same as savepoints, or 
> something different? (am using 8.1.2)

Yes, same thing.

> At the moment I'm investigating using a rule (rewrite the DELETE as an 
> UPDATE to set the flag, then use an AFTER UPDATE trigger to attempt to 
> delete the row if the flag was set). Not sure if it's going to work 
> but if so I'll post back.

In a message you sent that I read after I sent mine, you also said you had
to be able to handle deletes from the table with CASCADE.  My suggestion
won't work for that, I don't think (but it might be worth a try).  The only
other thing I can think of is just flag everything, and use a daemon to go
around and perform the actual deletes for you.

A

--
Andrew Sullivan  | ajs@crankycanuck.ca
The plural of anecdote is not data.    --Roger Brinner





Re: Help with trigger that updates a row prior to a potentially aborted deletion?

From
Andrew Sullivan
Date:
On Fri, Mar 03, 2006 at 12:19:22AM -0000, Simon Kinsella wrote:
> Hi Andrew,
> 
> I think I may have cracked this problem by combining a RULE ON DELETE which
> calls a function instead of the standard DELETE op.  No triggers. It was a

Ah.  Yes, likely.  Yeah, you can't do that.

A

-- 
Andrew Sullivan  | ajs@crankycanuck.ca
I remember when computers were frustrating because they *did* exactly what 
you told them to.  That actually seems sort of quaint now.    --J.D. Baldwin