Re: background triggers? - Mailing list pgsql-general
From: Dawid Kuroczko
Subject: Re: background triggers?
Msg-id: 758d5e7f0605251127y71a87dc5pf3e19f3527e83edf@mail.gmail.com
In response to: Re: background triggers? (Rafal Pietrak <rafal@zorro.isa-geek.com>)
List: pgsql-general
On 5/25/06, Rafal Pietrak <rafal@zorro.isa-geek.com> wrote:
> > Well, exactly "not being interested in the outcome" is IMHO the reason
> > why your demands clash with the architecture of an RDBMS. Most RDBMSs
> > go a long way to ensure the consistency and safety of your data once
> > they have promised (usually by not raising an error, at least on
> > "commit") that they stored it. This doesn't match very well with
> > asynchronous processes for which "nobody cares" (in the sense that
> > there is nobody to report errors to).
>
> No, no, no. This is a completely different story.
>
> That would really be very unfortunate if we couldn't rely on the RDBMS's
> 'confirmed storage'.

Oh, I think Florian meant that it is strange that your application is not
interested in the trigger's outcome. Of course one might want to add a
notify-about-a-trigger-failure-by-email feature to work around that, but I
won't go that far here.

The point is that with your approach you fire a trigger and forget about it.
It either commits some time later, or it does not, and you don't know which.
You don't know, because your application went on, did other things, and has
no way of learning what happened to that commit. Well, you can speculate
that you will notice that no work is being done. But why? Maybe the trigger
is inefficient and isss soo sloooow, iittt taaaakessss aaaaaggeeees tooo
cooompleeete. Or maybe it ROLLBACKed, effectively removing all evidence of
the work done. With this approach you don't know -- and that is probably
what struck Florian's "strange for an RDBMS" feeling.

> Here I'm just not interested in that procedure's outcome: whether it
> eventually COMMITted or ROLLed BACK. But obviously I am interested in the
> consistency of the database when the detached procedure COMMITs.

Hmm. How are you going to get that? No, an "on ROLLBACK" trigger is not a
good idea! :-)

> I mean, it looks like this is *really* a novelty for RDBMS design -- I
> feel that the real programmers here (you guys :) are so hard to persuade
> it's necessary because it's so far from the 'synchronous nature' of
> classical RDBMS design, and of triggers in particular.

Don't get me wrong, but the word "bizarre" is more suitable than "novelty".
Background processing has been around for a very long time -- why do you
think LISTEN/NOTIFY was implemented? :)

> But I'd like to express my belief that having such a tool within the
> server can help build better database applications.

Writing faster <> writing better. As I wrote earlier, you can code a trigger
in PL/PerlU doing exactly what you want. The more usual approach of using
LISTEN/NOTIFY or a cron job is easier to manage, because you have much
better control over how many times the given function is called (see the
sketch further down).

Imagine a query with thousands of INSERTs grouped inside one transaction.
Your background trigger would mean that PostgreSQL spawns an awful lot of
new connections for nothing, since they won't see the new rows from the
other transaction anyway. You said that your scheme would implement
exclusive locking. Well, if I were writing such an application, I would
rather want such code to be fired no more often than once a minute.

ON COMMIT DETACHED WITH MINIMUM DELAY '1 minute'::interval; ? :-)
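By the way, to make the LISTEN/NOTIFY route a bit more concrete, something
along these lines would do (a rough sketch only -- the table, function and
notification names here are made up for the example):

  -- a cheap AFTER trigger that merely signals that there is work to do
  CREATE FUNCTION wake_worker() RETURNS trigger AS $$
  BEGIN
      NOTIFY background_work;  -- sent only if this transaction commits
      RETURN NULL;             -- return value is ignored for AFTER triggers
  END;
  $$ LANGUAGE plpgsql;

  CREATE TRIGGER wake_worker_trig
      AFTER INSERT ON some_table
      FOR EACH STATEMENT EXECUTE PROCEDURE wake_worker();

  -- in a separate, long-lived client session (the "worker"):
  LISTEN background_work;
  -- ...wait for the notification, then do the heavy lifting in its own
  -- transaction, where it can COMMIT or ROLLBACK on its own.

Notifications are delivered only at COMMIT, and duplicates from one
transaction are folded into a single event, so those thousands of grouped
INSERTs would wake the worker just once.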
For locking you can simply use the existing infrastructure, like:

  CREATE TABLE my_locking_table (lasttime timestamptz);
  INSERT INTO my_locking_table VALUES (now());

and then, inside the function:

  BEGIN
      PERFORM lasttime FROM my_locking_table
          WHERE lasttime < now() - '1 minute'::interval
          FOR UPDATE NOWAIT;
      IF FOUND THEN
          -- do some work
          UPDATE my_locking_table SET lasttime = now();
          RETURN;
      END IF;
  EXCEPTION
      WHEN lock_not_available THEN
          NULL;  -- somebody else already holds the lock and does the work
  END;

And if you want to check for 'ps auxw|grep backup.sh', you can do that there
as well, without any need to extend these things.

> I would only call it a framework if I can say COMMIT within the trigger
> body. Or alternatively, if I can define a trigger, so to say, "FOR EACH
> COMMIT" (please note that this is a different point in time than "FOR
> EACH STATEMENT"), which I could also define as "DETACHED" -- launched by
> the forked backend.

Actually, I like the idea of an "ON COMMIT" trigger (though without the
"DETACHED" part), but that is another story...

   Regards,
      Dawid