Thread: [OT] Dilemma about OS <-> Postgres interaction

[OT] Dilemma about OS <-> Postgres interaction

From: Rory Campbell-Lange
I have an application, largely written in plpgsql, where messages are
dumped into an 'outbox' for sending via email or text message.
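
The outbox is essentially a table along these lines (columns simplified
and illustrative):

    CREATE TABLE outbox (
        id         serial PRIMARY KEY,
        medium     text NOT NULL,        -- 'email' or 'sms'
        recipient  text NOT NULL,
        body       text NOT NULL,
        queued_at  timestamptz DEFAULT now(),
        sent_at    timestamptz           -- NULL until delivered
    );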

I am in a dilemma about how best to pick up these messages and send them
on to the message-sending systems. At present I have looping perl
scripts, each holding a permanent database connection, which I run in
screen sessions on my server. Rather embarrassingly, whenever I upgrade
my application code (reloading the database and exporting the latest CVS
code) or upgrade Postgres, my looping scripts die.

I suppose I could use plperl and create a trigger to alert a system-wide
daemon that a message was ready for sending whenever a new one dropped
into the 'outbox', and restart the daemon if it had died. I feel
superstitious about using an untrusted language for functions though.

Any sage advice? I realise this is both a beginner's quandary, and also
not particularly PostgreSQL-specific. Apologies.

Rory
--
Rory Campbell-Lange
<rory@campbell-lange.net>
<www.campbell-lange.net>

Re: [OT] Dilemma about OS <-> Postgres interaction

From: Karsten Hilbert
> I suppose I could use plperl and create a trigger to alert a system-wide
> daemon that a message was ready for sending whenever a new one dropped
> into the 'outbox', and restart the daemon if it had died. I feel
> superstitious about using an untrusted language for functions though.
Sounds like a perfect candidate for a notify/listen solution?

E.g. send a NOTIFY from an insert trigger. In the listener loop in the
daemon, regularly check and re-establish the database connection.
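
A minimal sketch of both halves (table, channel, and connection details
are illustrative; send_pending_messages() stands in for your existing
sending code):

    -- fire a notification for every new outbox row
    CREATE FUNCTION outbox_notify() RETURNS trigger AS '
    BEGIN
        NOTIFY outbox_msg;
        RETURN NEW;
    END;
    ' LANGUAGE plpgsql;

    CREATE TRIGGER outbox_notify_trig AFTER INSERT ON outbox
        FOR EACH ROW EXECUTE PROCEDURE outbox_notify();

    #!/usr/bin/perl
    # listening daemon: survives database restarts by reconnecting
    use strict;
    use DBI;

    sub send_pending_messages {        # stand-in for your existing logic
        my ($dbh) = @_;
        # SELECT unsent rows from outbox, send them, mark them sent
    }

    while (1) {
        my $dbh = eval {
            DBI->connect('dbi:Pg:dbname=app', 'app', '',
                         { RaiseError => 1, AutoCommit => 1 });
        };
        unless ($dbh) { sleep 10; next }     # db down; retry later
        eval {
            $dbh->do('LISTEN outbox_msg');
            while (1) {
                $dbh->do('SELECT 1');        # lets DBD::Pg pick up notifies
                my $got;
                $got = 1 while $dbh->func('pg_notifies');   # drain queue
                send_pending_messages($dbh) if $got;
                sleep 2;
            }
        };
        eval { $dbh->disconnect };           # connection died; start over
        sleep 5;
    }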

In GnuMed we are quite successfully listening for triggered notifies on
insert/delete/update in our middleware cache.

Karsten
--
GPG key ID E4071346 @ wwwkeys.pgp.net
E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346

Re: [OT] Dilemma about OS <-> Postgres interaction

From: Edmund Bacon
Rory Campbell-Lange wrote:

> I have an application, largely written in plpgsql, where messages are
> dumped into an 'outbox' for sending via email or text message.
>
> I am in a dilemma about how best to pick up these messages and send them
> on to the message-sending systems. At present I have looping perl
> scripts, each holding a permanent database connection, which I run in
> screen sessions on my server. Rather embarrassingly, whenever I upgrade
> my application code (reloading the database and exporting the latest CVS
> code) or upgrade Postgres, my looping scripts die.
>
> I suppose I could use plperl and create a trigger to alert a system-wide
> daemon that a message was ready for sending whenever a new one dropped
> into the 'outbox', and restart the daemon if it had died. I feel
> superstitious about using an untrusted language for functions though.
>
> Any sage advice? I realise this is both a beginner's quandary, and also
> not particularly PostgreSQL-specific. Apologies.
>
> Rory

Assuming you are on a Unix of some sort, two possible solutions come
immediately to mind.

1) Fire off a non-looping version of your perl script from crontab.

2) Run your looping perl script from the inittab.

The crontab solution would be my preference: it's easier to set up the
job to run under a user other than root, and you can tailor when your
script runs (say, every 5 minutes between 8:00 AM and 6:00 PM, Monday to
Friday).

In both cases, you'd probably want to add some tests to see if your
database is up before trying to connect.
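
For the crontab route, the entry and the "is the database up" test might
look like this (paths and schedule illustrative):

    # every 5 minutes, 08:00-17:55, Monday to Friday
    */5 8-17 * * 1-5  /usr/local/bin/process_outbox.pl

    # and near the top of process_outbox.pl: bail out quietly
    # if the database isn't reachable yet
    my $dbh = eval {
        DBI->connect('dbi:Pg:dbname=app', 'app', '', { PrintError => 0 })
    } or exit 0;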



Re: [OT] Dilemma about OS <-> Postgres interaction

From: "Joshua D. Drake"
> Assuming you are on a Unix of some sort, two possible solutions come
> immediately to mind.
>
> 1) Fire off a non-looping version of your perl script from crontab.
>
> 2) Run your looping perl script from the inittab.

Be very careful here, at least if you are running Linux. Linux init is
notorious for self-inflicted DoS attacks: a bad respawning script can
run out of control and bring the machine to its knees.



>
> The crontab solution would be my preference: it's easier to set up the
> job to run under a user other than root, and you can tailor when your
> script runs (say, every 5 minutes between 8:00 AM and 6:00 PM, Monday
> to Friday).
>
> In both cases, you'd probably want to add some tests to see if your
> database is up before trying to connect.


--
Command Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC
Postgresql support, programming shared hosting and dedicated hosting.
+1-503-667-4564 - jd@commandprompt.com - http://www.commandprompt.com
Mammoth PostgreSQL Replicator. Integrated Replication for PostgreSQL

Re: [OT] Dilemma about OS <-> Postgres interaction

From: Rory Campbell-Lange
On 17/06/04, Edmund Bacon (ebacon@onesystem.com) wrote:
> Rory Campbell-Lange wrote:
>
> > I am in a dilemma about how best to pick up these messages and send them
> > on to the message-sending systems. At present I have looping perl
> > scripts, each holding a permanent database connection, which I run in
> > screen sessions on my server. Rather embarrassingly, whenever I upgrade
> > my application code (reloading the database and exporting the latest CVS
> > code) or upgrade Postgres, my looping scripts die.

> Assuming you are on a Unix of some sort, two possible solutions come
> immediately to mind.
>
> 1) Fire off a non-looping version of your perl script from crontab.
>
> 2) Run your looping perl script from the inittab.
>
> The crontab solution would be my preference: it's easier to set up the
> job to run under a user other than root, and you can tailor when your
> script runs (say, every 5 minutes between 8:00 AM and 6:00 PM, Monday
> to Friday).

I should have mentioned that we need the messages sent very soon after
they have landed in the 'outbox'; otherwise cron would definitely be the
way to go -- including an @reboot line.

Thanks for the suggestion, though.
Rory

--
Rory Campbell-Lange
<rory@campbell-lange.net>
<www.campbell-lange.net>

Re: [OT] Dilemma about OS <-> Postgres interaction

From: Harald Fuchs
In article <20040617232536.GB17879@campbell-lange.net>,
Rory Campbell-Lange <rory@campbell-lange.net> writes:

> I should have mentioned that we need the messages sent very soon after
> they have landed in the 'outbox'; otherwise cron would definitely be the
> way to go -- including an @reboot line.

This rules out a cronjob. How about writing a pl/perlu function which
sends the mail and calling that function from an AFTER INSERT trigger?
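
A sketch of that approach (column names and the sendmail invocation are
illustrative; since pl/perl functions can't themselves be triggers as of
7.4, a small plpgsql trigger function calls the pl/perlu function):

    -- untrusted pl/perl: pipe one message to sendmail
    CREATE FUNCTION send_message(text, text) RETURNS integer AS '
        my ($to, $body) = @_;
        open(MAIL, "| /usr/sbin/sendmail -t") or return 1;
        print MAIL "To: $to\n\n$body";
        close MAIL;
        return 0;
    ' LANGUAGE plperlu;

    CREATE FUNCTION outbox_send() RETURNS trigger AS '
    BEGIN
        PERFORM send_message(NEW.recipient, NEW.body);
        RETURN NEW;
    END;
    ' LANGUAGE plpgsql;

    CREATE TRIGGER outbox_send_trig AFTER INSERT ON outbox
        FOR EACH ROW EXECUTE PROCEDURE outbox_send();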

Re: [OT] Dilemma about OS <-> Postgres interaction

From: Thomas Hallgren
Harald Fuchs wrote:
> In article <20040617232536.GB17879@campbell-lange.net>,
> Rory Campbell-Lange <rory@campbell-lange.net> writes:
>
>
>>I should have mentioned that we need the messages sent very soon after
>>they have landed in the 'outbox'; otherwise cron would definitely be the
>>way to go -- including an @reboot line.
>
>
> This rules out a cronjob. How about writing a pl/perlu function which
> sends the mail and calling that function from an AFTER INSERT trigger?
>

Use of triggers has one drawback: what happens if the insert is rolled
back instead of committed? The mail has already been sent for a row that
never became visible.

I've mentioned this before. It would be *very* beneficial if it were
possible for a Pl/<lang> module to subscribe to transactional events.
That in turn would enable a solution like this to defer the actual
sending until the transaction is successfully committed, and to remove
any stray messages should it be rolled back.

While awaiting a more full-blown two-phase commit solution, I would
settle for the ability to subscribe to four events (actually, this would
work pretty well with 2PC too); a sketch of how an outbox handler might
use these follows the list.

1. "Commit issued". This event is sent before any of the normal commit
code is executed. In essence, it's similar to any other normal SQL
command executed within the transaction itself. An event handler is
allowed to make changes to the database. A change will schedule a new
"Commit issued" event to be sent once all event handlers have been
called. This process repeats until no event handler makes a change. The
mechanism will need a loop counter and a configurable threshold.

A commit can be prevented during "Commit issued" using normal error
handling.

2. "Commit start". From now on, no changes are permitted. The users view
of the data will not change. This is where deferred constraints,
transaction wide rule checking etc. can be executed. A handler is
allowed to terminate the commit using normal error handling (a normal
return from the handler is equivalent to a "vote commit" in a 2pc
environment).

3. "Commit complete". The commit is at it's final stage. From the
database perspective, the commit is complete and cannot be rolled back.
A handler can use this to attempt to send prepared emails etc. A failure
will not have any effect on stored data. Normally, the handler would
perform a minimum of processing here (since it cannot fail) and do most
of the work at "Commit start".

4. "Rollback complete". This notification tells the handler that
everything has been rolled back.
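
Purely as illustration (none of this API exists and every name here is
invented), an outbox mailer under such a scheme might register handlers
along these lines:

    # hypothetical pl/perlu registration API -- nothing like this
    # exists in PostgreSQL today
    spi_subscribe('commit_start', sub {
        # may still fail: stage/validate the outgoing mails;
        # die() here to veto the commit ("vote abort")
        stage_outbox_mails();
    });
    spi_subscribe('commit_complete', sub {
        # commit is irrevocable now: hand the staged mail to the MTA;
        # keep work minimal, since a failure can't roll anything back
        flush_staged_mails();
    });
    spi_subscribe('rollback_complete', sub {
        discard_staged_mails();   # transaction gone; drop staged mail
    });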

AFAICS, only two things are required from the PostgreSQL backend: a
simple event-sending mechanism triggered before and after the (complete)
process of committing or rolling back, and the ability to track attempts
to make a change to the database. The latter is needed to log the fact
that a change has occurred (for "Commit issued") or to prevent it from
happening (in all other handlers). I'm sure this functionality is there
already. I'd be happy to write a patch if someone could point out where
to find it.

Kind regards,

Thomas Hallgren





Re: [OT] Dilemma about OS <-> Postgres interaction

From: Rory Campbell-Lange
On 18/06/04, Harald Fuchs (hf517@protecting.net) wrote:
> In article <20040617232536.GB17879@campbell-lange.net>,
> Rory Campbell-Lange <rory@campbell-lange.net> writes:
>
> > I should have mentioned that we need the messages sent very soon after
> > they have landed in the 'outbox'; otherwise cron would definitely be the
> > way to go -- including an @reboot line.
>
> This rules out a cronjob. How about writing a pl/perlu function which
> sends the mail and calling that function from an AFTER INSERT trigger?

I was trying to avoid having an untrusted language in the database.
Then again, this may be the way to do it, and I wouldn't need to worry
about daemons or anything like that.

Even further off-topic:
How does a pl/perl function access perl modules? Is there a sensible way
of testing them?

Thanks
Rory
--
Rory Campbell-Lange
<rory@campbell-lange.net>
<www.campbell-lange.net>

Re: [OT] Dilemma about OS <-> Postgres interaction

From: "Greg Sabino Mullane"
> I should have mentioned that we need the messages sent very soon
> after they have landed in the 'outbox'; otherwise cron would
> definitely be the way to go -- including an @reboot line.

Cron will work just fine. Here's how I did it for a project I once
had. Write a perl script that makes a connection to the database,
then loops indefinitely, checking the database every X seconds
for new mail to send. When it finds some, it forks a child to handle
it, then goes on listening. Now write a small script that checks if
the first script is still running. If it is, exit silently. If
not, launch a new instance. Add an entry in cron to run the second
script every minute. New mail is sent out almost instantly, and your
script gets resurrected if killed. You might also look into the
daemontools package, which does something similar.
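
A sketch of the watchdog half (paths and pidfile handling illustrative;
this assumes the long-running sender writes its own pidfile and detaches
itself):

    #!/usr/bin/perl
    # run from cron every minute: exit silently if the sender is
    # alive, otherwise launch a fresh instance
    use strict;

    my $pidfile = '/var/run/outbox_sender.pid';
    if (open PID, '<', $pidfile) {
        chomp(my $pid = <PID>);
        close PID;
        exit 0 if $pid && kill 0, $pid;    # still running
    }
    exec '/usr/local/bin/outbox_sender.pl';

    # crontab entry:
    # * * * * *  /usr/local/bin/outbox_watchdog.pl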

--
Greg Sabino Mullane greg@turnstep.com
PGP Key: 0x14964AC8 200406190730

-----BEGIN PGP SIGNATURE-----

iD8DBQFA1CS4vJuQZxSWSsgRArGzAJ0bgwn4RPYCNwz7xxzUE/wU1SOm/QCeMGBu
VyCS3Y4bNO8XySrH6slYsUM=
=O0Eb
-----END PGP SIGNATURE-----