Re: Using Postgresql as application server - Mailing list pgsql-general

From Chris Travers
Subject Re: Using Postgresql as application server
Date
Msg-id CAKt_Zft07yy=RjSrXp=5GYD48mZKRXYM+SuM6w3PjyK3QgEPUw@mail.gmail.com
In response to Re: Using Postgresql as application server  (Sim Zacks <sim@compulab.co.il>)
List pgsql-general
On Thu, Aug 18, 2011 at 4:32 AM, Sim Zacks <sim@compulab.co.il> wrote:

> There are many differences.
> 1) If I have a database function and I copy my database to another server,
> the function still works.
> If I have an external daemon application, I not only have to copy my
> database, I also have to copy the daemon application. Then I have to build
> an init script and make sure it runs at startup. My LISTEN/NOTIFY daemon is
> a C application, so when I move my database to a server on a different
> platform, I have to recompile it.
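The pattern Sim describes can be sketched roughly like this (all object
names here are hypothetical, not taken from his system): a trigger fires
NOTIFY when a row needs asynchronous work, and the external daemon,
holding a LISTEN on the same channel, reacts by calling a second
database function in its own session.

```sql
-- Hypothetical sketch of the trigger side: signal the listening
-- daemon whenever a row is queued for asynchronous processing.
CREATE FUNCTION signal_email_queue() RETURNS trigger AS $$
BEGIN
    NOTIFY email_queue;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER email_queue_notify AFTER INSERT ON email_queue
    FOR EACH ROW EXECUTE PROCEDURE signal_email_queue();
```

The daemon then issues LISTEN email_queue and, on each notification,
opens its own transaction to call the processing function, which is
exactly the part that currently has to live outside the database.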


OK, so you have made a decision to favor performance well ahead of
flexibility.  I guess the question is what the performance cost of
writing it in Python actually is, and what the flexibility cost of
writing it in C actually is.  Presumably you have already answered
this for yourself, but this strikes me as coming out of that tradeoff
rather than being inherent in the idea.

>
> 2) there is absolutely no reason you can't build redundancy into this
> system.
>
> It's not a question of whether I can or cannot build redundancy, it is a
> question of whether I have to build an entire system in order to call a
> database function from another database function. The only reason this is
> complicated is because it needs to be in its own session. That simple issue
> shouldn't force me to: a) build a daemon application, b) include redundancy
> to ensure that it is running, and c) accept that it is not included in my
> database backup/restore.

Emailing, IMHO, isn't a database function.

> Remember, I don't want to build a _system_, I basically want an asynchronous
> trigger. On specific event call a database function in its own transaction
> space and allow the existing transaction to end.
>
> 3)  The overhead really shouldn't be bad, and if your parts are
> well-modularized and carefully designed, the overhead really should be
> minimal.
>
> Any overhead that is not necessary should not be added in. It is the minor
> level of frustration that something didn't work when I migrated servers
> until the "Oh Yeah" kicked in. Then looking through all my notes to find the
> compilation instructions for my daemon because we moved from a 32-bit server
> to a 64-bit one. Then trying to figure out the syntax for the init script,
> because we moved from Gentoo to Debian and it is slightly different. It
> isn't a lot of overhead but it is completely unnecessary in our situation.
> I will agree that this is entirely necessary if your application actually
> uses an external system and the database communicates through Listen/Notify.
> You have 2 systems to deal with in any case, but for me the only external
> component is having the daemon listen so it can call another function in the
> database. IOW, I don't generally deal with anything else on the server.

In general I would be opposed to allowing functions to exist outside
of transactional control.  While it is true you save some conceptual
complexity by moving everything into the database, allowing stored
procedures to commit or start transactions would add a tremendous
amount of conceptual complexity in the database itself.  At the moment
I don't think this is generally worth it.  The beauty of the current
approach is that the transactional control works in very well-defined
ways, which significantly saves testing and QA effort.  I would be
concerned that a capability like this would be sufficiently disruptive
to the assumptions of testing that the costs would always be far
higher than the benefits.

Best Wishes,
Chris Travers
