Re: PostgreSQL transaction locking problem - Mailing list pgsql-general

From Jeff Martin
Subject Re: PostgreSQL transaction locking problem
Date
Msg-id NEBBLNMDMLIJEILLDFNBAEBHCFAA.jeff@dgjc.org
In response to Re: PostgreSQL transaction locking problem  (Tom Lane <tgl@sss.pgh.pa.us>)
Responses Re: PostgreSQL transaction locking problem  (Lincoln Yeoh <lyeoh@pop.jaring.my>)
List pgsql-general
Tom Lane writes:
> [ thinks about that... ]  Yeah, probably so, because SetQuerySnapshot
> is presently executed only in the outer command loop; there won't be
> one between the LOCK and the SELECT inside your function.  So the SELECT
> still doesn't think that the other xact has committed.  You could make
> it work (in read-committed mode) if you issued the LOCK from the
> application before calling the function.
Thanks.  That is the answer.  I couldn't get any locking mechanism to work
at all in my application, because my entire db API is encapsulated in
PL/pgSQL functions.
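A sketch of the workaround Tom describes — take the lock in the application's
transaction before the function runs, so the function's SELECT starts from a
snapshot taken after the lock is granted (table, function, and lock mode here
are invented for illustration):

```sql
BEGIN;
-- Lock acquired by the application, not inside the function.
-- If another transaction holds a conflicting lock, we block here,
-- and our snapshot will see its committed changes once we proceed.
LOCK TABLE accounts IN SHARE ROW EXCLUSIVE MODE;
SELECT update_balance(42, 100.00);   -- hypothetical PL/pgSQL function
COMMIT;
```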

> There's been some discussion about whether SetQuerySnapshot should occur
> between statements in plpgsql functions or not --- AFAIR, there were
One argument for taking the snapshot between statements, as you suggest, is
precisely for people like me.  I am writing a PHP/PostgreSQL application and
have made the decision to push as much business logic as possible into
PL/pgSQL functions.  I did this to 1) keep the PHP code lightweight, 2)
reduce the round trips between my Apache server and Postgres for any one
database operation, and 3) make my code more portable and easier to use from
other systems.  That is, developers working in languages other than PHP have
less code to port in order to tap into my business logic, because it is all
encapsulated in PL/pgSQL.  So I would argue for taking the snapshot between
statements within PL/pgSQL; otherwise I cannot make any locking decisions
from inside a function.  Of course, I am not a PostgreSQL developer, so I am
not aware of the arguments against the idea.
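To make the encapsulation pattern concrete, here is a minimal sketch (all
names invented) of the kind of function I mean — the PHP layer makes one
call instead of issuing several statements itself:

```sql
CREATE FUNCTION place_order(integer, integer) RETURNS integer AS '
DECLARE
    cust_id ALIAS FOR $1;
    item_id ALIAS FOR $2;
    new_id  integer;
BEGIN
    -- Business rules live here, not in the PHP code.
    new_id := nextval(''order_seq'');
    INSERT INTO orders (id, customer, item)
        VALUES (new_id, cust_id, item_id);
    RETURN new_id;
END;
' LANGUAGE 'plpgsql';
```

From PHP the whole operation is then a single `SELECT place_order(7, 42)`.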

Here is a statement for your comment.  One argument I have heard against my
decision to push business logic into PL/pgSQL is that the postgres server then
runs all of that code.  That would be fine, except that I think I may lose
some of the advantages of multi-process servers.  If Apache were running the
code as PHP functions, I could tune my Apache server to run several processes
concurrently, even spread across several machines independent of the database
server; in a multi-user situation I gain.  However, can I tune postgres to run
several server processes as well?  Can postgres run concurrent server
processes with both persistent and non-persistent connections?  I am beginning
to think I should have encapsulated my business logic in PHP functions rather
than in PL/pgSQL.

> arguments on both sides, and we haven't come to a consensus yet.  But
> the bottom line is that in the present implementation, a function cannot
> see the effects of transactions that commit while it's running.
Understood.  Thanks.

Jeff

Jeff Martin
704 Torrey Lane, Apartment D
Boalsburg, PA 16827
H814-466-7791
jeff@dgjc.org
www.dgjc.org



