Re: Lost updates vs resumable connections/transactions - Mailing list pgsql-interfaces
From:           Jens Lechtenboerger
Subject:        Re: Lost updates vs resumable connections/transactions
Date:
Msg-id:         m2pt1be3lv.fsf@pcwi4002.uni-muenster.de
In response to: Re: Lost updates vs resumable connections/transactions (Jan Wieck <JanWieck@Yahoo.com>)
List:           pgsql-interfaces
Jan Wieck <JanWieck@Yahoo.com> wrote:

> On 12/9/2004 10:43 AM, Jens Lechtenbörger wrote:
>
>> Dear reader,
>>
>> I've got the following simple wish: Please tell me how to program a
>> standard web application that allows updating tuples without lost
>> updates.
>>
>> My guess is that this cannot be done easily right now, but that it
>> could be done very easily with just a slight API (libpq) extension
>> that allows tying a user/browser session to a database transaction.
>> The remainder of this mail explains my view of the situation in
>> detail. If I'm missing something fundamental, then please let me
>> know...
>>
>> [Some details from my original mail deleted...]
>>
>> So, my first question: Is there a simple solution for programming
>> transactions that span multiple scripts?
>>
>> I'm aware of the following "heavyweight" solution:
>>
>> 1. Do some kind of explicit user-level locking. In my view, this is
>>    redundant work and the wrong way to go, as databases support
>>    transactions.
>
> You underestimate the impact of your "solution". Not only does it
> need lock timeouts, you also get scaling problems. Your approach
> assumes that each and every browser session will have its own,
> unique database session.

Yes.

> The same reason why your Apache web server can handle 10000
> simultaneous sessions with just 100 or so server processes is
> manifold true in the case of a database. 10000 DB sessions means
> 10000 PostgreSQL backends. Probably only 100 or so are active at any
> given time, so there will be a lot of swapping going on, unless you
> put some ridiculous amount of RAM into place just to counter this
> design mistake.

Actually, I assume just a few concurrent users, not 10000.

> Even applications that have stateful end-user terminals (like SAP
> R/3, for example) never allow an open transaction across user
> interaction. They all use the "heavy" way of application-level
> advisory locks (held in database tables, of course).
> In all these systems, no particular operating system resource other
> than a socket and some memory is used by any idle/thinking user
> session.

I got your point. Nevertheless, my point is the following: If I write
my application, e.g., as Java applets, then I can have one database
connection per user (with a controlled number of concurrent users). I
wonder about the same thing with CGI/PHP.

>> 2. Program a proxy process that actually handles the database
>>    connections (instead of the HTTP server) and that allows
>>    suspending a transaction at the end of one script, which can be
>>    resumed at the beginning of another script. E.g., SQL Relay can
>>    do this: http://sqlrelay.sourceforge.net/
>>
>> So, my second question: What do the PostgreSQL maintainers think
>> about enriching the API to make it usable in a web environment,
>> where stateless HTTP servers stand between (and in the way of)
>> stateful applications and database servers?
>>
>> This could be done with just two additional API calls, e.g., for
>> libpq:
>>
>>     int PQsuspendConn(PGconn *conn) and
>>     PGconn *PQresumeConn(const char *conninfo)
>
> Since in the Apache case it isn't even guaranteed that the next
> request in a session will be served by the same web server process,
> how exactly do you intend to park and later hand over the open
> socket for that connection? And please, in a portable way ;-)

I simply assume that I'm in an environment where session tracking is
supported. Then the int value returned by PQsuspendConn will just be
part of the session data. E.g., in the case of PHP there won't be any
problem. I guess that many people will be happy with this assumption.

Jens
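[Editor's note: Jan's alternative — application-level advisory locks held in ordinary database tables, with the lock timeouts he mentions — can be sketched as follows. Table, column, and function names are invented for illustration, and sqlite3 again stands in for PostgreSQL; the lock row's primary key makes acquisition atomic, and expired locks are reclaimed.]

```python
import sqlite3
import time

LOCK_TIMEOUT = 300  # seconds; abandoned browser sessions expire

conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("""CREATE TABLE app_lock (
                    item_id  INTEGER PRIMARY KEY,  -- one lock row per tuple
                    owner    TEXT NOT NULL,        -- web session id
                    acquired REAL NOT NULL)""")

def acquire(item_id, session_id, now=None):
    """Try to take the advisory lock; stale locks are reclaimed first."""
    now = time.time() if now is None else now
    conn.execute("DELETE FROM app_lock WHERE item_id = ? AND acquired < ?",
                 (item_id, now - LOCK_TIMEOUT))
    try:
        conn.execute("INSERT INTO app_lock VALUES (?, ?, ?)",
                     (item_id, session_id, now))
        return True
    except sqlite3.IntegrityError:   # another session holds the lock
        return False

def release(item_id, session_id):
    conn.execute("DELETE FROM app_lock WHERE item_id = ? AND owner = ?",
                 (item_id, session_id))

print(acquire(1, "alice"))   # True: lock taken
print(acquire(1, "bob"))     # False: still held by alice
release(1, "alice")
print(acquire(1, "bob"))     # True
```

Unlike an open transaction, an idle session here costs only one table row, which is why Jan notes that systems such as SAP R/3 take this route.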