Thread: More concurrent transactions over a single connection?
AFAIK (as of 7.4.x) there is one limitation in persistent connections to PostgreSQL from various frontends ( http://cz.php.net/manual/en/features.persistent-connections.php ): transactions cannot be used when several concurrent tasks share a single connection (excuse my English).

I suggest adding some sort of "context" identifier to the frontend/backend protocol to overcome this limit. I.e. the frontend (PHP, for example) makes ONE persistent connection and different scripts are served over it, but the frontend adds a unique "context" identifier for each script instance, and the PostgreSQL server treats different "contexts" as if they had been sent over different connections. The results are then sorted by "context" by the frontend and fed to the appropriate instance of the PHP script.

I think it could help avoid connection startup costs, especially where the database and client are separated by a long network distance and/or connection setup requires some expensive procedure, and it would allow really simple and transparent connection pooling, maybe with some kind of "spare servers" like in Apache (the MinSpareServers and MaxSpareServers configuration directives).

What do you think about it?
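To make the proposal concrete, here is a minimal toy sketch in Python (purely illustrative, not the real frontend/backend protocol; `ToyServer` and the context names are invented for the example): every message carries a "context" identifier, and the server keeps separate transaction state per context.

```python
# Toy sketch of the proposed "context" multiplexing: not the real wire
# protocol, just a model of the state the server would have to keep.
class ToyServer:
    def __init__(self):
        self.tx_open = {}   # context id -> currently inside a transaction?
        self.log = []       # command stream, tagged by context

    def execute(self, context, sql):
        cmd = sql.strip().upper()
        if cmd == "BEGIN":
            self.tx_open[context] = True
        elif cmd in ("COMMIT", "ROLLBACK"):
            self.tx_open[context] = False
        self.log.append((context, cmd))
        return f"[ctx {context}] {cmd} ok"

server = ToyServer()
server.execute("script-a", "BEGIN")
server.execute("script-b", "BEGIN")    # an independent transaction
server.execute("script-a", "COMMIT")   # does not touch script-b's state
```

Note that the per-context dictionary holds exactly the state a dedicated backend connection already holds (transaction status, and in a fuller model the configuration settings too), which hints at why multiplexing contexts over one connection saves less than it first appears.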
NTPT wrote:
> AFAIK (as of 7.4.x) there is one limitation in persistent connections to
> PostgreSQL from various frontends (
> http://cz.php.net/manual/en/features.persistent-connections.php ):
> transactions cannot be used when several concurrent tasks share a single
> connection.
>
> I suggest adding some sort of "context" identifier to the
> frontend/backend protocol to overcome this limit. I.e. the frontend
> (PHP, for example) makes ONE persistent connection and different scripts
> are served over it, but the frontend adds a unique "context" identifier
> for each script instance, and the PostgreSQL server treats different
> "contexts" as if they had been sent over different connections. The
> results are then sorted by "context" by the frontend and fed to the
> appropriate instance of the PHP script.

You've just reinvented connections. The problem is really at the application end, since PHP doesn't provide a middleware layer to manage this sort of stuff. Typically, Java-based application servers manage this sort of thing for you.

> I think it could help avoid connection startup costs, especially where
> the database and client are separated by a long network distance and/or
> connection setup requires some expensive procedure, and it would allow
> really simple and transparent connection pooling, maybe with some kind
> of "spare servers" like in Apache (the MinSpareServers and
> MaxSpareServers configuration directives).

Perhaps take a look at pgpool connection pooling.

--
Richard Huxton
Archonet Ltd
On Wed, Feb 09, 2005 at 12:22:44AM +0100, NTPT wrote:
> AFAIK (as of 7.4.x) there is one limitation in persistent connections to
> PostgreSQL from various frontends (
> http://cz.php.net/manual/en/features.persistent-connections.php ):
> transactions cannot be used when several concurrent tasks share a single
> connection.

One backend handles one session. Nothing strange about that. That won't change anytime soon either.

> I think it could help avoid connection startup costs, especially where
> the database and client are separated by a long network distance and/or
> connection setup requires some expensive procedure, and it would allow
> really simple and transparent connection pooling, maybe with some kind
> of "spare servers" like in Apache (the MinSpareServers and
> MaxSpareServers configuration directives).

I think you're looking for pgpool; it does connection pooling. Multiplexing multiple connections over one doesn't help, because the process at the other end would have to copy them out to several different backends anyway, so you don't save anything. pgpool is a much better system...

Hope this helps,
--
Martijn van Oosterhout <kleptog@svana.org> http://svana.org/kleptog/
> Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a
> tool for doing 5% of the work and then sitting around waiting for someone
> else to do the other 95% so you can sue them.
On Wed, 9 Feb 2005, NTPT wrote:
> AFAIK (as of 7.4.x) there is one limitation in persistent connections to
> PostgreSQL from various frontends (
> http://cz.php.net/manual/en/features.persistent-connections.php ):
> transactions cannot be used when several concurrent tasks share a single
> connection.

For the PHP case, it's not a limitation. pg_connect() and pg_pconnect() have the same semantics, per the specs. That is, there's no way to write a PHP program that behaves differently depending on whether pg_connect or pg_pconnect is used. You can always safely substitute a pg_pconnect with pg_connect, as far as PHP is concerned (it may affect performance, but not the semantics of the program).

Moreover, for a web application, assuming you're using a multiprocess server such as Apache, you can never tell which sub-process will handle your next request, and database connections are per-process. So the problem you need to solve is not just multiple apps using the same connection, but also how to pass connections along among sub-processes. Think bigger, and consider a load-balancing setup where requests are directed to different web frontends: you'd need to pass database connections among different _hosts_.

It's the stateless nature of HTTP that makes web services really scalable. Persistent connections destroy any possible state when the request is done. You can't have a transaction span multiple requests, by design. If you really need that, consider an application server. Anyway, beginning a transaction in one page and waiting for a second request from the client in order to commit it is bad practice, since the wait can be potentially very long (and you need a way to recover from that).

.TM.
--
Marco Colombo
Technical Manager
ESI s.r.l.
Colombo@ESI.it
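The substitutability described above boils down to pg_pconnect being a per-process cache keyed by connection string. A hedged Python mock of that behaviour (illustrative only, not PHP's actual implementation; `pconnect` and `fake_connect` are invented names):

```python
# Mock of pg_pconnect-style behaviour: one cached physical connection
# per connection string, per process. Query semantics are unchanged;
# only the cost of opening the connection is saved.
_pool = {}

def pconnect(conninfo, real_connect):
    """Return a cached connection for conninfo, creating it on first use."""
    if conninfo not in _pool:
        _pool[conninfo] = real_connect(conninfo)
    return _pool[conninfo]

opened = []  # records each physical connection actually opened

def fake_connect(conninfo):
    opened.append(conninfo)
    return object()  # stands in for a real connection handle

c1 = pconnect("dbname=test", fake_connect)
c2 = pconnect("dbname=test", fake_connect)  # cache hit: same handle, no new open
```

Because the cache lives inside one process, this sketch also shows why the pooling cannot be shared across Apache sub-processes, let alone across load-balanced hosts.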
OK, let's consider some model scenarios. Let it be a web server with an embedded language like PHP.

1: Multiprocess server (like Apache 1.x): each process uses one persistent connection, right? One process can serve only one request at a time, right? When the request is finished, the process holds its connection open and awaits a new request. From the point of view of transactions this is OK, because transactions over one persistent connection are "serialized" by nature.

2: One process, but multiple threads. If each thread has its own separate DB connection, it is OK; it is like the previous example, just with the word "process" replaced by "thread".

3: One process, multiple threads, all threads sharing the same one persistent connection. Because one thread serves one request at a time, but threads can run "concurrently" (AFAIK), I am afraid that multiple transactions over the single connection in this scenario will result in a complete mess. Am I right?

Please excuse my English.

----- Original Message -----
From: "Richard Huxton" <dev@archonet.com>
To: "NTPT" <ntpt@seznam.cz>
Cc: "Postgres General" <pgsql-general@postgresql.org>
Sent: Wednesday, February 09, 2005 11:45 AM
Subject: Re: [GENERAL] More concurrent transactions over a single connection

> NTPT wrote:
>> AFAIK (as of 7.4.x) there is one limitation in persistent connections
>> to PostgreSQL from various frontends (
>> http://cz.php.net/manual/en/features.persistent-connections.php ):
>> transactions cannot be used when several concurrent tasks share a
>> single connection.
>>
>> I suggest adding some sort of "context" identifier to the
>> frontend/backend protocol to overcome this limit. I.e. the frontend
>> (PHP, for example) makes ONE persistent connection and different
>> scripts are served over it, but the frontend adds a unique "context"
>> identifier for each script instance, and the PostgreSQL server treats
>> different "contexts" as if they had been sent over different
>> connections. The results are then sorted by "context" by the frontend
>> and fed to the appropriate instance of the PHP script.
>
> You've just reinvented connections. The problem is really at the
> application end, since PHP doesn't provide a middleware layer to manage
> this sort of stuff. Typically, Java-based application servers manage
> this sort of thing for you.
>
>> I think it could help avoid connection startup costs, especially where
>> the database and client are separated by a long network distance
>> and/or connection setup requires some expensive procedure, and it
>> would allow really simple and transparent connection pooling, maybe
>> with some kind of "spare servers" like in Apache (the MinSpareServers
>> and MaxSpareServers configuration directives).
>
> Perhaps take a look at pgpool connection pooling.
>
> --
> Richard Huxton
> Archonet Ltd
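Scenario 3 can be illustrated with a small deterministic mock (Python; a sketch of the shared session state only, not a real driver): one backend session has exactly one transaction state, so BEGIN/COMMIT pairs interleaved by two threads collide.

```python
# Mock of one backend session: a single transaction state shared by
# everyone who uses the connection. The interleaving is written out
# explicitly instead of using real threads, to keep it deterministic.
class SharedConnection:
    def __init__(self):
        self.in_tx = False

    def execute(self, sql):
        cmd = sql.strip().upper()
        if cmd == "BEGIN":
            if self.in_tx:
                # PostgreSQL warns and ignores a BEGIN inside a transaction
                return "WARNING: there is already a transaction in progress"
            self.in_tx = True
        elif cmd in ("COMMIT", "ROLLBACK"):
            self.in_tx = False
        return "OK"

conn = SharedConnection()
conn.execute("BEGIN")    # thread 1 starts its transaction
conn.execute("BEGIN")    # thread 2's BEGIN collides: warning, no new transaction
conn.execute("COMMIT")   # whose commit is this? It ends the ONE shared transaction
```

This is the "complete mess": thread 2's work silently joins thread 1's transaction, and either thread's COMMIT or ROLLBACK decides the fate of both.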
NTPT wrote:
> 3: One process, multiple threads, all threads sharing the same one
> persistent connection. Because one thread serves one request at a time,
> but threads can run "concurrently" (AFAIK), I am afraid that multiple
> transactions over the single connection in this scenario will result in
> a complete mess. Am I right?

Yes - that's the definition of a connection, really. Each connection provides you with a set of configuration values and a context for transactions.

--
Richard Huxton
Archonet Ltd