Thread: RE: Connection Pooling...(Repost)...please do help..
I haven't found PG to have much connection overhead, so why would opening/closing a connection per query require server-side connection pooling? You might try having your application acquire a connection on demand and then use a timeout mechanism that discards the connection after a certain amount of idle time. That way, if you have to fire off hundreds of small queries you don't encounter connection overhead, and if the machine is idle it disconnects and the server doesn't get loaded down with tons of idle backends.

This may sound overly complicated, but you might move the database operations out of the client tier and into an application tier. DCOM would allow you to query from the COM layer and broker objects containing the results back to the client. This way the middle tier could maintain its own pool of connections internally, plus you then get to take advantage of the other benefits of a distributed system.

As a side note, I have a little research project going right now. It is an HTTP XML server that acts as a database liaison by accepting HTTP POST requests (USER, PASSWORD, SQL, etc.) and returning the results as XML. It is Java based, but you could do it with any language you want. Coming from a Win32 client, one could use IE 5's XML parser to process the results. You could implement something very similar in a short amount of time (this is partially up and running with less than a week's work). This is a perfect place to implement connection pooling (which happens to be what I am currently adding to it). I would be glad to provide source once I get it stabilized, although I don't know if it will ever be mature enough for production work... I just wanted to learn XML. :)

Anyway, I just wanted to throw some ideas out there...

Joel

-----Original Message-----
From: sk@pobox.com [mailto:sk@pobox.com]
Sent: Friday, December 15, 2000 11:32 AM
To: pgsql-interfaces@postgresql.org
Subject: Re: [INTERFACES] Connection Pooling...(Repost)...please do help..

Thanks Joel,

Yes there is. But it only keeps your unused connections open for a given amount of time, so as to avoid opening new connections when you open a connection again within the stipulated time.

In our case, we keep the connection open as long as the application has any forms open. The problem in this case is that multiple users from different workstations are opening connections on the PG server on RH 6.2.

What we want to do is start closing connections from applications after each query and somehow pool the connections on the Linux server.

Would like comments on this strategy and a how-to on this.... help? anyone?

With best regards.

Sanjay.

On Wed, 13 Dec 2000 15:19:53 -0500, in tci.lists.rdbms.postgresql.interfaces you wrote:

>I know that the ODBC "engine" on Win32-based systems is capable of
>performing connection pooling, but I am not sure if the driver has to
>support it or not... anyone? Check the "Connection Pooling" tab on the ODBC
>admin window in Control Panel. If you don't have one, you might try
>upgrading MDAC.
>
>Joel
>
>-----Original Message-----
>From: Sanjay Arora [mailto:sk@pobox.com]
>Sent: Wednesday, December 13, 2000 7:19 PM
>To: pgsql-interfaces@postgresql.org
>Subject: [INTERFACES] Connection Pooling...(Repost)...please do help...
>
>
>I am using PostgreSQL v. 7.0.2 on RH Linux 6.2 on the server, with a VB6
>application accessing the DB through the PostgreSQL ODBC driver v. 6.50.
>
>I want to pool my connections on the PostgreSQL server. Can some people
>give me some pointers? Some web resources for studying this subject? My
>experience with connection pooling is limited to MTS in MS environs.
>
>With best regards.
>
>Sanjay.
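Joel's acquire-on-demand idea amounts to a small wrapper that opens a JDBC connection only when a query needs one and drops it after a period of inactivity. A minimal sketch in Java, assuming the PostgreSQL JDBC driver is on the classpath; the class, field and method names here are purely illustrative and are not taken from Joel's project:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

/** Opens a connection on demand and drops it again after a period of idleness. */
public class IdleTimeoutConnection {
    private final String url, user, password;
    private final long idleMillis;
    private Connection conn;
    private long lastUsed;

    public IdleTimeoutConnection(String url, String user, String password, long idleMillis) {
        this.url = url;
        this.user = user;
        this.password = password;
        this.idleMillis = idleMillis;
    }

    /** Returns an open connection, creating one only if none is currently held. */
    public synchronized Connection get() throws SQLException {
        if (conn == null || conn.isClosed()) {
            conn = DriverManager.getConnection(url, user, password);
        }
        lastUsed = System.currentTimeMillis();
        return conn;
    }

    /** Call periodically (e.g. from a background timer) to discard an idle connection. */
    public synchronized void closeIfIdle() throws SQLException {
        if (conn != null && !conn.isClosed()
                && System.currentTimeMillis() - lastUsed > idleMillis) {
            conn.close();
            conn = null;
        }
    }
}

A background java.util.Timer can call closeIfIdle() every few seconds, so a burst of small queries reuses one backend while an idle workstation holds no backend at all.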
Connection Pooling....an interesting question!! (was..Connection Pooling...(Repost)...please do help...)
From
sk@pobox.com (Sanjay Arora)
Date:
My quest for pooling ODBC connections.... some interesting questions have occurred to me..... at least they seem interesting to me ;-))

Problem:

Each machine keeps one connection open while the application is executing..... it may or may not use ODBC driver connection pooling... it will just keep the connection open for repeat queries... presently we are getting the application to keep the connection open.

Now, supposing each machine keeps one connection open, how many connections can a PG machine take? So we will need connection cost figures:

1....how much memory does each connection consume?
2....what is the query cost? How does one go about estimating the cost of a query? CPU? Memory? Other variables?

In our particular case, we are using a Pentium 550 server with an EIDE HDD, Linux 6.2 and 128 MB RAM. We are running Apache & mail applications with medium to low load on it, in addition to two PG daemons, one holding production data and one for development testing.

Presently, we have about 20 workstations opening one connection each to the PG daemons, and the machines will increase to around 200 in 6 months or so.

So how do we go about calculating the overheads for this scenario? What are the different metrics we would need to evaluate in order to calculate PG load costs and to select the machine, RAM & other issues?

Second, I thought about writing a connection manager on the server as middleware between ODBC & PG, and I have some interesting questions there too:

Scenario one:

ODBC opens connections on this connection manager running on the server, and it in turn opens connections on the PG daemon.

1...Should the connection manager be run as an independent daemon/broker, or should PG be modified to serve that function?
2...How can the connection manager handle user-level security, if implemented in the DB?
3...If a single user is making all the connections to the server (various programs running on the server itself), can such a connection manager be possible?

Would you please make comments on the feasibility/non-feasibility of the above? Any other way this sort of thing can be achieved? I am thinking of buying a C book.

With best regards.
Sanjay.

On Fri, 15 Dec 2000 15:04:26 -0500, in tci.lists.rdbms.postgresql.interfaces you wrote:

>I haven't found PG to have much connection overhead, so why would opening/closing
>a connection per query require server-side connection pooling? [...]
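The connection manager described in scenario one is, at its core, a bounded pool of real backend connections that many client requests share. Below is a minimal sketch of such a pool in Java, assuming JDBC; the class and method names are illustrative only, and a real manager would also have to address the user-level security question raised in point 2:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

/** A fixed-size pool: many callers share a handful of real PostgreSQL connections. */
public class SimpleConnectionPool {
    private final BlockingQueue<Connection> idle;

    public SimpleConnectionPool(String url, String user, String password, int size)
            throws SQLException {
        idle = new ArrayBlockingQueue<>(size);
        for (int i = 0; i < size; i++) {
            idle.add(DriverManager.getConnection(url, user, password));
        }
    }

    /** Blocks until one of the pooled connections is free. */
    public Connection borrow() throws InterruptedException {
        return idle.take();
    }

    /** Hands a connection back for the next caller to reuse. */
    public void giveBack(Connection conn) {
        idle.offer(conn);
    }
}

With a pool of, say, 10 connections, 200 workstations going through such a manager still produce only 10 PostgreSQL backends; callers simply wait briefly when all 10 are busy.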
Re: Connection Pooling....an interesting question!! (was..Connection Pooling...(Repost)...please do help...)
From
"Adam Lang"
Date:
As (I believe) Joel mentioned, you should use a distributed architecture. Clients shouldn't directly access your db server. I believe it is "acceptable" if you are only looking at a small app that 10 people are going to use, but 200 clients is a lot.

You should have Postgres on one tier, your clients on another, and devise a middle tier that acts as a relay between your clients and Postgres. That way the 200 connections are not handled by Postgres. Postgres will only need to handle the 10 or so you pool with the middle tier.

You said you were familiar with MTS pooling... that is basically your answer. Put your business logic on MTS to talk to and pool with Postgres, and have your clients access the MTS. Not to mention it is easier to manage your code on one server than on 200 workstations.

As for the other reply involving the HTTP server acting as a liaison for a db and spitting out XML... that would be a very nice add-on to Postgres. I have heard several people ask whether Postgres is going to support XML (much like SQL Server 2000 does now). Something like that would move Postgres into that arena without the core developers having to worry about it. It could actually turn into a whole offshoot project. I was thinking of looking into it also, using the Apache Xerces project (their XML parser).

Adam Lang
Systems Engineer
Rutgers Casualty Insurance Company
http://www.rutgersinsurance.com

----- Original Message -----
From: "Sanjay Arora" <sk@pobox.com>
To: <pgsql-interfaces@postgresql.org>
Sent: Saturday, December 16, 2000 10:51 AM
Subject: [INTERFACES] Connection Pooling....an interesting question!! (was..Connection Pooling...(Repost)...please do help...)

> My quest for pooling ODBC connections.... some interesting questions
> have occurred to me..... at least they seem interesting to me ;-)) [...]
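The HTTP XML liaison idea comes down to accepting a SQL string over HTTP, running it on a pooled connection, and serializing the result set as XML. The serialization step might look roughly like the sketch below; the element names and escaping helper are invented for illustration and are not taken from Joel's project:

import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.SQLException;

/** Turns a JDBC ResultSet into a simple <resultset><row>...</row></resultset> document. */
public class XmlResultWriter {
    public static String toXml(ResultSet rs) throws SQLException {
        ResultSetMetaData meta = rs.getMetaData();
        int cols = meta.getColumnCount();
        StringBuilder xml = new StringBuilder("<resultset>");
        while (rs.next()) {
            xml.append("<row>");
            for (int i = 1; i <= cols; i++) {
                String name = meta.getColumnName(i);   // assumes column names are valid element names
                String value = rs.getString(i);
                xml.append("<").append(name).append(">")
                   .append(value == null ? "" : escape(value))
                   .append("</").append(name).append(">");
            }
            xml.append("</row>");
        }
        xml.append("</resultset>");
        return xml.toString();
    }

    private static String escape(String s) {
        return s.replace("&", "&amp;").replace("<", "&lt;").replace(">", "&gt;");
    }
}

A front end (servlet, plain socket server, or whatever the middle tier uses) would read USER, PASSWORD and SQL from the POST body, borrow a pooled connection, run the statement, and return the string produced above as the HTTP response.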
Re: Connection Pooling....an interesting question!! (was..Connection Pooling...(Repost)...please do help...)
From
Oleg Bartunov
Date:
On Sat, 16 Dec 2000, Adam Lang wrote:

> As (I believe) Joel mentioned, you should use a distributed architecture.
> Clients shouldn't directly access your db server. I believe it is
> "acceptable" if you are only looking at a small app that 10 people are going
> to use, but 200 clients is a lot.
>
> You should have Postgres on one tier, your clients on another, and devise a
> middle tier that acts as a relay between your clients and Postgres. That
> way the 200 connections are not handled by Postgres. Postgres will only
> need to handle the 10 or so you pool with the middle tier. [...]

Brrr, we have 128 persistent connections without any problem. Just use the -N option. I don't remember the maximum number of backends compiled in by default, but you can always change this number. But you're right when you're speaking about the 3-tier model. We're experimenting with CORBA and preliminary results are promising.

Regards,
Oleg
_____________________________________________________________
Oleg Bartunov, sci.researcher, hostmaster of AstroNet,
Sternberg Astronomical Institute, Moscow University (Russia)
Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/
phone: +007(095)939-16-83, +007(095)939-23-83
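For reference, the -N option Oleg mentions is passed to the postmaster at startup, and the shared-buffer count (-B) must be at least twice the backend limit; the data directory and numbers below are only an example:

postmaster -D /usr/local/pgsql/data -N 128 -B 256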
Re: Connection Pooling....an interesting question!! (was..Connection Pooling...(Repost)...please do help...)
From
"Adam Lang"
Date:
I'm not saying anything about Postgres not being able to handle that many connections. I'm just saying it shouldn't if it doesn't have to.

Adam Lang
Systems Engineer
Rutgers Casualty Insurance Company
http://www.rutgersinsurance.com

----- Original Message -----
From: "Oleg Bartunov" <oleg@sai.msu.su>
To: "Adam Lang" <aalang@rutgersinsurance.com>
Cc: <pgsql-interfaces@postgresql.org>
Sent: Saturday, December 16, 2000 12:28 PM
Subject: Re: [INTERFACES] Connection Pooling....an interesting question!! (was..Connection Pooling...(Repost)...please do help...)

> Brrr, we have 128 persistent connections without any problem.
> Just use the -N option. [...]
Re: Connection Pooling....an interesting question!! (was..Connection Pooling...(Repost)...please do help...)
From
sk@pobox.com (Sanjay Arora)
Date:
Well, that essentially means that I have to deploy one more server with MTS on it.... isn't there any way that I can do this thing on the Linux server? If I have to deploy one more machine... I would like that to be a Linux one ;-))

We are planning to shift our in-house apps to Linux & Java (GUI using Swing), so I would like to bypass MS, if that's at all possible somehow. I understand I would be able to pool my connections server-side using Java, but presently I am stuck with ODBC.

In any case, if somebody can guide me on how to calculate the connection load & query load, I shall be very thankful.

With best regards.
Sanjay.

PS: I agree that the XML idea is a terrific one and I shall definitely wait till someone develops it.... just wish I had the capability to do it myself.... anyways.... someday ;-))

On Sat, 16 Dec 2000 12:36:06 -0500, in tci.lists.rdbms.postgresql.interfaces you wrote:

>I'm not saying anything about Postgres not being able to handle that many
>connections. I'm just saying it shouldn't if it doesn't have to. [...]
Re: Connection Pooling....an interesting question!! (was..Connection Pooling...(Repost)...please do help...)
From
"Adam Lang"
Date:
The options that I know of for doing distributed programming with a Windows app but a non-Windows middle tier are these:

1. Use CORBA. This is the same idea as DCOM, except there are CORBA libraries for most OSes.
2. Use HTTP XML servers to act as the middle tier. This way it doesn't matter what platforms you are using.
3. Program your client app and server app to talk to each other over TCP/IP sockets (a minimal client sketch follows this message).

I think there is also an ActiveX/DCOM patch for Linux. I've never tried it out. Here is the link to a FAQ about it: http://www.softworksltd.com/dcomlinuxfaq.html  If you do try it, please let me know your experiences. I'd be more than interested to hear.

As for calculating load... sorry, can't help you there.

Adam Lang
Systems Engineer
Rutgers Casualty Insurance Company
http://www.rutgersinsurance.com

----- Original Message -----
From: "Sanjay Arora" <sk@pobox.com>
To: <pgsql-interfaces@postgresql.org>
Sent: Saturday, December 16, 2000 2:52 PM
Subject: Re: [INTERFACES] Connection Pooling....an interesting question!! (was..Connection Pooling...(Repost)...please do help...)

> Well, that essentially means that I have to deploy one more server
> with MTS on it.... isn't there any way that I can do this thing on the
> Linux server? [...]
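For the plain-sockets option, the client side is little more than a socket that sends a request line to the middle tier and reads back the reply, while the middle tier holds the pooled PostgreSQL connections. A bare-bones client sketch in Java; the host name, port, and one-line protocol are invented for illustration:

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;

/** Sends one request line to the middle tier and prints whatever it returns. */
public class MiddleTierClient {
    public static void main(String[] args) throws IOException {
        try (Socket socket = new Socket("middle-tier.example.com", 9000);
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(socket.getInputStream()))) {
            out.println("SELECT count(*) FROM customers");  // request understood by the middle tier
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);                    // e.g. rows serialized as XML
            }
        }
    }
}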
"Clark, Joel" wrote: > > I haven't found PG to have much connection overhead, why would open/closing > a connection-per-query require server side connection pooling? Each connection causes the backend to fork. With a heavy load you'll feel the overhead of creating and closing so many connections. -- Joseph Shraibman jks@selectacast.net Increase signal to noise ratio. http://www.targabot.com