Thread: Can PostgreSQL handle 200 connections (two-tier)?
Hi,

I want to know if there are real-world cases of PostgreSQL supporting 200 simultaneous connections in a two-tier application. I'm asking because I'm considering PostgreSQL for the next application I need to write.

A bit off-topic, but I'd also like to hear developers' opinions on two-tier versus three-tier applications with this number of connections to a PostgreSQL database.

Thanks in advance

Geraldo Lopes de Souza
Hi,

We have well over 200 simultaneous connections in a two-tier environment. We run a web cluster with well-tuned Apache servers (KeepAlive off) and use transactions on every request. Each request can run up to ~16 queries, all wrapped in a transaction with an appropriate rollback if anything fails. We have tons of logging, and those ~16 queries all together take about 0.05 seconds. We also have table sizes in the millions of tuples. About a third of the queries are updates or inserts, and some are 5-table joins. We handle over a million transactions (~16 million queries) every day.

On the other hand, we throw some hardware at this also :-)

  4 GB RAM
  Dual Athlon MP 1600+
  RAID drives

Other settings:

  Max files: 300,000
  Shared RAM: 1.6 GB
  OS: Red Hat 7.3

- Ericson Smith
eric@did-it.com
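For illustration, the transaction-per-request pattern Ericson describes might look roughly like the sketch below, written in Python with psycopg2 (the thread itself shows no code; the connection string, table names, and queries here are hypothetical):

    import psycopg2

    def handle_request(dsn, account_id, payload):
        """Run all of a request's queries inside one transaction,
        rolling back if anything fails (hypothetical schema)."""
        conn = psycopg2.connect(dsn)
        try:
            cur = conn.cursor()
            # ... up to ~16 queries per request, for example:
            cur.execute("SELECT plan FROM accounts WHERE id = %s", (account_id,))
            plan = cur.fetchone()
            cur.execute(
                "INSERT INTO hits (account_id, payload) VALUES (%s, %s)",
                (account_id, payload),
            )
            conn.commit()      # all of the request's work lands together...
            return plan
        except Exception:
            conn.rollback()    # ...or none of it does
            raise
        finally:
            conn.close()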
On Tue, 25 Jun 2002, Geraldo Lopes de Souza wrote:
> I want to know if there are real-world cases of PostgreSQL supporting
> 200 simultaneous connections in a two-tier application.

Our web site and content management system (www.mcgill.ca) is a home-grown system built around Perl, PHP and PostgreSQL running on PPC Linux (YellowDog). The database isn't taxed very heavily, because I designed the system to cache preferences and such (results of SQL queries) in the file system next to the page. However, whenever I make a system-wide change (a little under 7,000 pages), I have a Perl script that runs through the file system updating the cached files. During this time, I've easily seen over 500 connections to the database without any noticeable effect on the performance of the web server (which still does live queries on many pages if the viewer is anonymous, and on all pages if the viewer is logged in).

Our web server is an 867 MHz G4 'QuickSilver' and the Postgres machine is a dual 800 MHz QuickSilver as well (running in single-CPU mode thanks to a hardware bug). Both have 1 GB of RAM, and the Postgres machine spends most of its day yawning. :-)

Cheers,

Chris

--
Christopher Murtagh
Webmaster / Sysadmin
Web Communications Group, McGill University
Montreal, Quebec, Canada
Tel.: (514) 398-3122
Fax: (514) 398-2017
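A rough sketch of that cache-beside-the-page idea, again in Python for illustration (the McGill system itself is Perl/PHP; the file naming, query, and schema below are hypothetical):

    import json
    import os
    import psycopg2

    CACHE_SUFFIX = ".prefs.json"  # hypothetical naming convention

    def page_prefs(conn, page_path, page_id):
        """Return a page's preferences, querying PostgreSQL and writing
        a cache file next to the page only on a miss (hypothetical schema)."""
        cache_path = page_path + CACHE_SUFFIX
        if os.path.exists(cache_path):
            with open(cache_path) as f:
                return json.load(f)   # served from the file system, no DB hit
        cur = conn.cursor()
        cur.execute("SELECT key, value FROM prefs WHERE page_id = %s", (page_id,))
        prefs = dict(cur.fetchall())
        with open(cache_path, "w") as f:
            json.dump(prefs, f)       # cached next to the page for later requests
        return prefs

A system-wide change then amounts to rewriting (or deleting) the cache files across the ~7,000 pages, which is exactly when the burst of database connections Chris mentions shows up.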