Thread: Looking for installations with a large number of concurrent users
Hello all,

We're implementing a fairly large J2EE application; I'm estimating around 450,000 concurrent users at high peak, performing reads and writes, and we have a very high performance requirement. I'll be using connection pooling (probably the pooling delivered with Geronimo).

I'd like to get an idea of "how big can I go" without running into context switch storms or hitting some other wall. The design actually calls for multiple databases, but I'm trying to get a good idea of the max size per database. (I.e., I don't want 50+ database servers if I can avoid it.)

We'll be on 8.4 (or 8.5) by the time we go live, and SLES Linux (for now). I don't have hardware yet; basically, we'll purchase enough hardware to handle whatever we need...

Is anyone willing to share their max connections and maybe some rough hardware sizing (CPU/mem)?

Thanks
Dave
David Kerr <dmk@mr-paradox.net> wrote:

> We'll be on 8.4 (or 8.5) by the time we go live and SLES linux (for
> now). I don't have hardware yet, basically, we'll purchase enough
> hardware to handle whatever we need...
>
> Is anyone willing to share their max connections and maybe some
> rough hardware sizing (cpu/mem?).

We're on SLES 10 SP 2 and are handling a web site which gets two to three million hits per day, running tens of millions of queries, while functioning as a replication target receiving about one million database transactions to modify data, averaging about 10 DML statements each, on one box with the following hardware:

16 Xeon X7350 @ 2.93GHz
128 GB RAM
36 drives in RAID 5 for data for the above
2 mirrored drives for xlog
2 mirrored drives for the OS
12 drives in RAID 5 for another database (less active)
a decent battery-backed RAID controller, using write-back

This server also runs our Java middle tiers for accessing the database on the box (using a home-grown framework). We need to run three multiprocessor blades running Tomcat to handle the rendering for the web application. The renderers tend to saturate before this server.

This all runs very comfortably on the one box, although we have multiples (in different buildings) kept up-to-date on replication, to ensure high availability.

The connection pool for the web application is maxed at 25 active connections; the replication at 6. We were using higher values, but found that shrinking the connection pool down to this improved throughput (in a saturation test) and response time (in production). If the server were dedicated to PostgreSQL only, more connections would probably be optimal.

I worry a little when you mention J2EE. EJBs were notoriously poor performers, although I hear there have been improvements. Just be careful to pinpoint your bottlenecks so you can address the real problem if there is a performance issue.

-Kevin
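[Editor's sketch: the hard cap Kevin describes — a small fixed number of active connections shared by many application threads — can be illustrated with a minimal generic pool. The class and method names here are illustrative, not a real pooling library's API, and the "resources" stand in for database connections.]

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Minimal sketch of a fixed-cap resource pool: no matter how many
// application threads exist, at most `cap` resources (connections)
// are checked out at once; extra callers block until one is returned.
public class CappedPool<T> {
    private final BlockingQueue<T> idle;

    public CappedPool(Iterable<T> resources, int cap) {
        idle = new ArrayBlockingQueue<>(cap);
        int n = 0;
        for (T r : resources) {
            if (n++ >= cap) break;   // never manage more than `cap` resources
            idle.add(r);
        }
    }

    // Blocks until a resource is free, enforcing the cap.
    public T acquire() {
        try {
            return idle.take();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new RuntimeException("interrupted while waiting for pool", e);
        }
    }

    // Returns a resource to the pool, waking one blocked acquirer.
    public void release(T resource) {
        idle.offer(resource);
    }
}
```

A production pool would add health checks, timeouts, and connection validation; the point of the sketch is only that a small cap (e.g. 25) bounds concurrent database work by design.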
On Wed, Jun 10, 2009 at 11:40:21AM -0500, Kevin Grittner wrote:

> We're on SLES 10 SP 2 and are handling a web site which gets two to
> three million hits per day, running tens of millions of queries, while
> functioning as a replication target receiving about one million
> database transactions to modify data, averaging about 10 DML
> statements each, on one box with the following hardware:

[snip]

Thanks! That's all great info; it puts me much more at ease.

> The connection pool for the web application is maxed at 25 active
> connections; the replication at 6. We were using higher values, but
> found that shrinking the connection pool down to this improved
> throughput (in a saturation test) and response time (in production).
> If the server were dedicated to PostgreSQL only, more connections
> would probably be optimal.

Ok, so it looks like I need to focus some testing there to find the optimum for my setup. I was thinking 25 for starters, but I think I'll bump that to 50.

> I worry a little when you mention J2EE. EJBs were notoriously poor
> performers, although I hear there have been improvements. Just be
> careful to pinpoint your bottlenecks so you can address the real
> problem if there is a performance issue.

Sounds good, thanks for the heads up.

Dave
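[Editor's sketch: the pool-size testing Dave plans can be skeletoned as a saturation test. This hypothetical harness pushes many worker threads through a Semaphore capped at the candidate pool size and records the peak number of "connections" in use; swapping the sleep for real queries and comparing throughput at caps of 25, 50, etc. is the kind of experiment Kevin describes. All names here are invented for illustration.]

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Skeleton of a saturation test: regardless of how many workers run,
// the semaphore guarantees at most `poolCap` are "using a connection"
// simultaneously. Returns the observed peak concurrency.
public class PoolSaturationTest {
    public static int peakInUse(int poolCap, int workers) throws InterruptedException {
        Semaphore pool = new Semaphore(poolCap);
        AtomicInteger inUse = new AtomicInteger();
        AtomicInteger peak = new AtomicInteger();
        ExecutorService exec = Executors.newFixedThreadPool(workers);
        for (int i = 0; i < workers; i++) {
            exec.submit(() -> {
                try {
                    pool.acquire();                        // check out a connection
                    int now = inUse.incrementAndGet();
                    peak.accumulateAndGet(now, Math::max); // track peak concurrency
                    Thread.sleep(5);                       // stand-in for a real query
                    inUse.decrementAndGet();
                    pool.release();                        // return it to the pool
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }
        exec.shutdown();
        exec.awaitTermination(30, TimeUnit.SECONDS);
        return peak.get();
    }
}
```

In a real test you would measure throughput and response time at each cap, as Kevin did, rather than just peak concurrency; the value of the skeleton is showing that the cap, not the worker count, bounds simultaneous database load.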