> I made a mistake ... there are 10,000 users and 1,000 of those 10,000 try to
> access the database at the same time.
I have problems with your numbers. Even if you have 10,000 users who
are ALL online at the same time, in any reasonable period of time (say
60 seconds), how many of them would initiate a request?
In most online applications, 95% OR MORE of all time is spent waiting
for the user to do something. Web-based applications seem to fit that
rule fairly well, because nothing happens at the server end for any
given user until a 'submit' button is pressed.
Consider, for example, a simple name-and-address entry form. A really
fast typist can probably fill out 60-70 of them in an hour. That
means each user is submitting a request every 50-60 seconds. Thus
if there were 10,000 users doing this FULL TIME, they would generate
something under 200 requests/second.
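To make that arithmetic explicit, here is a quick back-of-envelope sketch
in Python; the form-completion rates are just the assumptions above, not
measurements from any real system:

    # Back-of-envelope request-rate estimate.
    # The figures are the assumed values from the paragraph above.
    users = 10_000                     # users submitting forms full time
    forms_per_hour = (60, 70)          # assumed fast-typist range

    for fph in forms_per_hour:
        seconds_per_request = 3600 / fph          # one submit every ~51-60 s
        total_rps = users / seconds_per_request   # aggregate requests/second
        print(f"{fph} forms/hour -> {total_rps:.0f} requests/second")

    # Prints roughly 167 and 194 requests/second -- "something under 200".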
In practice, I wouldn't expect to see more than 50-75 requests/second,
and it shouldn't be too hard to design a hardware configuration capable
of supporting that; disk speed and memory size are likely to be the
major bottleneck points.
I don't know if anyone has ever set up a queuing theory model for a
PostgreSQL+Apache environment; there are probably too many individual
tuning factors (not to mention application-specific factors) to make
a generalizable model practical.
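For what it's worth, a toy single-server (M/M/1) model gives a feel for
the kind of estimate such a model would produce. The 10 ms service time
below is an arbitrary assumption for illustration, not a measured
Apache or PostgreSQL figure:

    # Toy M/M/1 queuing sketch -- purely illustrative, not a tuned model.
    # lam = arrival rate (requests/second), mu = service rate (requests/second).
    def mm1_stats(lam, mu):
        rho = lam / mu                  # server utilization
        if rho >= 1:
            return rho, float("inf")    # queue grows without bound
        wait = 1 / (mu - lam)           # mean time in system (seconds)
        return rho, wait

    # Assume a hypothetical 10 ms average service time (mu = 100 req/s)
    # and the 75 req/s upper estimate from above.
    rho, wait = mm1_stats(lam=75, mu=100)
    print(f"utilization {rho:.0%}, mean response time {wait*1000:.0f} ms")
    # -> utilization 75%, mean response time 40 ms

A real installation would need per-query service times and a far richer
model than this, which is exactly why a generalizable one is unlikely.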
--
Mike Nolan