On Wed, 2004-04-28 at 17:58, Chris Browne wrote:
> listas@miti.com.br ("Kilmer C. de Souza") writes:
> > Oww ... sorry man ...
> > I made a mistake ... there are 10,000 users, and 1,000 of the
> > 10,000 try to access the database at the same time.
> > Can you help me again with this condition?
>
> The issues don't really change. Opening 1,000 concurrent connections
> means spawning 1,000 PostgreSQL backend processes, which will reserve
> a pile of memory and cause a pretty severe performance problem.
>
I think that statement needs some qualifiers, since opening the
processes themselves should cause little or no trouble on the right
hardware. The main database I work on is currently set to handle up to
825 simultaneous connections during peak times, and that is with Perl
DBI-style connection pooling (a rough sketch of that pattern is below).
If it weren't for I/O issues, I'm pretty sure PostgreSQL would have no
problem at all running that load, which really just means we need to
get a faster disk system set up. (Currently the data and WAL live on a
single 10,000 RPM SCSI drive.)
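
For what it's worth, the cheapest way to get that kind of pooling from
Perl DBI is connect_cached(), which hands back an existing handle when
the connection arguments match instead of opening a new backend every
time. The DSN and credentials here are made up for illustration:

    use strict;
    use warnings;
    use DBI;

    # connect_cached() returns the same $dbh for identical arguments,
    # so repeated calls share one backend instead of spawning new ones.
    sub get_dbh {
        return DBI->connect_cached(
            "dbi:Pg:dbname=appdb;host=dbhost",   # hypothetical DSN
            "appuser", "apppass",                # hypothetical credentials
            { RaiseError => 1, AutoCommit => 1 },
        );
    }

Under mod_perl you would normally let Apache::DBI do the same caching
transparently, but the idea is the same.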

While I agree with everyone else in this thread that the OP is not
likely to ever need such a high connection count, there's no reason
PostgreSQL can't support it, provided you have enough RAM and fast
enough disks and you don't shoot yourself in the foot with FK/locking
issues in the app.
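
On the server side it comes down to a few postgresql.conf settings,
plus raising the kernel's shared memory limit (SHMMAX on Linux) to
match. Something like the following, with the caveat that these
numbers are purely illustrative and sort_mem assumes a 7.4-era server:

    # postgresql.conf -- illustrative values, not a recommendation
    max_connections = 1000   # one backend process per connection
    shared_buffers = 16384   # 8KB pages each, so ~128MB shared memory
    sort_mem = 4096          # KB per sort operation, per backend

The thing to watch is that per-backend memory like sort_mem gets
multiplied by the connection count, which is where the "pile of
memory" Chris mentioned actually comes from.
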
Robert Treat
--
Build A Brighter Lamp :: Linux Apache {middleware} PostgreSQL