Thread: Reality check

From: John Gage

Configuration:

one database

three tables, none more than a few thousand rows with only a few
fields per row

one, repeat one, Perl CGI script on the server

4,500 users simultaneously invoking the cgi script via http requests
from remote browsers (simultaneously meaning within perhaps seconds of
each other)

Each invocation of the script opens and closes a connection to the
database, implying perhaps 4,500 ~simultaneous~ connections to the
database

The script uses a single, repeat single, *postgres* user to
access the database, a user who has read and write privileges on the
tables but nothing more (no table creation, etc.)

In other words, there are 4,500 unique users accessing the website,
each uniquely identified via http authentication, but only ONE
postgres user who they all share to access the database, which implies
that a single, unique postgres user will be opening and closing
connections to the database perhaps a thousand times a minute.

Does this work?

If it doesn't, are there any suggestions?

Thank you very much for your time and thoughts,

John Gage

P.S. Internet users are identified within the tables in a field
containing their Apache environment-variable user name.  They cannot
collide with each other in the tables.

Re: Reality check

From: "A. Kretschmer"
In response to John Gage:
> In other words, there are 4,500 unique users accessing the website,
> each uniquely identified via http authentication, but only ONE
> postgres user who they all share to access the database, which implies
> that a single, unique postgres user will be opening and closing
> connections to the database perhaps a thousand times a minute.
>
> Does this work?
>
> If it doesn't, are there any suggestions?

I think you should consider a connection pooler like pgbouncer.
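For reference, a minimal pgbouncer.ini along these lines lets thousands of
short-lived CGI clients share a small number of real PostgreSQL connections
(the database name, paths, and pool sizes below are placeholders to adapt,
not values from this thread):

```ini
[databases]
; "mydb" is a placeholder; point it at the real database
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction   ; each transaction borrows a server connection
max_client_conn = 5000    ; accept all ~4,500 CGI clients
default_pool_size = 20    ; but hold only ~20 real PostgreSQL connections
```

The CGI script then connects to port 6432 instead of 5432. Note that
transaction pooling assumes the script does not rely on session state
(temp tables, session-level prepared statements, etc.).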

Regards, Andreas
--
Andreas Kretschmer
Kontakt:  Heynitz: 035242/47150,   D1: 0160/7141639 (mehr: -> Header)
GnuPG: 0x31720C99, 1006 CCB4 A326 1D42 6431  2EB0 389D 1DC2 3172 0C99

Re: Reality check

From: John Gage
This answer is exactly on target, I surmise.

But what I need is an ISP who uses pgbouncer (or who would, though
that is much less likely, permit me to install it).

In other words, I can design the database and the perl script, but I
am at the mercy of whatever ISP I try to use.

Do all ISPs use pgbouncer, or can I be referred to one that does?

Thanks

John


On May 28, 2010, at 9:56 AM, A. Kretschmer wrote:

> In response to John Gage :
>> In other words, there are 4,500 unique users accessing the website,
>> each uniquely identified via http authentication, but only ONE
>> postgres user who they all share to access the database, which
>> implies
>> that a single, unique postgres user will be opening and closing
>> connections to the database perhaps a thousand times a minute.
>>
>> Does this work?
>>
>> If it doesn't, are there any suggestions?
>
> I think you should consider a connection pooler like pgbouncer.
>


Re: Reality check

From: Thom Brown
On 28 May 2010 08:56, A. Kretschmer <andreas.kretschmer@schollglas.com> wrote:
> In response to John Gage :
>> In other words, there are 4,500 unique users accessing the website,
>> each uniquely identified via http authentication, but only ONE
>> postgres user who they all share to access the database, which implies
>> that a single, unique postgres user will be opening and closing
>> connections to the database perhaps a thousand times a minute.
>>
>> Does this work?
>>
>> If it doesn't, are there any suggestions?
>
> I think you should consider a connection pooler like pgbouncer.
>

+1

Connections have their own overhead, and I've found pgbouncer to be
effective.  You may also wish to consider caching if it's not critical
that users see real-time data; that will reduce the number of queries
hitting the database.
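A minimal sketch of that kind of caching, as an in-process read-through
cache (the 5-second TTL and the query function are illustrative, not from
this thread):

```python
import time

# Toy read-through cache: results are served from memory until they expire,
# so repeated requests within the TTL never touch the database.

class TTLCache:
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.store = {}  # key -> (expiry_time, value)

    def get_or_fetch(self, key, fetch):
        now = time.time()
        hit = self.store.get(key)
        if hit and hit[0] > now:
            return hit[1]            # fresh: no database query needed
        value = fetch()              # stale or missing: query the database once
        self.store[key] = (now + self.ttl, value)
        return value

calls = 0
def expensive_query():          # placeholder for a real database query
    global calls
    calls += 1
    return "rows"

cache = TTLCache(ttl_seconds=5)
for _ in range(1000):
    cache.get_or_fetch("report", expensive_query)
print(calls)  # 1 -- one database query served 1,000 requests
```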

Regards

Thom