"Arnold Gamboa" <arnold@php4us.com> writes:
> We're about to build a "huge" website now. I got tied up in signing the
> contract without really getting enough information about PgSQL since this
> what we plan to implement with PHP (normally we use mySQL but i guess it
> does not fit for huge databases like that).
Can you do connection pooling and client-side caching of database queries
in PHP? In my work with Java, that is where we gain the most speed.
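To illustrate the client-side caching idea: a minimal sketch in Python (the concept carries over to PHP), where repeated identical SELECTs within a short TTL skip the round trip to the database. The `run_query` callback is a stand-in for your real driver call, not any particular API:

```python
import time

class QueryCache:
    """Cache query results in the client process for a short TTL,
    so repeated identical SELECTs skip a database round trip."""

    def __init__(self, ttl_seconds=30):
        self.ttl = ttl_seconds
        self.store = {}  # sql text -> (timestamp, rows)

    def get(self, sql, run_query):
        now = time.time()
        hit = self.store.get(sql)
        if hit is not None and now - hit[0] < self.ttl:
            return hit[1]          # fresh cached rows, no database work
        rows = run_query(sql)      # cache miss: hit the database
        self.store[sql] = (now, rows)
        return rows

# usage: the second identical query is served from the cache
cache = QueryCache(ttl_seconds=30)
calls = []
def fake_run_query(sql):          # stand-in for a real driver call
    calls.append(sql)
    return [("row1",)]

cache.get("SELECT * FROM users", fake_run_query)
cache.get("SELECT * FROM users", fake_run_query)
print(len(calls))  # the database was queried only once
```

Obviously only safe for queries whose results can be a little stale; anything user-specific or freshly updated should bypass the cache.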
>
> Here's my problem.. We're about to build a site like hitbox.com where there
> is a large amount of database required.. If say there is 100,000 users with
> 1000 page hits per day for each, and everything will be logged, you could
> imagine how huge this will be. I'm just so "nervous" (really, that's the
> term) if we implement this and later on experience a slow down or worse than
> that, crash in the server.
How many database queries do you run per page hit?
How many database inserts/updates do you run per page hit?
Are you using the database for httpd access logging, or is it
application-level logging? Either way, you may want to look into an
architecture with a dedicated box for the logging.
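One way a dedicated logging box helps: the application buffers hits and ships them in batches instead of issuing one INSERT per page hit on the main database. A hedged sketch (all names illustrative; `write_batch` would be a multi-row INSERT on the logging machine):

```python
class HitLogger:
    """Buffer page hits in memory and flush them in one batch,
    so the request path never waits on a per-hit log INSERT."""

    def __init__(self, flush_size, write_batch):
        self.flush_size = flush_size
        self.write_batch = write_batch  # e.g. batched INSERT on the logging box
        self.buffer = []

    def log_hit(self, user_id, url):
        self.buffer.append((user_id, url))
        if len(self.buffer) >= self.flush_size:
            self.flush()

    def flush(self):
        if self.buffer:
            self.write_batch(self.buffer)
            self.buffer = []

# usage: 7 hits with a batch size of 3 produce batches of 3, 3, and 1
batches = []
logger = HitLogger(flush_size=3, write_batch=batches.append)
for i in range(7):
    logger.log_hit(i, "/index")
logger.flush()  # push the final partial batch
print([len(b) for b in batches])  # → [3, 3, 1]
```

The trade-off is that a crash can lose the unflushed buffer, which is usually acceptable for hit statistics.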
But most important, test with real data. Populate your database and run
stress tests.
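A stress test can be as simple as hammering the pageview path from several concurrent workers and measuring throughput. A minimal sketch, with a sleep standing in for the real queries a pageview would run:

```python
import threading
import time

def stress_test(pageview, workers=8, requests_per_worker=100):
    """Run `pageview` concurrently from several threads and
    return the achieved pageviews per second."""
    def worker():
        for _ in range(requests_per_worker):
            pageview()

    threads = [threading.Thread(target=worker) for _ in range(workers)]
    start = time.time()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    elapsed = time.time() - start
    return workers * requests_per_worker / elapsed

# stand-in workload; replace with a function that runs your real
# 4 queries + 1 insert against a populated database
rate = stress_test(lambda: time.sleep(0.001))
print(f"{rate:.0f} pageviews/sec")
```

The important part is the note above: populate the database with realistic volumes first, since a query plan that is fast on 1,000 rows can fall over on 100 million.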
I was doing some testing on a portal my company developed with
PostgreSQL as the backend database. Running on my Linux laptop (P466,
128MB RAM) with Apache JServ and PostgreSQL 7.0.2, I managed about 20
pageviews a second. Each pageview averaged 4 queries and 1 insert.
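For scale, the numbers quoted earlier in this thread work out as follows (a back-of-envelope check, not a benchmark):

```python
# load quoted in the original mail
users = 100_000
hits_per_user_per_day = 1_000
hits_per_day = users * hits_per_user_per_day      # 100,000,000 hits/day
avg_hits_per_sec = hits_per_day / 86_400          # averaged over 24h

# measured on the laptop: ~20 pageviews/sec, 4 queries + 1 insert each
measured_pageviews_per_sec = 20
statements_per_sec = measured_pageviews_per_sec * 5

print(round(avg_hits_per_sec))   # → 1157
print(statements_per_sec)        # → 100
```

So the quoted traffic averages over a thousand hits per second (peaks will be higher), far beyond a single laptop-class box; serious hardware, tuning, and an architecture that keeps logging off the main database all matter here.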
But measure for yourself. Remember that you can gain a lot by tuning the
application, the database, and the OS.
regards,
Gunnar