Thread: Performance/Reliability statistics?

Performance/Reliability statistics?

From: Jason M. Felice
To start this in the typical I-just-joined-this-list fashion:

Hello, all:

I'm currently pitching a project for a client and have narrowed the choices
for the back end down to PostgreSQL vs. InterBase vs. Oracle.  InterBase and
Oracle both have basic performance and reliability statistics (InterBase says
it's good for about 700 users and about a 10,000-row database, for example),
but I have not been able to find any such information about PostgreSQL.

I know that any such information would be subjective, but I'm looking to make
sure that PostgreSQL is in the same ballpark as the project's requirements.

This is going to be a web-based application written in PHP with approximately
2000 users, who fall into two classes.  The first class (about 1900 of the
users) will typically use the system once a day, maybe in the morning, and
maybe not even that regularly.  The remaining 100 (well, fewer than that,
actually; maybe 50?) will be using the system constantly between 7 AM and
5 PM every day.  The database structure is going to be moderately
sophisticated, but it will really revolve around two or three tables; the
most central table will be indexed multiple ways and heavily read and updated
simultaneously.
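
To give a rough idea of the shape (table and column names here are made up,
purely to illustrate; nothing is decided yet):

    -- Hypothetical central table: heavily read and updated at once,
    -- indexed several ways to cover the different access paths.
    CREATE TABLE work_items (
        item_id     serial PRIMARY KEY,
        assigned_to integer NOT NULL,
        status      varchar(16) NOT NULL,
        updated_at  timestamp NOT NULL DEFAULT now(),
        body        text
    );

    -- Secondary indexes for the other frequent lookups.
    CREATE INDEX work_items_assigned_idx ON work_items (assigned_to);
    CREATE INDEX work_items_status_idx   ON work_items (status, updated_at);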

Okay, do we have any statistics on a somewhat similar environment?  I love
PostgreSQL, so I don't mean to offend anyone by saying this, but I really
haven't seen it used in an environment of this scale...  Anyone?  Tricks to
keep it running?  (I noticed on another list that a backend horking causes
the others to roll back and shut down to avoid corrupting shared memory, so
I'll be putting this into inittab ;-)

Thanks in advance,
-Jason M. Felice <jfelice@cronosys.com>

P.S. If there are some good medium-large-ish scale projects which are fairly
stable out there, the next step will be to ask how much hardware.


Re: Performance/Reliability statistics?

From: Peter Eisentraut
Jason M. Felice writes:

> InterBase and Oracle both have basic performance and reliability
> statistics

Do you mean *statistics* or 'claims'?

> (InterBase says it's good for about 700 users and about a 10,000-row
> database, for example)

There are PostgreSQL databases with millions of rows and many gigabytes in
size.  I couldn't tell you much about the user aspect, but I don't see a
problem with 50 concurrent connections plus a larger pool of occasional
users.
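
The compiled-in default connection limit is fairly low, though, so you would
raise it when starting the server.  A hypothetical startup line (the exact
switches depend on your version; check the postmaster man page):

    # Allow up to 128 concurrent backends.  -N sets the backend limit;
    # -B sets the number of shared buffers, which must be at least 2x -N.
    postmaster -N 128 -B 256 -D /usr/local/pgsql/data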

> (I noticed on another list that a backend horking causes the others to
> roll back and shut down to avoid corrupting shared memory,

The server will shut down all connections and reinitialize itself.  That's
different from shutting down the whole server.

> so I'll be putting this into inittab ;-)

I'd strongly advise against that. It won't solve the problem you think it
would (because there is none) and it comes with its own set of issues.
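
If the goal is just to have the server come up at boot, a plain init script
that calls pg_ctl does the job.  A minimal sketch (paths are examples only):

    #!/bin/sh
    # Start the postmaster at boot as the postgres user.
    # Adjust the paths for your installation.
    su - postgres -c '/usr/local/pgsql/bin/pg_ctl -D /usr/local/pgsql/data start'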

> P.S. If there are some good medium-large-ish scale projects which are
> fairly stable out there,

I don't really know what you mean by this comment, but let me assure
you: people actually use this software for real work.

> the next step will be to ask how much hardware.

You can read endless threads about "hardware" in the archives.  What it
comes down to is lots of memory, a good disk, a good file system, and lots of
CPU power (perhaps a second CPU); approximately in that order, I'd say.
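
On the memory side, for instance, you would give the server more shared
buffers and more per-sort memory at startup.  A hypothetical line (the
figures are placeholders, not tuned values):

    # -B sets shared disk buffers (8 kB pages); -o passes options to
    # the backends, here -S for per-sort memory in kilobytes.
    postmaster -B 1024 -o "-S 4096" -D /usr/local/pgsql/data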


--
Peter Eisentraut                  Sernanders väg 10:115
peter_e@gmx.net                   75262 Uppsala
http://yi.org/peter-e/            Sweden