Thread: Is PostgreSQL ready for mission critical applications?

Is PostgreSQL ready for mission critical applications?

From: Stephen Birch
Question: Is PostgreSQL ready for mission critical applications?
----------------------------------------------------

This has to be a FAQ, but I cannot seem to get a good answer.

I have just returned from a very interesting Comdex where I spent a few
hours in the Linux area asking for opinions on this issue.

Most people were unsure and suggested that MySQL has a great reputation
for stability and may be a better choice.  Since I want transaction
support, MySQL is not an option for me.  In fact, I really wanted the
DBMS to support referential integrity as well, but PostgreSQL doesn't  :-(

Several people reminded me that MySQL is faster than PostgreSQL.
However,  performance is far less important to me than the basic
question of stability.   The database must be stable enough to run
almost 24x7 and must never suffer from data corruption or from
mysterious crashes.

Is anyone out there actually using PostgreSQL for a mission critical
application?

Is PostgreSQL ready for prime time?

Help!



PS Many thanks to the MySQL advocate at the KDE stand who provided a
compelling argument that my particular application did not need
transactions.  Further thought convinced me that they are a requirement.
Sorry I didn't get your name - it was an insightful discussion.

Also, thanks to Michele Webster at the Applix booth for a  lively
discussion regarding this issue and for the suggestion that I post the
question at the PostgreSQL site.





Re: [GENERAL] Is PostgreSQL ready for mission critical applications?

From: Kevin Heflin
On Sun, 21 Nov 1999, Stephen Birch wrote:


> Several people reminded me that MySQL is faster than PostgreSQL.
> However,  performance is far less important to me than the basic
> question of stability.   The database must be stable enough to run
> almost 24x7 and must never suffer from data corruption or from
> mysterious crashes.
>
> Is anyone out there actually using PostgreSQL for a mission critical
> application?
>
> Is PostgreSQL ready for prime time?

We've been using PostgreSQL for nothing but mission critical work. Never had
any problems with it. Don't have any complaints about the speed. We were
running 6.3.2 for a long time, and recently upgraded to 6.5.x.
Supposedly the newer version has some speed benefits, but again, we've
never had any problems with speed to begin with.

We chose PostgreSQL over MySQL about 3 years ago, due to the more complete
SQL support. Other than that, I just liked it better.

PostgreSQL handles the authentication for our dial-up users... we have over
6000 users dialing into our network.

The same server also handles requests for some dynamic web pages which
pull information from PostgreSQL... sometimes with over a million hits
per hour.

All in all we've got about 100 different DBs on the server. All mission
critical as far as I'm concerned, some more than others obviously.


I've also received some great and timely help through the postgresql
mailing lists.


Good luck


Kevin



--------------------------------------------------------------------
Kevin Heflin          | ShreveNet, Inc.      | Ph:318.222.2638 x103
VP/Production         | 333 Texas St #175    | FAX:318.221.6612
kheflin@shreve.net    | Shreveport, LA 71101 | http://www.shreve.net
--------------------------------------------------------------------


Re: Is PostgreSQL ready for mission critical applications?

From: Jochen Topf
> Stephen Birch <sbirch@ironmountainsystems.com> writes:
> Question: Is PostgreSQL ready for mission critical applications?
> [...]

I can *not* recommend using PostgreSQL for a mission critical application. I
have used PostgreSQL for a reasonably sized project, where it is used as
the central database for an ISP for administration of all users, accounts,
hosts, IP numbers, accounting, etc. The decision for PostgreSQL was based
on cost and features. Like you, I needed transactions and other goodies,
like triggers and notifications, that no other freely available database
can provide.
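
[For readers unfamiliar with the "triggers and notifications" mentioned
above: the combination of a trigger and LISTEN/NOTIFY lets clients learn
about changes without polling.  The sketch below uses invented table and
channel names, and the exact trigger-function syntax has varied between
PostgreSQL releases, so treat it as an illustration rather than the
poster's actual setup.]

    -- Invented example table.
    CREATE TABLE accounts (id int4, login text, active bool);

    -- Trigger function that announces a change on the accounts_changed
    -- channel.  (Releases of that era declared trigger functions as
    -- RETURNS opaque; newer ones use RETURNS trigger.)
    CREATE FUNCTION notify_accounts_changed() RETURNS opaque AS '
    BEGIN
        NOTIFY accounts_changed;
        RETURN NEW;
    END;
    ' LANGUAGE 'plpgsql';

    CREATE TRIGGER accounts_notify
        AFTER INSERT OR UPDATE ON accounts
        FOR EACH ROW EXECUTE PROCEDURE notify_accounts_changed();

    -- Any interested client issues:
    LISTEN accounts_changed;
    -- and is then told asynchronously whenever the table changes.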

I was very pleased with PostgreSQL in the beginning, but that changed after
a while. PostgreSQL is not really stable; in fact it is very easy to crash
the backend process that is handling the connection to your client, and quite
often the other backends shut down, too. I have seen many random errors, for
instance sometimes loading a new stored procedure will crash the database,
while it works the next time. Sometimes databases grow beyond all bounds,
making the system slower and slower, the vacuum process needs hours to do
its work, and nothing except a dump and rebuild of the database helps.

The most frustrating thing is that most bugs are not repeatable, or at least
not repeatable in a small test script that I could send in with a bug report.
Looking at the bug reports that come through the mailing list, there are
lots of the type: X works here but not in this similar situation. This is
IMHO a symptom of a bad design. A recent upgrade (I think it was from 6.5
to 6.5.1 or something like that) helped a little bit, but on the other hand
some query optimizations that worked before didn't work anymore.

So all this leads to my conclusion: The system is not ready for prime time.
If you only use some basic functionality it might be ok, but if you (like
me) use everything from transactions to triggers, notification, user defined
types, stored procedures and rules, you will probably not be happy with it.

There is a very active developer community and I still have hope that
PostgreSQL will make it at some point (otherwise I wouldn't be following the
mailing list).

Jochen
--
Jochen Topf - jochen@remote.org - http://www.remote.org/jochen/


Re: [GENERAL] Is PostgreSQL ready for mission critical applications?

From: Alessio Bragadini
Stephen Birch wrote:

> Question: Is PostgreSQL ready for mission critical applications?

> Several people reminded me that MySQL is faster than PostgreSQL.

On this issue I simply stick with the definition I received at university:
a DBMS (DataBase Management System) does transactions. Period. MySQL is
not a DBMS, then, but something like DBM.

--
Alessio F. Bragadini        alessio@albourne.com
APL Financial Services        http://www.sevenseas.org/~alessio
Nicosia, Cyprus             phone: +357-2-750652

You are welcome, sir, to Cyprus. -- Shakespeare's "Othello"

Re: Is PostgreSQL ready for mission critical applications?

From: "Brett W. McCoy"
> Stephen Birch <sbirch@ironmountainsystems.com> writes:
> Question: Is PostgreSQL ready for mission critical applications?

I think it is.  In my office, we are converting hundreds of thousands of
digitized documents (each of which is comprised of multiple TIFF images)
into PDF documents.  This has been going on since April or so.  We are
using Postgres 6.4 under Linux (PPro 200 w/128 megs of RAM), with the
original images stored on Novell servers.  This is almost a 24x7 process,
as we are constantly running conversion batches and going through QC
processes before the images are backed up and put into offline storage. We
are using Perl for the application front end (as CGI), Image Alchemy
for the conversion, and Postgres for the batch maintenance.  This system
absolutely required transaction support, especially in the QC process.
On top of this, we are using the same server to run a simple search engine
based around Postgres to retrieve adverse drug reaction reports -- this
database has several million rows across several tables, using a PHP3
frontend.  Here, though, speed is not the consideration but reliable
performance is.  PostgreSQL has been very stable and I have no reason to
question its reliability.  We are going to be moving our drug reaction
database over onto its own server soon and providing public (although
secure) access in the near future -- it will be using a mod_perl frontend,
along with the PostgreSQL fulltext module.
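
[For illustration of the transaction requirement in the QC step described
above: the point is that a batch either advances completely or not at all.
The table and column names below are invented, not the poster's actual
schema.]

    BEGIN;
    UPDATE pages   SET status = 'converted' WHERE document_id = 42;
    UPDATE batches SET qc_state = 'ready'   WHERE batch_id = 7;
    COMMIT;
    -- If anything fails in between, the whole transaction is rolled back
    -- and both tables are left untouched.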

So I think PostgreSQL is quite solid and reliable.  The only thing I think
is sorely needed in PostgreSQL is referential integrity constraints
like foreign keys (although these can be emulated with triggers).
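
[A sketch of the trigger-based emulation just mentioned.  The table names
are invented, and the exact PL/pgSQL syntax differed somewhat between 6.x
releases; this follows the general idiom.]

    -- Goal: orders.customer_id must always reference an existing customers.id.
    CREATE FUNCTION check_customer_exists() RETURNS opaque AS '
    BEGIN
        PERFORM 1 FROM customers WHERE id = NEW.customer_id;
        IF NOT FOUND THEN
            RAISE EXCEPTION ''customer % does not exist'', NEW.customer_id;
        END IF;
        RETURN NEW;
    END;
    ' LANGUAGE 'plpgsql';

    CREATE TRIGGER orders_customer_fk
        BEFORE INSERT OR UPDATE ON orders
        FOR EACH ROW EXECUTE PROCEDURE check_customer_exists();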

On the other hand, I have been using MS-SQL 7 for several months now, for
another project, and am not at all happy with it -- it has crashed on me
several times (because of some flaky OCXs), even though I was only doing
database design and not doing production work, and I am frustrated by the
lack of user-defined functions that I have taken for granted in
PostgreSQL.

Brett W. McCoy
                                        http://www.lan2wan.com/~bmccoy/
-----------------------------------------------------------------------
"Gotcha, you snot-necked weenies!"
-- Post Bros. Comics


Re: [GENERAL] Re: Is PostgreSQL ready for mission critical applications?

From: Thomas Good
On Sun, 21 Nov 1999, Jochen Topf wrote:

> > Stephen Birch <sbirch@ironmountainsystems.com> writes:
> > Question: Is PostgreSQL ready for mission critical applications?
> > [...]
>
> I was very pleased with PostgreSQL in the beginning, but that changed after
> a while. PostgreSQL is not really stable, in fact it is very easy to crash
> the backend process that is handling the connection to your client and quite
> often the other backends shut down, too. I have seen many random errors, for
> instance sometimes loading a new stored procedure will crash the database,
> while it works the next time. Sometimes databases grow over every bound
> making the system slower and slower, the vaccum process needs hours to do
> its work and nothing except a dump and rebuild of the database helps.

Odd...I have a large group of users who *hammer* on my postgres database
daily.  I have twelve years' worth of records in one local database...
despite all sorts of errors made by users (and electricians) we have
been fortunate, corruption-wise.

We switched over from FoxPro about a year ago.   Dropped PROGRESS soon
after.  All of our character mode and web interface apps were ported
over and the performance is better than what we had previously.  I also
tested Oracle, Sybase and Informix and opted for Postgres.

I haven't had cause to regret it.  Occasionally (when I hastily write
sloppy queries that contain ORDER BY clauses) I have to clean up some
stale pg_sort files, but aside from this (my error) postgres runs well.
I can not only recommend it for mission critical apps, I generally do,
even if the affirmation is unsolicited.  ;-)

------- North Richmond Community Mental Health Center -------

Thomas Good                                   MIS Coordinator
Vital Signs:                  tomg@ { admin | q8 } .nrnet.org
                                          Phone: 718-354-5528
                                          Fax:   718-354-5056

/* Member: Computer Professionals For Social Responsibility */


Re: [GENERAL] Re: Is PostgreSQL ready for mission critical applications?

From: Ken Gunderson
At 12:04 PM 11/21/99 -0500, Brett W. McCoy wrote:
>> Stephen Birch <sbirch@ironmountainsystems.com> writes:
>> Question: Is PostgreSQL ready for mission critical applications?
>
>I think it is.  In my office, we are converting hundreds of thousands of
<snip>
>database has several million rows across several tables, using a PHP3
>frontend.  Here, though, speed is not the consideration but reliable
>performance is.  PostgreSQL has been very stable and I have no reason to
>question its reliability.  We are going to be moving our drug reaction
>database over onto its own server soon and providing public (although
>secure) access in the near future -- it will be using a mod_perl frontend,
>along with the PostgreSQL fulltext module.
<snip>

I am curious as to why you are choosing to use mod_perl instead of php3,
especially since you've already been using php3??  And especially with
php4/zend just around the corner.  Not trying to start a flame war here, I
just really want to know.

Ciao-- Ken
http://www.y2know.org/safari

Failure is not an option- it comes bundled with your Microsoft product.

Re: [GENERAL] Re: Is PostgreSQL ready for mission critical applications?

From: "Kane Tao"
I second this.  I am sure you will not be able to get a good picture of the
database this way, since some people have no problems with it and some do.
My main concerns about using the database in a mission critical application
are:
1)  It requires a VERY skilled DBA in both Unix and PostgreSQL.
2)  There are few tools that make for ease of development and
    administration.
3)  Documentation is nowhere near as detailed or all-encompassing as for a
    database like Oracle.
4)  There are certain instances when the database requires a rebuild from
    scratch or tape that are not related to hardware failure or disk
    corruption.
5)  There are no transaction logs or redo logs that allow you to recover
    the database to a point in time or handle hot online backups.
6)  It does not scale up to multi-processor/multi-threading very well (as I
    understand it).
7)  A vacuum has to be run often (at a regular interval), taking up valuable
    system resources... locking tables and sometimes just failing utterly.

Although I will say I have been very happy with it for what I use it
for, which is web site/e-commerce development, usually mirrored or
distributed off of another internal corporate server :)

----- Original Message -----
From: Jochen Topf <pgsql-general@mail.remote.org>
To: <pgsql-general@postgreSQL.org>
Sent: Sunday, November 21, 1999 11:23 AM
Subject: [GENERAL] Re: Is PostgreSQL ready for mission critical
applications?


> > Stephen Birch <sbirch@ironmountainsystems.com> writes:
> > Question: Is PostgreSQL ready for mission critical applications?
> > [...]
>
> I can *not* recommend using PostgreSQL for a mission critical application. I
> have used PostgreSQL for a reasonably sized project, where it is used as
> the central database for an ISP for administration of all users, accounts,
> hosts, IP numbers, accounting, etc. The decision for PostgreSQL was based
> on cost and features. Like you, I needed transactions and other goodies,
> like triggers and notifications, that no other freely available database
> can provide.
>
> I was very pleased with PostgreSQL in the beginning, but that changed after
> a while. PostgreSQL is not really stable; in fact it is very easy to crash
> the backend process that is handling the connection to your client, and quite
> often the other backends shut down, too. I have seen many random errors, for
> instance sometimes loading a new stored procedure will crash the database,
> while it works the next time. Sometimes databases grow beyond all bounds,
> making the system slower and slower, the vacuum process needs hours to do
> its work, and nothing except a dump and rebuild of the database helps.
>
> The most frustrating thing is that most bugs are not repeatable, or at least
> not repeatable in a small test script that I could send in with a bug report.
> Looking at the bug reports that come through the mailing list, there are
> lots of the type: X works here but not in this similar situation. This is
> IMHO a symptom of a bad design. A recent upgrade (I think it was from 6.5
> to 6.5.1 or something like that) helped a little bit, but on the other hand
> some query optimizations that worked before didn't work anymore.
>
> So all this leads to my conclusion: The system is not ready for prime time.
> If you only use some basic functionality it might be ok, but if you (like
> me) use everything from transactions to triggers, notification, user defined
> types, stored procedures and rules, you will probably not be happy with it.
>
> There is a very active developer community and I still have hope that
> PostgreSQL will make it at some point (otherwise I wouldn't be following the
> mailing list).
>
> Jochen
> --
> Jochen Topf - jochen@remote.org - http://www.remote.org/jochen/



Re: [GENERAL] Re: Is PostgreSQL ready for mission critical applications?

From: "Brett W. McCoy"
On Sun, 21 Nov 1999, Ken Gunderson wrote:

> I am curious as to why you are choosing to use mod_perl instead of php3,
> especially since you've already been using php3??  And especially with
> php4/zend just around the corner.  Not trying to start a flame war here, I
> just really want to know.

Because I am using several Perl modules that aren't available in PHP3
(some of which are ones I've written).  Don't get me wrong, I like PHP3
(especially because it uses the perl regular expression engine), but there
are some things in Perl I want to use, and I am more familiar with Perl as
well.

Brett W. McCoy
                                        http://www.lan2wan.com/~bmccoy/
-----------------------------------------------------------------------
Your worship is your furnaces
which, like old idols, lost obscenes,
have molten bowels; your vision is
machines for making more machines.
        -- Gordon Bottomley, 1874


Re: [GENERAL] Re: Is PostgreSQL ready for mission critical applications?

From: The Hermit Hacker
On Sun, 21 Nov 1999, Jochen Topf wrote:

> > Stephen Birch <sbirch@ironmountainsystems.com> writes:
> > Question: Is PostgreSQL ready for mission critical applications?
> > [...]
>
> I can *not* recommend using PostgreSQL for a mission critical application. I
> have used PostgreSQL for a reasonably sized project, where it is used as
> the central database for an ISP for administration of all users, accounts,
> hosts, ip numbers, accounting, etc. The decision for PostgreSQL was based
> on cost and features. Like you, I needed transactions and other goodies
> like triggers and notifications, that no other freely available database
> can provide.

Odd, I've been using PostgreSQL since v1.x for exactly this same reason,
and we haven't had any problems with the database crashing since v6.x was
released.  Then again, the radius server opens/closes its connections as
required, instead of relying on one persistent connection, so maybe that
helps, but that's just "application programming" vs backend...

Also, PostgreSQL is the *key* element to the virtual email system that I
built around Cyrus IMAPd several months back...if PostgreSQL was "flaky",
I'd have users losing email left, right and center...basically, *all* mail
delivery, and user authentication, relies on PostgreSQL being up 24/7
*period*...and I consider that one to be even more mission critical than
the accounting system above.

I'm stuck at something like 6.4 for the accounting app, and 6.5.0 right
now for the virtual email system, so I'm not even running the more up to
date 6.5.3 yet...

Marc G. Fournier                   ICQ#7615664               IRC Nick: Scrappy
Systems Administrator @ hub.org
primary: scrappy@hub.org           secondary: scrappy@{freebsd|postgresql}.org


Re: [GENERAL] Re: Is PostgreSQL ready for mission critical applications?

From: The Hermit Hacker
On Sun, 21 Nov 1999, Kane Tao wrote:

> I second this.  I am sure you will not be able to get a good picture of the
> database this way, since some people have no problems with it and some do.
> My main concerns about using the database in a mission critical application
> are:
> 1)  It requires a VERY skilled DBA in both Unix and PostgreSQL

why?  compared to what I've seen of Oracle, PostgreSQL is pretty
brain-dead to operate...

> 2)  There are few tools that make for ease of development and
>     administration.

    can't really comment here as I do everything in perl *shrug*

> 3)  Documentation is no where near as detailed or all encompassing as a
>     database like Oracle.

    again, can't comment here as I've always found what I was looking
for...specific examples?

> 4)  There are certain instances when the database requires a rebuild from
>     scratch or tape that are not related to hardware failure or disk
>     corruption.

    version of PostgreSQL?

    my email system database was created on July 9th of this year, and
I've never had to reload it from tape or otherwise, and it's used by
sendmail 24/7 for email delivery.  (v6.5.0)

    my account system database was created on March 3rd of this year,
and has been running without a reboot/restart since June 7th, which was
the last reboot of that machine... never reloaded that system either, and
it's an older 6.4.0 system.

> 5)  There are no transaction logs or redo logs that allow you to recover
>     the database to a point in time or handle hot online backups.

    being worked on...but you are right, not currently available.

> 6)  It does not scale up to multi processor/multi threading very well (As I
>     understand it).

    actually, postgresql would run better on a multi-cpu FreeBSD
machine than MySQL would, to be honest.  FreeBSD's SMP doesn't have the
ability to 'change cpu on a thread-by-thread basis', so the fact that
MySQL uses threads would actually be a drawback vs an advantage (all threads
of the started processes would be stuck on the same CPU, even if the other
CPU was idle)... with PostgreSQL, each forked instance would go to the more
idle CPU, since it's a new process...

> 7)  A vacuum has to be run often (at a regular interval) taking up
>     valuable system resources...locking tables and sometimes just
>     failing utterly.

    why does it have to be run often?  it depends on your
application/database.  if you are changing your database around *a lot*
(a lot of updates/deletes), yes, since you have to force it to do its own
garbage collection... the next release will remove the table locking
required, since Vadim's MVCC implementation removes the requirement for it
to do so.  I do not believe that this is something that is in v6.5.3, but
believe it's already in there for v7... don't quote me on that, I've been
wrong before...

    essentially, one of the ideas that's been toyed with (but I'm not
sure if anyone has worked on it) is the concept of getting rid of the
requirement for a vacuum altogether.  with the new MVCC code, the concept
of a table lock has essentially been removed, so a 'vacuum' *could* be
done periodically by the system itself... sort of like the auto-disk
defragmentation code that is in a lot of the Unix file systems ...

Marc G. Fournier                   ICQ#7615664               IRC Nick: Scrappy
Systems Administrator @ hub.org
primary: scrappy@hub.org           secondary: scrappy@{freebsd|postgresql}.org


Re: Is PostgreSQL ready for mission critical applications?

From: Jochen Topf
The Hermit Hacker <scrappy@hub.org> wrote:
: Odd, I've been using PostgreSQL since v1.x for exactly this same reason,
: and we haven't had any problems with the database crashing since v6.x was
: released.  Then again, the radius server opens/closes its connections as
: required, instead of relying on one persistent connection, so maybe that
: helps, but that's just "application programming" vs backend...

: Also, PostgreSQL is the *key* element to the virtual email system that I
: built around Cyrus IMAPd several months back...if PostgreSQL was "flaky",
: I'd have users losing email left, right and center...basically, *all* mail
: delivery, and user authentication, relies on PostgreSQL being up 24/7
: *period*...and I consider that one to be even more mission critical than
: the accounting system above.

Seems we are in the same business. :-) But do you never vacuum? Do you never,
ever do something which blocks the database for more than a second or so? I
would never trust any database (not PostgreSQL and not Oracle and not any
other database) to be so reliable; they are just too complex and there are
just too many situations where they would not be reachable or would block
access for too long. An email system I built has nearly 2 million POP accesses
per day. That is more than a thousand authentications per minute (actually the
accesses are not evenly spread around the day, so this is a very rough
calculation). Even if the database is fast enough to handle this, it would
mean that a one minute breakdown of the database, for any reason, would make
1000 customers unhappy. That is not something I could sleep well with...

But coming back to the original question about the reliability of PostgreSQL.
Am I the only one using the "advanced features", like plpgsql procedures,
triggers, rules, etc.? Most of the problems I have encountered lie with these
kinds of things and not with basic functionality. If you just use your
PostgreSQL as a kind of Berkeley DB with SQL, you will probably not have too
many problems. Maybe these parts are tested more thoroughly? Many people seem
to use databases only as a directory service. If most of your accesses to
the database are reads, not writes, if you never do any joins, no subselects,
no triggers and no other fancy stuff, then you are not really using the things
an RDBMS is for. And if PostgreSQL is only good enough for this kind of work,
it is not doing its job.

Jochen
--
Jochen Topf - jochen@remote.org - http://www.remote.org/jochen/


Re: [GENERAL] Re: Is PostgreSQL ready for mission critical applications?

From: "K.Tao"
Well I do apologize, as all my experiences are with pre-6.5 versions...
From the way you are talking, I assume there have no longer been any reports
of databases having to be rebuilt or restored from tape ;)

Although I still feel that the level of expertise for an admin on a Unix
platform running PostgreSQL is much higher than, let's say, Oracle on NT.  One
example is if you cancel out of an admin process like vacuum while in pgsql.
You have to have enough experience to know what files to go and delete to be
able to get pgsql back up and running.

I do think that the commercial support program moves PostgreSQL much closer
to being a database I would choose.  I haven't had the requirement for that
support, but I am sure if I had a large system utilizing PostgreSQL I would
not hesitate to pay to make sure that, 24x7, I can get the database back up
and running within 15 mins of it going down :)

I also do like the sound of getting rid of vacuum and its table locks.
Anything to make the database more self-administering and self-recovering is
good ;)








Re: [GENERAL] Re: Is PostgreSQL ready for mission critical applications?

From: The Hermit Hacker
On Mon, 22 Nov 1999, K.Tao wrote:

> Well I do apologize as all my experiences are in the use of pre 6.5
> versions...I assume there have no longer been any reports of databases
> having to be rebuilt or restored from tape from the way you are talking ;)
>
> Although I still feel that the level of expertise for an admin on a
> Unix platform running PostgreSQL is much higher than lets say Oracle
> on NT.  One example is if you cancel out of a admin process like
> vacuum while in pgsql. U have to have enough exp to know what files to
> go and delete to be able to get pgsql back up and running.

Actually, I believe the pg_vlock file is planned for removal in v7.0 ...
just checked the current source tree, and this hasn't happened yet, but
there was talk on -hackers about removing it since MVCC invalidates the
requirement of locking all tables in a database while doing the vacuum...

Marc G. Fournier                   ICQ#7615664               IRC Nick: Scrappy
Systems Administrator @ hub.org
primary: scrappy@hub.org           secondary: scrappy@{freebsd|postgresql}.org


Re: Is PostgreSQL ready for mission critical applications?

From: Jochen Topf
The Hermit Hacker <scrappy@hub.org> wrote:
: [...]
: take a look at:
: [list deleted]
: Each one of those is mission critical to the person using it, and, in some
: cases, I'd say to the ppl that they affect (Utility Billing and POS System
: are the two that come to mind there)...
: [...]

Well, there are millions of people using Microsoft products for mission
critical applications. I would never do that. :-) Maybe my standards are higher
or my applications different. So this list really doesn't say much. The
problem with databases in general is that my standards for them are way
higher than for most other pieces of software. If my web server fails, I
restart it. If a mail server fails, I restart it; if syslog fails, I don't
have a log file. But if a database fails, it is generally a lot more trouble.
On the other hand a database is generally, apart from the kernel, the most
complex thing running on your servers...

: Quite frankly, I think the fact that Jochen is still around *even though*
: he has problems says alot about the quality of both the software and the
: development processes that we've developed over the past year, and also
: gives a good indication of where we are going...

This is true. Despite the problems I had with PostgreSQL, the system
I am using it for still runs PostgreSQL and it sort of works. We have to
reload the database every once in a while, and some of the triggers I would
like to have don't work. But basically it works. If you don't have the
money to go for a commercial database, PostgreSQL is not a bad option. But
don't think that everything with PostgreSQL is as bright as some of the
postings make you believe. Watch your database for performance and other
problems, don't forget the backups, and think about how to build your
application so that it fails gracefully if the database screws up.

If you have an Oracle database you don't do that, you hire a DBA for it.
There is no way you can do it yourself. :-)

Jochen
--
Jochen Topf - jochen@remote.org - http://www.remote.org/jochen/


Re: Is PostgreSQL ready for mission critical applications?

From: Jochen Topf
Kane Tao <death@solaris1.mysolution.com> wrote:
:> And there are some problems of this kind in PostgreSQL. I am logging all
:> logins and logouts from a radius server into PostgreSQL and after it ran
:> well for several months, it slowed to a crawl and vacuum wouldn't work
:> anymore. So, yes, I do have a lot of inserts, although about 6000 inserts
:> a day and a total of a few hundred thousand records is not really much.

: What version of PostgreSQL did this occur on?  And how often were you
: running vacuums?

6.4.something and 6.5.1. Vacuum runs nightly.

Ross J. Reedstrom <reedstrm@wallace.ece.rice.edu> wrote:
: P.S. I noticed you mentioned the 'bug tracking system'. I know that a
: web based bug tracker was tried out earlier this year, but was abandoned
: in favor of the mailing lists.

I tried reporting it to the mailing list which I found in the documentation
or on the web page somewhere, but it didn't accept my message because I was
not subscribed. I figured that maybe the list was only for internal use by
the developers, so I put the report into the web-based bug tracking system
instead. That was a few months ago. I have no idea whether that was the
right thing to do or what happened to that bug report. It is not easy
to find information about that on the web pages or in the documentation.

Jochen
--
Jochen Topf - jochen@remote.org - http://www.remote.org/jochen/


Re: [GENERAL] Re: Is PostgreSQL ready for mission critical applications?

From: Peter Eisentraut
How about isolating some of these problems and solving them? Or if you
don't have the time/skills to do that, at least develop a detailed plan for
how it should work. I am not trying to be arrogant here, but this project
depends on people finding things that annoy them and then fixing them. That's
how I ended up here.

In particular, documentation issues are more prone to be neglected because
the core developers are of course extremely familiar with everything and
also mostly have other things to do. (No offense to Thomas -- great work.)
It takes no programming skills to update the documentation, and if you
don't know SGML/DocBook, we're here to help.

On 1999-11-21, Kane Tao mentioned:

> 1)  It requires a VERY skilled DBA in both Unix and PostgreSQL

Granted, the installation process receives critique all the time. How
would you like it to work? What parts are too complicated? If they only
*appear* to be so, then this is a documentation deficiency, otherwise we'd
need to think about it.

> 2)  There are few tools that make for ease of development and
> administration.

Personally, I am under the impression that there is not a whole lot of
administering to do, which is Good. Regarding ease of development, the
interfaces we offer are IMHO just as good as what other DBMSs offer, but we're
not in the business of providing toolkits such as Zope. If fewer third
parties choose to support us, that sucks, but it's not an argument against
PostgreSQL itself. (cf. "<some_free_os> is inferior because there are no
'productivity' apps available for it")

> 3)  Documentation is no where near as detailed or all encompassing as a
> database like Oracle.

Although I usually find what I need, see 2nd paragraph.

> 4)  There are certain instances when the database requires a rebuild from
> scratch or tape that are not related to hardware failure or disk corruption.

Huh?

> 5)  There are no transaction logs or redo logs that allow you to recover
> the database to a point in time or handle hot online backups.

Point granted. But it's coming.

> 6)  It does not scale up to multi processor/multi threading very well (As I
> understand it).

I don't understand this area too well either, but is there *anything*
below $10000 that scales to multiprocessors well?

> 7)  A vacuum has to be run often (at a regular interval) taking up valuable
> system resources...locking tables and sometimes just failing utterly.

Not really. Sunday morning at 4 should suffice unless you run the hottest
thing on the Net.
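
[In practice, the "Sunday morning at 4" suggestion is just a cron job; the
database name below is a placeholder, not anyone's actual setup.]

    -- crontab entry (runs the cleanup every Sunday at 04:00):
    --   0 4 * * 0   psql -c "VACUUM ANALYZE;" mydb
    --
    -- which simply executes, for each database that needs it:
    VACUUM ANALYZE;   -- reclaims dead rows, refreshes optimizer statistics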

    -Peter

--
Peter Eisentraut                  Sernanders väg 10:115
peter_e@gmx.net                   75262 Uppsala
http://yi.org/peter-e/            Sweden



Re: [GENERAL] Re: Is PostgreSQL ready for mission critical applications?

From: "Kane Tao"
> How about isolating some of these problems and solving them? Or if you
> don't have the time/skills to do that, at least develop a detailed plan
> how it should work. I am not trying to be arrogant here, but this project
> depends on people finding things that annoy them and then fix them. That's
> how I ended up here.
I have solved all of the problems I have encountered through my light usage
of PostgreSQL.  The problems I refer to are problems I read about in here,
for example when users say they have corrupt indexes and the suggested
solution is to rebuild all the indexes (which is not easy to do, i.e. not one
click of the mouse, nor a problem that should occur this often).

> In particular documentation issues are more prone to be neglected because
> the core developers are of course extremely familiar with everything and
> also mostly have other things to do. (No offense to Thomas -- great work.)
> It takes no programming skills to update the documenation, and if you
> don't know SGML/DocBook, we're here to help.

Although I do see this happen all of the time...it still is a deficiency
that makes the database that much harder to learn and use...

> > 1)  It requires a VERY skilled DBA in both Unix and PostgreSQL
>
> Granted, the installation process receives critique all the time. How
> would you like it to work? What parts are too complicated? If they only
> *appear* to be so, then this is a documentation deficiency, otherwise we'd
> need to think about it.
I think the concept of user-friendly design is universal.  There should be
one button in the middle of the screen you push and everything is done for
you :)  (refer to technical support if you need to know more :)

> > 2)  There are few tools that make for ease of development and
> > administration.
>
> Personally, I am under the impression that there is not a whole lot of
> administering to do, which is Good. Regarding ease of development, the
> interfaces we offer are IMHO just as good as other DBMS' offer, but we're
> not in the business of providing toolkits such as Zope. If less third
> parties choose to support us, that sucks, but it's not an argument against
> PostgreSQL itself. (cf. "<some_free_os> is inferior because there are no
> 'productivity' apps available for it")
Database administration is not just system maintenance.  It is also
designing and maintaining tables, stored procedures, triggers etc ...

> > 3)  Documentation is no where near as detailed or all encompassing as a
> > database like Oracle.
>
> Although I usually find what I need, see 2nd paragraph.
I haven't been through the documentation in quite a while.  But I remember
wanting to know all the files that were installed for PostgreSQL and where
they were located as well as what each file is used for, how the system was
affected by abnormal shutdowns, a list of all the possible error messages
generated and the steps to recover/correct the problems, file buffer
optimization, transaction buffer optimization, disk space usage for tables
and indexes and how to calculate them, system tables and what each field
meant and when they were updated, how to turn on system metrics for
transactions, and what the pros and cons of potential backup procedures are
and how they are done...  Those were just a few questions I had back when.
Never found the answers back then...

> > 4)  There are certain instances when the database requires a rebuild from
> > scratch or tape that are not related to hardware failure or disk corruption.
>
> Huh?
Same as before...I have read numerous responses that state that the only way
to resolve a problem is to go to tape backups and restore....I personally
have never had to do it (Thank God)

> > 6)  It does not scale up to multi processor/multi threading very well (As I
> > understand it).
>
> I don't understand this area too well either, but is there *anything*
> below $10000 that scales to multiprocessors well?
Oracle is under $10000 if you don't ask for the unlimited Internet users
version ;)

> > 7)  A vacuum has to be run often (at a regular interval) taking up valuable
> > system resources...locking tables and sometimes just failing utterly.
>
> Not really. Sunday morning at 4 should suffice unless you run the hottest
> thing on the Net.
That brings up the differences in views on what is a mission critical
system.  I see it as a 24x7 system that has thousands of transactions daily,
in which the system cannot be down for more than 15 minutes in case of
emergency, and for scheduled downtime no more than 5 minutes, if any at
all.






Re: [GENERAL] Re: Is PostgreSQL ready for mission critical applications?

From: Elmar Haneke

Peter Eisentraut wrote:

> > 6)  It does not scale up to multi processor/multi threading very well (As I
> > understand it).
>
> I don't understand this area too well either, but is there *anything*
> below $10000 that scales to multiprocessors well?

The only real "deficit" of PostgreSQL in comparison to the "big
servers" is that it cannot utilize multiple CPUs or disks to process
a single query faster. Servers such as Informix or Oracle can split a
single SQL statement into multiple jobs done in parallel. PostgreSQL can
only process queries on different connections in parallel. I don't
know if there are any problems with SMP capability, but I'm sure that
these should be solvable.

If someone really needs a DBMS capable of splitting single queries across
multiple CPUs, PostgreSQL is not an option. I don't think that this will
change in the future, since there is not much need for such an
extension.

Elmar

Re: [GENERAL] Re: Is PostgreSQL ready for mission critical applications?

From: Bruce Momjian
> > would you like it to work? What parts are too complicated? If they only
> > *appear* to be so, then this is a documentation deficiency, otherwise we'd
> > need to think about it.
> I think the concept of user friendly design is universal.  There should be
> one button in the middle of the screen you push and everything is done for
> you :)  (refer to technical support if you need to know more :)

I refer to this as "helmet-ware".  The software reads your mind, figures
out what you want it to do, and does it.

--
  Bruce Momjian                        |  http://www.op.net/~candle
  maillist@candle.pha.pa.us            |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026

Re: [GENERAL] Re: Is PostgreSQL ready for mission critical applications?

From: Lincoln Yeoh
At 11:48 AM 24-11-1999 -0500, Bruce Momjian wrote:
>> > would you like it to work? What parts are too complicated? If they only
>> > *appear* to be so, then this is a documentation deficiency, otherwise we'd
>> > need to think about it.
>> I think the concept of user friendly design is universal.  There should be
>> one button in the middle of the screen you push and everything is done for
>> you :)  (refer to technical support if you need to know more :)
>
>I refer to this as "helmet-ware".  The software reads your mind, figures
>out what you want it to do, and does it.

And halfway you change your mind :).

So it's still not foolproof.

You need futureware. It'll predict what is really wanted and do that
instead ;). In fact it doesn't need MVCC and stuff like that, since it
knows what's going to happen. It'll have an Advanced Multi Universe
Concurrency Control.

Seriously tho. For things to be useful, there will always be a need for
humans to make decisions.

For databases and much other software, a single "Install Yes/No" is
unsatisfactory often enough that additional decisions need to be made
during installation.

The challenge is to organise the decisions/choices in as optimal a way as
possible.  For example: the useful and popular choices are more
accessible/apparent, and the less popular ones don't clutter the others,
while at the same time being obvious and understandable.

Not easy. Easy to go wrong. If you put only one button, sometimes the
actual choice will be "Remove crappy software? (Yes/No)".

Hey, how about putting this option: "Global Thermonuclear War? (Yes/No)".
Of course that is only if Pg is compiled with humour.h included.

Cheerio,

Link.