In <c2d9e70e050219112379204df4@mail.gmail.com>, on 02/19/05 at 02:23 PM, Jaime Casanova <systemguards@gmail.com>
said:
>On Fri, 18 Feb 2005 22:35:31 -0500, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> pgsql@mohawksoft.com writes:
>> > I think there should be a 100% no data loss fail safe.
>>
>> Possibly we need to recalibrate our expectations here. The current
>> situation is that PostgreSQL will not lose data if:
>>
>> 1. Your disk drive doesn't screw up (eg, lie about write complete,
>> or just plain die on you).
>> 2. Your kernel and filesystem don't screw up.
>> 3. You follow the instructions about routine vacuuming.
>> 4. You don't hit any bugs that we don't know about.
>>
>I'm not an expert but a happy user. My opinion is:
>1) There is nothing we can do about #1 and #2.
>2) #4 is not a big problem because of the speed with which developers fix
>bugs once they are found.
>3) All databases have some type of maintenance routine; in Informix, for
>example, we have UPDATE STATISTICS, and there are similar routines for
>Oracle. Of course those are there for performance reasons, but VACUUM serves
>that purpose too, and additionally protects us against XID wraparound.
>So, having a maintenance routine in PostgreSQL is not bad. *Bad* is having
>a DBA(1) with no clue about the tool he is using. Tools that do too much
>are an incentive to hire *no clue* people.
>(1) DBA: DataBase Administrator or DataBase Annihilator???
>regards,
>Jaime Casanova
Bad-mouthing the people who use your software is a good way to make sure
no one uses the software.

The catastrophic failure of the database because a maintenance function is
not performed is a problem with the software, not with the people using
it.
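
For anyone following along, the routine vacuuming under discussion boils
down to something like the sketch below. It is only illustrative: the idea
of driving it from cron and the scheduling are my own assumptions, not
official guidance.

    -- Check each database's distance from transaction ID wraparound:
    SELECT datname, age(datfrozenxid) FROM pg_database;

    -- A periodic database-wide VACUUM (for example nightly via cron,
    -- e.g. "vacuumdb --all") keeps that age well below the ~2 billion
    -- XID limit and reclaims dead rows along the way:
    VACUUM;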
--
-----------------------------------------------------------
lsunley@mb.sympatico.ca
-----------------------------------------------------------