On 11/02/2016 11:18 PM, Gionatan Danti wrote:
> Il 03-11-2016 00:21 Jim Nasby ha scritto:
>> On 11/2/16 2:02 PM, Gionatan Danti wrote:
>>
>> That means at least some of the Postgres files have been damaged
>> (possibly due to the failing disk). Postgres will complain when it
>> sees internal data structures that don't make sense, but it has no way
>> to know if any of the user data has been screwed up.
>
> I understand that (unfortunately) user data *will* be corrupted/lost.
> However, since there is no backup, I think the customer *must* accept that...
>
>>
>> I wouldn't trust the existing cluster that far. Since it sounds like
>> you have no better options, you could use zero_damaged_pages to allow
>> a pg_dumpall to complete, but you're going to end up with missing
>> data. So what I'd suggest would be:
>>
>> stop Postgres
>> make a copy of the cluster
>> start with zero_damaged_pages
>> pg_dumpall
>> stop and remove the cluster (make sure you've got that backup)
>> create a new cluster and load the dump
>
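Spelled out as shell commands, Jim's steps might look roughly like the
sketch below. The data directory, dump location, and usernames are
assumptions; adjust them for the actual cluster:

    # stop Postgres
    pg_ctl -D /var/lib/pgsql/data stop

    # make a copy of the cluster while it is down
    cp -a /var/lib/pgsql/data /var/lib/pgsql/data.damaged-copy

    # start with zero_damaged_pages, so corrupt pages are zeroed on
    # read instead of aborting the dump
    pg_ctl -D /var/lib/pgsql/data -o "-c zero_damaged_pages=on" start

    # dump everything that is still readable
    pg_dumpall -U postgres -f /safe/location/full_dump.sql

    # stop and remove the damaged cluster (the copy made above stays!)
    pg_ctl -D /var/lib/pgsql/data stop
    mv /var/lib/pgsql/data /var/lib/pgsql/data.old

    # create a new cluster and load the dump
    initdb -D /var/lib/pgsql/data
    pg_ctl -D /var/lib/pgsql/data start
    psql -U postgres -f /safe/location/full_dump.sql postgres
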
> The whole dump/restore approach is surely the most sensible one.
> However, I am concerned that if the dump has some undetected problem
> leading to a failed restore, I would have to recover from the raw files
> (which I would like to avoid). Moreover, the expected remaining lifetime
> of this database is only 2-3 months, as a new production system should
> be installed shortly. This is why I would prefer to use vacuum/reindex
> and avoid a full dump/restore.
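For reference, the in-place route being weighed there would amount to
something like the following, run inside each database of the cluster
(the database name is a placeholder). Note that VACUUM and REINDEX can
rebuild tables and indexes, but they cannot bring back rows on pages
that have already been zeroed:

    -- superuser only: zero corrupt pages on read instead of erroring out
    SET zero_damaged_pages = on;
    VACUUM FULL ANALYZE;        -- rewrites every table in the database
    REINDEX DATABASE some_db;   -- rebuilds all indexes; name is a placeholder
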
The above does not make sense. You are having to recover because there
was no backup, and now you want to go forward without taking a backup?
>
> Thank you very much, Jim.
>
--
Adrian Klaver
adrian.klaver@aklaver.com