Thread: I/O error on data file, can't run backup

I/O error on data file, can't run backup

From: Leif Biberg Kristensen
Running postgresql 9.0.5 on

balapapa ~ # uname -a
Linux balapapa 2.6.39-gentoo-r3 #1 SMP Sun Jul 17 11:22:15 CEST 2011 x86_64
Intel(R) Core(TM) i7 CPU 930 @ 2.80GHz GenuineIntel GNU/Linux

I'm trying to run pg_dump on my database, and get an error:

pg_dump: SQL command failed
pg_dump: Error message from server: ERROR:  could not read block 1 in file
"base/612249/11658": Inn/ut-feil
pg_dump: The command was: SELECT tableoid, oid, opfname, opfnamespace, (SELECT
rolname FROM pg_catalog.pg_roles WHERE oid = opfowner) AS rolname FROM
pg_opfamily

I have tried to stop postgresql and take a filesystem backup of the data
directory with a cp -ax, but it crashes on the same file. I've looked at the
directory with ls -l, and the file looks pretty normal to me. I've also
rebooted from a live CD and run fsck on my /var partition, and it doesn't find
any problem.
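
For what it's worth, a raw read of the file named in the error is a quick way
to confirm a bad sector outside Postgres; a sketch, with $PGDATA standing in
for the data directory:

    # read the suspect file directly; a failing sector should make the
    # kernel report the same Input/output error
    dd if="$PGDATA/base/612249/11658" of=/dev/null bs=8192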

The database is still working perfectly.

The backup script overwrote my previous backup with a 40-byte file (yes, silly
me, I know that's bloody stupid - I'm gonna fix that) and now I haven't got a
recent backup anymore.

Is this fixable?

regards, Leif

Re: I/O error on data file, can't run backup

From: Tom Lane
Leif Biberg Kristensen <leif@solumslekt.org> writes:
> Running postgresql 9.0.5 on
> balapapa ~ # uname -a
> Linux balapapa 2.6.39-gentoo-r3 #1 SMP Sun Jul 17 11:22:15 CEST 2011 x86_64
> Intel(R) Core(TM) i7 CPU 930 @ 2.80GHz GenuineIntel GNU/Linux

> I'm trying to run pg_dump on my database, and get an error:

> pg_dump: SQL command failed
> pg_dump: Error message from server: ERROR:  could not read block 1 in file
> "base/612249/11658": Inn/ut-feil
> pg_dump: The command was: SELECT tableoid, oid, opfname, opfnamespace, (SELECT
> rolname FROM pg_catalog.pg_roles WHERE oid = opfowner) AS rolname FROM
> pg_opfamily

> I have tried to stop postgresql and take a filesystem backup of the data
> directory with a cp -ax, but it crashes on the same file.

You have a disk failure on some sector of that file, apparently.  I'd be
thinking about replacing that disk drive if I were you.  Once it starts
showing uncorrectable errors, the MTTF (mean time to failure) is going to be
short.

> The backup script overwrote my previous backup with a 40-byte file (yes, silly
> me, I know that's bloody stupid - I'm gonna fix that) and now I haven't got a
> recent backup anymore.

> Is this fixable?

Postgres can't magically resurrect data that your drive lost, if that's
what you were hoping for.  However, you might be in luck, because that
file is probably just an index and not original data.  Try this:

    select relname from pg_class where relfilenode = 11658;

On my 9.0 installation I get "pg_opclass_am_name_nsp_index".  If you get
the same (or any other index for that matter) just reindex that index
and you'll be all right ... or at least, you will be if that's the only
file your drive has lost.
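
If it does come back as an index, a sketch of the lookup-and-rebuild from the
shell (the index name here is the one from Tom's 9.0 installation and is only
an assumption for yours):

    # check what the damaged file belongs to (relkind 'i' means index)
    psql pgslekt -c "SELECT relname, relkind FROM pg_class WHERE relfilenode = 11658;"
    # if it is an index, rebuilding it recreates the file from the table data
    psql pgslekt -c "REINDEX INDEX pg_opclass_am_name_nsp_index;"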

            regards, tom lane

Re: I/O error on data file, can't run backup

From: Leif Biberg Kristensen
On Wednesday 5. October 2011 20.42.00 Tom Lane wrote:
> Postgres can't magically resurrect data that your drive lost, if that's
> what you were hoping for.  However, you might be in luck, because that
> file is probably just an index and not original data.  Try this:
>
>     select relname from pg_class where relfilenode = 11658;
>
> On my 9.0 installation I get "pg_opclass_am_name_nsp_index".  If you get
> the same (or any other index for that matter) just reindex that index
> and you'll be all right ... or at least, you will be if that's the only
> file your drive has lost.

Tom,
this is what I get:

postgres@balapapa ~ $ psql pgslekt
psql (9.0.5)
Type "help" for help.

pgslekt=# select relname from pg_class where relfilenode = 11658;
   relname
-------------
 pg_opfamily
(1 row)

regards, Leif

Re: I/O error on data file, can't run backup

From: Leif Biberg Kristensen
I seemingly fixed the problem by stopping postgres and doing:

balapapa 612249 # mv 11658 11658.old
balapapa 612249 # mv 11658.old 11658

And the backup magically works.

I'm gonna move the data to another disk right now.

regards, Leif

Re: I/O error on data file, can't run backup

From: Tom Lane
Leif Biberg Kristensen <leif@solumslekt.org> writes:
> I seemingly fixed the problem by stopping postgres and doing:
> balapapa 612249 # mv 11658 11658.old
> balapapa 612249 # mv 11658.old 11658

> And the backup magically works.

Wow, that is magic.  I was going to suggest copying pg_opfamily from
template0, which would probably work (maybe requiring reindexing) as
long as you didn't have any non-core data types in use.  But you
got lucky.
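
A very rough sketch of that fallback, for the record (server stopped; the
template0 OID must be looked up first, and reusing filenode 11658 assumes
neither database has ever rewritten that catalog, so treat every name here as
an assumption):

    # find template0's database OID while the server is still up
    psql template1 -c "SELECT oid FROM pg_database WHERE datname = 'template0';"
    # then stop the server and copy the intact catalog file over the bad one
    pg_ctl -D "$PGDATA" stop -m fast
    cp "$PGDATA/base/<template0_oid>/11658" "$PGDATA/base/612249/11658"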

> I'm gonna move the data to another disk right now.

Good plan.

            regards, tom lane

Re: I/O error on data file, can't run backup

From: Leif Biberg Kristensen
On Wednesday 5. October 2011 22.41.49 Tom Lane wrote:
> Leif Biberg Kristensen <leif@solumslekt.org> writes:

> > I'm gonna move the data to another disk right now.
>
> Good plan.

Couple of things I forgot to mention, in case it matters:

The disk is a 1 TB Seagate Barracuda S-ATA, and it has been in use for about a
year. I've been using this brand since way back around 1998 without any
problems, but have never used any disk for more than 3 years. The file system
is ext3.

I had a hang on the machine a few hours earlier that required a power-off
reboot. That has been a problem with this rig since I built it about a year
ago; it's probably a funky connection somewhere. This may be the direct cause
of the I/O error, which also may mean that the disk is not to blame.

I'm so used to postgres and everything else coming up without a hiccup after a
power-off that I don't usually pay much attention to it. But I'm certainly
going to rework my backup strategy, and keep several generations.

regards, Leif

Re: I/O error on data file, can't run backup

From: Steve Crawford
On 10/05/2011 02:48 PM, Leif Biberg Kristensen wrote:
>
> I had a hang on the machine a few hours earlier that required a power-off
> reboot. That has been a problem with this rig since I built it about a year
> ago, it's probably a funky connection somewhere. This may be the direct cause
> of the I/O error, which also may mean that the disk is not to blame.
>
> I'm so used to postgres and everything else coming up without a hiccup after a
> power-off that I don't usually pay much attention to it
PostgreSQL is great, but it can't overcome defective hardware.

I'm thinking perhaps a funky memory problem - you are having odd crashes
after all.

If memory is failing, you could have a file that is corrupted not on disk
but in the cache. Perhaps in the process of stopping and starting
PostgreSQL, the data that was causing the trouble got flushed from cache and
then reread from disk. You may find this story interesting:
http://blogs.oracle.com/ksplice/entry/attack_of_the_cosmic_rays1

Cheers,
Steve


Re: I/O error on data file, can't run backup

From: Leif Biberg Kristensen
On Thursday 6. October 2011 00.17.38 Steve Crawford wrote:
> I'm thinking perhaps a funky memory problem - you are having odd crashes
> after all.

I've been thinking about the memory myself, but it passes memtest86plus with
flying colors. Or at least it did the last time I checked, which was a few
months ago.

The problems got a lot better after I replaced a monster Radeon XFX video card
with a very basic fanless NVidia card (with the added bonus that I can now
actually watch Flash videos in full screen), which may point to overheating
issues.

In other news: I discovered that injecting `date +%u` into the backup file name
at an appropriate place will number it by weekday, which is great for keeping
daily backups for a week.
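
A minimal sketch of that scheme (the path, database name, and dump format are
illustrative), writing to a temporary file first so a failed dump can no
longer clobber the previous good copy:

    #!/bin/sh
    # keep one dump per weekday; date +%u prints 1 (Monday) .. 7 (Sunday)
    DOW=$(date +%u)
    OUT="/backup/pgslekt.$DOW.dump"
    if pg_dump -Fc pgslekt > "$OUT.tmp"; then
        mv "$OUT.tmp" "$OUT"       # replace last week's copy only on success
    else
        rm -f "$OUT.tmp"           # dump failed; the old backup stays intact
    fi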

regards, Leif.

Re: I/O error on data file, can't run backup

From: Steve Crawford
On 10/05/2011 03:43 PM, Leif Biberg Kristensen wrote:
> On Thursday 6. October 2011 00.17.38 Steve Crawford wrote:
>> I'm thinking perhaps a funky memory problem - you are having odd crashes
>> after all.
> I've been thinking about the memory myself, but it passes memtest86plus with
> flying colors. Or at least it did the last time I checked which is a few months
> ago.
I have had two machines pass extensive memtest86plus but fail on heavy
pgbench testing and in both cases the cause was ultimately traced to bad
memory.
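
For comparison, a sketch of that kind of burn-in (the scale factor and
duration are arbitrary picks):

    # build a throwaway database and hammer it; flaky RAM tends to show up
    # as data errors or backend crashes that memtest never provokes
    createdb bench
    pgbench -i -s 100 bench       # initialize, roughly 1.5 GB at scale 100
    pgbench -c 16 -T 3600 bench   # 16 concurrent clients for one hour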

Cheers,
Steve


Re: I/O error on data file, can't run backup

From: Craig Ringer
On 10/06/2011 03:06 AM, Leif Biberg Kristensen wrote:
> I seemingly fixed the problem by stopping postgres and doing:
>
> balapapa 612249 # mv 11658 11658.old
> balapapa 612249 # mv 11658.old 11658
>
> And the backup magically works.

Woooooo! That's ... "interesting".

I'd be inclined to suspect filesystem corruption, a file system bug /
kernel bug (not very likely if you're on ext3), flaky RAM, etc. rather
than a failing disk ... though a failing disk _could_ still be the culprit.

Use smartmontools to do a self-test; if 'smartctl -d ata -t long
/dev/sdx' (where 'x' is the drive letter) is reported by 'smartctl -d ata
-a /dev/sdx' as having passed, there are no pending or uncorrectable
sectors, and the disk status is reported as 'HEALTHY', your disk is quite
likely OK. Note that a 'PASSED' or 'HEALTHY' report by itself doesn't
mean much; disk firmware often returns HEALTHY even when the disk can't
even read sector 0.
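
Concretely, that check might look like this (a sketch; /dev/sdb matches
Leif's later output, substitute your own device):

    # kick off the extended self-test; it runs on the drive itself
    smartctl -d ata -t long /dev/sdb
    # after the recommended polling time, read back the verdict and the
    # pending/uncorrectable sector counters
    smartctl -d ata -a /dev/sdb | egrep 'overall-health|Self-test|Pending|Uncorrect'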

I strongly recommend making a full backup, both a pg_dump *and* a
file-system level copy of the datadir. Personally I'd then do a test
restore of the pg_dump backup on a separate Pg instance and if it looked
OK I'd re-initdb then reload from the dump.
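
As a sketch of that belt-and-braces approach (all paths and the scratch
database name are examples):

    # logical dump plus a cold file-level copy onto a healthy disk
    pg_dump -Fc pgslekt > /elsewhere/pgslekt.dump
    pg_ctl -D "$PGDATA" stop -m fast
    cp -a "$PGDATA" /elsewhere/data-copy
    pg_ctl -D "$PGDATA" start
    # verify the dump by restoring it into a scratch database
    createdb pgslekt_test
    pg_restore -d pgslekt_test /elsewhere/pgslekt.dump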

--
Craig Ringer

Re: I/O error on data file, can't run backup

From: Leif Biberg Kristensen
On Thursday 6. October 2011 07.07.11 Craig Ringer wrote:
> On 10/06/2011 03:06 AM, Leif Biberg Kristensen wrote:
> > I seemingly fixed the problem by stopping postgres and doing:
> >
> > balapapa 612249 # mv 11658 11658.old
> > balapapa 612249 # mv 11658.old 11658
> >
> > And the backup magically works.
>
> Woooooo! That's ... "interesting".
>
> I'd be inclined to suspect filesystem corruption, a file system bug /
> kernel bug (not very likely if you're on ext3), flaky RAM, etc. rather
> than a failing disk ... though a failing disk _could_ still be the culprit.
>
> Use smartmontools to do a self-test; if 'smartctl -d ata -t long
> /dev/sdx' (where 'x' is the drive letter) is reported by 'smartctl -d ata
> -a /dev/sdx' as having passed, there are no pending or uncorrectable
> sectors, and the disk status is reported as 'HEALTHY', your disk is quite
> likely OK. Note that a 'PASSED' or 'HEALTHY' report by itself doesn't
> mean much; disk firmware often returns HEALTHY even when the disk can't
> even read sector 0.
>
> I strongly recommend making a full backup, both a pg_dump *and* a
> file-system level copy of the datadir. Personally I'd then do a test
> restore of the pg_dump backup on a separate Pg instance and if it looked
> OK I'd re-initdb then reload from the dump.

Craig,
Thank you very much for the tip on smartmontools, which I didn't know about.
There do indeed appear to be some problems with this disk:

8<---

balapapa ~ # smartctl -d ata -a /dev/sdb -s on
smartctl 5.40 2010-10-16 r3189 [x86_64-pc-linux-gnu] (local build)
Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net

=== START OF INFORMATION SECTION ===
Model Family:     Seagate Barracuda 7200.11 family
Device Model:     ST31000340AS
Serial Number:    9QJ1ZMHY
Firmware Version: SD15
User Capacity:    1 000 204 886 016 bytes
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   8
ATA Standard is:  ATA-8-ACS revision 4
Local Time is:    Thu Oct  6 07:46:19 2011 CEST

==> WARNING: There are known problems with these drives,
AND THIS FIRMWARE VERSION IS AFFECTED,
see the following Seagate web pages:
http://seagate.custkb.com/seagate/crm/selfservice/search.jsp?DocId=207931
http://seagate.custkb.com/seagate/crm/selfservice/search.jsp?DocId=207951

SMART support is: Available - device has SMART capability.
SMART support is: Disabled

=== START OF ENABLE/DISABLE COMMANDS SECTION ===
SMART Enabled.

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status:  (0x82) Offline data collection activity
                                        was completed without error.
                                        Auto Offline Data Collection: Enabled.
Self-test execution status:      (  25) The self-test routine was aborted by
                                        the host.
Total time to complete Offline
data collection:                 ( 650) seconds.
Offline data collection
capabilities:                    (0x7b) SMART execute Offline immediate.
                                        Auto Offline data collection on/off
                                        support.
                                        Suspend Offline collection upon new
                                        command.
                                        Offline surface scan supported.
                                        Self-test supported.
                                        Conveyance Self-test supported.
                                        Selective Self-test supported.
SMART capabilities:            (0x0003) Saves SMART data before entering
                                        power-saving mode.
                                        Supports SMART auto save timer.
Error logging capability:        (0x01) Error logging supported.
                                        General Purpose Logging supported.
Short self-test routine
recommended polling time:        (   1) minutes.
Extended self-test routine
recommended polling time:        ( 236) minutes.
Conveyance self-test routine
recommended polling time:        (   2) minutes.
SCT capabilities:              (0x103b) SCT Status supported.
                                        SCT Error Recovery Control supported.
                                        SCT Feature Control supported.
                                        SCT Data Table supported.

SMART Attributes Data Structure revision number: 10
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000f   114   099   006    Pre-fail  Always       -       61796058
  3 Spin_Up_Time            0x0003   094   092   000    Pre-fail  Always       -       0
  4 Start_Stop_Count        0x0032   100   100   020    Old_age   Always       -       46
  5 Reallocated_Sector_Ct   0x0033   100   100   036    Pre-fail  Always       -       1
  7 Seek_Error_Rate         0x000f   045   045   030    Pre-fail  Always       -       3848329867033
  9 Power_On_Hours          0x0032   076   076   000    Old_age   Always       -       21358
 10 Spin_Retry_Count        0x0013   100   100   097    Pre-fail  Always       -       0
 12 Power_Cycle_Count       0x0032   100   100   020    Old_age   Always       -       141
184 End-to-End_Error        0x0032   100   100   099    Old_age   Always       -       0
187 Reported_Uncorrect      0x0032   016   016   000    Old_age   Always       -       84
188 Command_Timeout         0x0032   100   100   000    Old_age   Always       -       0
189 High_Fly_Writes         0x003a   100   100   000    Old_age   Always       -       0
190 Airflow_Temperature_Cel 0x0022   063   049   045    Old_age   Always       -       37 (Min/Max 37/37)
194 Temperature_Celsius     0x0022   037   051   000    Old_age   Always       -       37 (0 23 0 0)
195 Hardware_ECC_Recovered  0x001a   018   014   000    Old_age   Always       -       61796058
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       2
198 Offline_Uncorrectable   0x0010   100   100   000    Old_age   Offline      -       2
199 UDMA_CRC_Error_Count    0x003e   200   200   000    Old_age   Always       -       0

SMART Error Log Version: 1
ATA Error Count: 84 (device log contains only the most recent five errors)
        CR = Command Register [HEX]
        FR = Features Register [HEX]
        SC = Sector Count Register [HEX]
        SN = Sector Number Register [HEX]
        CL = Cylinder Low Register [HEX]
        CH = Cylinder High Register [HEX]
        DH = Device/Head Register [HEX]
        DC = Device Command Register [HEX]
        ER = Error register [HEX]
        ST = Status register [HEX]
Powered_Up_Time is measured from power on, and printed as
DDd+hh:mm:SS.sss where DD=days, hh=hours, mm=minutes,
SS=sec, and sss=millisec. It "wraps" after 49.710 days.

Error 84 occurred at disk power-on lifetime: 19483 hours (811 days + 19 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER ST SC SN CL CH DH
  -- -- -- -- -- -- --
  40 51 00 ff ff ff 0f  Error: UNC at LBA = 0x0fffffff = 268435455

  Commands leading to the command that caused the error were:
  CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
  -- -- -- -- -- -- -- --  ----------------  --------------------
  25 00 08 ff ff ff ef 00      11:02:43.659  READ DMA EXT
  27 00 00 00 00 00 e0 00      11:02:43.658  READ NATIVE MAX ADDRESS EXT
  ec 00 00 00 00 00 a0 00      11:02:43.638  IDENTIFY DEVICE
  ef 03 46 00 00 00 a0 00      11:02:43.619  SET FEATURES [Set transfer mode]
  27 00 00 00 00 00 e0 00      11:02:43.558  READ NATIVE MAX ADDRESS EXT

Error 83 occurred at disk power-on lifetime: 19483 hours (811 days + 19 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER ST SC SN CL CH DH
  -- -- -- -- -- -- --
  40 51 00 ff ff ff 0f  Error: UNC at LBA = 0x0fffffff = 268435455

  Commands leading to the command that caused the error were:
  CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
  -- -- -- -- -- -- -- --  ----------------  --------------------
  25 00 08 ff ff ff ef 00      11:02:40.589  READ DMA EXT
  27 00 00 00 00 00 e0 00      11:02:40.588  READ NATIVE MAX ADDRESS EXT
  ec 00 00 00 00 00 a0 00      11:02:40.568  IDENTIFY DEVICE
  ef 03 46 00 00 00 a0 00      11:02:40.549  SET FEATURES [Set transfer mode]
  27 00 00 00 00 00 e0 00      11:02:40.498  READ NATIVE MAX ADDRESS EXT

Error 82 occurred at disk power-on lifetime: 19483 hours (811 days + 19 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER ST SC SN CL CH DH
  -- -- -- -- -- -- --
  40 51 00 ff ff ff 0f  Error: UNC at LBA = 0x0fffffff = 268435455

  Commands leading to the command that caused the error were:
  CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
  -- -- -- -- -- -- -- --  ----------------  --------------------
  25 00 08 ff ff ff ef 00      11:02:37.539  READ DMA EXT
  27 00 00 00 00 00 e0 00      11:02:37.538  READ NATIVE MAX ADDRESS EXT
  ec 00 00 00 00 00 a0 00      11:02:37.518  IDENTIFY DEVICE
  ef 03 46 00 00 00 a0 00      11:02:37.499  SET FEATURES [Set transfer mode]
  27 00 00 00 00 00 e0 00      11:02:37.448  READ NATIVE MAX ADDRESS EXT

Error 81 occurred at disk power-on lifetime: 19483 hours (811 days + 19 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER ST SC SN CL CH DH
  -- -- -- -- -- -- --
  40 51 00 ff ff ff 0f  Error: UNC at LBA = 0x0fffffff = 268435455

  Commands leading to the command that caused the error were:
  CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
  -- -- -- -- -- -- -- --  ----------------  --------------------
  25 00 08 ff ff ff ef 00      11:02:34.459  READ DMA EXT
  27 00 00 00 00 00 e0 00      11:02:34.458  READ NATIVE MAX ADDRESS EXT
  ec 00 00 00 00 00 a0 00      11:02:34.438  IDENTIFY DEVICE
  ef 03 46 00 00 00 a0 00      11:02:34.419  SET FEATURES [Set transfer mode]
  27 00 00 00 00 00 e0 00      11:02:34.348  READ NATIVE MAX ADDRESS EXT

Error 80 occurred at disk power-on lifetime: 19483 hours (811 days + 19 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER ST SC SN CL CH DH
  -- -- -- -- -- -- --
  40 51 00 ff ff ff 0f  Error: UNC at LBA = 0x0fffffff = 268435455

  Commands leading to the command that caused the error were:
  CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
  -- -- -- -- -- -- -- --  ----------------  --------------------
  25 00 08 ff ff ff ef 00      11:02:31.369  READ DMA EXT
  27 00 00 00 00 00 e0 00      11:02:31.368  READ NATIVE MAX ADDRESS EXT
  ec 00 00 00 00 00 a0 00      11:02:31.348  IDENTIFY DEVICE
  ef 03 46 00 00 00 a0 00      11:02:31.329  SET FEATURES [Set transfer mode]
  27 00 00 00 00 00 e0 00      11:02:31.278  READ NATIVE MAX ADDRESS EXT

SMART Self-test log structure revision number 1
No self-tests have been logged.  [To run self-tests, use: smartctl -t]


SMART Selective self-test log data structure revision number 1
 SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
    1        0        0  Not_testing
    2        0        0  Not_testing
    3        0        0  Not_testing
    4        0        0  Not_testing
    5        0        0  Not_testing
Selective self-test flags (0x0):
  After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.

8<---

I'm not a hard disk guru, and this info doesn't really say anything specific to
me. Regarding the firmware issue, I went to the Seagate site, downloaded and
burned an ISO, rebooted, and followed the instructions, but the firmware update
failed with "unexpected disk type" or something to that effect, although I'm
positively certain that I downloaded the correct ISO, and the program initially
identified the disk correctly. The download page is
<http://seagate.custkb.com/seagate/crm/selfservice/search.jsp?DocId=207951>
and the ISO is the bottommost one, for the ST31000340AS. I'm going to send a
mail to Seagate support about it, but first I have to rerun the procedure and
capture the exact messages.

regards, Leif.

Re: I/O error on data file, can't run backup

From: Craig Ringer
On 10/06/2011 02:15 PM, Leif Biberg Kristensen wrote:

> Model Family:     Seagate Barracuda 7200.11 family
> Device Model:     ST31000340AS
> Serial Number:    9QJ1ZMHY
> Firmware Version: SD15

Oh, joy. I have some of those, and can confirm their data-eating powers.
Thankfully mine were in a backup server as part of a regularly verified
RAID array with ECC on the volumes, so I didn't lose any data, but it
was certainly frustrating.

The firmware updater is a right pain, because it only supports certain
SATA controllers and you have to boot into it. Grr.

smartctl is a *vital* tool; the more people know about it and how
awesome it is, the better.

--
Craig Ringer