Thread: "Could not open relation XXX: No such file or directory"
I am running Postgres 8.1.4 on Windows XP Professional Service Pack 3. Yesterday, when attempting to select from one particular database, I repeatedly encountered errors such as the following:

2009-08-18 10:49:54 ERROR: could not open relation 1663/51796/1247: No such file or directory
2009-08-18 10:49:54 STATEMENT: SELECT typname FROM pg_type WHERE oid = 1043

1247 is pg_type. In Explorer I could see the file 51796/1247, and the timestamp indicated that it had not recently been updated. I had never previously seen this error.

As that database is used only for testing, I dropped it and loaded a new one from a dump. However, earlier today a colleague working against this new database experienced a slightly different error on several occasions:

2009-08-19 03:06:45 ERROR: could not read block 0 of relation 1663/52752/52896: No such file or directory
2009-08-19 03:06:45 STATEMENT: SELECT * FROM "RP"."RP_ATTRIBUTE" WHERE rp_id = (('MM100')::text)

52752/52896 is the primary key index on the referenced table. Again, I can see the file in Explorer, and it has not been updated since the database was created. When I try the same SELECT statement now I get no error.

Clearly something is amiss, but I don't know what. I should be grateful for any suggestions as to what I should check.
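For anyone decoding such paths: the three numbers in `1663/51796/1247` are the tablespace OID, the database OID, and the relation's filenode, so for the default tablespace (1663 = pg_default) the file lives under `$PGDATA/base/<database-oid>/<filenode>`. A minimal sketch of pulling those fields out of a log line; the regex and field names are my own, not anything PostgreSQL ships:

```python
import re

# Split the tablespace/database/filenode triple out of a
# "could not open relation" log line. Field names are my own choice.
REL_RE = re.compile(r"relation (\d+)/(\d+)/(\d+)")

def parse_relation_path(log_line):
    m = REL_RE.search(log_line)
    if m is None:
        return None
    tablespace, database, filenode = (int(g) for g in m.groups())
    return {"tablespace": tablespace, "database": database, "filenode": filenode}

line = ("2009-08-18 10:49:54 ERROR: could not open relation "
        "1663/51796/1247: No such file or directory")
print(parse_relation_path(line))
```

With the triple in hand you can locate the on-disk file and compare its timestamp against the log, as done above in Explorer.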
On 19/08/2009 6:26 PM, Alan Millington wrote:

> 2009-08-19 03:06:45 ERROR: could not read block 0 of relation
> 1663/52752/52896: No such file or directory

> Clearly something is amiss, but I don't know what. I should be grateful
> for any suggestions as to what I should check.

Got a virus scanner installed? If so, remove it (do not just disable it) and see if you can reproduce the problem. Ditto anti-spyware software.

You should also `chkdsk' your file system(s) and use a SMART diagnostic tool to test your hard disk (assuming it's a single ATA disk).

--
Craig Ringer
On Wed, Aug 19, 2009 at 4:26 AM, Alan Millington <admillington@yahoo.co.uk> wrote:

> I am running Postgres 8.1.4 on Windows XP Professional Service Pack 3.

In addition to the very helpful advice Craig provided, you also need to look at upgrading your pgsql install. 8.1 is no longer supported on Windows due to issues in it which were deemed unfixable without major changes. 8.2 and above are supported on Windows. I'd suggest migrating off of 8.1 and onto at least 8.3, if not 8.4.
Hi there,

On 19/08/2009, at 8:38 PM, Craig Ringer wrote:

> Got a virus scanner installed? If so, remove it (do not just disable
> it) and see if you can reproduce the problem. Ditto anti-spyware
> software.
>
> You should also `chkdsk' your file system(s) and use a SMART
> diagnostic tool to test your hard disk (assuming it's a single ATA
> disk).

By the way, `chkdsk' in Windows or `fsck' in Unix can, in a way, be a _source_ of file loss if the file metadata got damaged badly, e.g. by a system crash, and the file node has to be cleared. So I've always been curious whether there is a way to retrieve surviving records from a PostgreSQL database damaged by file loss. Do you know of any? (Of course, the only true solution is to have been making backups beforehand, but...)

Thanks!

Yar
Yaroslav Tykhiy wrote:

> By the way, `chkdsk' in Windows or `fsck' in Unix can, in a way, be a
> _source_ of file loss if the file metadata got damaged badly, e.g., by a
> system crash, and the file node has to be cleared. So I've always been
> curious if there is a way to retrieve surviving records from a
> PostgreSQL database damaged by file loss. Do you know any? (Of course,
> the only true solution is to have been making backups beforehand, but...)

The Ubuntu Linux site has this page on data recovery (also applicable to other Linux flavors):

https://help.ubuntu.com/community/DataRecovery

I assume that a database file, because of its structure, is harder to recover after it becomes corrupt than, say, an XML file. But any port in a storm, right?
On 21/08/2009, at 12:40 PM, Seth Gordon wrote:

> The Ubuntu Linux site has this page on data recovery (also
> applicable to other Linux flavors):
>
> https://help.ubuntu.com/community/DataRecovery
>
> I assume that a database file, because of its structure, is harder
> to recover after it becomes corrupt than, say, an XML file. But any
> port in a storm, right?

Excuse me, but my curiosity was about a somewhat different thing. Let's assume we did file-system-level data recovery but lost just a couple of files from $PGDATA/base that were damaged hopelessly. Now, if we start pgsql and try accessing the database, pgsql will fail as soon as it hits a missing file. So I wondered if there was a way to tell pgsql to ignore such errors at the cost of returning possibly inconsistent and corrupted data.

It has just occurred to me that recreating the missing files zero-filled is another option to try. As long as the objects stored in the database are small and/or uncompressed, screwing up a few pages shouldn't affect data from the other pages, right?

Yar
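For anyone who wants to experiment with that zero-fill idea on a throwaway cluster first: PostgreSQL heap files are a sequence of fixed-size pages (8192 bytes with the default build-time BLCKSZ), and an all-zero page is treated as empty. A rough sketch of recreating a lost segment file as zeros; the page count you'd need is the original file size from the file system, which is an assumption here:

```python
import os

BLCKSZ = 8192  # default PostgreSQL page size; an assumption, it is configurable at build time

def recreate_zero_filled(path, n_pages):
    """Recreate a missing relation file as n_pages zeroed pages.
    Purely a sketch: run only against a disposable copy of the cluster."""
    with open(path, "wb") as f:
        f.write(b"\x00" * (BLCKSZ * n_pages))
    return os.path.getsize(path)

# Example against a scratch file, not a real $PGDATA
size = recreate_zero_filled("scratch_52896", 3)
print(size)
```

Whether the server then reads past the zeroed pages cleanly is exactly the open question above; treat any data that comes back as suspect.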
On Fri, 2009-08-21 at 11:30 +1000, Yaroslav Tykhiy wrote:

> By the way, `chkdsk' in Windows or `fsck' in Unix can, in a way, be a
> _source_ of file loss if the file metadata got damaged badly, e.g., by
> a system crash, and the file node has to be cleared. So I've always
> been curious if there is a way to retrieve surviving records from a
> PostgreSQL database damaged by file loss.

Good point and good question. One thing that'd _REALLY_ help recover PostgreSQL databases would be if files defining the tables had a header containing:

- A magic number or string
- The PostgreSQL version
- The file path/name relative to the pg data root

eg: PGSQL84\x00base/11511/2699

That'd be a big bonus if they turned up in lost+found, and would also assist in recovery of a database from a file system with completely destroyed or unusable metadata (eg: dead superblocks). Then again, with the DB files not having end markers and with the potential for file fragmentation, you're probably not going to recover a DB from a completely mangled FS anyway. Help identifying DB files from lost+found would be very nice, though.

Of course, we all keep good backups, so nobody'll ever _need_ this sort of thing, right? Right? *sigh*

--
Craig Ringer
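Just to make the idea concrete: no such header exists in any PostgreSQL release; the `PGSQL84\x00` magic and layout below are purely the hypothetical format proposed above, sketched as what a lost+found scanner could do with it:

```python
MAGIC = b"PGSQL84\x00"  # hypothetical magic: product + version, NUL-terminated

def make_header(rel_path):
    # rel_path is relative to the data root, e.g. "base/11511/2699"
    return MAGIC + rel_path.encode("ascii") + b"\x00"

def identify(blob):
    """Return the embedded relative path if blob starts with the
    hypothetical header, else None -- what a lost+found scanner would do."""
    if not blob.startswith(MAGIC):
        return None
    end = blob.index(b"\x00", len(MAGIC))
    return blob[len(MAGIC):end].decode("ascii")

hdr = make_header("base/11511/2699")
print(identify(hdr + b"...rest of page..."))
```

As noted, fragmentation and the lack of end markers would still defeat full recovery; this would only help put names back on orphaned files.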
On 19/08/2009 6:38 PM, Craig Ringer wrote:
chkdsk reported that the disc is clean.
Since installing Postgres in early 2007 I have been running it alongside McAfee with no problem. A few days ago McAfee was uninstalled and Kaspersky installed in its place, so Kaspersky appeared to be the obvious suspect.
However, on looking at the matter again, I am now almost certain that I caused the problem myself. I have a Python function which (as a workaround for a problem in Python 2.4, the version to which Postgres 8.1.4 is tied) executes a chdir. It appears that once this has happened, the current Postgres session is no longer able to find any new data files, though evidently it can still use those it has located previously. If you can confirm that Postgres does indeed rely on the current working directory to locate its data files, the problem is solved.
Moral: never underestimate the stupidity of the people who post the questions (in this case, me)! No doubt this provides one example of why Python is deemed "unsafe".
Alan Millington wrote:

> However, on looking at the matter again, I am now almost certain that
> I caused the problem myself. I have a Python function which (as a
> workaround to a problem which exists in Python 2.4, the version to
> which Postgres 8.1.4 is tied) executes a chdir. It appears that once
> this has happened, the current Postgres session is no longer able to
> find any new data files, though evidently it is still able to use
> those that it has located previously. If you can confirm that Postgres
> does indeed rely on the current working directory to locate its data
> files, the problem is solved.

Yes, Postgres expects that the CWD is not changed. Files that were previously opened continue to work fine, because they are kept open. Do not chdir() in a function.

--
Alvaro Herrera  http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.
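For anyone stuck with a similar workaround who cannot avoid the chdir entirely: restore the previous directory before the function returns, so the backend's working directory is unchanged afterwards. A minimal sketch in plain Python (the same pattern applies inside a PL/Python function body; the helper name is my own):

```python
import os
import tempfile
from contextlib import contextmanager

@contextmanager
def preserved_cwd(target_dir):
    """Temporarily chdir to target_dir, always restoring the old CWD --
    even on error -- so the surrounding process (e.g. a Postgres
    backend) is left where it started."""
    old = os.getcwd()
    os.chdir(target_dir)
    try:
        yield
    finally:
        os.chdir(old)

before = os.getcwd()
with preserved_cwd(tempfile.gettempdir()):
    pass  # do the work that needed the chdir here
assert os.getcwd() == before  # backend CWD unchanged
```

The `try`/`finally` is the important part: an exception inside the block must not leave the session stranded in the wrong directory.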