Thread: Problems with pg_dump
There are several databases I'm trying to dump, but I keep running into problems. Could someone please tell me what these error messages mean? (I'm using PostgreSQL 7.0.1).

I'm trying to dump a database with the following command:

    pg_dump -d db1 > db1.out

and I'm getting the following error:

    dumpClasses(): command failed.  Explanation from backend:
    'pqReadData() -- backend closed the channel unexpectedly.
    This probably means the backend terminated abnormally
    before or while processing the request.'.

With a second database, running the same command, I get an entirely different error message:

    getTables(): SELECT failed.  Explanation from backend:
    'ERROR: aclitemout: bad ai_type: 39'.

With a third database, though, all works well and the database structure and INSERT statements are dumped to the file.

I need to retrieve the data from these two databases ASAP for a client. If anyone knows the answer and can help, please do so.

Thanks,

Ben Ramsey
Technical Director
EUREKA! INTERACTIVE, INC.
* Phone: (770) 642-0390 * Fax: (770) 216-1702
* Web: www.eureka-interactive.com * E-mail: ben@eureka-interactive.com
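For context, the dumps in question are run one database at a time, along these lines — a rough sketch with hypothetical database names, assuming pg_dump returns a nonzero exit status on failure:

    # dump each database in turn to its own file; note which ones fail
    for db in db1 db2 db3; do
        pg_dump -d "$db" > "$db.out" || echo "dump of $db failed"
    done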
On 7 Mar 2003 at 15:51, Ben Ramsey wrote:

> There are several databases I'm trying to dump, but I keep running
> into problems. Could someone please tell me what these error messages
> mean? (I'm using PostgreSQL 7.0.1).

Is it possible that you have a mismatch between versions of pg_dump and PostgreSQL? I had a similar problem once before, albeit not with the same error messages... it turned out that I had two copies (different versions) of pg_dump, and the PATH meant that I was inadvertently running the wrong one!

--Ray.

-------------------------------------------------------------
Raymond O'Donnell          http://www.galwaycathedral.org/recitals
rod@iol.ie                 Galway Cathedral Recitals
-------------------------------------------------------------
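A quick way to rule out that kind of PATH/version mix-up is to check which pg_dump binary the shell actually finds and compare it with what the server reports — a rough sketch, assuming the installed pg_dump accepts a --version flag (very old builds may not):

    # which pg_dump does the shell pick up, and what version is it?
    which pg_dump
    pg_dump --version        # may not be supported on 7.0-era builds

    # compare with the version the server itself reports
    psql -d db1 -c 'SELECT version();'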
My reply didn't go to the list, as I had assumed... it went only to the person who sent the message. Oh well... let's try again. :)

> Is it possible that you have a mismatch between versions of pg_dump
> and PostgreSQL?

I don't think this is possible, since this is the only installation of Postgres running on this machine. There was no installation previous to this one. Furthermore, there are 12 databases, and only one of them dumps correctly with pg_dump. The rest of them give me one of these two error messages.

Can someone just tell me what these error messages mean so I can get underway with debugging this? I needed the dump of at least two of these databases last week for our clients.

Thanks.

----------

The error messages, again, are:

    dumpClasses(): command failed.  Explanation from backend:
    'pqReadData() -- backend closed the channel unexpectedly.
    This probably means the backend terminated abnormally
    before or while processing the request.'.

And:

    getTables(): SELECT failed.  Explanation from backend:
    'ERROR: aclitemout: bad ai_type: 39'.
Data corruption, bad memory/hard drive?

3/10/2003 10:41:37 AM, "Ben Ramsey" <ben@eureka-interactive.com> wrote:

> Can someone just tell me what these error messages mean so I can get
> underway with debugging this? I needed the dump of at least two of these
> databases last week for our clients.
> Data corruption, bad memory/hard drive?

Well, now. That can't be very good. If this is the case, anyone have any ideas of how to retrieve this data, or am I just stuck for good and out hours of work trying to get this data for our clients?
"Ben Ramsey" <ben@eureka-interactive.com> writes: > Well, now. That can't be very good. If this is the case, anyone have any > ideas of how to retrieve this data, or am I just stuck for good and out > hours of work trying to get this data for our clients? With 11 out of 12 databases apparently corrupted, your odds seem bleak. If you are really lucky, the failures all come from one isolated problem, rather than suggesting massive misfeasance on the part of your disk. I'd bet on pg_shadow as the best candidate (though it could maybe be pg_group or pg_database; there are no other shared tables, hence no other prospect for a common-mode failure). Can you do "select * from pg_shadow" without errors? How about pg_group and pg_database? regards, tom lane
Hi,

I am looking for a curses/ncurses example that uses the libpq C functions to search a PostgreSQL database. Any ideas?

Thanks,
Diego Gil.
> Can you do "select * from pg_shadow" without errors? How
> about pg_group and pg_database?

I can do all of these without any problems, but I think I've narrowed down the culprit. It seems that the errors are generated by (perhaps corrupt?) columns in the databases.

When I log into a database, I can "select *" on each table. For some tables, I get all the results. For others, I get an error. For the tables generating errors, I narrowed it down further with "select <column name>". I can get data from most of the columns, but some of them generate errors. For example, a column with a timestamp datatype gives me the following error:

    ERROR: floating point exception! The last floating point operation
    either exceeded legal ranges or was a divide by zero.

Thus, I can dump all tables that do not generate errors using "pg_dump -ad --table <table name> <dbname>", but I cannot dump the tables with corrupted columns this way. I can, however, use \o (in psql) to specify a file to which query output may be directed, and then run a query that selects only the unaffected columns. But this is not the desired course, as it doesn't generate the INSERT statements.

So, I ask: how does one drop a column from a table, so that I can drop the affected column and then use pg_dump as normal?
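For reference, the two workarounds described above look roughly like this — a sketch, with the table and column names (orders, id, amount) made up for illustration:

    # data-only, INSERT-style dump of a table whose columns all read cleanly
    pg_dump -ad --table orders db1 > orders.out

    # for a damaged table, redirect plain query output from psql instead,
    # selecting only the columns that still read without errors
    psql db1
    \o orders_partial.out
    SELECT id, amount FROM orders;    -- leaves out the corrupt timestamp column
    \o

The \o route saves the rows but, as noted, not as INSERT statements.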
"Ben Ramsey" <ben@eureka-interactive.com> writes: > So, I ask: How does one drop a column from a table so that I can drop the > affected column and then use pg_dump as normal? If you're on 7.3 you can just drop the column. On earlier releases, I suspect the best recourse is to select the undamaged columns into a new table and then dump that. regards, tom lane