Thread: query and pg_dump problem on my postgresql 6.5.3/Redhat 6.2

query and pg_dump problem on my postgresql 6.5.3/Redhat 6.2

From: "吴德文" (Wind Wood)
Date: 2003-12-04

Help!

A few days ago, my PHP page began to complain with this:
------
Warning: PostgresSQL query failed: pqReadData() -- backend closed the 
channel unexpectedly. This probably means the backend terminated abnormally 
before or while processing the request.
------

The SQL string in the PHP page is:
------
$sql.='Select news_id,title,summary,publish_time,is_html,if_use_url,url,news_pri ';
$sql.='From newses N,classes C ';
$sql.="Where N.class_id = C.class_id AND C.classname='$class' ";
$sql.='Order by publish_time Desc,news_id Desc Limit '.$Nlimit;
------
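
For example, with hypothetical values $class = 'sports' and $Nlimit = 10 (and
$sql starting empty), the assembled query is:

    SELECT news_id, title, summary, publish_time, is_html, if_use_url, url, news_pri
    FROM newses N, classes C
    WHERE N.class_id = C.class_id AND C.classname = 'sports'
    ORDER BY publish_time DESC, news_id DESC LIMIT 10;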

NOTE:
I'm on RedHat 6.2 with PostgreSQL 6.5.3. The database is named "news",
and the table "newses" looks like this (dumped with "pg_dump -s -t newses news"):

CREATE TABLE "newses" (
"news_id" int4 DEFAULT nextval ( '"newses_news_id_seq"' ) NOT NULL,
"title" character varying(100) NOT NULL,
"class_id" int4 NOT NULL,
"summary" text DEFAULT '',
"user_id" int4 NOT NULL,
"url" character varying(100),
"img_url" character varying(100),
"publish_time" date NOT NULL,
"if_show_news" bool DEFAULT bool 'f' NOT NULL,
"if_use_url" bool DEFAULT bool 'f' NOT NULL,
"is_html" bool DEFAULT bool 'f' NOT NULL,
"view_count" int4 DEFAULT 0 NOT NULL,
"news_pri" int4);
CREATE UNIQUE INDEX "newses_pkey" on "newses" using btree ( "news_id" 
"int4_ops" );

This table has 243 records; the max news_id is 253.

Later I found that queries like these fail in psql:
select news_id,title from newses order by news_id desc limit 10;
select count(news_id) from newses;

But these work fine:
select * from newses where news_id< 300;
select count(*) from newses where news_id <300;
select count(news_id) from newses where news_id <300;

A simple rule: if I run a query over the whole
table without a condition, I get the same error message mentioned above.

I thought my PostgreSQL should be patched or upgraded, so I began to back up
the database.

But I found that pg_dump sometimes does not work on that very table,
and sometimes runs for a long, long time and then errors out.

The following is the error message from "pg_dump news -t newses -f newses-data.sql":
------
pqWait() -- connection not open
PQendcopy: resetting connection
SQL query to dump the contents of Table 'newses' did not execute correctly.  After we read all the table contents from
the backend, PQendcopy() failed.  Explanation from backend: ''.
 
The query was: 'COPY "newses" TO stdout;
'.
------

I read the generated file (14 MB) and found that after the normal records (91 KB) there are many lines like these:
------
\N      \N      \N      \N      \N      \N      \N      \N      \N      \N      \N      \N      \N
\N      \N      \N      \N      \N      \N      \N      \N      \N      \N      \N      \N      \N
\N      \N      \N      \N      \N      \N      \N      \N      \N      \N      \N      \N      \N
\N      \N      \N      \N      \N      \N      \N      \N      \N      \N      \N      \N      \N
\N      \N      \N      \N      \N      \N      \N      \N      \N      \N      \N      \N      \N
\N      \N      \N      \N      \N      \N      \N      \N      \N      \N      \N      \N      \N
280368896               \N      \N      0                       \N      f       f       f       0       0
280368896               \N      \N      0                       \N      f       f       f       0       0
280368896               \N      \N      0                       \N      f       f       f       0       0
------
And it ends with:
------
\N      \N      \N      \N      \N      \N      \N      \N      \N      \N      \N      \N      \N
\N      \N      \N      \N      \N      \N      \N      \N      \N      \N      \N      \N      \N
\N      \N      \N      \N      \N      \N      \N      \N      \N      \N      \N      \N      \N
\N      \N      \N      \N      \N      \N      \N      \N      \N      \N      \N      \N      \N
\.
------

It's my nightmare now, because I can't get my data back. I googled around with
no luck.

Can anyone help me get the data back and tell me what is going on?


              Yours, Wind Wood
              windwood@jingxian.xmu.edu.cn
                 2003-12-04

Re: query and pg_dump problem on my postgresql 6.5.3/Redhat 6.2

From: Jan Wieck
I would say you're losing your disk drive. Have you lately checked for
bad blocks?


Jan


吴德文 wrote:

> Help!
>
> A few days ago, my PHP page began to complain with this:
> [snip]


--
#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.                                  #
#================================================== JanWieck@Yahoo.com #


Re: query and pg_dump problem on my postgresql 6.5.3/Redhat 6.2

From: Richard Huxton
Date: 2003-12-05
On Thursday 04 December 2003 14:55, 吴德文 wrote:
> Help!
>
> A few days ago, my PHP page began to complain with this:
> ------
> Warning: PostgresSQL query failed: pqReadData() -- backend closed the
> channel unexpectedly. This probably means the backend terminated abnormally
> before or while processing the request.
[snip]
> NOTE:
> I'm on RedHat 6.2 with PostgreSQL 6.5.3. The database is named "news",
> and the table "newses" looks like this (dumped with "pg_dump -s -t
> newses news"):

One of the developers will probably be able to help, but bear in mind many are
in the USA/Canada and so you might have time-zone delays. It will be
suggested you upgrade to 7.3.5 or 7.4.0 as soon as possible. That might mean
upgrading from RedHat 6.2 as well.

At present:
1. Dump all the other tables, if you can
2. Stop PostgreSQL
3. Make a file backup of /var/data (or wherever your data is stored), as sketched below

OK - now at least you know things can't get any worse.
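
A minimal sketch of steps 2 and 3, assuming the data directory RedHat RPMs of
that era used (/var/lib/pgsql/data); the init-script path is also an
assumption, so adjust both to your install:

    # as root: stop the postmaster cleanly (never "kill -9" it), then
    # archive the whole cluster directory before touching anything else
    /etc/rc.d/init.d/postgresql stop    # or: su - postgres -c "pg_ctl stop"
    tar czf /root/pgdata-backup.tar.gz /var/lib/pgsql/data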

In psql you can use \a to set unaligned output and \o <filename> to output
query results to a file. You can then try SELECT * FROM newses WHERE news_id
BETWEEN 1 AND 100, then 101-200 etc. This should let you recover a great deal
of your data if only one disk-block is damaged.
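
For example, a salvage session along those lines might look like this (the
output file names are just examples):

    -- inside psql, connected to the "news" database
    \a
    \o /tmp/newses-001-100.txt
    SELECT * FROM newses WHERE news_id BETWEEN 1 AND 100;
    \o /tmp/newses-101-200.txt
    SELECT * FROM newses WHERE news_id BETWEEN 101 AND 200;
    \o

The final \o on its own switches query output back to the terminal.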

From what you say, you should be able to recover your table's data. Then, I'd
recreate the database from your dumps.

> But I found that pg_dump sometimes does not work on that very table, and
> sometimes work with a long long time then error.

This sounds like either a disk or memory error. I'd guess disk.

--
  Richard Huxton
  Archonet Ltd

Re: query and pg_dump problem on my postgresql 6.5.3/Redhat 6.2

From: "Wind Wood"
Date: 2003-12-06
Hi,
   The disk did have a problem; I used fsck to fix it, but the database
problem is still there.
   Then I read the PostgreSQL documentation and found this:
------
COPY stops operation at the first error. This should not lead to problems in the event of a COPY FROM, but the target
relation will, of course, be partially modified in a COPY TO. The VACUUM query should be used to clean up after a failed
copy.
------
     Then I executed "vacuum newses;" in psql, and it returned this message:
------
NOTICE:  Rel newses: Uninitialized page 16 - fixing
NOTICE:  Rel newses: Uninitialized page 17 - fixing
VACUUM
------

    It seems VACUUM fixed something. I retried the SQL that had complained
before, and it all works well now; my PHP page works well too.

    It's exciting that all the problems are gone, but I'm still not clear about
what happened and what VACUUM did. Can anyone explain it?


======= 2003-12-05 you wrote: =======

>On Thursday 04 December 2003 14:55, 吴德文 wrote:
>[snip]

= = = = = = = = = = = = = = = = = = = =

               With regards,

               Wind Wood
               windwood@jingxian.xmu.edu.cn
               2003-12-05

Re: query and pg_dump problem on my postgresql 6.5.3/Redhat 6.2

From: Martijn van Oosterhout
Hmm, you do realise that 6.5 is *really* *really* old. Who knows how many
bugs there are. There have been *5* major releases since and many more minor
ones.

I remember back then, 6.5 had many quirks that required a vacuum to fix (or
couldn't be fixed at all). I remember that creating a temporary table inside
a transaction that later aborted left an orphaned file on disk.

You're going to have a hard time getting someone to help you with a version
that old.

Good luck,

On Sat, Dec 06, 2003 at 03:15:05PM +0800, Wind Wood wrote:
> Hi,
> [snip]

--
Martijn van Oosterhout   <kleptog@svana.org>   http://svana.org/kleptog/
> "All that is needed for the forces of evil to triumph is for enough good
> men to do nothing." - Edmund Burke
> "The penalty good people pay for not being interested in politics is to be
> governed by people worse than themselves." - Plato


Re: query and pg_dump problem on my postgresql 6.5.3/Redhat 6.2

From: "Wind Wood"
Date: 2003-12-09
Hi!

  Thanks all for your help.

    Now I can dump data from the old database, but a new problem came up when I restored the data to PostgreSQL 7.4.

    An error comes up during the COPY command if I do it with this sequence of commands:
    1.     pg_dump news -f pgsql-database-news.sql        #in old system with postgresql 6.5.3
    2.     su - postgres                                #in new system with postgresql 7.4
    3.     createdb -T template0 news
    4.     psql news < pgsql-database-news.sql
------
ERROR:  missing data for column "user_id"
CONTEXT:  COPY newses, line 1:
------

    But if I do it with this sequence, it works well:
    1.    pg_dump -d news -f pgsql-data-news.sql      #in old system with postgresql 6.5.3
    2.     su - postgres                                #in new system with postgresql 7.4
    3.     createdb -T template0 news
    4.    psql news < pgsql-data-news.sql
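
    Looking inside the two dump files, the difference is in how the data
travels (a simplified sketch with a hypothetical two-column table "t", not my
real schema): the default dump moves each table's rows as one COPY block,
while "-d" writes one INSERT statement per row.

    -- default "pg_dump news": rows travel as a COPY block, one
    -- tab-separated field per column on every data line
    COPY "t" FROM stdin;
    1	hello
    2	world
    \.

    -- "pg_dump -d news": the same rows as explicit INSERT statements
    INSERT INTO "t" VALUES (1, 'hello');
    INSERT INTO "t" VALUES (2, 'world');

    So it seems the 7.4 restore chokes on the COPY data lines but accepts
the INSERTs.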

    The database looks fine, but I'm not sure it is really good.
    Then I tried PostgreSQL 7.1.3, and found that going from 6.5.3 to 7.1.3
works well with both kinds of command lines.
    Later I also found that it behaves like the 6.5.3 case when I dump/restore
the data from 7.1.3 to 7.4.

    Could anybody explain it? Was there a version with a big change between 7.1.3 and 7.4?

======= 2003-12-06 Wind Wood wrote:=======

>Hi,
>[snip]

= = = = = = = = = = = = = = = = = = = =

--
               Yours, Wind Wood
               windwood@jingxian.xmu.edu.cn
               2003-12-09