Thread: pg_dump not dumping all tables

pg_dump not dumping all tables

From
"G. Anthony Reina"
Date:
I just installed 6.5.1 on my RH 6.0 PII/400 MHz machine. I had
previously been running the database on 6.4.2 and had several backups
made through pg_dump. When I tried to restore the database (i.e. psql -e
db01 < db.backup) all of the tables were created, but only some of them
had data. These tables are just real tables, not views or anything
strange. Luckily, I also had a back up where I had pg_dump'ed each table
separately (so I'm not in a total jam). But I can't figure out why the
pg_dump didn't backup all of the data.

-Tony Reina




Re: [HACKERS] pg_dump not dumping all tables

From
Hannu Krosing
Date:
"G. Anthony Reina" wrote:
> 
> I just installed 6.5.1 on my RH 6.0 PII/400 MHz machine. I had
> previously been running the database on 6.4.2 and had several backups
> made through pgdump. When I tried to restore the database (i.e. psql -e
> db01 < db.backup) all of the tables were created, but only some of them
> had data. These tables are just real tables, not views or anything
> strange. Luckily, I also had a back up where I had pg_dump'ed each table
> separately (so I'm not in a total jam). But I can't figure out why the
> pg_dump didn't backup all of the data.
> 
> -Tony Reina

If even one row is dumped wrong, the data for the whole table is not
inserted ;(

I've had this happen with rows containing \n (or maybe \r) that got
dumped as a real newline, which broke the whole COPY xxx FROM stdin.
I resolved it by editing the dumpfile after visual inspection.

Another thing to try would be to dump as proper INSERT statements
(pg_dump -d) instead of COPY FROM. It will be slow to load, though ...
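A toy model makes both points concrete: in the COPY text format, rows are newline-separated and an embedded newline must be escaped as the two characters \n, so a raw newline splits one row in two and poisons the whole block. This is only a sketch of the format, not PostgreSQL's actual parser:

```python
# Toy reader for COPY ... FROM stdin data: one line per row,
# tab-separated columns, terminated by a line containing only "\.".
def parse_copy_rows(data):
    rows = []
    for line in data.split("\n"):
        if line == "\\.":
            break
        if line:
            rows.append(line.split("\t"))
    return rows

# Properly dumped: the embedded newline is escaped as backslash-n.
good = "1\tline one\\nline two\n\\.\n"
# Buggy dump: a *real* newline lands in the data and splits the row.
bad = "1\tline one\nline two\n\\.\n"

assert len(parse_copy_rows(good)) == 1   # one row, as intended
assert len(parse_copy_rows(bad)) == 2    # two malformed "rows"
```

Because a COPY block is loaded as one unit, a single malformed row makes the server reject the entire table's data; per-row INSERT statements (pg_dump -d) fail row by row instead, which is why they sidestep the problem at the cost of load speed.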

---------------------------
Hannu


Re: [HACKERS] pg_dump not dumping all tables

From
Tom Lane
Date:
"G. Anthony Reina" <reina@nsi.edu> writes:
> I just installed 6.5.1 on my RH 6.0 PII/400 MHz machine. I had
> previously been running the database on 6.4.2 and had several backups
> made through pgdump. When I tried to restore the database (i.e. psql -e
> db01 < db.backup) all of the tables were created, but only some of them
> had data. These tables are just real tables, not views or anything
> strange. Luckily, I also had a back up where I had pg_dump'ed each table
> separately (so I'm not in a total jam). But I can't figure out why the
> pg_dump didn't backup all of the data.

That is distressing, all right ... and it's not a report we've heard
before.  Can you see any pattern to which tables' contents were saved
and which were not?  I'd wonder about peculiar table names, seldom-
used column data types, and so forth.

Did your indexes get recreated from the db.backup file?

Is there any chance that the db.backup file got truncated (say, because
you ran out of disk space during the dump)?

If you can, it would be nice to see the db.backup file itself, minus
data so that it's not too big to email.  If you could strip the data
out and just indicate which tables had data and which not, it should
amount to only a few K of table-creation commands...
        regards, tom lane


Re: [HACKERS] pg_dump not dumping all tables

From
"G. Anthony Reina"
Date:
Hannu Krosing wrote:

> If there is even one row dumped wrong the data for the whole table is
> not
> inserted ;(
>
> I've had this for row's containing \n (or maybe \r) that got dumped as
> real
> newline that screwed the whole COPY xxx FROM stdin.
> I resolved it by editing the dumpfile via visual inspection.
>
> Another thing to try would be to dump as proper insert strings (pg_dump
> -d)
> instead of copy from. It will be slow to load though ...
>

Hannu,
   Unfortunately, my dump file is 2 Gig, so I can't edit it easily. I
don't mind slowness as long as I have accuracy, so I'll try pg_dump -d.

Thanks.
-Tony




Re: [HACKERS] pg_dump not dumping all tables

From
"G. Anthony Reina"
Date:
Tom,
   I think I may have found the error but I can't be sure. I compressed the
pg_dump'd backup file and then samba'd it to a Windows 95 machine in order to
burn it to a CD-ROM. I wonder if Windows added extra line feeds here and
there (although I don't see them when I do a head or tail on the file). If
that's the case, then it is my fault.

-Tony



Tom Lane wrote:

> That is distressing, all right ... and it's not a report we've heard
> before.  Can you see any pattern to which tables' contents were saved
> and which were not?  I'd wonder about peculiar table names, seldom-
> used column data types, and so forth.
>

All of the tables missing their data seemed to be the ones named ***_proc
(e.g. center_out_proc, ellipse_proc, etc.). These all seemed to be at the
end of the pg_dump. So probably somewhere in the dump a table had an extra
character that kept the remaining tables from being restored (if I'm
understanding correctly how pg_dump works).

>
> Did your indexes get recreated from the db.backup file?

Yes. They get created just after the COPY commands. Of course, it would be
nice if they were created first and then the data copied in. My indices
have unique keys. There have been times with 6.4.2 where, for some reason
(despite having a unique index), I ended up with duplicate rows in an
index. This even happened when I pg_dump'ed the table and rebuilt it. I
was thinking that if the index were created first and the data copied in
afterwards, this probably couldn't happen on a rebuild.

>
>
> Is there any chance that the db.backup file got truncated (say, because
> you ran out of disk space during the dump)?
>

No, this partition is 10 Gigs. I have about 1-2 Gigs left even when the
pg_dump finishes.

>
> If you can, it would be nice to see the db.backup file itself, minus
> data so that it's not too big to email.  If you could strip the data
> out and just indicate which tables had data and which not, it should
> amount to only a few K of table-creation commands...
>
>                         regards, tom lane

Again, the text file is over 2 Gig, so I can't find an editor big enough
to hold it all in memory (I only have half a gig of RAM). So it really is
just guesswork. Can you think of any way to strip the data from a file
this big?

-Tony





Re: [HACKERS] pg_dump not dumping all tables

From
Tom Lane
Date:
Hannu Krosing <hannu@trust.ee> writes:
> I've had this for row's containing \n (or maybe \r) that got dumped as
> real newline that screwed the whole COPY xxx FROM stdin.

FWIW, I think that particular bug was fixed some time ago; leastwise
I cannot reproduce it with either 6.4.2 or current pg_dump.

Tony, would you let us know whether -d helps?
        regards, tom lane


Re: [HACKERS] pg_dump not dumping all tables

From
Tom Lane
Date:
"G. Anthony Reina" <reina@nsi.edu> writes:
>     I think I may have found the error but I can't be sure. I compressed the
> pg_dump'd backup file and then samba'd it to a Windows 95 machine in order to
> burn it to a CD-ROM. I wonder if Windows added extra line feeds here and
> there (although I don't see them when I do a head or tail on the
> file).

If the file was compressed when you transferred it, then any newline
breakage would have messed it up pretty thoroughly... so I doubt that
theory.
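Tom's reasoning is easy to check: byte-level damage to a compressed file doesn't show up as a few stray line feeds, it makes the archive fail to decompress at all, so a dump that came back out of the archive intact was not mangled in transit. A quick sketch with Python's gzip module (gzip stands in here for whatever compressor was actually used):

```python
import gzip

payload = b"COPY foo FROM stdin;\n1\talpha\n2\tbeta\n\\.\n" * 100
blob = bytearray(gzip.compress(payload))

# Corrupt a single byte in the middle of the compressed stream -- far
# less damage than a text-mode transfer rewriting every LF as CRLF.
blob[len(blob) // 2] ^= 0x01

try:
    gzip.decompress(bytes(blob))
    intact = True
except Exception:  # invalid deflate data, CRC mismatch, or truncation
    intact = False

assert not intact  # the archive fails outright rather than subtly
```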

Hannu's thought is a good one: corrupted data within a particular COPY
command would probably have caused the entire COPY to fail, but psql
would have recovered at the \. and picked up with the rest of the
restore script, which seems to match the symptoms.  I think he's blamed
a long-gone bug, but there could be another one with similar effects.

However, if that happened you should certainly have seen a complaint
from psql (and also in the postmaster log) while running the restore.
Did you look through the output of the restore script carefully?

> All of the tables seemed to be the ones marked ***_proc (e.g.
> center_out_proc, ellipse_proc, etc.). These all seemed to be at the end of
> the pg_dump.

Hmm.  What kind of data was in them?

> Yes. They get created just after the copy commands. Of course, it would be
> nice if they were created first and then the data was copied in.

There's a reason for that: it's a lot faster to build the index after
doing the bulk load, rather than incrementally as the data is loaded.

> Again, the text file is over 2 Gig so I can't seem to find an editor that is
> big enough to hold it all in memory (I only have a half a gig of RAM). So it
> really is just guesswork. Anything you can think of to strip the data from
> this big of a file?

Not short of writing a little perl script that looks for COPY ... and \.
But at this point it seems likely that the problem is in the data
itself, so stripping it out would lose the evidence anyway.  Grumble.
        regards, tom lane


Re: [HACKERS] pg_dump not dumping all tables

From
The Hermit Hacker
Date:
On Wed, 28 Jul 1999, G. Anthony Reina wrote:

> Again, the text file is over 2 Gig so I can't seem to find an editor that is
> big enough to hold it all in memory (I only have a half a gig of RAM). So it
> really is just guesswork. Anything you can think of to strip the data from
> this big of a file?

egrep "^CREATE|^COPY" <filename> ?

Marc G. Fournier                   ICQ#7615664               IRC Nick: Scrappy
Systems Administrator @ hub.org 
primary: scrappy@hub.org           secondary: scrappy@{freebsd|postgresql}.org 



Re: [HACKERS] pg_dump not dumping all tables

From
"Ross J. Reedstrom"
Date:
On Wed, Jul 28, 1999 at 11:28:17PM -0300, The Hermit Hacker wrote:
> On Wed, 28 Jul 1999, G. Anthony Reina wrote:
> 
> > Again, the text file is over 2 Gig so I can't seem to find an editor that is
> > big enough to hold it all in memory (I only have a half a gig of RAM). So it
> > really is just guesswork. Anything you can think of to strip the data from
> > this big of a file?
> 
> egrep "^CREATE|^COPY" <filename> ?

The one class of failures on upgrade we have been seeing is tables with
fieldnames that were previously reserved words. One of those might keep the
rest of the COPYs from working, would it not?

Try piping the combined stdout and stderr of the restore through
grep "ERROR" and see if anything pops up.

Ross

-- 
Ross J. Reedstrom, Ph.D., <reedstrm@rice.edu> 
NSBRI Research Scientist/Programmer
Computer and Information Technology Institute
Rice University, 6100 S. Main St.,  Houston, TX 77005


Re: [HACKERS] pg_dump not dumping all tables

From
Hannu Krosing
Date:
The Hermit Hacker wrote:
> 
> On Wed, 28 Jul 1999, G. Anthony Reina wrote:
> 
> > Again, the text file is over 2 Gig so I can't seem to find an editor that is
> > big enough to hold it all in memory (I only have a half a gig of RAM). So it
> > really is just guesswork. Anything you can think of to strip the data from
> > this big of a file?
> 
> egrep "^CREATE|^COPY" <filename> ?

Nay, we currently have nice multi-line CREATEs, which a line-anchored
egrep would miss.

The following Python script should work:

------------------------------------------------------
#!/usr/bin/env python
# Drop every COPY block (including the COPY and terminating \. lines)
# from a dump read on stdin, writing the rest to stdout.
import sys

in_data = 0
while 1:
    line = sys.stdin.readline()
    if not line:
        break
    if line[:5] == 'COPY ':
        in_data = 1
    if not in_data:
        sys.stdout.write(line)
    if in_data and line[:2] == '\\.':
        in_data = 0
-----------------------------------------------------

As you can probably guess, it is used as:

stripdata.py <withdata.sql >withoutdata.sql

-------------------------
Hannu