Thread: Re: [HACKERS] "CANNOT EXTEND" -
Thanks Bruce.

Do you have any idea what happens with pg_dump when it hits 2GB? Is it
set up to segment the files on Linux? If not, I may have hit a brick
wall here, and have no way to back this baby up.

Tim

-----Original Message-----
From: Bruce Momjian <maillist@candle.pha.pa.us>
To: perdue@raccoon.com <perdue@raccoon.com>
Cc: pgsql-sql@hub.org <pgsql-sql@hub.org>
Date: Monday, March 15, 1999 7:53 PM
Subject: Re: [SQL] Re: [HACKERS] URGENT -

>Call me at my signature phone number.
>
>[Charset iso-8859-1 unsupported, filtering to ASCII...]
>> I still cannot get pg_dump to work since I fixed the system yesterday.
>> This is a real mess and I need to make sure I have a current backup.
>>
>> - I upgraded from 6.4 -> 6.4.2 and applied the 2GB patch
>> - I did "initdb" from the postgres user account
>> - I cannot get pg_dump to work:
>>
>> -------
>> [tim@db /]$ pg_dump db_domain > /fireball/pg_dumps/db_domain.dump
>> pg_dump error in finding the template1 database
>> -------
>>
>> At this point, the postmaster dies and restarts.
>>
>> I think I'm getting to where I need some real help getting this thing
>> to dump again.
>>
>> The database is up and running just fine - I just cannot dump.
>>
>> Any tips or advice is GREATLY needed and appreciated at this point.
>> All I need is it to take a dump ;-)
>>
>> Tim
>> tim@perdue.net
>>
>> -----Original Message-----
>> From: Bruce Momjian <maillist@candle.pha.pa.us>
>> To: tim@perdue.net <tim@perdue.net>
>> Date: Monday, March 15, 1999 11:57 AM
>> Subject: Re: [SQL] Re: [HACKERS] URGENT -
>>
>> >> From 6.4 -> 6.4.2
>> >>
>> >> The production database is working well, but pg_dump doesn't work.
>> >> Now I'm worried that my database will corrupt again and I won't
>> >> have it backed up.
>> >
>> >6.4 to 6.4.2 should work just fine, and the patch should not change
>> >that. Are you saying the application of the patch caused the system
>> >to be un-dumpable?
>> >Or perhaps was it the stopping of the postmaster? I can work with
>> >you to get it dump-able if needed.
>> >
>> >--
>> >  Bruce Momjian                  |  http://www.op.net/~candle
>> >  maillist@candle.pha.pa.us      |  (610) 853-3000
>> >  +  If your life is a hard drive, |  830 Blythe Avenue
>> >  +  Christ can be your backup.    |  Drexel Hill, Pennsylvania 19026
>
>--
>  Bruce Momjian                  |  http://www.op.net/~candle
>  maillist@candle.pha.pa.us      |  (610) 853-3000
>  +  If your life is a hard drive, |  830 Blythe Avenue
>  +  Christ can be your backup.    |  Drexel Hill, Pennsylvania 19026
[Charset iso-8859-1 unsupported, filtering to ASCII...]
> Thanks Bruce.
>
> Do you have any idea what happens with pg_dump when it hits 2GB? Is it
> set up to segment the files on Linux? If not, I may have hit a brick
> wall here, and have no way to back this baby up.

pg_dump only dumps to a flat Unix file. That file can be any size your
OS supports; pg_dump does not segment it. However, a 2GB table will dump
to a file much smaller than 2GB, because the dump doesn't carry the
per-record storage overhead that the table on disk does.

--
  Bruce Momjian                  |  http://www.op.net/~candle
  maillist@candle.pha.pa.us      |  (610) 853-3000
  +  If your life is a hard drive, |  830 Blythe Avenue
  +  Christ can be your backup.    |  Drexel Hill, Pennsylvania 19026
On Mon, 15 Mar 1999, Bruce Momjian wrote:

> [Charset iso-8859-1 unsupported, filtering to ASCII...]
> > Thanks Bruce.
> >
> > Do you have any idea what happens with pg_dump when it hits 2GB? Is
> > it set up to segment the files on Linux? If not, I may have hit a
> > brick wall here, and have no way to back this baby up.
>
> pg_dump only dumps a flat unix file. That can be any size your OS
> supports. It does not segment. However, a 2gig table will dump to a
> much smaller version than 2gig because of the overhead for every
> record.

Hmmm, I think that, as some people are now using >2Gig tables, we
should think of adding segmentation to pg_dump as an option; otherwise
this is going to become a real issue at some point.

Also, I think we could do with having some standard way of dumping and
restoring large objects.

--
Peter T Mount peter@retep.org.uk
Main Homepage: http://www.retep.org.uk
PostgreSQL JDBC Faq: http://www.retep.org.uk/postgres
Java PDF Generator: http://www.retep.org.uk/pdf
> > pg_dump only dumps a flat unix file. That can be any size your OS
> > supports. It does not segment. However, a 2gig table will dump to a
> > much smaller version than 2gig because of the overhead for every
> > record.
>
> Hmmm, I think that, as some people are now using >2Gig tables, we
> should think of adding segmentation to pg_dump as an option, otherwise
> this is going to become a real issue at some point.

We segment the tables themselves, so the OS never sees a table file
over 2 gigs. Does anyone actually have a table that dumps to a flat
file over 2 gigs on an OS that can't support files that large? I've
never heard of a complaint.

> Also, I think we could do with having some standard way of dumping and
> restoring large objects.

I need to add a separate large object type.

--
  Bruce Momjian                  |  http://www.op.net/~candle
  maillist@candle.pha.pa.us      |  (610) 853-3000
  +  If your life is a hard drive, |  830 Blythe Avenue
  +  Christ can be your backup.    |  Drexel Hill, Pennsylvania 19026
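[Editor's note: until pg_dump grows a segmentation option of its own, a
common workaround for the 2GB file limit discussed above is to pipe the
dump through split(1). This is a sketch only; the database name and
output path are taken from Tim's earlier message, and the 1000MB piece
size is an arbitrary choice below the 2GB limit.]

```shell
# Segment the dump so no single output file exceeds the OS's 2GB limit;
# split(1) reads the dump from stdin ("-") and writes 1000MB pieces
# named db_domain.dump.aa, db_domain.dump.ab, and so on.
pg_dump db_domain | split -b 1000m - /fireball/pg_dumps/db_domain.dump.

# Restore by concatenating the pieces, in order, back into psql.
cat /fireball/pg_dumps/db_domain.dump.* | psql db_domain
```

Because the shell expands the `*` glob in sorted order, the pieces are
reassembled in the same order split wrote them, so the concatenated
stream is byte-identical to an unsegmented dump.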