Thread: Upgrading from 6.3 -> 6.4.2/6.5b1 possible?
AHH! The pgsql v6.3 backend has begun crashing horribly on me since I've
added a few more database-backed webpages (with messages like 'backend
cache invalidation...'). I figured that upgrading to 6.4.2 would solve my
problems, but I can't upgrade! Things I've tried:

* Using the old pg_dumpall -z > file and psql -e template1 < file (after
  install and initdb)
* Using the pg_dumpall from the new version.
* Using pg_upgrade
* Rereading the docs dozens of times
* Having a friend try it.
* All of the above with both 6.4.2 and 6.5beta

Every time I start loading, it chokes on the dump output and falls into
a constant stream of parse errors and messages like "PQsendQuery() --
query is too long. Maximum length is 8191"

Argh!!! I've got websites constantly crashing and customers ready to
revolt. What can I do???
Gregory Maxwell wrote:
> AHH! The pgsql v6.3 backend has begun crashing horribly on me since I've
> added a few more database backed webpages (with messages like 'backend
> cache invalidation...').. I figured that upgrading to 6.4.2 would solve my
> problems.. But I can't upgrade!
> [...]
> Every time I start loading, it chokes up on the dump output and falls into
> a constant stream of parse errors and messages like "PQsendQuery() --
> query is too long. Maximum length is 8191"

edit the dump file; change every line to an insert command. Run that
so that you can see what it is objecting to.

6.3's dump did not cover everything, so you are likely to have to edit the
dump file to make it reloadable.

--
Oliver Elphick                                Oliver.Elphick@lfix.co.uk
Isle of Wight                                 http://www.lfix.co.uk/oliver
PGP key from public servers; key ID 32B8FAA1
========================================
"Do all things without murmurings and disputings; that ye may be
 blameless and harmless, the sons of God, without rebuke, in the
 midst of a crooked and perverse nation, among whom ye shine as
 lights in the world."                        Philippians 2:14,15
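Oliver's "change every line to an insert command" can be scripted rather than done by hand. The following is a hypothetical sketch, not the list's canonical recipe: it assumes tab-separated COPY data, an unquoted table name on the COPY line, and values that can all be quoted as plain strings (\N nulls and embedded quotes would still need hand-editing). A tiny stand-in dump is generated so the conversion is demonstrable.

```shell
# Tiny stand-in for a dump file (real dumps are far larger):
printf 'COPY accounts FROM stdin;\ngreg\t100\nchuck\t250\n\\.\n' > dump.sql

# Turn each COPY data row into an INSERT, so psql reports exactly
# which row it objects to instead of rejecting the whole block:
awk '
/^COPY/ { table = $2; in_copy = 1; next }   # remember the table name
/^\\\./ { in_copy = 0; next }               # end-of-data marker
in_copy {
    n = split($0, f, "\t")
    line = "INSERT INTO " table " VALUES ("
    for (i = 1; i <= n; i++)
        line = line (i > 1 ? ", " : "") "'\''" f[i] "'\''"
    print line ");"
}
' dump.sql > inserts.sql

cat inserts.sql
```

Loading inserts.sql with psql -e then shows the offending row right at the point of failure.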
On Sun, 30 May 1999, Oliver Elphick wrote:

> Gregory Maxwell wrote:
> > [...]
> > Every time I start loading, it chokes up on the dump output and falls into
> > a constant stream of parse errors and messages like "PQsendQuery() --
> > query is too long. Maximum length is 8191"
>
> edit the dump file; change every line to an insert command. Run that
> so that you can see what it is objecting to.
>
> 6.3's dump did not cover everything, so you are likely to have to edit the
> dump file to make it reloadable.

Is there a better way, perhaps even a perl script? My dump is quite large
(~1 million rows?) and manual editing is right out. If I'm going to go
through that kind of labor I'll probably switch to MySQL (which has a
larger web userbase, even though I prefer postgres for its
completeness)..

Argh.. Are you aware of any patches to v6.3 that might improve my
stability without breaking compatibility?

Thanks
I've seen this problem too in 6.5 beta. I don't have a solution, but I'd
just like to add my voice to say that this problem is real.

Actually, I did have a kind of solution: dump proper insert statements
into the dump file and then run every insert in a separate process. Ugly,
but workable if the number of rows isn't too great.

while read A
do
    echo "$A" | psql databasename
done < dumpfilename

Gregory Maxwell wrote:
> [...]
> Every time I start loading, it chokes up on the dump output and falls into
> a constant stream of parse errors and messages like "PQsendQuery() --
> query is too long. Maximum length is 8191"
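One psql process per insert is painfully slow for a million-row dump. A hedged middle ground, assuming the dump has already been converted to one INSERT statement per line: split it into chunks and run one psql per chunk, so a bad row only aborts its own chunk. The psql call is commented out to keep the sketch self-contained; the chunk size and all names are arbitrary.

```shell
# Stand-in for a converted dump (one INSERT per line):
seq 1 2000 | sed 's/.*/INSERT INTO t VALUES (&);/' > inserts.sql

# Split into 500-line chunks; split names them chunk.aa, chunk.ab, ...
split -l 500 inserts.sql chunk.

for f in chunk.*
do
    # psql databasename < "$f"    # the real load step (needs a server)
    wc -l < "$f"                  # stand-in: show each chunk's size
done
```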
> AHH! The pgsql v6.3 backend has begun crashing horribly on me since I've
> added a few more database backed webpages (with messages like 'backend
> cache invalidation...').. I figured that upgrading to 6.4.2 would solve my
> problems.. But I can't upgrade!
> [...]
> Every time I start loading, it chokes up on the dump output and falls into
> a constant stream of parse errors and messages like "PQsendQuery() --
> query is too long. Maximum length is 8191"

You are running psql -e. You need to find out where in the dump file it
is failing. Send us the area where it is crashing, or send me personally
(not mailing list) the most of the psql output and I can figure it out.

--
  Bruce Momjian                        |  http://www.op.net/~candle
  maillist@candle.pha.pa.us            |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026
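Bruce's "find out where it is failing" step can be scripted: replay the dump with echoing on, capture everything, then pull out the first error and the statements echoed just before it. The psql line is the real run and is shown commented out, since it needs a live server; a stand-in log lets the extraction commands actually execute. All names are made up.

```shell
# psql -e template1 < dump.sql > load.log 2>&1    # real run

# Stand-in log so the slicing below is demonstrable:
printf 'CREATE TABLE accounts (...);\nCOPY accounts FROM stdin;\nERROR: parse error at or near "0"\n' > load.log

grep -n 'ERROR' load.log | head -5        # line numbers of the first errors
sed -n '1,/ERROR/p' load.log | tail -5    # statements echoed just before it
```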
I have had trouble on numerous occasions "undumping" a pg_dump. Below is
a perl script that I use as a pipe for undumping... hope it helps.

.....pg_undump.....
#!/usr/bin/perl -p

# This script is a pipe that is used to break up PSQL dumps into pieces
# that are more manageable by the postmaster.  If there are too many
# input lines, the memory used by the transaction block can easily
# choke the machine.

BEGIN { $MAXLINES = 1000; }

# Does the current input line define the beginning of an input block?
if (m/^COPY/) {
    # When a copy line is encountered, grab the line for later use
    # and turn on the line counter.
    $copyline = $_;
    $cnt = 0;
}

# Does the current input line define the end of an input block?
if (m{^\\\.}) {
    # We have just matched the end-of-stdin line.  Set counter off.
    undef $cnt;
}

# If we are in an input block and the count is at the max, "flush" the
# buffer and set up for the next block.
if (defined($cnt) and ($cnt > $MAXLINES)) {
    $cnt = 0;
    print '\.', "\n";
    print $copyline;
}

$cnt++ if defined $cnt;
> I've seen this problem too in 6.5 beta. I don't have a solution, but I'd
> just like to add my voice to say that this problem is real.
>
> Actually, I did have a kind of solution. If you dump proper insert
> statements into the dump and then run every insert in a separate
> process. Ugly but workable if the number isn't too great.
>
> while read A
> do
>     echo "$A" | psql databasename
> done < dumpfilename

We don't hear about this very often. Can someone tell us exactly what a
bad load looks like, or a cause?

--
  Bruce Momjian  |  maillist@candle.pha.pa.us
Following this thread... just wondering: is this a known problem when
upgrading from 6.3.x to 6.4.x or 6.5, or is this an isolated problem from
one user? We are looking to upgrade our postgresql 6.3.2 soon. (I think
we've been having some of the same problems described below: 'backend
cache invalidation'.)

Kevin

On Sun, 30 May 1999, Gregory Maxwell wrote:

> On Sun, 30 May 1999, Oliver Elphick wrote:
> > [...]
> > edit the dump file; change every line to an insert command. Run that
> > so that you can see what it is objecting to.
> >
> > 6.3's dump did not cover everything, so you are likely to have to edit the
> > dump file to make it reloadable.
>
> Is there a better way, perhaps even a perl script? My dump is quite large
> (~1 million rows?) and manual editing is right out. If I'm going to go
> through that kind of labor I'll probably switch to MySQL (which has a
> larger web userbase, even though I prefer postgres for it's
> completeness)..
>
> Argh.. Are you aware of any patches to v6.3 that might improve my
> stability without breaking compatibility?
>
> Thanks

--------------------------------------------------------------------
Kevin Heflin       | ShreveNet, Inc.      | Ph:318.222.2638 x103
VP/Mac Tech        | 333 Texas St #175    | FAX:318.221.6612
kheflin@shreve.net | Shreveport, LA 71101 | http://www.shreve.net
--------------------------------------------------------------------
Bruce,
I've put up a script for ftp that shows some weird behaviour. This
script was created by hand, so it's possible that some errors are caused
by syntax errors (although I spent a while trying to find them without
success).

I think the point is that it is not recovering from / reporting the error
well (if there is one, that is). It just gives errors about "max query
size exceeded". Something happens part way through, and then you just
get the same error thousands of times.

ftp://tech.com.au/pub/load-error.gz

Bruce Momjian wrote:
> [...]
> We don't hear about this very often. Can someone tell us exactly what a
> bad load looks like, or a cause.
Sorry. I need to see the lines causing the error. Looks like the problem
is near here:

INSERT INTO product (title, summary, category, mfrcode, mfr, costprice,
                     rrprice, taxrate, weight)
SELECT 'CF-37L/QD CASE for F-401s,x QD (not original F-401QD)',
       'FOR F-401,s,x, leatherette, no strap',
       '24081', '814913', '21192', '75.00', '123.75', '.32', null;

Remove lines from the file until the load works. I recommend starting at
this line, and removing above and below it. The last line you remove is
the one that is causing the failure. Report back on the cause.

> Bruce,
> I've put up a script for ftp that shows some wierd behaviour.
> [...]
> It just gives errors about "max query size exceeded". Something happens
> part way through, and then you just get the same error thousands of times.
>
> ftp://tech.com.au/pub/load-error.gz

--
  Bruce Momjian  |  maillist@candle.pha.pa.us
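The "remove lines until the load works" advice can be organised as a bisection over line ranges instead of hand-editing. A hedged sketch: the line numbers, file names, and psql runs (commented out, since they need a live server) are all illustrative, and a generated stand-in dump shows the slicing itself working.

```shell
# Stand-in dump so the line-slicing is demonstrable:
seq 1 10000 | sed 's/^/STATEMENT /' > dump.sql

# Bisection idea with made-up numbers:
#   head -n 5000 dump.sql | psql -e template1    # suppose this loads
#   head -n 6000 dump.sql | psql -e template1    # ...and this fails

# Then the offending statement is somewhere in lines 5001-6000;
# extract just that window for inspection and narrow again:
sed -n '5001,6000p' dump.sql > suspect.sql
head -n1 suspect.sql
```

Repeating the halving on suspect.sql pins down the single failing line in about a dozen runs, even on a million-row dump.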
I tried the below script (again, going from 6.3 -> 6.4.2 on RedHat 5.2
Linux). Now I don't get the out-of-memory errors, but it still chokes
on the syntax. The back end spits out things like:

ERROR:  type name lookup of char16 failed
ERROR:  ChangeAcl: class "accounts" not found
ERROR:  ChangeAcl: class "accounts" not found
ERROR:  COPY command failed.  Class accounts does not exist.
ERROR:  parser: parse error at or near "0"
ERROR:  type name lookup of char8 failed
ERROR:  ChangeAcl: class "passwd" not found
ERROR:  ChangeAcl: class "passwd" not found
ERROR:  type name lookup of char8 failed
ERROR:  ChangeAcl: class "page_text" not found
ERROR:  ChangeAcl: class "page_text" not found
ERROR:  ChangeAcl: class "page_text" not found
ERROR:  type name lookup of char8 failed
ERROR:  ChangeAcl: class "page_loan_matrix" not found
ERROR:  ChangeAcl: class "page_loan_matrix" not found
ERROR:  ChangeAcl: class "page_loan_matrix" not found
ERROR:  COPY command failed.  Class passwd does not exist.
ERROR:  parser: parse error at or near "greg"
ERROR:  parser: parse error at or near "$100"
FATAL 1:  Database chuck1 does not exist in pg_database

Thanks for your help. Everyone here has been very helpful!

On Mon, 31 May 1999, Michael A. Koerber SR wrote:

> I have had trouble on numerous occasions "undumping" a pg_dump. Below is
> a perl script that I use as a pipe for undumping... hope it helps
> [...]
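The "type name lookup of char16 failed" errors above come from the old fixed-width charN types that the 6.3 dump still references. A hedged sketch of a blunt pre-load rewrite follows; it assumes the strings char8/char16 never occur inside your actual table data, so diff the result against the original before trusting it. The sample schema line is made up for illustration.

```shell
# Sample line of the kind a 6.3 schema dump might contain
# (table and column names are invented):
printf 'CREATE TABLE passwd (login char8, pw char16);\n' > old.dump

# Rewrite the old charN type names to char(N).  This is a plain
# textual substitution and would also hit charN inside string data,
# hence the advice to inspect a diff before loading the result.
sed -e 's/char16/char(16)/g' -e 's/char8/char(8)/g' old.dump > new.dump
cat new.dump
```

The ChangeAcl and "COPY command failed" errors then follow from the same root cause: once the CREATE TABLE fails, every later statement touching that table fails too, so fixing the type names first removes most of the cascade.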