Thread: vacuum problem
vacuum still doesn't work for me.

I did a regression test and found the sanity_check test failed. The result
shows that the backend died while doing vacuum. I destroyed and re-created
the regression database. Vacuum for a fresh database seems to work. Next I
ran each test one by one manually to see when vacuum began to fail.
Vacuuming after the boolean test was OK, but after the char test the
backend died. Here is a backtrace of the backend.

gdb ../../backend/postgres ../../../../data/base/regression/postgres.core
GDB is free software and you are welcome to distribute copies of it
 under certain conditions; type "show copying" to see the conditions.
There is absolutely no warranty for GDB; type "show warranty" for details.
GDB 4.16 (i386-unknown-freebsd), Copyright 1996 Free Software Foundation, Inc...
Core was generated by `postgres'.
Program terminated with signal 6, Abort trap.
Cannot access memory at address 0x20105080.
#0  0x201c4b61 in ?? ()
(gdb) where
#0  0x201c4b61 in ?? ()
#1  0x201c43d3 in ?? ()
#2  0xcee08 in ExcAbort (excP=0xf9800, detail=0, data=0x0,
    message=0xd20ce "!(0 < (size) && (size) <= (0xfffffff))") at excabort.c:26
#3  0xced67 in ExcUnCaught (excP=0xf9800, detail=0, data=0x0,
    message=0xd20ce "!(0 < (size) && (size) <= (0xfffffff))") at exc.c:173
#4  0xcedba in ExcRaise (excP=0xf9800, detail=0, data=0x0,
    message=0xd20ce "!(0 < (size) && (size) <= (0xfffffff))") at exc.c:190
#5  0xce5ef in ExceptionalCondition (
    conditionName=0xd20ce "!(0 < (size) && (size) <= (0xfffffff))",
    exceptionP=0xf9800, detail=0x11d85c "size=0 [0x0]",
    fileName=0xd20b8 "mcxt.c", lineNumber=228) at assert.c:73
#6  0xd2140 in MemoryContextAlloc (context=0x1320ac, size=0) at mcxt.c:227
#7  0xd23b2 in palloc (size=0) at palloc.c:69
#8  0xde02 in GetIndexValue (tuple=0x210850d8, hTupDesc=0x12d250, attOff=0,
    attrNums=0x21054bbc, fInfo=0x13ae10, attNull=0xefbfaa8f "") at indexam.c:386
#9  0x222e8 in FormIndexDatum (numberOfAttributes=1, attributeNumber=0x21054bbc,
    heapTuple=0x210850d8, heapDescriptor=0x12d250, datum=0x19fe90,
    nullv=0x1a16d0 "\220#\032", fInfo=0x13ae10) at index.c:1284
#10 0x2f45d in vc_rpfheap (vacrelstats=0x12e850, onerel=0x132b10,
    vacuum_pages=0xefbfac44, fraged_pages=0xefbfac38, nindices=3,
    Irel=0x1a16b0) at vacuum.c:1146
#11 0x2e1b7 in vc_vacone (relid=1249, analyze=0, va_cols=0x0) at vacuum.c:552
#12 0x2d911 in vc_vacuum (VacRelP=0x0, analyze=0 '\000', va_cols=0x0) at vacuum.c:256
#13 0x2d72e in vacuum (vacrel=0x0, verbose=0, analyze=0 '\000', va_spec=0x0) at vacuum.c:159
#14 0xa6337 in ProcessUtility (parsetree=0x12d630, dest=Remote) at utility.c:634
#15 0xa3442 in pg_exec_query_dest (query_string=0xefbfad6c "vacuum;",
    dest=Remote, aclOverride=0) at postgres.c:706
#16 0xa3334 in pg_exec_query (query_string=0xefbfad6c "vacuum;") at postgres.c:644
#17 0xa496b in PostgresMain (argc=6, argv=0xefbfcde4, real_argc=4,
    real_argv=0xefbfd6b8) at postgres.c:1481
#18 0x8c0f4 in DoBackend (port=0x131000) at postmaster.c:1412
#19 0x8bb56 in BackendStartup (port=0x131000) at postmaster.c:1191
#20 0x8b046 in ServerLoop () at postmaster.c:725
#21 0x8a92b in PostmasterMain (argc=4, argv=0xefbfd6b8) at postmaster.c:534
#22 0x4bfc7 in main (argc=4, argv=0xefbfd6b8) at main.c:93
Tatsuo Ishii <t-ishii@sra.co.jp> writes:

> vacuum still doesn't work for me.
>
> I did a regression test and found the sanity_check test failed. The
> result shows that the backend died while doing vacuum. I destroyed and
> re-created the regression database. Vacuum for a fresh database seems
> to work.

Did a "cvs update" today, dumped my existing databases with pg_dump,
built and installed the new system. On a newly created database, with
no tables defined, vacuum works (or, at least, claims to). For a
production database with lots of tables, indices and sequences, the
following happens:

nhh=> vacuum;
ERROR:  fmgr_info: function 28261: cache lookup failed

-tih
--
Popularity is the hallmark of mediocrity.  --Niles Crane, "Frasier"
> Tatsuo Ishii <t-ishii@sra.co.jp> writes:
>
> > vacuum still doesn't work for me.
> >
> > I did a regression test and found the sanity_check test failed. The
> > result shows that the backend died while doing vacuum. I destroyed and
> > re-created the regression database. Vacuum for a fresh database seems
> > to work.
>
> Did a "cvs update" today, dumped my existing databases with pg_dump,
> built and installed the new system. On a newly created database, with
> no tables defined, vacuum works (or, at least, claims to). For a
> production database with lots of tables, indices and sequences, the
> following happens:
>
> nhh=> vacuum;
> ERROR:  fmgr_info: function 28261: cache lookup failed

I can find no mention of that number anywhere in the sources. Can you?

--
Bruce Momjian                          |  830 Blythe Avenue
maillist@candle.pha.pa.us              |  Drexel Hill, Pennsylvania 19026
  +  If your life is a hard drive,     |  (610) 353-9879(w)
  +  Christ can be your backup.        |  (610) 853-3000(h)
> > nhh=> vacuum;
> > ERROR:  fmgr_info: function 28261: cache lookup failed
>
> I can find no mention of that number anywhere in the sources.

Isn't that OID above the system-reserved range (which goes to ~20000)?
So Tom's db is having trouble finding a function which he had defined
himself??

                      - Tom
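One quick way to check that guess, assuming the OID in the error message
is accurate, is to ask pg_proc whether that function still exists at all
(28261 is simply the number from the report above; substitute whatever
the error shows):

SELECT oid, proname
FROM pg_proc
WHERE oid = 28261;

-- If this returns no row, the function has been dropped while something
-- (a functional index, for instance) still refers to its OID, which
-- could explain the "cache lookup failed" during vacuum.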
> > > nhh=> vacuum;
> > > ERROR:  fmgr_info: function 28261: cache lookup failed
> >
> > I can find no mention of that number anywhere in the sources.
>
> Isn't that OID above the system-reserved range (which goes to ~20000)?
> So Tom's db is having trouble finding a function which he had defined
> himself??
>
>                       - Tom

OK, this helps. It shows an error in the function DataFill that someone
else complained about. Working on it now.

The multi-key index of bootstrap is proving harder than I thought.

--
Bruce Momjian                          |  830 Blythe Avenue
maillist@candle.pha.pa.us              |  Drexel Hill, Pennsylvania 19026
  +  If your life is a hard drive,     |  (610) 353-9879(w)
  +  Christ can be your backup.        |  (610) 853-3000(h)
Bruce Momjian <maillist@candle.pha.pa.us> writes:

> OK, this helps. It shows an error in the function DataFill that
> someone else complained about. Working on it now.

Great, Bruce! Let me know if you want me to test something -- I've got a
completely repeatable situation here, knowing precisely the steps I've
taken to cause this to happen.

-tih
--
Popularity is the hallmark of mediocrity.  --Niles Crane, "Frasier"
>> > > nhh=> vacuum;
>> > > ERROR:  fmgr_info: function 28261: cache lookup failed
>> >
>> > I can find no mention of that number anywhere in the sources.
>>
>> Isn't that OID above the system-reserved range (which goes to ~20000)?
>> So Tom's db is having trouble finding a function which he had defined
>> himself??
>>
>>                       - Tom
>
>OK, this helps. It shows an error in the function DataFill that someone
>else complained about. Working on it now.
>
>The multi-key index of bootstrap is proving harder than I thought.

This morning I did a cvsup to see if vacuum works, recompiled everything,
and ran initdb. But I got a core dump from the shell in the middle of
processing. It looked like creating pg_user was failing, so I ran the
following command by hand:

echo "CREATE RULE _RETpg_user AS ON SELECT TO pg_user DO INSTEAD
SELECT usename, usesysid, usecreatedb, usetrace, usesuper,
usecatupd, '********'::text as passwd, valuntil FROM pg_shadow;" |
postgres -F -Q -D/usr/mgr/t-ishii/src/PostgreSQL/anonCVS/data template1

POSTGRES backend interactive interface
$Revision: 1.86 $ $Date: 1998/08/25 21:34:04 $

> ERROR:  cannot find attribute 1 of relation pg_user
ERROR:  cannot find attribute 1 of relation pg_user

Removing the sources and getting the whole source tree again doesn't
help. This is FreeBSD 2.2.6. I will check on a different platform.
--
Tatsuo Ishii
t-ishii@sra.co.jp
> It looked like creating pg_user was failing, so I ran the following
> command by hand:
>
> echo "CREATE RULE _RETpg_user AS ON SELECT TO pg_user DO INSTEAD
> SELECT usename, usesysid, usecreatedb, usetrace, usesuper,
> usecatupd, '********'::text as passwd, valuntil FROM pg_shadow;" |
> postgres -F -Q -D/usr/mgr/t-ishii/src/PostgreSQL/anonCVS/data template1
>
> POSTGRES backend interactive interface
> $Revision: 1.86 $ $Date: 1998/08/25 21:34:04 $
>
> > ERROR:  cannot find attribute 1 of relation pg_user
> ERROR:  cannot find attribute 1 of relation pg_user
>
> Removing the sources and getting the whole source tree again doesn't
> help. This is FreeBSD 2.2.6. I will check on a different platform.

That is bizarre. You are running FreeBSD, and me BSDI. Seems like they
should behave the same.

--
Bruce Momjian                          |  830 Blythe Avenue
maillist@candle.pha.pa.us              |  Drexel Hill, Pennsylvania 19026
  +  If your life is a hard drive,     |  (610) 353-9879(w)
  +  Christ can be your backup.        |  (610) 853-3000(h)
>> It looked like creating pg_user was failing, so I ran the following
>> command by hand:
>>
>> echo "CREATE RULE _RETpg_user AS ON SELECT TO pg_user DO INSTEAD
>> SELECT usename, usesysid, usecreatedb, usetrace, usesuper,
>> usecatupd, '********'::text as passwd, valuntil FROM pg_shadow;" |
>> postgres -F -Q -D/usr/mgr/t-ishii/src/PostgreSQL/anonCVS/data template1
>>
>> POSTGRES backend interactive interface
>> $Revision: 1.86 $ $Date: 1998/08/25 21:34:04 $
>>
>> > ERROR:  cannot find attribute 1 of relation pg_user
>> ERROR:  cannot find attribute 1 of relation pg_user
>>
>> Removing the sources and getting the whole source tree again doesn't
>> help. This is FreeBSD 2.2.6. I will check on a different platform.
>
>That is bizarre. You are running FreeBSD, and me BSDI. Seems like they
>should behave the same.

I think so too. I don't know why.

BTW, I tried the same source on my LinuxPPC box. This time initdb ran
fine. Then I did the regression test. All of the tests failed, and the
error messages are quite similar. Any idea?

results/boolean.out:ERROR:  cannot find attribute 1 of relation booltbl1
results/boolean.out:ERROR:  cannot find attribute 1 of relation booltbl2
results/char.out:ERROR:  cannot find attribute 1 of relation char_tbl
results/int2.out:ERROR:  cannot find attribute 1 of relation int2_tbl
results/name.out:ERROR:  cannot find attribute 1 of relation name_tbl
results/strings.out:ERROR:  cannot find attribute 1 of relation char_tbl
results/strings.out:ERROR:  cannot find attribute 1 of relation text_tbl
results/strings.out:ERROR:  cannot find attribute 1 of relation varchar_tbl
results/text.out:ERROR:  cannot find attribute 1 of relation text_tbl
results/varchar.out:ERROR:  cannot find attribute 1 of relation varchar_tbl
--
Tatsuo Ishii
t-ishii@sra.co.jp
> BTW, I tried the same source on my LinuxPPC box. This time initdb ran
> fine. Then I did the regression test. All of the tests failed, and the
> error messages are quite similar. Any idea?
>
> results/boolean.out:ERROR:  cannot find attribute 1 of relation booltbl1

Could this have something to do with the resdomno (sp?) attribute
renumbering done to help with views?

                      - Tom
> > > nhh=> vacuum;
> > > ERROR:  fmgr_info: function 28261: cache lookup failed
> >
> > I can find no mention of that number anywhere in the sources.
>
> Isn't that OID above the system-reserved range (which goes to ~20000)?
> So Tom's db is having trouble finding a function which he had defined
> himself??

This should all be fixed now.

--
Bruce Momjian                          |  830 Blythe Avenue
maillist@candle.pha.pa.us              |  Drexel Hill, Pennsylvania 19026
  +  If your life is a hard drive,     |  (610) 353-9879(w)
  +  Christ can be your backup.        |  (610) 853-3000(h)
Bruce Momjian <maillist@candle.pha.pa.us> writes:

> This should all be fixed now.

Looks that way to me. Thanks!

By the way, PostgreSQL somehow seems to have become significantly
faster for my use sometime over the last month or two. For the select
and update queries I regularly execute, which generally involve two
or three tables and ditto indices, I'm seeing what feels like twice
the speed of what I got before -- and I've been increasing the amount
of data in my tables without any schema changes or index additions!

-tih
--
Popularity is the hallmark of mediocrity.  --Niles Crane, "Frasier"
> By the way, PostgreSQL somehow seems to have become significantly
> faster for my use sometime over the last month or two. For the select
> and update queries I regularly execute, which generally involve two
> or three tables and ditto indices, I'm seeing what feels like twice
> the speed of what I got before -- and I've been increasing the amount
> of data in my tables without any schema changes or index additions!

Without knowing the real reason, I'm going to jump in and have the type
coercion code take credit for this *grin*.

In particular, it _may_ do a better job of matching up indices with
queries.

Are there other reasons why things may have gotten faster?

It's about time to start working on release notes (Bruce?), and perhaps
this could be quantified and mentioned...

btw, the release notes are in sgml (doc/src/sgml/release.sgml) and all
previous notes and detailed change lists I could find have been put into
there. The notes for the next release can look very similar to what is
there already, and there is already a section set aside for it.

                      - Tom
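To make the index-matching guess concrete, here is a sketch of the sort
of query where better type coercion could matter. The table, column, and
index names are made up purely for illustration, and whether the planner
actually picks the index depends on the version in question:

-- hypothetical table: an int2 column with a btree index on it
CREATE TABLE speed_tbl (i2 int2, note text);
CREATE INDEX speed_tbl_i2_idx ON speed_tbl (i2);

-- an integer literal is int4; without coercion help the planner may
-- fall back to a sequential scan for this form...
SELECT * FROM speed_tbl WHERE i2 = 42;

-- ...while an explicit cast to int2 matches the indexed type directly
SELECT * FROM speed_tbl WHERE i2 = 42::int2;

That is only a guess at the mechanism, of course; running EXPLAIN on the
real queries before and after would settle it.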
> > By the way, PostgreSQL somehow seems to have become significantly
> > faster for my use sometime over the last month or two. For the select
> > and update queries I regularly execute, which generally involve two
> > or three tables and ditto indices, I'm seeing what feels like twice
> > the speed of what I got before -- and I've been increasing the amount
> > of data in my tables without any schema changes or index additions!
>
> Without knowing the real reason, I'm going to jump in and have the type
> coercion code take credit for this *grin*.
>
> In particular, it _may_ do a better job of matching up indices with
> queries.

So, you want to take credit for it. :-)  You can. There is nothing I did
to speed things up except to remove many sequential scans of system
tables from the code. I can't imagine that causing the speedups he is
reporting.

> Are there other reasons why things may have gotten faster?

It would be interesting to see if the old code did not use indexes, and
the new stuff does, and if type conversion was needed in those queries.

> It's about time to start working on release notes (Bruce?), and perhaps
> this could be quantified and mentioned...

Yes, this weekend, perhaps tomorrow. I know people are waiting. Once I
do it, I have to add every additional change, and that is a pain, so I
wait until near the end.

Can you work on the regression tests? My oidname,... removal is a
problem.

> btw, the release notes are in sgml (doc/src/sgml/release.sgml) and all
> previous notes and detailed change lists I could find have been put into
> there. The notes for the next release can look very similar to what is
> there already, and there is already a section set aside for it.

OK, I see it. Looks like I am going to have to learn sgml. I have no way
of viewing it, so I will have to do my best, and you can tell me how it
looks. Is there an sgml->something conversion tool I can use for testing?

--
Bruce Momjian                          |  830 Blythe Avenue
maillist@candle.pha.pa.us              |  Drexel Hill, Pennsylvania 19026
  +  If your life is a hard drive,     |  (610) 353-9879(w)
  +  Christ can be your backup.        |  (610) 853-3000(h)
> OK, I see it. Looks like I am going to have to learn sgml. I have no
> way of viewing it, so I will have to do my best, and you can tell me
> how it looks. Is there an sgml->something conversion tool I can use
> for testing?

Well, you are allowed to not learn sgml. As you can see from the current
release.sgml file, there are a _very_ few tags used to mark up your
previous release notes. And you can choose to ignore those.

For example, in previous releases you have built a "one-line-per" list
of changes. Those I have laid into the release notes without
modification, just bracketing them with a "<programlisting>" tag to have
them come through unchanged.

The written introduction you did for the last big release was put into
sgml as a section (probably a "<sect2>" tag) and then the paragraphs
were just started with the "<para>" tag.

And again, I'm happy to mark them up from your usual plain-text file.

Another option, if you don't want or need to try sgml in the privacy of
your own home :) is to use the installation on hub.org. It works, and a
cvs checkout of the tree (which you have already done), a "configure",
and then a "make postgres.html" in doc/src/sgml is sufficient to build
the full html docs. I've left out one or two one-time trivial setup
steps, but I wanted to give you an idea of how easy it can be.

What I'd like to do is tie the cvs tree to an automatic update of the
pages on the web site, which would let you check things in and then see
a doc rebuild soon after. Been thinking about it, but haven't done it
yet :(

                      - Tom
> Can you work on the regression tests? My oidname,... removal is a
> problem.

Sure. I've stayed away 'til now since I am working on the ODBC interface
and the docs (and the serial type :).

If it looks like things will build, then I'll do another cvs checkout
and try things out.

                      - Tom
> > Can you work on the regression tests? My oidname,... removal is a
> > problem.
>
> Sure. I've stayed away 'til now since I am working on the ODBC interface
> and the docs (and the serial type :).

ODBC. I didn't realize you were in that area. Docs?

> If it looks like things will build, then I'll do another cvs checkout
> and try things out.

No problem reports for several days.

--
Bruce Momjian                          |  830 Blythe Avenue
maillist@candle.pha.pa.us              |  Drexel Hill, Pennsylvania 19026
  +  If your life is a hard drive,     |  (610) 353-9879(w)
  +  Christ can be your backup.        |  (610) 853-3000(h)
> ODBC. I didn't realize you were in that area.

I wasn't :) The next version of ApplixWare on Linux is likely to have an
ODBC interface, and I got a beta copy. It took a bit of work to figure
out how it all was supposed to work (and _lots_ of help from the authors)
to get it running natively on Linux. Will be in the v6.4 release...

> Docs?

Jose', Oliver, and myself will (likely) have a full set of reference
pages for SQL commands in the next release. Still a good bit of
transcription work remains to get fully into sgml from Jose's original
flat files.

> No problem reports for several days.

OK. Let me get the ODBC stuff in sync first, though if we are trying for
a Sept. 1 beta I can put that aside for now.

Byron, how should we try re-integrating the ODBC code? I have made no
changes which will affect the WIN32 port (unless there is some small bad
behavior with truncating the DSN file name if it has trailing blanks, as
was the case with ApplixWare). I did change the location of odbcinst.ini
if under Unix and if -DODBCINST=$(POSTGRESDIR) (for example) is defined
at compile-time, as it is for my new Makefile. That way the
installation-wide parameters are in a location available to the
installer. No change if the value is not defined; things would still
look in /etc/odbcinst.ini as before.

                      - Tom
> > OK, I see it. Looks like I am going to have to learn sgml. I have no
> > way of viewing it, so I will have to do my best, and you can tell me
> > how it looks. Is there an sgml->something conversion tool I can use
> > for testing?
>
> Well, you are allowed to not learn sgml. As you can see from the current
> release.sgml file, there are a _very_ few tags used to mark up your
> previous release notes. And you can choose to ignore those.
>
> For example, in previous releases you have built a "one-line-per" list
> of changes. Those I have laid into the release notes without
> modification, just bracketing them with a "<programlisting>" tag to have
> them come through unchanged.
>
> The written introduction you did for the last big release was put into
> sgml as a section (probably a "<sect2>" tag) and then the paragraphs
> were just started with the "<para>" tag.
>
> And again, I'm happy to mark them up from your usual plain-text file.
>
> Another option, if you don't want or need to try sgml in the privacy of
> your own home :) is to use the installation on hub.org. It works, and a
> cvs checkout of the tree (which you have already done), a "configure",
> and then a "make postgres.html" in doc/src/sgml is sufficient to build
> the full html docs. I've left out one or two one-time trivial setup
> steps, but I wanted to give you an idea of how easy it can be.
>
> What I'd like to do is tie the cvs tree to an automatic update of the
> pages on the web site, which would let you check things in and then see
> a doc rebuild soon after. Been thinking about it, but haven't done it
> yet :(

Thomas, as I remember, the HISTORY file is constantly changing up until
the final days. Please let me know when you are ready to cut the docs,
and I will make sure the HISTORY file is up-to-date, and you can load it
into sgml.

--
Bruce Momjian                          |  830 Blythe Avenue
maillist@candle.pha.pa.us              |  Drexel Hill, Pennsylvania 19026
  +  If your life is a hard drive,     |  (610) 353-9879(w)
  +  Christ can be your backup.        |  (610) 853-3000(h)