Thread: Current sources?
I have seen a few patches fly by on the list, but when I checked, the last
snapshot was dated May 4th. Unhappily, CVSup is not working for me right
now. Is there a later snapshot, or should I just work with this one?

Oh, and who should I really direct this kind of "site admin" question to
anyway?

Thanks
-dg

David Gould           dg@illustra.com          510.628.3783 or 510.305.9468
Informix Software (No, really)        300 Lakeside Drive, Oakland, CA 94612
"Of course, someone who knows more about this will correct me if I'm wrong,
 and someone who knows less will correct me if I'm right."
                                  --David Palmer (palmer@tybalt.caltech.edu)
On Fri, 22 May 1998, David Gould wrote:

> I have seen a few patches fly by on the list, but when I checked, the
> last snapshot was dated May 4th. Unhappily, CVSup is not working for me
> right now. Is there a later snapshot, or should I just work with this one?

Snapshots only happen once a week unless we're in a BETA freeze... what is
wrong with CVSup for you, though?

> Oh, and who should I really direct this kind of "site admin" question to
> anyway?

Me... but the list works too, since then everyone knows what is going on
(for those new *grin*)
> I have seen a few patches fly by on the list, but when I checked, the
> last snapshot was dated May 4th. Unhappily, CVSup is not working for me
> right now. Is there a later snapshot, or should I just work with this one?
>
> Oh, and who should I really direct this kind of "site admin" question to
> anyway?

Marc, Mr. scrappy, scrappy@hub.org.

--
Bruce Momjian                    |  830 Blythe Avenue
maillist@candle.pha.pa.us        |  Drexel Hill, Pennsylvania 19026
  + If your life is a hard drive,   |  (610) 353-9879(w)
  + Christ can be your backup.      |  (610) 853-3000(h)
At 7:54 AM 98.5.22 -0400, The Hermit Hacker wrote:
> On Fri, 22 May 1998, David Gould wrote:
>
>> I have seen a few patches fly by on the list, but when I checked, the
>> last snapshot was dated May 4th. Unhappily, CVSup is not working for me
>> right now. Is there a later snapshot, or should I just work with this one?
>
> Snapshots only happen once a week unless we're in a BETA freeze... what
> is wrong with CVSup for you, though?

Once a week? The snapshot hasn't been updated for at least 2 weeks now.

--
Tatsuo Ishii
t-ishii@sra.co.jp
The Hermit Hacker <scrappy@hub.org> writes:

> Snapshots only happen once a week unless we're in a BETA freeze... what
> is wrong with CVSup for you, though?

CVSup can only be used on a few platforms, and is a hell of a big job to
port to new ones, because you have to begin by porting a Modula-3
compiler. Decidedly non-trivial.

-tih
--
Popularity is the hallmark of mediocrity.  --Niles Crane, "Frasier"
On 22 May 1998, Tom Ivar Helbekkmo wrote:

> The Hermit Hacker <scrappy@hub.org> writes:
>
>> Snapshots only happen once a week unless we're in a BETA freeze... what
>> is wrong with CVSup for you, though?
>
> CVSup can only be used on a few platforms, and is a hell of a big job
> to port to new ones, because you have to begin by porting a Modula-3
> compiler. Decidedly non-trivial.

I've tried to get anonCVS working, and am still working on it, but, for
some odd reason, it isn't working as expected, even with following the
instructions laid out in several FAQs :(  Try this:

    cvs -d :pserver:anoncvs@postgresql.org:/usr/local/cvsroot login
        - password of 'postgresql'
    cvs -d :pserver:anoncvs@postgresql.org:/usr/local/cvsroot co pgsql

And tell me what it comes up with... it might just be me *shrug*

Marc G. Fournier
Systems Administrator @ hub.org
primary: scrappy@hub.org          secondary: scrappy@{freebsd|postgresql}.org
On Sat, 23 May 1998, Tatsuo Ishii wrote:

> At 7:54 AM 98.5.22 -0400, The Hermit Hacker wrote:
>> On Fri, 22 May 1998, David Gould wrote:
>>
>>> I have seen a few patches fly by on the list, but when I checked, the
>>> last snapshot was dated May 4th. Unhappily, CVSup is not working for me
>>> right now. Is there a later snapshot, or should I just work with this one?
>>
>> Snapshots only happen once a week unless we're in a BETA freeze... what
>> is wrong with CVSup for you, though?
>
> Once a week? The snapshot hasn't been updated for at least 2 weeks now.

My error... I had hard-coded the path to tar into the script instead of
letting it come from the PATH... fixed, with a snapshot generated later
this afternoon...

Marc G. Fournier
Systems Administrator @ hub.org
primary: scrappy@hub.org          secondary: scrappy@{freebsd|postgresql}.org
At 8:17 PM 98.5.22 -0300, The Hermit Hacker wrote:
> On Sat, 23 May 1998, Tatsuo Ishii wrote:
>
>>> Snapshots only happen once a week unless we're in a BETA freeze... what
>>> is wrong with CVSup for you, though?
>>
>> Once a week? The snapshot hasn't been updated for at least 2 weeks now.
>
> My error... I had hard-coded the path to tar into the script instead of
> letting it come from the PATH... fixed, with a snapshot generated later
> this afternoon...

Thanks!

--
Tatsuo Ishii
t-ishii@sra.co.jp
The Hermit Hacker <scrappy@hub.org> writes:

> Try this:
>
>     cvs -d :pserver:anoncvs@postgresql.org:/usr/local/cvsroot login
>         - password of 'postgresql'
>     cvs -d :pserver:anoncvs@postgresql.org:/usr/local/cvsroot co pgsql
>
> And tell me what it comes up with... it might just be me *shrug*

Works fine for me, anyway. I'm running CVS 1.7.3 over RCS 5, and it's
pulling the PostgreSQL distribution in as I type. For some reason all the
files are mode 666 (directories are 755, as per UMASK), but that's just a
minor nit I'll figure out or work around.

Is logging in really necessary, though? This is the first time I've ever
used anonymous CVS, but I'd assumed it would "just work", without any
interactive specification of passwords...

-tih
--
Popularity is the hallmark of mediocrity.  --Niles Crane, "Frasier"
Tom Ivar Helbekkmo <tih+mail@Hamartun.Priv.NO> writes:

> Works fine for me, anyway. I'm running CVS 1.7.3 over RCS 5, and
> it's pulling the PostgreSQL distribution in as I type.

The "cvs checkout" worked fine, and a "cvs update" afterwards scanned the
repository and confirmed I was up to date. Looks good! :-)

-tih
--
Popularity is the hallmark of mediocrity.  --Niles Crane, "Frasier"
On 23 May 1998, Tom Ivar Helbekkmo wrote:

> Tom Ivar Helbekkmo <tih+mail@Hamartun.Priv.NO> writes:
>
>> Works fine for me, anyway. I'm running CVS 1.7.3 over RCS 5, and
>> it's pulling the PostgreSQL distribution in as I type.
>
> The "cvs checkout" worked fine, and a "cvs update" afterwards scanned
> the repository and confirmed I was up to date. Looks good! :-)

Odd... it was doing a 'second checkout' that screwed me, where I didn't
think it worked... try doing 'cvs -d <> checkout -P pgsql' and tell me
what that does...

And, yes, password is required...

Marc G. Fournier
Systems Administrator @ hub.org
primary: scrappy@hub.org          secondary: scrappy@{freebsd|postgresql}.org
The Hermit Hacker <scrappy@hub.org> writes:

>> Tom Ivar Helbekkmo <tih+mail@Hamartun.Priv.NO> writes:
>>> Works fine for me, anyway. I'm running CVS 1.7.3 over RCS 5, and
>>> it's pulling the PostgreSQL distribution in as I type.

I'm at the same point using cvs 1.9 and rcs 5.7. I also see the bug that
individual files are checked out with permissions 666. (I've seen the same
thing with Mozilla's anon CVS server, BTW. So if it's a server config
mistake rather than an outright CVS bug, then at least Marc is in good
company...)

> Odd... it was doing a 'second checkout' that screwed me, where I didn't
> think it worked... try doing 'cvs -d <> checkout -P pgsql' and tell me
> what that does...

I'd expect that to choke, because you've specified a nonexistent
repository...

Why would you need to do a second checkout anyway? Once you've got a local
copy of the CVS tree, cd'ing into it and saying "cvs update" is the right
way to pull an update.

BTW, "cvs checkout" is relatively inefficient across a slow link, because
it has to pull down each file separately. The really Right Way to do this
(again stealing a page from Mozilla) is to offer snapshot tarballs that
are images of a CVS checkout done locally at the server. Then, people can
pull a fresh fileset by downloading the tarball, and subsequently use "cvs
update" within that tree to grab updates. In other words, the snapshot
creation script should go something like

    rm -rf pgsql
    cvs -d :pserver:anoncvs@postgresql.org:/usr/local/cvsroot co pgsql
    tar cvfz postgresql.snapshot.tar.gz pgsql

I dunno how you're doing it now, but the snapshot does not contain the CVS
control files so it can't be used as a basis for "cvs update".

			regards, tom lane

PS: for cvs operations across slow links, the Mozilla guys recommend -z3
(eg, "cvs -z3 update") to apply gzip compression to the data being
transferred. I haven't tried this yet but it seems like a smart idea,
especially for a checkout.
On Sat, 23 May 1998, Tom Lane wrote:

>> Odd... it was doing a 'second checkout' that screwed me, where I didn't
>> think it worked... try doing 'cvs -d <> checkout -P pgsql' and tell me
>> what that does...
>
> I'd expect that to choke, because you've specified a nonexistent
> repository...

<> == :pserver:anoncvs@postgresql.org:/usr/local/cvsroot  *grin*

> Why would you need to do a second checkout anyway? Once you've got
> a local copy of the CVS tree, cd'ing into it and saying "cvs update"
> is the right way to pull an update.

My understanding (and the way I've always done it) is that:

    cvs checkout -P pgsql

Will remove any old files, update any existing, and bring in any new...
always worked for me...

> PS: for cvs operations across slow links, the Mozilla guys recommend
> -z3 (eg, "cvs -z3 update") to apply gzip compression to the data being
> transferred. I haven't tried this yet but it seems like a smart idea,
> especially for a checkout.

Geez, sounds like someone with enough knowledge to build an 'AnonCVS
Instructions' web page? :)

Marc G. Fournier
Systems Administrator @ hub.org
primary: scrappy@hub.org          secondary: scrappy@{freebsd|postgresql}.org
On Sat, 23 May 1998, The Hermit Hacker wrote:

> On Sat, 23 May 1998, Tom Lane wrote:
>
>>> Odd... it was doing a 'second checkout' that screwed me, where I
>>> didn't think it worked... try doing 'cvs -d <> checkout -P pgsql' and
>>> tell me what that does...
>>
>> I'd expect that to choke, because you've specified a nonexistent
>> repository...
>
> <> == :pserver:anoncvs@postgresql.org:/usr/local/cvsroot  *grin*
>
>> Why would you need to do a second checkout anyway? Once you've got
>> a local copy of the CVS tree, cd'ing into it and saying "cvs update"
>> is the right way to pull an update.
>
> My understanding (and the way I've always done it) is that:
>
>     cvs checkout -P pgsql
>
> Will remove any old files, update any existing, and bring in any
> new... always worked for me...

What's that? In my understanding you have to log in first. Then do one
checkout. The checkout (co) creates a new directory and updates everything
at that time. If you stay in /usr/local, 'co pgsql' creates
/usr/local/pgsql. Next day or week you go to /usr/local/pgsql and try 'cvs
update -d'. Only the changed files will be updated on your local disc.

-Egon
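Spelled out as commands, the workflow Egon describes comes down to the
following -- a minimal sketch, assuming the anoncvs repository given
earlier in the thread and /usr/local as the working directory:

    cd /usr/local
    cvs -d :pserver:anoncvs@postgresql.org:/usr/local/cvsroot login
        (password 'postgresql'; needed only once)
    cvs -d :pserver:anoncvs@postgresql.org:/usr/local/cvsroot co pgsql

    # later, to pull in only the changed files:
    cd /usr/local/pgsql
    cvs update -d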
The Hermit Hacker <scrappy@hub.org> writes:

> Odd... it was doing a 'second checkout' that screwed me, where
> I didn't think it worked... try doing 'cvs -d <> checkout -P pgsql'
> and tell me what that does...

I assume "<>" means "the same path as before", in which case this is just
doing a new checkout on top of an old one, right? I've got one of those
running now, to see what happens, but I don't see why you would want to do
this. "cvs update" is the way it's supposed to be done, once you've got
the tree checked out. I know _that_ worked.

Right now, the second "cvs checkout" is running, and there seems to be
much communication going on, but no new downloading of files I already
have. Pretty much like during the "cvs update", that is. We'll see.

Ah, yes. Here we go. This looks just like the "cvs update" pass. In fact,
it seems that a second checkout on top of an existing one turns out to
behave just like a (more proper) update from within the tree.

Done now, worked fine.

I'm starting to look forward to when the CVS source tree gets into a
buildable state again! This could be a comfortable way of keeping up to
date with the current sources. Here's hoping you find a good solution to
the s_lock.h misunderstandings soon... :-)

> And, yes, password is required...

I've found where it's stashed, now. I guess that means you only need to
supply it once, to do the initial checkout, and after that you won't have
to worry about it.

-tih
--
Popularity is the hallmark of mediocrity.  --Niles Crane, "Frasier"
The Hermit Hacker <scrappy@hub.org> writes:

> My understanding (and the way I've always done it) is that:
>     cvs checkout -P pgsql
> Will remove any old files, update any existing, and bring in any
> new... always worked for me...

Hmm. Now that I read the cvs manual carefully, it does say:

: Running `checkout' on a directory that was already built by a prior
: `checkout' is also permitted, and has the same effect as specifying the
: `-d' option to the `update' command, that is, any new directories that
: have been created in the repository will appear in your work area.

But the more usual way to do it is with "update". Maybe the "checkout"
method is broken, or has some peculiar requirements about how to specify
the repository --- ordinarily you don't need to name the repository in an
update command, since cvs finds it in CVS/Root. I don't know whether the
same is true for the "checkout" method. Could there be some kind of
conflict between what you said on the command line and what was in
CVS/Root?

> Geez, sounds like someone with enough knowledge to build an
> 'AnonCVS Instructions' web page? :)

Actually I'm just a novice with cvs. OTOH a novice might be just the right
person to make such a page ;-). Let me see if I can gin up some text.

Do you want to adopt the Mozilla method where there is a tarball available
with the CVS control files already in place? I'd recommend it; my initial
checkout over a 28.8K modem took well over two hours. Admittedly I forgot
to supply -z, but judging from the pauses in data transfer, overload of
the hub.org server was also a factor... -z would've made that worse.

			regards, tom lane
Well, the upshot from here is that having done "cvs checkout pgsql" once,
I can do either

    cvs update                       -- within pgsql
    cvs -d blahblah checkout pgsql   -- in parent directory

and they both seem to work and do the same thing (although with no updates
committed in the last two hours, it's hard to be sure).

If I omit the -d option to the cvs checkout, it fails; it does not know
enough to fetch the repository address from pgsql/CVS/Root. But cvs update
is cool.

Dunno why it doesn't work for Marc. I'm running cvs 1.9; you?

			regards, tom lane
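To illustrate why the bare "cvs update" works while a bare checkout does
not: the checked-out tree records its repository in pgsql/CVS/Root, and
update reads that file automatically. A minimal sketch, assuming the
anoncvs repository discussed above:

    $ cat pgsql/CVS/Root
    :pserver:anoncvs@postgresql.org:/usr/local/cvsroot
    $ cd pgsql
    $ cvs -z3 update -d -P

No global -d is needed here: update takes the repository from CVS/Root.
(The -d in "update -d" is a different flag, telling update to create any
new directories.)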
On 23 May 1998, Tom Ivar Helbekkmo wrote:

> Ah, yes. Here we go. This looks just like the "cvs update" pass. In
> fact, it seems that a second checkout on top of an existing one turns
> out to behave just like a (more proper) update from within the tree.
>
> Done now, worked fine.

Odd, must be a problem with the machine I was trying to run it from then
*shrug*

Marc G. Fournier
Systems Administrator @ hub.org
primary: scrappy@hub.org          secondary: scrappy@{freebsd|postgresql}.org
On Sat, 23 May 1998, Tom Lane wrote:

> Do you want to adopt the Mozilla method where there is a tarball
> available with the CVS control files already in place? I'd recommend
> it; my initial checkout over a 28.8K modem took well over two hours.
> Admittedly I forgot to supply -z, but judging from the pauses in
> data transfer, overload of the hub.org server was also a factor...
> -z would've made that worse.

Can you try it with a -z and time it? I would judge it to be faster, since
the -z should be more processor intensive, and the processor on Hub is
idle most of the time. The -z would reduce bandwidth usage, though...

As for the CVS control files... will look at it... but even then, it's
going to take a while to download, due to the overall size of the file in
question...

Marc G. Fournier
Systems Administrator @ hub.org
primary: scrappy@hub.org          secondary: scrappy@{freebsd|postgresql}.org
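For the comparison Marc asks for, wrapping the two checkouts in time(1) is
enough -- a sketch only, again using the anoncvs repository path from
earlier in the thread:

    time cvs -z3 -d :pserver:anoncvs@postgresql.org:/usr/local/cvsroot co -P pgsql
    time cvs     -d :pserver:anoncvs@postgresql.org:/usr/local/cvsroot co -P pgsql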
On Sat, 23 May 1998, Tom Lane wrote:

> Well, the upshot from here is that having done "cvs checkout pgsql"
> once, I can do either
>     cvs update                       -- within pgsql
>     cvs -d blahblah checkout pgsql   -- in parent directory
> and they both seem to work and do the same thing (although with no
> updates committed in the last two hours, it's hard to be sure).
>
> If I omit the -d option to the cvs checkout, it fails; it does not
> know enough to fetch the repository address from pgsql/CVS/Root.
> But cvs update is cool.
>
> Dunno why it doesn't work for Marc. I'm running cvs 1.9; you?

Yup, but I have a few suspicions about why it doesn't work that I'll
investigate on Monday :)

Marc G. Fournier
Systems Administrator @ hub.org
primary: scrappy@hub.org          secondary: scrappy@{freebsd|postgresql}.org
The Hermit Hacker <scrappy@hub.org> writes:

> On Sat, 23 May 1998, Tom Lane wrote:
>> Do you want to adopt the Mozilla method where there is a tarball
>> available with the CVS control files already in place? I'd recommend
>> it; my initial checkout over a 28.8K modem took well over two hours.
>
> Can you try it with a -z and time it?

Well, the Mozilla boys are quite right: -z3 is a nice win over a modem
line. A full checkout with -z3 took 38 minutes, vs about 130 without.

Let's see, the tarfile equivalent of the checked-out tree comes to 3947839
bytes, which would take about 25-30 minutes to download across this line.
So there's not much savings to be had from downloading a tarfile instead
of doing a checkout.

It might still be worth providing CVS control files in the snapshot
tarballs, just so that someone who downloads a snap and later decides to
start using CVS doesn't have to start from scratch. The overhead of doing
so seems to be only about a 1% increase in the tar.gz filesize. But it may
not be worth your trouble to mess with the snapshot generation script...

			regards, tom lane
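As a sanity check on those numbers: 3947839 bytes * 8 bits/byte / 28800
bits/sec is roughly 1097 seconds, or about 18 minutes of raw line time;
allowing for modem protocol overhead and a loaded server, the 25-30 minute
estimate above is plausible.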
Here is what I have been using. I use -z6 because that is the default gzip
compression level. This is not an anon login, but I have been using it for
over a month now. You will probably have to do an anon login first, so it
stores a file in your home directory for later use.

Also attached is my pgcommit script.

---------------------------------------------------------------------------
:
cd /pgcvs || exit 1
cvs -z 6 -d :pserver:momjian@hub.org:/usr/local/cvsroot -q update "$@" pgsql
chown -R postgres .
pgfixups
cd /u/src/pg/src
nice configure \
	--with-x --with-tcl \
	--enable-cassert \
	--with-template=bsdi-3.0 \
	--with-includes="/u/readline" \
	--with-libraries="/u/readline /usr/contrib/lib"
---------------------------------------------------------------------------
:
trap "rm -f /tmp/$$ /tmp/$$a" 0 1 2 3 15
cd /u/src/pgsql || exit 1
ema /tmp/$$
fmt /tmp/$$ >/tmp/$$a
cat /tmp/$$a
chown postgres /tmp/$$a
cvs -z 6 -d :pserver:momjian@hub.org:/usr/local/cvsroot commit -F /tmp/$$a pgsql
find /u/src/pgsql \( -name '*.rej' -print \) -o \( -name '*.orig' -exec rm {} \; \)

--
Bruce Momjian                    |  830 Blythe Avenue
maillist@candle.pha.pa.us        |  Drexel Hill, Pennsylvania 19026
  + If your life is a hard drive,   |  (610) 353-9879(w)
  + Christ can be your backup.      |  (610) 853-3000(h)
> Geez, sounds like someone with enough knowledge to build an
> 'AnonCVS Instructions' web page? :)

Here's some text with HTML markup. The claim in the text that CVS 1.9.28
fixes the mode-666 problem is based on noting a promising-looking entry in
that version's ChangeLog. I have not tested it for myself. Anyone want to
try that version and see if it works?

			regards, tom lane

<html>
<head>
<title>PostgreSQL: Getting the source via CVS</title>
</head>
<body bgcolor=white text=black link=blue vlink=purple>

<font size="+3">Getting the source via CVS</font>

<p>If you would like to keep up with the current sources on a regular
basis, you can fetch them from our CVS server and then use CVS to retrieve
updates from time to time.

<P>To do this you first need a local copy of CVS (Concurrent Versions
System), which you can get from
<A HREF="http://www.cyclic.com/">http://www.cyclic.com/</A> or any GNU
software archive site. Currently we recommend version 1.9.

<P>Once you have installed the CVS software, do this:
<PRE>
cvs -d :pserver:anoncvs@postgresql.org:/usr/local/cvsroot login
</PRE>
You will be prompted for a password; enter '<tt>postgresql</tt>'. You
should only need to do this once, since the password will be saved in
<tt>.cvspass</tt> in your home directory.

<P>Having logged in, you are ready to fetch the PostgreSQL sources. Do
this:
<PRE>
cvs -z3 -d :pserver:anoncvs@postgresql.org:/usr/local/cvsroot co -P pgsql
</PRE>
which will install the PostgreSQL sources into a subdirectory
<tt>pgsql</tt> of the directory you are currently in.

<P>(If you have a fast link to the Internet, you may not need
<tt>-z3</tt>, which instructs CVS to use gzip compression for transferred
data. But on a modem-speed link, it's a very substantial win.)

<P>This initial checkout is a little slower than simply downloading a
<tt>tar.gz</tt> file; expect it to take 40 minutes or so if you have a
28.8K modem. The advantage of CVS doesn't show up until you want to update
the file set later on.

<P>Whenever you want to update to the latest CVS sources, <tt>cd</tt> into
the <tt>pgsql</tt> subdirectory, and issue
<PRE>
cvs -z3 update -d -P
</PRE>
This will fetch only the changes since the last time you updated. You can
update in just a couple of minutes, typically, even over a modem-speed
line.

<P>You can save yourself some typing by making a file <tt>.cvsrc</tt> in
your home directory that contains
<PRE>
cvs -z3
update -d -P
</PRE>
This supplies the <tt>-z3</tt> option to all cvs commands, and the
<tt>-d</tt> and <tt>-P</tt> options to cvs update. Then you just have to
say
<PRE>
cvs update
</PRE>
to update your files.

<P><strong>CAUTION:</strong> some versions of CVS have a bug that causes
all checked-out files to be stored world-writable in your directory. If
you see that this has happened, you can do something like
<PRE>
chmod -R go-w pgsql
</PRE>
to set the permissions properly. This bug is allegedly fixed in the latest
beta version of CVS, 1.9.28 ... but it may have other, less predictable
bugs.

<P>CVS can do a lot of other things, such as fetching prior revisions of
the PostgreSQL sources rather than the latest development version. For
more info consult the manual that comes with CVS, or see the online
documentation at
<A HREF="http://www.cyclic.com/">http://www.cyclic.com/</A>.

</body>
</html>
> Here is what I have been using. I use -z6 because that is the default
> gzip compression level. This is not an anon login, but I have been
> using it for over a month now. You will probably have to do an anon
> login first, so it stores a file in your home directory for later use.
>
> Also attached is my pgcommit script.

I am going to change to -z 3. Netscape may have found -z 3 to be faster
than -z 6 for one-time modem transfers.

--
Bruce Momjian                    |  830 Blythe Avenue
maillist@candle.pha.pa.us        |  Drexel Hill, Pennsylvania 19026
  + If your life is a hard drive,   |  (610) 353-9879(w)
  + Christ can be your backup.      |  (610) 853-3000(h)
I wrote:

> I'm starting to look forward to when the CVS source tree gets into a
> buildable state again! This could be a comfortable way of keeping
> up to date with the current sources. Here's hoping you find a good
> solution to the s_lock.h misunderstandings soon... :-)

A closer look shows that you've actually got it worked out, except that
the ugly hack for Sparcs running BSD has now broken completely. It used to
work when it was in s_lock.h, but in a separately compiled file, it
doesn't. (It relies on an entry point declared inside asm() within an
unused function that's explicitly declared static.)

I just replaced it with the simpler one for SparcLinux, and it's OK.

On the weird side, after I updated to the current sources, the backend
dies on me whenever I try to delete a database, whether from psql with
'drop database test' or from the command line with 'destroydb test'.

-tih
--
Popularity is the hallmark of mediocrity.  --Niles Crane, "Frasier"
Tom Lane <tgl@sss.pgh.pa.us> writes:

> The claim in the text that CVS 1.9.28 fixes the mode-666 problem is
> based on noting a promising-looking entry in that version's ChangeLog.
> I have not tested it for myself. Anyone want to try that version
> and see if it works?

I just did, and it did.

-tih
--
Popularity is the hallmark of mediocrity.  --Niles Crane, "Frasier"
Tom Ivar Helbekkmo wrote:
>
>> I'm starting to look forward to when the CVS source tree gets into a
>> buildable state again! This could be a comfortable way of keeping
>> up to date with the current sources. Here's hoping you find a good
>> solution to the s_lock.h misunderstandings soon... :-)
>
> A closer look shows that you've actually got it worked out, except
> that the ugly hack for Sparcs running BSD has now broken completely.
> It used to work when it was in s_lock.h, but in a separately compiled
> file, it doesn't. (It relies on an entry point declared inside asm()
> within an unused function that's explicitly declared static.)

Ooops, sorry about that.

I guess I should have added a ".globl tas" or whatever the native asm
phrase for globalizing an entry point is, and then it would have worked as
I intended.

> I just replaced it with the simpler one for SparcLinux, and it's OK.

This is a very nice way to do this. In general, if we can count on having
GCC we should use the GCC inlines.

Hmmmm, on that note, the current sources are factored:

    #if defined(linux)
    #if defined(x86)
        // x86 code
    #elif defined(sparc)
        // sparc code
    #endif
    #else
        // all non linux
        ...
    #endif

I think that the real commonality might better be expressed as:

    #if defined(gcc)
        // all gcc variants
    #else
        // no gcc
    #endif

As GCC has a unique (but common to gcc!) "asm" facility. This would allow
all the free unixes and many of the commercial ones to share the same asm
implementation, which should make it easier to get it right on all the
platforms.

Since I am planning another revision, does anyone object to this?

>> On the weird side, after I updated to the current sources, the backend
>> dies on me whenever I try to delete a database, whether from psql with
>> 'drop database test' or from the command line with 'destroydb test'.

Try making the 's_lock_test' target in
src/backend/storage/buffer/Makefile. It will let you be sure that
spinlocks are working.

Just btw, I have been doing some testing based on Bruce's reservations
about the inline vs call implementation of spinlocks, and will be posting
an updated set of patches and the results of my testing "real soon now".

Now that I have at least anoncvs access to the current tree, I think I can
do this with fewer iterations (crossing fingers...).

David Gould           dg@illustra.com          510.628.3783 or 510.305.9468
Informix Software (No, really)        300 Lakeside Drive, Oakland, CA 94612
"Of course, someone who knows more about this will correct me if I'm wrong,
 and someone who knows less will correct me if I'm right."
                                  --David Palmer (palmer@tybalt.caltech.edu)
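To make the proposed factoring concrete, here is a minimal sketch of what
a compiler-keyed s_lock.h could look like. The x86 body follows the usual
gcc xchgb idiom and the sparc body the ldstub idiom; the exact guard
macros, and the assumption that slock_t is a single byte, are illustrative
rather than taken from the actual tree:

    /* Sketch: key the TAS implementation on the compiler first, CPU second. */
    #if defined(__GNUC__)               /* all gcc variants share asm() syntax */

    #if defined(__i386__)
    static __inline__ int
    tas(volatile slock_t *lock)         /* assumes slock_t is one byte */
    {
        register slock_t _res = 1;

        /* atomically swap 1 into *lock; returns the previous value */
        __asm__ __volatile__("lock; xchgb %0,%1"
                             : "=q"(_res), "=m"(*lock)
                             : "0"(_res)
                             : "memory");
        return (int) _res;
    }
    #elif defined(__sparc__)
    static __inline__ int
    tas(volatile slock_t *lock)
    {
        register slock_t _res;

        /* ldstub atomically reads *lock and stores all ones into it */
        __asm__ __volatile__("ldstub [%1], %0"
                             : "=r"(_res)
                             : "r"(lock)
                             : "memory");
        return (int) _res;
    }
    #endif

    #else   /* !__GNUC__ */
    /* non-gcc compilers keep their existing OS/vendor-specific tas() */
    extern int tas(volatile slock_t *lock);
    #endif

In both cases tas() returns zero if the lock was free and nonzero if it
was already held, which is the contract the S_LOCK() spin loop expects.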
>> A closer look shows that you've actually got it worked out, except
>> that the ugly hack for Sparcs running BSD has now broken completely.
>> It used to work when it was in s_lock.h, but in a separately compiled
>> file, it doesn't. (It relies on an entry point declared inside asm()
>> within an unused function that's explicitly declared static.)
>
> Ooops, sorry about that.
>
> I guess I should have added a ".globl tas" or whatever the native asm
> phrase for globalizing an entry point is, and then it would have worked
> as I intended.

PPC/Linux has been broken too.

>> On the weird side, after I updated to the current sources, the backend
>> dies on me whenever I try to delete a database, whether from psql with
>> 'drop database test' or from the command line with 'destroydb test'.

I have made small changes to solve the global tas problem, and got exactly
the same experience.

> Try making the 's_lock_test' target in
> src/backend/storage/buffer/Makefile. It will let you be sure that
> spinlocks are working.

I have tested the s_lock_test and it seems to be working. However, I have
lots of failures with various SQLs, including 'drop database' and 'delete
from'. Have you succeeded in running regression tests? If so, what kind of
platforms are you using?

--
Tatsuo Ishii
t-ishii@sra.co.jp
> This is a very nice way to do this. In general, if we can count on
> having GCC we should use the GCC inlines.
>
> Hmmmm, on that note, the current sources are factored:
>
>     #if defined(linux)
>     #if defined(x86)
>         // x86 code
>     #elif defined(sparc)
>         // sparc code
>     #endif
>     #else
>         // all non linux
>         ...
>     #endif
>
> I think that the real commonality might better be expressed as:
>
>     #if defined(gcc)
>         // all gcc variants
>     #else
>         // no gcc
>     #endif
>
> As GCC has a unique (but common to gcc!) "asm" facility. This would
> allow all the free unixes and many of the commercial ones to share the
> same asm implementation, which should make it easier to get it right on
> all the platforms.
>
> Since I am planning another revision, does anyone object to this?

Sounds great.

>> On the weird side, after I updated to the current sources, the backend
>> dies on me whenever I try to delete a database, whether from psql with
>> 'drop database test' or from the command line with 'destroydb test'.
>
> Try making the 's_lock_test' target in
> src/backend/storage/buffer/Makefile. It will let you be sure that
> spinlocks are working.
>
> Just btw, I have been doing some testing based on Bruce's reservations
> about the inline vs call implementation of spinlocks, and will be
> posting an updated set of patches and the results of my testing "real
> soon now".
>
> Now that I have at least anoncvs access to the current tree, I think I
> can do this with fewer iterations (crossing fingers...).

--
Bruce Momjian                    |  830 Blythe Avenue
maillist@candle.pha.pa.us        |  Drexel Hill, Pennsylvania 19026
  + If your life is a hard drive,   |  (610) 353-9879(w)
  + Christ can be your backup.      |  (610) 853-3000(h)
dg@illustra.com (David Gould) writes:

> Try making the 's_lock_test' target in
> src/backend/storage/buffer/Makefile. It will let you be sure that
> spinlocks are working.

This is how it looks here (NetBSD/sparc 1.3, GCC 2.7.2.2), with the broken
TAS support replaced with that from SparcLinux:

| barsoom# gmake s_lock_test
| gcc -I../../../include -I../../../backend -I/usr/local/include -O2 -pipe -Wall -Wmissing-prototypes -I../.. -DS_LOCK_TEST=1 -g s_lock.c -o s_lock_test
| s_lock.c: In function `main':
| s_lock.c:313: warning: implicit declaration of function `select'
| ./s_lock_test
| S_LOCK_TEST: this will hang for a few minutes and then abort
|              with a 'stuck spinlock' message if S_LOCK()
|              and TAS() are working.
|
| FATAL: s_lock(00004168) at s_lock.c:324, stuck spinlock. Aborting.
|
| FATAL: s_lock(00004168) at s_lock.c:324, stuck spinlock. Aborting.
| gmake: *** [s_lock_test] Abort trap (core dumped)
| gmake: *** Deleting file `s_lock_test'
| barsoom#

...and it did take a couple of minutes (didn't time it, though), so I
guess it works, right?

-tih
--
Popularity is the hallmark of mediocrity.  --Niles Crane, "Frasier"
t-ishii@sra.co.jp writes:

> I have tested the s_lock_test and it seems to be working. However, I
> have lots of failures with various SQLs, including 'drop database' and
> 'delete from'.

I'm seeing the same thing Tatsuo-san does. This is on NetBSD/sparc 1.3,
GCC 2.7.2.2, running the very latest anonCVS-fetched PostgreSQL. Haven't
run regression tests -- assume they would fail horribly. The installation
was done from scratch, including an 'initdb' run.

Interestingly, a 'delete from' will kill the backend even if it has a
'where' clause that does not match anything whatsoever, but a 'drop table'
is fine, including non-empty tables. Brief testing of 'insert' and
'select' shows them working, including joins, as do transactions using
'begin', 'commit' and 'abort'.

Any idea where to start looking?

-tih
--
Popularity is the hallmark of mediocrity.  --Niles Crane, "Frasier"
t-ishii@sra.co.jp writes:

>>> A closer look shows that you've actually got it worked out, except
>>> that the ugly hack for Sparcs running BSD has now broken completely.
>>> It used to work when it was in s_lock.h, but in a separately compiled
>>> file, it doesn't. (It relies on an entry point declared inside asm()
>>> within an unused function that's explicitly declared static.)
>>
>> Ooops, sorry about that.
>>
>> I guess I should have added a ".globl tas" or whatever the native asm
>> phrase for globalizing an entry point is, and then it would have worked
>> as I intended.
>
> PPC/Linux has been broken too.

Please let me know what the problem was, even if it was just the 'global
tas' thing. I am trying to make sure this works on all platforms. Thanks.

>>> On the weird side, after I updated to the current sources, the backend
>>> dies on me whenever I try to delete a database, whether from psql with
>>> 'drop database test' or from the command line with 'destroydb test'.
>
> I have made small changes to solve the global tas problem, and got
> exactly the same experience.
>
>> Try making the 's_lock_test' target in
>> src/backend/storage/buffer/Makefile. It will let you be sure that
>> spinlocks are working.
>
> I have tested the s_lock_test and it seems to be working. However, I
> have lots of failures with various SQLs, including 'drop database' and
> 'delete from'. Have you succeeded in running regression tests? If so,
> what kind of platforms are you using?

I made this patch against 6.3.2 and ran regression successfully, on a
glibc Linux x86 system. I just rebuilt against the latest CVS (from
anoncvs) and see 27 tests that fail, many with dropconns. I looked a
little into the 'drop database' failure, and as far as I looked it does
not appear to be related to spinlocks.

David Gould           dg@illustra.com          510.628.3783 or 510.305.9468
Informix Software (No, really)        300 Lakeside Drive, Oakland, CA 94612
"Of course, someone who knows more about this will correct me if I'm wrong,
 and someone who knows less will correct me if I'm right."
                                  --David Palmer (palmer@tybalt.caltech.edu)
Re: [HACKERS] Death on deletion attempts (was: Current sources?)
From: dg@illustra.com (David Gould)
Tom Ivar Helbekkmo writes:
> t-ishii@sra.co.jp writes:
>
>> I have tested the s_lock_test and it seems to be working. However, I
>> have lots of failures with various SQLs, including 'drop database' and
>> 'delete from'.
>
> I'm seeing the same thing Tatsuo-san does. This is on NetBSD/sparc
> 1.3, GCC 2.7.2.2, running the very latest anonCVS-fetched PostgreSQL.
> Haven't run regression tests -- assume they would fail horribly. The
> installation was done from scratch, including an 'initdb' run.
>
> Interestingly, a 'delete from' will kill the backend even if it has a
> 'where' clause that does not match anything whatsoever, but a 'drop
> table' is fine, including non-empty tables. Brief testing of 'insert'
> and 'select' shows them working, including joins, as do transactions
> using 'begin', 'commit' and 'abort'.
>
> Any idea where to start looking?

Not at me for starters ;-)  I really think I _might_ be innocent here...

Btw, could you send me diffs for your bsd s_lock fix if you have not sent
them in to be applied yet. I would like to avoid the multiple unsynched
update problem that happened last time. Thanks.

I did spend all of five minutes with gdb looking at the "drop database"
failure. It looks like ExecutorStart() returned a null tupleDesc to
ProcessQueryDesc(), which then passed it to BeginCommand(), who did not
like it at all. So I would start looking at why ExecutorStart() failed.

The transcript is below:

Program received signal SIGSEGV, Segmentation fault.
BeginCommand (pname=0x0, operation=4, tupdesc=0x0, isIntoRel=0 '\000',
    isIntoPortal=0, tag=0x81359c0 "DELETE", dest=Remote) at dest.c:241
241             AttributeTupleForm *attrs = tupdesc->attrs;
(gdb) where
#0  BeginCommand (pname=0x0, operation=4, tupdesc=0x0, isIntoRel=0 '\000',
    isIntoPortal=0, tag=0x81359c0 "DELETE", dest=Remote) at dest.c:241
#1  0x80e24f9 in ProcessQueryDesc (queryDesc=0x81ba640) at pquery.c:293
#2  0x80e258e in ProcessQuery (parsetree=0x81b68f8, plan=0x81ba468, argv=0x0,
    typev=0x0, nargs=0, dest=Remote) at pquery.c:378
#3  0x80e13b0 in pg_exec_query_dest (
    query_string=0xbfffd5f8 "delete from pg_database where pg_database.oid = '18080'::oid",
    argv=0x0, typev=0x0, nargs=0, dest=Remote) at postgres.c:702
#4  0x80e12b2 in pg_exec_query (
    query_string=0xbfffd5f8 "delete from pg_database where pg_database.oid = '18080'::oid",
    argv=0x0, typev=0x0, nargs=0) at postgres.c:601
#5  0x8096596 in destroydb (dbname=0x81b2558 "regression") at dbcommands.c:136
#6  0x80e304c in ProcessUtility (parsetree=0x81b2578, dest=Remote)
    at utility.c:570
#7  0x80e1350 in pg_exec_query_dest (
    query_string=0xbfffd928 "drop database regression;", argv=0x0, typev=0x0,
    nargs=0, dest=Remote) at postgres.c:656
#8  0x80e12b2 in pg_exec_query (
    query_string=0xbfffd928 "drop database regression;", argv=0x0, typev=0x0,
    nargs=0) at postgres.c:601
#9  0x80e2001 in PostgresMain (argc=9, argv=0xbffff960) at postgres.c:1417
#10 0x80a7707 in main (argc=9, argv=0xbffff960) at main.c:105
(gdb) l
236                      bool isIntoPortal,
237                      char *tag,
238                      CommandDest dest)
239     {
240             PortalEntry *entry;
241             AttributeTupleForm *attrs = tupdesc->attrs;
242             int                     natts = tupdesc->natts;
243             int                     i;
244             char       *p;
245
(gdb) up
#1  0x80e24f9 in ProcessQueryDesc (queryDesc=0x81ba640) at pquery.c:293
293             BeginCommand(NULL,
(gdb) l 280,300
281             /* ----------------
282              *      call ExecStart to prepare the plan for execution
283              * ----------------
284              */
285             attinfo = ExecutorStart(queryDesc, state);
286
287             /* ----------------
288              *      report the query's result type information
289              *      back to the front end or to whatever destination
290              *      we're dealing with.
291              * ----------------
292              */
293             BeginCommand(NULL,
294                          operation,
295                          attinfo,
296                          isRetrieveIntoRelation,
297                          isRetrieveIntoPortal,
298                          tag,
299                          dest);
(gdb) p attinfo
$1 = (struct tupleDesc *) 0x0

-dg

David Gould           dg@illustra.com          510.628.3783 or 510.305.9468
Informix Software (No, really)        300 Lakeside Drive, Oakland, CA 94612
"Of course, someone who knows more about this will correct me if I'm wrong,
 and someone who knows less will correct me if I'm right."
                                  --David Palmer (palmer@tybalt.caltech.edu)
>> PPC/Linux has been broken too.
>
> Please let me know what the problem was, even if it was just the 'global
> tas' thing. I am trying to make sure this works on all platforms. Thanks.

Here are patches for s_lock.c (against the May 23 snapshot).

----------------------------------------------------------
*** s_lock.c.orig	Mon May 25 18:08:20 1998
--- s_lock.c	Mon May 25 18:08:57 1998
***************
*** 151,161 ****

  #if defined(PPC)
! static int
! tas_dummy()
  {
  	__asm__("	\n\
- tas:			\n\
  	lwarx	5,0,3	\n\
  	cmpwi	5,0	\n\
  	bne	fail	\n\
--- 151,160 ----

  #if defined(PPC)
! int
! tas(slock_t *lock)
  {
  	__asm__("	\n\
  	lwarx	5,0,3	\n\
  	cmpwi	5,0	\n\
  	bne	fail	\n\
----------------------------------------------------------

>> I have tested the s_lock_test and it seems to be working. However, I
>> have lots of failures with various SQLs, including 'drop database' and
>> 'delete from'. Have you succeeded in running regression tests? If so,
>> what kind of platforms are you using?
>
> I made this patch against 6.3.2 and ran regression successfully, on a
> glibc Linux x86 system. I just rebuilt against the latest CVS (from
> anoncvs) and see 27 tests that fail, many with dropconns. I looked a
> little into the 'drop database' failure, and as far as I looked it does
> not appear to be related to spinlocks.

I see. BTW, I have tested on a FreeBSD box and found exactly the same
thing occurs.

--
Tatsuo Ishii
t-ishii@sra.co.jp
tih wrote:
>
>> I'm starting to look forward to when the CVS source tree gets into a
>> buildable state again! This could be a comfortable way of keeping
>> up to date with the current sources. Here's hoping you find a good
>> solution to the s_lock.h misunderstandings soon... :-)
...
> On the weird side, after I updated to the current sources, the backend
> dies on me whenever I try to delete a database, whether from psql with
> 'drop database test' or from the command line with 'destroydb test'.
>
> -tih

Ok, I think I have found the source of the dropconns on "delete" queries
that are causing the current problem. The change listed below sets tupType
to the junkFilter's (whatever that is) jf_cleanTupType unconditionally.
This makes a SEGV later, as the tupType ends up NULL.

Here is what CVS says:
---------------
revision 1.46
date: 1998/05/21 03:53:50;  author: scrappy;  state: Exp;  lines: +6 -4

From: David Hartwig <daveh@insightdist.com>

Here is a patch to remove the requirement that ORDER/GROUP BY clause
identifiers be included in the target list.
--------------

I do not believe that this could ever have passed regression. Do we have
the whole patch to back out, or do we need to just "fix what we have now"?

Also, perhaps we need to be more selective about checkins?

Here is the source containing the problem:

src/backend/executor/execMain.c in InitPlan() at about line 515

	/* ----------------
	 *	  now that we have the target list, initialize the junk filter
	 *	  if this is a REPLACE or a DELETE query.
	 *	  We also init the junk filter if this is an append query
	 *	  (there might be some rule lock info there...)
	 *	  NOTE: in the future we might want to initialize the junk
	 *	  filter for all queries.
	 * ----------------
	 *	  SELECT added by daveh@insightdist.com 5/20/98 to allow
	 *	  ORDER/GROUP BY have an identifier missing from the target.
	 */
	if (operation == CMD_UPDATE || operation == CMD_DELETE ||
		operation == CMD_INSERT || operation == CMD_SELECT)
	{
		JunkFilter *j = (JunkFilter *) ExecInitJunkFilter(targetList);
		estate->es_junkFilter = j;
>>>>	tupType = j->jf_cleanTupType;	/* Added by daveh@insightdist.com 5/20/98 */
	}
	else
		estate->es_junkFilter = NULL;

Here is my debug transcript for "drop database regression":

(gdb) where
#0  InitPlan (operation=CMD_DELETE, parseTree=0x81b68f8, plan=0x81ba468,
    estate=0x81ba668) at execMain.c:527
#1  0x8098017 in ExecutorStart (queryDesc=0x81ba640, estate=0x81ba668)
    at execMain.c:128
#2  0x80e24d9 in ProcessQueryDesc (queryDesc=0x81ba640) at pquery.c:285
#3  0x80e258e in ProcessQuery (parsetree=0x81b68f8, plan=0x81ba468, argv=0x0,
    typev=0x0, nargs=0, dest=Remote) at pquery.c:378
#4  0x80e13b0 in pg_exec_query_dest (
    query_string=0xbfffd5f8 "delete from pg_database where pg_database.oid = '18080'::oid",
    argv=0x0, typev=0x0, nargs=0, dest=Remote) at postgres.c:702
#5  0x80e12b2 in pg_exec_query (
    query_string=0xbfffd5f8 "delete from pg_database where pg_database.oid = '18080'::oid",
    argv=0x0, typev=0x0, nargs=0) at postgres.c:601
#6  0x8096596 in destroydb (dbname=0x81b2558 "regression") at dbcommands.c:136
#7  0x80e304c in ProcessUtility (parsetree=0x81b2578, dest=Remote)
    at utility.c:570
#8  0x80e1350 in pg_exec_query_dest (
    query_string=0xbfffd928 "drop database regression;", argv=0x0, typev=0x0,
    nargs=0, dest=Remote) at postgres.c:656
#9  0x80e12b2 in pg_exec_query (
    query_string=0xbfffd928 "drop database regression;", argv=0x0, typev=0x0,
    nargs=0) at postgres.c:601
#10 0x80e2001 in PostgresMain (argc=9, argv=0xbffff960) at postgres.c:1417
#11 0x80a7707 in main (argc=9, argv=0xbffff960) at main.c:105

530             JunkFilter *j = (JunkFilter *) ExecInitJunkFilter(targetList);
531             estate->es_junkFilter = j;
(gdb) p j
$7 = (JunkFilter *) 0x81bb2c8
(gdb) p *j
$8 = {type = T_JunkFilter, jf_targetList = 0x81ba600, jf_length = 1,
  jf_tupType = 0x81bb238, jf_cleanTargetList = 0x0, jf_cleanLength = 0,
  jf_cleanTupType = 0x0, jf_cleanMap = 0x0}
(gdb) n
533             tupType = j->jf_cleanTupType;   /* Added by daveh@insightdist.com 5/20/98 */
(gdb) p tupType
$9 = (struct tupleDesc *) 0x81baf18
(gdb) n
534     }
(gdb) p tupType
$10 = (struct tupleDesc *) 0x0
(gdb) n
542             intoRelationDesc = (Relation) NULL;
(gdb) n
544             if (operation == CMD_SELECT)
(gdb) n
588             estate->es_into_relation_descriptor = intoRelationDesc;
(gdb) n
600             return tupType;

Returns NULL to ExecutorStart, who then pukes.

-dg

David Gould           dg@illustra.com  510.536.1443 (h)  510.628.3783 (w)
or davidg@dnai.com                     510.305.9468 (Bat phone)
-- A child of five could understand this! Fetch me a child of five.
On Mon, 25 May 1998, David Gould wrote:

> Ok, I think I have found the source of the dropconns on "delete" queries
> that are causing the current problem. The change listed below sets
> tupType to the junkFilter's (whatever that is) jf_cleanTupType
> unconditionally. This makes a SEGV later, as the tupType ends up NULL.
>
> Here is what CVS says:
> ---------------
> revision 1.46
> date: 1998/05/21 03:53:50;  author: scrappy;  state: Exp;  lines: +6 -4
>
> From: David Hartwig <daveh@insightdist.com>
>
> Here is a patch to remove the requirement that ORDER/GROUP BY clause
> identifiers be included in the target list.
> --------------
>
> I do not believe that this could ever have passed regression. Do we have
> the whole patch to back out, or do we need to just "fix what we have now"?

Fix what we have now... if it's possible...

> Also, perhaps we need to be more selective about checkins?

We're in an alpha/development phase here... my general opinion on that is
that I'd rather someone throw in code that is 95% good, with 5% error,
than nobody making a start anywhere...

Until a Beta freeze, *never* expect the server to actually work... if it
does, bonus.

Marc G. Fournier
Systems Administrator @ hub.org
primary: scrappy@hub.org          secondary: scrappy@{freebsd|postgresql}.org
> --------------
>
> I do not believe that this could ever have passed regression. Do we have
> the whole patch to back out, or do we need to just "fix what we have now"?
>
> Also, perhaps we need to be more selective about checkins?

Not sure. Marc and I reviewed it, and it looked very good. In fact, I
would like to see more of such patches, of course, without the destroydb
problem, but many patches have little quirks the author could not have
anticipated.

> 	{
> 		JunkFilter *j = (JunkFilter *) ExecInitJunkFilter(targetList);
> 		estate->es_junkFilter = j;
> >>>>	tupType = j->jf_cleanTupType;	/* Added by daveh@insightdist.com 5/20/98 */
> 	}
> 	else
> 		estate->es_junkFilter = NULL;
>
> Here is my debug transcript for "drop database regression"

Here is the original patch. I got it with the command:

	$ pgcvs diff -c -D'05/21/1998 03:00:00 GMT' -D'05/21/1998 04:00:00 GMT'

(pgcvs on my machines does postgresql cvs for me.)

---------------------------------------------------------------------------

Index: backend/executor/execMain.c
===================================================================
RCS file: /usr/local/cvsroot/pgsql/src/backend/executor/execMain.c,v
retrieving revision 1.45
retrieving revision 1.46
diff -c -r1.45 -r1.46
*** execMain.c	1998/02/27 08:43:52	1.45
--- execMain.c	1998/05/21 03:53:50	1.46
***************
*** 26,32 ****
   *
   *
   * IDENTIFICATION
! *	  $Header: /usr/local/cvsroot/pgsql/src/backend/executor/execMain.c,v 1.45 1998/02/27 08:43:52 vadim Exp $
   *
   *-------------------------------------------------------------------------
   */
--- 26,32 ----
   *
   *
   * IDENTIFICATION
! *	  $Header: /usr/local/cvsroot/pgsql/src/backend/executor/execMain.c,v 1.46 1998/05/21 03:53:50 scrappy Exp $
   *
   *-------------------------------------------------------------------------
   */
***************
*** 521,534 ****
  	 * NOTE: in the future we might want to initialize the junk
  	 * filter for all queries.
  	 * ----------------
  	 */
  	if (operation == CMD_UPDATE || operation == CMD_DELETE ||
! 		operation == CMD_INSERT)
  	{
- 		JunkFilter *j = (JunkFilter *) ExecInitJunkFilter(targetList);
- 		estate->es_junkFilter = j;
  	}
  	else
  		estate->es_junkFilter = NULL;
--- 521,536 ----
  	 * NOTE: in the future we might want to initialize the junk
  	 * filter for all queries.
  	 * ----------------
+ 	 * SELECT added by daveh@insightdist.com 5/20/98 to allow
+ 	 * ORDER/GROUP BY have an identifier missing from the target.
  	 */
  	if (operation == CMD_UPDATE || operation == CMD_DELETE ||
! 		operation == CMD_INSERT || operation == CMD_SELECT)
  	{
  		JunkFilter *j = (JunkFilter *) ExecInitJunkFilter(targetList);
  		estate->es_junkFilter = j;
+
+ 		tupType = j->jf_cleanTupType;	/* Added by daveh@insightdist.com 5/20/98 */
  	}
  	else
  		estate->es_junkFilter = NULL;
Index: backend/parser/parse_clause.c
===================================================================
RCS file: /usr/local/cvsroot/pgsql/src/backend/parser/parse_clause.c,v
retrieving revision 1.15
retrieving revision 1.16
diff -c -r1.15 -r1.16
*** parse_clause.c	1998/03/31 04:43:53	1.15
--- parse_clause.c	1998/05/21 03:53:50	1.16
***************
*** 7,13 ****
   *
   *
   * IDENTIFICATION
! *	  $Header: /usr/local/cvsroot/pgsql/src/backend/parser/parse_clause.c,v 1.15 1998/03/31 04:43:53 momjian Exp $
   *
   *-------------------------------------------------------------------------
   */
--- 7,13 ----
   *
   *
   * IDENTIFICATION
! *	  $Header: /usr/local/cvsroot/pgsql/src/backend/parser/parse_clause.c,v 1.16 1998/05/21 03:53:50 scrappy Exp $
   *
   *-------------------------------------------------------------------------
   */
***************
*** 182,187 ****
--- 182,218 ----
  			}
  		}
  	}
+
+ 	/* BEGIN add missing target entry hack.
+ 	 *
+ 	 * Prior to this hack, this function returned NIL if no target_result.
+ 	 * Thus, ORDER/GROUP BY required the attributes be in the target list.
+ 	 * Now it constructs a new target entry which is appended to the end of
+ 	 * the target list. This target is set to be resjunk = TRUE so that
+ 	 * it will not be projected into the final tuple.
+ 	 * daveh@insightdist.com 5/20/98
+ 	 */
+ 	if ( ! target_result) {
+ 		List	   *p_target = tlist;
+ 		Ident	   *missingTargetId = (Ident *) makeNode(Ident);
+ 		TargetEntry *tent = makeNode(TargetEntry);
+
+ 		/* Fill in the constructed Ident node */
+ 		missingTargetId->type = T_Ident;
+ 		missingTargetId->name = palloc(strlen(sortgroupby->name) + 1);
+ 		strcpy(missingTargetId->name, sortgroupby->name);
+
+ 		transformTargetId(pstate, missingTargetId, tent, missingTargetId->name, TRUE);
+
+ 		/* Add to the end of the target list */
+ 		while (lnext(p_target) != NIL) {
+ 			p_target = lnext(p_target);
+ 		}
+ 		lnext(p_target) = lcons(tent, NIL);
+ 		target_result = tent;
+ 	}
+ 	/* END add missing target entry hack. */
+
  	return target_result;
  }
***************
*** 203,212 ****
  		Resdom	   *resdom;

  		restarget = find_targetlist_entry(pstate, lfirst(grouplist), targetlist);
-
- 		if (restarget == NULL)
- 			elog(ERROR, "The field being grouped by must appear in the target list");
-
  		grpcl->entry = restarget;
  		resdom = restarget->resdom;
  		grpcl->grpOpoid = oprid(oper("<",
--- 234,239 ----
***************
*** 262,270 ****

  		restarget = find_targetlist_entry(pstate, sortby, targetlist);

- 		if (restarget == NULL)
- 			elog(ERROR, "The field being ordered by must appear in the target list");
-
  		sortcl->resdom = resdom = restarget->resdom;
  		sortcl->opoid = oprid(oper(sortby->useOp,
  								   resdom->restype,
--- 289,294 ----
Index: backend/parser/parse_target.c
===================================================================
RCS file: /usr/local/cvsroot/pgsql/src/backend/parser/parse_target.c,v
retrieving revision 1.12
retrieving revision 1.13
diff -c -r1.12 -r1.13
*** parse_target.c	1998/05/09 23:29:54	1.12
--- parse_target.c	1998/05/21 03:53:51	1.13
***************
*** 7,13 ****
   *
   *
   * IDENTIFICATION
! *	  $Header: /usr/local/cvsroot/pgsql/src/backend/parser/parse_target.c,v 1.12 1998/05/09 23:29:54 thomas Exp $
   *
   *-------------------------------------------------------------------------
   */
--- 7,13 ----
   *
   *
   * IDENTIFICATION
! *	  $Header: /usr/local/cvsroot/pgsql/src/backend/parser/parse_target.c,v 1.13 1998/05/21 03:53:51 scrappy Exp $
   *
   *-------------------------------------------------------------------------
   */
***************
*** 52,57 ****
--- 52,102 ----
  			 Oid type_id,
  			 Oid attrtype);

+
+ /*
+  * transformTargetId - transforms an Ident Node to a Target Entry
+  * Created this a function to allow the ORDER/GROUP BY clause be able
+  * to construct a TargetEntry from an Ident.
+  *
+  * resjunk = TRUE will hide the target entry in the final result tuple.
+  * daveh@insightdist.com 5/20/98
+  */
+ void
+ transformTargetId(ParseState *pstate,
+ 				  Ident *ident,
+ 				  TargetEntry *tent,
+ 				  char *resname,
+ 				  int16 resjunk)
+ {
+ 	Node	   *expr;
+ 	Oid			type_id;
+ 	int16		type_mod;
+
+ 	/*
+ 	 * here we want to look for column names only, not
+ 	 * relation names (even though they can be stored in
+ 	 * Ident nodes, too)
+ 	 */
+ 	expr = transformIdent(pstate, (Node *) ident, EXPR_COLUMN_FIRST);
+ 	type_id = exprType(expr);
+ 	if (nodeTag(expr) == T_Var)
+ 		type_mod = ((Var *) expr)->vartypmod;
+ 	else
+ 		type_mod = -1;
+ 	tent->resdom = makeResdom((AttrNumber) pstate->p_last_resno++,
+ 							  (Oid) type_id,
+ 							  type_mod,
+ 							  resname,
+ 							  (Index) 0,
+ 							  (Oid) 0,
+ 							  resjunk);
+
+ 	tent->expr = expr;
+ 	return;
+ }
+
+
  /*
   * transformTargetList -
   *	  turns a list of ResTarget's into a list of TargetEntry's
***************
*** 71,106 ****
  		{
  			case T_Ident:
  				{
- 					Node	   *expr;
- 					Oid			type_id;
- 					int16		type_mod;
  					char	   *identname;
  					char	   *resname;

  					identname = ((Ident *) res->val)->name;
  					handleTargetColname(pstate, &res->name, NULL, identname);
-
- 					/*
- 					 * here we want to look for column names only, not
- 					 * relation names (even though they can be stored in
- 					 * Ident nodes, too)
- 					 */
- 					expr = transformIdent(pstate, (Node *) res->val, EXPR_COLUMN_FIRST);
- 					type_id = exprType(expr);
- 					if (nodeTag(expr) == T_Var)
- 						type_mod = ((Var *) expr)->vartypmod;
- 					else
- 						type_mod = -1;
  					resname = (res->name) ? res->name : identname;
! 					tent->resdom = makeResdom((AttrNumber) pstate->p_last_resno++,
! 											  (Oid) type_id,
! 											  type_mod,
! 											  resname,
! 											  (Index) 0,
! 											  (Oid) 0,
! 											  0);
!
! 					tent->expr = expr;
  					break;
  				}
  			case T_ParamNo:
--- 116,128 ----
  		{
  			case T_Ident:
  				{
  					char	   *identname;
  					char	   *resname;

  					identname = ((Ident *) res->val)->name;
  					handleTargetColname(pstate, &res->name, NULL, identname);
  					resname = (res->name) ? res->name : identname;
! 					transformTargetId(pstate, (Ident *) res->val, tent, resname, FALSE);
  					break;
  				}
  			case T_ParamNo:
Index: include/parser/parse_target.h
===================================================================
RCS file: /usr/local/cvsroot/pgsql/src/include/parser/parse_target.h,v
retrieving revision 1.4
retrieving revision 1.5
diff -c -r1.4 -r1.5
*** parse_target.h	1998/02/26 04:42:49	1.4
--- parse_target.h	1998/05/21 03:53:51	1.5
***************
*** 6,12 ****
   *
   * Copyright (c) 1994, Regents of the University of California
   *
! * $Id: parse_target.h,v 1.4 1998/02/26 04:42:49 momjian Exp $
   *
   *-------------------------------------------------------------------------
   */
--- 6,12 ----
   *
   * Copyright (c) 1994, Regents of the University of California
   *
! * $Id: parse_target.h,v 1.5 1998/05/21 03:53:51 scrappy Exp $
   *
   *-------------------------------------------------------------------------
   */
***************
*** 24,28 ****
--- 24,30 ----

  extern List *transformTargetList(ParseState *pstate, List *targetlist);
  extern List *makeTargetNames(ParseState *pstate, List *cols);
+ extern void transformTargetId(ParseState *pstate, Ident *ident,
+ 				TargetEntry *tent, char *resname, int16 resjunk);

  #endif							/* PARSE_TARGET_H */

--
Bruce Momjian                    |  830 Blythe Avenue
maillist@candle.pha.pa.us        |  Drexel Hill, Pennsylvania 19026
  + If your life is a hard drive,   |  (610) 353-9879(w)
  + Christ can be your backup.      |  (610) 853-3000(h)
>>> I do not believe that this could ever have passed regression. Do we
>>> have the whole patch to back out, or do we need to just "fix what we
>>> have now"?
>>>
>>> Also, perhaps we need to be more selective about checkins?
>>
>> Not sure. Marc and I reviewed it, and it looked very good. In fact, I
>> would like to see more of such patches, of course, without the
>> destroydb problem, but many patches have little quirks the author could
>> not have anticipated.
>
> Here is the original patch. I got it with the command:

I have just removed the patch using patch -R and confirmed that "drop
table" and "delete from" work again. Regression tests also look good,
except char/varchar/strings.

Now I can start to create patches for the snapshot...

--
Tatsuo Ishii
t-ishii@sra.co.jp
On Tue, 26 May 1998 t-ishii@sra.co.jp wrote:

> I have just removed the patch using patch -R and confirmed that "drop
> table" and "delete from" work again. Regression tests also look good,
> except char/varchar/strings.

Wait... that doesn't solve the problem... David requested that this patch
get added in order to improve the ODBC driver, which I feel is important.
Can we please investigate *why* his patch broke things, as opposed to
eliminating it?

DavidH?

Marc G. Fournier
Systems Administrator @ hub.org
primary: scrappy@hub.org          secondary: scrappy@{freebsd|postgresql}.org
>
> On Tue, 26 May 1998 t-ishii@sra.co.jp wrote:
>
> > I have just removed the patch using patch -R and confirmed that "drop
> > table" and "delete from" work again. Regression tests also look good,
> > except char/varchar/strings.
>
> 	Wait...that doesn't solve the problem...David requested that this
> patch get added in order to improve the ODBC driver, which I feel is
> important.  Can we please investigate *why* his patch broke things, as
> opposed to eliminating it?
>
> 	DavidH?

Yes, absolutely.  Let's give David some time to investigate it.

--
Bruce Momjian                          |  830 Blythe Avenue
maillist@candle.pha.pa.us              |  Drexel Hill, Pennsylvania 19026
  +  If your life is a hard drive,     |  (610) 353-9879(w)
  +  Christ can be your backup.        |  (610) 853-3000(h)
>On Tue, 26 May 1998 t-ishii@sra.co.jp wrote:
>
>> I have just removed the patch using patch -R and confirmed that "drop
>> table" and "delete from" work again. Regression tests also look good,
>> except char/varchar/strings.
>
>	Wait...that doesn't solve the problem...David requested that this
>patch get added in order to improve the ODBC driver, which I feel is
>important.  Can we please investigate *why* his patch broke things, as
>opposed to eliminating it?

I wouldn't say that the patch should be removed from the CVS. I just
need my *private* working snapshot to make sure my patches would not
break anything before submitting.

I wish I could solve the problem by myself, but after spending a few
hours debugging last Sunday, I have not found a fix for it yet. Sorry.
--
Tatsuo Ishii
t-ishii@sra.co.jp
> I wouldn't say that the patch should be removed from the CVS. I just
> need my *private* working snapshot to make sure my patches would not
> break anything before submitting.
>
> I wish I could solve the problem by myself, but after spending a few
> hours debugging last Sunday, I have not found a fix for it yet. Sorry.

OK, here is a fix I have just applied to the tree.  It appears to meet
the intent of the original patch.  David H. will have to comment on its
accuracy.

---------------------------------------------------------------------------

Index: backend/executor/execMain.c
===================================================================
RCS file: /usr/local/cvsroot/pgsql/src/backend/executor/execMain.c,v
retrieving revision 1.46
diff -c -r1.46 execMain.c
*** execMain.c	1998/05/21 03:53:50	1.46
--- execMain.c	1998/05/26 03:33:22
***************
*** 530,536 ****
          JunkFilter *j = (JunkFilter *) ExecInitJunkFilter(targetList);
          estate->es_junkFilter = j;
  
!         tupType = j->jf_cleanTupType;  /* Added by daveh@insightdist.com 5/20/98 */
      }
      else
          estate->es_junkFilter = NULL;
--- 530,537 ----
          JunkFilter *j = (JunkFilter *) ExecInitJunkFilter(targetList);
          estate->es_junkFilter = j;
  
!         if (operation == CMD_SELECT)
!             tupType = j->jf_cleanTupType;
      }
      else
          estate->es_junkFilter = NULL;

--
Bruce Momjian                          |  830 Blythe Avenue
maillist@candle.pha.pa.us              |  Drexel Hill, Pennsylvania 19026
  +  If your life is a hard drive,     |  (610) 353-9879(w)
  +  Christ can be your backup.        |  (610) 853-3000(h)
>
> >> I do not believe that this could ever have passed regression.  Do we have
> >> the whole patch to back out, or do we need to just "fix what we have now"?
> >>
> >> Also, perhaps we need to be more selective about checkins?
> >
> >Not sure.  Marc and I reviewed it, and it looked very good.  In fact, I
> >would like to see more of such patches, of course, without the destroydb
> >problem, but many patches have little quirks the author could not have
> >anticipated.
> >
> >>     {
> >>         JunkFilter *j = (JunkFilter *) ExecInitJunkFilter(targetList);
> >>         estate->es_junkFilter = j;
> >>
> >>>>         tupType = j->jf_cleanTupType;  /* Added by daveh@insightdist.com 5/20/98 */
> >>     }
> >>     else
> >>         estate->es_junkFilter = NULL;
> >>
> >> Here is my debug transcript for "drop database regression"
> >
> >Here is the original patch.  I got it with the command:
>
> I have just removed the patch using patch -R and confirmed that "drop
> table" and "delete from" work again. Regression tests also look good,
> except char/varchar/strings.
>
> Now I can start to create patches for the snapshot...

Maybe I should elaborate a bit on what I meant by "more selective about
checkins".

First, I agree that we should see more patches like this.  Patches in the
parser, optimiser, planner, etc are hard.  It takes a fair amount of
effort to understand this stuff even to the point of being able to
attempt a patch in this area.  So I applaud anyone who makes that kind of
investment, and wish that they continue their efforts.

Second, patches in the parser, optimiser, planner, etc are hard.  It is
incredibly easy to do something that works in a given case but creates a
problem for some other statement.  And these problems can be very obscure
and hard to debug; this one was relatively easy.

So, how do we balance the goals of "encouraging people to make the effort
to do a hard patch" and "keeping the codeline stable enough to work from"?

Where I work we have had good success with the following:

 - every night a from-scratch build and regression test is run.

 - if the build and regression is good, then a snapshot is made into a
   "last known good" location.  This lets someone find a good "recent"
   source tree even if there is a problem that doesn't get solved for a
   few days.

 - a report is generated that lists the results of the build and
   regression, AND a list of the checkins since the "last known good"
   snapshot.  This lets someone who just submitted a patch see: a) that
   the patch was applied, b) whether it created any problems.  It also
   helps identify conflicting patches etc.

I believe most people submitting patches want them to work, and will be
very good about monitoring the results of their submissions and fixing
any problems, IF the results are visible.  Right now they aren't really.

The other tool I believe to be very effective in improving code quality
is code review.  My experience is that review is both more effective and
cheaper than testing in finding problems.  To that end, I suggest we
create a group of volunteer reviewers, possibly with their own mailing
list.  The idea is not to impose a bureaucratic barrier to submitting
patches, but rather to allow people who have an idea to get some
assistance on whether a given change will fit in and work well.  I see
some people on this list using the list for this purpose now; I merely
propose to normalise this so that everyone knows that this resource is
available to them, and, given an actual patch (rather than mere
discussion), to be able to identify specific persons to do a review.
I don't have strong preferences for the form of this, so ideas are
welcome.  My initial concept supposes a group of maybe 5 to 10 people
with some experience in the code who would agree to review any patches
within say two days of submission and respond directly to the submitter.
Perhaps somehow the mailing list could be contrived to randomly pick (or
allow reviewers to pick) so that say two reviewers had a look at each
patch.  Also, I think it is important to identify any reviewers in the
changelog or checkin comments so that if there is a problem and the
author is unavailable, there are at least the reviewers with knowledge
of what the patch was about.

I would be happy to volunteer for a term on the ("possible proposed
mythical") review team.

I would be even happier to know that next time I had a tricky patch that
some other heads would take the time to help me see what I had overlooked.

Comments?

-dg

David Gould            dg@illustra.com           510.628.3783 or 510.305.9468
Informix Software  (No, really)         300 Lakeside Drive  Oakland, CA 94612
"you can no more argue against quality than you can argue against world
 peace or free video games." -- p.j. plauger
On Mon, 25 May 1998, David Gould wrote:

> Where I work we have had good success with the following:
>
>  - every night a from-scratch build and regression test is run.
>
>  - if the build and regression is good, then a snapshot is made into a
>    "last known good" location.  This lets someone find a good "recent"
>    source tree even if there is a problem that doesn't get solved for a
>    few days.

	Actually, ummm...I've been considering removing the snapshots
altogether, now that anoncvs works.  The only reason I was doing it
before was that not everyone had access to CVSup for their particular
platform...the snapshots are a terrible waste of resources, especially
considering that you have to download the whole ~3.5Meg (and growing)
.tar.gz file each time...whereas anoncvs/CVSup only updates those files
requiring it...

	IMHO, the snapshots are only important during the beta freeze
period...

> The other tool I believe to be very effective in improving code quality
> is code review.  My experience is that review is both more effective and
> cheaper than testing in finding problems.  To that end, I suggest we
> create a group of volunteer reviewers, possibly with their own mailing
> list.

	That's kinda what pgsql-patches is/was meant for...some ppl (I
won't mention a name though) seem to get nervous if a patch isn't
applied within a short period of time after being posted, but if we were
to adopt a policy of leaving a patch unapplied for X days after posting,
everyone would get a chance to see/comment on it...

> I don't have strong preferences for the form of this, so ideas are
> welcome.  My initial concept supposes a group of maybe 5 to 10 people
> with some experience in the code who would agree to review any patches
> within say two days of submission and respond directly to the submitter.
> Perhaps somehow the mailing list could be contrived to randomly pick (or
> allow reviewers to pick) so that say two reviewers had a look at each
> patch.  Also, I think it is important to identify any reviewers in the
> changelog or checkin comments so that if there is a problem and the
> author is unavailable, there are at least the reviewers with knowledge
> of what the patch was about.

	This sounds reasonable to me...this is something that FreeBSD
does right now...one of these days, I have to sit back and decode how
they have their CVS setup.  They have some things, as far as logs are
concerned, that are slightly cleaner than I have currently set up :)

> I would be even happier to know that next time I had a tricky patch that
> some other heads would take the time to help me see what I had overlooked.

	You always have that...:)
The Hermit Hacker <scrappy@hub.org> writes:
> On Mon, 25 May 1998, David Gould wrote:
>> - if the build and regression is good, then a snapshot is made into a
>> "last known good" location.

> Actually, ummm...I've been considering removing the snapshots
> altogether, now that anoncvs works.

It may be worth pointing out that cvs allows anyone to retrieve *any*
prior state of the code.  This opens up a great number of options that
a simple periodic snapshot does not.  I think it's worth continuing the
snapshot series for those who don't want to install cvs for some reason,
but that probably won't be the primary access method anymore.

The thing that I thought was worth adopting from David's list was the
nightly automatic regression test run.  Assuming that there are cycles
to spare on the server, posting the results of a build and regression
test attempt would help provide a reality check for everyone.  (It'd be
too bulky to send to the mailing lists, and not worth archiving anyway;
perhaps the output could be put up as a web page at postgresql.org?)

This sort of fiasco could be minimized if everyone got in the habit of
running regression tests before submitting their patches.  Here I have
to disagree with Marc's opinion that it's not really important whether
pre-alpha code works.  If the tree is currently broken, that prevents
everyone else from running regression tests on what *they* are doing,
and consequently encourages the submission of even more code that hasn't
been adequately tested.  I would like to see a policy that you don't
check in code until it passes the regression tests for you locally.  We
will still have failures because of (a) portability problems --- ie it
works at your site, but not for someone else; and (b) unforeseen
interactions between patches submitted at about the same time.  But
hopefully those will be relatively easy to track down if the normal
state is that things mostly work.

We might also consider making more use of cvs' ability to track multiple
development branches.  If several people need to cooperate on a large
change, they could work together in a cvs branch until their mods are
finished, allowing them to share development files without breaking the
main branch for others.

			regards, tom lane
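For anyone who has not used these cvs features, both are one-liners once
the anoncvs login has been done.  The date and branch tag below are
invented purely for illustration; only the repository path is real:

    # check out the tree exactly as it stood on a given date
    cvs -d :pserver:anoncvs@postgresql.org:/usr/local/cvsroot co -D "1998-05-20" pgsql

    # cut a side branch for shared work (needs write access), then check it out
    cvs rtag -b junkfilter-rework pgsql
    cvs co -r junkfilter-rework pgsql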
Yikes!

I see my little patch has stirred things up a bit.  Bruce, your addition
does meet the needs of the intent of my patch.  I tried it here with
positive results.  I hope you will keep the whole patch.

For what it is worth, I did run the regression test.  But I did not get
any failures that appeared to be a result of the patch.  There were,
however, many failures before and after my patch.  Most were due to AIX
system messages, but there were many that I could not explain.  I will
gladly report them if anyone is interested.

I have to admit that I was nervous about submitting my first patch into
an area of code as important as this one.  I would have liked to start
off with a new data type or something.  Unfortunately, I was getting
beat up by ODBC/MS Access users, who routinely generate queries which
the backend could not handle.

Thanks for your tolerance.

Bruce Momjian wrote:

> > I wouldn't say that the patch should be removed from the CVS. I just
> > need my *private* working snapshot to make sure my patches would not
> > break anything before submitting.
> >
> > I wish I could solve the problem by myself, but after spending a few
> > hours debugging last Sunday, I have not found a fix for it yet. Sorry.
>
> OK, here is a fix I have just applied to the tree.  It appears to meet
> the intent of the original patch.  David H. will have to comment on its
> accuracy.
>
> ---------------------------------------------------------------------------
>
> Index: backend/executor/execMain.c
> ===================================================================
> RCS file: /usr/local/cvsroot/pgsql/src/backend/executor/execMain.c,v
> retrieving revision 1.46
> diff -c -r1.46 execMain.c
> *** execMain.c	1998/05/21 03:53:50	1.46
> --- execMain.c	1998/05/26 03:33:22
> ***************
> *** 530,536 ****
>           JunkFilter *j = (JunkFilter *) ExecInitJunkFilter(targetList);
>           estate->es_junkFilter = j;
>
> !         tupType = j->jf_cleanTupType;  /* Added by daveh@insightdist.com 5/20/98 */
>       }
>       else
>           estate->es_junkFilter = NULL;
> --- 530,537 ----
>           JunkFilter *j = (JunkFilter *) ExecInitJunkFilter(targetList);
>           estate->es_junkFilter = j;
>
> !         if (operation == CMD_SELECT)
> !             tupType = j->jf_cleanTupType;
>       }
>       else
>           estate->es_junkFilter = NULL;
On Tue, 26 May 1998, David Hartwig wrote:

> Yikes!
>
> I see my little patch has stirred things up a bit.  Bruce, your addition
> does meet the needs of the intent of my patch.  I tried it here with
> positive results.  I hope you will keep the whole patch.

	It will be kept...I'd rather see new stuff added that has some
*minor* bugs (as this one turned out to be, actually) vs never seeing
anything new because ppl are too nervous to submit them :)
>
> The Hermit Hacker <scrappy@hub.org> writes:
> > On Mon, 25 May 1998, David Gould wrote:
> >> - if the build and regression is good, then a snapshot is made into a
> >> "last known good" location.
>
> > Actually, ummm...I've been considering removing the snapshots
> > altogether, now that anoncvs works.
>
> It may be worth pointing out that cvs allows anyone to retrieve *any*
> prior state of the code.  This opens up a great number of options that
> a simple periodic snapshot does not.  I think it's worth continuing the
> snapshot series for those who don't want to install cvs for some reason,
> but that probably won't be the primary access method anymore.

I have to agree with Marc.  The author did test with the regression
tests.  In fact, the regression tests are not up-to-date, so there are
many diffs even when the code works, and we can't expect someone to keep
the regression tests spotless at all times.  What I normally do is to
run the regression tests, save the output, run them with the patch, and
compare the differences.  But, sometimes, they don't show up.

When people report problems, we do research, find the cause, and get the
current tree fixed.  cvs with -Ddate to find the date it broke is
usually all we need.

And I am the one who wants patches applied within a few days of
appearance.  I think it encourages people to submit patches.  Nothing
more annoying than to submit a patch that fixes a problem you have, and
find that it is not yet in the source tree for others to use and test.

> This sort of fiasco could be minimized if everyone got in the habit of
> running regression tests before submitting their patches.  Here I have
> to disagree with Marc's opinion that it's not really important whether
> pre-alpha code works.  If the tree is currently broken, that prevents
> everyone else from running regression tests on what *they* are doing,
> and consequently encourages the submission of even more code that hasn't
> been adequately tested.  I would like to see a policy that you don't
> check in code until it passes the regression tests for you locally.  We
> will still have failures because of (a) portability problems --- ie it
> works at your site, but not for someone else; and (b) unforeseen
> interactions between patches submitted at about the same time.  But
> hopefully those will be relatively easy to track down if the normal
> state is that things mostly work.
>
> We might also consider making more use of cvs' ability to track multiple
> development branches.  If several people need to cooperate on a large
> change, they could work together in a cvs branch until their mods are
> finished, allowing them to share development files without breaking the
> main branch for others.

Actually, things have been working well for quite some time.  The age of
the snapshots and the lack of anon-cvs/CVSup was slowing down people
from seeing/fixing patches.  The old folks had access, but the people
who were submitting patches did not.  That should be fixed now.

--
Bruce Momjian                          |  830 Blythe Avenue
maillist@candle.pha.pa.us              |  Drexel Hill, Pennsylvania 19026
  +  If your life is a hard drive,     |  (610) 353-9879(w)
  +  Christ can be your backup.        |  (610) 853-3000(h)
>
> This is a multi-part message in MIME format.
> --------------929DA74C986399373CF663B0
> Content-Type: text/plain; charset=us-ascii
> Content-Transfer-Encoding: 7bit
>
> Yikes!
>
> I see my little patch has stirred things up a bit.  Bruce, your addition
> does meet the needs of the intent of my patch.  I tried it here with
> positive results.  I hope you will keep the whole patch.

I was wondering, should the patch be:

	if (j->jf_cleanTupType)
		tupType = j->jf_cleanTupType;

rather than my current:

	if (operation == CMD_SELECT)
		tupType = j->jf_cleanTupType;

Not sure.

> For what it is worth, I did run the regression test.  But I did not get
> any failures that appeared to be a result of the patch.  There were,
> however, many failures before and after my patch.  Most were due to AIX
> system messages, but there were many that I could not explain.  I will
> gladly report them if anyone is interested.

This brings up an interesting point that bears comment.  There has been
discussion about how to better review these patches.  Certainly, the
patch list is for general review, and many people use that for requests
for people to review their patches.  I even post small patches for
review to hackers when I really need help.

Second, many bugs do not show up in the regression tests, and often the
only way to find the cause is to try checking/backing out recent
patches.  cvs allows us to revert our tree to any date in the past, and
that works well too.

I have been more concerned about a patch that works, but adds some ugly
hack that causes performance/random problems.  That is where my review
eye is usually looking.

> I have to admit that I was nervous about submitting my first patch into
> an area of code as important as this one.  I would have liked to start
> off with a new data type or something.  Unfortunately, I was getting
> beat up by ODBC/MS Access users, who routinely generate queries which
> the backend could not handle.

No way we would NOT use this patch.  Obviously, a sophisticated patch
that meets a real need for us.

> Thanks for your tolerance.

Thanks for the patch.  We have gotten quite a few 'nice' patches
recently that fix some big problems.

Also, all your e-mail comes with this attachment.  Thought you should
know.

> --------------929DA74C986399373CF663B0
> Content-Type: text/x-vcard; charset=us-ascii; name="vcard.vcf"
> Content-Transfer-Encoding: 7bit
> Content-Description: Card for David Hartwig
> Content-Disposition: attachment; filename="vcard.vcf"
>
> begin:          vcard
> fn:             David Hartwig
> n:              Hartwig;David
> email;internet: daveh@insightdist.com
> x-mozilla-cpt:  ;2
> x-mozilla-html: TRUE
> version:        2.1
> end:            vcard
>
> --------------929DA74C986399373CF663B0--

--
Bruce Momjian                          |  830 Blythe Avenue
maillist@candle.pha.pa.us              |  Drexel Hill, Pennsylvania 19026
  +  If your life is a hard drive,     |  (610) 353-9879(w)
  +  Christ can be your backup.        |  (610) 853-3000(h)
On Tue, 26 May 1998, Bruce Momjian wrote: > And I am the one who wants patches applied within a few days of > appearance. I think it encourages people to submit patches. Nothing > more annoying than to submit a patch that fixes a problem you have, and > find that it is not yet in the source tree for others to use and test. Except...of course...nobody should be *using* an Alpha/development source tree except for testing :)
>
> On Tue, 26 May 1998, David Hartwig wrote:
>
> > Yikes!
> >
> > I see my little patch has stirred things up a bit.  Bruce, your addition
> > does meet the needs of the intent of my patch.  I tried it here with
> > positive results.  I hope you will keep the whole patch.
>
> 	It will be kept...I'd rather see new stuff added that has some
> *minor* bugs (as this one turned out to be, actually) vs never seeing
> anything new because ppl are too nervous to submit them :)

Yep, I remember my first patch.  Thought someone's machine might melt
down.  Only later did I become confident enough to make changes with my
eyes closed.  :-)

--
Bruce Momjian                          |  830 Blythe Avenue
maillist@candle.pha.pa.us              |  Drexel Hill, Pennsylvania 19026
  +  If your life is a hard drive,     |  (610) 353-9879(w)
  +  Christ can be your backup.        |  (610) 853-3000(h)
> > On Tue, 26 May 1998, Bruce Momjian wrote: > > > And I am the one who wants patches applied within a few days of > > appearance. I think it encourages people to submit patches. Nothing > > more annoying than to submit a patch that fixes a problem you have, and > > find that it is not yet in the source tree for others to use and test. > > Except...of course...nobody should be *using* an Alpha/development > source tree except for testing :) Yes. Good point. Our development tree is not for production use. -- Bruce Momjian | 830 Blythe Avenue maillist@candle.pha.pa.us | Drexel Hill, Pennsylvania 19026 + If your life is a hard drive, | (610) 353-9879(w) + Christ can be your backup. | (610) 853-3000(h)
Bruce Momjian <maillist@candle.pha.pa.us> writes:
> The author did test with the regression tests.  In fact, the
> regression tests are not up-to-date, so there are many diffs even when
> the code works, and we can't expect someone to keep the regression
> tests spotless at all times.

Actually, I sympathize with David on this: I got burnt the same way
just a couple weeks ago.  (I blithely assumed that the regression tests
would test copy in/out ... they don't ...)

Perhaps the real lesson to be learned is that a little more effort
should be expended on the regression tests.  I have a couple of
suggestions:

1. As far as I've seen there is no documentation on how to create
regression tests.  This should be documented and made as easy as
possible, to encourage people to create tests for missing cases.

2. System variations (roundoff error differences, etc) create spurious
test complaints that make it hard to interpret the results properly.
Can anything be done to clean this up?

3. The TODO list should maintain a section on missing regression tests;
any failure that gets by the regression tests should cause an entry
to get made here.  This list would have a side benefit of warning
developers about areas that are not getting tested, so that they know
they have to do some hand testing if they change relevant code.

We can start the new TODO section with:

* Check destroydb.  (Currently, running regression a second time checks
  this, but a single run in a clean tree won't.)
* Check copy from stdin/to stdout.
* Check large-object interface.

What else have people been burnt by lately?

			regards, tom lane
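To make item 1 concrete: a regression test is just a SQL script plus its
captured output.  A hypothetical sql/copy2.sql covering the
copy-from-stdin gap could be as small as this (the test, table, and
column names are invented for the example; the data lines are
tab-separated):

	CREATE TABLE copytest (a int4, b text);
	COPY copytest FROM stdin;
	1	one
	2	two
	\.
	COPY copytest TO stdout;
	DROP TABLE copytest;

plus a matching expected/copy2.out captured from a build that is known
to be good, and presumably an entry in whatever list the regression
driver script runs.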
On Tue, 26 May 1998, Tom Lane wrote:

> Bruce Momjian <maillist@candle.pha.pa.us> writes:
> > The author did test with the regression tests.  In fact, the
> > regression tests are not up-to-date, so there are many diffs even when
> > the code works, and we can't expect someone to keep the regression
> > tests spotless at all times.
>
> Actually, I sympathize with David on this: I got burnt the same way
> just a couple weeks ago.  (I blithely assumed that the regression tests
> would test copy in/out ... they don't ...)
>
> Perhaps the real lesson to be learned is that a little more effort
> should be expended on the regression tests.  I have a couple of
> suggestions:
>
> 1. As far as I've seen there is no documentation on how to create
> regression tests.  This should be documented and made as easy as
> possible, to encourage people to create tests for missing cases.
>
> 2. System variations (roundoff error differences, etc) create spurious
> test complaints that make it hard to interpret the results properly.
> Can anything be done to clean this up?

	See the expected/int2-FreeBSD.out and similar files...I've done
what I can with the 'spurious test complaints'...
Tom Lane:
> Perhaps the real lesson to be learned is that a little more effort
> should be expended on the regression tests.  I have a couple of
> suggestions:
>
> 1. As far as I've seen there is no documentation on how to create
> regression tests.  This should be documented and made as easy as
> possible, to encourage people to create tests for missing cases.

Excellent idea.  If everyone making a new feature could also make a test
case for it, this would help us keep the system stable.

> 2. System variations (roundoff error differences, etc) create spurious
> test complaints that make it hard to interpret the results properly.
> Can anything be done to clean this up?

Hmmm, perhaps we could modify the tests to display results through a
function that rounds to the expected precision, eg instead of:

	select floatcol, doublecol from testtab;

use:

	select display(floatcol, 8), display(doublecol, 16) from testtab;

> 3. The TODO list should maintain a section on missing regression tests;
> any failure that gets by the regression tests should cause an entry
> to get made here.  This list would have a side benefit of warning
> developers about areas that are not getting tested, so that they know
> they have to do some hand testing if they change relevant code.
>
> We can start the new TODO section with:
>
> * Check destroydb.  (Currently, running regression a second time checks
>   this, but a single run in a clean tree won't.)
> * Check copy from stdin/to stdout.
> * Check large-object interface.
>
> What else have people been burnt by lately?

The int2, oidint2, int4, and oidint4 tests (and some others I think) are
currently failing because the text of a couple of error messages changed
and the "expected" output was not updated.  This kind of thing is pretty
annoying, as whoever changed the messages really should have updated the
tests as well.  If the current messages are preferred to the old
messages, I will fix the test output to match, although personally I
like the old messages better.

I will argue once again for a clean snapshot that is known to pass
regression.  This snapshot could be just a CVS tag, but it is important
when starting work on a complex change to be able to know that any
problems you have when you are done are due to your work, not some
pre-existing condition.

-dg

David Gould            dg@illustra.com           510.628.3783 or 510.305.9468
Informix Software  (No, really)         300 Lakeside Drive  Oakland, CA 94612
"I believe OS/2 is destined to be the most important operating system, and
 possibly program, of all time" - Bill Gates, Nov, 1987.
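The display() idea is easy to prototype as a backend C function.  A
rough sketch, with a hypothetical name and glossing over the details of
the old fmgr calling conventions (float64 is a pointer to the actual
double in this source tree):

	#include <stdio.h>
	#include <string.h>
	#include "postgres.h"
	#include "utils/palloc.h"

	/*
	 * Render a float8 using a fixed number of significant digits,
	 * so regression output no longer depends on platform rounding.
	 */
	text *
	display(float64 val, int32 digits)
	{
		char	buf[64];
		int		len;
		text   *result;

		if (digits < 1 || digits > 17)
			digits = 6;			/* fall back to something sane */
		sprintf(buf, "%.*g", (int) digits, *val);
		len = strlen(buf);
		result = (text *) palloc(VARHDRSZ + len);
		VARSIZE(result) = VARHDRSZ + len;
		memcpy(VARDATA(result), buf, len);
		return result;
	}

Registered with CREATE FUNCTION, the tests could then select
display(floatcol, 8) as suggested above, and the "expected" files would
stop caring about the last few bits.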
Bruce Momjian wrote:

> I was wondering, should the patch be:
>
> 	if (j->jf_cleanTupType)
> 		tupType = j->jf_cleanTupType;
>
> rather than my current:
>
> 	if (operation == CMD_SELECT)
> 		tupType = j->jf_cleanTupType;
>
> Not sure.

The second option (your earlier suggestion) seems to be necessary and
sufficient.  The junk filter (and jf_cleanTupType) will always exist for
SELECT statements, as long as the following is not a legal statement:

	SELECT FROM foo GROUP BY bar;

Currently the parser will not accept it.  Sufficient.

The first option would set tupType, for non-SELECT statements, to
something it otherwise may not have been.  I would rather not risk
affecting those calling routines which are not executing a SELECT
command.  At this time, I do not understand them enough, and I see no
benefit.  Necessary?
> The second option (your earlier suggestion) seems to be necessary and
> sufficient.  The junk filter (and jf_cleanTupType) will always exist for
> SELECT statements, as long as the following is not a legal statement:
>
> 	SELECT FROM foo GROUP BY bar;
>
> Currently the parser will not accept it.  Sufficient.
>
> The first option would set tupType, for non-SELECT statements, to
> something it otherwise may not have been.  I would rather not risk
> affecting those calling routines which are not executing a SELECT
> command.  At this time, I do not understand them enough, and I see no
> benefit.  Necessary?

OK, I will leave it alone.  Is there a way to use junk filters only in
cases where we need them?

--
Bruce Momjian                          |  830 Blythe Avenue
maillist@candle.pha.pa.us              |  Drexel Hill, Pennsylvania 19026
  +  If your life is a hard drive,     |  (610) 353-9879(w)
  +  Christ can be your backup.        |  (610) 853-3000(h)
> > Perhaps the real lesson to be learned is that a little more effort
> > should be expended on the regression tests.  I have a couple of
> > suggestions:
> > 1. As far as I've seen there is no documentation on how to create
> > regression tests.  This should be documented and made as easy as
> > possible, to encourage people to create tests for missing cases.

Hmm.  It ain't hard, but afaik the only people who have pushed on the
regression tests are scrappy and myself.  We went for years with no
updates to the regression tests at all, and now have a somewhat stable
set of tests which actually measure many features of a s/w build.

> Excellent idea.  If everyone making a new feature could also make a test
> case for it, this would help us keep the system stable.

This would seem to be a truism.  Any takers??

> > 2. System variations (roundoff error differences, etc) create
> > spurious test complaints that make it hard to interpret the results
> > properly.  Can anything be done to clean this up?
> Hmmm, perhaps we could modify the tests to display results through a
> function that rounds to the expected precision, eg:
> 	select display(floatcol, 8), display(doublecol, 16) from testtab;

Gee, maybe we should go back to the original v4.2/v1.0.x behavior of
rounding _all_ double precision floating point results to 6 digits :(

We've worked hard to get all of the regression tests to match at least
one platform (at the moment, Linux/i686) and scrappy has extended the
test mechanism to allow for platform-specific differences.  But we don't
have access to all of the supported platforms, so others will need to
help (and they have been, at least some).

> > 3. The TODO list should maintain a section on missing regression
> > tests; any failure that gets by the regression tests should cause an
> > entry to get made here.  This list would have a side benefit of
> > warning developers about areas that are not getting tested, so that
> > they know they have to do some hand testing if they change relevant
> > code.

imho it will take more effort to maintain a todo list than to just
submit a patch for testing.  I would be happy to maintain the "expected"
output if people would suggest new tests (and better, submit patches for
the tests).

> I will argue once again for a clean snapshot that is known to pass
> regression.

I snapshot the system all the time, and then do development for a week
or two or more on that revlocked snapshot.  afaik the failures in
regression testing at the time I submitted my last "typing" patches were
due to differences in type conversion behavior; I didn't want the
changed behavior to become formalized until others had had a chance to
test and comment.  (btw, no one has, and anyway I'm changing the results
for most of those back to what they were before)

It's pretty clear that many patches are submitted without full
regression testing; either way it would be helpful to post a comment
with patches saying how the patches affect the regression tests, or that
no testing was done.  I'd like to see another person test patches before
committing to the source tree, but others might like to see where the
patches/changes are heading even before that, so I can see arguments
both ways.

As has been suggested by yourself and others, regression test
contributions would be very helpful; so far the discussion amounts to
asking scrappy and myself to do _more_ work on the regression tests.
I'd like to see someone offering specific help at some point :)

Anyway, if Marc or I led this discussion you will probably just get more
ideas similar to what is already there; more brainstorming on the
hackers list from y'all will lead to some good new ideas, so I'll shut
up now...

                      - Tom
Tom Lane wrote:
>
> 2. System variations (roundoff error differences, etc) create spurious
> test complaints that make it hard to interpret the results properly.
> Can anything be done to clean this up?

It would be good if the backend looked at errno and put out an
appropriate sqlcode number and a human-readable message, instead of
using the system error messages.  That would eliminate some of the
regression test diff output.

/* m */
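A minimal sketch of that idea (pg_strerror is a made-up name, and the
sqlcode numbering is left out entirely): keep a small table of the errno
values the backend actually reports, with fixed wording, instead of
handing strerror() text to the client.

	#include <errno.h>

	/* return the same message text on every platform for a given errno */
	static const char *
	pg_strerror(int errnum)
	{
		switch (errnum)
		{
			case ERANGE:
				return "result out of range";
			case EDOM:
				return "argument out of domain";
			case ENOMEM:
				return "out of memory";
			default:
				return "unrecognized system error";
		}
	}

With something like this behind elog(), the int2/int4 "expected" files
would no longer diverge over libc wording.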
Bruce Momjian wrote:

> > The second option (your earlier suggestion) seems to be necessary and
> > sufficient.  The junk filter (and jf_cleanTupType) will always exist for
> > SELECT statements, as long as the following is not a legal statement:
> >
> > 	SELECT FROM foo GROUP BY bar;
> >
> > Currently the parser will not accept it.  Sufficient.
> >
> > The first option would set tupType, for non-SELECT statements, to
> > something it otherwise may not have been.  I would rather not risk
> > affecting those calling routines which are not executing a SELECT
> > command.  At this time, I do not understand them enough, and I see no
> > benefit.  Necessary?
>
> OK, I will leave it alone.  Is there a way to use junk filters only in
> cases where we need them?

I have not YET come up with a clean method for detecting a resjunk flag
being set on some resdom in the target list by a GROUP/ORDER BY.  I will
give it another look.  It does seem a bit heavy-handed to construct the
filter unconditionally on all SELECTs.
> I have not YET come up with a clean method for detecting a resjunk flag
> being set on some resdom in the target list by a GROUP/ORDER BY.  I will
> give it another look.  It does seem a bit heavy-handed to construct the
> filter unconditionally on all SELECTs.

Can you foreach() through the target list, and check/set a flag, then
call the junk filter if you found a resjunk?

--
Bruce Momjian                          |  830 Blythe Avenue
maillist@candle.pha.pa.us              |  Drexel Hill, Pennsylvania 19026
  +  If your life is a hard drive,     |  (610) 353-9879(w)
  +  Christ can be your backup.        |  (610) 853-3000(h)
The Hermit Hacker wrote:
> On Tue, 26 May 1998, Tom Lane wrote:
> > Perhaps the real lesson to be learned is that a little more effort
> > should be expended on the regression tests.  I have a couple of
> > suggestions:
> >
> > 1. As far as I've seen there is no documentation on how to create
> > regression tests.  This should be documented and made as easy as
> > possible, to encourage people to create tests for missing cases.
> >
> > 2. System variations (roundoff error differences, etc) create spurious
> > test complaints that make it hard to interpret the results properly.
> > Can anything be done to clean this up?
>
> 	See the expected/int2-FreeBSD.out and similar files...I've done
> what I can with the 'spurious test complaints'...

Thanks.  One question: is there any reason we can't use the intx tests
on all the platforms?  I realize that float is another set of problems,
but it seems that int should be the same?

-dg

David Gould            dg@illustra.com           510.628.3783 or 510.305.9468
Informix Software                       300 Lakeside Drive  Oakland, CA 94612
 - A child of five could understand this! Fetch me a child of five.
On Sun, 31 May 1998, David Gould wrote:

> Thanks.  One question: is there any reason we can't use the intx tests
> on all the platforms?  I realize that float is another set of problems,
> but it seems that int should be the same?

10c10
< ERROR:  pg_atoi: error reading "100000": Result too large
---
> ERROR:  pg_atoi: error reading "100000": Math result not representable

	The changes are more error-message related than anything...

Marc G. Fournier
Systems Administrator @ hub.org
primary: scrappy@hub.org           secondary: scrappy@{freebsd|postgresql}.org
> On Sun, 31 May 1998, David Gould wrote:
>
> > Thanks.  One question: is there any reason we can't use the intx tests
> > on all the platforms?  I realize that float is another set of problems,
> > but it seems that int should be the same?
>
> 10c10
> < ERROR:  pg_atoi: error reading "100000": Result too large
> ---
> > ERROR:  pg_atoi: error reading "100000": Math result not representable
>
> 	The changes are more error-message related than anything...
>
> Marc G. Fournier
> Systems Administrator @ hub.org
> primary: scrappy@hub.org           secondary: scrappy@{freebsd|postgresql}.org

That's what I thought.  So can we just rename the int*-*BSD.out to int*.out?

-dg
On Sun, 31 May 1998, David Gould wrote:

> > On Sun, 31 May 1998, David Gould wrote:
> >
> > > Thanks.  One question: is there any reason we can't use the intx tests
> > > on all the platforms?  I realize that float is another set of problems,
> > > but it seems that int should be the same?
> >
> > 10c10
> > < ERROR:  pg_atoi: error reading "100000": Result too large
> > ---
> > > ERROR:  pg_atoi: error reading "100000": Math result not representable
> >
> > 	The changes are more error-message related than anything...
>
> That's what I thought.  So can we just rename the int*-*BSD.out to int*.out?

	No, cause then that will break the Linux regression tests :)

Marc G. Fournier
Systems Administrator @ hub.org
primary: scrappy@hub.org           secondary: scrappy@{freebsd|postgresql}.org
> > > > Is there any reason we can't use the intx tests on all the
> > > > platforms?
> > > The changes are more error-message related than anything...
> > So can we just rename the int*-*BSD.out to int*.out?
> No, cause then that will break the Linux regression tests :)

... which has been the regression reference platform since scrappy and I
resurrected the regression test suite 'bout a year ago for v6.1...

I assume that most platforms have some differences.  Or would we find
lots more matching each other if we chose something other than Linux for
the reference output?

                      - Tom
Marc G. Fournier wrote:
>
> On Sun, 31 May 1998, David Gould wrote:
>
> > > Thanks.  One question: is there any reason we can't use the intx tests
> > > on all the platforms?  I realize that float is another set of problems,
> > > but it seems that int should be the same?
> >
> > 10c10
> > < ERROR:  pg_atoi: error reading "100000": Result too large
> > ---
> > > ERROR:  pg_atoi: error reading "100000": Math result not representable
> >
> > 	The changes are more error-message related than anything...
>
> > That's what I thought.  So can we just rename the int*-*BSD.out to int*.out?
>
> 	No, cause then that will break the Linux regression tests :)

For the "int" tests?  I hope not.  Anyhow, I will test this and see if I
can clean up some regression issues.  I plan to break a lot of stuff in
the next few weeks (ok, months) and sure want to be able to count on the
regression suite to help me find my way.

-dg

David Gould            dg@illustra.com           510.628.3783 or 510.305.9468
Informix Software                       300 Lakeside Drive  Oakland, CA 94612
 - A child of five could understand this! Fetch me a child of five.
> > > > > Is there any reason we can't use the intx tests on all the
> > > > > platforms?
> > > > The changes are more error-message related than anything...
> > > So can we just rename the int*-*BSD.out to int*.out?
> > No, cause then that will break the Linux regression tests :)
>
> ... which has been the regression reference platform since scrappy and I
> resurrected the regression test suite 'bout a year ago for v6.1...
>
> I assume that most platforms have some differences.  Or would we find
> lots more matching each other if we chose something other than Linux for
> the reference output?
>
>                       - Tom

Hmmm, I find that I get lots of diffs on the floating point tests as I
am running Linux with the new "glibc".  I suspect the reference platform
is the old "libc5" Linux.  We might want to move the reference to
"glibc" Linux as this will be the majority platform very soon.  And
since glibc is not just for Linux, it will even help with other
platforms in the coming few months.

Anyway, I am willing to work on the tests a little but do not want to
"take them over".  Who "owns" them now?  Perhaps I could co-ordinate and
ask for advice from that person?

-dg

David Gould            dg@illustra.com           510.628.3783 or 510.305.9468
Informix Software  (No, really)         300 Lakeside Drive  Oakland, CA 94612
"Of course, someone who knows more about this will correct me if I'm wrong,
 and someone who knows less will correct me if I'm right."
               --David Palmer (palmer@tybalt.caltech.edu)
On Sun, 31 May 1998, David Gould wrote:

> Anyway, I am willing to work on the tests a little but do not want to
> "take them over".  Who "owns" them now?  Perhaps I could co-ordinate and
> ask for advice from that person?

	Nobody "owns" them...Thomas and I have tried to keep them
relatively up to date, with Thomas doing the greater part of the work
on a Linux platform...

	Stuff like the int* test 'expected' output files are generated
under Linux, which generates a different error message than the same
test(s) under FreeBSD/NetBSD :(
> On Sun, 31 May 1998, David Gould wrote:
>
> > Anyway, I am willing to work on the tests a little but do not want to
> > "take them over".  Who "owns" them now?  Perhaps I could co-ordinate and
> > ask for advice from that person?
>
> 	Nobody "owns" them...Thomas and I have tried to keep them
> relatively up to date, with Thomas doing the greater part of the work
> on a Linux platform...
>
> 	Stuff like the int* test 'expected' output files are generated
> under Linux, which generates a different error message than the same
> test(s) under FreeBSD/NetBSD :(

Ok, now I am confused.  Isn't the error message "our" error message?  If
so, can't we make it the same?

-dg
> > 	Nobody "owns" them...Thomas and I have tried to keep them
> > relatively up to date, with Thomas doing the greater part of the work
> > on a Linux platform...
> > 	Stuff like the int* test 'expected' output files are generated
> > under Linux, which generates a different error message than the same
> > test(s) under FreeBSD/NetBSD :(
> Ok, now I am confused.  Isn't the error message "our" error message?  If
> so, can't we make it the same?

Nope.  Some messages come from the system apparently.  I can't remember
how they come about, but the differences are not due to #ifdef FreeBSD
blocks in the code :)

The only differences I know of in the regression tests are due to
numeric rounding, math libraries and system error messages.

I will point out that although no one really "owns" the regression tests
(in the spirit that everyone can and should contribute), I (and others)
have run them extensively in support of releases.  It is important that
whoever is running the "reference platform" be willing to run regression
tests ad nauseum, and to track down any problems.  I've done so the last
few releases.  When/if this doesn't happen, we get a flakey release.

                      - Tom
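For the record, the system dependence enters through strerror().  The
failing path is shaped roughly like this (a paraphrase for illustration,
not the exact pg_atoi source):

	#include <errno.h>
	#include <stdio.h>
	#include <stdlib.h>
	#include <string.h>

	/* convert a string, complaining the way pg_atoi does */
	long
	atoi_checked(const char *s)
	{
		char   *badp;
		long	l;

		errno = 0;
		l = strtol(s, &badp, 10);
		if (errno == ERANGE)	/* overflow */
			fprintf(stderr, "pg_atoi: error reading \"%s\": %s\n",
					s, strerror(errno));
		return l;
	}

The message tail comes from the C library, which is why one libc renders
ERANGE as "Result too large" while another says "Math result not
representable" for exactly the same overflow.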
> > > 	Nobody "owns" them...Thomas and I have tried to keep them
> > > relatively up to date, with Thomas doing the greater part of the work
> > > on a Linux platform...
> > > 	Stuff like the int* test 'expected' output files are generated
> > > under Linux, which generates a different error message than the same
> > > test(s) under FreeBSD/NetBSD :(
> > Ok, now I am confused.  Isn't the error message "our" error message?  If
> > so, can't we make it the same?
>
> Nope.  Some messages come from the system apparently.  I can't remember
> how they come about, but the differences are not due to #ifdef FreeBSD
> blocks in the code :)

Thank goodness!  I always worry about that when dealing with *BSD people ;-)

> The only differences I know of in the regression tests are due to
> numeric rounding, math libraries and system error messages.

That is about what I see.

> I will point out that although no one really "owns" the regression tests
> (in the spirit that everyone can and should contribute), I (and others)
> have run them extensively in support of releases.  It is important that
> whoever is running the "reference platform" be willing to run regression
> tests ad nauseum, and to track down any problems.  I've done so the last
> few releases.

Ok, I will make a set of Linux glibc expected files for 6.3.2 and if
that works send them in.  Not sure how to handle the reference Linux vs
glibc Linux issue in terms of the way the tests are structured and
platforms named, but they do have different rounding behavior and
messages.

Of course, someone is welcome to beat me to this.  No really, go ahead...

-dg

David Gould            dg@illustra.com           510.628.3783 or 510.305.9468
Informix Software  (No, really)         300 Lakeside Drive  Oakland, CA 94612
"Don't worry about people stealing your ideas.  If your ideas are any good,
 you'll have to ram them down people's throats." -- Howard Aiken
> > It is important that > > whoever is running the "reference platform" be willing to run > > regression tests ad nauseum, and to track down any problems. > Ok, I will make a set of Linux glibc expected files for 6.3.2 and if > that works send them in. Not sure how to handle the reference Linux vs > glibc Linux issue in terms of the way the tests are structured and > platforms named, but they do have different rounding behavior and > messages. I'm running RH5.0 at work, but have RH4.2 at home. I'm reluctant to upgrade at home because I have _all_ Postgres releases from v1.0.9 to current installed and I can fire them up for testing in less than a minute. If I upgrade to the new glibc2, I might have trouble rebuilding the old source trees. Anyway, will probably upgrade sometime in the next few months, and then the reference platform will be glibc2-based. If you are generating new "expected" files for glibc2 shouldn't they be based on the current development tree? Or are you providing them as a patch for v6.3.2 to be installed in /pub/patches?? - Tom
On Mon, 1 Jun 1998, David Gould wrote:

> > Nope.  Some messages come from the system apparently.  I can't remember
> > how they come about, but the differences are not due to #ifdef FreeBSD
> > blocks in the code :)
>
> Thank goodness!  I always worry about that when dealing with *BSD people ;-)

	*BSD people??  At least us *BSD people work at making sure that
software developed works on everyone else's choice of OS too...not like
some Linux developers out there (ie. Wine) :)

Marc G. Fournier
Systems Administrator @ hub.org
primary: scrappy@hub.org           secondary: scrappy@{freebsd|postgresql}.org
> On Mon, 1 Jun 1998, David Gould wrote:
>
> > > Nope.  Some messages come from the system apparently.  I can't remember
> > > how they come about, but the differences are not due to #ifdef FreeBSD
> > > blocks in the code :)
> >
> > Thank goodness!  I always worry about that when dealing with *BSD people ;-)
>
> 	*BSD people??  At least us *BSD people work at making sure that
> software developed works on everyone else's choice of OS too...not like
> some Linux developers out there (ie. Wine) :)

I hear that they didn't do an AS/400 port either ;-).

-dg
> > >> PPC/Linux has been broken too.
> >
> > >Please let me know what the problem was, even if it was just the 'global tas'
> > >thing. I am trying to make sure this works on all platforms. Thanks.
>
> Here are patches for s_lock.c (against May23 snapshot).
> ----------------------------------------------------------
> *** s_lock.c.orig	Mon May 25 18:08:20 1998
> --- s_lock.c	Mon May 25 18:08:57 1998
> ***************
> *** 151,161 ****
>
>   #if defined(PPC)
>
> ! static int
> ! tas_dummy()
>   {
>   	__asm__("	\n\
> - tas:			\n\
>   	lwarx	5,0,3	\n\
>   	cmpwi	5,0	\n\
>   	bne	fail	\n\
> --- 151,160 ----
>
>   #if defined(PPC)
>
> ! int
> ! tas(slock_t *lock)
>   {
>   	__asm__("	\n\
>   	lwarx	5,0,3	\n\
>   	cmpwi	5,0	\n\
>   	bne	fail	\n\

This patch appears to have been applied already.

--
Bruce Momjian                          |  830 Blythe Avenue
maillist@candle.pha.pa.us              |  Drexel Hill, Pennsylvania 19026
  +  If your life is a hard drive,     |  (610) 353-9879(w)
  +  Christ can be your backup.        |  (610) 853-3000(h)
> > > >> PPC/Linux has been broken too.
> > >
> > > >Please let me know what the problem was, even if it was just the 'global tas'
> > > >thing. I am trying to make sure this works on all platforms. Thanks.
> >
> > Here are patches for s_lock.c (against May23 snapshot).
> > ----------------------------------------------------------
> > *** s_lock.c.orig	Mon May 25 18:08:20 1998
> > --- s_lock.c	Mon May 25 18:08:57 1998
> > ***************
> > *** 151,161 ****
> >
> >   #if defined(PPC)
> >
> > ! static int
> > ! tas_dummy()
> >   {
> >   	__asm__("	\n\
> > - tas:			\n\
> >   	lwarx	5,0,3	\n\
> >   	cmpwi	5,0	\n\
> >   	bne	fail	\n\
> > --- 151,160 ----
> >
> >   #if defined(PPC)
> >
> > ! int
> > ! tas(slock_t *lock)
> >   {
> >   	__asm__("	\n\
> >   	lwarx	5,0,3	\n\
> >   	cmpwi	5,0	\n\
> >   	bne	fail	\n\
>
> This patch appears to have been applied already.
>
> --

Yes.  I picked up all the S_LOCK related patches and messages from the
mailing list and folded them into the big S_LOCK patch that just got
committed.  So, unless I have messed up, or someone comes up with
something new, no other S_LOCK patches should be applied.

Thanks

-dg

David Gould            dg@illustra.com           510.628.3783 or 510.305.9468
Informix Software                       300 Lakeside Drive  Oakland, CA 94612
 - A child of five could understand this! Fetch me a child of five.
> Bruce Momjian wrote:
>
> > > The second option (your earlier suggestion) seems to be necessary and
> > > sufficient.  The junk filter (and jf_cleanTupType) will always exist for
> > > SELECT statements, as long as the following is not a legal statement:
> > >
> > > 	SELECT FROM foo GROUP BY bar;
> > >
> > > Currently the parser will not accept it.  Sufficient.
> > >
> > > The first option would set tupType, for non-SELECT statements, to
> > > something it otherwise may not have been.  I would rather not risk
> > > affecting those calling routines which are not executing a SELECT
> > > command.  At this time, I do not understand them enough, and I see no
> > > benefit.  Necessary?
> >
> > OK, I will leave it alone.  Is there a way to use junk filters only in
> > cases where we need them?
>
> I have not YET come up with a clean method for detecting a resjunk flag
> being set on some resdom in the target list by a GROUP/ORDER BY.  I will
> give it another look.  It does seem a bit heavy-handed to construct the
> filter unconditionally on all SELECTs.

David, attached is a patch to conditionally use the junk filter only
when there is a Resdom that has the resjunk field set.  Please review it
and let me know if there are any problems with it.

I am committing the patch to the development tree.

--
Bruce Momjian                          |  830 Blythe Avenue
maillist@candle.pha.pa.us              |  Drexel Hill, Pennsylvania 19026
  +  If your life is a hard drive,     |  (610) 353-9879(w)
  +  Christ can be your backup.        |  (610) 853-3000(h)
Bruce Momjian wrote:

> > Bruce Momjian wrote:
> >
> > > > The second option (your earlier suggestion) seems to be necessary and
> > > > sufficient.  The junk filter (and jf_cleanTupType) will always exist
> > > > for SELECT statements, as long as the following is not a legal
> > > > statement:
> > > >
> > > > 	SELECT FROM foo GROUP BY bar;
> > > >
> > > > Currently the parser will not accept it.  Sufficient.
> > > >
> > > > The first option would set tupType, for non-SELECT statements, to
> > > > something it otherwise may not have been.  I would rather not risk
> > > > affecting those calling routines which are not executing a SELECT
> > > > command.  At this time, I do not understand them enough, and I see
> > > > no benefit.  Necessary?
> > >
> > > OK, I will leave it alone.  Is there a way to use junk filters only in
> > > cases where we need them?
> >
> > I have not YET come up with a clean method for detecting a resjunk flag
> > being set on some resdom in the target list by a GROUP/ORDER BY.  I will
> > give it another look.  It does seem a bit heavy-handed to construct the
> > filter unconditionally on all SELECTs.
>
> David, attached is a patch to conditionally use the junk filter only
> when there is a Resdom that has the resjunk field set.  Please review it
> and let me know if there are any problems with it.
>
> I am committing the patch to the development tree.

I did not get any attached patch.  ???  I can check it out at home where
I have cvsup.

Were there any confirmed problems caused by the aggressive use of the
junk filter?  I ask because adding this extra check probably will not
resolve them.  It may only reduce the problem.

I was planning on including an additional check for resjunk as part of
another patch I am working on.  (GROUP/ORDER BY func(x) where func(x) is
not in the target list.)  Graciously accepted.
> I did not get any attached patch.  ???  I can check it out at home where
> I have cvsup.

Maybe I forgot to attach it.

> Were there any confirmed problems caused by the aggressive use of the
> junk filter?  I ask because adding this extra check probably will not
> resolve them.  It may only reduce the problem.

I did not address the problems.  This will probably just reduce them.

> I was planning on including an additional check for resjunk as part of
> another patch I am working on.  (GROUP/ORDER BY func(x) where func(x) is
> not in the target list.)  Graciously accepted.

This is the code fragment I added to execMain.c:

---------------------------------------------------------------------------

	{
		bool		junk_filter_needed = false;
		List	   *tlist;

		if (operation == CMD_SELECT)
		{
			foreach(tlist, targetList)
			{
				TargetEntry *tle = lfirst(tlist);

				if (tle->resdom->resjunk)
				{
					junk_filter_needed = true;
					break;
				}
			}
		}

		if (operation == CMD_UPDATE || operation == CMD_DELETE ||
			operation == CMD_INSERT ||
			(operation == CMD_SELECT && junk_filter_needed))
		{

--
Bruce Momjian                          |  830 Blythe Avenue
maillist@candle.pha.pa.us              |  Drexel Hill, Pennsylvania 19026
  +  If your life is a hard drive,     |  (610) 353-9879(w)
  +  Christ can be your backup.        |  (610) 853-3000(h)