Thread: pg_dump getBlobs query broken for 7.3 servers
Just noticed that the getBlobs() query does not work for a 7.3 server
(maybe <= 7.3) due to the following change in commit 23f34fa4 [1]:

     else if (fout->remoteVersion >= 70100)
         appendPQExpBufferStr(blobQry,
-                             "SELECT DISTINCT loid, NULL::oid, NULL::oid"
+                             "SELECT DISTINCT loid, NULL::oid, NULL, "
+                             "NULL AS rlomacl, NULL AS initlomacl, "
+                             "NULL AS initrlomacl "
                              " FROM pg_largeobject");
     else
         appendPQExpBufferStr(blobQry,
-                             "SELECT oid, NULL::oid, NULL::oid"
+                             "SELECT oid, NULL::oid, NULL, "
+                             "NULL AS rlomacl, NULL AS initlomacl, "
+                             "NULL AS initrlomacl "
                              " FROM pg_class WHERE relkind = 'l'");

The following error is reported by the server:

pg_dump: [archiver (db)] query failed: ERROR:  Unable to identify an
ordering operator '<' for type '"unknown"'
	Use an explicit ordering operator or modify the query
pg_dump: [archiver (db)] query was: SELECT DISTINCT loid, NULL::oid, NULL,
NULL AS rlomacl, NULL AS initlomacl, NULL AS initrlomacl FROM pg_largeobject

I could fix that using the attached patch.

Thanks,
Amit

[1] https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=23f34fa4ba358671adab16773e79c17c92cbc870
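For context, a minimal sketch of the failure and one possible repair, assuming a pre-7.4 server (the casts and aliases here are illustrative; the attached patch may choose different types): SELECT DISTINCT has to find an ordering operator for every output column, and a bare NULL has type "unknown", which those old servers cannot sort.

```sql
-- Fails on a 7.3 server: the bare NULLs are of type "unknown", and the
-- server cannot identify a '<' operator to implement DISTINCT for them.
SELECT DISTINCT loid, NULL::oid, NULL, NULL AS rlomacl
FROM pg_largeobject;

-- Works: give every DISTINCT column a concrete type so the sort resolves.
SELECT DISTINCT loid, NULL::oid, NULL::text, NULL::text AS rlomacl
FROM pg_largeobject;
```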
On 2016/10/07 11:47, Amit Langote wrote:
> Just noticed that the getBlobs() query does not work for a 7.3 server
> (maybe <= 7.3) due to the following change in commit 23f34fa4 [1]:
>
>      else if (fout->remoteVersion >= 70100)
>          appendPQExpBufferStr(blobQry,
> -                             "SELECT DISTINCT loid, NULL::oid, NULL::oid"
> +                             "SELECT DISTINCT loid, NULL::oid, NULL, "
> +                             "NULL AS rlomacl, NULL AS initlomacl, "
> +                             "NULL AS initrlomacl "
>                               " FROM pg_largeobject");
>      else
>          appendPQExpBufferStr(blobQry,
> -                             "SELECT oid, NULL::oid, NULL::oid"
> +                             "SELECT oid, NULL::oid, NULL, "
> +                             "NULL AS rlomacl, NULL AS initlomacl, "
> +                             "NULL AS initrlomacl "
>                               " FROM pg_class WHERE relkind = 'l'");
>
> The following error is reported by the server:
>
> pg_dump: [archiver (db)] query failed: ERROR:  Unable to identify an
> ordering operator '<' for type '"unknown"'
> Use an explicit ordering operator or modify the query
> pg_dump: [archiver (db)] query was: SELECT DISTINCT loid, NULL::oid, NULL,
> NULL AS rlomacl, NULL AS initlomacl, NULL AS initrlomacl FROM pg_largeobject
>
> I could fix that using the attached patch.

Forgot to mention that it needs to be fixed in both HEAD and 9.6.

Thanks,
Amit
Amit Langote <Langote_Amit_f8@lab.ntt.co.jp> writes:
> Just noticed that the getBlobs() query does not work for a 7.3 server
> (maybe <= 7.3) due to the following change in commit 23f34fa4 [1]:

Ugh.

> I could fix that using the attached patch.

There's more wrong than that, as you'd notice if you tried dumping
a DB that actually had some LOs in it :-(.  This obviously wasn't
tested on anything older than 9.0.

Will push a fix in a bit, as soon as I can boot up my dinosaur with
a working 7.0 server to test that branch.

			regards, tom lane
On Fri, Oct 7, 2016 at 9:39 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Amit Langote <Langote_Amit_f8@lab.ntt.co.jp> writes:
>> Just noticed that the getBlobs() query does not work for a 7.3 server
>> (maybe <= 7.3) due to the following change in commit 23f34fa4 [1]:
>
> Ugh.
>
>> I could fix that using the attached patch.
>
> There's more wrong than that, as you'd notice if you tried dumping
> a DB that actually had some LOs in it :-(.  This obviously wasn't
> tested on anything older than 9.0.
>
> Will push a fix in a bit, as soon as I can boot up my dinosaur with
> a working 7.0 server to test that branch.

Back in 2014, we talked about removing support for some older server versions:

https://www.postgresql.org/message-id/24529.1415921093@sss.pgh.pa.us

I think there have been other discussions, too, but I can't find them
at the moment.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Robert Haas <robertmhaas@gmail.com> writes:
> On Fri, Oct 7, 2016 at 9:39 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> There's more wrong than that, as you'd notice if you tried dumping
>> a DB that actually had some LOs in it :-(.  This obviously wasn't
>> tested on anything older than 9.0.

> Back in 2014, we talked about removing support for some older server versions:
> https://www.postgresql.org/message-id/24529.1415921093@sss.pgh.pa.us
> I think there have been other discussions, too, but I can't find them
> at the moment.

I just re-raised the subject:

https://www.postgresql.org/message-id/2661.1475849167@sss.pgh.pa.us

			regards, tom lane
* Tom Lane (tgl@sss.pgh.pa.us) wrote:
> Amit Langote <Langote_Amit_f8@lab.ntt.co.jp> writes:
> > Just noticed that the getBlobs() query does not work for a 7.3 server
> > (maybe <= 7.3) due to the following change in commit 23f34fa4 [1]:
>
> Ugh.
>
> > I could fix that using the attached patch.
>
> There's more wrong than that, as you'd notice if you tried dumping
> a DB that actually had some LOs in it :-(.  This obviously wasn't
> tested on anything older than 9.0.
>
> Will push a fix in a bit, as soon as I can boot up my dinosaur with
> a working 7.0 server to test that branch.

Ugh.  Thanks for fixing.  I had tested back to 7.4 with the regression
tests but either those didn't include blobs or something got changed
after my testing and I didn't re-test all the way back when I should
have.

I wasn't able to (easily) get anything older than 7.4 to compile on my
box, which is why I had stopped there.

In any case, thanks again for the fix.

Stephen
Stephen Frost <sfrost@snowman.net> writes:
> Ugh.  Thanks for fixing.  I had tested back to 7.4 with the regression
> tests but either those didn't include blobs or something got changed
> after my testing and I didn't re-test all the way back when I should
> have.

It looks like the final state of the regression tests doesn't include
any blobs before about 9.4.  You wouldn't have seen any results worse
than a warning message in 7.4-8.4, unless there were some blobs so that
the data extraction loop got iterated.

It might be a good idea to retroactively modify 9.1-9.3 so that there
are some blobs in the final state, for purposes of testing pg_dump and
pg_upgrade.

			regards, tom lane
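The kind of change being suggested could be as small as the following — a hypothetical sketch, not the actual test that was later used, with arbitrarily chosen OIDs: create a few large objects in a regression script and deliberately skip the cleanup so they survive into the final database state.

```sql
-- Hypothetical regression-test snippet: leave large objects behind in the
-- final database state so that cross-version pg_dump/pg_upgrade testing
-- actually exercises the blob-handling code paths.
SELECT lo_create(3001);
SELECT lo_create(3002);
-- No lo_unlink() on purpose: the blobs must outlive the test run.
```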
* Tom Lane (tgl@sss.pgh.pa.us) wrote:
> Stephen Frost <sfrost@snowman.net> writes:
> > Ugh.  Thanks for fixing.  I had tested back to 7.4 with the regression
> > tests but either those didn't include blobs or something got changed
> > after my testing and I didn't re-test all the way back when I should
> > have.
>
> It looks like the final state of the regression tests doesn't include
> any blobs before about 9.4.  You wouldn't have seen any results worse
> than a warning message in 7.4-8.4, unless there were some blobs so that
> the data extraction loop got iterated.
>
> It might be a good idea to retroactively modify 9.1-9.3 so that there
> are some blobs in the final state, for purposes of testing pg_dump and
> pg_upgrade.

I certainly think that would be a good idea.  I thought we had been
insisting on coverage via the regression tests for a lot farther back
than 9.4, though perhaps that was only for newer features and we never
went back and added it for existing capabilities.

What would be really nice would be code coverage information for the
back-branches also, as that would allow us to figure out what we're
missing coverage for.  I realize that we don't like adding new things to
back-branches as those changes could impact packagers, but that might
not impact them since that only runs when you run 'make coverage'.

Thanks!

Stephen
Stephen Frost wrote:

> What would be really nice would be code coverage information for the
> back-branches also, as that would allow us to figure out what we're
> missing coverage for.  I realize that we don't like adding new things to
> back-branches as those changes could impact packagers, but that might
> not impact them since that only runs when you run 'make coverage'.

Hmm?  9.1 already has "make coverage", so there's nothing to backpatch.
Do you mean to backpatch that infrastructure even further back than
that?

Or perhaps you are saying that coverage.pg.org should report results for
each branch separately?  We could do that ...

-- 
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
* Alvaro Herrera (alvherre@2ndquadrant.com) wrote:
> Stephen Frost wrote:
>
> > What would be really nice would be code coverage information for the
> > back-branches also, as that would allow us to figure out what we're
> > missing coverage for.  I realize that we don't like adding new things to
> > back-branches as those changes could impact packagers, but that might
> > not impact them since that only runs when you run 'make coverage'.
>
> Hmm?  9.1 already has "make coverage", so there's nothing to backpatch.
> Do you mean to backpatch that infrastructure even further back than
> that?

I wasn't sure how far back it went, but if it's only to 9.1, then yes,
farther than that.  Specifically, to as far back as we wish to provide
support for pg_dump, assuming it's reasonable to do so.

> Or perhaps you are saying that coverage.pg.org should report results for
> each branch separately?  We could do that ...

This would certainly be nice to have, but the first is more important.
coverage.pg.org is nice to tell people "hey, here's where you can look
to find what we aren't covering", but when you're actually hacking on
code, you really want a much faster turn-around and you'd like that
pre-commit too.

Thanks!

Stephen
Stephen Frost wrote:
> * Alvaro Herrera (alvherre@2ndquadrant.com) wrote:
> > Stephen Frost wrote:
> >
> > > What would be really nice would be code coverage information for the
> > > back-branches also, as that would allow us to figure out what we're
> > > missing coverage for.  I realize that we don't like adding new things to
> > > back-branches as those changes could impact packagers, but that might
> > > not impact them since that only runs when you run 'make coverage'.
> >
> > Hmm?  9.1 already has "make coverage", so there's nothing to backpatch.
> > Do you mean to backpatch that infrastructure even further back than
> > that?
>
> I wasn't sure how far back it went, but if it's only to 9.1, then yes,
> farther than that.  Specifically, to as far back as we wish to provide
> support for pg_dump, assuming it's reasonable to do so.

I said 9.1 because that's the oldest we support, but it was added in
8.4.

Do you really want to go back to applying patches back to 7.0?  That's
brave.

> > Or perhaps you are saying that coverage.pg.org should report results for
> > each branch separately?  We could do that ...
>
> This would certainly be nice to have, but the first is more important.
> coverage.pg.org is nice to tell people "hey, here's where you can look
> to find what we aren't covering", but when you're actually hacking on
> code, you really want a much faster turn-around

True.  We could update things in coverage.postgresql.org much faster,
actually.  Right now it's twice a day, but if we enlarge the machine I'm
sure we can do better (yes, we can do that pretty easily).  Also, to
make it faster, we could install ccache 3.10 in that machine, although
that would be against our regular pginfra policy.

At some point I thought about providing reports for each day, so that we
can see how it has improved over time, but that may be too much :-)

> and you'd like that pre-commit too.

Yeah, that's a good point.

-- 
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Stephen Frost <sfrost@snowman.net> writes:
> * Tom Lane (tgl@sss.pgh.pa.us) wrote:
>> It might be a good idea to retroactively modify 9.1-9.3 so that there
>> are some blobs in the final state, for purposes of testing pg_dump and
>> pg_upgrade.

> I certainly think that would be a good idea.  I thought we had been
> insisting on coverage via the regression tests for a lot farther back
> than 9.4, though perhaps that was only for newer features and we never
> went back and added it for existing capabilities.

Well, there were regression tests for blobs for a long time, but they
carefully cleaned up their mess.  It was only in 70ad7ed4e that we made
them leave some blobs behind.

I took a quick look at back-patching that commit, but the test would
need to be rewritten to not depend on features that don't exist further
back (like \gset), which likely explains why I didn't do it at the time.

			regards, tom lane
Alvaro Herrera <alvherre@2ndquadrant.com> writes:
> Stephen Frost wrote:
>> I wasn't sure how far back it went, but if it's only to 9.1, then yes,
>> farther than that.  Specifically, to as far back as we wish to provide
>> support for pg_dump, assuming it's reasonable to do so.

> Do you really want to go back to applying patches back to 7.0?  That's
> brave.

Branches before about 7.3 or 7.4 don't build cleanly on modern tools.
In fact, they don't even build cleanly on my old HPUX 10.20 box ...
I just tried, and they have problems with the bison and flex I have
installed there now.  As a data point, that bison executable bears
a file date of Jan 31 2003.  Andres reported something similar in
the year-or-two-ago thread that was mentioned earlier.

This doesn't even consider optional features; I wasn't trying to build
SSL support for instance, but I'm pretty sure OpenSSL has been a moving
target over that kind of time span.

So I think trying to collect code coverage info on those branches is
nuts.  Maybe we could sanely do it for the later 8.x releases.

Realistically though, how much would code coverage info have helped?
Code coverage on a back branch would not have told you about whether
it leaves blobs behind in the final regression DB state.  Code coverage
on HEAD might have helped you notice some aspects of this failure, but
it would not have told you about the same query failing before 7.4
that worked later.

			regards, tom lane
* Alvaro Herrera (alvherre@2ndquadrant.com) wrote:
> Stephen Frost wrote:
> > * Alvaro Herrera (alvherre@2ndquadrant.com) wrote:
> > > Stephen Frost wrote:
> > >
> > > > What would be really nice would be code coverage information for the
> > > > back-branches also, as that would allow us to figure out what we're
> > > > missing coverage for.  I realize that we don't like adding new things to
> > > > back-branches as those changes could impact packagers, but that might
> > > > not impact them since that only runs when you run 'make coverage'.
> > >
> > > Hmm?  9.1 already has "make coverage", so there's nothing to backpatch.
> > > Do you mean to backpatch that infrastructure even further back than
> > > that?
> >
> > I wasn't sure how far back it went, but if it's only to 9.1, then yes,
> > farther than that.  Specifically, to as far back as we wish to provide
> > support for pg_dump, assuming it's reasonable to do so.
>
> I said 9.1 because that's the oldest we support, but it was added in
> 8.4.
>
> Do you really want to go back to applying patches back to 7.0?  That's
> brave.

Hrm.  My thought had actually been "back to whatever we decide we want
pg_dump to support."  The discussion about that seems to be trending
towards 8.0 rather than 7.0, but you bring up an interesting point about
if we actually want to back-patch things that far.

I guess my thinking is that if we decide that 8.0 is the answer then we
should at least be open to back-patching things that allow us to test
that we are actually still supporting 8.0 and maybe that even means
having a buildfarm member or two which checks back that far.

> > > Or perhaps you are saying that coverage.pg.org should report results for
> > > each branch separately?  We could do that ...
> >
> > This would certainly be nice to have, but the first is more important.
> > coverage.pg.org is nice to tell people "hey, here's where you can look
> > to find what we aren't covering", but when you're actually hacking on
> > code, you really want a much faster turn-around
>
> True.  We could actually update things in coverage.postgresql.org much
> faster, actually.  Right now it's twice a day, but if we enlarge the
> machine I'm sure we can do better (yes, we can do that pretty easily).
> Also, to make it faster, we could install ccache 3.10 in that machine,
> although that would be against our regular pginfra policy.
>
> At some point I thought about providing reports for each day, so that we
> can see how it has improved over time, but that may be too much :-)
>
> > and you'd like that pre-commit too.
>
> Yeah, that's a good point.

This is the real issue, imv, with coverage.pg.org.  I still like having
it, and having stats kept over time which allow us to see how we're
doing over time when it comes to our code coverage would be nice, but
the coverage.pg.org site isn't as useful for active development.

Thanks!

Stephen
* Tom Lane (tgl@sss.pgh.pa.us) wrote:
> Realistically though, how much would code coverage info have helped?
> Code coverage on a back branch would not have told you about whether
> it leaves blobs behind in the final regression DB state.  Code coverage
> on HEAD might have helped you notice some aspects of this failure, but
> it would not have told you about the same query failing before 7.4
> that worked later.

The code coverage report is exactly what I was using to figure out what
was being tested in pg_dump and what wasn't.  Many of the tests that are
included in the new TAP testing framework that I wrote for pg_dump were
specifically to provide code coverage and did improve the report.

If the regression tests in older versions were updated to make sure that
all the capabilities of pg_dump in those versions were tested, then my
testing with the regression test databases would have shown that the
newer version of pg_dump wasn't handling those cases correctly.  That
would require more comprehensive testing to be done in those
back-branches though, which would require more than just the code
coverage tool being included, that's true.

Another approach to this would be to figure out a way for the newer
testing framework in HEAD to be run against older versions, though we'd
need to have a field which indicates which version of PG a given test
should be run against, as there are certainly tests of newer
capabilities than older versions supported.

Ultimately, I'm afraid we may have to just punt on the idea of this kind
of testing being done using the same testing structure that exists in
HEAD and is used in the buildfarm.  That would be unfortunate, but I'm
not quite sure how you could have a buildfarm member that runs every
major version between 8.0 and HEAD and knows how to tell the HEAD
build-system what all the ports are for all those versions to connect to
and run tests against.

Thanks!

Stephen
On Sat, Oct 8, 2016 at 2:59 AM, Stephen Frost <sfrost@snowman.net> wrote:
> Another approach to this would be to figure out a way for the newer
> testing framework in HEAD to be run against older versions, though we'd
> need to have a field which indicates which version of PG a given test
> should be run against as there are certainly tests of newer capabilities
> than older versions supported.

pg_upgrade would benefit from something like that as well.  But isn't
that something the buildfarm client would be better at managing?  I
recall that it runs older branches first, so it would be doable to point
to the compiled builds of the branches already run and perform tests on
them.

Surely we are going to need a code path on branch X that is able to
handle test cases depending on the version of the backend involved,
which makes maintenance more difficult in the long run.  Still, I cannot
think of something that should do on-the-fly branch checkouts; users
should be able to run such tests easily with just a tarball.

Perhaps an idea would be to allow past versions of Postgres to be
installed in a path of the install folder, say PGINSTALL/bin/old/, then
have the tests detect them?  installcheck would be the only thing
supported of course for such cross-version checks.

-- 
Michael
On 10/7/16 12:48 PM, Tom Lane wrote:
> Branches before about 7.3 or 7.4 don't build cleanly on modern tools.
> In fact, they don't even build cleanly on my old HPUX 10.20 box ...
> I just tried, and they have problems with the bison and flex I have
> installed there now.  As a data point, that bison executable bears
> a file date of Jan 31 2003.  Andres reported something similar in
> the year-or-two-ago thread that was mentioned earlier.

FWIW, Greg Stark did a talk at PG Open about PG performance going back
to at least 7.4.  He did discuss what he had to do to get those versions
to compile on modern tools, and has a set of patches that enable it.
Unfortunately his slides aren't posted[1] so I can't provide further
details than that.

1: https://wiki.postgresql.org/wiki/Postgres_Open_2016
-- 
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com
855-TREBLE2 (855-873-2532)   mobile: 512-569-9461
On Mon, Oct 10, 2016 at 3:36 AM, Jim Nasby <Jim.Nasby@bluetreble.com> wrote:
> FWIW, Greg Stark did a talk at PG Open about PG performance going back to at
> least 7.4.  He did discuss what he had to do to get those versions to compile
> on modern tools, and has a set of patches that enable it.  Unfortunately his
> slides aren't posted[1] so I can't provide further details than that.

The code is here:

https://github.com/gsstark/retropg

The build script is called "makeall" and it applies patches from the
"old-postgres-fixes" directory, though some of the smarts are in the
script because it knows how to run older versions of the configure
script, and it tries to fix up the ecpg parser's duplicate tokens
separately.  It saves a diff after applying the patches and other fixups
into the "net-diffs" directory, but I've never checked if those diffs
would work cleanly on their own.

-- 
greg
On Mon, Oct 10, 2016 at 9:52 PM, Greg Stark <stark@mit.edu> wrote:
>
> The code is here:
>
> https://github.com/gsstark/retropg
>
> The build script is called "makeall" and it applies patches from the
> "old-postgres-fixes" directory, though some of the smarts are in the
> script because it knows how to run older versions of the configure
> script, and it tries to fix up the ecpg parser's duplicate tokens
> separately.  It saves a diff after applying the patches and other fixups
> into the "net-diffs" directory, but I've never checked if those diffs
> would work cleanly on their own.

Fwiw I was considering proposing committing some patches for these old
releases to make them easier to build.  I would suggest creating a tag
for this stable legacy version and limiting the commits to just:

1) Disabling warnings
2) Fixing bugs in the configure scripts that occur on more recent
   systems (version number parsing etc)
3) Backporting things like the variable-length array code that prevents
   building
4) Adding compiler options like -fwrapv

-- 
greg
On Wed, Oct 12, 2016 at 11:54 AM, Greg Stark <stark@mit.edu> wrote:
> On Mon, Oct 10, 2016 at 9:52 PM, Greg Stark <stark@mit.edu> wrote:
>>
>> The code is here:
>>
>> https://github.com/gsstark/retropg
>>
>> The build script is called "makeall" and it applies patches from the
>> "old-postgres-fixes" directory, though some of the smarts are in the
>> script because it knows how to run older versions of the configure
>> script, and it tries to fix up the ecpg parser's duplicate tokens
>> separately.  It saves a diff after applying the patches and other fixups
>> into the "net-diffs" directory, but I've never checked if those diffs
>> would work cleanly on their own.
>
> Fwiw I was considering proposing committing some patches for these old
> releases to make them easier to build.  I would suggest creating a tag
> for this stable legacy version and limiting the commits to just:
>
> 1) Disabling warnings
> 2) Fixing bugs in the configure scripts that occur on more recent
>    systems (version number parsing etc)
> 3) Backporting things like the variable-length array code that prevents
>    building
> 4) Adding compiler options like -fwrapv

I'd support that.  The reason why we remove branches from support is
so that we don't have to back-patch things to 10 or 15 branches when
we have a bug fix.  But that doesn't mean that we should prohibit all
commits to those branches for any reason, and making it easier to test
backward-compatibility when we want to do that seems like a good
reason.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Robert Haas <robertmhaas@gmail.com> writes:
> On Wed, Oct 12, 2016 at 11:54 AM, Greg Stark <stark@mit.edu> wrote:
>> Fwiw I was considering proposing committing some patches for these old
>> releases to make them easier to build.  I would suggest creating a tag
>> for this stable legacy version and limiting the commits to just:
>>
>> 1) Disabling warnings
>> 2) Fixing bugs in the configure scripts that occur on more recent
>>    systems (version number parsing etc)
>> 3) Backporting things like the variable-length array code that prevents
>>    building
>> 4) Adding compiler options like -fwrapv

> I'd support that.  The reason why we remove branches from support is
> so that we don't have to back-patch things to 10 or 15 branches when
> we have a bug fix.  But that doesn't mean that we should prohibit all
> commits to those branches for any reason, and making it easier to test
> backward-compatibility when we want to do that seems like a good
> reason.

Meh, I think that this will involve a great deal more work than it's
worth.  We deal with moving-target platforms *all the time*.  New
compiler optimizations break things, libraries such as OpenSSL whack
things around, other libraries such as uuid-ossp stop getting maintained
and become unusable on new platforms, bison decides to get stickier
about comma placement, yadda yadda yadda.  How much of that work are we
going to back-port to dead branches?  And to what extent is such effort
going to be self-defeating because the branch no longer behaves as it
did back in the day?

If Greg wants to do this kind of work, he's got a commit bit.  My
position is that we have a limited support lifespan for a reason, and
I'm not going to spend time on updating dead branches forever.  To me,
it's more useful to test them in place on contemporary platforms.  We've
certainly got enough old platforms laying about in the buildfarm and
elsewhere.

			regards, tom lane
On Wed, Oct 12, 2016 at 12:24 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Robert Haas <robertmhaas@gmail.com> writes:
>> On Wed, Oct 12, 2016 at 11:54 AM, Greg Stark <stark@mit.edu> wrote:
>>> Fwiw I was considering proposing committing some patches for these old
>>> releases to make them easier to build.  I would suggest creating a tag
>>> for this stable legacy version and limiting the commits to just:
>>>
>>> 1) Disabling warnings
>>> 2) Fixing bugs in the configure scripts that occur on more recent
>>>    systems (version number parsing etc)
>>> 3) Backporting things like the variable-length array code that prevents
>>>    building
>>> 4) Adding compiler options like -fwrapv
>
>> I'd support that.  The reason why we remove branches from support is
>> so that we don't have to back-patch things to 10 or 15 branches when
>> we have a bug fix.  But that doesn't mean that we should prohibit all
>> commits to those branches for any reason, and making it easier to test
>> backward-compatibility when we want to do that seems like a good
>> reason.
>
> Meh, I think that this will involve a great deal more work than it's
> worth.  We deal with moving-target platforms *all the time*.  New compiler
> optimizations break things, libraries such as OpenSSL whack things around,
> other libraries such as uuid-ossp stop getting maintained and become
> unusable on new platforms, bison decides to get stickier about comma
> placement, yadda yadda yadda.  How much of that work are we going to
> back-port to dead branches?  And to what extent is such effort going to be
> self-defeating because the branch no longer behaves as it did back in the
> day?
>
> If Greg wants to do this kind of work, he's got a commit bit.  My position
> is that we have a limited support lifespan for a reason, and I'm not going
> to spend time on updating dead branches forever.  To me, it's more useful
> to test them in place on contemporary platforms.  We've certainly got
> enough old platforms laying about in the buildfarm and elsewhere.

I agree that it shouldn't be an expectation that committers in general
will do this, whether you or me or anyone else.  However, I think that
if Greg or some other committer wants to volunteer their own time to do
some of it, that is fine.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company