Thread: Release cycle length
The time from release 7.3 to release 7.4 was 355 days, an all-time high. We really need to shorten that. We already have a number of significant improvements in 7.5 now, and several good ones coming up in the next few weeks. We cannot let people wait 1 year for that. I suggest that we aim for a 6 month cycle, consisting of approximately 4 months of development and 2 months of cleanup. So the start of the next beta could be the 1st of March. What do you think? -- Peter Eisentraut peter_e@gmx.net
On Tue, 18 Nov 2003, Peter Eisentraut wrote: > The time from release 7.3 to release 7.4 was 355 days, an all-time high. > We really need to shorten that. We already have a number of significant > improvements in 7.5 now, and several good ones coming up in the next few > weeks. We cannot let people wait 1 year for that. I suggest that we aim > for a 6 month cycle, consisting of approximately 4 months of development > and 2 months of cleanup. So the start of the next beta could be the 1st > of March. What do you think? That is the usual goal *nod* Same goal we try for each release, and never quite seem to get there ... we'll try 'yet again' with v7.5 though, as we always do :)
Marc G. Fournier writes: > That is the usual goal *nod* Same goal we try for each release, and never > quite seem to get there ... we'll try 'yet again' with 7.5 though, as we > always do :) I don't see how we could have tried for a 4-month development period and ended up with an 8-month period. Something went *really* wrong there. Part of that may have been that few people were actually aware of that schedule. -- Peter Eisentraut peter_e@gmx.net
On Tue, 18 Nov 2003, Peter Eisentraut wrote: > Marc G. Fournier writes: > > > That is the usual goal *nod* Same goal we try for each release, and never > > quite seem to get there ... we'll try 'yet again' with 7.5 though, as we > > always do :) > > I don't see how we could have tried for a 4-month development period and > ended up with an 8-month period. Something went *really* wrong there. > Part of that may have been that few people were actually aware of that > schedule. Everyone on -hackers should have been aware of it, as it's always discussed at the end of the previous release cycle ... and I don't think we've hit a release cycle yet that has actually stayed in the 4 month period :( Someone is always 'just sitting on something that is almost done' at the end that pushes it further than originally planned ...
Just did a quick search on archives, and the original plan was for a release in mid-2003, which means the beta would have been *at least* a month before that, so beta starting around May: http://archives.postgresql.org/pgsql-hackers/2002-11/msg00975.php On Mon, 17 Nov 2003, Marc G. Fournier wrote: > On Tue, 18 Nov 2003, Peter Eisentraut wrote: > > > Marc G. Fournier writes: > > > > > That is the usual goal *nod* Same goal we try for each release, and never > > > quite seem to get there ... we'll try 'yet again' with 7.5 though, as we > > > always do :) > > > > I don't see how we could have tried for a 4-month development period and > > ended up with an 8-month period. Something went *really* wrong there. > > Part of that may have been that few people were actually aware of that > > schedule. > > Everyone on -hackers should have been aware of it, as its always > discussed at the end of the previous release cycle ... and I don't think > we've hit a release cycle yet that has actually stayed in the 4 month > period :( Someone is always 'just sitting on something that is almost > done' at the end that pushes it further then originally planned ... > > > ---- Marc G. Fournier Hub.Org Networking Services (http://www.hub.org) Email: scrappy@hub.org Yahoo!: yscrappy ICQ: 7615664
Peter Eisentraut <peter_e@gmx.net> writes: > The time from release 7.3 to release 7.4 was 355 days, an all-time > high. We really need to shorten that. Why is that? -Neil
Marc G. Fournier writes: > Just did a quick search on archives, and the original plan was for a > release in mid-2003, which means the beta would have been *at least* a > month before that, so beta starting around May: > > http://archives.postgresql.org/pgsql-hackers/2002-11/msg00975.php That was a Bruce Momjian estimate mentioned in passing, not an affirmed plan. Also, I think Bruce's estimates are notoriously off by years. ;-) -- Peter Eisentraut peter_e@gmx.net
Neil Conway writes: > Peter Eisentraut <peter_e@gmx.net> writes: > > The time from release 7.3 to release 7.4 was 355 days, an all-time > > high. We really need to shorten that. > > Why is that? First, if you develop something today, the first time users would realistically get a hand at it would be January 2005. Do you want that? Don't you want people to use your code? We fix problems, but people must wait a year for the fix? Second, the longer a release cycle, the more problems amass. People just forget what they were doing in the beginning, no one is around to fix the problems introduced earlier, no one remembers anything when it comes time to write release notes. The longer you develop, the more parallel efforts are underway, and it becomes impossible to synchronize them to a release date. People are not encouraged to provide small, well-thought-out, modular improvements. Instead, they break everything open and worry about it later. At the end, it's always a rush to close these holes. Altogether, it's a loss for both developers and users. -- Peter Eisentraut peter_e@gmx.net
> The time from release 7.3 to release 7.4 was 355 days, an all-time high. > We really need to shorten that. We already have a number of significant > improvements in 7.5 now, and several good ones coming up in the next few > weeks. We cannot let people wait 1 year for that. I suggest that we aim > for a 6 month cycle, consisting of approximately 4 months of development > and 2 months of cleanup. So the start of the next beta could be the 1st > of March. What do you think? So long as pg_dump object ordering is an early fix to make upgrades rather more painless, I'm all for it :) Does anyone have a comparison of how many lines of code were added in this release compared to previous? Chris
> Everyone on -hackers should have been aware of it, as its always > discussed at the end of the previous release cycle ... and I don't think > we've hit a release cycle yet that has actually stayed in the 4 month > period :( Someone is always 'just sitting on something that is almost > done' at the end that pushes it further then originally planned ... I think that the core just need to be tough on it, that's all. If we have pre-published target dates, then everyone knows if they can get their code in or not for that date. Chris
Hello, Personally I am for long release cycles, at least for major releases. In fact as of 7.4 I think there should possibly be a slow down in releases with more incremental releases (minor releases) throughout the year. People are running their companies and lives off of PostgreSQL, they should be able to rely on a specific feature set, and support from the community for longer. Sincerely, Joshua Drake Peter Eisentraut wrote: >Neil Conway writes: > > > >>Peter Eisentraut <peter_e@gmx.net> writes: >> >> >>>The time from release 7.3 to release 7.4 was 355 days, an all-time >>>high. We really need to shorten that. >>> >>> >>Why is that? >> >> > >First, if you develop something today, the first time users would >realistically get a hand at it would be January 2005. Do you want that? >Don't you want people to use your code? We fix problems, but people must >wait a year for the fix? > >Second, the longer a release cycle, the more problems amass. People just >forget what they were doing in the beginning, no one is around to fix the >problems introduced earlier, no one remembers anything when it comes time >to write release notes. The longer you develop, the more parallel efforts >are underway, and it becomes impossible to synchronize them to a release >date. People are not encouraged to provide small, well-thought-out, >modular improvements. Instead, they break everything open and worry about >it later. At the end, it's always a rush to close these holes. > >Altogether, it's a loss for both developers and users. > > > -- Command Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC Postgresql support, programming shared hosting and dedicated hosting. +1-503-222-2783 - jd@commandprompt.com - http://www.commandprompt.com Editor-N-Chief - PostgreSQl.Org - http://www.postgresql.org
On Tue, 18 Nov 2003, Christopher Kings-Lynne wrote: > > Everyone on -hackers should have been aware of it, as its always > > discussed at the end of the previous release cycle ... and I don't think > > we've hit a release cycle yet that has actually stayed in the 4 month > > period :( Someone is always 'just sitting on something that is almost > > done' at the end that pushes it further then originally planned ... > > I think that the core just need to be tough on it, that's all. > > If we have pre-published target dates, then everyone knows if they can > get their code in or not for that date. Right now, I believe we are looking at an April 1st beta, and a May 1st release ... those are, as always, *tentative* dates that will become more fine-tuned as those dates become nearer ... April 1st, or 4 mos from last release, tends to be what we aim for with all releases ... as everyone knows, we don't necessarily achieve it, but ... Actually, historically, it looks like we've always been close to 12 months between releases ... 7.0->7.1: ~11mos, 7.1->7.2: ~10mos, 7.2->7.3: ~9 mos, and 7.3->7.4: ~12mos ... so, on average, we're dealing with an ~10mos release cycle for the past 3 years ... svr1# ls -l */postgresql-7.?.tar.gz -rw-rw-r-- 1 pgsql pgsql 9173732 May 9 2000 v7.0/postgresql-7.0.tar.gz -rw-r--r-- 1 pgsql pgsql 8088678 Apr 13 2001 v7.1/postgresql-7.1.tar.gz -rw-r--r-- 1 pgsql pgsql 9180168 Feb 4 2002 v7.2/postgresql-7.2.tar.gz -rw-r--r-- 1 pgsql pgsql 11059455 Nov 27 2002 v7.3/postgresql-7.3.tar.gz -rw-r--r-- 1 pgsql pgsql 12311256 Nov 16 17:57 v7.4/postgresql-7.4.tar.gz
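Marc's per-release gaps can be recomputed straight from the tarball dates in the `ls -l` listing above. A quick sketch, assuming GNU `date` (the `-d` flag); note that counting tarball timestamps gives 354 days for 7.3->7.4 rather than Peter's 355, which is just a question of which dates you count from:

```shell
#!/bin/sh
# Recompute the day gaps between the release tarball dates listed above.
# Requires GNU date (-d); -u avoids DST skew in the subtraction.
days_between() {
    echo $(( ( $(date -u -d "$2" +%s) - $(date -u -d "$1" +%s) ) / 86400 ))
}

echo "7.0 -> 7.1: $(days_between 2000-05-09 2001-04-13) days"
echo "7.1 -> 7.2: $(days_between 2001-04-13 2002-02-04) days"
echo "7.2 -> 7.3: $(days_between 2002-02-04 2002-11-27) days"
echo "7.3 -> 7.4: $(days_between 2002-11-27 2003-11-16) days"
```

This bears out Marc's rough "~10 months on average": the four gaps are 339, 297, 296, and 354 days.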
"Joshua D. Drake" <jd@commandprompt.com> writes: > Hello, > > Personally I am for long release cycles, at least for major releases. > In fact > as of 7.4 I think there should possibly be a slow down in releases with more > incremental releases (minor releases) throughout the year. That would pretty much mean changing the "minor releases only for serious bugfixes" philosophy. Is that what you are advocating? > People are running their companies and lives off of PostgreSQL, > they should be able to rely on a specific feature set, and support > from the community for longer. If 7.3.4 works for you, there's nothing to stop you running it until the end of time... If you can't patch in bugfixes yourself, you should be willing to pay for support. Commercial companies like Red Hat don't support their releases indefinitely for free; why should the PG community be obligated to? Also, we very rarely remove features--AUTOCOMMIT on the server is about the only one I can think of. -Doug
> Right now, I believe we are looking at an April 1st beta, and a May 1st > related ... those are, as always, *tentative* dates that will become more > fine-tuned as those dates become nearer ... > > April 1st, or 4 mos from last release, tends to be what we aim for with > all releases ... as everyone knows, we don't necessarily acheive it, but Make it April 2nd, otherwise everyone will think it's a joke :P Chris
Marc G. Fournier writes: > Right now, I believe we are looking at an April 1st beta, and a May 1st > related ... those are, as always, *tentative* dates that will become more > fine-tuned as those dates become nearer ... OK, here start the problems. Development already started, so April 1st is already 5 months development. Add 1 month because no one is willing to hold people to these dates. So that's 6 months. Then for 6 months of development, you need at least 2 months of beta. So we're already in the middle of July, everyone is on vacation, and we'll easily reach the 9 months -- instead of 6. -- Peter Eisentraut peter_e@gmx.net
Peter Eisentraut <peter_e@gmx.net> writes: > First, if you develop something today, the first time users would > realistically get a hand at it would be January 2005. Do you want > that? Don't you want people to use your code? Sure :-) But I don't mind a long release cycle if it is better for users. > We fix problems, but people must wait a year for the fix? A couple points: (a) Critical problems can of course be fixed via point releases in the current stable release series (b) As PostgreSQL gets more mature, the number of absolutely show stopping features or bug fixes in a new release gets smaller. For example, I'd argue that neither 7.3 nor 7.4 include a single feature that is as important as 7.2's lazy VACUUM or 7.1's WAL. There are lots of great features, but the set of absolutely essential new features tends to grow smaller over time. I'd wager that for the vast majority of our user base, PostgreSQL already works well. (c) As PostgreSQL gets more mature, putting out stable, production-worthy releases becomes even more important. In theory, longer release cycles contribute to higher quality releases: we have more time to implement new features properly, polish rough edges and document things, test and find bugs, and ensure that features we've implemented earlier in the release cycle are properly thought out, and so forth. Note that whether or not we are using those 355 days effectively is another story -- it may well be true that there are ways we could make parts of the development process much more efficient. Furthermore, longer release cycles reduce, to some degree, the pain of upgrades. Unless we make substantial improvements to the upgrade story any time soon, I wouldn't be surprised if many DBAs are relieved at only needing to upgrade once a year. > The longer you develop, the more parallel efforts are underway, and > it becomes impossible to synchronize them to a release date. 
I think this is inherent to the way PostgreSQL is developed: Tom has previously compared PostgreSQL release scheduling to herding cats :-) As long as much of the work on the project is done by volunteers in their spare time, ISTM that coordinating everyone toward a single release date is going to be difficult, if not impossible. The length of the release cycle doesn't really affect this, IMHO. > People are not encouraged to provide small, well-thought-out, > modular improvements. I agree we can always do better when it comes to code quality. I think the NetBSD team puts it well: Some systems seem to have the philosophy of "If it works, it's right". In that light NetBSD could be described as "It doesn't work /unless/ it's right". That said, I don't see how this is related to the release schedule. In fact, one could argue that a longer release schedule gives new features a longer "gestation period" during which developers can ensure that they are well-thought out and implemented properly. > Altogether, it's a loss for both developers and users. I don't think it's nearly as clear-cut as that. Both types of release scheduling have their benefits and their drawbacks. My main point is really that a short release cycle is not an unqualified good (not to mention that in the past we've been completely unable to actually *execute* a short release cycle, making this whole discussion a little academic). -Neil
Peter Eisentraut wrote: >Marc G. Fournier writes: > > >>Right now, I believe we are looking at an April 1st beta, and a May 1st >>related ... those are, as always, *tentative* dates that will become more >>fine-tuned as those dates become nearer ... >> >> > >OK, here start the problems. Development already started, so April 1st is >already 5 months development. Add 1 month because no one is willing to >hold people to these dates. So that's 6 months. Then for 6 months of >development, you need at least 2 months of beta. So we're already in the >middle of July, everyone is on vacation, and we'll easily reach the 9 >months -- instead of 6. > > Do you think that 2 months for beta is realistic? Tom announced feature freeze on July 1. http://archives.postgresql.org/pgsql-hackers/2003-07/msg00040.php So 7.4 took about 4.5 months to get from feature freeze to release. I think feature freeze is the important date that developers of new features need to concern themselves with. I agree with Peter's other comment, that the longer the development cycle, the longer the beta / bug shakeout period; perhaps a shorter dev cycle would yield a shorter beta period, but perhaps it would also result in a less solid release.
On Tue, 18 Nov 2003, Peter Eisentraut wrote: > Marc G. Fournier writes: > > > Right now, I believe we are looking at an April 1st beta, and a May 1st > > related ... those are, as always, *tentative* dates that will become more > > fine-tuned as those dates become nearer ... > > OK, here start the problems. Development already started, so April 1st is > already 5 months development. Add 1 month because no one is willing to > hold people to these dates. So that's 6 months. Then for 6 months of > development, you need at least 2 months of beta. So we're already in the > middle of July, everyone is on vacation, and we'll easily reach the 9 > months -- instead of 6. 'K, Sept 1st it is then ... sounds reasonable to me :)
"Matthew T. O'Connor" <matthew@zeut.net> writes: > So 7.4 took about 4.5 months to get from feature freeze to release. > I think feature freeze is the important date that developers of new > features need to concern themselves with. Rather than the length of the release cycle, I think it's the length of the beta cycle that we should focus on improving. IMHO, we should try to make the beta process more efficient: sometimes I get the impression that the beta process just drags on and on, without the extra time resulting in a huge improvement in the reliability of the .0 release (witness the fact that all the .0 releases I can remember have had a *lot* of serious bugs in them -- we can't catch everything of course, but I think there is definitely room for improvement). That said, I'm not really sure how we can make better use of the beta period. One obvious improvement would be making the beta announcements more visible: the obscurity of the beta process on www.postgresql.org for 7.4 was pretty ridiculous. Does anyone else have a suggestion on what we can do to produce a more reliable .0 release in less time? -Neil
Neil Conway wrote: > Peter Eisentraut <peter_e@gmx.net> writes: > > First, if you develop something today, the first time users would > > realistically get a hand at it would be January 2005. Do you want > > that? Don't you want people to use your code? > > Sure :-) But I don't mind a long release cycle if it is better for > users. Given that users can run whatever they like, it's not clear that a long release cycle is better for users. > (c) As PostgreSQL gets more mature, putting out stable, > production-worthy releases becomes even more important. In > theory, longer release cycles contribute to higher quality > releases: we have more time to implement new features properly, > polish rough edges and document things, test and find bugs, and > ensure that features we've implemented earlier in the release > cycle are properly thought out, and so forth. On the other hand, the longer you wait to release a new feature, the longer it will be before you get your REAL testing done. You don't want to release something that hasn't at least been looked over and checked out by the development community first, of course, but waiting beyond that point to release a new version of PG doesn't help you that much, because most people aren't going to run the latest CVS version -- they'll run the latest released version, whatever that may be. So the time between the testing phase for the feature you implement and the version release is essentially "dead time" for testing of that feature, because most developers have moved on to working on and/or testing something else. That's why the release methodology used by the Linux kernel development team is a reasonable one. Because the development releases are still releases, people who wish to be more on the bleeding edge can do so without having to grab the source from CVS and compile it themselves. 
And package maintainers are more likely to package up the development version if it's given to them in a nice, prepackaged format, even if it's just a source tarball. > Note that whether or not we are using those 355 days effectively > is another story -- it may well be true that there are ways we could > make parts of the development process much more efficient. > > Furthermore, longer release cycles reduce, to some degree, the pain of > upgrades. Unless we make substantial improvements to the upgrade story > any time soon, I wouldn't be surprised if many DBAs are relieved at > only needing to upgrade once a year. But DBAs only "need" to upgrade as often as they feel like. Any reasonable distribution will give them an option of using either the stable version or the development version anyway, if we're talking about prepackaged versions. > > The longer you develop, the more parallel efforts are underway, and > > it becomes impossible to synchronize them to a release date. > > I think this is inherent to the way PostgreSQL is developed: Tom has > previously compared PostgreSQL release scheduling to herding cats :-) > As long as much of the work on the project is done by volunteers in > their spare time, ISTM that coordinating everyone toward a single > release date is going to be difficult, if not impossible. The length > of the release cycle doesn't really affect this, IMHO. Linux, too, is done largely by volunteers in their spare time. Yet Linux kernel releases are much more frequent than PostgreSQL releases. One difference is that the Linux community makes a distinction between development releases and stable releases. The amount of time between stable releases is probably about the same as it is for PostgreSQL. The difference is that the *only* releases PostgreSQL makes are stable releases (or release candidates, when a stable release is close). That's something we might want to re-think. -- Kevin Brown kevin@sysexperts.com
Matthew T. O'Connor wrote: > I agree with Peter's other comment, that the longer the development > cycle, the longer the beta / bug shakeout period, perhaps a shorter dev > cycle would yield a shorter beta period, but perhaps it would also > result in a less solid release. Perhaps. Perhaps not. The fewer the changes, the less complexity you have to manage. But it would certainly result in a smaller set of feature changes per release. Some people might regard that as a good thing. The advantage to doing more frequent releases is that new features end up with more real-world testing within a given block of time, on average, because a lot more people pick up the releases than the CVS snapshots or even release candidates. -- Kevin Brown kevin@sysexperts.com
> That said, I'm not really sure how we can make better use of the beta > period. One obvious improvement would be making the beta announcements > more visible: the obscurity of the beta process on www.postgresql.org > for 7.4 was pretty ridiculous. Does anyone else have a suggestion on > what we can do to produce a more reliable .0 release in less time? I can think of a few things. 1. Try to encourage list members to actually test stuff. For example, I decided to find stuff that might be broken. So I checked the tutorial scripts (no-one ever looks at them) and found heaps of bugs. I thought about some new features and tried to break them. I also tend to find bugs by coding phpPgAdmin and delving into the nitty gritty of stuff. Maybe we could actually ask for volunteers for the 'beta team'. Then, once we have volunteers, they are each assigned a set of features to test by the 'testing co-ordinator' (a new core position, say?) What you are asked to test depends on your skill, say. eg. Someone who just knows how to use postgres could test my upcoming COMMENT ON patch. (It's best if I myself do not test it) Someone with more skill with a debugger can be asked to test unique hash indexes by playing with concurrency, etc. The test co-ordinator could also manage the testing of new features as they are committed to save time later. The co-ordinator should also maintain a list of what features have been committed, which have been code reviewed (what Tom usually does) and which have been tested. Of course, I'm not talking about exhaustive testing here, just better and more organised than what we currently have. Chris
Neil Conway writes: > That said, I'm not really sure how we can make better use of the beta > period. One obvious improvement would be making the beta announcements > more visible: the obscurity of the beta process on www.postgresql.org > for 7.4 was pretty ridiculous. Does anyone else have a suggestion on > what we can do to produce a more reliable .0 release in less time? Here are a couple of ideas: 0. As you say, make it known to the public. Have people test their in-development applications using a beta. 1. Start platform testing on day 1 of beta. Last minute fixes for AIX or UnixWare are really becoming old jokes. 2. Have a complete account of the changes available at the start of beta, so people know what to test. 3. Use a bug-tracking system so that "open items" are known early and by everyone. 4. Have a schedule. Not "We're looking at a release early in the later part of this year.", but dates for steps such as feature freeze then, proposals for open issues fielded then, string freeze then, release candidate then. 5. If need be, have a release management team that manages 0-4. -- Peter Eisentraut peter_e@gmx.net
On Mon, 17 Nov 2003, Neil Conway wrote: > That said, I'm not really sure how we can make better use of the beta > period. One obvious improvement would be making the beta announcements > more visible: the obscurity of the beta process on www.postgresql.org > for 7.4 was pretty ridiculous. Does anyone else have a suggestion on > what we can do to produce a more reliable .0 release in less time? Agreed ... here's a thought ... take the download page and break it into two pages: page 1: broken down into "dev" vs "stable" versions, including the date of release ... page 2: when you click on the version you want to download, it brings you to a subpage that is what the main page currently is (with all the flags and such) but instead of just sending ppl to the ftp site itself, actually have the link go to the directory that contains that version on the mirror site ... that first page of the download could contain descriptions of the various releases, and state of releases?
> eg. Someone who just knows how to use postgres could test my upcoming > COMMENT ON patch. (It's best if I myself do not test it) Someone with > more skill with a debugger can be asked to test unique hash indexes by > playing with concurrency, etc. I forgot to mention that people who just have large, complex production databases and test servers at their disposal should be given the task of: 1. Dumping their old version database 2. Loading that into the dev version of postgres 3. Dumping that using dev pg_dump 4. Loading that dump back in 5. Dumping it again 6. Diffing 3 and 5 Chris
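Chris's six-step round trip can be sketched as a small shell function. The binary path for the old installation, the database names, and the scratch-database suffix are hypothetical placeholders, and running it needs live old and dev servers, so this only defines the procedure:

```shell
#!/bin/sh
# Sketch of the dump/reload/diff round trip described above.
# /usr/local/pgsql-old and the database names are hypothetical
# placeholders -- adjust for your own installations.

dump_roundtrip() {
    olddb=$1                  # existing database on the old server
    newdb="${olddb}_devtest"  # scratch database on the dev server

    # Steps 1-2: dump with the OLD pg_dump, load into the dev server
    /usr/local/pgsql-old/bin/pg_dump "$olddb" > old.sql || return 1
    createdb "$newdb" || return 1
    psql -q -f old.sql "$newdb" || return 1

    # Steps 3-4: dump with the DEV pg_dump, then reload that dump
    pg_dump "$newdb" > first.sql || return 1
    dropdb "$newdb" && createdb "$newdb" || return 1
    psql -q -f first.sql "$newdb" || return 1

    # Steps 5-6: dump once more and diff; a nonempty diff means the
    # dev pg_dump output doesn't reload to a state that dumps the same
    pg_dump "$newdb" > second.sql || return 1
    diff first.sql second.sql
}

# Usage (against real servers): dump_roundtrip mydb
```

A clean (empty) diff at the end is the pass condition; object-ordering bugs of the kind Chris mentions show up as reordered statements between first.sql and second.sql.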
--On Tuesday, November 18, 2003 04:43:12 +0100 Peter Eisentraut <peter_e@gmx.net> wrote: > Neil Conway writes: > >> That said, I'm not really sure how we can make better use of the beta >> period. One obvious improvement would be making the beta announcements >> more visible: the obscurity of the beta process on www.postgresql.org >> for 7.4 was pretty ridiculous. Does anyone else have a suggestion on >> what we can do to produce a more reliable .0 release in less time? > > Here are a couple of ideas: > > 0. As you say, make it known to the public. Have people test their > in-development applications using a beta. > > 1. Start platform testing on day 1 of beta. Last minute fixes for AIX or > UnixWare are really becoming old jokes. The only reason we had last minute stuff for UnixWare this time was the timing of PG's release and the UP3 release from SCO. I try to test stuff fairly frequently, and this time I didn't know when, exactly, SCO would make the release of the updated compiler. LER -- Larry Rosenman http://www.lerctr.org/~ler Phone: +1 972-414-9812 E-Mail: ler@lerctr.org US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749
On Tue, 18 Nov 2003, Peter Eisentraut wrote: > 0. As you say, make it known to the public. Have people test their > in-development applications using a beta. and how do you propose we do that? I think this is the hard part ... other than the first beta, I post a note out to -announce and -general that the beta's have been tag'd and bundled for download ... I know Sean does up a 'devel' port for FreeBSD, but I don't believe any of the RPM/deb maintainers do anything until the final release ... > 1. Start platform testing on day 1 of beta. Last minute fixes for AIX > or UnixWare are really becoming old jokes. then each beta will have to be "re-certified" for that beta, up until release ... doable, but I don't think you'll find many that will bother until we are close to release ... > 2. Have a complete account of the changes available at the start of beta, > so people know what to test. Bruce, when do you do your initial HISTORY file? Something to move to the start of beta, if not? > 3. Use a bug-tracking system so that "open items" are known early and by > everyone. Waiting to see anyone decide on which one to use ... willing to spend the time working to get it online ... > 4. Have a schedule. Not "We're looking at a release early in the later > part of this year.", but dates for steps such as feature freeze then, > proposals for open issues fielded then, string freeze then, > release candidate then. We try that every release ... > 5. If need be, have a release management team that manages 0-4. Core does that, but we just don't feel that being totally rigid is (or has ever been) a requirement ... but, if you can provide suggestions on points 0 and 3, we're all ears ...
On Mon, 17 Nov 2003, Larry Rosenman wrote: > > I try to test stuff fairly frequently, and this time I didn't know when, > exactly, SCO would make the release of the updated compiler. And there was no way you could predict that your contact there would take off on holidays either :(
"Marc G. Fournier" <scrappy@postgresql.org> writes: > On Tue, 18 Nov 2003, Peter Eisentraut wrote: > >> 0. As you say, make it known to the public. Have people test their >> in-development applications using a beta. > > and how do you propose we do that? I think this is the hard part (1) Make the beta more obvious on the website, as we've already discussed (2) Make a freshmeat.net release announcement for _each_ beta, RC, and of course the final release (we totally missed this during 7.4). There are probably other software release announcement sites we could inform. (3) Is it worth trying to get some technical press coverage for the start of the beta process? I don't mean on PHB-oriented sites like ComputerWorld or ZdNet where we need to do the work of getting a press release prepared, but OSNews or Slashdot just need a link to the release notes and the source tarballs. Any other suggestions? Perhaps we could add a list of this sort to src/tools/RELEASE_CHANGES? -Neil
On Tue, 18 Nov 2003, Neil Conway wrote: > "Marc G. Fournier" <scrappy@postgresql.org> writes: > > On Tue, 18 Nov 2003, Peter Eisentraut wrote: > > > >> 0. As you say, make it known to the public. Have people test their > >> in-development applications using a beta. > > > > and how do you propose we do that? I think this is the hard part > > (1) Make the beta more obvious on the website, as we've already > discussed > > (2) Make a freshmeat.net release announcement for _each_ beta, RC, and > of course the final release (we totally missed this during 7.4). > There are probably other software release announcement sites we > could inform. Damn, I keep forgetting freshmeat.net altogether ... will get that one during the day tomorrow ... > (3) Is it worth trying to get some technical press coverage for the > start of the beta process? I don't mean on PHB-oriented sites like > ComputerWorld or ZdNet where we need to do the work of getting a > press release prepared, but OSNews or Slashdot just need a link to > the release notes and the source tarballs. I think so ... just a heads up to tell ppl that we are heading into the final stretch for the next release, and testing would be 'a good thing' ...
On Tue, 2003-11-18 at 04:36, Marc G. Fournier wrote: > On Tue, 18 Nov 2003, Peter Eisentraut wrote: > > > 0. As you say, make it known to the public. Have people test their > > in-development applications using a beta. > > and how do you propose we do that? I think this is the hard part ... > other than the first beta, I post a note out to -announce and -general > that the beta's have been tag'd and bundled for download ... I know Sean > does up a 'devel' port for FreeBSD, but I don't believe any of the RPM/deb > maintainers do anything until the final release ... I do in fact build debs of the beta and rc releases. These have gone into the experimental archive in Debian and are announced on Debian lists. I even posted an announcement to pgsql-general, on 10th October for example. -- Oliver Elphick Oliver.Elphick@lfix.co.uk Isle of Wight, UK http://www.lfix.co.uk/oliver GPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C ======================================== "A Song for the sabbath day. It is a good thing to give thanks unto the LORD, and to sing praises unto thy name, O most High." Psalms 92:1
> > 0. As you say, make it known to the public. Have people test > > their in-development applications using a beta. > > and how do you propose we do that? I think this is the hard part > ... other than the first beta, I post a note out to -announce and > -general that the beta's have been tag'd and bundled for download > ... I know Sean does up a 'devel' port for FreeBSD, but I don't > believe any of the RPM/deb maintainers do anything until the final > release ... Incidentally, the reason that I created the -devel port is because I needed some of the features now and didn't want to wait for a release. As things stand, I'm getting roughly 50-100 downloads a day of my -devel snapshots, which leads me to believe that there is some interest in having the release engineering team push features out the door more quickly. My eye on the pgsql repo isn't perfect, but I know I'm not the only one using it in production. > > 5. If need be, have a release management team that manages 0-4. > > Core does that, but we just don't feel that being totally rigid is > (or has ever been) a requirement ... but, if you can provide > suggestions on points 0 and 3, we're all ears ... You've got FreeBSD blood in you, you know that core@pgsql is the same as trb@FreeBSD + core@FreeBSD + re@FreeBSD + qa@FreeBSD. I think that core@pgsql's big reason for wanting to have long release cycles is to minimize the time that pgsql developers spend with their re@ and qa@ hats on. Truth be told, pgsql's code quality in the tree is so high that a snapshot of HEAD is almost as good as a release... the difference being the amount of attention spent on detail, docs, finishing touches/polish.
For the # of lines of code that go into pgsql, it's nearly bug free over 95% of the time, which means to me, with a releng hat on, that pgsql could stand to increase the rate of releases so long as the developers can stomach doing the extra merges from HEAD to the stable branch for feature additions, or possibly watching micro version numbers increment faster than they have historically. For all intents and purposes, pgsql's releases are stellar and the Pg team makes every release very important to most everyone, where important is defined as containing features useful for everyone: as opposed to a re@ release-often model where releases don't necessarily contain features useful to a majority and just lead to upgrade thrashing, which is costly to organizations. Food for thought... nothing conclusive here. -sc -- Sean Chittenden
On Mon, Nov 17, 2003 at 20:08:41 -0500, Neil Conway <neilc@samurai.com> wrote: > Peter Eisentraut <peter_e@gmx.net> writes: > > The time from release 7.3 to release 7.4 was 355 days, an all-time > > high. We really need to shorten that. > > Why is that? End users will find it useful. I started using 7.4 from CVS early on because check constraints for domains were available. With a long release cycle you have to wait a long time to get any of the features in a release when some of them may have been developed early in the release cycle.
...
> Does anyone have a comparison of how many lines of code were added in
> this release compared to previous?

7.2.4: 456204 lines of code in 1021 files
7.3.4: 480491 lines of code in 1012 files
7.4:   554567 lines of code in 1128 files (boah!)

I used a freshly extracted source directory and executed
  find postgresql-7.xxx -name '*.c' -o -name '*.h' | wc -l
and
  find postgresql-7.xxx -name '*.c' -o -name '*.h' | xargs cat | wc -l

Tommi

> Chris
>
> ---------------------------(end of broadcast)---------------------------
> TIP 4: Don't 'kill -9' the postmaster

-- Dr. Eckhardt + Partner GmbH http://www.epgmbh.de
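Untangling the quoting, Tommi's pipeline can be sketched as below. A throwaway demo tree stands in for the real extracted source directory so the commands run anywhere; the directory name and file contents are invented for illustration, and in practice `$SRC` would point at the unpacked postgresql-7.xxx tree.

```shell
# Hedged reconstruction of the counting commands.  A tiny demo tree is
# created on the fly; substitute the real extracted source directory.
SRC=$(mktemp -d)
printf 'int x;\nint y;\n' > "$SRC/a.c"     # 2 lines
printf '#define Z 1\n'    > "$SRC/b.h"     # 1 line

# Number of .c/.h files:
find "$SRC" -name '*.c' -o -name '*.h' | wc -l
# Total lines across those files:
find "$SRC" -name '*.c' -o -name '*.h' | xargs cat | wc -l

rm -rf "$SRC"
```

Note the first pipeline counts matching files (one find result per line), while the second concatenates the files before counting, giving total source lines.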
On Tue, Nov 18, 2003 at 02:33:41PM +0100, Tommi Maekitalo wrote: > ... > > > > Does anyone have a comparison of how many lines of code were added in > > this release compared to previous? > > > 7.2.4: 456204 lines of code in 1021 files > 7.3.4: 480491 lines of code in 1012 files > 7.4: 554567 lines of code in 1128 files (boah!) I used SLOCcount by David A. Wheeler at some point on various releases (including 7.1.3 IIRC) and at some point the number of lines actually _decreased_. I didn't look into it in more detail, but I think the number of lines of code doesn't even come near to telling the whole story. -- Alvaro Herrera (<alvherre[a]dcc.uchile.cl>) You liked Linux a lot when he was just the gawky kid from down the block mowing your lawn or shoveling the snow. But now that he wants to date your daughter, you're not so sure he measures up. (Larry Greenemeier)
Marc G. Fournier wrote:
> On Tue, 18 Nov 2003, Peter Eisentraut wrote:
>
> > 0. As you say, make it known to the public. Have people test their
> > in-development applications using a beta.
>
> and how do you propose we do that? I think this is the hard part ...
> other than the first beta, I post a note out to -announce and -general
> that the beta's have been tag'd and bundled for download ... I know Sean
> does up a 'devel' port for FreeBSD, but I don't believe any of the RPM/deb
> maintainers do anything until the final release ...
>
> > 1. Start platform testing on day 1 of beta. Last minute fixes for AIX
> > or UnixWare are really becoming old jokes.
>
> then each beta will have to be "re-certified" for that beta, up until
> release ... doable, but I don't think you'll find many that will bother
> until we are close to release ...
>
> > 2. Have a complete account of the changes available at the start of beta,
> > so people know what to test.
>
> Bruce, when do you do your initial HISTORY file? Something to move to the
> start of beta, if not?

I see beta starting on:

  revision 1.277
  date: 2003/08/04 22:30:30;  author: pgsql;  state: Exp;  lines: +3 -3
  change tag to 7.4beta1 and update the Copyright to 2003
  Guess what folks? We are now in Beta!!

and 7.4 HISTORY updated on:

  revision 1.196
  date: 2003/08/03 23:26:05;  author: momjian;  state: Exp;  lines: +324 -26
  Update HISTORY file for 7.4.

so the HISTORY file was updated the day before beta started. I haven't always been good about this, but I am now.

-- Bruce Momjian | http://candle.pha.pa.us pgman@candle.pha.pa.us | (610) 359-1001 + If your life is a hard drive, | 13 Roberts Road + Christ can be your backup. | Newtown Square, Pennsylvania 19073
Guys, I agree with Neil ... it's not the length of the development part of the cycle, it's the length of the beta testing. I do think an online bug tracker (bugzilla or whatever) would help. I also think that having a person in charge of "testing" would help as well ... no biggie, just someone whose duty it is to e-mail people in the community and ask about the results of testing, especially on the more obscure ports. I think a few e-mail reminders would do a *lot* to speed things up. But I'm not volunteering for this job; managing the release PR is "herding cats" enough! I also contributed to the delays on this release because it took longer than I expected to get the "PR machinery" started. We have a sort of system now, though, and the next release should be easier. HOWEVER, a release cycle of *less than 6 months* would kill the advocacy vols if we wanted the same level of publicity. I do support the idea of "dev" releases. For example, if there was a "dev" release of PG+ARC as soon as Jan is done with it, I have one client who would be willing to test it against a simulated production load on pretty heavy-duty hardware. (Oddly enough, my problem in doing more testing myself is external to PostgreSQL; most of our apps are PHP apps and you can't compile PHP against two different versions of PostgreSQL on the same server. Maybe with User Mode Linux I'll be able to do more testing now.) -- Josh Berkus Aglio Database Solutions San Francisco
On Tue, Nov 18, 2003 at 09:42:31AM -0800, Josh Berkus wrote: > (Oddly enough, my problem in doing more testing myself is external to > PostgreSQL; most of our apps are PHP apps and you can't compile PHP against > two different versions of PostgreSQL on the same server. Maybe with User > Mode Linux I'll be able to do more testing now.) I'm not sure UML would help you here. I think you'd be better off trying to run Apache in a chrooted environment, PHP and PostgreSQL included. You don't need another kernel, but another set of libraries. BTW, I think UMLSIM (umlsim.sf.net) could help to play the "unplug-the-server" game. In theory you could rewrite the block subsystem to "fail", simulating a real disk failure and possibly a system shutdown. I don't have time to do it myself right now however ... -- Alvaro Herrera (<alvherre[a]dcc.uchile.cl>) "Find a bug in a program, and fix it, and the program will work today. Show the program how to find and fix a bug, and the program will work forever" (Oliver Silfridge)
Josh Berkus wrote: >Guys, > >I agree with Neil ... it's not the length of the development part of the >cycle, it's the length of the beta testing. > >I do think an online bug tracker (bugzilla or whatever) would help. I also >think that having a person in charge of "testing" would help as well ... no >biggie, just someone whose duty it is to e-mail people in the community and >ask about the results of testing, especially on the more obscure ports. I >think a few e-mail reminders would do a *lot* to speed things up. But I'm >not volunteering for this job; managing the release PR is "herding cats" >enough! > Maybe some sort of automated distributed build farm would be a good idea. Check out http://build.samba.org/about.html to see how samba does it (much lighter than the Mozilla tinderbox approach). We wouldn't need to be as intensive as they appear to be - maybe a once or twice a day download and test run would do the trick, but it could pick up lots of breakage fairly quickly. That is not to say that more intensive testing isn't also needed on occasion. cheers andrew
On Tue, 18 Nov 2003, Andrew Dunstan wrote: > Josh Berkus wrote: > > >Guys, > > > >I agree with Neil ... it's not the length of the development part of the > >cycle, it's the length of the beta testing. > > > >I do think an online bug tracker (bugzilla or whatever) would help. I also > >think that having a person in charge of "testing" would help as well ... no > >biggie, just someone whose duty it is to e-mail people in the community and > >ask about the results of testing, especially on the more obscure ports. I > >think a few e-mail reminders would do a *lot* to speed things up. But I'm > >not volunteering for this job; managing the release PR is "herding cats" > >enough! > > > > Maybe some sort of automated distributed build farm would be a good > idea. Check out http://build.samba.org/about.html to see how samba does > it (much lighter than the Mozilla tinderbox approach). > > We wouldn't need to be as intensive as they appear to be - maybe a once > or twice a day download and test run would do the trick, but it could > pick up lots of breakage fairly quickly. > > That is not to say that more intensive testing isn't also needed on > occasion. Check the archives on this, as it's been hashed out already once at least ... I think the big issue/problem is that nobody seems able (or wants) to come up with a script that could be setup in cron on machines to do this ... something simple that would dump the output to a log file and, if regression tests failed, email'd the machine owner that it needs to be checked would do, I would think ...
Marc G. Fournier wrote: >On Tue, 18 Nov 2003, Andrew Dunstan wrote: > > > >>Maybe some sort of automated distributed build farm would be a good >>idea. Check out http://build.samba.org/about.html to see how samba does >>it (much lighter than the Mozilla tinderbox approach). >> >>We wouldn't need to be as intensive as they appear to be - maybe a once >>or twice a day download and test run would do the trick, but it could >>pick up lots of breakage fairly quickly. >> >>That is not to say that more intensive testing isn't also needed on >>occasion. >> >> > >Check the archives on this, as its been hashed out already once at least >... I think the big issue/problem is that nobody seems able (or wants) to >come up with a script that could be setup in cron on machines to do this >... something simple that would dump the output to a log file and, if >regression tests failed, email'd the machine owner that it needs to be >checked would do, I would think ... > If there's general interest I'll try to cook something up. (This kind of stuff is right up my alley.) I'd prefer some automated display of results, though. A simple CGI script should be all that's required for that. cheers andrew
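The cron wrapper described above (log each step's output, notify the machine owner on failure) could be sketched roughly as follows. This is a hedged sketch, not anything the project has agreed on: the real steps (cvs update, configure, make, make check) are stubbed out with echo, and the log path and mail command are placeholders.

```shell
# Minimal sketch of a nightly build-and-test cron wrapper.  Every step's
# output is appended to one log; any failing step flips $status so the
# machine owner can be mailed at the end.  All paths/commands are
# illustrative assumptions.
LOG=$(mktemp)
status=0

run_step() {
    # Run one step, capture its output, and remember any failure.
    echo "=== $* ===" >> "$LOG"
    "$@" >> "$LOG" 2>&1 || status=1
}

# In a real run these would be: cvs update; ./configure; make; make check.
run_step echo "cvs update (stub)"
run_step echo "make check (stub)"

if [ "$status" -ne 0 ]; then
    # Real script might do: mail -s "pgsql nightly build FAILED" owner < "$LOG"
    echo "FAILED, log in $LOG"
fi
echo "status=$status"
```

The same skeleton extends naturally to Peter's later suggestions (multiple branches, `make distcheck`, building in a throwaway copy of the tree) by adding more `run_step` calls.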
On Tue, Nov 18, 2003 at 12:36:11AM -0400, Marc G. Fournier wrote: > On Tue, 18 Nov 2003, Peter Eisentraut wrote: > > > 0. As you say, make it known to the public. Have people test their > > in-development applications using a beta. > > and how do you propose we do that? I think this is the hard part ... > other than the first beta, I post a note out to -announce and -general > that the beta's have been tag'd and bundled for download ... I know Sean > does up a 'devel' port for FreeBSD, but I don't believe any of the RPM/deb > maintainers do anything until the final release ... For what it is worth, I try to promote the beta testing on General Bits. I also invite people to write articles for me. To highlight a feature or concept, or just to egg people on in a short article by a guest to General Bits, is very appropriate. My audience might not be core hackers, but getting the larger user group to participate as well as prepare for conversion is something I can help promote. (Just contact me to submit articles for publication--the invitation is always open.) --elein ============================================================= elein@varlena.com Varlena, LLC www.varlena.com PostgreSQL Consulting, Support & Training PostgreSQL General Bits http://www.varlena.com/GeneralBits/ ============================================================= I have always depended on the [QA] of strangers.
On Tue, 2003-11-18 at 14:36, Andrew Dunstan wrote: > Marc G. Fournier wrote: > > >On Tue, 18 Nov 2003, Andrew Dunstan wrote: > > > > > > > >>Maybe some sort of automated distributed build farm would be a good > >>idea. Check out http://build.samba.org/about.html to see how samba does > >>it (much lighter than the Mozilla tinderbox approach). > >> > >>We wouldn't need to be as intensive as they appear to be - maybe a once > >>or twice a day download and test run would do the trick, but it could > >>pick up lots of breakage fairly quickly. > >> > >>That is not to say that more intensive testing isn't also needed on > >>occasion. > >> > >> > > > >Check the archives on this, as its been hashed out already once at least > >... I think the big issue/problem is that nobody seems able (or wants) to > >come up with a script that could be setup in cron on machines to do this > >... something simple that would dump the output to a log file and, if > >regression tests failed, email'd the machine owner that it needs to be > >checked would do, I would think ... > > > > If there's general interest I'll try to cook something up. (This kind of stuff is right up my alley). I'd prefer some automated display of results, though. A simple CGI script should be all that's required for that. > look in the tools directory of cvs, i swear Bruce checked in a script he uses for similar tasks.. Robert Treat -- Build A Brighter Lamp :: Linux Apache {middleware} PostgreSQL
Robert Treat wrote: > > >Check the archives on this, as its been hashed out already once at least > > >... I think the big issue/problem is that nobody seems able (or wants) to > > >come up with a script that could be setup in cron on machines to do this > > >... something simple that would dump the output to a log file and, if > > >regression tests failed, email'd the machine owner that it needs to be > > >checked would do, I would think ... > > > > > > > If there's general interest I'll try to cook something up. (This kind of stuff is right up my alley). I'd prefer some automated display of results, though. A simple CGI script should be all that's required for that. > > > > look in the tools directory of cvs, i swear Bruce checked in a script he uses for similar tasks.. /tools/pgtest -- Bruce Momjian | http://candle.pha.pa.us pgman@candle.pha.pa.us | (610) 359-1001 + If your life is a hard drive, | 13 Roberts Road + Christ can be your backup. | Newtown Square, Pennsylvania 19073
> HOWEVER, a release cycle of *less than 6 months* would kill the advocacy vols > if we wanted the same level of publicity. > > I do support the idea of "dev" releases. For example, if there was a "dev" > release of PG+ARC as soon as Jan is done with it, I have one client who > would be willing to test it against a simulated production load on pretty > heavy-duty hardware. Can't we have nightly builds always available? Why can't they just use the CVS version? > (Oddly enough, my problem in doing more testing myself is external to > PostgreSQL; most of our apps are PHP apps and you can't compile PHP against > two different versions of PostgreSQL on the same server. Maybe with User > Mode Linux I'll be able to do more testing now.) I'd be willing to give testing coordination a go, not sure where I'd begin though. Chris
On Wed, 19 Nov 2003, Christopher Kings-Lynne wrote: > > HOWEVER, a release cycle of *less than 6 months* would kill the advocacy vols > > if we wanted the same level of publicity. > > > > I do support the idea of "dev" releases. For example, if there was a "dev" > > release of PG+ARC as soon as Jan is done with it, I have one client who > > would be willing to test it against a simulated production load on pretty > > heavy-duty hardware. > > Can't we have nightly builds always available? Why can't they just use > the CVS version? We do do nightly builds ... have for years now ... ---- Marc G. Fournier Hub.Org Networking Services (http://www.hub.org) Email: scrappy@hub.org Yahoo!: yscrappy ICQ: 7615664
Andrew Dunstan writes: > If there's general interest I'll try to cook something up. (This kind of > stuff is right up my alley). I'd prefer some automated display of > results, though. A simple CGI script should be all that's required for > that. The real problem will be to find enough machines so that the build farm becomes useful. IMO, that would mean *more* machines than are currently lines in the supported-platforms table. -- Peter Eisentraut peter_e@gmx.net
Peter Eisentraut wrote: >Andrew Dunstan writes: > > > >>If there's general interest I'll try to cook something up. (This kind of >>stuff is right up my alley). I'd prefer some automated display of >>results, though. A simple CGI script should be all that's required for >>that. >> >> > >The real problem will be to find enough machines so that the build farm >becomes useful. IMO, that would mean *more* machines than are currently >lines in the supported-platforms table. > > "Useful" is probably subjective. That list would at least be a good place to start, though. What combinations of variables do you think we would need? This would be a fairly painless way for users to be helpful to the project, btw - the way I am envisioning things this would be fairly much a "set and forget" process. I'll have an example page available in a few days. cheers andrew
Marc G. Fournier wrote: > On Tue, 18 Nov 2003, Peter Eisentraut wrote: > >> The time from release 7.3 to release 7.4 was 355 days, an all-time high. >> We really need to shorten that. We already have a number of significant >> improvements in 7.5 now, and several good ones coming up in the next few >> weeks. We cannot let people wait 1 year for that. I suggest that we aim >> for a 6 month cycle, consisting of approximately 4 months of development >> and 2 months of cleanup. So the start of the next beta could be the 1st >> of March. What do you think? > > That is the usual goal *nod* Same goal we try for each release, and never > quite seem to get there ... we'll try 'yet again' with v7.5 though, as we > always do :) I don't see much of a point for a shorter release cycle as long as we don't get rid of the initdb requirement for releases that don't change the system catalog structure. All we gain from that is spreading out the number of different versions used in production. Jan -- #======================================================================# # It's easier to get forgiveness for being wrong than for being right. # # Let's break this rule - forgive me. # #================================================== JanWieck@Yahoo.com #
Andrew Dunstan writes: > "Useful" is probably subjective. That list would at least be a good > place to start, though. What combinations of variables do you think we > would need? First of all, I don't necessarily think that a large list of CPU/operating system combinations is going to help much. IIRC, this round of platform testing showed us two real problems, and both happened because the operating system version in question came out the previous day, so we could not have caught it. Many more problems arise when people use different versions of secondary packages, such as Tcl, Perl, Kerberos, Flex, Bison. So you would need to compile a large collection of these things. The problem again is that it is usually the brand-new or the odd intermediate version of such a tool that breaks things, so a "set and forget" build farm is not going to catch it. Another real source of problems are real systems: weird combinations of packages, weird network setups, weird applications, custom kernels. These cannot be detected on out-of-the-box setups. In fact, the regression tests might not detect them at all. Hence the open-source community approach. Closed-source development teams can do all the above, with great effort. But by throwing out the code and having real people test it on real systems with real applications, you can do much better. -- Peter Eisentraut peter_e@gmx.net
Peter Eisentraut wrote:

>Andrew Dunstan writes:
>
>>"Useful" is probably subjective. That list would at least be a good
>>place to start, though. What combinations of variables do you think we
>>would need?
>
>First of all, I don't necessarily think that a large list of CPU/operating
>system combinations is going to help much. IIRC, this round of platform
>testing showed us two real problems, and both happened because the
>operating system version in question came out the previous day, so we
>could not have caught it. Many more problems arise when people use
>different versions of secondary packages, such as Tcl, Perl, Kerberos,
>Flex, Bison. So you would need to compile a large collection of these
>things. The problem again is that it is usually the brand-new or the odd
>intermediate version of such a tool that breaks things, so a "set and
>forget" build farm is not going to catch it. Another real source of
>problems are real systems: weird combinations of packages, weird network
>setups, weird applications, custom kernels. These cannot be detected on
>out-of-the-box setups. In fact, the regression tests might not detect
>them at all.
>
>Hence the open-source community approach. Closed-source development teams
>can do all the above, with great effort. But by throwing out the code and
>having real people test it on real systems with real applications, you can
>do much better.

The fact that something doesn't find everything doesn't mean it is of no value. (Thinks of Scott Adams' nice example: "Your theory of gravity doesn't prove why there are no unicorns, so it is wrong." ;-) )

I don't believe there is a single "open source community" approach - open source projects all have differing ways of handling problems. At least 2 very significant open source projects I know of run build farms, notwithstanding that your objections should apply equally to them.
Mozilla's is fairly centralised and very complex and heavy, but gives fairly immediate feedback if anything gets broken. Samba's is much lighter, distributed, and they still apparently see good value in it. (Samba uses a "torture test" - perhaps we need one of those in addition to the regression tests.) Maybe it wouldn't be of great value to PostgreSQL. And maybe it would. I have an open mind about it. I don't think incompleteness is an argument against it, though. cheers andrew
Andrew Dunstan writes: > Maybe it wouldn't be of great value to PostgreSQL. And maybe it would. I > have an open mind about it. I don't think incompleteness is an argument > against it, though. If you want to do it, by all means go for it. I'm sure it would give everyone a fuzzy feeling to see the green lights everywhere. But realistically, don't expect any significant practical benefits, such as cutting beta time by 10%. The Samba build daemon suite is pretty good. We have a couple of those hosts in our office in fact. (I think they're building PostgreSQL regularly as well.) A tip: You might find that adapting the source code of the Samba suite to PostgreSQL is harder than writing a new one. -- Peter Eisentraut peter_e@gmx.net
Peter Eisentraut wrote:

>The Samba build daemon suite is pretty good. We have a couple of those
>hosts in our office in fact. (I think they're building PostgreSQL
>regularly as well.) A tip: You might find that adapting the source code
>of the Samba suite to PostgreSQL is harder than writing a new one.

Yes, I agree. I have looked at it for ideas, but not for code. I'm not using rsync or anything like that, for instance. I'm going for something very simple to start with. Essentially what I have is something like this pseudocode:

  cvs update
  check if there really was an update and if not exit
  configure; get config.log
  make 2>&1 | make-filter > makelog
  make check 2>&1 | check-filter > checklog
  (TBD) send config status, make status, check status, logfiles
  make distclean

The send piece will probably be a perl script using LWP and talking to a CGI script.

cheers

andrew
Andrew Dunstan writes:

> Essentially what I have is something like this pseudocode:
>
>   cvs update

Be sure to check past branches as well.

>   check if there really was an update and if not exit

OK.

>   configure; get config.log

Ideally, you'd try all possible option combinations for configure. Or at least enable everything.

>   make 2>&1 | make-filter > makelog
>   make check 2>&1 | check-filter > checklog

You could also try out make distcheck. It tries out the complete build, installation, uninstallation, regression test, and distribution building.

>   (TBD) send config status, make status, check status, logfiles

OK.

>   make distclean

When I played around with this, I always copied the CVS tree to a new directory and deleted that one at the end. That way, bugs in the clean procedure (known to happen) don't trip up the whole process.

> The send piece will probably be a perl script using LWP and talking to a
> CGI script.

That will be the difficult part to organize, if it's supposed to be distributed and autonomous.

-- Peter Eisentraut peter_e@gmx.net
Peter Eisentraut wrote:

>Andrew Dunstan writes:
>
>>Essentially what I have is something like this pseudocode:
>>
>>  cvs update
>
>Be sure to check past branches as well.
>
>>  check if there really was an update and if not exit
>
>OK.
>
>>  configure; get config.log
>
>Ideally, you'd try all possible option combinations for configure. Or at
>least enable everything.

I have had in mind from the start doing multiple configurations and multiple branches. Right now I'm working only with everything/head, but will make provision for multiple sets of both.

How many branches back do you think we should go? Right now I'd be inclined only to do REL7_4_STABLE and HEAD as a default. Maybe we could set the default to be gettable from the web server so that as new releases come along build farm members using the default wouldn't need to make any changes. However, everything would also be settable locally on each build farm member in an options file.

>>  make 2>&1 | make-filter > makelog
>>  make check 2>&1 | check-filter > checklog
>
>You could also try out make distcheck. It tries out the complete build,
>installation, uninstallation, regression test, and distribution building.

OK.

>>  (TBD) send config status, make status, check status, logfiles
>
>OK.
>
>>  make distclean
>
>When I played around with this, I always copied the CVS tree to a new
>directory and deleted that one at the end. That way, bugs in the clean
>procedure (known to happen) don't trip up the whole process.

OK. We've also seen odd problems with "cvs update", I seem to recall, but I'd rather avoid having to fetch the entire tree for each run, to keep bandwidth use down. (I believe "cvs update" should be fairly reliable if there are no local changes, which would be true in this instance.)

>>The send piece will probably be a perl script using LWP and talking to a
>>CGI script.
>
> That will be the difficult part to organize, if it's supposed to be
> distributed and autonomous.

sending the results won't be a huge problem - storing and displaying them nicely will be a bit more fun :-) Upload of results would be over authenticated SSL to prevent spurious results being fed to us - all you would need to join the build farm would be a username/password from the buildfarm admin.

Thanks for your input

cheers

andrew
Jan Wieck <JanWieck@Yahoo.com> writes: > On Tue, 18 Nov 2003, Peter Eisentraut wrote: >> The time from release 7.3 to release 7.4 was 355 days, an all-time high. >> We really need to shorten that. > I don't see much of a point for a shorter release cycle as long as we > don't get rid of the initdb requirement for releases that don't change > the system catalog structure. All we gain from that is spreading out the > number of different versions used in production. Yeah, I think the main issue in all this is that for real production sites, upgrading Postgres across major releases is *painful*. We have to find a solution to that before it makes sense to speed up the major-release cycle. By the same token, I'm not sure that there's much of a market for "development" releases --- people who find a 7.3->7.4 upgrade painful aren't going to want to add additional upgrades to incompatible intermediate states. If we could fix that, there'd be more interest. regards, tom lane
Kevin Brown <kevin@sysexperts.com> writes: > ... That's why the release methodology used by the Linux kernel development > team is a reasonable one. I do not think we have the manpower to manage multiple active development branches. The Postgres developer community is a fraction of the size of the Linux community; if we try to adopt what they do we'll just drown in work. It's hard enough to deal with the existing level of commitment to back-patching one stable release --- I know that we miss back-patching bug fixes that probably should have been back-patched. And the stuff that does get back-patched isn't really tested to the level that it ought to be, which discourages us from applying fixes to the stable branch if they are too large to be "obviously correct". I don't see manpower emerging from the woodwork to fix those problems. If we were doing active feature development in more than one branch I think our process would break down completely. regards, tom lane
Larry Rosenman <ler@lerctr.org> writes: > <peter_e@gmx.net> wrote: >> 1. Start platform testing on day 1 of beta. Last minute fixes for AIX or >> UnixWare are really becoming old jokes. > The only reason we had last minute stuff for UnixWare this time was the > timing of PG's release and the UP3 release from SCO. Yes. The late fixes for OS X also arose from the fact that Apple released a new OS X version late in our beta cycle. I don't think it's reasonable to complain that there was insufficient port testing done earlier; the issues didn't come from that. I do agree with the opinion that our beta cycles are getting too long, and that it's not clear we are getting any additional reliability out of the longer time period. regards, tom lane
> Yeah, I think the main issue in all this is that for real production > sites, upgrading Postgres across major releases is *painful*. We have > to find a solution to that before it makes sense to speed up the > major-release cycle. Well, I think one of the simplest is to do a topological sort of objects in pg_dump (between object classes that need it), AND regression testing for pg_dump :) Chris
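To illustrate, the dependency-ordered dump Chris is asking for is essentially a topological sort. A minimal sketch in Python (Kahn's algorithm, with made-up object names - not pg_dump's actual dependency model):

```python
from collections import defaultdict, deque

def dump_order(objects, depends_on):
    """Order dump objects so each one is emitted after everything it
    depends on (Kahn's algorithm). `objects` is a list of names;
    `depends_on` maps an object to the objects it requires.
    Raises ValueError on a dependency cycle."""
    indegree = {obj: 0 for obj in objects}
    dependents = defaultdict(list)
    for obj in objects:
        for dep in depends_on.get(obj, []):
            dependents[dep].append(obj)
            indegree[obj] += 1
    ready = deque(obj for obj in objects if indegree[obj] == 0)
    ordered = []
    while ready:
        obj = ready.popleft()
        ordered.append(obj)
        for dependent in dependents[obj]:
            indegree[dependent] -= 1
            if indegree[dependent] == 0:
                ready.append(dependent)
    if len(ordered) != len(objects):
        raise ValueError("dependency cycle among dump objects")
    return ordered

# A view depends on its table; a foreign key depends on both tables.
objs = ["table_a", "table_b", "view_ab", "fk_a_b"]
deps = {"view_ab": ["table_a", "table_b"], "fk_a_b": ["table_a", "table_b"]}
print(dump_order(objs, deps))  # -> ['table_a', 'table_b', 'view_ab', 'fk_a_b']
```

The cycle check matters in practice: circular dependencies (e.g. mutually referencing foreign keys) are exactly the cases where a dump would need to split an object into pieces.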
On Fri, Nov 21, 2003 at 09:38:50AM +0800, Christopher Kings-Lynne wrote: > >Yeah, I think the main issue in all this is that for real production > >sites, upgrading Postgres across major releases is *painful*. We have > >to find a solution to that before it makes sense to speed up the > >major-release cycle. > > Well, I think one of the simplest is to do a topological sort of objects > in pg_dump (between object classes that need it), AND regression > testing for pg_dump :) One of the most complex would be to avoid the need of pg_dump for upgrades ... -- Alvaro Herrera (<alvherre[@]dcc.uchile.cl>) "I call it GNU/Linux. Except the GNU/ is silent." (Ben Reiter)
Alvaro Herrera wrote: > On Fri, Nov 21, 2003 at 09:38:50AM +0800, Christopher Kings-Lynne wrote: >> >Yeah, I think the main issue in all this is that for real production >> >sites, upgrading Postgres across major releases is *painful*. We have >> >to find a solution to that before it makes sense to speed up the >> >major-release cycle. >> >> Well, I think one of the simplest is to do a topological sort of objects >> in pg_dump (between object classes that need it), AND regression >> testing for pg_dump :) > > One of the most complex would be to avoid the need of pg_dump for > upgrades ... > We don't need a simple way, we need a way to create some sort of catalog diff and a "safe" way to apply that to an existing installation during the upgrade. I think shutting down the postmaster, using a standalone backend to check that no conflicts exist in any DB, and then using the new backend in bootstrap mode to apply the changes could be an idea worth thinking about. It would still require some downtime, but nobody can avoid that when replacing the postgres binaries anyway, so that's not a real issue. As long as it eliminates dump, initdb, reload, it will be acceptable. Jan -- #======================================================================# # It's easier to get forgiveness for being wrong than for being right. # # Let's break this rule - forgive me. # #================================================== JanWieck@Yahoo.com #
Peter Eisentraut <peter_e@gmx.net> writes: > Andrew Dunstan writes: >> Maybe it wouldn't be of great value to PostgreSQL. And maybe it would. I >> have an open mind about it. I don't think incompleteness is an argument >> against it, though. > If you want to do it, by all means go for it. I'm sure it would give > everyone a fuzzy feeling to see the green lights everywhere. But > realistically, don't expect any significant practical benefits, such as > cutting beta time by 10%. I think the main value of a build farm is that we'd get nearly immediate feedback about the majority of simple porting problems. Your previous arguments that it wouldn't smoke everything out are certainly valid --- but we wouldn't abandon the regression tests just because they don't find everything. Immediate feedback is good because a patch can be fixed while it's still fresh in the author's mind. I'm for it ... regards, tom lane
Jan Wieck <JanWieck@Yahoo.com> writes: > Alvaro Herrera wrote: >> One of the most complex would be to avoid the need of pg_dump for >> upgrades ... > We don't need a simple way, we need a way to create some sort of catalog > diff and "a safe" way to apply that to an existing installation during > the upgrade. I still think that pg_upgrade is the right idea: load a schema dump from the old database into the new one, then transfer the user data files and indexes via cheating (doubly linking, if possible). Obviously there is a lot of work still to make this happen reliably, but we have seen proof-of-concept some while ago, whereas "catalog diffs" are pie in the sky IMHO. (You could not use either the old postmaster version or the new version to apply such a diff...) A big advantage of the pg_upgrade concept in my mind is that if it fails partway through, you need have made no changes to the original installation. Any mid-course problem with an in-place-diff approach leaves you completely screwed :-( regards, tom lane
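Tom's "transfer via cheating" can be pictured as linking the old cluster's relation files into the new cluster's directory instead of copying them. A toy sketch in Python on throwaway directories - the layout, file names, and copy fallback are my assumptions, not pg_upgrade's actual behaviour:

```python
import os
import shutil
import tempfile

def link_data_files(old_dir, new_dir, filenames):
    """Attach old-cluster data files to the new cluster by hard-linking
    rather than copying: instant, and no extra disk space needed.
    Falls back to a real copy when linking fails (e.g. the two
    directories sit on different filesystems)."""
    os.makedirs(new_dir, exist_ok=True)
    for name in filenames:
        src = os.path.join(old_dir, name)
        dst = os.path.join(new_dir, name)
        try:
            os.link(src, dst)
        except OSError:
            shutil.copy2(src, dst)

# Demo on throwaway directories standing in for two cluster data dirs.
base = tempfile.mkdtemp()
old = os.path.join(base, "old")
new = os.path.join(base, "new")
os.makedirs(old)
with open(os.path.join(old, "16384"), "wb") as f:
    f.write(b"heap pages")          # pretend relation file
link_data_files(old, new, ["16384"])
```

The failure-safety property Tom points out falls out naturally: until the new cluster is switched live, the old directory is untouched, so aborting partway costs nothing.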
Tom Lane wrote: > >I think the main value of a build farm is that we'd get nearly immediate >feedback about the majority of simple porting problems. Your previous >arguments that it wouldn't smoke everything out are certainly valid --- >but we wouldn't abandon the regression tests just because they don't >find everything. Immediate feedback is good because a patch can be >fixed while it's still fresh in the author's mind. > Yes, I seem to recall seeing several instances of things like "you mean foonix version 97 1/2 has a bad frobnitz.h?" over the last 6 months. Having that caught early is exactly the advantage, I believe. > >I'm for it ... > > I'm working on it :-) Regarding "make distcheck" that Peter suggested I use, unless I'm mistaken it carefully does its own configure, thus ignoring the configure options set in the original directory. Perhaps we need either to have the distcheck target pick up all the --with/--without and --enable/--disable options, or to have a similar target that does that. Thoughts? cheers andrew
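In automake-based projects the stock answer to Andrew's question is the DISTCHECK_CONFIGURE_FLAGS hook, which forwards options into distcheck's private configure run. Whether PostgreSQL's hand-rolled makefiles honor the same knob is an assumption to verify, but the shape would be:

```sh
# Pass the tree's own configure options through to distcheck's private
# configure run (automake convention; hypothetical for PostgreSQL):
make distcheck DISTCHECK_CONFIGURE_FLAGS='--enable-debug --with-openssl --with-perl'
```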
Hello hackers Sorry if I am telling the gurus the obvious... There is a database which has a concept called "Transportable Tablespace" (TTS). Would it not be a very easy and fast solution to just do this with the tables, indexes and all non-catalog-related files? - You create a new db cluster (e.g. 8.0). - Generate a TTS export script. - Shut down the (old) db cluster (the files should be consistent now; possibly do something with the log files first). - Move the files (possibly not even needed) and - plug them into the new db cluster (via the export script). Expected downtime (without moving data files): 5-10 minutes. Regards Oli ------------------------------------------------------- Oli Sennhauser Database-Engineer (Oracle & PostgreSQL) Rebenweg 6 CH - 8610 Uster / Switzerland Phone (+41) 1 940 24 82 or Mobile (+41) 79 450 49 14 e-Mail oli.sennhauser@bluewin.ch Website http://mypage.bluewin.ch/shinguz/PostgreSQL/ Secure (signed/encrypted) e-Mail with a Free Personal SwissSign ID: http://www.swisssign.ch Import the SwissSign Root Certificate: http://swisssign.net/cgi-bin/trust/import
Le Vendredi 21 Novembre 2003 19:47, Tom Lane a écrit : > I think the main value of a build farm is that we'd get nearly immediate > feedback about the majority of simple porting problems. Your previous > arguments that it wouldn't smoke everything out are certainly valid --- > but we wouldn't abandon the regression tests just because they don't > find everything. Immediate feedback is good because a patch can be > fixed while it's still fresh in the author's mind. Dear friends, We have a small build farm for pgAdmin covering Win32, FreeBSD and most GNU/Linux systems. See http://www.pgadmin.org/pgadmin3/download.php#snapshots The advantages are immediate feedback and correction of problems. Also, in a release cycle, developers and translators are quite motivated to see their work published fast. Of course, it is always hard to "measure" the real impact of a build farm. My opinion is that it is quite positive, as it helps tighten the links between people, which is what free software is mostly about. Cheers, Jean-Michel Pouré
Jean-Michel POURE wrote: >Le Vendredi 21 Novembre 2003 19:47, Tom Lane a écrit : > > >>I think the main value of a build farm is that we'd get nearly immediate >>feedback about the majority of simple porting problems. Your previous >>arguments that it wouldn't smoke everything out are certainly valid --- >>but we wouldn't abandon the regression tests just because they don't >>find everything. Immediate feedback is good because a patch can be >>fixed while it's still fresh in the author's mind. >> >> > >Dear friends, > >We have a small build farm for pgAdmin covering Win32, FreeBSD and most GNU/ >Linux systems. See http://www.pgadmin.org/pgadmin3/download.php#snapshots > >The advantage are immediate feedback and correction of problems. Also, in a >release cycle, developers and translators are quite motivated to see their >work published fast. > >Of course, it is always hard to "mesure" the real impact of a build farm. My >opinion it that it is quite positive, as it helps tighten the links between >people, which is free software is mostly about. > > > Right. But I think we have been talking about using the build farm to do test builds rather than to provide snapshots. I'd be very wary of providing arbitrary snapshots of postgres, whereas I'd be prepared to try a snapshot of pgadmin3 under certain circumstances. (Also, building your own snapshot of postgres is somewhat easier than building your own snapshot of pgadmin3). cheers andrew
Andrew Dunstan wrote: > > > Jean-Michel POURE wrote: > >> Le Vendredi 21 Novembre 2003 19:47, Tom Lane a écrit : >> >> >>> I think the main value of a build farm is that we'd get nearly >>> immediate >>> feedback about the majority of simple porting problems. Your previous >>> arguments that it wouldn't smoke everything out are certainly valid --- >>> but we wouldn't abandon the regression tests just because they don't >>> find everything. Immediate feedback is good because a patch can be >>> fixed while it's still fresh in the author's mind. >>> >> >> >> Dear friends, >> >> We have a small build farm for pgAdmin covering Win32, FreeBSD and >> most GNU/ >> Linux systems. See >> http://www.pgadmin.org/pgadmin3/download.php#snapshots >> >> The advantage are immediate feedback and correction of problems. >> Also, in a release cycle, developers and translators are quite >> motivated to see their work published fast. >> Of course, it is always hard to "mesure" the real impact of a build >> farm. My opinion it that it is quite positive, as it helps tighten >> the links between people, which is free software is mostly about. >> >> >> > > Right. But I think we have been talking about using the build farm to > do test builds rather than to provide snapshots. I'd be very wary of > providing arbitrary snapshots of postgres, whereas I'd be prepared to > try a snapshot of pgadmin3 under certain circumstances. (Also, > building your own snapshot of postgres is somewhat easier than > building your own snapshot of pgadmin3). Testing a build and creating a snapshot compilation is quite the same, just a different name and announcement. I agree that using a pgadmin snapshot is different from pgsql, somebody using a bleeding edge pgsql version should be prepared to compile it on his own machine. And a tiny correction: The farm member for win32 is my machine, and it's operated manually :-) Regards, Andreas
Le Lundi 24 Novembre 2003 16:38, Andreas Pflug a écrit : > And a tiny correction: The farm member for win32 is my machine, and it's > operated manually :-) Some GNU/Linux farm animals are living in my garage running on very old 50 euros machines ... Ancient farming :-) By the way, we would love if someone could provide pgAdmin3 daily snapshots under other systems. The list of platforms can be viewed here, anyone is welcome to provide additional ones: http://www.pgadmin.org/pgadmin3/download.php#snapshots Cheers, Jean-Michel
FYI, the HP testdrive farm, http://www.testdrive.hp.com, has shared directories for most of the machines, meaning you can CVS update once and telnet in to compile for each platform. --------------------------------------------------------------------------- Andrew Dunstan wrote: > Tom Lane wrote: > > > > >I think the main value of a build farm is that we'd get nearly immediate > >feedback about the majority of simple porting problems. Your previous > >arguments that it wouldn't smoke everything out are certainly valid --- > >but we wouldn't abandon the regression tests just because they don't > >find everything. Immediate feedback is good because a patch can be > >fixed while it's still fresh in the author's mind. > > > > Yes, I seem to recall seeing several instances of things like "you mean > foonix version 97 1/2 has a bad frobnitz.h?" over the last 6 months. > Having that caught early is exactly the advantage, I believe. > > > > >I'm for it ... > > > > > > I'm working on it :-) > > Regarding "make distcheck" that Peter suggested I use, unless I'm > mistaken it carefully does its own configure, thus ignoring the > configure options set in the original directory. Perhaps we need either > to have the distcheck target pick up all the --with/--without and > --enable/--disable options, or to have a similar target that does that. > > Thoughts? > > cheers > > andrew -- Bruce Momjian | http://candle.pha.pa.us pgman@candle.pha.pa.us | (610) 359-1001+ If your life is a hard drive, | 13 Roberts Road + Christ can be your backup. | Newtown Square, Pennsylvania 19073
On Wed, Nov 19, 2003 at 04:34:27PM +0100, Peter Eisentraut wrote: > Hence the open-source community approach. Closed-source development teams > can do all the above, with great effort. But by throwing out the code and > have real people test them on real systems with real applications, you can > do much better. Would it be reasonable to promote users testing daily snapshots with popular applications? I'm guessing there's not many applications that have automated test frameworks, but any that do would theoretically provide another good test of PGSQL changes. -- Jim C. Nasby, Database Consultant jim@nasby.net Member: Triangle Fraternity, Sports Car Club of America Give your computer some brain candy! www.distributed.net Team #1828 Windows: "Where do you want to go today?" Linux: "Where do you want to go tomorrow?" FreeBSD: "Are you guys coming, or what?"
> Would it be reasonable to promote users testing daily snapshots with > popular applications? I'm guessing there's not many applications that > have automated test frameworks, but any that do would theoretically > provide another good test of PGSQL changes. May I quote Joel on Software here? http://www.joelonsoftware.com/articles/fog0000000043.html The Joel Test 1. Do you use source control? 2. Can you make a build in one step? 3. Do you make daily builds? 4. Do you have a bug database? 5. Do you fix bugs before writing new code? 6. Do you have an up-to-date schedule? 7. Do you have a spec? 8. Do programmers have quiet working conditions? 9. Do you use the best tools money can buy? 10. Do you have testers? 11. Do new candidates write code during their interview? 12. Do you do hallway usability testing? "The neat thing about The Joel Test is that it's easy to get a quick yes or no to each question. You don't have to figure out lines-of-code-per-day or average-bugs-per-inflection-point. Give your team 1 point for each "yes" answer. The bummer about The Joel Test is that you really shouldn't use it to make sure that your nuclear power plant software is safe. A score of 12 is perfect, 11 is tolerable, but 10 or lower and you've got serious problems. The truth is that most software organizations are running with a score of 2 or 3, and they need serious help, because companies like Microsoft run at 12 full-time." Not everything there applies to us, of course. Chris
On Fri, Nov 21, 2003 at 01:32:38PM -0500, Jan Wieck wrote: > bootstrap mode to apply the changes, could be an idea to think of. It > would still require some downtime, but nobody can avoid that when > replacing the postgres binaries anyway, so that's not a real issue. As > long as it eliminates dump, initdb, reload it will be acceptable. Has anyone looked at using replication as a migration method? If replication can be set up in such a way that you can replicate from an old version to a new version, you can use that to build the new version of the database on a separate machine/instance while the old version is still live. With some sophisticated middleware, you could theoretically migrate without any downtime. -- Jim C. Nasby, Database Consultant jim@nasby.net Member: Triangle Fraternity, Sports Car Club of America Give your computer some brain candy! www.distributed.net Team #1828 Windows: "Where do you want to go today?" Linux: "Where do you want to go tomorrow?" FreeBSD: "Are you guys coming, or what?"
Quoth jim@nasby.net ("Jim C. Nasby"): > On Fri, Nov 21, 2003 at 01:32:38PM -0500, Jan Wieck wrote: >> bootstrap mode to apply the changes, could be an idea to think of. It >> would still require some downtime, but nobody can avoid that when >> replacing the postgres binaries anyway, so that's not a real issue. As >> long as it eliminates dump, initdb, reload it will be acceptable. > > Has anyone looked at using replication as a migration method? If > replication can be setup in such a way that you can replicate from an > old version to a new version, you can use that to build the new version > of the database on a seperate machine/instance while the old version is > still live. With some sophisticated middleware, you could theoretically > migrate without any downtime. The idea has indeed been "looked at," and seems pretty feasible. It would certainly take some sophisticated middleware to totally evade downtime. But replicating from "old version" to "new version" does have the merit of keeping the downtime to fairly much a minimum. -- wm(X,Y):-write(X),write('@'),write(Y). wm('cbbrowne','cbbrowne.com'). http://www3.sympatico.ca/cbbrowne/x.html :FATAL ERROR -- VECTOR OUT OF HILBERT SPACE
Bruce Momjian writes: > FYI, the HP testdrive farm, http://www.testdrive.hp.com, has shared > directories for most of the machines, meaning you can CVS update once > and telnet in to compile for each platform. Except that you can't open connections to the outside from these machines. -- Peter Eisentraut peter_e@gmx.net
Peter Eisentraut wrote: > Bruce Momjian writes: > > > FYI, the HP testdrive farm, http://www.testdrive.hp.com, has shared > > directories for most of the machines, meaning you can CVS update once > > and telnet in to compile for each platform. > > Except that you can't open connections to the outside from these machines. Oh, yea. You can connect to the machines with ftp, so I guess you would have to CVS update on your local machine, then push the changes to the farm. -- Bruce Momjian | http://candle.pha.pa.us pgman@candle.pha.pa.us | (610) 359-1001+ If your life is a hard drive, | 13 Roberts Road + Christ can be your backup. | Newtown Square, Pennsylvania 19073
Bruce Momjian wrote: >FYI, the HP testdrive farm, http://www.testdrive.hp.com, has shared >directories for most of the machines, meaning you can CVS update once >and telnet in to compile for each platform. > > > As Peter pointed out, these machines are firewalled. But presumably one could upload a snapshot to them. What I had in mind was a more distributed system, though. Of course, these things are not mutually exclusive - using the HP testdrive farm looks like it might be nice. But it would be hard to automate, I suspect. cheers andrew
Andrew Dunstan wrote: > Bruce Momjian wrote: > > >FYI, the HP testdrive farm, http://www.testdrive.hp.com, has shared > >directories for most of the machines, meaning you can CVS update once > >and telnet in to compile for each platform. > > > > > > > > As Peter pointed out, these machines are firewalled. But presumably > one could upload a snapshot to them. What I had in mind was a > more distributed system, though. > > Of course, these things are not mutually exclusive - using the > HP testdrive farm looks like it might be nice. But it would be > hard to automate, I suspect. I figured you could just upload once and telnet and build on each machine. -- Bruce Momjian | http://candle.pha.pa.us pgman@candle.pha.pa.us | (610) 359-1001+ If your life is a hard drive, | 13 Roberts Road + Christ can be your backup. | Newtown Square, Pennsylvania 19073
Bruce Momjian wrote: >Andrew Dunstan wrote: > > >>Bruce Momjian wrote: >> >> >> >>>FYI, the HP testdrive farm, http://www.testdrive.hp.com, has shared >>>directories for most of the machines, meaning you can CVS update once >>>and telnet in to compile for each platform. >>> >>> >>> >>> >>> >>As Peter pointed out, these machines are firewalled. But presumably >>one could upload a snapshot to them. What I had in mind was a >>more distributed system, though. >> >>Of course, these things are not mutually exclusive - using the >>HP testdrive farm looks like it might be nice. But it would be >>hard to automate, I suspect. >> >> > >I figured you could just upload once and telnet and build on each >machine. > > > What I'm working on (slowly - I'm quite busy right now, and about to be away from home for 5 days) is a system which would (or could) run from cron on every member of the farm, and upload its results to a central server where it could be displayed, in a somewhat similar way to the way the Samba build farm works - see http://build.samba.org/ - so we'd be able to see at a glance when something is broken and where and why. We could also incorporate email notification of breakage, as a refinement. I have a few pieces of this working but not a full suite yet - it will essentially be 3 perl scripts - one on the client (to run the update(s), build(s) and upload the results) and two on the central server (one for upload and one for display). When I get a demo page done I'll show it working with a couple of hosts. Of course, you can automate (almost) anything, including telnet, but right now I'm assuming the farm members will have internet connectivity. cheers andrew
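To make the client half of that design concrete, here is a hedged sketch - in Python rather than the Perl Andrew mentions, and with stage names, commands, and report fields all invented:

```python
import json
import subprocess

def run_stages(stages):
    """Run build stages in order, stop at the first failure, and return
    a report suitable for uploading to the central display server."""
    report = {"stages": [], "ok": True}
    for name, cmd in stages:
        proc = subprocess.run(cmd, shell=True, capture_output=True, text=True)
        report["stages"].append({"stage": name, "status": proc.returncode})
        if proc.returncode != 0:
            report["ok"] = False
            break              # later stages are pointless after a failure
    return report

# Stand-in commands; a real farm member would run cvs update, configure,
# make, and make check here, typically from cron.
stages = [
    ("update", "exit 0"),
    ("configure", "exit 0"),
    ("make", "exit 1"),        # simulate a build failure
    ("check", "exit 0"),       # never reached
]
report = run_stages(stages)
print(json.dumps(report))
```

Stopping at the first failing stage keeps the report small and makes "what broke, and where" visible at a glance on the central server, which is the whole point of the display page.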
On Mon, Nov 24, 2003 at 11:08:44PM -0600, Jim C. Nasby wrote: > Has anyone looked at using replication as a migration method? If Looked at? Sure. Heck, I've done it. Yes, it works. Is it painless? Well, that depends on whether you think using erserver is painless. ;-) It's rather less downtime than pg_dump | psql, I'll tell you. A -- ---- Andrew Sullivan 204-4141 Yonge Street Afilias Canada Toronto, Ontario Canada <andrew@libertyrms.info> M2P 2A8 +1 416 646 3304 x110
Just a thought. You could also run the regression test automatically after a successful build? "Andrew Dunstan" <andrew@dunslane.net> wrote in message news:3FC1FFA5.9030003@dunslane.net... > > > Jean-Michel POURE wrote: > > >Le Vendredi 21 Novembre 2003 19:47, Tom Lane a écrit : > > > > > >>I think the main value of a build farm is that we'd get nearly immediate > >>feedback about the majority of simple porting problems. Your previous > >>arguments that it wouldn't smoke everything out are certainly valid --- > >>but we wouldn't abandon the regression tests just because they don't > >>find everything. Immediate feedback is good because a patch can be > >>fixed while it's still fresh in the author's mind. > >> > >> > > > >Dear friends, > > > >We have a small build farm for pgAdmin covering Win32, FreeBSD and most GNU/ > >Linux systems. See http://www.pgadmin.org/pgadmin3/download.php#snapshots > > > >The advantage are immediate feedback and correction of problems. Also, in a > >release cycle, developers and translators are quite motivated to see their > >work published fast. > > > >Of course, it is always hard to "mesure" the real impact of a build farm. My > >opinion it that it is quite positive, as it helps tighten the links between > >people, which is free software is mostly about. > > > > > > > > Right. But I think we have been talking about using the build farm to do > test builds rather than to provide snapshots. I'd be very wary of > providing arbitrary snapshots of postgres, whereas I'd be prepared to > try a snapshot of pgadmin3 under certain circumstances. (Also, building > your own snapshot of postgres is somewhat easier than building your own > snapshot of pgadmin3).
> > cheers > > andrew
Any chance you might be able to put together a HOWTO on this? I think it would be extremely valuable to a lot of people. On Tue, Nov 25, 2003 at 11:25:34PM -0500, Andrew Sullivan wrote: > On Mon, Nov 24, 2003 at 11:08:44PM -0600, Jim C. Nasby wrote: > > > Has anyone looked at using replication as a migration method? If > > Looked at? Sure. Heck, I've done it. Yes, it works. Is it > painless? Well, that depends on whether you think using erserver is > painless. ;-) It's rather less downtime than pg_dump | psql, I'll > tell you. -- Jim C. Nasby, Database Consultant jim@nasby.net Member: Triangle Fraternity, Sports Car Club of America Give your computer some brain candy! www.distributed.net Team #1828 Windows: "Where do you want to go today?" Linux: "Where do you want to go tomorrow?" FreeBSD: "Are you guys coming, or what?"