Thread: branching for 9.2devel
The recent and wide-ranging "formatting curmudgeons" thread included suggestions by Tom and myself that we should consider branching the tree immediately after beta1.

http://archives.postgresql.org/pgsql-hackers/2011-04/msg01157.php
http://archives.postgresql.org/pgsql-hackers/2011-04/msg01162.php

This didn't get much commentary, but others have expressed support for similar ideas in the past, so perhaps we should do it? Comments?

The other major issue discussed on the thread was how frequent and how long CommitFests should be. I don't think we really came to a consensus on that one. I think that's basically a trade-off: if we make CommitFests more frequent and shorter, we can give people feedback more quickly (but I'm not sure that problem is horribly bad anyway - witness that there have been numerous reviews of WIP patches in just the last few weeks while we've been pursuing beta hard) and committers will have more time to work on their own projects, BUT the rejection rate will go up, patch authors will get less help finishing their work, it'll be harder to organize reviewers (see esp. the note by Greg Smith in that regard), and there may be even more of a crush at the end of the release cycle.

On balance, I think I prefer the current arrangement, though if we could make the CommitFests a bit shorter I would certainly like that better. I don't know how to make that happen without more reviewers, though.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On 04/25/2011 09:17 AM, Robert Haas wrote:
> The recent and wide-ranging "formatting curmudgeons" thread included
> suggestions by Tom and myself that we should consider branching the
> tree immediately after beta1.
>
> http://archives.postgresql.org/pgsql-hackers/2011-04/msg01157.php
> http://archives.postgresql.org/pgsql-hackers/2011-04/msg01162.php
>
> This didn't get much commentary, but others have expressed support for
> similar ideas in the past, so perhaps we should do it? Comments?

I am on record in the past as supporting earlier branching.

cheers

andrew
* Robert Haas (robertmhaas@gmail.com) wrote:
> On balance, I think I prefer the
> current arrangement, though if we could make the CommitFests a bit
> shorter I would certainly like that better. I don't know how to make
> that happen without more reviewers, though.

Given our current method (where we allow authors to update their patches during a CF) I don't see that we need or should try for shorter CFs. If we actually just reviewed patches once, it'd be a very different situation. So, +1 from me for keeping it as-is.

I do wonder if this is coming up now just because we're getting closer to a release and people are, unsurprisingly, wishing they had been able to get their fav. patch in before the deadline. :)

Thanks,

Stephen
Robert Haas <robertmhaas@gmail.com> wrote: > The recent and wide-ranging "formatting curmudgeons" thread > included suggestions by Tom and myself that we should consider > branching the tree immediately after beta1. My take is that it should be branched as soon as a committer would find it useful to commit something destined for 9.2 instead of 9.1. If *any* committer feels it would be beneficial, that seems like prima facie evidence that it is needed, barring a convincing argument to the contrary. > The other major issue discussed on the thread was as to how > frequent and how long CommitFests should be. > On balance, I think I prefer the current arrangement, though if we > could make the CommitFests a bit shorter I would certainly like > that better. I don't know how to make that happen without more > reviewers, though. Agreed. It is hard to picture doing shorter commit fests without that just pushing more of the initial review burden to the committers. Besides the normal "herding cats" dynamic, there is that matter of schedules in an all-volunteer project. When I've managed CFs, there have been people who were on vacation or under the deadline to complete a major paper during the first week of the CF who were able to contribute later. Some non-committer reviewers were able to complete review of one patch and move on to others. During the weeks of a single CF some patches go through multiple critiques which send them back to the author, so I'm not sure how much a shorter cycle would help with that issue for non-committer reviews. Perhaps we will get some of the stated benefits of shorter CF cycles as reviewers become more skilled and patches get to the reviewers with fewer problems. Maybe we could encourage reviewers to follow patches which they have moved to "Ready for Committer" status, to see what the committers find that they missed, to help develop better skills. -Kevin
Robert Haas <robertmhaas@gmail.com> writes:
> The recent and wide-ranging "formatting curmudgeons" thread included
> suggestions by Tom and myself that we should consider branching the
> tree immediately after beta1.
> http://archives.postgresql.org/pgsql-hackers/2011-04/msg01157.php
> http://archives.postgresql.org/pgsql-hackers/2011-04/msg01162.php
> This didn't get much commentary, but others have expressed support for
> similar ideas in the past, so perhaps we should do it? Comments?

One small issue that would have to be resolved before branching is whether and when to do a "final" pgindent run for 9.1. Seems like the alternatives would be:

1. Don't do anything more, be happy with the one run done already.
2. Do another run just before branching.
3. Make concurrent runs against HEAD and 9.1 branch sometime later.

I don't much care for #3 because it would also affect whatever developmental work had been done to that point, and thus have a considerable likelihood of causing merge problems for WIP patches. Not sure if enough has happened to really require #2.

But a much more significant issue is that I don't see a lot of point in branching until we are actually ready to start active 9.2 development. So unless you see this as a vehicle whereby committers get to start hacking 9.2 but nobody else does, there's no point in cutting a branch until shortly before a CommitFest opens. I'm not aware that we've set any dates for 9.2 CommitFests yet ...

> The other major issue discussed on the thread was as to how frequent
> and how long CommitFests should be. I don't think we really came to a
> consensus on that one.

Yeah, it did not seem like there was enough evidence to justify a change, and Greg's comments were discouraging. (Though you've run more fests than he has, so I was surprised that you weren't arguing similarly.) Should we consider scheduling one short-cycle fest during 9.2, just to see whether it works?

regards, tom lane
On Mon, Apr 25, 2011 at 3:45 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> One small issue that would have to be resolved before branching is
> whether and when to do a "final" pgindent run for 9.1. Seems like the
> alternatives would be:

If the tools become easy to run, is it possible we could get to the point where we do an indent run on every commit? This would require a stable list of system symbols, plus the tool would need to add any new symbols added by the patch. As long as the tool produced consistent output I don't see that it would produce the spurious merge conflicts we've been afraid of in the past. Those would only occur if a patch went in without pgindent being run, someone developed a patch against that tree, then pgindent was run before merging that patch. As long as it's run on every patch on commit it shouldn't cause those problems since nobody could use non-pgindented code as their base.

Personally I've never really liked the pgindent run. White-space always seemed like the least interesting of the code style issues, none of which seemed terribly important compared to the more important things like staying warning-clean and defensive coding rules. But if we're going to do it, letting things diverge for a whole release and then running it once a year seems the worst of both worlds.

--
greg
Greg Stark <gsstark@mit.edu> writes: > If the tools become easy to run is it possible we cold get to the > point where we do an indent run on every commit? This wold require a > stable list of system symbols plus the tool would need to add any new > symbols added by the patch. As long as the tool produced consistent > output I don't see that it would produce the spurious merge conflicts > we've been afraid of in the past. Those would only occur if a patch > went in without pgindent being run, someone developed a patch against > that tree, then pgindent was run before merging that patch. As long as > it's run on every patch on commit it shouldn't cause those problems > since nobody could use a non-pgindented code as their base. No, not at all, because you're ignoring the common case of a series of dependent patches that are submitted in advance of the first one having been committed. To get to the point where we could do things that way, it would have to be the case that every developer could run pgindent locally and get the same results that the committer would get. Maybe we'll get there someday, and we should certainly try. But we're not nearly close enough to be considering changing policy on that basis. > Personally I've never really liked the pgindent run. If everybody followed roughly the same coding/layout standards without prompting, we'd not need it. But they don't so we do. I think pgindent gets a not-trivial share of the credit for the frequently-mentioned fact that the PG sources are pretty readable. regards, tom lane
On Mon, Apr 25, 2011 at 11:03 AM, Greg Stark <gsstark@mit.edu> wrote: > If the tools become easy to run is it possible we cold get to the > point where we do an indent run on every commit? This wold require a > stable list of system symbols plus the tool would need to add any new > symbols added by the patch. Methinks there'd need to be an experiment run where pgindent is run each time on some sort of "parallel tree" for a little while, to let people get some feel for what changes it introduces. Unfortunately, I'd fully expect there to be some interference between patches. Your patch changes the indentation of the code a little, breaking the patch I wanted to submit just a little later. And, by the way, I had already submitted my patch. So you broke my patch, even though mine was contributed first. That seems a little antisocial... -- When confronted by a difficult problem, solve it by reducing it to the question, "How would the Lone Ranger handle this?"
On Mon, Apr 25, 2011 at 10:45 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> One small issue that would have to be resolved before branching is
> whether and when to do a "final" pgindent run for 9.1. Seems like the
> alternatives would be:
> 1. Don't do anything more, be happy with the one run done already.
> 2. Do another run just before branching.
> 3. Make concurrent runs against HEAD and 9.1 branch sometime later.
> I don't much care for #3 because it would also affect whatever
> developmental work had been done to that point, and thus have a
> considerable likelihood of causing merge problems for WIP patches.
> Not sure if enough has happened to really require #2.

I'd vote for #1, unless by doing #2 we can fix the problems created by omission of some typedefs from the symbol tables emitted by newer gcc versions.

> But a much more significant issue is that I don't see a lot of point in
> branching until we are actually ready to start active 9.2 development.
> So unless you see this as a vehicle whereby committers get to start
> hacking 9.2 but nobody else does, there's no point in cutting a branch
> until shortly before a CommitFest opens. I'm not aware that we've set
> any dates for 9.2 CommitFests yet ...

That doesn't strike me as a condition prerequisite for opening the tree. If anything, I'd say we ought to decide first when we'll be open for development (current question) and then schedule CommitFests around that. And I do think there is some value in having the tree open even if we haven't gotten the schedule quite hammered out yet, because even if we don't have any formal process in place to be working through the 9.2 queue, some people might choose to work on it anyway.

>> The other major issue discussed on the thread was as to how frequent
>> and how long CommitFests should be. I don't think we really came to a
>> consensus on that one.
>
> Yeah, it did not seem like there was enough evidence to justify a
> change, and Greg's comments were discouraging. (Though you've run more
> fests than he has, so I was surprised that you weren't arguing
> similarly.) Should we consider scheduling one short-cycle fest during
> 9.2, just to see whether it works?

Well, I basically think Greg is right, but the process is so darn much work that I don't want to be too quick to shut down ideas for improvement. If we do a one-week CommitFest, then there is time for ONE review. Either a reviewer will do it, and no committer will look at it, or the other way around, but it will not get the level of attention that it does today. There is a huge amount of work involved in getting "up to speed" on a patch, and so it really makes a lot more sense to me to do it in a sustained push than in little dribs and drabs. I have to think my productivity would be halved by spending a week on it and then throwing in the towel.

I'm inclined to suggest that we just go ahead and schedule five CommitFests, using the same schedule we have used for the last couple of releases, but with one more inserted at the front end:

May 15, 2011 - June 14, 2011
July 15, 2011 - August 14, 2011
September 15, 2011 - October 14, 2011
November 15, 2011 - December 14, 2011
January 15, 2012 - February 14, 2012

I also think we should publicize as widely as possible that design proposals are welcome any time. Maybe that's not what we've said in the past, but I think it's the new normal, and we should make sure people know that.
And I think we should reaffirm our previous commitment not to accept new, previously-unreviewed large patches in the last CommitFest. If anything, we should strengthen it in some way. The crush of 100 patches in the last CF of the 9.1 cycle was entirely due to people waiting until the last minute, and a lot of that stuff was pretty half-baked, including a bunch of things that got committed after substantial further baking that should properly have been done much sooner.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On Mon, Apr 25, 2011 at 11:32 AM, Christopher Browne <cbbrowne@gmail.com> wrote:
> Methinks there'd need to be an experiment run where pgindent is run
> each time on some sort of "parallel tree" for a little while, to let
> people get some feel for what changes it introduces.

The point is that if the tools worked "everywhere", "the same", then it should be run *before* the commit is finalized (git has a hundred+1 ways to get this to happen, be creative). So if you ever ran it on a $COMMIT from the published tree, it would never do anything. From the sounds of it though, it's not quite ready for that.

> Unfortunately, I'd fully expect there to be some interference between patches.
>
> Your patch changes the indentation of the code a little, breaking the
> patch I wanted to submit just a little later. And, by the way, I had
> already submitted my patch. So you broke my patch, even though mine
> was contributed first.

But if the only thing changed was the indentation level (because $PATCH2 wrapped a section of code your $PATCH1 changes completely in a new block, or removed a block level), git tools are pretty good at handling that. So, if everything is *always* pgindent clean, that means your new patch is too, and the only conflicting white-space-only change would be a complete block-level indentation (easily handled). And you still have those block-level indentation changes even if not using pgindent.

Of course, that all depends on:
1) pgindent being "work everywhere", "exactly the same"
2) Discipline of all new published commits being "pgindent clean".

--
Aidan Van Dyk                                          Create like a god,
aidan@highrise.ca                                    command like a king,
http://www.highrise.ca/                                work like a slave.
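[For illustration of the "run it before the commit is finalized" idea: a minimal sketch of a git pre-commit hook. It assumes a hypothetical pgindent-style wrapper that takes file names and reindents them in place, and that the working tree has no other unstaged changes; the invocation is a placeholder, not the project's actual tooling.]

#!/usr/bin/perl
# .git/hooks/pre-commit -- refuse the commit if staged C files are not
# "pgindent clean".  ASSUMPTION: a pgindent-style wrapper exists that takes
# file names and reindents them in place; the call below is a placeholder.
use strict;
use warnings;

# C sources staged for this commit
my @files = grep { /\.[ch]$/ }
    split /\n/, `git diff --cached --name-only --diff-filter=ACM`;
exit 0 unless @files;

# Reindent the working copies (placeholder invocation).
system('pgindent', @files) == 0
    or die "pre-commit: pgindent failed\n";

# If pgindent changed anything, the working tree now differs from the index.
my $dirty = `git diff --name-only -- @files`;
if ($dirty ne '')
{
    print STDERR "pre-commit: these files were not pgindent-clean:\n$dirty";
    print STDERR "pre-commit: review the changes, re-stage, and commit again.\n";
    exit 1;
}
exit 0;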
Robert Haas <robertmhaas@gmail.com> writes:
> On Mon, Apr 25, 2011 at 10:45 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> But a much more significant issue is that I don't see a lot of point in
>> branching until we are actually ready to start active 9.2 development.
>> So unless you see this as a vehicle whereby committers get to start
>> hacking 9.2 but nobody else does, there's no point in cutting a branch
>> until shortly before a CommitFest opens. I'm not aware that we've set
>> any dates for 9.2 CommitFests yet ...

> That doesn't strike me as a condition prerequisite for opening the
> tree. If anything, I'd say we ought to decide first when we'll be
> open for development (current question) and then schedule CommitFests
> around that. And I do think there is some value in having the tree
> open even if we haven't gotten the schedule quite hammered out yet,
> because even if we don't have any formal process in place to be
> working through the 9.2 queue, some people might choose to work on it
> anyway.

You're ignoring the extremely real costs involved in an early branch, namely having to double-patch every bug fix we make during beta. (And no, my experiences with git cherry-pick are not so pleasant as to make me feel that that's a non-problem.) I really don't think that we should branch until we're willing to start doing 9.2 development in earnest. You're essentially saying that we should encourage committers to do some cowboy committing of whatever 9.2 stuff seems ready, and never mind the distributed costs that imposes on the rest of the project. I don't buy that.

IOW, the decision process ought to be set 9.2 schedule -> set CF dates -> set branch date. You're attacking it from the wrong end.

> I'm inclined to suggest that we just go ahead and schedule five
> CommitFests, using the same schedule we have used for the last couple
> of releases, but with one more inserted at the front end:
> May 15, 2011 - June 14, 2011
> July 15, 2011 - August 14, 2011
> September 15, 2011 - October 14, 2011
> November 15, 2011 - December 14, 2011
> January 15, 2012 - February 14, 2012

Well, if you go with that, then I will personally refuse to have anything to do with the first CF, because I was intending to spend my non-bug-fix time during beta on reading the already committed but probably still pretty buggy stuff from 9.1 (SSI and SR in particular). I think a schedule like the above will guarantee that no beta testing gets done by the development community at all, which will be great for moving 9.2 along and terrible for the release quality of 9.1.

I think the earliest we could start a CF without blowing off the beta process entirely is June. Maybe we could start CFs June 1, August 1, etc?

regards, tom lane
Aidan Van Dyk <aidan@highrise.ca> wrote: > 2) Discipline of all new published commits being "pgindent clean". Heck, I think it would be reasonable to require that patch submitters run it before creating their patches. If people merged in changes from the main repository and then ran pgindent, I don't think there would be much in the way of merge problems from it. Personally, once I had pgindent set up I didn't find running it any more onerous than running filterdiff to get things into context diff format. (That is, both seemed pretty trivial.) The problem is that getting it set up isn't yet trivial. This is all assuming that we fix that. -Kevin
On Mon, Apr 25, 2011 at 12:03 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> You're ignoring the extremely real costs involved in an early branch,
> namely having to double-patch every bug fix we make during beta.
> (And no, my experiences with git cherry-pick are not so pleasant as
> to make me feel that that's a non-problem.) I really don't think that
> we should branch until we're willing to start doing 9.2 development in
> earnest. You're essentially saying that we should encourage committers
> to do some cowboy committing of whatever 9.2 stuff seems ready, and
> never mind the distributed costs that imposes on the rest of the
> project. I don't buy that.
>
> IOW, the decision process ought to be set 9.2 schedule -> set CF dates
> -> set branch date. You're attacking it from the wrong end.
>
>> I'm inclined to suggest that we just go ahead and schedule five
>> CommitFests, using the same schedule we have used for the last couple
>> of releases, but with one more inserted at the front end:
>
>> May 15, 2011 - June 14, 2011
>> July 15, 2011 - August 14, 2011
>> September 15, 2011 - October 14, 2011
>> November 15, 2011 - December 14, 2011
>> January 15, 2012 - February 14, 2012
>
> Well, if you go with that, then I will personally refuse to have
> anything to do with the first CF, because I was intending to spend
> my non-bug-fix time during beta on reading the already committed but
> probably still pretty buggy stuff from 9.1 (SSI and SR in particular).
> I think a schedule like the above will guarantee that no beta testing
> gets done by the development community at all, which will be great for
> moving 9.2 along and terrible for the release quality of 9.1.
>
> I think the earliest we could start a CF without blowing off the beta
> process entirely is June. Maybe we could start CFs June 1, August 1,
> etc?

I can't object to taking another two weeks, especially since that would give people who may have been expecting a later branch more time to get their stuff into shape for CF1.

One problem with that is that it would make the fourth CommitFest start on December 1st, which will tend to make that CommitFest pretty half-baked, due to the large number of PostgreSQL developers who observe Christmas. That seems particularly bad if we're planning to end the cycle at that point. Perhaps that would be a good time to employ Peter's idea for a short, one week CommitFest:

CF #1: June 1-30
CF #2: August 1-31
CF #3: October 1-31
CF #4 (one week shortened CF): December 1-7
CF #5: January 1-31

That would give people another crack at getting feedback before the final push, right at the time of the release cycle when timely feedback becomes most important.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
"Kevin Grittner" <Kevin.Grittner@wicourts.gov> writes: > Aidan Van Dyk <aidan@highrise.ca> wrote: >> Of course, that all depends on: >> 1) pgindent being "work everywhere", "exactly the same" >> 2) Discipline of all new published commits being "pgindent clean". > The problem is that getting it set up isn't yet trivial. This is > all assuming that we fix that. Yeah, there is not much point in thinking about #2 until we have #1. regards, tom lane
On Mon, Apr 25, 2011 at 1:04 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote: > "Kevin Grittner" <Kevin.Grittner@wicourts.gov> writes: >> Aidan Van Dyk <aidan@highrise.ca> wrote: >>> Of course, that all depends on: >>> 1) pgindent being "work everywhere", "exactly the same" >>> 2) Discipline of all new published commits being "pgindent clean". > >> The problem is that getting it set up isn't yet trivial. This is >> all assuming that we fix that. > > Yeah, there is not much point in thinking about #2 until we have #1. Would this be a good GSoC project (or has the deadline passed)? -- Thanks, David Blewett
On 04/25/2011 01:12 PM, David Blewett wrote: > On Mon, Apr 25, 2011 at 1:04 PM, Tom Lane<tgl@sss.pgh.pa.us> wrote: >> "Kevin Grittner"<Kevin.Grittner@wicourts.gov> writes: >>> Aidan Van Dyk<aidan@highrise.ca> wrote: >>>> Of course, that all depends on: >>>> 1) pgindent being "work everywhere", "exactly the same" >>>> 2) Discipline of all new published commits being "pgindent clean". >>> The problem is that getting it set up isn't yet trivial. This is >>> all assuming that we fix that. >> Yeah, there is not much point in thinking about #2 until we have #1. > Would this be a good GSoC project (or has the deadline passed)? > Greg Smith and I have done some work on it, and we're going to discuss it at pgCon. I don't think there's terribly far to go. cheers andrew
On mån, 2011-04-25 at 09:17 -0400, Robert Haas wrote: > it'll be harder to organize reviewers (see esp. the note > by Greg Smith in that regard), As far as I'm concerned, those who run the commit fests will have to work out how to best configure the commit fests. I have no strong feelings about my various suggestions; they were just ideas. Altogether, I feel that keeping it the same is probably the more acceptable option at the moment.
All,

> I'm not aware that we've set
> any dates for 9.2 CommitFests yet ...

I thought the idea of setting the initial CF for July 15th for 9.1 was that we would consistently have the first CF in July every year? As discussed at that time, there's value to our corporate-sponsored developers in knowing a regular annual cycle.

As much as I'd like to start development early officially, I'm with Tom in being pessimistic about the bugs we're going to find in SSI, Collations and Synch Rep. Frankly, if you and Tom weren't so focused on fixing it, I'd be suggesting that we pull Collations from 9.1; there seem to be a *lot* of untested issues there still.

I do think that we could bump the first CF up to July 1st, but I don't think sooner than that is realistic without harming beta testing ... and potentially delaying the release. Let's first demonstrate a track record in getting a final release out consistently by July, and if that works, maybe we can bump up the date.

=============

Re: shorter CF cycle: this works based on the idea of "one strike" for each patch. That has the benefit of pushing more of the fixing work onto the authors and having less of it on the committers: "Not ready, fix X, Y, Z and resubmit." I think that doing things that way might actually work. However, it will require us to change the CF process in several ways.

I'll also point out that pushing fixing work back on the authors is something which committers could be doing *already* in the present structure. And that there's no requirement that our present CFs need to last for a month.

The main issues with a "monthly commit week" are:

1) Triage: it's hard to go from first-time reviewer --> review --> committer in a week, so a lot of patches would get booted to the next CF just due to time, and

2) availability: some patches can only be understood by certain committers, who are more likely to be gone for a week than a month, and

3) the CF tool, which is currently fairly manual when it comes to pushing a patch from one CF to the other. This is the easiest thing to fix.

However, given all that, there would be some serious advantages to a monthly commit week:

a) faster feedback to submitters, and
b) more chances for a developer to fix their feature and try again, and
c) more of an emphasis on having the submitter fix what's wrong based on advice, which
   * conserves scarce committer time, and
   * helps the submitters learn more and become better coders,
d) eliminates the annoying dead time in each CF, where for the last week of the CF only 2 extremely difficult patches are under review, and
e) eliminates the stigma/trauma of having your stuff rejected, because everyone's stuff will be rejected at least once before acceptance, and
f) even allows us to punt on "everything must be reviewed" if nothing gets punted more than once.

Overall, I think the advantages to a faster/shorter CF cycle outweigh the disadvantages enough to make it at least worth trying. I'm willing to run the first 1-week CF, as well as several of the others during the 9.2 cycle, to try and make it work.

I also have an idea for dealing with Problem 1: we actually have 2 weeks, a "triage week" and a "commitfest week". During the triage week, non-committer volunteers will go through the pending patches and flag stuff which is obviously either broken or ready. That way, by the time committers actually need to review stuff during CF week, the easy patches will have already been eliminated.
Not only will this streamline processing of the patches, it'll help us train new reviewers by giving them a crack at the easy reviews before Tom/Robert/Heikki look at them. It may not work. I think it's worth trying though, and we can always revert to the present system if the 1-week CFs are impeding development or are accumulating a snowball of patch backlog. -- Josh Berkus PostgreSQL Experts Inc. http://pgexperts.com
On Mon, Apr 25, 2011 at 4:17 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> No, not at all, because you're ignoring the common case of a series of
> dependent patches that are submitted in advance of the first one having
> been committed.

Uh, true.

> To get to the point where we could do things that way, it would have
> to be the case that every developer could run pgindent locally and get
> the same results that the committer would get. Maybe we'll get there
> someday, and we should certainly try. But we're not nearly close enough
> to be considering changing policy on that basis.

Fwiw I tried getting Gnu indent to work. I'm having a devil of a time figuring out how to get even remotely similar output. I can't even get -ncsb to work, which means it puts *every* one-line comment into a block with the /* and */ delimiters on a line by themselves. And it does line-wrapping differently, such that any lines longer than the limit are split at the *first* convenient place rather than the last, which produces some, imho, strange looking lines.

And it doesn't take a file for the list of typedefs. You have to provide each one as an argument on the command-line. I hacked the source to add the typedefs to the gperf hash it uses, but if we have to patch it, it rather defeats the point of even pondering switching. Afaict it hasn't seen development since 2008, so I don't get the impression it's any more of a live project than the NetBSD source.

All in all, even if they've fixed the things it used to mangle, I don't see much point in switching from one moribund project we have to patch to another moribund project we have to patch, especially as it will mean patches won't backpatch as easily since the output will be quite different.

--
greg
Josh Berkus <josh@agliodbs.com> writes: > As much as I'd like to start development early officially, I'm with Tom > in being pessimistic about the bugs we're going to find in SSI, > Collations and Synch Rep. Frankly, if you and Tom weren't so focused on > fixing it, I'd be suggesting that we pull Collations from 9.1; there > seems to be a *lot* of untested issues there still. If I had realized two months ago what poor shape the collations patch was in, I would have argued to pull it. But the work is done now; there's no reason not to keep it in. The cost is that I wasn't paying any attention to these other areas for those two months, and we can't get that back by pulling the feature. > I do think that we could bump the first CF up to July 1st, but I don't > think sooner than that is realistic without harming beta testing ... and > potentially delaying the release. Let's first demonstrate a track > record in getting a final release out consistently by July, and if that > works, maybe we can bump up the date. The start-date-on-the-15th was an oddity anyway, and it cannot work well in November or December. +1 for putting the CFs back to starting on the 1st. > Overall, I think the advantages to a faster/shorter CF cycle outweigh > the disadvantages enough to make it at least worth trying. I'm willing > to run the first 1-week CF, as well as several of the others during the > 9.2 cycle to try and make it work. I think we could try this once or twice without committing to doing the whole 9.2 cycle that way. > I also have an idea for dealing with Problem 1: we actually have 2 > weeks, a "triage week" and a "commitfest week". During the Triage > week, non-committer volunteers will go through the pending patches and > flag stuff which is obviously either broken or ready. That way, by the > time committers actually need to review stuff during CF week, the easy > patches will have already been eliminated. Not only will this > streamline processing of the patches, it'll help us train new reviewers > by giving them a crack at the easy reviews before Tom/Robert/Heikki look > at them. We've sort of unofficially done that already, in that lately it seems the committers don't pay much attention to a new fest until several days in, when things start to reach "ready for committer" state. That behavior would definitely not work very well in 1-week CFs, so I agree that some kind of multi-stage design would be needed. regards, tom lane
Greg Stark <gsstark@mit.edu> writes: > Fwiw I tried getting Gnu indent to work. I'm having a devil of a time > figuring out how to get even remotely similar output. > ... > And it doesn't take a file for the list of typedefs. You have to > provide each one as an argment on the command-line. *Ouch*. Really? It's hard to believe that anyone would consider it remotely usable for more than toy-sized projects, if you have to list all the typedef names on the command line. regards, tom lane
On 04/25/2011 03:30 PM, Tom Lane wrote:
> Greg Stark<gsstark@mit.edu> writes:
>> Fwiw I tried getting Gnu indent to work. I'm having a devil of a time
>> figuring out how to get even remotely similar output.
>> ...
>> And it doesn't take a file for the list of typedefs. You have to
>> provide each one as an argment on the command-line.
> *Ouch*. Really? It's hard to believe that anyone would consider it
> remotely usable for more than toy-sized projects, if you have to list
> all the typedef names on the command line.

Looks like BSD does the same. It's just that we hide it in pgindent:

$INDENT -bad -bap -bc -bl -d0 -cdb -nce -nfc1 -di12 -i4 -l79 \
    -lp -nip -npro -bbb $EXTRA_OPTS \
    `egrep -v '^(FD_SET|date|interval|timestamp|ANY)$' "$TYPEDEFS" | sed -e '/^$/d' -e 's/.*/-T& /'`

I agree it's horrible.

cheers

andrew
On Mon, Apr 25, 2011 at 2:26 PM, Josh Berkus <josh@agliodbs.com> wrote:
> I thought the idea of setting the initial CF for July 15th for 9.1 was
> that we would consistently have the first CF in July every year? As
> discussed at that time, there's value to our corporate-sponsored
> developers in knowing a regular annual cycle.

Huh? We've never guaranteed anyone a regular annual cycle, and we've never had one. We agreed to use the same schedule for 9.1 as for 9.0; I don't remember anything more than that being discussed anywhere, ever.

> As much as I'd like to start development early officially, I'm with Tom
> in being pessimistic about the bugs we're going to find in SSI,
> Collations and Synch Rep. Frankly, if you and Tom weren't so focused on
> fixing it, I'd be suggesting that we pull Collations from 9.1; there
> seems to be a *lot* of untested issues there still.
>
> I do think that we could bump the first CF up to July 1st, but I don't
> think sooner than that is realistic without harming beta testing ... and
> potentially delaying the release. Let's first demonstrate a track
> record in getting a final release out consistently by July, and if that
> works, maybe we can bump up the date.

I have no idea where you're coming up with this estimate.

> I also have an idea for dealing with Problem 1: we actually have 2
> weeks, a "triage week" and a "commitfest week". During the Triage
> week, non-committer volunteers will go through the pending patches and
> flag stuff which is obviously either broken or ready. That way, by the
> time committers actually need to review stuff during CF week, the easy
> patches will have already been eliminated. Not only will this
> streamline processing of the patches, it'll help us train new reviewers
> by giving them a crack at the easy reviews before Tom/Robert/Heikki look
> at them.

This is basically admitting on its face that one week isn't long enough. One week of triage and one week of CommitFest is two weeks, and right there we've lost all of the supposed benefit of reducing the percentage of time we spend "in CommitFest mode". Furthermore, it's imposing a rigid separation between triage and commit that seems to me to have no value. If a patch is ready to commit after 3 days, should we ignore it for 4 days and then go back and look at it? Or should we maybe just commit it while the thread is still fresh in someone's mind and move on? The current process allows for that and, well, it doesn't work perfectly, but defining more rigid process around the existing process does not seem likely to help.

At the risk of getting a bit cranky, you haven't participated in a material way in any CommitFest we've had in well over a year. AFAICS, the first, last, and only time you are listed in the CommitFest application is as co-reviewer of a patch in July 2009, which means that the last time you really had a major role in this process was during the 8.4 cycle. So I'm really rather suspicious that you know what's wrong with the process and how to fix it better than the people who are involved currently. I think what we need here is more input from the people who are regularly submitting and reviewing patches, and those who have tried recently but been turned off by some aspect of the process.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On 04/25/2011 02:26 PM, Josh Berkus wrote:
> Overall, I think the advantages to a faster/shorter CF cycle outweigh
> the disadvantages enough to make it at least worth trying. I'm willing
> to run the first 1-week CF, as well as several of the others during the
> 9.2 cycle to try and make it work.

It will be interesting to see if it's even possible to get all the patches assigned a reviewer in a week.

The only idea I've come up with for making this idea more feasible is to really buckle down on the idea that all submitters should also be reviewing. I would consider it a fair policy to say that anyone who doesn't volunteer to review someone else's patch gets nudged toward the bottom of the reviewer priority list. Didn't offer to review someone else's patch? Don't be surprised if you get bumped out of a week-long 'fest.

This helps with two problems. It helps fill in short-term reviewers, and in the long term it makes submitters more competent. The way things are set up now, anyone who submits a patch without having done a review first is very likely to get their patch bounced back; the odds of getting everything right without that background are low. Ideally submitters might even start fixing their own patches without reviewer prompting, once they get into someone else's and realize what they didn't do.

--
Greg Smith   2ndQuadrant US    greg@2ndQuadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support  www.2ndQuadrant.us
On 04/25/2011 12:40 PM, Robert Haas wrote: > > At the risk of getting a bit cranky, you haven't participated in a > material way in any CommitFest we've had in well over a year. AFAICS, > the first, last, and only time you are listed in the CommitFest > application is as co-reviewer of a patch in July 2009, which means > that the last time you really had a major roll in this process was > during the 8.4 cycle. So I'm really rather suspicious that you know > what's wrong with the process and how to fix it better than the people > who are involved currently. I think we need here is more input from > the people who are regularly submitting and reviewing patches, and > those who have tried recently but been turned off by some aspect of > the process. Although a valid point, let's remember that conversely it is very easy for those who are closest to the process to lose themselves within that process. It never hurts to consider an objective opinion. Sincerely, Joshua D. Drake -- Command Prompt, Inc. - http://www.commandprompt.com/ PostgreSQL Support, Training, Professional Services and Developement Organizers of the PostgreSQL Conference - http://www.postgresqlconference.org/ @cmdpromptinc - @postgresconf - 509-416-6579
On Mon, Apr 25, 2011 at 3:47 PM, Joshua D. Drake <jd@commandprompt.com> wrote:
> On 04/25/2011 12:40 PM, Robert Haas wrote:
>> At the risk of getting a bit cranky, you haven't participated in a
>> material way in any CommitFest we've had in well over a year. AFAICS,
>> the first, last, and only time you are listed in the CommitFest
>> application is as co-reviewer of a patch in July 2009, which means
>> that the last time you really had a major roll in this process was
>> during the 8.4 cycle. So I'm really rather suspicious that you know
>> what's wrong with the process and how to fix it better than the people
>> who are involved currently. I think we need here is more input from
>> the people who are regularly submitting and reviewing patches, and
>> those who have tried recently but been turned off by some aspect of
>> the process.
>
> Although a valid point, let's remember that conversely it is very easy for
> those who are closest to the process to lose themselves within that process.
> It never hurts to consider an objective opinion.

I agree. That's why I said "I think we need more input from the people who are regularly submitting and reviewing patches", rather than, say, me, Tom, and Greg. Not that we're not smart guys, but we are in this up to our eyeballs.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Andrew Dunstan <andrew@dunslane.net> writes: > On 04/25/2011 03:30 PM, Tom Lane wrote: >> *Ouch*. Really? It's hard to believe that anyone would consider it >> remotely usable for more than toy-sized projects, if you have to list >> all the typedef names on the command line. > Looks like BSD does the same. It's just that we hide it in pgindent: Oh wow, I never noticed that. That's going to be a severe problem for the "run it anywhere" goal. The typedefs list is already close to 32K, and is not going anywhere but up. There are already platforms on which a shell command line that long will fail, and I think once we break past 32K we might find it failing on even pretty popular ones. regards, tom lane
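[To put a rough number on the command-line concern: a small script like the one below tallies how many bytes the assembled -T options would occupy, assuming the typedefs list is a plain one-name-per-line file as pgindent consumes it; the file name is a placeholder, not a fixed project path.]

#!/usr/bin/perl
# Rough estimate of the command-line space consumed by indent's -T options.
# ASSUMPTION: the typedefs file holds one typedef name per line.
use strict;
use warnings;

my $file = shift @ARGV || 'typedefs.list';   # placeholder path
open my $fh, '<', $file or die "cannot open $file: $!";

my ($names, $bytes) = (0, 0);
while (my $name = <$fh>)
{
    chomp $name;
    next if $name eq '';
    $names++;
    $bytes += length("-T$name") + 1;   # each "-Tname" plus a separating space
}
printf "%d typedefs, roughly %d bytes of -T arguments\n", $names, $bytes;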
On Apr 25, 2011, at 3:28 PM, Tom Lane wrote:
> Andrew Dunstan <andrew@dunslane.net> writes:
>> On 04/25/2011 03:30 PM, Tom Lane wrote:
>>> *Ouch*. Really? It's hard to believe that anyone would consider it
>>> remotely usable for more than toy-sized projects, if you have to list
>>> all the typedef names on the command line.
>
>> Looks like BSD does the same. It's just that we hide it in pgindent:
>
> Oh wow, I never noticed that. That's going to be a severe problem for
> the "run it anywhere" goal. The typedefs list is already close to 32K,
> and is not going anywhere but up. There are already platforms on which
> a shell command line that long will fail, and I think once we break past
> 32K we might find it failing on even pretty popular ones.

I take it the behavior of the `indent` program is sufficiently complex that it couldn't be modeled sufficiently easily by a smart enough perl script?

Regards,

David
--
David Christensen
End Point Corporation
david@endpoint.com
On 11-04-25 03:40 PM, Robert Haas wrote:
> At the risk of getting a bit cranky, you haven't participated in a
> material way in any CommitFest we've had in well over a year. AFAICS,
> the first, last, and only time you are listed in the CommitFest
> application is as co-reviewer of a patch in July 2009, which means
> that the last time you really had a major roll in this process was
> during the 8.4 cycle. So I'm really rather suspicious that you know
> what's wrong with the process and how to fix it better than the people
> who are involved currently. I think we need here is more input from
> the people who are regularly submitting and reviewing patches, and
> those who have tried recently but been turned off by some aspect of
> the process.

I reviewed a handful of patches during commitfests during the 9.1 release. I think a commitfest lasting one week will make me review fewer patches, not more. At the start of a commitfest I would volunteer to review a single patch, knowing that it wouldn't be hard to find 4-6 hours to review the patch during the next few weeks. Once I was done with the first patch, when I thought I'd have some time in the next few days to review another patch, I would pick another one off the list. At the start of the commitfest I wasn't concerned about picking up a patch because I knew I had lots of time to get to it. Near the end of the last commitfest I wasn't concerned about picking up an unassigned patch because there were so many patches that people had picked up earlier in the commitfest still waiting for review that I didn't think I'd get pressured about a patch I had only picked up a day or two ago. If I knew I was expected to turn a review around in a short window, I think I would only pick up a patch if I was 90% sure I'd have time to get to it during the week. I sometimes can spend $work time on reviewing patches, but I can rarely block time off or schedule when that will be, and the same somewhat applies to the patch reviews I do on my own time (postgresql stuff comes after other commitments).

It is easy to say four weeks in a row "I won't have time to review one patch this week"; it is harder to say "I won't have time to review a single patch in the next month".

I also should add that sometimes I'd review a patch and the only issue from the review might be "is this really how we want the thing to work?", and the commitfest app doesn't have a good state for this. If a patch needs feedback or a decision from the community, it sometimes isn't clear when enough silence or +1's justify moving it to a committer or bouncing the patch to be reworked.

Steve
On Mon, Apr 25, 2011 at 4:29 PM, Steve Singer <ssinger_pg@sympatico.ca> wrote: > On 11-04-25 03:40 PM, Robert Haas wrote: >> >> At the risk of getting a bit cranky, you haven't participated in a >> material way in any CommitFest we've had in well over a year. AFAICS, >> the first, last, and only time you are listed in the CommitFest >> application is as co-reviewer of a patch in July 2009, which means >> that the last time you really had a major roll in this process was >> during the 8.4 cycle. So I'm really rather suspicious that you know >> what's wrong with the process and how to fix it better than the people >> who are involved currently. I think we need here is more input from >> the people who are regularly submitting and reviewing patches, and >> those who have tried recently but been turned off by some aspect of >> the process. > > I reviewed a handful of patches during commitfests during the 9.1 release. > I think a commitfest lasting one week will make me review fewer patches > not more. At the start of a commitfest I would volunteer to review a > single patch knowing that it wouldn't be hard to find 4-6 hours to review > the patch during the next few weeks. Once I was done with the first patch > when I thought I'd have sometime in the next few days to review another > patch I would pick another one off the list. At the start of the > commitfest I wasn't concerned about picking up a patch because I knew I had > lots of time to get to it. Near the end of the last commitfest I wasn't > concerned about picking up an unassigned patch because there were so many > patches people picked up earlier on in the commitfest waiting for review > that I didn't think I'd get pressured on a patch I had only picked up a day > or two ago. If I knew I was expected to turn a review around in a short > window I think I would only pick up a patch if I was 90% sure I'd have time > to get to it during the week. I sometimes can spend $work time on reviewing > patches but I can rarely block time off or schedule when that will be, and > the same somewhat applies to the patch reviews I do on my own time > (postgresql stuff comes after other commitments). > > It is easy to say four weeks in a row "I won't have time to review one patch > this week" it is harder to say "I won't have time to review a single patch > in the next month" > > I also should add that sometimes I'd review a patch and the only issue from > the review might be "is this really how we want the thing to work?" and the > commitfest app doesn't have a good state for this. If patch needs feedback > or a decision from the community and it sometimes isn't clear when enough > silence or +1's justify moving it to a committer or bouncing the patch to be > reworked. Thanks for the input. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company
On 04/25/2011 04:28 PM, Tom Lane wrote: > Andrew Dunstan<andrew@dunslane.net> writes: >> On 04/25/2011 03:30 PM, Tom Lane wrote: >>> *Ouch*. Really? It's hard to believe that anyone would consider it >>> remotely usable for more than toy-sized projects, if you have to list >>> all the typedef names on the command line. >> Looks like BSD does the same. It's just that we hide it in pgindent: > Oh wow, I never noticed that. That's going to be a severe problem for > the "run it anywhere" goal. The typedefs list is already close to 32K, > and is not going anywhere but up. There are already platforms on which > a shell command line that long will fail, and I think once we break past > 32K we might find it failing on even pretty popular ones. > > Well, my solution would be to replace pgindent with a perl script (among other advantages, it would then run everywhere we build, including Windows), and filter the typedefs list so that we only use the ones that appear in each file with that file, instead of passing the whole list to each file. cheers andrew
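[A minimal sketch of the per-file filtering Andrew describes, assuming a one-name-per-line typedefs file; the bare "indent" invocation stands in for the full pgindent option set and is a placeholder. Passing a typedef that a file doesn't actually use is harmless, so a simple word match errs on the safe side.]

#!/usr/bin/perl
# Sketch: pass each source file only the typedefs that actually appear in it,
# instead of the whole list.  ASSUMPTIONS: the typedefs file is one name per
# line; the indent command below is a placeholder for the real option set.
use strict;
use warnings;

my ($typedef_file, @sources) = @ARGV;

open my $tf, '<', $typedef_file or die "cannot open $typedef_file: $!";
chomp(my @typedefs = grep { /\S/ } <$tf>);
close $tf;

foreach my $src (@sources)
{
    open my $sf, '<', $src or die "cannot open $src: $!";
    my $code = do { local $/; <$sf> };
    close $sf;

    # Keep only the typedefs mentioned in this file; false positives (a name
    # that only appears in a comment or string) cost a few bytes, nothing more.
    my @t_opts = map { "-T$_" } grep { $code =~ /\b\Q$_\E\b/ } @typedefs;

    system('indent', @t_opts, $src) == 0
        or warn "indent failed on $src\n";
}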
On 04/25/2011 01:55 PM, Andrew Dunstan wrote: > > > > Well, my solution would be to replace pgindent with a perl script > (among other advantages, it would then run everywhere we build, > including Windows), and filter the typedefs list so that we only use > the ones that appear in each file with that file, instead of passing > the whole list to each file. Perl seems like a winner here as a whole. It would also open the support of the script up to a wider community. JD -- Command Prompt, Inc. - http://www.commandprompt.com/ PostgreSQL Support, Training, Professional Services and Developement Organizers of the PostgreSQL Conference - http://www.postgresqlconference.org/ @cmdpromptinc - @postgresconf - 509-416-6579
Andrew Dunstan <andrew@dunslane.net> writes: > Well, my solution would be to replace pgindent with a perl script (among > other advantages, it would then run everywhere we build, including > Windows), Sounds good to me ... who's volunteering? > and filter the typedefs list so that we only use the ones > that appear in each file with that file, instead of passing the whole > list to each file. Not sure I gather the value of doing that. regards, tom lane
On 04/25/2011 07:00 PM, Tom Lane wrote:
> Andrew Dunstan<andrew@dunslane.net> writes:
>> Well, my solution would be to replace pgindent with a perl script (among
>> other advantages, it would then run everywhere we build, including
>> Windows),
> Sounds good to me ... who's volunteering?

I'll take a look.

>> and filter the typedefs list so that we only use the ones
>> that appear in each file with that file, instead of passing the whole
>> list to each file.
> Not sure I gather the value of doing that.

Well, that way you'll have a handful of -Ttypedef parameters for each invocation of indent instead of a gazillion of them. No more command line length issues.

cheers

andrew
Andrew Dunstan <andrew@dunslane.net> writes: > On 04/25/2011 07:00 PM, Tom Lane wrote: >> Andrew Dunstan<andrew@dunslane.net> writes: >>> and filter the typedefs list so that we only use the ones >>> that appear in each file with that file, instead of passing the whole >>> list to each file. >> Not sure I gather the value of doing that. > Well, that way you'll have a handful of -Ttypdef parameters for each > invocation of indent instead of a gazillion of them. No more command > line length issues. Well, -Ttypedef is wrong on its face. Right would be a switch specifying the name of the file to read the typedef list from. Then you don't need massive script-level infrastructure to try to spoonfeed that data to the program doing the work. regards, tom lane
On 04/25/2011 07:48 PM, Tom Lane wrote: > >> Well, that way you'll have a handful of -Ttypdef parameters for each >> invocation of indent instead of a gazillion of them. No more command >> line length issues. > Well, -Ttypedef is wrong on its face. Right would be a switch > specifying the name of the file to read the typedef list from. > Then you don't need massive script-level infrastructure to try > to spoonfeed that data to the program doing the work. > > Ok, but that would account for about 5 lines of the current 400 or so in pgindent, and we'd have to extend our patch of BSD indent to do it. That's not to say that we shouldn't, but we should be aware of how much it will buy us on its own. cheers andrew
Andrew Dunstan <andrew@dunslane.net> writes: > On 04/25/2011 07:48 PM, Tom Lane wrote: >> Well, -Ttypedef is wrong on its face. Right would be a switch >> specifying the name of the file to read the typedef list from. >> Then you don't need massive script-level infrastructure to try >> to spoonfeed that data to the program doing the work. > Ok, but that would account for about 5 lines of the current 400 or so in > pgindent, and we'd have to extend our patch of BSD indent to do it. Huh? I thought the context here was reimplementing it from scratch in perl. > That's not to say that we shouldn't, but we should be aware of how much > it will buy us on its own. The point isn't so much to remove a few lines of shell code (though I think that's a bigger deal than you say, if we want this to be usable on Windows). It's to not run into shell line length limits, which I believe we are dangerously close to already on many platforms. regards, tom lane
On 04/25/2011 08:54 PM, Tom Lane wrote: > Andrew Dunstan<andrew@dunslane.net> writes: >> On 04/25/2011 07:48 PM, Tom Lane wrote: >>> Well, -Ttypedef is wrong on its face. Right would be a switch >>> specifying the name of the file to read the typedef list from. >>> Then you don't need massive script-level infrastructure to try >>> to spoonfeed that data to the program doing the work. >> Ok, but that would account for about 5 lines of the current 400 or so in >> pgindent, and we'd have to extend our patch of BSD indent to do it. > Huh? I thought the context here was reimplementing it from scratch in > perl. yes. >> That's not to say that we shouldn't, but we should be aware of how much >> it will buy us on its own. > The point isn't so much to remove a few lines of shell code (though I > think that's a bigger deal than you say, if we want this to be usable on > Windows). It's to not run into shell line length limits, which I > believe we are dangerously close to already on many platforms. > > The current script calls our (patched) BSD indent. Any rewrite would have to also. It (the BSD indent) doesn't have any facility to pass a typedef file parameter. If you want that we have to patch the C code. No amount of rewriting in Perl or anything else would overcome that. My suggestion was to work around it as part of a script rewrite, but you didn't seem to like that idea. cheers andrew
On 04/25/2011 09:02 PM, Andrew Dunstan wrote: > > > The current script calls our (patched) BSD indent. Any rewrite would > have to also. It (the BSD indent) doesn't have any facility to pass a > typedef file parameter. If you want that we have to patch the C code. > No amount of rewriting in Perl or anything else would overcome that. > My suggestion was to work around it as part of a script rewrite, but > you didn't seem to like that idea. > Oh, it just occurred to me that maybe you thought the whole thing *including* indent would be reimplemented in perl. That was never my intention, and is a much larger project than I had in mind. cheers andrew
On Tue, Apr 26, 2011 at 2:07 AM, Andrew Dunstan <andrew@dunslane.net> wrote:
> Oh, it just occurred to me that maybe you thought the whole thing
> *including* indent would be reimplemented in perl. That was never my
> intention, and is a much larger project than I had in mind.

Oh, I thought that was what you were proposing too. I was kind of shocked that people would propose that so freely since it seemed like a huge project to me too. But then after thinking about it I wonder if it wouldn't be easier than it sounds. Being able to hard code our own indentation rules instead of having to implement every possible combination and not having to recognize cases that don't arise in Postgres might make it easier than the problem GNU/BSD indent are trying to solve.

It doesn't strike me as a project that's particularly rewarding though.

--
greg
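[For a sense of the gap between "hard-coding our own rules" and a full reindenter: a deliberately toy single-rule pass. It only normalizes leading whitespace to tabs at 4-column stops (the Postgres indentation convention) and ignores everything a real indenter must handle - tokenizing C, comments, strings, typedefs, line wrapping.]

#!/usr/bin/perl
# Toy one-rule pass: rewrite leading whitespace as tabs at 4-column stops.
# Everything else a real indenter does is deliberately left out.
use strict;
use warnings;

while (my $line = <>)
{
    if ($line =~ /^([ \t]+)(.*)$/s)
    {
        my ($lead, $rest) = ($1, $2);
        my $col = 0;
        foreach my $ch (split //, $lead)
        {
            # A tab advances to the next multiple of 4; a space advances by one.
            $col = $ch eq "\t" ? $col - $col % 4 + 4 : $col + 1;
        }
        $line = ("\t" x int($col / 4)) . (' ' x ($col % 4)) . $rest;
    }
    print $line;
}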
> Sounds good to me ... who's volunteering? (Andrew) I will as well. Github perhaps, Andrew? I'll be happy to get some unit tests written. -- Greg Sabino Mullane greg@turnstep.com End Point Corporation http://www.endpoint.com/ PGP Key: 0x14964AC8 201104252157 http://biglumber.com/x/web?pk=2529DF6AB8F79407E94445B4BC9B906714964AC8
Excerpts from Tom Lane's message of lun abr 25 20:48:42 -0300 2011: > Andrew Dunstan <andrew@dunslane.net> writes: > > Well, that way you'll have a handful of -Ttypdef parameters for each > > invocation of indent instead of a gazillion of them. No more command > > line length issues. > > Well, -Ttypedef is wrong on its face. Right would be a switch > specifying the name of the file to read the typedef list from. > Then you don't need massive script-level infrastructure to try > to spoonfeed that data to the program doing the work. I gather that Andrew will be working on replacing the pgindent shell script with a Perl script, but this new script will still rely on our patched BSD indent, right? Of course, it would make sense to further patch indent so that it accepts typedefs in a file instead of thousands of -T switches, but that seems orthogonal. -- Álvaro Herrera <alvherre@commandprompt.com> The PostgreSQL Company - Command Prompt, Inc. PostgreSQL Replication, Consulting, Custom Development, 24x7 support
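For illustration only, a rough sketch of the difference being discussed. The file names and invocation are placeholders, not the real pgindent command line; the file-reading switch is spelled here the way the pg_bsd_indent patch at the end of this thread spells it (-U), and plain BSD indent has no such option before that patch.

    # Status quo: every typedef name is expanded into its own -T switch,
    # so the command line grows with typedefs.list and eventually exceeds
    # the shell's length limit.
    indent `sed -e '/^$/d' -e 's/.*/-T& /' typedefs.list` file.c

    # With a switch that reads the typedef names from a file, the command
    # line stays small no matter how large the list gets.
    indent -Utypedefs.list file.c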
Greg Stark <gsstark@mit.edu> writes: > On Tue, Apr 26, 2011 at 2:07 AM, Andrew Dunstan <andrew@dunslane.net> wrote: >> Oh, it just occurred to me that maybe you thought the whole thing >> *including* indent would be reimplemented in perl. That was never my >> intention, and is a much larger project than I had in mind. > Oh, I thought that was what you were proposing too. Yeah, me too. If it still depends on a patched BSD indent then it's not clear to me how much portability/ease-of-installation we're really buying. I thought what was being proposed was a pure Perl script. > I was king of shocked that people would propose that so freely since > it seemed like a huge project to me too. But then after thinking about > it I wonder if it wouldn't be easier than it sounds. I thought there were enough loose parsing bits out there on CPAN that this wasn't an unreasonable proposition. regards, tom lane
On Mon, Apr 25, 2011 at 4:03 PM, Greg Stark <gsstark@mit.edu> wrote: > On Mon, Apr 25, 2011 at 3:45 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote: >> One small issue that would have to be resolved before branching is >> whether and when to do a "final" pgindent run for 9.1. Seems like the >> alternatives would be: >> > > If the tools become easy to run, is it possible we could get to the > point where we do an indent run on every commit? This would require a > stable list of system symbols plus the tool would need to add any new > symbols added by the patch. As long as the tool produced consistent > output I don't see that it would produce the spurious merge conflicts > we've been afraid of in the past. Those would only occur if a patch > went in without pgindent being run, someone developed a patch against > that tree, then pgindent was run before merging that patch. As long as > it's run on every patch on commit it shouldn't cause those problems > since nobody could use non-pgindented code as their base. > > Personally I've never really liked the pgindent run. White-space > always seemed like the least interesting of the code style issues, > none of which seemed terribly important compared to the more important > things like staying warning-clean and defensive coding rules. But if > we're going to do it, letting things diverge for a whole release and > then running it once a year seems the worst of both worlds. +1 to get rid of the pgindent run. I'd rather have an automatic checker telling me I need to check my commits than to ignore any weirdness for 6 months and then fix everything at once. -- Simon Riggs http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Training & Services
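Purely as an illustration of the indent-on-every-commit idea (which, as noted later in the thread, has been suggested before without universal approval), a git pre-commit hook could look roughly like this. It assumes pgindent and its prerequisites (entab, pg_bsd_indent, typedefs.list) are installed; it is a sketch, not proposed project infrastructure.

    #!/bin/sh
    # .git/hooks/pre-commit (sketch): re-indent the C files staged in this
    # commit and re-stage them, so every commit is pgindent-clean.
    files=`git diff --cached --name-only --diff-filter=ACM | grep '\.[ch]$'`
    [ -z "$files" ] && exit 0
    pgindent src/tools/pgindent/typedefs.list $files || exit 1
    git add $files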
On Mon, Apr 25, 2011 at 2:17 PM, Robert Haas <robertmhaas@gmail.com> wrote: > if we > make CommitFests more frequent and shorter, we can give people > feedback more quickly "We" can give people feedback more quickly. There is no "we" in that regard. Some individuals may be in a position to give more frequent feedback, most cannot. If individuals want to give more frequent feedback there is nothing stopping them doing that at the moment. That requires no change to the CF process. -- Simon Riggs http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Training & Services
On Mon, Apr 25, 2011 at 3:19 PM, Stephen Frost <sfrost@snowman.net> wrote: > * Robert Haas (robertmhaas@gmail.com) wrote: >> On balance, I think I prefer the >> current arrangement, though if we could make the CommitFests a bit >> shorter I would certainly like that better. I don't know how to make >> that happen without more reviewers, though. > > Given our current method (where we allow authors to update their patches > during a CF) I don't see that we need or should try for shorter CFs. If > we actually just reviewed patches onces it'd be a very different > situation. > > So, +1 from me for keeping it as-is. I do wonder if this is coming up > now just because we're getting closer to a release and people are, > unsuprisingly, wishing they had been able to get their fav. patch in > before the deadline. :) +1 from me for keeping it as-is as well. -- Simon Riggs http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Training & Services
> Huh? We've never guaranteed anyone a regular annual cycle, and we've > never had one. We agreed to use the same schedule for 9.1 as for 9.0; > I don't remember anything more than that being discussed anywhere, > ever. We *want* to have a regular annual cycle which doesn't vary by more than a few weeks. This benefits people who have to schedule work with their boss, or upgrades with their IT department. The fact that we haven't achieved one yet is a flaw, not an argument. >> I do think that we could bump the first CF up to July 1st, but I don't >> think sooner than that is realistic without harming beta testing ... and >> potentially delaying the release. Let's first demonstrate a track >> record in getting a final release out consistently by July, and if that >> works, maybe we can bump up the date. > > I have no idea where you're coming up with this estimate. I don't know what estimate you're talking about. Reference? > So I'm really rather suspicious that you know > what's wrong with the process and how to fix it better than the people > who are involved currently. I think we need here is more input from > the people who are regularly submitting and reviewing patches, and > those who have tried recently but been turned off by some aspect of > the process. I don't think the process *is* broken in any major way. It's just a question of whether we could improve things further, and make the CF process less annoying for some of the participants. Tom just suggested that we could do better in week-a-month mode, and I was thinking about ways to make that work, since it sounded attractive to me. You'll also notice that I volunteered to run the first few CFs if we decide to try it. In other words, it wasn't my idea originally, and a few committers supported it before I said anything, so ad hominem criticism isn't a very good way to argue. You need to stop "going for the jugular" whenever you disagree with people ;-) Certainly the relevant decision is whether you, Tom, Heikki, Peter, Kevin, Andrew, Jeff, Bruce, etc. think that a different time cycle will improve things, since you are currently the ones paying the biggest costs of any lack of optimization of the current system. -- Josh Berkus PostgreSQL Experts Inc. http://pgexperts.com
Josh Berkus <josh@agliodbs.com> writes: >> Huh? We've never guaranteed anyone a regular annual cycle, and we've >> never had one. We agreed to use the same schedule for 9.1 as for 9.0; >> I don't remember anything more than that being discussed anywhere, >> ever. > We *want* to have a regular annual cycle which doesn't vary by more than > a few weeks. There may be some people who want that, but it's not project policy and I don't think it will ever become so. Our policy is "we release when it's ready". To allow the development schedule to become purely calendar-driven would mean a drastic decline in our quality standards. I suppose we could have something like a predetermined branch-from-devel date for each major release, with the time from branch to actual release varying depending on release stabilization progress, while new development proceeds forward on a regular commitfest clock. But I fail to see any significant advantage from doing that. What it would mostly do is decouple the development community entirely from release stabilization work, and I think that would be a seriously bad idea. Not only from the take-responsibility-for-your-work angle, but because diverting manpower from release stabilization will also mean that it's that much longer from feature freeze (or whatever you call the branch event) to actual release. I don't think that people will be that happy about knowing "if I finish this by date X, it will be in release N" if they have no idea when release N will reach production status. regards, tom lane
* Tom Lane (tgl@sss.pgh.pa.us) wrote: > What it would mostly > do is decouple the development community entirely from release > stabilization work, and I think that would be a seriously bad idea. +1000%, seriously. This is a huge concern that we need to make sure is addressed in a sensible way. We do *not* want to encourage everyone in the developer community to focus on the next-great-feature before we have the latest release out the door. To that extent, I'd vote for holding off on branching the tree until we're actually ready to focus on work being done on the new branch, which should be either after our next major release or pretty darn close to it. Thanks, Stephen
All, > +1 from me for keeping it as-is as well. So it sounds like most committers want to keep the CFs on their existing schedule for another year. Also that we don't want to branch until RC1. While it would be nice to get some feedback from those who had bad experiences with the CF cycle, I don't know how to get it ... and the complaints I've received from submitters are NOT about the CF cycle. What it sounds like we do have consensus on, though, is: a) improving pgindent so that it can be run portably, easily, and repeatably b) greatly improving the "so you want to submit a patch" documentation c) making CFs a little shorter (3 weeks instead of 4?) I'll also add one of my own: developing some kind of dependable mentoring system for first-time patch submitters. Beyond that, are we ready to set the schedule for 9.2 yet? I'd tend to say that: CF1: July 1-30 CF2: Sept 1-21 CF3: November 1-21 CF4: January 3-31 Realistically, given that we usually seem to still be hacking in March, we could have a 5th CF which would be exclusively for patches already reviewed in CF4 and "tiny" patches. *However*, we've historically been extremely poor in enforcing gatekeeping rules on what's accepted to a CF, so I'm not sure that's a good idea. Oh, and just so Robert will get off my back, I volunteer to run the 9.2 CF1, since I'm a better administrator than a reviewer. -- Josh Berkus PostgreSQL Experts Inc. http://pgexperts.com San Francisco
On Apr 29, 2011, at 5:19 PM, Joshua Berkus <josh@agliodbs.com> wrote: > Beyond that, are we ready to set the schedule for 9.2 yet? I'd tend to say that: > > CF1: July 1-30 > CF2: Sept 1-21 > CF3: November 1-21 > CF4: January 3-31 Tom and I were talking about starting maybe June 1, rather than July 1. You seem opposed but I'm not sure why. ...Robert
Robert, > Tom and I were talking about starting maybe June 1, rather than July > 1. You seem opposed but I'm not sure why. Because I think -- strictly based on history and the complexity of the new features -- we'll still be fixing major issues with the beta in June, which was what Tom said as well the last time he posted about it on this thread. If CF1 is June 1, though, when will CF4 be? Having a CF start Dec. 1 is probably a bad idea. -- Josh Berkus PostgreSQL Experts Inc. http://pgexperts.com San Francisco
On Apr 30, 2011, at 9:23 PM, Joshua Berkus <josh@agliodbs.com> wrote: > Robert, > >> Tom and I were talking about starting maybe June 1, rather than July >> 1. You seem opposed but I'm not sure why. > > Because I think -- strictly based on history and the complexity of the new features -- we'll still be fixing major issueswith the beta in June, which was what Tom said as well the last time he posted about it on this thread. > > If CF1 is June1, though, when will CF4 be? Having a CF start Dec. 1 is probably a bad idea. Well, I made a suggestion on this topic in my previous email on the subject... ...Robert
> > If CF1 is June1, though, when will CF4 be? Having a CF start Dec. 1 > > is probably a bad idea. > > Well, I made a suggestion on this topic in my previous email on the > subject... I just searched backwards on this thread and I can't find it. There's been a lot of posts. -- Josh Berkus PostgreSQL Experts Inc. http://pgexperts.com San Francisco
Joshua Berkus <josh@agliodbs.com> wrote: > I just searched backwards on this thread and I can't find it. I think he's talking about the bottom of this post: http://archives.postgresql.org/message-id/BANLkTimnjZNemdpqgK=8Mj=pzq33Pz0ntQ@mail.gmail.com -Kevin
"Kevin Grittner" <Kevin.Grittner@wicourts.gov> writes: > Joshua Berkus <josh@agliodbs.com> wrote: >> I just searched backwards on this thread and I can't find it. > I think he's talking about the bottom of this post: > http://archives.postgresql.org/message-id/BANLkTimnjZNemdpqgK=8Mj=pzq33Pz0ntQ@mail.gmail.com ... which was: CF #1: June 1-30CF #2: August 1-31CF #3: October 1-31CF #4 (one week shortened CF): December 1-7CF #5: January 1-31 I think the main thing we have to think about before choosing is whether we believe that we can shorten the CFs at all. Josh's proposal had 3-week CFs after the first one, which makes it a lot easier to have a fest in November or December, but only if you really can end it on time. In addition to the fun of working around the holiday season, perhaps we should also consider how much work we're likely to get out of people in the summer. Is it going to be useful to schedule a fest in either July or August? Will one month be better than the other? regards, tom lane
Robert Treat <rob@xzilla.net> wrote: > Tom Lane <tgl@sss.pgh.pa.us> wrote: > >>> CF #1: June 1-30 >>> CF #2: August 1-31 >>> CF #3: October 1-31 >>> CF #4 (one week shortened CF): December 1-7 >>> CF #5: January 1-31 >>> >>> I think the main thing we have to think about before choosing is >>> whether we believe that we can shorten the CFs at all. Josh's >>> proposal had 3-week CFs after the first one, which makes it a lot >>> easier to have a fest in November or December, but only if you >>> really can end it on time. >> >> If we made the "deadline" for patch acceptance into 9.2 CF#4, then >> shortening that to a two week cycle whose main goal was simply to >> sanity check patches for 9.2 would probably work. Most would >> probably still need further work, which we would expect to get >> handled in the final, full CF#5, but we wouldn't let anything new >> come into CF#5. This way when we get the 100 patch pile up in >> CF#4, there's no expectation that those patches will be committed, >> just that they can be sanity checked for the 9.2 release. Which makes it not really a CommitFest, but rather ... a SanityFest? To make sure I understand you, you're suggesting no WIP patch review in the last two CFs? (Of course nothing stops someone from looking at someone else's WIP between fests.) Would a patch submitted to #4, the sanity of which was questioned, be eligible for another try in #5? I'm just trying to picture how this idea might work. -Kevin
On Sat, Apr 30, 2011 at 5:33 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote: > "Kevin Grittner" <Kevin.Grittner@wicourts.gov> writes: >> Joshua Berkus <josh@agliodbs.com> wrote: >>> I just searched backwards on this thread and I can't find it. > >> I think he's talking about the bottom of this post: >> http://archives.postgresql.org/message-id/BANLkTimnjZNemdpqgK=8Mj=pzq33Pz0ntQ@mail.gmail.com > > ... which was: > > CF #1: June 1-30 > CF #2: August 1-31 > CF #3: October 1-31 > CF #4 (one week shortened CF): December 1-7 > CF #5: January 1-31 > > I think the main thing we have to think about before choosing is whether > we believe that we can shorten the CFs at all. Josh's proposal had > 3-week CFs after the first one, which makes it a lot easier to have a > fest in November or December, but only if you really can end it on time. > If we made the "deadline" for patch acceptance into 9.2 CF#4, then shortening that to a two week cycle whose main goal was simply to sanity check patches for 9.2 would probably work. Most would probably still need further work, which we would expect to get handled in the final, full CF#5, but we wouldn't let anything new come into CF#5. This way when we get the 100 patch pile up in CF#4, there's no expectation that those patches will be committed, just that they can be sanity checked for the 9.2 release. Robert Treat play: xzilla.net work: omniti.com
> I think the main thing we have to think about before choosing is > whether > we believe that we can shorten the CFs at all. Josh's proposal had > 3-week CFs after the first one, which makes it a lot easier to have a > fest in November or December, but only if you really can end it on > time. I think that 3 weeks is doable. Generally by the last week of all of the CFs except the last one, we're largely waiting on either (a) authors who are slow to respond, (b) patches which are really hard to review, or (c) arguing out spec stuff on -hackers. Generally the last week only has 1-3 patches open, and any of these things could be grounds for booting to the next CF anyway or working on the patches outside the CF. For really hard patches (like Synch Rep) those things don't fit into the CF cycle anyway. I'm not convinced that shorter than 3 weeks is doable, at least not without changing to a model of binary accept-or-reject. Communication speeds are too slow and reviewers' availability is too random. > In addition to the fun of working around the holiday season, perhaps > we should also consider how much work we're likely to get out of > people > in the summer. Is it going to be useful to schedule a fest in either > July or August? Will one month be better than the other? Doesn't make a difference, both are equally bad. However, if we're short on European reviewers, at least we'll be able to punt European patches immediately because the authors won't be answering their e-mail. -- Josh Berkus PostgreSQL Experts Inc. http://pgexperts.com San Francisco
Joshua Berkus <josh@agliodbs.com> wrote: > Generally the last week only has 1-3 patches open The last CF I managed, the end of the third week looked like this: http://archives.postgresql.org/pgsql-hackers/2010-08/msg00334.php That is, we had 15 patches still pending out of 72 submitted:

9 ready for committer
1 waiting on author
5 needing review

If you want to view it as a *commit* fest, that is really 15 to go. If you're viewing it as a *review* fest, those six broke down:

3 were patches submitted by committers (1 of which was WIP)
1 other was WIP
1 was down to tweaking docs
1 got a review the next day, showing it wasn't ready

So we either need to markedly increase the pace of CFs (which is hard without more reviewers unless we provide "brisker" review and kick things back a lot faster) or we need to stop thinking that the goal is to get them *committed* during the CommitFest; but I thought that was kinda the point. -Kevin
On May 1, 2011, at 9:34 PM, "Kevin Grittner" <Kevin.Grittner@wicourts.gov> wrote:
> Joshua Berkus <josh@agliodbs.com> wrote:
>> Generally the last week only has 1-3 patches open
>
> The last CF I managed, the end of the third week looked like this:
>
> http://archives.postgresql.org/pgsql-hackers/2010-08/msg00334.php
>
> That is, we had 15 patches still pending out of 72 submitted:
>
> 9 ready for committer
> 1 waiting on author
> 5 needing review
>
> If you want to view it as a *commit* fest, that is really 15 to go.
> If you're viewing it as a *review* fest, those six broke down:
>
> 3 were patches submitted by committers (1 of which was WIP)
> 1 other was WIP
> 1 was down to tweaking docs
> 1 got a review the next day, showing it wasn't ready
>
> So we either need to markedly increase the pace of CFs (which is
> hard without more reviewers unless we provide "brisker" review and
> kick things back a lot faster) or we need to stop thinking that the
> goal is to get them *committed* during the CommitFest; but I thought
> that was kinda the point.
+1.
...Robert
On Sun, May 1, 2011 at 1:14 PM, Kevin Grittner <Kevin.Grittner@wicourts.gov> wrote: > Robert Treat <rob@xzilla.net> wrote: >> Tom Lane <tgl@sss.pgh.pa.us> wrote: > >>> CF #1: June 1-30 >>> CF #2: August 1-31 >>> CF #3: October 1-31 >>> CF #4 (one week shortened CF): December 1-7 >>> CF #5: January 1-31 >>> >>> I think the main thing we have to think about before choosing is >>> whether we believe that we can shorten the CFs at all. Josh's >>> proposal had 3-week CFs after the first one, which makes it a lot >>> easier to have a fest in November or December, but only if you >>> really can end it on time. >> >> If we made the "deadline" for patch acceptance into 9.2 CF#4, then >> shortening that to a two week cycle whose main goal was simply to >> sanity check patches for 9.2 would probably work. Most would >> probably still need further work, which we would expect to get >> handled in the final, full CF#5, but we wouldn't let anything new >> come into CF#5. This way when we get the 100 patch pile up in >> CF#4, there's no expectation that those patches will be committed, >> just that they can be sanity checked for the 9.2 release. > > Which makes it not really a CommitFest, but rather ... a SanityFest? > > To make sure I understand you, you're suggesting no WIP patch review > in the last two CFs? (Of course nothing stops someone from looking > at someone else's WIP between fests.) Would a patch submitted to > #4, the sanity of which was questioned, be eligible for another try > in #5. > I think you can have WIP patches for both CF#4 and CF#5. What we're hoping to get from CF#4 is a better scope on the number of patches we might have to get 9.2 out the door. WRT patches whose sanity is questioned, I'd presume that questioning would have a list of specific complaints, so if you address those between CF#4 and CF#5, I don't see why you can't try again. Robert Treat play: xzilla.net work: omniti.com
On Tue, Apr 26, 2011 at 2:25 AM, Andrew Dunstan <andrew@dunslane.net> wrote:
> Well, my solution would be to replace pgindent with a perl script (among other advantages, it would then run everywhere we build, including Windows), and filter the typedefs list so that we only use the ones that appear in each file with that file, instead of passing the whole list to each file.
>
> On 04/25/2011 04:28 PM, Tom Lane wrote:
>> Andrew Dunstan <andrew@dunslane.net> writes:
>>> On 04/25/2011 03:30 PM, Tom Lane wrote:
>>>> *Ouch*. Really? It's hard to believe that anyone would consider it remotely usable for more than toy-sized projects, if you have to list all the typedef names on the command line.
>>> Looks like BSD does the same. It's just that we hide it in pgindent:
>> Oh wow, I never noticed that. That's going to be a severe problem for the "run it anywhere" goal. The typedefs list is already close to 32K, and is not going anywhere but up. There are already platforms on which a shell command line that long will fail, and I think once we break past 32K we might find it failing on even pretty popular ones.
Can we not set up an automatic mechanism where a submitter can send a patch to some email id, the patch gets applied on the current HEAD, pgindent is run, and the new patch is sent back to the submitter, who can then submit it to the hackers for review? If the patch does not apply cleanly, the same can also be emailed back to the submitter.
Thanks,
Pavan
--
Pavan Deolasee
EnterpriseDB http://www.enterprisedb.com
On Mon, May 2, 2011 at 2:01 AM, Pavan Deolasee <pavan.deolasee@gmail.com> wrote: > Can we not set up an automatic mechanism where a submitter can send a patch to > some email id, the patch gets applied on the current HEAD, pgindent is run, > and the new patch is sent back to the submitter, who can then submit it to > the hackers for review? If the patch does not apply cleanly, the same can > also be emailed back to the submitter. This seems like a pretty good idea, but maybe it'd be easiest to take it a step further and add an "automatic pgindent-ified" patch is created when a patch is added to the commitfest app? If the original patch didn't apply cleanly, just don't make the "pgindent-ified" link a link and instead mark it red/strikethrough or some such? It would probably be good to have both pieces, so that patch authors could verify things outside of the app. -- Thanks, David Blewett
On Tue, May 3, 2011 at 9:51 PM, David Blewett <david@dawninglight.net> wrote: > This seems like a pretty good idea, but maybe it'd be easiest to take > it a step further and add an "automatic pgindent-ified" patch is > created when a patch is added to the commitfest app? That should read: ... but maybe it'd be easiest to take it a step further and have an additional, automatically created patch file that is run through pgindent when a patch is added to the commitfest app. -- Thanks, David Blewett
On 05/03/2011 09:53 PM, David Blewett wrote: > On Tue, May 3, 2011 at 9:51 PM, David Blewett<david@dawninglight.net> wrote: >> This seems like a pretty good idea, but maybe it'd be easiest to take >> it a step further and add an "automatic pgindent-ified" patch is >> created when a patch is added to the commitfest app? > That should read: ... but maybe it'd be easiest to take it a step > further and have an additional, automatically created patch file that > is run through pgindent when a patch is added to the commitfest app. > You can't indent patches, only patched files. And that's the problem with this happy scheme. For it to work at all sanely we'd need to keep the committed code that the patch is to be applied against strictly pgindent clean, presumably via some automated process such as a commit hook. That's been suggested in the past, but hasn't met with universal approval, IIRC. cheers andrew
Andrew Dunstan <andrew@dunslane.net> writes: > On 05/03/2011 09:53 PM, David Blewett wrote: >> On Tue, May 3, 2011 at 9:51 PM, David Blewett<david@dawninglight.net> wrote: >> That should read: ... but maybe it'd be easiest to take it a step >> further and have an additional, automatically created patch file that >> is run through pgindent when a patch is added to the commitfest app. > You can't indent patches, only patched files. And that's the problem > with this happy scheme. For it to work at all sanely we'd need to keep > the committed code that the patch is to be applied against strictly > pgindent clean, presumably via some automated process such as a commit > hook. That's been suggested in the past, but hasn't met with universal > approval, IIRC. Another point here is that insisting on perfectly indented results can often be counterproductive for the readability of the patch. Consider a patch that does

+   if (new-condition)
+   {
+       do something new;
+   }
+   else
+   {
        large existing block of code;
+   }

Now, obviously, the large existing block of code is going to have to be pushed one tab stop to the right eventually. But it is no service to the readability of the patch to insist that that be part of the submitted diff. It's much better if that happens separately. Mind you, I've read more than enough horribly-formatted patches to wish that we could do something about this. But I doubt that a mechanical reformatting pass before reviewing will be a net plus. regards, tom lane
On May 3, 2011, at 11:10 PM, Andrew Dunstan wrote: > On 05/03/2011 09:53 PM, David Blewett wrote: >> On Tue, May 3, 2011 at 9:51 PM, David Blewett<david@dawninglight.net> wrote: >>> This seems like a pretty good idea, but maybe it'd be easiest to take >>> it a step further and add an "automatic pgindent-ified" patch is >>> created when a patch is added to the commitfest app? >> That should read: ... but maybe it'd be easiest to take it a step >> further and have an additional, automatically created patch file that >> is run through pgindent when a patch is added to the commitfest app. >> > > You can't indent patches, only patched files. And that's the problem with this happy scheme. For it to work at all sanely we'd need to keep the committed code that the patch is to be applied against strictly pgindent clean, presumably via some automated process such as a commit hook. That's been suggested in the past, but hasn't met with universal approval, IIRC. What if this hypothetical tool pulled the latest source, made a copy of that source, applied the patch to the copy, pg_indented the original AND the copy, and then diff'd? I think that would give you a properly indented patch. The context lines in the patch would have the wrong indentation, but I think patch is pretty smart about dealing with that (or at least can be told to ignore whitespace differences). -- Jim C. Nasby, Database Architect jim@nasby.net 512.569.9461 (cell) http://jim.nasby.net
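A minimal sketch, under assumptions, of the double-indent-then-diff workflow Jim describes: it presumes a local git repository named postgresql, a file called submission.patch, and that pgindent, entab, pg_bsd_indent, and typedefs.list are already set up. Everything here is illustrative, not an agreed tool.

    # Indent both the original tree and the patched copy, then diff them,
    # so the emitted patch is itself pgindent-clean.
    set -e
    git clone --local postgresql orig
    cp -r orig patched
    (cd patched && patch -p1 < ../submission.patch)
    for d in orig patched
    do
        (cd $d && find . -name '*.[ch]' -type f -print | \
            egrep -v -f src/tools/pgindent/exclude_file_patterns | \
            xargs -n100 pgindent src/tools/pgindent/typedefs.list)
    done
    diff -urN -x .git orig patched > submission-indented.patch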
On Wed, May 4, 2011 at 12:19 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote: > Mind you, I've read more than enough horribly-formatted patches to wish > that we could do something about this. But I doubt that a mechanical > reformatting pass before reviewing will be a net plus. It wouldn't hurt to have the option. It would also be nice if we could come to some conclusions on how to handle $SUBJECT. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company
>> You can't indent patches, only patched files. And that's the problem >> with this happy scheme. For it to work at all sanely we'd need to keep >> the committed code that the patch is to be applied against strictly >> pgindent clean, presumably via some automated process such as a commit >> hook. That's been suggested in the past, but hasn't met with universal >> approval, IIRC. Well, there is another solution to this, which is to use Git branches and forks instead of mailing around patches. -- Josh Berkus PostgreSQL Experts Inc. http://pgexperts.com
On Wed, May 4, 2011 at 19:21, Josh Berkus <josh@agliodbs.com> wrote: > >>> You can't indent patches, only patched files. And that's the problem >>> with this happy scheme. For it to work at all sanely we'd need to keep >>> the committed code that the patch is to be applied against strictly >>> pgindent clean, presumably via some automated process such as a commit >>> hook. That's been suggested in the past, but hasn't met with universal >>> approval, IIRC. > > Well, there is another solution to this, which is to use Git branches > and forks instead of mailing around patches. That makes no difference to this problem, really. If the committer (or reviewer) has to reindent it anyway, you can just as well do a "git checkout work && patch -p1 < /where/ever && pgindent && git diff" as "git remote add somewhere && git fetch somewhere && git checkout work --track somewhere/something && pgindent && git diff". There are some reasons why git branches and forks are nice to work with, but they don't solve this problem. Or are you saying there should be an automated service where you registered your git url + branch and then it would pull that branch, run pgindent for you, and then republish it somewhere? Not sure how big a win that is in the end, plus it's going to fail as soon as you get a conflict anywhere anyway... -- Magnus Hagander Me: http://www.hagander.net/ Work: http://www.redpill-linpro.com/
On Wed, May 4, 2011 at 1:21 PM, Josh Berkus <josh@agliodbs.com> wrote: > >>> You can't indent patches, only patched files. And that's the problem >>> with this happy scheme. For it to work at all sanely we'd need to keep >>> the committed code that the patch is to be applied against strictly >>> pgindent clean, presumably via some automated process such as a commit >>> hook. That's been suggested in the past, but hasn't met with universal >>> approval, IIRC. > > Well, there is another solution to this, which is to use Git branches > and forks instead of mailing around patches. Shouldn't it be as simple as keeping a git clone of trunk up to date, applying the patch, running pgindent and emitting the resulting diff? Once it's been generated, just run git reset --hard to clean out all local changes. -- Thanks, David Blewett
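Along the same lines, a sketch of the clone-apply-indent-diff cycle David describes, with the caveat Andrew raises above: the result is only clean if the tree being patched is itself kept pgindent-clean. Paths and the branch name are placeholders.

    # Apply the submitted patch to an up-to-date checkout, run pgindent,
    # capture the re-indented diff, then throw the working-tree changes away.
    cd postgresql
    git checkout master && git pull
    patch -p1 < /path/to/submission.patch
    find . -name '*.[ch]' -type f -print | \
        egrep -v -f src/tools/pgindent/exclude_file_patterns | \
        xargs -n100 pgindent src/tools/pgindent/typedefs.list
    git diff > /tmp/submission-indented.patch
    git reset --hard    # discard the local changes to tracked files afterwards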
Alvaro Herrera wrote: > Excerpts from Tom Lane's message of lun abr 25 20:48:42 -0300 2011: > > Andrew Dunstan <andrew@dunslane.net> writes: > > > > Well, that way you'll have a handful of -Ttypdef parameters for each > > > invocation of indent instead of a gazillion of them. No more command > > > line length issues. > > > > Well, -Ttypedef is wrong on its face. Right would be a switch > > specifying the name of the file to read the typedef list from. > > Then you don't need massive script-level infrastructure to try > > to spoonfeed that data to the program doing the work. > > I gather that Andrew will be working on replacing the pgindent shell > script with a Perl script, but this new script will still rely on our > patched BSD indent, right? > > Of course, it would make sense to further patch indent so that it > accepts typedefs in a file instead of thousands of -T switches, but that > seems orthogonal. I have done as you suggested, modifying our version of BSD indent to allow a file of typedefs to be processed. I also renamed the download and binary to 'pg_bsd_indent' so it can be installed on a system that already has 'indent'. It should propogate to the ftp mirrors in a few hours. Even after we go to Perl, this is still a necessary change. I have modified the pgindent script to use this new flag, and applied those changes, attached. -- Bruce Momjian <bruce@momjian.us> http://momjian.us EnterpriseDB http://enterprisedb.com + It's impossible for everything to be true. + diff --git a/src/tools/pgindent/README b/src/tools/pgindent/README new file mode 100644 index e81e85d..7504650 *** a/src/tools/pgindent/README --- b/src/tools/pgindent/README *************** pgindent *** 6,31 **** This can format all PostgreSQL *.c and *.h files, but excludes *.y, and *.l files. ! 1) Change directory to the top of the build tree. ! 2) Download the typedef file from the buildfarm: wget -O src/tools/pgindent/typedefs.list http://buildfarm.postgresql.org/cgi-bin/typedefs.pl ! 3) Remove all derived files (pgindent has trouble with one of the flex macros): gmake maintainer-clean ! 4) Run pgindent: find . -name '*.[ch]' -type f -print | \ egrep -v -f src/tools/pgindent/exclude_file_patterns | \ xargs -n100 pgindent src/tools/pgindent/typedefs.list ! 5) Remove any files that generate errors and restore their original versions. ! 6) Do a full test build: run configure # stop is only necessary if it's going to install in a location with an --- 6,33 ---- This can format all PostgreSQL *.c and *.h files, but excludes *.y, and *.l files. ! 1) Install pg_bsd_indent (see below for details) ! 2) Change directory to the top of the build tree. ! ! 3) Download the typedef file from the buildfarm: wget -O src/tools/pgindent/typedefs.list http://buildfarm.postgresql.org/cgi-bin/typedefs.pl ! 4) Remove all derived files (pgindent has trouble with one of the flex macros): gmake maintainer-clean ! 5) Run pgindent: find . -name '*.[ch]' -type f -print | \ egrep -v -f src/tools/pgindent/exclude_file_patterns | \ xargs -n100 pgindent src/tools/pgindent/typedefs.list ! 6) Remove any files that generate errors and restore their original versions. ! 7) Do a full test build: run configure # stop is only necessary if it's going to install in a location with an *************** This can format all PostgreSQL *.c and * *** 38,54 **** --------------------------------------------------------------------------- ! We have standardized on NetBSD's indent. We have fixed a few bugs which ! 
requre the NetBSD source to be patched with indent.bsd.patch patch. A ! fully patched version is available at ftp://ftp.postgresql.org/pub/dev. GNU indent, version 2.2.6, has several problems, and is not recommended. These bugs become pretty major when you are doing >500k lines of code. If you don't believe me, take a directory and make a copy. Run pgindent on the copy using GNU indent, and do a diff -r. You will see what I ! mean. GNU indent does some things better, but mangles too. ! Notes about excluded files: src/include/storage/s_lock.h is excluded because it contains assembly code that pgindent tends to mess up. --- 40,67 ---- --------------------------------------------------------------------------- ! BSD indent ! ---------- ! ! We have standardized on NetBSD's indent, and renamed it pg_bsd_indent. ! We have fixed a few bugs which requre the NetBSD source to be patched ! with indent.bsd.patch patch. A fully patched version is available at ! ftp://ftp.postgresql.org/pub/dev. GNU indent, version 2.2.6, has several problems, and is not recommended. These bugs become pretty major when you are doing >500k lines of code. If you don't believe me, take a directory and make a copy. Run pgindent on the copy using GNU indent, and do a diff -r. You will see what I ! mean. GNU indent does some things better, but mangles too. For details, ! see: ! http://archives.postgresql.org/pgsql-hackers/2003-10/msg00374.php ! http://archives.postgresql.org/pgsql-hackers/2011-04/msg01436.php ! ! --------------------------------------------------------------------------- ! ! Notes about excluded files ! -------------------------- src/include/storage/s_lock.h is excluded because it contains assembly code that pgindent tends to mess up. *************** should not be changed. *** 63,70 **** --------------------------------------------------------------------------- ! Obsolete typedef list creation instructions: ! -------------------------------------------- To use pgindent: --- 76,83 ---- --------------------------------------------------------------------------- ! Obsolete typedef list creation instructions ! ------------------------------------------- To use pgindent: diff --git a/src/tools/pgindent/pgindent b/src/tools/pgindent/pgindent new file mode 100755 index 05f69ef..eeb6f5d *** a/src/tools/pgindent/pgindent --- b/src/tools/pgindent/pgindent *************** fi *** 21,32 **** TYPEDEFS="$1" shift ! if [ -z "$INDENT" ] ! then ! INDENT=indent ! fi trap "rm -f /tmp/$$ /tmp/$$a" 0 1 2 3 15 entab </dev/null >/dev/null if [ "$?" -ne 0 ] then echo "Go to the src/tools/entab directory and do a 'make' and 'make install'." >&2 --- 21,32 ---- TYPEDEFS="$1" shift ! [ -z "$INDENT" ] && INDENT=pg_bsd_indent trap "rm -f /tmp/$$ /tmp/$$a" 0 1 2 3 15 + + # check the environment + entab </dev/null >/dev/null if [ "$?" -ne 0 ] then echo "Go to the src/tools/entab directory and do a 'make' and 'make install'." >&2 *************** then echo "Go to the src/tools/entab dir *** 36,42 **** fi $INDENT -? </dev/null >/dev/null 2>&1 if [ "$?" -ne 1 ] ! then echo "You do not appear to have 'indent' installed on your system." >&2 exit 1 fi $INDENT -gnu </dev/null >/dev/null 2>&1 --- 36,46 ---- fi $INDENT -? </dev/null >/dev/null 2>&1 if [ "$?" -ne 1 ] ! then echo "You do not appear to have '$INDENT' installed on your system." >&2 ! exit 1 ! fi ! if [ "`$INDENT -V`" != "$INDENT 1.0" ] ! then echo "You do not appear to have $INDENT version 1.0 installed on your system." 
>&2 exit 1 fi $INDENT -gnu </dev/null >/dev/null 2>&1 *************** do *** 140,149 **** # Protect wrapping in CATALOG(). sed 's;^CATALOG(.*$;/*&*/;' >/tmp/$$a # We get the list of typedef's from /src/tools/find_typedef $INDENT -bad -bap -bc -bl -d0 -cdb -nce -nfc1 -di12 -i4 -l79 \ ! -lp -nip -npro -bbb $EXTRA_OPTS \ ! `egrep -v '^(FD_SET|date|interval|timestamp|ANY)$' "$TYPEDEFS" | sed -e '/^$/d' -e 's/.*/-T& /'` \ /tmp/$$a >/tmp/$$ 2>&1 if [ "$?" -ne 0 -o -s /tmp/$$ ] --- 144,154 ---- # Protect wrapping in CATALOG(). sed 's;^CATALOG(.*$;/*&*/;' >/tmp/$$a + egrep -v '^(FD_SET|date|interval|timestamp|ANY)$' "$TYPEDEFS" | sed -e '/^$/d' > /tmp/$$b + # We get the list of typedef's from /src/tools/find_typedef $INDENT -bad -bap -bc -bl -d0 -cdb -nce -nfc1 -di12 -i4 -l79 \ ! -lp -nip -npro -bbb $EXTRA_OPTS -U/tmp/$$b \ /tmp/$$a >/tmp/$$ 2>&1 if [ "$?" -ne 0 -o -s /tmp/$$ ]