Re: Commitfest problems - Mailing list pgsql-hackers

From Stephen Frost
Subject Re: Commitfest problems
Date
Msg-id 20141216182149.GX25679@tamriel.snowman.net
In response to Re: Commitfest problems  (Craig Ringer <craig@2ndquadrant.com>)
List pgsql-hackers
* Craig Ringer (craig@2ndquadrant.com) wrote:
> It's not like development on a patch series is difficult. You commit
> small fixes and changes, then you 'git rebase -i' and reorder them to
> apply to the appropriate changesets. Or you can do a 'rebase -i' and in
> 'e'dit mode make amendments to individual commits. Or you can commit
> 'fixup!'s that get auto-squashed.
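
For concreteness, that workflow is roughly the following (the commit
reference and branch name here are only illustrative):

    $ git commit --fixup=<earlier-commit>  # record a small fix aimed at an existing commit
    $ git rebase -i --autosquash master    # squash the fixup! commits into their targets
    $ git rebase -i master                 # or mark a commit 'e'dit to amend it in place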

I don't have a problem with this, but how a patch is developed is a
separate consideration from what the main git repo looks like.  I don't
think it makes sense to commit to the main repo a new catalog table
without any commands to manage it, nor to have a catalog table which can
be managed through the grammar but doesn't actually do anything because
the backend code for it is missing.

Now, I fully agree that we want to continue to build features on top of
other features, but, in general, each piece needs to be independently
valuable when it is committed to the main repo (e.g., WITH CHECK for
security barrier views went in first because it was independently
useful, and RLS then simply built on top of it).

> This is part of my grumbling about the use of git like it's still CVS.

Our git repo is actually readable and reasonably easy to follow, and, for
my part, that's a great thing we have that most other projects don't.
That isn't to say that we shouldn't develop in smaller pieces, but I tend
to agree with Tom that it's actually easier (for me, at least) to review
a single, complete patch than a bunch of independent ones which build on
each other.  Perhaps that's my failing or a fault of my process, but I
will quite often sit and read through a patch in my email client.  In
general, I expect the code to compile and pass 'make check'; those are
necessary, of course, but detecting compile-time problems is something
the compiler is good at and I'm not.  Thinking about the impact of a new
data structure or a given code block, or making sure that all of the
pieces of a new catalog row are set correctly, are things the compiler
isn't good at, and that's where a reviewer is most valuable.

Pulling the code in and testing it by hand is useful, as is getting into
gdb and looking at the structures as they are manipulated, but I find it
extremely important to also actually review the *code*, which means
reading it and considering "are there places this patch needs to be
touching that it isn't?  how will this other bit of code react to these
changes?  does this code still look sane and like one person thought
through the whole code path?  do the comments explain why things are
happening or do they just repeat what the code already says?", etc.
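
To make the gdb part concrete, a session might look roughly like this
(assuming a debug build; the breakpoint is only an illustration):

    -- in psql, find the pid of the backend to attach to
    SELECT pg_backend_pid();

    $ gdb -p <that pid>
    (gdb) break ExecutorRun     # stop when this backend starts executing a query
    (gdb) continue
    -- run the query of interest in the same psql session, then, e.g.:
    (gdb) print *queryDesc->plannedstmt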

This goes back to the excellent point made elsewhere on this thread that
the current documentation for reviewers is far too focused on the
mechanical bits, which could basically be automated (in fact, I think
Peter's already got most of it automated...).

Thanks,
    Stephen
