Re: management of large patches - Mailing list pgsql-hackers

From Magnus Hagander
Subject Re: management of large patches
Date
Msg-id AANLkTikqUEjg6LEAjvFTcfZ11tbjzw8-fuFemfj6sQw-@mail.gmail.com
In response to management of large patches  (Robert Haas <robertmhaas@gmail.com>)
Responses Re: management of large patches
List pgsql-hackers
On Sun, Jan 2, 2011 at 06:32, Robert Haas <robertmhaas@gmail.com> wrote:
> We're coming to the end of the 9.1 development cycle, and I think that
> there is a serious danger of insufficient bandwidth to handle the
> large patches we have outstanding.  For my part, I am hoping to find
> the bandwidth for two, MAYBE three major commits between now and the
> end of 9.1CF4, but I am not positive that I will be able to find even
> that much time, and the number of major patches vying for attention is
> considerably greater than that.  Quick estimate:
>
> - SQL/MED - probably needs >~3 large commits: foreign table scan, file
> FDW, postgresql FDW, plus whatever else gets submitted in the next two
> weeks
> - MERGE
> - checkpoint improvements
> - SE-Linux integration
> - extensions - may need 2 or more commits
> - true serializability - not entirely sure of the status of this
> - writeable CTEs (Tom has indicated he will look at this)
> - PL/python patches (Peter has indicated he will look at this)
> - snapshot taking inconsistencies (Tom has indicated he will look at this)
> - per-column collation (Peter)
> - synchronous replication (Simon, and, given the level of interest in
> and complexity of this feature, probably others as well)
>
> I guess my basic question is - is it realistic to think that we're
> going to get all of the above done in the next 45 days?  Is there
> anything we can do to make the process more efficient?  If a few more
> large patches drop into the queue in the next two weeks, will we have
> bandwidth for those as well?  If we don't think we can get everything
> done in the time available, what's the best way to handle that?  I

Well, we've always (well, since we've had CFs) said that large patches
shouldn't be submitted for the last CF; they should be submitted for
one of the first ones. So if something *new* gets dumped on us for the
last one, giving priority to the existing ones in the queue seems like
the only fair option.

As for priority between those that *were* submitted earlier and have
been reworked (which is how the system is supposed to work), it's a
lot harder. And TBH, I think we're going to have a problem getting all
of those done. But the question is: are they all ready enough, or are a
couple going to need the "returned with feedback" status *regardless*
of whether this is the last CF or not?


--
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/

