We're coming to the end of the 9.1 development cycle, and I think that
there is a serious danger of insufficient bandwidth to handle the
large patches we have outstanding. For my part, I am hoping to find
the bandwidth for two, MAYBE three major commits between now and the
end of 9.1CF4, but I am not positive that I will be able to find even
that much time, and the number of major patches vying for attention is
considerably greater than that. Quick estimate:
- SQL/MED - probably needs >~3 large commits: foreign table scan, file
FDW, postgresql FDW, plus whatever else gets submitted in the next two
weeks
- MERGE
- checkpoint improvements
- SE-Linux integration
- extensions - may need 2 or more commits
- true serializability - not entirely sure of the status of this
- writeable CTEs (Tom has indicated he will look at this)
- PL/python patches (Peter has indicated he will look at this)
- snapshot taking inconsistencies (Tom has indicated he will look at this)
- per-column collation (Peter)
- synchronous replication (Simon, and, given the level of interest in
and complexity of this feature, probably others as well)
I guess my basic question is - is it realistic to think that we're
going to get all of the above done in the next 45 days? Is there
anything we can do to make the process more efficient? If a few more
large patches drop into the queue in the next two weeks, will we have
bandwidth for those as well? If we don't think we can get everything
done in the time available, what's the best way to handle that? I
would hate to discourage people from continuing to hack away, but I
think it would be even worse to give people the impression that
there's a chance of getting work reviewed and committed if there
really isn't.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company