Christopher Masto <chris@netmonger.net> writes:
> Anyway, I guess my point is that there is some incentive here for
> having a postgres that is completely non-iffy when it comes to >2GB
> databases. Shortly we will be filling the system with test data and I
> will be glad to help out as much as possible (which may not be much in
> the way of code, as I've got my hands rather full right now).
Great, we need some people keeping us honest. I don't think any of the
core developers have >2Gb databases (I sure don't).
I think there are two known issues right now: the VACUUM one and
something about DROP TABLE neglecting to delete the additional files
for a multi-segment table. To my mind the VACUUM problem is a "must
fix" because you can't really live without VACUUM, especially not on
a huge database. The DROP problem is less severe since you could
clean up by hand if necessary (not that it shouldn't get fixed of
course, but we have more critical issues to deal with for 6.5).
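For reference, the extra segments sit right next to the base file in
the database directory, named after it with numeric suffixes (relname,
relname.1, relname.2, ...), so cleaning up by hand just means removing
the numbered leftovers with the postmaster shut down.  Something like
the throwaway program below would do it; this is only a sketch, not
anything that exists in the tree, and you'd want to sanity-check the
path and the files before removing anything:

    /* Throwaway cleanup sketch, not backend code: unlink the numbered
     * segment files ("relname.1", "relname.2", ...) that a buggy DROP
     * TABLE left behind.  Stop the postmaster first, and be sure the
     * relation really was dropped before pointing this at it.
     */
    #include <stdio.h>
    #include <unistd.h>

    int
    main(int argc, char **argv)
    {
        char    segpath[1024];
        int     segno;

        if (argc != 2)
        {
            fprintf(stderr, "usage: %s .../data/base/DBNAME/RELNAME\n",
                    argv[0]);
            return 1;
        }
        for (segno = 1;; segno++)
        {
            snprintf(segpath, sizeof(segpath), "%s.%d", argv[1], segno);
            if (unlink(segpath) != 0)
                break;          /* first missing segment, we're done */
            printf("removed %s\n", segpath);
        }
        return 0;
    }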
In theory, you can test the behavior for >2Gb tables without actually
needing to spend much time creating huge tables: just reduce the value
of RELSEG_SIZE in include/config.h (it's counted in disk blocks, not
bytes) to something equivalent to a couple meg, so that you can get a
segmented table without so much effort.
much effort. This doesn't speak to performance issues of course,
but at least you can check for showstopper bugs.
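For concreteness, the edit might look about like this (numbers purely
illustrative; RELSEG_SIZE is the number of BLCKSZ-sized blocks allowed
in one file before the storage manager starts a new segment, and the
stock setting works out to 1Gb files):

    /* include/config.h -- test setting only, not for production use.
     * 2Mb worth of blocks per segment, so even a small table gets
     * split across several files.  With BLCKSZ = 8192 this is 256.
     */
    #define RELSEG_SIZE     (2 * 1024 * 1024 / BLCKSZ)

After a rebuild (and a fresh initdb, to be safe) you should see
relname.1, relname.2, ... files appear in the database directory as
soon as a table crosses the reduced limit, and then VACUUM and DROP
can be exercised against genuinely multi-segment tables.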
(BTW, has anyone been thinking about increasing OID to more than 4
bytes? Folks are going to start hitting the 4G-tuples-per-database
mark pretty soon.)
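(For anyone wondering where the 4G figure comes from, it's just the
width of the type; schematically, since the real typedef lives in the
headers:

    /* A 4-byte unsigned integer: at most 2^32 = 4,294,967,296 distinct
     * OIDs to hand out, and every inserted tuple consumes one. */
    typedef unsigned int Oid;

so the counter runs out long before disk space does on a big enough
installation.)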
regards, tom lane