On Mon, 2011-06-27 at 00:37 +0200, Florian Pflug wrote:
> I actually wouldn't expect there to be one. From what I gathered
> during the last discussion, the ideal behind range types is that they
> model sets of the form {x in T | a <= x < b} for arbitrary types
> T, with the only requirement being that T be ordered. To compute
> a length, you additionally need either an algebraic structure on
> T which defines an operation "minus", or some metric which defines
> distance(a,b). Both are *much* stronger concepts than simply being
> ordered. The problems you outline below seem to me to all root in
> this discrepancy.
I agree with you here. It does seem like supporting length() increases
the complexity of range types quite a bit.
> Strings are a nice example of an ordered type on which no "intuitive"
> definition of either "s1 - s2" or "distance(s1,s2)" exists.
Another good point. There's no logical "length()" function at all for a
text range.
> > The length() function is obviously an
> > important function to provide.
>
>
> I'd say it isn't, but maybe I'm missing some use-case that you have
> in mind.
The reason I said that is that, if I were making only a single range type
for, say, timestamptz, I would add a length() function without even
thinking about it.
There are a few types of queries where that kind of thing is useful,
like billing based on the amount of time some resource is allocated to
you.
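For instance, something along these lines (just a sketch; the "allocations"
table, the "during" range column, and the lower()/upper() accessors are
hypothetical names for illustration, not anything committed):

    -- total time each customer had the resource allocated
    SELECT customer_id,
           sum(upper(during) - lower(during)) AS billed_time
    FROM   allocations
    GROUP  BY customer_id;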
But I think you're right, it shouldn't be the responsibility of range
types. Perhaps I should leave length() as inlinable SQL functions, as I
mentioned earlier, or perhaps remove them completely.
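To be concrete, the inlinable SQL function I have in mind would look
roughly like this (a sketch only; the range type name and accessor
functions are placeholders, not a settled API):

    -- length() defined outside the core range type machinery,
    -- relying on timestamptz subtraction to produce an interval
    CREATE FUNCTION length(tstzrange) RETURNS interval AS
    $$ SELECT upper($1) - lower($1) $$
    LANGUAGE sql IMMUTABLE;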
Regards,
	Jeff Davis