On Tue, Mar 26, 2013 at 9:02 AM, Brendan Jurd <direvus@gmail.com> wrote:
> On 26 March 2013 22:57, Robert Haas <robertmhaas@gmail.com> wrote:
>> They hate it twice as much when the change is essentially cosmetic.
>> There's no functional problems with arrays as they exist today that
>> this change would solve.
>
> We can't sensibly test for whether an array is empty. I'd call that a
> functional problem.
Sure you can. Equality comparisons work just fine:

rhaas=# select '{}'::int4[] = '{}'::int4[];
 ?column?
----------
 t
(1 row)

rhaas=# select '{}'::int4[] = '{1}'::int4[];
 ?column?
----------
 f
(1 row)
> The NULL return from array_{length,lower,upper,ndims} is those
> functions' way of saying their arguments failed a sanity check. So we
> cannot distinguish in a disciplined way between a valid, empty array
> and bad arguments. If the zero-D implementation had been more
> polished and, say, array_ndims returned zero, we had provided an
> array_empty function, or the existing functions threw errors for silly
> arguments instead of returning NULL, then I'd be more inclined to see
> your point. But as it stands, the zero-D implementation has always
> been half-baked and slightly broken; we just got used to working
> around it.
Well, you could easily change array_ndims() to error out if ARR_NDIM()
is negative or more than MAXDIM and return NULL only if it's exactly
0. That wouldn't break backward compatibility because it would throw
an error only if fed a value that shouldn't ever exist in the first
place, short of a corrupted database. I imagine the other functions
are amenable to similar treatment.
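A rough, untested sketch of what I mean, in arrayfuncs.c terms (the
particular error code is illustrative, not a proposal):

Datum
array_ndims(PG_FUNCTION_ARGS)
{
    ArrayType  *v = PG_GETARG_ARRAYTYPE_P(0);

    /*
     * Today the whole range ndims <= 0 || ndims > MAXDIM just returns
     * NULL; instead, treat the out-of-range values as corruption...
     */
    if (ARR_NDIM(v) < 0 || ARR_NDIM(v) > MAXDIM)
        ereport(ERROR,
                (errcode(ERRCODE_DATA_CORRUPTED),
                 errmsg("invalid number of array dimensions: %d",
                        ARR_NDIM(v))));

    /* ...and keep returning NULL only for a genuinely empty array. */
    if (ARR_NDIM(v) == 0)
        PG_RETURN_NULL();

    PG_RETURN_INT32(ARR_NDIM(v));
}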
And if neither that nor just comparing against an empty array literal
floats your boat, adding an array_is_empty() function would let you
test for this condition without breaking backward compatibility, too.
That's overkill, I think, but it would work.
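If we did add it, the function itself would be about as simple as they
come; an untested sketch:

Datum
array_is_empty(PG_FUNCTION_ARGS)
{
    ArrayType  *v = PG_GETARG_ARRAYTYPE_P(0);

    /* Zero dimensions is exactly what an empty array looks like. */
    PG_RETURN_BOOL(ARR_NDIM(v) == 0);
}

plus a pg_proc entry declaring it as array_is_empty(anyarray) returning
boolean.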
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company