Hi Dean,
On Wed, Oct 4, 2023 at 6:59 PM Ashutosh Bapat
<ashutosh.bapat.oss@gmail.com> wrote:
>
> I think we should introduce interval_out_of_range_error() function on
> the lines of float_overflow_error(). Later patches introduce more
> places where we raise that error. We can introduce the function as
> part of those patches.
>
Not done yet. The error is raised from multiple places, and we may want
to improve the error message at some of them; this refactoring would
make that work more difficult. Let me know if you think otherwise.
> >
> >
> > 2. The various in_range() functions needed adjusting to handle
> > infinite interval offsets.
> >
> > For timestamp values, I followed the precedent set by the equivalent
> > float/numeric code. I.e., all (finite and non-finite) timestamps are
> > regarded as infinitely following -infinity and infinitely preceding
> > +infinity.
> >
> > For time values, it's a bit different because no time values precede
> > or follow any other by more than 24 hours, so a window frame between
> > +inf following and +inf following is empty (whereas in the timestamp
> > case it contains +inf). Put another way, such a window frame is empty
> > because a time value can't be infinity.
> >
I think this code is reasonable. But users may find it inconsistent
with the other ways of achieving the same outcome. For example, we
don't allow non-finite intervals to be added to or subtracted from
time or timetz. So if someone compares rows using val >/<= base +/-
offset, those queries will fail, whereas the equivalent implied
conditions in a window specification will not throw an error. If you
have already considered this, I am fine with the code as is.
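To illustrate, with a hypothetical table t(val time), I'd expect
something like this (untested sketch):

-- explicit arithmetic with a non-finite interval fails ...
select * from t where val <= time '11:27:42' + interval 'infinity';
-- ... but the equivalent window frame bound is accepted
select count(*) over (order by val
       range between current row and interval 'infinity' following)
from t;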
This code doesn't handle non-finite intervals explicitly, but that's
in line with the interval comparison functions (interval_le/ge etc.),
which rely on infinities being represented by extreme values.
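For example, comparisons like these should already come out right
without any explicit infinity checks (untested sketch):

select interval 'infinity' > interval '100000 years',    -- expect true
       interval '-infinity' < interval '-100000 years';  -- expect true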
>
> I will review and test this. I will also take a look at what else we
> might be missing in the patch. [5] did mention that in_range()
> functions need to be assessed but I don't see corresponding changes in
> the subsequent patches. I will go over that list again.
Added a separate patch (0009) to fix
brin_minmax_multi_distance_interval(). The fix is inconsistent with
the way infinite timestamps and dates are handled in that file, but I
think the infinite timestamp and date handling is itself inconsistent
with the way infinite float values are handled. I have tried to be
consistent with float. Maybe we should fix the date and timestamp
functions as well.
I also changed brin_multi.sql to test infinite interval values in a
BRIN index. This required some further changes to existing queries.
I thought about combining these two INSERTs but decided against that,
since we would lose the NULL interval values:
-- throw in some NULL's and different values
INSERT INTO brintest_multi (inetcol, cidrcol) SELECT
inet 'fe80::6e40:8ff:fea9:8c46' + tenthous,
cidr 'fe80::6e40:8ff:fea9:8c46' + tenthous
FROM tenk1 ORDER BY thousand, tenthous LIMIT 25;
INSERT INTO brintest_multi(intervalcol) VALUES ('-infinity'), ('+infinity');
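(A combined version would have looked something like this hypothetical
sketch, which leaves none of those rows with a NULL intervalcol:)

INSERT INTO brintest_multi (inetcol, cidrcol, intervalcol) SELECT
    inet 'fe80::6e40:8ff:fea9:8c46' + tenthous,
    cidr 'fe80::6e40:8ff:fea9:8c46' + tenthous,
    CASE WHEN tenthous % 2 = 0 THEN interval 'infinity'
         ELSE interval '-infinity' END
FROM tenk1 ORDER BY thousand, tenthous LIMIT 25;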
I took some time to understand BRIN before making any changes; that
is what took the time.
On your patches.
I like the eval() function you have used. It's a bit harder to
understand initially, but it makes the query and output crisp, and
saves some SQL and output lines too. I tried to use the same trick
for time and timetz:
select t as time, i as interval,
       eval(format('time %L + interval %L', t, i)) AS time_pl,
       eval(format('time %L - interval %L', t, i)) AS time_mi,
       eval(format('timetz %L + interval %L', t, i)) AS timetz_pl,
       eval(format('timetz %L - interval %L', t, i)) AS timetz_mi
from (values ('11:27:42')) t1(t),
     (values ('infinity'),
             ('-infinity')) as t2(i);
The query and its output take about the same space either way, so I
decided against using it.
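Without eval(), the same coverage would need one statement per
combination, each failing individually, roughly:

select time '11:27:42' + interval 'infinity';
select time '11:27:42' - interval 'infinity';
select timetz '11:27:42' + interval 'infinity';
-- ... and so on for each type/operator/infinity combination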
I have added a separate patch (0008) to test negative interval values,
including -infinity, in PRECEDING and FOLLOWING specifications.
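For example, a case along these lines (hypothetical sketch), which I
would expect to be rejected with the usual "invalid preceding or
following size in window function" error:

select count(*) over (order by t
       range between interval '-infinity' preceding and current row)
from (values (time '10:00'), (time '11:00')) v(t);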
Patches 0001 to 0007 are the same as what you attached, but rebased on
the latest HEAD.
I think we should squash 0002 to 0007.
--
Best Wishes,
Ashutosh Bapat