Thread: Re: [GENERAL] Issues with generate_series using integer boundaries
On 3 February 2011 13:58, Thom Brown <thom@linux.com> wrote:
> On 3 February 2011 13:32, Thom Brown <thom@linux.com> wrote:
>> Actually, further testing indicates this causes other problems:
>>
>> postgres=# SELECT x FROM generate_series(1, 9,-1) AS a(x);
>>  x
>> ---
>>  1
>> (1 row)
>>
>> Should return no rows.
>>
>> postgres=# SELECT x FROM generate_series(1, 9,3) AS a(x);
>>  x
>> ----
>>   1
>>   4
>>   7
>>  10
>> (4 rows)
>>
>> Should return 3 rows.
>
> Still messy code, but the attached patch does the job now:
>
> postgres=# SELECT x FROM
> generate_series(2147483643::int4,2147483647::int4) AS a(x);
>      x
> ------------
>  2147483643
>  2147483644
>  2147483645
>  2147483646
>  2147483647
> (5 rows)
>
> postgres=# SELECT x FROM
> generate_series(2147483642::int4,2147483647::int4, 2) AS a(x);
>      x
> ------------
>  2147483642
>  2147483644
>  2147483646
> (3 rows)
>
> postgres=# SELECT x FROM
> generate_series(2147483643::int4,2147483647::int4, 6) AS a(x);
>      x
> ------------
>  2147483643
> (1 row)
>
> postgres=# SELECT x FROM generate_series((-2147483643)::int4,
> (-2147483648)::int4, -1) AS a(x);
>      x
> -------------
>  -2147483643
>  -2147483644
>  -2147483645
>  -2147483646
>  -2147483647
>  -2147483648
> (6 rows)
>
> postgres=# SELECT x FROM generate_series(1, 9,-1) AS a(x);
>  x
> ---
> (0 rows)
>
> postgres=# SELECT x FROM generate_series(1, 9,3) AS a(x);
>  x
> ---
>  1
>  4
>  7
> (3 rows)

Copying to -hackers.

The issue is that generate_series will not return if the series hits either the upper or lower boundary during increment, or goes beyond it. The attached patch fixes this behaviour, but it should probably be done a better way. The first 3 examples above will not return.

--
Thom Brown
Twitter: @darkixion
IRC (freenode): dark_ixion
Registered Linux user: #516935
Attachment
On Fri, Feb 4, 2011 at 21:32, Thom Brown <thom@linux.com> wrote:
> The issue is that generate_series will not return if the series hits
> either the upper or lower boundary during increment, or goes beyond
> it. The attached patch fixes this behaviour, but should probably be
> done a better way. The first 3 examples above will not return.

The same bug exists in the int8 and timestamp[tz] versions. We also need a fix for them.

=# SELECT x FROM generate_series(9223372036854775807::int8,
   9223372036854775807::int8) AS a(x);
=# SELECT x FROM generate_series('infinity'::timestamp, 'infinity',
   '1 sec') AS a(x);
=# SELECT x FROM generate_series('infinity'::timestamptz, 'infinity',
   '1 sec') AS a(x);

> postgres=# SELECT x FROM generate_series(1, 9,-1) AS a(x);
> postgres=# SELECT x FROM generate_series(1, 9,3) AS a(x);

They work as expected in 9.1dev.

--
Itagaki Takahiro
On 7 February 2011 09:04, Itagaki Takahiro <itagaki.takahiro@gmail.com> wrote:
> The same bug exists in the int8 and timestamp[tz] versions.
> We also need a fix for them.
> =# SELECT x FROM generate_series(9223372036854775807::int8,
> 9223372036854775807::int8) AS a(x);

Yes, of course, the int8 functions are separate. I attach an updated patch, although I still think there's a better way of doing this.

> =# SELECT x FROM generate_series('infinity'::timestamp, 'infinity',
> '1 sec') AS a(x);
> =# SELECT x FROM generate_series('infinity'::timestamptz, 'infinity',
> '1 sec') AS a(x);

I'm not sure how this should be handled. Should there just be a check for either kind of infinity, returning an error if one is found? I didn't find anything wrong with using timestamp boundaries:

postgres=# SELECT x FROM generate_series('1 Jan 4713 BC
00:00:00'::timestamp, '1 Jan 4713 BC 00:00:05'::timestamp, '1 sec') AS
a(x);
           x
------------------------
 4713-01-01 00:00:00 BC
 4713-01-01 00:00:01 BC
 4713-01-01 00:00:02 BC
 4713-01-01 00:00:03 BC
 4713-01-01 00:00:04 BC
 4713-01-01 00:00:05 BC
(6 rows)

Although whether this demonstrates a true timestamp boundary, I'm not sure.

>> postgres=# SELECT x FROM generate_series(1, 9,-1) AS a(x);
>> postgres=# SELECT x FROM generate_series(1, 9,3) AS a(x);
> They work as expected in 9.1dev.

Those 2 were to demonstrate that the changes don't affect existing functionality. My previous patch proposal (v2) caused these to return unexpected output.

--
Thom Brown
Twitter: @darkixion
IRC (freenode): dark_ixion
Registered Linux user: #516935
Attachment
On Mon, Feb 7, 2011 at 20:38, Thom Brown <thom@linux.com> wrote:
> Yes, of course, the int8 functions are separate. I attach an updated
> patch, although I still think there's a better way of doing this.

Thanks. Please add the patch to the *current* commitfest, because it's a bugfix:
https://commitfest.postgresql.org/action/commitfest_view?id=9

I've not tested the patch yet, but if we could drop the following line in the patch, the code could be much cleaner:

  /* ensure first value in series should exist */

> I'm not sure how this should be handled. Should there just be a check
> for either kind of infinity and return an error if that's the case?

Maybe so. It would also work if we returned infinity on timestamp overflow, but I've not tested that yet. Either way, we need a similar fix for the timestamp versions.

--
Itagaki Takahiro
On 8 February 2011 09:22, Itagaki Takahiro <itagaki.takahiro@gmail.com> wrote:
> Thanks. Please add the patch to the *current* commitfest
> because it's a bugfix.
> https://commitfest.postgresql.org/action/commitfest_view?id=9
>
> I've not tested the patch yet, but if we could drop the following
> line in the patch, the code could be much cleaner.
>   /* ensure first value in series should exist */

Well, in its current state I expect it to get rejected, but I guess it at least gets a better chance of being looked at. I've added it to the commitfest now.

--
Thom Brown
Twitter: @darkixion
IRC (freenode): dark_ixion
Registered Linux user: #516935
On 02/07/2011 06:38 AM, Thom Brown wrote:
> Yes, of course, the int8 functions are separate. I attach an updated
> patch, although I still think there's a better way of doing this.
>
> [...]
>
> Those 2 were to demonstrate that the changes don't affect existing
> functionality. My previous patch proposal (v2) caused these to return
> unexpected output.

Isn't this all really a bug fix that should be backpatched, rather than a commitfest item?

cheers

andrew
On Wed, Feb 9, 2011 at 10:17, Andrew Dunstan <andrew@dunslane.net> wrote:
> Isn't this all really a bug fix that should be backpatched, rather than
> a commitfest item?

Sure, but we don't have a bug tracker...

--
Itagaki Takahiro
On 02/08/2011 08:19 PM, Itagaki Takahiro wrote:
> Sure, but we don't have a bug tracker...

Quite right, but the commitfest manager isn't meant to be a substitute for one. Bug fixes aren't subject to the same restrictions as feature changes.

cheers

andrew
On Tue, Feb 8, 2011 at 8:30 PM, Andrew Dunstan <andrew@dunslane.net> wrote:
> Quite right, but the commitfest manager isn't meant to be a substitute
> for one. Bug fixes aren't subject to the same restrictions as feature
> changes.

Another option would be to add this here:

http://wiki.postgresql.org/wiki/PostgreSQL_9.1_Open_Items

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On 9 February 2011 02:11, Robert Haas <robertmhaas@gmail.com> wrote:
> Another option would be to add this here:
>
> http://wiki.postgresql.org/wiki/PostgreSQL_9.1_Open_Items

I've removed it from the commitfest because it really doesn't belong there, and I've added it to the open items list.

Thanks

--
Thom Brown
Twitter: @darkixion
IRC (freenode): dark_ixion
Registered Linux user: #516935
On Wed, Feb 9, 2011 at 4:50 AM, Thom Brown <thom@linux.com> wrote:
> I've removed it from the commitfest because it really doesn't belong
> there, and I've added it to the open items list.

So, I finally got around to looking at this, and I think there is a simpler solution. When an overflow occurs while calculating the next value, that just means that the value we're about to return is the last one that should be generated. So we just need to frob the context state so that the next call will decide we're done. There are any number of ways to do that; I just picked what looked like the easiest one.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Attachment
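Robert's attachment isn't reproduced in the archive, but the approach he describes — treating an overflow while computing the next value as proof that the value just returned was the last one — can be sketched as follows. This is an illustrative standalone model, not the committed patch: the `series_state` struct and `series_next` function are invented names, and the real code uses PostgreSQL's set-returning-function machinery in the backend rather than this shape.

```c
#include <stdint.h>
#include <limits.h>

/* Minimal model of the per-call context for generate_series(int4). */
typedef struct series_state
{
    int32_t current;    /* next value to return */
    int32_t finish;     /* stop boundary */
    int32_t step;       /* increment (assumed nonzero here) */
    int     done;       /* set when the just-returned value was the last */
} series_state;

/* Returns 1 and stores a value in *out, or 0 when the series is done. */
static int series_next(series_state *st, int32_t *out)
{
    if (st->done)
        return 0;

    /* Direction check: stop once current has passed finish. */
    if ((st->step > 0 && st->current > st->finish) ||
        (st->step < 0 && st->current < st->finish))
        return 0;

    *out = st->current;

    /* Compute the next value in 64-bit arithmetic. If it won't fit in
     * int32, the value we just emitted was necessarily the last one, so
     * flag termination instead of letting the increment wrap around. */
    int64_t next = (int64_t) st->current + st->step;
    if (next > INT32_MAX || next < INT32_MIN)
        st->done = 1;
    else
        st->current = (int32_t) next;

    return 1;
}
```

With this shape, the problem cases from the thread behave correctly: a series ending exactly at INT32_MAX or INT32_MIN emits its final value and then stops, and generate_series(1, 9, -1) emits nothing because the direction check fails on the first call.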
On 17 June 2011 04:44, Robert Haas <robertmhaas@gmail.com> wrote:
> So, I finally got around to looking at this, and I think there is a
> simpler solution. When an overflow occurs while calculating the next
> value, that just means that the value we're about to return is the
> last one that should be generated. So we just need to frob the
> context state so that the next call will decide we're done. There are
> any number of ways to do that; I just picked what looked like the
> easiest one.

I knew there'd be a much simpler way of solving this. Works for me. Thanks, Robert.

--
Thom Brown
Twitter: @darkixion
IRC (freenode): dark_ixion
Registered Linux user: #516935
EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
> On Wed, Feb 9, 2011 at 4:50 AM, Thom Brown <thom@linux.com> wrote:
>> I've removed it from the commitfest because it really doesn't belong
>> there, and I've added it to the open items list.
>
> So, I finally got around to looking at this, and I think there is a
> simpler solution. When an overflow occurs while calculating the next
> value, that just means that the value we're about to return is the last
> one that should be generated. So we just need to frob the context state
> so that the next call will decide we're done. There are any number of
> ways to do that; I just picked what looked like the easiest one.

Tangential comment, but have you considered emitting a warning (and/or log entry) when you are 10,000-50,000 away from issuing the last available number in the sequence, so that some recognition exists that any code depending on the sequence is going to fail soon?

Also, during sequence creation you know the integer type being used, so the maximum value is known and an overflow should not need to come into play (I guess the trade-off is the implicit "try-catch" [or whatever mechanism C uses] performance hit versus the need to store another full integer in the data structure).

You could also give access to the "warning threshold" value so that the developer can change it to whatever value is desired (with a meaningful default, of course).

David J.
On Fri, Jun 17, 2011 at 10:39 AM, David Johnston <polobo@yahoo.com> wrote:
> Tangential comment but have you considered emitting a warning (and/or log
> entry) when you are 10,000-50,000 away from issuing the last available
> number in the sequence so that some recognition exists that any code
> depending on the sequence is going to fail soon?

There are already tools out there that can monitor this stuff -- for example, check_postgres.pl.

http://bucardo.org/check_postgres/check_postgres.pl.html#sequence

We tend to avoid emitting warnings for this kind of thing because they can consume vast amounts of disk space, and a lot of times no one's looking at them anyway.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Robert Haas <robertmhaas@gmail.com> writes:
> So, I finally got around to looking at this, and I think there is a
> simpler solution. When an overflow occurs while calculating the next
> value, that just means that the value we're about to return is the
> last one that should be generated. So we just need to frob the
> context state so that the next call will decide we're done. There are
> any number of ways to do that; I just picked what looked like the
> easiest one.

+1 for this solution.

BTW, there was some mention of changing the timestamp versions of generate_series as well, but right offhand I'm not convinced that those need any change. I think you'll get overflow detection there automatically from the functions being used --- and if not, it's a bug in those functions, not in generate_series.

			regards, tom lane
On Fri, Jun 17, 2011 at 2:15 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> +1 for this solution.
>
> BTW, there was some mention of changing the timestamp versions of
> generate_series as well, but right offhand I'm not convinced that
> those need any change. I think you'll get overflow detection there
> automatically from the functions being used --- and if not, it's a
> bug in those functions, not in generate_series.

Maybe not, because those functions probably throw an error if an overflow is detected, and that's not really correct. By definition, the second argument to generate_series() is the point at which we should stop generating, and that point has to be within the range of the underlying data type. So if an overflow occurs, that's just another way of saying that we've certainly gone past the stop point and needn't generate anything further. The error is an artifact of the method we've used to generate the next point.

I'm not sure how much energy it's worth expending on that case. Using really large dates may be less common than using values that strain the range of a 4-byte integer. But it might at least be worth a TODO.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Robert Haas <robertmhaas@gmail.com> writes:
> On Fri, Jun 17, 2011 at 2:15 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> BTW, there was some mention of changing the timestamp versions of
>> generate_series as well, but right offhand I'm not convinced that
>> those need any change. I think you'll get overflow detection there
>> automatically from the functions being used --- and if not, it's a
>> bug in those functions, not in generate_series.

> Maybe not, because those functions probably throw an error if an
> overflow is detected, and that's not really correct.

Oh, good point.

> I'm not sure how much energy it's worth expending on that case. Using
> really large dates may be less common than using values that strain
> the range of a 4-byte integer. But it might at least be worth a TODO.

Yeah, I can't get excited about it either; restructuring that code enough to avoid an error seems like a lot more work than the case is worth. Maybe someday somebody will hit the case in practice and then be motivated to work on it, but in the meantime ...

			regards, tom lane