Thread: Best practices: Handling Daylight-saving time

Best practices: Handling Daylight-saving time

From
Együd Csaba
Date:
Hi All,
I'd like to ask your opinion about how to handle DST on a 24/7 system.
Where should it be handled: on the server side or on the client side? And
how could I (can it be done at all?) make it transparent?

Or must we bow to the fact that twice a year there are two unusable hours?
If it cannot be solved technically, then it is acceptable, but if there is a
chance to do it, I'd like to try.

Our system stores 200-1000 measurements per minute, coming from
substations. The substations' clocks are synchronized periodically as well.
When DST switches there is 1 hour of data missing (or overlapped).
Naturally, the client machines are automatically adjusted for DST.

How do others handle this?

Many many thanks,

-- Csaba



Re: Best practices: Handling Daylight-saving time

From
Bruno Wolff III
Date:
On Fri, Mar 11, 2005 at 15:25:28 +0100,
  Együd Csaba <csegyud@vnet.hu> wrote:
> Hi All,
> I'd like to ask your opininon about how to handle DST on an 7/24 system.

My advice would be to use GMT and not have to worry about DST while
collecting data. When displaying data you might convert the timestamps
to localtime if that is useful for people using the data.
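For instance, a minimal sketch of that approach (the table and column names here are hypothetical, not from the thread): collect and store everything as UTC instants, and convert only when presenting.

```sql
-- Hypothetical measurement table; timestamptz stores an absolute instant
CREATE TABLE measurement (
    taken_at  timestamptz NOT NULL,  -- an unambiguous point in time
    reading   float8      NOT NULL
);

-- Insert with an explicit UTC offset, regardless of the server's zone setting
INSERT INTO measurement VALUES ('2005-03-27 01:30:00+00', 42.0);

-- Convert to local wall-clock time only at display time
SELECT taken_at AT TIME ZONE 'Europe/Budapest' AS local_time, reading
FROM measurement;
```

Because the stored value is an instant, the DST transitions never produce a gap or an overlap in the data itself.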

Re: Best practices: Handling Daylight-saving time

From
Martijn van Oosterhout
Date:
On Fri, Mar 11, 2005 at 03:25:28PM +0100, Együd Csaba wrote:
> Hi All,
> I'd like to ask your opininon about how to handle DST on an 7/24 system.
> Where should it be handled: on the server side or on the client side? And
> how could I (at all could I???) make it transparent?
>
> Or we must bow to the fact that twice a year there are two unusable hours?
> If it cannot be solved technically, than it is acceptable, but if there is a
> chance to do it, I'd like to try it.

I guess the same way computers have been handling it for years: store
time as "seconds since epoch" and let the user interface handle the
translation. Timezone files for anywhere in the world are available as
well as routines for converting seconds since epoch to localtime and
vice-versa.

Hope this helps,
--
Martijn van Oosterhout   <kleptog@svana.org>   http://svana.org/kleptog/
> Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a
> tool for doing 5% of the work and then sitting around waiting for someone
> else to do the other 95% so you can sue them.


Re: Best practices: Handling Daylight-saving time

From
Christopher Browne
Date:
csegyud@vnet.hu (Együd Csaba) wrote:
> I'd like to ask your opininon about how to handle DST on an 7/24 system.
> Where should it be handled: on the server side or on the client side? And
> how could I (at all could I???) make it transparent?

Don't use DST.

Use GMT/UTC.

That makes the issue go away.
--
(reverse (concatenate 'string "gro.mca" "@" "enworbbc"))
http://cbbrowne.com/info/slony.html
Signs of a Klingon  Programmer #2: "You  question the worthiness of my
code? I should kill you where you stand!"

Re: Best practices: Handling Daylight-saving time

From
Steve Crawford
Date:
On Friday 11 March 2005 6:25 am, Együd Csaba wrote:
> Hi All,
> I'd like to ask your opininon about how to handle DST on an 7/24
> system. Where should it be handled: on the server side or on the
> client side? And how could I (at all could I???) make it
> transparent?
>
> Or we must bow to the fact that twice a year there are two unusable
> hours? If it cannot be solved technically, than it is acceptable,
> but if there is a chance to do it, I'd like to try it.
>
> Our system stores 200-1000 measured data per minute, comming from
> substations. The substations clock is synchronized periodically as
> well. When the DST is switched there is 1 hour of data missing (or
> overlapped). Certainly the client machines are autmatically
> adjusted for the DST.

First, a point in time is a point in time. You might call it
2005-03-11 01:02:03.12345 GMT or you might represent it in any other
time zone, with or without Daylight Saving Time (or European Summer
Time or...), but it is still the same point in time.

Your assertion about unusable hours is incorrect. Rather, if you have
this problem then you aren't sending/storing the time stamp data
completely/correctly.

Here on the West Coast of the US in the spring the time simply moves
seamlessly and continuously from 01:59:59 PST to 03:00:00 PDT.
Similarly in the fall it goes from 01:59:59 PDT to 01:00:00 PST. If
you throw away the time zone, then you will naturally have problems.

The localization in both *nix and PostgreSQL will display the times
just fine based on whether or not they are in standard or daylight
time. (Try "select now();" and "select now() + '1 month'::interval;")

Whether you choose to fix things by specifying complete timestamps,
standardize on GMT, use unix timestamps, etc. is up to you. It will
all work fine if both ends use the same convention and you don't drop
critical parts of the timezone information.

Side note: if you are using cron/at to schedule your data collection
then you need to investigate the behaviour of your versions of those
programs and compare it to your desired outcome.

Cheers,
Steve


Re: Best practices: Handling Daylight-saving time

From
Randall Nortman
Date:
On Fri, Mar 11, 2005 at 03:25:28PM +0100, Együd Csaba wrote:
> Hi All,
> I'd like to ask your opininon about how to handle DST on an 7/24 system.
> Where should it be handled: on the server side or on the client side? And
> how could I (at all could I???) make it transparent?

As others have mentioned, store timestamps on the server in UTC, and
translate to/from local time on the client side if desired.  Postgres
can do this for you in the query; just look in the docs to see how.

I have personally encountered situations where that is not quite
adequate, however, because the data elements may originate in
different time zones, and it may be necessary to display in the
original time zone instead of (or in addition to) the local time zone
of the client.  (Weather data, for example, should generally be
displayed using the time zone of the source of the data.)  In this
case, you must store some representation of the source time zone in a
separate field, in addition to the UTC timestamp.  You can then use
both fields together to retrieve the correct original local time.
(Make sure to note in that time zone field whether or not DST is
observed in that locale.)  Exactly how to do this depends on your
application.
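One way to sketch that two-field arrangement (column names here are illustrative assumptions, not from the thread):

```sql
-- UTC instant plus the zone it originated in
CREATE TABLE weather_obs (
    observed_at  timestamptz NOT NULL,  -- stored internally as UTC
    source_zone  text        NOT NULL,  -- e.g. 'America/Chicago'
    temperature  float8
);

-- Reconstruct the original wall-clock time at the source for display
SELECT observed_at AT TIME ZONE source_zone AS original_local_time,
       temperature
FROM weather_obs;
```

Using a full zone name (rather than a fixed offset) also captures whether DST is observed at that locale, since the zone rules carry that information.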

> Or we must bow to the fact that twice a year there are two unusable hours?
> If it cannot be solved technically, than it is acceptable, but if there is a
> chance to do it, I'd like to try it.

When timestamps are stored in UTC, the missing (when skipping forward)
and duplicate (when setting back) hours are only an issue in the user
interface, for entry and display of the times.  The missing hours are
generally easier to deal with, since the only thing affected is the
calculation of durations.  If your interface displays elapsed time,
make sure you take this into account.  (The easy way is to do the
duration calculations in the database, using UTC.)  Duplicate times
are more difficult: when displaying, you need to indicate whether DST
was in effect or not (i.e., was it the first 02:30 or the second?).
If times are to be entered manually in local time, the interface needs
to notice when an ambiguous time has been entered and ask the user to
disambiguate somehow.  This is a hell of a lot of trouble to go to for
something that will only come up very rarely or never in most
applications, but you have to do it if you want to get it right.

> Our system stores 200-1000 measured data per minute, comming from
> substations. The substations clock is synchronized periodically as well.
> When the DST is switched there is 1 hour of data missing (or overlapped).
> Certainly the client machines are autmatically adjusted for the DST.

If you have control over the production of data on these client
machines, just make sure it is produced in UTC, and the issue goes
away.  Otherwise, you can convert their local time back to UTC for
storage in the database, but then you have the duplicate hour
ambiguity to deal with.  If you know the data will be coming in
sequentially and/or in near real-time, you can probably figure it out
with a little extra logic in the app that loads the data into the DB.

Randall

Re: Best practices: Handling Daylight-saving time

From
Mike Benoit
Date:
Not exactly...

Here is a scenario I ran in to with collecting bandwidth usage and
displaying it back in graph form to customers.

You can store the timestamps in GMT, but the customer wants to see when
spikes happen in his localtime, which most likely has DST. So twice a
year, you are either compressing two hours of bandwidth usage into one,
or the opposite, stretching one hour into two, which of course produces
somewhat odd-looking graphs during that time.

Besides making a note of DST on the graph so the customer can see it, I
haven't found an elegant solution to this problem.


On Fri, 2005-03-11 at 12:19 -0500, Christopher Browne wrote:
> csegyud@vnet.hu (Együd Csaba) wrote:
> > I'd like to ask your opininon about how to handle DST on an 7/24 system.
> > Where should it be handled: on the server side or on the client side? And
> > how could I (at all could I???) make it transparent?
>
> Don't use DST.
>
> Use GMT/UTC.
>
> That makes the issue go away.
--
Mike Benoit <ipso@snappymail.ca>


Re: Best practices: Handling Daylight-saving time

From
Scott Marlowe
Date:
On Fri, 2005-03-11 at 13:47, Mike Benoit wrote:
> Not exactly...
>
> Here is a scenario I ran in to with collecting bandwidth usage and
> displaying it back in graph form to customers.
>
> You can store the timestamps in GMT, but the customer wants to see when
> spikes happen in his localtime, which most likely has DST. So twice a
> year, you are either compressing two hours of bandwidth usage into one,
> or the opposite, stretching one hour in to two, which of course produces
> somewhat odd looking graphs during that time.
>
> Besides making note of DST on the graph so the customer can see it, I
> haven't found a elegant solution to this problem.

I would think that if you stored them with the local timezone, and used
AT TIME ZONE to convert them to GMT for sorting, then they should show
up in the right order.  Just a guess.
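As a quick sketch of how AT TIME ZONE behaves in both directions (illustrative only; the semantics are symmetric between the two timestamp types):

```sql
-- timestamptz -> timestamp: render a stored instant as wall-clock time
-- in the named zone
SELECT timestamptz '2005-03-11 12:00:00+00' AT TIME ZONE 'GMT';

-- timestamp -> timestamptz: interpret a zone-less wall-clock time as
-- having occurred in the named zone
SELECT timestamp '2005-03-11 12:00:00' AT TIME ZONE 'GMT';
```

Note that values stored as timestamptz already represent absolute instants, so they sort correctly without any conversion; AT TIME ZONE only matters for display and for interpreting zone-less input.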

Re: Best practices: Handling Daylight-saving time

From
Andrew - Supernews
Date:
On 2005-03-11, Mike Benoit <ipso@snappymail.ca> wrote:
> Here is a scenario I ran in to with collecting bandwidth usage and
> displaying it back in graph form to customers.
>
> You can store the timestamps in GMT, but the customer wants to see when
> spikes happen in his localtime, which most likely has DST. So twice a
> year, you are either compressing two hours of bandwidth usage into one,
> or the opposite, stretching one hour in to two, which of course produces
> somewhat odd looking graphs during that time.

That seems an odd way to handle it. If you graph the data by days according
to the customer's time, then on one day in the year your graph is one hour
smaller, and on another day it is one hour larger. The point to notice is
that the customer's local time should affect only the _labels_ on the graph,
and possibly your choice of start and end times, and not the _data_ being
plotted.

For example, suppose I have a table:

create table tztst (ts timestamptz primary key, value float8 not null);

and I want to plot individual days from it in the customer's timezone:

test=> set timezone to 'America/Denver';  -- or wherever he is
SET

test=> select ts::time,value from tztst
 where ts between '2005-04-02 00:00:00' and '2005-04-02 23:59:59' order by ts;
    ts    |      value
----------+------------------
 00:00:00 | 286.764410064167
 01:00:00 | 291.294525072763
 02:00:00 | 294.912455364789
 03:00:00 | 297.582051776698
 04:00:00 | 299.276640583591
 05:00:00 | 299.979290014267
 06:00:00 |  299.68297942788
 07:00:00 | 298.390669461862
 08:00:00 | 296.115272450212
 09:00:00 | 292.879523407724
 10:00:00 | 288.715752869235
 11:00:00 | 283.665563853606
 12:00:00 | 277.779416180109
 13:00:00 | 271.116122290598
 14:00:00 | 263.742259615024
 15:00:00 | 255.731505351766
 16:00:00 |  247.16390030942
 17:00:00 | 238.125049165494
 18:00:00 | 228.705265132773
 19:00:00 | 218.998667579544
 20:00:00 | 209.102241619985
 21:00:00 |  199.11486907096
 22:00:00 | 189.136340457592
 23:00:00 | 179.266357939324
(24 rows)

test=> select ts::time,value from tztst
 where ts between '2005-04-03 00:00:00' and '2005-04-03 23:59:59' order by ts;
    ts    |      value
----------+------------------
 00:00:00 | 169.603539118895
 01:00:00 | 160.244431687857
 03:00:00 | 151.282548753949
 04:00:00 | 142.807434489044
 05:00:00 | 134.903769433375
 06:00:00 | 127.650524395576
 07:00:00 | 121.120171402458
 08:00:00 | 115.377959582483
 09:00:00 | 110.481263218032
 10:00:00 | 106.479008480546
 11:00:00 | 103.411184576393
 12:00:00 | 101.308444187935
 13:00:00 |  100.19179720206
 14:00:00 | 100.072400786337
 15:00:00 | 100.951447910284
 16:00:00 | 102.820155425614
 17:00:00 | 105.659851824544
 18:00:00 | 109.442163799338
 19:00:00 | 114.129299739007
 20:00:00 | 119.674427330605
 21:00:00 | 126.022141492211
 22:00:00 | 133.109017962198
 23:00:00 | 140.864247013488
(23 rows)

test=> select ts::time,value from tztst
 where ts between '2005-10-30 00:00:00' and '2005-10-30 23:59:59' order by ts;
    ts    |      value
----------+------------------
 00:00:00 | 110.349122831853
 01:00:00 | 114.741289638094
 01:00:00 | 119.837588745288
 02:00:00 | 125.595930978012
 03:00:00 | 131.968759497219
 04:00:00 | 138.903442561358
 05:00:00 | 146.342708199957
 06:00:00 | 154.225117209803
 07:00:00 | 162.485570567354
 08:00:00 | 171.055847066766
 09:00:00 | 179.865166743321
 10:00:00 | 188.840775429059
 11:00:00 | 197.908545612907
 12:00:00 |  206.99358864294
 13:00:00 | 216.020873214721
 14:00:00 | 224.915845037786
 15:00:00 | 233.605042562575
 16:00:00 | 242.016703682664
 17:00:00 | 250.081358401684
 18:00:00 | 257.732402570221
 19:00:00 | 264.906647954345
 20:00:00 | 271.544844092858
 21:00:00 | 277.592167633387
 22:00:00 | 282.998675105977
 23:00:00 |  287.71971539486
(25 rows)

All of these can be converted to meaningful (and un-distorted) graphs.

--
Andrew, Supernews
http://www.supernews.com - individual and corporate NNTP services

Re: Best practices: Handling Daylight-saving time

From
Mike Benoit
Date:
Instead of spikes or dips, with your method customers will just be
confused as to why the labels skip an hour, or have two of the same
hour. It would make for a more accurate graph, though; you're right.

Six of one, half a dozen of the other, I guess.


On Fri, 2005-03-11 at 23:33 +0000, Andrew - Supernews wrote:
> On 2005-03-11, Mike Benoit <ipso@snappymail.ca> wrote:
> > Here is a scenario I ran in to with collecting bandwidth usage and
> > displaying it back in graph form to customers.
> >
> > You can store the timestamps in GMT, but the customer wants to see when
> > spikes happen in his localtime, which most likely has DST. So twice a
> > year, you are either compressing two hours of bandwidth usage into one,
> > or the opposite, stretching one hour in to two, which of course produces
> > somewhat odd looking graphs during that time.
>
> That seems an odd way to handle it. If you graph the data by days according
> to the customer's time, then on one day in the year your graph is one hour
> smaller, and on another day it is one hour larger. The point to notice is
> that the customer's local time should affect only the _labels_ on the graph,
> and possibly your choice of start and end times, and not the _data_ being
> plotted.
>
> [...]
--
Mike Benoit <ipso@snappymail.ca>


Re: Best practices: Handling Daylight-saving time

From
Karsten Hilbert
Date:
On Fri, Mar 11, 2005 at 01:43:21PM -0500, Randall Nortman wrote:

> As others have mentioned, store timestamps on the server in UTC,

1) As long as I store them as <timestamp with time zone> I should
not need to care what they are stored as on the backend as
long as I provide the proper timezone for the client location.
Correct ?

2) If I then retrieve them as "... at time zone <...>" I will get
the equivalent time in the time zone of the retrieving client.
The same could be achieved with "set timezone" per session.
Correct ?

3) If I retrieve them without "at time zone" I will get them with
the time zone that was stored in the first place, right ?

4) I could be wrong on 3, it might be that I then get
times at the time zone the machine running PostgreSQL is set
to - still the correct point in time but not the *source* time
zone.

GnuMed operates on the assumption that 1 and 2 hold true. It
does not rely on 3. Are we safe?

Karsten
--
GPG key ID E4071346 @ wwwkeys.pgp.net
E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346

Re: Best practices: Handling Daylight-saving time

From
Randall Nortman
Date:
On Sat, Mar 12, 2005 at 05:44:52PM +0100, Karsten Hilbert wrote:
> On Fri, Mar 11, 2005 at 01:43:21PM -0500, Randall Nortman wrote:
>
> > As others have mentioned, store timestamps on the server in UTC,
>
> 1) As long as I store them as <timestamp with time zone> I should
> not need to care what they are stored as on the backend as
> long as I provide the proper timezone for the client location.
> Correct ?
>
> 2) If I then retrieve them as "... at time zone <...>" I will get
> the equivalent time in the time zone of the retrieving client.
> The same could be be achieved with "set timezone" per session.
> Correct ?

Yes and Yes


> 3) If I retrieve them without "at time zone" I will get them with
> the time zone that was stored in the first place, right ?
[...]

This would be news to me.  I don't think it's possible to *not* have a
timezone set on a session.  The server will have a default timezone
based either on the local (server) system time or the setting of the
timezone variable in postgresql.conf.  Additionally, libpq
applications will, I believe, issue a "set timezone" during initial
connection setup.  The manual (section 8.5.3) seems to indicate that
libpq will only do that if PGTZ is set, but I seem to recall it
happening without PGTZ.  (But I'm not entirely sure; feel free to
experiment.)

Anyway, afaik, Postgres does not store the "original" timezone
anywhere, and so could not possibly retrieve it.  I think the only
physical difference between the "timestamp" and "timestamp with time
zone" types is in the system catalog; the manual states that both of
them store 8 bytes and have the same range.  If "timestamp with time
zone" were storing anything extra, I would think the storage size
would be greater or else the range smaller.

Randall

Re: Best practices: Handling Daylight-saving time

From
Tom Lane
Date:
Randall Nortman <postgreslists@wonderclown.com> writes:
> Anyway, afaik, Postgres does not store the "original" timezone
> anywhere, and so could not possibly retrieve it.  I think the only
> physical difference between the "timestamp" and "timestamp with time
> zone" types is in the system catalog; the manual states that both of
> them store 8 bytes and have the same range.

There is no physical difference between the types: they are both 8-byte
quantities measuring seconds since the Epoch.  I think we use midnight
1/1/2000 as the Epoch rather than the traditional Unix 1/1/1970 Epoch,
but otherwise it's exactly the same idea.

The logical difference between the two is that timestamp with tz assumes
that the Epoch is midnight UTC (which means that any particular stored
value represents a very definite real-world instant), while timestamp
without tz ignores the entire concept of time zones; its Epoch is
midnight in an unspecified time zone.  Thus, timestamp with tz can and
does convert back and forth between UTC (for the stored values) and your
current TimeZone setting (for display).  In timestamp without tz, what
you see is all there is.

Personally I would always use timestamp with tz for representing actual
time instants.  Timestamp without tz has uses in some calendar
applications, but it is inherently ambiguous as a representation of a
specific instant.  In particular, for the data recording application
that started this thread, it'd be a horrid idea to even think of using
timestamp without tz, specifically because it's incapable of dealing
with things like DST jumps.
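A short illustrative session of the difference (hypothetical, assuming the pre-2007 US DST rules present in the zone files):

```sql
SET timezone TO 'America/New_York';

-- timestamptz: '2005-04-03 02:30' does not exist as a local time
-- (the clocks jump from 02:00 to 03:00), so the input is normalized
-- to a real instant
SELECT timestamptz '2005-04-03 02:30:00';

-- timestamp (without tz): the same literal is accepted verbatim,
-- with no notion of the DST gap at all
SELECT timestamp '2005-04-03 02:30:00';
```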

            regards, tom lane

Re: Best practices: Handling Daylight-saving time

From
Karsten Hilbert
Date:
On Sat, Mar 12, 2005 at 12:22:38PM -0500, Randall Nortman wrote:
>
> Anyway, afaik, Postgres does not store the "original" timezone
> anywhere, and so could not possibly retrieve it.  I think the only
> physical difference between the "timestamp" and "timestamp with time
> zone" types is in the system catalog; the manual states that both of
> them store 8 bytes and have the same range.  If "timestamp with time
> zone" were storing anything extra, I would think the storage size
> would be greater or else the range smaller.
Am I correct to assume that this could be solved with a user-defined
composite data type? The docs seem to suggest so.
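A hedged sketch of what such a composite might look like (the type and field names are illustrative assumptions):

```sql
-- Pair the absolute instant with the zone it came from
CREATE TYPE ts_with_source_tz AS (
    ts          timestamptz,  -- the instant, stored internally as UTC
    source_zone text          -- e.g. 'Europe/Berlin'
);
```

The original wall-clock time could then be recovered with something like `(col).ts AT TIME ZONE (col).source_zone`, though two plain columns (as discussed above) achieve the same with less machinery.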

Karsten
--
GPG key ID E4071346 @ wwwkeys.pgp.net
E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346

Re: Best practices: Handling Daylight-saving time

From
Karsten Hilbert
Date:
For the record, in case people search the archives for solutions.

Problem:

You need the time zone in use for <timestamp with time zone>
when data was previously inserted/updated.

Discussion:

> > > As others have mentioned, store timestamps on the server in UTC,
> >
> > 1) As long as I store them as <timestamp with time zone> I should
> > not need to care what they are stored as on the backend as
> > long as I provide the proper timezone for the client location.
> > Correct ?
> >
> > 2) If I then retrieve them as "... at time zone <...>" I will get
> > the equivalent time in the time zone of the retrieving client.
> > The same could be be achieved with "set timezone" per session.
> > Correct ?
>
> Yes and Yes
>
> > 3) If I retrieve them without "at time zone" I will get them with
> > the time zone that was stored in the first place, right ?
> [...]
>
> This would be news to me.
...
> Anyway, afaik, Postgres does not store the "original" timezone
> anywhere, and so could not possibly retrieve it.

Solution:

GnuMed now uses a trigger to store the time zone at the time
of data insertion. This was close enough for our needs at the
time (yes, we are old, triggers still need to return opaque on
some of our installations...).

Code:

\unset ON_ERROR_STOP
drop trigger tr_set_encounter_timezone on clin_encounter;
drop function f_set_encounter_timezone();
\set ON_ERROR_STOP 1

create function f_set_encounter_timezone() returns opaque as '
begin
    if TG_OP = ''INSERT'' then
        NEW.source_time_zone := (select (extract(timezone from (select now()))::text || ''seconds'')::interval);
    else
        NEW.source_time_zone := OLD.source_time_zone;
    end if;
    return NEW;
end;
' language 'plpgsql';

create trigger tr_set_encounter_timezone
    before insert or update on clin_encounter
    for each row execute procedure f_set_encounter_timezone()
;
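With the offset stored that way, the original wall-clock time can later be reconstructed, since AT TIME ZONE also accepts an interval. A sketch (the `started` timestamptz column is a hypothetical name, not from the thread):

```sql
-- Render each encounter's timestamp in the zone offset that was
-- in effect where and when it was inserted
SELECT started AT TIME ZONE source_time_zone AS original_local_time
FROM clin_encounter;
```

Note this captures only the numeric offset at insertion time, not the zone's DST rules, which was close enough for the stated needs.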

Karsten
--
GPG key ID E4071346 @ wwwkeys.pgp.net
E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346

Hi,

I really have a problem with a production environment (RH 9, Postgresql
7.4).

When executing a stored procedure on my computer (development
environment: 7.4 under Cygwin), everything is fine.

When I deploy the same stored procedure with the same data on the
production env (different OS and PostgreSQL instance), the stored
procedure crashes. I get this error:
java.io.IOException: Unable to get row: ERROR: cannot EXECUTE a null
querystring when executing the

I looked with pgAdmin III and found that a character used as a string
separator (i.e.: ÿ) is shown as � on the production database. It looks
fine in my dev env. (I included part of both stored procedures below.)

Both stored procedures have been added to PostgreSQL via JDBC. When I
update the stored procedure via pgAdmin III, it looks fine.

Any idea what the error could be? Is there a JDBC/PostgreSQL 7.4 version
that could cause this behavior? Do I have to set a flag somewhere? Is
there a way I can work around this problem?

Thanks for your help .. it's really appreciated

/David





CREATE OR REPLACE FUNCTION uk_webos_parseitemprice(int4, _int4)
RETURNS int4 AS
'
DECLARE
commandId ALIAS FOR $1;
arrayProp ALIAS FOR $2;
rawData RECORD;
oneRow text[];
i INTEGER;
idValue VARCHAR;
typeValue VARCHAR;

....

OFFSET 1
LOOP
select into oneRow (string_to_array(rawData.VDDATA,\'ÿ\'));
action:=oneRow[1];
FOR i IN array_lower(oneRow, 1)..array_upper(oneRow, 1) LOOP
column:=oneRow[i];
IF (column = \'þ\')
THEN
....

END;
'
LANGUAGE 'plpgsql' VOLATILE;





CREATE OR REPLACE FUNCTION uk_webos_parseitemprice(int4, _int4)
RETURNS int4 AS
'
DECLARE

....


-- RAISE NOTICE \'test \' ;
FOR rawData IN
SELECT VDNUM, VDVSSRC, VDVSNUM, VDKEY, VDDATA, ts
FROM VD
WHERE VDVSNUM = commandId
AND VDKEY = \'IL\'
AND VDVSSRC = 1
ORDER BY VDNUM
OFFSET 1
LOOP
select into oneRow (string_to_array(rawData.VDDATA,\'�\'));
action:=oneRow[1];
FOR i IN array_lower(oneRow, 1)..array_upper(oneRow, 1) LOOP
column:=oneRow[i];
IF (column = \'�\')
THEN
column:= null ;
END IF;
IF (i = arrayProp[1])
THEN
idValue:= column;
ELSIF (i = arrayProp[2])
THEN
typeValue:= column;
ELSIF (i = arrayProp[3])
THEN
itemIdValue:= column;
ELSIF (i = arrayProp[4])
THEN
resourceIdValue:= column;
ELSIF (i = arrayProp[5])
THEN
minimalQuantityValue:= column;
ELSIF (i = arrayProp[6])
THEN
unitPriceValue:= column;
END IF;
END LOOP;
IF ((action = \'UPDATE\') or (action = \'GUESS\'))
THEN
EXECUTE \'DELETE FROM IL WHERE ILNUM =\' || idValue;
END IF;
-- process the insert statement
insertStatement:= \'INSERT INTO IL ( ILNUM, ILTYPE, ILICNUM, ILRRNUM,
ILQTE, ILPRIX, ts ) VALUES ( \' || idValue ||\', \'|| typeValue ||\',\';
IF (itemIdValue is null)
THEN
insertStatement:= insertStatement || \' null,\';
ELSE
insertStatement:= insertStatement || quote_literal(itemIdValue)|| \',\';
END IF;
IF (resourceIdValue is null)
THEN
insertStatement:= insertStatement || \' null,\';
ELSE
insertStatement:= insertStatement || quote_literal(resourceIdValue)||
\',\';
END IF;
insertStatement:= insertStatement ||
minimalQuantityValue||\',\'||unitPriceValue||\',CURRENT_TIMESTAMP ) \';
-- RAISE NOTICE \'insertStatement %\', insertStatement ;
EXECUTE insertStatement;
END LOOP;
return -1;
END;
'
LANGUAGE 'plpgsql' VOLATILE;

Re: Problem with special character

From
Ragnar Hafstað
Date:
On Mon, 2005-03-14 at 16:32 -0500, David Gagnon wrote:
> Hi,
>
> I really have a problem with a production environment (RH 9, Postgresql
> 7.4).

> When I deploy on the production env the same stored procedure with the
> same data (different OS and postgresql instance) the stored procedure
> crash. I get this error:
> java.io.IOException: Unable to get row: ERROR: cannot EXECUTE a null
> querystring when executing the
>
> I looked with pgadminIII and found that a charater used as a string
> separator (i.e.: ÿ) is shown as � on the production database. It look
> just oki in my dev env. (I included part of both stored procedure below).


were the 2 clusters initialized with the same locale
settings ?

gnari




Re: Problem with special character � on

From
David Gagnon
Date:
Thanks for your answer.

  The ISP created the db for me .. so I don't have this information. I
searched the web for how to get this info via pgAdmin III and haven't
found it :-(  Is there a way to get this information once the database
has been created?  I looked via psql .. haven't found it either.

Thanks
/David

Ragnar Hafstað wrote:

>On Mon, 2005-03-14 at 16:32 -0500, David Gagnon wrote:
>
>
>>Hi,
>>
>>I really have a problem with a production environment (RH 9, Postgresql
>>7.4).
>>
>>
>
>
>
>>When I deploy on the production env the same stored procedure with the
>>same data (different OS and postgresql instance) the stored procedure
>>crash. I get this error:
>>java.io.IOException: Unable to get row: ERROR: cannot EXECUTE a null
>>querystring when executing the
>>
>>I looked with pgadminIII and found that a charater used as a string
>>separator (i.e.: ÿ) is shown as � on the production database. It look
>>just oki in my dev env. (I included part of both stored procedure below).
>>
>>
>
>
>were the 2 clusters initialized with the same locale
>settings ?
>
>gnari
>
>
>
>
>
>


Re: Problem with special character � on

From
Miroslav Šulc
Date:
David Gagnon wrote:

> Thanks for your answer.
>
>  The ISP created the db fom me .. So I don't have this information. I
> search the web to know how to get this info via PgadminIII and I
> haven't found :-(  Is there a way to get this information once the
> database have been created ?  I looked via psql .. I haven`t found either

David, you can try 'SHOW ALL' command.

>
> Thanks
> /David

Miroslav
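The 'SHOW ALL' suggestion can be tried directly from psql; these are standard commands, though note that on this server version the locale settings are fixed at initdb time and only reported here:

```sql
-- Inspect locale and encoding settings of the running server:
SHOW ALL;                -- every setting as name / value
SHOW lc_collate;         -- sort-order locale (fixed at initdb)
SHOW lc_ctype;           -- character-classification locale (fixed at initdb)
SHOW server_encoding;    -- encoding the database was created with
SHOW client_encoding;    -- encoding the client session is using
```

A mismatch between `client_encoding` and what the application actually sends is a common cause of characters arriving mangled even when the server-side locales look identical.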


Re: Problem with special character � on

From
David Gagnon
Date:
Hi

Thanks for the tips.  The locales are the same ...  unless I'm not
looking at the right thing...


Production ENV
-----------------
lc_collate                     | C
lc_ctype                       | C
lc_messages                    | C
lc_monetary                    | C
lc_numeric                     | C
lc_time                        | C


Dev ENV
----------
"lc_collate";"C"
"lc_ctype";"C"
"lc_messages";"C"
"lc_monetary";"C"
"lc_numeric";"C"
"lc_time";"C"


Thanks for your help
/David

PROD ALL VARIABLES
---------------------------
"add_missing_from";"on"
"australian_timezones";"off"
"authentication_timeout";"60"
"check_function_bodies";"on"
"checkpoint_segments";"3"
"checkpoint_timeout";"300"
"checkpoint_warning";"30"
"client_encoding";"UNICODE"
"client_min_messages";"notice"
"commit_delay";"0"
"commit_siblings";"5"
"cpu_index_tuple_cost";"0.001"
"cpu_operator_cost";"0.0025"
"cpu_tuple_cost";"0.01"
"DateStyle";"ISO, MDY"
"db_user_namespace";"off"
"deadlock_timeout";"1000"
"debug_pretty_print";"off"
"debug_print_parse";"off"
"debug_print_plan";"off"
"debug_print_rewritten";"off"
"default_statistics_target";"10"
"default_transaction_isolation";"read committed"
"default_transaction_read_only";"off"
"dynamic_library_path";"$libdir"
"effective_cache_size";"1000"
"enable_hashagg";"on"
"enable_hashjoin";"on"
"enable_indexscan";"on"
"enable_mergejoin";"on"
"enable_nestloop";"on"
"enable_seqscan";"on"
"enable_sort";"on"
"enable_tidscan";"on"
"explain_pretty_print";"on"
"extra_float_digits";"0"
"from_collapse_limit";"8"
"fsync";"on"
"geqo";"on"
"geqo_effort";"1"
"geqo_generations";"0"
"geqo_pool_size";"0"
"geqo_selection_bias";"2"
"geqo_threshold";"11"
"join_collapse_limit";"8"
"krb_server_keyfile";"FILE:/etc/sysconfig/pgsql/krb5.keytab"
"lc_collate";"C"
"lc_ctype";"C"
"lc_messages";"C"
"lc_monetary";"C"
"lc_numeric";"C"
"lc_time";"C"
"log_connections";"off"
"log_duration";"off"
"log_error_verbosity";"default"
"log_executor_stats";"off"
"log_hostname";"off"
"log_min_duration_statement";"-1"
"log_min_error_statement";"panic"
"log_min_messages";"notice"
"log_parser_stats";"off"
"log_pid";"off"
"log_planner_stats";"off"
"log_source_port";"off"
"log_statement";"off"
"log_statement_stats";"off"
"log_timestamp";"off"
"max_connections";"100"
"max_expr_depth";"10000"
"max_files_per_process";"1000"
"max_fsm_pages";"20000"
"max_fsm_relations";"1000"
"max_locks_per_transaction";"64"
"password_encryption";"on"
"port";"5432"
"pre_auth_delay";"0"
"preload_libraries";"unset"
"random_page_cost";"4"
"regex_flavor";"advanced"
"rendezvous_name";"unset"
"search_path";"$user,public"
"server_encoding";"UNICODE"
"server_version";"7.4.7"
"shared_buffers";"1000"
"silent_mode";"off"
"sort_mem";"1024"
"sql_inheritance";"on"
"ssl";"off"
"statement_timeout";"0"
"stats_block_level";"off"
"stats_command_string";"off"
"stats_reset_on_server_start";"on"
"stats_row_level";"off"
"stats_start_collector";"on"
"superuser_reserved_connections";"2"
"syslog";"0"
"syslog_facility";"LOCAL0"
"syslog_ident";"postgres"
"tcpip_socket";"on"
"TimeZone";"unknown"
"trace_notify";"off"
"transaction_isolation";"read committed"
"transaction_read_only";"off"
"transform_null_equals";"off"
"unix_socket_directory";"unset"
"unix_socket_group";"unset"
"unix_socket_permissions";"511"
"vacuum_mem";"8192"
"virtual_host";"unset"
"wal_buffers";"8"
"wal_debug";"0"
"wal_sync_method";"fdatasync"
"zero_damaged_pages";"off"


DEV ENV FULL VARIABLE
-------------------------------
"add_missing_from";"on"
"australian_timezones";"off"
"authentication_timeout";"60"
"check_function_bodies";"on"
"checkpoint_segments";"3"
"checkpoint_timeout";"300"
"checkpoint_warning";"30"
"client_encoding";"UNICODE"
"client_min_messages";"notice"
"commit_delay";"0"
"commit_siblings";"5"
"cpu_index_tuple_cost";"0.001"
"cpu_operator_cost";"0.0025"
"cpu_tuple_cost";"0.01"
"DateStyle";"ISO, MDY"
"db_user_namespace";"off"
"deadlock_timeout";"1000"
"debug_pretty_print";"off"
"debug_print_parse";"off"
"debug_print_plan";"off"
"debug_print_rewritten";"off"
"default_statistics_target";"10"
"default_transaction_isolation";"read committed"
"default_transaction_read_only";"off"
"dynamic_library_path";"$libdir"
"effective_cache_size";"1000"
"enable_hashagg";"on"
"enable_hashjoin";"on"
"enable_indexscan";"on"
"enable_mergejoin";"on"
"enable_nestloop";"on"
"enable_seqscan";"on"
"enable_sort";"on"
"enable_tidscan";"on"
"explain_pretty_print";"on"
"extra_float_digits";"0"
"from_collapse_limit";"8"
"fsync";"on"
"geqo";"on"
"geqo_effort";"1"
"geqo_generations";"0"
"geqo_pool_size";"0"
"geqo_selection_bias";"2"
"geqo_threshold";"11"
"join_collapse_limit";"8"
"krb_server_keyfile";"unset"
"lc_collate";"C"
"lc_ctype";"C"
"lc_messages";"C"
"lc_monetary";"C"
"lc_numeric";"C"
"lc_time";"C"
"log_connections";"off"
"log_duration";"off"
"log_error_verbosity";"default"
"log_executor_stats";"off"
"log_hostname";"off"
"log_min_duration_statement";"-1"
"log_min_error_statement";"panic"
"log_min_messages";"notice"
"log_parser_stats";"off"
"log_pid";"off"
"log_planner_stats";"off"
"log_source_port";"off"
"log_statement";"off"
"log_statement_stats";"off"
"log_timestamp";"off"
"max_connections";"40"
"max_expr_depth";"10000"
"max_files_per_process";"1000"
"max_fsm_pages";"20000"
"max_fsm_relations";"1000"
"max_locks_per_transaction";"64"
"password_encryption";"on"
"port";"5432"
"pre_auth_delay";"0"
"preload_libraries";"unset"
"random_page_cost";"4"
"regex_flavor";"advanced"
"rendezvous_name";"unset"
"search_path";"$user,public"
"server_encoding";"UNICODE"
"server_version";"7.4.5"
"shared_buffers";"1000"
"silent_mode";"off"
"sort_mem";"1024"
"sql_inheritance";"on"
"ssl";"off"
"statement_timeout";"0"
"stats_block_level";"off"
"stats_command_string";"off"
"stats_reset_on_server_start";"on"
"stats_row_level";"off"
"stats_start_collector";"on"
"superuser_reserved_connections";"2"
"syslog";"0"
"syslog_facility";"LOCAL0"
"syslog_ident";"postgres"
"tcpip_socket";"on"
"TimeZone";"   5   4,M4.1.0/2,M10.5.0/2"
"trace_notify";"off"
"transaction_isolation";"read committed"
"transaction_read_only";"off"
"transform_null_equals";"off"
"unix_socket_directory";"unset"
"unix_socket_group";"unset"
"unix_socket_permissions";"511"
"vacuum_mem";"8192"
"virtual_host";"unset"
"wal_buffers";"8"
"wal_debug";"0"
"wal_sync_method";"fsync"
"zero_damaged_pages";"off"







Miroslav Šulc wrote:

> David Gagnon wrote:
>
>> Thanks for your answer.
>>
>>  The ISP created the db fom me .. So I don't have this information. I
>> search the web to know how to get this info via PgadminIII and I
>> haven't found :-(  Is there a way to get this information once the
>> database have been created ?  I looked via psql .. I haven`t found
>> either
>
>
> David, you can try 'SHOW ALL' command.
>
>>
>> Thanks
>> /David
>
>
> Miroslav



Re: Best practices: Handling Daylight-saving time

From
Scott Ribe
Date:
>> 3) If I retrieve them without "at time zone" I will get them with
>> the time zone that was stored in the first place, right ?
> [...]
>
> This would be news to me.  I don't think it's possible to *not* have a
> timezone set on a session.  The server will have a default timezone
> based either on the local (server) system time or the setting of the
> timezone variable in postgresql.conf.  Additionally, libpq
> applications will, I believe, issue a "set timezone" during initial
> connection setup.

This is certainly the default behavior--I don't know whether there are
settings to change it. All I know is that I regularly work with a database
located in a different time zone, and displayed times are adjusted to my
local time.

--
Scott Ribe
scott_ribe@killerbytes.com
http://www.killerbytes.com/
(303) 665-7007 voice
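The session-timezone behavior described above can be observed directly. A small sketch (the zone name is illustrative; on a 7.4-era server, `AT TIME ZONE` accepts only abbreviations and interval offsets -- full Olson names came later):

```sql
-- timestamptz values represent an absolute instant (stored as UTC
-- internally); the session timezone only changes how they are displayed.
SET timezone TO 'UTC';
SELECT now();                              -- rendered in UTC

SET timezone TO 'Europe/Budapest';
SELECT now();                              -- same instant, local rendering

-- Strip the zone for a fixed-offset view (works on old servers too):
SELECT now() AT TIME ZONE INTERVAL '+01:00';
```

This is why two clients in different zones see different wall-clock times for the same stored row without any data changing.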



Re: Problem with special character � on

From
Miroslav Šulc
Date:
David Gagnon wrote:

> Hi
>
> Thanks for the tips.  The locales are the same ...  unless I'm not
> looking at the right thing...
>
> ...
>
Can you try to run the procedure from some other environment? It seems
you use some Java class, so I mean try psql, phpPgAdmin or something
different.

Miroslav


Re: Problem with special character � on

From
David Gagnon
Date:
I did try to update the stored procedure via pgAdminIII and it
worked.  The problem seems to be the JDBC driver .. but everything works
well on my TOMCAT/POSTGRESQL/WINDOWS platform.

I think it's something in the database settings .. are there other
settings that can cause this behavior?

Thanks!

/David

Re: Problem with special character �

From
Ragnar Hafstað
Date:
On Mon, 2005-03-14 at 19:13 -0500, David Gagnon wrote:
> I did try to update the stored procedure via pgAdminIII and it
> worked.  The problem seems to be the JDBC driver .. but everything works
> well on my TOMCAT/POSTGRESQL/WINDOWS platform.
>
> I think it's something in the database settings .. are there other
> settings that can cause this behavior?

maybe some difference in the environments that the two
tomcats run in? are their locales the same ?

gnari



Re: Best practices: Handling Daylight-saving time

From
Karsten Hilbert
Date:
> > This would be news to me.  I don't think it's possible to *not* have a
> > timezone set on a session.  The server will have a default timezone
> > based either on the local (server) system time or the setting of the
> > timezone variable in postgresql.conf.  Additionally, libpq
> > applications will, I believe, issue a "set timezone" during initial
> > connection setup.
>
> This is certainly the default behavior--I don't know whether there are
> settings to change it. All I know is that I regularly work with a database
> located in a different time zone, and displayed times are adjusted to my
> local time.
That surely works. The question was whether there was a
built-in way to recover the time zone of the client inserting
the data.

Karsten
--
GPG key ID E4071346 @ wwwkeys.pgp.net
E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346
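As noted above, there is no built-in record of which zone the inserting client was in -- the server normalizes timestamptz to an absolute instant. If the original client zone matters later, the usual workaround is to store it explicitly alongside the timestamp. A sketch with hypothetical table and column names:

```sql
-- Store the instant and, separately, the zone the client reported:
CREATE TABLE measurement (
    id        serial PRIMARY KEY,
    ts_utc    timestamptz NOT NULL,   -- absolute instant (UTC internally)
    client_tz text        NOT NULL    -- e.g. 'Europe/Budapest', app-supplied
);

INSERT INTO measurement (ts_utc, client_tz)
VALUES (now(), 'Europe/Budapest');
```

The application supplies `client_tz` itself; the server cannot infer it from the connection.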

Re: Problem with special character �

From
David Gagnon
Date:
Hi Gnari,

  I'll do some more tests tonight to figure out if it's a Tomcat problem
and I'll get back to you with that info.

Thanks!
/David

>maybe some difference in the environments that the two
>tomcats run in? are their locales the same ?
>
>gnari
>
>
>


Re: Best practices: Handling Daylight-saving time

From
Christopher Browne
Date:
Don't use DST.

Use GMT/UTC.

That makes the issue go away.
--
(reverse (concatenate 'string "gro.mca" "@" "enworbbc"))
http://cbbrowne.com/info/slony.html
Signs of a Klingon  Programmer #2: "You  question the worthiness of my
code? I should kill you where you stand!"

Re: Best practices: Handling Daylight-saving time

From
Együd Csaba (Freemail)
Date:
After a few days of working on the problem I can state that - IMHO - this is
the best way:

Using UTC (or any other timezone with NO DST - this is the most important
part) is the only reliable way to store continuous data.
On the client we can easily convert server time to local time, even if the
server uses a timezone different from UTC/GMT. The client can take the
result of (select extract('timezone' from CURRENT_TIMESTAMP)) into account
and correct the local time with that. The opposite direction (converting
local time to server time) is just as simple.
The only small wrinkle is the DST bias: you have to increment (or decrement)
the number of hours to add by the DST bias. That's all.
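The approach summarized above can be sketched in SQL (table and column names are illustrative; full zone names in AT TIME ZONE need a server newer than 7.4, so a fixed-offset interval is shown as the portable form):

```sql
-- Store measurements as timestamptz with the session in UTC; the stored
-- value is an absolute instant, so DST switches create no gaps/overlaps:
SET timezone TO 'UTC';
INSERT INTO measurement (ts, value) VALUES (now(), 42.0);

-- Display-side conversion to the viewer's local offset:
SELECT ts AT TIME ZONE INTERVAL '+01:00' AS local_time, value
FROM measurement;

-- The server's current UTC offset in seconds, as used in the text above:
SELECT extract(timezone FROM CURRENT_TIMESTAMP);
```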

Thank you all very much for opening my eyes!

Best Regards,
-- Csaba

-----Original Message-----
From: pgsql-general-owner@postgresql.org
[mailto:pgsql-general-owner@postgresql.org] On Behalf Of Christopher Browne
Sent: Thursday, March 17, 2005 6:23 AM
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Best practices: Handling Daylight-saving time

Don't use DST.

Use GMT/UTC.

That makes the issue go away.
--
(reverse (concatenate 'string "gro.mca" "@" "enworbbc"))
http://cbbrowne.com/info/slony.html
Signs of a Klingon  Programmer #2: "You  question the worthiness of my code?
I should kill you where you stand!"


