I'm doing some tests with date-time related fields to design my web
application.
I was already dissatisfied with PostgreSQL's handling of time-zone
concepts (an issue already discussed here - not entirely PG's fault,
rather a SQL thing), and I vehemently reject the idea of a global
server-side time-zone configuration having any influence on my DB
layer, so I am planning to always use the plain TIMESTAMP data type
(WITHOUT TIME ZONE).
What I want is for a TIMESTAMP field to be conceptually equivalent to
a plain {YEAR, MONTH, DAY, HH, MM, SS} tuple, i.e. a "local time". To
clarify: for me, "local time" = "time zone unknown", which is NOT the
same as assuming some default (OS or database) time zone. It might
very well happen that I store in, say, an ALARM_TIME column two
datetimes that correspond to users in different ("local") time zones.
So I want '2011-05-31 10:00:00' in this field to mean 10:00 AM in
some UNKNOWN time zone (that of a particular user).
In this scenario, I assumed the natural convention is to store just a
UTC time, using a TIMESTAMP. I believe that's the idea of a plain
TIMESTAMP.
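At the application layer, the convention I have in mind would look
something like this (a Python sketch; the user's UTC-3 offset is
purely hypothetical):

```python
from datetime import datetime, timedelta, timezone

# Hypothetical: a user whose local time zone is UTC-3 picks
# '2011-05-31 10:00:00' as an alarm time.
user_tz = timezone(timedelta(hours=-3))
local = datetime(2011, 5, 31, 10, 0, 0, tzinfo=user_tz)

# Convert to UTC and drop the tzinfo: the resulting naive value is
# what the application would write into the plain TIMESTAMP column.
utc_naive = local.astimezone(timezone.utc).replace(tzinfo=None)
print(utc_naive)  # 2011-05-31 13:00:00
```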
However, I'm not sure I can get totally time-zone-independent behaviour:
CREATE TABLE t1 ( ts timestamp without time zone);
db=# insert into t1 values('1970-01-01 00:00:00');
INSERT 0 1
db=# select ts,extract(epoch from ts) from t1;
ts | date_part
---------------------+-----------
1970-01-01 00:00:00 | 21600
I was dismayed to see this; I had assumed that my insert stored a
Unix timestamp of 0. It seems not?
But on the other hand, if I modify the server time zone, what changes
is the epoch calculation!
db=# SET TIMEZONE TO 'XXX11';
SET
db=# select ts,extract(epoch from ts) from t1;
ts | date_part
---------------------+-----------
1970-01-01 00:00:00 | 39600
Why? What is happening here?
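My working hypothesis (which I'd like confirmed or refuted):
extract(epoch from ...) on a plain timestamp interprets the value in
the session's TimeZone setting before counting seconds from the Unix
epoch. The numbers are consistent with that, assuming my first session
was running at UTC-6 and that the POSIX-style zone 'XXX11' means
UTC-11 (POSIX offsets have the sign reversed). A Python sketch of the
arithmetic:

```python
from datetime import datetime, timedelta, timezone

ts = datetime(1970, 1, 1)  # the naive value stored in the TIMESTAMP column

def epoch_if_interpreted_at(offset_hours):
    """Seconds since 1970-01-01 00:00 UTC, if the naive timestamp is
    read as local time in a zone at the given fixed UTC offset."""
    tz = timezone(timedelta(hours=offset_hours))
    return ts.replace(tzinfo=tz).timestamp()

print(epoch_if_interpreted_at(-6))   # 21600.0 -- matches my first session (UTC-6?)
print(epoch_if_interpreted_at(-11))  # 39600.0 -- matches SET TIMEZONE TO 'XXX11'
```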
--
Hernán J. González
http://hjg.com.ar/