> On 10 May 2018, at 14:41, Adrian Klaver <adrian.klaver@aklaver.com> wrote:
>> OK, so by using TIMESTAMP WITH TIME ZONE, you force all apps to submit timezone-qualified timestamps, whatever language they are written in.
>
> Not really:
>
> https://www.postgresql.org/docs/10/static/datatype-datetime.html
>
> "For timestamp with time zone, the internally stored value is always in UTC (Universal Coordinated Time,
traditionallyknown as Greenwich Mean Time, GMT). An input value that has an explicit time zone specified is converted
toUTC using the appropriate offset for that time zone. If no time zone is stated in the input string, then it is
assumedto be in the time zone indicated by the system's TimeZone parameter, and is converted to UTC using the offset
forthe timezone zone.”
Many thanks for this clarification. So you can't rely on TIMESTAMP WITH TIME ZONE forcing the app to explicitly specify the offset: if the app is not explicit, the server falls back to its configured time zone.
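For example (a minimal psql sketch; the zone name is just an illustration):

    -- Session time zone left at some non-UTC server default:
    SET TIME ZONE 'Europe/Rome';

    -- No offset in the input, so it is read in the session zone
    -- (CEST, +02 on that date) and stored internally as 12:41 UTC:
    SELECT '2018-05-10 14:41:00'::timestamptz;
    -- 2018-05-10 14:41:00+02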
So to get deterministic timestamps, you could either:
a) make sure the server is always configured to run in UTC;
b) issue SET TIME ZONE 'UTC'; at the beginning of any application session
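For example, option (b) in a psql session (a minimal sketch):

    -- Pin the session to UTC so offset-less input is deterministic:
    SET TIME ZONE 'UTC';

    -- No offset in the input: assumed to be UTC:
    SELECT '2018-05-10 14:41:00'::timestamptz;
    -- 2018-05-10 14:41:00+00

    -- Explicit offset: converted to UTC regardless of the session zone:
    SELECT '2018-05-10 14:41:00+02'::timestamptz;
    -- 2018-05-10 12:41:00+00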
>
>>> After all, it is always converted to UTC server-side anyway?
>> And because of the internal UTC representation, there is no room for ambiguous timezones.
>
> Define ambiguous timezone?
What I meant to say is that there should be no possibility for an effective time zone to arise implicitly.
For example, if you
(1) didn't control the db server config, and
(2) forgot to enforce UTC at the client driver level, and
(3) didn't set the offset in the app session,
then the only way to know the effective zone offset is to look up the server default.
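For instance, either of these reads the effective setting in psql:

    SHOW TimeZone;
    -- or equivalently:
    SELECT current_setting('TimeZone');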
Is this plausible?