Re: Re: Getting milliseconds out of TIMESTAMP - Mailing list pgsql-general

From David Wall
Subject Re: Re: Getting milliseconds out of TIMESTAMP
Date
Msg-id 002901c0cb4b$52ab07e0$5a2b7ad8@expertrade.com
In response to Getting milliseconds out of TIMESTAMP  ("David Wall" <d.wall@computer.org>)
Responses Re: Re: Getting milliseconds out of TIMESTAMP
List pgsql-general
> Just curious, but what is the point of having times 'acurate' to the
> milisecond in a database?

In my case it's a simple matter that I include a timestamp in a digital
signature, and since my timestamp comes from Java (and most Unixes are the
same), it has millisecond resolution "built in."  The problem is that when I
retrieved the information about that digital signature, verification was
failing because the database only went to centiseconds.  I've "fixed" my code
by reducing my timestamp resolution.
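The workaround amounts to the following sketch (the class and method names
are illustrative, not from my actual code):

```java
// Minimal sketch of the workaround: truncate a Java millisecond
// timestamp to centisecond (1/100 s) resolution before it is signed,
// so the value survives a round trip through a TIMESTAMP column
// that only keeps hundredths of a second.
public class TimestampTruncate {
    // Drop the last millisecond digit, keeping centisecond resolution.
    static long toCentiseconds(long millis) {
        return (millis / 10) * 10;
    }

    public static void main(String[] args) {
        long now = System.currentTimeMillis();
        // The truncated value always ends in a zero millisecond digit.
        System.out.println(toCentiseconds(now) % 10 == 0); // prints "true"
    }
}
```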

As another point, computers are incredibly fast these days, and doing more
than 100 logging operations in a second is commonplace.  If you log records
to the database at more than 100/second, then you cannot use the TIMESTAMP
as an indicator of the order in which messages were emitted, since any rows
logged within the same centisecond will carry identical timestamps.
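A two-line illustration of that collision (the epoch values are made up for
the example):

```java
// Two log events 3 ms apart collapse to the same centisecond value,
// so their stored TIMESTAMPs can no longer establish emission order.
public class OrderCollision {
    static long toCentiseconds(long millis) {
        return (millis / 10) * 10;
    }

    public static void main(String[] args) {
        long a = 988243200121L;  // first event (example epoch millis)
        long b = a + 3;          // second event, 3 ms later
        System.out.println(toCentiseconds(a) == toCentiseconds(b)); // prints "true"
    }
}
```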

Of course, the question is easy to turn around.  Why not just have
timestamps accurate to the second?  Or perhaps to the minute since many
(most?) computer clocks are not that accurate anyway?

The real question for me is that the 7.1 docs say the resolution of a
timestamp is 8 bytes at "1 microsecond / 14 digits", yet I generally see
YYYY-MM-DD HH:MM:SS.cc returned in my queries (both with psql and with
JDBC).  This is unusual since an 8-byte integer goes up to 2^63, which is
far more than 14 digits, and the "ascii digits" of that preceding format
already total 16 digits.
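The digit counts are easy to sanity-check (this is just arithmetic, not
anything from the docs):

```java
import java.math.BigInteger;

// Sanity-check the digit arithmetic: 2^63 has 19 decimal digits
// (far more than the documented 14), and the printed format
// YYYY-MM-DD HH:MM:SS.cc already contains 16 digits.
public class DigitCount {
    public static void main(String[] args) {
        String twoTo63 = BigInteger.valueOf(2).pow(63).toString();
        System.out.println(twoTo63.length());         // prints 19

        int formatDigits = 4 + 2 + 2 + 2 + 2 + 2 + 2; // Y+M+D+H+M+S+cc
        System.out.println(formatDigits);             // prints 16
    }
}
```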

David

