Josh Berkus <josh@agliodbs.com> writes:
> I was just checking on our year-2027 compliance, and happened to notice
> that time with time zone takes up 12 bytes. This seems peculiar, given
> that timestamp with time zone is only 8 bytes, and by my count we only
> need 5 for the time with microsecond precision. What's up with that?
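(For reference, the 5-byte figure checks out: 86,400 sec/day * 1,000,000
usec/sec = 86,400,000,000 usec/day, which is less than 2^37, so 37 bits,
i.e. 5 bytes, are enough for a full day at microsecond precision.)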
I think it's an 8-byte seconds count plus 4 bytes to indicate the
timezone. If this datatype had any actual real-world use then it might
be worth worrying about how big it is, but AFAICS its only excuse for
existence is to satisfy the SQL standard.
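For reference, the stored layout is roughly the following. This is a
sketch in the spirit of TimeTzADT in src/include/utils/date.h
(integer-datetime build); the names here are illustrative rather than
verbatim:

    /* Sketch of the 12-byte timetz payload (cf. TimeTzADT in
     * src/include/utils/date.h); the exact declaration may differ
     * slightly by version. */
    #include <stdint.h>

    typedef struct
    {
        int64_t time;   /* microseconds since midnight (integer datetimes) */
        int32_t zone;   /* numeric time zone offset, in seconds */
    } timetz_sketch;    /* 8 + 4 = 12 bytes of payload, padding aside */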
> Also, what is the real range of our 8-byte *integer* timestamp?
See the fine manual. I believe the limits have more to do with
calendar arithmetic than with the nominal range of 2^64 microseconds.
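To put rough numbers on that, a quick throwaway calculation (nothing
from the tree, just arithmetic):

    /* Nominal reach of an int64 microsecond count centered on the
     * PostgreSQL epoch (2000-01-01); a back-of-the-envelope check,
     * not project code. */
    #include <stdio.h>

    int main(void)
    {
        double max_usec  = 9223372036854775807.0;   /* 2^63 - 1 */
        double usec_year = 86400e6 * 365.2425;      /* Gregorian year in usec */

        printf("nominal reach: about +/- %.0f years around 2000\n",
               max_usec / usec_year);
        /* roughly +/- 292,000 years, whereas the documented limits
         * (4713 BC to 294276 AD, per the manual) are set by the
         * calendar code. */
        return 0;
    }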
regards, tom lane