Melvin Davidson wrote:
> Before ANYONE continues to insist that a serial id column is good, consider
> the case where the number of tuples will exceed a bigint.
> Don't say it cannot happen, because it can.
In practice, it cannot happen.
A tuple with a bigint column weighs at least 32 bytes (in the sense
that it's what pg_column_size("table".*) reports when "table" has
only a bigint column).
So the size of your hypothetical table would be at a minimum
32 bytes * 2^63 tuples = 2^68 bytes
But a Postgres table weighs 32 TB max, or 2^45 bytes
(see http://www.postgresql.org/about/ )
So the table with more rows than a bigint can count would have to be
2^23 (=8388608) times bigger than the biggest possible table.
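The back-of-envelope arithmetic above can be checked in a few lines (a plain Python sketch; the 32-byte minimum tuple size and the 32 TB table limit are the figures quoted above, not anything measured here):

```python
# Minimum on-disk weight of a tuple holding a single bigint column,
# per the pg_column_size("table".*) observation above.
TUPLE_BYTES = 32                 # 2^5 bytes

# A table with more rows than a bigint can count: at least 2^63 tuples.
MIN_TUPLES = 2**63

table_bytes = TUPLE_BYTES * MIN_TUPLES
assert table_bytes == 2**68      # 32 * 2^63 = 2^5 * 2^63

# PostgreSQL's documented maximum table size: 32 TB = 2^45 bytes.
MAX_TABLE_BYTES = 32 * 2**40
assert MAX_TABLE_BYTES == 2**45

# How many times bigger than the biggest possible table?
ratio = table_bytes // MAX_TABLE_BYTES
print(ratio)  # 8388608, i.e. 2^23
```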
Also there's the fact that COUNT() returns a BIGINT, so the tuples
couldn't even be counted in SQL. That by itself hints that tuple
counts are expected to always fit in a BIGINT these days.
Also what about pg_database_size() returning a bigint?
Even if the hypothetical table were alone in the database, and even if every
tuple occupied only 1 byte instead of 32+, the single table would exceed
what pg_database_size() can report.
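That last point also reduces to one comparison: at an (impossible) 1 byte per tuple, 2^63 tuples already occupy 2^63 bytes, one more than the largest value a signed 64-bit integer can hold, so a bigint-returning function like pg_database_size() could not report it (a quick Python sanity check of the numbers above):

```python
BIGINT_MAX = 2**63 - 1   # largest signed 64-bit value
MIN_TUPLES = 2**63       # more rows than a bigint can count

# Even at a fantasy 1 byte per tuple, the table's size alone
# exceeds what any bigint-returning size function can express.
bytes_at_one_byte_each = MIN_TUPLES * 1
assert bytes_at_one_byte_each > BIGINT_MAX
```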
Maybe at some point all these will be 128 bits, but that's years ahead.
Best regards,
--
Daniel Vérité
PostgreSQL-powered mailer: http://www.manitou-mail.org
Twitter: @DanielVerite