On 09/14/2017 11:11 AM, Rafal Pietrak wrote:
>
> Not really.
>
> As I said, I'm not looking for performance or "fair probability" of
> planetary-wide uniqueness.
>
> My main objective is the "guarantee". Which I've tried to indicate
> referring to "future UPDATEs".
>
> What I mean here is functionality similar to a "primary key" or "unique
> constraint". Whenever somebody (an application, possibly a faulty one,
> IMPORTANT!) tries to INSERT or UPDATE a non-unique value there (a value
> that could have been generated earlier by a UUID algorithm, or even by a
> sequence), and that value is already present in some table using that
> (mysterious) "global primary key", the application simply fails the
> transaction like any other "not unique" constraint violation.
>
> That's the goal.
>
> A multitude of tables using a single sequence does not give that guarantee.
>
> As I've said, the solution closest to my target is a separate table with
> just one column holding that "global primary key", which gets inserted or
> updated by triggers on INSERT/UPDATE of the "client tables"... only I'm
> not so sure how to "cleanly" manage a multitude of tables sharing the
> keys in that "global table of keys", that is, its "back references".
>
> So I'm stuck with a seriously incomplete solution.
>
> That's why I have the impression that I'm going in an entirely wrong
> direction here.
>
>
So you care if the same id is used in separate, unrelated tables? What's
your fear here? And I completely get the confusion generated by the
same small integer being re-used in various contexts ("sample id" is the
bane of my existence). Could you use a sufficiently accurate time value?
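For what it's worth, the trigger-based scheme you describe does work mechanically: every "client table" claims its id in one shared key table, and the unique constraint there rejects a duplicate no matter which table it comes from. Here is a minimal sketch of that pattern; I'm using Python with SQLite purely for illustration (the table and trigger names are invented), but the same shape translates to PostgreSQL with plpgsql triggers:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE global_keys (id INTEGER PRIMARY KEY);          -- the shared "table of keys"

CREATE TABLE invoices (id INTEGER PRIMARY KEY, note TEXT);  -- "client" table #1
CREATE TABLE orders   (id INTEGER PRIMARY KEY, note TEXT);  -- "client" table #2

-- Each client table claims its id in global_keys; a duplicate id anywhere
-- violates global_keys' primary key and aborts the triggering statement.
CREATE TRIGGER invoices_claim BEFORE INSERT ON invoices
BEGIN
    INSERT INTO global_keys (id) VALUES (NEW.id);
END;

CREATE TRIGGER orders_claim BEFORE INSERT ON orders
BEGIN
    INSERT INTO global_keys (id) VALUES (NEW.id);
END;
""")

conn.execute("INSERT INTO invoices VALUES (1, 'first use of id 1')")

rejected = False
try:
    # Same id in an unrelated table: the trigger hits the unique
    # constraint on global_keys, so this INSERT fails.
    conn.execute("INSERT INTO orders VALUES (1, 'duplicate id 1')")
except sqlite3.IntegrityError:
    rejected = True

print("duplicate rejected:", rejected)   # duplicate rejected: True
```

A production version would also need an UPDATE trigger on each client table (delete the old id from the key table, insert the new one), and a DELETE trigger to release ids; the "back reference" problem you mention is exactly that cleanup bookkeeping, which this sketch doesn't solve.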
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general