Haven't tried it, but would the md5 routine in pgcrypto (contrib/) work for
you? The odds of two different keys hashing to the same md5 digest are
vanishingly small, so a unique index on the hash is close enough to a
uniqueness check on the key itself. Don't know offhand whether it will take
input that large, though.
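
Roughly, something along these lines might do it (an untested sketch -- the
table and column names are made up, it assumes contrib/pgcrypto is installed,
and since 7.2's functional indexes only take a function applied directly to a
column, it uses a small wrapper function):

    CREATE TABLE pgp_keys (
        email   varchar NOT NULL,
        pubkey  text NOT NULL
    );

    -- wrapper so the digest can be used in a functional index;
    -- digest() is from contrib/pgcrypto, encode(..., 'hex') gives a
    -- 32-character text hash
    CREATE FUNCTION pubkey_md5(text) RETURNS text AS '
        SELECT encode(digest($1, ''md5''), ''hex'')
    ' LANGUAGE 'sql' WITH (iscachable);

    -- unique index on the 32-char hash instead of the multi-KB key text
    CREATE UNIQUE INDEX pgp_keys_pubkey_md5_idx
        ON pgp_keys (pubkey_md5(pubkey));
    CREATE UNIQUE INDEX pgp_keys_email_idx
        ON pgp_keys (email);

Inserting a duplicate key should then fail with the usual duplicate-key error
from the unique index, just like an ordinary unique constraint, without ever
putting the full key text into a btree entry.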
Joel BURTON | joel@joelburton.com | joelburton.com | aim: wjoelburton
Knowledge Management & Technology Consultant
> -----Original Message-----
> From: pgsql-sql-owner@postgresql.org
> [mailto:pgsql-sql-owner@postgresql.org]On Behalf Of andre
> Sent: Monday, May 20, 2002 4:17 PM
> To: pgsql-sql@postgresql.org
> Cc: pgsql-novice@postgresql.org
> Subject: [SQL] Guarantee uniqueness of PGP key
>
>
> Hi,
>
> More generically, how do I best guarantee that any large varchar is unique
> before inserting?
>
> In my application, I need to ensure that each e-mail address and
> corresponding PGP public key are both unique.
>
> I originally implemented this by adding a uniqueness constraint to both
> columns. Easy.
>
> The problem is that some PGP keys are over 2713 bytes, and postgres (7.2)
> gives the error:
>
> ERROR: btree: index item size 4336 exceeds maximum 2713
>
> I thought perhaps I could use a hash or rtree index, but neither of those
> supports unique indexes.
>
> I saw in another posting that it may be possible to raise this
> limit to 8k or
> so, but I don't want to do this because 1) it requires a recompile of
> postgres (and rollout to dev, stage, prod machines), and 2) a key could
> theoretically be longer than 8k anyway.
>
> I can guarantee uniqueness by removing the index and doing a select before
> the insert. If there is a matching row, then I am trying to insert a
> duplicate, so bail.
>
> My concern is that doing this without an index will be
> monumentally slow for
> any large-ish number of keys.
>
> Any bright ideas? All suggestions appreciated.
>
> /andre
>
>