> Given that what uuid_to_base32hex() actually does is encoding the
> input UUID, I find that it could be confusing if we have a similar
> function other than encode() function. Also, we could end up
> introducing as many encoding and decoding functions dedicated for
> UUID as we want to support encoding methods, bloating the functions.
>
> So as the first step, +1 for supporting base32hex for encode() and
> decode() functions and supporting the UUID <-> bytea conversion. I
> believe it would cover most use cases and the cost of UUID <-> bytea
> conversion is negligible.
I see you're in favor of base32hex encoding. That's great!
Your arguments make sense, and I generally support enhancing the standard encode() and decode() functions to handle base32hex. It seems like the right approach from a developer experience standpoint.
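For a sense of the ergonomics, here is a round trip as it might look if encode() and decode() learn a 'base32hex' format. This is only a sketch: the format name is what the proposal assumes, while uuid_send() and the undashed-hex path into uuid already work today.

    -- uuid -> bytea -> base32hex text, then back again; bytea -> uuid
    -- goes through the hex text form, which uuid input already accepts
    SELECT encode(
             decode(encode(uuid_send(u.v), 'base32hex'), 'base32hex'),
             'hex')::uuid = u.v AS round_trips
      FROM (VALUES (gen_random_uuid())) AS u(v);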
However, I'm unclear about some implementation aspects. Why add conversions between UUID and bytea data types? Wouldn't that require creating dedicated UUID <-> bytea conversion functions? Instead, could we implement encode() as polymorphic so it handles UUID inputs directly? decode() is trickier, because a SQL function's result type can't depend on an argument's value: we'd need some way (a parameter plus polymorphic tricks?) to make it return uuid instead of bytea. Another option would be automatic type casting when inserting bytea data into UUID columns. Neither an extra parameter nor additional type casting seems ideal to me, though I don't have better alternatives.
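To make the cast-based option concrete, here is a minimal user-level sketch built only on functions that exist today. The names uuid_to_bytea and bytea_to_uuid are hypothetical, and the final query again assumes the proposed 'base32hex' format.

    -- hypothetical sketch of what a built-in uuid <-> bytea cast pair
    -- could provide; uuid_send() and encode() already exist
    CREATE FUNCTION uuid_to_bytea(uuid) RETURNS bytea
        AS $$ SELECT uuid_send($1) $$
        LANGUAGE sql IMMUTABLE STRICT;

    -- errors unless the input is exactly 16 bytes, since the uuid
    -- input function requires 32 hex digits
    CREATE FUNCTION bytea_to_uuid(bytea) RETURNS uuid
        AS $$ SELECT encode($1, 'hex')::uuid $$
        LANGUAGE sql IMMUTABLE STRICT;

    CREATE CAST (uuid AS bytea) WITH FUNCTION uuid_to_bytea(uuid);
    CREATE CAST (bytea AS uuid) WITH FUNCTION bytea_to_uuid(bytea);

    -- then encode()/decode() need no UUID-specific variants:
    SELECT encode(gen_random_uuid()::bytea, 'base32hex');

With casts like these, decode()'s fixed bytea return type stops being a problem: callers simply cast the result, and encode() stays non-polymorphic.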
But actually, for a short UUID text encoding to succeed, it's more important that it becomes the single, de facto standard. We should avoid supporting multiple encodings, just as the authors and contributors of RFC 9562 did:

https://github.com/uuid6/new-uuid-encoding-techniques-ietf-draft/discussions/17#discussioncomment-10614817

Therefore, whenever possible, encode() and decode() should support just one UUID text encoding, namely base32hex.