On Mon, Jan 23, 2012 at 5:49 PM, Merlin Moncure <mmoncure@gmail.com> wrote:
> I'm not sure that you're getting anything with that user facing
> complexity. The only realistic case I can see for explicit control of
> wire formats chosen is to defend your application from format changes
> in the server when upgrading the server and/or libpq. This isn't a
> "let's get better compression problem", this is "I upgraded my
> database and my application broke" problem.
>
> Fixing this problem in non documentation fashion is going to require a
> full protocol change, period.
Our current protocol allocates a 2-byte integer for the purposes of
specifying the type of each parameter, and another 2-byte integer for
the purpose of specifying the result type... but only one bit is
really needed at present: text or binary. If we revise the protocol
version at some point, we might want to use some of that bit space to
allow some more fine-grained negotiation of the protocol version. So,
for example, we might define the top 5 bits as reserved (always pass
zero), the next bit as a text/binary flag, and the remaining 10 bits
as a 10-bit "format version number". When a change like this comes
along, we can bump the highest binary format version recognized by the
server, and clients who request the new version can get it.

Alternatively, we might conclude that a 2-byte integer for each
parameter is overkill and try to cut back... but the point is there's
a bunch of unused bitspace there now. In theory we could even do
something like this without bumping the protocol version, since the
documentation seems clear that any value other than 0 and 1 yields
undefined behavior, but in practice that seems like it might be a bit
too edgy.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company