For sure, I’m thinking of it that way. Thanks for confirming.
What I don’t understand is why, if I respond to psql with a RowDescription indicating the format code is 1 (binary) and encode the value that way (as 4 bytes in the DataRow), psql doesn’t render the number in the results.
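For reference, here is a minimal Python sketch of what a binary-format int4 field should look like inside a DataRow: the Int32 field length (4) followed by the value as a signed 32-bit integer in network byte order. The helper name is hypothetical, not from any library.

import struct

def int4_field_binary(value: int) -> bytes:
    """One DataRow field in binary format for an int4 column:
    Int32 field length (always 4 here), then the value as a
    signed 32-bit integer in network byte order (big-endian)."""
    payload = struct.pack("!i", value)                 # 4 bytes, big-endian
    return struct.pack("!i", len(payload)) + payload

print(int4_field_binary(7).hex(" "))   # 00 00 00 04 00 00 00 07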
I think this makes sense but I wanted to get confirmation:
I created a table with a column of type int4 (integer). When I insert a row with a number into that column and select it back out, I observe a discrepancy:
The DataRow message has the field encoded as an ASCII ‘7’ with a field length of 1, despite the RowDescription reporting a data type size of 4. I assume this is because it’s a simple query (Q) and therefore the format code for all columns is 0 (text format).
It makes sense that, at the time the RowDescription is written out, the server can’t possibly know how many bytes the textual representation of each int will take, so it just reports the size of the underlying type.
Is this accurate?
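To make the text-format case concrete, here is a rough Python sketch (hypothetical helper name) of how the value 7 is laid out as a text-format DataRow field, versus what RowDescription reports:

import struct

def text_field(value: int) -> bytes:
    """One DataRow field in text format (format code 0):
    Int32 length of the textual bytes, then the ASCII digits."""
    text = str(value).encode("ascii")                  # b'7' -> length 1
    return struct.pack("!i", len(text)) + text

print(text_field(7).hex(" "))   # 00 00 00 01 37  (length 1, ASCII '7')
# RowDescription still reports 4 in its "data type size" field for int4,
# regardless of how long the textual representation turns out to be.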
You probably shouldn't think of DataRow as giving you a "column length" - it is simply telling you how many bytes you need to read to retrieve all of the bytes for the column, which positions your read pointer at the Int32 data length of the subsequent column (you repeat this iteratively, Int16 column-count times).
You now have the bytes for column N - which you need to interpret via RowDescription to transform the raw protocol bytes into a meaningful datum.
You don't care whether the source API was simple or not - RowDescription will tell you what you need to know to interpret the value; it is all self-contained. But yes, because it is a simple query, the RowDescription metadata will inform you that all of the bytes represent (in aggregate?) the textual representation of the data.
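A minimal Python sketch of that iteration, assuming the 'D' message-type byte and the Int32 message length have already been consumed and that the per-column format codes came from RowDescription (function and parameter names are mine):

import struct

def parse_datarow_body(body: bytes, format_codes: list[int]) -> list:
    """Walk a DataRow body: Int16 column count, then for each column an
    Int32 length followed by that many bytes. How to interpret those
    bytes comes from RowDescription (format code 0 = text, 1 = binary)."""
    (ncols,) = struct.unpack_from("!h", body, 0)
    pos, values = 2, []
    for col in range(ncols):
        (length,) = struct.unpack_from("!i", body, pos)
        pos += 4
        if length == -1:                       # -1 means the column is NULL
            values.append(None)
            continue
        raw = body[pos:pos + length]
        pos += length
        if format_codes[col] == 0:             # text format: decode the digits
            values.append(raw.decode("utf-8"))
        else:                                  # binary format: this sketch
            values.append(struct.unpack("!i", raw)[0])  # assumes an int4 column
    return values

# Simple-query result for the value 7: one column, text format.
print(parse_datarow_body(b"\x00\x01\x00\x00\x00\x01\x37", [0]))  # ['7']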
David J.