Thread: PQgetlength vs. octet_length()

PQgetlength vs. octet_length()

From
Michael Clark
Date:
This thread was originally posted (incorrectly by me) to the hackers mailing list.  Moving the discussion to the general list.


Hi Greg,

That is what Pierre pointed out, and you are both right.  I am using the text mode.

But it seems pretty crazy that a 140 MB piece of data grows to 1.3 GB.  Does that seem a bit excessive?

I avoided the binary mode because that seemed to be rather confusing when having to deal with non-bytea data types.  The docs make it sound like binary mode should be avoided because what you get back for a datetime varies per platform.

Thanks,
Michael.

On Tue, Aug 18, 2009 at 12:15 PM, Greg Stark <gsstark@mit.edu> wrote:
On Tue, Aug 18, 2009 at 4:04 PM, Michael Clark<codingninja@gmail.com> wrote:
> Hello - am I in the wrong mailing list for this sort of problem? :-)

Probably, but it's also a pretty technical point and you're
programming in C, so it's kind of borderline.

If you're using text mode then the value you get back from libpq is
a text representation of the datum. For bytea in released versions
that means anything which isn't a printable ASCII character will be
octal-encoded, like \123. You can use PQunescapeBytea to
unescape it.
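
(For reference, the decoding step looks roughly like the sketch
below; fetch_bytea is a made-up helper name, but PQunescapeBytea and
PQfreemem are the real libpq calls.)

    #include <stddef.h>
    #include <libpq-fe.h>

    /* Hypothetical helper: decode a text-mode bytea value.  PQgetvalue
     * returns the octal-escaped text form; PQunescapeBytea allocates a
     * new buffer with the raw bytes and sets *out_len to its length. */
    unsigned char *fetch_bytea(PGresult *result, int row, int col,
                               size_t *out_len)
    {
        const char *escaped = PQgetvalue(result, row, col);
        unsigned char *raw =
            PQunescapeBytea((const unsigned char *) escaped, out_len);
        /* raw is NULL on out-of-memory; free it with PQfreemem(). */
        return raw;
    }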

If you use binary encoding then you don't have to deal with that.
Though I seem to recall there is still a gotcha you have to worry
about if there are nul bytes in your datum. I don't recall exactly
what that meant you had to do though.
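
(The gotcha is most likely that with binary parameters you have to
pass an explicit length, since strlen() would stop at the first nul
byte. A rough sketch, assuming a hypothetical table
blobs(payload bytea) and omitting error checking:)

    #include <libpq-fe.h>

    /* Sketch: insert binary data that may contain nul bytes.  The
     * explicit entry in paramLengths is the important part; strlen()
     * would truncate the value at the first nul byte. */
    void insert_blob(PGconn *conn, const char *data, int data_len)
    {
        const char *paramValues[1] = { data };
        int paramLengths[1] = { data_len };  /* explicit length */
        int paramFormats[1] = { 1 };         /* 1 = binary parameter */

        PGresult *res = PQexecParams(conn,
            "INSERT INTO blobs (payload) VALUES ($1)",
            1, NULL, paramValues, paramLengths, paramFormats,
            0);                              /* text-format results */
        PQclear(res);
    }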

--
greg
http://mit.edu/~gsstark/resume.pdf


Re: PQgetlength vs. octet_length()

From
Greg Stark
Date:
On Tue, Aug 18, 2009 at 6:39 PM, Michael Clark<codingninja@gmail.com> wrote:
> But it seems pretty crazy that a 140 MB piece of data grows to 1.3 GB.  Does
> that seem a bit excessive?

From what you posted earlier it looked like it was turning into about
500 MB, which sounds about right. Presumably either libpq or your code is
holding two copies of it in RAM at some point in the process.

8.5 will have an option to use a denser hex encoding, but it will
still be 2x as large as the raw data.

> I avoided the binary mode because that seemed to be rather confusing when
> having to deal with non-bytea data types.  The docs make it sound like
> binary mode should be avoided because what you get back for a datetime
> varies per platform.

There are definitely disadvantages. Generally it requires you to know
what the binary representation of your data types is, and they're not
all well documented or guaranteed not to change in the future. I
wouldn't recommend someone try to decode a numeric or a Postgres
array, for example, and floating-point numbers are platform-dependent.
But bytea is a case where it seems more natural to use the binary
representation than the text one.

--
greg
http://mit.edu/~gsstark/resume.pdf

Fwd: PQgetlength vs. octet_length()

From
Michael Clark
Date:
On Tue, Aug 18, 2009 at 1:48 PM, Greg Stark <gsstark@mit.edu> wrote:
On Tue, Aug 18, 2009 at 6:39 PM, Michael Clark<codingninja@gmail.com> wrote:
> But it seems pretty crazy that a 140 MB piece of data grows to 1.3 GB.  Does
> that seem a bit excessive?

From what you posted earlier it looked like it was turning into about
500 MB, which sounds about right. Presumably either libpq or your code is
holding two copies of it in RAM at some point in the process.

From what I saw while running through gdb, stopped at this line in my code:
 const char *valC = PQgetvalue(result, rowIndex, i);
my memory usage was 300 MB.  Stepping over this line it jumped to 1.3 GB.
Unless there is some way to misconfigure something, I can't think how my code could do that.
I will profile it and see if I can tell who is holding on to that memory.


8.5 will have an option to use a denser hex encoding, but it will
still be 2x as large as the raw data.

Sweet!
 

> I avoided the binary mode because that seemed to be rather confusing when
> having to deal with non-bytea data types.  The docs make it sound like
> binary mode should be avoided because what you get back for a datetime
> varies per platform.

There are definitely disadvantages. Generally it requires you to know
what the binary representation of your data types is, and they're not
all well documented or guaranteed not to change in the future. I
wouldn't recommend someone try to decode a numeric or a Postgres
array, for example, and floating-point numbers are platform-dependent.
But bytea is a case where it seems more natural to use the binary
representation than the text one.

To do something like this, I guess it would be best for my wrapper to detect when I have a bytea column in a table and do two fetches: one in text for all the other columns, and one in binary for the bytea column.  Do you think that is the best way to handle it?
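
(As a sketch, the wrapper could spot bytea columns by their type OID
after the text-mode fetch. first_bytea_column is a made-up helper;
BYTEAOID is the built-in type OID for bytea:)

    #include <libpq-fe.h>

    #define BYTEAOID 17   /* built-in type OID for bytea */

    /* Hypothetical helper: find the first bytea column in a result so
     * the wrapper knows to re-fetch that column with a binary query. */
    int first_bytea_column(PGresult *res)
    {
        int ncols = PQnfields(res);
        for (int col = 0; col < ncols; col++)
            if (PQftype(res, col) == BYTEAOID)
                return col;
        return -1;   /* no bytea column in this result */
    }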

Thanks,
Michael.
 

Re: PQgetlength vs. octet_length()

From
"Albe Laurenz"
Date:
Michael Clark wrote:
> That is what Pierre pointed out, and you are both right.  I
> am using the text mode.
>
> But it seems pretty crazy that a 140 MB piece of data grows to
> 1.3 GB.  Does that seem a bit excessive?
>
> I avoided the binary mode because that seemed to be rather
> confusing when having to deal with non-bytea data types.  The
> docs make it sound like binary mode should be avoided because
> what you get back for a datetime varies per platform.

That is true.

The best thing would be to retrieve only the bytea columns in
binary format and the rest as text.

The Bind message in the frontend/backend protocol lets you
specify for each individual result column whether it should
be text or binary
( http://www.postgresql.org/docs/current/static/protocol-message-formats.html )
but the C API only lets you request *all* result columns in either
binary or text.

You could resort to speaking the wire protocol with the backend
yourself (which is probably more than you are ready to do), or you
could issue a separate query just for the bytea value.
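
A rough sketch of the separate-query approach, with made-up table
and column names:

    #include <libpq-fe.h>

    /* Fetch the ordinary columns as text, then the bytea column alone
     * in binary.  The last argument to PQexecParams is the result
     * format: 0 = text, 1 = binary.  Error checking omitted. */
    void fetch_doc(PGconn *conn, const char *id)
    {
        const char *params[1] = { id };

        PGresult *meta = PQexecParams(conn,
            "SELECT name, created_at FROM docs WHERE id = $1",
            1, NULL, params, NULL, NULL, 0);   /* text results */

        PGresult *blob = PQexecParams(conn,
            "SELECT payload FROM docs WHERE id = $1",
            1, NULL, params, NULL, NULL, 1);   /* binary results */

        /* In binary mode PQgetvalue(blob, 0, 0) points at the raw
         * bytes and PQgetlength(blob, 0, 0) gives their length. */

        PQclear(meta);
        PQclear(blob);
    }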

Yours,
Laurenz Albe