Re: fast read of binary data - Mailing list pgsql-performance

From Arjen van der Meijden
Subject Re: fast read of binary data
Date
Msg-id 50A0DF9C.80108@tweakers.net
Whole thread Raw
In response to fast read of binary data  (Eildert Groeneveld <eildert.groeneveld@fli.bund.de>)
List pgsql-performance
On 12-11-2012 11:45, Eildert Groeneveld wrote:
> Dear All
>
> I am currently implementing a compressed binary storage scheme for
> genotyping data. These are basically vectors of binary data which may be
> megabytes in size.
>
> Our current implementation uses the data type bit varying.

Wouldn't 'bytea' be a more logical choice for binary data?
http://www.postgresql.org/docs/9.2/interactive/datatype-binary.html
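To make the "converting going on" concrete: in the default text result mode, a bytea value arrives at the client as a hex string such as `\xdeadbeef` (the default `bytea_output = 'hex'` since PostgreSQL 9.0), which the client then has to decode back into raw bytes. A minimal sketch of that round-trip, which is exactly the overhead binary result mode avoids:

```python
# Text-mode bytea arrives as a hex string ('\x' prefix plus hex digits,
# the default bytea_output='hex' format since PostgreSQL 9.0).
# The client must decode it back to raw bytes before use.
text_value = r"\xdeadbeef"           # what a text-mode result contains
raw = bytes.fromhex(text_value[2:])  # strip the leading \x, then decode
```

With libpq's binary result format (PQexecParams with resultFormat = 1), the bytea payload is delivered as the raw bytes themselves and this decode step disappears.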

> What we want to do is very simple: we want to retrieve such records from
> the database and transfer them unaltered to the client, which will do
> something (uncompressing) with them. As massive amounts of data are to be
> moved, speed is of great importance, precluding any to-and-fro
> conversions.
>
> Our current implementation uses Perl DBI; we can retrieve the data ok,
> but apparently there is some converting going on.
>
> Further, we would like to use ODBC from Fortran90 (wrapping the
> C library) for such transfers. However, all sorts of funny things happen
> here which look like conversion issues.
>
> In old-fashioned network databases some decades ago (in pre-SQL times)
> this was no problem. Maybe there is someone here who knows the PG
> internals sufficiently well to give advice on how big blocks of memory
> (i.e. bit varying records) can be transferred UNALTERED between
> backend and clients.

That said, I have no idea whether bytea is treated differently in this
context. Bit varying should be about as simple as possible (given that
it only holds 0s and 1s).
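For what it's worth, the binary wire representation of bit varying is itself simple: as I understand PostgreSQL's varbit send function, it is a 4-byte big-endian bit count followed by the packed bit bytes (which a client can request via libpq's binary result format). A hedged sketch of parsing that layout, with an illustrative hand-built buffer rather than real server output:

```python
import struct

def parse_varbit_wire(buf: bytes):
    """Parse what is assumed to be PostgreSQL's binary wire format for
    bit varying: a 4-byte big-endian (network order) bit count, then
    the bits packed most-significant-bit first into whole bytes."""
    (nbits,) = struct.unpack_from("!i", buf, 0)
    nbytes = (nbits + 7) // 8          # bits rounded up to whole bytes
    data = buf[4:4 + nbytes]
    return nbits, data

# Illustrative example: 12 bits packed as 0xAB 0xC0
# (the trailing 4 bits of the last byte are padding).
wire = struct.pack("!i", 12) + b"\xab\xc0"
nbits, data = parse_varbit_wire(wire)
```

The point being that, in binary mode, the client gets the packed bit bytes essentially untouched; only the 4-byte length header needs interpreting.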

Best regards,

Arjen

