Re: inserting huge file into bytea cause out of memory - Mailing list pgsql-general

From Chris Travers
Subject Re: inserting huge file into bytea cause out of memory
Date
Msg-id CAKt_Zfs5kGJ85KMb-iAYwsv2z-OZN-WYikCn_AefUYpU5Rmb-Q@mail.gmail.com
In response to Re: inserting huge file into bytea cause out of memory  (liuyuanyuan <liuyuanyuan@highgo.com.cn>)
List pgsql-general


On Wed, Aug 7, 2013 at 6:41 PM, liuyuanyuan <liuyuanyuan@highgo.com.cn> wrote:

      Thanks for your last reply!
      I've tested Large Objects (the oid type), and they seem better with respect to the out-of-memory problem.
      But for the out-of-memory problem with bytea, is there really no way to
solve it? Why is there no way to solve it? Is this a problem with JDBC, or with the type itself?

I think the big difficulty efficiency-wise is that everything is exchanged in a textual representation.  This means you likely have at least two copies of the data in memory on the client and on the server, and possibly more depending on the client framework, and the textual representation is around twice as large as the binary one.  Add to this the fact that the whole value must be handled at once, and you have difficulties which are inherent to the implementation.  In general, I do not recommend bytea for large amounts of binary data for that reason.  If your files are big, use large objects (lobs).
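The roughly 2x overhead of the textual form can be sketched quickly (this is an illustration, not code from the thread): PostgreSQL's default "hex" output format for bytea renders each binary byte as two hex characters, plus a leading "\x" marker, before any client- or server-side copies are counted.

```python
import os

# Simulate PostgreSQL's hex output format for bytea: each binary byte
# becomes two hex characters, prefixed with a literal "\x" marker.
def bytea_hex_text(data: bytes) -> str:
    return "\\x" + data.hex()

binary = os.urandom(1_000_000)      # 1 MB of binary data
text = bytea_hex_text(binary)

print(len(binary))                  # 1,000,000 bytes in binary form
print(len(text))                    # 2,000,002 characters as text (~2x)
```

So even before JDBC or the server buffers its own copies, a 500 MB file already needs about 1 GB just for its text representation, which is why streaming via the large object interface scales better.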

Yours,
Liu Yuanyuan



--
Best Wishes,
Chris Travers

Efficito:  Hosted Accounting and ERP.  Robust and Flexible.  No vendor lock-in.
