Thread: very large record sizes and resource usage

very large record sizes and resource usage

From: jtkells@verizon.net
Date: Thu, Jul 7, 2011
Are there any guidelines for sizing work_mem, shared_buffers, and
other configuration parameters with regard to very large records?  I
have a table with a bytea column, and I am told that some of the
values in that column are over 400MB.  I am having a problem on
several servers reading, and more specifically dumping, these records
(table) using pg_dump.
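
For illustration, a query along the following lines shows how large
the stored values really are ("bigtable" and "blob" here are just
stand-ins for the real table and column names):

    -- compare on-disk size with full uncompressed size of each value
    SELECT ctid,
           pg_size_pretty(pg_column_size(blob)::bigint) AS stored_size,
           pg_size_pretty(octet_length(blob)::bigint)   AS raw_size
    FROM bigtable
    ORDER BY pg_column_size(blob) DESC
    LIMIT 10;

pg_column_size() reports the on-disk (possibly compressed) size,
while octet_length() has to decompress each value, so the second
column can be slow to compute on a large table.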

Thanks

Re: very large record sizes and resource usage

From: Robert Haas
Date:
On Thu, Jul 7, 2011 at 10:33 AM,  <jtkells@verizon.net> wrote:
> Are there any guidelines for sizing work_mem, shared_buffers, and
> other configuration parameters with regard to very large records?  I
> have a table with a bytea column, and I am told that some of the
> values in that column are over 400MB.  I am having a problem on
> several servers reading, and more specifically dumping, these records
> (table) using pg_dump.

work_mem shouldn't make any difference to how well that performs;
shared_buffers might, but there's no special advice for tuning it for
large records vs. anything else.  Large records just get broken up
into small records, under the hood.  At any rate, your email is a
little vague about exactly what the problem is.  If you provide some
more detail you might get more help.
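
The under-the-hood mechanism here is TOAST: a value that would push a
row past roughly 2kB is compressed and/or moved out of line, sliced
into chunks of about 2kB each in a companion pg_toast table.  You can
see this for yourself with something like the following ("bigtable" is
a placeholder, and the pg_toast name will differ on your system):

    -- find the TOAST table that backs the main table
    SELECT reltoastrelid::regclass AS toast_table
    FROM pg_class
    WHERE relname = 'bigtable';

    -- chunk count and stored (possibly compressed) size per value;
    -- substitute the toast table name returned above
    SELECT chunk_id,
           count(*) AS chunks,
           pg_size_pretty(sum(octet_length(chunk_data))) AS stored_size
    FROM pg_toast.pg_toast_12345
    GROUP BY chunk_id;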

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company