From: Stefan Froehlich
Subject: Re: large INSERT leads to "invalid memory alloc"
Date: 2012-11-25 19:39:45
Msg-id: 20121125193945.GA4894@89-149-202-102.internetserviceteam.com
In response to: Re: large INSERT leads to "invalid memory alloc" (Pavel Stehule <pavel.stehule@gmail.com>)
List: pgsql-general
On Sun, Nov 25, 2012 at 06:57:22PM +0100, Pavel Stehule wrote:
> > | INSERT INTO a (id, fkid, displayorder, name, description, internal, mimetype, mimedata, filesize) VALUES (73, 6, 5, E'Seefeld.rar', E'Seefeld', 0, E'application/octet-stream', decode('5261...0700', 'hex'), 311484587);

> Attention - BYTEA is not BLOB, and although the physical limit is 1 GB,
> the real limit is significantly lower - it depends on RAM. Seven years
> ago we found the practical limit to be about 20 MB.

Oops, that's not much. The docs mention a 4-byte length descriptor, so
I expected a size limit of 4 GB and felt quite safe with a maximum size
of 300 MB.
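
If my understanding of the failure is right (this is an assumption on my
part, not something from the docs), the value travels hex-encoded inside
the INSERT statement, so it doubles in size before decode() even
allocates the result - the statement text alone gets close to the 1 GB
allocation ceiling:

  -- back-of-the-envelope check; 311484587 is the filesize from my INSERT
  SELECT pg_size_pretty(311484587::bigint * 2);  -- ~594 MB of hex text alone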

> If you need more, use large objects (blobs) instead, or you can divide
> the value into multiple blocks:
> http://www.fuzzy.cz/en/articles/storing-files-in-a-postgresql-database/

Yes, storing large data objects in the file system is advisable for
several reasons - we had the same discussion with MySQL some time ago.
But the decision was made to keep the data in the database (and there is
only one object of this size anyway). Rewriting the framework is not an
option at the moment.
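
If splitting the value into blocks is what it takes to keep the data in
the database, I imagine it would look roughly like this (an untested
sketch on my side - the chunk table and all names are invented):

  -- hypothetical side table holding the payload in fixed-size pieces,
  -- each kept well below the practical bytea limit
  CREATE TABLE a_chunks (
      fkid       integer NOT NULL,  -- row in "a" the chunks belong to
      chunk_no   integer NOT NULL,  -- 0-based position of the chunk
      chunk_data bytea   NOT NULL,
      PRIMARY KEY (fkid, chunk_no)
  );

  -- each block is inserted on its own, e.g.:
  INSERT INTO a_chunks (fkid, chunk_no, chunk_data)
  VALUES (73, 0, decode('5261...', 'hex'));

  -- reassembly on read (string_agg over bytea needs 9.0 or later):
  SELECT string_agg(chunk_data, ''::bytea ORDER BY chunk_no)
  FROM a_chunks
  WHERE fkid = 73;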

If I fail to migrate this into PostgreSQL, we'd rather cancel the
transition.

Stefan

