On Wednesday 30 April 2008 12:50, tv@fuzzy.cz wrote:
> Hi all,
>
> we have a quite nice web application written in PHP over PostgreSQL, with
> all the data stored in a PostgreSQL database, including files stored in
> BYTEA columns. It worked quite nicely for a long time, but the users began
> to insert large files (say 1 - 10 megabytes in size) and we began to
> experience 'allowed memory exhausted' PHP fatal errors.
>
> I believe the problem is caused by escaping the bytea data, which
> multiplies the amount of required memory, so a 4MB file easily exhausts
> the 20MB limit we've set when calling pg_fetch_array(). This is probably
> made worse by the use of UTF-8, but I'm not sure about it.
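
The blow-up from escaping is easy to measure on the client side. A quick check
like the one below (plain PHP, nothing assumed beyond the pgsql extension)
shows how much a binary blob grows once it is escaped; the text form that
pg_fetch_array() hands back from the server is inflated in a similar way.

<?php
// Compare the raw size of a binary blob with its escaped size.  Bytes that
// need escaping are expanded to multi-character sequences, which is what
// pushes the process past memory_limit long before the file itself would.
$raw = str_repeat(chr(0), 1024 * 1024);   // 1MB of binary data
printf("raw: %d bytes, escaped: %d bytes\n",
       strlen($raw), strlen(pg_escape_bytea($raw)));
?>
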
>
> I guess I'm not the only one here who was hit by this problem, so I'd like
> to hear your recommendations on how to solve it. I've already thought about
> these solutions:
>
> 1) Increasing the PHP memory limit
>
> I'm not a big fan of this, as we would hit the new limit sooner or later,
> and we're not the only users of the server.
>
> 2) Using large objects and pg_lo_* functions instead of BYTEA columns.
>
> I don't like this, as it would require a serious redesign of the database
> schema and the PHP application itself. Moreover, according to the user
> comments in http://cz.php.net/manual/en/function.pg-lo-open.php it would
> require superuser access to the database, which is not possible in our
> case.
>
> 3) Fetching the BYTEA piece by piece using substring()
>
> This is the only reliable and 'elegant enough' solution I've found so far.
> The idea is to fetch the BYTEA column in 'small chunks' (say 10 - 100k in
> size), unescaping each of them and either storing it or writing it to the
> output before fetching the next one. I still have to think about
> upload / insert, though ...
>
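FWIW, the read side of (3) can stay fairly compact. Something along these
lines, perhaps (just a sketch; the files(id, data bytea) table and all names
are made up):

<?php
// Fetch the bytea in 64k slices via substring(), unescape each slice and
// stream it out, so only one small chunk is ever held in memory.
$db    = pg_connect('dbname=app user=app');
$id    = 42;
$chunk = 65536;                               // bytes per round trip
$pos   = 1;                                   // substring() is 1-based

do {
    $res  = pg_query_params($db,
        'SELECT substring(data FROM $1::int FOR $2::int) AS part
           FROM files WHERE id = $3',
        array($pos, $chunk, $id));
    $part = pg_unescape_bytea(pg_fetch_result($res, 0, 'part'));
    echo $part;                               // or fwrite() to a temp file
    $pos += $chunk;
} while (strlen($part) === $chunk);           // the last slice comes back short
?>

With 64k slices the escaped form of each slice stays well under a megabyte, so
the per-request memory use stays flat no matter how big the stored file is.
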
Just wondering if using cursors might allow you to get around it?
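Meaning something like the sketch below: declare the chunked SELECT as a
cursor and FETCH the pieces one at a time, so the whole file streams inside a
single transaction instead of one query per chunk (again just a sketch, same
made-up files(id, data bytea) table as above):

<?php
// Cursor-based variant: one declared query, fetched 64k at a time.
$db = pg_connect('dbname=app user=app');
$id = 42;

pg_query($db, 'BEGIN');
pg_query($db, sprintf(
    "DECLARE file_chunks NO SCROLL CURSOR FOR
         SELECT substring(data FROM n FOR 65536) AS part
           FROM files,
                generate_series(1, (SELECT length(data) FROM files WHERE id = %d),
                                65536) AS n
          WHERE id = %d", $id, $id));

while (true) {
    $res = pg_query($db, 'FETCH 1 FROM file_chunks');
    if (pg_num_rows($res) === 0) {
        break;                                // cursor exhausted
    }
    echo pg_unescape_bytea(pg_fetch_result($res, 0, 'part'));
}

pg_query($db, 'CLOSE file_chunks');
pg_query($db, 'COMMIT');
?>
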
--
Robert Treat
Build A Brighter LAMP :: Linux Apache {middleware} PostgreSQL