General Bug Report: backend terminated while attempting to use large objects - Mailing list pgsql-bugs

From Unprivileged user
Subject General Bug Report: backend terminated while attempting to use large objects
Msg-id 103732d05efc7ff87251bd8b386384cb
List pgsql-bugs
============================================================================
                        POSTGRESQL BUG REPORT TEMPLATE
============================================================================


Your name        : Mark Wilson
Your email address    : m.r.wilson@larc.nasa.gov

Category        : runtime: back-end
Severity        : serious

Summary: backend terminated while attempting to use large objects

System Configuration
--------------------
  Operating System   : Solaris 2.6

  PostgreSQL version : 6.4

  Compiler used      : gcc 2.7.2.3.f.2

Hardware:
---------
Sun Ultra 30 - 496M RAM

uname -a output:
SunOS sundog 5.6 Generic_105181-03 sun4u sparc SUNW,Ultra-30


Versions of other tools:
------------------------
gmake 3.76.1
flex 2.5.4a
perl 5.004_04
DBI 1.02
DBD-Pg 0.89

--------------------------------------------------------------------------

Problem Description:
--------------------
After upgrading to version 6.4 I have attempted to bring
DBI and DBD-Pg up to the latest versions as well.  During
the 'make test' portion of the DBD-Pg build an error was
encountered on both the Sun system mentioned here and a
co-worker's SGI system.  Email correspondence with Edmund
Mergl has led him to believe there is still a large object
problem with PostgreSQL version 6.4 - have there been any
other reports to this effect?  I have applied a small patch
provided by Edmund, which looks like this:

*** src/backend/storage/large_object/inv_api.c.orig     Sat Nov 14 12:45:48 1998
--- src/backend/storage/large_object/inv_api.c  Sat Nov 14 12:46:16 1998
***************
*** 549,556 ****
                                tuplen = inv_wrnew(obj_desc, buf, nbytes - nwritten);
                        else
                                tuplen = inv_wrold(obj_desc, buf, nbytes - nwritten, tuple, buffer);
                }
-               ReleaseBuffer(buffer);

                /* move pointers past the amount we just wrote */
                buf += tuplen;
--- 549,556 ----
                                tuplen = inv_wrnew(obj_desc, buf, nbytes - nwritten);
                        else
                                tuplen = inv_wrold(obj_desc, buf, nbytes - nwritten, tuple, buffer);
+                       ReleaseBuffer(buffer);
                }

                /* move pointers past the amount we just wrote */
                buf += tuplen;


--------------------------------------------------------------------------

Test Case:
----------
I have been able to repeatedly recreate the problem on my
system using the following commands:

echo -n "testing large objects using blob_read" >/tmp/gaga
createdb pgtest
psql pgtest
pgtest=> CREATE TABLE lobject (id int4, loid oid);
pgtest=> INSERT INTO lobject (id, loid) VALUES (1, lo_import('/tmp/gaga'));

Resulting in the following error message:

pqReadData() -- backend closed the channel unexpectedly.
        This probably means the backend terminated abnormally before or while processing the request.
We have lost the connection to the backend, so further processing is impossible.
  Terminating.


--------------------------------------------------------------------------

Solution:
---------


--------------------------------------------------------------------------
