Thread: Problems with postgres V6.5.3 large objects

Problems with postgres V6.5.3 large objects

From: Charles Randle

Hi all,

I'm relatively new to postgres and I'm attempting to use large objects to
store and play back messages directly from the database. I have a number
of problems/questions/issues:

1) Whenever I try to open more than one large object within my class (I'm
   using C++), the following message appears in the postgres backend log
   file and the application terminates:

       query: SELECT message_data FROM receptormessages WHERE message_name = 'wait_message_one'
       ProcessQuery
       CommitTransactionCommand
       StartTransactionCommand
       ERROR:  update_fp_info: cache lookup for function 1 failed
       AbortCurrentTransaction
       FATAL 1:  Socket command type
       proc_exit(0) [#0]

   Notes: 1) The Oids for the large objects are stored in a table called
             'receptormessages'.
          2) The transaction necessary for access to the large objects
             begins in the constructor of the class and terminates if the
             object is ever destroyed (in the destructor).

2) I am thinking of using the libpq++ interface to access the large
   objects. However, from what I've seen, each instance of the class would
   create a new connection to the backend, from the same process!
       a) How expensive is this multiple-connection option, given that
          many large objects may be in use at any one time?
       b) If this option is expensive, is there any way to access many
          large objects and have only one connection to the database with
          the C++ interface to these objects?

Regards,
Charles Randle.

Re: [INTERFACES] Problems with postgres V6.5.3 large objects

From: Tom Lane

Charles Randle <caspanis@cybervale.com> writes:
> 1) Whenever I try to open more than one large object within my class
> (I'm using C++), the following message appears in the postgres backend
> log file and the application terminates:

> ERROR:  update_fp_info: cache lookup for function 1 failed

This indicates that PQfn is being called with bad parameters (function
ID 1; there's no such function).  How are you accessing the large
objects, exactly?
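
For reference, the bare libpq call sequence for reading one large object
looks roughly like the sketch below; the conninfo string, the Oid value,
and the buffer size are only placeholders. Every lo_* call has to go
through the same connection, and the descriptor returned by lo_open is
only valid inside the surrounding transaction.

    /* Minimal sketch (C++, plain libpq) of reading one large object.
       The conninfo string, Oid value, and buffer size are placeholders. */
    #include <cstdio>
    #include <libpq-fe.h>
    #include <libpq/libpq-fs.h>        /* INV_READ / INV_WRITE */

    int main()
    {
        PGconn *conn = PQconnectdb("dbname=test");   /* placeholder conninfo */
        if (PQstatus(conn) != CONNECTION_OK)
            return 1;

        PQclear(PQexec(conn, "BEGIN"));              /* LO access needs a transaction */

        Oid lobj = 17001;                            /* placeholder: normally read from a table */
        int fd = lo_open(conn, lobj, INV_READ);
        if (fd >= 0) {
            char buf[8192];
            int n = lo_read(conn, fd, buf, sizeof(buf));
            std::printf("read %d bytes\n", n);
            lo_close(conn, fd);
        }

        PQclear(PQexec(conn, "END"));
        PQfinish(conn);
        return 0;
    }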

> 2) I am thinking of using the libpq++ interface to access the large
> objects. However, from what I've seen, each instance of the class would
> create a new connection to the backend, from the same process!

Yes.  This is a horribly brain-dead design; PgLargeObject needs to be
rethought and rewritten.  Probably PgLargeObject's constructor ought to
take a reference to an existing connection object that it uses, rather
than containing a connection object of its own.  (For robustness, that
would probably mean adding some kind of reference count to the
connection objects...)
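
One possible shape for such a rewrite is sketched below; this is not the
existing libpq++ code, and the class name and ownership rules are
invented purely for illustration.

    /* Sketch only: a large-object wrapper that borrows an existing
       connection instead of opening its own.  A production version would
       want reference counting so the connection cannot go away while
       objects still use it. */
    #include <cstddef>
    #include <libpq-fe.h>
    #include <libpq/libpq-fs.h>

    class SharedLargeObject {
    public:
        SharedLargeObject(PGconn *conn, Oid oid)
            : conn_(conn), fd_(lo_open(conn, oid, INV_READ | INV_WRITE)) {}

        ~SharedLargeObject() { if (fd_ >= 0) lo_close(conn_, fd_); }

        int read(char *buf, std::size_t len)  { return lo_read(conn_, fd_, buf, len); }
        int write(char *buf, std::size_t len) { return lo_write(conn_, fd_, buf, len); }
        int seek(int offset, int whence)      { return lo_lseek(conn_, fd_, offset, whence); }

    private:
        PGconn *conn_;   /* shared with the caller, not owned or closed here */
        int     fd_;     /* large-object descriptor from lo_open */
    };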

>         a) How expensive is this multiple-connection option, given that
> many large objects may be in use at any one time?

Very.  It's totally impractical if you are going to make heavy use of
large objects.

Most of the developers have not wanted to put much effort into large
objects, since where we really want to go is to eliminate tuple size
restrictions; once that happens large objects will be much less
necessary.  In the meantime, though, better code for PgLargeObject
would be a nice contribution if you are interested.
        regards, tom lane

************




Re: [INTERFACES] Problems with postgres V6.5.3 large objects

From: Charles Randle

Tom,

In terms of access to the large objects, my class maintains an array of
objects which are currently open (Oid, name, file descriptor, etc.).
Whenever access to any large object is needed, an accessor method simply
executes lo_read, lo_write, or lo_lseek as desired. If the object is not
on the active list, another method opens it (using lo_open). As is to be
expected, the destructor of the class closes all open objects.

These accessor methods are called by C-style functions which are
basically wrappers around them. This is done because the access is
needed by a Dialogic voice library. The messages are actually VOX voice
messages which are played by an application that I am designing. The
Dialogic libraries that I'm using allow the installation of custom I/O
functions for reading from or writing to non-standard devices (hence the
need for the C-style wrappers), but these custom functions must have the
same prototypes as the standard open/read/write/lseek system calls.
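
For illustration, wrappers of that general shape might look like the
sketch below; the function names, the shared connection variable, and
the name-to-Oid lookup are stand-ins for whatever the real class keeps.

    /* Hypothetical C-style wrappers with open/read/lseek-like prototypes.
       g_conn and the name-to-Oid lookup are placeholders; every call must
       use the same connection on which lo_open was done. */
    #include <libpq-fe.h>
    #include <libpq/libpq-fs.h>

    static PGconn *g_conn;                 /* single shared backend connection */

    extern "C" int vox_open(const char *name, int /* flags */)
    {
        (void) name;                       /* lookup in receptormessages not shown */
        Oid oid = 0;                       /* placeholder for the looked-up Oid */
        return lo_open(g_conn, oid, INV_READ);   /* LO descriptor, or -1 on error */
    }

    extern "C" int vox_read(int fd, char *buf, unsigned int len)
    {
        return lo_read(g_conn, fd, buf, len);
    }

    extern "C" long vox_lseek(int fd, long offset, int whence)
    {
        return lo_lseek(g_conn, fd, offset, whence);
    }

    extern "C" int vox_close(int fd)
    {
        return lo_close(g_conn, fd);
    }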

The interface works for the first such VOX large object, but the error
manifests itself if I try to load (open) another large object for use.

Regards,
Charles Randle


************




Re: [INTERFACES] Problems with postgres V6.5.3 large objects

From: Tom Lane

Charles Randle <caspanis@cybervale.com> writes:
> In terms of access to the large objects, my class maintains an array of
> objects which are currently open (Oid, name, file descriptor, etc.).
> Whenever access to any large object is needed, an accessor method simply
> executes lo_read, lo_write, or lo_lseek as desired. If the object is not
> on the active list, another method opens it (using lo_open). As is to be
> expected, the destructor of the class closes all open objects.

Sounds reasonable as far as it goes.  My best guess at this point is a
client-side memory stomp that is clobbering libpq's record of which
function OID to call for each available LO operation.  See
interfaces/libpq/fe-lobj.c, particularly lo_initialize() and its
callers, and then try tracing through the client with a debugger
to see where the conn->lobjfuncs struct gets clobbered.
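
As a made-up illustration of the kind of stomp to look for: passing
lo_read a length larger than the destination buffer lets it write past
the end, and whatever happens to sit next to that buffer in memory gets
corrupted.

    /* Hypothetical example only: requesting more bytes than the buffer
       holds lets lo_read scribble past the end of it; if the trampled
       memory belongs to libpq's per-connection bookkeeping, symptoms
       like the bogus function OID above are typical. */
    #include <cstdlib>
    #include <libpq-fe.h>

    int read_chunk(PGconn *conn, int fd)
    {
        char *buf = (char *) std::malloc(1024);
        /* BUG: lo_read(conn, fd, buf, 4096) may write 4096 bytes into a
           1024-byte allocation.  Request only what the buffer can hold: */
        int n = lo_read(conn, fd, buf, 1024);
        std::free(buf);
        return n;
    }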

Good luck!
        regards, tom lane

************




Re: [INTERFACES] Problems with postgres V6.5.3 large objects

From: Douglas Thomson

> Most of the developers have not wanted to put much effort into large
> objects, since where we really want to go is to eliminate tuple size
> restrictions; once that happens large objects will be much less
> necessary.

I am curious about the planned interface once tuple size restrictions
are eliminated. I am using large objects, and want to make my
application port as painlessly as possible.

If *all* that happens is that tuple size and SQL command length
limits are eliminated, then presumably I will just use a TEXT
attribute and do a normal INSERT to store my large object? But will
that mean that I have to go right through my large objects and escape
all the nasty binary characters (such as single quotes) that are
illegal in an SQL string constant?
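
For concreteness, the per-byte escaping a plain string literal would
seem to need might look like the sketch below; the function name is
invented, and the rules assumed are simply that quotes are doubled and
that backslashes and non-printable bytes become backslash-octal escapes.

    /* Sketch of escaping binary data for a plain SQL string literal:
       single quotes doubled, backslashes and non-printable bytes written
       as three-digit octal escapes.  Illustrative only. */
    #include <cstddef>
    #include <cstdio>
    #include <string>

    std::string escape_for_sql_literal(const unsigned char *data, std::size_t len)
    {
        std::string out;
        for (std::size_t i = 0; i < len; i++) {
            unsigned char c = data[i];
            if (c == '\'')
                out += "''";                              /* double the quote */
            else if (c == '\\' || c < 0x20 || c >= 0x7f) {
                char oct[8];
                std::sprintf(oct, "\\%03o", (unsigned int) c);  /* octal escape */
                out += oct;
            }
            else
                out += (char) c;
        }
        return out;
    }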

If there is some other list where this discussion is accessible then
please direct me!

Doug.

P.S. Did anyone ever comment about whether multi-table joins took
     advantage of individual table indexes on the joining attributes,
     or whether the join had to read in all the table data and sort it
     anyway? There was some trouble with duplicated postings at the
     time, and I may have missed something while I was purging the
     messages I had already read...

************

Re: [INTERFACES] Problems with postgres V6.5.3 large objects

From: Tom Lane

Douglas Thomson <dougt@mugc.cc.monash.edu.au> writes:
>> Most of the developers have not wanted to put much effort into large
>> objects, since where we really want to go is to eliminate tuple size
>> restrictions; once that happens large objects will be much less
>> necessary.

> I am curious about the planned interface once tuple size restrictions
> are eliminated. I am using large objects, and want to make my
> application port as painlessly as possible.

This hasn't been thought about much, AFAIK.  I agree that with large
fields in tuples, it'd be nice to have operations that read or write
portions of fields, and also ways to read/write binary data without any
encoding.  But I'm not real sure what the API should look like.  Are
there precedents in other DBMSes?
        regards, tom lane