Thread: Re: [JDBC] Possible large object bug?

Re: [JDBC] Possible large object bug?

From: Peter Mount
Date: Thursday, March 29, 2001 6:52 PM
At 10:37 27/03/01 +1000, Joe Shevland wrote:
>Hi,
>
>Semi-topical I hope ;)

Yes semi ;-)

>  I've started using Postgres 7.1 (FreeBSD 4.2-S) and large objects via
> JDBC. (postmaster (PostgreSQL) 7.1beta5)

I'm forwarding this to the bugs list as it looks like something nasty in
the back end.


>Everything has been working nicely with storing/retrieving blobs, until
>last night during a vacuum of the database the backend process crashed
>with the messages added to the end of this email. I'm also using the
>'vacuumlo' contributed code. The order of the cron jobs is:
>
>59 2  *  *  *  postgres /usr/local/pgsql/bin/vacuumlo -v db1 db2 db3
>59 3  *  *  *  postgres /usr/local/pgsql/bin/vacuumdb -z db1
>59 4  *  *  *  postgres /usr/local/pgsql/bin/vacuumdb -z db2
>59 5  *  *  *  postgres /usr/local/pgsql/bin/vacuumdb -z db3
>
>so I was wondering if there might be a bug in the vacuumlo code (though
>it's vacuumdb dying)? Or I was thinking, because they're development db's,
>that frequent dropping/recreating of tables is maybe causing the prob? The
>same vacuum commands have run fine before, both from cron and the command
>line; the only difference was slightly heavier dropping/recreating yesterday.
>
>I'm yet to see if that particular database is stuffed as I can recreate
>and retest easily enough. Let me know if I can give any further info,
>
>Regards,
>Joe
>
>---
>NOTICE:  Rel pg_attribute: TID 1/115: OID IS INVALID. TUPGONE 1.
>...
>NOTICE:  Rel pg_attribute: TID 1/6087: OID IS INVALID. TUPGONE 1.
>NOTICE:  Rel pg_attribute: TID 1/6111: OID IS INVALID. TUPGONE 1.
>NOTICE:  Rel pg_attribute: TID 1/6112: OID IS INVALID. TUPGONE 1.
>NOTICE:  Rel pg_attribute: TID 1/6136: OID IS INVALID. TUPGONE 1.
>NOTICE:  Rel pg_attribute: TID 1/6137: OID IS INVALID. TUPGONE 1.
>pqReadData() -- backend closed the channel unexpectedly.
>         This probably means the backend terminated abnormally
>         before or while processing the request.
>connection to server was lost
>vacuumdb: vacuum  db2 failed
>---
>
>with ~500 of the NOTICE lines then the crash. About 1% give a TUPGONE 0
>ending instead.
>
>
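For context, the blob storing and retrieving described above goes through the JDBC driver's large object API rather than ordinary result sets, and every large object call has to run inside a transaction. A minimal sketch of that flow follows; the method names come from the driver's LargeObjectManager interface, while the connection URL and the "documents" table with its oid column "body" are illustrative, not taken from the thread.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

import org.postgresql.PGConnection;
import org.postgresql.largeobject.LargeObject;
import org.postgresql.largeobject.LargeObjectManager;

public class BlobSketch {
    public static void main(String[] args) throws Exception {
        // Illustrative connection details; large object calls must run inside a transaction.
        Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/db1", "postgres", "");
        conn.setAutoCommit(false);

        LargeObjectManager lom = conn.unwrap(PGConnection.class).getLargeObjectAPI();

        // Store: create a large object, write the bytes, keep its OID in a table row.
        long oid = lom.createLO(LargeObjectManager.READ | LargeObjectManager.WRITE);
        LargeObject out = lom.open(oid, LargeObjectManager.WRITE);
        out.write("hello, large object".getBytes("UTF-8"));
        out.close();
        try (PreparedStatement ps =
                 conn.prepareStatement("INSERT INTO documents (body) VALUES (?)")) {
            ps.setLong(1, oid); // documents.body is a column of type oid (illustrative)
            ps.executeUpdate();
        }

        // Retrieve: look the OID up again and read the bytes back.
        try (PreparedStatement ps =
                 conn.prepareStatement("SELECT body FROM documents LIMIT 1");
             ResultSet rs = ps.executeQuery()) {
            if (rs.next()) {
                LargeObject in = lom.open(rs.getLong(1), LargeObjectManager.READ);
                byte[] buf = new byte[in.size()];
                in.read(buf, 0, buf.length);
                in.close();
                System.out.println(new String(buf, "UTF-8"));
            }
        }

        conn.commit();
        conn.close();
    }
}

In drivers of the 7.1 era the same API was typically reached by casting the connection object rather than calling unwrap(); the store/write/read flow is otherwise the same.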


RE: [JDBC] Possible large object bug?

From: "Joe Shevland"
Date:
Hi,

Tom C. and I exchanged a few emails about the issue off-list; Tom spotted after a while that the PostgreSQL data had
been clobbered by the output of my cron job (not possible in normal conditions, I guess)... end result being he suspects
it's a hardware glitch or similar (also had 'wc' dump core that same night during the daily periodic run; I'm choosing
cosmic radiation ;).

I'm going to upgrade to the latest release and also run up another, shinier box and see if things work out; I'm sure
they will (digging out a Tripwire floppy to make ultra sure nothing's changed). As a side issue, could/should the
vacuumlo functionality be merged with vacuum?

Cheers,
Joe
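For context on the vacuumlo question above: the contrib tool's job is to remove large objects whose OIDs are no longer referenced from any table column, cleanup that plain VACUUM did not perform. A rough sketch of the idea in JDBC terms follows; it is not the contrib tool's actual code, and the "documents" table with its "body" column is again illustrative.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.HashSet;
import java.util.Set;

import org.postgresql.PGConnection;
import org.postgresql.largeobject.LargeObjectManager;

public class OrphanSweep {
    // Delete large objects whose OIDs no application column still points at.
    static void sweep(Connection conn) throws Exception {
        conn.setAutoCommit(false);

        // 1. Collect every OID the application still references.
        Set<Long> referenced = new HashSet<>();
        try (Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery(
                 "SELECT body FROM documents WHERE body IS NOT NULL")) {
            while (rs.next()) {
                referenced.add(rs.getLong(1));
            }
        }

        // 2. Walk the large objects actually stored and drop the unreferenced ones.
        LargeObjectManager lom = conn.unwrap(PGConnection.class).getLargeObjectAPI();
        try (Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("SELECT DISTINCT loid FROM pg_largeobject")) {
            while (rs.next()) {
                long oid = rs.getLong(1);
                if (!referenced.contains(oid)) {
                    lom.delete(oid); // orphaned large object
                }
            }
        }

        conn.commit();
    }

    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:postgresql://localhost/db1", "postgres", "")) {
            sweep(conn);
        }
    }
}

The real tool scans every column of type oid it can find rather than a single known table; the merge question is essentially whether the server's VACUUM should take over that bookkeeping.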

> -----Original Message-----
> From: pgsql-jdbc-owner@postgresql.org
> [mailto:pgsql-jdbc-owner@postgresql.org]On Behalf Of Peter Mount
> Sent: Thursday, March 29, 2001 6:52 PM
> To: Joe Shevland; pgsql-jdbc@postgresql.org
> Cc: pgsql-bugs@postgresql.org
> Subject: Re: [JDBC] Possible large object bug?
>
>
> At 10:37 27/03/01 +1000, Joe Shevland wrote:
> >Hi,
> >
> >Semi-topical I hope ;)
>
> Yes semi ;-)
>
> >  I've started using Postgres 7.1 (FreeBSD 4.2-S) and large objects via
> > JDBC. (postmaster (PostgreSQL) 7.1beta5)
>
> I'm forwarding this to the bugs list as it looks like something nasty in
> the back end.
>
>
> >Everything has been working nicely with storing/retrieving blobs, until
> >last night during a vacuum of the database the backend process crashed
> >with the messages added to the end of this email. I'm also using the
> >'vacuumlo' contributed code. The order of the cron jobs is:
> >
> >59 2  *  *  *  postgres /usr/local/pgsql/bin/vacuumlo -v db1 db2 db3
> >59 3  *  *  *  postgres /usr/local/pgsql/bin/vacuumdb -z db1
> >59 4  *  *  *  postgres /usr/local/pgsql/bin/vacuumdb -z db2
> >59 5  *  *  *  postgres /usr/local/pgsql/bin/vacuumdb -z db3
> >
> >so I was wondering if there might be a bug in the vacuumlo code (though
> >its vacuumdb dying)? Or I was thinking, because they're
> development db's,
> >that frequent dropping/recreating of tables is maybe causing the
> prob? The
> >same vacuum commands have run fine before, both from cron and
> the command
> >line, the only difference was slightly heavier
> dropping/recreating yesterday.
> >
> >I'm yet to see if that particular database is stuffed as I can recreate
> >and retest easily enough. Let me know if I can give any further info,
> >
> >Regards,
> >Joe
> >
> >---
> >NOTICE:  Rel pg_attribute: TID 1/115: OID IS INVALID. TUPGONE 1.
> >...
> >NOTICE:  Rel pg_attribute: TID 1/6087: OID IS INVALID. TUPGONE 1.
> >NOTICE:  Rel pg_attribute: TID 1/6111: OID IS INVALID. TUPGONE 1.
> >NOTICE:  Rel pg_attribute: TID 1/6112: OID IS INVALID. TUPGONE 1.
> >NOTICE:  Rel pg_attribute: TID 1/6136: OID IS INVALID. TUPGONE 1.
> >NOTICE:  Rel pg_attribute: TID 1/6137: OID IS INVALID. TUPGONE 1.
> >pqReadData() -- backend closed the channel unexpectedly.
> >         This probably means the backend terminated abnormally
> >         before or while processing the request.
> >connection to server was lost
> >vacuumdb: vacuum  db2 failed
> >---
> >
> >with ~500 of the NOTICE lines then the crash. About 1% give a TUPGONE 0
> >ending instead.
> >
> >