Large Object problems (was Re: JDBC int8 hack) - Mailing list pgsql-patches

From Peter Mount
Subject Large Object problems (was Re: JDBC int8 hack)
Date
Msg-id 5.0.2.1.0.20010410141636.022faaa0@mail.retep.org.uk
In response to Re: JDBC int8 hack  (Kyle VanderBeek <kylev@yaga.com>)
List pgsql-patches
At 18:30 09/04/01 -0700, Kyle VanderBeek wrote:
>On Thu, Apr 05, 2001 at 04:08:48AM -0400, Peter T Mount wrote:
> > Quoting Kyle VanderBeek <kylev@yaga.com>:
> >
> >
> > > Please consider applying my patch to the 7.0 codebase as a stop-gap
> > > measure until such time as the optimizer can be improved to notice
> > > indices on INT8 columns and cast INT arguments up.
> >
> > This will have to wait until after 7.1 is released. As this is a "new"
> > feature, this can not be included in 7.1 as it's now in the final
> > Release Candidate phase.
>
>This is a new feature?  Using indices is "new"?  I guess I really beg to
>differ.  Seems like a bugfix to me (in the "workaround" category).

Yes, it is. INT8 is not a type yet supported by the driver, hence it's
"new".

In fact, the JDBC driver supports no arrays at this time (as PostgreSQL and
SQL3 arrays are different beasts).

If it's worked in the past, then that was sheer luck.
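For anyone hitting the same planner limitation in the meantime, the usual
workaround (independent of the driver) is to quote or explicitly cast the
literal so the backend resolves it to INT8 and can match the index. A
minimal sketch, assuming a hypothetical "events" table with an indexed
INT8 "id" column:

    import java.sql.*;

    public class Int8Workaround {
        public static void main(String[] args) throws Exception {
            Class.forName("org.postgresql.Driver");
            Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/test", "user", "pass");
            Statement st = conn.createStatement();

            long id = 42L;

            // A bare literal is typed INT4, so the planner of this era
            // won't use an INT8 index:
            //     SELECT * FROM events WHERE id = 42
            // Quoting the literal (or casting: id = 42::int8) lets the
            // planner treat the constant as INT8 and use the index:
            ResultSet rs = st.executeQuery(
                "SELECT * FROM events WHERE id = '" + id + "'");
            while (rs.next())
                System.out.println(rs.getLong("id"));

            rs.close();
            st.close();
            conn.close();
        }
    }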

>I'm going to start digging around in the optimizer code so such hacks as
>mine aren't needed.  It's really heinous to find out your production
>server is freaking out and doing sequential scans for EVERYTHING.

Are you talking about the optimiser in the backend? There isn't one in the
JDBC driver.


>Another hack I need to work on (or someone else can) is to squish in a
>layer of filesystem hashing for large objects.  We tried to use large
>objects and got destroyed.  40,000 rows and the server barely functioned.
>I think this is because of 2 things:
>
>1) Filehandles not being closed.  This was an oversight I've seen covered
>in the list archives somewhere.

Ok, ensure you are closing the large objects within JDBC. If you are, then
this is a backend problem.

One thing to try is to commit the transaction a bit more often (if you are
running within a single transaction for all 40k objects). Committing the
transaction will force the backend to close all open large objects on that
connection.
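
To make that concrete, here is a minimal sketch of the close-and-commit
pattern using the driver's large object API. The table and column names
are invented for illustration, and the cast used to reach the API varies
by driver version (current drivers expose it via
org.postgresql.PGConnection):

    import java.sql.*;
    import java.util.ArrayList;
    import java.util.List;
    import org.postgresql.PGConnection;
    import org.postgresql.largeobject.LargeObject;
    import org.postgresql.largeobject.LargeObjectManager;

    public class LargeObjectSweep {
        public static void main(String[] args) throws Exception {
            Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/test", "user", "pass");
            conn.setAutoCommit(false);  // large objects need a transaction

            LargeObjectManager lom =
                ((PGConnection) conn).getLargeObjectAPI();

            // Collect the OIDs first so we can commit freely while
            // reading the objects ("images" is a hypothetical table).
            List<Long> oids = new ArrayList<>();
            Statement st = conn.createStatement();
            ResultSet rs = st.executeQuery("SELECT img_oid FROM images");
            while (rs.next())
                oids.add(rs.getLong(1));
            rs.close();
            st.close();

            byte[] buf = new byte[8192];
            int n = 0;
            for (long oid : oids) {
                LargeObject obj = lom.open(oid, LargeObjectManager.READ);
                try {
                    while (obj.read(buf, 0, buf.length) > 0) {
                        // ...process the chunk...
                    }
                } finally {
                    obj.close();   // release the descriptor explicitly
                }
                if (++n % 1000 == 0)
                    conn.commit(); // closes anything still open, too
            }
            conn.commit();
            conn.close();
        }
    }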

>2) The fact that all objects are stored in the single data directory.
>Once you get up to a good number of objects, directory scans really take a
>long, long time.  This slows down any subsequent openings of large
>objects.  Is someone working on this problem?  Or have a patch already?

Again not JDBC. Forwarding to the hackers list on this one. The naming
conventions were changed a lot in 7.1, and it was for more flexibility.
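
On the directory-scan point, the hashing layer Kyle describes usually
means fanning files out over fixed subdirectories derived from the object
id, so no single directory holds tens of thousands of entries. This is
only an abstract sketch of the idea, not backend code; the 256x256
fan-out is an arbitrary choice:

    import java.io.File;

    public class DirHash {
        // Derive a two-level subdirectory from the object id so entries
        // spread across up to 256 x 256 directories.
        static File pathFor(File root, long oid) {
            int h = (int) (oid ^ (oid >>> 32));
            String d1 = String.format("%02x", (h >>> 8) & 0xff);
            String d2 = String.format("%02x", h & 0xff);
            return new File(new File(new File(root, d1), d2),
                            Long.toString(oid));
        }

        public static void main(String[] args) {
            // prints /data/lo/9c/40/40000
            System.out.println(pathFor(new File("/data/lo"), 40000L));
        }
    }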

Peter

