Thread: Re: Need your comments/help

Re: Need your comments/help

From: The Hermit Hacker
Hi Rasool...

    I'm forwarding this on to the pgsql-hackers mailing list... I'm
bound to be overlooking *someone*'s hard work, but right now, the only one
I can think of who's done any work on concurrency issues (assuming I'm
correct that this is the 'shared/multi-user' aspect we're referring to
here) is Bruce Momjian... as for recovery, I don't think *anyone* has
dived into that area yet...

    As for your co-operating with us in enhancing and extending... we
very much look forward to it.  Keeping our academic ties, as much and as
far as possible, is in everyone's best interests, as they tend to provide
a fountain of "younger" ideas that we old-timers tend to overlook or not
be aware of :)


On Tue, 7 Jul 1998, Rasool Jalili wrote:

> Dear Marc,
>
>  I am an Assistant Professor in the Department of Computer Engineering,
> Sharif University of Technology, Tehran, Iran.  As a new research field,
> we intend to define some (currently one) MSc projects on concurrency
> control or recovery in Postgresql.  This will help us share our research
> ability here with you in enhancing/extending Postgresql as an academic
> shareware DBMS.  Unfortunately, we have been unable to find useful
> documentation describing in detail how the Postgresql transaction
> management is structured and which algorithms have been implemented.
> I would appreciate it if you could let me know:

>     - what research (and at what level) has been done on these
>         aspects of Postgresql?

>     - what do you think of our cooperating in enhancing/extending
>         Postgresql?

>     - how can we have more information to initiate such projects?

>     - are there any known database benchmarking tools on Linux for
>         evaluating our modifications?


Large objects buffer leak

From: Pascal ANDRE
    Hi.

I wrote some time ago about a buffer leak that appears with PostgreSQL
large objects, asking for hints about where to look. As I did not get any
answer, I dived a little deeper into the code.

The problem is simple. For performance reasons (as far as I can tell), PG
large objects keep the object's internal scan index open as long as the
object is not closed. The problem is that this index may keep buffers
pinned.
In the CommitTransaction() function, these buffers are examined and
released if necessary, with an error notice. For large objects this causes
a segmentation fault in the postmaster (and this is present in the current
public release).
As long as all large object operations are done inside a transaction
(begin - open lo - ... - close lo - end), the problem does not appear
(CommitTransaction() is only called at the END statement, when the index
is already closed).

In order to correct this, I see two solutions:
  - close the index after every operation on large objects
  - clean up the indexes opened by large objects in CommitTransaction()
I prefer the second one, which preserves the speed-up inside transactions.
But as I do not know all the work in progress, I would like to know which
approach should be used in order to stay coherent with current
development.

Yet another question: is anyone working on large objects? If not, I can
code this bug fix and submit a patch.

    Thanks.

---
Pascal ANDRE, graduated from Ecole Centrale Paris
andre@via.ecp.fr
"Use the source, Luke. Be one with the Code."  -- Linus Torvalds