Re: Suspending SELECTs - Mailing list pgsql-performance

From: Mark Lewis
Subject: Re: Suspending SELECTs
Date:
Msg-id: 1137534050.17090.23.camel@archimedes
In response to: Re: Suspending SELECTs (Alessandro Baretta <a.baretta@barettadeit.com>)
List: pgsql-performance
> I am developing my applications in Objective Caml, and I have written the
> middleware layer myself. I could easily implement a cursor-pooling strategy, but
> there is no perfect solution to the problem of guaranteeing that cursors be
> closed. Remember that web applications require the user to "open a session" by
> connecting to the appropriate HTTP resource, but users are never required to log
> out. Hence, in order to eventually reclaim all cursors, I must use a magical
> "log-out detection" algorithm, which is usually implemented with a simple
> timeout. This guarantees the required property of safety (the population of
> cursors does not diverge) but does not guarantee the required property of
> liveness (a user connecting to the application, who has opened a session but has
> not logged out, and thus possesses a session token, should have access to the
> execution context identified by his token).

I fail to see the problem here.  Why should "liveness" be a required
property?  If it is simply that you can't promptly detect when a user is
finished with their web session so you can free resources, then remember
that there is no requirement that you dedicate a connection to their
session in the first place.  Even if you're using your own custom
middleware, it isn't a very complicated or conceptually difficult thing
to implement (see my previous post).  Certainly it's simpler than
allowing clients to pass around runtime state.
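
To make that concrete, here is a rough sketch of the kind of stateless
approach I mean (just an illustration, with an invented table and column
names, and not necessarily the exact scheme from my earlier post).  Each
request re-runs a short query keyed on the last row the client saw, so
nothing has to stay open on the server between requests:

    -- First page: one screenful, ordered by a unique key.
    SELECT id, title
    FROM documents
    ORDER BY id
    LIMIT 50;

    -- Subsequent pages: the client sends back the last id it received
    -- ($1), and the server fetches the next screenful after that point.
    SELECT id, title
    FROM documents
    WHERE id > $1
    ORDER BY id
    LIMIT 50;

With an index on id each request is cheap, and an abandoned session costs
nothing, because no connection, transaction or cursor is left behind.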

As far as implementing this sort of thing in the back-end goes, it would be
really hard with the PostgreSQL versioning model.  Oracle can more easily
(and to some extent does) support cursors like you've described because it
implements MVCC differently than PostgreSQL: you're guaranteed access to
roughly the most recent x megabytes of historical rows, so even without an
open transaction to keep the required rows around you can be reasonably
sure they'll still be there for "long enough".  In PostgreSQL, historical
rows are kept in the tables themselves and periodically vacuumed, so there
is no such guarantee.  That means you would need to either implement a lot
of complex locking for little material gain, or just hold the cursors in
moderately long-running transactions, which leads back to the solution
suggested earlier.
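
For comparison, the "moderately long-running transaction" variant looks
roughly like this in PostgreSQL (again only a sketch, with invented names).
The cursor is usable only while the transaction that declared it stays
open, which is exactly why the session-timeout issue above comes back:

    BEGIN;
    DECLARE doc_cur CURSOR FOR
        SELECT id, title FROM documents ORDER BY id;

    -- Each "next page" request from the same session fetches another batch.
    FETCH 50 FROM doc_cur;
    FETCH 50 FROM doc_cur;

    -- When the middleware decides the session is dead (e.g. on timeout),
    -- it ends the transaction, which closes the cursor and lets VACUUM
    -- reclaim the old row versions the snapshot was holding back.
    COMMIT;

(A cursor declared WITH HOLD can outlive the COMMIT, but PostgreSQL then
materializes the cursor's result set at commit time, so it doesn't make
the bookkeeping problem go away.)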

-- Mark Lewis


