On Tue, Sep 25, 2001 at 12:06:48AM -0400, Micah Yoder wrote:
> (sorry to reply to a week-old message. need to keep up with this list more!)
Ditto, but more so.
> I then wrote a daemon in C to do the work and store the results in RAM. The
> PHP script connected to the daemon via a socket, and passed a request ID and
> the numbers of the records it wanted. Sure, it was convoluted, but I
> actually got the speed up to where I was fairly happy with it.
>
> If there's a better solution than that, I'm not aware of it.
A technique I've used with some success is to select just the primary
keys of the rows you're interested in, so all you have to remember is a
list of integers. Then for each page, select the full rows
"WHERE pkey IN (...)".
It's sort of a middle ground as far as tradeoffs go. You don't have to
store a huge amount of data in RAM or temporary files, but you still
have to do the work up front.
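Roughly like this (a sketch only, assuming PHP talking to PostgreSQL
through the pg_* functions; the table and column names are invented,
and you'd want to re-order the fetched rows to match the saved key
order if that matters):

<?php
// Run the expensive query once, but keep only the primary keys.
session_start();
$db = pg_connect("dbname=mydb");

if (!isset($_SESSION['result_keys'])) {
    $res  = pg_query($db,
        "SELECT id FROM articles WHERE body ILIKE '%foo%' ORDER BY posted_on DESC");
    $keys = array();
    while ($row = pg_fetch_row($res)) {
        $keys[] = (int) $row[0];
    }
    $_SESSION['result_keys'] = $keys;   // just a list of integers
}

// On each page hit, fetch only that page's slice of rows by key.
$page     = isset($_GET['page']) ? (int) $_GET['page'] : 0;
$per_page = 20;
$slice    = array_slice($_SESSION['result_keys'], $page * $per_page, $per_page);

if ($slice) {
    $in  = implode(',', $slice);   // integers we built ourselves, so safe to inline
    $res = pg_query($db, "SELECT * FROM articles WHERE id IN ($in)");
    while ($row = pg_fetch_assoc($res)) {
        // ... render the row ...
    }
}
?>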
The problem I have with persistent per-session connections is that you
still pay basically the same
(per-transaction-overhead * simultaneous-transactions), and you add
(per-connection-overhead * simultaneous-open-sessions) on top.
There are certainly situations where you can do better one way or the
other; figuring out how best to tune the per-session case scares me.
--
Christopher Masto
CB461C61 8AFC E3A8 7CE5 9023 B35D C26A D849 1F6E CB46 1C61