doug@mcnaught.org (Doug McNaught) writes:
> Network Administrator <netadmin@vcsn.com> writes:
>
>> Ok, I did see the autocommit flag setting in DBD::Pg when I started
>> reading up on the DBI/DBD interfacing methods, so I guess I could
>> recode for that. However, how do you keep the current transaction
>> open if your script is writing pages to the web? Even in mod_perl I
>> think that there is a commit after the script ends, no?
>
> Oh, right--I didn't get that bit of your problem.
>
> I think the conventional wisdom on this is that keeping transactions
> open across web page deliveries is a Bad Idea. If you're just doing
> the standard "show N records per page" thing, you can use LIMIT and
> OFFSET on your SELECT call. This is going to be slower than using a
> transaction (because you're re-executing the query for every page) but
> is fairly simple.
If the set of data is pretty complex, this can Suck Really Badly.
A developer recently came to me with a more or less pathological case
where LIMIT/OFFSET on a particular query made it run for about 3000ms,
whereas dropping the LIMIT dropped query time to 75ms.
The problem was that the table was big, and the ORDER BY on the DATE
column caused the LIMIT to force an index scan on the DATE field, when
it would have been preferable to use an index scan on customer ID and
sort the result set in memory.
I haven't tried to "punt" that problem over to [PERFORM] because it's
pretty clear that a CURSOR is a better idea, as you suggest next.
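A sketch of the cursor approach (names invented). Note that the cursor
only lives inside a single transaction on a single connection, which is
exactly why it doesn't map cleanly onto one-shot CGI requests:

```sql
BEGIN;
DECLARE page_cur CURSOR FOR
    SELECT order_id, order_date
      FROM orders
     WHERE customer_id = 42
     ORDER BY order_date;
FETCH 20 FROM page_cur;   -- first page
FETCH 20 FROM page_cur;   -- next page, no re-execution of the query
CLOSE page_cur;
COMMIT;
```
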
> If you really want to have a DB transaction that covers multiple page
> views, you need some kind of persistent application server rather than
> CGI scripts, so you can keep open connections and application state
> around.
Right you are. The challenge, of course, is how to properly expire
these objects.
--
(format nil "~S@~S" "cbbrowne" "libertyrms.info")
<http://dev6.int.libertyrms.com/>
Christopher Browne
(416) 646 3304 x124 (land)