Re: Incremental results from libpq - Mailing list pgsql-interfaces

From Goulet, Dick
Subject Re: Incremental results from libpq
Date
Msg-id 4001DEAF7DF9BD498B58B45051FBEA6502EF5608@25exch1.vicorpower.vicr.com
In response to Incremental results from libpq  (Scott Lamb <slamb@slamb.org>)
List pgsql-interfaces
Bruce,

Hmm, you learn something new every day.  Thanks; I didn't see that in the
documentation.

-----Original Message-----
From: Bruce Momjian [mailto:pgman@candle.pha.pa.us]
Sent: Wednesday, November 16, 2005 3:13 PM
To: Goulet, Dick
Cc: Tom Lane; Peter Eisentraut; pgsql-interfaces@postgresql.org; Scott
Lamb
Subject: Re: [INTERFACES] Incremental results from libpq

Goulet, Dick wrote:
>  Bruce,
>
>     If I may, one item that would be of extreme use to our location
> would be global temporary tables.  These have existed since Oracle 9.0.
> They are defined once and then used by clients as needed.  Each session
> is ignorant of the data of any other session, and once you disconnect,
> the data from the session disappears.  Truly a real temporary table.

How is it better than what we have now?
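For reference, the session-scoped behavior being discussed (each session blind to the others' temporary data, which vanishes at disconnect) can be sketched with Python's sqlite3 module standing in for a database server.  This is an illustration of the concept only, not Oracle or PostgreSQL semantics, and the names are invented:

```python
import sqlite3

# Two "sessions" (connections) to the same shared in-memory database.
sess_a = sqlite3.connect("file:demo?mode=memory&cache=shared", uri=True)
sess_b = sqlite3.connect("file:demo?mode=memory&cache=shared", uri=True)

# Session A defines and fills a temporary table.
sess_a.execute("CREATE TEMP TABLE scratch (x INTEGER)")
sess_a.execute("INSERT INTO scratch VALUES (1), (2)")

# Session A sees its own rows.
rows_in_a = sess_a.execute("SELECT COUNT(*) FROM scratch").fetchone()[0]

# Session B cannot see the table at all: temp objects are per-session.
try:
    sess_b.execute("SELECT COUNT(*) FROM scratch")
    visible_in_b = True
except sqlite3.OperationalError:
    visible_in_b = False

print(rows_in_a, visible_in_b)  # 2 False
```

Closing session A would also drop its temporary data automatically, which is the "truly temporary" property Goulet describes.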

---------------------------------------------------------------------------


>
> -----Original Message-----
> From: Bruce Momjian [mailto:pgman@candle.pha.pa.us]
> Sent: Wednesday, November 16, 2005 11:33 AM
> To: Goulet, Dick
> Cc: Tom Lane; Peter Eisentraut; pgsql-interfaces@postgresql.org; Scott
> Lamb
> Subject: Re: [INTERFACES] Incremental results from libpq
>
>
> Added to TODO:
>
>         o Allow query results to be automatically batched to the client
>
>           Currently, all query results are transferred to the libpq
>           client before libpq makes the results available to the
>           application.  This feature would allow the application to make
>           use of the first result rows while the rest are transferred, or
>           held on the server waiting for them to be requested by libpq.
>           One complexity is that a query like SELECT 1/col could error
>           out mid-way through the result set.
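The mid-query failure the TODO mentions can be sketched in plain Python (illustrative only, no libpq involved): when rows are computed lazily, the error in an expression like 1/col surfaces only after earlier rows have already reached the client.

```python
# Rows are produced lazily; 1/col fails only when the bad row is
# reached, after earlier rows were already handed to the consumer.
def one_over_col(rows):
    for col in rows:
        yield 1 / col  # raises ZeroDivisionError mid-stream when col == 0

delivered = []
try:
    for value in one_over_col([4, 2, 0, 1]):
        delivered.append(value)
except ZeroDivisionError:
    pass

print(delivered)  # [0.25, 0.5] -- two rows were consumed before the error
```

An incremental API therefore has to be able to report an error after part of the result set has already been delivered, which is exactly the complexity the TODO item flags.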
>
>
>
> ---------------------------------------------------------------------------
>
> Goulet, Dick wrote:
> > Tom,
> >
> >     Your case for not supporting this is reasonable, at least to me.
> > Personally I believe you should take one side or the other at the server
> > level and then allow the app developer to use it as appropriate, so no
> > argument here.  But there was a change in behavior introduced by Oracle
> > in 10g that supports what was asked for by Trolltech.  The optimizer was
> > given the "smarts" to determine whether your query is best served by a
> > regular cursor or whether a bulk collect in the background would be
> > better.  The end result is that the application behaves as normal, but
> > the results get back to it faster.  What appears to be happening is
> > that the database returns the first row as normal, but then continues
> > collecting data rows and sequestering them off somewhere, probably the
> > temp tablespace, until you're ready for them.  It appears to have driven
> > the final nail into the coffin of the old "ORA-01555 Snapshot too old"
> > error.  Of course, since PostgreSQL doesn't have undo segments, you
> > don't have that problem.
> >
> > -----Original Message-----
> > From: pgsql-interfaces-owner@postgresql.org
> > [mailto:pgsql-interfaces-owner@postgresql.org] On Behalf Of Tom Lane
> > Sent: Wednesday, November 16, 2005 9:24 AM
> > To: Peter Eisentraut
> > Cc: pgsql-interfaces@postgresql.org; Scott Lamb
> > Subject: Re: [INTERFACES] Incremental results from libpq
> >
> > Peter Eisentraut <peter_e@gmx.net> writes:
> > > On Wednesday, 9 November 2005 22:22, Tom Lane wrote:
> > >> The main reason why libpq does what it does is that this way we do not
> > >> have to expose in the API the notion of a command that fails part way
> > >> through.
> >
> > > I'm at LinuxWorld Frankfurt and one of the Trolltech guys came over to
> > > talk to me about this.  He opined that it would be beneficial for their
> > > purpose (in certain cases) if the server would first compute the entire
> > > result set and keep it in server memory (thus eliminating potential
> > > errors of the 1/x kind) and then ship it to the client in a way that
> > > the client would be able to fetch it piecewise.  Then, the client
> > > application could build the display incrementally while the rest of the
> > > result set travels over the (slow) link.
> > > Does that make sense?
> >
> > Ick.  That seems pretty horrid compared to the straight
> > incremental-compute-and-fetch approach.  Yes, it preserves the illusion
> > that a SELECT is all-or-nothing, but at a very high cost, both in terms
> > of absolute runtime and in terms of needing a new concept in the
> > frontend protocol.  It also doesn't solve the problem for people who
> > need incremental fetch because they have a result set so large they
> > don't want it materialized on either end of the wire.  Furthermore, ISTM
> > that any client app that's engaging in incremental fetches really has to
> > deal with the failure-after-part-of-the-query-is-done problem anyway,
> > because there's always a risk of failures on the client side or in the
> > network connection.  So I don't see any real gain in conceptual
> > simplicity from adding this feature anyway.
> >
> > Note that if Trolltech really want this behavior, they can have it today
> > --- it's called CREATE TEMP TABLE AS SELECT.  It doesn't seem attractive
> > enough to me to justify any further feature than that.
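The pattern Tom points to (materialize the full result first, then fetch it piecewise) can be sketched with Python's sqlite3 module standing in for the server; the table and column names here are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (n INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(10)])

# Step 1: materialize the whole result into a temp table.  Any error in
# the query surfaces here, before the client has fetched anything.
conn.execute("CREATE TEMP TABLE result AS SELECT n * n AS sq FROM t")

# Step 2: pull the materialized rows piecewise, batch by batch.
cur = conn.execute("SELECT sq FROM result")
batches = []
while True:
    batch = cur.fetchmany(4)
    if not batch:
        break
    batches.append([r[0] for r in batch])

print(batches)  # [[0, 1, 4, 9], [16, 25, 36, 49], [64, 81]]
```

The cost Tom objects to is visible in the sketch: the full result must exist (and be stored) before the first batch is delivered.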
> >
> >             regards, tom lane
> >
> > ---------------------------(end of broadcast)---------------------------
> > TIP 9: In versions below 8.0, the planner will ignore your desire to
> >        choose an index scan if your joining column's datatypes do not
> >        match
> >
> > ---------------------------(end of broadcast)---------------------------
> > TIP 1: if posting/reading through Usenet, please send an appropriate
> >        subscribe-nomail command to majordomo@postgresql.org so that your
> >        message can get through to the mailing list cleanly
> >
>
> --
>   Bruce Momjian                        |  http://candle.pha.pa.us
>   pgman@candle.pha.pa.us               |  (610) 359-1001
>   +  If your life is a hard drive,     |  13 Roberts Road
>   +  Christ can be your backup.        |  Newtown Square, Pennsylvania 19073
>
> ---------------------------(end of broadcast)---------------------------
> TIP 3: Have you checked our extensive FAQ?
>
>                http://www.postgresql.org/docs/faq
>

--
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 359-1001
  +  If your life is a hard drive,     |  13 Roberts Road
  +  Christ can be your backup.        |  Newtown Square, Pennsylvania 19073

