Thread: Incremental results from libpq

Incremental results from libpq

From
Scott Lamb
Date:
I'm using libpq's asynchronous interface. I'd like to do the  
equivalent of setting fetchSize on the JDBC driver - get a chunk of  
results, handle them, free the memory they use, and then get another  
until I've handled an entire query.

I can do this at the SQL layer using "declare cursor ..." and "fetch  
forward n ..." but it seems like the lower level should be able to do  
this for me. It'd also let me have a more natural interface that (A)  
doesn't make the caller take a PostgreSQL-specific declare/fetch path  
for each query and (B) can still use the JDBC-style "execute" that  
doesn't care if it's dealing with a row-returning statement.
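
For concreteness, here is roughly what that declare/fetch path looks  
like through libpq today. This is just a sketch; the cursor and table  
names are made up:

    #include <stdio.h>
    #include <libpq-fe.h>

    static void fetch_in_chunks(PGconn *conn)
    {
        PQclear(PQexec(conn, "BEGIN"));  /* cursors live in a transaction */
        PQclear(PQexec(conn,
                       "DECLARE c CURSOR FOR SELECT * FROM big_table"));

        for (;;)
        {
            PGresult *res = PQexec(conn, "FETCH FORWARD 100 FROM c");
            int n;

            if (PQresultStatus(res) != PGRES_TUPLES_OK)
            {
                fprintf(stderr, "%s", PQerrorMessage(conn));
                PQclear(res);
                break;
            }
            n = PQntuples(res);
            /* ... handle n rows ... */
            PQclear(res);            /* frees this chunk's memory */
            if (n == 0)
                break;               /* portal exhausted */
        }

        PQclear(PQexec(conn, "CLOSE c"));
        PQclear(PQexec(conn, "END"));
    }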

I see that the JDBC driver (at least in protocol version 3; I don't  
care about 2) does this by passing a maximum number of rows when  
sending Execute, then handling PortalSuspended and sending Execute  
again. I also see that libpq never sends a maximum number of rows or  
handles PortalSuspended.

Still, I think libpq should be able to do what I want. The results are  
sent from the database in order. The PortalSuspended mechanism would be  
necessary to ensure the database sends no more than N rows, but it's  
not necessary just to let the client handle rows as soon as it has them.

I had been retrieving results from a query in this fashion:
    while True:
        readfds = [PQsocket(conn)]
        writefds = []
        if PQflush(conn):
            writefds = readfds
        if not PQconsumeInput(conn):
            error
        if not PQisBusy(conn):
            break
        poll(readfds, writefds)
    return PQgetResult(conn)

which the documentation recommends. But PQisBusy(conn) doesn't return  
false until the _entire_ resultset has been retrieved from the  
server. And if I skip PQisBusy(conn) and go straight for the  
PQgetResult(), it blocks internally until it can complete.

I looked inside libpq, and came up with this instead:
    while True:
        readfds = [PQsocket(conn)]
        writefds = []
        if PQflush(conn):
            writefds = readfds
        if not PQconsumeInput(conn):
            error
        if not PQisBusy(conn):
            break
        if conn->result != NULL and PQntuples(conn->result) > retrieved:
            return conn->result
        poll(readfds, writefds)
    last = True
    return PQgetResult(conn)

where "retrieved" is the number of rows I've examined so far, and  
"last" indicates that I shouldn't call again.

This is about 1/3rd right:

- It does return results incrementally; good.
- It pokes inside libpq; ugh.
- It doesn't free any memory until the whole query's done. I suppose  
I could do that by changing conn->result myself, but...ugh. Is there  
a better way?

-- 
Scott Lamb <http://www.slamb.org/>




Re: Incremental results from libpq

From
Tom Lane
Date:
Scott Lamb <slamb@slamb.org> writes:
> Is there a better way?

Not at the moment.  It's been requested before though, so if you want to
develop a patch for libpq, have at it.

The main reason why libpq does what it does is that this way we do not
have to expose in the API the notion of a command that fails part way
through.  If you support partial result fetching then you'll have to
deal with the idea that a SELECT could fail after you've already
returned some rows to the client.  I am not sure that this is a huge
deal, but it definitely uglifies the API a bit.  It would be a good
idea to think through exactly what clients will need to do to cope with
that fact before you start designing the API extension.
        regards, tom lane


Re: Incremental results from libpq

From
Scott Lamb
Date:
On Nov 9, 2005, at 1:22 PM, Tom Lane wrote:
> Scott Lamb <slamb@slamb.org> writes:
>> Is there a better way?
>
> Not at the moment.  It's been requested before though, so if you  
> want to
> develop a patch for libpq, have at it.
>
> The main reason why libpq does what it does is that this way we do not
> have to expose in the API the notion of a command that fails part way
> through.  If you support partial result fetching then you'll have to
> deal with the idea that a SELECT could fail after you've already
> returned some rows to the client.  I am not sure that this is a huge
> deal, but it definitely uglifies the API a bit.  It would be a good
> idea to think through exactly what clients will need to do to cope  
> with
> that fact before you start designing the API extension.

Cool. I think I'll get my own interface hashed out in my kludgey way,  
then look at the broader need if it's a success.

My first idea, though, is to add a callback interface - "got the  
RowDescription", "got a DataRow" - and make the storage of tuples  
in PGresult optional. (Maybe pqAddTuple would just be the  
default callback.)
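
Something along these lines, purely a sketch of the shape I have in  
mind (nothing here exists in libpq today; pqAddTuple is an internal  
function):

    #include <stdbool.h>
    #include <libpq-fe.h>

    /* Hypothetical row-handler hooks; invented for illustration. */
    typedef struct PQrowHandler
    {
        /* called once, when the RowDescription message arrives */
        void (*row_description)(void *arg, const PGresult *desc);
        /* called per DataRow; returning false would abort the query */
        bool (*data_row)(void *arg, int nfields,
                         const char *const *values, const int *lengths);
        void *arg;
    } PQrowHandler;

    /* Imagined entry point; passing NULL would keep today's behavior
     * of accumulating every row into the PGresult via pqAddTuple. */
    extern int PQsetRowHandler(PGconn *conn, const PQrowHandler *handler);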

Regards,
Scott

-- 
Scott Lamb <http://www.slamb.org/>




Re: Incremental results from libpq

From
Frank van Vugt
Date:
> >> Is there a better way?
> >
> > Not at the moment.  It's been requested before though, so if you
> > want to develop a patch for libpq, have at it.

> Cool. I think I'll get my own interface hashed out in my kludgey way,
> then look at the broader need if it's a success.

Just FYI, I'm interested in this extension as well.

At the moment, the PostgreSQL driver in Trolltech's Qt will fetch the 
complete result of any select query, as opposed to, for example, the Oracle 
driver, which will fetch just the rows that are desired. The result is 
that, for instance, database-enabled tables in the UI won't do their 
initial paint as 'snappily' as they could: although they show only part of 
the result set, the underlying driver is still picking up the complete set 
of records from the backend.


-- 
Best,




Frank.


Re: Incremental results from libpq

From
Frank van Vugt
Date:
> > The main reason why libpq does what it does is that this way we do not
> > have to expose in the API the notion of a command that fails part way
> > through.  If you support partial result fetching then you'll have to
> > deal with the idea that a SELECT could fail after you've already
> > returned some rows to the client.

I'm wondering, what kind of failure do you have in mind, here? If I'm informed 
correctly then Oracle and others are generating the complete static result 
set on the server-side, which will then stay cached until all rows/chunks are 
fetched. The one failure that comes to mind in this scenario is that the 
connection breaks down, but since informing the client would then be a bit 
difficult, you'll certainly be referring to something else ;)

If PostgreSQL were to build the complete result-set before handing over the 
first fetched rows/chunks, then I understand. Is that the case? Or something 
else even...?





-- 
Best,




Frank.


Re: Incremental results from libpq

From
Tom Lane
Date:
Frank van Vugt <ftm.van.vugt@foxi.nl> writes:
> The main reason why libpq does what it does is that this way we do not
> have to expose in the API the notion of a command that fails part way
> through.  If you support partial result fetching then you'll have to
> deal with the idea that a SELECT could fail after you've already
> returned some rows to the client.

> I'm wondering, what kind of failure do you have in mind, here?

There are lots of possibilities, but one fairly obvious example is
SELECT 1/x FROM tab;

where the 10000'th row of tab contains x = 0.  The server will detect
the zero-divide error only after it's already shipped 9999 rows to the
client.  Currently, when libpq gets the error it throws away the 9999
rows and just returns an "error" PQresult to the application.  If you
let libpq return some rows before it's got the whole result set, then
you need to decide what the API behavior ought to be for a failure after
a partial result set has been returned.  This in turn requires a little
thought about how client apps are likely to want to cope with this
scenario, so that you don't come up with a painful-to-use API.
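
To make the current behavior concrete, this is all a libpq application
sees today (a sketch; "conn" is assumed to be an open connection):

    #include <stdio.h>
    #include <libpq-fe.h>

    static void run_query(PGconn *conn)
    {
        PGresult *res = PQexec(conn, "SELECT 1/x FROM tab");

        if (PQresultStatus(res) != PGRES_TUPLES_OK)
            /* zero-divide on the 10000'th row: libpq has already thrown
             * away the 9999 good rows; the app sees only the error */
            fprintf(stderr, "%s", PQerrorMessage(conn));
        PQclear(res);
    }
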
        regards, tom lane


Re: Incremental results from libpq

From
Frank van Vugt
Date:
> There are lots of possibilities, but one fairly obvious example is

Got it, thanks.

So if the Oracle part is true, I guess this is a winner for PostgreSQL ;)




-- 
Best,




Frank.


Re: Incremental results from libpq

From
Greg Stark
Date:
Frank van Vugt <ftm.van.vugt@foxi.nl> writes:

> I'm wondering, what kind of failure do you have in mind, here? If I'm informed 
> correctly then Oracle and others are generating the complete static result 
> set on the server-side, which will then stay cached until all rows/chunks are 
> fetched. 

That's obviously not true. Try doing "select * from huge_table" on Oracle and
you'll see records start appearing immediately. There are applications where
huge_table could occupy hundreds of gigabytes (or petabytes) and requiring all
queries to create copies of all their result sets before proceeding would make
lots of important applications entirely infeasible.

-- 
greg



Re: Incremental results from libpq

From
Frank van Vugt
Date:
> > If I'm informed correctly then Oracle and others are generating
> > the complete static result set on the server-side, which will then
> > stay cached until all rows/chunks are fetched.

> That's obviously not true. Try doing "select * from huge_table" on Oracle
> and you'll see records start appearing immediately.

Then it seems I haven't (been informed correctly) :)

Now, given that OCI is offering some kind of asynchronous interface as well, 
how are they handling an error occurring after a partial result-set has been 
retrieved by the client?



-- 
Best,




Frank.


Re: Incremental results from libpq

From
"Goulet, Dick"
Date:
Frank,
As with a lot of things Oracle, the real answer is: it depends.
For a simple "select * from <table_name>" even with a where
clause you may simply get the results one row at a time as they are
extracted & deemed appropriate for the result set.  But if you've
included a group by or order by clause, or a union/intersect/minus
clause then yes, the database will assemble results before sending them
to the client.
Also Oracle supports cursors which allow you to pop one or more
rows off of the result set at a time.  The way Postgresql returns data
all at once is different.

Dick Goulet
Senior Oracle DBA
Oracle Certified DBA



Re: Incremental results from libpq

From
Alvaro Herrera
Date:
Goulet, Dick wrote:

>     For a simple "select * from <table_name>" even with a where
> clause you may simply get the results one row at a time as they are
> extracted & deemed appropriate for the result set.  But if you've
> included a group by or order by clause, or a union/intersect/minus
> clause then yes, the database will assemble results before sending them
> to the client.

So, what happens with the 1/x query Tom mentioned?  How does Oracle
handle that situation?  Notice there's no special clause in the query
itself, so if it's extracted and returned, there's no way for the server
to know that there's a problem lying ahead.

>     Also Oracle supports cursors which allow you to pop one or more
> rows off of the result set at a time.  The way Postgresql returns data
> all at once is different. 

Postgres supports cursors too.  The Qt guys, and everyone else, could be
using them to get incremental results right now, no libpq mods necessary.

-- 
Alvaro Herrera                                http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.


Re: Incremental results from libpq

From
Tom Lane
Date:
Alvaro Herrera <alvherre@commandprompt.com> writes:
> So, what happens with the 1/x query Tom mentioned?  How does Oracle
> handle that situation?  Notice there's no special clause in the query
> itself, so if it's extracted and returned, there's no way for the server
> to know that there's a problem lying ahead.

>> Also Oracle supports cursors which allow you to pop one or more
>> rows off of the result set at a time.  The way Postgresql returns data
>> all at once is different. 

> Postgres supports cursors too.  The Qt guys, and everyone else, could be
> using them to get incremental results right now, no libpq mods necessary.

Note that in this context, it's really pretty misleading to make any
blanket assertions about "Postgres does this" or "Postgres does that".
The behavior is partially determined by the client-side code, and
might well differ depending on which client library you are using,
as well as how you are using it.

For all I know, similar remarks might apply to Oracle.  Do they support
more than one application-side API?
        regards, tom lane


Re: Incremental results from libpq

From
Greg Stark
Date:
Alvaro Herrera <alvherre@commandprompt.com> writes:

> Postgres supports cursors too.  The Qt guys, and everyone else, could be
> using them to get incremental results right now, no libpq mods necessary.

Not really, since the way Postgres supports cursors is at the SQL level.
Users of Qt and every other driver could be using cursors if their drivers
support them, but Qt can't really be reasonably expected to go into users'
SQL and modify it to use cursors.

Moreover cursors aren't really that great a substitute. With cursors you have
to manually fetch individual records. You're just trading off the
inefficiencies of batching up all the results for entirely different
inefficiencies. Now for every record you retrieve you need a network round
trip as well as a round trip down through your driver, the kernel layers on
both machines, and the backend as well.

The efficient approach, as Oracle and other mature network layers implement
it, is to issue the query once, then pipeline the results back to the
application, buffering a substantial amount in the driver.  DBD::Oracle goes
to some lengths to ensure the number of records buffered is a reasonable
multiple of the default TCP MSS of 1500 bytes.

So even though the application only retrieves one record at a time it's just
pulling it out of an array that's already prefilled. When the array gets low
the next block of records is retrieved from the server (where they're probably
already buffered as well). The result is a constant flow of network traffic
that keeps the application and network as busy as possible.
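
With stock libpq, the closest approximation of that buffering is a small
wrapper that refills from a cursor in big batches while the application
consumes one row at a time. A minimal sketch, assuming a cursor "c" has
already been declared (all names invented):

    #include <libpq-fe.h>

    typedef struct
    {
        PGconn   *conn;
        PGresult *batch;  /* current prefetched block of rows */
        int       pos;    /* next row to hand out */
    } RowStream;

    /* Returns the index of the next row within s->batch, refilling the
     * batch from the server when it runs dry; -1 means done or error. */
    static int stream_next(RowStream *s)
    {
        if (s->batch == NULL || s->pos >= PQntuples(s->batch))
        {
            if (s->batch)
                PQclear(s->batch);          /* free the drained block */
            s->batch = PQexec(s->conn, "FETCH FORWARD 1000 FROM c");
            s->pos = 0;
            if (PQresultStatus(s->batch) != PGRES_TUPLES_OK ||
                PQntuples(s->batch) == 0)
                return -1;
        }
        return s->pos++;
    }

Each network round trip then covers 1000 rows instead of one, though
unlike the Oracle scheme the next batch isn't requested until the current
one is drained.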

-- 
greg



Re: Incremental results from libpq

From
"Goulet, Dick"
Date:
Alvaro,
Your observation is true.  In the query Tom specified, the
server has absolutely no idea that something is amiss in the coming
rows.  It is absolutely normal to have a query like that return several
rows and then yack out the error.  Here's an example: you issue a query
to return all of the rows in a mega-row table at time A.  A second
individual updates, say, half of the rows at time B, which is after you've
gotten 20% of the returned records.  He/she then commits the transaction,
permanently modifying the data.  Now Oracle has retained the original
row values in the rollback or undo segments, but for argument's sake
assume that you've stopped retrieving data for a while, long enough for
the undo retention period to expire, and then continue retrieving data.
It is not uncommon in this case for some additional time to pass, after
which you get an ORA-01555 "Snapshot too old" error, since Oracle
can no longer create a read-consistent view of the data as of time A.
Something like this is also very common with those who "commit across a
cursor", which is another discussion altogether.
        In any case the server does two things almost at once: 1) sends
you the error message and sets sqlca.sqlcode to -1555, 2) closes the
cursor.  Additional calls for data return a further error: cursor not open.



Re: Incremental results from libpq

From
"Goulet, Dick"
Date:
Greg,
While you can bulk collect or array process records in Oracle,
the older, less efficient method is also still around.  And yes, you can
actually measure the inefficiency thereby created.  Also, there are
places where that inefficiency is of value.  Take a case where you're
applying a price change to a pricing table; assume that you want to
raise the price by one of several values based on another column in the
table.  Oracle allows you to open the cursor with the "for update of"
clause.  This has the effect of locking the affected rows in update mode
as they are included in the result set.  Now once you've figured out the
new price you can simply update the table "where current of
<cursor_name>".  Works very well, especially in a stored
procedure/package/function where SQL*Net does not get in the way.




Re: Incremental results from libpq

From
Peter Eisentraut
Date:
On Wednesday, 9 November 2005 22:22, Tom Lane wrote:
> The main reason why libpq does what it does is that this way we do not
> have to expose in the API the notion of a command that fails part way
> through.

I'm at LinuxWorld Frankfurt and one of the Trolltech guys came over to talk to 
me about this.  He opined that it would be beneficial for their purpose (in 
certain cases) if the server would first compute the entire result set and 
keep it in the server memory (thus eliminating potential errors of the 1/x 
kind) and then ship it to the client in a way that the client would be able 
to fetch it piecewise.  Then, the client application could build the display 
incrementally while the rest of the result set travels over the (slow) link.  
Does that make sense?


Re: Incremental results from libpq

From
"Magnus Hagander"
Date:
> > The main reason why libpq does what it does is that this way we do not
> > have to expose in the API the notion of a command that fails part way
> > through.
>
> I'm at LinuxWorld Frankfurt and one of the Trolltech guys came over to
> talk to me about this.  He opined that it would be beneficial for their
> purpose (in certain cases) if the server would first compute the entire
> result set and keep it in the server memory (thus eliminating potential
> errors of the 1/x kind) and then ship it to the client in a way that the
> client would be able to fetch it piecewise.  Then, the client application
> could build the display incrementally while the rest of the result set
> travels over the (slow) link.
> Does that make sense?

I think it does :-)

It would also remove the requirement to keep the whole resultset in
memory on the client. You'd still need the RAM on the server, but no
need to duplicate it on the client. (And no need to store it *twice* if
your client happens to be running on the db server, in which case the
slow network point doesn't apply.)

//Magnus


Re: Incremental results from libpq

From
Tom Lane
Date:
Peter Eisentraut <peter_e@gmx.net> writes:
> On Wednesday, 9 November 2005 22:22, Tom Lane wrote:
>> The main reason why libpq does what it does is that this way we do not
>> have to expose in the API the notion of a command that fails part way
>> through.

> I'm at LinuxWorld Frankfurt and one of the Trolltech guys came over to talk to 
> me about this.  He opined that it would be beneficial for their purpose (in 
> certain cases) if the server would first compute the entire result set and 
> keep it in the server memory (thus eliminating potential errors of the 1/x 
> kind) and then ship it to the client in a way that the client would be able 
> to fetch it piecewise.  Then, the client application could build the display 
> incrementally while the rest of the result set travels over the (slow) link.  
> Does that make sense?

Ick.  That seems pretty horrid compared to the straight
incremental-compute-and-fetch approach.  Yes, it preserves the illusion
that a SELECT is all-or-nothing, but at a very high cost, both in terms
of absolute runtime and in terms of needing a new concept in the
frontend protocol.  It also doesn't solve the problem for people who
need incremental fetch because they have a result set so large they
don't want it materialized on either end of the wire.  Furthermore, ISTM
that any client app that's engaging in incremental fetches really has to
deal with the failure-after-part-of-the-query-is-done problem anyway,
because there's always a risk of failures on the client side or in the
network connection.  So I don't see any real gain in conceptual
simplicity from adding this feature anyway.

Note that if Trolltech really want this behavior, they can have it today
--- it's called CREATE TEMP TABLE AS SELECT.  It doesn't seem attractive
enough to me to justify any further feature than that.
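
A rough sketch of that workaround through libpq (object names invented):

    #include <stdio.h>
    #include <libpq-fe.h>

    static void materialize_then_fetch(PGconn *conn)
    {
        /* Any 1/0-style error surfaces here, before the client has
         * consumed a single row, so SELECT stays all-or-nothing. */
        PGresult *res = PQexec(conn,
            "CREATE TEMP TABLE r AS SELECT 1/x AS q FROM tab");

        if (PQresultStatus(res) != PGRES_COMMAND_OK)
        {
            fprintf(stderr, "%s", PQerrorMessage(conn));
            PQclear(res);
            return;
        }
        PQclear(res);
        /* ... now DECLARE a cursor on r and FETCH FORWARD piecewise,
         * with no mid-stream failure modes left except connection loss */
    }
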
        regards, tom lane


Re: Incremental results from libpq

From
"Goulet, Dick"
Date:
Tom,
Your case for not supporting this is reasonable, at least to me.
Personally I believe you should take one side or the other at the server
level and then allow the app developer to use it as appropriate, so no
argument here.  But there was a change in behavior introduced by Oracle
in 10g that supports what was asked for by Trolltech.  The optimizer was
given the "smarts" to determine whether your query is best served by a
regular cursor or whether a bulk collect in the background would be
better.  The end result is that the application behaves as normal, but
the results get back to it faster.  What appears to be happening is that
the database returns the first row as normal, but then continues
collecting data rows and sequestering them off somewhere, probably the
temp tablespace, until you're ready for them.  This appears to have driven
the final nail into the coffin of the old "ORA-01555 Snapshot too old"
error.  'Course, since PostgreSQL doesn't have undo segments, you don't
have that problem.



Re: Incremental results from libpq

From
Bruce Momjian
Date:
Added to TODO:
        o Allow query results to be automatically batched to the client

          Currently, all query results are transfered to the libpq
          client before libpq makes the results available to the
          application.  This feature would allow the application to make
          use of the first result rows while the rest are transfered, or
          held on the server waiting for them to be requested by libpq.
          One complexity is that a query like SELECT 1/col could error
          out mid-way through the result set.



--
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 359-1001
  +  If your life is a hard drive,     |  13 Roberts Road
  +  Christ can be your backup.        |  Newtown Square, Pennsylvania 19073


Re: Incremental results from libpq

From
"Goulet, Dick"
Date:
Bruce,
If I may, one item that would be of extreme use to our location
would be global temporary tables.  These have existed since Oracle 9.0.
They are defined once and then used by clients as needed.  Each session
is ignorant of the data of any other session, and once you disconnect,
the session's data disappears.  Truly a real temporary table.



Re: Incremental results from libpq

From
Bruce Momjian
Date:
Goulet, Dick wrote:
>  Bruce,
> 
>     If I may, one item that would be of extreme use to our location
> would be global temporary tables.  These have existed since Oracle 9.0.
> They are defined once and then used by clients as needed.  Each session
> is ignorant of the data of any other session, and once you disconnect,
> the session's data disappears.  Truly a real temporary table.

How is it better than what we have now?


--
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 359-1001
  +  If your life is a hard drive,     |  13 Roberts Road
  +  Christ can be your backup.        |  Newtown Square, Pennsylvania 19073


Re: Incremental results from libpq

From
"Guy Rouillier"
Date:
Peter Eisentraut wrote:
> I'm at LinuxWorld Frankfurt and one of the Trolltech guys came over
> to talk to me about this.  He opined that it would be beneficial for
> their purpose (in certain cases) if the server would first compute
> the entire result set and keep it in the server memory (thus
> eliminating potential errors of the 1/x kind) and then ship it to the
> client in a way that the client would be able to fetch it piecewise.
> Then, the client application could build the display incrementally
> while the rest of the result set travels over the (slow) link. Does
> that make sense?

No.  How would you handle the 6-million row result set?  You want the
server to cache that?  Remember, the server authors have no way to
predict client code efficiency.  What if a poorly written client
retrieves just 10 of those rows and decides it doesn't want any more,
but doesn't free up the server connection?  The server will be stuck
holding those 6 million rows in memory for a long time.  And readily
available techniques exist for the client to handle this.  Have one
thread reading rows from the DB, and a second thread drawing the
display.
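
A minimal sketch of that two-thread hand-off, using a bounded queue for
back-pressure (everything here is illustrative; initialization of the
mutex/condvar and setting "done" at end-of-data are omitted):

    #include <pthread.h>
    #include <stdbool.h>

    #define QCAP 256

    typedef struct
    {
        void *rows[QCAP];
        int   head, tail, len;
        bool  done;                     /* reader sets this when finished */
        pthread_mutex_t mu;
        pthread_cond_t  cv;
    } RowQueue;

    static void queue_push(RowQueue *q, void *row)  /* DB reader thread */
    {
        pthread_mutex_lock(&q->mu);
        while (q->len == QCAP)
            pthread_cond_wait(&q->cv, &q->mu);      /* queue full: wait */
        q->rows[q->tail] = row;
        q->tail = (q->tail + 1) % QCAP;
        q->len++;
        pthread_cond_signal(&q->cv);
        pthread_mutex_unlock(&q->mu);
    }

    static void *queue_pop(RowQueue *q)             /* display thread */
    {
        void *row = NULL;

        pthread_mutex_lock(&q->mu);
        while (q->len == 0 && !q->done)
            pthread_cond_wait(&q->cv, &q->mu);      /* queue empty: wait */
        if (q->len > 0)
        {
            row = q->rows[q->head];
            q->head = (q->head + 1) % QCAP;
            q->len--;
        }
        pthread_cond_signal(&q->cv);
        pthread_mutex_unlock(&q->mu);
        return row;                                 /* NULL means finished */
    }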

--
Guy Rouillier



Re: Incremental results from libpq

From
"Goulet, Dick"
Date:
Bruce,
Humm, you learn something new every day.  Thanks, didn't see
that in the documentation.



Re: Incremental results from libpq

From
Alvaro Herrera
Date:
Bruce Momjian wrote:
> Goulet, Dick wrote:
>
> >     If I may, one item that would be of extreme use to our location
> > would be global temporary tables.  These have existed since Oracle 9.0.
> > They are defined once and then used by clients as needed.  Each session
> > is ignorant of the data of any other session, and once you disconnect,
> > the session's data disappears.  Truly a real temporary table.
> 
> How is it better than what we have now?

Global temporary tables are defined only once (not once per session),
and the schema (definition) is known to all sessions.  Only the content
is private to each session.

At least that's what I remember since the last time I read the spec on
it ...

-- 
Alvaro Herrera                                http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support


Re: Incremental results from libpq

From
Tom Lane
Date:
Alvaro Herrera <alvherre@commandprompt.com> writes:
> Bruce Momjian wrote:
>> How is it better than what we have now?

> Global temporary tables are defined only once (not once per session),
> and the schema (definition) is known to all sessions.  Only the content
> is private to each session.

Basically, this trades off ease of initialization (you don't have to
create the table in each session, 'cause it's already there) for
flexibility (all sessions have to use the same definition of the same
temp table name).

Note that it's *not* global vs local temp tables; that distinction, in
the spec, has to do with visibility across module boundaries, an issue
that we do not have because we do not have modules.  In the spec, all
temp tables act this way, and there's actually no way to produce the
effect we have of fully session-local temp tables.  I don't think we
want to buy into the spec definition completely.

Still, I can think of lots of applications where all users of the
database are running basically the same code and so there's no real need
for session-private temp table schemas.  In that scenario it's
definitely simpler if functions don't have to worry about creating a
temp table before they can use it.
        regards, tom lane


Re: Incremental results from libpq

From
"Goulet, Dick"
Date:
Alvaro,
You are quite correct; that is functionally the only
difference.  Personally I don't think it really amounts to "a hill of
beans".  The functionality is the same, even if the syntax is a touch
different.  'Course, I didn't get a chance to tinker with this, but in
Oracle a global temp table can have indexes and constraints.  Is the
same true in PostgreSQL?  I've found it to be a performance improver
when you're loading the temp table with thousands of rows.



Re: Incremental results from libpq

From
"Goulet, Dick"
Date:
Tom,
True in its entirety, and as I posted recently, I don't believe
the differences amount to a hill of beans.  Take a tack & stick with it.
If this is the direction the PostgreSQL development team wants to follow,
then so be it.  At least the temp table concept exists.  And yes, it does
exist in the 8.1 docs, just found it.  Thanks.



Re: Incremental results from libpq

From
Scott Lamb
Date:
On Nov 9, 2005, at 2:09 PM, Scott Lamb wrote:
> Cool. I think I'll get my own interface hashed out in my kludgey  
> way, then look at the broader need if it's a success.

I've done some of this. I've got the start of a new Python<->PostgreSQL  
interface at <http://www.slamb.org/svn/repos/trunk/projects/apypg/>.

Currently, it just "peeks" at connection->result after polling, grabs  
a bunch of results and handles them, then empties the resultset  
kludgily. (If it has a different idea of PGRESULT_DATA_BLOCKSIZE than  
libpq, it'll probably crash.)

Aside from the nastiness of playing with libpq's internal data  
structures, it doesn't handle the case Tom mentioned particularly  
well. It will handle an undetermined fraction of the rows before the  
error, then error out. I think it should handle _all_ the rows it  
gets before the error consistently.

> My first idea, though, is to add a callback interface - "got the  
> RowDescription", "got a DataRow" - and make the storage of tuples  
> in PGresult optional. (Maybe pqAddTuple would just be the  
> default callback.)

Actually, I'm not sure about this anymore. There's a complication -  
if I want to do something fancy while handling a row - a Python  
exception, C++ exception, thread cancellation - I don't think there's  
any good way to do that in a callback structure.
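
The best I can come up with in C is making the callback's return value  
the escape hatch; a sketch (the handler type is hypothetical, continuing  
my earlier idea):

    #include <stdbool.h>

    /* Hypothetical per-query state; nothing like this exists in libpq. */
    struct handler_state
    {
        bool cancelled;   /* e.g. set by another thread or a signal */
        /* ... plus somewhere to stash an error instead of throwing ... */
    };

    static bool on_data_row(void *arg, int nfields,
                            const char *const *values, const int *lengths)
    {
        struct handler_state *st = arg;

        (void) nfields; (void) values; (void) lengths;
        if (st->cancelled)
            return false; /* ask the library to stop; unwind afterwards */
        /* process the row; any failure gets recorded in *st, because an
         * exception must not propagate through libpq's C stack frames */
        return true;
    }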

Another complication is the limitation of one active resultset. libpq  
needs to be extended to support cursors other than the unnamed one  
('') in order to handle other statements as it iterates over an  
incremental result set.

-- 
Scott Lamb <http://www.slamb.org/>




Re: Incremental results from libpq

From
Bruce Momjian
Date:
Goulet, Dick wrote:
> Alvaro,
> 
>     You are quite correct; that is functionally the only
> difference.  Personally I don't think it really amounts to "a hill of
> beans".  The functionality is the same, even if the syntax is a touch
> different.  'Course, I didn't get a chance to tinker with this, but in
> Oracle a global temp table can have indexes and constraints.  Is the
> same true in PostgreSQL?  I've found it to be a performance improver
> when you're loading the temp table with thousands of rows.

Sure, you can create TEMPORARY anything, and because each backend has
its own table, performance is better.

--
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 359-1001
  +  If your life is a hard drive,     |  13 Roberts Road
  +  Christ can be your backup.        |  Newtown Square, Pennsylvania 19073