Thread: cursors outside transactions
Folks,

I'm currently working on an implementation of cursors that can function outside the transaction that created them (the SQL spec calls them "holdable cursors"). I can see 2 main ways to implement this:

(1) During the transaction that created the holdable cursor, don't do anything special. When that transaction ends (and we'd normally be cleaning up cursor resources), fetch all the rows from the cursor and store them in a Tuplestore. When subsequent FETCHes for the cursor are received, handle them by retrieving rows from the Tuplestore.

Pros:

- simple to implement
- doesn't acquire locks (etc.) on any database objects queried by the cursor, so later database operations can continue in parallel with the retrieval of rows from the holdable cursor

Cons:

- doesn't allow for updates to the cursor
- doesn't allow for sensitive/asensitive cursors (i.e. the cursor cannot witness changes made to its result set by other clients -- see 4.34 of SQL 2003)
- inefficient if the result set the cursor is fetching is enormous, as it must be stored on disk prior to committing the transaction

(2) Use MVCC to ensure that the snapshot of the database that the transaction had is still valid, even after the transaction itself has committed. This would require:

(a) changing VACUUM so that it's aware the tuples visible to the cursor can't be removed yet

(b) holding locks on database objects, so that future database operations don't cause problems for the cursor (e.g. you can't allow someone to drop a table still in use by a holdable cursor). Another example is the row-level locks used for updated tuples, if updatable cursors are implemented -- they would need to be held for much longer than normal.

(c) probably more changes: the assumption that a transaction's resources can be cleaned up once it commits is a fairly fundamental one, so there are surely additional things that will need to be kept locked while the holdable cursor is still valid (likely, until the client connection is terminated).

Pros:

- efficient for large result sets (just like normal cursors)
- updatable and sensitive cursors would be easier to implement

Cons:

- really complex, difficult to get right
- would hurt concurrent performance, due to long-term locks

I'm currently planning to implement (1), as it is sufficient for the immediate need that I'm facing.

Any comments? Is there another way to implement this that I'm not seeing?

In particular, I'd like to know if the list would object to integrating (1) into the mainline sources (perhaps until someone gets around to doing something similar to (2), which may be never).

Cheers,

Neil

--
Neil Conway <neilc@samurai.com> || PGP Key ID: DB3C29FC
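For concreteness, here is a rough sketch of how a holdable cursor implemented as in (1) might look from the client's side, using the SQL-spec WITH HOLD spelling. The cursor and table names are made up, and the comments describe the proposed Tuplestore behavior, not anything that exists today:

    BEGIN;
    DECLARE big_report CURSOR WITH HOLD FOR
        SELECT * FROM orders ORDER BY order_date;
    FETCH 100 FROM big_report;   -- served normally, inside the transaction
    COMMIT;                      -- under (1), the cursor's rows are read into a Tuplestore here
    FETCH 100 FROM big_report;   -- served from the Tuplestore, after the creating transaction has ended
    CLOSE big_report;            -- releases the Tuplestore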
On Mon, Mar 17, 2003 at 09:48:34PM -0500, Neil Conway wrote:

> (2) Use MVCC to ensure that the snapshot of the database that the
> transaction had is still valid, even after the transaction itself has
> committed.

What about opening a pseudo-transaction that exists only to serve the cursor? That frees you from modifying VACUUM and resource management. The transaction should be committed (aborted?) when the cursor is closed. Maybe you can call the low-level transaction routines directly.

--
Alvaro Herrera (<alvherre[a]dcc.uchile.cl>)
"Postgres is bloatware by design: it was built to house PhD theses." (Joe Hellerstein, SIGMOD annual conference 2002)
On Mon, 2003-03-17 at 22:01, Alvaro Herrera wrote:
> What about opening a pseudo-transaction that exists only to serve the
> cursor?

What exactly do you mean by a pseudo-transaction? Keep in mind we don't have nested transactions (yet?), and that the holdable cursor needs to be accessible both inside and outside its creating transaction.

Cheers,

Neil

--
Neil Conway <neilc@samurai.com> || PGP Key ID: DB3C29FC
On Mon, Mar 17, 2003 at 10:26:07PM -0500, Neil Conway wrote:
> On Mon, 2003-03-17 at 22:01, Alvaro Herrera wrote:
> > What about opening a pseudo-transaction that exists only to serve the
> > cursor?
>
> What exactly do you mean by a pseudo-transaction?

Assign an xid, create the transaction (create a pg_clog entry), open the cursor using that xid, and put that xid into some table so it can be ended when the cursor is closed. That's why I said you'll probably need to access the low-level routines for transactions. OTOH I can hear objections to that idea right now...

> Keep in mind we don't have nested transactions (yet?),

I'm looking at the issues about this. I don't think that facility will help you, since you have to end the inner transactions before you end the outer ones. That is, you can't create a normal subtransaction to hold the cursor and expect it to live longer than the outer one.

--
Alvaro Herrera (<alvherre[a]dcc.uchile.cl>)
"Oh, oh, the Galacian girls will do it for pearls, and the Arrakis ones for water! But if you seek ladies who burn like flames, try a daughter of Caladan!" (Gurney Halleck)
Neil Conway wrote:
>
> Folks,
>
> I'm currently working on an implementation of cursors that can function
> outside the transaction that created them (the SQL spec calls them
> "holdable cursors"). I can see 2 main ways to implement this:

I have never meant (1) by cursors outside transactions.

BTW why are updatable and sensitive cursors easier to implement using (2)?

regards,
Hiroshi Inoue
http://www.geocities.jp/inocchichichi/psqlodbc/
Alvaro Herrera wrote:
>
> On Mon, Mar 17, 2003 at 09:48:34PM -0500, Neil Conway wrote:
>
> > (2) Use MVCC to ensure that the snapshot of the database that the
> > transaction had is still valid, even after the transaction itself has
> > committed.
>
> What about opening a pseudo-transaction that exists only to serve the
> cursor? That frees you from modifying VACUUM and resource management.

The simplest way to achieve it is to invoke another session internally. Even some clients use the trick. However, I'm not very excited about it.

regards,
Hiroshi Inoue
http://www.geocities.jp/inocchichichi/psqlodbc/
On Mon, 2003-03-17 at 22:52, Hiroshi Inoue wrote:
> I have never meant (1) by cursors outside transactions.

I'm sorry, I don't understand.

> BTW why are updatable and sensitive cursors easier
> to implement using (2)?

(Note that I haven't looked into implementing either feature in depth.) My guess is that updateable cursors would be easier with an MVCC-based approach because the executor would still be accessing the data that is being returned. So subsequently updating the tuple would be easier (say, based on its TID), as you could be sure that whatever means you used to ensure the tuple was OK for reading would go most of the way to ensuring that it was OK for writing.

Sensitive cursors would be easier to implement just due to the nature of sensitivity: if you make a separate copy of the tuples in the tuplestore, how do you check to see if they have been concurrently updated?

Note that neither of these points carries much weight with me: if and when someone actually steps forward to implement either feature, we can take it into account. I'm personally planning to do updateable cursors after holdable ones, but only for the non-holdable case (i.e. updateability and holdability will be mutually exclusive).

Cheers,

Neil
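To make the TID idea concrete, here is a purely illustrative SQL sketch of updating a just-fetched row by its TID. The table is hypothetical, the ctid literal stands in for whatever value was actually fetched, and this is not a proposal for updatable-cursor syntax:

    BEGIN;
    DECLARE c CURSOR FOR SELECT ctid, * FROM accounts;
    FETCH 1 FROM c;                      -- client remembers the ctid of the row it got
    UPDATE accounts SET balance = balance + 1
        WHERE ctid = '(0,1)';            -- '(0,1)' stands in for the fetched ctid
    COMMIT;

This only works reliably while the row version the cursor saw is still the current one; a concurrent update gives the row a new ctid, which is part of why the materialized approach in (1) makes updatability awkward.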
I think (1) is fine. When I used Informix, we did lots of huge cursors that we pulled from for reports, and they consumed huge amounts of RAM before we could do a fetch --- and we expected that. It doesn't seem worth adding complexity to avoid that, especially since even if (2) was done, there would be downsides to it.

One question is how sensitive these cursors should be. Actually, looking at the DECLARE manual page, I see:

 <varlistentry>
  <term>INSENSITIVE</term>
  <listitem>
   <para>
    <acronym>SQL92</acronym> keyword indicating that data retrieved
    from the cursor should be unaffected by updates from other
    processes or cursors.  Since cursor operations occur within
    transactions in <productname>PostgreSQL</productname> this is
    always the case.  This keyword has no effect.
   </para>
  </listitem>
 </varlistentry>

which seems inaccurate. Surely we see commits of other transactions during our multi-statement transaction in the default READ COMMITTED isolation level, so why do the docs say insensitive is meaningless for us? Does sensitivity only apply outside the transaction somehow?

So, my question is how do cursors behave now? Do they see commits by other transactions while in a multi-statement transaction? (1) is predictable in terms of sensitivity, or at least frozen at commit.
--
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 359-1001
  +  If your life is a hard drive,     |  13 Roberts Road
  +  Christ can be your backup.        |  Newtown Square, Pennsylvania 19073
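One way to answer Bruce's question empirically is a quick two-session test. Table, column, and values are hypothetical; this shows how to check the behavior, not what the result is:

    -- session 1
    BEGIN;
    DECLARE c CURSOR FOR SELECT id, val FROM t ORDER BY id;
    FETCH 1 FROM c;

    -- session 2 (autocommit): modify a row the cursor has not reached yet
    UPDATE t SET val = val + 1 WHERE id = 2;

    -- session 1: does the next row reflect session 2's committed update?
    FETCH 1 FROM c;
    COMMIT;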
Neil Conway wrote:
>
> On Mon, 2003-03-17 at 22:52, Hiroshi Inoue wrote:
> > I have never meant (1) by cursors outside transactions.
>
> I'm sorry, I don't understand.

That is, I strongly object to your proposal. If (1) were OK, I would have already implemented it.

> > BTW why are updatable and sensitive cursors easier
> > to implement using (2)?
>
> (Note that I haven't looked into implementing either feature in depth.)
> My guess is that updateable cursors would be easier with an MVCC-based
> approach because the executor would still be accessing the data that is
> being returned. So subsequently updating the tuple would be easier (say,
> based on its TID),

What do you mean by MVCC? This seems to have little to do with MVCC.

regards,
Hiroshi Inoue
http://www.geocities.jp/inocchichichi/psqlodbc/
Hiroshi Inoue wrote:
> Neil Conway wrote:
> >
> > On Mon, 2003-03-17 at 22:52, Hiroshi Inoue wrote:
> > > I have never meant (1) by cursors outside transactions.
> >
> > I'm sorry, I don't understand.
>
> That is, I strongly object to your proposal.
> If (1) were OK, I would have already implemented it.

Why don't you like (1)? It seems fine to me, and I don't see how we are magically going to do any better in the future.

--
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 359-1001
  +  If your life is a hard drive,     |  13 Roberts Road
  +  Christ can be your backup.        |  Newtown Square, Pennsylvania 19073
Bruce Momjian <pgman@candle.pha.pa.us> writes:
> Why don't you like (1)? It seems fine to me, and I don't see how we are
> magically going to do any better in the future.

The restrictions of (1) seem pretty obvious to me ... but I don't see any prospect of doing better in the near future, either. Cross-transaction cursors are a *hard* problem for us.

The question here is do we want to offer a half-baked solution, recognizing that it's some improvement over no solution at all? Or do we feel it doesn't meet our standards?

I could be talked into seeing it either way ...

			regards, tom lane
Tom Lane wrote:
> Bruce Momjian <pgman@candle.pha.pa.us> writes:
> > Why don't you like (1)? It seems fine to me, and I don't see how we are
> > magically going to do any better in the future.
>
> The restrictions of (1) seem pretty obvious to me ... but I don't
> see any prospect of doing better in the near future, either.
> Cross-transaction cursors are a *hard* problem for us.

We could do (2) by adding a new Proc field to show the oldest open cursor transaction. Each backend could keep track of all its open cursors, and update that field to show only the oldest one. VACUUM could then use that to control vacuum of tuples.

Of course, preventing vacuum from removing those tuples isn't great, and we would have to keep locks on those objects. Looking at those negatives, materialization doesn't look that bad. It doesn't seem worth messing up our nice commit semantics just to prevent materialization.

> The question here is do we want to offer a half-baked solution,
> recognizing that it's some improvement over no solution at all?
> Or do we feel it doesn't meet our standards?

My question is how would you do this if you need this functionality and you don't have WITH HOLD cursors? My guess is you would dump the cursor into a temp table with a serial column (materialize it), and do queries to pull rows from the temp table based on that serial column. So it seems we are implementing WITH HOLD cursors internally in the same way it would be done by client code.

--
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 359-1001
  +  If your life is a hard drive,     |  13 Roberts Road
  +  Christ can be your backup.        |  Newtown Square, Pennsylvania 19073
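The client-side workaround Bruce describes might look roughly like this (table and column names are hypothetical):

    -- inside the original transaction: materialize the result set,
    -- numbering the rows with a serial column so they can be fetched in slices
    BEGIN;
    CREATE TEMP TABLE report_hold (rownum serial, customer text, total numeric);
    INSERT INTO report_hold (customer, total)
        SELECT customer, total FROM orders ORDER BY order_date;
    COMMIT;

    -- later, in other transactions: pull rows by row number, cursor-style
    SELECT customer, total FROM report_hold
        WHERE rownum BETWEEN 101 AND 200
        ORDER BY rownum;

As Bruce says, implementing WITH HOLD with a Tuplestore does essentially the same thing on the server side, just without the temp table and the extra round trips.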
Neil Conway writes:

> I'm currently planning to implement (1), as it is sufficient for the
> immediate need that I'm facing.

What need are you facing, and why is it not sufficient to explicitly store the query results in a temporary table?

--
Peter Eisentraut   peter_e@gmx.net
Bruce Momjian wrote:
>
> Tom Lane wrote:
> > Bruce Momjian <pgman@candle.pha.pa.us> writes:
> >
> > The question here is do we want to offer a half-baked solution,
> > recognizing that it's some improvement over no solution at all?
> > Or do we feel it doesn't meet our standards?
>
> My question is how would you do this if you need this
> functionality and you don't have WITH HOLD cursors?

ODBC (and maybe JDBC as well) has long had cross-transaction result sets (rather than cursors), simply by holding all the results for a query on the client side.

Why are cursors outside transactions so eagerly awaited? Because it's very hard (almost impossible) for clients to provide functionality to edit (display/scroll/update etc.) large result sets effectively.

I don't object to a half-baked solution if there's a prospect of a real solution. However, I've never seen one, and unfortunately I have little time to investigate it.

regards,
Hiroshi Inoue
http://www.geocities.jp/inocchichichi/psqlodbc/
On Tue, 2003-03-18 at 19:00, Hiroshi Inoue wrote:
> I don't object to a half-baked solution if there's a prospect of a
> real solution. However, I've never seen one, and unfortunately I have
> little time to investigate it.

On the contrary, I wouldn't be proposing a "half-baked" solution if I thought there was a means to implement holdable cursors within a reasonable amount of time via another method.

Cheers,

Neil
On Tue, 2003-03-18 at 19:00, Hiroshi Inoue wrote:
> Bruce Momjian wrote:
> >
> > Tom Lane wrote:
> > > Bruce Momjian <pgman@candle.pha.pa.us> writes:
> > >
> > > The question here is do we want to offer a half-baked solution,
> > > recognizing that it's some improvement over no solution at all?
> > > Or do we feel it doesn't meet our standards?
> >
> > My question is how would you do this if you need this
> > functionality and you don't have WITH HOLD cursors?
>
> ODBC (and maybe JDBC as well) has long had cross-transaction result
> sets (rather than cursors), simply by holding all the results for a
> query on the client side.

JDBC is running into problems with this. Large queries cause out of memory exceptions.

> Why are cursors outside transactions so eagerly awaited?
> Because it's very hard (almost impossible) for clients to provide
> functionality to edit (display/scroll/update etc.) large result sets
> effectively.
>
> I don't object to a half-baked solution if there's a prospect of a
> real solution. However, I've never seen one, and unfortunately I have
> little time to investigate it.
>
> regards,
> Hiroshi Inoue
> http://www.geocities.jp/inocchichichi/psqlodbc/

--
Dave Cramer <Dave@micro-automation.net>
Dave Cramer <Dave@micro-automation.net> writes:
> On Tue, 2003-03-18 at 19:00, Hiroshi Inoue wrote:
>> ODBC (and maybe JDBC as well) has long had cross-transaction result
>> sets (rather than cursors), simply by holding all the results for a
>> query on the client side.

> JDBC is running into problems with this. Large queries cause out of
> memory exceptions.

Cursors implemented as Neil suggests would cause out-of-disk exceptions. The limit is presumably further away than out-of-memory, but not any the less real. I'm concerned about this because, in my mind, one of the principal uses of cursors is to deal with too-huge-to-materialize result sets.

Still, given that we have no prospect of a "real" solution any time soon, a limited solution might be a reasonable thing to offer for now.

			regards, tom lane
Tom Lane wrote:
> Dave Cramer <Dave@micro-automation.net> writes:
> > On Tue, 2003-03-18 at 19:00, Hiroshi Inoue wrote:
> >> ODBC (and maybe JDBC as well) has long had cross-transaction result
> >> sets (rather than cursors), simply by holding all the results for a
> >> query on the client side.
>
> > JDBC is running into problems with this. Large queries cause out of
> > memory exceptions.
>
> Cursors implemented as Neil suggests would cause out-of-disk exceptions.
> The limit is presumably further away than out-of-memory, but not any the
> less real. I'm concerned about this because, in my mind, one of the
> principal uses of cursors is to deal with too-huge-to-materialize result
> sets.

I don't see how you can class out of memory in the same likelihood as out of disk --- sure they are both real possible failures, but clearly the latter is more rare, and giving folks backing store for large result sets is a big win in my book.

--
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 359-1001
  +  If your life is a hard drive,     |  13 Roberts Road
  +  Christ can be your backup.        |  Newtown Square, Pennsylvania 19073
On Tue, Mar 18, 2003 at 11:36:22PM -0500, Bruce Momjian wrote:
> Tom Lane wrote:
> > Dave Cramer <Dave@micro-automation.net> writes:
> > > On Tue, 2003-03-18 at 19:00, Hiroshi Inoue wrote:
> > >> ODBC (and maybe JDBC as well) has long had cross-transaction result
> > >> sets (rather than cursors), simply by holding all the results for a
> > >> query on the client side.
> >
> > > JDBC is running into problems with this. Large queries cause out of
> > > memory exceptions.
> >
> > Cursors implemented as Neil suggests would cause out-of-disk exceptions.
> > The limit is presumably further away than out-of-memory, but not any the
> > less real. I'm concerned about this because, in my mind, one of the
> > principal uses of cursors is to deal with too-huge-to-materialize result
> > sets.
>
> I don't see how you can class out of memory in the same likelihood as
> out of disk --- sure they are both real possible failures, but clearly
> the latter is more rare, and giving folks backing store for large result
> sets is a big win in my book.

Note that ODBC etc. holding all the results can cause a memory outage on the client side, while Neil's proposal can cause a disk outage on the server side. I remember reading somewhere that running out of disk on the server side can be dangerous (to pg_xlog or whatever). Maybe the mechanism can be made smart and try to estimate the space it's going to use?

--
Alvaro Herrera (<alvherre[a]dcc.uchile.cl>)
"When I look at someone, what attracts me more is how they change than who they are" (J. Binoche)
Bruce Momjian wrote:
>
> Tom Lane wrote:
> > Dave Cramer <Dave@micro-automation.net> writes:
> > > On Tue, 2003-03-18 at 19:00, Hiroshi Inoue wrote:
> > >> ODBC (and maybe JDBC as well) has long had cross-transaction result
> > >> sets (rather than cursors), simply by holding all the results for a
> > >> query on the client side.
> >
> > > JDBC is running into problems with this. Large queries cause out of
> > > memory exceptions.
> >
> > Cursors implemented as Neil suggests would cause out-of-disk
> > exceptions. The limit is presumably further away than
> > out-of-memory, but not any the less real. I'm concerned
> > about this because, in my mind, one of the principal uses of
> > cursors is to deal with too-huge-to-materialize result sets.
>
> I don't see how you can class out of memory in the same likelihood as
> out of disk --- sure they are both real possible failures, but clearly
> the latter is more rare, and giving folks backing store for large result
> sets is a big win in my book.

Other than the out of disk/memory problem, there's another problem. What I expect of cursors is constant response time while handling them.

regards,
Hiroshi Inoue
http://www.geocities.jp/inocchichichi/psqlodbc/
On Wednesday 19 March 2003 04:33 am, you wrote:
> Dave Cramer <Dave@micro-automation.net> writes:
> > On Tue, 2003-03-18 at 19:00, Hiroshi Inoue wrote:
> >> ODBC (and maybe JDBC as well) has long had cross-transaction result
> >> sets (rather than cursors), simply by holding all the results for a
> >> query on the client side.
>
> > JDBC is running into problems with this. Large queries cause out of
> > memory exceptions.
>
> Cursors implemented as Neil suggests would cause out-of-disk exceptions.
> The limit is presumably further away than out-of-memory, but not any the
> less real. I'm concerned about this because, in my mind, one of the
> principal uses of cursors is to deal with too-huge-to-materialize result
> sets.
>
> Still, given that we have no prospect of a "real" solution any time
> soon, a limited solution might be a reasonable thing to offer for now.

We have more disk space than main memory. A similar argument would apply to swap in operating systems, yet all operating systems have swap, and large result sets probably use it. "Out of disk" is a small problem -- we can easily add more disk. Cursors outside a transaction are a great feature -- most important for Java clients, but useful for other clients too.

regards

Haris Peco
Hiroshi Inoue wrote:
> > I don't see how you can class out of memory in the same likelihood as
> > out of disk --- sure they are both real possible failures, but clearly
> > the latter is more rare, and giving folks backing store for large result
> > sets is a big win in my book.
>
> Other than the out of disk/memory problem, there's
> another problem. What I expect of cursors is
> constant response time while handling them.

Cursors outside transactions either need materialization or long-lived locks --- I don't see any other possibilities, and most feel materialization is the best. Do you see another option?

--
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 359-1001
  +  If your life is a hard drive,     |  13 Roberts Road
  +  Christ can be your backup.        |  Newtown Square, Pennsylvania 19073
Bruce Momjian wrote:
>
> Hiroshi Inoue wrote:
> > > I don't see how you can class out of memory in the same likelihood as
> > > out of disk --- sure they are both real possible failures, but clearly
> > > the latter is more rare, and giving folks backing store for large result
> > > sets is a big win in my book.
> >
> > Other than the out of disk/memory problem, there's
> > another problem. What I expect of cursors is
> > constant response time while handling them.
>
> Cursors outside transactions either need materialization or long-lived
> locks

Only AccessShare table locks are needed. What is wrong with that?

regards,
Hiroshi Inoue
http://www.geocities.jp/inocchichichi/psqlodbc/
Hiroshi Inoue wrote:
> Bruce Momjian wrote:
> >
> > Hiroshi Inoue wrote:
> > > Other than the out of disk/memory problem, there's
> > > another problem. What I expect of cursors is
> > > constant response time while handling them.
> >
> > Cursors outside transactions either need materialization or long-lived
> > locks
>
> Only AccessShare table locks are needed.
> What is wrong with that?

But that is going to block VACUUM, right? Aren't we better off materializing and letting vacuum run? I am not sure --- I am just asking.

--
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 359-1001
  +  If your life is a hard drive,     |  13 Roberts Road
  +  Christ can be your backup.        |  Newtown Square, Pennsylvania 19073
Bruce Momjian <pgman@candle.pha.pa.us> writes:
> Hiroshi Inoue wrote:
>> Only AccessShare table locks are needed.
>> What is wrong with that?

> But that is going to block VACUUM, right?

No. It's a matter of bookkeeping more than anything else. Right now, transaction end releases all the locks a backend holds. You'd need to figure out which locks are associated with cross-transaction cursors and keep those.

This overlaps to some extent with bookkeeping that we'd need to add for nested transactions --- but we haven't got a plan for that, either.

			regards, tom lane
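For anyone who wants to see Tom's point in action, a rough two-session check might look like this (table name hypothetical, with an ordinary in-transaction cursor standing in for the not-yet-existing lock-keeping holdable cursor):

    -- session 1: an open cursor holds an AccessShare lock on t
    BEGIN;
    DECLARE c CURSOR FOR SELECT * FROM t;
    FETCH 10 FROM c;

    -- session 2: a plain (lazy) VACUUM should not be blocked by that lock,
    -- since it does not take a conflicting lock; VACUUM FULL would be blocked
    VACUUM t;

    -- session 1, later
    CLOSE c;
    COMMIT;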