Thread: PREPARE
It seems that full support for CORBA and the COS Query Service requires us to enable the user to parse, prepare, and execute the query in three separate stages. Are we also planning to support PREPARE? If so, we should co-ordinate the effort, since the full COSQS support will require pulling apart pg_parse_and_plan().

Taral
On Mon, 16 Nov 1998, Taral wrote:

> It seems that full support for CORBA and the COS Query Service requires us
> to enable the user to parse, prepare, and execute the query in three
> separate stages. Are we also planning to support PREPARE? If so, we should
> co-ordinate the effort, since the full COSQS support will require pulling
> apart pg_parse_and_plan().

Implementing PREPARE would benefit JDBC. Currently we're implementing it in the driver, but having this in the backend would benefit JDBC a lot in performance.

--
Peter T Mount peter@retep.org.uk
Main Homepage: http://www.retep.org.uk
PostgreSQL JDBC Faq: http://www.retep.org.uk/postgres
Java PDF Generator: http://www.retep.org.uk/pdf
Taral wrote:
> It seems that full support for CORBA and the COS Query Service requires us
> to enable the user to parse, prepare, and execute the query in three
> separate stages. Are we also planning to support PREPARE? If so, we should
> co-ordinate the effort, since the full COSQS support will require pulling
> apart pg_parse_and_plan().

We should. Currently we do support PREPARE (kind of) in the SPI interface. However, it is not strictly necessary (both ODBC and JDBC currently simulate it on the client side), but it would enable interactive applications to perform much better if we did. The current FE<->BE protocol is a strange mix of CLI and directly usable psql replacement ;)

BTW, what does CORBA prescribe about transactions (if anything)? Is the current transaction model adequate, or do we need nested transactions?

PS. It would probably be beneficial to look also at Microsoft's ADO for ideas; afaik this is the DCOM version of what we are trying to do with CORBA.

------------
Hannu
On Mon, Nov 16, 1998 at 02:19:32PM -0600, Taral wrote:
> separate stages. Are we also planning to support PREPARE? If so, we should
> co-ordinate the effort, since the full COSQS support will require pulling
> apart pg_parse_and_plan().

Hopefully. I'm still holding back PREPARE for ecpg until I can think of a good solution. The best of course would be in the backend. Hmm, how do ODBC and JDBC solve this?

Michael
--
Dr. Michael Meskes, Manager of the Western Branch Office, Datenrevision GmbH
work: Cuxhavener Str. 36, D-21149 Hamburg
home: Th.-Heuss-Str. 61, D-41812 Erkelenz, Michael.Meskes@usa.net
Go SF49ers! Go Rhein Fire! Use Debian GNU/Linux!
> I'm still holding back PREPARE for ecpg until I can think of a
> good solution. The best of course would be in the backend.

So what would it take to do this in the backend? I think the places which would need to be touched fall into areas I either know about or am starting to look at to implement the CASE clause.

We'd need:
- a "named buffer" (or several) to hold the intermediate input
- a way to pass in parameters or substitution arguments
- a way to decide if previous parser/planner/executor results can be reused

What else?

- Tom
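A rough sketch of what such a "named buffer" registry could look like, here as hypothetical Java for illustration only (the real backend would hold parse/plan trees in C, not strings; all names below are made up):

    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical sketch: map statement names to the stored query plus a
    // slot for reusable planner output, covering Tom's three requirements.
    class PreparedEntry {
        final String queryText;  // the intermediate input held by name
        Object cachedPlan;       // null until planned once, then reusable
        PreparedEntry(String queryText) { this.queryText = queryText; }
    }

    class PreparedRegistry {
        private final Map<String, PreparedEntry> entries = new HashMap<>();

        void prepare(String name, String queryText) {
            entries.put(name, new PreparedEntry(queryText));
        }

        // Decide whether previous parser/planner results can be reused.
        boolean canReuse(String name) {
            PreparedEntry e = entries.get(name);
            return e != null && e.cachedPlan != null;
        }

        void forget(String name) {
            entries.remove(name);
        }
    }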
> > I'm still holding back PREPARE for ecpg until I can think of a
> > good solution. The best of course would be in the backend.
>
> So what would it take to do this in the backend? I think the places
> which would need to be touched fall into areas I either know about or am
> starting to look at to implement the CASE clause.
>
> We'd need:
> - a "named buffer" (or several) to hold the intermediate input

portals

> - a way to pass in parameters or substitution arguments

SQL functions?

> - a way to decide if previous parser/planner/executor
>   results can be reused

--
Bruce Momjian                        | http://www.op.net/~candle
maillist@candle.pha.pa.us            | (610) 853-3000
+ If your life is a hard drive,      | 830 Blythe Avenue
+ Christ can be your backup.         | Drexel Hill, Pennsylvania 19026
[ Cross-post to pgsql-interfaces ]

> BTW, what does CORBA prescribe about transactions (if anything) ?
>
> Is the current transaction model adequate or do we need nested
> transactions ?

The Query Service is read-only, so does not have locking or transactions... We will have to implement the Transaction Service...

Current service list for our implementation (in order of importance):

Naming Service (provided by most 2.2 ORBs)
LifeCycle Service (provided by mico) (dependent on NS)
Query Service
Security Service
ConcurrencyControl Service
Transaction Service (dependent on CCS)
Relationship Service (provided by mico)

(Not sure about the ordering of the last few...)

As you can see, this is a non-trivial list of interfaces :)

Taral
> Is the current transaction model adequate or do we need nested
> transactions ?

Err... I didn't answer your question, did I? The COS Transaction Service implements nested transactions.

Taral
> > Is the current transaction model adequate or do we need nested
> > transactions ?
>
> Err... I didn't answer your question, did I? The COS Transaction Service
> implements nested transactions.

Aha... finally found the line I was looking for: "An implementation of the Transaction Service is not required to support nested transactions."

Taral
On Tue, 17 Nov 1998, Michael Meskes wrote:
> On Mon, Nov 16, 1998 at 02:19:32PM -0600, Taral wrote:
> > separate stages. Are we also planning to support PREPARE? If so, we should
> > co-ordinate the effort, since the full COSQS support will require pulling
> > apart pg_parse_and_plan().
>
> Hopefully. I'm still holding back PREPARE for ecpg until I can think of a
> good solution. The best of course would be in the backend. Hmm, how do ODBC
> and JDBC solve this?

Background: JDBC has a class called PreparedStatement. It's created by the prepareStatement() method in the Connection class. The statement passed to it has each required parameter represented by a ?

insert into mytable (field1,field2,field3) values (?,?,?);

Now the current postgresql jdbc implementation stores this string, and has a Vector (Java for a dynamic array) that has each value stored in it as the client application sets them. When the client calls the executeUpdate() or executeQuery() methods, we just replace the ?'s with the values in sequence, and pass the query to the backend as normal.

It's a real botch, but it works.

--
Peter T Mount peter@retep.org.uk
Main Homepage: http://www.retep.org.uk
PostgreSQL JDBC Faq: http://www.retep.org.uk/postgres
Java PDF Generator: http://www.retep.org.uk/pdf
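A minimal sketch of the substitution scheme Peter describes, with hypothetical class and method names (this is not the actual driver code, and it glosses over the fact that a ? inside a quoted SQL literal would be wrongly substituted):

    import java.util.ArrayList;
    import java.util.List;

    // Illustrative sketch of the driver-side simulation: the SQL is stored
    // with '?' markers, values are collected as the client sets them, and
    // the markers are replaced in sequence just before the query is sent.
    class FakePreparedStatement {
        private final String sql;                  // e.g. "insert ... values (?,?,?)"
        private final List<String> values = new ArrayList<>();

        FakePreparedStatement(String sql) { this.sql = sql; }

        // Quote and escape string values; numbers go in as-is.
        void setString(int index, String v) { set(index, "'" + v.replace("'", "''") + "'"); }
        void setInt(int index, int v)       { set(index, Integer.toString(v)); }

        private void set(int index, String literal) {
            while (values.size() < index) values.add(null); // grow to fit
            values.set(index - 1, literal);                 // JDBC indexes are 1-based
        }

        // Build the final query text by replacing each '?' in sequence.
        String buildQuery() {
            StringBuilder out = new StringBuilder();
            int param = 0;
            for (char c : sql.toCharArray()) {
                if (c == '?') out.append(values.get(param++));
                else out.append(c);
            }
            return out.toString();
        }
    }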
On Tue, Nov 17, 1998 at 01:45:19PM +0000, Thomas G. Lockhart wrote:
> So what would it take to do this in the backend? I think the places
> which would need to be touched fall into areas I either know about or am
> starting to look at to implement the CASE clause.
>
> We'd need:
> - a "named buffer" (or several) to hold the intermediate input

I didn't get this one completely. What input do you mean?

> - a way to pass in parameters or substitution arguments

Yes. That means changing DECLARE CURSOR as well.

> - a way to decide if previous parser/planner/executor
>   results can be reused

Yes.

> What else?

Running the planner on the statement as it is, without the variables substituted, so execution of DECLARE gets faster.

Michael
--
Dr. Michael Meskes, Manager of the Western Branch Office, Datenrevision GmbH
work: Cuxhavener Str. 36, D-21149 Hamburg
home: Th.-Heuss-Str. 61, D-41812 Erkelenz, Michael.Meskes@usa.net
Go SF49ers! Go Rhein Fire! Use Debian GNU/Linux!
Michael Meskes wrote:
> On Mon, Nov 16, 1998 at 02:19:32PM -0600, Taral wrote:
> > separate stages. Are we also planning to support PREPARE? If so, we should
> > co-ordinate the effort, since the full COSQS support will require pulling
> > apart pg_parse_and_plan().
>
> Hopefully. I'm still holding back PREPARE for ecpg until I can think of a
> good solution. The best of course would be in the backend. Hmm, how do ODBC
> and JDBC solve this?

Speaking for ODBC, we keep the PREPARE'd statement in a malloc'ed buffer in the driver. The fun part is that we must support a set of API calls which request things like the number of parameters, and result set column info. We get the parameter count by simply counting the parameter markers. To get the column info, we send the statement to the backend, retrieve the column info, and discard any returned rows. Not very elegant nor efficient, but it works ok. This functionality should be handled by the backend.

May I suggest a protocol that will allow this typical interchange:

send PREPARE(statement)
receive stmt_handle

send GET_PARAM_COUNT(stmt_handle)
receive param_count
for i = 1 to param_count
    send DESCRIBE_PARAMETER(stmt_handle, i)
    -- include: type, nullability, scale & precision
    receive parameter description
end for

send GET_COLUMN_COUNT(stmt_handle)
receive column_count
for i = 1 to column_count
    send DESCRIBE_COLUMN(stmt_handle, i)
    -- include: table name, column name, column alias, type, nullability, scale & precision
    receive column description
end for

-- There are other column info attributes worth sending, such as: owner,
-- searchable, signed/unsigned, updateable, case sensitive & autoincrement.
-- I will be quite content if we get the main ones specified above.

for n sets of parameters
    for i = 1 to param_count
        send PUT_DATA(stmt_handle, i, param_data[i])
    end for
    send EXECUTE(stmt_handle)
    receive result set
end for

send FREE(stmt_handle)
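For the marker-counting step mentioned above, a small sketch (Java here, to match the JDBC examples elsewhere in the thread; it assumes single quotes are the only literal delimiter that matters, which is an oversimplification):

    // Sketch: count '?' parameter markers in a SQL statement, skipping any
    // that occur inside single-quoted string literals (a doubled '' escape
    // toggles the flag twice, so the count still comes out right).
    class ParamCounter {
        static int countParameterMarkers(String sql) {
            int count = 0;
            boolean inLiteral = false;
            for (int i = 0; i < sql.length(); i++) {
                char c = sql.charAt(i);
                if (c == '\'') {
                    inLiteral = !inLiteral;
                } else if (c == '?' && !inLiteral) {
                    count++;
                }
            }
            return count;
        }
    }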
> > - a "named buffer" (or several) to hold the intermediate input
>
> I didn't get this one completely. What input do you mean?

Just the original string/query to be prepared...

- Tom
>>>>> "T" == Taral <taral@cyberjunkie.com> writes:

>> > Is the current transaction model adequate or do we need nested
>> > transactions ?
>>
>> Err... I didn't answer your question, did I? The COS Transaction Service
>> implements nested transactions.

T> Aha... finally found the line I was looking for:

T> "An implementation of the Transaction Service is not required to support
T> nested transactions."

To my mind there are _no_ nested transactions in Postgres.

--
Anatoly K. Lasareff
Email: tolik@icomm.ru
Senior programmer
On Wed, Nov 18, 1998 at 03:23:30AM +0000, Thomas G. Lockhart wrote:
> > I didn't get this one completely. What input do you mean?
>
> Just the original string/query to be prepared...

I see. But wouldn't it be more useful to preprocess the query and store the resulting nodes instead? We don't want to parse the statement every time a variable binding comes in.

Michael
--
Dr. Michael Meskes, Manager of the Western Branch Office, Datenrevision GmbH
work: Cuxhavener Str. 36, D-21149 Hamburg
home: Th.-Heuss-Str. 61, D-41812 Erkelenz, Michael.Meskes@usa.net
Go SF49ers! Go Rhein Fire! Use Debian GNU/Linux!
On Tue, Nov 17, 1998 at 06:40:01PM +0000, Peter T Mount wrote:
> it has each required parameter represented by a ?
>
> insert into mytable (field1,field2,field3) values (?,?,?);
>
> Now the current postgresql jdbc implementation stores this string, and has
> a Vector (Java for a dynamic array) that has each value stored in it as
> the client application sets them. When the client calls the
> executeUpdate() or executeQuery() methods, we just replace the ?'s with
> the values in sequence, and pass the query to the backend as normal.

That's exactly what I wanted to use for ecpg. But I guess I'll postpone it just a little longer. :-)

Michael
--
Dr. Michael Meskes, Manager of the Western Branch Office, Datenrevision GmbH
work: Cuxhavener Str. 36, D-21149 Hamburg
home: Th.-Heuss-Str. 61, D-41812 Erkelenz, Michael.Meskes@usa.net
Go SF49ers! Go Rhein Fire! Use Debian GNU/Linux!
Michael Meskes wrote:
> On Wed, Nov 18, 1998 at 03:23:30AM +0000, Thomas G. Lockhart wrote:
> > > I didn't get this one completely. What input do you mean?
> >
> > Just the original string/query to be prepared...
>
> I see. But wouldn't it be more useful to preprocess the query and store the
> resulting nodes instead? We don't want to parse the statement every time a
> variable binding comes in.

Right. A real improvement would only be to have the prepared execution plan in the backend and just giving the parameter values. I can think of the following construct:

PREPARE optimizable-statement;

That one will run parser/rewrite/planner, create a new memory context with a unique identifier and save the querytrees and plans in it. Parameter values are identified by the usual $n notation. The command returns the identifier.

EXECUTE QUERY identifier [value [, ...]];

then gets back the prepared plan and querytree by the id, creates an executor context with the given values in the parameter array and calls ExecutorRun() for them.

The PREPARE needs to analyze the resulting parsetrees to get the datatypes (and maybe atttypmods) of the parameters, so EXECUTE QUERY can convert the values into Datums using the types' input functions. And the EXECUTE has to be handled specially in tcop (it's something between a regular query and a utility statement). But it's not too hard to implement.

Finally a

FORGET QUERY identifier;

(don't remember how the others named it) will remove the prepared plan etc. simply by destroying the memory context and dropping the identifier from the id->mcontext+prepareinfo mapping.

This all restricts the usage of PREPARE to optimizable statements. Is it required to be able to prepare utility statements (like CREATE TABLE or so) too?

Jan

--
#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.                                  #
#======================================== jwieck@debis.com (Jan Wieck) #
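For illustration, here is how a client might drive the proposed construct over plain JDBC. None of this syntax exists in the backend yet; it just follows Jan's sketch, and the assumption that PREPARE hands its identifier back as a one-row result set is mine:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    // Hypothetical walk-through of the proposed backend-side PREPARE.
    public class PrepareDemo {
        public static void main(String[] args) throws SQLException {
            Connection con = DriverManager.getConnection(
                    "jdbc:postgresql:template1", "postgres", "");
            Statement st = con.createStatement();

            // PREPARE runs parser/rewrite/planner once and returns an
            // identifier; $1 is the usual parameter notation.
            ResultSet id = st.executeQuery(
                    "PREPARE SELECT * FROM mytable WHERE field1 = $1;");
            id.next();
            String handle = id.getString(1);  // assumed: id returned as a row

            // EXECUTE QUERY re-runs the saved plan with a new parameter
            // value, skipping parse and plan entirely.
            ResultSet rs = st.executeQuery("EXECUTE QUERY " + handle + " 42;");
            while (rs.next()) {
                System.out.println(rs.getString("field2"));
            }

            // FORGET QUERY destroys the saved plan's memory context.
            st.executeUpdate("FORGET QUERY " + handle + ";");
            con.close();
        }
    }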
> But wouldn't it be more useful to preprocess the query and store the
> resulting nodes instead? We don't want to parse the statement
> every time a variable binding comes in.

Sure. Sorry I wasn't being very specific. Also, whoever implements it gets to do it either way at first :)

btw, I'm buried in trying to get a CASE statement to work, so am not volunteering for this one...

- Tom
On Thu, Nov 19, 1998 at 03:32:54AM +0000, Thomas G. Lockhart wrote:
> Sure. Sorry I wasn't being very specific. Also, whoever implements it
> gets to do it either way at first :)

:-)

> btw, I'm buried in trying to get a CASE statement to work, so am not
> volunteering for this one...

Now I get depressed. :-) Hopefully someone finds time for this. I don't know the internals well enough, and I'll probably be short on time too, as I'm changing jobs yet again.

Michael
--
Dr. Michael Meskes, Manager of the Western Branch Office, Datenrevision GmbH
work: Cuxhavener Str. 36, D-21149 Hamburg
home: Th.-Heuss-Str. 61, D-41812 Erkelenz, Michael.Meskes@usa.net
Go SF49ers! Go Rhein Fire! Use Debian GNU/Linux!