Thread: Cache query (PREPARE/EXECUTE)
Hi,

As I said, I am trying to implement a PREPARE / EXECUTE command for a
user-controllable query cache (TODO item: "Cache most recent query
plan(s)"). I have a first usable version now. (I know this is not
interesting for the current feature-freeze state, but I believe it is
interesting for the next release and for the major developers.) See:

test=# prepare sel as select * from tab where id = $1 and data like $2 using int, text;
PREPARE
test=# execute sel using 1, '%a';
 id | data
----+------
  1 | aaaa
(1 row)

test=# prepare ins as insert into tab (data) values($1) using text;
PREPARE
test=# execute ins using 'cccc';
INSERT 18974 1

The queryTree and planTree are saved in a hash table in the
TopMemoryContext (is that a good place for this cache?). There is no
schema-change detection yet (IMHO it is the user's problem if he changes
the DB schema and then uses an old cached plan). In the future I will try
to add some schema-change detection (for ALTER/DROP TABLE, rules, etc.).

I am not sure about the syntax; currently it is:

PREPARE name AS optimizable-statement [ USING type, ... ]
EXECUTE name [ USING value, ... ]

Comments? Suggestions? (SQL92?)

(Note: I tested the speed, and a cached query plan (SELECT) executed via
EXECUTE is very much faster (70%!).)

Karel

----------------------------------------------------------------------
Karel Zak <zakkr@zf.jcu.cz>              http://home.zf.jcu.cz/~zakkr/

Docs:        http://docs.linux.cz                  (big docs archive)
Kim Project: http://home.zf.jcu.cz/~zakkr/kim/      (process manager)
FTP:         ftp://ftp2.zf.jcu.cz/users/zakkr/       (C/ncurses/PgSQL)
-----------------------------------------------------------------------
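The idea, as a standalone C sketch (every identifier below is invented
for illustration and is not from the actual patch; the real code keys a
backend hash table by the plan name and stores copyObject() copies of
the query and plan trees):

/*
 * Standalone sketch, NOT the actual patch: a tiny name-keyed plan cache.
 * Plain strings stand in for the trees so the example compiles and runs
 * on its own.
 */
#include <stdio.h>
#include <string.h>

#define MAX_CACHED_PLANS 64

typedef struct CachedPlan
{
    char        name[64];       /* hash key in the real implementation */
    const char *queryTree;      /* stands in for the copied query tree */
    const char *planTree;       /* stands in for the copied plan tree */
} CachedPlan;

static CachedPlan cache[MAX_CACHED_PLANS];
static int ncached = 0;

/* PREPARE name AS ...: store the trees under the given name */
static int
plan_cache_store(const char *name, const char *queryTree, const char *planTree)
{
    for (int i = 0; i < ncached; i++)
        if (strcmp(cache[i].name, name) == 0)
            return -1;          /* "Query plan with name ... already exists" */
    if (ncached >= MAX_CACHED_PLANS)
        return -1;
    snprintf(cache[ncached].name, sizeof(cache[ncached].name), "%s", name);
    cache[ncached].queryTree = queryTree;
    cache[ncached].planTree = planTree;
    ncached++;
    return 0;
}

/* EXECUTE name: look the plan up; the executor would then run planTree */
static CachedPlan *
plan_cache_lookup(const char *name)
{
    for (int i = 0; i < ncached; i++)
        if (strcmp(cache[i].name, name) == 0)
            return &cache[i];
    return NULL;
}

int
main(void)
{
    plan_cache_store("sel", "<queryTree of sel>", "<planTree of sel>");
    CachedPlan *p = plan_cache_lookup("sel");

    printf("found plan '%s'\n", p ? p->name : "(none)");
    return 0;
}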
On Tue, 22 Feb 2000, Karel Zak - Zakkr wrote:

> The queryTree and planTree are saved in a hash table in the
> TopMemoryContext (is that a good place for this cache?). There is no
> schema-change detection yet (IMHO it is the user's problem if he changes
> the DB schema and then uses an old cached plan). In the future I will try

Just curious, but does a new 'PREPARE name AS ...' with the same name just
override the previously saved plan?

Actually, can someone who knows the internals of DBI comment on this? If
I have a CGI that runs the same SELECT each and every time, this would
come in handy ... but how does DBI do its prepare? Would it set a new
name for each invocation, so you would have several 'cached plans' for
the exact same SELECT?

Marc G. Fournier                   ICQ#7615664               IRC Nick: Scrappy
Systems Administrator @ hub.org
primary: scrappy@hub.org           secondary: scrappy@{freebsd|postgresql}.org
On Tue, 22 Feb 2000, The Hermit Hacker wrote:

> On Tue, 22 Feb 2000, Karel Zak - Zakkr wrote:
>
> > The queryTree and planTree are saved in a hash table in the
> > TopMemoryContext (is that a good place for this cache?). There is no
> > schema-change detection yet (IMHO it is the user's problem if he changes
> > the DB schema and then uses an old cached plan). In the future I will try
>
> Just curious, but does a new 'PREPARE name AS ...' with the same name just
> override the previously saved plan?

The current code returns:

test=# prepare one as select * from aaa;
PREPARE
test=# prepare one as select * from aaa;
ERROR:  Query plan with name 'one' already exists.
test=#

I prefer a DROP command instead of overriding, but I am open to other
suggestions...

> Actually, can someone who knows the internals of DBI comment on this? If
> I have a CGI that runs the same SELECT each and every time, this would
> come in handy ... but how does DBI do its prepare? Would it set a new
> name for each invocation, so you would have several 'cached plans' for
> the exact same SELECT?

I am not sure I understand you correctly, but:

1/ This cache is in memory only (it does not persist across
   re-connection) and is not saved in any table, etc.
2/ You can have several plans (equal or different) in the cache; the
   number of plans is not limited.
3/ You cannot have two queries with the same name in the cache (the name
   is the hash key).
4/ After EXECUTE, the plan is still in the cache, so you can run it again.

Potential usage: for example, you start a connection to PG and you know
that you will run the same statement 20 times (say an INSERT). You can
PREPARE a plan for this query once and then run only the fast EXECUTE
(instead of 20 full inserts).

Karel
Karel Zak - Zakkr <zakkr@zf.jcu.cz> writes:
> As I said, I am trying to implement a PREPARE / EXECUTE command for a
> user-controllable query cache (TODO item: "Cache most recent query
> plan(s)").

Looks cool.

> The queryTree and planTree are saved in a hash table in the
> TopMemoryContext (is that a good place for this cache?).

Probably not.  I'd suggest making a separate memory context for
this purpose --- they're cheap, and that gives you more control.
Look at the creation and use of CacheMemoryContext for an example.

> I am not sure about the syntax; currently it is:

> PREPARE name AS optimizable-statement [ USING type, ... ]
> EXECUTE name [ USING value, ... ]

> Comments? Suggestions? (SQL92?)

This seems to be quite at variance with SQL92, unfortunately, so it
might not be a good idea to use the same keywords they do...

			regards, tom lane
On Tue, 22 Feb 2000, Tom Lane wrote:

> Karel Zak - Zakkr <zakkr@zf.jcu.cz> writes:
> > As I said, I am trying to implement a PREPARE / EXECUTE command for a
> > user-controllable query cache (TODO item: "Cache most recent query
> > plan(s)").
>
> Looks cool.

Thanks.

> > The queryTree and planTree are saved in a hash table in the
> > TopMemoryContext (is that a good place for this cache?).
>
> Probably not.  I'd suggest making a separate memory context for
> this purpose --- they're cheap, and that gives you more control.
> Look at the creation and use of CacheMemoryContext for an example.

Yes, I agree (TopMemoryContext was just simple for the first hacking).
But I am not sure how to create a new (transaction-persistent?)
MemoryContext. Does it need a new portal? (Sorry, I have not thoroughly
explored PG's memory management.)

> > I am not sure about the syntax; currently it is:
>
> > PREPARE name AS optimizable-statement [ USING type, ... ]
> > EXECUTE name [ USING value, ... ]
>
> > Comments? Suggestions? (SQL92?)
>
> This seems to be quite at variance with SQL92, unfortunately, so it
> might not be a good idea to use the same keywords they do...

Hmm, I was inspired by Jan's TODO item. What about using:

CREATE PLAN
DROP PLAN
EXECUTE PLAN

IMHO these keywords are better.

					Karel
At 06:30 PM 2/22/00 +0100, Karel Zak - Zakkr wrote:

> Yes, I agree (TopMemoryContext was just simple for the first hacking).
> But I am not sure how to create a new (transaction-persistent?)
> MemoryContext. Does it need a new portal? (Sorry, I have not thoroughly
> explored PG's memory management.)

Jan is caching the plans needed for referential integrity checking
and referential actions - look at ri_triggers.c in
src/backend/utils/adt.  ri_InitHashTables initializes the RI cache.

(I *assume* Jan, with his great experience, is doing it right; I'm in
no position to judge!)

- Don Baccus, Portland OR <dhogaza@pacifier.com>
  Nature photos, on-line guides, Pacific Northwest
  Rare Bird Alert Service and other goodies at
  http://donb.photo.net.
On Tue, 22 Feb 2000, Don Baccus wrote:

> At 06:30 PM 2/22/00 +0100, Karel Zak - Zakkr wrote:
>
> > Yes, I agree (TopMemoryContext was just simple for the first hacking).
> > But I am not sure how to create a new (transaction-persistent?)
> > MemoryContext. Does it need a new portal? (Sorry, I have not thoroughly
> > explored PG's memory management.)
>
> Jan is caching the plans needed for referential integrity checking
> and referential actions - look at ri_triggers.c in src/backend/utils/adt.
> ri_InitHashTables initializes the RI cache.

My cache-table routines for PREPARE = Jan's RI routines :-) (I copied and
slightly modified Jan's code --- *thanks*, Jan, for the good
inspiration.) But if I look at it closely, Jan uses the SPI context for
this, not any specific context.

					Karel
> -----Original Message-----
> From: owner-pgsql-hackers@postgreSQL.org
> [mailto:owner-pgsql-hackers@postgreSQL.org] On Behalf Of Tom Lane
>
> Karel Zak - Zakkr <zakkr@zf.jcu.cz> writes:
> > As I said, I am trying to implement a PREPARE / EXECUTE command for a
> > user-controllable query cache (TODO item: "Cache most recent query
> > plan(s)").
>
> Looks cool.
>
> > The queryTree and planTree are saved in a hash table in the
> > TopMemoryContext (is that a good place for this cache?).
>
> Probably not.  I'd suggest making a separate memory context for
> this purpose --- they're cheap, and that gives you more control.
> Look at the creation and use of CacheMemoryContext for an example.
>

Hmm, shouldn't a per-plan memory context be created?

Although the current SPI stuff saves prepared plans to TopMemoryContext,
we can never remove them. It seems that SPI should also be changed in how
it saves plans.

Note that freeObject() is not usable at all. We would be able to free
PREPAREd resources by destroying the corresponding memory context. If I
understand Jan's original idea correctly, he also suggested the same
approach.

Regards.

Hiroshi Inoue
Inoue@tpf.co.jp
> Although the current SPI stuff saves prepared plans to TopMemoryContext,
> we can never remove them. It seems that SPI should also be changed in
> how it saves plans.

Yes, I know about the SPI plan saving... that is where my inspiration for
using TopMemoryContext came from. But the current PG code caches query
plans/trees in quite a few places (PREPARE, SPI and Jan's RI all save
plans to TopMemoryContext too), so I agree with Tom that it is not a bad
idea to save all plans into _one_ specific MemoryContext.

My idea is to write some basic routines for a query cache (hash table,
ExecuteCachedQuery(), etc.) and use these routines for several operations
(SPI, foreign keys, PREPARE, ...). Comments?

> Note that freeObject() is not usable at all. We would be able to free
> PREPAREd resources by destroying the corresponding memory context.

If I understand correctly, we cannot destroy an individual plan? We must
destroy the _full_ memory context? If so (please, no), we cannot make a
DROP PLAN command, or we must create a specific memory context for each
plan (and drop this specific context --- Jan's original idea).

If I call SPI_saveplan(), is the plan saved forever in TopMemoryContext?
(Hmm, then SPI is a memory eater.)

					Karel
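A rough sketch of what such a shared query-cache interface could look
like (every name below is hypothetical, invented for this sketch and not
from any actual patch; the tree pointers are left as void * because the
real node types are backend-internal):

/*
 * Hypothetical interface sketch only: one set of query-cache routines
 * that PREPARE/EXECUTE, SPI_saveplan() and the RI triggers could share.
 */
typedef struct QueryCacheEntry QueryCacheEntry;     /* opaque cache entry */

/* store copies of the query and plan trees under a key (the plan name) */
extern QueryCacheEntry *QueryCacheStore(const char *key,
                                        void *queryTree, void *planTree);

/* look up a previously stored plan by key; NULL if it is not cached */
extern QueryCacheEntry *QueryCacheLookup(const char *key);

/* run a cached plan with the given parameter values (EXECUTE ... USING) */
extern int ExecuteCachedQuery(QueryCacheEntry *entry,
                              void **values, int nvalues);

/* remove one entry (DROP PLAN) and release whatever memory it owns */
extern void QueryCacheRemove(const char *key);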
> -----Original Message-----
> From: Karel Zak - Zakkr [mailto:zakkr@zf.jcu.cz]
>
> > Although the current SPI stuff saves prepared plans to TopMemoryContext,
> > we can never remove them. It seems that SPI should also be changed in
> > how it saves plans.
>
> Yes, I know about the SPI plan saving... that is where my inspiration for
> using TopMemoryContext came from. But the current PG code caches query
> plans/trees in quite a few places (PREPARE, SPI and Jan's RI all save
> plans to TopMemoryContext too), so I agree with Tom that it is not a bad
> idea to save all plans into _one_ specific MemoryContext.
>
> My idea is to write some basic routines for a query cache (hash table,
> ExecuteCachedQuery(), etc.) and use these routines for several operations
> (SPI, foreign keys, PREPARE, ...). Comments?
>
> > Note that freeObject() is not usable at all. We would be able to free
> > PREPAREd resources by destroying the corresponding memory context.
>
> If I understand correctly, we cannot destroy an individual plan? We must

I think so. The problem is that a Node struct cannot be freed safely, due
to the lack of a reference count in its definition. As far as I can see,
plans can be destroyed only when the memory context in which they are
placed is destroyed.

> destroy the _full_ memory context? If so (please, no), we cannot make a
> DROP PLAN command, or we must create a specific memory context for each
> plan (and drop this specific context --- Jan's original idea).

You can DROP a PLAN by removing its hash entry, but of course that still
leaves a memory leak.

> If I call SPI_saveplan(), is the plan saved forever in TopMemoryContext?
> (Hmm, then SPI is a memory eater.)

Probably.

Regards.

Hiroshi Inoue
Inoue@tpf.co.jp

"Hiroshi Inoue" <Inoue@tpf.co.jp> writes:
> I think so. The problem is that a Node struct cannot be freed safely, due
> to the lack of a reference count in its definition. As far as I can see,
> plans can be destroyed only when the memory context in which they are
> placed is destroyed.

This is overly conservative.  It should be safe to destroy a plan tree
via freeObject() if it was created via copyObject() --- and that is
certainly how the plan would get into a permanent memory context.

Currently, rule definitions are leaked in CacheContext at relcache
flushes.  I plan to start freeing them via freeObject at the beginning
of the 7.1 development cycle --- I didn't want to risk it during the
runup to 7.0, but I believe it will work fine.

			regards, tom lane
On Wed, 23 Feb 2000, Tom Lane wrote:

> "Hiroshi Inoue" <Inoue@tpf.co.jp> writes:
> > I think so. The problem is that a Node struct cannot be freed safely, due
> > to the lack of a reference count in its definition. As far as I can see,
> > plans can be destroyed only when the memory context in which they are
> > placed is destroyed.
>
> This is overly conservative.  It should be safe to destroy a plan tree
> via freeObject() if it was created via copyObject() --- and that is
> certainly how the plan would get into a permanent memory context.

Yes, both SPI and my PREPARE use copyObject() when saving to
TopMemoryContext. Well, I believe you, Tom, that freeObject() is correct,
and I will start implementing PlanCacheMemoryContext routines for PREPARE
(and for SPI's saveplan?).

					Karel Z.
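A standalone illustration of why the copyObject()/freeObject() pairing is
safe, and of the shared-reference hazard Hiroshi mentions (Node, copyNode
and freeNode are toy names invented here, not the backend's functions):

/*
 * Standalone toy, not backend code: a deep copy produces a tree with no
 * shared sub-nodes, so recursively freeing the copy can never double-free
 * anything, while recursively freeing an arbitrary tree with shared
 * references could.
 */
#include <stdlib.h>

typedef struct Node
{
    int          tag;
    struct Node *left;
    struct Node *right;
} Node;

/* deep copy: every sub-node of the result is a freshly allocated copy */
static Node *
copyNode(const Node *src)
{
    if (src == NULL)
        return NULL;
    Node *dst = malloc(sizeof(Node));

    dst->tag = src->tag;
    dst->left = copyNode(src->left);
    dst->right = copyNode(src->right);
    return dst;
}

/* deep free: safe on copyNode() output, since nothing in it is shared */
static void
freeNode(Node *n)
{
    if (n == NULL)
        return;
    freeNode(n->left);
    freeNode(n->right);
    free(n);
}

int
main(void)
{
    Node shared = {1, NULL, NULL};
    Node original = {0, &shared, &shared};  /* two references to one node: a
                                             * deep free of 'original' would
                                             * hit 'shared' twice */

    Node *copy = copyNode(&original);       /* copy has two distinct children */
    freeNode(copy);                         /* safe */
    return 0;
}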
> -----Original Message-----
> From: Tom Lane [mailto:tgl@sss.pgh.pa.us]
>
> "Hiroshi Inoue" <Inoue@tpf.co.jp> writes:
> > I think so. The problem is that a Node struct cannot be freed safely, due
> > to the lack of a reference count in its definition. As far as I can see,
> > plans can be destroyed only when the memory context in which they are
> > placed is destroyed.
>
> This is overly conservative.  It should be safe to destroy a plan tree
> via freeObject() if it was created via copyObject() --- and that is
> certainly how the plan would get into a permanent memory context.
>

I once proposed an implementation of copyObject() that preserves the
references among objects. It seems unnatural to me that such an
implementation would never be allowed because of this restriction.

Why is a memory context per plan bad?

Regards.

Hiroshi Inoue
Inoue@tpf.co.jp
> "Hiroshi Inoue" <Inoue@tpf.co.jp> writes:
> > I think so. The problem is that a Node struct cannot be freed safely, due
> > to the lack of a reference count in its definition. As far as I can see,
> > plans can be destroyed only when the memory context in which they are
> > placed is destroyed.
>
> This is overly conservative.  It should be safe to destroy a plan tree
> via freeObject() if it was created via copyObject() --- and that is
> certainly how the plan would get into a permanent memory context.
>
> Currently, rule definitions are leaked in CacheContext at relcache
> flushes.  I plan to start freeing them via freeObject at the beginning
> of the 7.1 development cycle --- I didn't want to risk it during the
> runup to 7.0, but I believe it will work fine.

I don't see any reason why each saved plan or rule definition shouldn't
go into its own, private memory context. Then, a simple destruction of
the entire context will surely free all its memory, and I think it will
also be faster: because of the en-bloc allocation done for many small
objects, there is no need to free them all separately - it just throws
away the entire blocks. No need to traverse the node tree, and no
problems with multiple object references inside the tree.

Since plans are (or ought to be) saved via SPI_saveplan(plan), there is
already a central point where this can be done for plans. And a
corresponding SPI_freeplan(savedplan) should be easy to create, since the
context can be held in the SPI plan structure itself.

This needs only some general naming convention for these memory contexts.
But something like

    MemoryContext CreateObjectMemoryContext();

that guarantees uniqueness in the context name, with no conflicts thanks
to some appropriate prefix, should do it.

The overhead paid for separate contexts is IMHO negligible.


Jan

--

#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.                                  #
#========================================= wieck@debis.com (Jan Wieck) #
On Thu, 24 Feb 2000, Hiroshi Inoue wrote:

> > This is overly conservative.  It should be safe to destroy a plan tree
> > via freeObject() if it was created via copyObject() --- and that is
> > certainly how the plan would get into a permanent memory context.
>
> I once proposed an implementation of copyObject() that preserves the
> references among objects. It seems unnatural to me that such an
> implementation would never be allowed because of this restriction.
>
> Why is a memory context per plan bad?

One context is simpler. We are talking about a *cache*. If an interface
exists for this cache and all operations go through copyObject() /
freeObject(), there is no restriction. For which operation would it be a
restriction? The PlanCacheMemoryContext only provides storage space; it
is not a space where any other work happens.

					Karel Z.
Karel wrote:

> > Why is a memory context per plan bad?
>
> One context is simpler.

I don't see much complexity difference between one context per plan
vs. one context for all. At least not if we do it transparently inside of
SPI_saveplan() and SPI_freeplan().

> We are talking about a *cache*. If an interface exists for this cache
> and all operations go through copyObject() / freeObject(), there is no
> restriction. For which operation would it be a restriction?

No restrictions that I can see. But I think one context per plan is still
better. First, there is no leakage/multiref problem. Second, there is a
performance difference between explicitly pfree()'ing hundreds of small
allocations (in the freeObject() traversal) and just destroying a
context.

The changes I made to the MemoryContextAlloc stuff for v6.5 (IIRC), using
bigger blocks including padding/reuse for small allocations, caused a
speedup of 5+% for the entire regression test. This was only because it
makes fewer real calls to malloc()/free(), and the context destruction no
longer needs to process a huge list of all the (however small)
allocations anymore - it simply throws away the entire blocks now.

Here we are talking about a more complex, recursive freeObject(),
switch()'ing for every node type into separate, per-object-type
functions and pfree()'ing all the little chunks. So there is at least a
difference in the number of first/second-level cache lines touched. And
if that can simply be avoided by using one context per plan, I vote for
one-by-one.

Then again, copyObject/freeObject must be fixed WRT leakage/multiref
anyway.


Jan
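The difference Jan describes can be seen with a standalone toy allocator
(nothing below is backend code; the "arena" merely stands in for a memory
context, and the node size and counts are made up for illustration):

/*
 * Standalone toy, not backend code: releasing 100,000 small "nodes" one
 * by one (a freeObject()-style traversal) vs. destroying a simple arena
 * that hands them out from big blocks (a memory-context-style
 * destruction, which only has to free the blocks).
 */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define NNODES    100000
#define NODESIZE  48            /* a typical small parse/plan node */
#define BLOCKSIZE (64 * 1024)   /* arena block, like a context block */

typedef struct Block
{
    struct Block *next;
    size_t        used;
    char          data[BLOCKSIZE];
} Block;

typedef struct Arena
{
    Block *blocks;
} Arena;

static void *
arena_alloc(Arena *a, size_t size)
{
    if (a->blocks == NULL || a->blocks->used + size > BLOCKSIZE)
    {
        Block *b = malloc(sizeof(Block));

        b->next = a->blocks;
        b->used = 0;
        a->blocks = b;
    }
    void *p = a->blocks->data + a->blocks->used;

    a->blocks->used += size;
    return p;
}

/* "destroying the context": free only the blocks, never individual nodes */
static void
arena_destroy(Arena *a)
{
    while (a->blocks)
    {
        Block *next = a->blocks->next;

        free(a->blocks);
        a->blocks = next;
    }
}

int
main(void)
{
    static void *nodes[NNODES];
    Arena   arena = {NULL};
    clock_t start;
    double  t_free, t_destroy;

    for (int i = 0; i < NNODES; i++)
        nodes[i] = malloc(NODESIZE);
    start = clock();
    for (int i = 0; i < NNODES; i++)    /* freeObject()-style: one call per node */
        free(nodes[i]);
    t_free = (double) (clock() - start) / CLOCKS_PER_SEC;

    for (int i = 0; i < NNODES; i++)
        (void) arena_alloc(&arena, NODESIZE);
    start = clock();
    arena_destroy(&arena);              /* context-style: throw the blocks away */
    t_destroy = (double) (clock() - start) / CLOCKS_PER_SEC;

    printf("per-node free: %.4f s   arena destroy: %.4f s\n", t_free, t_destroy);
    return 0;
}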
wieck@debis.com (Jan Wieck) writes:
> But I think one context per plan is still better. First, there is no
> leakage/multiref problem. Second, there is a performance difference
> between explicitly pfree()'ing hundreds of small allocations (in the
> freeObject() traversal) and just destroying a context.

Agreed, though one would hope that performance of cache flushes is not
a major consideration ;-).  What I find attractive about going in this
direction is the idea that we could get rid of freeObject() entirely,
and eliminate that part of the work involved in changing node
definitions.

> Then again, copyObject/freeObject must be fixed WRT leakage/multiref
> anyway.

Not if we decide to get rid of freeObject, instead.

I think that a little work would have to be done to support efficient
use of large numbers of contexts, but it's certainly doable.  This
path seems more attractive than trying to make the world safe for
freeObject of arbitrary node trees.

			regards, tom lane
Tom Lane wrote:

> wieck@debis.com (Jan Wieck) writes:
> > Then again, copyObject/freeObject must be fixed WRT leakage/multiref
> > anyway.
>
> Not if we decide to get rid of freeObject, instead.
>
> I think that a little work would have to be done to support efficient
> use of large numbers of contexts, but it's certainly doable.  This
> path seems more attractive than trying to make the world safe for
> freeObject of arbitrary node trees.

Yes, a little work to build the framework. All alloc/realloc/free
functions for a particular context are just function pointers inside the
context structure itself, so there will be no additional call overhead
when dealing with large numbers of contexts.

OTOH, this new per-object-context stuff could hand down some lifetime
flag, let's say MCXT_UNTIL_STATEMENT, MCXT_UNTIL_XEND and
MCXT_UNTIL_INFINITY to start with. The memory context
creation/destruction routines could manage some global lists of contexts
that automatically get destroyed on AtXactCommitMemory and so on, making
such a per-object memory context a fire'n'forget missile (uh - played F15
too excessively :-). It should still be destroyed explicitly when it is
not needed anymore, but if allocated with the correct lifetime, it
wouldn't hurt that much if forgotten.

More work would be needed to get all the existing places in the backend
to make use of this functionality where applicable.


Jan
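To make the lifetime idea concrete, a standalone sketch (the MCXT_* flag
names come from Jan's message above; the registry, register_context() and
reap_contexts() are invented for illustration and are not a backend API):

/*
 * Standalone sketch, not backend code: contexts register themselves with
 * a lifetime; at statement end or transaction end the registry destroys
 * every context whose lifetime has expired, so a forgotten context does
 * not leak forever.
 */
#include <stdio.h>
#include <stdlib.h>

typedef enum ContextLifetime
{
    MCXT_UNTIL_STATEMENT,       /* dropped at end of statement */
    MCXT_UNTIL_XEND,            /* dropped at end of transaction */
    MCXT_UNTIL_INFINITY         /* only dropped explicitly */
} ContextLifetime;

typedef struct RegisteredContext
{
    const char               *name;
    ContextLifetime           lifetime;
    struct RegisteredContext *next;
} RegisteredContext;

static RegisteredContext *registry = NULL;

static void
register_context(const char *name, ContextLifetime lifetime)
{
    RegisteredContext *rc = malloc(sizeof(RegisteredContext));

    rc->name = name;
    rc->lifetime = lifetime;
    rc->next = registry;
    registry = rc;
}

/* destroy (here: just report and unlink) every context at or below 'upto' */
static void
reap_contexts(ContextLifetime upto)
{
    RegisteredContext **p = &registry;

    while (*p)
    {
        if ((*p)->lifetime <= upto)
        {
            RegisteredContext *dead = *p;

            printf("destroying context '%s'\n", dead->name);
            *p = dead->next;
            free(dead);
        }
        else
            p = &(*p)->next;
    }
}

int
main(void)
{
    register_context("sort scratch", MCXT_UNTIL_STATEMENT);
    register_context("saved trigger plan", MCXT_UNTIL_XEND);
    register_context("PREPAREd plan", MCXT_UNTIL_INFINITY);

    reap_contexts(MCXT_UNTIL_STATEMENT);    /* end of statement */
    reap_contexts(MCXT_UNTIL_XEND);         /* AtXactCommit-style cleanup */
    return 0;
}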
wieck@debis.com (Jan Wieck) writes:
> OTOH, this new per-object-context stuff could hand down some lifetime
> flag, let's say MCXT_UNTIL_STATEMENT, MCXT_UNTIL_XEND and
> MCXT_UNTIL_INFINITY to start with.

A good thing to keep in mind, but for the short term I'm not sure
we need it; the proposed new contexts are all for indefinite-lifetime
caches, so there's no chance to make them go away automatically.
Eventually we might have more uses for limited-lifetime contexts,
though.

Something else that needs to be looked at is how memory contexts
are tied to "portals" presently.  That mechanism probably needs
to be redesigned.  I have to admit I don't understand what it's
for...

			regards, tom lane
Tom Lane wrote:

> wieck@debis.com (Jan Wieck) writes:
> > OTOH, this new per-object-context stuff could hand down some lifetime
> > flag, let's say MCXT_UNTIL_STATEMENT, MCXT_UNTIL_XEND and
> > MCXT_UNTIL_INFINITY to start with.
>
> A good thing to keep in mind, but for the short term I'm not sure
> we need it; the proposed new contexts are all for indefinite-lifetime
> caches, so there's no chance to make them go away automatically.
> Eventually we might have more uses for limited-lifetime contexts,
> though.

Sure, it was only something I thought might be useful in some cases. And
if it isn't used, would it hurt to have support for it anyway? Just some
unused List*'ers somewhere - nothing important.

> Something else that needs to be looked at is how memory contexts
> are tied to "portals" presently.  That mechanism probably needs
> to be redesigned.  I have to admit I don't understand what it's
> for...

U2?  Makes 2 of us.


Jan
On Wed, 23 Feb 2000, Jan Wieck wrote:

> I don't see much complexity difference between one context per plan
> vs. one context for all. At least not if we do it transparently inside
> of SPI_saveplan() and SPI_freeplan().

Well, I have explored PG's memory context routines, and it is probably
simpler to destroy a memory context (than to use freeObject()), and
creating a new context for a plan is simple too. (Jan, Hiroshi and the PG
source convinced me :-)

This afternoon I rewrote the query cache, and it is now implemented as
'context-per-plan'. This allows me to write a 'DROP PLAN' command. We can
use this cache in SPI too, and create a new SPI_freeplan() command (and
stop feeding TopMemoryContext).

Now, PREPARE/EXECUTE is ready for use. See:

test=# prepare my_plan as select * from tab where id = $1 using int;
PREPARE
test=# execute my_plan using 2;
 id | data
----+------
  2 | aaaa
(1 row)

test=# drop plan my_plan;
DROP
test=# execute my_plan using 2;
ERROR:  Plan with name 'my_plan' does not exist

I am still not sure about the PREPARE/EXECUTE keywords; I vote for:

CREATE PLAN name AS query [ USING type, ... ]
EXECUTE PLAN name [ USING values, ... ]
DROP PLAN name

Comments? (Please --- I am really not an SQL standard guru...)

					Karel
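The DROP PLAN flow described here, as a standalone sketch with invented
names (a malloc'd blob stands in for the plan's private memory context;
this is not the actual patch):

/*
 * Sketch only: with context-per-plan, DROP PLAN is just "unlink the hash
 * entry and destroy the plan's private context".
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef struct PlanEntry
{
    char              name[64];
    void             *plancxt;      /* the plan's private "memory context" */
    struct PlanEntry *next;
} PlanEntry;

static PlanEntry *plans = NULL;

/* PREPARE / CREATE PLAN: give the plan its own context and remember it */
static void
create_plan(const char *name)
{
    PlanEntry *e = malloc(sizeof(PlanEntry));

    snprintf(e->name, sizeof(e->name), "%s", name);
    e->plancxt = malloc(4096);      /* copied trees would live in here */
    e->next = plans;
    plans = e;
}

/* DROP PLAN: destroy the private context, then forget the entry */
static int
drop_plan(const char *name)
{
    for (PlanEntry **p = &plans; *p; p = &(*p)->next)
    {
        if (strcmp((*p)->name, name) == 0)
        {
            PlanEntry *dead = *p;

            *p = dead->next;
            free(dead->plancxt);    /* context destruction in the backend */
            free(dead);
            return 0;
        }
    }
    fprintf(stderr, "ERROR:  Plan with name '%s' does not exist\n", name);
    return -1;
}

int
main(void)
{
    create_plan("my_plan");
    drop_plan("my_plan");           /* succeeds */
    drop_plan("my_plan");           /* prints the error, as in the session above */
    return 0;
}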
Karel Zak - Zakkr writes:

> I am still not sure about the PREPARE/EXECUTE keywords; I vote for:
>
> CREATE PLAN name AS query [ USING type, ... ]
> EXECUTE PLAN name [ USING values, ... ]
> DROP PLAN name
>
> Comments? (Please --- I am really not an SQL standard guru...)

SQL seems to have something like the following.  (Note: the section on
dynamic SQL is mostly incomprehensible to me.)

PREPARE name AS query
DESCRIBE INPUT name [ USING x, ... ]
DESCRIBE [OUTPUT] name [ USING x, ... ]
EXECUTE name [ INTO x, y, ... ] [ USING a, b, ... ]
DEALLOCATE PREPARE name

I'm not sure if these match exactly what you're doing, but if it is at
all possible to match what you're doing to these, I'd say it would be a
shame not to do it.  You've got time.

Meanwhile I'm wondering whether it would not be possible to provide the
plan-caching functionality even if all you do is send the same SELECT
twice in a row.  Might be tricky, of course.

--
Peter Eisentraut                  Sernanders väg 10:115
peter_e@gmx.net                   75262 Uppsala
http://yi.org/peter-e/            Sweden
On Sat, 26 Feb 2000, Peter Eisentraut wrote:

> Karel Zak - Zakkr writes:
>
> > I am still not sure about the PREPARE/EXECUTE keywords; I vote for:
> >
> > CREATE PLAN name AS query [ USING type, ... ]
> > EXECUTE PLAN name [ USING values, ... ]
> > DROP PLAN name
> >
> > Comments? (Please --- I am really not an SQL standard guru...)
>
> SQL seems to have something like the following.  (Note: the section on
> dynamic SQL is mostly incomprehensible to me.)

I am studying SQL92 just now, and I am not sure that my idea is the same
as SQL92's PREPARE. My implementation is very similar to SPI's plan
operations, and is designed as a simple way to get very fast query
execution.

> PREPARE name AS query

In my PREPARE, the query goes to the parser, and if the PG query contains
'$n' parameters, the parser needs an (Oid) argtypes array; hence it needs

PREPARE name AS <query with parameters - $n> USING valuetype, ...

But in SQL92, PREPARE comes without the "USING valuetype, ...".

> DESCRIBE INPUT name [ USING x, ... ]
> DESCRIBE [OUTPUT] name [ USING x, ... ]

This is probably used instead of the 'USING' in PREPARE. It specifies the
columns for the select (OUTPUT), and INPUT specifies the values for the
parser (the $n parameters in PG). The people who defined SQL92 must be
crazy --- this PREPARE concept splits one query plan across three
commands. Who joins it back into one plan?....

> EXECUTE name [ INTO x, y, ... ] [ USING a, b, ... ]

This command will "associate input parameters and output targets with a
prepared statement and execute the statement" (SQL92).

'INTO' - I am really not sure whether it is possible in PG to join
several plans into one plan. If I understand correctly, INTO is a target
list for the cached query, but the cached query already has a target
list. Is there any way to join/replace the target list in the cached
query with the target list from EXECUTE's INTO? (QueryRewrite?) Still,
INTO for EXECUTE is not a bad idea.

> DEALLOCATE PREPARE name

That is better than 'DROP'.

> Meanwhile I'm wondering whether it would not be possible to provide the
> plan-caching functionality even if all you do is send the same SELECT
> twice in a row.  Might be tricky, of course.

Here I do not understand you.

Does any other SQL database implement PREPARE/EXECUTE? (Oracle8 does not
have it; any others?)

I still vote for a simple PREPARE/EXECUTE (or a non-standard CREATE
PLAN), because SQL92's PREPARE is not implementable :-)

					Karel
> > EXECUTE name [ INTO x, y, ... ] [ USING a, b, ... ]
>
> This command will "associate input parameters and output targets with a
> prepared statement and execute the statement" (SQL92).
>
> 'INTO' - I am really not sure whether it is possible in PG to join
> several plans into one plan. If I understand correctly, INTO is a target
> list for the cached query, but the cached query already has a target
> list. Is there any way to join/replace the target list in the cached
> query with the target list from EXECUTE's INTO? (QueryRewrite?) Still,
> INTO for EXECUTE is not a bad idea.

Sorry, the previous paragraph was stupid. The 'into' is a simple item in
the Query struct and not a target list. I spent more time on that
stupidity than on the implementation:

EXECUTE <name> [ INTO [ TEMPORARY | TEMP ] [ TABLE ] new_table ]
               [ USING val, ... ]

test=# prepare sel as select * from tab;
PREPARE
test=# execute sel into x;
SELECT
test=# select * from x;
 id | data
----+------
  1 | aaaa
  2 | bbbb
  3 | cccc
  4 | dddd
  5 | eeee
(5 rows)

The PostgreSQL source code really is very modular :-)

					Karel
> -----Original Message-----
> From: Karel Zak - Zakkr [mailto:zakkr@zf.jcu.cz]
>
> > EXECUTE name [ INTO x, y, ... ] [ USING a, b, ... ]
>
> This command will "associate input parameters and output targets with a
> prepared statement and execute the statement" (SQL92).
>

I don't know much about the PREPARE statement, but is the above syntax
meant for an interactive SQL command? Isn't it for embedded SQL or the
SQL module language?

Regards.

Hiroshi Inoue
Inoue@tpf.co.jp
On Tue, 29 Feb 2000, Hiroshi Inoue wrote:

> > -----Original Message-----
> > From: Karel Zak - Zakkr [mailto:zakkr@zf.jcu.cz]
> >
> > > EXECUTE name [ INTO x, y, ... ] [ USING a, b, ... ]
> >
> > This command will "associate input parameters and output targets with a
> > prepared statement and execute the statement" (SQL92).
>
> I don't know much about the PREPARE statement, but is the above syntax
> meant for an interactive SQL command? Isn't it for embedded SQL or the
> SQL module language?

- PREPARE saves any standard SQL command (OptimizableStmt) to the cache.
- EXECUTE runs this cached plan (query) and sends the data to the
  frontend or INTO a relation.

Or what do you mean?

					Karel
> -----Original Message-----
> From: Karel Zak - Zakkr [mailto:zakkr@zf.jcu.cz]
>
> On Tue, 29 Feb 2000, Hiroshi Inoue wrote:
>
> > I don't know much about the PREPARE statement, but is the above syntax
> > meant for an interactive SQL command? Isn't it for embedded SQL or the
> > SQL module language?
>
> - PREPARE saves any standard SQL command (OptimizableStmt) to the cache.
> - EXECUTE runs this cached plan (query) and sends the data to the
>   frontend or INTO a relation.
>
> Or what do you mean?
>

In old Oracle (I don't know recent Oracle, sorry), PREPARE could not be
called as an interactive SQL command. It was used only in embedded SQL.

It seems that the x, y after INTO are output variables. In embedded SQL
they are host variables, but I don't know what they would be in
interactive SQL.

Regards.

Hiroshi Inoue
Inoue@tpf.co.jp
On Wed, 1 Mar 2000, Hiroshi Inoue wrote:

> In old Oracle (I don't know recent Oracle, sorry), PREPARE could not be
> called as an interactive SQL command. It was used only in embedded SQL.

Oh yes, I understand you now. No, here PREPARE is a standard
(interactive) command (IMO).

> It seems that the x, y after INTO are output variables. In embedded SQL
> they are host variables, but I don't know what they would be in
> interactive SQL.

The INTO is the same as in (for example) SELECT ... INTO; see:

PREPARE myplan AS SELECT * FROM tab;
EXECUTE myplan INTO newtab;

The INTO only replaces the query destination of the cached plan. ...at
least in my implementation; I don't know how it works in any other SQL
databases. It is not in my Oracle8 tutorial.

					Karel