Thread: Status of plperl inter-sp calling
While waiting for feedback on my earlier plperl refactor and feature patches I'm working on a further patch that adds, among other things, fast inter-plperl-sp calling. I want to outline what I've got and get some feedback on open issues.

To make a call to a stored procedure from plperl you just call the function name prefixed by SP::. For example:

    create function poly() returns text language plperl as $$ return "poly0" $$;
    create function poly(text) returns text language plperl as $$ return "poly1" $$;
    create function poly(text, text) returns text language plperl as $$ return "poly2" $$;

    create function foo() returns text language plperl as $$
        SP::poly();
        SP::poly(1);
        SP::poly(1,2);
        return undef;
    $$;

That handles the arity of the calls and invokes the right SP, bypassing SQL if the SP is already loaded.

That much works currently. Behind the scenes, when a stored procedure is loaded into plperl the code ref for the perl sub is stored in a cache. Effectively just

    $cache{$name}[$nargs] = $coderef;

An SP::AUTOLOAD sub intercepts any SP::* call and effectively does

    lookup_sp($name, \@_)->(@_);

For SPs that are already loaded lookup_sp returns $cache{$name}[$nargs], so the overhead of the call is very small. For SPs that are not cached, lookup_sp returns a code ref of a closure that will invoke $name with the args in @_ via

    spi_exec_query("select * from $name($encoded_args)");

The fallback-to-SQL behaviour neatly handles non-cached SPs (forcing them to be loaded and thus cached) and inter-language calling (both plperl<->plperl and other PLs).

Limitations:

* It's not meant to handle type polymorphism, only the number of args.

* When invoked via SQL, because the SP isn't cached, all non-ref args are expressed as strings via quote_nullable(). Any array refs are encoded as ARRAY[...] via encode_array_constructor().

I don't see either of those as significant issues: "If you need more control for a particular SP then don't use SP::* to call that SP."

Open issues:

* What should SP::foo(...) return? The plain as-if-called-by-perl return value, or something closer to what spi_exec_query() returns?

* If the called SP::foo(...) calls return_next, those rows are returned directly to the client. That can be construed as a feature.

* Cache invalidation. How can I hook into an SP being dropped so I can pro-actively invalidate the cache?

* Probably many other things I've not thought of.

This is all a little rough and exploratory at the moment. I'm very keen to get any feedback you might have.

Tim.

p.s. Happy New Year! (I may be off-line for a few days.)
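[Editor's illustration] A rough sketch of how the SP::AUTOLOAD dispatch and cache described above could fit together. This is not the actual patch: the lookup_sp helper, the way %cache is filled, and the use of quote_nullable()/encode_array_constructor() in the fallback are assumptions based on the description.

    # Sketch only - not Tim's patch. In the real code the cache is populated
    # when plperl compiles each stored procedure.
    package SP;
    use strict;
    use warnings;

    our %cache;     # $cache{$name}[$nargs] = $coderef
    our $AUTOLOAD;

    sub AUTOLOAD {
        (my $name = $AUTOLOAD) =~ s/^SP:://;
        return if $name eq 'DESTROY';       # don't treat DESTROY as an SP call
        return lookup_sp($name, \@_)->(@_);
    }

    sub lookup_sp {
        my ($name, $args) = @_;
        my $nargs = scalar @$args;

        # Fast path: the SP has already been loaded into this interpreter.
        return $cache{$name}[$nargs]
            if $cache{$name} && $cache{$name}[$nargs];

        # Slow path: fall back to SQL. Calling the SP this way also loads
        # (and thereby caches) it, if it happens to be plperl.
        return sub {
            my $encoded_args = join ', ', map {
                ref $_ eq 'ARRAY' ? ::encode_array_constructor($_)
                                  : ::quote_nullable($_)
            } @_;
            return ::spi_exec_query("select * from $name($encoded_args)");
        };
    }

    1;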
On Wed, Dec 30, 2009 at 5:54 PM, Tim Bunce <Tim.Bunce@pobox.com> wrote: > That much works currently. Behind the scenes, when a stored procedure is > loaded into plperl the code ref for the perl sub is stored in a cache. > Effectively just > $cache{$name}[$nargs] = $coderef; That doesn't seem like enough to guarantee that you've got the right function. What if you have two functions with the same number of arguments but different argument types? And what about optional arguments, variable arguments, etc.? ...Robert
On Dec 30, 2009, at 4:17 PM, Robert Haas wrote:

>> That much works currently. Behind the scenes, when a stored procedure is
>> loaded into plperl the code ref for the perl sub is stored in a cache.
>> Effectively just
>> $cache{$name}[$nargs] = $coderef;
>
> That doesn't seem like enough to guarantee that you've got the right
> function. What if you have two functions with the same number of
> arguments but different argument types? And what about optional
> arguments, variable arguments, etc.?

As Tim said elsewhere:

> I don't see either of those as significant issues: "If you need more
> control for a particular SP then don't use SP::* to call that SP."

Best,

David
"David E. Wheeler" <david@kineticode.com> writes: > On Dec 30, 2009, at 4:17 PM, Robert Haas wrote: >> That doesn't seem like enough to guarantee that you've got the right >> function. > As Tim said elsewhere: >> I don't see either of those as significant issues: "If you need more >> control for a particular SP then don't use SP::* to call that SP." If the thing actively fails when there's more than one possible match, that might be ok. Randomly choosing a match, not so much. regards, tom lane
On Dec 30, 2009, at 2:54 PM, Tim Bunce wrote:

> That handles the arity of the calls and invokes the right SP, bypassing
> SQL if the SP is already loaded.

Nice.

> That much works currently. Behind the scenes, when a stored procedure is
> loaded into plperl the code ref for the perl sub is stored in a cache.
> Effectively just
> $cache{$name}[$nargs] = $coderef;
> An SP::AUTOLOAD sub intercepts any SP::* call and effectively does
> lookup_sp($name, \@_)->(@_);
> For SPs that are already loaded lookup_sp returns $cache{$name}[$nargs]
> so the overhead of the call is very small.

Definite benefit, there. How does it handle the difference between IMMUTABLE | STABLE | VOLATILE, as well as STRICT functions? And what does it do if the function called is not actually a Perl function?

> For SPs that are not cached, lookup_sp returns a code ref of a closure
> that will invoke $name with the args in @_ via
> spi_exec_query("select * from $name($encoded_args)");
>
> The fallback-to-SQL behaviour neatly handles non-cached SPs (forcing
> them to be loaded and thus cached), and inter-language calling (both
> plperl<->plperl and other PLs).

Is there a way for such a function to be cached? If not, I'm not sure where cached functions come from.

> Limitations:
>
> * It's not meant to handle type polymorphism, only the number of args.

Well, spi_exec_query() handles the type polymorphism. So might it be possible to call SP::function() and have it not use a cached query? That way, one gets the benefit of polymorphism. Maybe there's an SP package that does caching, and an SPI package that does not? (Better named, though.)

> * When invoked via SQL, because the SP isn't cached, all non-ref args
>   are expressed as strings via quote_nullable(). Any array refs
>   are encoded as ARRAY[...] via encode_array_constructor().

Hrm. Why not use spi_prepare() and let spi_exec_prepared() handle the quoting?

> I don't see either of those as significant issues: "If you need more
> control for a particular SP then don't use SP::* to call that SP."

If there was a non-cached version that was essentially just sugar for the SPI stuff, I think that would be more predictable, no? I'm not saying there shouldn't be a cached interface, just that it should not be the first choice when using polymorphic functions and non-PL/Perl functions.

> Open issues:
>
> * What should SP::foo(...) return? The plain as-if-called-by-perl
>   return value, or something closer to what spi_exec_query() returns?

The former.

> * If the called SP::foo(...) calls return_next those rows are returned
>   directly to the client. That can be construed as a feature.

As a list?

Best,

David
On Wed, Dec 30, 2009 at 7:41 PM, David E. Wheeler <david@kineticode.com> wrote: > On Dec 30, 2009, at 4:17 PM, Robert Haas wrote: > >>> That much works currently. Behind the scenes, when a stored procedure is >>> loaded into plperl the code ref for the perl sub is stored in a cache. >>> Effectively just >>> $cache{$name}[$nargs] = $coderef; >> >> That doesn't seem like enough to guarantee that you've got the right >> function. What if you have two functions with the same number of >> arguments but different argument types? And what about optional >> arguments, variable arguments, etc.? > > As Tim said elsewhere: > >> I don't see either of those as significant issues: "If you need more >> control for a particular SP then don't use SP::* to call that SP." Sorry, I missed that. I guess it seems weird to me to handle overloading, but only partially. If we're OK with punting, why not punt the whole thing and just have $cache{$name} = $coderef? ...Robert
On Thu, Dec 31, 2009 at 09:47:24AM -0800, David E. Wheeler wrote:
> On Dec 30, 2009, at 2:54 PM, Tim Bunce wrote:
>
> > That much works currently. Behind the scenes, when a stored procedure is
> > loaded into plperl the code ref for the perl sub is stored in a cache.
> > Effectively just
> > $cache{$name}[$nargs] = $coderef;
> > An SP::AUTOLOAD sub intercepts any SP::* call and effectively does
> > lookup_sp($name, \@_)->(@_);
> > For SPs that are already loaded lookup_sp returns $cache{$name}[$nargs]
> > so the overhead of the call is very small.
>
> Definite benefit, there. How does it handle the difference between
> IMMUTABLE | STABLE | VOLATILE, as well as STRICT functions?

It doesn't at the moment. I think IMMUTABLE, STABLE and VOLATILE can be (documented as being) ignored in this context. Supporting STRICT probably wouldn't be too hard.

> And what does it do if the function called is not actually a Perl function?

(See "fallback-to-SQL" two paragraphs below.)

> > For SPs that are not cached, lookup_sp returns a code ref of a closure
> > that will invoke $name with the args in @_ via
> > spi_exec_query("select * from $name($encoded_args)");
> >
> > The fallback-to-SQL behaviour neatly handles non-cached SPs (forcing
> > them to be loaded and thus cached), and inter-language calling (both
> > plperl<->plperl and other PLs).
>
> Is there a way for such a function to be cached? If not, I'm not sure
> where cached functions come from.

The act of calling the function via spi_exec_query will load it, and thereby cache it in the perl interpreter as a side effect (if the language is the same: e.g., plperlu->plperlu).

> > Limitations:
> >
> > * It's not meant to handle type polymorphism, only the number of args.
>
> Well, spi_exec_query() handles the type polymorphism. So might it be
> possible to call SP::function() and have it not use a cached query?
> That way, one gets the benefit of polymorphism. Maybe there's an SP
> package that does caching, and an SPI package that does not? (Better
> named, though.)

The underlying issue here is perl's lack of strong typing. See

    http://search.cpan.org/~mlehmann/JSON-XS-2.26/XS.pm#PERL_-%3E_JSON

especially the "simple scalars" section and "used as string" example. As far as I can see there's no way for perl to support the kind of rich type polymorphism that PostgreSQL offers via the kind of "make it look like a perl function call" interface that we're discussing.

[I can envisage a more complex interface where you ask for a code ref to a sub with a specific type signature and then use that code ref to make the call. Ah, I've just had a better idea but it needs a little more thought. I'll send another email later.]

> > * When invoked via SQL, because the SP isn't cached, all non-ref args
> >   are expressed as strings via quote_nullable(). Any array refs
> >   are encoded as ARRAY[...] via encode_array_constructor().
>
> Hrm. Why not use spi_prepare() and let spi_exec_prepared() handle the quoting?

No reason, assuming spi_exec_prepared handles array refs properly. [I was just doing "simplest thing that could possibly work" at this stage.]

> > I don't see either of those as significant issues: "If you need more
> > control for a particular SP then don't use SP::* to call that SP."
>
> If there was a non-cached version that was essentially just sugar for
> the SPI stuff, I think that would be more predictable, no? I'm not
> saying there shouldn't be a cached interface, just that it should not
> be the first choice when using polymorphic functions and non-PL/Perl
> functions.

So you're suggesting SP::foo(...) _always_ executes foo(...) via a bunch of spi_* calls. Umm. I thought performance was a major driving factor. Sounds like you're more keen on syntactic sugar.

Tim.
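[Editor's illustration] For context, the prepared-query route David mentions pushes the quoting problem down to parameter binding. A minimal sketch of what it looks like inside a plperl function body; the function name foo and its argument types are made up:

    # 'foo' and its argument types are hypothetical.
    my $plan = spi_prepare('select * from foo($1, $2)', 'text', 'int4');

    # Values are passed as parameters, so no quote_nullable() / ARRAY[...]
    # encoding needs to be done by hand on the Perl side.
    my $rv   = spi_exec_prepared($plan, 'some text', 42);
    my $rows = $rv->{rows};

    # Free the plan if it won't be reused (see the later discussion on this).
    spi_freeplan($plan);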
On Jan 5, 2010, at 12:59 PM, Tim Bunce wrote:

> So you're suggesting SP::foo(...) _always_ executes foo(...) via a bunch
> of spi_* calls. Umm. I thought performance was a major driving factor.
> Sounds like you're more keen on syntactic sugar.

I'm saying do both. Make the cached version the one that will be used most often, but make available a second version that doesn't cache, so that you get the sugar and the polymorphic dispatch. Such would only have to be used in cases where there is more than one function that takes the same number of arguments. The rest of the time -- most of the time, that is -- one can use the cached version.

Best,

David
On Tue, Jan 05, 2010 at 01:05:40PM -0800, David E. Wheeler wrote: > On Jan 5, 2010, at 12:59 PM, Tim Bunce wrote: > > > So you're suggesting SP::foo(...) _always_ executes foo(...) via bunch > > of spi_* calls. Umm. I thought performance was a major driving factor. > > Sounds like you're more keen on syntactic sugar. > > I'm saying do both. Make the cached version the one that will be used > most often, but make available a second version that doesn't cache so > that you get the sugar and the polymorphic dispatch. Such would only > have to be used in cases where there is more than one function that > takes the same number of arguments. The rest of the time -- most of > the time, that is -- one can use the cached version. I think I have a best-of-both solution. E-mail to follow... Tim.
Ok, Plan B...

Consider this (hypothetical) example:

    CREATE OR REPLACE FUNCTION test() ... LANGUAGE plperl AS $$
        use SP foo_int  => 'foo(int)';
        use SP foo_text => 'foo(text)', -cached;

        foo_int(42);
        foo_text(42);
        ...
    $$

Here the user is importing into their function, at load/compile-time, aliases for specific stored procedures with specific type signatures. The importer builds and imports a custom closure. At its most basic it would be something like:

    my $h = spi_prepare('select foo($1)', 'text');
    return sub { spi_exec_prepared($h, @_)->{rows} }

or perhaps, with added lazy smartness:

    my $mk = sub { spi_prepare('select foo($1)', 'text') };
    my $h; # initialized on first use
    record_handle_for_later_freeing_if_needed(\$h);
    return sub { spi_exec_prepared($h ||= $mk->(), @_)->{rows} }

As much as possible has been pre-computed. All foo_text() does is call spi_exec_prepared and do something (to be decided) with the results. That's likely to be fast enough to negate much of the desire for caching. It'll also work for all functions in all languages.

I added an example with -cached above to indicate how extra attributes could be specified to influence the behaviour of the import-time code builder.

The code builder only needs to handle a few simple cases initially. Enough to cover at least nargs and type polymorphism. I'd guess that VARIADIC won't be too hard, but I'll probably skip OUT & INOUT. There probably won't be explicit support for DEFAULT args - just import another alias that has the default arg missing.

The only question I have at the moment, before I try implementing this, is the need for freeing the plan. When would that be needed? (Note that this scheme will only generate a fixed set of plans, one per specific function name and type signature.) Can someone give me some real-world examples? For example, does a plan become 'broken' if an object it references gets dropped and recreated?

Assuming it does, or there's some other need to free/recreate plans, then I can add a function call to do that (by recording a reference to the $h's in the example above and using that to undef them).

Does the above sound workable? Anything I've missed?

Tim.

p.s. My earlier plperl feature patch enabled the use of 'use' within plperl stored procedures - but only for modules that have been explicitly configured and pre-loaded.
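[Editor's illustration] A rough sketch of how the importer described above could be structured. The SP package name comes from Tim's example, but the signature parsing, the option handling (-cached is recognized and ignored here), and the generated closure are assumptions, not actual patch code:

    # Sketch only - handles just the simple 'name(type, type, ...)' form.
    package SP;
    use strict;
    use warnings;

    sub import {
        my $class  = shift;
        my $caller = caller;

        while (@_) {
            my $alias = shift;
            my $sig   = shift;

            # Collect trailing options such as -cached (ignored in this sketch).
            my @opts;
            push @opts, shift while @_ && defined $_[0] && $_[0] =~ /^-/;

            my ($name, $types) = $sig =~ /^(\w+)\s*\(([^)]*)\)$/
                or die "SP: can't parse signature '$sig'";
            my @types = grep { /\S/ }
                        map { my $t = $_; $t =~ s/^\s+|\s+$//g; $t }
                        split /,/, $types;

            my $placeholders = join ', ', map { '$' . $_ } 1 .. @types;
            my $sql = "select * from $name($placeholders)";

            # Lazily prepare the plan on first call, then reuse it.
            my $plan;
            my $sub = sub {
                $plan ||= ::spi_prepare($sql, @types);
                return ::spi_exec_prepared($plan, @_)->{rows};
            };

            no strict 'refs';
            *{"${caller}::${alias}"} = $sub;
        }
    }

    1;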
Tim Bunce <Tim.Bunce@pobox.com> writes: > On Thu, Dec 31, 2009 at 09:47:24AM -0800, David E. Wheeler wrote: >> Definite benefit, there. How does it handle the difference between >> IMMUTABLE | STABLE | VOLATILE, as well as STRICT functions? > It doesn't at the moment. I think IMMUTABLE, STABLE and VOLATILE can be > (documented as being) ignored in this context. Just for the record, I think that would be a seriously bad idea. There is a semantic difference there (having to do with snapshot management), and ignoring it would mean that a function could behave subtly differently depending on how it was called. It's the kind of thing that would be a nightmare to debug, too, because you'd never see a problem except when the right sort of race condition occurred with another transaction. I see downthread that you seem to have an approach without this gotcha, so that's fine, but I wanted to make it clear that you can't just ignore volatility. regards, tom lane
Tim Bunce <Tim.Bunce@pobox.com> writes: > The only question I have at the moment, before I try implementing this, > is the the need for freeing the plan. When would that be needed? There's probably no strong need to do it at all, unless you are dropping your last reference to the plan. regards, tom lane
On Tue, Jan 05, 2010 at 06:54:36PM -0500, Tom Lane wrote: > Tim Bunce <Tim.Bunce@pobox.com> writes: > > On Thu, Dec 31, 2009 at 09:47:24AM -0800, David E. Wheeler wrote: > >> Definite benefit, there. How does it handle the difference between > >> IMMUTABLE | STABLE | VOLATILE, as well as STRICT functions? > > > It doesn't at the moment. I think IMMUTABLE, STABLE and VOLATILE can be > > (documented as being) ignored in this context. > > Just for the record, I think that would be a seriously bad idea. > There is a semantic difference there (having to do with snapshot > management), and ignoring it would mean that a function could behave > subtly differently depending on how it was called. It's the kind of > thing that would be a nightmare to debug, too, because you'd never > see a problem except when the right sort of race condition occurred > with another transaction. > > I see downthread that you seem to have an approach without this gotcha, > so that's fine, but I wanted to make it clear that you can't just ignore > volatility. Ok, thanks Tom. For my own benefit, being a PostgreSQL novice, could you expand a little? For example, given two stored procedures, A and V, where V is marked VOLATILE and both are plperl. How would having A call V directly, within the plperl interpreter, cause problems? Tim.
On Tue, Jan 05, 2010 at 07:06:35PM -0500, Tom Lane wrote: > Tim Bunce <Tim.Bunce@pobox.com> writes: > > The only question I have at the moment, before I try implementing this, > > is the the need for freeing the plan. When would that be needed? > > There's probably no strong need to do it at all, That's good. > unless you are dropping your last reference to the plan. Uh, now I'm confused again. The way I envisage it, each imported function would contain a plan. So each would have the one and only reference to that plan. So, if there was a need to drop them, I would be dropping the last reference to the plan. Let me ask the question another way... is there any reason to drop plans other than limiting memory usage? I couldn't find anything in the docs to suggest there was but want to be sure. Tim.
Tim Bunce <Tim.Bunce@pobox.com> writes: > For my own benefit, being a PostgreSQL novice, could you expand a little? > For example, given two stored procedures, A and V, where V is marked > VOLATILE and both are plperl. How would having A call V directly, within > the plperl interpreter, cause problems? That case is fine. The problem would be in calling, say, VOLATILE from STABLE. Any SPI queries executed inside the VOLATILE function would need to be handled under read-write not read-only rules. Now it's perhaps possible for you to track that yourself and make sure to call SPI with the right arguments for the type of function you're currently in, even if you didn't get to it via the front door. But that's a far cry from "ignoring" the volatility property. It seems nontrivial to do if you try to set things up so that no plperl code is executed during the transition from one function to another. regards, tom lane
Tim Bunce <Tim.Bunce@pobox.com> writes: > Let me ask the question another way... is there any reason to drop plans > other than limiting memory usage? No, that's about it. The only reason to care is if you'd otherwise have a code path that would repetitively leak plans. regards, tom lane
On Wed, Jan 6, 2010 at 9:46 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote: > Tim Bunce <Tim.Bunce@pobox.com> writes: >> For my own benefit, being a PostgreSQL novice, could you expand a little? >> For example, given two stored procedures, A and V, where V is marked >> VOLATILE and both are plperl. How would having A call V directly, within >> the plperl interpreter, cause problems? > > That case is fine. The problem would be in calling, say, VOLATILE from > STABLE. Any SPI queries executed inside the VOLATILE function would > need to be handled under read-write not read-only rules. > > Now it's perhaps possible for you to track that yourself and make sure > to call SPI with the right arguments for the type of function you're > currently in, even if you didn't get to it via the front door. But > that's a far cry from "ignoring" the volatility property. It seems > nontrivial to do if you try to set things up so that no plperl code is > executed during the transition from one function to another. I think it's becoming clear that it's hopeless to make this work in a way that is parallel to what will happen if you call these functions via a real SPI call. Even if Tim were able to reimplement all of our semantics in terms of what is immutable, stable, volatile, overloading, default arguments, variadic arguments, etc., I am fairly certain that we do not wish to maintain a Perl reimplementation of all of our calling conventions which will then have to be updated every time someone adds a new bit of magic to the core code. I think what we should do is either (1) implement a poor man's caching that doesn't try to cope with any of these issues, and document that you get what you pay for or (2) reject this idea in its entirety. Trying to reimplement all of our normal function call semantics in a caching layer does not seem very sane. ...Robert
Robert Haas <robertmhaas@gmail.com> writes:

> I think what we should do is either (1) implement a poor man's caching
> that doesn't try to cope with any of these issues, and document that
> you get what you pay for or (2) reject this idea in its entirety.
> Trying to reimplement all of our normal function call semantics in a
> caching layer does not seem very sane.

What about (3) implementing the caching layer in the core code so that any caller benefits from it? I guess the size of the project is not the same though.

Regards,
-- 
dim
Tom Lane wrote:

> Tim Bunce <Tim.Bunce@pobox.com> writes:
>
>> For my own benefit, being a PostgreSQL novice, could you expand a little?
>> For example, given two stored procedures, A and V, where V is marked
>> VOLATILE and both are plperl. How would having A call V directly, within
>> the plperl interpreter, cause problems?
>
> That case is fine. The problem would be in calling, say, VOLATILE from
> STABLE. Any SPI queries executed inside the VOLATILE function would
> need to be handled under read-write not read-only rules.
>
> Now it's perhaps possible for you to track that yourself and make sure
> to call SPI with the right arguments for the type of function you're
> currently in, even if you didn't get to it via the front door. But
> that's a far cry from "ignoring" the volatility property. It seems
> nontrivial to do if you try to set things up so that no plperl code is
> executed during the transition from one function to another.

I don't understand that phrase "call SPI with the right arguments for the type of function you're currently in". What calls that we make from plperl code would have different arguments depending on the volatility of the function?

If a cached plan is going to behave differently, I'd be inclined to say that we should only allow direct inter-sp calling to volatile functions from volatile functions - if I understand you right the only problem could be caused by calling in this direction; a volatile function calling a stable function would not cause a problem. That is surely the most likely case anyway. I at least rarely create non-volatile plperl functions, apart from an occasional immutable function that probably shouldn't be calling SPI anyway.

cheers

andrew
Andrew Dunstan <andrew@dunslane.net> writes: > I don't understand that phrase "call SPI with the right arguments for > the type of function you're currently in". What calls that we make from > plperl code would have different arguments depending on the volatility > of the function? eg, in plperl_spi_exec, spi_rv = SPI_execute(query, current_call_data->prodesc->fn_readonly, ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ limit); > If a cached plan is going to behave differently, I'd be > inclined to say that we should only allow direct inter-sp calling to > volatile functions from volatile functions - if U understand you right > the only problem could be caused by calling in this direction, a > volatile function calling a stable function would not cause a problem. The other way is just as wrong. regards, tom lane
Tom Lane wrote: > Andrew Dunstan <andrew@dunslane.net> writes: > >> I don't understand that phrase "call SPI with the right arguments for >> the type of function you're currently in". What calls that we make from >> plperl code would have different arguments depending on the volatility >> of the function? >> > > eg, in plperl_spi_exec, > > spi_rv = SPI_execute(query, current_call_data->prodesc->fn_readonly, > ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ > limit); > > OK, but won't that automatically supply the value from the function called from postgres, which will be the right thing? i.e. if postgres calls S which direct-calls V which calls SPI_execute(), the value of current_call_data->prodesc->fn_readonly in the call above will be supplied from S, not V, since S will be at the top of the plperl call stack. cheers andrew
Andrew Dunstan <andrew@dunslane.net> writes: > Tom Lane wrote: >> spi_rv = SPI_execute(query, current_call_data->prodesc->fn_readonly, >> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ > OK, but won't that automatically supply the value from the function > called from postgres, which will be the right thing? My point was that that is exactly the wrong thing. If I have a function declared stable, it must not suddenly start behaving as volatile because it was called from a volatile function. Nor vice versa. Now as I mentioned upthread, there might be other ways to get the correct value of the readonly parameter. One that comes to mind is to somehow attach it to the spi call "at compile time", whatever that means in the perl world. But you can't just have it be determined by the outermost active function call. regards, tom lane
Tom Lane wrote:

> Andrew Dunstan <andrew@dunslane.net> writes:
>
>> Tom Lane wrote:
>>
>>> spi_rv = SPI_execute(query, current_call_data->prodesc->fn_readonly,
>>>                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>
>> OK, but won't that automatically supply the value from the function
>> called from postgres, which will be the right thing?
>
> My point was that that is exactly the wrong thing. If I have a function
> declared stable, it must not suddenly start behaving as volatile because
> it was called from a volatile function. Nor vice versa.
>
> Now as I mentioned upthread, there might be other ways to get the
> correct value of the readonly parameter. One that comes to mind is
> to somehow attach it to the spi call "at compile time", whatever that
> means in the perl world. But you can't just have it be determined by
> the outermost active function call.

OK. Well, no doubt Tim might have better ideas, but the only way I can think of is to attach a readonly attribute (see perldoc attributes) to the function and then pass that back in the SPI call (not sure how easy it is to get the caller's attributes in C code). Unless we come up with a neatish way I'd be a bit inclined to agree with Robert that this is all looking too complex.

Next question: what do we do if a direct-called function calls return_next()? That at least must surely take effect in the caller's context - the callee won't have anywhere to stash the results at all.

cheers

andrew
Andrew Dunstan <andrew@dunslane.net> writes: > Next question: what do we do if a direct-called function calls > return_next()? That at least must surely take effect in the caller's > context - the callee won't have anywhere to stash the the results at all. Whatever do you mean by "take effect in the caller's context"? I surely hope it's not "return the row to the caller's caller, who likely isn't expecting anything of the kind". I suspect Tim will just answer that he isn't going to try to short-circuit the call path for set-returning functions. regards, tom lane
Tom Lane wrote: > Andrew Dunstan <andrew@dunslane.net> writes: > > Next question: what do we do if a direct-called function calls > > return_next()? That at least must surely take effect in the caller's > > context - the callee won't have anywhere to stash the the results at all. > > Whatever do you mean by "take effect in the caller's context"? I surely > hope it's not "return the row to the caller's caller, who likely isn't > expecting anything of the kind". > > I suspect Tim will just answer that he isn't going to try to > short-circuit the call path for set-returning functions. FYI, I am excited PL/Perl is getting a good review and cleaning by Tim. -- Bruce Momjian <bruce@momjian.us> http://momjian.us EnterpriseDB http://enterprisedb.com + If your life is a hard drive, Christ can be your backup. +
On Wed, Jan 06, 2010 at 11:14:38AM -0500, Tom Lane wrote:
> Andrew Dunstan <andrew@dunslane.net> writes:
>
> > Tom Lane wrote:
> >> spi_rv = SPI_execute(query, current_call_data->prodesc->fn_readonly,
> >>                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>
> > OK, but won't that automatically supply the value from the function
> > called from postgres, which will be the right thing?
>
> My point was that that is exactly the wrong thing. If I have a function
> declared stable, it must not suddenly start behaving as volatile because
> it was called from a volatile function. Nor vice versa.
>
> Now as I mentioned upthread, there might be other ways to get the
> correct value of the readonly parameter. One that comes to mind is
> to somehow attach it to the spi call "at compile time", whatever that
> means in the perl world. But you can't just have it be determined by
> the outermost active function call.

If you want 'a perl compile time hook', those are called attributes.

    http://search.cpan.org/~dapm/perl-5.10.1/lib/attributes.pm

You can define attributes to affect how a given piece of syntax compiles in perl:

    my $var :foo;

or

    sub bar :foo;

The subroutine or variable is compiled in a way defined by the ':foo' attribute. This might be a clean way around the type dispatch issues as well. One could include the invocant type information in the perl declaration:

    sub sp_something :pg_sp('bigint bigint');

    sp_something("12", 0);

Anyway, that looks like a nice interface to me... Although, I don't understand the Pg internals problem faced here so... I'm not sure my suggestion is helpful.

Garick
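[Editor's illustration] A purely hypothetical sketch of how Garick's attribute idea could be wired up. The attribute name (:pg_sp), its argument format, the use of B to recover the sub name, and the generated wrapper are all assumptions for illustration - none of this is an existing plperl feature:

    # Sketch only. Assumes the function body is compiled in package main,
    # as plperl does, and that spi_prepare/spi_exec_prepared are visible there.
    package main;
    use strict;
    use warnings;
    use B ();

    sub MODIFY_CODE_ATTRIBUTES {
        my ($package, $code, @attrs) = @_;
        my @unknown;

        for my $attr (@attrs) {
            if ($attr =~ /^pg_sp\(\s*'([^']*)'\s*\)$/) {
                my @types = split ' ', $1;                 # e.g. ('bigint', 'bigint')
                my $gv    = B::svref_2object($code)->GV;   # recover the declared sub's name
                my $name  = $gv->NAME;
                my $sql   = "select * from $name("
                          . join(', ', map { '$' . $_ } 1 .. @types) . ")";

                # Build a wrapper that lazily prepares the plan on first call.
                my $plan;
                my $wrapper = sub {
                    $plan ||= spi_prepare($sql, @types);
                    return spi_exec_prepared($plan, @_)->{rows};
                };

                # Replace the declared stub with the wrapper.
                no strict 'refs';
                no warnings 'redefine';
                *{ $gv->STASH->NAME . '::' . $name } = $wrapper;
            }
            else {
                push @unknown, $attr;   # let perl complain about anything else
            }
        }
        return @unknown;
    }

    # Hypothetical usage inside a plperl function body:
    #   sub sp_something :pg_sp('bigint bigint');
    #   my $rows = sp_something(12, 0);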
Garick Hamlin <ghamlin@isc.upenn.edu> writes: > If you want 'a perl compile time hook', those are called attributes. > http://search.cpan.org/~dapm/perl-5.10.1/lib/attributes.pm Hm ... first question that comes to mind is how far back does that work? The comments on that page about this or that part of it being still experimental aren't very comforting either... regards, tom lane
Tom Lane wrote: > Garick Hamlin <ghamlin@isc.upenn.edu> writes: > >> If you want 'a perl compile time hook', those are called attributes. >> http://search.cpan.org/~dapm/perl-5.10.1/lib/attributes.pm >> > > Hm ... first question that comes to mind is how far back does that work? > The comments on that page about this or that part of it being still > experimental aren't very comforting either... > > > That's a case of out of date docco more than anything else, AFAIK. It's been there at least since 5.6.2 (which is the earliest source I have on hand). cheers andrew
On Jan 6, 2010, at 11:27 AM, Andrew Dunstan wrote:

> That's a case of out of date docco more than anything else, AFAIK. It's
> been there at least since 5.6.2 (which is the earliest source I have on
> hand).

Which likely also means 5.6.1 and quite possibly 5.6.0. I don't recommend anything earlier than 5.6.2, though, frankly, and 5.8.9 is a better choice. 5.10.1 better still. Is there a documented required minimum version for PL/Perl?

Best,

David
"David E. Wheeler" <david@kineticode.com> writes: > Which likely also means 5.6.1 and quite possibly 5.6.0. I don't recommend anything earlier than 5.6.2, though, frankly,and 5.8.9 is a better choice. 5.10.1 better still. Is there a documented required minimum version for PL/Perl? One of the things on my to-do list for today is to make configure reject Perl versions less than $TBD. I thought we had agreed a week or so back that 5.8 was the lowest safe version because of utf8 and other considerations. regards, tom lane
On Jan 6, 2010, at 12:20 PM, Tom Lane wrote: > One of the things on my to-do list for today is to make configure reject > Perl versions less than $TBD. I thought we had agreed a week or so back > that 5.8 was the lowest safe version because of utf8 and other > considerations. +1, and 5.8.3 at a minimum for utf8 stuff, 5.8.8 much much better. David
On Wed, Jan 06, 2010 at 01:45:45PM -0800, David E. Wheeler wrote: > On Jan 6, 2010, at 12:20 PM, Tom Lane wrote: > > > One of the things on my to-do list for today is to make configure reject > > Perl versions less than $TBD. I thought we had agreed a week or so back > > that 5.8 was the lowest safe version because of utf8 and other > > considerations. > > +1, and 5.8.3 at a minimum for utf8 stuff, 5.8.8 much much better. I think we said 5.8.1 at the time, but 5.8.3 sounds good to me. There would be _very_ few places using < 5.8.6. Tim.
On Wed, Jan 06, 2010 at 11:41:46AM -0500, Tom Lane wrote: > Andrew Dunstan <andrew@dunslane.net> writes: > > Next question: what do we do if a direct-called function calls > > return_next()? That at least must surely take effect in the caller's > > context - the callee won't have anywhere to stash the the results at all. > > Whatever do you mean by "take effect in the caller's context"? I surely > hope it's not "return the row to the caller's caller, who likely isn't > expecting anything of the kind". > > I suspect Tim will just answer that he isn't going to try to > short-circuit the call path for set-returning functions. For 8.5 I don't think I'll even attempt direct inter-plperl-calls. I'll just do a nicely-sugared wrapper around spi_exec_prepared(). Either via import, as I outlined earlier, or Garick Hamlin's suggestion of attributes - which is certainly worth exploring. Tim.
On Jan 6, 2010, at 3:31 PM, Tim Bunce wrote:

> For 8.5 I don't think I'll even attempt direct inter-plperl-calls.
>
> I'll just do a nicely-sugared wrapper around spi_exec_prepared().
> Either via import, as I outlined earlier, or Garick Hamlin's suggestion
> of attributes - which is certainly worth exploring.

If it's just the sugar, then in addition to the export, which is a great idea, I'd still like to have the AUTOLOAD solution, since there may be a bunch of different functions and I might not want to import them all.

Best,

David
Tim Bunce <Tim.Bunce@pobox.com> writes: > On Wed, Jan 06, 2010 at 01:45:45PM -0800, David E. Wheeler wrote: >> On Jan 6, 2010, at 12:20 PM, Tom Lane wrote: >>> One of the things on my to-do list for today is to make configure reject >>> Perl versions less than $TBD. I thought we had agreed a week or so back >>> that 5.8 was the lowest safe version because of utf8 and other >>> considerations. >> >> +1, and 5.8.3 at a minimum for utf8 stuff, 5.8.8 much much better. > I think we said 5.8.1 at the time, but 5.8.3 sounds good to me. > There would be _very_ few places using < 5.8.6. I went with 5.8 as the cutoff, for a couple of reasons: we're not in the business of telling people they ought to be up-to-date, but only of rejecting versions that demonstrably fail badly; and I found out that older versions of awk are not sufficiently competent with && and || to code a more complex test properly :-(. A version check that doesn't actually do what it claims to is worse than useless, and old buggy awk is exactly what you'd expect to find on a box with old buggy perl. (It's also worth noting that the perl version seen at configure time is not necessarily that seen at runtime, anyway, so there's not a lot of point in getting too finicky here.) regards, tom lane
On Jan 6, 2010, at 5:46 PM, Tom Lane wrote:

> I went with 5.8 as the cutoff, for a couple of reasons: we're not in
> the business of telling people they ought to be up-to-date, but only of
> rejecting versions that demonstrably fail badly; and I found out that
> older versions of awk are not sufficiently competent with && and || to
> code a more complex test properly :-(. A version check that doesn't
> actually do what it claims to is worse than useless, and old buggy awk
> is exactly what you'd expect to find on a box with old buggy perl.

Yes, but even a buggy old Perl is quite competent with && and ||. Why use awk to test the version of Perl when you have this other nice utility to do the job?

> (It's also worth noting that the perl version seen at configure time
> is not necessarily that seen at runtime, anyway, so there's not a lot
> of point in getting too finicky here.)

Fair enough.

Best,

David
On Wed, Jan 06, 2010 at 08:46:11PM -0500, Tom Lane wrote:
> Tim Bunce <Tim.Bunce@pobox.com> writes:
> > On Wed, Jan 06, 2010 at 01:45:45PM -0800, David E. Wheeler wrote:
> >> On Jan 6, 2010, at 12:20 PM, Tom Lane wrote:
> >>> One of the things on my to-do list for today is to make configure reject
> >>> Perl versions less than $TBD. I thought we had agreed a week or so back
> >>> that 5.8 was the lowest safe version because of utf8 and other
> >>> considerations.
> >>
> >> +1, and 5.8.3 at a minimum for utf8 stuff, 5.8.8 much much better.
>
> > I think we said 5.8.1 at the time, but 5.8.3 sounds good to me.
> > There would be _very_ few places using < 5.8.6.
>
> I went with 5.8 as the cutoff, for a couple of reasons: we're not in
> the business of telling people they ought to be up-to-date, but only of
> rejecting versions that demonstrably fail badly;

I think 5.8.0 will fail badly, possibly demonstrably but more likely in subtle ways relating to utf8 that are hard to debug.

> and I found out that
> older versions of awk are not sufficiently competent with && and || to
> code a more complex test properly :-(. A version check that doesn't
> actually do what it claims to is worse than useless, and old buggy awk
> is exactly what you'd expect to find on a box with old buggy perl.

Either of these approaches should work back to perl 5.0...

    perl -we 'use 5.008001' 2>/dev/null && echo ok

or

    perl -we 'exit($] < 5.008001)' && echo ok

> (It's also worth noting that the perl version seen at configure time
> is not necessarily that seen at runtime, anyway, so there's not a lot
> of point in getting too finicky here.)

A simple

    use 5.008001;

at the start of src/pl/plperl/plc_perlboot.pl would address that. I believe Andrew is planning to commit my plperl refactor patch soon. He could add it then, or I could add it to my feature patch (which I plan to reissue soon, with very minor changes, and add to commitfest).

Tim.