Error handling in plperl and pltcl - Mailing list pgsql-hackers
From | Tom Lane
---|---
Subject | Error handling in plperl and pltcl
Date |
Msg-id | 22415.1100901527@sss.pgh.pa.us
Responses | Re: Error handling in plperl and pltcl (3)
List | pgsql-hackers
plperl's error handling is not completely broken, but it's close :-(

Consider for example the following sequence on a machine with a relatively old Perl installation:

regression=# create or replace function foo(int) returns int as $$
regression$# return $_[0] + 1 $$ language plperl;
CREATE FUNCTION
regression=# select foo(10);
ERROR:  trusted perl functions disabled - please upgrade perl Safe module to at least 2.09
regression=# create or replace function foo(int) returns int as $$
regression$# return $_[0] + 1 $$ language plperlu;
CREATE FUNCTION
regression=# select foo(10);
ERROR:  creation of function failed: (in cleanup) Undefined subroutine &main::mkunsafefunc called at (eval 6) line 1.

What is happening here is that the elog() call that produced the "trusted perl functions disabled" message longjmp'd straight out of the Perl interpreter, without giving Perl any chance to clean up. Perl therefore still thinks it's executing inside the "Safe" module, wherein the mkunsafefunc() function can't be seen. You could probably devise much more spectacular failures than this one, given that Perl's internal state will be left in a mess.

We can deal with this in a localized fashion for plperl's elog() subroutine, by PG_CATCH'ing the longjmp and converting it into a Perl croak() call. However, it would be unsafe to do that for the spi_exec_query() subroutine, because then the writer of the Perl function might think he could trap the error with eval(). Which he mustn't do, because any breakage in Postgres' state won't get cleaned up: we have to go through a transaction or subtransaction abort to be sure we have cleaned up whatever mess the elog was complaining about.

Similar problems have plagued pltcl for a long time. pltcl's solution is to save whatever Postgres error was reported from a SPI operation, and to forcibly re-throw that error after we get control back from Tcl, even if the Tcl code tried to catch the error. Needless to say, this is gross, and anybody who runs into it is going to think it's a bug.

What I think we ought to do is change both PL languages so that every SPI call is executed as a subtransaction. If the call elogs, we can clean up by aborting the subtransaction, and then we can report the error message as a Perl or Tcl error condition, which the function author can trap if he chooses. If he doesn't choose to, then the language interpreter will return an error condition to plperl.c or pltcl.c, and we can re-throw the error.

This will slow down the PL SPI call operations in both languages, but AFAICS it's the only way to provide error handling semantics that aren't too broken for words.

The same observations apply to plpython, of course, but I'm not volunteering to fix that language because I'm not at all familiar with it. Perhaps someone who is can make the needed changes there.

Comments?

			regards, tom lane
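To make the subtransaction idea above concrete, here is a rough sketch of what a guarded SPI call inside plperl.c might look like. This is illustrative only, not committed code: the wrapper name plperl_spi_exec_guarded is made up, and the sketch assumes the backend's subtransaction and error-data facilities (BeginInternalSubTransaction, RollbackAndReleaseCurrentSubTransaction, CopyErrorData, FlushErrorState) plus Perl's croak() to hand the error back as an ordinary, trappable Perl exception.

/*
 * Sketch only: run one SPI command inside a subtransaction so that an
 * elog() longjmp can be caught, the subtransaction rolled back, and the
 * error re-thrown to Perl as a croak() the function author may trap
 * with eval{}.  Wrapper name is hypothetical.
 */
#include "postgres.h"

#include "access/xact.h"
#include "executor/spi.h"
#include "utils/memutils.h"
#include "utils/resowner.h"

/* Perl headers, for croak() */
#include "EXTERN.h"
#include "perl.h"
#include "XSUB.h"

static int
plperl_spi_exec_guarded(const char *query)
{
	MemoryContext oldcontext = CurrentMemoryContext;
	ResourceOwner oldowner = CurrentResourceOwner;
	int			rc = 0;

	BeginInternalSubTransaction(NULL);
	/* Run the command in the function's memory context, not the subxact's */
	MemoryContextSwitchTo(oldcontext);

	PG_TRY();
	{
		rc = SPI_exec(query, 0);

		/* Success: commit the inner subtransaction, return to outer state */
		ReleaseCurrentSubTransaction();
		MemoryContextSwitchTo(oldcontext);
		CurrentResourceOwner = oldowner;
	}
	PG_CATCH();
	{
		ErrorData  *edata;

		/* Save the error info, then clean up the failed subtransaction */
		MemoryContextSwitchTo(oldcontext);
		edata = CopyErrorData();
		FlushErrorState();
		RollbackAndReleaseCurrentSubTransaction();
		MemoryContextSwitchTo(oldcontext);
		CurrentResourceOwner = oldowner;

		/* Hand the error to Perl as a trappable exception */
		croak("%s", edata->message);
	}
	PG_END_TRY();

	return rc;
}

With something along these lines in place, a plperl function could wrap spi_exec_query() in eval{} and inspect $@, while an untrapped error would simply propagate back out of the interpreter and be re-thrown by plperl.c, which is the behavior the proposal describes.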