Thread: NOTICE messages during table drop
Found these messages in the log this morning. Can anyone explain why?

dropping tables for ucs...
NOTICE: RegisterSharedInvalid: SI buffer overflow
NOTICE: InvalidateSharedInvalid: cache state reset

TIA,
-km
"kurt miller" <miller_kurt_e@hotmail.com> writes: > Found these messages in the log this morning. > Can anyone explain why? > NOTICE: RegisterSharedInvalid: SI buffer overflow > NOTICE: InvalidateSharedInvalid: cache state reset Probably this indicates that you had another backend somewhere that had been sitting in an open transaction for a long time and therefore was not reading its SI "mail" about system table changes. Eventually the SI message buffer overflows and the above notices result. 7.0 recovers cleanly from an SI overflow, so the notices should be pretty harmless (and in a release or two they'll probably be removed, or at least downgraded to DEBUG level so they don't appear by default). But in prior releases this notice was often a harbinger of impending doom :-(, because the cache-reset code didn't really work reliably. If you see a *lot* of these during normal operations, you might have reason to be concerned about the performance lost due to all the cache flushes --- everybody pays for one backend's slowness when this happens. In that case it'd be worth figuring out why your clients are leaving backends sitting idle for long periods within open transaction blocks, and trying to avoid that. But an occasional SI overrun is normal and nothing to worry about ... at least not in 7.x. regards, tom lane
Re: Leaving transactions open for long periods. Was: NOTICE messages during table drop
From: Lincoln Yeoh
At 12:09 PM 29-06-2000 -0400, Tom Lane wrote:
>In that case it'd be worth figuring out why your clients are leaving
>backends sitting idle for long periods within open transaction blocks,
>and trying to avoid that. But an occasional SI overrun is normal and
>nothing to worry about ... at least not in 7.x.

Would sitting idle with open transactions really cause problems? Say nothing has been done in that transaction?

Because in one of my apps, I have a FastCGI Perl webapp which leaves a database connection open for performance reasons.

What I found is there's no explicit BEGIN in the Perl PostgreSQL DBI/DBD. When you rollback or commit, a BEGIN is automatically issued, e.g. $dbh->rollback() actually does a rollback;begin;

So after serving a webpage and then doing a rollback/commit, a new transaction is started automatically. And if no new FastCGI requests come in, the new transaction will just be kept open. And that could be for a very long time. Would this be a problem?

The program is something like this:

--- snippet
$dbh = DBI->connect('DBI:Pg(AutoCommit => 0):dbname=mydb',
                    'username', 'password')
    or htdie("Error connecting to database!", $DBI::errstr);

eval {
    while (($life > 0) and ($q = new CGI::Fast)) {
        # Main loop where we do the real stuff
        $dbh->rollback();    # Begin the transaction
        eval { mainprogram(); };
        if ($@) {
            dolog('ERROR', $@);
            $life -= 500;    # Life much reduced by bad incidents
        }
        else {
            $life--;
        }
        # Rollback stuff not explicitly committed
        $dbh->rollback();
    }
};
$dbh->rollback();
---- end snippet

What this does is connect to the database (after program initialization etc.), then sit in a loop, waiting for FastCGI requests, handling them, and then doing a rollback or commit.

Note: when a FastCGI request actually comes in, we do a rollback first before running the main program, because this is how we get the driver to start a new transaction with the current time[1]. Then I also prefer to rollback immediately after the main program, to avoid potential problems with locking and other stuff.

However this means a minimum of _two_ rollbacks per request, one right after another too :(. Not so satisfying. But it seems inevitable given the bundling of rollback/commit and begin in Perl DBI/DBD.

Cheerio,
Link.

[1] If not, the timestamps will be from the previous rollback, which could be a long time ago! That puzzled me for a while: before I did that, even after the timeout period had passed you could load _one_ screen, and only the next screen gave a time-out message :).
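If keeping a transaction open between requests does turn out to matter, one alternative shape for the loop is sketched below. This is a sketch only, assuming a DBI recent enough to provide begin_work; mainprogram() and dolog() are the same placeholder routines as in the snippet above. The idea is to run with AutoCommit on and open an explicit transaction only while a request is actually being handled:

--- sketch
use strict;
use DBI;
use CGI::Fast;

# AutoCommit on: between requests the connection is idle but *outside*
# any transaction block, so no open transaction is left parked while the
# process waits for the next FastCGI request.
my $dbh = DBI->connect('dbi:Pg:dbname=mydb', 'username', 'password',
                       { AutoCommit => 1, RaiseError => 1 });

while (my $q = CGI::Fast->new) {
    # Start the transaction only now, so now()/CURRENT_TIMESTAMP reflect
    # the time this request arrived rather than the previous rollback.
    $dbh->begin_work;

    eval {
        mainprogram();
        $dbh->commit;
    };
    if ($@) {
        dolog('ERROR', $@);
        $dbh->rollback;    # discard whatever the failed request did
    }
}

$dbh->disconnect;
---- end sketch

This gives one commit or rollback per request instead of two, and it also removes the need for the leading rollback mentioned in [1]: since PostgreSQL's now() is fixed at transaction start, starting the transaction when the request arrives gives fresh timestamps automatically.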