Thread: audit table containing Select statements submitted
I am trying desperately to find a way to create an audit table to log who SELECTED what, as I am dealing with HR data and health care patient data. Any help would be greatly appreciated and put to good use in the Austin, Texas hospitals.

Thanks

Jim

-----Original Message-----
From: Hogan, James F. Jr.
Sent: Thursday, May 04, 2006 12:46 PM
To: 'pgsql-sql@postgresql.org'; 'pgsql-general@postgresql.org'
Subject: audit table containing Select statements submitted

No response from the pgsql-admin list, so I thought I would try cross posting here:
pgsql-sql@postgresql.org
pgsql-general@postgresql.org

I just know I am not the first to try and do this.

Jim

*********************
Can anyone point me in a direction that may help me populate, in real time, a table that holds:

Current_user
Timestamp
"The Select Statement Submitted by the User"

I need to be able to determine who viewed what and when they viewed it.

I have considered the fact that the result from a SELECT yesterday may be different than the result set returned by the SAME SELECT statement today, but when used in conjunction with the INSERT, UPDATE, DELETE audit logging I have already created, the answers to who viewed what and when would be readily available.

I have been searching all morning and...

The only thing I find on logging of SELECT statements is that the information can be held in the log files... if logging is enabled.

As I am only interested in the statements presented against certain tables, turning on logging gives me more than I need or care to look through.

I could write a script to parse the log files into a database table, but would prefer to avoid enabling the file logging of statements if possible.

Thanks for any reference or help you may be able to provide.

Jim
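For concreteness, the table Jim describes might look like the sketch below. This is an illustrative assumption, not something from the thread; the table and column names are invented, and later examples in this thread refer back to it:

    CREATE TABLE select_audit (
        audit_user  text        NOT NULL DEFAULT current_user,
        audit_time  timestamptz NOT NULL DEFAULT now(),
        query_text  text        NOT NULL  -- the statement (or call) submitted
    );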
Jim,

> I am trying desperately to find a way to create an audit table to
> log who SELECTED what, as I am dealing with HR data and health care
> patient data.

Well, the issue with doing this by trigger or RULE is that unlike updates and deletes, SELECTs do *not* guarantee single execution. For example, if the table is on the inner side of a nested loop, the rule could be fired hundreds or thousands of times. This is the reason why we recommend against trying to build a trigger/RULE for SELECT auditing.

There are workarounds, though. One possibility, which I have used, is to not allow the application access to the base tables, but instead force it to use set-returning functions (SRFs). For example, instead of:

SELECT * FROM users NATURAL JOIN permissions WHERE name = 'Joe';

you would do:

SELECT * FROM view_users_perms(user, 'Joe');

The SRF can then easily log the select statement. This also provides you with the additional security of knowing that a user who hacks the database connection cannot launch ad hoc queries which the application would not allow. I've used this approach a lot for web applications for that reason.

Secondly, you can use the log. We've discussed on this list making it possible to log in CSV, XML or some other database-digestible format. Unfortunately, there doesn't appear to be much momentum behind that; I don't know that anyone is writing any code presently. Sponsorship?

In the immediate term, you can (others have done this) have the log stream to a parser which digests it and writes out different database-loadable files depending on the logged activity recorded. I don't know of any OSS code which does this, but you can probably get advice on the lists from people who have done it custom.

Good luck!

--Josh
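A minimal sketch of the SRF approach Josh describes, in modern PL/pgSQL and reusing the hypothetical select_audit table above (the function, role, and argument names are assumptions; RETURN QUERY postdates this thread but is the idiomatic spelling today):

    -- Log who asked and what they asked for, then return the rows.
    CREATE OR REPLACE FUNCTION view_users_perms(p_user text, p_name text)
    RETURNS SETOF users AS $$
    BEGIN
        INSERT INTO select_audit (audit_user, query_text)
        VALUES (p_user,
                'view_users_perms(' || quote_literal(p_name) || ')');
        RETURN QUERY SELECT * FROM users WHERE name = p_name;
    END;
    $$ LANGUAGE plpgsql SECURITY DEFINER;

    -- The application role never touches the base table directly:
    REVOKE ALL ON users FROM app_role;
    REVOKE ALL ON FUNCTION view_users_perms(text, text) FROM PUBLIC;
    GRANT EXECUTE ON FUNCTION view_users_perms(text, text) TO app_role;

SECURITY DEFINER lets the function read a table the caller cannot. Note that, as Tom points out later in the thread, the audit INSERT still disappears if the caller rolls back.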
Hogan, James F. Jr. wrote:
> I am trying desperately to find a way to create an audit table to
> log who SELECTED what, as I am dealing with HR data and health care
> patient data.
>
> Any help would be greatly appreciated and put to good use in the Austin,
> Texas hospitals.

Use statement logging and make sure each person is logging in with their own role. If you can't do that, then don't allow anyone to select anything except through set-returning functions; you can then log the application user from within those functions.

Sincerely,

Joshua D. Drake

--
=== The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive PostgreSQL solutions since 1997
http://www.commandprompt.com/
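For reference, statement logging is driven by a pair of postgresql.conf settings; a hedged example (the values are illustrative, though the prefix escapes shown are real):

    # Log every statement, and stamp each line with time, pid, user, database.
    log_statement   = 'all'
    log_line_prefix = '%t [%p] user=%u db=%d '

With each person connecting under their own role, %u ties every logged SELECT to an individual.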
Josh Berkus wrote:
>
> Secondly, you can use the log. We've discussed on this list making it
> possible to log in CSV, XML or some other database-digestible format.
> Unfortunately, there doesn't appear to be much momentum behind that; I
> don't know that anyone is writing any code presently. Sponsorship?
>

Well, let's think about it some first, before we line up $$ :-)

We really have three bits of the log: the prefix, the tag, and the message. Turning the prefix into whatever is needed is in the hands of the user; we could provide a corresponding log_line_suffix to allow XML element completion if necessary. The tag could likewise easily be XMLized (or CSVized, or whatever). The real problem is the message, which is now, from the logging code's point of view, basically an opaque string. Changing that would be a massive undertaking, especially when you think of the effect on the translators. And first we would need to come up with a set of fields, or several sets of fields, that we wanted to use.

The reason I haven't gone down this road, and just did log_line_prefix, is that it strikes me as too inflexible. I think postprocessing is probably a better way to go, and just leave the messages opaque from postgres' point of view.

If someone has a better proposal, let's see an example of how all the various messages would be handled.

cheers

andrew
Andrew,

> The real problem is the message, which is now
> from the logging code's point of view basically an opaque string.
> Changing that would be a massive undertaking, especially when you think
> of the effect on the translators.

Hmmm ... I don't see this as a problem. Just stick the whole message into a single XML field. This is one area where XML is easier than SQL; since it's a document format, it has no problem with a great big blob of text. "Unstructured data" and all that nonsense.

Then whatever utility the user uses to *read* the XML can parse the message according to the user's desires. It'll still be an improvement over the current format for log digestion, since it will become easy to separate the message from the prefix and tag (which currently it's not).

The only real issue I see is the possibility of XML markup embedded in the text, but that seems minor.

--
--Josh

Josh Berkus
PostgreSQL @ Sun
San Francisco
Josh Berkus wrote:
> Andrew,
>
>> The real problem is the message, which is now
>> from the logging code's point of view basically an opaque string.
>> Changing that would be a massive undertaking, especially when you think
>> of the effect on the translators.
>
> Hmmm ... I don't see this as a problem. Just stick the whole message into
> a single XML field. This is one area where XML is easier than SQL; since
> it's a document format, it has no problem with a great big blob of text.
> "Unstructured data" and all that nonsense.
>
> Then whatever utility the user uses to *read* the XML can parse the message
> according to the user's desires. It'll still be an improvement over the
> current format for log digestion, since it will become easy to separate
> the message from the prefix and tag (which currently it's not).
>
> The only real issue I see is the possibility of XML markup embedded in the
> text, but that seems minor.

Well, we could either XML-escape the message or put it in a CDATA block. The latter would arguably be more humanly readable.

Given that, I think we could get away with a single GUC variable to govern this: log_format, with possible values (to start with, at least) of 'plain' and 'xml'. The user could just set up log_line_prefix as an XML attribute string. So we'd have something like:

<pglog level="LOG" user="andrew" dbname="blurfl"><![CDATA[
statement: select count(*) from foo where a < b ;
]]></pglog>

Now, what were you saying about sponsorship? ;-)

cheers

andrew
On Fri, May 12, 2006 at 02:43:56PM -0400, Andrew Dunstan wrote:
> Well, we could either XML-escape the message or put it in a CDATA block.
> The latter would arguably be more humanly readable.
>
> Given that, I think we could get away with a single GUC variable to govern
> this: log_format, with possible values (to start with, at least) of
> 'plain' and 'xml'.

I'm wondering if there would be value in allowing for a second, separate log stream...

--
Jim C. Nasby, Sr. Engineering Consultant      jnasby@pervasive.com
Pervasive Software      http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf       cell: 512-569-9461
Josh Berkus <josh@agliodbs.com> writes:
> Well, the issue with doing this by trigger or RULE is that unlike
> updates and deletes, SELECTs do *not* guarantee single execution. For
> example, if the table is on the inner side of a nested loop, the rule
> could be fired hundreds or thousands of times. This is the reason why we
> recommend against trying to build a trigger/RULE for SELECT auditing.

There's an even more significant reason why you can't rely on within-the-database logging if you want to track SELECTs. Imagine you have a trigger or whatever that tries to log what I do. I just do:

begin; select something-I-shouldn't-know; rollback;

I just covered my tracks quite effectively, because the ROLLBACK canceled any and all side effects of my transaction. But (unlike if I'd rolled back an update) I still know what I found out.

So, if you want to log accesses to info as opposed to updates, you really have to use something outside the SQL universe. I concur with Josh's suggestion to rely on reading the postmaster log.

regards, tom lane
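Spelled out against the hypothetical select_audit setup above (the table and column names are illustrative):

    BEGIN;
    SELECT ssn FROM patients WHERE id = 42;  -- in-database audit record queued
    ROLLBACK;  -- the audit INSERT vanishes with the transaction,
               -- but the reader has already seen the data

Any audit trail written inside the reader's own transaction shares its fate, which is why Tom pushes the logging outside SQL entirely.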
Josh Berkus <josh@agliodbs.com> writes:
> Hmmm ... I don't see this as a problem. Just stick the whole message into
> a single XML field. This is one area where XML is easier than SQL; since
> it's a document format, it has no problem with a great big blob of text.
> "Unstructured data" and all that nonsense.

> Then whatever utility the user uses to *read* the XML can parse the message
> according to the user's desires. It'll still be an improvement over the
> current format for log digestion, since it will become easy to separate
> the message from the prefix and tag (which currently it's not).

This argument strikes me as nonsense. You've got a utility that's smart enough to parse the very-free-format message bodies, but it's going to be too confused by the log line prefix?

> The only real issue I see is the possibility of XML markup embedded in the
> text, but that seems minor.

Hardly minor, as anyone who is so in love with XML that he'd want to use it for this would likely also have lots of XML tags in his data.

regards, tom lane
Tom,

> This argument strikes me as nonsense. You've got a utility that's smart
> enough to parse the very-free-format message bodies, but it's going to
> be too confused by the log line prefix?

Having tinkered a little with PQA, yes, actually. The "message" text can easily be multi-line and contain a vast variety of special characters. The hard part is figuring out where the prefix, the tag and the message begin and end, and our text log format makes that a PITA.

--
--Josh

Josh Berkus
PostgreSQL @ Sun
San Francisco
Tom Lane wrote:
> Josh Berkus <josh@agliodbs.com> writes:
>> Hmmm ... I don't see this as a problem. Just stick the whole message into
>> a single XML field.
>
> This argument strikes me as nonsense. You've got a utility that's smart
> enough to parse the very-free-format message bodies, but it's going to
> be too confused by the log line prefix?

Not that Tom's dissent isn't enough, but I have to agree. It is very easy to set up a parser for the log, and XML is just going to add noise.

Joshua D. Drake

--
=== The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive PostgreSQL solutions since 1997
http://www.commandprompt.com/
Josh Berkus wrote:
> Having tinkered a little with PQA, yes, actually. The "message" text can
> easily be multi-line and contain a vast variety of special characters.
> The hard part is figuring out where the prefix, the tag and the message
> begin and end, and our text log format makes that a PITA.

Hmmm... well, why don't we add log_line_suffix :)

Joshua D. Drake

--
=== The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive PostgreSQL solutions since 1997
http://www.commandprompt.com/
"Joshua D. Drake" <jd@commandprompt.com> writes: > Hmmm... well why don't we add log_line_suffix :) Doesn't help; you'd also need logic in there to quote any XML tags appearing in the message text. At that point, adding a "log_line_suffix" is a transparent pretense of generality --- what you might as well do is just have a full-fledged "emit the log in XML" switch. (I concur with Andrew's comments that this is pretty silly, unless someone wants to go to the further work of XML-ifying the message contents to some reasonable extent. If you are going to have to write a parser to make sense of the message contents, it is really pretty lame to claim that you can't cope with parsing the current log format as-is.) regards, tom lane
You could do this using Veil, http://pgfoundry.org/projects/veil/, or something like it. A Veil access function, http://veil.projects.postgresql.org/curdocs/overview-page.html, could be used to record every row returned within a query to the user that requested it.

Note that this operates at the level of fetches and not the result set, meaning that a query like:

select stuff from a where exists (select 1 from b where....);

would record a fetch against b.

The basic trick is to replace table_that_you_want_audited with a view that does something like:

select * from table_that_you_want_audited where audit_this_fetch(row_identifier);

You will also need instead-of triggers for insert, update and delete on the view. A sketch of the pattern follows this message.

__
Marc

On Fri, 2006-05-12 at 14:19 -0300, pgsql-hackers-owner@postgresql.org wrote:
> Can anyone point me in a direction that may help me populate, in real
> time, a table that holds:
>
> Current_user
> Timestamp
> "The Select Statement Submitted by the User"
>
> I need to be able to determine who viewed what and when they viewed it.
> [...]
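A bare-bones sketch of that view-plus-access-function pattern, minus everything Veil itself adds (all names are illustrative, and select_audit is the hypothetical table from earlier in the thread):

    -- Must be VOLATILE so the planner evaluates it for every row fetched.
    CREATE OR REPLACE FUNCTION audit_this_fetch(row_id integer)
    RETURNS boolean AS $$
    BEGIN
        INSERT INTO select_audit (query_text)
        VALUES ('patients row ' || row_id || ' fetched');
        RETURN true;  -- never filters anything; it only records the fetch
    END;
    $$ LANGUAGE plpgsql VOLATILE;

    -- Rename the real table out of the way and put the view in its place.
    ALTER TABLE patients RENAME TO patients_base;
    CREATE VIEW patients AS
        SELECT * FROM patients_base
        WHERE audit_this_fetch(patients_base.id);

One fetched row means one audit record, so a query that touches the table thousands of times logs thousands of rows; and the ROLLBACK caveat from earlier in the thread still applies.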
Thank you all for the effort you put into responding.

The biggest thing I want to avoid isn't so much having to parse through the log files, but turning on such extensive logging altogether. I am not sure what kind of additional load logging to this extent may add.

Looks like I am not going to have much in the way of alternatives. Maybe some day.

The good news is that most access is via a web interface, and I capture most activity that way.

Again, thank you all.

Jim

-----Original Message-----
From: Tom Lane [mailto:tgl@sss.pgh.pa.us]
Sent: Friday, May 12, 2006 8:03 PM
To: Joshua D. Drake
Cc: josh@agliodbs.com; Andrew Dunstan; Hogan, James F. Jr.; pgsql-hackers@postgresql.org
Subject: Re: [HACKERS] audit table containing Select statements submitted

"Joshua D. Drake" <jd@commandprompt.com> writes:
> Hmmm... well, why don't we add log_line_suffix :)

Doesn't help; you'd also need logic in there to quote any XML tags appearing in the message text. At that point, adding a log_line_suffix is a transparent pretense of generality --- what you might as well do is just have a full-fledged "emit the log in XML" switch. [...]

regards, tom lane
How do you hope to avoid this overhead when you're looking to track information on every single SELECT statement? Or were you looking to log access to only some specific tables?

On Mon, May 15, 2006 at 09:38:54AM -0500, Hogan, James F. Jr. wrote:
> Thank you all for the effort you put into responding.
>
> The biggest thing I want to avoid isn't so much having to parse through
> the log files, but turning on such extensive logging altogether. I am
> not sure what kind of additional load logging to this extent may add.
>
> Looks like I am not going to have much in the way of alternatives.
> Maybe some day.
>
> The good news is that most access is via a web interface, and I capture
> most activity that way.

--
Jim C. Nasby, Sr. Engineering Consultant      jnasby@pervasive.com
Pervasive Software      http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf       cell: 512-569-9461
Only specific tables.

Of the 150-plus tables that exist, there are only 8 or 10 that hold sensitive data. This will grow over time, but it will always be the minority.

-----Original Message-----
From: Jim C. Nasby [mailto:jnasby@pervasive.com]
Sent: Monday, May 15, 2006 10:42 AM
To: Hogan, James F. Jr.
Cc: Tom Lane; Joshua D. Drake; josh@agliodbs.com; Andrew Dunstan; pgsql-hackers@postgresql.org
Subject: Re: [HACKERS] audit table containing Select statements submitted

How do you hope to avoid this overhead when you're looking to track information on every single SELECT statement? Or were you looking to log access to only some specific tables? [...]
On Mon, May 15, 2006 at 10:55:43AM -0500, Hogan, James F. Jr. wrote:
> Only specific tables.
>
> Of the 150-plus tables that exist, there are only 8 or 10 that hold
> sensitive data.

In that case I'd definitely go with the suggestion of creating access functions and logging to a table from within them. Just make sure to mark the functions as VOLATILE, so the planner can't fold or skip calls and silently lose the logging.

--
Jim C. Nasby, Sr. Engineering Consultant      jnasby@pervasive.com
Pervasive Software      http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf       cell: 512-569-9461
"Jim C. Nasby" <jnasby@pervasive.com> writes: > On Mon, May 15, 2006 at 10:55:43AM -0500, Hogan, James F. Jr. wrote: >> Only specific tables. >> >> Of the 150 plus existing there are only 8 or 10 that hold sensitive >> data. > > In that case I'd definately go with the suggestion of creating access > functions and logging to a table from within them. Just make sure to > mark the functions as volatile. But what if the user calls the access function, sees the data, then issues a ROLLBACK? The audit record would be rolled back as well (as Tom pointed out earlier). You could use dblink to log to a separate audit database, I suppose. -Doug
On Mon, May 15, 2006 at 12:37:34PM -0400, Douglas McNaught wrote:
> But what if the user calls the access function, sees the data, then
> issues a ROLLBACK? The audit record would be rolled back as well (as
> Tom pointed out earlier).
>
> You could use dblink to log to a separate audit database, I suppose.

Oops, forgot about that. Yeah, you'd have to use dblink. If it works with pgpool, performance might not be too horrid.

--
Jim C. Nasby, Sr. Engineering Consultant      jnasby@pervasive.com
Pervasive Software      http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf       cell: 512-569-9461
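A hedged sketch of the dblink route (the connection string, role, and select_audit table are assumptions; dblink is a contrib module that must be installed first). Because the INSERT commits on its own connection, it survives the caller's ROLLBACK:

    CREATE OR REPLACE FUNCTION audit_out_of_band(msg text)
    RETURNS void AS $$
    BEGIN
        -- dblink_exec opens a second connection and runs the statement
        -- there; that connection commits independently of ours.
        PERFORM dblink_exec(
            'dbname=audit_db user=audit_writer',
            'INSERT INTO select_audit (audit_user, query_text) VALUES ('
                || quote_literal(current_user) || ', '
                || quote_literal(msg) || ')');
    END;
    $$ LANGUAGE plpgsql VOLATILE;

Opening a connection per call is the expensive part, hence the pgpool idea; keeping a named connection around with dblink_connect() would amortize that cost.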
> Having tinkered a little with PQA, yes, actually. The "message" text can
> easily be multi-line and contain a vast variety of special characters.
> The hard part is figuring out where the prefix, the tag and the message
> begin and end, and our text log format makes that a PITA.

Try pgfouine...
Doug,

> But what if the user calls the access function, sees the data, then
> issues a ROLLBACK? The audit record would be rolled back as well (as
> Tom pointed out earlier).
>
> You could use dblink to log to a separate audit database, I suppose.

Or just write to some other non-transactional resource, like a text file. That would require the use of an external "untrusted" PL, though (like PL/PerlU, PL/sh or PL/PythonU).

--
Josh Berkus
Aglio Database Solutions
San Francisco
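A sketch of that route in PL/PerlU, one of the untrusted languages Josh names (the file path and function name are assumptions, and the language must be installed by a superuser). Appending to a flat file happens outside the database, so the record survives a ROLLBACK:

    CREATE OR REPLACE FUNCTION audit_to_file(msg text)
    RETURNS void AS $$
        my ($msg) = @_;
        # Non-transactional side effect: a ROLLBACK cannot undo this write.
        open(my $fh, '>>', '/var/log/pg_select_audit.log')
            or elog(ERROR, "audit log open failed: $!");
        print $fh scalar(localtime), " $msg\n";
        close($fh);
    $$ LANGUAGE plperlu VOLATILE;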
Do we have any plans to introduce an 'AUTONOMOUS TRANSACTION' feature?

Again, it might not be part of the standard, but it is very helpful in situations like these! You can run a trigger with an autonomous transaction attached to it, which guarantees that the work done by the trigger persists even if the calling transaction rolls back (potentially a hacker trying to cover his tracks).

(http://asktom.oracle.com/~tkyte/autonomous/index.html)

Gurjeet.

On 5/16/06, Josh Berkus <josh@agliodbs.com> wrote:
> Doug,
>
> > But what if the user calls the access function, sees the data, then
> > issues a ROLLBACK? The audit record would be rolled back as well (as
> > Tom pointed out earlier).
"Gurjeet Singh" <singh.gurjeet@gmail.com> writes: > Do we have any plans of introducing 'AUTONOMOUS TRANSACTION' like feature? No. regards, tom lane
Some users of PL/Java make use of a non-default connection from within a trigger in order to do this. In essence, they load the client JDBC package into the backend to let the backend itself become a client. The second connection is then maintained for the lifetime of the first. Perhaps not the most efficient way of doing it, but it works.

Regards,

Thomas Hallgren

Gurjeet Singh wrote:
> Do we have any plans to introduce an 'AUTONOMOUS TRANSACTION' feature?
>
> Again, it might not be part of the standard, but it is very helpful in
> situations like these! You can run a trigger with an autonomous
> transaction attached to it, which guarantees that the work done by the
> trigger persists even if the calling transaction rolls back (potentially
> a hacker trying to cover his tracks).
Thomas Hallgren <thomas@tada.se> writes:
> Some users of PL/Java make use of a non-default connection from within a
> trigger in order to do this. In essence, they load the client JDBC
> package into the backend to let the backend itself become a client. The
> second connection is then maintained for the lifetime of the first.
> Perhaps not the most efficient way of doing it, but it works.

And you can do the same thing with any of the PL languages that have database drivers, like Perl or Python. It might be a little less inefficient using one of them -- and probably a lot less code.

You should be aware of the risk of deadlocks if you touch the same resources. Because the database is unaware that your main transaction is waiting for this other session to complete, it won't be able to detect any deadlocks that depend on this hidden dependency.

--
greg
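A concrete instance of the hidden-dependency deadlock Greg warns about, sketched in SQL comments (the table and function names are illustrative):

    -- Session 1 (the application's transaction):
    BEGIN;
    UPDATE audit_counters SET reads = reads + 1 WHERE tbl = 'patients';
    SELECT * FROM view_users_perms(user, 'Joe');  -- internally opens connection 2
    -- Connection 2 now tries to run:
    --   UPDATE audit_counters SET reads = reads + 1 WHERE tbl = 'patients';
    -- and blocks on session 1's row lock. Session 1, meanwhile, is blocked
    -- waiting for its function call to return. Each side waits on the other,
    -- but the deadlock detector sees two apparently unrelated backends and
    -- never breaks the cycle.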
Just a small example of the fact that people need such functionality, and will devise other ways, albeit inefficient and dangerous, to implement the missing feature.

The success of an RDBMS (or any other product, for that matter) depends on how well it strikes the balance between implementing the standards and giving users what they need.

Gurjeet.

On 17 May 2006 02:31:20 -0400, Greg Stark <gsstark@mit.edu> wrote:
> And you can do the same thing with any of the PL languages that have
> database drivers, like Perl or Python. It might be a little less
> inefficient using one of them -- and probably a lot less code.
>
> You should be aware of the risk of deadlocks if you touch the same
> resources. Because the database is unaware that your main transaction is
> waiting for this other session to complete, it won't be able to detect
> any deadlocks that depend on this hidden dependency.
>
> --
> greg