Thread: AutoCommit and DDL
I have a generic function that creates partition tables and their
corresponding indexes and constraints. I've tested the function using a
GUI and it works great. We implemented it in our data loaders (written
in Perl), and the DB connections have AutoCommit turned off. When we run
it in the Perl script, I see the code being executed to create the
tables, etc., but they are never created, even with a commit at the end.
It appears something is causing the table creation to get rolled back.

I turned AutoCommit on and re-ran the Perl script and the tables get
created. Why won't these tables get created when AutoCommit is off? Why
do I have to commit DDL?

I'm running 7.4.3.

Thanks.

-Don

--
Donald Drake
President
Drake Consulting
http://www.drakeconsult.com/
http://www.MailLaunder.com/
312-560-1574
On Sat, Feb 26, 2005 at 02:56:52PM -0600, Don Drake wrote:

> I turned AutoCommit on and re-ran the Perl script and the tables get
> created. Why won't these tables get created when AutoCommit is off?
> Why do I have to commit DDL?

If you're using DBI, are you using PrintError or RaiseError or otherwise
checking the success of each command? My first guess would be that some
command is failing (e.g., a DROP of an object that doesn't exist), so
the transaction is automatically rolled back even though you requested
a commit.

--
Michael Fuhr
http://www.fuhr.org/~mfuhr/
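As an illustration of the failure mode Michael describes (a minimal
sketch only; the table name and connection details are assumptions, not
Don's actual loader code): with AutoCommit off and no error checking,
one failed statement aborts the transaction, and the server rolls back
everything, including the CREATE TABLE, when the COMMIT arrives.

#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# RaiseError and PrintError off, so failures must be checked by hand.
my $dbh = DBI->connect("dbi:Pg:dbname=test", "test", "",
                       {AutoCommit => 0, RaiseError => 0, PrintError => 0});

$dbh->do("DROP TABLE foo")                    # fails if foo doesn't exist...
    or warn "DROP failed: " . $dbh->errstr;

$dbh->do("CREATE TABLE foo (x integer)")      # ...so this runs in an aborted
    or warn "CREATE failed: " . $dbh->errstr; # transaction and fails as well

$dbh->commit;       # the server turns this COMMIT into a ROLLBACK
$dbh->disconnect;

With RaiseError => 1 the failed DROP would die immediately, making the
real problem obvious instead of leaving a silently missing table.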
I know it's not failing, I have the server logging the commands and
there are no errors.

The only change made was turning AutoCommit on.

-Don

On Sat, 26 Feb 2005 21:20:43 -0700, Michael Fuhr <mike@fuhr.org> wrote:
> On Sat, Feb 26, 2005 at 02:56:52PM -0600, Don Drake wrote:
>
> > I turned AutoCommit on and re-ran the Perl script and the tables get
> > created. Why won't these tables get created when AutoCommit is off?
> > Why do I have to commit DDL?
>
> If you're using DBI, are you using PrintError or RaiseError or
> otherwise checking the success of each command? My first guess
> would be that some command is failing (e.g., a DROP of an object
> that doesn't exist), so the transaction is automatically rolled
> back even though you requested a commit.

--
Donald Drake
President
Drake Consulting
http://www.drakeconsult.com/
http://www.MailLaunder.com/
312-560-1574
I did some traces and it all looks OK.

The problem has to do with multiple concurrent connections to the
server causing problems. I've removed the concurrent connections and
now this works. Strange.

-Don

On Sun, 27 Feb 2005 12:54:34 -0700, Michael Fuhr <mike@fuhr.org> wrote:
> On Sun, Feb 27, 2005 at 11:55:37AM -0600, Don Drake wrote:
>
> > I know it's not failing, I have the server logging the commands and
> > there are no errors.
> >
> > The only change made was turning AutoCommit on.
>
> Have you used any of DBI's tracing capabilities? Could you post a
> simple test case? The following works for me with Perl 5.8.6, DBI
> 1.47, DBD::Pg 1.32, and PostgreSQL 7.4.7 on FreeBSD 4.11-STABLE:
>
> #!/usr/bin/perl
>
> use strict;
> use warnings;
> use DBI;
>
> my $dbh = DBI->connect("dbi:Pg:dbname=test", "mfuhr", "", {AutoCommit => 0});
> $dbh->do("CREATE TABLE foo (x integer)");
> $dbh->commit;
> $dbh->disconnect;

--
Donald Drake
President
Drake Consulting
http://www.drakeconsult.com/
http://www.MailLaunder.com/
312-560-1574
On Sun, Feb 27, 2005 at 07:55:35PM -0600, Don Drake wrote:

> The problem has to do with multiple concurrent connections to the
> server causing problems. I've removed the concurrent connections and
> now this works. Strange.

Can you elaborate? Can you describe the scenario with enough detail
that somebody else could attempt to duplicate it? It's certainly
possible to issue DDL statements concurrent with other connections,
so we still haven't identified what's really causing the problem.
Investigation shouldn't end until "strange" becomes "aha!"

--
Michael Fuhr
http://www.fuhr.org/~mfuhr/
On Sun, Feb 27, 2005 at 11:55:37AM -0600, Don Drake wrote:

> I know it's not failing, I have the server logging the commands and
> there are no errors.
>
> The only change made was turning AutoCommit on.

Have you used any of DBI's tracing capabilities? Could you post a
simple test case? The following works for me with Perl 5.8.6, DBI
1.47, DBD::Pg 1.32, and PostgreSQL 7.4.7 on FreeBSD 4.11-STABLE:

#!/usr/bin/perl

use strict;
use warnings;
use DBI;

my $dbh = DBI->connect("dbi:Pg:dbname=test", "mfuhr", "", {AutoCommit => 0});
$dbh->do("CREATE TABLE foo (x integer)");
$dbh->commit;
$dbh->disconnect;

--
Michael Fuhr
http://www.fuhr.org/~mfuhr/
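For reference, a small sketch of the tracing Michael mentions (the trace
file path and database name are assumptions): DBI can log every method
call and the driver's conversation with the server, which makes it easy
to confirm whether a COMMIT, or an unexpected ROLLBACK, was actually
sent.

#!/usr/bin/perl
use strict;
use warnings;
use DBI;

DBI->trace(1);                          # global trace of all handles to STDERR

my $dbh = DBI->connect("dbi:Pg:dbname=test", "test", "", {AutoCommit => 0});
$dbh->trace(2, "/tmp/ddl_trace.log");   # more detail, this handle only, to a file
$dbh->do("CREATE TABLE foo (x integer)");
$dbh->commit;
$dbh->disconnect;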
I don't think it would be easy to duplicate since our code base is
quite extensive.

Basically, what was happening was a script would first open a database
connection (AutoCommit turned off by default), create a few objects
(that also opened independent db connections), the objects would run
queries so they have data populated, an insert is done and committed,
then we call a generic function that will create a new table (using
inherits, part of our partitioning) as well as adding indexes and
constraints to this new table.

It would get to a point in the function where it was adding a FK
constraint, and every query against the table would "hang", which
appeared to be some exclusive lock not being released. Activity on the
DB would be 100% idle during this period; the ALTER TABLE never came
back, so we killed it each time. I commented out the code doing the FK
constraint add and everything worked just fine.

As a test I moved the partition function call to the beginning of the
script (before the objects were created) and it worked just fine. I
then changed the object declarations to pass in the single DB handle,
and everything now works just fine.

I ran DBI traces and everything looked just fine. This was a strange
problem; I'm just happy everything is working.

-Don

On Sun, 27 Feb 2005 20:33:55 -0700, Michael Fuhr <mike@fuhr.org> wrote:
> On Sun, Feb 27, 2005 at 07:55:35PM -0600, Don Drake wrote:
>
> > The problem has to do with multiple concurrent connections to the
> > server causing problems. I've removed the concurrent connections and
> > now this works. Strange.
>
> Can you elaborate? Can you describe the scenario with enough detail
> that somebody else could attempt to duplicate it? It's certainly
> possible to issue DDL statements concurrent with other connections,
> so we still haven't identified what's really causing the problem.
> Investigation shouldn't end until "strange" becomes "aha!"

--
Donald Drake
President
Drake Consulting
http://www.drakeconsult.com/
http://www.MailLaunder.com/
312-560-1574
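A rough sketch of that last change (package, table, and function names
are invented for illustration; this is not the actual loader code): the
helper object reuses the script's handle instead of opening its own
connection, so the queries, the insert, and the later DDL all run on one
connection and cannot block behind a lock held by a sibling connection
of the same script.

#!/usr/bin/perl
use strict;
use warnings;
use DBI;

package Loader::Stats;
sub new {
    my ($class, %args) = @_;
    return bless { dbh => $args{dbh} }, $class;   # reuse the caller's handle
}
sub populate {
    my $self = shift;
    return $self->{dbh}->selectrow_array("SELECT count(*) FROM pg_class");
}

package main;
my $dbh = DBI->connect("dbi:Pg:dbname=test", "test", "",
                       {AutoCommit => 0, RaiseError => 1});

my $stats = Loader::Stats->new(dbh => $dbh);   # no second connection opened
my $count = $stats->populate();

# The partitioning DDL (CREATE TABLE ... INHERITS, ALTER TABLE ... ADD
# FOREIGN KEY, index creation) now runs on the same handle, inside the
# same transaction, so it cannot wait on locks held by this script.
$dbh->commit;
$dbh->disconnect;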
On 2005-03-01, Don Drake <dondrake@gmail.com> wrote:

> I don't think it would be easy to duplicate since our code base is
> quite extensive.
>
> Basically, what was happening was a script would first open a database
> connection (AutoCommit turned off by default), create a few objects
> (that also opened independent db connections), the objects would run
> queries so they have data populated, an insert is done and committed,
> then we call a generic function that will create a new table (using
> inherits, part of our partitioning) as well as adding indexes and
> constraints to this new table. It would get to a point in the
> function where it was adding a FK constraint, and every query against
> the table would "hang", which appeared to be some exclusive lock not
> being released. Activity on the DB would be 100% idle during this
> period; the ALTER TABLE never came back, so we killed it each time. I
> commented out the code doing the FK constraint add and everything
> worked just fine.

This sounds as though your application deadlocked against itself. By
using multiple connections without AutoCommit, you can easily get into
situations where you are waiting for completion on one connection, which
is blocked waiting for a lock held by another connection; the lock
remains until the second connection commits, which never happens since
the app is waiting on the first.

The DB can't detect this as a deadlock because it does not know that one
session is waiting on another on the client side; deadlock detection
considers only sessions waiting _inside the server_.

This situation isn't specific to DDL, but it is easier to produce that
way since most DDL operations acquire very high-level locks (often
AccessExclusive, which blocks queries).

> As a test I moved the partition function call to the beginning of the
> script (before the objects were created) and it worked just fine. I
> then changed the object declarations to pass in the single DB handle,
> and everything now works just fine.

This is consistent with it being a client-side deadlock.

--
Andrew, Supernews
http://www.supernews.com - individual and corporate NNTP services
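As an illustration of the self-deadlock Andrew describes (a minimal
sketch only; the database, table, and column names are assumptions, and
it relies on the strong locks that ADD FOREIGN KEY takes in 7.4-era
releases):

#!/usr/bin/perl
use strict;
use warnings;
use DBI;

my $dsn = "dbi:Pg:dbname=test";    # assumes parent/child tables already exist
my $dbh_a = DBI->connect($dsn, "test", "", {AutoCommit => 0, RaiseError => 1});
my $dbh_b = DBI->connect($dsn, "test", "", {AutoCommit => 0, RaiseError => 1});

# Connection A touches the referenced table and leaves its transaction
# open, so it keeps a lock on "parent" until it commits.
$dbh_a->do("INSERT INTO parent (id) VALUES (1)");

# Connection B now adds a foreign key referencing "parent".  The ALTER
# needs a stronger lock on "parent" than A's open transaction allows, so
# it blocks.  The script is single-threaded: it sits here waiting on B,
# A never commits, and the statement "hangs" forever.  The server sees
# no deadlock because half of the cycle is on the client side.
$dbh_b->do("ALTER TABLE child
            ADD CONSTRAINT child_parent_fk
            FOREIGN KEY (parent_id) REFERENCES parent (id)");

$dbh_a->commit;    # never reached
$dbh_b->commit;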