Thread: drop/rename table and transactions
Hi,

Every now and then I get the following error:

    cannot write block 0 of tablename [username] blind

If this happens, all my database connections get this error when trying
to access the database and I need to restart PostgreSQL. The problem
causing this error must be something like this:

    create table table2 (col1 text);
    insert into table2 (col1) values ('some data');
    begin work;
    drop table table1;
    alter table table2 rename to table1;
    commit;

I've been playing with some statements to repeat the error, but I
haven't been able to successfully repeat it (only once, but doing the
same thing didn't cause the error again). Maybe it has something to do
with using multiple connections.

Trying some statements I found another error, using these statements:

    create table table1 (col1 text);
    begin work;
    drop table table1;
    alter table table2 rename to table1;
    create table table2 (col1 text);
    commit;
    select * from table1;

This caused:

    couldn't open table1: No such file or directory

I'm using postgresql-6.5.2-1.i386.rpm. Is the PostgreSQL development
team aware of these errors, and will they be fixed?

Gr. Jaco
>If this happens, all my database connections get this error when trying
>to access the database and I need to restart PostgreSQL. The problem
>causing this error must be something like this:
>
>    create table table2 (col1 text);
>    insert into table2 (col1) values ('some data');
>    begin work;
>    drop table table1;
>    alter table table2 rename to table1;
>    commit;

What happens if two different connections try to "drop table table1"?
Try doing it step by step in two different connections.

I did a vaguely similar thing: a BEGIN, DROP TABLE, ROLLBACK. And yes, I
know you're not supposed to be able to roll back a DROP TABLE, but
afterwards I could not recreate the table; I had to delete the table's
file from the filesystem by hand. If a connection gets broken during a
transaction and you get a rollback, you could encounter the same
problem.

I did what I did because I was curious. But creating/dropping tables in
a transaction does not appear to be a "good thing", and that's not just
for PostgreSQL. I believe doing data definition stuff in transactions is
not recommended.

Is it possible to achieve your goals by using something like

    delete from table1 where id != stuffIwant;

instead of dropping the table?

Cheerio,

Link.
Lincoln Yeoh wrote:
> >If this happens, all my database connections get this error when
> >trying to access the database and I need to restart PostgreSQL. The
> >problem causing this error must be something like this:
> >
> >    create table table2 (col1 text);
> >    insert into table2 (col1) values ('some data');
> >    begin work;
> >    drop table table1;
> >    alter table table2 rename to table1;
> >    commit;
>
> I did what I did because I was curious. But creating/dropping tables
> in a transaction does not appear to be a "good thing", and that's not
> just for PostgreSQL. I believe doing data definition stuff in
> transactions is not recommended.
>
> Is it possible to achieve your goals by using something like
> "delete from table1 where id != stuffIwant" instead of dropping the
> table?
>
> Cheerio,
>
> Link.

This is one of the few areas where I disagree with the development trend
in PostgreSQL. Every release contains different bugs related to DDL
statements in transactions. The developers appear to want to make them
work (i.e., have the ability to roll back a DROP TABLE, ALTER TABLE ADD
COLUMN, etc.). This, in my opinion, goes far above and beyond the call
of duty for an RDBMS. Oracle issues an implicit COMMIT whenever a DDL
statement is encountered. In fact, one could argue that those who are
porting Oracle apps to PostgreSQL would assume, incorrectly, that a
DROP TABLE in a transaction committed any work done previously.

I personally believe that PostgreSQL should do the same as Oracle and
greatly simplify the implementation of DDL statements in the backend by
issuing an implicit COMMIT...

Just my opinion, though,

Mike Mascari
Mike Mascari wrote:
>
> This is one of the few areas where I disagree with the development
> trend in PostgreSQL. Every release contains different bugs related to
> DDL statements in transactions. The developers appear to want to make
> them work (i.e., have the ability to roll back a DROP TABLE, ALTER
> TABLE ADD COLUMN, etc.). This, in my opinion, goes far above and
> beyond the call of duty for an RDBMS. Oracle issues an implicit COMMIT
> whenever a DDL statement is encountered. In fact, one could argue that
> those who are porting Oracle apps to PostgreSQL would assume,
> incorrectly, that a DROP TABLE in a transaction committed any work
> done previously.
>
> I personally believe that PostgreSQL should do the same as Oracle and
> greatly simplify the implementation of DDL statements in the backend
> by issuing an implicit COMMIT...
>
> Just my opinion, though

And I agree with this. But I would like to preserve the ability to
CREATE TABLE inside a transaction, mostly because I think that
SELECT ... INTO TABLE ... is a very useful thing.

Vadim
Mike Mascari wrote:
>> This is one of the few areas where I disagree with the development
>> trend in PostgreSQL. Every release contains different bugs related to
>> DDL statements in transactions. The developers appear to want to make
>> them work (i.e., have the ability to roll back a DROP TABLE, ALTER
>> TABLE ADD COLUMN, etc.). This, in my opinion, goes far above and
>> beyond the call of duty for an RDBMS. Oracle issues an implicit
>> COMMIT whenever a DDL statement is encountered.

So, the limits of our ambition should be to be as good as Oracle?
(Only one-half :-) here.)

I've seen quite a few discussions on the mailing lists about
applications that could really use rollback-able DDL commands.

Personally, I certainly wouldn't give up any reliability for this, and
darn little performance; but within those constraints I think we should
do what we can.

			regards, tom lane
Hi,

I'd like to prevent duplicate ids from being inserted into a table. I
can let the database enforce it by using UNIQUE or PRIMARY KEY. But
assuming I prefer to catch such things with the application, what would
be the best way of doing it? The only way I figured out was:

    begin;
    lock table accounts;
    select count(*) from accounts where id=$number;
    -- if count=0:
    insert into accounts (id,etc) values ($number,$etc);
    commit;

Is this a good idea? Or is it much better and faster to let the
database catch things?

Is it faster to use "select count(*) from accounts" or "select id from
accounts"? Apparently count(*) has some speed optimizations in MySQL,
so I'm wondering if there are similar things in Postgres.

Thanks,

Link.
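[For what it's worth, the database-enforced alternative can be sketched
outside PostgreSQL too. The following is only an illustration using
Python's built-in sqlite3 module (not PostgreSQL; the accounts table
and insert_account helper are invented for the example): declare the
PRIMARY KEY and catch the violation in the application, so no
table-wide lock and no count(*) pre-check are needed.]

```python
import sqlite3

# Illustrative sketch only (sqlite3, not PostgreSQL): let the database
# enforce uniqueness and catch the violation in the application.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, name TEXT)")

def insert_account(conn, account_id, name):
    """Return True if inserted, False if the id already exists."""
    try:
        with conn:  # commits on success, rolls back on error
            conn.execute("INSERT INTO accounts (id, name) VALUES (?, ?)",
                         (account_id, name))
        return True
    except sqlite3.IntegrityError:
        # Duplicate primary key: rejected without any table-wide lock.
        return False

print(insert_account(conn, 1, "alice"))  # True
print(insert_account(conn, 1, "bob"))    # False: id 1 already taken
```

[Note also that the lock-table approach serializes every writer on the
table, while a unique constraint only conflicts on the same key, and it
closes the race window between the SELECT and the INSERT if the lock is
ever forgotten.]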
Tom Lane wrote:
> Mike Mascari wrote:
> >> This is one of the few areas where I disagree with the development
> >> trend in PostgreSQL. Every release contains different bugs related
> >> to DDL statements in transactions. The developers appear to want to
> >> make them work (i.e., have the ability to roll back a DROP TABLE,
> >> ALTER TABLE ADD COLUMN, etc.). This, in my opinion, goes far above
> >> and beyond the call of duty for an RDBMS. Oracle issues an implicit
> >> COMMIT whenever a DDL statement is encountered.
>
> So, the limits of our ambition should be to be as good as Oracle?
> (Only one-half :-) here.)
>
> I've seen quite a few discussions on the mailing lists about
> applications that could really use rollback-able DDL commands.
>
> Personally, I certainly wouldn't give up any reliability for this, and
> darn little performance; but within those constraints I think we
> should do what we can.
>
> 			regards, tom lane

Well, I agree that it would be GREAT to be able to roll back DDL
statements. However, at the moment, failures during a transaction
containing DDL statements usually require direct intervention by the
user (in the case of having to drop/recreate indexes) and often require
the services of the DBA, if filesystem intervention is necessary (i.e.,
getting rid of partially dropped/created tables and their associated
filesystem files).

I guess I'm worried by the current state of ambiguity with respect to
which DDL statements can be safely rolled back and which can't. I know
you added NOTICEs in current, but it seems less than robust to ask the
user not to trigger a bug.
And of course, something like the following can always happen:

    test=# CREATE TABLE example(value text);
    CREATE
    test=# BEGIN;
    BEGIN
    test=# DROP TABLE example;
    NOTICE:  Caution: DROP TABLE cannot be rolled back, so don't abort now
    DROP
    -- someone just yanked the RJ45 cable from the hub in the T-COM
    -- closet (which, ludicrous as it might seem, happens)

From an otherwise EXTREMELY happy user :-) (full smile...), I see three
scenarios:

(1) Disallow DDL statements in transactions.
(2) Send NOTICEs asking the user not to trigger the bugs until they can
    be fixed.
(3) Have all DDL statements implicitly commit any running transaction.

1, of course, stinks. 2 is the current state and would probably take
several releases before all DDL statement rollback bugs could be
crushed (look how many tries it took to get segmented files right --
and are we REALLY sure it is?). 3, it seems to me, could be implemented
in a day's time, would prevent the various forms of data corruption
people often post to this list (GENERAL) about, and still allows 2 to
happen in the future as a configure-time or run-time option.

Just some ramblings,

Mike Mascari
Mike Mascari <mascarm@mascari.com> writes:
> Well, I agree that it would be GREAT to be able to roll back DDL
> statements. However, at the moment, failures during a transaction
> containing DDL statements usually require direct intervention by the
> user (in the case of having to drop/recreate indexes) and often
> require the services of the DBA, if filesystem intervention is
> necessary (i.e., getting rid of partially dropped/created tables and
> their associated filesystem files).

And forced commit after the DDL statement completes will improve that
how?

> I see three scenarios:
> (1) Disallow DDL statements in transactions.
> (2) Send NOTICEs asking the user not to trigger the bugs until they
>     can be fixed.
> (3) Have all DDL statements implicitly commit any running transaction.

> 1, of course, stinks. 2 is the current state and would probably take
> several releases before all DDL statement rollback bugs could be
> crushed

It's not an overnight project, for sure.

> 3, it seems to me, could be implemented in a day's time, would prevent
> the various forms of data corruption people often post to this list
> (GENERAL) about,

I don't believe either of those assumptions. We've had problems with
VACUUM's internal commit, and I don't think it'd be either quick or
inherently more reliable to apply the same model to all DDL commands.

A more significant point is that implicit commit is not a transparent
change; it will break applications. People use transaction blocks for
two reasons: (1) to define where to roll back to after an error, and
(2) to ensure that the results of logically related updates become
visible to other backends atomically. Implicit commit destroys both of
those guarantees, even though only the first one is really related to
the implementation problem we are trying to solve. As a user I'd be
pretty unhappy if "SELECT ... INTO" suddenly became
"COMMIT; SELECT; BEGIN".
Not only would that mean that updates made by my transaction would
become visible prematurely, but it might also mean that the SELECT
retrieves results it should not (ie, results from xacts that were not
committed when my xact started). Both of these things could make my
application logic fail in hard-to-find,
hard-to-reproduce-except-under-load ways.

So, although implicit commit might look like a convenient workaround at
the level of Postgres itself, it'd be a horrible loss of reliability at
the application level. I'd rather go with #1 (hard error) than risk
introducing transactional bugs into applications that use Postgres.

> Since ORACLE has 70% of the RDBMS market, it is the de facto standard

Yes, and Windows is the de facto standard operating system. I don't use
Windows, and I'm not willing to follow Oracle's lead when they make a
bad decision...

			regards, tom lane
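[The atomicity point can be shown with a toy example. This is only a
sketch using Python's built-in sqlite3 module with two connections (not
PostgreSQL; the ledger table and visible_rows helper are invented for
the illustration): a second session sees either none or both of a pair
of related updates, which is precisely the guarantee a mid-transaction
implicit COMMIT would destroy.]

```python
import os
import sqlite3
import tempfile

# Two sessions on one database file.  sqlite3 is used purely for
# illustration; the same visibility guarantee is what an implicit
# COMMIT would break in any transactional database.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
writer = sqlite3.connect(path)
reader = sqlite3.connect(path)

writer.execute("CREATE TABLE ledger (entry TEXT)")
writer.commit()

def visible_rows(conn):
    """Count rows as seen by this session, releasing the read lock."""
    cur = conn.execute("SELECT count(*) FROM ledger")
    n = cur.fetchone()[0]
    cur.close()
    return n

# The writer begins a transaction: first of two related updates.
writer.execute("INSERT INTO ledger VALUES ('debit')")

# The other session sees nothing yet -- the transaction is still open.
# An implicit COMMIT at this point (say, triggered by a DDL statement)
# would expose the lone 'debit' without its matching 'credit'.
print(visible_rows(reader))   # 0

writer.execute("INSERT INTO ledger VALUES ('credit')")
writer.commit()

# Only after the explicit COMMIT do both rows appear, together.
print(visible_rows(reader))   # 2
```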
> Is it possible to achieve your goals by using something like
> "delete from table1 where id != stuffIwant" instead of dropping the
> table?

Yes, I think I'd better use delete statements instead of drop
statements, knowing PostgreSQL can't always handle drop/rename
statements in transactions correctly.

Jaco de Groot
Mike Mascari wrote:
>
> From an otherwise EXTREMELY happy user :-) (full smile...), I see
> three scenarios:
>
> (1) Disallow DDL statements in transactions.
> (2) Send NOTICEs asking the user not to trigger the bugs until they
>     can be fixed.
> (3) Have all DDL statements implicitly commit any running transaction.

I think 1 is the best solution as long as there are bugs concerning DDL
statements in transactions. It will prevent people from getting into
trouble. I've been in this trouble for months :-(.

I'm using Java servlets to connect to PostgreSQL, and I have DDL
statements within transactions which sometimes cause an error. This
error is hard to find and solve if you don't know PostgreSQL has
problems with DDL statements in transactions. And when the error
occurs, it doesn't simply crash one transaction or connection; it
crashes all connections, which prevents my website from running
correctly until I've manually fixed the problem (mostly by restarting
PostgreSQL).

To prevent others from getting into the same trouble, I'd like to
propose that the next release of PostgreSQL disallow DDL statements in
transactions and notify the user that this is a feature which is
currently in development.

Jaco de Groot