Thread: Why ContinueUpdateOnError is not implemented in npgsql
A .NET data provider must support the DataAdapter.ContinueUpdateOnError property inside a transaction. ContinueUpdateOnError requires that the transaction continue after an error, so that .NET can mark all bad rows in a DataGrid and the user can see every error at once.

Unfortunately, once an error occurs inside a transaction, PostgreSQL answers every subsequent command with "current transaction is aborted, commands ignored until end of transaction block". npgsql has no support for working around this: it marks ALL subsequent commands as invalid. So npgsql cannot be used as a reliable .NET data provider. Please confirm this.

There are only two ways to fix this issue: NpgsqlDataAdapter must either invoke an automatic ROLLBACK after each error, or use checkpoints before each command. Why is this not implemented?

SIDENOTE: In this respect PostgreSQL acts like an ill-designed compiler or spell checker that stops on the first error. Today's compilers report as many errors as possible, and Word's spell checker marks all misspelled words in a document, not only the first one. Using PostgreSQL + npgsql to import or edit large amounts of data containing minor errors means a huge loss in performance, since a major ADO.NET feature is unsupported: after each error the whole process must be started again.

Andrus
"Andrus" <kobruleht2@hot.ee> writes:
> ContinueUpdateOnError requires that the transaction continue after an
> error, so that .NET can mark all bad rows in a DataGrid and the user
> can see every error at once.
> ...
> There are only two ways to fix this issue: NpgsqlDataAdapter must
> either invoke an automatic ROLLBACK after each error, or use
> checkpoints before each command.

Yup, a savepoint before each command is required if that's the behavior
you want. Yes, that adds overhead. The reason it's not automatic is
exactly that it adds overhead, which many applications don't need or
want. But if you have to have it, that's what you do.

			regards, tom lane
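[Editor's note: the savepoint-per-command pattern Tom describes can be sketched as follows. This is a toy simulation, not real Npgsql or libpq code: `ToyPgConnection` is a hypothetical stand-in that mimics PostgreSQL's abort-until-rollback behavior, and all names in it are illustrative.]

```python
class ToyPgConnection:
    """Mimics PostgreSQL inside a transaction: after an error, every
    command fails until we roll back (fully or to a savepoint)."""
    def __init__(self):
        self.aborted = False
        self.applied = []            # commands that took effect
        self.savepoint_mark = None   # len(applied) at the last SAVEPOINT

    def execute(self, cmd):
        if cmd == "SAVEPOINT sp":
            self.savepoint_mark = len(self.applied)
            return
        if cmd == "ROLLBACK TO SAVEPOINT sp":
            self.applied = self.applied[:self.savepoint_mark]
            self.aborted = False     # the transaction is usable again
            return
        if self.aborted:
            raise RuntimeError("current transaction is aborted, "
                               "commands ignored until end of transaction block")
        if cmd == "BAD":             # stand-in for a failing INSERT
            self.aborted = True
            raise ValueError("constraint violation")
        self.applied.append(cmd)

def run_continue_on_error(conn, commands):
    """Wrap each command in its own savepoint; collect errors instead of
    abandoning the batch (ContinueUpdateOnError-style behavior)."""
    errors = []
    for cmd in commands:
        conn.execute("SAVEPOINT sp")
        try:
            conn.execute(cmd)
        except ValueError as e:
            conn.execute("ROLLBACK TO SAVEPOINT sp")
            errors.append((cmd, str(e)))
    return errors

conn = ToyPgConnection()
errs = run_continue_on_error(conn, ["INSERT 1", "BAD", "INSERT 2"])
print(conn.applied)  # ['INSERT 1', 'INSERT 2'] -- the batch continued past the error
print(len(errs))     # 1
```

Without the savepoints, the same toy connection fails every command after "BAD", which is exactly the behavior the thread complains about; the savepoint wrapper trades per-command overhead for the ability to keep going.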
>> There are only two ways to fix this issue: NpgsqlDataAdapter must
>> either invoke an automatic ROLLBACK after each error, or use
>> checkpoints before each command.
>
> Yup, a savepoint before each command is required if that's the behavior
> you want. Yes, that adds overhead. The reason it's not automatic is
> exactly that it adds overhead, which many applications don't need or
> want. But if you have to have it, that's what you do.

If I want to add 10,000 records, issuing a savepoint before each INSERT
command makes the program very slow.

Why don't MS SQL Server and Oracle have this issue? How do they implement
ContinueUpdateOnError quickly?

Is this caused by a design failure in PostgreSQL?

Andrus.
Andrus wrote:
> Why don't MS SQL Server and Oracle have this issue? How do they
> implement ContinueUpdateOnError quickly?
>
> Is this caused by a design failure in PostgreSQL?

Design decision. An error in a transaction renders that transaction
incomplete, and if it's incomplete then it can't be committed. Other
RDBMSs do take a more relaxed approach to this, giving the application
more control. I'm not sure whether it would be practical to change that
decision now, or whether it's too deeply embedded in the code.

But, if you're wanting to do a bulk data upload, why aren't you using a
bulk data loader? Typically these will insert a large set of records
and, if there is an error, roll back and then try a smaller set
(e.g. half) until they succeed or are down to one row, then start
expanding the set size again while everything works.

Oh, and if you're doing a lot of this you'll want to look at COPY rather
than a batch of INSERTs. Not sure how/if that's supported with npgsql,
I'm afraid.

-- 
  Richard Huxton
  Archonet Ltd