Thread: DELETE ERROR: tuple concurrently updated

DELETE ERROR: tuple concurrently updated

From
Михаил Кечинов
Date:
Hello.

Here is a problem that I can't understand.

A week ago our database crashed, and since the restore we have been having some problems.

One of them: 

When I try to delete one row from the database (for example):
delete from document where numdoc = 901721617

I have this error:
ERROR: tuple concurrently updated
SQL state: XX000

I know that no one else is deleting this row at the same time.

What does this error mean?

Thanks.

-- 
Mikhail Kechinov
http://www.mkechinov.ru

Re: DELETE ERROR: tuple concurrently updated

From
Greg Stark
Date:
On Tue, Dec 29, 2009 at 9:41 AM, Михаил Кечинов <kechinoff@gmail.com> wrote:
> A week ago our database crashed, and since the restore we have been
> having some problems.

What version?

And how was this backup taken? It sounds like it might be an
inconsistent backup.

--
greg

Re: DELETE ERROR: tuple concurrently updated

From
Михаил Кечинов
Date:
Version 8.4.
Actually, it was not a database crash - the HDD died. We copied the data to a new HDD, dropped some dead indexes (vacuuming gave errors about those indexes, so we dropped them and recreated them), and ran VACUUM FULL. That's all.

--
Михаил Кечинов
http://www.mkechinov.ru

Re: DELETE ERROR: tuple concurrently updated

From
John R Pierce
Date:
Михаил Кечинов wrote:
> Version 8.4.
> Actually, it was not a database crash - the HDD died. We copied the
> data to a new HDD, dropped some dead indexes (vacuuming gave errors
> about those indexes, so we dropped them and recreated them), and
> ran VACUUM FULL. That's all.

Where did you copy this data from if the drive died? If it was from
a previously made filesystem backup, was that backup taken with some
sort of point-in-time volume snapshot capability, or was
pg_start_backup() called before the copy was made?
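
For reference, a plain file-level copy of a running server is only
consistent if it's bracketed roughly like this, with WAL archiving
enabled so the changes made during the copy can be replayed (the
label here is arbitrary):

postgres=# SELECT pg_start_backup('hdd-migration');
 -- ... copy the data directory at the OS level (cp/rsync/tar) ...
postgres=# SELECT pg_stop_backup();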




Re: DELETE ERROR: tuple concurrently updated

From
Greg Stark
Date:
On Tue, Dec 29, 2009 at 9:41 AM, Михаил Кечинов <kechinoff@gmail.com> wrote:
> When I try to delete one row from database (for example):
> delete from document where numdoc = 901721617
> I have this error:
> ERROR: tuple concurrently updated
> SQL state: XX000
> I know that no one else is deleting this row at the same time.
> What does this error mean?

So this error can only come from a normal SQL-level delete if there is
associated TOAST data which is being deleted as well. In which case
that TOAST data must be already marked deleted -- which shouldn't be
possible.

It sounds like you have a database where some writes from earlier
transactions reached the database and others didn't. That can happen
if you take an inconsistent backup (without using pg_start_backup())
or if the drive you're using confirmed writes before crashing but
didn't actually write them.

You might be able to get somewhat further by reindexing the TOAST
table for this table. To do that, run "REINDEX TABLE document"; it
rebuilds the TOAST table's index along with the regular ones. But
note that you could run into further errors from the missing TOAST
data.
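
Something along these lines shows which TOAST table backs "document"
(the pg_toast name below is illustrative; yours will differ):

docs=# SELECT reltoastrelid::regclass FROM pg_class
       WHERE relname = 'document';
docs=# REINDEX TABLE document;
 -- or reindex just the TOAST table reported above, e.g.:
docs=# REINDEX TABLE pg_toast.pg_toast_16412;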

--
greg

Re: DELETE ERROR: tuple concurrently updated

From
Михаил Кечинов
Date:
Good. Now I get this error:

docs=# REINDEX TABLE document;
ERROR:  could not create unique index "pkey_document"
DETAIL:  Table contains duplicated values.

So I have a primary key, yet there are rows with the same "numdoc", even though "numdoc" is the primary key and must be unique.

I can't drop the primary key, because some tables reference it with foreign keys:

docs=# alter table document drop constraint pkey_document;
NOTICE:  constraint fk_search_document_vid_numdoc on table ref_search_document_vid depends on index pkey_document
ERROR:  cannot drop constraint pkey_document on table document because other objects depend on it
HINT:  Use DROP ... CASCADE to drop the dependent objects too.

So I can only drop the primary key together with its foreign keys, which is bad. Is there any way to clean up the primary key without that? When I try to delete one of the duplicate documents, I get the same error: tuple concurrently updated.
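
A standard duplicate check along these lines is what shows them (a
sketch; numdoc 901721617 is the example key from above):

docs=# SELECT numdoc, count(*) FROM document
       GROUP BY numdoc HAVING count(*) > 1;
docs=# SELECT ctid, numdoc FROM document WHERE numdoc = 901721617;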

--
Михаил Кечинов
http://www.mkechinov.ru

Re: DELETE ERROR: tuple concurrently updated

From
Михаил Кечинов
Date:
OK, here is what I did (a rough sketch of the commands follows the
list):

1. Dropped the primary key with CASCADE (dropping the foreign keys too).
2. Reindexed the table.
3. Deleted the duplicate rows.
4. Recreated the primary key.
5. Recreated the foreign keys.
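
In SQL it was roughly the following (a sketch: the ctid value is
illustrative, and the exact foreign-key definition is guessed from
the constraint name):

docs=# ALTER TABLE document DROP CONSTRAINT pkey_document CASCADE;
docs=# REINDEX TABLE document;
 -- list the physical rows for each duplicated numdoc, then delete
 -- the extra ones by ctid
docs=# SELECT ctid, numdoc FROM document WHERE numdoc = 901721617;
docs=# DELETE FROM document WHERE ctid = '(12345,6)';
docs=# ALTER TABLE document ADD CONSTRAINT pkey_document
         PRIMARY KEY (numdoc);
docs=# ALTER TABLE ref_search_document_vid
         ADD CONSTRAINT fk_search_document_vid_numdoc
         FOREIGN KEY (numdoc) REFERENCES document (numdoc);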

Thanks for the help.

--
Михаил Кечинов
http://www.mkechinov.ru