Re: BUG #6393: cluster sometime fail under heavy concurrent write load - Mailing list pgsql-bugs

From Alvaro Herrera
Subject Re: BUG #6393: cluster sometime fail under heavy concurrent write load
Date
Msg-id 1326295419-sup-3436@alvh.no-ip.org
In response to BUG #6393: cluster sometime fail under heavy concurrent write load  (maxim.boguk@gmail.com)
Responses Re: BUG #6393: cluster sometime fail under heavy concurrent write load  (Maxim Boguk <maxim.boguk@gmail.com>)
List pgsql-bugs
Excerpts from maxim.boguk's message of Tue Jan 10 23:00:59 -0300 2012:
> The following bug has been logged on the website:
>
> Bug reference:      6393
> Logged by:          Maxim Boguk
> Email address:      maxim.boguk@gmail.com
> PostgreSQL version: 9.0.6
> Operating system:   Linux Ubuntu
> Description:
>
> I have a heavy write-load table under PostgreSQL 9.0.6 and sometimes (not
> always, but with more than a 50% chance) I get the following error during
> CLUSTER:
>
> db=# cluster public.enqueued_mail;
> ERROR:  duplicate key value violates unique constraint
> "pg_toast_119685646_index"
> DETAIL:  Key (chunk_id, chunk_seq)=(119685590, 0) already exists.
>
> The chunk_id is different each time.
>
> No uncommon datatypes exist in the table.
>
> Currently I am working on creating a reproducible test case (it seems to
> require 2-3 open write transactions on the table).
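
A minimal sketch of the concurrent setup the report describes (the table
name is from the report; the column names and values are assumptions):

    -- Session 1: open a write transaction against the table, leave it open.
    BEGIN;
    UPDATE enqueued_mail SET sent = true WHERE id = 1;  -- hypothetical columns
    -- (no COMMIT yet; the transaction stays open)

    -- Session 2: attempt the rebuild while session 1 is still in flight.
    CLUSTER public.enqueued_mail;
    -- Expected behaviour: this blocks waiting for an ACCESS EXCLUSIVE lock
    -- until session 1 commits or rolls back, rather than failing outright.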

I don't see how this can happen at all, given that CLUSTER grabs an
exclusive lock on the table in question.  A better example illustrating
what you're really doing would be useful.
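
For illustration, a sketch of how that lock can be observed, assuming the
table from the report (single-table CLUSTER can run inside a transaction
block, which keeps the lock held until commit):

    -- Session 1: run CLUSTER inside a transaction so the lock stays held.
    BEGIN;
    CLUSTER public.enqueued_mail;

    -- Session 2: the lock is visible in pg_locks while session 1 is open...
    SELECT l.mode, l.granted
    FROM pg_locks l
    JOIN pg_class c ON c.oid = l.relation
    WHERE c.relname = 'enqueued_mail';
    -- expect a granted AccessExclusiveLock held by session 1

    -- ...and any concurrent write simply blocks:
    INSERT INTO enqueued_mail (id) VALUES (42);  -- waits for session 1 to end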

-- 
Álvaro Herrera <alvherre@commandprompt.com>
The PostgreSQL Company - Command Prompt, Inc.
PostgreSQL Replication, Consulting, Custom Development, 24x7 support
