Thread: Duplicate key issue in a transaction block

Duplicate key issue in a transaction block

From
Ioana Danes
Date:
Hi Everyone,

I have a hard-to-reproduce scenario from a production site. I tried to simplify the code, and in the end I could
reproduce a similar problem with the following table and Java code.
The problem is that I have a transaction that deletes all the records in a group and inserts the new records for that
group. If that transaction is called from two different clients for the same groupid, it sometimes gets a duplicate key
violation, which in my opinion should never happen!

Example:

begin transaction;
delete from infotest where groupid = 1;
insert into infotest (groupid, subgroupid, datemodified) values (1,1,'2009-01-01 12:00');
insert into infotest (groupid, subgroupid, datemodified) values (1,2,'2009-01-01 12:00');
insert into infotest (groupid, subgroupid, datemodified) values (1,3,'2009-01-01 12:00');
...
commit transaction;

I am running PostgreSQL 8.3.1 on SUSE Linux 10.1 (x86-64).

This is the postgres log sequence:

2009-05-31 19:05:49.235 CST,"postgres","test",25306,"172.20.23.16:33597",4a23296d.62da,1,"BEGIN",2009-05-31 19:05:49 CST,2/8,0,LOG,00000,"execute S_1: BEGIN",,,,,,,,
2009-05-31 19:05:49.236 CST,"postgres","test",25306,"172.20.23.16:33597",4a23296d.62da,2,"DELETE",2009-05-31 19:05:49 CST,2/8,0,LOG,00000,"execute <unnamed>: delete from infotest where groupid = $1","parameters: $1 = '1'",,,,,,,
2009-05-31 19:05:49.257 CST,"postgres","test",25305,"172.20.23.16:36748",4a23296d.62d9,104,"SELECT",2009-05-31 19:05:49 CST,1/58,0,LOG,00000,"execute <unnamed>: select groupid, subgroupid, datemodified from infotest where groupid = 1",,,,,,,,
2009-05-31 19:05:49.258 CST,"postgres","test",25306,"172.20.23.16:33597",4a23296d.62da,3,"INSERT",2009-05-31 19:05:49 CST,2/8,884974,LOG,00000,"execute <unnamed>: insert into infotest (groupid, subgroupid, datemodified) values ($1,$2,$3)","parameters: $1 = '1', $2 = '1', $3 = '2009-06-08 11:33:19.667-04'",,,,,,,
2009-05-31 19:05:49.258 CST,"postgres","test",25306,"172.20.23.16:33597",4a23296d.62da,4,"INSERT",2009-05-31 19:05:49 CST,2/8,884974,ERROR,23505,"duplicate key value violates unique constraint ""pk_infotest""",,,,,,"insert into infotest (groupid, subgroupid, datemodified) values ($1,$2,$3)",,
2009-05-31 19:05:49.297 CST,"postgres","test",25306,"172.20.23.16:33597",4a23296d.62da,5,"idle in transaction",2009-05-31 19:05:49 CST,2/0,884974,LOG,08P01,"unexpected EOF on client connection",,,,,,,,






This is the script to create the table:

create table infotest (groupid integer, subgroupid integer, datemodified timestamp with time zone);
alter table infotest add constraint pk_infotest primary key (groupid,subgroupid);





And this is the Java code I ran in a loop:

DbTest1.java file:

import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;
import java.sql.Timestamp;
import java.util.Calendar;

public class DbTest1
{
        private Connection connection;
        private Statement sql;
        private DatabaseMetaData dbmd;

        private static final String pgClassName = "org.postgresql.Driver";
        private static final String pgUrl = "jdbc:postgresql://172.20.23.18/test";

        private static final String seqClassName = "org.continuent.sequoia.driver.Driver";
        private static final String seqUrl = "jdbc:sequoia://172.20.23.18/abrazo";

        private String login = "postgres";
        private String password = "testpassword";

        public void doTest(String conn) throws Exception
        {
                try
                {
                        String localURL;
                        String localDriver;
                        System.out.println("Initializing Driver for " + conn);
                        if (conn.toLowerCase().equals("pg"))
                        {
                                new org.postgresql.Driver();
                                localDriver = pgClassName;
                                localURL = pgUrl;
                        }
                        else
                        {
                                new org.continuent.sequoia.driver.Driver();
                                localDriver = seqClassName;
                                localURL = seqUrl;
                        }

                        System.out.println("Getting Connection using [" + localDriver + "] from [" + localURL + "]");
                        connection = DriverManager.getConnection(localURL, login, password);
                        System.out.println("Connection established!");

                        dbmd = connection.getMetaData(); //get MetaData to confirm connection
                        System.out.println("Connection to "+dbmd.getDatabaseProductName()+" "+
                                        dbmd.getDatabaseProductVersion()+" successful.\n");

                        sql = connection.createStatement(); //create a statement that we can use later

                        connection.setAutoCommit(false);

                        String sqlDel = "delete from infotest where groupid = ?";
                        String sqlIns = "insert into infotest (groupid, subgroupid, datemodified) values (?,?,?)";
                        PreparedStatement psDel = connection.prepareStatement(sqlDel);
                        PreparedStatement psIns = connection.prepareStatement(sqlIns);

                        Calendar c = Calendar.getInstance();
                        int GroupId = 1;
                        int LoopCount = 100;

                        System.out.println("Begin transaction...");

                        // Delete
                        psDel.setInt(1,GroupId);
                        psDel.executeUpdate();
                        System.out.println("Finished the delete...");

                        // Insert

                        int SubGroupID;
                        for ( SubGroupID=1; SubGroupID<=LoopCount; SubGroupID++ ) {
                             psIns.setInt(1,GroupId);
                             psIns.setInt(2,SubGroupID);
                             psIns.setTimestamp(3, new Timestamp(c.getTimeInMillis()));
                             psIns.executeUpdate();
                        }
                        System.out.println("Finished the inserts...");

                        psDel.close();
                        psIns.close();

                        connection.commit();
                        System.out.println("Commit transaction...");

                        connection.setAutoCommit(true);
                        ResultSet results = sql.executeQuery("select groupid, subgroupid, datemodified from infotest where groupid = " + GroupId);
                        while (results.next())
                        {
                                System.out.println("groupid = "+results.getInt(1)+"; subgroupid =
"+results.getInt(2)+";datemodified = "+results.getTimestamp(3) ); 
                        }
                        results.close();

                        connection.close();
                }
                catch (Exception cnfe)
                {
                        cnfe.printStackTrace();
                }
        }

        public static void main (String args[])
        {
                if (args == null || args.length != 1 || (!args[0].toLowerCase().equals("pg") &&
!args[0].toLowerCase().equals("seq")))
                {
                        System.out.println("Usage: " + DbTest1.class.getName() + " pg|seq");
                        System.exit(1);
                }

                try
                {
                        DbTest1 demo = new DbTest1();
                        demo.doTest(args[0]);
                }
                catch (Exception ex)
                {
                        ex.printStackTrace();
                }
        }
}



Thank you very much,

Ioana Danes




Re: Duplicate key issue in a transaction block

From
Bill Moran
Date:
In response to Ioana Danes <ioanasoftware@yahoo.ca>:
>
> Hi Everyone,
>
> I have a hard-to-reproduce scenario from a production site. I tried to simplify the code, and in the end I could
> reproduce a similar problem with the following table and Java code.
> The problem is that I have a transaction that deletes all the records in a group and inserts the new records for that
> group. If that transaction is called from two different clients for the same groupid, it sometimes gets a duplicate
> key violation, which in my opinion should never happen!

On what is that opinion based?  Considering the situation you describe, I
would expect it to error every time you try to run that same script twice
in parallel.

Perhaps you want to take an exclusive lock on the table?  The operation
you describe seems to suggest that you'd want to guarantee exclusive
write access to the table.
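
Something like this, untested, using the infotest table from your test
case (the EXCLUSIVE lock mode is my assumption; it blocks other writers
but still allows reads):

begin;
lock table infotest in exclusive mode;
delete from infotest where groupid = 1;
insert into infotest (groupid, subgroupid, datemodified) values (1,1,'2009-01-01 12:00');
-- ... remaining inserts ...
commit;

The second client then waits at the LOCK statement until the first one
commits, so its DELETE sees the freshly committed rows.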

> [original message quoted in full; test case, log, and Java code snipped]


--
Bill Moran
http://www.potentialtech.com
http://people.collaborativefusion.com/~wmoran/

Re: Duplicate key issue in a transaction block

From
Vyacheslav Kalinin
Date:


On Mon, Jun 8, 2009 at 8:33 PM, Bill Moran <wmoran@potentialtech.com> wrote:

> Perhaps you want to take an exclusive lock on the table?  The operation
> you describe seems to suggest that you'd want to guarantee exclusive
> write access to the table.

An exclusive table lock is a bit excessive, IMO. Locking the particular group should be enough, though it is not quite
straightforward to achieve. I'd use advisory locks, or I would lock a row in a parent group table (if such a table
exists; if not, it might be worth creating one) that is referenced by the rows in question.
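
A sketch of the advisory-lock variant, untested (it uses groupid itself
as the lock key; note that on 8.3 pg_advisory_lock is session-level, not
transaction-level, so the explicit unlock matters on pooled connections):

begin;
select pg_advisory_lock(1);  -- 1 = groupid; serializes writers of this group
delete from infotest where groupid = 1;
insert into infotest (groupid, subgroupid, datemodified) values (1,1,'2009-01-01 12:00');
-- ... remaining inserts ...
commit;
select pg_advisory_unlock(1);

The parent-table variant would instead do "select 1 from groups where
groupid = 1 for update" right after BEGIN, assuming such a groups table
exists.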

Re: Duplicate key issue in a transaction block

From
Ioana Danes
Date:
--- On Mon, 6/8/09, Bill Moran <wmoran@potentialtech.com> wrote:

> From: Bill Moran <wmoran@potentialtech.com>
> Subject: Re: [GENERAL] Duplicate key issue in a transaction block
> To: "Ioana Danes" <ioanasoftware@yahoo.ca>
> Cc: "PostgreSQL General" <pgsql-general@postgresql.org>
> Received: Monday, June 8, 2009, 12:33 PM
> In response to Ioana Danes <ioanasoftware@yahoo.ca>:
> >
> > Hi Everyone,
> >
> > I have a hard-to-reproduce scenario from a production site. I tried
> > to simplify the code, and in the end I could reproduce a similar
> > problem with the following table and Java code.
> > The problem is that I have a transaction that deletes all the
> > records in a group and inserts the new records for that group. If
> > that transaction is called from two different clients for the same
> > groupid, it sometimes gets a duplicate key violation, which in my
> > opinion should never happen!
>
> On what is that opinion based?  Considering the situation you
> describe, I would expect it to error every time you try to run that
> same script twice in parallel.

Well, you are right, I had a wrong understanding of the Read Committed isolation level. I was expecting my inserts to
see only what was committed before the transaction began, not what was committed before each query began.
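
For the archives, here is the interleaving that bites me, as far as I
can tell (two sessions running the test case above; under Read Committed
each statement takes its own snapshot):

-- Session A: begin;
-- Session A: delete from infotest where groupid = 1;
-- Session B: begin;
-- Session B: delete from infotest where groupid = 1;  -- blocks on A's row locks
-- Session A: insert into infotest values (1, 1, now());
-- Session A: commit;
--   B's delete now proceeds, but A's freshly committed inserts are not
--   in B's statement snapshot, so B has nothing left to delete.
-- Session B: insert into infotest values (1, 1, now());
--   ERROR: duplicate key value violates unique constraint "pk_infotest"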

Thanks for your answer...




Re: Duplicate key issue in a transaction block

From
Bill Moran
Date:
In response to Vyacheslav Kalinin <vka@mgcp.com>:

> On Mon, Jun 8, 2009 at 8:33 PM, Bill Moran <wmoran@potentialtech.com> wrote:
>
> >
> > Perhaps you want to take an exclusive lock on the table?  The operation
> > you describe seems to suggest that you'd want to guarantee exclusive
> > write access to the table.
>
> An exclusive table lock is a bit excessive, IMO. Locking the particular
> group should be enough, though it is not quite straightforward to
> achieve. I'd use advisory locks, or I would lock a row in a parent
> group table (if such a table exists; if not, it might be worth creating
> one) that is referenced by the rows in question.

Perhaps, but that sounds like a lot of unnecessary complexity to me.

... and I didn't say an exclusive table lock, I said "exclusive write".
Big difference there.

--
Bill Moran
http://www.potentialtech.com
http://people.collaborativefusion.com/~wmoran/

Re: Duplicate key issue in a transaction block

From
Ioana Danes
Date:
Well, I guess I have my answer...

I was trying to narrow down an issue I get on one of the production sites, where a similar transaction produces the
same error.

In my production environment the group id is actually a unique number for the terminal (terminalid), and the same
transaction CANNOT be called twice for the same terminalid. So this should never happen, because each terminal has its
own ID and this procedure is called at login for each terminal... At least that's what I thought...

But it does happen during the nightly online backup (pg_dump).
It looks like when the client logs in, the request is sent from the client to the db, the backup (pg_dump) slows down
the server (or holds a lock on that table?), and the user loses patience, restarts the client, and logs in again.
In this case I can get two parallel transactions for the same terminal...


Thanks a lot for your answers,
Ioana Danes




Re: Duplicate key issue in a transaction block

From
Scott Marlowe
Date:
On Mon, Jun 8, 2009 at 12:25 PM, Ioana Danes <ioanasoftware@yahoo.ca> wrote:
>
> Well, I guess I have my answer...
>
> I was trying to narrow down an issue I get on one of the production
> sites, where a similar transaction produces the same error.
>
> In my production environment the group id is actually a unique number
> for the terminal (terminalid), and the same transaction CANNOT be
> called twice for the same terminalid. So this should never happen,
> because each terminal has its own ID and this procedure is called at
> login for each terminal... At least that's what I thought...
>
> But it does happen during the nightly online backup (pg_dump).
> It looks like when the client logs in, the request is sent from the
> client to the db, the backup (pg_dump) slows down the server (or holds
> a lock on that table?), and the user loses patience, restarts the
> client, and logs in again.
> In this case I can get two parallel transactions for the same
> terminal...

You mentioned earlier that you're using Slony for replication, so the
answer is obvious: run the backup against a read slave, and during the
backup give users access only to the master being written to.
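
Something along the lines of (the slave hostname is just a placeholder):

pg_dump -h replica-host -U postgres test > nightly_test_dump.sql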

Re: Duplicate key issue in a transaction block

From
Ioana Danes
Date:
I am actually using Sequoia, and the big problem is not the primary key error. I could live with that!
Sometimes the problem is generated not by the insert but by the delete: the two transactions don't happen in exactly
the same order and at the same time on both backends, so the delete returns a different number of affected rows on the
two db servers and Sequoia drops one backend...
That is my big pain...

I have some ideas I will test now and I will let you know which one I choose to fix my problem...

Thanks a lot for your help,
Ioana


--- On Mon, 6/8/09, Scott Marlowe <scott.marlowe@gmail.com> wrote:

> From: Scott Marlowe <scott.marlowe@gmail.com>
> Subject: Re: [GENERAL] Duplicate key issue in a transaction block
> To: "Ioana Danes" <ioanasoftware@yahoo.ca>
> Cc: "Bill Moran" <wmoran@potentialtech.com>, "Vyacheslav Kalinin" <vka@mgcp.com>, "PostgreSQL General"
<pgsql-general@postgresql.org>
> Received: Monday, June 8, 2009, 2:37 PM
> [quoted message snipped]




Re: Duplicate key issue in a transaction block

From
Ioana Danes
Date:
Hi All,

I decided to go with the following fix. Instead of straight delete and insert statements, I am going to use stored
procedures for the delete and insert:

CREATE OR REPLACE FUNCTION delete_group(integer)
RETURNS void AS
$BODY$
BEGIN
    delete from infotest where infotest.groupid = $1;
    RETURN;
END;
$BODY$
LANGUAGE 'plpgsql' VOLATILE;


CREATE OR REPLACE FUNCTION insert_group(integer, integer, timestamp with time zone)
RETURNS void AS
$BODY$
BEGIN
    insert into infotest (groupid, subgroupid, datemodified)
    select $1, $2, $3 where 1 not in (select 1 from infotest where groupid = $1 and subgroupid = $2);
    RETURN;
END;
$BODY$
LANGUAGE 'plpgsql' VOLATILE;
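
Called inside the transaction, the pair looks like this (a sketch for
groupid 1; now() stands in for the client-supplied timestamp):

begin;
select delete_group(1);
select insert_group(1, 1, now());
select insert_group(1, 2, now());
-- ... remaining subgroups ...
commit;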

This solved my problem.
Sequoia is not going to drop a backend if the delete affects a different number of rows (because the procedure returns
void), and the record is not going to be inserted anymore if it already exists in the table...

The info in this table is not critical and does not affect the functionality of the application. It is also going to
be refreshed at each login. So I am sure I am safe with this solution...

Thanks for the help,
Ioana


--- On Mon, 6/8/09, Ioana Danes <ioanasoftware@yahoo.ca> wrote:

> From: Ioana Danes <ioanasoftware@yahoo.ca>
> Subject: Re: [GENERAL] Duplicate key issue in a transaction block
> To: "Scott Marlowe" <scott.marlowe@gmail.com>
> Cc: "Bill Moran" <wmoran@potentialtech.com>, "Vyacheslav Kalinin" <vka@mgcp.com>, "PostgreSQL General"
<pgsql-general@postgresql.org>
> Received: Monday, June 8, 2009, 2:58 PM
>
> [quoted message snipped]

