Thread: Re: [Pgreplication-general] DBMIRROR and INSERT transactions lose time ...

Re: [Pgreplication-general] DBMIRROR and INSERT transactions lose time ...

From: Hervé Piedvache
Date:
Hi Michael,

On Monday, 31 March 2003 at 12:22, Michael Loftis wrote:
> Couple of things here.  If I read you correctly, after you COMMIT, the
> first part of the transaction goes quickly, but then begins to slow down?

No, it happens inside my transaction; I never reach the COMMIT point. Only
INSERT commands are running, and they keep getting slower: just INSERTs,
plus some UPDATEs (about 2,000 updates run among the 320,000 inserts in the
same transaction).
I mean, I do:
BEGIN;
INSERT INTO ... VALUES (xxx);
INSERT INTO ... VALUES (xxx);
...
UPDATE ... SET xxx;
...
INSERT INTO ... VALUES (xxx);
INSERT INTO ... VALUES (xxx);
...
... x 320 000
COMMIT;

In my Perl script I run a for loop containing the INSERTs, and every 100
iterations I print the elapsed time and the current loop counter, so I know
where I am. Without DBMirror, each batch of 100 takes a steady 0.3 seconds;
that value never moves. With DBMirror I get 0.3 seconds for the first
1,000, then lose about 1 second every 3,000 inserts: 2 seconds, then 3
seconds, and so on. After 9 hours I was up to 45 seconds per batch of 100
INSERTs. Without DBMirror (only the trigger dropped), the whole transaction
takes about 15 minutes.

> Have you considered removing any indexes you have on the tables prior to
> doing such a large insert and creating them afterwards?  It's much cheaper
> that way.

I have only one index, the primary key index on a SERIAL column, and I
can't drop it, because other programs read the table while this script is
running. And as I told you previously, without the DBMirror trigger it runs
perfectly.
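
(For reference, the drop-and-rebuild approach Michael describes would look
roughly like the sketch below, with "mytable" and its key column as
placeholders; as noted, it does not fit my case because readers need the
index while the load runs.)

  ALTER TABLE mytable DROP CONSTRAINT mytable_pkey;  -- drop the PK before the load
  -- ... run the 320 000 INSERTs ...
  ALTER TABLE mytable ADD PRIMARY KEY (id);          -- rebuild it afterwards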

> Does this happen if you don't run a DBMirror (i.e. local-only) copy?  I've
> never used such large transactions myself before.

It runs perfectly if I drop the DBMirror trigger: I get a constant rate of
0.3 seconds per 100 INSERT commands.

Thanks in advance for your help, or any ideas ;o)

Regards,

> --On Monday, March 31, 2003 9:46 AM +0100 Hervé Piedvache
> <herve@elma.fr> wrote:
> > Hi,
> >
> > Can anyone give us some help with DBMirror?
> >
> > We have been testing DBMirror, and for us it runs perfectly ;o)
> >
> > There is only one big problem: inserting data inside a transaction.
> > We tried a single transaction with 320,000 inserts. If the DBMirror
> > trigger is not attached, 100 inserts complete in 0.3 seconds. With
> > DBMirror it starts quickly, but after 1,000 inserts it keeps losing
> > time, about 1 second more per 3,000-4,000 inserts. After 9 hours we
> > were at 45 seconds per 100 inserts ... :o(
> >
> > Any idea? An update? A patch?
> >
> > Thanks in advance for your help! :o)
> >
> > Regards,
> > --
> > Hervé

--
Hervé


Re: [Pgreplication-general] DBMIRROR and INSERT transactions lose time ...

From: Hervé Piedvache
Date:
Hi Ezra,

On Monday, 31 March 2003 at 17:27, Ezra Nugroho wrote:
> Try doing it without replication and check the time.

As I wrote in my mail: 16 minutes without the DBMirror trigger ;o)

> I think your problem has nothing to do with replication. It is simply
> that you have a huge one-shot transaction. Each time you run something
> in a transaction, the database needs to perform the SQL in a
> rollback-able segment instead of in permanent storage. That means you
> are eating virtual memory like nuts...
> After a while, page swapping happens so frequently that your
> performance drops.

Hmm... I looked at my monitoring for yesterday: no memory trouble. I only
lost 5% of memory (I have 2 GB of RAM), and swap stayed 100% free during
the whole 9 hours of processing.

Why use a transaction? To keep activity on my database safe ;*)

> Do you really have to run those 320 000 inserts in a transaction?

What other idea do you have? I mean a safe approach, please. Remember that
during this time other actors (users, automated jobs, etc.) are using my
database, and once a month I have to insert this data. So how can I do that
safely without a transaction?

Regards,

> On Mon, 2003-03-31 at 06:33, Hervé Piedvache wrote:
> > [...]

--
Hervé Piedvache

Elma Ingénierie Informatique
6 rue du Faubourg Saint-Honoré
F-75008 - Paris - France
Pho. 33-144949901
Fax. 33-144949902


Re: [Pgreplication-general] DBMIRROR and INSERT transactions lose time ...

From: Steven Singer
Date:
On Mon, 31 Mar 2003, Hervé Piedvache wrote:

> Hi Ezra,
>

> Hmm... I looked at my monitoring for yesterday: no memory trouble. I only
> lost 5% of memory (I have 2 GB of RAM), and swap stayed 100% free during
> the whole 9 hours of processing.

Hmmm.

>
> Why use a transaction? To keep activity on my database safe ;*)
>
> > Do you really have to run those 320 000 inserts in a transaction?

DBMirror does 2 additional inserts for every insert that your program does,
so you are really talking about closer to a million inserts on the
database. But you still wouldn't expect 15 minutes vs. 9 hours.  The two
tables DBMirror inserts into (Pending and PendingData) also have indexes
on them.
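
Roughly, each application-level INSERT ends up doing something like the
following extra bookkeeping (this is only a sketch; the column layout and
values shown are assumptions, not DBMirror's exact schema):

  -- one Pending row recording the change itself
  INSERT INTO "Pending" ("TableName", "Op", "XID")
      VALUES ('mytable', 'i', 1234);
  -- one PendingData row carrying the captured row image
  INSERT INTO "PendingData" ("SeqId", "IsKey", "Data")
      VALUES (42, 'f', '...serialized row...');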


You could try removing the DBMirror indexes on Pending and PendingData:
Pending_XID_Index, Pending_pkey, and PendingData_pkey.

How does that change things?
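
A minimal sketch of that experiment (the recreate statements afterwards
assume SeqId-based keys, which may not match the real definitions):

  DROP INDEX "Pending_XID_Index";
  ALTER TABLE "Pending" DROP CONSTRAINT "Pending_pkey";
  ALTER TABLE "PendingData" DROP CONSTRAINT "PendingData_pkey";
  -- ... run the big transaction ...
  CREATE INDEX "Pending_XID_Index" ON "Pending" ("XID");
  ALTER TABLE "Pending" ADD PRIMARY KEY ("SeqId");
  ALTER TABLE "PendingData" ADD PRIMARY KEY ("SeqId", "IsKey");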



>
> What other idea do you have? I mean a safe approach, please. Remember
> that during this time other actors (users, automated jobs, etc.) are
> using my database, and once a month I have to insert this data. So how
> can I do that safely without a transaction?

You might want to consider turning mirroring off (disabling the
recordchange trigger), doing your monthly update, and then doing a pg_dump
and restore to your slaves.  This might work depending on your
circumstances; mirroring 320,000 records to a slave will take some time as
well.
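
As a sketch, assuming the usual DBMirror trigger definition (the table and
trigger names here are placeholders):

  DROP TRIGGER "mytable_trig" ON mytable;    -- turn mirroring off
  -- ... run the monthly load ...
  CREATE TRIGGER "mytable_trig" AFTER INSERT OR DELETE OR UPDATE
      ON mytable FOR EACH ROW EXECUTE PROCEDURE "recordchange"();
  -- then pg_dump the affected tables and restore them on each slave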




--
Steven Singer                                       ssinger@navtechinc.com
Dispatch Systems                            Phone:  519-747-1170 ext 282
Navtech Systems Support Inc.                AFTN:   CYYZXNSX SITA: YYZNSCR
Waterloo, Ontario                           ARINC:  YKFNSCR


Re: [Pgreplication-general] DBMIRROR and INSERT transactions lose time ...

From: Ezra Nugroho
Date:
Try doing it without replication and check the time.

I think your problem has nothing to do with replication. It is simply
that you have a huge one-shot transaction. Each time you run something
in a transaction, the database needs to perform the SQL in a
rollback-able segment instead of in permanent storage. That means you
are eating virtual memory like nuts...
After a while, page swapping happens so frequently that your
performance drops.

Do you really have to run those 320 000 inserts in a transaction?




On Mon, 2003-03-31 at 06:33, Hervé Piedvache wrote:
> [...]


Re: [Pgreplication-general] DBMIRROR and INSERT transactions lose time ...

From: "scott.marlowe"
Date:
On 31 Mar 2003, Ezra Nugroho wrote:

> Try doing it without replication and check the time.
>
> I think your problem has nothing to do with replication. It is simply
> that you have a huge one-shot transaction. Each time you run something
> in a transaction, the database needs to perform the SQL in a
> rollback-able segment instead of in permanent storage. That means you
> are eating virtual memory like nuts...
> After a while, page swapping happens so frequently that your
> performance drops.
>
> Do you really have to run those 320 000 inserts in a transaction?

Sorry, but you're probably thinking of other databases.  While running
truly huge transactions in PostgreSQL has some issues, the one you are
listing does not exist.  All transactions happen in permanent storage all
the time in PostgreSQL.  So no, it wouldn't eat virtual memory like nuts,
or fill a rollback segment, since PostgreSQL doesn't have those.  It will
drive up the storage requirements on the main store, and may produce lots
of dead tuples if you're updating or replacing loads of tuples, but for
pure inserts, even 100,000 in a batch is no big deal at all.
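
To make the dead-tuple point concrete, a minimal illustration ("mytable"
and its column are placeholders):

  BEGIN;
  UPDATE mytable SET val = val + 1;  -- every updated row leaves a dead tuple behind
  COMMIT;
  VACUUM ANALYZE mytable;            -- reclaims the dead tuples afterwards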