Thread: long transaction

long transaction

From: "Sabin Coanda"
Date: Friday, July 18, 2008, 3:34 PM
Hi there,

I have a script which calls 30,000 functions within a single
transaction.

At the beginning, the functions run fast enough (about 60 ms each). Over
time, they run slower and slower (by the end, about one call per 2 seconds).

I checked the functions that run slowly, outside the script, and they run
normally (60 ms each).
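
The script looks roughly like this (the function name here is just a
placeholder, not the real one):

  BEGIN;
  SELECT import_entity(1);      -- about 60 ms at the start
  SELECT import_entity(2);
  -- ... 30,000 calls in total ...
  SELECT import_entity(30000);  -- about 2 seconds by the end
  COMMIT;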

What is the problem?

TIA,
Sabin



Re: long transaction

From: Lennin Caro
Date:
Have you used VACUUM?

--- On Fri, 7/18/08, Sabin Coanda <sabin.coanda@deuromedia.ro> wrote:

> From: Sabin Coanda <sabin.coanda@deuromedia.ro>
> Subject: [PERFORM] long transaction
> To: pgsql-performance@postgresql.org
> Date: Friday, July 18, 2008, 3:34 PM
> Hi there,
>
> I have a script which calls 30,000 functions within a single
> transaction.
>
> At the beginning, the functions run fast enough (about 60 ms each).
> Over time, they run slower and slower (by the end, about one call
> per 2 seconds).
>
> I checked the functions that run slowly, outside the script, and
> they run normally (60 ms each).
>
> What is the problem?
>
> TIA,
> Sabin





Re: long transaction

From: "Sabin Coanda"
Date: Mon, Aug 11, 2008, 2:53 AM
No, I cannot use VACUUM inside the transaction, and it seems this is the
problem, even though autovacuum is enabled.

However, I tested the following scenario to find a solution. I ran the
30,000 statements without a transaction. The performance did not change.
But when I added a VACUUM command after each set of 20 statements, I got
the linear performance that I want. Unfortunately, this is not possible
inside a transaction.
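
Roughly, the batched variant I tested looks like this (again, the
function name is just a placeholder):

  SELECT import_entity(1);
  SELECT import_entity(2);
  -- ... statements 3 to 20 ...
  SELECT import_entity(20);
  VACUUM;  -- allowed here, because there is no surrounding transaction
  SELECT import_entity(21);
  -- ... and so on, with a VACUUM after every 20 statements ...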

Do you know how I could solve my problem, while keeping the 30,000
statements inside a single transaction?

Sabin


"Lennin Caro" <lennin.caro@yahoo.com> wrote in message
news:120621.32315.qm@web59503.mail.ac4.yahoo.com...
> Have you used VACUUM?



Re: long transaction

From: "Merlin Moncure"
Date:
On Mon, Aug 11, 2008 at 2:53 AM, Sabin Coanda
<sabin.coanda@deuromedia.ro> wrote:
> No, I cannot use VACUUM inside the transaction, and it seems this is the
> problem, even though autovacuum is enabled.
>
> However, I tested the following scenario to find a solution. I ran the
> 30,000 statements without a transaction. The performance did not change.
> But when I added a VACUUM command after each set of 20 statements, I got
> the linear performance that I want. Unfortunately, this is not possible
> inside a transaction.
>
> Do you know how I could solve my problem, while keeping the 30,000
> statements inside a single transaction?

Long-running transactions can be evil. Is there a reason why this has
to run in a single transaction?

merlin

Re: long transaction

From: "Sabin Coanda"
Date: Tue, Aug 12, 2008, 4:17 AM
> Long-running transactions can be evil. Is there a reason why this has
> to run in a single transaction?

This single transaction is used to import new information into a
database. I need it because the database cannot be disconnected from the
users, and all of the new data has to be consistent. There are different
constraints that are checked during the import.



Re: long transaction

From: "Merlin Moncure"
Date:
On Tue, Aug 12, 2008 at 4:17 AM, Sabin Coanda
<sabin.coanda@deuromedia.ro> wrote:
>> Long-running transactions can be evil. Is there a reason why this has
>> to run in a single transaction?
>
> This single transaction is used to import new information into a
> database. I need it because the database cannot be disconnected from the
> users, and all of the new data has to be consistent. There are different
> constraints that are checked during the import.

Have you considered importing into a temporary 'holding' table with
COPY, then doing 'big' SQL statements on it to check constraints, etc.?
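
As a rough sketch (the table names and the file path below are made up
just for illustration):

  BEGIN;

  -- holding table with the same columns as the real one
  CREATE TEMP TABLE staging_entity (LIKE entity INCLUDING DEFAULTS);

  -- bulk-load the raw import data
  COPY staging_entity FROM '/path/to/import_data.csv' WITH CSV;

  -- check constraints with set-based statements, e.g. list rows that
  -- would violate a foreign key before touching the real table
  SELECT s.id
    FROM staging_entity s
    LEFT JOIN parent p ON p.id = s.parent_id
   WHERE p.id IS NULL;

  -- apply the data in bulk once the checks pass
  INSERT INTO entity SELECT * FROM staging_entity;

  COMMIT;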

merlin

Re: long transaction

From: "Sabin Coanda"
Date: Wed, Aug 13, 2008, 2:07 AM
>
> Have you considered importing into a temporary 'holding' table with
> COPY, then doing 'big' SQL statements on it to check constraints, etc.?
>

Yes, I considered it, but the problem is that the data is very tightly
related between different tables, and it is important to keep the import
order of each entity in the database. In other words, serializing the
entity import is mandatory. In fact, the import script doesn't contain
just inserts, but also deletes and updates for different entities, so
COPY is not enough. Also, using 'big' SQL statements cannot guarantee
the import order.
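
For example, the script interleaves statements like these, and their
order matters (the table names are only illustrative):

  INSERT INTO order_item (order_id, product_id) VALUES (1, 10);
  UPDATE product SET stock = stock - 1 WHERE id = 10;
  DELETE FROM reservation WHERE order_id = 1 AND product_id = 10;
  INSERT INTO order_item (order_id, product_id) VALUES (1, 11);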

Sabin



Re: long transaction

From: "Merlin Moncure"
Date:
On Wed, Aug 13, 2008 at 2:07 AM, Sabin Coanda
<sabin.coanda@deuromedia.ro> wrote:
>>
>> Have you considered importing into a temporary 'holding' table with
>> COPY, then doing 'big' SQL statements on it to check constraints, etc.?
>>
>
> Yes, I considered it, but the problem is that the data is very tightly
> related between different tables, and it is important to keep the import
> order of each entity in the database. In other words, serializing the
> entity import is mandatory. In fact, the import script doesn't contain
> just inserts, but also deletes and updates for different entities, so
> COPY is not enough. Also, using 'big' SQL statements cannot guarantee
> the import order.

More than likely, the way to solve your problem (outside of buying a
bigger box or hacking fsync) is to rethink your import along the lines
of what I'm suggesting. You're welcome to give more specific details of
what/how your imports are running, in order to get more specific advice.

merlin