Thread: Trigger function in a multi-threaded environment behavior

From: Hardik Belani
Date:
 
We have a multi-threaded environment on Linux where multiple threads perform database operations (insert, update, select, and at times delete as well) in transaction mode (transactions may span stored procedures) using unixODBC. As is, this works fine.

 

If we introduce Postgres triggers (trigger functions) on some of the tables to track insert/update/delete operations (the trigger function keeps and updates a counter field for every insert, update, and delete performed on a set of tables), one or more threads get stuck waiting on a lock while executing their query, to the point that sometimes the tables cannot be updated even from pgAdmin.

 

We are using PostgreSQL v8.4 and unixODBC v2.2.14.

 

In this case, when using Postgres triggers in a multi-threaded application, do we have to take care of table/row-level locks inside the trigger function?

 

Thanks,

Hardik

 

Re: Trigger function in a multi-threaded environment behavior

From: Peter Eisentraut
Date:
On Tue, 2010-06-01 at 16:40 +0530, Hardik Belani wrote:

> If we introduce postgres triggers (trigger functions) on some of the
> tables to track insert/update/delete operations, (This is done by
> keeping and updating a counter field for every insert, update and
> delete operation performed on a set of tables inside trigger function)
> at this point, one or more threads get stuck in lock while executing
> the query, to the extent that sometimes even with the pgadmin, the
> database tables cannot be updated.

You should be able to analyze the lock situation using the views
pg_locks and pg_stat_activity.
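A query along these lines can show who holds which locks and who is waiting (a sketch only; column names are as of PostgreSQL 8.4, where pg_stat_activity exposes procpid, waiting, and current_query):

```sql
-- Show lock holders and waiters together with the query each backend is running.
SELECT l.locktype,
       l.relation::regclass AS relation,
       l.mode,
       l.granted,
       a.procpid,
       a.waiting,
       a.current_query
FROM pg_locks l
JOIN pg_stat_activity a ON a.procpid = l.pid
ORDER BY l.granted, relation;
```

Rows with granted = false are the stuck backends; the granted = true rows on the same relation identify what they are waiting for.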





Re: Trigger function in a multi-threaded environment behavior

From: Robert Haas
Date:
On Tue, Jun 1, 2010 at 7:10 AM, Hardik Belani <hardikbelani@gmail.com> wrote:
> If we introduce postgres triggers (trigger functions) on some of the tables
> to track insert/update/delete operations, (This is done by keeping and
> updating a counter field for every insert, update and delete operation
> performed on a set of tables inside trigger function) at this point, one or
> more threads get stuck in lock while executing the query, to the extent that
> sometimes even with the pgadmin, the database tables cannot be updated.

Well, if you have a lot of concurrent backends trying to update the
same counters, it is pretty reasonable to think that you're going to
have some lock contention.  There are a variety of ways to work around
this - insert rows (instead of updating) and summarize them later,
write the data to a flatfile somewhere and summarize it later, use the
built-in statistics mechanisms, etc.
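The "insert rows instead of updating" approach could be sketched roughly as follows (table, trigger, and function names here are illustrative, not from the original thread; syntax targets 8.4-era plpgsql):

```sql
-- Each trigger invocation appends a delta row.  Inserts into this table
-- never block one another, so the counter contention disappears.
CREATE TABLE counter_deltas (
    table_name text        NOT NULL,
    delta      integer     NOT NULL DEFAULT 1,
    logged_at  timestamptz NOT NULL DEFAULT now()
);

CREATE OR REPLACE FUNCTION log_counter_delta() RETURNS trigger AS $$
BEGIN
    INSERT INTO counter_deltas (table_name) VALUES (TG_TABLE_NAME);
    RETURN NULL;  -- AFTER trigger: return value is ignored
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER track_changes
    AFTER INSERT OR UPDATE OR DELETE ON some_table
    FOR EACH ROW EXECUTE PROCEDURE log_counter_delta();
```

A single periodic job can then roll the deltas up into the real counter table and delete the rows it consumed; because only that one job ever updates the counter, there is no contention on it.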

Fundamentally the problem is that if transaction A is adding 1 to a
counter and transaction B is adding 1 to a counter, one must wait for
the other to commit.  If A gets there first and updates the counter
from, say, 37 to 38, B has to wait to see whether A commits or aborts. If A aborts, B must add 1 to 37; if A commits, B must add 1 to 38.
Until A commits or aborts, B doesn't know what value it's adding 1
*to*.  So every transaction updating that counter serializes on the
counter itself.
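The serialization is easy to see with two psql sessions (a hypothetical counters table is assumed):

```sql
-- Session A:
BEGIN;
UPDATE counters SET n = n + 1 WHERE id = 1;  -- takes the row lock, 37 -> 38

-- Session B (blocks here until A commits or rolls back):
BEGIN;
UPDATE counters SET n = n + 1 WHERE id = 1;  -- waits on A's row lock

-- Session A:
COMMIT;  -- B's UPDATE now proceeds against the committed value, 38 -> 39
```

Every writer of the row queues up behind the current lock holder in exactly this way, which is why a single hot counter row throttles all the threads that touch it.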

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company