Re: Help..Help... - Mailing list pgsql-general

From Shridhar Daithankar
Subject Re: Help..Help...
Date
Msg-id 3DD2A8BD.6022.56D203@localhost
In response to Help..Help...  (Murali Mohan Kasetty <kasetty@india.hp.com>)
List pgsql-general
On 13 Nov 2002 at 19:14, Murali Mohan Kasetty wrote:

> We are running two processes accessing the same table using JDBC. Both
> the processes update records in the same table. The same rows will not
> be updated by both processes at the same time.
>
> When the processes are run individually, each takes X seconds. But
> when we run the same processes together, the time taken is worse
> than 2X.

Updates generate dead tuples, which cause a performance slowdown. Run vacuum
analyze concurrently in the background so that the space occupied by these
dead tuples becomes available for reuse.

>
> Is it possible that there is contention occurring while the records
> are being written? Has anybody experienced a similar problem? What is
> the

I am sure that's not the case. Are you doing rapid updates? As a rule of
thumb, you should run vacuum analyze after every 1000 or so updates to keep
performance at its best. Tune this figure to suit your needs.

> LOCK mechanism that is used by PostgreSQL.

Read up on MVCC (Multi-Version Concurrency Control). It's documented in the
PostgreSQL manual.
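The core idea behind MVCC, and why updates leave dead tuples behind, can be sketched in a few lines. This is a toy model: the names `xmin`/`xmax` mirror PostgreSQL's system columns, but the visibility logic here is a deliberate simplification of PostgreSQL's real rules:

```java
import java.util.ArrayList;
import java.util.List;

// Toy MVCC row: each version records the transaction id that created it
// (xmin) and the one that expired it (xmax). A reader with snapshot txid
// sees the version where xmin <= txid < xmax. Simplified illustration only.
public class MvccRow {
    static final long INFINITY = Long.MAX_VALUE;

    static class Version {
        final long xmin;          // txid that created this version
        long xmax = INFINITY;     // txid that expired it (open-ended if live)
        final String value;
        Version(long xmin, String value) { this.xmin = xmin; this.value = value; }
    }

    private final List<Version> versions = new ArrayList<>();

    // An UPDATE never overwrites in place: it expires the live version and
    // appends a new one. The expired versions are the "dead tuples".
    public void update(long txid, String newValue) {
        for (Version v : versions) {
            if (v.xmax == INFINITY) v.xmax = txid;
        }
        versions.add(new Version(txid, newValue));
    }

    // A reader sees whatever version was live at its snapshot txid,
    // without blocking on writers.
    public String readAsOf(long txid) {
        for (Version v : versions) {
            if (v.xmin <= txid && txid < v.xmax) return v.value;
        }
        return null;
    }

    // Versions dead to every active transaction are what VACUUM reclaims.
    public int deadVersions(long oldestActiveTxid) {
        int n = 0;
        for (Version v : versions) {
            if (v.xmax <= oldestActiveTxid) n++;
        }
        return n;
    }
}
```

This also shows why readers and writers don't block each other under MVCC, and why a busy update workload needs regular vacuuming.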

HTH

Bye
 Shridhar

--
mixed emotions:    Watching a bus-load of lawyers plunge off a cliff.    With five
empty seats.

