Re: Performance degradation after successive UPDATE's - Mailing list pgsql-performance

From: Assaf Yaari
Subject: Re: Performance degradation after successive UPDATE's
Msg-id: A3F53DEA945DA44386457F03BA78465F9D12AC@mobiexc.mobixell.com
In response to: Performance degradation after successive UPDATE's ("Assaf Yaari" <assafy@mobixell.com>)
Responses: Re: Performance degradation after successive UPDATE's
           Re: Performance degradation after successive UPDATE's
List: pgsql-performance
Thanks Bruno,

Issuing VACUUM FULL does not seem to have any influence on the time.
I've also added a VACUUM ANALYZE to my script after every 100 UPDATEs and run
the test again (on a different record), and the time still increases.
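
For reference, the loop in my test script now looks roughly like this (table
and column names are simplified here, not the real schema):

    -- repeated several hundred thousand times against the same row
    UPDATE counters SET value = value + 1 WHERE id = 1;

    -- issued after every 100th UPDATE
    VACUUM ANALYZE counters;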

Any other ideas?

Thanks,
Assaf.

> -----Original Message-----
> From: Bruno Wolff III [mailto:bruno@wolff.to]
> Sent: Monday, December 05, 2005 10:36 PM
> To: Assaf Yaari
> Cc: pgsql-performance@postgresql.org
> Subject: Re: Performance degradation after successive UPDATE's
>
> On Mon, Dec 05, 2005 at 19:05:01 +0200,
>   Assaf Yaari <assafy@mobixell.com> wrote:
> > Hi,
> >
> > I'm using PostgreSQL 8.0.3 on Linux RedHat WS 3.0.
> >
> > My application updates counters in the DB. I left a test running over
> > the night that kept increasing the counter of a specific record. After
> > the night's run (several hundred thousand updates), I found that the
> > time spent on an UPDATE had increased to more than 1.5 seconds (at the
> > beginning it was less than 10 ms)! Issuing VACUUM ANALYZE and even a
> > reboot didn't seem to solve the problem.
>
> You need to be running vacuum more often to get rid of the
> deleted rows (update is essentially insert + delete). Once
> you get too many, plain vacuum won't be able to clean them up
> without raising the value you use for FSM. By now the table
> is really bloated and you probably want to use vacuum full on it.
>
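
In practical terms, Bruno's suggestion works out to roughly the following on
8.0 (the table name and the FSM numbers below are only examples, not tuned
values):

    -- see how bloated the table has become
    SELECT relpages, reltuples FROM pg_class WHERE relname = 'counters';

    -- reclaim the dead space and rewrite the table compactly
    VACUUM FULL ANALYZE counters;

    -- in postgresql.conf, give the free space map enough room to track dead
    -- rows between vacuums (requires a server restart; size to your workload)
    -- max_fsm_pages = 200000
    -- max_fsm_relations = 1000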
